Neurosymbolic Machine Learning, Generative Models, commonsense reasoning
https://www.emilevankrieken.com/
Read more 👇
go.bsky.app/RMJ8q3i
Let me know whom I missed!
go.bsky.app/2Bqtn6T
- PhD 1: jobs.helsinki.fi/job/Helsinki...
- Postdoc 1: jobs.helsinki.fi/job/Helsinki...
- Postdoc 2: jobs.helsinki.fi/job/Helsinki...
We are organizing this #ICLR2026 workshop to bring these three communities together and learn from each other 🦾🔥🔥
Submission deadline: 30 Jan 2026
When? 26 or 27 April 2026
Where? Rio de Janeiro, Brazil
Call for papers, schedule, invited speakers & more:
ucrl-iclr26.github.io
Looking forward to your submissions!
🚨 Rank bottlenecks in KGEs:
At Friday's "Salon des Refusés" I will present @sbadredd.bsky.social's new work on how rank bottlenecks limit knowledge graph embeddings
arxiv.org/abs/2506.22271
We show how linearity prevents KGEs from scaling to larger graphs + propose a simple solution using a Mixture of Softmaxes (see the LLM literature) to break this limitation at a low parameter cost. 🚨
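For readers unfamiliar with the trick: a single softmax over scores h^T W can only express distributions whose log-probabilities have rank at most the embedding dimension, while a convex mixture of K softmaxes escapes that bound. A minimal sketch of the Mixture of Softmaxes idea (all dimensions and weights below are illustrative toys, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mos_scores(h, W_components, prior_logits):
    """Mixture of Softmaxes: a convex combination of K softmax
    distributions; its log is no longer a low-rank function of h."""
    # h: (d,) query embedding; W_components: (K, d, n_entities)
    mix = softmax(prior_logits)                               # (K,) mixture weights
    probs = np.stack([softmax(h @ W) for W in W_components])  # (K, n)
    return mix @ probs                                        # (n,) valid distribution

d, n, K = 8, 100, 3
h = rng.normal(size=d)
W = rng.normal(size=(K, d, n))
p = mos_scores(h, W, rng.normal(size=K))
assert np.isclose(p.sum(), 1.0)  # still a proper distribution
```

The parameter cost is modest: K projection matrices plus K mixture logits, rather than a larger embedding dimension.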
Check out insightful talks from @guyvdb.bsky.social, @tkipf.bsky.social and D McGuinness on our new YouTube channel www.youtube.com/@NeSyconfere...
Topics include using symbolic reasoning for LLMs, and object-centric representations!
We introduce Vision-Language Programs (VLP), a neuro-symbolic framework that combines the perceptual power of VLMs with program synthesis for robust visual reasoning.
🍇 GRAPES: At Tuesday's ELLIS Unconference poster session.
We study adaptive graph sampling for scaling GNNs!
Work with Taraneh Younesian, Daniel Daza, @thiviyan.bsky.social, @pbloem.sigmoid.social.ap.brid.gy
arxiv.org/abs/2310.03399
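The core idea can be sketched as follows: instead of sampling each node's neighbours uniformly, sample them from a distribution produced by a learned scorer, so the sampler adapts to the task. Everything below (the toy graph, the random stand-in for the learned scorer, the function names) is an illustrative sketch, not the actual GRAPES method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: node -> list of neighbours
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [0]}

def edge_score(u, v):
    # Stand-in for a learned scorer (e.g. a small GNN scoring edges);
    # here just a random number per call.
    return rng.normal()

def sample_neighbours(adj, k):
    """Keep at most k neighbours per node, sampled proportionally to
    softmax-normalised edge scores instead of uniformly."""
    out = {}
    for u, neigh in adj.items():
        if len(neigh) <= k:
            out[u] = list(neigh)
            continue
        logits = np.array([edge_score(u, v) for v in neigh])
        p = np.exp(logits - logits.max())
        p /= p.sum()
        out[u] = list(rng.choice(neigh, size=k, replace=False, p=p))
    return out

sub = sample_neighbours(adj, k=2)  # bounded-size neighbourhoods for a GNN layer
```

Bounding the sampled neighbourhood size is what makes message passing scale to large graphs; the learned scores decide which edges are worth keeping.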
🧠 Neurosymbolic Diffusion Models: Thursday's poster session.
Going to NeurIPS? @edoardo-ponti.bsky.social and @nolovedeeplearning.bsky.social will present the paper in San Diego Thu 13:00
arxiv.org/abs/2505.13138
We invented a new algorithm analysis framework to find out.
Fear not 💪🏻 In our #NeurIPS2025 paper we show that you just need to equip your favourite NeSy model with prototypical networks, and reasoning shortcuts will be a problem of the past!
Come check it out if you're interested in multilingual linguistic evaluation of LLMs (there will be parse trees on the slides! There's still use for syntactic structure!)
arxiv.org/abs/2504.02768
LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data
We extend this effort to 45 new languages!
We show how to efficiently apply Bayesian learning in VLMs, improve calibration, and do active learning. Cool stuff!
📄 arxiv.org/abs/2412.06014
We will present Neurosymbolic Diffusion Models in San Diego 🇺🇸 and Copenhagen 🇩🇰 thanks to @euripsconf.bsky.social 🇪🇺
Read more 👇
Arxiv: arxiv.org/abs/2508.18853
#statssky #mlsky
Nikhil Kandpal & Colin Raffel calculate a very conservative lower bound on what it would cost to produce LLM training data at $3.80/h.
The answer? Several orders of magnitude more than the compute.
Luckily (?), companies don't pay for the data
🤔
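A back-of-envelope version of that kind of calculation (every quantity below except the $3.80/h wage is a hypothetical placeholder I picked for illustration, not the paper's figure):

```python
# Hypothetical back-of-envelope: paying humans to write a pretraining corpus.
wage_per_hour = 3.80      # $/h, the hourly rate quoted above
words_per_hour = 500      # assumed human writing speed
tokens_per_word = 1.3     # assumed rough tokens-per-word ratio
corpus_tokens = 15e12     # assumed corpus size: 15T tokens

hours = corpus_tokens / (words_per_hour * tokens_per_word)
cost = hours * wage_per_hour
print(f"${cost:,.0f}")    # roughly $88 billion under these assumptions
```

Even at a wage far below most minimum wages, the labour cost dwarfs typical pretraining compute budgets.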
We will see you 1-4 September in another beautiful place: Lisbon! 🇵🇹
nesy-ai.org/conferences/...
Do objects need special treatment for generative AI and world models? 🤔 We will hear on Monday!