Emile van Krieken
emilevankrieken.com
Emile van Krieken
@emilevankrieken.com
Post-doc @ VU Amsterdam, prev University of Edinburgh.
Neurosymbolic Machine Learning, Generative Models, commonsense reasoning

https://www.emilevankrieken.com/
Pinned
We propose Neurosymbolic Diffusion Models! We find diffusion is especially compelling for neurosymbolic approaches, combining powerful multimodal understanding with symbolic reasoning πŸš€

Read more πŸ‘‡
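
For a taste of the mechanics, here's a minimal sketch of the general flavour (my own illustration, not the paper's code): a masked discrete diffusion model gradually unmasks symbolic concept variables conditioned on the input, and a symbolic program maps the final concepts to a prediction. The `denoiser` and the toy `symbolic_program` are assumed placeholders.

```python
import torch

MASK = 0  # reserved id meaning "this concept is still masked" (toy convention)

def symbolic_program(concepts):
    # Toy symbolic layer: read the concepts as digits and sum them,
    # in the spirit of MNIST-addition-style benchmarks.
    return concepts.sum(-1)

@torch.no_grad()
def sample(denoiser, x, n_concepts, steps=8):
    """Unmask discrete concept variables a few at a time, conditioned on x."""
    z = torch.full((x.shape[0], n_concepts), MASK, dtype=torch.long)
    for s in range(steps):
        # denoiser predicts every concept; we assume it never emits MASK
        logits = denoiser(x, z)  # (batch, n_concepts, vocab)
        draws = torch.distributions.Categorical(logits=logits).sample()
        still_masked = z == MASK
        frac = 1.0 / (steps - s)  # unmask a growing fraction each step
        unmask = still_masked & (torch.rand(z.shape) < frac)
        z = torch.where(unmask, draws, z)
    return symbolic_program(z)  # symbols in, prediction out
```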
Reposted by Emile van Krieken
We introduce epiplexity, a new measure of information that provides a foundation for how to select, generate, or transform data for learning systems. We have been working on this for almost 2 years, and I cannot contain my excitement! arxiv.org/abs/2601.03220 1/7
January 7, 2026 at 5:28 PM
Good call! I maintain a list of Neurosymbolic folks on Bsky, see here πŸ¦•
go.bsky.app/RMJ8q3i
Seems like a bunch of people are joining Bluesky recently after the whole X situation, so maybe it's worth highlighting this again. Also, if you feel like you should be on this list, send me a message!
I made a starter pack for Bayesian ML and stats (mostly to see how this starter pack business works).

Let me know whom I missed!

go.bsky.app/2Bqtn6T
January 13, 2026 at 9:36 AM
Reposted by Emile van Krieken
I am recruiting 1 PhD student (4-year position) and 2 postdocs (3-year positions) to work on logic and machine learning at the University of Helsinki:
- PhD 1: jobs.helsinki.fi/job/Helsinki...
- Postdoc 1: jobs.helsinki.fi/job/Helsinki...
- Postdoc 2: jobs.helsinki.fi/job/Helsinki...
January 10, 2026 at 3:01 PM
Reposted by Emile van Krieken
#XAI, #neurosymbolic methods (#nesy), and #causal #representation #learning (#CRL) all care about learning #interpretable #concepts, but in different ways.

We are organizing this #ICLR2026 workshop to bring these three communities together and learn from each other 🦾πŸ”₯πŸ’₯

Submission deadline: 30 Jan 2026
πŸ“£ Announcing the Workshop on **Unifying Concept Representation Learning** (UCRL) at ICLR’26 (@iclr-conf.bsky.social ).

When? 26 or 27 April 2026
Where? Rio de Janeiro, Brazil

Call for papers, schedule, invited speakers & more:

ucrl-iclr26.github.io

Looking forward to your submissions!
December 22, 2025 at 4:41 PM
Thanks for the fantastic talk, and totally agree! (Writing this on the train from Copenhagen :-))
December 8, 2025 at 1:37 PM
Reposted by Emile van Krieken
Emile will present our work on Knowledge Graph Embeddings at EurIPS's Salon des Refusés on Friday!
We show how linearity prevents KGEs from scaling to larger graphs + propose a simple solution using a Mixture of Softmaxes (see the LLM literature) to break the limitations at a low parameter cost. 🔨
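
Roughly, the Mixture of Softmaxes trick looks like this (an illustrative sketch, not the paper's exact architecture; `MoSScorer` and its parameters are my own names):

```python
import torch
import torch.nn as nn

class MoSScorer(nn.Module):
    """Score candidate entities with a mixture of K softmaxes.

    A single softmax over entity scores, softmax(h @ E.T), has rank at most
    the embedding dimension; mixing K softmaxes breaks that bottleneck
    while adding only two small linear layers.
    """
    def __init__(self, dim, n_entities, k=4):
        super().__init__()
        self.entity_emb = nn.Embedding(n_entities, dim)
        self.proj = nn.Linear(dim, k * dim)  # K query projections
        self.mix = nn.Linear(dim, k)         # mixture weights per query

    def forward(self, h):  # h: (batch, dim) query embedding
        b, d = h.shape
        k = self.mix.out_features
        queries = self.proj(h).view(b, k, d)
        logits = queries @ self.entity_emb.weight.T  # (batch, k, n_entities)
        pi = self.mix(h).softmax(-1).unsqueeze(-1)   # (batch, k, 1)
        return (pi * logits.softmax(-1)).sum(1)      # (batch, n_entities)
```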
And finally #3

πŸ”¨ Rank bottlenecks in KGEs:

At Friday's "Salon des Refusés" I will present @sbadredd.bsky.social 's new work on how rank bottlenecks limit knowledge graph embeddings
arxiv.org/abs/2506.22271
December 3, 2025 at 4:12 PM
Reposted by Emile van Krieken
Recordings of the NeSy 2025 keynotes are now available! πŸŽ₯

Check out insightful talks from @guyvdb.bsky.social, @tkipf.bsky.social and D McGuinness on our new YouTube channel www.youtube.com/@NeSyconfere...

Topics include using symbolic reasoning for LLMs, and object-centric representations!
NeSy conference
The NeSy conference studies the integration of deep learning and symbolic AI, combining neural network-based statistical machine learning with knowledge representation and reasoning from symbolic approaches.
www.youtube.com
November 29, 2025 at 8:21 AM
Reposted by Emile van Krieken
🚨 New paper alert!
We introduce Vision-Language Programs (VLP), a neuro-symbolic framework that combines the perceptual power of VLMs with program synthesis for robust visual reasoning.
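
The general shape, as I read it (hypothetical primitives, not the paper's actual DSL: `vlm_query` is an assumed black-box call): a synthesized program composes VLM queries with ordinary logic, so the reasoning steps stay inspectable.

```python
# A toy vision-language program: perception is delegated to a VLM,
# while the control flow is an explicit, auditable program.
def count_red_objects(image, vlm_query):
    objects = vlm_query(image, "List the objects in the image, comma-separated.")
    return sum(
        vlm_query(image, f"Is the {obj.strip()} red? Answer yes or no.") == "yes"
        for obj in objects.split(",")
    )
```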
November 30, 2025 at 1:32 AM
Interested in meeting up in Copenhagen? Do shoot a message!
November 28, 2025 at 5:31 PM
And finally #3

πŸ”¨ Rank bottlenecks in KGEs:

At Friday's "Salon des Refusés" I will present @sbadredd.bsky.social 's new work on how rank bottlenecks limit knowledge graph embeddings
arxiv.org/abs/2506.22271
November 28, 2025 at 5:31 PM
#2
πŸ‡ GRAPES: At Tuesday's ELLIS Unconference poster session.
We study adaptive graph sampling for scaling GNNs!

Work with Taraneh Younesian, Daniel Daza, @thiviyan.bsky.social, @pbloem.sigmoid.social.ap.brid.gy

arxiv.org/abs/2310.03399
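
Very roughly, the sampling flavour looks like this (a sketch under my own assumptions: a learned `scorer`, and Gumbel-top-k for sampling without replacement; GRAPES' actual training procedure is more involved):

```python
import torch

def sample_subgraph_nodes(scorer, node_feats, k):
    """Keep k nodes, sampled without replacement in proportion to learned
    scores via the Gumbel-top-k trick; the rest of the graph is dropped
    before the GNN layer runs, which is what makes training scale."""
    scores = scorer(node_feats).squeeze(-1)                  # (n_nodes,)
    gumbel = -torch.log(-torch.log(torch.rand_like(scores)))
    return torch.topk(scores + gumbel, k).indices            # sampled node ids
```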
November 28, 2025 at 5:31 PM
Almost off to @euripsconf.bsky.social in Copenhagen πŸ‡©πŸ‡° πŸ‡ͺπŸ‡Ί! I'll present 3 posters:

🧠 Neurosymbolic Diffusion Models: Thursday's poster session.

Going to NeurIPS? @edoardo-ponti.bsky.social and @nolovedeeplearning.bsky.social will present the paper in San Diego Thu 13:00
arxiv.org/abs/2505.13138
November 28, 2025 at 5:31 PM
Reposted by Emile van Krieken
The simplex algorithm is super efficient. 80 years of experience says it runs in linear time. Nobody can explain _why_ it is so fast.

We invented a new algorithm analysis framework to find out.
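
If you want to poke at the "fast in practice" claim yourself, SciPy ships HiGHS's dual simplex; a quick, unscientific timing sketch on random dense LPs (sizes and data are arbitrary):

```python
import time
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
for n in (50, 100, 200, 400):
    c = rng.standard_normal(n)
    A = rng.standard_normal((2 * n, n))
    b = np.abs(rng.standard_normal(2 * n)) + 1.0  # x = 0 stays feasible
    t0 = time.perf_counter()
    res = linprog(c, A_ub=A, b_ub=b, bounds=(0, 1), method="highs-ds")
    print(n, res.status, f"{time.perf_counter() - t0:.3f}s")
```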
Beyond Smoothed Analysis: Analyzing the Simplex Method by the Book
Narrowing the gap between theory and practice is a longstanding goal of the algorithm analysis community. To further progress our understanding of how algorithms work in practice, we propose a new algorithm analysis framework.
arxiv.org
October 27, 2025 at 1:43 AM
Exactly the same here...
November 8, 2025 at 10:26 PM
Reposted by Emile van Krieken
Want to use your favourite #NeSy model but afraid of the reasoning shortcuts?🫣

Fear not 💪🏻 In our #NeurIPS2025 paper we show that you just need to equip your favourite NeSy model with prototypical networks, and reasoning shortcuts will be a thing of the past!
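
My rough reading of the recipe, as a sketch (the `encoder` and prototype setup are assumptions, not the paper's code):

```python
import torch

def proto_predict(encoder, x, prototypes):
    """Predict concepts by distance to per-class prototypes
    (prototypical-network style). Anchoring each concept to a prototype
    constrains which latent concepts an input can map to, which is the
    intuition for blocking reasoning shortcuts."""
    z = encoder(x)                  # (batch, dim) embeddings
    d = torch.cdist(z, prototypes)  # (batch, n_concepts) distances
    return (-d).softmax(-1)         # nearer prototype => higher probability
```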
November 6, 2025 at 10:40 AM
Reposted by Emile van Krieken
I'm in Suzhou to present our work on MultiBLiMP, Friday @ 11:45 in the Multilinguality session (A301)!

Come check it out if you're interested in multilingual linguistic evaluation of LLMs (there will be parse trees on the slides! There's still use for syntactic structure!)

arxiv.org/abs/2504.02768
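
The evaluation recipe is pleasantly simple: for each minimal pair, check that the LM assigns higher log-probability to the grammatical sentence. A BLiMP-style sketch with an off-the-shelf model (illustrative English pair; MultiBLiMP does this across many languages):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def sentence_logprob(sent):
    ids = tok(sent, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss        # mean NLL over predicted tokens
    return -loss.item() * (ids.shape[1] - 1)  # total log-probability

good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."
print(sentence_logprob(good) > sentence_logprob(bad))  # hopefully True
```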
November 6, 2025 at 7:08 AM
Reposted by Emile van Krieken
🌍Introducing BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data!

LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data

We extend this effort to 45 new languages!
October 15, 2025 at 10:53 AM
Reposted by Emile van Krieken
Unfortunately, our submission to #NeurIPS didn’t go through with scores of (5,4,4,3). But because I think it’s an excellent paper, I decided to share it anyway.

We show how to efficiently apply Bayesian learning in VLMs, improve calibration, and do active learning. Cool stuff!

πŸ“ arxiv.org/abs/2412.06014
Post-hoc Probabilistic Vision-Language Models
Vision-language models (VLMs), such as CLIP and SigLIP, have found remarkable success in classification, retrieval, and generative tasks. For this, VLMs deterministically map images and text descriptions...
arxiv.org
September 18, 2025 at 8:34 PM
Ohhh this is very relevant to what I'm currently working on! Thanks for sharing :)
September 19, 2025 at 9:04 AM
What's wrong with this?
September 19, 2025 at 9:02 AM
Accepted to NeurIPS! 😁

We will present Neurosymbolic Diffusion Models in San Diego πŸ‡ΊπŸ‡Έ and Copenhagen πŸ‡©πŸ‡° thanks to @euripsconf.bsky.social πŸ‡ͺπŸ‡Ί
We propose Neurosymbolic Diffusion Models! We find diffusion is especially compelling for neurosymbolic approaches, combining powerful multimodal understanding with symbolic reasoning πŸš€

Read more πŸ‘‡
September 19, 2025 at 9:01 AM
Reposted by Emile van Krieken
A really neat paper thinking through what "Identifiability" means, how we can determine it, and implications for modeling.

Arxiv: arxiv.org/abs/2508.18853

#statssky #mlsky
September 16, 2025 at 1:49 AM
Reposted by Emile van Krieken
The most expensive part of training is the data, not the compute
Nikhil Kandpal & Colin Raffel calculate a really low bar for how much it would cost to produce LLM training data at $3.80/hour.
Well, several orders of magnitude more than the compute.
Luckily (?), companies don't pay for the data
πŸ€–πŸ“ˆπŸ§ 
September 12, 2025 at 2:20 PM
Organising NeSy 2025 together with some of my favourite people (@e-giunchiglia.bsky.social @lgilpin.bsky.social @pascalhitzler.bsky.social) really was a dream. Super proud of what we've achieved! See you next year 😘
πŸ¦•NeSy 2025 is officially closed! Thanks again to everyone for attending this successful edition 😊

We will see you 1-4 September in another beautiful place: Lisbon! πŸ‡΅πŸ‡Ή
nesy-ai.org/conferences/...
September 11, 2025 at 3:47 AM
Reposted by Emile van Krieken
Our second keynote speaker is @tkipf.bsky.social, who will discuss object-centric representation learning!

Do objects need special treatment for generative AI and world models? 🤔 We will find out on Monday!
September 6, 2025 at 8:51 PM