Michael Hahn
@m-hahn.bsky.social
82 followers 250 following 8 posts
Prof at Saarland. NLP and machine learning. Theory and interpretability of LLMs. https://www.mhahn.info
Reposted by Michael Hahn
harveyfu.bsky.social
LLMs excel at finding surprising “needles” in very long documents, but can they detect when information is conspicuously missing?

🫥AbsenceBench🫥 shows that even SoTA LLMs struggle on this task, suggesting that LLMs have trouble perceiving “negative spaces”.
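The task can be pictured with a toy instance generator (a hypothetical format for illustration; the benchmark's own prompts and data differ):

```python
import random

def make_absence_example(lines, n_remove=2, seed=0):
    """Build a toy "what's missing?" instance: given the full document and
    an edited copy with some lines deleted, the model must name the deleted
    lines. Returns (edited_document, gold_answer)."""
    rng = random.Random(seed)
    removed = set(rng.sample(range(len(lines)), n_remove))
    edited = [line for j, line in enumerate(lines) if j not in removed]
    answer = [lines[j] for j in sorted(removed)]
    return edited, answer
```

Detecting a "needle" only requires noticing something present; here the model must instead compare against what a complete document would look like.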
Paper: arxiv.org/abs/2506.11440

🧵[1/n]
Reposted by Michael Hahn
guydav.bsky.social
New preprint alert! We often prompt ICL tasks using either demonstrations or instructions. How much does the form of the prompt matter to the task representation formed by a language model? Stick around to find out 1/N
m-hahn.bsky.social
7/8 Takeaway: There’s no free lunch: long CoTs are information-theoretically necessary for these tasks. Any compression tricks will hit these hard limits unless we revamp the model architecture itself.
m-hahn.bsky.social
6/8 Graph reachability: Given a DAG and two nodes, is there a path? You might try to “cheat” with clever CoTs—but in fact you need Ω(|E| log |V|) steps, matching BFS’s runtime (plus indexing overhead).
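For intuition, the BFS baseline that the bound matches can be sketched as follows (a minimal sketch, not the paper's construction):

```python
from collections import deque

def reachable(n, edges, s, t):
    """BFS reachability on a DAG with n nodes; visits each edge at most once.
    A CoT that traces this search needs on the order of |E| steps, each
    writing a node index of log|V| bits, in line with the Omega(|E| log |V|)
    bound mentioned above."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

print(reachable(4, [(0, 1), (1, 2), (2, 3)], 0, 3))  # True
```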
m-hahn.bsky.social
5/8 Multiplication: LLMs notoriously flub middle digits of big-integer products. Those digits hinge on all input bits, so you get a linear CoT lower bound. Our best upper bound uses Schönhage–Strassen in O(n log n) steps; closing the log n gap is open.
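A quick numeric illustration of the sensitivity claim (not the paper's argument): flip the lowest bit of one operand and count how many digits of the product change.

```python
def changed_digits(a, b, a2):
    """Count the product digits that change when a is perturbed to a2."""
    p, q = str(a * b), str(a2 * b)
    return sum(c1 != c2 for c1, c2 in zip(p, q))

# Flipping only the lowest bit of one operand perturbs digits deep inside
# the product: middle digits depend on essentially all input bits.
a, b = 123456789, 987654321
print(changed_digits(a, b, a ^ 1))
```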
m-hahn.bsky.social
4/8 Regular languages: Hard-attention Transformers without CoT sit in AC⁰. To recognize anything beyond AC⁰—e.g., Parity—you need at least linear-length CoTs. Sublinear won’t cut it.
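A linear-length CoT for Parity is just a running XOR, one emitted step per input bit; a minimal sketch:

```python
def parity_cot(bits):
    """Linear-length chain of thought for Parity: each step records the
    running XOR of the bits seen so far, and the final step is the answer.
    This shows linear length suffices; the lower bound says sublinear
    hard-attention CoTs cannot succeed."""
    steps = []
    acc = 0
    for b in bits:
        acc ^= b
        steps.append(acc)  # one emitted token per input bit
    return steps

print(parity_cot([1, 0, 1, 1]))  # [1, 1, 0, 1]: last step is the parity
```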
m-hahn.bsky.social
3/8 Yes! We derive tight lower bounds for several algorithmic tasks. The theory assumes hard attention, but soft-attention experiments paint the same picture.
m-hahn.bsky.social
2/8 Prior work shows that CoT-equipped Transformers can simulate P-time computations via polynomial-length traces. But those broad results don’t pin down exactly how long you need to solve a specific problem. We ask: can we get fine-grained lower bounds on CoT length?
m-hahn.bsky.social
Chain-of-Thought (CoT) reasoning lets LLMs solve complex tasks, but long CoTs are expensive. How short can they be while still working? Our new ICML paper tackles this foundational question.
Reposted by Michael Hahn
mariusmosbach.bsky.social
Check out our new paper on unlearning for LLMs 🤖. We show that *not all data are unlearned equally* and argue that future work on LLM unlearning should take properties of the data to be unlearned into account. This work was led by my intern @a-krishnan.bsky.social
🔗: arxiv.org/abs/2504.05058
Image description: Diagram illustrating a hypothesis about knowledge unlearning in language models. The left side shows a training corpus with varying frequencies of facts, such as 'Montreal is a city in Quebec' (high frequency) and 'Atlantis is a city in the ocean' (lower frequency). The center shows a language model being trained on this data, then undergoing unlearning. The right side demonstrates the 'Forget Quality' results, where the model more effectively unlearns the less frequent fact ('Atlantis is in Greece') while retaining the more frequent knowledge. Labels A, B, and C mark key points in the hypothesis: A (frequency variations in training data), B (influence of frequency), and C (unlearning effectiveness).
Reposted by Michael Hahn
suryaganguli.bsky.social
Our new paper! "Analytic theory of creativity in convolutional diffusion models", led expertly by @masonkamb.bsky.social
arxiv.org/abs/2412.20292
Our closed-form theory needs no training, is mechanistically interpretable & accurately predicts diffusion model outputs with high median r^2~0.9
Reposted by Michael Hahn
preetumnakkiran.bsky.social
Just read this, neat paper! I really enjoyed Figure 3 illustrating the basic idea: Suppose you train a diffusion model where the denoiser is restricted to be "local" (each pixel i only depends on its 3x3 neighborhood N(i)). The optimal local denoiser for pixel i is E[ x_0[i] | x_t[ N(i) ] ]...cont
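That conditional expectation can be approximated on a toy 1-D example (an assumed setup purely for illustration; the paper's theory is analytic and the actual models are 2-D convolutional):

```python
import numpy as np

def local_denoiser(x_t, train, i, sigma, k=1):
    """Toy estimate of E[ x_0[i] | x_t[N(i)] ] for a 1-D "image".

    N(i) is the width-(2k+1) neighborhood of pixel i. Each clean training
    signal x0 is weighted by the Gaussian likelihood that its neighborhood,
    corrupted with noise of scale sigma, produced the observed noisy patch;
    the denoised value is the weighted average of the training pixels."""
    lo, hi = max(0, i - k), min(len(x_t), i + k + 1)
    patch = x_t[lo:hi]
    weights = np.array([
        np.exp(-np.sum((x0[lo:hi] - patch) ** 2) / (2 * sigma ** 2))
        for x0 in train
    ])
    weights /= weights.sum()
    return float(np.dot(weights, [x0[i] for x0 in train]))

rng = np.random.default_rng(0)
train = [rng.integers(0, 2, 8).astype(float) for _ in range(100)]
x_t = train[0] + 0.3 * rng.normal(size=8)
print(local_denoiser(x_t, train, i=3, sigma=0.3))
```

Because each pixel only ever sees its own neighborhood, the denoiser can stitch together patches from different training images, which is where the model's "creative" outputs come from.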