Valentin Hofmann
@valentinhofmann.bsky.social
2.3K followers 160 following 36 posts
Postdoc @ai2.bsky.social & @uwnlp.bsky.social
valentinhofmann.bsky.social
Thanks, Jordan! Your ACL 2021 paper was a huge source of inspiration for us!
valentinhofmann.bsky.social
We did not specifically analyze novel models as your paper did. While I am optimistic that Fluid Benchmarking improves over static IRT-based methods in this regime as well, there are definitely limitations, which we discuss in the paragraph below.

Would be exciting to run more experiments on this!
valentinhofmann.bsky.social
In our experiments, we find that this dynamic approach consistently outperforms static IRT-based methods. The improvements are especially pronounced in terms of variance, which poses a major challenge for static IRT-based methods. We discuss this in more detail in the paragraph below.
valentinhofmann.bsky.social
Great question! The key difference is that we use IRT to dynamically adapt the subset of items to a model's capability, rather than to determine a static, "globally optimal" subset of items as in prior work. With Fluid Benchmarking, each model is evaluated on a different subset of items.
Reposted by Valentin Hofmann
kylelo.bsky.social
LM benchmark design requires 3 decisions, how to:
🐟 select test cases
🐠 score LM on each test
🦈 aggregate scores to estimate perf

fluid benchmarking is simple:
🍣 find max informative test cases
🍥 estimate 'ability', not simple avg perf

why care? turn ur grey noisy benchmarks to red ones!
valentinhofmann.bsky.social
For details, check out our paper, blog, code, and data:

📄 arxiv.org/abs/2509.11106
✍️ allenai.org/blog/fluid-b...
💻 github.com/allenai/flui...
📊 huggingface.co/datasets/all...

Looking forward to chatting more at #COLM2025! 👋
valentinhofmann.bsky.social
Overall, our work shows that LLM evaluations can be substantially improved by moving beyond static benchmarking, the so far universal practice of assuming one globally optimal set of evaluation questions for all models.
valentinhofmann.bsky.social
These (and other) advantages are achieved while also reducing evaluation cost.

Example: on MMLU, Fluid Benchmarking results in lower step-to-step variance and higher validity than standard methods while using 50 times fewer questions. ⚡
valentinhofmann.bsky.social
Fluid Benchmarking substantially reduces step-to-step variance during pretraining.

It also increases validity: results generalize better to other benchmarks targeting the same capability. One reason: it automatically avoids mislabeled questions, cutting label errors by 99%! 🤯
valentinhofmann.bsky.social
In our experiments, we apply Fluid Benchmarking to evaluation during pretraining, a setting where capabilities evolve rapidly.

We find that Fluid Benchmarking dynamically adapts to these changes, administering easier questions early in training and more difficult ones later.
valentinhofmann.bsky.social
Fluid Benchmarking repeats this loop until the number of administered questions reaches the allotted budget.

Adaptive question selection means that different LLMs face different sets of questions, but ability estimation places all results on a common scale.
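
For intuition, here is a minimal Python sketch of such a loop (not our actual implementation; see the repo for that). It assumes pre-fitted IRT discriminations a and difficulties b for the whole benchmark, plus a hypothetical answer_fn(j) that returns 1 if the LLM answers question j correctly and 0 otherwise; the selection and update steps are the ones described in the posts below.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fluid_eval(answer_fn, a, b, budget):
    # a, b: pre-fitted 2PL discriminations and difficulties for every question.
    theta, asked, y = 0.0, [], []
    item = int(np.argmin(np.abs(b - theta)))  # first question: difficulty closest to the initial estimate
    for _ in range(budget):
        y.append(answer_fn(item))
        asked.append(item)
        y_arr = np.array(y, dtype=float)
        # Re-estimate ability on the questions administered so far
        # (gradient ascent on the 2PL log-likelihood with a weak N(0, 1) prior).
        for _ in range(2000):
            p = sigmoid(a[asked] * (theta - b[asked]))
            theta += 0.01 * (np.sum(a[asked] * (y_arr - p)) - theta)
        # Pick the most informative question not yet administered (Fisher information).
        p_all = sigmoid(a * (theta - b))
        info = a**2 * p_all * (1.0 - p_all)
        info[asked] = -np.inf
        item = int(np.argmax(info))
    return theta  # final ability estimate, comparable across LLMs
```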
valentinhofmann.bsky.social
In Fluid Benchmarking, we start with an initial ability estimate from one question.

To select the next question, we use Fisher information. Essentially, we pick a question whose difficulty (b) is close to the current ability estimate (θ) and whose discrimination (a) is high.

Then we update the estimate.
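
As a rough Python sketch (assuming a 2PL IRT model; asked is the set of questions already administered), the selection rule boils down to:

```python
import numpy as np

def select_next_question(theta, a, b, asked):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # 2PL probability of a correct answer at ability theta
    info = a**2 * p * (1.0 - p)                 # Fisher information: peaks when b ≈ theta, grows with a
    info[list(asked)] = -np.inf                 # never repeat a question
    return int(np.argmax(info))
```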
valentinhofmann.bsky.social
In addition, IRT models each LLM's ability, which can be estimated from its responses to questions with known difficulty and discrimination.

The IRT ability estimate summarizes performance much like accuracy does, but it additionally accounts for question characteristics.
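
Concretely, once a question's difficulty and discrimination are known, an LLM's ability can be estimated by maximizing the 2PL likelihood of its responses. A minimal gradient-ascent sketch with a weak Gaussian prior (an illustrative assumption, not necessarily our exact estimator):

```python
import numpy as np

def estimate_ability(y, a, b, n_steps=2000, lr=0.01):
    # y: 0/1 responses of one LLM; a, b: parameters of the administered questions.
    theta = 0.0
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        grad = np.sum(a * (y - p)) - theta  # 2PL log-likelihood gradient plus weak N(0, 1) prior
        theta += lr * grad
    return theta
```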
valentinhofmann.bsky.social
To get a question's difficulty, we use item response theory (IRT): we analyze responses of hundreds of LLMs to see how often a question is answered correctly.

IRT also measures the discrimination of a question, meaning how reliably it separates stronger from weaker LLMs.
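
Schematically (a toy sketch, not our fitting code), a 2PL IRT model can be fit to a binary response matrix from many LLMs with plain gradient ascent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_2pl(responses, n_steps=2000, lr=0.05):
    # responses[i, j] = 1 if LLM i answered question j correctly, else 0.
    n_models, n_items = responses.shape
    theta = np.zeros(n_models)  # LLM abilities
    a = np.ones(n_items)        # question discriminations
    b = np.zeros(n_items)       # question difficulties
    for _ in range(n_steps):
        p = sigmoid(a * (theta[:, None] - b))  # P(correct) under the 2PL model
        err = responses - p                    # log-likelihood gradient w.r.t. the logits
        theta += lr * (err * a).mean(axis=1)
        a += lr * (err * (theta[:, None] - b)).mean(axis=0)
        b += lr * (-err * a).mean(axis=0)
    return theta, a, b
```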
valentinhofmann.bsky.social
Test theory says: questions are most informative when matched to a test taker's ability.

For LLMs, that means evaluating weaker models on easier questions and stronger models on harder ones.

But how do we know a question's difficulty, or an LLM's ability, before evaluation? 🤔
valentinhofmann.bsky.social
📢 New #COLM2025 paper 📢

Standard benchmarks give every LLM the same questions. This is like testing 5th graders and college seniors with *one* exam! 🥴

Meet Fluid Benchmarking, a capability-adaptive eval method delivering lower variance, higher validity, and reduced cost.

🧵
Reposted by Valentin Hofmann
dallascard.bsky.social
I am delighted to share our new #PNAS paper, with @grvkamath.bsky.social @msonderegger.bsky.social and @sivareddyg.bsky.social, on whether age matters for the adoption of new meanings. That is, as words change meaning, does the rate of adoption vary across generations? www.pnas.org/doi/epdf/10....
valentinhofmann.bsky.social
Attending #ICML2025? Don't miss this TokShop panel, which will explore:

🔮 The Future of Tokenization 🔮

Featuring a stellar lineup of panelists - mark your calendar! ✨
tokshop.bsky.social
🎤 Meet our expert panelists! Join Albert Gu, Alisa Liu, Kris Cao, Sander Land, and Yuval Pinter as they discuss the Future of Tokenization on July 18 at 3:30 PM at TokShop at #ICML2025.
valentinhofmann.bsky.social
LLMs can appear unbiased on the surface but still perpetuate racist views in subtle ways.

What causes this discrepancy? 🔍

In our upcoming #ACL2025 paper, we find a pattern akin to racial colorblindness: LLMs suppress race in ambiguous contexts, leading to biased outcomes.
1e0sun.bsky.social
🚨New #ACL2025 paper!

Today’s “safe” language models can look unbiased—but alignment can actually make them more biased implicitly by reducing their sensitivity to race-related associations.

🧵Find out more below!
Reposted by Valentin Hofmann
tokshop.bsky.social
📣 We extend the submission deadline by 24 hours to avoid a conflict with the ACL camera-ready deadline.

📅 New Submission Deadline: May 31, 2025 (23:59 AoE)

📩 OpenReview: openreview.net/group?id=ICM...
valentinhofmann.bsky.social
Huge congrats, Adam!!! 🎉
Reposted by Valentin Hofmann
tokshop.bsky.social
Got a good tokenization paper under review at COLM, but the scores were a letdown? 😬

Why bother with rebuttal when the perfect venue is right around the corner!

Submit your paper to the #ICML2025 Tokenization Workshop (TokShop) by May 30! 🚀
Reposted by Valentin Hofmann
tokshop.bsky.social
Beyond text: Modern AI tokenizes images too! Vision models split photos into patches, treating each 16x16 pixel square as a "token." 🖼️➡️🔤 #VisualTokenization

Interested in tokenization? Join our workshop tokenization-workshop.github.io
The submission deadline is as early as May 30!
valentinhofmann.bsky.social
Yes, exactly! And we make sure that the words do not appear in the language model's pretraining data.