Keyon Vafa
@keyonv.bsky.social
170 followers 100 following 28 posts
Postdoctoral fellow at Harvard Data Science Initiative | Computer science PhD from Columbia University | ML + NLP + social sciences https://keyonvafa.com
Reposted by Keyon Vafa
crahal.com
💡🤖🔥 @keyonv.bsky.social's talk at metrics-and-models.github.io was brilliant, posing epistemic questions about what Artificial Intelligence "understands".

Next (in two weeks): Alexander Vezhnevets talks about a new multi-actor generative agent-based model. As usual, *all welcome* #datascience #css💡🤖🔥
Reposted by Keyon Vafa
crahal.com
💡🤖🔥The talk by Juan Carlos Perdomo at metrics-and-models.github.io was so thought provoking that the convenors stayed to discuss it in the room afterwards for quite some time!

Next, we have @keyonv.bsky.social asking: "What are AI's World Models?". Exciting times over here, all welcome!💡🤖🔥
keyonv.bsky.social
This is one way to evaluate world models. But there are many other interesting approaches!

Plug: If you're interested in more, check out the Workshop on Assessing World Models I'm co-organizing Friday at ICML www.worldmodelworkshop.org
ICML Workshop on Assessing World Models
Date: Friday, July 18 2025 Location: Ballroom B at ICML 2025 in Vancouver, Canada
www.worldmodelworkshop.org
keyonv.bsky.social
Last year we proposed different tests that studied single tasks.

We now think that studying behavior on new tasks better captures what we want from foundation models: tools for new problems.

It's what separates Newton's laws from Kepler's predictions.
arxiv.org/abs/2406.03689
Evaluating the World Model Implicit in a Generative Model
Recent work suggests that large language models may implicitly learn world models. How should we assess this possibility? We formalize this question for the case where the underlying reality is govern...
arxiv.org
keyonv.bsky.social
Summary:
1. We propose inductive bias probes: a model's inductive bias reveals its world model

2. Foundation models can have great predictions with poor world models

3. One reason world models are poor: models group together distinct states that have similar allowed next-tokens
keyonv.bsky.social
Inductive bias probes can test this hypothesis more generally.

Models are much likelier to conflate two separate states when they share the same legal next-tokens.
keyonv.bsky.social
We fine-tune an Othello next-token prediction model to reconstruct boards.

Even when the model reconstructs boards incorrectly, the reconstructed boards often get the legal next moves right.

Models seem to construct "enough of" the board to calculate single next moves.
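For concreteness, here's a minimal sketch of the kind of check described above: given a true board and a model-reconstructed board, do they admit the same legal next moves? The 8x8 encoding (1 = current player, -1 = opponent, 0 = empty) and the helper names are assumptions for illustration, not the paper's exact code.

```python
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def legal_moves(board, player=1):
    """Legal Othello moves for `player` on an 8x8 board (1 = player, -1 = opponent, 0 = empty)."""
    moves = set()
    for r in range(8):
        for c in range(8):
            if board[r][c] != 0:
                continue
            for dr, dc in DIRS:
                rr, cc, seen_opponent = r + dr, c + dc, False
                # Walk over a contiguous run of opponent pieces...
                while 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == -player:
                    rr, cc, seen_opponent = rr + dr, cc + dc, True
                # ...and require the run to be capped by one of the player's own pieces.
                if seen_opponent and 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == player:
                    moves.add((r, c))
                    break
    return moves

def same_legal_moves(true_board, reconstructed_board):
    """Does the reconstructed board admit exactly the legal next moves of the true board?"""
    return legal_moves(true_board) == legal_moves(reconstructed_board)
```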
keyonv.bsky.social
If a foundation model's inductive bias isn't toward a given world model, what is it toward?

One hypothesis: models confuse sequences that belong to different states but have the same legal *next* tokens.

Example: Two different Othello boards can have the same legal next moves.
keyonv.bsky.social
We also apply these probes to lattice problems (think gridworld).

Inductive biases are great when the number of states is small. But they deteriorate quickly.

Recurrent and state-space models like Mamba consistently have better inductive biases than transformers.
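As a rough illustration of the lattice setup (details assumed, not taken from the paper): token sequences are moves on a grid, the hidden state is the current cell, and two different sequences can end in the same state, which is exactly the structure a probe can test for.

```python
import random

def random_walk_tokens(num_steps, grid_size=5, seed=None):
    """Emit move tokens on a 1D lattice; the hidden state is the current cell."""
    rng = random.Random(seed)
    pos, tokens = 0, []
    for _ in range(num_steps):
        legal = [step for step in (-1, +1) if 0 <= pos + step < grid_size]
        step = rng.choice(legal)
        pos += step
        tokens.append("R" if step == 1 else "L")
    return tokens, pos  # the token sequence and the hidden state it ends in

# Two different sequences can share a hidden state: "RL" and "RRLL" both end in cell 0.
```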
keyonv.bsky.social
Would more general models like LLMs do better?

We tried providing o3, Claude Sonnet 4, and Gemini 2.5 Pro with a small number of force magnitudes in-context w/o saying what they are.

These LLMs are explicitly trained on Newton's laws. But they can't get the rest of the forces.
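A hedged sketch of what such an in-context prompt might look like; the planet positions and force magnitudes below are made-up placeholders, not values from the paper.

```python
# Illustrative only: positions and force magnitudes are placeholder numbers.
examples = [
    ("planet A at (0.39, 0.00), planet B at (1.00, 0.00)", "3.54e22"),
    ("planet A at (0.72, 0.10), planet B at (1.52, 0.30)", "1.64e22"),
]
query = "planet A at (0.41, 0.05), planet B at (0.98, 0.02)"

# Few-shot prompt of unlabeled input -> number pairs; the model is never told
# the numbers are force magnitudes.
prompt = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
prompt += f"\n\nInput: {query}\nOutput:"
print(prompt)  # send to o3 / Claude Sonnet 4 / Gemini 2.5 Pro via their APIs
```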
keyonv.bsky.social
We then fine-tuned the model at a larger scale, predicting forces across 10K solar systems.

We used symbolic regression to compare the recovered force law to Newton's law.

It not only recovered a nonsensical law—it recovered different laws for different galaxies.
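A minimal sketch of the comparison step, using the PySR symbolic regression library as one possible tool (the paper's exact tooling and settings may differ). The fine-tuned model's force predictions are stood in here by noisy values of the true law, purely so the snippet runs end to end.

```python
import numpy as np
from pysr import PySRRegressor

rng = np.random.default_rng(0)
m1, m2, r = rng.uniform(1, 10, 500), rng.uniform(1, 10, 500), rng.uniform(1, 10, 500)
X = np.column_stack([m1, m2, r])

# Stand-in for the fine-tuned model's predicted force magnitudes.
y = m1 * m2 / r**2 + rng.normal(0, 0.01, 500)

sr = PySRRegressor(niterations=40, binary_operators=["+", "-", "*", "/"])
sr.fit(X, y)
print(sr.sympy())  # check whether something like x0*x1/x2**2 is recovered
```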
keyonv.bsky.social
To demonstrate, we fine-tuned the model to predict force vectors on a small dataset of planets in our solar system.

A model that understands Newtonian mechanics should get these. But the transformer struggles.
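A minimal PyTorch sketch of this kind of fine-tuning: a small head on top of the pretrained orbit transformer that maps a tokenized state to a 2D force vector. The encoder interface and dimensions are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ForceHead(nn.Module):
    """Small head mapping a sequence model's final hidden state to a 2D force vector."""
    def __init__(self, pretrained_encoder, hidden_dim=256):
        super().__init__()
        self.encoder = pretrained_encoder      # stand-in for the orbit transformer
        self.head = nn.Linear(hidden_dim, 2)   # predict (F_x, F_y)

    def forward(self, token_ids):
        h = self.encoder(token_ids)            # assumed shape: (batch, seq_len, hidden_dim)
        return self.head(h[:, -1])             # force vector at the last timestep

# Training sketch: mean-squared error against forces from a Newtonian simulator.
# model = ForceHead(pretrained_encoder)
# loss = nn.functional.mse_loss(model(tokens), true_forces)
```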
keyonv.bsky.social
But has the model discovered Newton's laws?

When we fine-tune it to new tasks, its inductive bias isn't toward Newtonian states.

When it extrapolates, it makes similar predictions for orbits with very different states, and different predictions for orbits with similar states.
keyonv.bsky.social
We apply these probes to orbital, lattice, and Othello problems.

Starting with orbits: we encode solar systems as sequences and train a transformer on 10M solar systems (20B tokens)

The model makes accurate predictions many timesteps ahead. Predictions for our solar system:
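To make the setup concrete, here's a hedged sketch of one way to serialize orbit trajectories into tokens; the paper's actual tokenization (precision, vocabulary, layout) may differ.

```python
import numpy as np

def encode_trajectory(positions, num_bins=1000, lo=-1e13, hi=1e13):
    """positions: (timesteps, num_planets, 2) array of coordinates in meters.
    Returns one integer token per coordinate per planet per timestep."""
    clipped = np.clip(positions, lo, hi)
    bins = ((clipped - lo) / (hi - lo) * (num_bins - 1)).astype(int)
    return bins.reshape(-1).tolist()

# Example: 3 timesteps of a 2-planet system -> 3 * 2 * 2 = 12 tokens.
tokens = encode_trajectory(np.random.uniform(-1e12, 1e12, size=(3, 2, 2)))
```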
keyonv.bsky.social
We propose a method to measure these inductive biases. We call it an inductive bias probe.

Two steps:
1. Fit a foundation model to many new, very small synthetic datasets
2. Analyze patterns in the functions it learns to find the model's inductive bias
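A minimal sketch of these two steps, with stand-in helpers (`fit_on_small_dataset`, `true_state`) that are assumptions about the interface rather than the paper's implementation: adapt the model to many tiny datasets, then check whether the functions it learns treat inputs with the same underlying state similarly.

```python
import numpy as np

def inductive_bias_probe(pretrained_model, small_datasets, probe_inputs,
                         true_state, fit_on_small_dataset):
    # Step 1: adapt a copy of the pretrained model to each tiny synthetic dataset.
    learned_functions = []
    for dataset in small_datasets:
        adapted = fit_on_small_dataset(pretrained_model, dataset)
        # Record what the adapted model predicts on a shared set of probe inputs.
        learned_functions.append(np.array([adapted(x) for x in probe_inputs]))
    preds = np.stack(learned_functions)  # shape: (num_datasets, num_probe_inputs)

    # Step 2: compare the learned functions against the true state structure.
    # Inputs with the same underlying state should receive similar predictions.
    same_state = np.array([[true_state(a) == true_state(b) for b in probe_inputs]
                           for a in probe_inputs])
    dists = np.abs(preds[:, :, None] - preds[:, None, :]).mean(axis=0)
    return dists[same_state].mean(), dists[~same_state].mean()

# If the second value is much larger than the first, the model's inductive bias
# respects the underlying states.
```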
keyonv.bsky.social
Newton's laws are a kind of foundation model. They provide a place to start when working on new problems.

A good foundation model should do the same.

The No Free Lunch Theorem motivates a test: Every foundation model has an inductive bias. This bias reveals its world model.
keyonv.bsky.social
If you only care about orbits, Newton didn't add much. His laws give the same predictions.

But Newton's laws went beyond orbits: the same laws explain pendulums, cannonballs, and rockets.

This motivates our framework: Predictions apply to one task. World models generalize to many.
keyonv.bsky.social
Perhaps the most influential world model had its start as a predictive model.

Before we had Newton's laws of gravity, we had Kepler's predictions of planetary orbits.

Kepler's predictions led to Newton's laws. So what did Newton add?
keyonv.bsky.social
Our paper aims to answer two questions:

1. What's the difference between prediction and world models?
2. Are there straightforward metrics that can test this distinction?

Our paper is about AI. But it's helpful to go back 400 years to answer these questions.
keyonv.bsky.social
Can an AI model predict perfectly and still have a terrible world model?

What would that even mean?

Our new ICML paper (poster tomorrow!) formalizes these questions.

One result tells the story: A transformer trained on 10M solar systems nails planetary orbits. But it botches gravitational laws 🧵
Reposted by Keyon Vafa
gsbsilab.bsky.social
If we know someone’s career history, how well can we predict which jobs they’ll have next? Read our profile of @keyonv.bsky.social to learn how ML models can be used to predict workers’ career trajectories & better understand labor markets.

medium.com/@gsb_silab/k...
Keyon Vafa: Predicting Workers’ Career Trajectories to Better Understand Labor Markets
If we know someone’s career history, how well can we predict which job they’ll have next?
medium.com
Reposted by Keyon Vafa
gsbsilab.bsky.social
Foundation models make great predictions. How should we use them for estimation problems in social science?

New PNAS paper by @susanathey.bsky.social, @keyonv.bsky.social & the Blei Lab:
Bad news: Good predictions ≠ good estimates.
Good news: Good estimates possible by fine-tuning models differently 🧵
Reposted by Keyon Vafa
nkgarg.bsky.social
*Please repost* @sjgreenwood.bsky.social and I just launched a new personalized feed (*please pin*) that we hope will become a "must use" for #academicsky. The feed shows posts about papers filtered by *your* follower network. It's become my default Bluesky experience bsky.app/profile/pape...