Linas Nasvytis
@linasnasvytis.bsky.social
PhD @Stanford studying cognitive science & AI Prev: Pre-doc Fellow @Harvard, Econ & CS research with Paul Romer, Stats & ML @UniofOxford, Econ @Columbia
linasnasvytis.bsky.social
This has implications for AI and cognitive modeling:

When designing systems to reason socially, we shouldn’t assume full inference is always used — or always needed.

Humans strike a balance between accuracy and efficiency.
linasnasvytis.bsky.social
We model this in a Bayesian framework, comparing 3 hypotheses:
1. Full ToM: preference + belief (inferred from environment) → action
2. Correspondence bias: preference → action
3. Belief neglect: preference + environment (ignoring beliefs) → action

People flexibly switch depending on context!
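A toy sketch of how two of these observer models can diverge (my own simplification, not the paper's actual model): gem A sits behind a door that is in fact openable, but the agent may falsely believe it is locked. The agent walks to gem B. A full-ToM observer marginalizes over the agent's possible beliefs; a belief-neglect observer assumes the agent's belief matches the true environment. The softmax rationality parameter and utilities are hypothetical choices.

```python
import math

BETA = 2.0  # softmax rationality; hypothetical value


def choice_prob(prefers_a, believes_door_open, action):
    """P(action | preference, belief) for a noisy-rational (softmax) agent.
    Action 0 = go to gem A, action 1 = go to gem B.
    If the agent believes the door is locked, gem A is effectively unreachable."""
    u_a = (1.0 if prefers_a else 0.0) if believes_door_open else -10.0
    u_b = 0.0 if prefers_a else 1.0
    exps = [math.exp(BETA * u_a), math.exp(BETA * u_b)]
    return exps[action] / sum(exps)


def posterior_prefers_a(belief_weights, action=1):
    """P(prefers A | observed action), marginalizing over the agent's belief
    with the observer's assumed weights (P(believes open), P(believes locked))."""
    w_open, w_locked = belief_weights

    def lik(prefers_a):
        return (w_open * choice_prob(prefers_a, True, action)
                + w_locked * choice_prob(prefers_a, False, action))

    la, lb = lik(True), lik(False)
    return la / (la + lb)


# Full ToM: entertains the possibility of a false "door is locked" belief.
full_tom = posterior_prefers_a(belief_weights=(0.5, 0.5))
# Belief neglect: assumes the agent's belief equals the true state (door open).
neglect = posterior_prefers_a(belief_weights=(1.0, 0.0))
```

The neglecting observer concludes confidently that the agent dislikes gem A, while the full-ToM observer stays more uncertain, since a false belief about the door also explains the behavior.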
linasnasvytis.bsky.social
With minimal training, participants started engaging in full joint inference over beliefs and preferences.

But without that training, belief neglect was common.

This suggests people adaptively allocate cognitive effort, depending on task structure.
linasnasvytis.bsky.social
Belief neglect is different from correspondence bias:

People DO account for environmental constraints (e.g., locked doors).

But they skip reasoning about what the agent believes about the environment.

It’s a mid-level shortcut.
linasnasvytis.bsky.social
We find that, by default, people often neglect the agent’s beliefs.

They infer preferences as if the agent’s beliefs were correct — even when they’re not.

This is what we call belief neglect.
linasnasvytis.bsky.social
In our task, participants watched agents navigate grid worlds to collect gems.

Sometimes, gems were hidden behind doors. Participants were told that some agents falsely believed they couldn't open these doors.

They then had to infer which gem the agents preferred.
linasnasvytis.bsky.social
The question we ask is: When do people actually engage in full ToM reasoning?

And when do they fall back on faster heuristics?
linasnasvytis.bsky.social
Theory of mind (ToM) — reasoning about others’ beliefs and desires — is central to human intelligence.

It's often framed as Bayesian inverse planning: we observe a person's action, then infer their beliefs and desires.

But that kind of reasoning is computationally costly.
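A minimal sketch of inverse planning in this Bayesian sense (a hypothetical toy, not the paper's implementation): the observer watches a softmax-rational agent pick one of two gems and inverts the choice model with Bayes' rule to recover a posterior over preferences.

```python
import math


def softmax_choice_prob(utilities, action, beta=2.0):
    """P(action | utilities) under a noisy-rational (softmax) agent.
    beta is a hypothetical rationality parameter."""
    exps = [math.exp(beta * u) for u in utilities]
    return exps[action] / sum(exps)


def infer_preference(observed_action, candidate_utilities, prior):
    """Bayes' rule: posterior over preference hypotheses given one action."""
    likelihoods = [softmax_choice_prob(u, observed_action)
                   for u in candidate_utilities]
    unnorm = [lik * p for lik, p in zip(likelihoods, prior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]


# Hypothesis 0: agent prefers gem A (utilities [1, 0]);
# hypothesis 1: agent prefers gem B (utilities [0, 1]).
posterior = infer_preference(
    observed_action=0,  # agent went to gem A
    candidate_utilities=[[1.0, 0.0], [0.0, 1.0]],
    prior=[0.5, 0.5],
)
# Observing the agent choose gem A shifts belief toward "prefers A".
```

The cost the post alludes to comes from scaling this up: with beliefs as an extra latent variable, the observer must marginalize over belief-preference combinations, and the hypothesis space grows multiplicatively.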
linasnasvytis.bsky.social
🚨New paper out w/ @gershbrain.bsky.social & @fierycushman.bsky.social from my time @Harvard!

Humans are capable of sophisticated theory of mind, but when do we use it?

We formalize & document a new cognitive shortcut: belief neglect — inferring others' preferences as if their beliefs were correct🧵