Erik Brockbank
@erikbrockbank.bsky.social
110 followers 220 following 31 posts
Postdoc @Stanford Psychology
erikbrockbank.bsky.social
Hmm poster image seems to be a black square. That's fun.
Here's the intended image?
erikbrockbank.bsky.social
This project came together with a wonderful crew of collaborators, co-led by Logan Cross with @tobigerstenberg.bsky.social, @judithfan.bsky.social, @dyamins.bsky.social, and Nick Haber
erikbrockbank.bsky.social
Our work shows how LLM-based agents can serve as models of human cognition, helping us pinpoint the bottlenecks in our own learning.
Read the full paper here: tinyurl.com/mr356hyv
Code & Data: tinyurl.com/3napnpsm
Come check out our poster at CCN on Wednesday!
erikbrockbank.bsky.social
In sum: limitations in pattern learning in this setting aren't just about memory or reasoning power, but about searching the right strategy space.
These results also make a prediction: the same kind of verbal scaffolding might help humans overcome cognitive bottlenecks in the same task.
erikbrockbank.bsky.social
So, can we "teach" the model to think of better hypotheses?
By giving the model verbal scaffolding that directed its attention to relevant features (e.g., "pay attention to how the opponent's move changes after a win vs. a loss"), it discovered complex patterns it had previously missed.
erikbrockbank.bsky.social
How can we help the model generate the right hypotheses? We started with simple interventions that people could use too: making the model generate more hypotheses, or more diverse ones (by increasing the LLM's sampling temperature). Neither worked. HM was stuck searching the wrong part of the hypothesis space.
erikbrockbank.bsky.social
The answer seems to be Hypothesis Generation.

When we gave HM an explicit description of the opponent's strategy, its performance soared to >80% win rates against almost all bots. When we gave it a list of possible strategies, HM was able to accurately evaluate which one fit the data best.
erikbrockbank.bsky.social
This led to our central question: What is the main bottleneck for both humans and our model?
* Coming up with the right idea? (Hypothesis Generation)
* Figuring out if an idea is correct? (Hypothesis Evaluation)
* Knowing what move to make with the right idea? (Strategy Implementation)
erikbrockbank.bsky.social
Here's where it gets interesting. When we put HM in the same experiment, it closely tracked human performance: it succeeded against simple opponents and performed around chance against complex ones, suggesting HM may capture key aspects of the cognitive processes at play in this task.
erikbrockbank.bsky.social
To find out, we deployed an LLM-based agent called Hypothetical Minds (HM) as a model of the cognitive processes needed to adapt to RPS opponents.
HM tries to outwit its opponent by generating and testing natural language hypotheses about their strategy (e.g., "the opponent copies my last move").
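To make the generate-and-test idea concrete, here's a minimal sketch of the evaluation half of an HM-style loop. Everything here is illustrative: function names are hypothetical, hypotheses are stand-in callables mapping a move history to a predicted next opponent move, and the real agent generates its hypotheses in natural language via an LLM.

```python
# Sketch of hypothesis evaluation in a Hypothetical Minds-style agent.
# A "hypothesis" here is any callable: history_so_far -> predicted next
# opponent move. (Hypothetical helper names, not the paper's actual code.)

def evaluate(hypothesis, history):
    """Score a hypothesis by how often it predicts the opponent's next move."""
    hits = sum(1 for t in range(1, len(history))
               if hypothesis(history[:t]) == history[t])
    return hits / max(1, len(history) - 1)

def best_hypothesis(hypotheses, history):
    """Keep whichever candidate strategy best fits the data so far."""
    return max(hypotheses, key=lambda h: evaluate(h, history))
```

The agent's edge (or bottleneck) then comes down to whether the right hypothesis is in the candidate set at all, which is the point of the posts above.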
erikbrockbank.bsky.social
In RPS, you win by exploiting patterns in your opponent’s moves. We tested people’s ability to do this by having them play 300 rounds of RPS against bots with algorithmic strategies. The finding? People are great at exploiting simple patterns but struggle to detect more complex ones. Why?
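For a flavor of what "algorithmic strategies" means here, a simple exploitable bot might just copy the player's previous move. This is a hedged sketch (the actual bot strategies are in the paper and repo linked below; these names are illustrative):

```python
# Illustrative RPS bot: a simple pattern a player could learn to exploit.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def copy_bot(history):
    """Bot strategy: copy the human player's previous move."""
    if not history:
        return "rock"  # arbitrary opening move
    return history[-1]["player"]

def counter(move):
    """The move that beats `move` -- exploiting a bot means playing this."""
    return BEATS[move]
```

Once a player (or model) correctly hypothesizes "the bot copies my last move," winning reduces to playing `counter` of their own previous move every round.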
erikbrockbank.bsky.social
How do we predict what others will do next? 🤔
We look for patterns. But what are the limits of this ability?
In our new paper at CCN 2025 (@cogcompneuro.bsky.social), we explore the computational constraints of human pattern recognition using the classic game of Rock, Paper, Scissors 🗿📄✂️
erikbrockbank.bsky.social
Awesome work led by @kristinezheng.bsky.social on how we can predict learning in data science classes.
Data science is a field that should be accessible and understandable for everyone, but many people struggle with it.
Come check out Kristine's poster at CogSci this week to learn about why :)
kristinezheng.bsky.social
Linking student psychological orientation, engagement & learning in intro college-level data science

New work ‪‪at @cogscisociety.bsky.social w/ @erikbrockbank.bsky.social @shawnschwartz.bsky.social, C.Bryan, D.Yeager, C.Dweck & @judithfan.bsky.social

poster 8/1 @ 10:30
tinyurl.com/solds-cogsci25
erikbrockbank.bsky.social
Since then, we’ve also run a study exploring how good *people* are at this same prediction task.
Come check out our poster at CogSci (poster session 1 on Thursday), or check out our video summary for virtual attendees, to get the full story :)
erikbrockbank.bsky.social
We had GPT-4o use each person’s written answers to guess their responses on the personality scales.
GPT does well, even when we correct for guessing the most typical responses.
This suggests that, at least in some cases, people's answers to these questions contain real information about what they are like.
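"Correcting for guessing the most typical responses" can be pictured as comparing the model against a modal-response baseline. A minimal sketch of that baseline, with hypothetical data shapes (rows = participants, columns = scale items; the paper's exact correction may differ):

```python
# Baseline: always guess each scale item's most common (modal) response.
# A model only "does well" if it beats this accuracy.
from collections import Counter

def modal_baseline_accuracy(true_responses):
    """Accuracy of predicting every participant's answer with the item's mode."""
    n_items = len(true_responses[0])
    correct = 0
    for i in range(n_items):
        answers = [resp[i] for resp in true_responses]
        modal = Counter(answers).most_common(1)[0][0]
        correct += sum(a == modal for a in answers)
    return correct / (n_items * len(true_responses))
```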
erikbrockbank.bsky.social
Psychologists often use sliding scale personality surveys to learn about people’s traits.
Do people learn the same thing about others from their answers to “deep” questions?
In our second study, online participants wrote answers to some of these questions and also completed a personality survey.
erikbrockbank.bsky.social
We find that question ratings tend to be similar across all 9 scales and across different raters.
If we combine the ratings for each question, we get a pretty good measure of its “interpersonal depth”, with “small talk” Qs at the low end and more “personal” Qs at the high end.
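The simplest way to read "combine the ratings for each question" is as an average over scales and raters; this sketch assumes that aggregation (the paper's exact method may differ):

```python
# Sketch: one "interpersonal depth" score per question, as the mean of
# all ratings it received (rows = raters, columns = rating scales).
def depth_score(ratings):
    """Mean rating across every rater and scale for one question."""
    flat = [r for rater in ratings for r in rater]
    return sum(flat) / len(flat)
```

Under this scoring, "small talk" questions land near the low end of the scale and "personal" questions near the high end, as described above.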
erikbrockbank.bsky.social
In our first experiment, we developed a corpus of 235 open-ended questions: half were “small talk” (“favorite sports team?”) and half were “personal” (“greatest fear?”).
We asked online participants to rate the Qs on different scales related to whether they would help them get to know a stranger.
erikbrockbank.bsky.social
This project asks what kind of questions are most useful for getting to know others.
We made a bank of questions and in two studies:
1) people evaluated the questions for whether they would help get to know somebody
2) we measured what people’s answers reveal about their personality
erikbrockbank.bsky.social
"36 Questions That Lead To Love" was the most viewed article in NYT Modern Love.
Excited to share new results investigating these and other “deep questions” with @tobigerstenberg.bsky.social @judithfan.bsky.social & @rdhawkins.bsky.social
Preprint: tinyurl.com/bdfx5smk
Code: tinyurl.com/3v6pws4s
Reposted by Erik Brockbank
tobigerstenberg.bsky.social
The Causality in Cognition Lab is pumped for #cogsci2025 💪
erikbrockbank.bsky.social
Really fun project spearheaded by @veronateo.bsky.social, come check out our poster at CogSci!
veronateo.bsky.social
Excited to share our new work at #CogSci2025!

We explore how people plan deceptive actions, and how detectives try to see through the ruse and infer what really happened based on the traces left behind. 🕵️‍♀️

Paper: osf.io/preprints/osf/vqgz5_v1
Code: github.com/cicl-stanford/recursive_deception

1/
erikbrockbank.bsky.social
This is heartbreaking and barbaric
mariannazhang.bsky.social
yesterday, my postdoc funding (salary and research funds) was cancelled by the National Science Foundation, effective immediately. I received the same generic, vaguely threatening, typo-ridden email as many of my colleagues who have had their awards terminated recently. (1/n)
email starting, "The U.S. National Science Foundation (NSF) has undertaken a review of its award portfolio. Each award was carefully and individually reviewed, and the agency has determined that termination of certain awards is necessary because they are not in alignment with current NSF priorites."
erikbrockbank.bsky.social
Whoops I apparently have no idea how graphics work, please enjoy this hilarious inverted SVG situation and head to project-nightingale.stanford.edu to see the *real* graphic
erikbrockbank.bsky.social
We’re working on developing those now! Stay tuned for updates from us at Project Nightingale (project-nightingale.stanford.edu), a new collaborative effort to advance the science of how people reason about data!