Ben Prystawski
@benpry.bsky.social
830 followers 360 following 5 posts
Cognitive science PhD student at Stanford, studying iterated learning and reasoning.
Reposted by Ben Prystawski
yangxiang.bsky.social
Now out in Cognition, work with the great @gershbrain.bsky.social @tobigerstenberg.bsky.social on formalizing self-handicapping as rational signaling!
📃 authors.elsevier.com/a/1lo8f2Hx2-...
Reposted by Ben Prystawski
erikbrockbank.bsky.social
How do we predict what others will do next? 🤔
We look for patterns. But what are the limits of this ability?
In our new paper at CCN 2025 (@cogcompneuro.bsky.social), we explore the computational constraints of human pattern recognition using the classic game of Rock, Paper, Scissors 🗿📄✂️
Reposted by Ben Prystawski
rebeccazoo.bsky.social
My final project from grad school is out now in Dev Psych! Mombasa County preschoolers were more accurate on object-based than picture-based vocabulary assessments, whereas Bay Area preschoolers were equally accurate on object-based and picture-based assessments.

psycnet.apa.org/doiLanding?d...
Reposted by Ben Prystawski
lampinen.bsky.social
In neuroscience, we often try to understand systems by analyzing their representations — using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:
What do representations tell us about a system? Image of a mouse with a scope showing a vector of activity patterns, and a neural network with a vector of unit activity patterns
Common analyses of neural representations: Encoding models (relating activity to task features) drawing of an arrow from a trace saying [on_____on____] to a neuron and spike train. Comparing models via neural predictivity: comparing two neural networks by their R^2 to mouse brain activity. RSA: assessing brain-brain or model-brain correspondence using representational dissimilarity matrices
benpry.bsky.social
How do people trade off between speed and accuracy in reasoning tasks without easy heuristics? Come to my talk, "Thinking fast, slow, and everywhere in between in humans and language models," in the Reasoning session this afternoon #CogSci2025 to find out!
paper: escholarship.org/uc/item/5td9...
Thinking fast, slow, and everywhere in between in humans and language models
Author(s): Prystawski, Ben; Goodman, Noah | Abstract: How do humans adapt how they reason to varying circumstances? Prior research has argued that reasoning comes in two types: a fast, intuitive type ...
Reposted by Ben Prystawski
danielwurgaft.bsky.social
🚨New paper! We know models learn distinct in-context learning strategies, but *why*? Why generalize instead of memorize to lower loss? And why is generalization transient?

Our work explains this & *predicts Transformer behavior throughout training* without its weights! 🧵

1/
benpry.bsky.social
How can we combine the process-level insight that think-aloud studies give us with the large scale that modern online experiments permit? In our new CogSci paper, we show that speech-to-text models and LLMs enable us to scale up the think-aloud method to large experiments!
danielwurgaft.bsky.social
Excited to share a new CogSci paper co-led with @benpry.bsky.social!

Once a cornerstone for studying human reasoning, the think-aloud method declined in popularity as manual coding limited its scale. We introduce a method to automate analysis of verbal reports and scale think-aloud studies. (1/8)🧵
Reposted by Ben Prystawski
junyi.bsky.social
Delighted to announce our CogSci '25 workshop at the interface between cognitive science and design 🧠🖌️!

We're calling it: 🏺Minds in the Making🏺
🔗 minds-making.github.io

June – July 2025, free & open to the public
(all career stages, all disciplines)
Reposted by Ben Prystawski
xrg.bsky.social
the functional form of moral judgment is (sometimes) the nash bargaining solution

new preprint👇
figure 2 from our preprint, reporting the results from two experiments 

we measure moral judgments about dividing money between two parties and manipulate the degree of asymmetry in the outside options each party has

we find that moral judgments track predictions from rational bargaining models like the nash bargaining solution and the kalai-smorodinsky solution in a negotiation context

by contrast, in a donation context, moral intuitions completely reverse, instead tracking redistributive and egalitarian principles

preprint link: https://osf.io/preprints/psyarxiv/3uqks_v1
Reposted by Ben Prystawski
fredcallaway.bsky.social
Despite the world being on fire, I can't help but be thrilled to announce that I'll be starting as an Assistant Professor in the Cognitive Science Program at Dartmouth in Fall '26. I'll be recruiting grad students this upcoming cycle—get in touch if you're interested!
Reposted by Ben Prystawski
mcxfrank.bsky.social
Super excited to submit a big sabbatical project this year: "Continuous developmental changes in word recognition support language learning across early childhood": osf.io/preprints/ps...
Images: paper title and author list; time course of word recognition for kids at different ages.
Reposted by Ben Prystawski
erikbrockbank.bsky.social
Hello bluesky world :) excited to share a new paper on data visualization literacy 📈 🧠 w/ @judithfan.bsky.social, @arnavverma.bsky.social, Holly Huey, Hannah Lloyd, @lacepadilla.bsky.social!

📝 preprint: osf.io/preprints/ps...
💻 code: github.com/cogtoolslab/...
Reposted by Ben Prystawski
mcxfrank.bsky.social
AI models are fascinating, impressive, and sometimes problematic. But what can they tell us about the human mind?

In a new review paper, @noahdgoodman.bsky.social and I discuss how modern AI can be used for cognitive modeling: osf.io/preprints/ps...
Figure 1. A schematic depiction of a model-mechanism mapping between a human learning system (left side) and a cognitive model (right side). Candidate model-mechanism mappings are pictured as mappings between representations, but can also be in terms of input data, architecture, or learning objective. Figure 2. Data efficiency in human learning. (left) Order of magnitude of LLM vs. human training data, plotted by human age. Ranges are approximated from Frank (2023a). (right) A schematic depiction of evaluation scaling curves for human learners vs. models, plotted by training data quantity. Paper abstract.
Reposted by Ben Prystawski
gandhikanishk.bsky.social
1/13 New Paper!! We try to understand why some LMs self-improve their reasoning while others hit a wall. The key? Cognitive behaviors! Read our paper on how the right cognitive behaviors can make all the difference in a model's ability to improve with RL! 🧵
Reposted by Ben Prystawski
tobigerstenberg.bsky.social
New paper in Psychological Review!

In "Causation, Meaning, and Communication" Ari Beller (cicl.stanford.edu/member/ari_b...) develops a computational model of how people use & understand expressions like "caused", "enabled", and "affected".

📃 osf.io/preprints/ps...
📎 github.com/cicl-stanfor...
🧵
Reposted by Ben Prystawski
lampinen.bsky.social
What counts as in-context learning (ICL)? Typically, you might think of it as learning a task from a few examples. However, we’ve just written a perspective (arxiv.org/abs/2412.03782) suggesting interpreting a much broader spectrum of behaviors as ICL! Quick summary thread: 1/7
The broader spectrum of in-context learning
The ability of language models to learn a task from a few examples in context has generated substantial interest. Here, we provide a perspective that situates this type of supervised few-shot learning...
benpry.bsky.social
Hey! Could you add me?
Reposted by Ben Prystawski
isabelpapad.bsky.social
Do you want to understand how language models work, and how they can change language science? I'm recruiting PhD students at UBC Linguistics! The research will be fun, and Vancouver is lovely. So much cool NLP happening at UBC across both Ling and CS! linguistics.ubc.ca/graduate/adm...
Aerial picture of the UBC campus, with an arrow pointing to a building and text asking "Your PhD lab?"
Reposted by Ben Prystawski
mcxfrank.bsky.social
If you try to replicate a finding so you can build on it, but your study fails, what should you do? Should you follow up and try to "rescue" the failed rep, or should you move on? Boyce et al. tried to answer this question; in our sample, 5 of 17 rescue projects succeeded.

osf.io/preprints/ps...
Reposted by Ben Prystawski
natvelali.bsky.social
Preprint alert! After 4 years, I’m super excited to share work with @thecharleywu.bsky.social @gershbrain.bsky.social and Eric Schulz on the rise and fall of technological development in virtual communities in #OneHourOneLife #ohol
doi.org/10.31234/osf...
A promotional image of One Hour One Life, showing a character growing up from a baby, to a child, to an adult, to an old woman, to a pile of bones. This work is not affiliated with One Hour One Life; we are grateful to Jason Rohrer, the game's developer, for making the game open data and open source.
Reposted by Ben Prystawski
lampinen.bsky.social
How well can we understand an LLM by interpreting its representations? What can we learn by comparing brain and model representations? Our new paper highlights intriguing biases in learned feature representations that make interpreting them more challenging! 1/
Clear clusters in model representations driven by some features (plot colors) but neglecting other more complex ones (plotted as shapes) which are mixed within the color clusters.
Reposted by Ben Prystawski
mcxfrank.bsky.social
When a replication fails, researchers have to decide whether to make another attempt or move on. How should we think about this decision? Here's a new paper trying to answer this question, led by Veronica Boyce and featuring student authors from my class!

osf.io/preprints/ps...