Griffiths Computational Cognitive Science Lab
@cocoscilab.bsky.social
2.4K followers 220 following 10 posts
Tom Griffiths' Computational Cognitive Science Lab at Princeton. Studying the computational problems human minds have to solve.
cocoscilab.bsky.social
Our new preprint explores how advances in AI change how we think about the role of symbols in human cognition. As neural networks display capabilities once taken as evidence for symbolic processing, we need to revisit how to identify the level of analysis at which symbols are useful.
rtommccoy.bsky.social
🤖 🧠 NEW PAPER ON COGSCI & AI 🧠 🤖

Recent neural networks capture properties long thought to require symbols: compositionality, productivity, rapid learning

So what role should symbols play in theories of the mind? For our answer...read on!

Paper: arxiv.org/abs/2508.05776

1/n
The top shows the title and authors of the paper: "Whither symbols in the era of advanced neural networks?" by Tom Griffiths, Brenden Lake, Tom McCoy, Ellie Pavlick, and Taylor Webb.

At the bottom is text saying "Modern neural networks display capacities traditionally believed to require symbolic systems. This motivates a re-assessment of the role of symbols in cognitive theories."

In the middle is a graphic illustrating this text by showing three capacities: compositionality, productivity, and inductive biases. For each one, there is an illustration of a neural network displaying it. For compositionality, the illustration is DALL-E 3 creating an image of a teddy bear skateboarding in Times Square. For productivity, the illustration is novel words produced by GPT-2: "IKEA-ness", "nonneotropical", "Brazilianisms", "quackdom", "Smurfverse". For inductive biases, the illustration is a graph showing that a meta-learned neural network can learn formal languages from a small number of examples.
Reposted by Griffiths Computational Cognitive Science Lab
rtommccoy.bsky.social
🤖🧠 Paper out in Nature Communications! 🧠🤖

Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths?

Our answer: Use meta-learning to distill Bayesian priors into a neural network!

www.nature.com/articles/s41...

1/n
A schematic of our method. On the left are shown Bayesian inference (visualized using Bayes’ rule and a portrait of the Reverend Bayes) and neural networks (visualized as a weight matrix). Then, an arrow labeled “meta-learning” combines Bayesian inference and neural networks into a “prior-trained neural network”, described as a neural network that has the priors of a Bayesian model – visualized as the same portrait of Reverend Bayes but made out of numbers. Finally, an arrow labeled “learning” goes from the prior-trained neural network to two examples of what it can learn: formal languages (visualized with a finite-state automaton) and aspects of English syntax (visualized with a parse tree for the sentence “colorless green ideas sleep furiously”).
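(A minimal sketch of this recipe, under toy assumptions — the Beta(2, 2) coin-flip prior, the MLP architecture, and the training details below are illustrative choices, not the paper's code. Training on many tasks sampled from a Bayesian prior drives the network's predictions toward the Bayesian posterior predictive, so the prior ends up encoded in the weights:)

```python
# Sketch: distilling a Bayesian prior into a neural network via meta-learning.
# Each task is a coin-flip sequence whose bias theta is drawn from a Beta(2, 2)
# prior; the network sees the first 9 flips and predicts the 10th.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_CONTEXT = 9  # observed flips per task; the network predicts flip 10

def sample_tasks(batch_size):
    """Sample coin-flip sequences whose bias theta ~ Beta(2, 2)."""
    theta = torch.distributions.Beta(2.0, 2.0).sample((batch_size, 1))
    return (torch.rand(batch_size, N_CONTEXT + 1) < theta).float()

net = nn.Sequential(nn.Linear(N_CONTEXT, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Meta-training: every batch is a fresh set of tasks drawn from the prior,
# so the prior itself is what gets "distilled" into the weights.
for step in range(5000):
    flips = sample_tasks(256)
    loss = loss_fn(net(flips[:, :N_CONTEXT]), flips[:, N_CONTEXT:])
    opt.zero_grad()
    loss.backward()
    opt.step()

# The exact Bayesian posterior predictive for k heads in n flips under
# Beta(2, 2) is (k + 2) / (n + 4); the trained network should approximate it.
test = torch.tensor([[1., 1., 0., 1., 1., 1., 0., 1., 1.]])
k, n = test.sum().item(), test.numel()
print("network:", torch.sigmoid(net(test)).item())
print("bayes:  ", (k + 2) / (n + 4))
```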
Reposted by Griffiths Computational Cognitive Science Lab
harootonian.bsky.social
🚨 New preprint alert! 🚨

Thrilled to share new research on teaching!
Work supervised by
@cocoscilab.bsky.social, @yaelniv.bsky.social, and @markkho.bsky.social.

This project asks:
When do people teach by mentalizing, and when do they rely on heuristics? 1/3

osf.io/preprints/os...
Reposted by Griffiths Computational Cognitive Science Lab
rachitdubey.bsky.social
🚨 New in Nature Human Behaviour! 🚨

Binary climate data visuals amplify perceived impact of climate change.

Both graphs in this image reflect equivalent climate change trends over time, yet people consistently perceive climate change as having a greater impact in the right plot than the left.

👇1/n
cocoscilab.bsky.social
Our new preprint shows that ideas from distributed systems can predict when agents working together on a task will adopt specialized strategies.
emiecz.bsky.social
We often assume that specialized roles improve performance in multi-agent systems, but when does specialization emerge based on a given task and environment? 🧵👇

⭐️ New preprint w/ Ruaridh Mon-Williams, @neilbramley.bsky.social, Chris Lucas, @natvelali.bsky.social & @cocoscilab.bsky.social
cocoscilab.bsky.social
The new AI Lab at Princeton has positions for AI Postdoctoral Research Fellows for three research initiatives: AI for Accelerating Invention, Natural and Artificial Minds, and Princeton Language and Intelligence. Deadline is 12/31. More information here: ai.princeton.edu/ai-lab/emplo...
Employment Opportunities
Find and learn more about our open positions. Join our team
ai.princeton.edu
Reposted by Griffiths Computational Cognitive Science Lab
cgcorrea.bsky.social
My paper on hierarchical plans is out in Cognition!🎉

tldr: We ask participants to generate hierarchical plans in a programming game. People reuse subroutines more than standard accounts predict, which we formalize as the induction of a grammar over actions.

authors.elsevier.com/a/1kBQr2Hx2x...
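(A hypothetical sketch of what "a grammar over actions" can mean — the symbols and actions below are invented for illustration, not taken from the paper. A reusable subroutine becomes a non-terminal that the plan expands wherever it recurs:)

```python
# Illustration: a grammar over actions that reuses a subroutine.
# FETCH is defined once and expanded twice inside the plan.
grammar = {
    "PLAN": ["FETCH", "FETCH", "place"],   # the subroutine FETCH is reused
    "FETCH": ["walk", "grab", "return"],   # a reusable subroutine
}

def expand(symbol):
    """Recursively rewrite a symbol into a flat sequence of primitive actions."""
    if symbol not in grammar:
        return [symbol]  # primitive action
    return [action for s in grammar[symbol] for action in expand(s)]

print(expand("PLAN"))
# -> ['walk', 'grab', 'return', 'walk', 'grab', 'return', 'place']
```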
cocoscilab.bsky.social
(4/5) Here's the table of contents. An Open Access version of the book is available through the MIT Press website.
cocoscilab.bsky.social
(3/5) That same perspective is valuable for understanding modern AI systems. In particular, Bayesian models highlight the inductive biases that make it possible for humans to learn from small amounts of data, and give us tools for building machines with the same capacity.
cocoscilab.bsky.social
(2/5) Bayesian models start by considering the abstract computational problems intelligent systems have to solve and then identifying their optimal solutions. Those solutions can help us understand why we do the things we do.
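(For concreteness, the optimal solution to such an inference problem is given by Bayes' rule, which scores each hypothesis h against data d by its likelihood and its prior plausibility — standard notation, not quoted from the book:)

```latex
P(h \mid d) = \frac{P(d \mid h)\, P(h)}{\sum_{h' \in \mathcal{H}} P(d \mid h')\, P(h')}
```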
cocoscilab.bsky.social
(1/5) Very excited to announce the publication of Bayesian Models of Cognition: Reverse Engineering the Mind. More than a decade in the making, it's a big (600+ pages) beautiful book covering both the basics and recent work: mitpress.mit.edu/978026204941...
Reposted by Griffiths Computational Cognitive Science Lab
thisisadax.bsky.social
(1) Vision language models can explain complex charts & decode memes, but struggle with simple tasks young kids find easy - like counting objects or finding items in cluttered scenes! Our 🆒🆕 #NeurIPS2024 paper shows why: they face the same 'binding problem' that constrains human vision! 🧵👇
cocoscilab.bsky.social
We are advertising a new postdoctoral position in computational cognitive science, with particular interest in applications of large language models to cognitive science and in the use of Bayesian methods and meta-learning to understand human cognition and AI systems. www.princeton.edu/acad-positio...
Application for Postdoctoral Research Associate
www.princeton.edu
cocoscilab.bsky.social
First post! Does the success of deep neural networks in creating AI systems mean Bayesian models are no longer relevant? Our new paper argues the opposite: these approaches are complementary, creating new opportunities to use Bayes to understand intelligent machines.
arxiv.org/abs/2311.10206