Taylor Webb
@taylorwwebb.bsky.social
1.3K followers 460 following 85 posts
Studying cognition in humans and machines https://scholar.google.com/citations?user=WCmrJoQAAAAJ&hl=en
Pinned
taylorwwebb.bsky.social
LLMs have shown impressive performance in some reasoning tasks, but what internal mechanisms do they use to solve these tasks? In a new preprint, we find evidence that abstract reasoning in LLMs depends on an emergent form of symbol processing arxiv.org/abs/2502.20332 (1/N)
Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models
Many recent studies have found evidence for emergent reasoning capabilities in large language models, but debate persists concerning the robustness of these capabilities, and the extent to which they ...
arxiv.org
taylorwwebb.bsky.social
Agreed! I just meant that you can parameterize a task rep by giving someone verbal instructions (which is a very convenient way to specify tasks zero-shot). I agree the reps themselves aren’t likely to be encoded in natural language.
taylorwwebb.bsky.social
To be more precise, language is used in this model to (1) specify the functional role of each module, (2) mediate communication between modules, and (3) specify the task. (3) is certainly the case for PFC, but presumably not (1) or (2).
taylorwwebb.bsky.social
Good question! Natural language is useful here primarily because it’s a convenient way to specify planning tasks without needing domain-specific planning languages, which are a major bottleneck in classical planning methods.
taylorwwebb.bsky.social
Overall, these results highlight the potential of a factorized brain-like approach to improve planning in LLMs. Please check out the paper for many more analyses and results!
taylorwwebb.bsky.social
We also found benefits in a range of other planning problems (including standardized benchmarks such as PlanBench), and even some degree of transfer between different planning problems.
taylorwwebb.bsky.social
Particularly notable was the fact that this approach completely eliminated invalid moves (i.e. moves that violate the constraints of the planning problem), even in out-of-distribution settings.
taylorwwebb.bsky.social
We tested this approach on classic planning problems (e.g. Tower of Hanoi) that still pose significant challenges for LLMs and related models, finding that this modular approach significantly improved performance.
taylorwwebb.bsky.social
To address this, we developed an agentic architecture, inspired by reinforcement learning and by theories of decision-making in the human brain, in which planning is factorized into subprocesses such as action selection and error monitoring, each implemented by a separate LLM.
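A minimal sketch of the factorized idea, with plain Python functions standing in for the per-module LLMs: a separate validity monitor filters the moves suggested by an action-selection module, so the executed plan can never contain an invalid move. Function names and the Tower of Hanoi encoding here are illustrative assumptions, not the paper's implementation.

```python
def propose_moves(state):
    """Action-selection module: naively propose every move from a
    non-empty source peg to a different peg."""
    return [(src, dst) for src in state for dst in state
            if src != dst and state[src]]

def is_valid(state, move):
    """Monitoring module: a move is valid iff the source peg is non-empty
    and the moved disk is smaller than the destination's top disk."""
    src, dst = move
    if not state[src]:
        return False
    return not state[dst] or state[src][-1] < state[dst][-1]

def apply_move(state, move):
    """Apply a move without mutating the input state."""
    src, dst = move
    new = {peg: list(disks) for peg, disks in state.items()}
    new[dst].append(new[src].pop())
    return new

# Pegs map to stacks of disks, largest at the bottom.
state = {"A": [3, 2, 1], "B": [], "C": []}
legal = [m for m in propose_moves(state) if is_valid(state, m)]
# From the start state only the smallest disk can move: A->B or A->C.
```

Keeping constraint checking in its own module is what makes invalid moves structurally impossible, regardless of what the action-selection module proposes.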
taylorwwebb.bsky.social
Interestingly, we found that key subprocesses could often reliably be carried out by LLMs, but the coordination of multiple subprocesses was challenging. For instance, LLMs can often accurately identify when plans involve errors (monitoring), but persist in making those errors when planning.
taylorwwebb.bsky.social
The motivation for this work was the observation that LLMs display very limited planning abilities, as demonstrated e.g. by work from @neuroai.bsky.social neurips.cc/virtual/2023... showing difficulty with even relatively simple planning problems.
NeurIPS Poster: Evaluating Cognitive Maps and Planning in Large Language Models with CogEval (NeurIPS 2023)
neurips.cc
taylorwwebb.bsky.social
Very nice commentary arguing that binding is still a problem, for both biological and artificial neural networks www.sciencedirect.com/science/arti...
Feature binding in biological and artificial vision
www.sciencedirect.com
Reposted by Taylor Webb
matthiasmichel.bsky.social
Very happy to announce that our paper “Sensory Horizons and the Functions of Conscious Vision” is now out as a target article in BBS!! @smfleming.bsky.social and I present a new theory of the evolution and functions of visual consciousness. Article here: doi.org/10.1017/S014.... A (long) thread 🧵
Sensory Horizons and the Functions of Conscious Vision | Behavioral and Brain Sciences | Cambridge Core
doi.org
Reposted by Taylor Webb
bernsteinneuro.bsky.social
🔍 Large language models, similar to those behind ChatGPT, can predict how the human brain responds to visual stimuli

New study by @adriendoerig.bsky.social @freieuniversitaet.bsky.social with colleagues from Osnabrück, Minnesota and @umontreal-en.bsky.social

Read the whole story 👉 bit.ly/3JXlYmO
Reposted by Taylor Webb
ericelmoznino.bsky.social
Very excited to release a new blog post that formalizes what it means for data to be compositional, and shows how compositionality can exist at multiple scales. Early days, but I think there may be significant implications for AI. Check it out! ericelmoznino.github.io/blog/2025/08...
Defining and quantifying compositional structure
What is compositionality? For those of us working in AI or cognitive neuroscience this question can appear easy at first, but becomes increasingly perplexing the more we think about it. We aren’t shor...
ericelmoznino.github.io
Reposted by Taylor Webb
cocoscilab.bsky.social
Our new preprint explores how advances in AI change how we think about the role of symbols in human cognition. As neural networks show capabilities once used to argue for symbolic processes, we need to revisit how we can identify the level of analysis at which symbols are useful.
rtommccoy.bsky.social
🤖 🧠 NEW PAPER ON COGSCI & AI 🧠 🤖

Recent neural networks capture properties long thought to require symbols: compositionality, productivity, rapid learning

So what role should symbols play in theories of the mind? For our answer...read on!

Paper: arxiv.org/abs/2508.05776

1/n
The top shows the title and authors of the paper: "Whither symbols in the era of advanced neural networks?" by Tom Griffiths, Brenden Lake, Tom McCoy, Ellie Pavlick, and Taylor Webb.

At the bottom is text saying "Modern neural networks display capacities traditionally believed to require symbolic systems. This motivates a re-assessment of the role of symbols in cognitive theories."

In the middle is a graphic illustrating this text by showing three capacities: compositionality, productivity, and inductive biases. For each one, there is an illustration of a neural network displaying it. For compositionality, the illustration is DALL-E 3 creating an image of a teddy bear skateboarding in Times Square. For productivity, the illustration is novel words produced by GPT-2: "IKEA-ness", "nonneotropical", "Brazilianisms", "quackdom", "Smurfverse". For inductive biases, the illustration is a graph showing that a meta-learned neural network can learn formal languages from a small number of examples.
taylorwwebb.bsky.social
New position paper! We argue that symbolic and neural network models are not in opposition to each other, but occupy different levels of analysis, and also outline a new research agenda for better understanding the relationship between them. Please check out the paper / thread for more details!
Reposted by Taylor Webb
raphaelmilliere.com
Can LLMs reason by analogy like humans? We investigate this question in a new paper published in the Journal of Memory and Language (link below). This was a long-running but very rewarding project. Here are a few thoughts on our methodology and main findings. 1/9
Reposted by Taylor Webb
codydong.bsky.social
My first, first author paper, comparing the properties of memory-augmented large language models and human episodic memory, out in @cp-trendscognsci.bsky.social!

authors.elsevier.com/a/1lV174sIRv...

Here’s a quick 🧵(1/n)
authors.elsevier.com
Reposted by Taylor Webb
nadinedijkstra.bsky.social
After five years of confused staring at Greek letters, it is my absolute pleasure to finally share our (with @smfleming.bsky.social) computational model of mental imagery and reality monitoring: Perceptual Reality Monitoring as Higher-Order inference on Sensory Precision ✨
osf.io/preprints/ps...
OSF
osf.io
taylorwwebb.bsky.social
Very excited for this symposium! We have an amazing lineup of speakers exploring the intersection between cog sci and mechanistic interpretability. If you’re at CogSci and interested in the ways in which mechanisms in LLMs might inform cognitive theories, please check it out!
annaleshinskaya.bsky.social
Thrilled to announce our symposium, Cognitively Inspired Interpretability in Large Neural Networks, at #CogSci2025 featuring @taylorwwebb.bsky.social, Ellie Pavlick, Jiahai Feng, Gustaw Opielka, ‪‪@claires012345.bsky.social‬, and Idan Blank!
Reposted by Taylor Webb
neuroai.bsky.social
If you're at ICML, check our work on AlgEval, toward algorithmic understanding of generative AI. I couldn't make it in person but am excited to say @taylorwwebb.bsky.social is there presenting our spotlight paper.
P.S. If you see Taylor, congratulate him on his professorship!
arxiv.org/abs/2507.07544
taylorwwebb.bsky.social
Come check out our poster session at 11 tomorrow to find out how LLMs approximate symbol systems for abstract reasoning icml.cc/virtual/2025...