Shahab Bakhtiari
@shahabbakht.bsky.social
|| assistant prof at University of Montreal || leading the systems neuroscience and AI lab (SNAIL: https://www.snailab.ca/) 🐌 || associate academic member of Mila (Quebec AI Institute) || #NeuroAI || vision and learning in brains and machines
Pinned
So excited to see this preprint released from the lab into the wild.

Charlotte has developed a theory for how learning curriculum influences learning generalization.
Our theory makes straightforward neural predictions that can be tested in future experiments. (1/4)

🧠🤖 🧠📈 #MLSky
🚨 New preprint alert!

🧠🤖
We propose a theory of how learning curriculum affects generalization through neural population dimensionality. Learning curriculum is a determining factor of neural dimensionality - where you start from determines where you end up.
🧠📈

A 🧵:

tinyurl.com/yr8tawj3
The curriculum effect in visual learning: the role of readout dimensionality
Generalization of visual perceptual learning (VPL) to unseen conditions varies across tasks. Previous work suggests that training curriculum may be integral to generalization, yet a theoretical explan...
tinyurl.com
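Not part of the thread, but the notion of neural/readout dimensionality it builds on can be made concrete. A minimal sketch, assuming the standard participation-ratio measure of effective dimensionality (an illustrative metric only, not the paper's actual analysis):

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality of a (trials x neurons) activity matrix:
    PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues).
    Ranges from ~1 (one dominant dimension) up to the number of neurons."""
    centered = activity - activity.mean(axis=0, keepdims=True)
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    eigvals = np.clip(eigvals, 0.0, None)  # guard against tiny negative values
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Toy comparison: a population whose responses live on ~2 dimensions
# versus one with full-rank, unstructured responses.
rng = np.random.default_rng(0)
low_d = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 100))
high_d = rng.normal(size=(500, 100))
print(participation_ratio(low_d), participation_ratio(high_d))  # ~2 vs. much larger
```

In that framing, "where you start from determines where you end up" reads as: the curriculum shapes which dimensionality regime the learned readout ends up in.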
In the context of the recent discussions on travelling waves and oscillations in 🧠, this direction of work on ANNs by Keller, Welling et al. is my favorite:
arxiv.org/abs/2409.13669

Focusing on the advantages of travelling waves for equivariant representations and conserving symmetries.
A Spacetime Perspective on Dynamical Computation in Neural Information Processing Systems
There is now substantial evidence for traveling waves and other structured spatiotemporal recurrent neural dynamics in cortical structures; but these observations have typically been difficult to reco...
arxiv.org
November 25, 2025 at 6:49 PM
Reposted by Shahab Bakhtiari
📍Excited to share that our paper was selected as a Spotlight at #NeurIPS2025!

arxiv.org/pdf/2410.03972

It started from a question I kept running into:

When do RNNs trained on the same task converge/diverge in their solutions?
🧵⬇️
November 24, 2025 at 4:43 PM
Reposted by Shahab Bakhtiari
I think Mark Churchland's recent review does a nice job of motivating how we got to where we are now in terms of population-level descriptions with respect to motor cortex. It's not clear to me that this should generalize across the brain and across all behaviors. www.nature.com/articles/s41...
Preparatory activity and the expansive null-space - Nature Reviews Neuroscience
How does motor-cortex activity well before movement not drive motor outputs? In this Review, Churchland and Shenoy detail how searching for answers transitioned the understanding of neural activi...
www.nature.com
November 24, 2025 at 2:03 PM
Reposted by Shahab Bakhtiari
Come on Konrad, why do you cave so easily? Here, let me try it for you:
1. Spikes are (to good approximation) the only events that matter.
2. Extracellular fields are one way by which spikes interact with each other.
1/2
As we are having a discussion on neural codes: @earlkmiller.bsky.social is entirely right that the "only spike rates matter" idea that is so prominent in neuroscience has no credible evidence. We simply do not currently know how neurons code relevant information. Oscillations are likely part of it.
November 21, 2025 at 3:51 PM
Reposted by Shahab Bakhtiari
New blog post of a type I haven't done in a while: notes on nested learning, a paper from Google research. On the theme "what comes after the transformer?" And specifically how to address the weaknesses of transformers, how they struggle to handle memory well. open.substack.com/pub/itcanthi...
Paper Notes: Nested Learning
A research paper from Google on how to enable AI to learn over its lifetime without a distinct "training" phase
open.substack.com
November 20, 2025 at 11:46 PM
Reposted by Shahab Bakhtiari
My lab is looking for new students who are very passionate about foundational models and planning/RL/robotics. Apply via Mila. I will also be at #NeurIPS to discuss research ideas and opportunities. See notes below for application advice.
November 19, 2025 at 3:10 PM
Reposted by Shahab Bakhtiari
We're almost at the end of the year, and that means an end-of-year review! Send me your favorite NeuroAI papers of the year (preprints or published, late last year is fine too).
November 19, 2025 at 4:14 PM
Reposted by Shahab Bakhtiari
I’m really excited about our release of Gemini 3 today, the result of hard work by many, many people in the Gemini team and all across Google! 🎊

blog.google/products/gem...

Gemini 3 performs quite well on a wide range of benchmarks.
November 19, 2025 at 2:53 AM
Reposted by Shahab Bakhtiari
I have so many issues with this podcast with @earlkmiller.bsky.social . I think this podcast nicely shows why I have trouble with such approaches. Let's go through some of the claims.
November 18, 2025 at 8:05 PM
Reposted by Shahab Bakhtiari
🚨New Preprint!
How can we model natural scene representations in visual cortex? A solution is in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715

+ @adriendoerig.bsky.social , @alexanderkroner.bsky.social , @carmenamme.bsky.social , @timkietzmann.bsky.social
🧵 1/14
Predicting upcoming visual features during eye movements yields scene representations aligned with human visual cortex
Scenes are complex, yet structured collections of parts, including objects and surfaces, that exhibit spatial and semantic relations to one another. An effective visual system therefore needs unified ...
arxiv.org
November 18, 2025 at 12:37 PM
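For readers who want the gist of the objective: a minimal sketch of a "predict the features of the next glimpse" loss, with placeholder dimensions and architecture (not the model from the preprint):

```python
import torch
import torch.nn as nn

class GlimpsePredictor(nn.Module):
    """Toy next-glimpse objective: from the current glimpse's features and the
    upcoming saccade vector, predict the features of the next glimpse.
    Placeholder architecture, for illustration only."""

    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden),  # glimpse features + 2D saccade offset
            nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, current_feats, saccade_xy):
        return self.predictor(torch.cat([current_feats, saccade_xy], dim=-1))

# One training step: encoder features at the next fixation are the target.
model = GlimpsePredictor()
current_feats = torch.randn(32, 128)   # features of the current fixation
saccade_xy = torch.randn(32, 2)        # planned eye-movement offset
next_feats = torch.randn(32, 128)      # features observed at the next fixation
loss = nn.functional.mse_loss(model(current_feats, saccade_xy), next_feats)
loss.backward()
```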
Reposted by Shahab Bakhtiari
This is an excellent blueprint for a very fascinating use of an AI scientist! And the results are super cool and interesting! 🤩
I have been asked about this when talking about our work on using power laws to study representation quality in deep neural networks; glad to have a more concrete answer now! 😃
November 16, 2025 at 10:29 PM
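For context on the "power laws for representation quality" idea: a minimal sketch of fitting a power-law decay exponent to a representation's covariance eigenspectrum (illustrative only, not the poster's analysis code):

```python
import numpy as np

def spectral_decay_exponent(features, n_eigs=50):
    """Fit lambda_i ~ i^(-alpha) to the top covariance eigenvalues of a
    (samples x units) feature matrix and return alpha.
    Larger alpha = variance concentrated in fewer dimensions."""
    centered = features - features.mean(axis=0, keepdims=True)
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))[::-1][:n_eigs]
    ranks = np.arange(1, len(eigvals) + 1)
    # Linear fit in log-log space; the slope is -alpha.
    slope, _ = np.polyfit(np.log(ranks), np.log(eigvals + 1e-12), 1)
    return -slope

# Toy check: columns with variance ~ 1/i should give alpha close to 1.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 200)) * (np.arange(1, 201) ** -0.5)
print(spectral_decay_exponent(feats))
```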
Reposted by Shahab Bakhtiari
paper🚨
When we learn a category, do we learn the structure of the world, or just where to draw the line? In a cross-species study, we show that humans, rats & mice adapt optimally to changing sensory statistics, yet rely on fundamentally different learning algorithms.
www.biorxiv.org/content/10.1...
Different learning algorithms achieve shared optimal outcomes in humans, rats, and mice
Animals must exploit environmental regularities to make adaptive decisions, yet the learning algorithms that enable this flexibility remain unclear. A central question across neuroscience, cognitive science, and machine learning is whether learning relies on generative or discriminative strategies. Generative learners build internal models of the sensory world itself, capturing its statistical structure; discriminative learners map stimuli directly onto choices, ignoring input statistics. These strategies rely on fundamentally different internal representations and entail distinct computational trade-offs: generative learning supports flexible generalisation and transfer, whereas discriminative learning is efficient but task-specific. We compared humans, rats, and mice performing the same auditory categorisation task, where category boundaries and rewards were fixed but sensory statistics varied. All species adapted their behaviour near-optimally, consistent with a normative observer constrained by sensory and decision noise. Yet their underlying algorithms diverged: humans predominantly relied on generative representations, mice on discriminative boundary-tracking, and rats spanned both regimes. Crucially, end-point performance concealed these differences; only learning trajectories and trial-to-trial updates revealed the divergence. These results show that similar near-optimal behaviour can mask fundamentally different internal representations, establishing a comparative framework for uncovering the hidden strategies that support statistical learning.
www.biorxiv.org
November 17, 2025 at 7:18 PM
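The generative vs. discriminative distinction in the abstract can be made concrete with a toy 1D version of the task; a rough sketch under simplified assumptions, not the study's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D categorization: stimuli drawn from two Gaussian categories.
mu_a, mu_b, sigma = -1.0, 1.0, 1.0
x = np.concatenate([rng.normal(mu_a, sigma, 500), rng.normal(mu_b, sigma, 500)])
y = np.concatenate([np.zeros(500), np.ones(500)])
order = rng.permutation(len(x))
x, y = x[order], y[order]

# Generative learner: estimate each category's stimulus distribution and put
# the boundary where the two class likelihoods cross (equal priors, equal sigma).
boundary_gen = (x[y == 0].mean() + x[y == 1].mean()) / 2

# Discriminative learner: track the boundary directly from feedback errors,
# never modeling the stimulus statistics themselves.
b, lr = 0.0, 0.05
for xi, yi in zip(x, y):
    pred = float(xi > b)
    b -= lr * (yi - pred)   # shift the boundary toward the misclassified side

print(f"generative boundary: {boundary_gen:.2f}, discriminative boundary: {b:.2f}")
# Both end up near 0 here, yet they would generalize differently if the
# stimulus statistics shifted without new feedback.
```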
Reposted by Shahab Bakhtiari
Happy to share my new paper published in @nathumbehav.nature.com: A critical look at statistical power in computational modeling studies, particularly those based on model selection.
www.nature.com/articles/s41...
November 17, 2025 at 6:13 PM
I’m genuinely curious about this. The numbers in the blog are quite impressive.

Has anyone tried it and would like to share their $200 experience?
Today, we're announcing Kosmos, our newest AI Scientist, available today. Kosmos makes fully autonomous scientific discoveries at scale by analyzing datasets and literature, and is the most powerful agent for science so far. Beta users estimate that Kosmos does 6 months of work in a single day.
November 17, 2025 at 4:11 PM
Reposted by Shahab Bakhtiari
DANDI (dandiarchive.org), Brainlife (brainlife.io/about/), etc. are pretty good. But perhaps fostering meaningful interactions between experimentalists and theoreticians is the ultimate solution.
November 17, 2025 at 2:54 PM
Reposted by Shahab Bakhtiari
🧠Our new preprint is out on PsyArXiv!

We study how getting more feedback (seeing what you could have earned) and facing gains vs losses change the way people choose between risky and safe options.
🖇️Link: doi.org/10.31234/osf...

It's a thread🧶:
November 16, 2025 at 12:09 PM
Reposted by Shahab Bakhtiari
Is there an academic/industry divide in attitudes about using AI to support discovery? I noticed this post has 3.6k likes on X but only 6 likes on Bluesky. It deserves more attention here!
Today, we're announcing Kosmos, our newest AI Scientist, available today. Kosmos makes fully autonomous scientific discoveries at scale by analyzing datasets and literature, and is the most powerful agent for science so far. Beta users estimate that Kosmos does 6 months of work in a single day.
November 17, 2025 at 2:32 PM
For any question a theoretical neuroscientist is pondering, there are at least a few relevant datasets out there locked inside individual labs. I also suspect many of those labs would be willing to share their data if there were an easy way to prepare it for public release.
If you try to construct the model to be brain-like you inevitably face ~100 choices that are severely under-constrained by data, and you just have to muddle through.
November 17, 2025 at 1:07 PM
Reposted by Shahab Bakhtiari
It is actually an incredibly frustrating time to be a theoretical neuroscientist right now imo, for this reason
Same for neuroscience. The lack of ability to measure many neurons’ activity, perturb them, and measure intracellular processes and connections is what limits understanding the brain.

The key barriers are not algorithms or AI.

🧪#neuroscience 🧠🤖 #MLSky
November 17, 2025 at 1:23 AM
Reposted by Shahab Bakhtiari
There was never any point to having reference letters. That's why we've all started using AI to do this nonsense task.

References should only be used for short-listed candidates for important positions/awards, and ideally, be done via a call to get the most honest opinion possible.
From my discussions with other faculty, the use of generative AI I hear about the most is writing reference letters.

What's the point of having reference letters anymore if everyone is just having them written by machine?
November 14, 2025 at 7:10 PM
Reposted by Shahab Bakhtiari
MiniThread: I was reading this paper and thought it was worth a comment because the results are very counterintuitive to me (and to the authors too)

Miller, G. A., & Selfridge, J. A. (1950). Verbal context and the recall of meaningful material. The American journal of psychology, 63(2), 176-185.
November 13, 2025 at 5:06 PM
Reposted by Shahab Bakhtiari
Want to help shape the SCENE collaboration?! Join us as an executive director: www.cam.ac.uk/jobs/scene-m...
SCENE Manager
The Simons Collaboration on Ecological Neuroscience (SCENE): SCENE is an international consortium of 20 leading researchers in the fields of Computational, Systems and Cognitive Neuroscience, and
www.cam.ac.uk
November 13, 2025 at 5:32 PM
Reposted by Shahab Bakhtiari
Fei-Fei Li’s Worldlabs.ai releases their Marble model and tools. They predict meshes that can be re-styled. Smart. And predictable. It helps solve the impermanence issues with pure pixel-to-pixel world models, and it's going to work with how game engines already work.
World Labs
World Labs is a spatial intelligence company, building frontier models that can perceive, generate, and interact with the 3D world.
Worldlabs.ai
November 13, 2025 at 4:48 PM
We’re snailposting, post your snails!

I couldn’t let this pass without posting my lab logo for no good reason 🤪 🐌
November 13, 2025 at 3:11 PM
Reposted by Shahab Bakhtiari
Some pretty eye-opening data on the effect of AI coding.

When Cursor added agentic coding in 2024, adopters produced 39% more code merges, with no sign of a decrease in quality (revert rates were the same, bugs dropped) and no sign that the scope of the work shrank. papers.ssrn.com/sol3/papers....
November 13, 2025 at 5:18 AM