Mick Bonner
mickbonner.bsky.social
Assistant Professor of Cognitive Science at Johns Hopkins. My lab studies human vision using cognitive neuroscience and machine learning. bonnerlab.org
Pinned
Dimensionality reduction may be the wrong approach to understanding neural representations. Our new paper shows that across human visual cortex, dimensionality is unbounded and scales with dataset size—we show this across nearly four orders of magnitude. journals.plos.org/ploscompbiol...
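The scaling claim in this post can be illustrated with a toy simulation (my own construction, not the paper's actual analysis): when a representation has a slowly decaying, power-law eigenspectrum, estimates of its effective dimensionality keep growing as you sample more stimuli rather than saturating.

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_dim(X):
    # Participation ratio of the covariance eigenspectrum:
    # (sum of eigenvalues)^2 / (sum of squared eigenvalues).
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

n_features = 500
# Latent covariance with a ~1/n power-law spectrum, the kind of
# slowly decaying spectrum reported for visual cortex.
spectrum = 1.0 / np.arange(1, n_features + 1)
basis = np.linalg.qr(rng.standard_normal((n_features, n_features)))[0]

dims = []
for n_stimuli in [50, 200, 800]:
    latents = rng.standard_normal((n_stimuli, n_features)) * np.sqrt(spectrum)
    X = latents @ basis.T  # responses of 500 "units" to n_stimuli stimuli
    dims.append(effective_dim(X))

print(dims)  # estimated dimensionality grows with dataset size
```

Here the estimate rises with the number of stimuli because the long tail of small eigenvalues only becomes measurable with more samples; with an unbounded spectrum there is no dataset size at which it levels off.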
I don't disagree with that point, but at the same time, you can think of this from another perspective: Isn't it crazy that despite the many complex nonlinear transformations implemented by seemingly different models, they nonetheless arrive at something that is similar up to a linear transform?
February 10, 2026 at 5:32 PM
More to come. We are working on a paper now that characterizes these issues in more depth.
February 10, 2026 at 5:25 PM
This number is based on what we have seen in analyses in my lab. One example is Fig. 5 of this paper...
www.science.org/doi/10.1126/...
Universal dimensions of visual representation
Probing neural representations reveals universal aspects of vision in artificial and biological networks.
www.science.org
February 10, 2026 at 5:25 PM
Second, if the only thing that differentiates two alternative models is a simple linear reweighting, it raises the question of how important their differences really are. It may be more informative in the end to focus on understanding what the models have in common than how they differ.
February 10, 2026 at 4:55 PM
3. We have been thinking about this. The answer is not straightforward. First, RSA is effectively insensitive to anything beyond the first 5-10 PCs in brain and network representations, and I happen to think there is much more to the story than just a handful of dimensions. bsky.app/profile/mick...
February 10, 2026 at 4:55 PM
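The RSA point above can be made concrete with a small synthetic sketch (my construction, not from the paper): with a power-law eigenspectrum, an RDM built from only the top 10 PCs correlates very highly with the RDM of the full representation, even though most dimensions have been discarded.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stimuli, n_units = 200, 1000
spectrum = 1.0 / np.arange(1, n_units + 1)  # power-law unit variances
X = rng.standard_normal((n_stimuli, n_units)) * np.sqrt(spectrum)

def rdm(features):
    # Representational dissimilarity matrix: 1 - Pearson correlation
    # between the response patterns for every pair of stimuli.
    return 1.0 - np.corrcoef(features)

# Project onto the top 10 principal components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_top10 = Xc @ Vt[:10].T

full, reduced = rdm(X), rdm(X_top10)
iu = np.triu_indices(n_stimuli, k=1)
similarity = np.corrcoef(full[iu], reduced[iu])[0, 1]
print(similarity)  # high despite discarding 990 of 1000 dimensions
```

Because RDM entries are variance-weighted, the handful of high-variance PCs dominates them, which is why RSA scores can barely move when everything beyond the first few components changes.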
1. Yes, trained networks are much better when using RSA. We show this in a supplementary analysis.
2. We have never computed this exact quantity. But we did show that if you do PCA on wide untrained networks, you can drastically reduce their dimensionality while still retaining their performance.
February 10, 2026 at 4:55 PM
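A minimal sketch of that kind of analysis, on synthetic stand-in data (not the paper's networks or brain data): take high-dimensional random "network" features, keep only their top principal components, and check that ridge-regression encoding performance for a simulated voxel is largely retained.

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test, n_feat = 300, 100, 2000
spectrum = 1.0 / np.arange(1, n_feat + 1)  # power-law feature variances
F = rng.standard_normal((n_train + n_test, n_feat)) * np.sqrt(spectrum)

# Simulated voxel: a linear readout of the features plus noise.
w = rng.standard_normal(n_feat) * np.sqrt(spectrum)
y = F @ w + 0.5 * rng.standard_normal(n_train + n_test)

def ridge_r(Xtr, ytr, Xte, yte, alpha=1.0):
    # Closed-form ridge fit, scored by Pearson r on held-out stimuli.
    beta = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(Xtr.shape[1]),
                           Xtr.T @ ytr)
    return np.corrcoef(Xte @ beta, yte)[0, 1]

Xtr, Xte = F[:n_train], F[n_train:]
ytr, yte = y[:n_train], y[n_train:]

# PCA via SVD on the training features; keep the top 50 components.
mu = Xtr.mean(axis=0)
U, S, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
P = Vt[:50].T

r_full = ridge_r(Xtr, ytr, Xte, yte)
r_pca = ridge_r((Xtr - mu) @ P, ytr, (Xte - mu) @ P, yte)
print(r_full, r_pca)  # the PCA-reduced model retains most of the performance
```

The feature count (2000), PC count (50), and noise level here are arbitrary choices for illustration; the qualitative point is that when predictive signal concentrates in the high-variance components, aggressive PCA barely hurts encoding accuracy.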
Although pre-trained networks can be super useful for comp neuro, the surprising success of untrained networks suggests that there may still be much to learn by focusing on simpler approaches. We shouldn't be focusing all our attention on the latest DNN models coming out of the ML world.
February 10, 2026 at 2:56 PM
These architectural manipulations were things that you wouldn’t typically think to try if your primary focus was on trained networks. We wrote about this in our discussion.
February 10, 2026 at 2:56 PM
Importantly, one of the things we learned in that work was that the field hasn’t been giving untrained networks the best chance possible. We found that fairly simple architectural manipulations could dramatically improve their performance.
February 10, 2026 at 2:56 PM
That's true. But untrained networks can do surprisingly well. In a recent paper, we found that untrained networks can rival trained networks in a key monkey dataset. In the human data we examined, there was still a gap relative to pre-trained models, as you point out. www.nature.com/articles/s42...
Convolutional architectures are cortex-aligned de novo - Nature Machine Intelligence
Kazemian et al. report that untrained convolutional networks with wide layers predict primate visual cortex responses nearly as well as task-optimized networks, revealing how architectural constraints...
www.nature.com
February 10, 2026 at 2:56 PM
Reposted by Mick Bonner
This paper was an awesome collaborative effort of a @fitngin.bsky.social working group. It provides a detailed review of how DNNs can be used to support dev neuro research

@lauriebayet.bsky.social and I wrote the network modeling section about how DNNs can be used to test developmental theories 🧵
Deep learning in fetal, infant, and toddler neuroimaging research
Artificial intelligence (AI) is increasingly being integrated into everyday tasks and work environments. However, its adoption in medical image analys…
www.sciencedirect.com
January 28, 2026 at 3:08 PM
Reposted by Mick Bonner
Infants organise their visual world into categories at two months old! So happy to see these results published - congratulations Cliona and the rest of the FOUNDCOG team.
1/7 Can infants recognise the world around them? 👶🧠 As part of the FOUNDCOG project, we scanned 134 awake infants using fMRI. Published today in Nature Neuroscience, our research reveals 2-month-old infants already possess complex visual representations in VVC that align with DNNs.
February 2, 2026 at 4:39 PM
New paper from our lab on the behavioral significance of high-dimensional neural representations!
Human visual cortex representations may be much higher-dimensional than earlier work suggested, but are these higher dimensions of cortical activity actually relevant to behavior? Our new paper tackles this by studying how different people experience the same movies. 🧵 www.cell.com/current-biol...
High-dimensional structure underlying individual differences in naturalistic visual experience
Han and Bonner reveal that individual visual experience arises from high-dimensional neural geometry distributed across multiple representational scales. By characterizing the full dimensional spectru...
www.cell.com
January 30, 2026 at 6:57 PM
Reposted by Mick Bonner
I have a PhD opening for my #VIDI BrainShorts project 📽️🧠🤖! Are you or do you know an ambitious, recent (or almost) MSc graduate with a background in NeuroAI and interest in large-scale data collection and video perception? Check out our vacancy! (deadline Feb 15).
werkenbij.uva.nl/en/vacancies...
Vacancy — PhD Position in NeuroAI for Video Perception in the Human Brain
Are you interested in using AI to unravel the mysteries of the brain? Do you want to perform cutting-edge NeuroAI research and leverage deep learning to understand human vision? Then check out the vacancy below and apply for a PhD position in this exciting research direction.
werkenbij.uva.nl
January 16, 2026 at 12:31 PM
Reposted by Mick Bonner
Why do we find some scenes more aesthetic than others?

For my first piece in @sciencenews.bsky.social, I wrote about a new study that suggests that our aesthetic preferences could have evolved as cognitive shortcuts. 🧠🧪

www.sciencenews.org/article/brai...
Easy on the eyes is also easy on the brain
A new study finds that the brain spends less energy processing scenes that people find aesthetically pleasing.
www.sciencenews.org
January 9, 2026 at 9:18 PM
Reposted by Mick Bonner
Our new paper in @sfnjournals.bsky.social shows different neural systems for integrating views into places--PPA integrates views *of* a location (e.g., views of a landmark), while RSC integrates views *from* a location (e.g., views of a panorama). Work by the bluesky-less Linfeng Tony Han.
#JNeurosci: Using fMRI, Han and Epstein explored how people integrate different kinds of views to form mental maps of places, revealing two sets of brain regions involved in integrating views of landmarks into existing mental maps of a virtual city.
https://doi.org/10.1523/JNEUROSCI.0187-25.2025
January 7, 2026 at 5:11 PM
Reposted by Mick Bonner
Why isn’t modern AI built around principles from cognitive science or neuroscience? Starting a substack (infinitefaculty.substack.com/p/why-isnt-m...) by writing down my thoughts on that question: as part of a first series of posts giving my current thoughts on the relation between these fields. 1/3
Why isn’t modern AI built around principles from cognitive science?
First post in a series on cognitive science and AI
infinitefaculty.substack.com
December 16, 2025 at 3:40 PM
Reposted by Mick Bonner
Spread the word: I'm looking to hire a postdoc to explore the concept of attention (as studied in psych/neuro, not the transformer mechanism) in large Vision-Language Models. More details here: lindsay-lab.github.io/2025/12/08/p...
#MLSky #neurojobs #compneuro
Lindsay Lab - Postdoc Position
Artificial neural networks applied to psychology, neuroscience, and climate change
lindsay-lab.github.io
December 8, 2025 at 11:53 PM
As for what other inductive biases will prove to be important, this is still TBD. I think that wiring costs (e.g., topography) may be one.
December 15, 2025 at 7:57 PM
But neuroscientists and AI engineers have different goals! Neuroscientists should be seeking parsimonious theories, not high-performing models.
December 15, 2025 at 7:57 PM
Importantly, to get this to work, NeuroAI researchers have to go back to the drawing board and search for simpler approaches. I think that currently, we are relying too much on the tools and models coming out of AI. It makes it seem like the only feasible approach is whatever currently works in AI.
December 15, 2025 at 7:57 PM
The simple-local-learning goal is certainly non-trivial! But recent findings (especially universality of network representations) suggest that it has potential.
December 15, 2025 at 7:57 PM
What might such a theory look like? My bet is that it will be one that combines strong architectural inductive biases with fully unsupervised learning algorithms that operate without the need for backpropagation. This is a very different direction than where AI and NeuroAI are currently headed.
December 15, 2025 at 6:45 PM
Although the deep learning revolution in vision science started with task-based optimization, there are intriguing signs that a far more parsimonious computational theory of the visual hierarchy is attainable.
December 15, 2025 at 6:45 PM