Mick Bonner
@mickbonner.bsky.social
Assistant Professor of Cognitive Science at Johns Hopkins. My lab studies human vision using cognitive neuroscience and machine learning. bonnerlab.org
As for what other inductive biases will prove to be important, this is still TBD. I think that wiring costs (e.g., topography) may be one.
December 15, 2025 at 7:57 PM
But neuroscientists and AI engineers have different goals! Neuroscientists should be seeking parsimonious theories, not high-performing models.
December 15, 2025 at 7:57 PM
Importantly, to get this to work, NeuroAI researchers have to go back to the drawing board and search for simpler approaches. I think we currently rely too much on the tools and models coming out of AI, which makes it seem like the only feasible approach is whatever currently works in AI.
December 15, 2025 at 7:57 PM
The simple-local-learning goal is certainly non-trivial! But recent findings (especially universality of network representations) suggest that it has potential.
December 15, 2025 at 7:57 PM
What might such a theory look like? My bet is that it will be one that combines strong architectural inductive biases with fully unsupervised learning algorithms that operate without the need for backpropagation. This is a very different direction than where AI and NeuroAI are currently headed.
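To make that concrete, here is a minimal sketch of the kind of local, backprop-free learning rule I have in mind: Oja's rule, a classic Hebbian variant that converges to the first principal component of its inputs. Everything here (data, parameters) is illustrative, not from any specific model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, lr = 50, 0.01

# Synthetic inputs with one dominant direction of variance.
X = rng.standard_normal((5000, n_features))
X[:, 0] *= 5.0

w = rng.standard_normal(n_features)
w /= np.linalg.norm(w)

for x in X:
    y = w @ x                   # local post-synaptic activity
    w += lr * y * (x - y * w)   # Hebbian growth plus local decay; no backprop

print(abs(w[0]))  # approaches 1.0: w aligns with the high-variance axis
```

The point is that every update uses only quantities available at the synapse itself, which is the kind of constraint a parsimonious theory would respect.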
December 15, 2025 at 6:45 PM
Although the deep learning revolution in vision science started with task-based optimization, there are intriguing signs that a far more parsimonious computational theory of the visual hierarchy is attainable.
December 15, 2025 at 6:45 PM
These universal representations are not restricted to early network layers; we see them across the full depth of the networks we examined. Their strong universality and independence from task demands call out for a parsimonious explanation that has yet to be discovered.
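For anyone who wants to probe this in their own models, here is a minimal sketch using linear CKA as a generic index of shared structure between two networks (a standard similarity measure, not necessarily our paper's exact analysis):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation matrices
    (same stimuli on the rows, any number of units on the columns)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, ord='fro') ** 2
    return num / (np.linalg.norm(X.T @ X, 'fro') *
                  np.linalg.norm(Y.T @ Y, 'fro'))
```

High alignment at matched depths across independently trained networks, regardless of training task, is the kind of signature we mean by universality.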
December 15, 2025 at 6:45 PM
A second paper from my lab adds another element to this story: after training, many diverse DNNs converge to universal features that are independent of the tasks they were trained on. It is these universal features that are most strongly shared with visual cortex. www.science.org/doi/10.1126/...
Universal dimensions of visual representation
Probing neural representations reveals universal aspects of vision in artificial and biological networks.
www.science.org
December 15, 2025 at 6:45 PM
What does this mean? It suggests that architectural inductive biases alone can get us surprisingly far in explaining the image representations of the ventral stream. See a great commentary by @binxuwang.bsky.social and Carlos Ponce. www.nature.com/articles/s42...
Structure as an inductive bias for brain–model alignment - Nature Machine Intelligence
Even before training, convolutional neural networks may reflect the brain’s visual processing principles. A study now shows how structure alone can help to explain the alignment between brains and mod...
www.nature.com
December 15, 2025 at 6:45 PM
Second, similar manipulations in other architectures were relatively ineffective—the effects were specific to convolutional architectures and relied critically on the use of spatially local filters.
December 15, 2025 at 6:45 PM
These results could not simply be explained by high-dimensional regression. First, we could drastically reduce the dimensionality of wide layers through PCA while still retaining strong performance.
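Roughly, that control looks like this (a sketch with placeholder data, not our exact code): compress the wide layer to a modest number of principal components before the regression. If predictivity survives, the fit is not just an artifact of regressing from a huge feature space.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 5000))   # wide-layer activations (placeholder)
neural = rng.standard_normal((500, 100))   # recorded responses (placeholder)

# PCA inside the pipeline so the reduction is refit on each training fold.
pipe = make_pipeline(PCA(n_components=100), RidgeCV())
scores = cross_val_score(pipe, feats, neural, cv=5, scoring='r2')
print(scores.mean())
```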
December 15, 2025 at 6:45 PM
We found that architectural manipulations alone (most importantly, making deeper layers wider) yielded large performance gains in untrained convolutional models of the ventral stream. In fact, these untrained networks even rivaled ImageNet-trained AlexNet in predicting monkey IT representations!
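For the flavor of the analysis, here is a generic encoding-model sketch: predict each recorded site from the activations of an untrained network using ridge regression, scoring on held-out stimuli. The arrays and parameters are placeholders, not our actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 2000))   # untrained-CNN activations (placeholder)
neural = rng.standard_normal((500, 100))   # IT responses, n_stimuli x n_sites

f_tr, f_te, n_tr, n_te = train_test_split(feats, neural, random_state=0)
pred = Ridge(alpha=1e3).fit(f_tr, n_tr).predict(f_te)

# Correlation between predicted and observed responses, per recording site.
site_r = [np.corrcoef(pred[:, i], n_te[:, i])[0, 1]
          for i in range(n_te.shape[1])]
print(np.mean(site_r))
```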
December 15, 2025 at 6:45 PM
Our recent paper adds to this story by showing the remarkable effectiveness of untrained convolutional networks in predicting ventral stream representations. www.nature.com/articles/s42...
Convolutional architectures are cortex-aligned de novo - Nature Machine Intelligence
Kazemian et al. report that untrained convolutional networks with wide layers predict primate visual cortex responses nearly as well as task-optimized networks, revealing how architectural constraints...
www.nature.com
December 15, 2025 at 6:45 PM
I see what you mean now. We explored this question in simulations at some point. The general take-away was that noise did not alter the shape of the spectrum. It just reduced the range of dimensions that we could reliably detect.
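A toy version of the point (parameters illustrative, not our actual simulations):

```python
import numpy as np

# Responses with a power-law variance spectrum, plus i.i.d. measurement noise.
rng = np.random.default_rng(1)
n_stim, n_dim = 10000, 500
sd = np.arange(1, n_dim + 1) ** -0.5              # variance ~ 1/n across dims
clean = rng.standard_normal((n_stim, n_dim)) * sd
noisy = clean + 0.1 * rng.standard_normal((n_stim, n_dim))

spec_clean = np.linalg.svd(clean, compute_uv=False) ** 2 / n_stim
spec_noisy = np.linalg.svd(noisy, compute_uv=False) ** 2 / n_stim
# The noisy spectrum tracks the clean power law until it flattens at the
# noise floor (~0.01 here): the shape above the floor is unchanged, but the
# range of reliably detectable dimensions shrinks.
```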
December 12, 2025 at 5:03 PM
Thanks! We use cross-validation and cross-subject analyses to address this. The effects we’re looking at generalize to held-out test data.
December 12, 2025 at 4:42 PM
First, I think it’s an open question whether we should expect low-D representations for task purposes in general. Second, I think what Raj had in mind is that some tasks only require us to attend to a subset of features.
December 12, 2025 at 4:30 PM
It’s still an open question whether you could explain these representations with a lower-dimensional nonlinear manifold. My hunch is there is no such simple manifold. But if anyone has suggestions for nonlinear methods to try, let us know! One challenge is that we need it to be cross-validated.
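For concreteness, one candidate check might look like this (a suggestion, not something we've settled on; the data array is a placeholder): fit a nonlinear embedding on training stimuli and ask whether held-out responses are well reconstructed from only a few components.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split

responses = np.random.default_rng(2).standard_normal((1000, 200))  # placeholder
train, test = train_test_split(responses, test_size=0.2, random_state=0)

kpca = KernelPCA(n_components=10, kernel='rbf', fit_inverse_transform=True)
kpca.fit(train)
recon = kpca.inverse_transform(kpca.transform(test))
frac_unexplained = np.mean((test - recon) ** 2) / test.var()
# A genuinely low-D nonlinear manifold would give low held-out
# reconstruction error at small n_components; my hunch is that no small
# n_components achieves this for cortical data.
```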
December 12, 2025 at 2:14 AM
Yes, it’s generally thought that dimensionality governs a trade-off between robustness and expressivity. It’s possible that scale-free representations strike a balance between these two competing desiderata.
December 12, 2025 at 1:56 AM
Agreed!
December 11, 2025 at 9:37 PM
Our work demonstrates that fully understanding human brain representations requires a high-dimensional statistical approach—otherwise, we're just seeing the tip of the iceberg!
December 11, 2025 at 3:32 PM
Why did so many previous studies report low dimensionality? 1. High-quality neural datasets are finally large enough to probe representations beyond just tens of dimensions! 2. Standard methods in cognitive neuroscience are insensitive to low-variance—but meaningful—dimensions.
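Here is the gist of the kind of method that can recover those low-variance dimensions: cross-validated PCA across stimulus repeats (a sketch of the general idea, with hypothetical shapes, not our exact implementation).

```python
import numpy as np

def cv_pca_spectrum(rep1, rep2):
    """Cross-validated variance spectrum from two repeats of responses to
    the same stimuli (each n_stimuli x n_units). Components are estimated
    from the first repeat; the covariance between the two repeats'
    projections averages out trial-unique noise, so even low-variance
    dimensions that replicate across repeats register as reliable signal."""
    rep1 = rep1 - rep1.mean(axis=0)
    rep2 = rep2 - rep2.mean(axis=0)
    _, _, vt = np.linalg.svd(rep1, full_matrices=False)
    proj1, proj2 = rep1 @ vt.T, rep2 @ vt.T
    return np.mean(proj1 * proj2, axis=0)
```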
December 11, 2025 at 3:32 PM