Martin Schrimpf
@mschrimpf.bsky.social
2.7K followers 62 following 55 posts
NeuroAI Prof @EPFL 🇨🇭. ML + Neuro 🤖🧠. Brain-Score, CORnet, Vision, Language. Previously: PhD @MIT, ML @Salesforce, Neuro @HarvardMed, & co-founder @Integreat. go.epfl.ch/NeuroAI
mschrimpf.bsky.social
A glimpse at what #NeuroAI brain models might enable: a topographic vision model predicts stimulation patterns that steer complex object recognition behavior in primates. This could be a key 'software' component for visual prosthetic hardware 🧠🤖🧪
Reposted by Martin Schrimpf
hannesmehrer.bsky.social
🧠 New preprint: we show that model-guided microstimulation can steer monkey visual behavior.

Paper: arxiv.org/abs/2510.03684

🧵
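As a rough illustration of the core idea, here is a hedged sketch, not the paper's actual pipeline: the model, shapes, and hyperparameters below are invented stand-ins. It treats a small differentiable network as the "brain" and gradient-optimizes an additive stimulation pattern at an intermediate layer so the behavioral readout shifts toward a target object category.

```python
# Toy sketch of model-guided stimulation steering behavior.
# Everything here (architecture, sizes, penalty weight) is a placeholder.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "cortical" model: image -> stimulable layer -> behavior readout.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 256), nn.ReLU())
readout = nn.Linear(256, 10)  # 10 object categories

image = torch.randn(1, 1, 32, 32)               # fixed visual input
stim = torch.zeros(1, 256, requires_grad=True)  # stimulation pattern to optimize
target_category = 3

opt = torch.optim.Adam([stim], lr=0.05)
for step in range(200):
    activity = encoder(image) + stim  # model microstimulation as additive current
    logits = readout(activity)
    # Push behavior toward the target category, penalizing strong stimulation.
    loss = -logits[0, target_category] + 0.1 * stim.pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("predicted choice after stimulation:",
      readout(encoder(image) + stim).argmax().item())
```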
mschrimpf.bsky.social
Just to support Sam's argument here: there is indeed a lot of evidence across several domains, such as vision and language, that ML models develop representations similar to those in the human brain. There are of course many differences, but at a certain level of abstraction there is a surprising convergence.
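One standard way this kind of model-brain convergence is quantified is representational similarity analysis (RSA). Here is a generic, self-contained sketch with random placeholder data, not tied to any particular study in this thread:

```python
# Generic RSA sketch: compare representational geometry of a model
# and a brain recording over the same stimuli. Data here is random.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 50
model_features = rng.normal(size=(n_stimuli, 512))   # model activations per stimulus
neural_responses = rng.normal(size=(n_stimuli, 96))  # recorded responses per stimulus

# Representational dissimilarity matrices (condensed form):
# pairwise correlation distance across stimuli, one per system.
model_rdm = pdist(model_features, metric="correlation")
neural_rdm = pdist(neural_responses, metric="correlation")

# Alignment score: rank correlation between the two RDMs.
rho, _ = spearmanr(model_rdm, neural_rdm)
print(f"model-brain RSA score: {rho:.3f}")
```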
mschrimpf.bsky.social
More precisely, we would categorize it as a brain-based disorder, but now I'm curious whether you would be on board with that?
mschrimpf.bsky.social
You're right, and I apologize for the imprecise phrasing. I wanted to connect to the usual "brain in health and disease" framing, for which we developed some first tools based on the learning disorder dyslexia. We are hopeful that these tools will be applicable to diseases of brain function.
Reposted by Martin Schrimpf
hannesmehrer.bsky.social
Very happy to be part of this project: Melika Honarmand has done a great job of using vision-language models to predict the behavior of people with dyslexia. A first step toward modeling various disease states using artificial neural networks.
mschrimpf.bsky.social
I've been arguing that #NeuroAI should model the brain in health *and* in disease -- very excited to share a first step from Melika Honarmand: inducing dyslexia in vision-language models via targeted perturbations of visual-word-form units (analogous to the human VWFA) 🧠🤖🧪 arxiv.org/abs/2509.24597
mschrimpf.bsky.social
We're super excited about this approach: localizing model analogues of hypothesized neural causes in the brain and testing their downstream behavioral effects is applicable much more broadly in a variety of other contexts!
mschrimpf.bsky.social
Digging deeper into the ablated model, we found that its behavioral patterns mirror phonological deficits of dyslexic humans, without a significant deficit in orthographic processing. This connects to experimental work suggesting that phonological and orthographic deficits have distinct origins.
mschrimpf.bsky.social
It turns out that the ablation of these units has a very specific effect: it reduces reading performance to dyslexia levels *but* keeps visual reasoning performance intact. This does not happen with random units, so localization is key.
mschrimpf.bsky.social
We achieve this via the localization and subsequent ablation of units that are "visual-word-form selective", i.e., units that are more active for the visual presentation of words than for other images. After ablating the units, we test the effect on behavior in benchmarks covering reading and other control tasks.
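A minimal sketch of this localize-then-ablate logic as described in the thread; the layer, unit counts, and stimuli below are invented placeholders, not the paper's actual model or data:

```python
# Sketch: localize word-selective units, then ablate them via a forward hook.
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(128, 512)          # stand-in for one layer of a larger model

word_images = torch.randn(100, 128)  # features of visually presented words
other_images = torch.randn(100, 128) # features of non-word control images

with torch.no_grad():
    word_act = layer(word_images).mean(0)
    other_act = layer(other_images).mean(0)

# Localization: units most selective for words over other images.
selectivity = word_act - other_act
vwf_units = selectivity.topk(k=50).indices

# Control condition: an equally sized random set of units.
random_units = torch.randperm(512)[:50]

def ablate(units):
    """Zero out the chosen units on every forward pass."""
    def hook(module, inputs, output):
        output[:, units] = 0.0
        return output
    return layer.register_forward_hook(hook)

handle = ablate(vwf_units)  # swap in random_units for the control run
# ... run reading and visual-reasoning benchmarks here ...
handle.remove()
```

Running the same benchmarks once with `vwf_units` and once with `random_units` ablated is what separates a targeted deficit from a generic loss of capacity.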
Reposted by Martin Schrimpf
mbeyeler.bsky.social
👁️🧠 New preprint: We demonstrate the first data-driven neural control framework for a visual cortical implant in a blind human!

TL;DR Deep learning lets us synthesize efficient stimulation patterns that reliably evoke percepts, outperforming conventional calibration.

www.biorxiv.org/content/10.1...
Diagram showing three ways to control brain activity with a visual prosthesis. The goal is to match a desired pattern of brain responses. One method uses a simple one-to-one mapping, another uses an inverse neural network, and a third uses gradient optimization. Each method produces a stimulation pattern, which is tested in both computer simulations and in the brain of a blind participant with an implant. The figure shows that the neural network and gradient methods reproduce the target brain activity more accurately than the simple mapping.
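The "gradient optimization" route in the diagram can be illustrated with a toy sketch, assuming a differentiable forward model that maps electrode stimulation to predicted neural responses; all modules, shapes, and limits below are invented placeholders, not the preprint's actual models or data:

```python
# Toy sketch: invert a forward model by gradient descent on the stimulation.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_electrodes, n_sites = 16, 64

# Forward model (in practice this would be fit to calibration data).
forward_model = nn.Sequential(nn.Linear(n_electrodes, 128), nn.ReLU(),
                              nn.Linear(128, n_sites))

target_response = torch.randn(n_sites)  # desired pattern of neural activity
stim = torch.zeros(n_electrodes, requires_grad=True)

opt = torch.optim.Adam([stim], lr=0.05)
for _ in range(500):
    predicted = forward_model(stim)
    loss = (predicted - target_response).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        stim.clamp_(0.0, 1.0)  # placeholder for hardware stimulation limits

print(f"final response mismatch: {loss.item():.4f}")
```

The inverse-neural-network route in the diagram would instead train a second network to map desired responses directly to stimulation, trading per-trial optimization time for an upfront training cost.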
Reposted by Martin Schrimpf
icepfl.bsky.social
EPFL, ETH Zurich & CSCS just released Apertus, Switzerland’s first fully open-source large language model.
Trained on 15T tokens in 1,000+ languages, it’s built for transparency, responsibility & the public good.

Read more: actu.epfl.ch/news/apertus...
Reposted by Martin Schrimpf
epfl-brainmind.bsky.social
Action potential 👉 3 faculty opportunities to join EPFL neuroscience: 1. Tenure Track Assistant Professor in Neuroscience go.epfl.ch/neurofaculty, 2. Tenure Track Assistant Professor in Life Sciences Engineering, or 3. Associate Professor (tenured) in Life Sciences Engineering go.epfl.ch/LSEfaculty
Reposted by Martin Schrimpf
eringrant.me
Our #CCN2025 GAC debate w/ @gretatuckute.bsky.social, Gemma Roig (www.cvai.cs.uni-frankfurt.de), Jacqueline Gottlieb (gottlieblab.com), Klaus Oberauer, @mschrimpf.bsky.social & @brittawestner.bsky.social asks:

📊 What benchmarks are useful for cognitive science? 💭
2025.ccneuro.org/gac
Speakers and organizers of the GAC debate. Time and location of the GAC debate: 5 PM in Room C1.03.
mschrimpf.bsky.social
As part of #CCN2025, our satellite event on Monday will explore how we can model the brain as a physical system, from topography to biophysical detail -- and how such models can potentially lead to impactful applications: neuroailab.github.io/modeling-the-physical-brain. Join us! 🧪🧠🤖
mschrimpf.bsky.social
this is all to say: I think it is very cool that the idea of "diverse representations driven by a unified objective" is coming to fruition, and I find the consistently high performance and alignment of powerful video models strong support for it
mschrimpf.bsky.social
which enables a fine-grained mapping of cortical space with a new multi-task relevance analysis; the accurate (R~0.5) prediction of second-by-second human brain activity, which makes us more confident in the characterization of action-understanding pathways; and a couple more
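For context, per-voxel correlation numbers like R~0.5 typically come from an encoding-model evaluation along these lines; this is a generic sketch with made-up shapes and random data, not the study's actual pipeline:

```python
# Generic encoding-model sketch: predict per-second brain activity
# from video-model features, score with per-voxel Pearson r.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 600, 256, 1000
video_features = rng.normal(size=(n_timepoints, n_features))  # model activations per second
brain_activity = rng.normal(size=(n_timepoints, n_voxels))    # fMRI responses per second

train, test = slice(0, 500), slice(500, 600)
model = Ridge(alpha=10.0).fit(video_features[train], brain_activity[train])
pred = model.predict(video_features[test])

def pearson(a, b):
    """Per-column Pearson correlation between two (time x voxel) arrays."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

r = pearson(pred, brain_activity[test])
print(f"median voxel r: {np.median(r):.3f}")
```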
mschrimpf.bsky.social
The mouse work is definitely relevant, and we will make sure to reference it (apologies for the oversight). I do think there are substantial novelties that have only been made possible by more recent, powerful video models: the tight relation to behavior and a variety of tasks,