Andreas Tolias
@andreastolias.bsky.social
Stanford Professor | NeuroAI Scientist | Entrepreneur working at the intersection of neuroscience, AI, and neurotechnology to decode intelligence @ enigmaproject.ai
Building on Lurz et al., our new Wang et al. study examines movie-data performance as a function of training-set size and compares scaling for Conv-LSTM vs. CvT (convolutional vision transformer)-LSTM. Details: www.nature.com/articles/s41...
April 19, 2025 at 1:21 PM
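For readers curious what such a scaling comparison involves in practice, here is a minimal sketch (synthetic data and toy stand-in models, not the Wang et al. pipeline): train the same kind of predictor on increasing fractions of the data and track held-out prediction correlation for each architecture tag.

```python
# Minimal sketch (synthetic data, toy models; not the Wang et al. pipeline):
# sweep training-set size and compare held-out prediction correlation for two
# hypothetical architecture tags.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-ins for stimulus frames (1 x 36 x 64) and responses of 50 neurons.
X = torch.randn(2000, 1, 36, 64)
Y = torch.randn(2000, 50).abs()            # non-negative "firing rates"

def make_model(kind: str) -> nn.Module:
    # Toy stand-ins only; the real comparison swaps in Conv-LSTM vs. CvT-LSTM cores.
    ch = 8 if kind == "conv_lstm" else 12
    return nn.Sequential(nn.Conv2d(1, ch, 5, padding=2), nn.ELU(),
                         nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                         nn.Linear(ch * 16, 50), nn.Softplus())

def mean_corr(pred, target):
    p, t = pred - pred.mean(0), target - target.mean(0)
    return ((p * t).mean(0) / (p.std(0) * t.std(0) + 1e-8)).mean().item()

for frac in (0.1, 0.25, 0.5, 1.0):                      # training-set size sweep
    n = int(frac * 1500)
    for kind in ("conv_lstm", "cvt_lstm"):
        model = make_model(kind)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(50):                              # a few Poisson-loss steps
            rate = model(X[:n]) + 1e-8
            loss = (rate - Y[:n] * torch.log(rate)).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            score = mean_corr(model(X[1500:]), Y[1500:])
        print(f"{kind:10s} frac={frac:.2f}  held-out corr={score:.3f}")
```

The Poisson-style loss and per-neuron correlation are standard choices for neural response prediction; in the actual study the two tags would correspond to the real recurrent cores.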
Reposted by Andreas Tolias
In Lurz et al., ICLR 2021 we did quite some analysis on scaling and generalization across animals in the context of visual response prediction (incl. behavioral modulation) with @sinzlab.bsky.social and @andreastolias.bsky.social: openreview.net/forum?id=Tp7...
Generalization in data-driven models of primary visual cortex
Deep neural networks (DNN) have set new standards at predicting responses of neural populations to visual input. Most such DNNs consist of a convolutional network (core) shared across all neurons...
openreview.net
April 18, 2025 at 1:40 PM
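The "convolutional core shared across all neurons" idea from the abstract above can be sketched in a few lines; everything here (layer sizes, `SharedCore`, `NeuronReadout`) is an illustrative stand-in rather than the Lurz et al. architecture.

```python
# Minimal sketch of the shared-core + per-neuron-readout design (illustrative only).
import torch
import torch.nn as nn

class SharedCore(nn.Module):
    """Convolutional feature extractor shared by all neurons (and transferable across animals)."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 9, padding=4), nn.ELU(),
            nn.Conv2d(channels, channels, 7, padding=3), nn.ELU(),
        )
    def forward(self, x):
        return self.net(x)

class NeuronReadout(nn.Module):
    """Animal-specific readout: one spatial mask and one feature-weight vector per neuron."""
    def __init__(self, channels, height, width, n_neurons):
        super().__init__()
        self.spatial = nn.Parameter(torch.randn(n_neurons, height, width) * 0.01)
        self.features = nn.Parameter(torch.randn(n_neurons, channels) * 0.01)
    def forward(self, feat):                               # feat: (batch, C, H, W)
        pooled = torch.einsum("bchw,nhw->bnc", feat, self.spatial)
        return nn.functional.softplus((pooled * self.features).sum(-1))

core = SharedCore()
readout = NeuronReadout(channels=32, height=36, width=64, n_neurons=1000)
images = torch.randn(8, 1, 36, 64)                         # a batch of stimuli
rates = readout(core(images))                              # predicted rates: (8, 1000)
print(rates.shape)
```

The point of the split is that the core can be trained on (or transferred across) many animals, while only the lightweight per-neuron readouts need to be fit for a new recording.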
We didn't know the optimal patterns driving mouse V1 neurons until the deep learning model by Walker et al. (2019). FYI: Unlike mice, Gabors actually describe macaque V1 neurons quite well (Fu et al., Cell Reports).
April 13, 2025 at 7:11 PM
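The "optimal patterns" referred to here are typically synthesized by gradient ascent on a trained predictive model; a rough sketch of that idea follows, using an untrained stand-in model rather than the published pipeline.

```python
# Sketch of most-exciting-input (MEI) synthesis by gradient ascent on a predictive
# model. `model` is a random stand-in; in practice it would be the fitted digital
# twin, and the norm/contrast constraints and preprocessing matter a lot.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(1, 16, 7, padding=3), nn.ELU(),
                      nn.Flatten(), nn.Linear(16 * 36 * 64, 100))  # 100 "neurons"

def synthesize_mei(model, neuron_idx, shape=(1, 1, 36, 64), steps=200, lr=1.0, max_norm=10.0):
    img = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.SGD([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        activation = model(img)[0, neuron_idx]
        (-activation).backward()              # ascend the neuron's predicted response
        opt.step()
        with torch.no_grad():                 # keep the image within a fixed contrast budget
            norm = img.norm()
            if norm > max_norm:
                img.mul_(max_norm / norm)
    return img.detach()

mei = synthesize_mei(model, neuron_idx=0)
print(mei.shape, float(model(mei)[0, 0]))
```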
Reposted by Andreas Tolias
Join me, @andreastolias.bsky.social, and many of the incredible MICrONS team members in an AI-driven approach to neuroscience discovery

Apply here: www.linkedin.com/jobs/view/42...

Or email us at [email protected]
Enigma hiring Research Engineer: Multi-Modal Modeling in Stanford, CA | LinkedIn
The modeling team at Enigma is seeking ML Research Engineers to build and scale the next generation…
www.linkedin.com
April 13, 2025 at 6:48 PM
3/3 The core strength of our approach is robust prediction of neural responses to novel visual stimulus domains. Dyer's autoregressive approach generates latent embeddings for neural decoding: an entirely different architectural paradigm with different scientific objectives.
April 13, 2025 at 1:57 PM
2/3 However, this is not the main point: these models serve fundamentally different purposes. Ours explicitly predicts neural responses to visual stimuli (an encoding model), creating functional digital twins.
April 13, 2025 at 1:57 PM
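Schematically, the distinction drawn in this thread shows up already in the models' input/output signatures; the function names and shapes below are hypothetical, just to make the contrast concrete.

```python
# Schematic I/O contrast (hypothetical signatures, not either group's actual API):
# an encoding model / digital twin maps stimuli (plus behavior) to predicted neural
# responses, whereas a decoding-style autoregressive model maps recorded neural
# activity to latent embeddings used for downstream decoding.
from torch import Tensor

def encoding_model(video: Tensor, behavior: Tensor) -> Tensor:
    """(time, H, W), (time, n_behavior) -> predicted responses (time, n_neurons)."""
    ...

def autoregressive_latent_model(spikes: Tensor) -> Tensor:
    """(time, n_neurons) -> latent embeddings (time, d_latent) for decoding."""
    ...
```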
1/3 Just for clarification, our foundation model was introduced on March 21, 2023, predating Dyer et al. by over six months.
www.biorxiv.org/content/bior...
www.biorxiv.org
April 13, 2025 at 1:57 PM
8/8 Deep learning simulation enables systematic representational-level characterization, though detailed mechanistic understanding at the level of circuits and cell types remains beyond current capabilities in cortex.
April 13, 2025 at 1:54 PM
7/8 and characterization of the feature landscape of mouse visual cortex (Tong et al., bioRxiv 2023)—just a few examples of their applications. Most importantly, they yield in silico predictions which are subsequently verified through experimental testing.
April 13, 2025 at 1:54 PM
6/8 Predictive models have also enabled systematic characterization of single-neuron invariance properties (Ding et al., bioRxiv 2023), center-surround interactions (Fu et al., bioRxiv 2023), color-opponency mechanisms (Höfling et al., eLife 2024),
April 13, 2025 at 1:54 PM
5/8 Our models also revealed that mouse V1 neurons shift their selectivity toward UV when pupil dilation or running begins, despite maintaining stable spatial stimulus structure—discovered in the digital twin and validated experimentally in closed-loop studies (Franke et al., Nature 2022).
April 13, 2025 at 1:54 PM
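An in silico experiment of that flavor can be sketched as follows: present identical UV- and green-channel stimuli to a behavior-conditioned twin under a "quiet" versus a "running, dilated-pupil" state and compare predicted responses. All components below (the toy `BehaviorConditionedTwin`, the two-number state encoding) are illustrative stand-ins, not the Franke et al. setup.

```python
# Illustrative in-silico state comparison with a behavior-conditioned stand-in model:
# present identical UV and green stimuli under two behavioral states and compare
# the model's predicted responses. Not the Franke et al. pipeline.
import torch
import torch.nn as nn

torch.manual_seed(0)

class BehaviorConditionedTwin(nn.Module):
    """Toy twin: image features are gain-modulated by a behavioral state vector."""
    def __init__(self, n_neurons=200):
        super().__init__()
        self.core = nn.Sequential(nn.Conv2d(2, 16, 7, padding=3), nn.ELU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.gain = nn.Linear(2, 16)          # behavior: (pupil, running) -> feature gains
        self.readout = nn.Linear(16, n_neurons)
    def forward(self, img, behavior):
        feat = self.core(img) * torch.sigmoid(self.gain(behavior))
        return nn.functional.softplus(self.readout(feat))

twin = BehaviorConditionedTwin()
uv_stim    = torch.cat([torch.randn(32, 1, 36, 64), torch.zeros(32, 1, 36, 64)], dim=1)
green_stim = torch.cat([torch.zeros(32, 1, 36, 64), torch.randn(32, 1, 36, 64)], dim=1)
quiet   = torch.tensor([[0.2, 0.0]]).expand(32, 2)     # small pupil, not running
aroused = torch.tensor([[0.9, 1.0]]).expand(32, 2)     # dilated pupil, running

with torch.no_grad():
    for name, state in [("quiet", quiet), ("aroused", aroused)]:
        uv = twin(uv_stim, state).mean()
        gr = twin(green_stim, state).mean()
        print(f"{name:8s} UV/green response ratio: {float(uv / gr):.2f}")
```

With a fitted twin rather than this random stand-in, a state-dependent shift of the UV/green ratio is the kind of prediction that can then be tested in closed loop.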
4/8 For example, these simulations revealed that mouse V1 neurons exhibit complex spatial features deviating from the common notion that Gabor-like stimuli are optimal (Walker, Sinz et al., Nature Neuroscience 2019).
April 13, 2025 at 1:54 PM
3/8 When ANNs accurately simulate neural function, they facilitate 'mechanistic interpretability' (to borrow the AI term)—enabling rigorous representational-level analysis of neuronal tuning.
April 13, 2025 at 1:54 PM
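One concrete form this representational-level analysis takes is in silico physiology: probe every modeled neuron with parametric stimuli and read out tuning curves directly from the model. A minimal sketch with an untrained stand-in model:

```python
# Minimal sketch of representational analysis on a fitted predictive model:
# probe every modeled neuron with grating-like stimuli and extract an orientation
# tuning curve in silico. The model here is an untrained stand-in.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(1, 16, 9, padding=4), nn.ELU(),
                      nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                      nn.Linear(16 * 16, 300), nn.Softplus())     # 300 modeled neurons

def grating(theta, sf=0.1, size=(36, 64)):
    """Static sinusoidal grating at orientation theta (radians)."""
    y, x = np.mgrid[0:size[0], 0:size[1]]
    return np.sin(2 * np.pi * sf * (x * np.cos(theta) + y * np.sin(theta)))

orientations = np.linspace(0, np.pi, 12, endpoint=False)
stims = torch.tensor(np.stack([grating(t) for t in orientations]),
                     dtype=torch.float32).unsqueeze(1)            # (12, 1, 36, 64)

with torch.no_grad():
    tuning = model(stims)                                         # (12 orientations, 300 neurons)

preferred = orientations[tuning.argmax(dim=0).numpy()]            # preferred orientation per neuron
print(tuning.shape, np.degrees(preferred[:5]).round(1))
```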
2/8 Moreover, both task- and data-driven neural predictive models are powerful tools for gaining neuroscientific insights, as we and others have demonstrated repeatedly.
April 13, 2025 at 1:54 PM
1/8 This quote from our abstract refers to task-driven modeling approaches (e.g., Yamins, DiCarlo, et al.), which define computational objectives and reveal hidden representations that closely match brain activity, an approach widely recognized for deepening insights into brain computations.
April 13, 2025 at 1:54 PM
Huge thanks to @IARPAnews for funding this groundbreaking effort through the @BRAINinitiative, and to our amazing team at
@stanforduniversity.bsky.social @stanfordmedicine.bsky.social @BCM @Allen @Princeton @unigoettingen.bsky.social
#MICrONS #NeuroAI #Connectomics #FoundationModels #AI
April 10, 2025 at 11:46 PM
Foundation models offer a powerful way to systematically decode the neural code of natural intelligence, bridging the gap between brain structure and function.
April 10, 2025 at 11:46 PM
Instead, they preferentially connect based on shared functional tuning, choosing partners with similar feature selectivity (“what”) rather than merely receptive field overlap (“where”).
April 10, 2025 at 11:46 PM