Andreas Tolias
@andreastolias.bsky.social
730 followers 160 following 27 posts
Stanford Professor | NeuroAI Scientist | Entrepreneur working at the intersection of neuroscience, AI, and neurotechnology to decode intelligence @ enigmaproject.ai
andreastolias.bsky.social
Deeply grateful to the @simonsfoundation.org for launching SCENE and thrilled to join this 10-year journey into ecological neuroscience—unraveling how sensory and motor systems interact. Excited to collaborate with an incredible team of theorists and experimentalists working across species!
simonsfoundation.org
We are excited to announce our new Simons Collaboration on Ecological Neuroscience (SCENE)! This program will unite experts in experimental and computational #neuroscience approaches to investigate how the brain represents sensorimotor interactions. www.simonsfoundation.org/2025/04/24/s... #science
Simons Foundation Launches Collaboration on Ecological Neuroscience
www.simonsfoundation.org
andreastolias.bsky.social
Building on Lurz et al., our new Wang et al. study examines movie-data performance as a function of training-set size and compares scaling behavior for Conv-LSTM vs. CvT (convolutional vision transformer)-LSTM architectures. Details: www.nature.com/articles/s41...
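Comparing how architectures scale with training data typically means fitting a scaling curve to performance measurements. Here is a minimal sketch of a power-law fit in log-log space; the numbers are invented for illustration and are not from the paper.

```python
# Sketch: fitting a power-law scaling curve (performance ≈ a * N^b) to
# predictive performance vs. training-set size. All numbers are made up
# for illustration; they are not results from Wang et al.
import numpy as np

# Hypothetical mean test correlation at increasing training-set sizes
train_sizes = np.array([1_000, 2_000, 4_000, 8_000, 16_000])
performance = np.array([0.21, 0.26, 0.31, 0.36, 0.41])

# A power law is linear in log-log space, so use linear least squares
log_n, log_p = np.log(train_sizes), np.log(performance)
b, log_a = np.polyfit(log_n, log_p, 1)   # slope = exponent, intercept = log(a)
a = np.exp(log_a)

print(f"scaling exponent b ≈ {b:.3f}")
print(f"extrapolated performance at N=32k: {a * 32_000 ** b:.3f}")
```

Comparing the fitted exponent `b` across architectures is one simple way to quantify which model benefits more from additional data.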
andreastolias.bsky.social
A super exciting paper by @aecker.bsky.social and Marissa Weis, part of the #MICrONS package, deriving a set of principles to characterize the morphological diversity of excitatory neurons across cortical layers.
www.nature.com/immersive/d4...
andreastolias.bsky.social
We didn't know the optimal patterns driving mouse V1 neurons until the deep learning model of Walker et al. (2019). FYI: unlike in mice, Gabor filters describe macaque V1 neurons quite well (Fu et al., Cell Reports).
Reposted by Andreas Tolias
naturecomputes.bsky.social
MICrONS represents a huge step forward for the field. Big-data and AI will drive the next wave of discoveries in neuroscience
andreastolias.bsky.social
3/3 The core strength of our approach is robust prediction of neural responses to novel visual stimulus domains. Dyer's autoregressive approach generates latent embeddings for neural decoding, an entirely different architectural paradigm with different scientific objectives.
andreastolias.bsky.social
2/3 However, this is not the main point: these models serve fundamentally different purposes. Ours explicitly predicts neural responses to visual stimuli (an encoding model), creating functional digital twins.
andreastolias.bsky.social
1/3 Just for clarification: our foundation model was introduced on March 21st, 2023, predating Dyer et al. by over six months.
www.biorxiv.org/content/bior...
www.biorxiv.org
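The encoding-vs-decoding distinction in this thread can be sketched in a few lines. Everything below is a toy linear model with made-up dimensions, not either group's actual architecture: an encoding model maps stimulus to predicted responses, a decoding model maps responses back to stimulus.

```python
# Toy illustration of encoding vs. decoding models (linear, for clarity).
# None of this reflects the real architectures discussed in the thread.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_neurons = 64, 10

# Ground truth: each simulated neuron has a linear receptive field
rf = rng.normal(size=(n_neurons, n_pixels))

stimuli = rng.normal(size=(500, n_pixels))                       # visual input
responses = stimuli @ rf.T + 0.1 * rng.normal(size=(500, n_neurons))

# Encoding model: stimulus -> predicted neural responses
W_enc, *_ = np.linalg.lstsq(stimuli, responses, rcond=None)

# Decoding model: neural responses -> reconstructed stimulus
W_dec, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

test_stim = rng.normal(size=(1, n_pixels))
pred_resp = test_stim @ W_enc             # encoding output: neural activity
recon_stim = (test_stim @ rf.T) @ W_dec   # decoding output: a stimulus
print(pred_resp.shape, recon_stim.shape)  # (1, 10) (1, 64)
```

The two models answer different questions: the encoder predicts what neurons will do given a stimulus (the "digital twin" direction), while the decoder infers what the animal saw from recorded activity.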
andreastolias.bsky.social
8/8 Deep learning simulation enables systematic characterization at the representational level, though detailed mechanistic understanding at the circuit and cell-type level remains beyond current capabilities in cortex.
andreastolias.bsky.social
7/8 and characterization of the feature landscape of mouse visual cortex (Tong et al., bioRxiv 2023)—just a few examples of their applications. Most importantly, they yield in silico predictions which are subsequently verified through experimental testing.
andreastolias.bsky.social
6/8 Predictive models have also enabled systematic characterization of single-neuron invariance properties (Ding et al., bioRxiv 2023), center-surround interactions (Fu et al., bioRxiv 2023), color-opponency mechanisms (Höfling et al., eLife 2024),
andreastolias.bsky.social
5/8 Our models also revealed that mouse V1 neurons shift their selectivity toward UV when pupil dilation or running begins, despite maintaining stable spatial stimulus structure—discovered in the digital twin and validated experimentally in closed-loop studies (Franke et al., Nature 2022).
andreastolias.bsky.social
4/8 For example, these simulations revealed that mouse V1 neurons exhibit complex spatial features deviating from the common notion that Gabor-like stimuli are optimal (Walker, Sinz et al., Nature Neuroscience 2019).
andreastolias.bsky.social
3/8 When ANNs accurately simulate neural function, they facilitate 'mechanistic interpretability' (to borrow the AI term)—enabling rigorous representational-level analysis of neuronal tuning.
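One concrete way a trained predictive model enables tuning analysis is in-silico optimization: ascend the model's predicted response to find a most exciting input, in the spirit of Walker et al. (2019). The sketch below uses a toy linear-nonlinear unit with an analytic gradient, not an actual digital twin.

```python
# Sketch: finding a "most exciting input" for one model neuron by gradient
# ascent on its predicted response. The "model" is a toy tanh(w·x) unit,
# standing in for a trained deep predictive model.
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 256
w = rng.normal(size=n_pixels)
w /= np.linalg.norm(w)                    # toy neuron's learned filter

def response(x):
    return np.tanh(w @ x)                 # model-predicted activity

def grad(x):
    return (1 - np.tanh(w @ x) ** 2) * w  # analytic gradient of tanh(w·x)

# Gradient ascent under a fixed stimulus norm (a contrast budget)
x = rng.normal(size=n_pixels)
x /= np.linalg.norm(x)
for _ in range(200):
    x += 0.1 * grad(x)
    x /= np.linalg.norm(x)                # project back onto the unit sphere

# The optimized stimulus should align with the neuron's filter
alignment = (w @ x)                       # cosine similarity (both unit norm)
print(f"cosine similarity to filter: {alignment:.3f}")  # approaches 1
```

For a linear-nonlinear unit the answer is known in closed form (the optimal stimulus is the filter itself), which is what makes this a useful sanity check; for a deep digital twin the same optimization discovers non-obvious optimal stimuli that can then be verified in closed-loop experiments.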
andreastolias.bsky.social
2/8 Moreover, both task- and data-driven neural predictive models are powerful tools to gain neuroscientific insights as we and others have demonstrated repeatedly.
andreastolias.bsky.social
1/8 This quote from our abstract refers to task-driven modeling approaches (e.g., Yamins, DiCarlo, et al.), which define computational objectives and reveal hidden representations that closely match brain activity, and which are widely recognized for deepening insight into brain computations.
andreastolias.bsky.social
Huge thanks to @IARPAnews for funding this groundbreaking effort through the @BRAINinitiative, and to our amazing team at
@stanforduniversity.bsky.social @stanfordmedicine.bsky.social @BCM @Allen @Princeton @unigoettingen.bsky.social
#MICrONS #NeuroAI #Connectomics #FoundationModels #AI
andreastolias.bsky.social
Foundation models offer a powerful way to systematically decode the neural code of natural intelligence, bridging the gap between brain structure and function.
andreastolias.bsky.social
Instead, they preferentially connect based on shared functional tuning, choosing partners with similar feature selectivity (“what”) rather than merely receptive field overlap (“where”).