Mehdi Azabou
@mehdiazabou.bsky.social
510 followers · 54 following · 36 posts
Working on neuro-foundation models | Postdoc at Columbia | ML PhD, Georgia Tech | https://www.mehai.dev/
Reposted by Mehdi Azabou
kanakarajanphd.bsky.social
(1/8) New paper from our team!

Yu Duan & Hamza Chaudhry introduce POCO, a tool for predicting brain activity at the cellular & network level during spontaneous behavior.

Find out how we built POCO & how it changes neurobehavioral research 👇

arxiv.org/abs/2506.14957
mehdiazabou.bsky.social
🚨 The call for demos is still open; the deadline is tomorrow!

If you have a tool for visualizing large-scale data, pipelines for training foundation models, or BCI demos, we want to see it!

Submissions are only 500 words, and this is a great opportunity to showcase your work.
mehdiazabou.bsky.social
Excited to announce the Foundation Models for the Brain and Body workshop at #NeurIPS2025! 🧠📈 🧪

We invite short papers or interactive demos on AI for neural, physiological or behavioral data.

Submit by Aug 22 👉 brainbodyfm-workshop.github.io
mehdiazabou.bsky.social
POSSM shows that monkey-to-human transfer works 🤯
This could change how we train neuro-foundation models. Multi-species pretraining!

POYO x SSM = POSSM 🧠⚡
A big step toward real-time, generalizable brain decoders at scale.

🧠📈 🧪
averyryoo.bsky.social
New preprint! 🧠🤖

How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and even species?

We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes!

🧵1/7
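For readers curious what a "POYO x SSM" hybrid might look like in code, here is a purely illustrative sketch (not the POSSM implementation): a cross-attention tokenizer compresses each chunk of spike events into a few latent tokens, and a recurrent module (a GRU standing in for the SSM block) carries state across chunks so decoding can run in a streaming, real-time fashion. All module names, shapes, and the 2-D velocity readout are assumptions made for the example.

```python
import torch
import torch.nn as nn

class HybridSpikeDecoder(nn.Module):
    """Illustrative POYO-style tokenizer + recurrent backbone (not the POSSM code).

    Each chunk of spike events is compressed by cross-attention into a few latent
    tokens; a recurrent module (a GRU standing in for the SSM block) carries state
    across chunks, which is what makes streaming, real-time decoding possible.
    """

    def __init__(self, n_units=128, d_model=64, n_latents=8, out_dim=2):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d_model)            # one embedding per recorded unit
        self.latents = nn.Parameter(torch.randn(n_latents, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.rnn = nn.GRU(n_latents * d_model, d_model, batch_first=True)
        self.readout = nn.Linear(d_model, out_dim)                # e.g. 2-D cursor/hand velocity

    def forward(self, spike_unit_ids, state=None):
        # spike_unit_ids: (batch, n_chunks, spikes_per_chunk) integer unit indices
        b, t, s = spike_unit_ids.shape
        tokens = self.unit_emb(spike_unit_ids.reshape(b * t, s))      # (b*t, s, d_model)
        queries = self.latents.unsqueeze(0).expand(b * t, -1, -1)     # (b*t, n_latents, d_model)
        summary, _ = self.cross_attn(queries, tokens, tokens)         # compress each chunk
        summary = summary.reshape(b, t, -1)                           # (b, t, n_latents*d_model)
        hidden, state = self.rnn(summary, state)                      # state persists across calls
        return self.readout(hidden), state                            # per-chunk behaviour estimate
```

In a streaming setting you would call this chunk by chunk and feed the returned state back in, which is the property that makes the recurrent half attractive for real-time use.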
Reposted by Mehdi Azabou
jeanremiking.bsky.social
Our EEG Foundation Challenge, spanning more than 3,000 subjects, has been accepted at #NeurIPS 2025. Go check it out:
eeg2025.github.io

Led by B. Aristimunha, D. Truong, P. Guetschel, and S. Y. Shirazi!
EEG Challenge (2025)
From cross-task transfer to subject-invariant representation learning in EEG decoding
eeg2025.github.io
Reposted by Mehdi Azabou
colehurwitz.bsky.social
Neural Encoding and Decoding at Scale (NEDS) is now accepted to @icmlconf.bsky.social as a spotlight (top 2.6%)! 🧠 🧪
colehurwitz.bsky.social
Another step toward a foundation model of the mouse brain: "Neural Encoding and Decoding at Scale (NEDS)"

Trained on neural and behavioral data from 70+ mice, NEDS achieves state-of-the-art prediction of behavior (decoding) and neural responses (encoding) on held-out animals. 🐀
Reposted by Mehdi Azabou
tyrellturing.bsky.social
Check out our new paper at #ICLR2025, where we show that multi-task neural decoding is both possible and beneficial.

In addition, the latents of a model trained only on neural activity capture information about brain regions and cell types.

Step by step, we're gonna scale up, folks!

🧠📈 🧪 #NeuroAI
mehdiazabou.bsky.social
Scaling models across multiple animals was a major step toward building neuro-foundation models; the next frontier is enabling multi-task decoding to expand the scope of training data we can leverage.

Excited to share our #ICLR2025 Spotlight paper introducing POYO+ 🧠

poyo-plus.github.io

🧵
POYO+
POYO+: Multi-session, multi-task neural decoding from distinct cell-types and brain regions
poyo-plus.github.io
mehdiazabou.bsky.social
This work was done with amazing collaborators:
Krystal Pan, @vinam.bsky.social, Ian Knight, Eva Dyer, and @tyrellturing.bsky.social!
mehdiazabou.bsky.social
We find that the model’s latent representations carry meaningful information that reflects the anatomy and physiology of different regions and sub-types, even though the model was never given any information about these distinctions!
mehdiazabou.bsky.social
Our results show the benefits of scaling across multiple recordings and tasks. We also show that transfer to new datasets works really well, even when dealing with new tasks or new brain areas!
mehdiazabou.bsky.social
We train POYO+ on the @alleninstitute.bsky.social Brain Observatory dataset. That's 256 mice, 6 visual brain areas, and 13 genetically defined cellular sub-types.

This is 10x more data than POYO-1.
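If you want to poke at the same public data, here is a minimal sketch using the Allen SDK to enumerate the visual areas and Cre lines and pull dF/F calcium traces. This is just the standard allensdk access pattern, not the POYO+ data pipeline, and the manifest path is arbitrary.

```python
# Minimal sketch: browsing the Allen Brain Observatory with allensdk
# (standard data-access API, not the POYO+ training pipeline).
from allensdk.core.brain_observatory_cache import BrainObservatoryCache

boc = BrainObservatoryCache(manifest_file="brain_observatory/manifest.json")

print(boc.get_all_targeted_structures())  # the targeted visual areas
print(boc.get_all_cre_lines())            # the genetically defined Cre lines

# Pick experiment containers for one visual area
containers = boc.get_experiment_containers(targeted_structures=["VISp"])
exps = boc.get_ophys_experiments(experiment_container_ids=[containers[0]["id"]])

# Load dF/F calcium traces for one session (downloads the NWB file on first use)
data_set = boc.get_ophys_experiment_data(exps[0]["id"])
timestamps, dff = data_set.get_dff_traces()  # dff has shape (n_cells, n_timepoints)
```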
mehdiazabou.bsky.social
POYO+ adds support for regression, classification, and segmentation tasks. It can be trained on multiple tasks at the same time!

Decoding in POYO+ is query-based: the model can be queried for any number of tasks, and these tasks can differ depending on the context.
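A minimal sketch of the query-based idea (illustrative only, not the POYO+ code): a shared latent sequence is read out by cross-attending learned, task-specific query tokens onto it, so at inference you decode only the tasks you ask for. The task names and output dimensions below are made up for the example.

```python
import torch
import torch.nn as nn

class QueryDecoder(nn.Module):
    """Illustrative query-based multi-task readout (not the POYO+ code).

    A shared latent sequence is decoded by cross-attending task-specific query
    tokens onto it; only the requested tasks are decoded, and the set of tasks
    can change from one context (session, dataset) to another.
    """

    def __init__(self, task_out_dims, d_model=64):
        super().__init__()
        self.task_queries = nn.ParameterDict(
            {name: nn.Parameter(torch.randn(1, d_model)) for name in task_out_dims}
        )
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.heads = nn.ModuleDict(
            {name: nn.Linear(d_model, dim) for name, dim in task_out_dims.items()}
        )

    def forward(self, latents, tasks):
        # latents: (batch, seq_len, d_model) shared representation of the neural data
        outputs = {}
        for name in tasks:                                           # decode only what is asked for
            q = self.task_queries[name].unsqueeze(0).expand(latents.size(0), -1, -1)
            pooled, _ = self.cross_attn(q, latents, latents)         # (batch, 1, d_model)
            outputs[name] = self.heads[name](pooled.squeeze(1))      # task-specific readout
        return outputs

# Hypothetical tasks: a 2-D regression target and a 3-way classification target
decoder = QueryDecoder({"wrist_velocity": 2, "choice": 3})
out = decoder(torch.randn(8, 100, 64), tasks=["wrist_velocity"])     # query a single task
```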
mehdiazabou.bsky.social
POYO+ adds support for regularly sampled time-series data through a value projection layer. We use it on calcium traces!
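One way such a value projection could look (an assumption for illustration, not necessarily how POYO+ does it): each continuous sample becomes a token by adding a linear projection of its scalar value to an embedding of the unit it came from.

```python
import torch
import torch.nn as nn

class ValueProjection(nn.Module):
    """Illustrative value projection for continuous signals such as dF/F calcium
    traces (one possible design, not necessarily the POYO+ implementation)."""

    def __init__(self, n_units=500, d_model=64):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d_model)  # which neuron/ROI the sample came from
        self.value_proj = nn.Linear(1, d_model)         # maps each scalar sample into token space

    def forward(self, unit_ids, values):
        # unit_ids: (batch, n_tokens) int indices, values: (batch, n_tokens) float dF/F samples
        return self.unit_emb(unit_ids) + self.value_proj(values.unsqueeze(-1))

tokens = ValueProjection()(torch.randint(0, 500, (4, 256)), torch.randn(4, 256))
print(tokens.shape)  # torch.Size([4, 256, 64]), ready for the transformer encoder
```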
mehdiazabou.bsky.social
How is POYO+ different from POYO?
1. POYO+ is even more flexible: it supports more modalities and more tasks!
2. POYO+ is trained on 10x the data.
3. We provide analyses that reveal latent structure in the neural activity, modulated by brain areas, cell types, and tasks.
Reposted by Mehdi Azabou
tyrellturing.bsky.social
Interested in foundation models for #neuroscience? Want to contribute to the development of the next generation of multi-modal models? Come join us at IVADO in Montreal!

We're hiring a full-time machine learning specialist for this work.

Please share widely!

#NeuroAI 🧠📈 🧪
ivado.bsky.social
🔍 [Job Offer] #MachineLearning Specialist.

Join the IVADO Research Regroupement - AI and Neuroscience (R1) to develop foundation models for neuroscience.

More info: ivado.ca/2025/04/08/s...

#JobOffer #AI #Neuroscience #Research #MachineLearning
Machine Learning Specialist | IVADO
ivado.ca
Reposted by Mehdi Azabou
nandahkrishna.bsky.social
Really enjoyed TAing for this tutorial and had great discussions with several attendees. Do check out `torch_brain` and the other packages here:
github.com/neuro-galaxy
Reposted by Mehdi Azabou
nandahkrishna.bsky.social
Talk recordings from our COSYNE Workshop on Neuro-foundation Models 🌐🧠 are now up on the workshop website!

neurofm-workshop.github.io
Reposted by Mehdi Azabou
josueortc.bsky.social
Come to my talk today at #COSYNE2025, where I will discuss developing spatiotemporal models of brain dynamics.
mehdiazabou.bsky.social
If I missed any relevant models, please let me know!
mehdiazabou.bsky.social
Thanks to everyone who came to Day 1 of the Workshop!

I had fun making this plot for the opening talk. It's exciting to see the exponential growth in the amount of pretraining data 🚀

I compiled a list of neuro-foundation models for EPhys and OPhys: github.com/mazabou/awes...