Courtois Project on Neuronal Modelling
@cneuromod.ca
45 followers 14 following 10 posts
The Courtois Project on Neuronal Modelling (CNeuroMod) aims to train artificial neural networks to mimic extensive experimental data on individual human brain activity and behaviour.
cneuromod.ca
Importantly, in both cases, alignment collapsed on new game levels. This “demonstration ad absurdum” shows that meaningful brain alignment must be robust and generalize across conditions. 3/
Brain–model alignment scores for human participants playing Super Mario Bros. Models include (1) a conventional convolutional neural network trained on the task, and (2) the raw NES console memory. Both show comparable alignment on the training levels but a sharp drop to near-zero alignment on unseen levels, highlighting poor out-of-distribution generalization.
cneuromod.ca
We showed that the raw memory content of the NES console aligned with brain activity about as well as a convolutional neural network trained with RL.
And the brain is obviously not a NES. 2/
cneuromod.ca
Brain–AI alignment can reveal exciting similarities in representations.
Our Super Mario Bros. experiment demonstrates an important caveat: alignment can be very brittle. 1/🧵
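For a concrete sense of what “brain–model alignment” means in this thread, here is a minimal sketch, assuming a ridge-regression encoding model scored with voxel-wise correlations. The array shapes, variable names, and random data are illustrative assumptions, not the actual CNeuroMod Mario analysis pipeline.

```python
# Hedged sketch: fit an encoding model from a candidate representation
# (CNN activations, or raw NES RAM bytes) to fMRI signals, then compare
# alignment on seen vs. unseen game levels. All data here are random
# placeholders with assumed shapes.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# timepoints x features (model representation) and timepoints x voxels (brain)
X_seen, Y_seen = rng.standard_normal((1000, 256)), rng.standard_normal((1000, 500))
X_new,  Y_new  = rng.standard_normal((300, 256)),  rng.standard_normal((300, 500))

def alignment(X_fit, Y_fit, X_eval, Y_eval):
    """Mean voxel-wise Pearson r between predicted and measured activity."""
    model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-2, 4, 7)))
    model.fit(X_fit, Y_fit)
    Y_hat = model.predict(X_eval)
    r = [np.corrcoef(Y_hat[:, v], Y_eval[:, v])[0, 1] for v in range(Y_eval.shape[1])]
    return float(np.nanmean(r))

# Fit on half of the seen-level data, then score in-distribution (the other half)
# versus out-of-distribution (new levels); the drop on new levels is the
# brittleness the thread describes.
half = X_seen.shape[0] // 2
print("seen levels:  ", alignment(X_seen[:half], Y_seen[:half], X_seen[half:], Y_seen[half:]))
print("unseen levels:", alignment(X_seen[:half], Y_seen[:half], X_new, Y_new))
```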
Reposted by Courtois Project on Neuronal Modelling
martinhebart.bsky.social
Very much looking forward to #CCN2025! Would love to see you at our lab's talks and posters, and meet me at the panel discussion in the Algonauts session on Wednesday!
Reposted by Courtois Project on Neuronal Modelling
cneuromod.ca
In 2019, the CNeuroMod team and 6 participants began a massive data collection journey: twice-weekly MRI scans for most of 5 years. Data collection is now complete! 1/🧵
Poster titled "Neuromod: The Courtois Project on Neuronal Modelling" with logos from Université de Montréal and the Centre de recherche de l'Institut universitaire de gériatrie de Montréal.

Large bold text reads:
6 BRAINS – 987H-fMRI – 18 TASKS
Followed by the subtitle:
Naturalistic & Controlled – Multimodal / Perception + Action
Each letter in "18 TASKS" contains thumbnails from various visual tasks.

The central table summarizes 32 datasets grouped by primary domain (Vision, Audition, Language, Memory, Action, Other). For each dataset, the table indicates which stimulus modalities were used (Vision, Speech, Audio, Motion), what responses were collected (Physiology, Eye tracking, Explanations, Actions), and how many sessions and subjects were scanned. The overall visual style is playful and bold, with rainbow colors for modality types and rich iconography indicating data types.
cneuromod.ca
Meet us at @cogcompneuro.bsky.social in Amsterdam! The team will be there, including @jaboyle.bsky.social (project lead), Basile Pinsard (data manager), and @lune-bellec.bsky.social (founder). #CCN2025
cneuromod.ca
The result? Nearly 1000 hours of task fMRI from 6 subjects (5 for most datasets). We're releasing experiments gradually, focusing on quality checks and well-documented derivatives for reuse. 5 of the 6 participants made their data fully open (CC0), thanks to @conp-pcno.bsky.social
cneuromod.ca
Pair it with ~75 h of video-watching fMRI per subject used in the 2025 @algonautsproject.bsky.social competition for a one-two controlled-naturalistic punch in modelling individual human vision. Huge thanks to the THINGS Initiative, the Courtois Foundation, our dedicated participants, and the CNeuroMod crew.
cneuromod.ca
New CNeuroMod-THINGS open-access fMRI dataset: 4 participants · ~4,000 images (720 categories), each shown 3× (12k trials per subject) · individual functional localizers & NSD-inspired QC. Preprint: arxiv.org/abs/2507.09024 Congrats to Marie St-Laurent and @martinhebart.bsky.social !!
Four brain maps showing noise-ceiling estimates in response to image presentation.
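The “NSD-inspired QC” and the noise-ceiling maps above lend themselves to a short worked example. Below is a minimal sketch, under assumed array shapes (images × repetitions × voxels), of one common NSD-style noise-ceiling estimate; it is not the released CNeuroMod-THINGS code.

```python
# Hedged sketch of an NSD-style noise ceiling from repeated image presentations.
# With each image shown n_rep times, variance across repetitions estimates noise,
# and the remaining trial-to-trial variance estimates signal. Shapes and data
# are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_rep, n_voxels = 4000, 3, 1000   # per-subject design described in the post

betas = rng.standard_normal((n_images, n_rep, n_voxels))  # single-trial responses

# Noise variance: average over images of the variance across repetitions.
var_noise = betas.var(axis=1, ddof=1).mean(axis=0)

# Signal variance: total single-trial variance minus noise variance, floored at 0.
var_total = betas.reshape(-1, n_voxels).var(axis=0, ddof=1)
var_signal = np.clip(var_total - var_noise, 0.0, None)

# Noise-ceiling SNR, and the noise ceiling (% of explainable variance) when model
# predictions are compared against the average of n_rep repetitions.
ncsnr = np.sqrt(var_signal / var_noise)
noise_ceiling = 100 * ncsnr**2 / (ncsnr**2 + 1.0 / n_rep)

print(round(noise_ceiling.mean(), 2))  # ~0 for these pure-noise placeholders
```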
Reposted by Courtois Project on Neuronal Modelling
jennhu.bsky.social
Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025! 📣

How can we interpret the algorithms and representations underlying complex behavior in deep learning models?

🌐 coginterp.github.io/neurips2025/

1/4
First Workshop on Interpreting Cognition in Deep Learning Models (NeurIPS 2025)
coginterp.github.io
Reposted by Courtois Project on Neuronal Modelling
lune-bellec.bsky.social
Can one hundred MRI scans be linked to hearing loss? The case of the Courtois NeuroMod project 🧠🎧👇🧵
Reposted by Courtois Project on Neuronal Modelling
lune-bellec.bsky.social
🧠📊 Just out: a deeply sampled structural brain and spine dataset from the cneuromod.ca team, now published in Imaging Neuroscience, led by Mathieu Boudreau and Julien Cohen-Adad. 📰 Paper here: lnkd.in/ewXA3dED 1/🧵
The CNeuroMod longitudinal brain and spine structural protocol