RJ Antonello
@rjantonello.bsky.social
49 followers 69 following 7 posts
Postdoc in the Mesgarani Lab. Studying how we can use AI to understand language processing in the brain.
Reposted by RJ Antonello
neuranna.bsky.social
As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So @neurotaha.bsky.social
built a library to easily compare design choices & model features across datasets!

We hope it will be useful to the community & plan to keep expanding it!
1/
neurotaha.bsky.social
🚨 Paper alert:
To appear at the DBM NeurIPS Workshop

LITcoder: A General-Purpose Library for Building and Comparing Encoding Models

📄 arxiv: arxiv.org/abs/2509.091...
🔗 project: litcoder-brain.github.io
rjantonello.bsky.social
We think these QA models are an important step in bridging the gap between data-driven models of the brain and the easy-to-understand, but hard-to-encode, qualitative theories that guide our intuitions as neuroscientists. 5/6
rjantonello.bsky.social
More surprisingly, we find that the model places critical weight on some unexpected topics, like the presence of specialized or technical terminology, or words that describe events such as dialogue and direct speech quotations. 4/6
rjantonello.bsky.social
Our model naturally and automatically replicates many famous neuroscience results, in addition to opening the door to a few surprises. For instance, we observe selectivity for tactile sensation words in somatosensory areas, and selectivity for places in OPA, PPA, and RSC. 3/6
rjantonello.bsky.social
We show that our model outperforms less interpretable models built from the hidden states of LLMs, especially in low-data settings. Our model is so compact that it can be fully illustrated in a single figure! 2/6
rjantonello.bsky.social
In our new paper, we explore how we can build encoding models that are both powerful and understandable. Our model uses an LLM to answer 35 questions about a sentence's content. The answers linearly contribute to our prediction of how the brain will respond to that sentence. 1/6
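
A minimal sketch of the idea described in this post, assuming illustrative question wordings, binary yes/no answers, and ridge regression (the paper's exact questions, feature coding, and fitting procedure may differ):

```python
# Hedged sketch of a QA-based linear encoding model. `ask_llm` is a
# hypothetical stand-in for any LLM call; the questions shown are
# illustrative, not the paper's actual 35.
import numpy as np
from sklearn.linear_model import RidgeCV

QUESTIONS = [
    "Does the sentence mention a place?",
    "Does the sentence describe tactile sensation?",
    # ... 35 questions total in the paper; two shown for illustration
]

def qa_features(sentences, ask_llm):
    """Ask an LLM each question about each sentence; 1.0 = yes, 0.0 = no."""
    return np.array([[float(ask_llm(s, q)) for q in QUESTIONS]
                     for s in sentences])

def fit_encoding_model(X, Y):
    """X: (n_sentences, n_questions) QA answers.
    Y: (n_sentences, n_voxels) brain responses.
    Each voxel's prediction is a linear combination of the answers, so the
    fitted weights read directly as 'how much this question drives this voxel'."""
    model = RidgeCV(alphas=np.logspace(-2, 4, 10))
    model.fit(X, Y)
    return model  # model.coef_ has shape (n_voxels, n_questions)
```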
Reposted by RJ Antonello
alexanderhuth.bsky.social
New paper with @mujianing.bsky.social & @prestonlab.bsky.social! We propose a simple model for human memory of narratives: we uniformly sample incoming information at a constant rate. This explains behavioral data much better than variable-rate sampling triggered by event segmentation or surprisal.
biorxiv-neursci.bsky.social
Efficient uniform sampling explains non-uniform memory of narrative stories https://www.biorxiv.org/content/10.1101/2025.07.31.667952v1
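
A toy illustration of the constant-rate sampling account in the post above, with assumed parameter values and a simple samples-per-event memory rule (not the paper's fitted model): longer events collect more samples, so memory looks non-uniform across events even though sampling is uniform in time.

```python
# Toy sketch: uniform sampling at a constant rate yields non-uniform
# memory across events of different durations. Rate and durations are
# illustrative assumptions.
import numpy as np

def sampled_memory(event_durations_s, rate_hz=0.5):
    """Sample the incoming story uniformly at `rate_hz`; an event's memory
    strength is the number of samples that fall inside it."""
    story_len = sum(event_durations_s)
    sample_times = np.arange(0.0, story_len, 1.0 / rate_hz)
    boundaries = np.cumsum(event_durations_s)
    # Assign each sample to the event whose window contains it
    event_idx = np.searchsorted(boundaries, sample_times, side="right")
    return np.bincount(event_idx, minlength=len(event_durations_s))

print(sampled_memory([10, 40, 5, 25]))  # -> [ 5 20  3 12]: non-uniform memory
```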
Reposted by RJ Antonello
neuromdl.bsky.social
🚨Paper alert!🚨
TL;DR first: We used a pre-trained deep neural network to model fMRI data and to generate images predicted to elicit a large response for each of many different parts of the brain. We aggregate these into an awesome interactive brain viewer: piecesofmind.psyc.unr.edu/activation_m...
Cortex Feature Visualization
piecesofmind.psyc.unr.edu
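
For flavor, a generic activation-maximization sketch of the kind of pipeline described above, under the assumption that images are optimized by gradient ascent against a differentiable encoding model (the authors' actual models, priors, and optimizer may differ):

```python
# Assumed approach: gradient-ascend an image so a voxelwise encoding model
# predicts a large response. The sparsity prior and hyperparameters are
# illustrative, not taken from the paper.
import torch

def maximize_response(encoding_model, voxel_idx, steps=200, lr=0.05):
    """`encoding_model(image) -> (n_voxels,)` predicted fMRI responses.
    Returns an image optimized to drive voxel `voxel_idx`."""
    img = torch.zeros(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = encoding_model(img)             # predicted response per voxel
        loss = -pred[voxel_idx]                # ascend the target voxel
        loss = loss + 1e-3 * img.abs().mean()  # mild sparsity prior (assumed)
        loss.backward()
        opt.step()
    return img.detach()
```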
Reposted by RJ Antonello
gretatuckute.bsky.social
What are the organizing dimensions of language processing?

We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty and meaning abstractness, revealing an interpretable, topographic representational basis for language processing that is shared across individuals
Reposted by RJ Antonello
biorxiv-neursci.bsky.social
Stimulus dependencies, rather than next-word prediction, can explain pre-onset brain encoding during natural listening https://www.biorxiv.org/content/10.1101/2025.03.08.642140v1
Reposted by RJ Antonello
I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...
Technical Associate I, Kanwisher Lab
MIT - Technical Associate I, Kanwisher Lab - Cambridge MA 02139
careers.peopleclick.com
Reposted by RJ Antonello
bkhmsi.bsky.social
🚨 New Preprint!!

LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment—linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵
Reposted by RJ Antonello
evfedorenko.bsky.social
Just in time for the holidays! Some cool new evidence from @eghbal_hosseini for the idea of universal representations shared by high-performing ANNs and brains in two domains: language and vision! Go Eghbal!
rjantonello.bsky.social
Really excited to be at NeurIPS this week presenting our new encoding model scaling laws work! Be sure to check out our poster (#402) on Tuesday afternoon and our new code and model release, and feel free to DM me to chat!