Shrey Dixit
@shreydixit.bsky.social
47 followers 82 following 17 posts
Doctoral Researcher doing NeuroAI at the Max Planck Institute for Human Cognitive and Brain Sciences
Reposted by Shrey Dixit
nichome.bsky.social
🚨 New preprint! Impact of Task Similarity and Training Regimes on Cognitive Transfer and Interference 🧠

We compare humans and neural networks in a learning task, showing how training regime and task similarity interact to drive transfer or interference.

www.biorxiv.org/content/10.1...
Impact of Task Similarity and Training Regimes on Cognitive Transfer and Interference
Learning depends not only on the content of what we learn, but also on how we learn and on how experiences are structured over time. To investigate how task similarity and training regime interact dur...
www.biorxiv.org
Reposted by Shrey Dixit
doellerlab.bsky.social
Join us at the Max Planck Institute in Leipzig as a Postdoc to explore cognitive maps in the human brain: learning, memory & the formation of structural representations. Excellent infrastructure with a leading scientific network. Apply by 13 October: postdocprogram.mpg.de/node/21187
Reposted by Shrey Dixit
matthiasmichel.bsky.social
Very happy to announce that our paper “Sensory Horizons and the Functions of Conscious Vision” is now out as a target article in BBS!! @smfleming.bsky.social and I present a new theory of the evolution and functions of visual consciousness. Article here: doi.org/10.1017/S014.... A (long) thread 🧵
Sensory Horizons and the Functions of Conscious Vision | Behavioral and Brain Sciences | Cambridge Core
Sensory Horizons and the Functions of Conscious Vision
doi.org
Reposted by Shrey Dixit
marcusghosh.bsky.social
How does the structure of a neural circuit shape its function?

@neuralreckoning.bsky.social & I explore this in our new preprint:

doi.org/10.1101/2025...

🤖🧠🧪

🧵1/9
A diagram showing 128 neural network architectures.
shreydixit.bsky.social
Finally, huge thanks to the organizers for giving us this opportunity @algonautsproject.bsky.social
shreydixit.bsky.social
What's next? We plan to publish a more in-depth analysis of VIBE's internal dynamics and feature–parcel mappings, to advance our understanding of brain function and guide future work in neuroscience.
Spoiler: We already have a model that beats the top score of this challenge ;)
shreydixit.bsky.social
Shapley (MSA) Insights:
Feature attribution maps align with neuroanatomy, but crazily enough, textual features from the transcripts are the most predictive of all.
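For context on the method: MSA (multi-perturbation Shapley-value analysis) treats units or feature groups as players in a cooperative game and scores each by its Shapley value, estimated by lesioning coalitions and measuring the change in performance. A minimal Monte-Carlo sketch of that idea, assuming a user-supplied `value_fn` that scores a coalition of kept players (generic illustration, not the authors' implementation):

```python
import numpy as np

def shapley_values(value_fn, n_players, n_perms=200, rng=None):
    """Monte-Carlo Shapley estimation: average each player's marginal
    contribution over random orderings in which players join the coalition."""
    rng = np.random.default_rng(rng)
    phi = np.zeros(n_players)
    for _ in range(n_perms):
        order = rng.permutation(n_players)
        coalition = set()
        prev = value_fn(coalition)          # value of the empty coalition
        for p in order:
            coalition.add(p)
            cur = value_fn(coalition)
            phi[p] += cur - prev            # marginal contribution of p
            prev = cur
    return phi / n_perms
```

In an additive game (coalition value = sum of member weights), the estimate recovers the weights exactly, which makes a handy sanity check before pointing `value_fn` at a lesioned model.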
shreydixit.bsky.social
We have strong lifts over baseline both in-distribution and OOD. VIBE (final): r=0.3225 (ID, Friends S07) & 0.2125 (6×OOD).
Competition submission (earlier iteration): r=0.3198 ID / 0.2096 OOD → 1st in Phase-1, 2nd overall.
vs baseline 0.2033/0.0895 → +0.119/+0.123.
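For readers outside the challenge: the scores above are Pearson correlations between predicted and measured fMRI time series, computed per parcel and averaged. A minimal numpy sketch of that evaluation, assuming predictions and targets as (timepoints × parcels) arrays (illustrative, not the official scoring code):

```python
import numpy as np

def mean_parcel_r(pred, true):
    """Average Pearson r across parcels.
    pred, true: arrays of shape (timepoints, parcels)."""
    p = pred - pred.mean(axis=0)            # center each parcel's time series
    t = true - true.mean(axis=0)
    denom = np.linalg.norm(p, axis=0) * np.linalg.norm(t, axis=0)
    return float(((p * t).sum(axis=0) / denom).mean())
```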
shreydixit.bsky.social
We present VIBE (Video-Input Brain Encoder), a two-stage Transformer: the first stage fuses text, audio, and visual features per timepoint (plus subject embeddings); the second models temporal dynamics with rotary positional embeddings.
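For readers unfamiliar with rotary positional embeddings: they encode position by rotating pairs of feature channels through position-dependent angles, so that downstream attention scores depend on relative offsets between timepoints. A minimal numpy sketch of the rotation itself (illustrative only, not VIBE's code):

```python
import numpy as np

def rope(x, base=10000.0):
    """Apply rotary positional embeddings to x of shape (seq_len, dim).
    Channel pairs (x1_i, x2_i) are rotated by a position-dependent angle."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)        # per-pair rotation speeds
    angles = np.outer(np.arange(seq_len), freqs)     # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,      # 2-D rotation per pair
                           x1 * sin + x2 * cos], axis=-1)
```

Because each pair is only rotated, the transform leaves every vector's norm unchanged and is the identity at position 0.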
shreydixit.bsky.social
Competition & Data: Algonauts 2025 tests how well we can predict brain activity while people watch naturalistic movies. Multi-modal stimuli (video, audio, text) → whole-brain fMRI, split into parcels. Train on movies/TV; evaluate in-distribution and on out-of-distribution films.
shreydixit.bsky.social
We did it! 🏆 We won Phase 1 and placed 2nd overall in the Algonauts 2025 Challenge. So proud of the crew
@keckjanis.bsky.social, Viktor Studenyak, Daniel Schad, Aleksandr Shpilevoi. Huge thanks to @andrejbicanski.bsky.social and @doellerlab.bsky.social for support. Report: arxiv.org/abs/2507.17958
shreydixit.bsky.social
I personally prefer Subway Surfers. This one couldn't hold my attention for the whole video
Reposted by Shrey Dixit
jbimaknee.bsky.social
For a few years I have said that neuromorphic is specialized general purpose, like GPUs, but with different advantages.

In this preprint I try to put some substance to that claim. There are real theoretical advantages, but they aren't obvious. 🧪🧠🤖 www.arxiv.org/abs/2507.17886
Neuromorphic Computing: A Theoretical Framework for Time, Space, and Energy Scaling
Neuromorphic computing (NMC) is increasingly viewed as a low-power alternative to conventional von Neumann architectures such as central processing units (CPUs) and graphics processing units (GPUs), h...
www.arxiv.org
shreydixit.bsky.social
Aside from a few mispronunciations, the AI really got the paper. Honestly, it's reassuring: if an AI can follow it, then folks in the field probably can too :)
shreydixit.bsky.social
Just came across an AI-generated video summary/review of our recent preprint—and I have to say, I’m genuinely impressed. It does a great job summarizing the paper, and I’d actually recommend it to others. Check it out: www.youtube.com/watch?v=3g5K...
Multidimensional Game-Theoretic Attribution of Function of Neural Units
YouTube video by LuxaK
www.youtube.com
Reposted by Shrey Dixit
ianholmes.org
White text on white background instructing LLMs to give positive reviews is apparently now common enough to show up in searches for boilerplate text.
neuralnoise.com
"in 2025 we will have flying cars" 😂😂😂
Reposted by Shrey Dixit
kayson.bsky.social
Good morning folks. If you’re around #OCNS2025, maybe come by today for a chat about optimal communication in brain networks? ✨
Reposted by Shrey Dixit
marcelomattar.bsky.social
Thrilled to see our TinyRNN paper in @nature! We show how tiny RNNs predict choices of individual subjects accurately while staying fully interpretable. This approach can transform how we model cognitive processes in both healthy and disordered decisions. doi.org/10.1038/s415...
Discovering cognitive strategies with tiny recurrent neural networks - Nature
Modelling biological decision-making with tiny recurrent neural networks enables more accurate predictions of animal choices than classical cognitive models and offers insights into the underlying cog...
doi.org
shreydixit.bsky.social
Not really. Thankfully, Max Planck has GPU clusters that I can use.
Although I did ask my friend (o3) about it, according to whom, 5090 is sufficient for most cases. (chatgpt.com/share/685c6a...)
ChatGPT - RTX 5000 vs 5090 for ML
Shared via ChatGPT
chatgpt.com
shreydixit.bsky.social
DCGAN Case Study:
Pixel-wise Shapley Modes reveal an inverted CNN hierarchy: the first transposed-conv layer shapes high-level facial parts, while the final layer merely renders RGB channels.
shreydixit.bsky.social
LLM Case Study:
Calculated expert-level contributions of an MoE-based LLM across arithmetic, language ID, and factual recall. Found one expert that was critical across all domains, as well as redundant experts whose removal barely reduces performance.
shreydixit.bsky.social
MLP Case Study:
We analysed neural computations within a three-layer MNIST MLP. L1/L2 regularisation funnels computation into a few neurons, and, contrary to popular belief, large weights do not imply high importance of a neural unit.