Arna Ghosh
@arnaghosh.bsky.social
240 followers 190 following 38 posts
PhD student at Mila & McGill University, Vanier scholar • 🧠+🤖 grad student• Ex-RealityLabs, Meta AI • Believer in Bio-inspired AI • Comedy+Cricket enthusiast
arnaghosh.bsky.social
Very cool study, with interesting insights about theta sequences and learning!
rhythmicspikes.bsky.social
1/
🚨 New preprint! 🚨

Excited and proud (& a little nervous 😅) to share our latest work on the importance of #theta-timescale spiking during #locomotion in #learning. If you care about how organisms learn, buckle up. 🧵👇

📄 www.biorxiv.org/content/10.1...
💻 code + data 🔗 below 🤩

#neuroskyence
Reposted by Arna Ghosh
arnaghosh.bsky.social
Congratulations, Dan!! 😁
arnaghosh.bsky.social
This looks like a very cool result! 😀
Can't wait to read in detail.
arnaghosh.bsky.social
Fantastic work on Multi-agent RL from
@dvnxmvlhdf5.bsky.social & @tyrellturing.bsky.social! 🤩
dvnxmvlhdf5.bsky.social
Preprint Alert 🚀

Multi-agent reinforcement learning (MARL) often assumes that agents know when other agents cooperate with them. But for humans, this isn't always the case. For example, Plains Indigenous groups used to leave resources for others to use at effigies called Manitokan.
1/8
Manitokan are images set up where one can bring a gift or receive a gift. 1930s Rocky Boy Reservation, Montana, Montana State University photograph. Colourized with AI
arnaghosh.bsky.social
Re: different implicit biases of architectures: the metrics implemented here (roughly) characterize the eigenspectrum (eigenvalue distribution) of the representation space. They don't incorporate the eigenvector information --> hence, "what" features don't matter, only "how" matters.
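The invariance implied above can be sketched quickly: a metric that depends only on the eigenspectrum of the feature covariance cannot distinguish two representations that differ by a rotation of feature space, because rotation changes the eigenvectors but not the eigenvalues. A minimal numpy sketch (illustrative only; not the actual Reptrix code):

```python
import numpy as np

# Hedged sketch: eigenspectrum-based metrics see only the eigenvalues of
# the feature covariance, so any rotation of the representation space
# (which changes the eigenvectors, i.e. "what" the features are) leaves
# them unchanged.
rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 64))          # n_samples x n_features representations

def eigenspectrum(Z):
    Zc = Z - Z.mean(axis=0)              # center the features
    cov = Zc.T @ Zc / (len(Z) - 1)       # feature covariance matrix
    return np.sort(np.linalg.eigvalsh(cov))[::-1]  # eigenvalues, descending

Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))     # random rotation matrix
spec_orig = eigenspectrum(Z)
spec_rot = eigenspectrum(Z @ Q)          # rotated features: new eigenvectors,
assert np.allclose(spec_orig, spec_rot)  # ...but an identical eigenspectrum
```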
arnaghosh.bsky.social
Comparing models of different architectures is often tricky because of the different implicit biases of each architecture.
RSA can be helpful in some cases.
But if you are looking for a metric (number) that tells you which model is better, I have some ideas but they are not implemented here. 😜
arnaghosh.bsky.social
Indeed!
The metrics work best when comparing networks of comparable architectures, though. So, if you are looking to select the best model checkpoint from a pretraining run or across different hyperparameter configurations, and the loss function is not as insightful, these metrics are incredibly helpful. :)
arnaghosh.bsky.social
Also, big shoutout to @quentin-garrido.bsky.social + gang and
@aggieinca.bsky.social + gang for developing RankMe and LiDAR, respectively.
Reptrix incorporates these representation quality metrics. 🚀
Let's make it easier to select good SSL/foundation models. 💪
arnaghosh.bsky.social
Are you training self-supervised/foundation models, and worried if they are learning good representations? We got you covered! 💪
🦖Introducing Reptrix, a #Python library to evaluate representation quality metrics for neural nets: github.com/BARL-SSL/rep...
🧵👇[1/6]
#DeepLearning
arnaghosh.bsky.social
PS: It was fun to put Reptrix together with @daniebenes.bsky.social, our open-source expert and new addition to the α-Squad comprising Arnab Mondal, @kumarkagrawal.bsky.social @tyrellturing.bsky.social and me.
[6/6]
arnaghosh.bsky.social
If you do end up giving Reptrix a try, we would love to hear from you! We also encourage you to contribute to our repo and add example notebooks trying out these metrics for your networks, beyond natural language and vision domains.
github.com/BARL-SSL/rep...
[5/6]
reptrix/CONTRIBUTING.md at main · BARL-SSL/reptrix
Library that provides metrics to assess representation quality - BARL-SSL/reptrix
github.com
arnaghosh.bsky.social
Metrics for evaluating representation quality:
- α-ReQ: Measures discriminativeness. Lower α = better!
- RankMe: Assesses representation capacity. Higher rank = higher capacity!
- LiDAR: Evaluates separability among object manifolds. Higher rank = better separability!
[4/6]
arnaghosh.bsky.social
✨ Key Features of Reptrix:
- 📈 Suite of metrics to assess representation quality: α-ReQ, RankMe, LiDAR, and more!
- 🤝 Seamless PyTorch integration for minimal setup, maximum insights
- 💻 Open Source: Contribute and enhance!
[3/6]
arnaghosh.bsky.social
Inspired by conversations after our α-ReQ paper (NeurIPS 2022) and subsequent work, we created Reptrix as an open-source library for assessing representation quality across models of vision, language… and more.
Check out our @mila-quebec.bsky.social blogpost: mila.quebec/en/article/a...
[2/6]
α-ReQ: Assessing Representation Quality in SSL | Mila
The success of self-supervised learning algorithms has drastically changed the landscape of training deep neural networks. With well-engineered architectures and training objectives, SSL models learn ...
mila.quebec
arnaghosh.bsky.social
Super cool paper!
It formalizes a lot of ideas I have been mulling over the past year, and connects tons of historical ideas neatly.
Definitely worth a read if you are working/interested in mechanistic interp and neural representations.
david-klindt.bsky.social
🔵 New paper! We explore sparse coding, superposition, and the Linear Representation Hypothesis (LRH) through identifiability theory, compressed sensing, and interpretability. If you’re curious about lifting neural reps out of superposition, this might interest you! 🤓
arxiv.org/abs/2503.01824
From superposition to sparse codes: interpretable representations in neural networks
Understanding how information is represented in neural networks is a fundamental challenge in both neuroscience and artificial intelligence. Despite their nonlinear architectures, recent evidence sugg...
arxiv.org
arnaghosh.bsky.social
I do not have words that can capture how grateful I am to you for your support throughout this journey. 🥹

I joined the lab as a fan of our work (apical dendrites FTW 😉), 5.5 years later I am a fan of you! ⭐
arnaghosh.bsky.social
Thank you so much Rui! :)
I was blessed with great examiners, who made my job easier. 😅
arnaghosh.bsky.social
Thanks Koustuv da for your guidance and wise words throughout! ❤️
arnaghosh.bsky.social
Thanks Yigit!! ❤️
Looking forward to doing exciting things together. :)
arnaghosh.bsky.social
Thanks a lot, Adrien!
You have been an integral part of my grad school journey, thank you so much for your support and encouragement throughout. 🥹
arnaghosh.bsky.social
I have been very fortunate to have the support of close friends & family through my academic journey, as well as the support of my stellar defence committee: Aaron Courville, Adriana Romero-Soriano, @somnirons.bsky.social @apeyrache.bsky.social, David Adelani. 🚀
arnaghosh.bsky.social
Just over a week since I defended my 🤖+🧠PhD thesis, and the feeling is just sinking in. Extremely grateful to
@tyrellturing.bsky.social for supporting me through this amazing journey! 🙏
Big thanks to all members of the LiNC lab, and colleagues at McGill University and @mila-quebec.bsky.social. ❤️😁
Reposted by Arna Ghosh
franklandlab.bsky.social
Just giving this a boost for those who may not have seen it yet... we have a PI position (molecular and cellular basis of cognition) at The Hospital for Sick Children (Toronto). The position comes with an appointment at Assist/Assoc Prof level at U of T. Share widely!
can-acn.org/scientist-se...
Scientist/Senior Scientist – Research Institute, Hospital for Sick Children, University of Toronto – Canadian Association for Neuroscience
can-acn.org