M Ganesh Kumar
@mgkumar138.bsky.social
210 followers 200 following 33 posts
Computational Neuroscience, Reinforcement Learning. Postdoctoral Fellow @ Harvard. Previously @ A*STAR & NUS. 🇸🇬
Reposted by M Ganesh Kumar
pessoabrain.bsky.social
Brain-body physiology:
Local, reflex, and central communication
Excellent review paper about reactive and anticipatory processes.
#neuroskyence
doi.org/10.1016/j.ce...
mgkumar138.bsky.social
I am extremely grateful to be awarded the National University of Singapore (NUS) Development Grant, and to be a Young NUS Fellow! Look forward to collaborating with the Yong Loo Lin School of Medicine on exciting projects. This is my first grant and hopefully many more to come! #NUS #NeuroAI
Reposted by M Ganesh Kumar
antihebbiann.bsky.social
I wrote a Comment on neurotheory, and now you can read it!

Some thoughts on where neurotheory has and has not taken root within the neuroscience community, how it has shaped those subfields, and where we theorists might look next for fresh adventures.

www.nature.com/articles/s41...
Theoretical neuroscience has room to grow
Nature Reviews Neuroscience - The goal of theoretical neuroscience is to uncover principles of neural computation through careful design and interpretation of mathematical models. Here, I examine...
www.nature.com
Reposted by M Ganesh Kumar
elliottwimmer.bsky.social
🧵 New paper! We studied depression symptoms and goal-directed decisions under uncertainty

@shiyiliang.bsky.social, with @evanrussek.bsky.social & @robbrutledge.bsky.social

Surprisingly, we found that apathy–anhedonia was linked to enhanced goal-directed behavior. www.biorxiv.org/content/10.1...
www.biorxiv.org
mgkumar138.bsky.social
Not just for AI: these theories can improve our understanding of biological networks too!
simonsfoundation.org
Our new Simons Collaboration on the Physics of Learning and Neural Computation will develop powerful tools from #physics, #math, computer science and theoretical #neuroscience to understand how large neural networks learn, compute, scale, reason and imagine: www.simonsfoundation.org/2025/08/18/s...
Simons Foundation Launches Collaboration on the Physics of Learning and Neural Computation
www.simonsfoundation.org
Reposted by M Ganesh Kumar
talboger.bsky.social
On the left is a rabbit. On the right is an elephant. But guess what: They’re the *same image*, rotated 90°!

In @currentbiology.bsky.social, @chazfirestone.bsky.social & I show how these images—known as “visual anagrams”—can help solve a longstanding problem in cognitive science. bit.ly/45BVnCZ
Reposted by M Ganesh Kumar
tomerullman.bsky.social
trying this with GPT-5 and charting new frontiers in gaslighting
Reposted by M Ganesh Kumar
david-g-clark.bsky.social
Wanted to share a new version (much cleaner!) of a preprint on how connectivity structure shapes collective dynamics in nonlinear RNNs. Neural circuits have highly non-iid connectivity (e.g., rapidly decaying singular values, structured singular-vector overlaps), unlike classical random RNN models.
Connectivity structure and dynamics of nonlinear recurrent neural networks
Studies of the dynamics of nonlinear recurrent neural networks often assume independent and identically distributed couplings, but large-scale connectomics data indicate that biological neural circuit...
arxiv.org
mgkumar138.bsky.social
3. We present TeDFA-δ, a biologically plausible deep spiking RL model that leverages temporal integration and weak learning rules to outperform standard MLPs trained with backpropagation for policy learning, highlighting the importance of neural dynamics over credit assignment for effective control:

2025.ccneuro.org/poster/?id=S...
Poster Presentation
2025.ccneuro.org
mgkumar138.bsky.social
2. We developed a biologically plausible computational model of the dentate gyrus that shows how both impaired synaptic plasticity and increased neurogenesis—modulated by the Cbln4-Neo1 complex—disrupt behavioral pattern separation:

2025.ccneuro.org/poster/?id=P...
Poster Presentation
2025.ccneuro.org
mgkumar138.bsky.social
1. We developed an RNN-based meta-RL framework that models schizophrenia-like decision-making deficits. We find a positive correlation between the number of dynamical attractor states and suboptimal behavior:

2025.ccneuro.org/poster/?id=4...
Poster Presentation
2025.ccneuro.org
mgkumar138.bsky.social
One proceeding and two extended abstracts at the Cognitive Computational Neuroscience (CCN) Conference 2025! Short summaries and links are in the thread. Look forward to the discussions! #CCN25
Reposted by M Ganesh Kumar
lampinen.bsky.social
In neuroscience, we often try to understand systems by analyzing their representations — using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:
[Image: What do representations tell us about a system? A mouse with a scope showing a vector of activity patterns, and a neural network with a vector of unit activity patterns.]
[Image: Common analyses of neural representations. Encoding models: relating activity to task features. Comparing models via neural predictivity: comparing two neural networks by their R^2 to mouse brain activity. RSA: assessing brain-brain or model-brain correspondence using representational dissimilarity matrices.]
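The RSA comparison mentioned in the post can be sketched in a few lines: build a representational dissimilarity matrix (RDM) for each system from its condition-by-unit activity, then correlate the two RDMs. This is a minimal illustration with random placeholder data, not any specific study's pipeline; the array shapes and names are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activity):
    """Condensed RDM: pairwise correlation distance between the
    activity patterns of each pair of conditions.
    activity: (n_conditions, n_units) array."""
    return pdist(activity, metric="correlation")

def rsa_score(activity_a, activity_b):
    """Spearman correlation between two systems' RDMs.
    High values mean the two systems order condition pairs by
    dissimilarity in the same way."""
    rho, _ = spearmanr(rdm(activity_a), rdm(activity_b))
    return rho

# Hypothetical data: 10 conditions, "brain" with 50 units,
# "model" as a random linear readout of the same code.
rng = np.random.default_rng(0)
brain = rng.normal(size=(10, 50))
model = brain @ rng.normal(size=(50, 30))
print(rsa_score(brain, model))
```

Note that RSA only compares pairwise geometry, which is one reason such analyses can be biased toward a subset of what a system represents, as the commentary argues.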
Reposted by M Ganesh Kumar
sussillodavid.bsky.social
Coming March 17, 2026!
Just got my advance copy of Emergence — a memoir about growing up in group homes and somehow ending up in neuroscience and AI. It’s personal, it’s scientific, and it’s been a wild thing to write. Grateful and excited to share it soon.
Reposted by M Ganesh Kumar
neural-reckoning.org
How can we test theories in neuroscience? Take a variable predicted to be important by the theory. It could fail to be observed because it's represented in some nonlinear, even distributed way. Or it could be observed but not be causal because the network is a reservoir. How can we deal with this?
Reposted by M Ganesh Kumar
neurograce.bsky.social
This summer my lab's journal club somewhat unintentionally ended up reading papers on a theme of "more naturalistic computational neuroscience". I figured I'd share the list of papers here 🧵:
mgkumar138.bsky.social
First #ICML2025 conference proceeding (icml.cc/virtual/2025...)! We (@frostedblakess.bsky.social, @jzv.bsky.social, @cpehlevan.bsky.social) developed a simple model to better understand state representation learning dynamics in both artificial and biological intelligent systems!
ICML Poster: A Model of Place Field Reorganization During Reward Maximization (ICML 2025)
icml.cc
mgkumar138.bsky.social
State representation learning in the hippocampus?
kempnerinstitute.bsky.social
Monday 4/28 at #ICLR2025!

Submission: openreview.net/forum?id=Qcv...

'A Model of Place Field Reorganization During Reward Maximization'

@mgkumar138.bsky.social, Blake Bordelon, Jacob A Zavatone-Veth, @CPehlevan.bsky.social

#ML #neuroscience
mgkumar138.bsky.social
I'm heading back to Singapore for ICLR25! Hit me up for discussions or where to find good food!

#neuroai #home
kempnerinstitute.bsky.social
Are you at #ICLR2025? See the lineup of Kempner Institute presenters and check out their work!

#ML #AI
mgkumar138.bsky.social
Interestingly, we found no significant difference in under- and over-updating behavior in schizophrenia patient data (Nassar et al. 2021). Instead, analyzing the behavior with the delta area metric revealed a significant difference, suggesting the utility of model-guided analysis of human behavioral data.
mgkumar138.bsky.social
We used a fixed point finder algorithm and found that suboptimal agents (lower delta area values) exhibited a smaller number of unstable fixed points than more optimal agents. The number of stable fixed points remained consistent across the delta area metric.
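Fixed point finding in RNNs typically follows a standard recipe: minimize the speed of the dynamics from many initial states, then classify each fixed point by the eigenvalues of the local Jacobian. Below is a minimal sketch for a generic discrete-time RNN x ← tanh(Wx); the weight matrix and update rule are illustrative placeholders, not the model from the poster.

```python
import numpy as np
from scipy.optimize import minimize

def find_fixed_point(W, x0):
    """Minimize q(x) = 0.5 * ||tanh(W x) - x||^2 starting from x0.
    A fixed point of the update x <- tanh(W x) has q(x) = 0."""
    q = lambda x: 0.5 * np.sum((np.tanh(W @ x) - x) ** 2)
    res = minimize(q, x0, method="L-BFGS-B")
    return res.x, res.fun

def is_stable(W, x_star):
    """Classify stability from the Jacobian of the update map at x*:
    J = diag(1 - tanh(W x*)^2) @ W. Stable if spectral radius < 1."""
    J = np.diag(1.0 - np.tanh(W @ x_star) ** 2) @ W
    return np.max(np.abs(np.linalg.eigvals(J))) < 1.0

# Hypothetical weakly coupled network: gain < 1, so the origin is a
# stable fixed point that the optimizer should locate.
rng = np.random.default_rng(1)
W = 0.5 * rng.normal(size=(20, 20)) / np.sqrt(20)
x_star, q_val = find_fixed_point(W, rng.normal(size=20))
print(q_val, is_stable(W, x_star))
```

In practice the optimization is restarted from many states visited during task trials, and near-zero values of q(x) are clustered to obtain the set of distinct stable and unstable fixed points that are then counted.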
mgkumar138.bsky.social
Besides (1) the reward discount factor, we explored (2) prediction error scaling, (3) the probability of disrupting RNN dynamics, and (4) rollout buffer length. Each hyperparameter differently influenced the suboptimal decision-making behavior, which we termed the delta area.