Kanaka Rajan
@kanakarajanphd.bsky.social
3.5K followers 230 following 35 posts
Associate Professor at Harvard & Kempner Institute. Applying computational frameworks & machine learning to decode multi-scale neural processes. Marathoner. Rescue dog mom. https://www.rajanlab.com/
Pinned
kanakarajanphd.bsky.social
(1/8) New paper from our team!

Yu Duan & Hamza Chaudhry introduce POCO, a tool for predicting brain activity at the cellular & network level during spontaneous behavior.

Find out how we built POCO & how it changes neurobehavioral research 👇

arxiv.org/abs/2506.14957
kanakarajanphd.bsky.social
(6/8) With its fast predictions and steady gains from longer recordings & more sessions, POCO shows enormous potential for use in larger brains & real-time neurotechnologies like “neuro-foundation models” for brain-computer interfaces (BCI).
kanakarajanphd.bsky.social
(5/8) Other time-series forecasting models perform well on synthetic/simulated data 🤖

POCO dominates in context-dense predictions based on REAL neural data 🧠
kanakarajanphd.bsky.social
(4/8) Beyond neural predictions, POCO's learned unit embeddings independently reproduce brain region clustering without any anatomical labels.

That means at single-cell resolution across entire brains, POCO mimics biological organization purely from neural activity patterns ✨
kanakarajanphd.bsky.social
(3/8) POCO forecasts neural activity up to ~15 seconds into the future, across behaviors & species 🔮

After pre-training, POCO’s speed & flexibility allow it to adapt to new recordings with minimal fine-tuning, opening the door for real-time applications.
kanakarajanphd.bsky.social
(2/8) POCO was trained on spontaneous & task-specific behavior data from zebrafish, mice, & C. elegans. It combines a local forecaster with a population encoder capturing brain-wide patterns, so we track each neuron individually AND how the whole brain affects each cell 🧠
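The two-part idea in this post can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's actual architecture: all names and sizes here are made up. A population encoder compresses recent brain-wide activity into one shared context vector, and a local forecaster predicts each neuron's next value from its own history plus that shared context.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, t_context, d_embed = 50, 32, 8  # toy sizes, not the paper's

# Toy recording: neurons x recent time steps
activity = rng.standard_normal((n_neurons, t_context))

# Population encoder (sketch): squash the whole population's recent
# activity into a single shared context vector.
W_pop = rng.standard_normal((d_embed, n_neurons * t_context)) / np.sqrt(n_neurons * t_context)
pop_context = np.tanh(W_pop @ activity.ravel())

# Local forecaster (sketch): each neuron predicts its own next value
# from its individual history PLUS the brain-wide context.
W_self = rng.standard_normal((n_neurons, t_context)) / np.sqrt(t_context)
W_ctx = rng.standard_normal((n_neurons, d_embed)) / np.sqrt(d_embed)
next_step = (W_self * activity).sum(axis=1) + W_ctx @ pop_context  # one prediction per neuron
```

The point of the split is exactly what the post says: the per-neuron weights track each cell individually, while the shared `pop_context` lets brain-wide structure influence every cell's forecast.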
kanakarajanphd.bsky.social
Thanks for having me at @camp_course and the @iitmadras Brain Center during my visit to India this summer! 🥭

It was lovely to be back home, and a pleasure to work with the young scientists there who are finding their path in computational neuroscience 🧠
kanakarajanphd.bsky.social
(7/7) Congrats to Riley & Ryan on this work. Also huge thanks to collaborators Felix Berg, @raymondrchua.bsky.social, John Vastola, @joshlunger.bsky.social, Billy Qian & everyone who helps us kick the tires.
kanakarajanphd.bsky.social
(6/7) A 4096-unit agent that remembers, plans & navigates risks gives a “window-sized” brain we can watch neuron-by-neuron. ForageWorld is a perfect sandbox for testing cognitive map theories & offers a blueprint for ultra-efficient autonomous AI systems in a naturalistic world.
kanakarajanphd.bsky.social
(5/7) Analyzing the trained agent reveals an interpretable neural GPS: past & future positions can be linearly decoded over long horizons from the agent’s ‘neural’ activity, and a lightweight “predict-its-own-position” signal sharpens its compass even further.
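"Linearly decoded" has a concrete meaning that a toy numpy example can show (synthetic data and made-up sizes; this is not the paper's analysis): if position is linearly embedded in the agent's unit activity, a simple least-squares readout recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
t, n_units = 500, 64  # toy sizes

# Synthetic 2-D trajectory (a random walk) partly encoded in 'neural' activity
positions = np.cumsum(rng.standard_normal((t, 2)), axis=0)
embed = rng.standard_normal((2, n_units))
activity = positions @ embed + 0.1 * rng.standard_normal((t, n_units))

# Linear decoder: fit position from activity by least squares
W, *_ = np.linalg.lstsq(activity, positions, rcond=None)
decoded = activity @ W

# Variance explained by the linear readout
r2 = 1 - ((decoded - positions) ** 2).sum() / ((positions - positions.mean(0)) ** 2).sum()
```

When a trained agent's hidden state supports this kind of readout over long horizons, that is the "interpretable neural GPS" the post describes.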
kanakarajanphd.bsky.social
(4/7) What we see is planning & recall over hundreds of timesteps!

After a quick wander, the agent switches from exploring to visiting patches from memory: revisiting food it hasn't seen for 500-1000 steps, skirting predator zones & timing resource visits.
kanakarajanphd.bsky.social
(3/7) For the agent’s “brain,” we used a lean recurrent network: 4096 units (<0.2% of the size of an ant brain) with only 10% connectivity, and we let RL teach it what to do by trial & error.
kanakarajanphd.bsky.social
(2/7) Introducing ForageWorld: Each session spawns a large arena with lakes, predators & food patches that deplete over time. The AI agent must juggle hunger, thirst & fatigue in this virtual space.

The agent can only "see" a small patch around itself, so no bird’s-eye view.
kanakarajanphd.bsky.social
(1/7) New preprint from Rajan lab! 🧠🤖
@ryanpaulbadman1.bsky.social & Riley Simmons-Edler show, through cog sci, neuro & ethology, how an AI agent with fewer ‘neurons’ than an insect can forage, find safety & dodge predators in a virtual world. Here's what we built:

Preprint: arxiv.org/pdf/2506.06981
kanakarajanphd.bsky.social
New work incoming at #RLDM2025 🤖 🐟

While we look forward to sharing our research, I'm mindful that many colleagues, including the authors of our second abstract, can't attend due to funding & travel issues.

Read the extended abstracts: rldm.org/program-2025/ #neuroskyence
Rajan Lab presentation at RLDM 2025: 'Investigating active electrosensing and communication in deep-reinforcement-learning-trained artificial fish collectives'
kanakarajanphd.bsky.social
It was such a pleasure to host @kordinglab.bsky.social at @kempnerinstitute.bsky.social last week - thank you for joining us!
Kanaka and Konrad smiling together in front of a blackboard
kanakarajanphd.bsky.social
Big day for the Rajan lab at @harvardmed.bsky.social Friday Seminar Series 🌟

@satpreetsingh.bsky.social & Siyan Zhou gave outstanding talks on collective behaviors in artificial fish schools & disordered attractors in mice. I’m so proud of their work! 🤖🧠
Reposted by Kanaka Rajan
kanakarajanphd.bsky.social
Headed to #Cosyne2025?

Don't miss this workshop led by Rajan lab postdocs @satpreetsingh.bsky.social, @chingfang.bsky.social & @gzmozd.bsky.social on how complex tasks, agent-based models & theory-experiment cross-talk help us study how the brain works 🧠🤖

#neuroskyence @cosynemeeting.bsky.social
COSYNE 2025 workshop information for the Rajan Lab.
Monday, March 31, 9:00am-6:30pm features a Workshop titled 'Agent-based models in Neuroscience: Complex Planning, Embodiment, and Beyond'. The workshop explores digital agents and neural computations behind naturalistic behavior, bringing together experimentalists and theorists at the intersection of neuroscience, AI, and biomechanics.
Names and headshots for the organizers (Ching Fang, PhD; Satpreet Singh, PhD; and Gizem Ozdil, PhD) are included
Reposted by Kanaka Rajan
kanakarajanphd.bsky.social
Big showing from the Rajan Lab at @cosynemeeting.bsky.social!

We have posters on everything from multi-agent social foraging to neuromodulated neural networks. Catch us in Poster Sessions 2 & 3 🧠🤖

#Cosyne2025 #NeuroAI #CompSci #neuroskyence
COSYNE 2025 Rajan lab posters. 
Friday, March 28 - Poster Session 2:
2-043: 'Emergent small-group foraging under variable group size, food scarcity, and sensory capabilities' by Zhouyang (Hanson) Lu, Satpreet H Singh, Sonja Johnson-Yu, Aaron Walsman, Kanaka Rajan
2-058: 'Modeling rapid neuromodulation in the cortex-basal ganglia-thalamus loop' by Julia Costacurta and Yu Duan (co-first), John Assad, Kanaka Rajan and Scott Linderman (co-senior)
2-060: 'Measuring and Controlling Solution Degeneracy across Task-Trained RNNs' by Ann Huang, Satpreet Singh, Kanaka Rajan

Saturday, March 29 - Poster Session 3:

3-020: 'ForageWorld: RL agents in complex foraging arenas develop internal maps for navigation and planning' by Ryan Badman, Riley Simmons-Edler, Joshua Lunger, John Vastola, William Qian, Kanaka Rajan
3-109: 'Inhibition-stabilized disordered dynamics in mouse cortex during navigational decision-making' by Siyan Zhou, Ryan Badman, Charlotte Arlt, Kanaka Rajan, Christopher Harvey