Emerson Harkin
@efharkin.bsky.social
130 followers 85 following 45 posts
Computational neuroscience postdoc interested in serotonin | Dayan lab @mpicybernetics.bsky.social | 🇨🇦 in 🇩🇪
Pinned
efharkin.bsky.social
I'm excited to share that the last chapter of my PhD thesis is now published in Nature! 🍾

What drives serotonin neurons? We think it's the expectation of future reward and --- critically --- how fast this expectation is increasing. 📈

doi.org/10.1038/s415...

1/6
A prospective code for value in the serotonin system - Nature
Merging ideas from reinforcement learning theory with recent insights into the filtering properties of the dorsal raphe nucleus, a unifying perspective is found explaining why serotonin neurons are ac...
doi.org
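The core idea of the pinned paper — serotonin activity tracking both the expectation of future reward and how fast that expectation is rising — can be sketched numerically. This is a toy illustration under my own assumptions (geometric value ramp, made-up coefficients), not the paper's actual model:

```python
import numpy as np

# Assume a single reward r arrives at time T, so the expected discounted
# future reward ramps up as the reward approaches: V(t) = gamma**(T-t) * r.
gamma, r, T = 0.9, 1.0, 10
t = np.arange(T + 1)
V = gamma ** (T - t) * r          # value: rises toward the reward
dV = np.diff(V, prepend=V[0])     # how fast the expectation is increasing

# Hypothetical drive combining value and its rate of change
# (the weights 1.0 and 5.0 are arbitrary illustration values).
drive = 1.0 * V + 5.0 * dV
```

Both terms grow as the reward nears, so the combined drive ramps up steeply just before reward delivery — the qualitative signature the post describes.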
Reposted by Emerson Harkin
modirshanechi.bsky.social
New in @pnas.org: doi.org/10.1073/pnas...

We study how humans explore a 61-state environment with a stochastic region that mimics a “noisy-TV.”

Results: Participants keep exploring the stochastic part even when it’s unhelpful, and novelty-seeking best explains this behavior.

#cogsci #neuroskyence
Reposted by Emerson Harkin
gershbrain.bsky.social
This is one of the most outstanding examples of circuit understanding I've seen in a long time. The unification of theory and experiment is beautiful.

When Malcolm presented this in my lab, the audience was cheering at the end, and one person shouted (non-ironically) "You did it!"
malcolmgcampbell.bsky.social
🚨Our preprint is online!🚨

www.biorxiv.org/content/10.1...

How do #dopamine neurons perform the key calculations in reinforcement #learning?

Read on to find out more! 🧵
efharkin.bsky.social
What a beautiful result! Congrats on this work.
Reposted by Emerson Harkin
marisosa.bsky.social
The Sosa Lab website is now live!
www.sosaneurolab.com

We will be seeking a postdoctoral researcher to join the growing team! If you are a rodent neuroscientist and interested in doing systems neuro work in the mountains 🏔️, please check out the "Join" page.
Sosa Lab
www.sosaneurolab.com
efharkin.bsky.social
field matures.

Kuhn would have been writing around the time of H&H's squid axon experiments. I can't help but think that if he were writing today, he might say that ephys has matured --- but neuro as a whole, maybe not so much. Perhaps that's your point?
efharkin.bsky.social
I just finished reading Kuhn's Structure of Scientific Revolutions and was astonished by his argument that fully precise definitions are 1) rare, especially early on, and 2) not necessary for progress. In his view, shared intuitions are more fundamental, and these only become codified as the 1/2
Reposted by Emerson Harkin
modirshanechi.bsky.social
So happy to see this work out! 🥳
Huge thanks to our two amazing reviewers who pushed us to make the paper much stronger. A truly joyful collaboration with @lucasgruaz.bsky.social, @sobeckerneuro.bsky.social, and Johanni Brea! 🥰

Tweeprint on an earlier version: bsky.app/profile/modi... 🧠🧪👩‍🔬
openmindjournal.bsky.social
Merits of Curiosity: A Simulation Study
Abstract: 'Why are we curious?' has been among the central puzzles of neuroscience and psychology in the past decades. A popular hypothesis is that curiosity is driven by intrinsically generated reward signals, which have evolved to support survival in complex environments. To formalize and test this hypothesis, we need to understand the enigmatic relationship between (i) intrinsic rewards (as drives of curiosity), (ii) optimality conditions (as objectives of curiosity), and (iii) environment structures.

Here, we demystify this relationship through a systematic simulation study. First, we propose an algorithm to generate environments that capture key abstract features of different real-world situations. Then, we simulate artificial agents that explore these environments by seeking one of six representative intrinsic rewards: novelty, surprise, information gain, empowerment, maximum occupancy principle, and successor-predecessor intrinsic exploration. We evaluate the exploration performance of these simulated agents regarding three potential objectives of curiosity: state discovery, model accuracy, and uniform state visitation.

Our results show that the comparative performance of each intrinsic reward is highly dependent on the environmental features and the curiosity objective; this indicates that 'optimality' in top-down theories of curiosity needs a precise formulation of assumptions. Nevertheless, we found that agents seeking a combination of novelty and information gain always achieve a close-to-optimal performance on objectives of curiosity as well as in collecting extrinsic rewards. This suggests that novelty and information gain are two principal axes of curiosity-driven behavior. These results pave the way for the further development of computational models of curiosity and the design of theory-informed experimental paradigms.
dlvr.it
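To make "novelty-seeking" concrete: the simplest count-based version assigns each state a novelty that decays with visit count, and the agent moves toward the less-visited neighbor. This toy chain-world is my own simplification for illustration, not the paper's implementation:

```python
# Count-based novelty seeking on a 10-state chain (toy sketch).
n_states = 10
counts = [0] * n_states
state = 0
for _ in range(200):
    counts[state] += 1
    # Valid neighbors on the chain; move to the less-visited (more novel) one.
    neighbors = [s for s in (state - 1, state + 1) if 0 <= s < n_states]
    state = min(neighbors, key=lambda s: counts[s])
```

Even this greedy rule sweeps back and forth and ends up visiting every state — novelty alone is enough to drive broad exploration, which is the behavioral signature the studies above test against human data.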
Reposted by Emerson Harkin
roxana-zeraati.bsky.social
Looking forward to attending #CCN2025 for the first time and presenting the first steps of my postdoc project! If you’re interested in how learning the temporal structure of the environment affects foraging decisions, and how we’re testing this in a naturalistic experiment, come by poster B90 on Wednesday.
efharkin.bsky.social
So now every ChatGPT response will start like this?

"Your message addresses an important question and provides many nice insights. However, additional work is needed to make it fully convincing. Specifically, I have the following concerns:"
Reposted by Emerson Harkin
guidomeijer.com
🚨Pre-print alert🚨

We stimulated serotonin with optogenetics while doing large-scale Neuropixels recordings across the mouse brain. We found strong widespread modulation of neural activity, but no effect on the choices of the mouse 🐭

How is this possible? Strap in! (1/9) 👇🧵

doi.org/10.1101/2025...
Serotonin drives choice-independent reconfiguration of distributed neural activity
Serotonin (5-HT) is a central neuromodulator which is implicated in, amongst other functions, cognitive flexibility. 5-HT is released from the dorsal raphe nucleus (DRN) throughout nearly the entire f...
doi.org
efharkin.bsky.social
What a thoughtful and thought-provoking piece!
efharkin.bsky.social
What do you think? If a 🛑 is represented in a forest and no behaviour is there to hear it, does it really make a sign?

Inspired by this thought-provoking thread from @neuralreckoning.bsky.social: bsky.app/profile/neur...
efharkin.bsky.social
3. Slightly tangential: If we did a controlled experiment beforehand that involved randomly presenting a 🛑 while recording neural activity, we can say that the 🛑 *causes* the activity. 🧪 No need to use weasel words and say "activity correlates with 🛑".
efharkin.bsky.social
🤓 My uninformed opinion:

1. We can say that a red octagon is represented.
2. If drivers usually stop, we can say a stop sign is represented even if this particular driver didn't stop this time.
3. (continued 👇)
efharkin.bsky.social
I've been watching the debate over "representations" in neuroscience 🍿 and I wanted to suggest a thought experiment:

Suppose a driver sees a 🛑 and this causes vision neurons to spike in a characteristic way, but the driver blows through the intersection. Is the stop sign *represented* in the brain?
efharkin.bsky.social
Congrats! So exciting to see this wonderful work in print.

Those water drops are looking 👌, btw
efharkin.bsky.social
⏰ Check out this inspiring pair of articles from @paulmasset.bsky.social and Margarida Sousa! Some dopamine neurons care more about far-future rewards than others do, allowing the brain to learn the timing of future rewards.

Congrats to the authors! 🍾

🔓 links: rdcu.be/epxkE rdcu.be/epxkG
A multidimensional distributional map of future reward in dopamine neurons
Nature - An algorithm called time–magnitude reinforcement learning (TMRL) extends distributional reinforcement learning to take account of reward time and magnitude, and behavioural and...
rdcu.be
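The intuition behind learning reward timing from a diversity of discount factors can be shown in a few lines. This is my own simplified illustration of the idea, not the TMRL algorithm from the papers: a reward of magnitude m at delay d is valued as m·γ^d, so reading out values under two different γ lets you recover both d and m.

```python
import numpy as np

# Two hypothetical "neurons" with different discount factors.
gammas = np.array([0.5, 0.9])
delay, magnitude = 5, 2.0
values = magnitude * gammas ** delay   # value assigned by each neuron

# The ratio of values depends only on the delay:
#   values[0] / values[1] = (gammas[0] / gammas[1]) ** delay
est_delay = np.log(values[0] / values[1]) / np.log(gammas[0] / gammas[1])
est_magnitude = values[1] / gammas[1] ** est_delay
```

With only one γ, a small reward soon and a large reward later can look identical; a population spanning multiple timescales disambiguates them, which is (roughly) why a multidimensional distributional map helps.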
Reposted by Emerson Harkin
fzenke.bsky.social
1/6 Why does the brain maintain such precise excitatory-inhibitory balance?
Our new preprint explores a provocative idea: Small, targeted deviations from this balance may serve a purpose: to encode local error signals for learning.
www.biorxiv.org/content/10.1...
led by @jrbch.bsky.social
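The idea that deviations from E-I balance could serve as local error signals has a very simple skeleton, sketched here under my own assumptions (names and the update rule are mine, not the preprint's): the neuron's E-I imbalance acts as an error that drives plasticity until balance is restored.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(20)        # presynaptic activity pattern
w_exc = rng.random(20)    # excitatory weights (held fixed here)
w_inh = np.zeros(20)      # inhibitory weights, learned

lr = 0.1
for _ in range(500):
    error = w_exc @ x - w_inh @ x   # E-I imbalance = local error signal
    w_inh += lr * error * x         # gradient-like step toward balance

residual = w_exc @ x - w_inh @ x    # imbalance after learning
```

The imbalance shrinks geometrically to zero, so at convergence the neuron is balanced and the "error signal" is silent — consistent with the framing that only small, targeted deviations from balance carry learning information.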
efharkin.bsky.social
In medicine, systematic reviews do the job of distilling a large body of literature into a clear take-home message. In neuro, systematic reviews are few and far between, and it sometimes feels like we use theory papers as the next best thing...
efharkin.bsky.social
Thanks so much! I'm really glad you found it helpful.
Reposted by Emerson Harkin
neuroai.bsky.social
Ugh… there’s also what I call messianic AI, the fantasy that AI will “solve” science. Treating science like a vending machine for solutions/profit & scientists as human cogs replaceable by machinery. But science is a living culture of critical discussion, mentorship, shared community values & methods.
efharkin.bsky.social
I'm curious about how this would work, since the inherent specificity of perturbation data seems to go against the generality we'd want from a foundation model.

Train the foundation model on big correlation data, then fine-tune on perturbation data?