Samuel Liebana
@samuel-liebana.bsky.social
55 followers 52 following 8 posts
Research Fellow at the Gatsby Unit, UCL. Q: How do we learn?
Reposted by Samuel Liebana
kristorpjensen.bsky.social
I’m super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time!

www.biorxiv.org/content/10.1...
Reposted by Samuel Liebana
cmc-lab.bsky.social
In our Learning Club @cmc-lab.bsky.social today (Aug 18, Thu, 2pm CET), Samuel Liebana will tell us about his paper (www.cell.com/cell/fulltex..., joint work w/ @saxelab.bsky.social & @laklab.bsky.social). Want to attend? Send an empty email to [email protected] to get the link!
Reposted by Samuel Liebana
malcolmgcampbell.bsky.social
🚨Our preprint is online!🚨

www.biorxiv.org/content/10.1...

How do #dopamine neurons perform the key calculations in reinforcement #learning?

Read on to find out more! 🧵
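For background, the textbook calculation classically attributed to dopamine is the temporal-difference (TD) reward prediction error. A minimal tabular sketch of that standard quantity follows (background only; the preprint's specific findings are in the thread, and the states, rewards, and rates here are made up for illustration):

```python
# Textbook TD(0) reward prediction error -- the quantity classically
# associated with dopamine firing. Purely illustrative background.
import numpy as np

def td_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """One tabular TD(0) step: delta is the reward prediction error."""
    delta = r + gamma * V[s_next] - V[s]  # RPE: received vs. predicted value
    V[s] += alpha * delta                 # value update driven by the RPE
    return delta

V = np.zeros(5)                           # values of 5 hypothetical states
delta = td_update(V, s=0, s_next=1, r=1.0)
print(f"RPE = {delta:.2f}, V = {V}")
```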
samuel-liebana.bsky.social
Very glad you liked it, Blake 🙂
Reposted by Samuel Liebana
saxelab.bsky.social
How does in-context learning emerge in attention models during gradient descent training?

Sharing our new Spotlight paper @icmlconf.bsky.social: Training Dynamics of In-Context Learning in Linear Attention
arxiv.org/abs/2501.16265

Led by Yedi Zhang with @aaditya6284.bsky.social and Peter Latham
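For readers unfamiliar with the model class in the title: linear attention is attention with the softmax nonlinearity removed, so outputs are linear in the value vectors. A minimal single-head sketch, where the shapes, scaling, and lack of a causal mask are simplifications for illustration rather than the paper's exact setup:

```python
# Single-head linear attention: the softmax is removed, so attention
# scores act linearly. No causal mask, for brevity.
import numpy as np

def linear_attention(X, W_q, W_k, W_v):
    """X: (tokens, dim) context. Returns (tokens, dim) outputs."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    return (Q @ K.T) @ V / X.shape[0]   # raw scores, no softmax

rng = np.random.default_rng(0)
n, d = 8, 4
X = rng.normal(size=(n, d))             # a toy context of n tokens
W_q, W_k, W_v = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
print(linear_attention(X, W_q, W_k, W_v).shape)   # (8, 4)
```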
Reposted by Samuel Liebana
saxelab.bsky.social
Excited to share new work @icmlconf.bsky.social by Loek van Rossem exploring the development of computational algorithms in recurrent neural networks.

Hear it live tomorrow, Oral 1D, Tues 15 Jul, West Exhibition Hall C: icml.cc/virtual/2025...

Paper: openreview.net/forum?id=3go...

(1/11)
ICML Poster: Algorithm Development in Neural Networks: Insights from the Streaming Parity Task (ICML 2025) · icml.cc
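For context, the streaming parity task (as suggested by the title) asks a network to output, at each timestep, the parity (cumulative XOR) of the binary inputs seen so far. A sketch of the target function under that reading, with a random stream as a stand-in for training data:

```python
# Streaming parity: at each step the target is the cumulative XOR of all
# bits seen so far. A trained RNN must discover this algorithm; here we
# only generate illustrative target data.
import numpy as np

def streaming_parity_targets(bits):
    """bits: 0/1 array. Returns the running-parity target sequence."""
    return np.bitwise_xor.accumulate(bits)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=10)
print(bits)
print(streaming_parity_targets(bits))   # flips whenever a 1 arrives
```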
samuel-liebana.bsky.social
Thanks, Tim!!! Very glad you liked it
samuel-liebana.bsky.social
Thank you to all our collaborators and funders for making this work possible!
samuel-liebana.bsky.social
Finally, a deep neural network model trained with gradient descent and dopamine-like teaching signals captured the mice's learning trajectories from naive to expert.

Remarkably, the model's fixed-point graph succinctly explained the diverse yet systematic strategies mice developed through learning.
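A minimal sketch of that model class: a small two-layer network trained by gradient descent, with the scalar output error standing in for the dopamine-like teaching signal. Sizes, task, and learning rate here are illustrative assumptions, not the paper's model:

```python
# Two-layer network trained by gradient descent; the scalar error delta
# plays the role of a dopamine-like teaching signal. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 3))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(1, 4))   # hidden -> value weights

def step(x, r, lr=0.05):
    global W1, W2
    h = np.tanh(W1 @ x)                   # hidden-layer representation
    v = (W2 @ h)[0]                       # predicted value of the choice
    delta = r - v                         # scalar dopamine-like error
    g2 = lr * delta * h[None, :]          # gradient step for output weights
    g1 = lr * np.outer(delta * W2[0] * (1 - h**2), x)  # backprop to input
    W2 += g2; W1 += g1
    return delta

x = np.array([1.0, 0.0, 0.5])             # a toy sensory stimulus
for _ in range(200):
    d = step(x, r=1.0)                    # this stimulus yields reward 1.0
print(f"final RPE ~ {d:.3f}")             # error shrinks as value is learned
```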
samuel-liebana.bsky.social
Dopamine (DA) signals in the dorsolateral striatum (DLS) provided further evidence for deep GD learning.

DLS DA acted as a partial stimulus-based RPE that only drove learning for stimuli used in decisions ("associated"), resembling the dependence of GD updates on hidden-layer representations.
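A toy sketch of why gradient descent produces this "partial" learning: the weight update evoked by a stimulus is gated by the hidden-layer activity that stimulus drives, so a stimulus the hidden layer ignores gets essentially no update. The activity patterns below are made up for illustration:

```python
# GD updates scale with hidden-layer activity: a stimulus with no
# downstream representation drives no learning, mirroring the
# associated / non-associated distinction above. Illustrative values.
import numpy as np

h_assoc = np.array([0.9, 0.8, 0.1])       # hidden activity, associated stimulus
h_nonassoc = np.array([0.0, 0.0, 0.0])    # stimulus not used in decisions
delta = 1.0                               # a reward prediction error

for name, h in [("associated", h_assoc), ("non-associated", h_nonassoc)]:
    dW = delta * h                        # GD weight change scales with h
    print(name, np.linalg.norm(dW))       # zero representation -> zero learning
```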
samuel-liebana.bsky.social
We found evidence for deep GD dynamics in mice learning a task from naive to expert:

1. Learning transitioned through strategies that persisted for several days
2. From early behavior, we could predict behavior many days later
3. Strategies developed sensitivity to visual stimuli over learning
samuel-liebana.bsky.social
Deep learning theory has identified key properties of GD dynamics such as:

1. Learning plateaus, in deep but not shallow networks
2. Local learning, with connected & systematic trajectories
3. A hierarchy of learning stages of increasing complexity

Does animal learning share these properties?
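As a toy illustration of point 1, compare fitting the scalar map y = 2x with a single weight against a product of two weights: the deep parametrisation sits on a plateau before dropping abruptly, while the shallow one decays smoothly. Initialisation and learning rate are illustrative assumptions:

```python
# Plateaus in deep but not shallow nets: fit y = 2x with a shallow weight
# w versus a deep product a*b. Deep gradients scale with the other layer,
# so learning is slow, then sudden. Illustrative setup.
w = a = b = 0.01                          # small initialisation
lr = 0.02
for t in range(301):
    if t % 60 == 0:
        print(t, abs(w - 2.0), abs(a * b - 2.0))   # shallow vs deep error
    w -= lr * (w - 2.0)                   # shallow: smooth exponential decay
    e = a * b - 2.0
    a, b = a - lr * e * b, b - lr * e * a # deep: plateau, then rapid drop
```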
Reposted by Samuel Liebana
laklab.bsky.social
Our work, out in Cell, shows that the brain's dopamine signals teach each individual a unique learning trajectory. A collaborative experiment-theory effort, led by Sam Liebana in the lab. My lab started the first experiment just shy of 6y ago & I'm v excited to see it out: www.cell.com/cell/fulltex...
Reposted by Samuel Liebana
sainsburywellcome.bsky.social
New research shows long-term learning is shaped by dopamine signals that act as partial reward prediction errors.

The study in mice reveals how early behavioural biases predict individual learning trajectories.

Find out more ⬇️

www.sainsburywellcome.org/web/blog/lon...
[Image: Schematic of the study]