GerstnerLab
@gerstnerlab.bsky.social
240 followers 120 following 13 posts
The Laboratory of Computational Neuroscience @EPFL studies models of neurons, networks of neurons, synaptic plasticity, and learning in the brain.
Pinned
gerstnerlab.bsky.social
Is it possible to go from spikes to rates without averaging?

We show how to exactly map recurrent spiking networks into recurrent rate networks, with the same number of neurons. No temporal or spatial averaging needed!

Presented at Gatsby Neural Dynamics Workshop, London.
From Spikes To Rates
YouTube video by Gerstner Lab
youtu.be
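Since the post gives no technical detail, here is a minimal Python sketch of the baseline the result contrasts against, not the paper's construction: for a deterministic LIF neuron a per-neuron firing rate exists in closed form, with no averaging window, whereas the textbook route estimates a rate by temporal averaging. All parameters below are illustrative assumptions.

```python
# Hedged toy, NOT the paper's mapping: a deterministic LIF neuron has an
# exact per-neuron rate (inverse interspike interval), no averaging needed.
import numpy as np

tau, v_th, v_reset, dt = 20e-3, 1.0, 0.0, 1e-4  # assumed LIF parameters

def simulate_lif(I, T=2.0):
    """Euler-integrate dv/dt = (-v + I)/tau and return spike times."""
    v, spikes = v_reset, []
    for step in range(int(T / dt)):
        v += dt * (-v + I) / tau
        if v >= v_th:
            v = v_reset
            spikes.append(step * dt)
    return np.array(spikes)

I = 1.5                                 # constant suprathreshold drive
rate_avg = len(simulate_lif(I)) / 2.0   # (a) rate via temporal averaging

# (b) exact per-neuron rate from the interspike interval,
#     T_isi = tau * ln((I - v_reset) / (I - v_th)) -- no averaging
rate_exact = 1.0 / (tau * np.log((I - v_reset) / (I - v_th)))
print(f"averaged: {rate_avg:.1f} Hz, closed-form: {rate_exact:.1f} Hz")
```

Extending an averaging-free rate description of this kind to recurrent networks with the same number of neurons is, per the post, what the linked talk covers.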
gerstnerlab.bsky.social
P4 52 “Coding Schemes in Non-Lazy Artificial Neural Networks” by @avm.bsky.social
gerstnerlab.bsky.social
WEDNESDAY 14:00 – 15:30

P4 25 “Rarely categorical, always high-dimensional: how the neural code changes along the cortical hierarchy” by @shuqiw.bsky.social

P4 35 “Biologically plausible contrastive learning rules with top-down feedback for deep networks” by @zihan-wu.bsky.social
gerstnerlab.bsky.social
WEDNESDAY 12:30 – 14:00

P3 4 “Toy Models of Identifiability for Neuroscience” by @flavioh.bsky.social

P3 55 “How many neurons is “infinitely many”? A dynamical systems perspective on the mean-field limit of structured recurrent neural networks” by Louis Pezon
gerstnerlab.bsky.social
P2 65 “Rate-like dynamics of spiking neural networks” by Kasper Smeets
gerstnerlab.bsky.social
TUESDAY 18:00 – 19:30

P2 2 “Biologically informed cortical models predict optogenetic perturbations” by @bellecguill.bsky.social

P2 12 “High-precision detection of monosynaptic connections from extra-cellular recordings” by @shuqiw.bsky.social
gerstnerlab.bsky.social
Lab members are at the Bernstein conference @bernsteinneuro.bsky.social with 9 posters! Here’s the list:

TUESDAY 16:30 – 18:00

P1 62 “Measuring and controlling solution degeneracy across task-trained recurrent neural networks” by @flavioh.bsky.social
Reposted by GerstnerLab
modirshanechi.bsky.social
New in @pnas.org: doi.org/10.1073/pnas...

We study how humans explore a 61-state environment with a stochastic region that mimics a “noisy-TV.”

Results: Participants keep exploring the stochastic part even when it’s unhelpful, and novelty-seeking best explains this behavior.

#cogsci #neuroskyence
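As a rough illustration of the noisy-TV trap described above (a toy of my own, not the paper's 61-state task or its model fits): a greedy count-based novelty seeker never stops visiting a stochastic region, because its random successors keep looking novel.

```python
# Hedged sketch (not the paper's model or task): states 0..4 form a
# deterministic chain; "watching the TV" jumps to a random state in 5..9,
# so the successors of the stochastic region stay novel indefinitely.
import numpy as np

rng = np.random.default_rng(0)
counts = np.ones(10)                      # visit counts (init 1 to avoid /0)
novelty = lambda s: 1.0 / np.sqrt(counts[s])

state, tv_visits = 0, 0
for t in range(10_000):
    next_det = (state + 1) % 5            # deterministic successor
    next_tv = rng.integers(5, 10)         # stochastic "noisy TV" successor
    # greedy novelty-seeking choice between the two candidate moves
    state = next_tv if novelty(next_tv) >= novelty(next_det) else next_det
    counts[state] += 1
    tv_visits += state >= 5

print(f"fraction of time in the stochastic region: {tv_visits / 10_000:.2f}")
```

The agent keeps returning to the stochastic states long after the deterministic chain is exhausted, mirroring the behavior the paper reports in humans.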
Reposted by GerstnerLab
bio-emergent.bsky.social
🎉 "High-dimensional neuronal activity from low-dimensional latent dynamics: a solvable model" will be presented as an oral at #NeurIPS2025 🎉

Feeling very grateful that reviewers and chairs appreciated concise mathematical explanations, in this age of big models.

www.biorxiv.org/content/10.1...
1/2
gerstnerlab.bsky.social
Work led by Martin Barry with the supervision of Wulfram Gerstner and Guillaume Bellec @bellecguill.bsky.social
gerstnerlab.bsky.social
In experiments (models & simulations), we showed how this approach supports stable retention of old tasks while learning new ones (split CIFAR-100, ASC…)
gerstnerlab.bsky.social
We designed bio-inspired, context-specific gating of plasticity and neuronal activity, allowing for a drastic reduction in catastrophic forgetting.

We also show that our model is capable of both forward and backward transfer! All of this is thanks to the neuronal activity shared across tasks.
gerstnerlab.bsky.social
We designed a Gating/Availability model that detects task-selective neurons (the neurons most useful for the task) during learning, shunts the activity of the others (Gating), and decreases the learning rate of the task-selective neurons (Availability)
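A minimal numpy sketch of the mechanism as described here (the selectivity measure, threshold, and learning rates are my assumptions, not the paper's): selective neurons keep their activity but learn slowly; the rest are shunted.

```python
# Hedged sketch of the Gating/Availability idea from the post above;
# all names and numbers are placeholders, not the paper's values.
import numpy as np

rng = np.random.default_rng(1)
n_hidden = 64
selectivity = rng.random(n_hidden)        # stand-in per-neuron task relevance

selective = selectivity > np.quantile(selectivity, 0.8)   # top 20% (assumed)
gate = np.where(selective, 1.0, 0.0)          # Gating: shunt the others
availability = np.where(selective, 0.1, 1.0)  # Availability: slow their plasticity

W = rng.standard_normal((n_hidden, 10)) * 0.1
x = rng.standard_normal(n_hidden)

h = gate * np.tanh(x)                         # gated hidden activity
grad = np.outer(h, rng.standard_normal(10))   # placeholder gradient
W -= 0.01 * availability[:, None] * grad      # per-neuron learning rates
```

A new context would select a different gating pattern, which is what protects previously learned synapses.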
gerstnerlab.bsky.social
🧠 “You never forget how to ride a bike”, but how is that possible?
Our study proposes a bio-plausible meta-plasticity rule that shapes synapses over time, enabling selective recall based on context
Context selectivity with dynamic availability enables lifelong continual learning
“You never forget how to ride a bike” – but how is that possible? The brain is able to learn complex skills, stop the practice for years, learn other…
www.sciencedirect.com
Reposted by GerstnerLab
modirshanechi.bsky.social
So happy to see this work out! 🥳
Huge thanks to our two amazing reviewers who pushed us to make the paper much stronger. A truly joyful collaboration with @lucasgruaz.bsky.social, @sobeckerneuro.bsky.social, and Johanni Brea! 🥰

Tweeprint on an earlier version: bsky.app/profile/modi... 🧠🧪👩‍🔬
openmindjournal.bsky.social
Merits of Curiosity: A Simulation Study
Abstract: ‘Why are we curious?’ has been among the central puzzles of neuroscience and psychology in the past decades. A popular hypothesis is that curiosity is driven by intrinsically generated reward signals, which have evolved to support survival in complex environments. To formalize and test this hypothesis, we need to understand the enigmatic relationship between (i) intrinsic rewards (as drives of curiosity), (ii) optimality conditions (as objectives of curiosity), and (iii) environment structures. Here, we demystify this relationship through a systematic simulation study. First, we propose an algorithm to generate environments that capture key abstract features of different real-world situations. Then, we simulate artificial agents that explore these environments by seeking one of six representative intrinsic rewards: novelty, surprise, information gain, empowerment, maximum occupancy principle, and successor-predecessor intrinsic exploration. We evaluate the exploration performance of these simulated agents regarding three potential objectives of curiosity: state discovery, model accuracy, and uniform state visitation. Our results show that the comparative performance of each intrinsic reward is highly dependent on the environmental features and the curiosity objective; this indicates that ‘optimality’ in top-down theories of curiosity needs a precise formulation of assumptions. Nevertheless, we found that agents seeking a combination of novelty and information gain always achieve a close-to-optimal performance on objectives of curiosity as well as in collecting extrinsic rewards. This suggests that novelty and information gain are two principal axes of curiosity-driven behavior. These results pave the way for the further development of computational models of curiosity and the design of theory-informed experimental paradigms.
dlvr.it
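Of the six drives compared in the abstract, information gain is the least self-explanatory. One standard formalization (my choice here, not necessarily the paper's exact definition) scores a transition by the KL divergence between the Dirichlet posterior over transition probabilities before and after observing it:

```python
# Hedged sketch: information gain as KL( posterior || prior ) over the
# transition distribution P(s' | s, a), with a Dirichlet count model.
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(a, b):
    """KL( Dir(a) || Dir(b) ) for parameter vectors a, b."""
    a0, b0 = a.sum(), b.sum()
    return (gammaln(a0) - gammaln(b0)
            - np.sum(gammaln(a) - gammaln(b))
            + np.sum((a - b) * (digamma(a) - digamma(a0))))

n_states = 5
alpha = np.ones(n_states)          # prior pseudo-counts for P(s' | s, a)

def information_gain(alpha, s_next):
    alpha_post = alpha.copy()
    alpha_post[s_next] += 1.0      # Bayesian update after one observation
    return dirichlet_kl(alpha_post, alpha)

print(information_gain(alpha, s_next=2))   # novel outcome: large gain
alpha[2] += 50                             # well-known outcome...
print(information_gain(alpha, s_next=2))   # ...small gain
```

Gain shrinks as the model becomes certain, which is why, unlike pure novelty, it eventually disengages from pure noise.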
Reposted by GerstnerLab
modirshanechi.bsky.social
Attending #CCN2025?
Come by our poster in the afternoon (4th floor, Poster 72) to talk about the sense of control, empowerment, and agency. 🧠🤖

We propose a unifying formulation of the sense of control and use it to empirically characterize the human subjective sense of control.

🧑‍🔬🧪🔬
Reposted by GerstnerLab
lucasgruaz.bsky.social
Excited to present at the PIMBAA workshop at #RLDM2025 tomorrow!
We study curiosity using intrinsically motivated RL agents and developed an algorithm to generate diverse, targeted environments for comparing curiosity drives.

Preprint (accepted but not yet published): osf.io/preprints/ps...
OSF
osf.io
Reposted by GerstnerLab
sobeckerneuro.bsky.social
Stoked to be at RLDM! Curious how novelty and exploration are impacted by generalization across similar stimuli? Then don't miss my flash talk in the PIMBAA workshop (tmr at 10:30, E McNabb Theatre) or stop by my poster tmr (#74)! Looking forward to chat 🤩

www.biorxiv.org/content/10.1...
Representational similarity modulates neural and behavioral signatures of novelty
Novelty signals in the brain modulate learning and drive exploratory behaviors in humans and animals. While the perceived novelty of a stimulus is known to depend on previous experience, the effect of...
www.biorxiv.org
Reposted by GerstnerLab
avm.bsky.social
Interested in high-dim chaotic networks? Ever wondered about the structure of their state space? @jakobstubenrauch.bsky.social has answers - from a separation of fixed points and dynamics onto distinct shells to a shared lower-dim manifold and linear prediction of dynamics.
jakobstubenrauch.bsky.social
(1/3) How to analyse a dynamical system? Find its fixed points, study their properties!

How to analyse a *high-dimensional* dynamical system? Find its fixed points, study their properties!

We do that for a chaotic neural network! Finally published: doi.org/10.1103/Phys...
Fixed point geometry in chaotic neural networks
Understanding the high-dimensional chaotic dynamics occurring in complex biological systems such as recurrent neural networks or ecosystems remains a conceptual challenge. For low-dimensional dynamics...
doi.org
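The recipe the thread advocates is easy to try on the standard chaotic rate network dx/dt = -x + J tanh(x). The sketch below (network size, gain, and the use of scipy's fsolve are my choices, not the paper's method) finds fixed points from random seeds and checks their linear stability:

```python
# Hedged sketch, not the paper's method: locate fixed points of the
# classic chaotic rate network (gain g > 1) and inspect their stability.
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(2)
N, g = 100, 1.5                                # assumed size and gain
J = g * rng.standard_normal((N, N)) / np.sqrt(N)

f = lambda x: -x + J @ np.tanh(x)              # fixed points solve f(x) = 0

for trial in range(5):
    x0 = rng.standard_normal(N)
    x_star, info, ok, _ = fsolve(f, x0, full_output=True)
    if ok == 1:
        # Jacobian at the fixed point: -I + J diag(1 - tanh(x*)^2)
        Jac = -np.eye(N) + J * (1 - np.tanh(x_star) ** 2)
        print(f"|x*| = {np.linalg.norm(x_star):.2f}, "
              f"max Re(eig) = {np.linalg.eigvals(Jac).real.max():.2f}")
```

In the chaotic regime the nontrivial fixed points found this way are unstable; the paper's contribution is the geometry of how they organize the dynamics.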
Reposted by GerstnerLab
gauteeinevoll.bsky.social
Episode #22 in #TheoreticalNeurosciencePodcast: On 50 years with the Hopfield network model - with Wulfram Gerstner

theoreticalneuroscience.no/thn22

John Hopfield received the 2024 Physics Nobel prize for his model published in 1982. What is the model all about? @icepfl.bsky.social
Reposted by GerstnerLab
bio-emergent.bsky.social
New round of spike vs rate?

The concentration of measure phenomenon can explain the emergence of rate-based dynamics in networks of spiking neurons, even when no two neurons are the same.

This is what's shown in the last paper of my PhD, out today in Physical Review Letters 🎉 tinyurl.com/4rprwrw5
Emergent Rate-Based Dynamics in Duplicate-Free Populations of Spiking Neurons
Can spiking neural networks (SNNs) approximate the dynamics of recurrent neural networks? Arguments in classical mean-field theory based on laws of large numbers provide a positive answer when each ne...
tinyurl.com
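To convey the flavor of the concentration argument (my illustration only, not the paper's proof): even when every neuron has a different rate and a different weight, the summed synaptic drive fluctuates less and less as the population grows, so the input to downstream neurons behaves like a deterministic rate.

```python
# Hedged toy: fluctuations of the population drive shrink roughly as
# 1/sqrt(N), even with fully heterogeneous (duplicate-free) neurons.
import numpy as np

rng = np.random.default_rng(3)
dt, T = 1e-3, 1.0

for N in (100, 1_000, 10_000):
    rates = rng.uniform(2.0, 20.0, size=N)        # no two neurons alike
    weights = rng.uniform(0.5, 1.5, size=N) / N   # O(1/N) synaptic weights
    steps = int(T / dt)
    spikes = (rng.random((steps, N)) < rates * dt).astype(float)  # Poisson trains
    drive = spikes @ weights / dt                 # instantaneous population drive
    print(f"N={N:>6}: mean drive {drive.mean():.2f}, std {drive.std():.3f}")
```

The mean drive stays put while its standard deviation collapses with N, which is the sense in which rate-based dynamics emerge without any two neurons being the same.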