GerstnerLab
@gerstnerlab.bsky.social
240 followers
120 following
13 posts
The Laboratory of Computational Neuroscience @EPFL studies models of neurons, networks of neurons, synaptic plasticity, and learning in the brain.
Reposted by GerstnerLab
GerstnerLab
@gerstnerlab.bsky.social
· Sep 4
Context selectivity with dynamic availability enables lifelong continual learning
“You never forget how to ride a bike” – but how is that possible? The brain is able to learn complex skills, stop practicing for years, learn other…
www.sciencedirect.com
Reposted by GerstnerLab
Open Mind
@openmindjournal.bsky.social
· Aug 23
Merits of Curiosity: A Simulation Study
Abstract: ‘Why are we curious?’ has been among the central puzzles of neuroscience and psychology in the past decades. A popular hypothesis is that curiosity is driven by intrinsically generated reward signals, which have evolved to support survival in complex environments. To formalize and test this hypothesis, we need to understand the enigmatic relationship between (i) intrinsic rewards (as drives of curiosity), (ii) optimality conditions (as objectives of curiosity), and (iii) environment structures. Here, we demystify this relationship through a systematic simulation study. First, we propose an algorithm to generate environments that capture key abstract features of different real-world situations. Then, we simulate artificial agents that explore these environments by seeking one of six representative intrinsic rewards: novelty, surprise, information gain, empowerment, maximum occupancy principle, and successor-predecessor intrinsic exploration. We evaluate the exploration performance of these simulated agents regarding three potential objectives of curiosity: state discovery, model accuracy, and uniform state visitation. Our results show that the comparative performance of each intrinsic reward is highly dependent on the environmental features and the curiosity objective; this indicates that ‘optimality’ in top-down theories of curiosity needs a precise formulation of assumptions. Nevertheless, we found that agents seeking a combination of novelty and information gain always achieve a close-to-optimal performance on objectives of curiosity as well as in collecting extrinsic rewards. This suggests that novelty and information gain are two principal axes of curiosity-driven behavior. These results pave the way for the further development of computational models of curiosity and the design of theory-informed experimental paradigms.
dlvr.it
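The abstract above compares agents driven by intrinsic rewards such as novelty. As an illustration only (this is not the paper's code, and the environment, reward form, and greedy policy are all assumptions), a minimal count-based novelty-seeking agent on a toy ring of states might look like:

```python
import random

def novelty_reward(counts, state):
    """Assumed intrinsic reward: decreases with visit count, so
    rarely visited states look more 'novel' to the agent."""
    return 1.0 / (1 + counts.get(state, 0))

def explore(n_states=10, n_steps=200, seed=0):
    """Greedy novelty-seeking walk on a ring of n_states states."""
    rng = random.Random(seed)
    counts = {}  # state -> number of visits
    state = 0
    for _ in range(n_steps):
        counts[state] = counts.get(state, 0) + 1
        # Candidate moves: left or right neighbor on the ring.
        neighbors = [(state - 1) % n_states, (state + 1) % n_states]
        # Pick the more novel neighbor; random tie-breaking.
        state = max(neighbors,
                    key=lambda s: (novelty_reward(counts, s), rng.random()))
    return counts

counts = explore()
print(sorted(counts))  # states discovered during exploration
```

Even this crude sketch shows the behavior the study quantifies more carefully: a novelty drive alone pushes the agent to discover every state, which corresponds to the paper's "state discovery" objective.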
Reposted by GerstnerLab
GerstnerLab
@gerstnerlab.bsky.social
· Aug 8
Emergent Rate-Based Dynamics in Duplicate-Free Populations of Spiking Neurons
Can spiking neural networks (SNNs) approximate the dynamics of recurrent neural networks? Arguments in classical mean-field theory based on laws of large numbers provide a positive answer when each ne...
journals.aps.org
Reposted by GerstnerLab
Sophia Becker
@sobeckerneuro.bsky.social
· Jun 11
Representational similarity modulates neural and behavioral signatures of novelty
Novelty signals in the brain modulate learning and drive exploratory behaviors in humans and animals. While the perceived novelty of a stimulus is known to depend on previous experience, the effect of...
www.biorxiv.org
Reposted by GerstnerLab
Matteo Carandini
@carandinilab.net
· Jun 9
High-dimensional neuronal activity from low-dimensional latent dynamics: a solvable model
Computation in recurrent networks of neurons has been hypothesized to occur at the level of low-dimensional latent dynamics, both in artificial systems and in the brain. This hypothesis seems at odds ...
www.biorxiv.org
Reposted by GerstnerLab
Alex van Meegen
@avm.bsky.social
· Jun 10