Friedemann Zenke
@fzenke.bsky.social
650 followers 310 following 13 posts
Computational neuroscientist at the FMI. www.zenkelab.org
Pinned
fzenke.bsky.social
1/6 Why does the brain maintain such precise excitatory-inhibitory balance?
Our new preprint explores a provocative idea: small, targeted deviations from this balance may serve to encode local error signals for learning.
www.biorxiv.org/content/10.1...
led by @jrbch.bsky.social
Reposted by Friedemann Zenke
neural-reckoning.org
Message for participants of the #SNUFA 2025 spiking neural network workshop. We got almost 60 awesome abstract submissions, and we'd now like your help to select which ones should be offered talks. Follow the "abstract voting" link at snufa.net/2025/ to take part. It should take <15m. Thanks! ❤️
SNUFA 2025
Spiking Neural networks as Universal Function Approximators
snufa.net
Reposted by Friedemann Zenke
laurelinelogiaco.bsky.social
Interested in doing a Ph.D. to work on building models of the brain/behavior? Consider applying to graduate schools at CU Anschutz:
1. Neuroscience www.cuanschutz.edu/graduate-pro...
2. Bioengineering engineering.ucdenver.edu/bioengineeri...

You could work with several comp neuro PIs, including me.
Reposted by Friedemann Zenke
kristorpjensen.bsky.social
I’m super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time!

www.biorxiv.org/content/10.1...
fzenke.bsky.social
Truly honored (and a little overwhelmed) to see our work featured in The Transmitter's "This Paper Changed My Life." Huge thanks to @neural-reckoning.org for the kind words - and to our amazing community that keeps pushing spiking neural network research forward 🙏
Reposted by Friedemann Zenke
neural-reckoning.org
Submissions (short!) due for SNUFA spiking neural networks conference in <2 weeks! 🤖🧠🧪

forms.cloud.microsoft/e/XkZLavhaJe

More info at snufa.net/2025/

Note that we normally get around 700 participants and recordings go on YouTube and get 100s-1000s views.

Please repost.
SNUFA 2025
Spiking Neural networks as Universal Function Approximators
snufa.net
Reposted by Friedemann Zenke
neural-reckoning.org
Spiking neural networks people, this message is for you!

The annual SNUFA workshop is now open for abstract submission (deadline Sept 26) and (free) registration. This year's speakers include Elisabetta Chicca, Jason Eshraghian, Tomoki Fukai, Chengcheng Huang, and... you?

snufa.net/2025/

🤖🧠🧪
SNUFA 2025
Spiking Neural networks as Universal Function Approximators
snufa.net
Reposted by Friedemann Zenke
neural-reckoning.org
We're hiring. This will be of particular interest to people working on spiking neural networks and neuromorphic computing. See below. 👇

Note the deadline for applications is very soon! Apologies for this but various admin necessities made it unavoidable.
danakarca.bsky.social
Hiring a post-doc at Imperial in EEE. Broad in scope + flexible on topics: neural networks & new AI accelerators from a HW/SW co-design perspective!

w/ @neuralreckoning.bsky.social @achterbrain.bsky.social in Intelligent Systems and Networks group.

Plz share! 🚀: www.imperial.ac.uk/jobs/search-...
Description
Please note that job descriptions are not exhaustive, and you may be asked to take on additional duties that align with the key responsibilities ment...
www.imperial.ac.uk
Reposted by Friedemann Zenke
elife.bsky.social
Ditching months-long delays for fast, constructive feedback.

This interview with @solygamagda.bsky.social dives into the experience of publishing with eLife and what it could mean for a more open and efficient future in science.
Publishing with eLife: “the future of science lies in greater transparency”
Neuroscientist Magdalena Solyga shares her latest study and her experience publishing with eLife.
buff.ly
Reposted by Friedemann Zenke
georgkeller.bsky.social
There might be a bit of misconception here. What the paper very convincingly shows is that visual cortex does not compute global oddball prediction errors and does not receive any top-down predictions that could be used to compute such prediction errors.
Reposted by Friedemann Zenke
durstewitzlab.bsky.social
Got provisional approval for two major grants in Neuro-AI & Dynamical Systems Reconstruction, on learning & inference in non-stationary environments, out-of-domain generalization, and DS foundation models. To all AI/math/DS enthusiasts: Expect job announcements (PhD/PostDoc) soon! Feel free to get in touch.
Reposted by Friedemann Zenke
tpvogels.bsky.social
Here is your last reminder that the application deadline for Imbizo.Africa is nearing quickly, the 1st of July, in fact tomorrow. Still the place where diversity is at its best in the world! Tell all who need to hear. #africa #neuro
#Imbizo - Simons Computational Neuroscience Imbizo - #Imbizo
Simons Computational Neuroscience Imbizo summer school in Cape Town, South Africa
Imbizo.Africa
Reposted by Friedemann Zenke
hannahpayne.bsky.social
My latest Aronov lab paper is now published @Nature!

When a chickadee looks at a distant location, the same place cells activate as if it were actually there 👁️

The hippocampus encodes where the bird is looking, AND what it expects to see next -- enabling spatial reasoning from afar

bit.ly/3HvWSum
Reposted by Friedemann Zenke
saxelab.bsky.social
How does in-context learning emerge in attention models during gradient descent training?

Sharing our new Spotlight paper @icmlconf.bsky.social: Training Dynamics of In-Context Learning in Linear Attention
arxiv.org/abs/2501.16265

Led by Yedi Zhang with @aaditya6284.bsky.social and Peter Latham
Reposted by Friedemann Zenke
steinaerts.bsky.social
The deadline for the VIB.AI group leader positions is approaching - send in your CV and short research plan before 14th June to start your BioML research lab in Leuven or Ghent
vibai.bsky.social
We want to connect:
To link model builders with data generators.
To bring together scientists asking why cells behave the way they do, and others figuring out how to model that behavior.

If you're working on AI in biology, consider joining!
https://tinyurl.com/y35m6khy
Reposted by Friedemann Zenke
Reposted by Friedemann Zenke
fzenke.bsky.social
Absolutely. The idea is that in addition to the very important role of keeping this delicate equilibrium, deviations from balance could serve yet another purpose.
fzenke.bsky.social
6/6 BCP not only works. Trained networks also replicate the in-vivo dynamics of SOM and VIP interneurons observed in motor learning and fear conditioning experiments. Our model thus takes a step toward linking neuronal circuits with plasticity and behavior.
fzenke.bsky.social
5/6 In multi-layer networks, BCP enables online learning without separate training phases or segregated dendrites, using only feedback-driven perturbations of the E/I balance.
fzenke.bsky.social
4/6 We formalize this idea as Balance-Controlled Plasticity (BCP), a plasticity framework grounded in adaptive control theory that simplifies to a Hebbian-like learning rule modulated by recurrent inhibition. The resulting rule is both biologically plausible and powerful.
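To make the "Hebbian-like rule modulated by recurrent inhibition" concrete, here is a minimal toy sketch in NumPy. Everything here is illustrative, not from the preprint: the rate-based neurons, the stand-in for recurrent inhibition (a mean-tracking term), and the specific update are assumptions chosen only to show the shape of an error-modulated Hebbian rule where the local E/I deviation acts as the error signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only)
n_in, n_out = 5, 3

w = rng.normal(scale=0.1, size=(n_out, n_in))  # feedforward excitatory weights
x = rng.random(n_in)                           # presynaptic firing rates
r = w @ x                                      # postsynaptic excitatory drive
inh = np.full(n_out, r.mean())                 # crude stand-in for recurrent inhibition
                                               # tracking the population excitation

# Hebbian-like update gated by the local deviation from E/I balance:
# when excitation and inhibition cancel, the error term vanishes and
# learning stops; a transient imbalance drives the weight change that
# pushes the neuron back toward balance.
eta = 0.01
error = r - inh                                # balance deviation as local error signal
w -= eta * np.outer(error, x)                  # error-modulated Hebbian update
```

With this sign convention each update shrinks the balance deviation (by a factor 1 − eta·‖x‖² per step for fixed input), so plasticity is driven by, and then cancels, the imbalance.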
fzenke.bsky.social
3/6 We propose that feedback signals may target interneurons to transiently disrupt E/I balance. These controlled deviations can efficiently encode local error signals that guide plasticity without requiring special dendritic morphology.
fzenke.bsky.social
2/6 Existing theories of biologically plausible learning and credit assignment often rely on segregated dendrites to encode neuronal errors.

However, not all neurons have morphologically well-separated dendrites, and data-driven plasticity models seem at odds with such error-modulated learning.