A Erdem Sagtekin
@aesagtekin.bsky.social
620 followers 3 following 2 posts
chronic reader of old papers. dynamical systems & computation & bio-plausible learning. curr: @flatiron ccn, msc: comp neuro @tubingen, bsc: EE.
Reposted by A Erdem Sagtekin
fzenke.bsky.social
1/6 Why does the brain maintain such precise excitatory-inhibitory balance?
Our new preprint explores a provocative idea: small, targeted deviations from this balance may serve a purpose, encoding local error signals for learning.
www.biorxiv.org/content/10.1...
led by @jrbch.bsky.social
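A purely illustrative sketch of one way to read the idea (not the preprint's actual rule; the rates, weight scales, and delta-rule-style update below are all assumptions): treat a neuron's residual E-I current as a local, per-neuron error signal that drives plasticity.

```python
import numpy as np

rng = np.random.default_rng(0)

# presynaptic rates and (positive) synaptic weights; the neuron is
# "balanced" when excitatory and inhibitory currents roughly cancel
r_exc, r_inh = rng.random(100), rng.random(50)
w_exc = rng.random(100) * 0.02   # excitatory weights
w_inh = rng.random(50) * 0.04    # inhibitory weights

# residual imbalance = deviation from exact E-I balance,
# read here as a local, per-neuron error signal
local_error = w_exc @ r_exc - w_inh @ r_inh

# delta-rule-style update driven by that local error (illustrative)
eta = 0.01
w_exc -= eta * local_error * r_exc   # weaken excitation if E > I
w_inh += eta * local_error * r_inh   # strengthen inhibition if E > I
```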
Reposted by A Erdem Sagtekin
matthijspals.bsky.social
How to find all fixed points in piecewise-linear recurrent neural networks (RNNs)?
A short thread 🧵

In RNNs with N units and ReLU(x-b) activations, the phase space is partitioned into 2^N regions by hyperplanes at x=b. 1/7
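A minimal NumPy sketch of the region-by-region idea, assuming continuous-time dynamics dx/dt = -x + W·ReLU(x - b) (the thread's exact model may differ): within each of the 2^N regions the dynamics are affine, so each region yields at most one candidate fixed point, which is kept only if it actually lies inside the region that generated it.

```python
import itertools
import numpy as np

def fixed_points_relu_rnn(W, b, tol=1e-9):
    """Enumerate fixed points of dx/dt = -x + W @ relu(x - b)."""
    N = len(b)
    points = []
    # each region is labeled by which units are above threshold
    for active in itertools.product([0, 1], repeat=N):
        D = np.diag(active).astype(float)   # relu(x - b) = D @ (x - b) here
        A = np.eye(N) - W @ D
        try:
            x = np.linalg.solve(A, -W @ D @ b)   # candidate fixed point
        except np.linalg.LinAlgError:
            continue  # singular system: no isolated fixed point in this region
        # consistency check: exactly the 'active' units are above threshold
        above = x - b > tol
        if np.array_equal(above.astype(int), np.array(active)):
            points.append(x)
    return points

# example: random 4-unit network
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.75, (4, 4))
b = np.zeros(4)
print(fixed_points_relu_rnn(W, b))
```

Brute-force enumeration is exponential in N, so this only works for small networks, but it makes the geometry of the 2^N linear regions explicit.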
Reposted by A Erdem Sagtekin
david-g-clark.bsky.social
(1/5) Fun fact: Several classic results in the stat. mech. of learning can be derived in a couple lines of simple algebra!

In this paper with Haim Sompolinsky, we simplify and unify derivations for high-dimensional convex learning problems using a bipartite cavity method.
arxiv.org/abs/2412.01110
Simplified derivations for high-dimensional convex learning problems
Statistical physics provides tools for analyzing high-dimensional problems in machine learning and theoretical neuroscience. These calculations, particularly those using the replica method, often invo...
aesagtekin.bsky.social
This list likely mainly reflects my own interests and circle, and I'm sure I've missed many people, but I gave it a try: (I'll be slowly editing it until it reaches 150/150)

go.bsky.app/7VFUkdn

(also, I tried but couldn't remove my profile...)
aesagtekin.bsky.social
i enjoyed reading the geometry of plasticity paper and felt that something important was coming. this is it:
tyrellturing.bsky.social
Check out our new preprint! We show there is a better learning algorithm for #compneuro than gradient descent (GD): exponentiated gradients (EG).

tl;dr: EG respects Dale's law, produces weight distributions that match biology, and outperforms GD in biologically relevant scenarios.

🧠📈 🧪
repromancer.bsky.social
Why does #compneuro need new learning methods? ANN models are usually trained with Gradient Descent (GD), which violates biological realities like Dale's law and log-normal weight distributions. Here we describe a superior learning algorithm for comp neuro: Exponentiated Gradients (EG)! 1/12 #neuroscience 🧪
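For intuition, a minimal NumPy sketch of a sign-preserving exponentiated-gradient step in the common form w ← w ⊙ exp(-η · sign(w) ⊙ ∇L); the paper's exact formulation may differ. Because the multiplicative factor is always positive, no weight can ever change sign, so excitatory weights stay excitatory and inhibitory weights stay inhibitory (Dale's law).

```python
import numpy as np

def eg_step(w, grad, lr):
    """Exponentiated-gradient step: multiplicative, sign-preserving."""
    return w * np.exp(-lr * np.sign(w) * grad)

def gd_step(w, grad, lr):
    """Plain gradient-descent step: additive, can flip signs."""
    return w - lr * grad

w = np.array([0.5, -0.5])     # one excitatory, one inhibitory weight
g = np.array([20.0, -20.0])   # a large gradient

print(gd_step(w, g, 0.1))   # [-1.5  1.5]   -> both signs flipped
print(eg_step(w, g, 0.1))   # [ 0.0677 -0.0677] -> signs preserved
```

Since updates scale multiplicatively with each weight's magnitude, large weights move more in absolute terms while small weights stay small, which is one intuition for why EG tends to produce heavy-tailed, log-normal-like weight distributions.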