DurstewitzLab
@durstewitzlab.bsky.social
1K followers 1.8K following 42 posts
Scientific AI/ machine learning, dynamical systems (reconstruction), generative surrogate models of brains & behavior, applications in neuroscience & mental health
durstewitzlab.bsky.social
Despite being extremely lightweight (only 0.1% of the parameters and 0.6% of the training corpus size of its closest competitor), it also outperforms major TS foundation models like the Chronos variants on real-world TS forecasting, with minimal inference times (0.2%) ...
durstewitzlab.bsky.social
Our #AI #DynamicalSystems #FoundationModel DynaMix was accepted to #NeurIPS2025 with outstanding reviews (6555) – the first model that can *zero-shot*, w/o any fine-tuning, forecast the *long-term statistics* of time series given just a context. Test it on #HuggingFace:
huggingface.co/spaces/Durst...
durstewitzlab.bsky.social
We have openings for several fully-funded positions (PhD & PostDoc) at the intersection of AI/ML, dynamical systems, and neuroscience within a BMFTR-funded Neuro-AI consortium, at Heidelberg University & Central Institute of Mental Health:
www.einzigartigwir.de/en/job-offer...

More info below ...
Reposted by DurstewitzLab
gerstnerlab.bsky.social
Is it possible to go from spikes to rates without averaging?

We show how to exactly map recurrent spiking networks into recurrent rate networks, with the same number of neurons. No temporal or spatial averaging needed!

Presented at Gatsby Neural Dynamics Workshop, London.
From Spikes To Rates
YouTube video by Gerstner Lab
youtu.be
durstewitzlab.bsky.social
Got provisional approval for 2 major grants in Neuro-AI & Dynamical Systems Reconstruction, on learning & inference in non-stationary environments, out-of-domain generalization, and DS foundation models. To all AI/math/DS enthusiasts: expect job announcements (PhD/PostDoc) soon! Feel free to get in touch.
durstewitzlab.bsky.social
We wrote a little #NeuroAI piece about in-context learning & neural dynamics vs. continual learning & plasticity, both mechanisms for flexibly adapting to changing environments:
arxiv.org/abs/2507.02103
We relate this to non-stationary rule learning tasks with rapid performance jumps.

Feedback welcome!
What Neuroscience Can Teach AI About Learning in Continuously Changing Environments
Modern AI models, such as large language models, are usually trained once on a huge corpus of data, potentially fine-tuned for a specific task, and then deployed with fixed parameters. Their training ...
arxiv.org
durstewitzlab.bsky.social
Fantastic work by Florian Bähner, Hazem Toutounji, Tzvetan Popov and many others - I'm just the person advertising!
durstewitzlab.bsky.social
How do animals learn new rules? By systematically testing diff. behavioral strategies, guided by selective attn. to rule-relevant cues: rdcu.be/etlRV
Akin to in-context learning in AI, strategy selection depends on the animals' "training set" (prior experience), with similar repr. in rats & humans.
Abstract rule learning promotes cognitive flexibility in complex environments across species
Nature Communications - Whether neurocomputational mechanisms that speed up human learning in changing environments also exist in other species remains unclear. Here, the authors show that both...
rdcu.be
Reposted by DurstewitzLab
russoel.bsky.social
What a line-up!! With Lorenzo Gaetano Amato, Demian Battaglia, @durstewitzlab.bsky.social, @engeltatiana.bsky.social, @seanfw.bsky.social, Matthieu Gilson, Maurizio Mattia, @leonardopollina.bsky.social, Sara Solla.
Reposted by DurstewitzLab
russoel.bsky.social
Into population dynamics? Coming to #CNS2025 but not quite ready to head home?

Come join us at the Symposium on "Neural Population Dynamics and Latent Representations"! 🧠
📆 July 10th
📍 Scuola Superiore Sant’Anna, Pisa (and online)
👉 Free registration: neurobridge-tne.github.io
#compneuro
durstewitzlab.bsky.social
I’m really looking forward to this so much! In wonderful Pisa!
durstewitzlab.bsky.social
Just heading back from a fantastic workshop on neural dynamics at Gatsby, London, organized by Tatiana Engel, Bruno Averbeck, & Peter Latham.
Enjoyed seeing so many old friends, Memming Park, Carlos Brody, Wulfram Gerstner, Nicolas Brunel & many others …
Discussed our recent DS foundation models …
durstewitzlab.bsky.social
We dive a bit into the reasons why current time series FMs not trained for DS reconstruction fail, and conclude that a DS perspective on time series forecasting & models may help to advance the #TimeSeriesAnalysis field.

(6/6)
durstewitzlab.bsky.social
Remarkably, it not only generalizes zero-shot to novel DS, but it can even generalize to new initial conditions and regions of state space not covered by the in-context information.

(5/6)
durstewitzlab.bsky.social
And no, it’s neither based on Transformers nor Mamba – it’s a new type of mixture-of-experts architecture based on the recently introduced AL-RNN (proceedings.neurips.cc/paper_files/...), specifically trained for DS reconstruction.
#AI

(4/6)
durstewitzlab.bsky.social
It often even outperforms TS FMs at forecasting diverse empirical time series, like the weather, traffic, or medical data typically used to train TS FMs.

This is surprising, because DynaMix’s training corpus consists *solely* of simulated limit cycles & chaotic systems, no empirical data at all!

(3/6)
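To illustrate what such a purely simulated training corpus could look like, here is a minimal sketch; the Lorenz system and the simple Euler integration are my own illustrative choices, not DynaMix's actual data pipeline:

```python
import numpy as np

def lorenz_trajectory(n_steps=20000, dt=0.005,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Simulate one chaotic training series: the classic Lorenz system,
    integrated with forward-Euler steps.
    Illustrative only -- not the actual DynaMix corpus."""
    x = np.empty((n_steps, 3))
    x[0] = (1.0, 1.0, 1.05)  # arbitrary initial condition
    for t in range(n_steps - 1):
        X, Y, Z = x[t]
        dxdt = np.array([sigma * (Y - X),
                         X * (rho - Z) - Y,
                         X * Y - beta * Z])
        x[t + 1] = x[t] + dt * dxdt
    return x

traj = lorenz_trajectory()  # (20000, 3) array tracing the butterfly attractor
```

A corpus of this kind would mix many such systems (limit cycles, different chaotic attractors) across varied parameters and initial conditions.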
durstewitzlab.bsky.social
Unlike TS FMs, DynaMix exhibits #ZeroShotLearning of long-term stats of unseen DS, incl. attractor geometry & power spectrum, w/o *any* re-training, just from a context signal.

It does so with only 0.1% of the parameters of Chronos & 10x faster inference times than the closest competitor.

(2/6)
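To make "long-term stats" concrete: for chaotic systems, pointwise forecasts inevitably diverge, so one instead compares distributional or spectral properties of forecast and ground truth. A toy metric as a sketch (my own illustration, not the paper's actual evaluation), using the power spectrum:

```python
import numpy as np

def power_spectrum(x):
    """Normalized power spectrum of a 1-D signal (DC component removed)."""
    x = x - x.mean()
    p = np.abs(np.fft.rfft(x)) ** 2
    return p / p.sum()

def spectral_distance(x_true, x_pred):
    """Hellinger distance between normalized power spectra: small when the
    forecast reproduces the signal's long-term oscillatory structure, even
    if the individual trajectories have diverged pointwise."""
    p, q = power_spectrum(x_true), power_spectrum(x_pred)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

t = np.arange(1024)
x_true  = np.sin(2 * np.pi * 8 * t / 1024)
x_shift = np.sin(2 * np.pi * 8 * t / 1024 + 1.5)  # same dynamics, phase-shifted
x_wrong = np.sin(2 * np.pi * 16 * t / 1024)       # different dynamics

# Pointwise error heavily penalizes the phase-shifted forecast, while the
# spectral distance correctly rates it as capturing the long-term behavior.
```

The same idea extends to comparing attractor geometries, e.g. via distances between state-space occupancy distributions.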
durstewitzlab.bsky.social
Can time series (TS) #FoundationModels (FM) like Chronos zero-shot generalize to unseen #DynamicalSystems (DS)?

No, they cannot!

But *DynaMix* can, the first TS/DS FM based on principles of DS reconstruction, capturing the long-term evolution of out-of-domain DS: arxiv.org/pdf/2505.131...
(1/6)
durstewitzlab.bsky.social
I'm presenting our lab's work on *learning generative dynamical systems models from multi-modal and multi-subject data* in the world-wide theoretical neurosci seminar Wed 23rd, 11am ET:
www.wwtns.online

--> incl. recent work on building foundation models for #dynamical-systems reconstruction #AI 🧪
Home | Neuroscience | World Wide Theoretical Neuroscience Seminar
WWTNS is a weekly digital seminar on Zoom targeting the theoretical neuroscience community. Its aim is to be a platform to exchange ideas among theoreticians.
www.wwtns.online