DurstewitzLab
@durstewitzlab.bsky.social
Scientific AI / machine learning, dynamical systems (reconstruction), generative surrogate models of brains & behavior, applications in neuroscience & mental health
Tomorrow, Christoph will present DynaMix, the first foundation model for dynamical systems reconstruction, at #NeurIPS2025, Exhibit Hall C,D,E, #2303
December 5, 2025 at 1:28 PM
We have openings for several fully funded positions (PhD & PostDoc) at the intersection of AI/ML, dynamical systems, and neuroscience within a BMFTR-funded Neuro-AI consortium, at Heidelberg University & Central Institute of Mental Health:
www.einzigartigwir.de/en/job-offer...

More info below ...
August 15, 2025 at 7:46 AM
Got provisional approval for two major grants in Neuro-AI & Dynamical Systems Reconstruction, on learning & inference in non-stationary environments, out-of-domain generalization, and DS foundation models. To all AI/math/DS enthusiasts: expect job announcements (PhD/PostDoc) soon! Feel free to get in touch.
July 13, 2025 at 6:23 AM
Just heading back from a fantastic workshop on neural dynamics at the Gatsby Unit in London, organized by Tatiana Engel, Bruno Averbeck, & Peter Latham.
Enjoyed seeing so many old friends: Memming Park, Carlos Brody, Wulfram Gerstner, Nicolas Brunel & many others …
Discussed our recent DS foundation models …
June 19, 2025 at 11:37 AM
We dive a bit into the reasons why current time series FMs, which were never trained for DS reconstruction, fail at this task, and conclude that a DS perspective on time series forecasting & models may help to advance the #TimeSeriesAnalysis field.

(6/6)
May 20, 2025 at 2:15 PM
Remarkably, it not only generalizes zero-shot to novel DS, but it can even generalize to new initial conditions and regions of state space not covered by the in-context information.

(5/6)
May 20, 2025 at 2:15 PM
And no, it’s based neither on Transformers nor on Mamba – it’s a new type of mixture-of-experts architecture built on the recently introduced AL-RNN (proceedings.neurips.cc/paper_files/...), specifically trained for DS reconstruction.
#AI

(4/6)
May 20, 2025 at 2:15 PM
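For readers unfamiliar with the mixture-of-experts (MoE) pattern the post above refers to, here is a generic sketch. It is purely illustrative of the MoE idea (several expert update functions blended by state-dependent gating weights); DynaMix's actual expert and gating design is described in the paper, and all names below are placeholders.

```python
# Generic mixture-of-experts dynamics step, purely to illustrate the
# MoE pattern (not DynaMix's actual gating or expert design).
import numpy as np

def moe_step(z, experts, gate_W):
    """experts: list of functions z -> z; gate_W: (n_experts, dim) array."""
    logits = gate_W @ z
    w = np.exp(logits - logits.max())
    w = w / w.sum()                       # softmax gating weights
    return sum(wi * f(z) for wi, f in zip(w, experts))
```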
It often even outperforms TS FMs at forecasting the diverse empirical time series (weather, traffic, medical data) typically used to train TS FMs.

This is surprising, because DynaMix's training corpus consists *solely* of simulated limit cycles & chaotic systems, with no empirical data at all!

(3/6)
May 20, 2025 at 2:15 PM
Unlike TS FMs, DynaMix exhibits #ZeroShotLearning of long-term stats of unseen DS, incl. attractor geometry & power spectrum, w/o *any* re-training, just from a context signal.

It does so with only 0.1% of Chronos's parameters & 10× faster inference than its closest competitor.

(2/6)
May 20, 2025 at 2:15 PM
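To make "just from a context signal" concrete, here is a minimal sketch of the zero-shot usage pattern described above. `model.encode`, `model.step`, and `model.decode` are hypothetical placeholder names, not the published DynaMix interface; the essential point is that the model conditions on a context trajectory and is then rolled out autonomously, with no weight updates.

```python
# Hypothetical sketch of zero-shot DS forecasting from a context signal.
# All model methods are illustrative placeholders, not DynaMix's API.
import numpy as np

def zero_shot_forecast(model, context, horizon):
    """context: (T, d) observed trajectory of the unseen system;
    horizon: number of autonomous prediction steps."""
    state = model.encode(context)       # infer latent state in-context
    preds = []
    for _ in range(horizon):
        state = model.step(state)       # autonomous (generative) rollout
        preds.append(model.decode(state))
    return np.stack(preds)
```

Quality would then be judged by long-term statistics (attractor geometry, power spectrum) rather than pointwise forecasting error.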
Can time series (TS) #FoundationModels (FMs) like Chronos generalize zero-shot to unseen #DynamicalSystems (DS)?

No, they cannot!

But *DynaMix* can! The first TS/DS FM built on principles of DS reconstruction, it captures the long-term evolution of out-of-domain DS: arxiv.org/pdf/2505.131...
(1/6)
May 20, 2025 at 2:15 PM
This gives rise to an interpretable latent feature space in which datasets with similar dynamics cluster. Intriguingly, clustering according to *dynamical systems features* led to much better group separation than could be achieved with more traditional time series features.
(3/4)
January 26, 2025 at 11:28 AM
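As a toy illustration of the kind of comparison meant above (not the paper's actual pipeline), one can quantify how well two featurizations separate known groups, e.g. via silhouette scores. The feature matrices below are synthetic placeholders standing in for real per-dataset DS vs. time series features.

```python
# Illustrative only: compare how well two feature sets separate known
# groups. Both feature matrices here are synthetic stand-ins.
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 30)                          # three groups
ds_features = rng.normal(labels[:, None], 0.3, (90, 8))    # tight clusters
ts_features = rng.normal(labels[:, None], 1.5, (90, 8))    # diffuse clusters

print("DS-feature separation:", silhouette_score(ds_features, labels))
print("TS-feature separation:", silhouette_score(ts_features, labels))
```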
We show applications such as transfer & few-shot learning, but perhaps most interestingly, subject/system-specific features were often linearly related to control parameters of the underlying dynamical system the model was trained on …
(2/4)
January 26, 2025 at 11:28 AM
Toward interpretable #AI foundation models for #DynamicalSystems reconstruction: our paper on transfer & few-shot learning for dynamical systems just got accepted at #ICLR2025!

Previous version: arxiv.org/pdf/2410.04814; a strongly updated version will be available soon ...
(1/4)
January 26, 2025 at 11:28 AM
That paper discusses an important issue for RNNs as used in neuroscience. But we would argue that many RNN approaches do not truly reconstruct DS, for which we also demand agreement in long-term statistics, attractor geometry, and generative performance (esp. in chaotic systems, MSE-type statistics can be misleading).
December 25, 2024 at 12:09 PM
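To make the point about MSE concrete, a small self-contained demo (assuming standard Lorenz-63 parameters): two trajectories of the *same* chaotic system, started a tiny perturbation apart, decorrelate pointwise, so late-time MSE grows large even though attractor and long-term statistics are identical.

```python
# Demo: pointwise MSE is misleading for chaotic systems. Two Lorenz-63
# trajectories from nearly identical initial conditions share the same
# attractor but quickly diverge pointwise.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 30, 3000)
a = solve_ivp(lorenz, (0, 30), [1.0, 1.0, 1.0], t_eval=t_eval).y
b = solve_ivp(lorenz, (0, 30), [1.0, 1.0, 1.0 + 1e-9], t_eval=t_eval).y

mse = ((a - b) ** 2).mean(axis=0)        # per-time-point squared error
print("MSE early:", mse[:100].mean())    # tiny: trajectories still aligned
print("MSE late: ", mse[-1000:].mean())  # large, despite identical dynamics
```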
In proceedings.neurips.cc/paper_files/... we provided a highly efficient (often linear-time) algorithm for precisely locating attractors in ReLU-based RNNs. We prove that besides the exploding/vanishing gradient problem (EVGP), bifurcations are a major obstacle in RNN training, but are provably alleviated by training techniques like generalized teacher forcing (GTF). (6/6)
December 24, 2024 at 12:39 PM
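A hedged sketch of the core idea behind locating fixed points in ReLU-based piecewise-linear RNNs, assuming the common form z_{t+1} = A z_t + W relu(z_t) + h (the paper's actual algorithm, and its guarantees, go beyond this): within each linear region, a fixed point solves a linear system and is valid only if it actually lies in that region.

```python
# Sketch: fixed points of a piecewise-linear RNN z' = A z + W relu(z) + h.
# In the region where exactly the units in `on` are active, relu acts as
# a diagonal 0/1 matrix D, so a candidate fixed point solves
#   (I - A - W D) z* = h,
# and is genuine only if its sign pattern is consistent with D.
import numpy as np

def fixed_point_in_region(A, W, h, on):
    D = np.diag(on.astype(float))
    M = np.eye(len(h)) - A - W @ D       # assumed invertible here
    z = np.linalg.solve(M, h)            # candidate fixed point
    consistent = np.all((z > 0) == on)   # must lie inside the region
    return z if consistent else None
```

Naively enumerating all 2^M activation patterns is exponential; avoiding that blow-up is precisely what makes an often linear-time algorithm remarkable.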
In proceedings.mlr.press/v235/goring2... we laid out a general theory for out-of-domain generalization in DSR. We define OODG in DSR as the ability to predict dynamics in unseen dynamical regimes (basins of attraction). We prove that in its most general form, this problem is intractable. (5/6)
December 24, 2024 at 12:39 PM
Our Almost-Linear RNN openreview.net/pdf?id=sEpSx... shows that simplicity is king, reducing the number of required nonlinearities to a bare minimum, e.g. learning Lorenz chaos with just 2 ReLUs! The AL-RNN has a direct relation to symbolic dynamics and strongly facilitates mathematical analysis of trained models. (4/6)
December 24, 2024 at 12:39 PM
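For intuition, a minimal sketch of an almost-linear latent step in the spirit of the AL-RNN, assuming a formulation with linear latent dynamics plus ReLU applied to only the last P of M units (consult the paper for the exact parametrization):

```python
# Minimal AL-RNN-style latent step: mostly linear dynamics, with ReLU
# applied to only the last P of M latent units (e.g. P = 2 can suffice
# for Lorenz-like chaos, per the post above).
import numpy as np

def al_rnn_step(z, A, W, h, P):
    phi = z.copy()
    phi[-P:] = np.maximum(phi[-P:], 0.0)   # the only nonlinearities
    return A @ z + W @ phi + h
```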
In multimodal TF we extended this idea to combinations of arbitrary data modalities, illustrating that chaotic attractors can even be learned from just a symbolic encoding, and providing a common dynamical embedding for different modalities: proceedings.mlr.press/v235/brenner...
(3/6)
December 24, 2024 at 12:39 PM
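To illustrate what "just a symbolic encoding" can mean, here is a classic example (not necessarily the paper's exact setup): the two wings of the Lorenz attractor induce a natural binary symbol sequence via the sign of the x-coordinate.

```python
# Classic symbolic encoding of a Lorenz trajectory: one bit per time
# step indicating which wing of the attractor the state is on.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0, 50, 5000))
symbols = (sol.y[0] > 0).astype(int)   # binary code: right vs. left wing
print(symbols[:40])
```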
I start with *generalized teacher forcing (TF)*, which overcomes the exploding/vanishing gradient problem *in training* for any RNN, enabling DSR on highly chaotic and complex real-world data: proceedings.mlr.press/v202/hess23a...
Most other DSR work considers only simulated benchmarks & struggles with real data. (2/6)
December 24, 2024 at 12:39 PM
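A minimal sketch of the generalized-TF idea: rather than fully replacing latent states by data-inferred "teacher" states at every step (classical TF), or never doing so (fully autonomous training), interpolate between the two with a factor α in [0, 1] chosen to keep gradients bounded. Function names and shapes below are illustrative, not the paper's code.

```python
# Sketch of a generalized-teacher-forcing rollout: latent states are
# pulled toward data-inferred teacher states by convex interpolation
# with factor alpha (alpha = 1: classical TF; alpha = 0: free-running).
import numpy as np

def gtf_rollout(step, z0, teacher_states, alpha):
    """step: latent update function z -> z; teacher_states: (T, M) array."""
    z, states = z0, []
    for d in teacher_states:
        z = step(z)                        # free-running model prediction
        states.append(z)
        z = alpha * d + (1.0 - alpha) * z  # generalized teacher forcing
    return np.stack(states)
```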
Now may be a good time to introduce our group on bsky with some of our contributions to dynamical systems reconstruction (DSR) from the past year. By DSR we mean learning a *generative surrogate model* of a dynamical process from TS data which reproduces the full attractor & generalizes to new initial conditions. (1/6)
December 24, 2024 at 12:39 PM
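Since "reproduces the full attractor" is the defining criterion, here is a hedged sketch of how long-term agreement is commonly quantified in the DSR literature: overlap of binned state-space occupancy and a distance between power spectra. The exact measures used in our papers may differ; these are simple stand-ins.

```python
# Sketch: compare long-term statistics of ground-truth and generated
# trajectories instead of pointwise errors. Two simple proxies:
# (1) overlap of binned state-space occupancy (low-dimensional systems),
# (2) a distance between normalized power spectra.
import numpy as np

def occupancy_overlap(x, y, bins=30):
    rng = [(min(x[:, i].min(), y[:, i].min()),
            max(x[:, i].max(), y[:, i].max())) for i in range(x.shape[1])]
    px, _ = np.histogramdd(x, bins=bins, range=rng)
    py, _ = np.histogramdd(y, bins=bins, range=rng)
    px, py = px / px.sum(), py / py.sum()
    return np.sum(np.minimum(px, py))          # 1 = identical occupancy

def spectrum_distance(x, y):
    sx = np.abs(np.fft.rfft(x, axis=0)) ** 2
    sy = np.abs(np.fft.rfft(y, axis=0)) ** 2
    sx, sy = sx / sx.sum(0), sy / sy.sum(0)    # normalize per dimension
    return np.mean(np.abs(sx - sy))
```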