Dimitra Maoutsa
@dimma.bsky.social
Theor/Comp Neuroscientist (postdoc)
Prev @TU Munich
Stochastic & nonlin. dynamics @TU Berlin & @MPIDS

Learning dynamics, plasticity & geometry of representations
https://dimitra-maoutsa.github.io
https://dimitra-maoutsa.github.io/M-Dims-Blog
Reposted by Dimitra Maoutsa
1/3 How reward prediction errors shape memory: when people gamble and cues signal unexpectedly high reward probability, incidental images shown on those trials are remembered better than images from safe trials, linking RL computations to episodic encoding. #RewardSignals #neuroskyence www.nature.com/articles/s41...
Positive reward prediction errors during decision-making strengthen memory encoding - Nature Human Behaviour
Jang and colleagues show that positive reward prediction errors elicited during incidental encoding enhance the formation of episodic memories.
www.nature.com
November 30, 2025 at 11:12 AM
Reposted by Dimitra Maoutsa
New(ish) paper!

It's often said that hippocampal replay, which helps to build up a model of the world, is biased by reward. But canonical temporal-difference learning requires updates proportional to the reward-prediction error (RPE), not to reward magnitude.

1/4

rdcu.be/eRxNz
Post-learning replay of hippocampal-striatal activity is biased by reward-prediction signals
Nature Communications - It is unclear which aspects of experience shape sleep’s contributions to learning. Here, by combining neural recordings in rats with reinforcement learning, the...
rdcu.be
November 29, 2025 at 6:32 PM
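For reference, a minimal sketch of the canonical tabular TD(0) update the post above refers to; the learning rate, discount factor, and state/reward values are illustrative assumptions, not taken from the paper. It shows that the value update scales with the reward-prediction error, not with the raw reward.

```python
import numpy as np

# Minimal tabular TD(0) sketch (illustrative values, not the paper's model).
n_states = 5
V = np.zeros(n_states)      # value estimate for each state
alpha, gamma = 0.1, 0.9     # learning rate and discount factor (assumed)

def td_update(s, r, s_next):
    """One TD(0) step: the change in V[s] is proportional to the
    reward-prediction error delta, not to the reward r itself."""
    delta = r + gamma * V[s_next] - V[s]   # reward-prediction error (RPE)
    V[s] += alpha * delta
    return delta

# A large reward that was fully predicted yields delta ~ 0,
# so it drives little learning despite its magnitude.
```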
Reposted by Dimitra Maoutsa
Fig. 6: Mathematical model
November 29, 2025 at 8:13 AM
Reposted by Dimitra Maoutsa
A study finds that cats meow more insistently to greet their male caregivers than their female ones. Female caregivers are more verbally interactive and more skilled at interpreting cat meows. Apparently males require more meows before they notice and respond to the needs of their cats.

onlinelibrary.wiley.com/doi/10.1111/...
November 28, 2025 at 1:03 PM
Reposted by Dimitra Maoutsa
I won't look up the names of the reviewers. But I would look up the names of the people who looked up the names of the reviewers.
November 27, 2025 at 10:50 PM
Reposted by Dimitra Maoutsa
Interesting remark. I think there's a difference between looking up the names of past reviewers out of curiosity, without any consequences (e.g., not bullying them), and looking up the names of current reviewers in order to bias the process. The latter would be a major integrity failure.
November 28, 2025 at 12:00 AM
Reposted by Dimitra Maoutsa
OpenReview was breached. The names of authors, reviewers, ACs, etc., for all past and current conferences were visible for a time, making nothing anonymous anymore. These data have been released for this year's ICLR, but I fear it's also the case for the past 10 years of conferences.
November 28, 2025 at 8:11 AM
Reposted by Dimitra Maoutsa
In case someone missed it, an account called OpenReviewers has started posting public comments on ICLR submissions revealing the identities of reviewers. We are in a crazy time.
November 28, 2025 at 10:13 AM
Reposted by Dimitra Maoutsa
"An old saying about such follies is that “six months in the lab can you save you an afternoon in the library”; here we may have wasted a trillion dollars and several years to rediscover what cognitive science already knew."

garymarcus.substack.com/p/a-trillion...
A trillion dollars is a terrible thing to waste
The machine learning community is finally waking up to the madness, but the detour of the last few years has been costly.
garymarcus.substack.com
November 28, 2025 at 9:41 AM
Reposted by Dimitra Maoutsa
The hippocampus is not a library; it is a simulation engine.

HPC is well known for storing maps of the environment, but less so for generating planned trajectories.

This paper proposes that recurrence in CA3 is crucial for planning.

A 🧵 with my toy model and notes:

#neuroskyence #compneuro #NeuroAI
November 28, 2025 at 3:02 AM
Reposted by Dimitra Maoutsa
7/🎛️ Control between areas
We applied our framework to a simplified model of interacting brain areas: a multi-area recurrent neural network (RNN) trained on a working memory task. After learning the task, its "sensory" area gained control over its "cognitive" area.
November 26, 2025 at 7:32 PM
Reposted by Dimitra Maoutsa
5/🔎 Estimating the Jacobian from data is difficult. To do so, we developed JacobianODE, a deep learning framework that leverages geometric properties of the Jacobian to infer it from data.

Scroll down the thread to learn how it works. For now, does it work?
November 26, 2025 at 7:32 PM
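Not the authors' JacobianODE framework (the method and code are linked later in the thread), just a toy sketch under simplifying assumptions of the object in question: once a neural vector field f(x) ≈ dx/dt has been fit to trajectory data, its Jacobian at a state can be read off by automatic differentiation. The dimensions and layer sizes below are arbitrary.

```python
import torch

# Toy illustration (not the JacobianODE method): a small neural vector
# field x -> dx/dt; assume it has been fit to trajectory data elsewhere.
f = torch.nn.Sequential(
    torch.nn.Linear(3, 64),
    torch.nn.Tanh(),
    torch.nn.Linear(64, 3),
)

x0 = torch.randn(3)  # state at which to evaluate the local linearization
J = torch.autograd.functional.jacobian(f, x0)  # 3x3 Jacobian df/dx at x0
print(J.shape)  # torch.Size([3, 3])
```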
Reposted by Dimitra Maoutsa
New ranking scale for grad apps just dropped
November 27, 2025 at 6:19 PM
Reposted by Dimitra Maoutsa
My personal favorites (derogatory) are Chicago's 'are you, the recommender, an alum' and 'please rate the emotional stability of the applicant.'

And asking me to rate English language skills, which, TOEFL exists?
November 27, 2025 at 5:54 PM
Reposted by Dimitra Maoutsa
Phys. Rev. E: Synaptic plasticity alters the nature of the chaos transition in neural networks
http://link.aps.org/doi/10.1103/7kk9-3jm8
November 27, 2025 at 11:06 AM
Reposted by Dimitra Maoutsa
Nature research paper: Building compositional tasks with shared neural subspaces

go.nature.com/4ocRj3n
Building compositional tasks with shared neural subspaces - Nature
The brain can flexibly perform multiple tasks by compositionally combining task-relevant neural representations.
go.nature.com
November 27, 2025 at 11:37 AM
Reposted by Dimitra Maoutsa
Terrific work led by @emmaroscow.bsky.social showing that hippocampal replay reflects events with large prediction errors, all the better to bootstrap learning as we slumber

Congratulations to Matt Jones & Nathan Lepora for seeing this through to the end!

www.nature.com/articles/s41...
Post-learning replay of hippocampal-striatal activity is biased by reward-prediction signals - Nature Communications
It is unclear which aspects of experience shape sleep’s contributions to learning. Here, by combining neural recordings in rats with reinforcement learning, the authors show that reward-prediction sig...
www.nature.com
November 27, 2025 at 10:24 AM
Reposted by Dimitra Maoutsa
In search for the invisible: motor inhibition in monkey premotor cortex and its RNN replicas https://www.biorxiv.org/content/10.1101/2025.11.24.690225v1
November 27, 2025 at 3:15 AM
Reposted by Dimitra Maoutsa
A cholinergic mechanism orchestrating task-dependent computation across the cortex https://www.biorxiv.org/content/10.1101/2025.11.26.690825v1
November 27, 2025 at 8:16 AM
Reposted by Dimitra Maoutsa
1/6 New preprint 🚀 How does the cortex learn to represent things and how they move without reconstructing sensory stimuli? We developed a circuit-centric recurrent predictive learning (RPL) model based on JEPAs.
🔗 doi.org/10.1101/2025...
Led by @atenagm.bsky.social @mshalvagal.bsky.social
November 27, 2025 at 8:24 AM
Reposted by Dimitra Maoutsa
New preprint alert!

Cognitive maps are flexible, dynamic, (re)constructed representations

#psychscisky #neuroskyence #cognition #philsky 🧪
OSF
osf.io
November 26, 2025 at 6:11 PM
Reposted by Dimitra Maoutsa
13/ 😀Feel free to reach out to discuss this work, or the application of it to your field of study. Or come swing by our poster at #NeurIPS2025. We’d love to chat!

📄 Paper: openreview.net/forum?id=I82...
💾 Code: github.com/adamjeisen/J...
📍 Poster: Thu 4 Dec 11am - 2pm PST (#2111)
Characterizing control between interacting subsystems with deep...
Biological function arises through the dynamical interactions of multiple subsystems, including those between brain areas, within gene regulatory networks, and more. A common approach to...
openreview.net
November 26, 2025 at 7:32 PM
Reposted by Dimitra Maoutsa
How do brain areas control each other? 🧠🎛️

✨In our NeurIPS 2025 Spotlight paper, we introduce a data-driven framework to answer this question using deep learning, nonlinear control, and differential geometry.🧵⬇️
November 26, 2025 at 7:32 PM