Federico D’Agostino
@fededagos.bsky.social
PhD student @ Uni Tübingen | @bethgelab.bsky.social | Computational Neuroscience & ML
This is a concrete step toward bridging the performance/understanding gap in vision science.

📄 Paper: openreview.net/forum?id=cnr...
⚙️ Code: github.com/bethgelab/wh...

🙏 A joint effort with @matthiaskue.bsky.social, Lisa Schwetlick, @bethgelab.bsky.social

#NeurIPS #CognitiveModeling
November 30, 2025 at 9:24 PM
💬 Conceptually: Deep neural networks should be viewed as scientific instruments. They tell us what is predictable in human behavior.

We then use that information to ask why, building fully interpretable models that approach the performance of their black-box counterparts.
November 30, 2025 at 9:24 PM
📈 The Result: SceneWalk-X (also re-implemented in #JAX ⚡)

These 3 mechanisms double SceneWalk’s explained variance on the MIT1003 dataset (from 35% → 70%)! We closed over 56% of the gap to deep networks, setting a new state of the art for mechanistic scanpath prediction.
November 30, 2025 at 9:24 PM
↔️ 3. Cardinal + Leftward Bias

People tend to move their eyes more horizontally and show a subtle initial bias toward leftward movements. Adding this adaptive attentional prior further stabilized the model.
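A toy version of such a prior, as a minimal sketch: a static horizontal (cardinal) bias plus a leftward boost that fades with fixation index. The function name and gain values are made up for illustration, not the paper’s fitted parameters.

```python
import numpy as np

def direction_bias(angle, t, horiz_gain=1.5, left_gain=1.2, left_decay=0.5):
    """Multiplicative bias on a saccade direction `angle` (radians) at
    fixation index `t`: horizontal directions are up-weighted, and leftward
    directions get an extra boost that decays over the scanpath."""
    # cardinal bias: peaks at 0 and pi (horizontal), minimal at +/- pi/2
    cardinal = 1.0 + (horiz_gain - 1.0) * np.cos(angle) ** 2
    # initial leftward preference, fading exponentially with fixation index
    left_boost = (left_gain - 1.0) * np.exp(-left_decay * t) * (np.cos(angle) < 0)
    return cardinal * (1.0 + left_boost)

b_left = direction_bias(np.pi, t=0)       # leftward, first fixation
b_right = direction_bias(0.0, t=0)        # rightward
b_up = direction_bias(np.pi / 2, t=0)     # vertical
```

Early on, leftward beats rightward beats vertical; as `t` grows, the leftward advantage washes out and only the cardinal bias remains.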
November 30, 2025 at 9:24 PM
➡️ 2. Saccadic Momentum

Eyes tend to keep moving in the same direction, especially after long saccades. We captured this bias by adding a dynamic directional map that adapts based on the previous eye movement.
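A sketch of one way to build such a map, assuming a von-Mises-shaped weighting over directions relative to the previous fixation; the paper’s exact parameterization may differ, and `kappa` is an illustrative value.

```python
import numpy as np

def momentum_map(grid_size, prev_fix, prev_dir, kappa=2.0):
    """Directional prior over a grid of candidate fixation locations:
    locations lying ahead of the previous saccade direction `prev_dir`
    (radians) get up-weighted, locations behind get down-weighted."""
    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    # angle from the previous fixation to every candidate location
    angles = np.arctan2(ys - prev_fix[0], xs - prev_fix[1])
    # von-Mises-shaped weight on the angular difference to prev_dir
    w = np.exp(kappa * np.cos(angles - prev_dir))
    w[prev_fix] = 0.0            # exclude staying put
    return w / w.sum()           # normalize to a probability map

# Previous saccade went rightward (angle 0) and landed at the grid center:
m = momentum_map(11, prev_fix=(5, 5), prev_dir=0.0)
```

Locations ahead of the eye (continuing rightward) now carry more probability mass than locations behind it.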
November 30, 2025 at 9:24 PM
🔥 1. Time-Dependent Temperature Scaling

Early fixations are more focused (exploitative), later ones become more exploratory. We modeled this with a decaying “temperature” that controls the determinism of fixation choices over time.
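A minimal sketch of this idea: a softmax over the priority map whose sharpness decays with fixation index, so early choices are near-deterministic and later ones flatter. The function name, `beta0`, and `decay_rate` are hypothetical; the paper’s parameterization of the temperature schedule may differ.

```python
import numpy as np

def fixation_probabilities(priority_map, t, beta0=3.0, decay_rate=0.3):
    """Softmax over a spatial priority map with a sharpness (inverse
    temperature) that decays with fixation index t: early fixations are
    focused/exploitative, later ones more exploratory."""
    beta = beta0 * np.exp(-decay_rate * t)   # sharpness decays over time
    logits = beta * priority_map
    logits = logits - logits.max()           # numerical stability
    p = np.exp(logits)
    return p / p.sum()

prio = np.array([[0.1, 0.9],
                 [0.5, 0.2]])
early = fixation_probabilities(prio, t=0)    # peaked distribution
late = fixation_probabilities(prio, t=10)    # nearly uniform
```

The same priority map yields a concentrated choice distribution early in the scanpath and a much flatter, exploratory one later.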
November 30, 2025 at 9:24 PM
From these systematic failures, we isolated three critical mechanisms SceneWalk was missing.

The data pointed to known cognitive principles, but revealed critical new nuances. Our method showed us not just what was missing, but how to formulate it to match human behavior. 👇
November 30, 2025 at 9:24 PM
💡 Our idea: Use the deep model not just to chase performance, but as a tool for scientific discovery.

We isolate "controversial fixations" where DeepGaze's likelihood vastly exceeds SceneWalk's.
These reveal where the mechanistic model fails to capture predictable patterns.
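In code, selecting controversial fixations could look like the sketch below: keep the fixations where the deep model’s log-likelihood beats the mechanistic model’s by more than some margin. The 1-nat threshold and the toy numbers are illustrative choices, not values from the paper.

```python
import numpy as np

def controversial_fixations(ll_deep, ll_mech, threshold=1.0):
    """Return indices of fixations where the deep model's log-likelihood
    exceeds the mechanistic model's by more than `threshold`: behavior that
    is predictable, yet missed by the mechanistic model."""
    gap = np.asarray(ll_deep) - np.asarray(ll_mech)
    return np.flatnonzero(gap > threshold)

# Toy per-fixation log-likelihoods (nats):
ll_deep = [-2.0, -1.0, -3.0, -0.5]
ll_mech = [-2.5, -4.0, -3.2, -3.0]
idx = controversial_fixations(ll_deep, ll_mech)  # fixations 1 and 3
```

Those selected fixations are then the raw material for diagnosing which mechanism is missing.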
November 30, 2025 at 9:24 PM
Science often faces a choice:

Build models primarily designed to predict, or models that compactly explain. But what if we used them in synergy?

Our paper tackles this head-on. We combine a deep network (DeepGaze III) with an interpretable mechanistic model (SceneWalk).
November 30, 2025 at 9:24 PM
Thanks for sharing this!
I was not aware of it, but it looks really relevant. No problem if your lab is no longer working on this much; we will try to incorporate it in the future and reach out if we have any trouble 😉
This is exactly the kind of engagement we hoped to get!
March 17, 2025 at 1:05 PM
Try it out and help us make retinal datasets and models more accessible, together.

A team effort with:
@thomaszen.bsky.social
@dgonschorek.bsky.social
@lhoefling.bsky.social
@teuler.bsky.social
@bethgelab.bsky.social

#openscience #computationalneuroscience (9/9)
March 14, 2025 at 9:41 AM
This is just the beginning.
We see openretina as more than a Python package: we hope it seeds an initiative to foster open collaboration in computational retina research.
We’d love your feedback! (8/9)
March 14, 2025 at 9:41 AM
Researchers can use openretina to:
✅ Explore pre-trained models in minutes
✅ Train their own models
✅ Contribute datasets & models to the community (7/9)
March 14, 2025 at 9:41 AM
The currently supported models follow a Core + Readout architecture:
🔸 Core: Extracts shared retinal features across data recording sessions
🔸 Readout: Maps shared features to individual neuron responses
🔹 Includes pre-trained models & easy dataset loading (6/9)
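As a conceptual sketch of Core + Readout (a NumPy stand-in, not the actual openretina API): a shared core maps stimuli to features, and session-specific readouts map those features to each session’s neurons. All names and shapes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def core(stimulus, filters):
    """Shared 'core': project stimuli onto learned feature filters
    (a stand-in for a shared convolutional feature extractor)."""
    return np.maximum(stimulus @ filters, 0.0)   # ReLU features

def readout(features, weights):
    """Per-session 'readout': linear map from shared features to the
    predicted responses of that session's individual neurons."""
    return features @ weights

stim = rng.normal(size=(8, 100))         # 8 frames of a 100-pixel stimulus
filters = rng.normal(size=(100, 16))     # 16 shared core features
w_session_a = rng.normal(size=(16, 40))  # readout for 40 neurons in session A

responses_a = readout(core(stim, filters), w_session_a)  # shape (8, 40)
```

The core is shared across recording sessions (and potentially species), while each session only contributes its own lightweight readout.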
March 14, 2025 at 9:41 AM
Why does it matter?
Current retina models are often dataset-specific, limiting generalization.
With openretina, we integrate:
🐭 🦎 🐒 Data from multiple species
🎥 Different stimuli & recording modalities
🧠 Deep learning models that can be trained across datasets (5/9)
March 14, 2025 at 9:41 AM
What is openretina?
It’s a Python package built on PyTorch, designed for:
🔹 Training deep learning models on retinal data
🔹 Sharing and using pre-trained retinal models
🔹 Cross-dataset, cross-species comparisons
🔹 In-silico hypothesis testing & experiment guidance (4/9)
March 14, 2025 at 9:41 AM
Understanding the retina is crucial for decoding how visual information is processed. However, decades of data and models remain scattered across labs and approaches. We introduce openretina to unify retinal system identification. (2/9)
March 14, 2025 at 9:41 AM