Davide Sometti
@zds7.bsky.social
25 followers 30 following 7 posts
PhD student at @unituebingen.bsky.social‬
Pinned
zds7.bsky.social
🧠⏱️ New preprint!
We found that temporal prediction errors are more strongly encoded when an overt response is required, and this encoding occurs in motor rather than sensory space.

www.biorxiv.org/content/10.1...
🧵👇
Reposted by Davide Sometti
snstuebingen.bsky.social
📢 Deadline extended! 📢

The registration deadline for #SNS2025 has been extended to Sunday, September 28th!

Register here 👉 meg.medizin.uni-tuebingen.de/sns_2025/reg...

PS: Students of the GTC (Graduate Training Center for Neuroscience) in Tübingen can earn 1 CP for presenting a poster! 👀
[GIF, ALT: a woman in front of a whiteboard with the words "take your time" written on it]
Reposted by Davide Sometti
agreco.bsky.social
Are top-down feedback connections enough for robust vision?

We found that ConvRNNs with top-down feedback exhibit out-of-distribution (OOD) robustness only when trained with dropout, revealing a dual mechanism for robust sensory coding

with @marco-d.bsky.social, Karl Friston, Giovanni Pezzulo & @siegellab.bsky.social

🧵👇
Reposted by Davide Sometti
agreco.bsky.social
🔵 Proud to share our new preprint 🔵

We compared humans and deep neural networks on sound localization 👂📍

Humans robustly localized OOD sounds even without primary interaural cues (ITD & ILD)

Models localized well only for in-distribution sounds, failing in the OOD regime

Link & full story 🧵👇
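The primary interaural cues mentioned above can be illustrated with a toy sketch (all signals, sample rates, and delays here are illustrative, not from the preprint): the ITD is the arrival-time difference between the two ears, recoverable by cross-correlation.

```python
import math

# Toy binaural signal: the right ear hears the same waveform delayed
# by a few samples; that delay is the interaural time difference (ITD).
fs = 16000          # sample rate in Hz (illustrative)
true_delay = 5      # delay in samples (~0.3 ms, a plausible ITD)
n = 512
left = [math.sin(2 * math.pi * 440 * t / fs) for t in range(n)]
right = [0.0] * true_delay + left[:n - true_delay]

def estimate_itd(a, b, max_lag=20):
    """Estimate the delay of b relative to a (in samples) by picking
    the non-negative lag that maximizes their cross-correlation."""
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
    return max(range(max_lag + 1), key=corr)

# The ILD (interaural level difference) would analogously be the
# ratio of the two ears' signal levels.
print(estimate_itd(left, right))  # recovers the 5-sample delay
```

Removing or equalizing such cues, as the preprint describes, leaves a model without its trained-on localization signal.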
zds7.bsky.social
Finally, cross-decoding analysis showed significant pattern generalization only when the motor output was identical, but not when the sensory input was held constant, suggesting that temporal prediction errors were primarily encoded in motor rather than sensory space.
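The cross-decoding logic can be sketched with a toy nearest-centroid decoder (a minimal illustration under invented data, not the paper's pipeline): a decoder trained in one condition generalizes to another only if the two conditions share the same pattern geometry.

```python
import random

random.seed(1)

def make_trials(center_a, center_b, n=50, noise=0.5):
    """Two-class trials in a toy 2-D 'pattern space'."""
    trials = []
    for _ in range(n):
        trials.append(([random.gauss(c, noise) for c in center_a], 0))
        trials.append(([random.gauss(c, noise) for c in center_b], 1))
    return trials

def centroids(trials):
    """Mean pattern per class."""
    sums, counts = {0: [0.0, 0.0], 1: [0.0, 0.0]}, {0: 0, 1: 0}
    for x, y in trials:
        counts[y] += 1
        sums[y] = [s + v for s, v in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def accuracy(trials, cents):
    """Classify each trial by its nearest class centroid."""
    correct = 0
    for x, y in trials:
        pred = min(cents, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(x, cents[c])))
        correct += (pred == y)
    return correct / len(trials)

# Condition B shares condition A's class geometry (a common code),
# while condition C has the class assignment remapped (no shared code).
train = make_trials((0, 0), (2, 2))
test_shared_code = make_trials((0, 0), (2, 2))
test_no_shared_code = make_trials((2, 2), (0, 0))

cents = centroids(train)
print(accuracy(test_shared_code, cents))     # high: patterns generalize
print(accuracy(test_no_shared_code, cents))  # at/below chance: they don't
```

Above-chance transfer in the first case, but not the second, is the signature that the cross-decoding analysis in the thread relies on.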
zds7.bsky.social
Crucially, we found action-oriented prediction error encoding even when controlling for motor confounds by removing motor-evoked activity from the original data, indicating that the effect was not just a motor artifact.
zds7.bsky.social
By fitting prediction error trajectories to the brain data, we found that error signals were amplified when a response was required following the tactile stimulation, compared to when the stimulus was passively received.
zds7.bsky.social
Multivariate decoding analyses highlighted a distributed frontocentral 🧠 network linked to tactile-motor associations, which predicted reaction-time variability across trials.
zds7.bsky.social
We recorded MEG while participants received temporally jittered finger stimulations, either just perceiving them (stimulus only) or reacting as fast as possible (stimulus response), allowing us to dissect how action demands shape the brain’s encoding of temporal expectations.
Reposted by Davide Sometti
snstuebingen.bsky.social
🔵 Tübingen Systems Neuroscience Symposium 2025 is here! 🔵

#SNS2025 brings together leading international researchers in systems neuroscience 🧠

Join us for plenary lectures, poster sessions and social events on 6️⃣-7️⃣ October 2️⃣0️⃣2️⃣5️⃣

Registration is open here 👉 meg.medizin.uni-tuebingen.de/sns_2025/
Reposted by Davide Sometti
agreco.bsky.social
🚨 NEW PAPER 🚨

Psychedelics profoundly alter cognition, but is the alteration of visual perception causally related to the modulation of high-level cognition?

In VR, we found that simulated visual hallucinations affect high-level human cognition in specific ways!

Link 👉 doi.org/10.1016/j.co...

🧵👇
Reposted by Davide Sometti
robchavez.bsky.social
You cannot decode perceived basic emotion categories using fMRI MVPA in just the amygdala... We've tried it, you've tried it, everyone's tried it, no one's talked about it... It doesn't work.

The amygdala doesn't work like that.

journals.sagepub.com/doi/10.1177/...
Changing Better by Sharing Abandoned Work (Relevant May Not Be Enough) - Gavin M. Schwarz, 2025
Reposted by Davide Sometti
agreco.bsky.social
🔵 NEW PAPER 🔵

Spatiotemporal Style Transfer #STST is out in #NatureComputationalScience!

STST as a framework for dynamic visual stimulus generation to study brain and machine vision

feat. @siegellab.bsky.social
paper: www.nature.com/articles/s43...
code: github.com/antoninogrec...

🧵👇
Reposted by Davide Sometti
vlott.bsky.social
The Raincloud Quartet (new pre-print)

All N = 111, M = 0.04, SD = 0.27.
One-sided t-tests vs. 0 yield: t(110) = 1.67, p = .049.

Only raincloud plots reveal the qualitative differences, preventing inappropriate conclusions.

How did we create the quartet?
Where are data & code?
🧵👇
ALT: Four raincloud plots: a normal distribution, a bimodal one, a skewed one, and one riddled with outliers. Popular sample statistics are identical, so plotting only the means and confidence intervals would miss these qualitative differences. Abstract of the pre-print.
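The quartet's point can be reproduced with toy data (invented samples; only the reported M = 0.04 and SD = 0.27 are taken from the post): two samples rescaled to identical means and SDs can still have completely different shapes, which only a distributional plot such as a raincloud reveals.

```python
import random
import statistics

random.seed(0)
n = 111

# One roughly normal sample and one clearly bimodal sample.
normal = [random.gauss(0, 1) for _ in range(n)]
bimodal = ([random.gauss(-2, 0.3) for _ in range(n // 2)] +
           [random.gauss(2, 0.3) for _ in range(n - n // 2)])

def match_stats(xs, target_mean=0.04, target_sd=0.27):
    """Rescale a sample to exactly the target mean and (population) SD."""
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - m) / s * target_sd + target_mean for x in xs]

normal, bimodal = match_stats(normal), match_stats(bimodal)

# Identical summary statistics to two decimals...
for xs in (normal, bimodal):
    print(round(statistics.mean(xs), 2), round(statistics.pstdev(xs), 2))
# ...yet only plotting the distributions exposes the bimodality.
```

The same mechanism underlies the quartet: a t-test sees only M, SD, and N, so all four panels yield the same test result.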