We found that a ConvRNN with top-down feedback exhibits OOD robustness only when trained with dropout, revealing a dual mechanism for robust sensory coding
with @marco-d.bsky.social, Karl Friston, Giovanni Pezzulo & @siegellab.bsky.social
🧵👇
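For anyone wondering what the architecture refers to, here is a minimal conceptual sketch (PyTorch, placeholder layer sizes, not the implementation from the paper): a convolutional recurrent cell that combines bottom-up input, lateral recurrence, and top-down feedback from a higher layer, with dropout on the hidden state.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvRNNCellWithFeedback(nn.Module):
    """Illustrative ConvRNN cell: bottom-up + lateral + top-down drive.

    Conceptual sketch only; not the architecture from the paper.
    """
    def __init__(self, in_ch, hid_ch, td_ch, p_drop=0.5):
        super().__init__()
        self.bottom_up = nn.Conv2d(in_ch, hid_ch, 3, padding=1)   # feedforward drive
        self.lateral = nn.Conv2d(hid_ch, hid_ch, 3, padding=1)    # recurrent drive
        self.top_down = nn.Conv2d(td_ch, hid_ch, 1)               # feedback from a higher layer
        self.dropout = nn.Dropout2d(p_drop)                       # the second ingredient: dropout

    def forward(self, x, h, fb=None):
        drive = self.bottom_up(x) + self.lateral(h)
        if fb is not None:
            # top-down feedback is upsampled to the current spatial resolution
            fb = F.interpolate(fb, size=h.shape[-2:], mode="bilinear", align_corners=False)
            drive = drive + self.top_down(fb)
        return self.dropout(F.relu(drive))

# usage: unroll over time, feeding back activity from the layer above
cell = ConvRNNCellWithFeedback(in_ch=3, hid_ch=32, td_ch=64)
x = torch.randn(1, 3, 64, 64)
h = torch.zeros(1, 32, 64, 64)
fb = torch.randn(1, 64, 16, 16)
for _ in range(4):
    h = cell(x, h, fb)
```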
@agreco.bsky.social will unpack how predictive processing shapes vision in brains & machines this Wednesday at 4pm CET🧠🤖
Don’t miss it!!
Sign up 👉 www.crowdcast.io/c/loops-semi...
It's on Dec 10th 🗓️, 4pm CET ⏱️, live on crowdcast! (Link 👇)
Looking forward to the discussion!
It's on Dec 10th 🗓️, 4pm CET ⏱️, live on crowdcast! (Link 👇)
Looking forward to the discussion!
#philsci #cogsky #CognitiveNeuroscience
@phaueis.bsky.social
aktuell.uni-bielefeld.de/2025/11/24/t...
What’s wild is that reviewers are still making this exact mistake today, maybe even more than before.
We combined psychophysics, 7T fMRI, and computational modeling of vision with placebo, 5mg, and 10mg psilocybin, in the same group of participants, to clarify the computational mechanisms of psychedelics. 🧵
- doesn’t linearize, distorting similarity metrics
- is biased by temporal jitter across epochs
- may miss important dimensions for transient amplification
If you think there is a state space, use a state space model!
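What "use a state space model" can look like in practice, in a minimal sketch: a linear-Gaussian latent dynamical system with a Kalman filter to infer the latent trajectory. The parameters A, C and the noise covariances are assumed known here purely for illustration; a real analysis would fit them to data (e.g. by EM or subspace identification).

```python
import numpy as np

# Minimal linear-Gaussian state space model: x_t = A x_{t-1} + w,  y_t = C x_t + v.
def kalman_filter(y, A, C, Q, R, x0, P0):
    """Infer latent states x_t from observations y (T x n_obs)."""
    T, n = y.shape[0], A.shape[0]
    x, P = x0.copy(), P0.copy()
    xs = np.zeros((T, n))
    for t in range(T):
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update with observation y[t]
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y[t] - C @ x)
        P = (np.eye(n) - K @ C) @ P
        xs[t] = x
    return xs

# toy example: 2-D rotational latent dynamics observed through 20 noisy channels
rng = np.random.default_rng(0)
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
C = rng.standard_normal((20, 2))
x = np.array([1.0, 0.0])
ys = []
for _ in range(200):
    x = A @ x + 0.05 * rng.standard_normal(2)
    ys.append(C @ x + 0.5 * rng.standard_normal(20))
ys = np.asarray(ys)

latents = kalman_filter(ys, A, C, Q=0.05**2 * np.eye(2), R=0.5**2 * np.eye(20),
                        x0=np.zeros(2), P0=np.eye(2))
```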
Recent numerical advances cracked the scalability barrier. Voxel-level hierarchical modeling is now feasible, revealing just how punishing traditional multiple-comparison adjustments really are.
arxiv.org/abs/2511.12825
"FDR-based corrections [...] may be overly conservative, discarding biologically meaningful effects"
👇👇
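A toy sketch of the contrast at stake (assumed normal-normal hierarchy on simulated effects, not the preprint's model): per-voxel tests with Benjamini-Hochberg FDR versus empirical-Bayes partial pooling across voxels.

```python
import numpy as np
from scipy import stats

# Toy contrast: per-voxel effect estimates b_v with standard errors se_v.
# Hierarchical view: b_v ~ N(theta_v, se_v^2), theta_v ~ N(mu, tau^2).
rng = np.random.default_rng(1)
n_vox = 5000
true_effect = rng.normal(0.2, 0.1, n_vox)        # small but real effects everywhere
se = np.full(n_vox, 0.15)
b = true_effect + rng.normal(0.0, se)

# classical route: one test per voxel + Benjamini-Hochberg FDR
p = 2 * stats.norm.sf(np.abs(b / se))
order = np.argsort(p)
thresh = 0.05 * np.arange(1, n_vox + 1) / n_vox
passed = p[order] <= thresh
n_sig = passed.nonzero()[0].max() + 1 if passed.any() else 0

# hierarchical route: estimate (mu, tau^2) across voxels, shrink each estimate
mu = b.mean()
tau2 = max(b.var() - (se**2).mean(), 0.0)        # method-of-moments between-voxel variance
w = tau2 / (tau2 + se**2)                        # shrinkage weight per voxel
theta_hat = mu + w * (b - mu)                    # partially pooled effect estimates

print(f"voxels surviving BH-FDR: {n_sig} / {n_vox}")
print(f"corr(pooled estimates, true effects): {np.corrcoef(theta_hat, true_effect)[0, 1]:.2f}")
```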
This makes me think back to this beautiful and underappreciated paper by @talyarkoni.com @jake-westfall.bsky.social and @nichols.bsky.social
wellcomeopenresearch.org/articles/1-2...
Mass-univariate analysis is still the bread-and-butter: intuitive, fast… and chronically overfitted. Add harsh multiple-comparison penalties, and we patch the workflow with statistical band-aids. No wonder the stringency debates never die.
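For context, "mass-univariate" means fitting the same small GLM independently at every voxel and then correcting across all of the resulting tests. A minimal sketch with toy data (no HRF convolution, filtering, or autocorrelation modeling):

```python
import numpy as np

# The same small GLM fit independently at every voxel, vectorized across voxels.
rng = np.random.default_rng(2)
n_scans, n_vox = 200, 50000
X = np.column_stack([np.ones(n_scans), rng.standard_normal(n_scans)])  # intercept + one regressor
Y = rng.standard_normal((n_scans, n_vox))                               # one time series per voxel

beta, res, *_ = np.linalg.lstsq(X, Y, rcond=None)      # (2, n_vox) OLS fits, all voxels at once
dof = n_scans - X.shape[1]
sigma2 = res / dof                                      # residual variance per voxel
c = np.array([0.0, 1.0])                                # contrast on the task regressor
var_c = c @ np.linalg.inv(X.T @ X) @ c
t = (c @ beta) / np.sqrt(sigma2 * var_c)                # one t-statistic per voxel: 50k tests to correct
```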
The outcome: a self-supervised training objective based on active vision that beats the SOTA on NSD representational alignment. 👇
How can we model natural scene representations in visual cortex? A solution is in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715
+ @adriendoerig.bsky.social, @alexanderkroner.bsky.social, @carmenamme.bsky.social, @timkietzmann.bsky.social
🧵 1/14
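A conceptual sketch of what a next-glimpse prediction objective can look like (assumed design with placeholder networks, not the implementation from the arXiv paper): encode the current glimpse, condition on the upcoming saccade location, and predict the features of the next glimpse.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlimpsePredictor(nn.Module):
    """Illustrative self-supervised objective: predict next-glimpse features."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(            # stand-in feature extractor for a glimpse patch
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        self.predictor = nn.Sequential(          # maps current features + saccade to predicted features
            nn.Linear(feat_dim + 2, 256), nn.ReLU(), nn.Linear(256, feat_dim),
        )

    def forward(self, glimpse_t, glimpse_t1, saccade_xy):
        z_t = self.encoder(glimpse_t)
        z_t1 = self.encoder(glimpse_t1).detach()           # target features (stop-gradient)
        pred = self.predictor(torch.cat([z_t, saccade_xy], dim=1))
        return F.mse_loss(pred, z_t1)                       # self-supervised prediction error

model = GlimpsePredictor()
loss = model(torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32), torch.randn(8, 2))
loss.backward()
```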
Computational modeling of error patterns during reward-based learning shows evidence that habit learning (value-free!) supplements working memory in 7 human data sets.
rdcu.be/eQjLN
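One way to read "value-free habit learning" in model terms is a choice kernel that strengthens with repetition alone (no reward prediction error), mixed with a working-memory policy. The sketch below is illustrative only; the paper's actual model is specified in the linked article.

```python
import numpy as np

# Value-free habit (choice kernel) + working-memory mixture, toy single-stimulus task.
rng = np.random.default_rng(3)
n_actions, alpha_h, mix_w = 3, 0.1, 0.5
habit = np.zeros(n_actions)          # habit strengths: no values anywhere
wm = np.ones(n_actions) / n_actions  # working-memory policy for the current stimulus

for trial in range(100):
    policy = mix_w * wm + (1 - mix_w) * np.exp(habit) / np.exp(habit).sum()
    a = rng.choice(n_actions, p=policy)
    reward = float(a == 0)                         # toy task: action 0 is correct
    # habit update: pushed toward the chosen action, regardless of reward
    onehot = np.eye(n_actions)[a]
    habit += alpha_h * (onehot - habit)
    # working memory: stores the last rewarded response (with a little lapse)
    if reward:
        wm = 0.9 * onehot + 0.1 / n_actions
```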
But, contrary to what you may think, noise ceilings do not provide an absolute index of data quality.
Let's dive into why. 🧵
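One concrete way to see the relativity, using a common split-half noise-ceiling recipe on simulated data: the same underlying signal-to-noise level yields different ceilings at different trial counts, so the number reflects the design as much as the data.

```python
import numpy as np

# Split-half noise ceiling: correlate the two half-mean response patterns,
# then Spearman-Brown-correct to the full trial count.
rng = np.random.default_rng(4)
n_conditions, signal_sd, noise_sd = 100, 1.0, 2.0
true_pattern = rng.normal(0, signal_sd, n_conditions)

def noise_ceiling(n_trials):
    trials = true_pattern + rng.normal(0, noise_sd, (n_trials, n_conditions))
    half1 = trials[: n_trials // 2].mean(axis=0)
    half2 = trials[n_trials // 2 :].mean(axis=0)
    r_half = np.corrcoef(half1, half2)[0, 1]
    return 2 * r_half / (1 + r_half)          # Spearman-Brown correction

for n in (8, 32, 128):
    print(f"{n:>4} trials/condition -> ceiling ~ {noise_ceiling(n):.2f}")
```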
(1) Are the claims interesting/important?
(2) Does the evidence support the claims?
Most of my reviews these days are short and focused.
Using EEG + fMRI, we show that when humans recognize images that feedforward CNNs fail on, the brain recruits cortex-wide recurrent resources.
www.biorxiv.org/content/10.1... (1/n)
tl;dr: you can now chat with a brain scan 🧠💬
1/n
www.nature.com/articles/s41...
#neuroAI
A novel artifact-robust framework to investigate online effects of transcranial current stimulation (tCS).
Further, we test this approach in an MEG study 🧲🧠 and find a neural interaction between tCS and flickering visual stimulation.
www.biorxiv.org/content/10.1...
Science shouldn’t depend on arbitrary thresholds that change with context.
Knowledge should accumulate, not collapse into yes/no verdicts.
Turning continuous evidence into discrete “significance” decisions is information loss.
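A tiny illustration of the point (made-up summary numbers): two studies with nearly identical effect estimates fall on opposite sides of p = .05, so a significant/non-significant verdict treats almost-identical evidence as opposite results.

```python
from scipy import stats

# Hypothetical summary statistics chosen to straddle the threshold.
for label, mean, se, n in (("study A", 0.30, 0.148, 50), ("study B", 0.29, 0.152, 50)):
    t = mean / se
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    print(f"{label}: estimate = {mean:.2f} +/- {1.96 * se:.2f}, p = {p:.3f}")
```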