David Amadeus Vogelsang
@davogelsang.bsky.social
30 followers 120 following 12 posts
Lecturer in Brain & Cognition at the University of Amsterdam
Reposted by David Amadeus Vogelsang
earlkmiller.bsky.social
For all the knucklehead reviewers out there.
Principles for proper peer review - Earl K. Miller
jocnf.pubpub.org/pub/qag76ip8...
#neuroscience
Reposted by David Amadeus Vogelsang
olejensen.bsky.social
In our Trends in Cogn Sci paper we point to the connectivity crisis in task-based human EEG/MEG research: many connectivity metrics, too little replication. Time for community-wide benchmarking to build robust, generalisable measures across labs & tasks. www.sciencedirect.com/science/arti...
Confronting the connectivity crisis in human M/EEG research
The cognitive neuroscience community using M/EEG has not converged on measures of task-related inter-regional brain connectivity that generalize acros…
www.sciencedirect.com
davogelsang.bsky.social
Thank you; and that is an interesting question. My prediction is that it may not work so well (would be fun to test)
davogelsang.bsky.social
Thank you for your reply. Unfortunately, we did not examine within-category effects, but that would certainly be interesting to do
davogelsang.bsky.social
Our takeaway:
Memory has a geometry.
The magnitude of representations predicts memorability across vision and language, providing a new lens for understanding why some stimuli are memorable.
davogelsang.bsky.social
Think of memory as geometry:
An item’s vector length in representational space predicts how likely it is to stick in your mind — at least for images and words.
davogelsang.bsky.social
So what did we learn?
✅ Robust effect for images
✅ Robust effect for words
❌ No effect for voices
→ Memorability seems tied to how strongly items project onto meaningful representational dimensions, but the effect does not hold in every sensory domain.
davogelsang.bsky.social
Then we asked: does this principle also apply to voices?
Using a recent dataset with >600 voice clips, we tested whether wav2vec embeddings showed the same effect.
👉 They didn’t. No consistent link between L2 norm and voice memorability.
davogelsang.bsky.social
And crucially:
This effect held even after controlling for word frequency, valence, and size.
So representational magnitude is not just a proxy for familiar or emotionally loaded words.
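A minimal sketch of how such a control analysis can be run (an illustration only, not necessarily the paper's exact method; all data here are randomly generated stand-ins): regress the covariates out of both the L2 norms and the memorability scores, then correlate the residuals, which amounts to a partial correlation.

```python
# Partial-correlation style control check (illustrative sketch).
# All arrays below are random stand-ins for per-word measures.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 1000
frequency, valence, size = rng.normal(size=(3, n))   # hypothetical covariates
l2 = rng.normal(size=n)                               # hypothetical L2 norms
memorability = 0.3 * l2 + 0.2 * frequency + rng.normal(size=n)

def residualize(y, covariates):
    """Residuals of y after ordinary least-squares regression on the covariates."""
    X = np.column_stack([np.ones(len(y))] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

covs = [frequency, valence, size]
rho, p = spearmanr(residualize(l2, covs), residualize(memorability, covs))
print(f"partial Spearman rho = {rho:.3f}, p = {p:.2g}")
```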
davogelsang.bsky.social
Then we asked: is this just a visual trick, or is it present in other domains as well?
When we turned to words, the result was striking:
Across 3 big datasets, words with a higher vector magnitude in their embeddings were consistently more memorable, revealing the same L2 norm principle.
davogelsang.bsky.social
In CNNs, the effect is strongest in later layers, where abstract, conceptual features are represented.
📊 Larger representational magnitude → higher memorability.
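As an illustration of this kind of layer-wise analysis (the thread does not name the network; a torchvision ResNet-50 and its 'layer1'/'avgpool' stages are assumptions here), one can take activations at an early and a late stage and compute the L2 norm of each, to be correlated with memorability layer by layer.

```python
# Illustrative layer-wise sketch: L2 norm of early vs. late CNN activations.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from torchvision.models.feature_extraction import create_feature_extractor

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

# ResNet node names: 'layer1' is an early stage, 'avgpool' a late, abstract one
extractor = create_feature_extractor(model, return_nodes=["layer1", "avgpool"])

def layer_norms(image):
    """Return the L2 norm of the early and late representations of one image."""
    with torch.no_grad():
        feats = extractor(preprocess(image).unsqueeze(0))
    return {name: act.flatten(1).norm(dim=1).item() for name, act in feats.items()}
```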
davogelsang.bsky.social
We first wanted to examine whether we could replicate this L2 norm effect as reported by Jaegle et al. (2019).
Using the massive THINGS dataset (>26k images, 13k participants), we replicated that the L2 norm of CNN representations predicts image memorability.
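For readers curious what such an analysis looks like in practice, here is a minimal sketch (not the authors' actual pipeline; the arrays are random stand-ins, as the THINGS images and scores are not reproduced here): take the L2 norm of each item's CNN embedding and Spearman-correlate it with memorability.

```python
# Sketch: correlate representation magnitude (L2 norm) with memorability.
import numpy as np
from scipy.stats import spearmanr

def l2_norm_memorability_corr(embeddings, memorability):
    """Spearman correlation between embedding magnitude and memorability.

    embeddings: (n_items, n_features) array of CNN feature vectors.
    memorability: (n_items,) array of behavioural memorability scores.
    """
    l2_norms = np.linalg.norm(embeddings, axis=1)  # vector length per item
    return spearmanr(l2_norms, memorability)

# Stand-in data for illustration only
rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 2048))
mem = rng.uniform(size=500)
print(l2_norm_memorability_corr(emb, mem))
```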
davogelsang.bsky.social
Why do we remember some things better than others?
Memory varies across people, but some items are intrinsically more memorable.
Jaegle et al. (2019) showed that a simple geometric property of representations — the L2 norm (vector magnitude) — positively correlates with image memorability.
Reposted by David Amadeus Vogelsang
drbreaky.bsky.social
Interested in hippocampal dynamics and their interactions with cortical rhythms?

Our physically constrained model of cortico-hippocampal interactions - complete with fast, geometrically informed numerical simulation (code available in the embedded GitHub repo)

www.biorxiv.org/content/10.1...