Pablo Marcos-Manchón
@jazzmaniatico.bsky.social
35 followers 62 following 9 posts
ML Engineer trying to do neuroscience
Reposted by Pablo Marcos-Manchón
hyruuk.bsky.social
This year at #CCN25 we showed the importance of OOD evaluation to adjudicate between brain models. Our results demonstrate these trivial but key facts:
- high encoding accuracy ≠ functional convergence
- human brain ≠ NES console ≠ 4-layer CNN
- videogames are cool

w/ @lune-bellec.bsky.social 🙌
Reposted by Pablo Marcos-Manchón
lampinen.bsky.social
In neuroscience, we often try to understand systems by analyzing their representations — using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:
What do representations tell us about a system? Image of a mouse with a scope showing a vector of activity patterns, and a neural network with a vector of unit activity patterns
Common analyses of neural representations: Encoding models (relating activity to task features) drawing of an arrow from a trace saying [on_____on____] to a neuron and spike train. Comparing models via neural predictivity: comparing two neural networks by their R^2 to mouse brain activity. RSA: assessing brain-brain or model-brain correspondence using representational dissimilarity matrices
Reposted by Pablo Marcos-Manchón
xiongbowu.bsky.social
🚨 New preprint alert!

Excited to share our latest work on alpha/beta activity, eye movements, and memory.

Across 4 experiments combining scalp EEG/iEEG with eye tracking, we show that alpha/beta activity directly reflects eye movements, and only indirectly relates to memory.

👇 Highlights (1/7):
biorxiv-neursci.bsky.social
Low-frequency brain oscillations reflect the dynamics of the oculomotor system: a new perspective on subsequent memory effects https://www.biorxiv.org/content/10.1101/2025.07.29.667451v1
Reposted by Pablo Marcos-Manchón
martamasilva.bsky.social
🧠 Paper out!

We investigated how hippocampal and cortical ripples support memory during movie watching. We found that:

🎬 Hippocampal ripples mark event boundaries
🧩 Cortical ripples predict later recall

Ripples may help transform real-life experiences into lasting memories!

rdcu.be/eui9l
Movie-watching evokes ripple-like activity within events and at event boundaries
Nature Communications - The neural processes involved in memory formation for realistic experiences remain poorly understood. Here, the authors found that ripple-like activity in the human...
rdcu.be
jazzmaniatico.bsky.social
Diving deeper into the LOTC hub's social vs non-social component:

Alignment across brains along the lateral stream (EVC→LOTC) is present only when viewing social scenes (with people or animals).

This supports its proposed role as a specialized "third visual pathway" for social perception.

⬇️ (7/8)
Split-panel brain and graph plots showing inter-subject representational alignment for social vs non-social scenes. The lateral pathway only emerges during social scene perception.
jazzmaniatico.bsky.social
So what information does each hub actually encode?

Using KMCCA, we studied the primary dimension that organizes each hub's information:

👁️ EVC: Low-level visual features
🏞️ Ventral Hub: Scene & object structure
👨‍👩‍👧‍👦 LOTC Hub: Social vs. non-social content

⬇️ (6/8)
Scatter plots of shared representational components in three brain hubs (KMCCA top 2 dimensions). Early visual cortex shows low-level structure; the ventral hub encodes scene layout; LOTC separates social (human, animal) from non-social stimuli.
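For intuition only: a shared stimulus dimension like this can be sketched with a plain two-view CCA, whereas the paper uses KMCCA (a kernelized, multi-subject generalization). All variable names below are placeholders, not the authors' code.

from sklearn.cross_decomposition import CCA

# Two subjects' responses in the same hub, rows aligned to the same stimuli.
# hub_sub1, hub_sub2: placeholder arrays of shape (n_stimuli, n_voxels).
cca = CCA(n_components=2)
cca.fit(hub_sub1, hub_sub2)
shared_1, shared_2 = cca.transform(hub_sub1, hub_sub2)

# shared_1[:, 0] assigns each stimulus a coordinate on the leading shared dimension;
# plotting stimuli along it is how one would check whether that dimension separates,
# say, social from non-social scenes.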
jazzmaniatico.bsky.social
Vision DNNs capture this shared geometry, with each brain hub showing a different layer alignment profile:

🧠 Early visual ↔️ Shallow DNN layers
🧠 Ventral hub ↔️ Mixed DNN layers
🧠 LOTC ↔️ Deep DNN layers

Language Models only align with the high-level LOTC hub.

⬇️ (5/8)
Comparison of brain alignment with deep vision (left) and language models (right). Vision models align broadly across cortex, with early areas matching shallow layers and higher areas matching deeper layers. Language models only align with LOTC. Line plots show RSA scores across model depth for three brain hubs.
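A rough sketch of how such a layer alignment profile can be computed, assuming placeholder arrays rather than the models and data actually used in the paper:

from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(x):
    # Condensed representational dissimilarity matrix (1 - Pearson r across stimuli).
    return pdist(x, metric="correlation")

brain_rdm = rdm(hub_responses)                       # hub_responses: (stimuli x voxels) for one hub
profile = []
for layer_name, acts in layer_activations.items():   # layer_activations: dict layer -> (stimuli x units)
    rho, _ = spearmanr(brain_rdm, rdm(acts))
    profile.append((layer_name, rho))

# A hub aligned with shallow layers peaks early in this profile;
# one aligned with deep layers peaks late.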
jazzmaniatico.bsky.social
This shared representational geometry is so consistent across people that we could map a whole-brain connectivity network based on it, revealing interactions between visual, memory and prefrontal areas.

⬇️ (4/8)
Whole-brain connectivity graph based on representational similarity across individuals. Nodes represent cortical areas; edges reflect shared representational geometry. Two main subnetworks emerge along ventral and lateral visual pathways.
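A minimal sketch of building such a connectivity graph from shared representational geometry; the inputs and the threshold below are placeholders, not the paper's actual procedure:

import itertools
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(x):
    return pdist(x, metric="correlation")   # condensed (1 - r) dissimilarities

# region_responses: placeholder dict mapping region name -> list over subjects of
# (stimuli x voxels) arrays; threshold: e.g. a permutation-based significance cutoff.
region_rdms = {r: [rdm(s) for s in subs] for r, subs in region_responses.items()}

G = nx.Graph()
for r1, r2 in itertools.combinations(region_rdms, 2):
    # Shared geometry = average cross-subject RSA between the two regions' RDMs.
    score = np.mean([spearmanr(a, b)[0]
                     for i, a in enumerate(region_rdms[r1])
                     for j, b in enumerate(region_rdms[r2]) if i != j])
    if score > threshold:
        G.add_edge(r1, r2, weight=score)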
jazzmaniatico.bsky.social
We identified 3 cortical hubs with highly consistent representations across all individuals:
📍 Early visual cortex (V1–V4)
📍 Ventral hub (scene/object areas ~PPA)
📍 LOTC Hub (hMT+/TPOJ)

These hubs form two pathways:
- Classical ventral stream (EVC → Ventral)
- Lateral stream (EVC → LOTC)

⬇️ (3/8)
Three brain regions show high inter-subject representational similarity: early visual cortex, ventral hub, and LOTC. A connectivity graph shows how these hubs are embedded in two distinct streams based on representational geometry.
jazzmaniatico.bsky.social
Using Representational Similarity Analysis (RSA) on fMRI data from people viewing diverse scenes, we measure:

- Inter-subject RSA: Are visual representations shared across individuals?
- Brain-Model RSA: Is this shared information low-level (visual) or high-level (semantic)?

Methods ⬇️ (2/8)
Diagram showing how brain activity, vision models, and language models are compared using RSA to analyze representational alignment across stimuli, models, and brain regions.
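As a rough illustration of these two comparisons (not the authors' pipeline; subject_responses and model_features are placeholder arrays):

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    # Condensed representational dissimilarity matrix (1 - Pearson r across stimuli).
    return pdist(responses, metric="correlation")

def rsa(rdm_a, rdm_b):
    # Spearman correlation between two condensed RDMs.
    rho, _ = spearmanr(rdm_a, rdm_b)
    return rho

# Inter-subject RSA: is a region's representational geometry shared across people?
subject_rdms = [rdm(s) for s in subject_responses]        # each s: (stimuli x voxels)
inter_subject = np.mean([rsa(subject_rdms[i], subject_rdms[j])
                         for i in range(len(subject_rdms))
                         for j in range(i + 1, len(subject_rdms))])

# Brain-model RSA: does that geometry match a vision or language model's features?
brain_model = np.mean([rsa(s, rdm(model_features)) for s in subject_rdms])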
jazzmaniatico.bsky.social
🧠🚨 How does the brain represent what we see? Is visual input transformed to form these representations in similar ways across people and even AI models like DNNs?

We explore these questions using fMRI and large-scale representational alignment analyses.

🔗 arxiv.org/abs/2507.13941

Thread👇 (1/8)
Convergent transformations of visual representation in brains and models
A fundamental question in cognitive neuroscience is what shapes visual perception: the external world's structure or the brain's internal architecture. Although some perceptual variability can be trac...
arxiv.org
jazzmaniatico.bsky.social
Deep learning models and brains show fascinating parallels in how they process and instantly integrate new knowledge.

Join us this year at ICON 2025 to discuss how sudden learning emerges across artificial and biological systems! 🧠🤖
jvoeller.bsky.social
📣 ICON Symposium

Really excited to announce our symposium at @ICON this year on Sudden Learning Across Systems!

Together with lots of cool people: @gonzalezgarcia.bsky.social @lindedomingo.bsky.social @ortiztudela.bsky.social @anikaloewe.bsky.social @jazzmaniatico.bsky.social and Andrea Greve!