Andrea Costantino
@costantinoai.bsky.social
82 followers 160 following 14 posts
👀🧠🤖 cognitive neuroscientist @ Hoplab, KU Leuven | interested in vision and learning
Reposted by Andrea Costantino
tessamdekker.bsky.social
We’re hiring! We’re looking for two RAs to study neuroplasticity in sight loss, sight rescue and development in children and adults @ucl.ac.uk using a wide range of neuroimaging and behavioral methods. Please help spread the word! Apply by 16 Oct! t.ly/q3aYe #neurojobs #NeuroSkyence
Reposted by Andrea Costantino
neurosteven.bsky.social
🧠 New preprint: Why do deep neural networks predict brain responses so well?
We find a striking dissociation: it’s not shared object recognition. Alignment is driven by sensitivity to texture-like local statistics.
📊 Study: n=57, 624k trials, 5 models doi.org/10.1101/2025...
costantinoai.bsky.social
I see the point, but that might be too simplistic an example. In brains it is unlikely that a single neuron "does the job", and from a representational perspective what we call "noise" may still be a behaviorally relevant signal at the population level (stimulus X shows property i but not j).
costantinoai.bsky.social
It was great reconnecting with friends and colleagues at #CCN2025 in Amsterdam and presenting our latest #expertise work.

We 👀 into how #chess experts represent the board, and how the content, structure, and location of these representations shift with expertise.⬇️
Reposted by Andrea Costantino
thomasserre.bsky.social
🚨 New preprint alert!
Our latest study, led by @DrewLinsley, examines how deep neural networks (DNNs) optimized for image categorization align with primate vision, using neural and behavioral benchmarks.
costantinoai.bsky.social
Congrats, Martin! Compelling results and very interesting methods. And, even more exciting, your conclusions/results are in line with our (now published, link in my last post) work on foveal feedback.

Looking forward to discussing this further!
costantinoai.bsky.social
...and/or a predictive coding mechanism, where higher-level areas form a “guess” about the periphery and send back only the most relevant low-level features.

Both frameworks match our conclusion that coarse perceptual details travel back to foveal V1!
costantinoai.bsky.social
Speculation time!

These findings could reflect a pre-saccadic mechanism, where peripheral info is relayed to the fovea to facilitate subsequent recognition...
costantinoai.bsky.social
Overall, we found that un-stimulated foveal V1 contains info about real-world, peripherally presented stimuli.

This signal is mediated by both local and high-to-low circuits, and it carries low-level, perceptual details—implying a form of compression on its way back to V1.
costantinoai.bsky.social
To determine the source of the signal, we ran a PPI analysis. The foveal ROI was functionally connected with both peripheral V1 and LOC, but not with other regions.

So there seems to be a local path within V1 and a route from higher-level visual areas—though not all info survives the trip.
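For context, a PPI tests whether the coupling between a seed region and a target region changes with the task condition, via an interaction regressor in a GLM. A minimal sketch, assuming standard preprocessing (real pipelines typically also deconvolve the HRF before forming the interaction; all variable names here are hypothetical, not the authors' pipeline):

```python
# Minimal PPI sketch (illustrative names; real pipelines typically
# deconvolve the BOLD signal before forming the interaction term).
import numpy as np

def ppi_betas(seed_ts, target_ts, task_regressor):
    """Regress a target ROI's timecourse on a seed timecourse, a task
    regressor, and their product (the PPI term). A nonzero PPI beta
    indicates condition-dependent seed-target connectivity."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()   # z-score seed
    task = task_regressor - task_regressor.mean()       # center task
    ppi = seed * task                                   # interaction term
    X = np.column_stack([np.ones_like(seed), seed, task, ppi])
    betas, *_ = np.linalg.lstsq(X, target_ts, rcond=None)
    return dict(zip(["intercept", "seed", "task", "ppi"], betas))

# e.g., seed = foveal V1 timecourse; targets = peripheral V1, LOC, control ROIs
```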
costantinoai.bsky.social
Our MVPA results showed that foveal V1 indeed contained feedback about perceptually distinct categories (cars vs. bikes) but not finer semantic distinctions (male vs. female faces).

In contrast, higher-level areas like FFA and LOC robustly decoded those semantic details.
costantinoai.bsky.social
To tease apart low- and high-level info, we compared our data to two models capturing perceptual vs. semantic/categorical features: TDANN and CLIP.

If foveal V1 encodes perceptual details, we’d expect it to align more with TDANN’s predictions than with CLIP’s, and vice versa.
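The thread doesn't state the exact comparison method; one common approach for this kind of brain-vs-model comparison is representational similarity analysis (RSA), sketched here with illustrative names:

```python
# RSA-style sketch with illustrative names (the thread doesn't specify
# the exact brain-model comparison method used).
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condition-by-feature matrix -> condensed correlation-distance RDM."""
    return pdist(patterns, metric="correlation")

def model_alignment(roi_patterns, tdann_features, clip_features):
    """Correlate a neural RDM with each model's RDM; a higher TDANN
    correlation would suggest more perceptual (low-level) coding."""
    neural = rdm(roi_patterns)
    r_tdann = spearmanr(neural, rdm(tdann_features))[0]
    r_clip = spearmanr(neural, rdm(clip_features))[0]
    return r_tdann, r_clip
```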
costantinoai.bsky.social
We recorded BOLD signal from various ROIs, including FFA, LOC, a retinotopically defined “peripheral” ROI, and several “foveal” ROIs, to capture potential feedback across varying spatial scales.

We used MVPA to see whether activation patterns reflected perceptual or categorical info.
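A minimal sketch of what ROI-based MVPA decoding can look like, assuming trial-by-voxel pattern matrices per ROI; the classifier and cross-validation scheme here are assumptions, not the authors' pipeline:

```python
# ROI-based MVPA sketch; classifier and CV scheme are assumptions.
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode(patterns, labels, n_folds=5):
    """Cross-validated decoding accuracy for one ROI.
    patterns: (n_trials, n_voxels) array; labels: e.g. 'car' vs 'bike'."""
    clf = make_pipeline(StandardScaler(), LinearSVC())
    return cross_val_score(clf, patterns, labels, cv=n_folds).mean()

# Above-chance decode(foveal_v1, vehicle_labels) alongside chance-level
# decode(foveal_v1, face_gender_labels) would mirror the reported pattern.
```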
costantinoai.bsky.social
We ran an fMRI experiment where participants performed a same/different task on two briefly presented peripheral images — faces (male/female) or vehicles (cars/motorbikes) — 7 degrees away from central fixation, ensuring the fovea wasn’t stimulated directly.
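As a rough illustration of the display setup (the tool, timing, image files, and symmetric left/right placement are all assumptions; only the 7-degree eccentricity comes from the post):

```python
# Rough display sketch in PsychoPy; timings, sizes, filenames, and the
# left/right placement are assumptions, only 7 deg eccentricity is given.
from psychopy import visual, core

win = visual.Window(units="deg", fullscr=True)
fixation = visual.TextStim(win, text="+", height=0.5)
img_a = visual.ImageStim(win, image="car_a.png", pos=(-7, 0), size=4)
img_b = visual.ImageStim(win, image="car_b.png", pos=(7, 0), size=4)

for stim in (fixation, img_a, img_b):
    stim.draw()
win.flip()
core.wait(0.15)   # brief presentation; the fovea sees only the fixation cross
win.flip()        # clear screen, then collect the same/different response
```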
costantinoai.bsky.social
1. What is the nature of this info? Is the information fed back to foveal V1 mainly low-level/perceptual, or does it also carry higher-level/semantic properties?

2. Does it reach foveal V1 via local pathways, or through feedback from higher-order visual areas (FFA and LOC)?
costantinoai.bsky.social
What happens in our brain when we see peripheral objects?

Past studies showed that un-stimulated foveal cortex receives info about shapes presented only in peripheral vision—and that disrupting foveal V1 impairs peripheral discrimination.

But...
Reposted by Andrea Costantino
martinhebart.bsky.social
People talk a lot about objects, but what about the softness of a cushion, the greenness of an emerald, or the viscosity of oil? In our work just published @pnas.org, we shed light on how we make sense of the hundreds of materials around us.
www.pnas.org/doi/10.1073/...
Reposted by Andrea Costantino
neurograce.bsky.social
You know what I'd love to be able to do?

Research.
Reposted by Andrea Costantino
icevislab.bsky.social
Paper🚨 "Objects, Faces, and Spaces: Organizational Principles of Visual Object Perception as Evidenced by Individual Differences in Behavior" by @heidasigurdar.bsky.social & @ingamariao.bsky.social JEP:G editor's choice -> free to read psycnet.apa.org/fulltext/202... #visionscience #psychscisky 🧵1/13
Example stimuli of 15 object types are shown. Quadrants of object space are represented by different colors (orange: stubby animate-looking; purple: spiky animate-looking; pink: stubby inanimate-looking; green: spiky inanimate-looking; gray: all objects). Exerc. eq. = exercise equipment.
Reposted by Andrea Costantino
linateichmann.bsky.social
🚨PhD opportunity Fall/Winter 2025🚨
Join me in Geneva Switzerland #unige to learn more about colour perception. Using neuroimaging & computational modelling, you'll be working with an international & interdisciplinary team to understand how we transform light into a colourful world!🧠👁️🌈 #neurojobs