Christina Sartzetaki
@sargechris.bsky.social
75 followers 100 following 13 posts
PhD candidate @ UvA 🇳🇱, ELLIS 🇪🇺 | {video, neuro, cognitive}-AI Neural networks 🤖 and brains 🧠 watching videos 🔗 https://sites.google.com/view/csartzetaki/
Pinned
sargechris.bsky.social
Excited to be presenting this paper at #ICLR2025 this week!
Come to the poster if you want to know more about how human brains and DNNs process video 🧠🤖

📆 Sat 26 Apr, 10:00-12:30 - Poster session 5 (#64)
📄 openreview.net/pdf?id=LM4PY...
🌐 sergeantchris.github.io/hundred_mode...
Reposted by Christina Sartzetaki
neurosteven.bsky.social
New preprint (#neuroscience #deeplearning doi.org/10.1101/2025...)! We trained 20 DCNNs on 941235 images with varying scene segmentation (original, object-only, silhouette, background-only). Despite object recognition accuracy varying (27-53%), all networks showed similar EEG prediction.
Reposted by Christina Sartzetaki
cgmsnoek.bsky.social
✨ The VIS Lab at the #University of #Amsterdam is proud and excited to announce it has #TWELVE papers 🚀 accepted for the leading #AI-#makers conference on representation learning ( #ICLR2025 ) in Singapore 🇸🇬. 1/n
👇👇👇 @ellisamsterdam.bsky.social
Reposted by Christina Sartzetaki
algonautsproject.bsky.social
(1/4) The Algonauts Project 2025 challenge is now live!

Participate and build computational models that best predict how the human brain responds to multimodal movies!

Submission deadline: 13th of July.

#algonauts2025 #NeuroAI #CompNeuro #neuroscience #AI

algonautsproject.com
sargechris.bsky.social
9/ This is our first research output in this interesting new direction and I’m actively working on this - so stay tuned for updates and follow-up works!
Feel free to discuss your ideas and opinions with me ⬇️
sargechris.bsky.social
8/ 🎯 With this work we aim to forge a path that widens our understanding of temporal and semantic video representations in brains and machines, ideally leading towards more efficient video models and more mechanistic explanations of processing in the human brain.
sargechris.bsky.social
7/ We report a significant negative correlation between model FLOPs and alignment in several high-level brain areas, indicating that computationally efficient neural networks can potentially produce more human-like semantic representations.
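As a toy illustration of that analysis (the numbers below are invented for demonstration, not the paper's data), one can rank-correlate per-model compute with per-model alignment in a given region:

```python
# Toy sketch: rank correlation between model compute and brain alignment.
# FLOPs and alignment values are hypothetical, for illustration only.
from scipy.stats import spearmanr

flops = [4e9, 17e9, 53e9, 110e9, 360e9]      # hypothetical per-model inference FLOPs
alignment = [0.31, 0.28, 0.24, 0.22, 0.18]   # hypothetical RSA scores in one brain area

rho, p = spearmanr(flops, alignment)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # negative rho: more compute, less alignment
```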
sargechris.bsky.social
6/ Training dataset biases related to a certain functional selectivity (e.g. face features) can transfer to brain alignment with the corresponding functionally selective brain area (e.g. the face-selective region FFA).
sargechris.bsky.social
5/ Comparing model architectures, CNNs exhibit a better hierarchy overall (with a clear mid-depth peak for early regions and gradual improvement with depth for late regions). Transformers, however, achieve an impressive correlation to early regions at just one tenth of their layer depth.
sargechris.bsky.social
4/ We decouple temporal modeling from action space optimization by adding image action recognition models as control. Our results show that temporal modeling is key for alignment to early visual brain regions, while a relevant classification task is key for alignment to higher-level regions.
sargechris.bsky.social
3/ We disentangle 4 factors of variation (temporal modeling, classification task, architecture, and training dataset) that affect model-brain alignment, which we measure by conducting Representational Similarity Analysis (RSA) across multiple brain regions and model layers.
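For anyone curious what the RSA comparison looks like in practice, here is a minimal sketch in Python (illustrative only; the variable names and the exact distance/correlation choices are assumptions, not the paper's actual pipeline):

```python
# Minimal RSA sketch: correlate a model layer's representational geometry
# with a brain region's. Illustrative assumptions, not the paper's exact code.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses: np.ndarray) -> np.ndarray:
    """Condensed representational dissimilarity matrix (1 - Pearson r)
    across stimuli; `responses` is (n_videos, n_features)."""
    return pdist(responses, metric="correlation")

def rsa_score(model_feats: np.ndarray, brain_resps: np.ndarray) -> float:
    """Spearman correlation between model-layer and brain-region RDMs."""
    rho, _ = spearmanr(rdm(model_feats), rdm(brain_resps))
    return rho

# Hypothetical usage: one score per (brain region, model layer) pair,
# where model_feats are layer activations to the videos and brain_resps
# are the corresponding fMRI responses.
```

Repeating this for every region and layer yields the alignment profiles described in the thread.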
sargechris.bsky.social
2/ We take a step in this direction by performing a large-scale benchmarking of models on their representational alignment to the recently released Bold Moments Dataset of fMRI recordings from humans watching videos.
sargechris.bsky.social
1/ Humans are very efficient at processing continuous visual input; neural networks trained to process videos are still not up to that standard.
What can we learn from comparing the internal representations of the two systems (biological and artificial)?
Reposted by Christina Sartzetaki
cogcompneuro.bsky.social
After a great conference in Boston, CCN is going to take place in Amsterdam in 2025! To help the exchange of ideas between #neuroscience, cognitive science, and #AI, CCN will for the first time have full-length paper submissions (alongside the established 2-pagers)! Info below👇
#NeuroAI #CompNeuro