Sam Nastase
@samnastase.bsky.social
2.4K followers 750 following 62 posts
assistant professor of psychology at USC | he/him | semiprofessional dungeon master | https://snastase.github.io/
Pinned
samnastase.bsky.social
I'm recruiting PhD students to join my new lab in Fall 2026! The Shared Minds Lab at @usc.edu will combine deep learning and ecological human neuroscience to better understand how we communicate our thoughts from one brain to another.
Reposted by Sam Nastase
rodbraga.bsky.social
📣 New preprint from the Braga Lab! 📣

The ventral visual stream for reading converges on the transmodal language network

Congrats to Dr. Joe Salvo for this epic set of results

Big Q: What brain systems support the translation of writing to concepts and meaning?

Thread 🧵 ⬇️
samnastase.bsky.social
Check out the lab website for ideas about the kind of work we'll be doing: shared-minds.github.io

All admissions are through the Brain and Cognitive Science area of the Department of Psychology at USC: dornsife.usc.edu/psyc/doctora...

Feel free to reach out via email as well!
Reposted by Sam Nastase
neuranna.bsky.social
As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So @neurotaha.bsky.social
built a library to easily compare design choices & model features across datasets!

We hope it will be useful to the community & plan to keep expanding it!
1/
neurotaha.bsky.social
🚨 Paper alert:
To appear in the DBM NeurIPS Workshop

LITcoder: A General-Purpose Library for Building and Comparing Encoding Models

📄 arxiv: arxiv.org/abs/2509.091...
🔗 project: litcoder-brain.github.io
Reposted by Sam Nastase
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view of various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g., generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation, and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
Reposted by Sam Nastase
markthornton.bsky.social
The psych job market may not be dead... but it is gravely injured 😬 So far it's looking like the Trump administration's attacks on higher ed/research are going to have more than 2x the impact on the job market that the covid-19 pandemic had. #psychjobs #neurojobs #academicjobs
Bar plot showing the number of psychology jobs posted each year by area. There are major dips in 2020 due to covid, and in 2025 (now).
Reposted by Sam Nastase
jungheejung.bsky.social
New Open dataset alert:
🧠 Introducing "Spacetop" – a massive multimodal fMRI dataset that bridges naturalistic and experimental neuroscience!

N = 101 x 6 hours each = 606 functional iso-hours combining movies, pain, faces, theory-of-mind and other cognitive tasks!

🧵below
Reposted by Sam Nastase
cnspworkshop.bsky.social
🚨 Just over a week left to register for the #CNSP2025 Online Workshop (details in post below)! 🚨

Link to the workshop registration form: docs.google.com/forms/d/e/1F...
Reposted by Sam Nastase
neurograce.bsky.social
The rumors are true! #CCN2026 will be held at NYU. @toddgureckis.bsky.social and I will be executive-chairing. Get in touch if you want to be involved!
Reposted by Sam Nastase
kanishka.bsky.social
I will unfortunately have to skip SCiL this year, but I am thrilled to share that Jwalanthi will be presenting this work by her, @rjha.bsky.social, me, and @kmahowald.bsky.social on a tool that allows you to project contextualized embeddings from LMs to interpretable semantic spaces!
Title page of SCIL extended abstract titled: semantic-features: A User-Friendly Tool for Studying Contextual Word Embeddings in Interpretable Semantic Spaces
Reposted by Sam Nastase
jayneuro.bsky.social
Music is an incredibly powerful retrieval cue. What is the neural basis of music-evoked memory reactivation? And how does this reactivation relate to later memory for the retrieved events? In our new study, we used Eternal Sunshine of the Spotless Mind to find out. www.biorxiv.org/content/10.1...
Music-evoked reactivation during continuous perception is associated with enhanced subsequent recall of naturalistic events
Music is a potent cue for recalling personal experiences, yet the neural basis of music-evoked memory remains elusive. We address this question by using the full-length film Eternal Sunshine of the Spotless Mind to examine how repeated musical themes reactivate previously encoded events in cortex and shape next-day recall. Participants in an fMRI study viewed either the original film (with repeated musical themes) or a no-music version. By comparing neural activity patterns between these groups, we found that music-evoked reactivation of neural patterns linked to earlier scenes in the default mode network was associated with improved subsequent recall. This relationship was specific to the music condition and persisted when we controlled for a proxy measure of initial encoding strength (spatial intersubject correlation), suggesting that music-evoked reactivation may play a role in making event memories stick that is distinct from what happens at initial encoding.
samnastase.bsky.social
Finally, we developed a set of interactive tutorials for preprocessing and running encoding models to get you started. Happy to hear any feedback or field any questions about the dataset! hassonlab.github.io/podcast-ecog...
samnastase.bsky.social
We validated both the data and stimulus features using encoding models, replicating previous findings showing an advantage for LLM embeddings.
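For readers curious what this kind of feature comparison looks like in practice, here is a minimal sketch — not the actual analysis pipeline from the paper — using simulated data: ridge encoding models are fit for two feature spaces and scored by the correlation between held-out predictions and neural activity. All array sizes, the noise levels, and the `encoding_score` helper are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Simulated stand-ins: low-dimensional "acoustic" features and
# higher-dimensional "LLM" embeddings, plus activity for one electrode
# driven partly by the LLM features. None of this is the real dataset.
n_trs = 600
acoustic = rng.standard_normal((n_trs, 10))
llm = rng.standard_normal((n_trs, 100))
weights = rng.standard_normal(100)
neural = llm @ weights + 5.0 * rng.standard_normal(n_trs)

def encoding_score(features, target, alpha=100.0):
    """Fit ridge on the first half of the time series; return the
    correlation between predictions and activity on the second half."""
    half = len(target) // 2
    model = Ridge(alpha=alpha).fit(features[:half], target[:half])
    pred = model.predict(features[half:])
    return np.corrcoef(pred, target[half:])[0, 1]

r_acoustic = encoding_score(acoustic, neural)
r_llm = encoding_score(llm, neural)
print(f"acoustic r = {r_acoustic:.2f}, LLM r = {r_llm:.2f}")
```

Because the simulated activity is built from the "LLM" features, the LLM encoding model should score markedly higher — mirroring (in toy form) the reported advantage for LLM embeddings.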
samnastase.bsky.social
We also provide word-level transcripts and stimulus features ranging from low-level acoustic features to large language model embeddings.
samnastase.bsky.social
We recorded ECoG data in nine subjects while they listened to a 30-minute story. We provide a minimally preprocessed derivative of the raw data, ready to be used.
samnastase.bsky.social
Check out Zaid's open "Podcast" ECoG dataset for natural language comprehension (w/ Hasson Lab). The paper is now out at Scientific Data (nature.com/articles/s41...) and the data are available on OpenNeuro (openneuro.org/datasets/ds0...).
samnastase.bsky.social
These findings suggest that, despite the diversity of languages, shared meaning emerges from our interactions with one another and our shared world.
samnastase.bsky.social
Our results suggest that neural representations of meaning underlying different languages are shared across speakers of various languages, and that LMs trained on different languages converge on this shared meaning.
samnastase.bsky.social
We then tested the extent to which each of these 58 languages can predict the brain activity of our participants. We found that the more similar a language is to the listener's native language, the better the prediction:
samnastase.bsky.social
What about multilingual models? We translated the story from English to 57 other languages spanning 14 families, and extracted embeddings for each from multilingual BERT. We visualized the dissimilarity matrix using MDS and found clusters corresponding to language family types.
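As a rough sketch of the visualization step described above — assuming a precomputed dissimilarity matrix, with random vectors standing in for the actual multilingual BERT embeddings — one could project the 58 languages into 2D with metric MDS:

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

# Stand-in for mean-pooled multilingual BERT embeddings:
# one 768-dimensional vector per language (58 languages in the post).
n_languages, dim = 58, 768
embeddings = rng.standard_normal((n_languages, dim))

# Pairwise dissimilarity between language embeddings
# (correlation distance is a common choice; the paper's exact
# metric is not specified here).
dissim = squareform(pdist(embeddings, metric="correlation"))

# Project the dissimilarity matrix into 2D with metric MDS;
# clusters in this plane would correspond to language families.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
print(coords.shape)
```

With real embeddings of the same story in 58 languages, nearby points in the MDS plane would be languages whose embeddings are most similar.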
samnastase.bsky.social
We found that models trained to predict neural activity for one language generalize to different subjects listening to the same content in a different language, across high-level language and default-mode regions.
samnastase.bsky.social
We then used the encoding models trained on one language to predict the neural activity in listeners of other languages.
samnastase.bsky.social
We then asked whether a similar shared space exists in the brains of native speakers of the three different languages. We used voxelwise encoding models that align the LM embeddings with brain activity from one group of subjects listening to the story in their native language.
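To make the cross-language generalization logic concrete, here is a toy sketch under strong simplifying assumptions: a shared latent "meaning" time course drives both the stimulus embeddings and the voxel responses of two listener groups, a ridge encoding model is trained on group A, and its held-out predictions are correlated with group B's activity. The dimensions, noise levels, and single shared embedding space are all illustrative, not the paper's actual setup.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

n_trs, n_latent, n_dim, n_voxels = 400, 20, 100, 50
meaning = rng.standard_normal((n_trs, n_latent))  # shared "meaning" time course

# Embeddings of the story (same content across languages) and a common
# cortical response profile, each corrupted by independent noise.
emb = meaning @ rng.standard_normal((n_latent, n_dim)) \
    + 0.1 * rng.standard_normal((n_trs, n_dim))
resp = meaning @ rng.standard_normal((n_latent, n_voxels))
vox_a = resp + 0.5 * rng.standard_normal((n_trs, n_voxels))  # group A listeners
vox_b = resp + 0.5 * rng.standard_normal((n_trs, n_voxels))  # group B listeners

half = n_trs // 2
model = Ridge(alpha=10.0).fit(emb[:half], vox_a[:half])  # train on group A
pred = model.predict(emb[half:])                         # held-out time points

# Voxelwise correlation between group-A-trained predictions and
# group B's activity: nonzero values indicate cross-group generalization.
r = np.array([np.corrcoef(pred[:, v], vox_b[half:, v])[0, 1]
              for v in range(n_voxels)])
print(f"mean cross-group r = {r.mean():.2f}")
```

In this toy, generalization succeeds precisely because the two groups share a latent meaning space — the same intuition the thread's encoding analyses test with real listeners of different languages.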