Hannah Small
@hsmall.bsky.social
83 followers 110 following 9 posts
5th year PhD student in Cognitive Science at Johns Hopkins, working with Leyla Isik https://www.hannah-small.com/
Pinned
hsmall.bsky.social
Excited to share new work with @hleemasson.bsky.social, Ericka Wodka, Stewart Mostofsky and @lisik.bsky.social! We investigated how simultaneous vision and language signals are combined in the brain using naturalistic+controlled fMRI. Read the paper here: osf.io/b5p4n
1/n
Reposted by Hannah Small
gkathy.bsky.social
🚨New preprint w/ @lisik.bsky.social!
Aligning Video Models with Human Social Judgments via Behavior-Guided Fine-Tuning

We introduce a ~49k triplet social video dataset, uncover a modality gap (language > video), and close it via novel behavior-guided fine-tuning.
🔗 arxiv.org/abs/2510.01502
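As a rough illustration of what behavior-guided triplet fine-tuning of a video encoder can look like (generic PyTorch; `video_encoder`, `fine_tune_step`, and the margin value are illustrative assumptions, not the paper's actual implementation, which is described in the arXiv link above):

```python
# Generic behavior-guided triplet fine-tuning sketch (illustrative; see the arXiv
# paper above for the actual method). `video_encoder` is assumed to be any trainable
# module mapping a batch of clips to embeddings; anchor/positive/negative come from
# human similarity judgments (e.g., odd-one-out triplets).
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=0.2)  # margin chosen arbitrarily here

def fine_tune_step(video_encoder, optimizer, anchor, positive, negative):
    """One step pulling behaviorally similar clips together in embedding space."""
    optimizer.zero_grad()
    emb_a = video_encoder(anchor)    # (batch, dim)
    emb_p = video_encoder(positive)  # judged more similar to the anchor
    emb_n = video_encoder(negative)  # judged less similar (the odd one out)
    loss = triplet_loss(emb_a, emb_p, emb_n)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A low triplet loss means clips that people judged as similar end up closer in the fine-tuned embedding space than the odd-one-out clip.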
Reposted by Hannah Small
jorge-morales.bsky.social
🚨🚨🚨 The Subjectivity Lab is looking for a lab manager! The position is available immediately. We want someone who can help coordinate our large sample fMRI study, plus other behavioral work. Because *gestures at everything* the job was approved only now (ends in June 2026). Great opportunity! 🧵 1/4
Laboratory Technician
About the Opportunity SUMMARY The Subjectivity Lab, directed by Jorge Morales and housed in the Department of Psychology at Northeastern University, is excited to invite applications for a full-time L...
northeastern.wd1.myworkdayjobs.com
Reposted by Hannah Small
kosakowski.bsky.social
My lab at USC is recruiting!
1) research coordinator: perfect for a recent graduate looking for research experience before applying to PhD programs: usccareers.usc.edu REQ20167829
2) PhD students: see FAQs on lab website dornsife.usc.edu/hklab/faq/
hsmall.bsky.social
Follow-up analyses showed that both social perception and language regions were best predicted by later vision model layers, which map onto high-level social-semantic signals (valence, the presence of a social interaction, faces).
7/n
hsmall.bsky.social
Importantly, vision and language embeddings are only weakly correlated throughout the movie, suggesting that each predicts distinct variance in the neural responses.
6/n
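The post does not specify how the embedding correlation was computed; one generic way to quantify how related two TR-aligned embedding spaces are over a movie is to correlate their timepoint-by-timepoint similarity structure. A minimal sketch, assuming matrices `V` and `L` of per-timepoint vision and language embeddings (not necessarily the paper's analysis):

```python
# Illustrative sketch: similarity between two embedding spaces over the movie
# (not necessarily the paper's analysis). Assumes V: (n_TRs, d_vision) and
# L: (n_TRs, d_language) embeddings aligned to the same movie timepoints.
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def embedding_space_similarity(V, L):
    """Spearman correlation between the timepoint-by-timepoint dissimilarity
    structures of two embedding spaces; near zero = largely independent spaces."""
    rdm_v = pdist(V, metric="correlation")  # condensed pairwise dissimilarities
    rdm_l = pdist(L, metric="correlation")
    rho, _ = spearmanr(rdm_v, rdm_l)
    return rho
```

A rho near zero would be consistent with the weak correlation the post describes, i.e., the two feature spaces carry largely non-overlapping information about the movie.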
hsmall.bsky.social
We find that vision embeddings dominate prediction across cortex. Surprisingly, even language-selective regions were predicted by vision model embeddings as well as or better than by language model features.
5/n
hsmall.bsky.social
We densely labeled the vision and language features of the movie using a combination of human annotations and vision and language deep neural network (DNN) models, and linearly mapped these features to fMRI responses using an encoding model.
4/n
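A minimal sketch of this kind of linear encoding analysis (generic scikit-learn ridge regression; `X`, `Y`, the fold scheme, and the alpha grid are illustrative assumptions, not the authors' actual pipeline):

```python
# Minimal linear encoding-model sketch (illustrative; not the authors' pipeline).
# Assumes X: (n_TRs, n_features) stimulus features (annotations / DNN embeddings)
# and Y: (n_TRs, n_voxels) BOLD responses, already aligned and HRF-lagged.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def encoding_scores(X, Y, alphas=np.logspace(0, 4, 9), n_splits=5):
    """Cross-validated voxelwise prediction accuracy (Pearson r) of a ridge encoding model."""
    scores = np.zeros((n_splits, Y.shape[1]))
    for i, (train, test) in enumerate(KFold(n_splits=n_splits).split(X)):
        model = RidgeCV(alphas=alphas).fit(X[train], Y[train])
        pred = model.predict(X[test])
        # Pearson r per voxel = mean product of z-scored predicted and observed series
        pred_z = (pred - pred.mean(0)) / pred.std(0)
        obs_z = (Y[test] - Y[test].mean(0)) / Y[test].std(0)
        scores[i] = (pred_z * obs_z).mean(0)
    return scores.mean(0)  # mean r per voxel across folds
```

Fitting the same setup separately (or jointly, with grouped/banded ridge) on vision and language feature sets gives the kind of comparison the rest of the thread describes.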
hsmall.bsky.social
To address this, we collected fMRI data from 34 participants while they watched a 45-minute naturalistic audiovisual movie. Critically, we used functional localizer experiments to identify social interaction perception and language-selective regions in the same participants.
3/n
hsmall.bsky.social
Humans effortlessly extract social information from both the vision and language signals around us. However, most work (even most naturalistic fMRI encoding work) is limited to studying unimodal processing. How does the brain process simultaneous multimodal social signals?
2/n
hsmall.bsky.social
Excited to share new work with @hleemasson.bsky.social, Ericka Wodka, Stewart Mostofsky and @lisik.bsky.social! We investigated how simultaneous vision and language signals are combined in the brain using naturalistic+controlled fMRI. Read the paper here: osf.io/b5p4n
1/n
Reposted by Hannah Small
nblauch.bsky.social
What shapes the topography of high-level visual cortex?

Excited to share a new pre-print addressing this question with connectivity-constrained interactive topographic networks, titled "Retinotopic scaffolding of high-level vision", w/ Marlene Behrmann & David Plaut.

🧵 ↓ 1/n
Reposted by Hannah Small
esfinn.bsky.social
Despite everything going on, I may have funds to hire a postdoc this year 😬🤞🧑‍🔬 Open to a wide variety of possible projects in social and cognitive neuroscience. Get in touch if you are interested! Reposts appreciated.
Reposted by Hannah Small
gkathy.bsky.social
📢 Excited to announce our paper at #ICLR2025: “Modeling dynamic social vision highlights gaps between deep learning and humans”! w/ @emaliemcmahon.bsky.social, Colin Conwell, Mick Bonner, @lisik.bsky.social


📆 Thurs, Apr 24: 3:00-5:30 - Poster session 2 (#64)
‪📄 bit.ly/4jISKES%E2%8... [1/6]
Reposted by Hannah Small
emaliemcmahon.bsky.social
I am excited to share our recent preprint and the last paper of my PhD! Here, @imelizabeth.bsky.social, @lisik.bsky.social, Mick Bonner, and I investigate the spatiotemporal hierarchy of social interactions in the lateral visual stream using EEG-fMRI.

osf.io/preprints/ps...

#CogSci #EEG
Shown is an example image that participants viewed in either the EEG, fMRI, or behavioral annotation task. There is also a schematic of a regression procedure for jointly predicting fMRI responses from stimulus features and EEG activity.
Reposted by Hannah Small
chromatowski.bsky.social
This is incredibly cool: if you search for a condition that’s affected your family, the site returns stats on how much NIH has done for that disease, *and* a contact form for reaching out to tell your Members of Congress why you want to see them defend NIH.

Pass it on!
nihildev.bsky.social
Ever wonder how many lives have been saved by NIH-funded research - including your own? Enter any medical condition and instantly see how your tax dollars transformed science into survival.

www.ourhealthroi.com
Our Health ROI
Explore how your tax dollars fund life‑saving medical research.
www.ourhealthroi.com
Reposted by Hannah Small
I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...
Technical Associate I, Kanwisher Lab
MIT - Technical Associate I, Kanwisher Lab - Cambridge MA 02139
careers.peopleclick.com
Reposted by Hannah Small
scott-delaney.bsky.social
Substantial updates to the list of cancelled grants👇

- THANK YOU to all who have contributed. Crowdsourcing restores faith in humanity.

- It's still a work in progress. You'll see more updates shortly.

- There are multiple teams & efforts engaged in tracking & advocacy. More to come soon!
Rescinded NIH & NSF Grants
docs.google.com
Reposted by Hannah Small
rosalafersousa.bsky.social
As a result of Trump’s slashes to research funding, dozens of graduate programs have announced reductions and cancellations of graduate admissions slots.

If you are an impacted applicant, please fill out this survey: docs.google.com/forms/d/e/1F...

🧪🧠🧬🔬🥼👩🏼‍🔬🧑‍🔬
Grad Admission Impacts Survey
It is grad admissions season and many postbacs are feeling the chilling impacts of the Trump administration's recent executive orders freezing and slashing extramural research funding. Dozens of gradu...
docs.google.com
Reposted by Hannah Small
evfedorenko.bsky.social
Our language neuroscience lab (evlab.mit.edu) is looking for a new lab manager/FT RA to start in the summer. Apply here: tinyurl.com/3r346k66 We'll start reviewing apps in early Mar. (Unfortunately, MIT does not sponsor visas for these positions, but OPT works.)
EvLab
Our research aims to understand how the language system works and how it fits into the broader landscape of the human mind and brain.
evlab.mit.edu
Reposted by Hannah Small
apurvaratan.bsky.social
Hey Bsky friends on #neuroskyence! Very excited to share our
@iclr-conf.bsky.social paper: TopoNets! High-performing vision and language models with brain-like topography! Expertly led by grad student Mayukh and Mainak! A brief thread...
Reposted by Hannah Small
lauragwilliams.bsky.social
✨i'm hiring a lab manager, with a start date of ~September 2025! to express interest, please complete this google form: forms.gle/GLyAbuD779Rz...

looking for someone to join our multi-disciplinary team, using OPM, EEG, iEEG and computational techniques to study speech and language processing! 🧠
Reposted by Hannah Small
lisik.bsky.social
Our paper "Relational visual representations underlie human social interaction recognition" led by @manasimalik.bsky.social is now out in Nature Communications
www.nature.com/articles/s41...
Reposted by Hannah Small
lisik.bsky.social
Our paper "Hierarchical organization of social action features in the lateral visual stream" led by @emaliemcmahon.bsky.social with Mick Bonner is now out in @currentbiology.bsky.social

www.sciencedirect.com/science/arti...