Rachel Ryskin
@ryskin.bsky.social
Cognitive scientist @ UC Merced
http://raryskin.github.io
PI of Language, Interaction, & Cognition (LInC) lab: http://linclab0.github.io
Pinned
Does our "semantic space" get stuck in the past as we age?

New work by @ellscain.bsky.social uses historical embeddings + behavioral data to show we are truly lifelong learners.

Older adults don't rely on historical meanings—they update them to match current language! 🧠✨

doi.org/10.1162/OPMI...
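For context, the core move behind analyses with historical embeddings is to align decade-specific vector spaces and then compare a word's position across decades. A minimal sketch under that assumption (the vocabulary and vectors below are hypothetical stand-ins, not the paper's data or pipeline):

# Illustrative sketch (not the paper's pipeline): compare a word's vector across
# two decade-specific embedding spaces after orthogonal Procrustes alignment,
# in the spirit of standard diachronic-embedding analyses.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
vocab = ["broadcast", "mouse", "cell", "stream"]   # hypothetical shared vocabulary
emb_1950 = rng.normal(size=(len(vocab), 50))       # stand-ins for real historical embeddings
emb_2000 = rng.normal(size=(len(vocab), 50))

# Rotate the 1950s space onto the 2000s space so cross-decade cosines are meaningful.
R, _ = orthogonal_procrustes(emb_1950, emb_2000)
emb_1950_aligned = emb_1950 @ R

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words whose meaning drifted should show low cross-decade self-similarity.
for i, w in enumerate(vocab):
    print(f"{w:10s} cross-decade similarity: {cosine(emb_1950_aligned[i], emb_2000[i]):.2f}")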
Reposted by Rachel Ryskin
I always thought preschoolers were too egocentric to do well on communication tasks where they had to talk about novel referents. Old papers reported they'd say stuff like "this one looks like my uncle's hat."

@vboyce.bsky.social shows that this is wrong!

osf.io/preprints/ps...
February 12, 2026 at 11:38 PM
Reposted by Rachel Ryskin
I wrote a short article on AI Model Evaluation for the Open Encyclopedia of Cognitive Science 📕👇

Hope this is helpful for anyone who wants a super broad, beginner-friendly intro to the topic!

Thanks @mcxfrank.bsky.social and @asifamajid.bsky.social for this amazing initiative!
February 12, 2026 at 10:22 PM
Reposted by Rachel Ryskin
AI agents are becoming a serious threat to research data quality.

Today we’re rolling out Bot authenticity checks on @joinprolific.bsky.social, detecting agentic AI with 100% accuracy in testing.

Comes with a native Qualtrics integration! More info:

www.prolific.com/resources/in...
February 4, 2026 at 3:09 PM
Reposted by Rachel Ryskin
Out today! "How Does a Deep Neural Network Look at Lexical Stress in English Words?" w/ I. Allouche, I. Asael, R. Rousso, V. Dassa, A. Bradlow, S.-E. Kim & @keshet.bsky.social doi.org/10.1121/10.0... 1/
How does a deep neural network look at lexical stress in English words?
Despite their success in speech processing, neural networks often operate as black boxes, prompting the following questions: What informs their decisions, and h...
doi.org
February 11, 2026 at 2:41 PM
Reposted by Rachel Ryskin
Students need to remember the Inigo Montoya method for emails and greetings:
"Hello, my name is Inigo Montoya. You killed my father. Prepare to die."

Polite Greeting
Name
Relevant Personal Link
Manage Expectations

Keep it BRIEF.
am i the only 1 who wishes ppl would remind me that we've interacted before if it's plausible i've forgotten? I've had like 3-4 emails that seem like 'cold emails' ab research opportunities but then it turns out we've exchanged emails in the past & it's awkward to not have realized this? just me?
February 10, 2026 at 4:44 PM
Reposted by Rachel Ryskin
This study was an amazing collaborative experience. I'm really really grateful to all the wonderful people who contributed and made this happen.

It's the closest I have ever come to finding something like a "universal" in human cognition.
February 9, 2026 at 12:32 PM
Reposted by Rachel Ryskin
Recently published in @nature.com: the human brain stores what happened and the context in mostly separate neurons—binding them only when needed, which enables flexible memory (and hopefully avoids confusion) 🧪 www.nature.com/articles/s41...
Distinct neuronal populations in the human brain combine content and context - Nature
Single-neuron recordings in humans reveal largely separate content and context neurons whose coordinated activity flexibly places memory items in context.
www.nature.com
January 20, 2026 at 8:50 PM
Reposted by Rachel Ryskin
Imagination in bonobos!

I am thrilled to share a new paper w/ Amalia Bastos, out now in @science.org

We provide the first experimental evidence that a nonhuman animal can follow along with a pretend scenario & track imaginary objects. Work w/ Kanzi, the bonobo, at Ape Initiative

youtu.be/NUSHcQQz2Ko
Apes Share Human Ability to Imagine
YouTube video by Johns Hopkins University
youtu.be
February 5, 2026 at 7:18 PM
Reposted by Rachel Ryskin
How do diverse context structures reshape representations in LLMs?
In our new work, we explore this via representational straightening. We found LLMs are like a Swiss Army knife: they select different computational mechanisms reflected in different representational structures. 1/
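A common way to quantify straightening is the mean curvature of a representation trajectory: how sharply the direction of travel changes from token to token. A minimal sketch with simulated hidden states (not drawn from an actual LLM):

# Minimal sketch of a curvature-based "straightening" measure: given a sequence
# of hidden states (one vector per token), compute the mean angle between
# successive difference vectors. Straighter trajectories have lower curvature.
import numpy as np

def mean_curvature(hidden_states: np.ndarray) -> float:
    """hidden_states: (T, d) array of per-token representations."""
    diffs = np.diff(hidden_states, axis=0)                  # v_t = h_{t+1} - h_t
    diffs = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    cosines = np.sum(diffs[:-1] * diffs[1:], axis=1)        # cos(angle between v_t and v_{t+1})
    return float(np.mean(np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))))

rng = np.random.default_rng(0)
straight = np.cumsum(np.tile(rng.normal(size=(1, 64)), (20, 1)), axis=0)  # perfectly straight path
wiggly = np.cumsum(rng.normal(size=(20, 64)), axis=0)                     # random-walk path
print(f"straight trajectory curvature: {mean_curvature(straight):.1f} deg")
print(f"random-walk curvature:         {mean_curvature(wiggly):.1f} deg")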
February 4, 2026 at 2:54 AM
Reposted by Rachel Ryskin
The Visual Learning Lab is hiring TWO lab coordinators!

Both positions are ideal for someone looking for research experience before applying to graduate school. Application deadline is Feb 10th (approaching fast!)—with flexible summer start dates.
January 30, 2026 at 11:21 PM
Reposted by Rachel Ryskin
The cerebellum supports high-level language?? Now out in @cp-neuron.bsky.social, we systematically examined language-responsive areas of the cerebellum using precision fMRI and identified a *cerebellar satellite* of the neocortical language network!
authors.elsevier.com/a/1mUU83BtfH...
1/n 🧵👇
January 22, 2026 at 5:21 PM
Reposted by Rachel Ryskin
Interpreting EEG requires understanding how the skull smears electrical fields as they propagate from the cortex. I made a browser-based simulator for my EEG class to visualize how dipole depth/orientation change the topomap.
dbrang.github.io/EEG-Dipole-D...

Github page: github.com/dbrang/EEG-D...
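The simulator presumably implements a proper multi-compartment head model; as a back-of-the-envelope illustration of why dipole depth and orientation reshape the topomap, here is the potential of a current dipole in an infinite homogeneous conductor (no skull, a toy electrode arc, nominal conductivity, so only the geometry is meaningful):

# Simplified sketch: potential of a current dipole in an infinite homogeneous conductor,
#   V(r) = p . (r - r0) / (4 * pi * sigma * |r - r0|^3).
# A real EEG forward model adds skull/scalp compartments (which smear the field);
# this toy version only shows how depth and orientation shape the scalp pattern.
import numpy as np

sigma = 0.33                                   # conductivity (S/m), nominal brain value
theta = np.linspace(-np.pi / 2, np.pi / 2, 9)  # "electrodes" along an arc of a 9 cm head
electrodes = 0.09 * np.column_stack([np.sin(theta), np.zeros_like(theta), np.cos(theta)])

def scalp_potential(dipole_pos, dipole_moment):
    d = electrodes - dipole_pos                # vectors from dipole to each electrode
    r = np.linalg.norm(d, axis=1)
    return (d @ dipole_moment) / (4 * np.pi * sigma * r**3)

shallow_radial = scalp_potential(np.array([0, 0, 0.07]), np.array([0, 0, 1e-8]))
deep_radial    = scalp_potential(np.array([0, 0, 0.03]), np.array([0, 0, 1e-8]))
tangential     = scalp_potential(np.array([0, 0, 0.07]), np.array([1e-8, 0, 0]))

# Deeper sources give weaker, broader patterns; tangential sources flip polarity across the arc.
for name, v in [("shallow radial", shallow_radial), ("deep radial", deep_radial),
                ("tangential", tangential)]:
    print(f"{name:15s} potentials (uV): {np.round(v * 1e6, 2)}")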
January 20, 2026 at 5:00 PM
Reposted by Rachel Ryskin
New paper with @inbalarnon.bsky.social and @simonkirby.bsky.social! Learnability pressures drive the emergence of core statistical properties of language–e.g. Zipf's laws–in an iterated sequence learning experiment, with learners’ RTs indicating sensitivity to the emerging sequence information.
Cultural Transmission Promotes the Emergence of Statistical Properties That Support Language Learning
Language is passed across generations through cultural transmission. Prior experimental work, where participants reproduced sets of non-linguistic sequences in transmission chains, shows that this pr...
doi.org
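For the rank-frequency version of Zipf's law specifically, a toy check is to regress log frequency on log rank, which should give a slope near -1. Illustrative only, not the paper's analysis:

# Toy illustration: fit the Zipf exponent by regressing log(frequency) on log(rank).
from collections import Counter
import numpy as np

corpus = ("the cat sat on the mat and the dog sat by the door "
          "the cat and the dog saw the mat by the door").split()

counts = np.array(sorted(Counter(corpus).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(counts) + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
print(f"fitted Zipf exponent: {slope:.2f} (Zipf's law predicts about -1)")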
January 6, 2026 at 2:39 PM
Does our "semantic space" get stuck in the past as we age?

New work by @ellscain.bsky.social uses historical embeddings + behavioral data to show we are truly lifelong learners.

Older adults don't rely on historical meanings—they update them to match current language! 🧠✨

doi.org/10.1162/OPMI...
January 2, 2026 at 7:33 PM
Reposted by Rachel Ryskin
A quick read to start off 2026…
January 1, 2026 at 6:44 PM
Reposted by Rachel Ryskin
I may be a *little* biased but this 📘 is GREAT! If you ever found language structure interesting, but were turned off by implausible and overly complicated accounts, this book is 4U: a simple and empirically grounded account of the syntax of natural lgs. A must-read for lang researchers+aficionados!
New book! I have written a book, called Syntax: A cognitive approach, published by MIT Press.

This is open access; MIT Press will post a link soon, but until then, the book is available on my website:
tedlab.mit.edu/tedlab_websi...
tedlab.mit.edu
December 24, 2025 at 8:42 PM
Reposted by Rachel Ryskin
New book! I have written a book, called Syntax: A cognitive approach, published by MIT Press.

This is open access; MIT Press will post a link soon, but until then, the book is available on my website:
tedlab.mit.edu/tedlab_websi...
tedlab.mit.edu
December 24, 2025 at 7:55 PM
Reposted by Rachel Ryskin
New preprint on prosody in the brain!
tinyurl.com/2ndswjwu
HeeSoKim NiharikaJhingan SaraSwords @hopekean.bsky.social @coltoncasto.bsky.social JenniferCole @evfedorenko.bsky.social

Prosody areas are distinct from pitch, speech, and multiple-demand areas, and partly overlap with lang+social areas→🧵
A distinct set of brain areas process prosody--the melody of speech
Human speech carries information beyond the words themselves: pitch, loudness, duration, and pauses--jointly referred to as 'prosody'--emphasize critical words, help group words into phrases, and conv...
tinyurl.com
December 15, 2025 at 7:28 PM
Reposted by Rachel Ryskin
The Press and @openmindjournal.bsky.social are pleased to announce a partnership with Lyrasis through the Open Access Community Investment Program (OACIP).

Learn how your institution can support this initiative to continue providing the latest #cogsci research—free of charge—here: bit.ly/452nMma
The MIT Press and Open Mind partner with Lyrasis to support diamond open access publishing through the Open Access Community Investment Program
The Open Access Community Investment Program (OACIP), an innovative model for community action, will seek support for MIT Press journal Open Mind through July 2026
bit.ly
December 11, 2025 at 2:30 PM
Reposted by Rachel Ryskin
The last chapter of my PhD (expanded) is finally out as a preprint!

“Semantic reasoning takes place largely outside the language network” 🧠🧐

www.biorxiv.org/content/10.6...

What is semantic reasoning? Read on! 🧵👇
Semantic reasoning takes place largely outside the language network
The brain's language network is often implicated in the representation and manipulation of abstract semantic knowledge. However, this view is inconsistent with a large body of evidence suggesting that...
www.biorxiv.org
December 11, 2025 at 6:34 PM
Reposted by Rachel Ryskin
Using a large-scale individual differences investigation (with ~800 participants each performing an ~8-hour battery of non-literal comprehension tasks), we found that pragmatic language use fractionates into 3 components: Social conventions, intonation, and world knowledge–based causal reasoning.
December 9, 2025 at 8:10 PM
Reposted by Rachel Ryskin
A couple years (!) in the making: we’re releasing a new corpus of embodied, collaborative problem solving dialogues. We paid 36 people to play Portal 2’s co-op mode and collected their speech + game recordings.

Paper: arxiv.org/abs/2512.03381
Website: berkeley-nlp.github.io/portal-dialo...

1/n
December 5, 2025 at 6:54 PM
Reposted by Rachel Ryskin
Now out in Scientific Data, OneStop: A 360-Participant English Eye Tracking Dataset with Different Reading Regimes.

www.nature.com/articles/s41...
OneStop: A 360-Participant English Eye Tracking Dataset with Different Reading Regimes - Scientific Data
Scientific Data - OneStop: A 360-Participant English Eye Tracking Dataset with Different Reading Regimes
www.nature.com
December 5, 2025 at 6:43 AM
Reposted by Rachel Ryskin
#NeurIPS2025 Check out EyeBench 👀, a mega-project which provides a much needed infrastructure for loading & preprocessing eye-tracking for reading datasets, and addressing super exciting modeling challenges: decoding linguistic knowledge 👩 and reading interactions 👩+📖 from gaze!

eyebench.github.io
December 2, 2025 at 10:21 AM
Reposted by Rachel Ryskin
Looking forward to #NeurIPS25 this week 🏝️! I'll be presenting at Poster Session 3 (11-2 on Thursday). Feel free to reach out!
Excited to announce that I’ll be presenting a paper at #NeurIPS this year! Reach out if you’re interested in chatting about LM training dynamics, architectural differences, shortcuts/heuristics, or anything at the CogSci/NLP/AI interface in general! #Neurips2025
December 1, 2025 at 10:12 PM