Jennifer Hu
@jennhu.bsky.social
Asst Prof at Johns Hopkins Cognitive Science • Director of the Group for Language and Intelligence (GLINT) ✨ • Interested in all things language, cognition, and AI

jennhu.github.io
Pinned
Interested in doing a PhD at the intersection of human and machine cognition? ✨ I'm recruiting students for Fall 2026! ✨

Topics of interest include pragmatics, metacognition, reasoning, & interpretability (in humans and AI).

Check out JHU's mentoring program (due 11/15) for help with your SoP 👇
The department of Cognitive Science @jhu.edu is seeking motivated students interested in joining our interdisciplinary PhD program! Applications due 1 Dec

Our PhD students also run an application mentoring program for prospective students. Mentoring requests due November 15.

tinyurl.com/2nrn4jf9
Yeah exactly -- @kanishka.bsky.social in examples like yours above, if we assume that g=1 and those strings aren't likely to be ungrammatical realizations of some other messages, then diffs in p(string) will reflect diffs in p(m). Which is what we want, no?
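
(Spelling that out in the framework's notation, on my reading: if each string s is the grammatical realization of a single message m(s), and its probability as an ungrammatical realization of other messages is negligible, then p(s) ≈ p(m(s)) · p(g=1 | m(s)) · p(s | m(s), g=1). If the last two factors are roughly matched across strings, p(s1)/p(s2) ≈ p(m(s1))/p(m(s2)), i.e., diffs in string probability track diffs in message probability.)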
November 11, 2025 at 4:17 PM
This work was done with an amazing team: @wegotlieb.bsky.social, @siyuansong.bsky.social, @kmahowald.bsky.social, @rplevy.bsky.social

Preprint (pre-TACL version): arxiv.org/abs/2510.16227

10/10
November 10, 2025 at 10:11 PM
Our work also raises new Qs. If LMs virtually always produce grammatical strings, then why is there so much overlap between the probs assigned to grammatical/ungrammatical strings?

This connects to tensions btwn language generation/identification (e.g., openreview.net/forum?id=FGT...)
9/10
November 10, 2025 at 10:11 PM
An offshoot of our analysis: if you use minimal pairs that are not tightly controlled, you risk underestimating the grammatical competence of models, due to differences in underlying message probabilities. 8/10
November 10, 2025 at 10:11 PM
As mentioned above, Prediction #3 shows that the overlap in probabilities across gram/ungram strings, which critics have recently pointed to, should NOT be interpreted as a failure of probability to tell us about grammaticality.

This overlap is to be expected if prob is influenced by factors other than gram. 7/10
November 10, 2025 at 10:11 PM
We use our framework to derive 3 predictions, which we validate empirically:

1. Correlation btwn the probs of the two strings within minimal pairs

2. Correlation btwn LMs’ and humans’ deltas within minimal pairs

3. Poor separation btwn the probs of unpaired grammatical and ungrammatical strings (toy simulation below)

6/10
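
A toy simulation of prediction 3, under assumptions of my own (Zipfian message probabilities, one string per (m, g), constant P(g=0)), shows why the unpaired distributions overlap:

p_g0 = 0.05                                    # assumed rate of ungrammatical realization
weights = [r ** -1.5 for r in range(1, 1001)]  # Zipf-like message frequencies (assumption)
Z = sum(weights)
p_m = [w / Z for w in weights]

p_gram = [pm * (1 - p_g0) for pm in p_m]    # grammatical realization of each message
p_ungram = [pm * p_g0 for pm in p_m]        # ungrammatical realization of each message

# Fraction of unpaired (ungrammatical, grammatical) comparisons in which the
# ungrammatical string is MORE probable -- far from zero, hence poor separation:
wins = sum(u > g for u in p_ungram for g in p_gram)
print(wins / (len(p_ungram) * len(p_gram)))  # ~0.07 under these toy assumptions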
November 10, 2025 at 10:11 PM
In other words, when messages aren’t controlled for, gram strings won't always be more probable than ungram strings.

This phenomenon has previously been used to argue that probability is a bad tool for measuring grammatical knowledge -- but in fact, it follows directly from our framework! 5/10
November 10, 2025 at 10:11 PM
Minimal pairs are pairs of strings with the same underlying m but different values of g.

Good LMs have low P(g=0), so they prefer the grammatical string in the minimal pair.

But for non-minimal string pairs with different underlying messages, differences in P(m) can overwhelm even good LMs. 4/10
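
A toy numerical version of that point (all numbers hypothetical, and treating each (m, g) as having a single realization for simplicity):

p_g0 = 0.01                          # a good LM: low probability of ungrammatical realization
p_m_common, p_m_rare = 0.10, 1e-4    # message probabilities can differ by orders of magnitude

# Minimal pair: same message, different g -> the grammatical string wins.
assert p_m_common * (1 - p_g0) > p_m_common * p_g0  # 0.099 > 0.001

# Non-minimal pair: an ungrammatical realization of a common message can
# outscore a grammatical realization of a rare one.
assert p_m_common * p_g0 > p_m_rare * (1 - p_g0)    # 0.001 > ~0.000099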
November 10, 2025 at 10:11 PM
Returning to first principles:

In our framework, the probability of a string comes from two latent variables: m, the message to be conveyed; and g, whether the message is realized grammatically.

Ungrammatical strings get probability mass when g=0: the message is not realized grammatically. 3/10
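
In symbols (my gloss; notation may differ from the paper):

p(s) = \sum_m \sum_{g \in \{0,1\}} p(m) \, p(g \mid m) \, p(s \mid m, g)

so an ungrammatical string still receives mass through the g=0 branch whenever its underlying message m is probable.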
November 10, 2025 at 10:11 PM
Here we develop and give evidence for a formal framework that reconciles these two observations.

Our framework provides theoretical justification for the widespread practice of using *minimal pairs* to test what grammatical generalizations LMs have acquired. 2/10
November 10, 2025 at 10:11 PM
New work to appear @ TACL!

Language models (LMs) are remarkably good at generating novel well-formed sentences, leading to claims that they have mastered grammar.

Yet they often assign higher probability to ungrammatical strings than to grammatical strings.

How can both things be true? 🧵👇
November 10, 2025 at 10:11 PM
Reposted by Jennifer Hu
It’s grad school application season, and I wanted to give some public advice.

Caveats:

> These are my opinions, based on my experiences; they are not secret tricks or guarantees

> They are general guidelines, not meant to cover a host of idiosyncrasies and special cases
November 6, 2025 at 2:55 PM
Reposted by Jennifer Hu
New preprint!

"Non-commitment in mental imagery is distinct from perceptual inattention, and supports hierarchical scene construction"

(by Li, Hammond, & me)

link: doi.org/10.31234/osf...

-- the title's a bit of a mouthful, but the nice thing is that it's a pretty decent summary
October 14, 2025 at 1:22 PM
At #COLM2025 and would love to chat all things cogsci, LMs, & interpretability 🍁🥯 I'm also recruiting!

👉 I'm presenting at two workshops (PragLM, Visions) on Fri

👉 Also check out "Language Models Fail to Introspect About Their Knowledge of Language" (presented by @siyuansong.bsky.social Tue 11-1)
October 7, 2025 at 1:39 AM
Can AI models introspect? What does introspection even mean for AI?

We revisit a recent proposal by Comșa & Shanahan, and provide new experiments + an alternate definition of introspection.

Check out this new work w/ @siyuansong.bsky.social, @harveylederman.bsky.social, & @kmahowald.bsky.social 👇
How reliable is what an AI says about itself? The answer depends on whether models can introspect. But if an LLM says its temperature parameter is high (and it is!)… does that mean it's introspecting? Surprisingly tricky to pin down. Our paper: arxiv.org/abs/2508.14802 (1/n)
August 26, 2025 at 5:59 PM
Due to popular demand, we are extending the CogInterp submission deadline again! 🗓️🥳

Submit by *8/27* (midnight AoE)
Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025! 📣

How can we interpret the algorithms and representations underlying complex behavior in deep learning models?

🌐 coginterp.github.io/neurips2025/

1/4
August 22, 2025 at 12:53 PM
🗓️ The submission deadline for CogInterp @ NeurIPS has officially been *extended* to 8/22 (AoE)! 👇

Looking forward to seeing your submissions!
Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025! 📣

How can we interpret the algorithms and representations underlying complex behavior in deep learning models?

🌐 coginterp.github.io/neurips2025/

1/4
August 14, 2025 at 1:22 PM
Heading to CogSci this week! ✈️

Find me giving talks on:
💬 Production-comprehension asymmetry in children and LMs (Thu 7/31)
💬 How people make sense of nonsense (Sat 8/2)

📣 Also, I’m recruiting grad students + postdocs for my new lab at Hopkins! 📣

If you’re interested in language / cognition / AI, let’s chat! 😄
July 28, 2025 at 4:04 PM
Join us at NeurIPS in San Diego this December for talks by experts in the field, including James McClelland, @cgpotts.bsky.social, @scychan.bsky.social, @ari-holtzman.bsky.social, @mtoneva.bsky.social, & @sydneylevine.bsky.social!

🗓️ Submit your 4-page paper (non-archival) by August 15!

4/4
July 16, 2025 at 1:08 PM
We're bringing together researchers in fields such as machine learning, psychology, linguistics, and neuroscience to discuss new empirical findings + theories that help us interpret high-level cognitive abilities in deep learning models.

3/4
July 16, 2025 at 1:08 PM
Deep learning models (e.g. LLMs) show impressive abilities. But what generalizations have these models acquired? What algorithms underlie model behaviors? And how do these abilities develop?

Cognitive science offers a rich body of theories and frameworks which can help answer these questions.

2/4
July 16, 2025 at 1:08 PM
Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025! 📣

How can we interpret the algorithms and representations underlying complex behavior in deep learning models?

🌐 coginterp.github.io/neurips2025/

1/4
July 16, 2025 at 1:08 PM
Reposted by Jennifer Hu
Happy to announce the first workshop on Pragmatic Reasoning in Language Models — PragLM @ COLM 2025! 🎉
How do LLMs engage in pragmatic reasoning, and what core pragmatic capacities remain beyond their reach?
🌐 sites.google.com/berkeley.edu/praglm/
📅 Submit by June 23rd
May 28, 2025 at 6:21 PM
Preprint link: arxiv.org/abs/2504.14107

A huge thank you to my amazing collaborators Michael Lepori (@michael-lepori.bsky.social) & Michael Franke (@meanwhileina.bsky.social)!

(12/12)
Signatures of human-like processing in Transformer forward passes
May 20, 2025 at 2:26 PM