Ryan Truong
@heyodogo.bsky.social
ryanvt.com
Reposted by Ryan Truong
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.
January 9, 2026 at 1:27 AM
Reposted by Ryan Truong
Another fun project from @yangxiang.bsky.social. She asks the question: do people assign responsibility to personality traits in the same way that they assign responsibility to people? The answer: sort of!

osf.io/preprints/ps...
December 6, 2025 at 3:11 PM
Reposted by Ryan Truong
It’s grad school application season, and I wanted to give some public advice.

Caveats:

> These are my opinions based on my experiences; they are not secret tricks or guarantees

> They are general guidelines, not meant to cover a host of idiosyncrasies and special cases
November 6, 2025 at 2:55 PM
Reposted by Ryan Truong
How do people flexibly integrate visual & textual information to draw mental inferences about agents we've never met?

In a new paper led by @lanceying.bsky.social, we introduce a cognitive model that achieves this by synthesizing rational agent models on-the-fly -- presented at #EMNLP2025!
November 5, 2025 at 3:55 PM
Reposted by Ryan Truong
It's been 15 years since Edna Ullmann-Margalit passed away, and I keep going back to stuff she's written.

I highly recommend 'Normal Rationality', which collects her essays.

If you're looking to start, maybe look here:

bit.ly/4qk2GZS

bit.ly/46XudIV

bit.ly/4nkfLQc

bit.ly/3KUIiOy
October 16, 2025 at 7:57 PM
Reposted by Ryan Truong
arxiv.org/abs/2510.11144

"Using teacher models that answer at varying levels of abstraction, from executable action sequences to high-level subgoal descriptions, we show that lifelong learning agents benefit most from answers that are abstracted and decoupled from the current state."
How²: How to learn from procedural How-to questions
An agent facing a planning problem can use answers to how-to questions to reduce uncertainty and fill knowledge gaps, helping it solve both current and future tasks. However, their open ended nature, ...
arxiv.org
October 14, 2025 at 3:41 PM
Reposted by Ryan Truong
Q: Why did the LLM cross the road?

A: We're not sure, but it achieved 94.7% on CHIKENBench-Large
September 29, 2025 at 1:00 PM
Reposted by Ryan Truong
Does predictive coding work in SPACE or in TIME? Most neuroscientists assume TIME, i.e. neurons predict their future sensory inputs. We show that in visual cortex predictive coding actually works across SPACE, just like the original Rao+Ballard theory #neuroscience
www.biorxiv.org/cgi/content/...
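For anyone unfamiliar with the contrast, here is a minimal sketch of the Rao & Ballard-style idea that prediction runs across space (higher-level units predicting neighboring lower-level activity at the same moment) rather than forward in time; the toy dimensions and names are illustrative, not taken from the paper.

```python
# Toy sketch of spatial predictive coding (illustrative, not the paper's model):
# higher-level units predict lower-level activity across SPACE at one time point,
# and the latent estimate is updated to reduce the spatial prediction error.
import numpy as np

rng = np.random.default_rng(0)
n_input, n_latent = 16, 4                            # lower-level units, higher-level units
W = rng.normal(scale=0.1, size=(n_input, n_latent))  # generative weights: latent -> spatial prediction

def infer_latent(x, W, steps=50, lr=0.1):
    """Estimate latent causes r such that W @ r predicts the spatial input x."""
    r = np.zeros(W.shape[1])
    for _ in range(steps):
        error = x - W @ r        # spatial prediction error, one value per input unit
        r += lr * W.T @ error    # gradient step that reduces the spatial error
    return r, error

x = rng.normal(size=n_input)     # a single spatial input pattern (one time point)
r, residual = infer_latent(x, W)
print(np.linalg.norm(residual))  # remaining spatial prediction error
```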
September 22, 2025 at 7:09 PM
Reposted by Ryan Truong
🚨Our preprint is online!🚨

www.biorxiv.org/content/10.1...

How do #dopamine neurons perform the key calculations in reinforcement #learning?

Read on to find out more! 🧵
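For context (standard textbook material, not a claim about this preprint's specific model), the calculation dopamine firing is most often compared against is the temporal-difference reward prediction error, sketched below.

```python
# Background sketch: temporal-difference (TD) learning, where delta is the
# reward prediction error classically linked to dopamine. Illustrative only;
# not the preprint's model.
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)                     # learned state values

def td_update(s, r, s_next):
    """One TD update; returns the reward prediction error delta."""
    delta = r + gamma * V[s_next] - V[s]   # better or worse than expected?
    V[s] += alpha * delta
    return delta

# Toy episode structure: a 5-state chain with reward on the final transition.
for _ in range(200):
    for s in range(n_states - 1):
        r = 1.0 if s == n_states - 2 else 0.0
        td_update(s, r, s + 1)
print(np.round(V, 2))                      # values propagate backward from the reward
```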
September 19, 2025 at 1:05 PM
Reposted by Ryan Truong
Belated update #2: my year at Meta FAIR through the AIM program was so nice that I’m sticking around for the long haul.

I’m excited to stay at FAIR and work with @asli-celikyilmaz.bsky.social and friends on fun LLM questions; I’ll be working from the New York office, so we’re staying in New York.
September 19, 2025 at 5:27 PM
Reposted by Ryan Truong
Now out in Cognition, work with the great @gershbrain.bsky.social @tobigerstenberg.bsky.social on formalizing self-handicapping as rational signaling!
📃 authors.elsevier.com/a/1lo8f2Hx2-...
September 19, 2025 at 3:46 AM
Reposted by Ryan Truong
Our NeurIPS submission arxiv.org/abs/2502.08938 did not get in, but it's one of my favorite papers and I think one of the better papers we've ever put out so I want to highlight it
Reevaluating Policy Gradient Methods for Imperfect-Information Games
In the past decade, motivated by the putative failure of naive self-play deep reinforcement learning (DRL) in adversarial imperfect-information games, researchers have developed numerous DRL algorithm...
arxiv.org
September 18, 2025 at 8:35 PM
Reposted by Ryan Truong
Can’t afford therapy. I was talking to my neighbor’s cat just so I could pretend, but they changed the locks.
September 18, 2025 at 3:58 AM
Reposted by Ryan Truong
Excited to share a new preprint based on my work this past year:

**TreeIRL** is a novel planner that combines classical search with learning-based methods to achieve state-of-the-art performance in simulation and in **real-world autonomous driving**! 🚘 🤖 🚀
September 18, 2025 at 3:39 PM
Reposted by Ryan Truong
🚨New paper out w/ @gershbrain.bsky.social & @fierycushman.bsky.social from my time @Harvard!

Humans are capable of sophisticated theory of mind, but when do we use it?

We formalize & document a new cognitive shortcut, belief neglect: inferring others' preferences as if their beliefs were correct 🧵
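A toy illustration of the distinction (hypothetical scenario and probabilities, not the paper's task or model): full theory of mind marginalizes over what the other agent might believe, whereas belief neglect scores preferences as if the agent's belief matched the true state of the world.

```python
# Illustrative only: full theory of mind vs. the "belief neglect" shortcut.
beliefs = ["coffee_at_cafe_A", "coffee_at_cafe_B"]   # what the agent might think
prefs = ["wants_coffee", "wants_tea"]
true_world = "coffee_at_cafe_A"
observed_action = "go_to_cafe_B"

def likelihood(action, pref, belief):
    """P(action | preference, belief) for a made-up two-cafe world."""
    if pref == "wants_coffee":
        target = "go_to_cafe_A" if belief.endswith("A") else "go_to_cafe_B"
    else:
        target = "stay_home"
    return 0.9 if action == target else 0.05

# Full ToM: average over possible (possibly false) beliefs.
full_tom = {p: sum(likelihood(observed_action, p, b) for b in beliefs) / len(beliefs)
            for p in prefs}

# Belief neglect: assume the agent believes the true world state.
neglect = {p: likelihood(observed_action, p, true_world) for p in prefs}

print("full ToM:", full_tom)   # wants_coffee stays plausible (a false belief explains the action)
print("neglect:", neglect)     # wants_coffee loses its advantage under the shortcut
```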
September 17, 2025 at 12:58 AM
Reposted by Ryan Truong
🚨 NEW PREPRINT: Multimodal inference through mental simulation.

We examine how people figure out what happened by combining visual and auditory evidence through mental simulation.

Paper: osf.io/preprints/ps...
Code: github.com/cicl-stanfor...
September 16, 2025 at 7:04 PM
Reposted by Ryan Truong
Very exciting preprint from Dan Yamins' NeuroAI lab, proposing Probabilistic Structure Integration (PSI), a way to bootstrap from pixels to higher-level visual abstractions through a kind of visual prompting. One of the deepest and most original ideas I've read in a while.

arxiv.org/abs/2509.09737
World Modeling with Probabilistic Structure Integration
We present Probabilistic Structure Integration (PSI), a system for learning richly controllable and flexibly promptable world models from data. PSI consists of a three-step cycle. The first step, Prob...
arxiv.org
September 15, 2025 at 1:45 PM
Reposted by Ryan Truong
thinking about the time someone tried to impersonate our Department Chair on July 4th
August 26, 2025 at 12:51 PM
Reposted by Ryan Truong
"Everyone agrees that emergence is important, but they don’t agree on what the word should mean" arxiv.org/abs/2410.15468
What Emergence Can Possibly Mean
We consider emergence from the perspective of dynamics: states of a system evolving with time. We focus on the role of a decomposition of wholes into parts, and attempt to characterize relationships b...
arxiv.org
August 20, 2025 at 5:33 PM
Reposted by Ryan Truong
@gershbrain.bsky.social and I have a new paper in PLOS Comp Bio!

We study how two cognitive constraints—action consideration set size & policy complexity—interact in context-dependent decision making, and how humans exploit their synergy to reduce behavioral suboptimality.

osf.io/preprints/ps...
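For readers who want the standard formal handle on "policy complexity" (a common definition in this literature; the paper's exact formalism may differ), it is usually measured as the mutual information between states and actions, as in the sketch below.

```python
# Hedged sketch: policy complexity as mutual information I(S; A) in bits.
# Standard definition; not necessarily the paper's exact formalism.
import numpy as np

def policy_complexity(p_state, policy):
    """I(S; A) for a state distribution p(s) and row-stochastic policy pi(a|s)."""
    p_sa = p_state[:, None] * policy       # joint distribution p(s, a)
    p_a = p_sa.sum(axis=0)                 # marginal p(a)
    ratio = np.where(p_sa > 0, p_sa / (p_state[:, None] * p_a), 1.0)
    return float(np.sum(np.where(p_sa > 0, p_sa * np.log2(ratio), 0.0)))

p_state = np.array([0.5, 0.5])
state_dependent = np.array([[0.9, 0.1], [0.1, 0.9]])  # action tracks the state: complex
state_blind     = np.array([[0.5, 0.5], [0.5, 0.5]])  # same policy everywhere: zero complexity
print(policy_complexity(p_state, state_dependent))    # ~0.53 bits
print(policy_complexity(p_state, state_blind))        # 0.0 bits
```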
August 19, 2025 at 3:56 AM
Reposted by Ryan Truong
If you work on artificial or natural intelligence and are finishing your PhD, consider applying for a Kempner research fellowship at Harvard:
kempnerinstitute.harvard.edu/kempner-inst...
Kempner Research Fellowship - Kempner Institute
The Kempner brings leading, early-stage postdoctoral scientists to Harvard to work on projects that advance the fundamental understanding of intelligence.
kempnerinstitute.harvard.edu
August 18, 2025 at 5:27 PM
Reposted by Ryan Truong
This is unfortunate:
www.nytimes.com/2025/07/28/u...
I wrote to the president of Harvard. I hope other faculty will speak their conscience, even if it means more struggle ahead.
July 29, 2025 at 9:54 AM
Reposted by Ryan Truong
📢🚨I’m elated to share that I’ll be starting as a tenure-track 𝗔𝘀𝘀𝗶𝘀𝘁𝗮𝗻𝘁 𝗧𝗲𝗮𝗰𝗵𝗶𝗻𝗴 𝗣𝗿𝗼𝗳𝗲𝘀𝘀𝗼𝗿 👩🏻‍🏫 in the Department of Cognitive Science at UC San Diego this July! ☀️ @ucsandiego.bsky.social 1/

#ucsd #newprofessor #womeninSTEM
June 25, 2025 at 11:54 AM