Kunal Jha
@kjha02.bsky.social
97 followers 350 following 31 posts
CS PhD Student @University of Washington, CSxPhilosophy @Dartmouth College. Interested in MARL, Social Reasoning, and Collective Decision Making in people, machines, and other organisms. kjha02.github.io
kjha02.bsky.social
The big takeaway: framing behavior prediction as a program synthesis problem is an accurate, scalable, and efficient path to human-compatible AI!

It allows multi-agent systems to rapidly and accurately anticipate others' actions for more effective collaboration.
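A minimal, self-contained sketch of what that buys a collaborator (toy names, not the paper's code): once the partner is an executable program, anticipating their actions is just calling a function.

```python
def partner_program(state):
    """Toy inferred 'script': move right until x = 5, then wait."""
    return "right" if state["x"] < 5 else "wait"

def step(state, action):
    """Toy deterministic world model, used only for this illustration."""
    return {"x": state["x"] + 1} if action == "right" else dict(state)

def forecast(program, state, horizon=5):
    """Roll the inferred program forward to anticipate the partner's moves."""
    actions = []
    for _ in range(horizon):
        a = program(state)  # a cheap function call, no model query
        actions.append(a)
        state = step(state, a)
    return actions

print(forecast(partner_program, {"x": 2}))
# ['right', 'right', 'right', 'wait', 'wait'] -- a collaborator can plan around these
```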
kjha02.bsky.social
ROTE doesn’t sacrifice accuracy for speed!

While initial program generation takes time, the inferred code can be executed rapidly, making it orders of magnitude more efficient than other LLM-based methods for long-horizon predictions.
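A rough illustration of the amortization argument (made-up timings and illustrative names, not the paper's benchmark): pay for one slow generation call, then every prediction is ordinary Python.

```python
import time

def synthesize_program(trajectory):
    """Stands in for the one slow LLM call that writes the program."""
    time.sleep(2.0)  # pretend generation takes ~2 s
    return lambda state: "right" if state < 5 else "wait"

t0 = time.time()
program = synthesize_program([("s0", "right"), ("s1", "right")])
print(f"one-time synthesis: {time.time() - t0:.2f} s")

t0 = time.time()
predictions = [program(s % 10) for s in range(100_000)]  # 100k-step rollout
print(f"{len(predictions)} predictions: {time.time() - t0:.4f} s")
# the long-horizon rollout costs microseconds per step, orders of magnitude
# cheaper than querying a model at every timestep
```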
kjha02.bsky.social
What explains this performance gap? ROTE handles complexity better. It excels at intricate tasks like cleaning and interacting with objects (e.g., turning items on/off) in Partnr, while baselines succeeded only at simpler navigation and object manipulation.
kjha02.bsky.social
We scaled up to the embodied robotics simulator Partnr, a complex, partially observable environment with goal-directed LLM agents.

ROTE still significantly outperformed all LLM-based and behavior cloning baselines for high-level action prediction in this domain!
kjha02.bsky.social
A key strength of code: zero-shot generalization.

Programs inferred in one environment transfer to new settings more effectively than any baseline: ROTE's learned programs carry over without re-incurring the cost of text generation.
kjha02.bsky.social
Can scripts model nuanced, real human behavior?

We collected human gameplay data and found ROTE not only outperformed all baselines but also achieved human-level performance when predicting the trajectories of real people!
kjha02.bsky.social
Introducing ROTE (Representing Others’ Trajectories as Executables)!

We use LLMs to generate Python programs 💻 that model observed behavior, then use Bayesian inference to select the most likely ones. The result: a dynamic, composable, and analyzable predictive representation!
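As we read it, the selection step looks roughly like this (a hedged sketch with a toy noise model; all names and numbers are illustrative, not the paper's code):

```python
import math

def log_likelihood(program, trajectory, eps=0.05):
    """P(actions | program): epsilon-noisy agreement with each observed action."""
    total = 0.0
    for state, action in trajectory:
        total += math.log(1 - eps) if program(state) == action else math.log(eps)
    return total

def select_program(candidates, trajectory):
    """With a uniform prior over candidates, the MAP program maximizes likelihood."""
    return max(candidates, key=lambda p: log_likelihood(p, trajectory))

observed = [(1, "right"), (3, "right"), (6, "wait")]
candidates = [  # stand-ins for LLM-generated programs
    lambda s: "right",                       # always move right
    lambda s: "right" if s < 5 else "wait",  # move right, then wait
]
best = select_program(candidates, observed)
print(best(7))  # 'wait' -- the threshold script explains the data best
```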
kjha02.bsky.social
Traditional AI is stuck! Predicting behavior is either brittle (Behavior Cloning) or painfully slow, requiring endless belief-space enumeration (Inverse Planning).

How can we avoid mental state dualism while building scalable, robust predictive models?
kjha02.bsky.social
Forget modeling every belief and goal! What if we represented people as following simple scripts instead (e.g., "cross the crosswalk")?

Our new paper shows AI which models others’ minds as Python code 💻 can quickly and accurately predict human behavior!

shorturl.at/siUYI
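To make the "script" idea concrete, here's a toy example of ours (not from the paper): the pedestrian is represented directly as a short program rather than a stack of beliefs and goals.

```python
def cross_the_crosswalk(state):
    """Hypothetical script: wait for the walk signal, then cross."""
    if not state["walk_signal"]:
        return "wait"
    if state["position"] < state["far_curb"]:
        return "step_forward"
    return "done"

print(cross_the_crosswalk({"walk_signal": True, "position": 0, "far_curb": 4}))
# -> 'step_forward'
```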
Reposted by Kunal Jha
claireyang.bsky.social
Still catching up on my notes after my first #cogsci2025, but I'm so grateful for all the conversations and new friends and connections! I presented my poster "When Empowerment Disempowers" -- if we didn't get the chance to chat or you would like to chat more, please reach out!
Person standing next to poster titled "When Empowerment Disempowers"
Reposted by Kunal Jha
maxkw.bsky.social
Our new paper is out in PNAS: "Evolving general cooperation with a Bayesian theory of mind"!

Humans are the ultimate cooperators. We coordinate on a scale and scope no other species (nor AI) can match. What makes this possible? 🧵

www.pnas.org/doi/10.1073/...
Evolving general cooperation with a Bayesian theory of mind | PNAS
Theories of the evolution of cooperation through reciprocity explain how unrelated self-interested individuals can accomplish more together than th...
kjha02.bsky.social
Really pumped for my Oral presentation on this work today!!! Come check out the RL session from 3:30-4:30pm in West Ballroom B

You can also swing by our poster from 4:30-7pm in West Exhibition Hall B2-B3 # W-713

See you all there!
kjha02.bsky.social
I'll be at ICML next week! If anyone wants to chat about single/multi-agent RL, continual learning, cognitive science, or something else, shoot me a message!!!
kjha02.bsky.social
Oral @icmlconf.bsky.social !!! Can't wait to share our work and hear the community's thoughts on it, should be a fun talk!

Can't thank my collaborators enough: @cogscikid.bsky.social @liangyanchenggg @simon-du.bsky.social @maxkw.bsky.social @natashajaques.bsky.social
kjha02.bsky.social
Our new paper (first one of my PhD!) on cooperative AI reveals a surprising insight: Environment Diversity > Partner Diversity.

Agents trained in self-play across many environments learn cooperative norms that transfer to humans on novel tasks.

shorturl.at/fqsNN
kjha02.bsky.social
The big takeaway: Environment diversity > Partner diversity

Training across diverse tasks teaches agents how to cooperate, not just whom to cooperate with. This enables zero-shot coordination with novel partners in novel environments, a critical step toward human-compatible AI.
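A hedged sketch of that recipe (the Env/Agent stubs below are placeholders to make the loop concrete, not the paper's code): one agent plays every seat, and a fresh layout is drawn each episode.

```python
import random

class Env:
    def __init__(self, layout): self.layout, self.t = layout, 0
    def reset(self): self.t = 0; return [self.layout, self.layout]  # obs per seat
    def step(self, actions):
        self.t += 1
        reward = 1.0 if actions[0] == actions[1] else 0.0  # toy coordination task
        return [self.layout, self.layout], reward, self.t >= 10

class Agent:
    def act(self, obs): return random.choice(["a", "b"])
    def update(self, obs, reward): pass  # learning step omitted

agent = Agent()
layouts = ["kitchen_1", "kitchen_2", "warehouse"]  # many environments...
for episode in range(1000):
    env = Env(random.choice(layouts))              # ...sampled per episode
    obs, done = env.reset(), False
    while not done:
        actions = [agent.act(o) for o in obs]      # same agent in both seats
        obs, reward, done = env.step(actions)
        agent.update(obs, reward)
```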
kjha02.bsky.social
Our work used NiceWebRL, a Python-based package we helped develop for evaluating Human, Human-AI, and Human-Human gameplay on Jax-based RL environments!

This tool makes crowdsourcing data for CS and CogSci studies easier than ever!

Learn more: github.com/wcarvalho/ni...
GitHub - wcarvalho/nicewebrl: Python library for easily making web Apps to compare humans and AI
kjha02.bsky.social
Why do humans prefer CEC agents? They collide less and adapt better to human behavior.
This increased adaptability reflects general norms for cooperation learned across many environments, not just memorized strategies.
kjha02.bsky.social
Human studies confirm our findings! CEC agents achieve higher success rates with human partners than population-based methods like FCP, and are rated qualitatively better to collaborate with than the SOTA approach (E3T), despite never having seen the level during training.
kjha02.bsky.social
Using empirical game-theoretic analysis, we show CEC agents emerge as the dominant strategy in a population of different agent types during ad-hoc teamplay!

When diverse agents must collaborate, the CEC-trained agents are selected for their adaptability and cooperative skills.
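For intuition, here's a standard replicator-dynamics calculation of the kind such empirical game-theoretic analyses use (the payoff numbers are invented for illustration; only the method is real): a strategy whose population share grows from a mixed start is the one selection favors.

```python
import numpy as np

# Hypothetical empirical payoff matrix: rows/cols = (CEC, FCP, E3T),
# entry [i, j] = mean return of strategy i when paired with strategy j.
payoffs = np.array([
    [0.9, 0.7, 0.7],
    [0.6, 0.8, 0.5],
    [0.6, 0.5, 0.8],
])

x = np.array([1 / 3, 1 / 3, 1 / 3])  # start from a uniform population
for _ in range(200):
    fitness = payoffs @ x             # expected payoff of each strategy
    x = x * fitness / (x @ fitness)   # replicator update: fitter strategies grow
print(dict(zip(["CEC", "FCP", "E3T"], x.round(3))))
# CEC's share -> 1.0: it dominates the evolving population
```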
kjha02.bsky.social
The result? CEC agents significantly outperform baselines when collaborating zero-shot with novel partners on novel environments.

Even more impressive: CEC agents outperform methods that were specifically trained on the test environment but struggle to adapt to new partners!