Cassidy Laidlaw
@cassidylaidlaw.bsky.social
PhD student at UC Berkeley studying RL and AI safety.
https://cassidylaidlaw.com
Real human users rate our AssistanceZero assistant much higher than one trained via a pretraining+SFT pipeline! And it lets people build houses while placing fewer blocks themselves than they would building alone.
April 11, 2025 at 10:17 PM
Our new RL algorithm, AssistanceZero, trains an assistant that displays emergent helpful behaviors like *active learning* and *learning from corrections*.
April 11, 2025 at 10:17 PM
In Minecraft, we use an assistance game formulation where a simulated human is given random houses to build and an AI assistant learns via RL to help the human out. The assistant can't see the goal house, so it has to predict the goal and maintain uncertainty in order to be helpful.
April 11, 2025 at 10:17 PM
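A minimal sketch of the kind of goal-uncertain assistant this formulation calls for. The class and the placeholder functions (candidate_goals, likelihood_fn, action_values_fn) are illustrative assumptions, not AssistanceZero's actual implementation.

```python
import numpy as np

# Illustrative sketch only: the goal set, likelihood model, and value function
# are hypothetical placeholders, not AssistanceZero's code.

class GoalUncertainAssistant:
    """Assistant that cannot see the goal house and must infer it."""

    def __init__(self, candidate_goals):
        # Start with a uniform prior over the houses the human might be building.
        self.goals = candidate_goals
        self.belief = np.full(len(candidate_goals), 1.0 / len(candidate_goals))

    def observe_human_action(self, human_action, likelihood_fn):
        # Bayesian update: goals that explain the human's block placements gain
        # probability; corrections (e.g. removing the assistant's blocks) shift
        # mass away from goals that contained those blocks.
        likelihoods = np.array([likelihood_fn(human_action, g) for g in self.goals])
        posterior = self.belief * likelihoods
        self.belief = posterior / posterior.sum()

    def act(self, state, actions, action_values_fn):
        # Choose the action with the highest value in expectation over the
        # current belief about the goal house.
        expected_values = [
            sum(b * action_values_fn(state, a, g) for b, g in zip(self.belief, self.goals))
            for a in actions
        ]
        return actions[int(np.argmax(expected_values))]
```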
Unlike RLHF, assistance games explicitly treat the user-assistant interaction as a two-player game in which the user knows their goal but the assistant doesn't. Assistance games model *communication* about the goal from the user to the assistant and *collaboration* between them to achieve it.
April 11, 2025 at 10:17 PM
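For reference, one common way to write this down formally (generic assistance-game notation, a sketch rather than the paper's exact definitions):

```latex
% A two-player game with a shared reward that depends on a goal parameter
% observed only by the human.
An assistance game is a two-player game
\[
  \mathcal{M} = \bigl(S,\ A^{\mathrm{H}},\ A^{\mathrm{R}},\ T,\ \Theta,\ r,\ \gamma\bigr),
\]
where at each step the human plays $a^{\mathrm{H}} \in A^{\mathrm{H}}$, the
assistant plays $a^{\mathrm{R}} \in A^{\mathrm{R}}$, and the state transitions
as $s' \sim T(\cdot \mid s, a^{\mathrm{H}}, a^{\mathrm{R}})$. Both players
receive the \emph{same} reward $r(s, a^{\mathrm{H}}, a^{\mathrm{R}}, \theta)$,
but the goal $\theta \in \Theta$ (here, the target house) is drawn from a known
prior and observed only by the human, so the assistant must infer it from the
human's behavior while helping.
```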
RLHF is great but it encourages short-term optimization: trying to solve the user's entire problem in a single response. For example, if you ask ChatGPT to "clean up some disk space," it will immediately give you a program to run without asking which files are okay to delete!
April 11, 2025 at 10:17 PM
We built an AI assistant that plays Minecraft with you.
Start building a house—it figures out what you’re doing and jumps in to help.

This assistant *wasn't* trained with RLHF. Instead, it's powered by *assistance games*, a better path forward for building AI assistants. 🧵
April 11, 2025 at 10:17 PM
Experiments show that χ² occupancy measure regularization outperforms KL action distribution regularization in all the environments we study! Our regularization scheme allows for larger improvements in true reward compared to base policies while preventing reward hacking.
December 19, 2024 at 5:17 PM
Regularization is already used to prevent reward hacking in RLHF, but our theory suggests two key changes: regularize based on occupancy measures rather than action distributions, and use χ² divergence instead of KL divergence.
December 19, 2024 at 5:17 PM
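Concretely, the objective has roughly this shape (a sketch in generic notation; the exact estimator and weighting used in the paper may differ):

```latex
% Proxy reward \tilde r, base policy \pi_0, state-action occupancy measure d_\pi.
\[
  \max_{\pi}\ \ \mathbb{E}_{(s,a)\sim d_\pi}\bigl[\tilde r(s,a)\bigr]
  \;-\; \lambda\,\chi^2\bigl(d_\pi \,\|\, d_{\pi_0}\bigr),
  \qquad
  \chi^2\bigl(d_\pi \,\|\, d_{\pi_0}\bigr)
  = \mathbb{E}_{(s,a)\sim d_{\pi_0}}\!\left[\Bigl(\tfrac{d_\pi(s,a)}{d_{\pi_0}(s,a)} - 1\Bigr)^{2}\right].
\]
% Standard RLHF instead penalizes KL(\pi(.|s) || \pi_0(.|s)) on per-state
% action distributions, which ignores how the state distribution itself shifts.
```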
Our definition also leads to a principled method for preventing reward hacking: regularize optimization to the base policy based on χ² occupancy measure divergence. We prove that this regularized objective gives a lower bound on improvement in the true reward.
December 19, 2024 at 5:17 PM
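To see why this kind of penalty can yield such a bound, here is the standard Cauchy-Schwarz-style argument in sketch form (the shape of the reasoning, not the paper's exact theorem statement):

```latex
% J_r(\pi) = E_{d_\pi}[r] is expected true reward; \varepsilon = r - \tilde r
% is the proxy's (unknown) error.
\begin{align*}
  J_{r}(\pi) - J_{r}(\pi_0)
  &= \bigl(J_{\tilde r}(\pi) - J_{\tilde r}(\pi_0)\bigr)
     + \mathbb{E}_{d_\pi}[\varepsilon] - \mathbb{E}_{d_{\pi_0}}[\varepsilon] \\
  &\ge \bigl(J_{\tilde r}(\pi) - J_{\tilde r}(\pi_0)\bigr)
     - \sqrt{\chi^2\bigl(d_\pi \,\|\, d_{\pi_0}\bigr)}\,
       \sqrt{\operatorname{Var}_{d_{\pi_0}}(\varepsilon)},
\end{align*}
% using E_{d_\pi}[\varepsilon] - E_{d_{\pi_0}}[\varepsilon]
%   = E_{d_{\pi_0}}[(d_\pi/d_{\pi_0} - 1)(\varepsilon - E_{d_{\pi_0}}[\varepsilon])]
% and Cauchy-Schwarz. If the gain in proxy reward outweighs the chi-squared
% penalty term, the true reward provably improves over the base policy.
```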
We define reward hacking as when optimizing a proxy breaks the correlation, resulting in lower true reward than the base policy. Our definition captures intuitive cases of reward hacking in realistic environments, including RLHF, traffic control, and glucose monitoring.
December 19, 2024 at 5:17 PM
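Condensed into symbols (a hedged paraphrase of the intuition above, not necessarily the paper's exact statement):

```latex
% True reward r, proxy \tilde r, base policy \pi_0, and J_r(\pi) = E_{d_\pi}[r].
\[
  \pi \text{ reward hacks relative to } \pi_0
  \quad\Longleftrightarrow\quad
  J_{\tilde r}(\pi) \ge J_{\tilde r}(\pi_0)
  \ \text{ but }\
  J_{r}(\pi) < J_{r}(\pi_0).
\]
% The optimized policy looks at least as good as the base policy under the
% proxy while actually doing worse under the true reward.
```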
We argue that a good proxy *correlates* with the true reward for states and actions sampled from some reasonable "base policy." For example, in RLHF a natural base policy is the SFT policy.
December 19, 2024 at 5:17 PM
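One natural way to make "correlates under the base policy" precise (an illustrative formalization, not necessarily the paper's exact condition):

```latex
% Correlation of the true and proxy rewards over the base policy's
% state-action occupancy measure d_{\pi_0}.
\[
  \operatorname{Corr}_{(s,a)\sim d_{\pi_0}}\!\bigl(r(s,a),\ \tilde r(s,a)\bigr) > 0 .
\]
% A proxy satisfying this is "reasonable" in the sense that, on states and
% actions the base policy actually visits, doing better under the proxy tends
% to mean doing better under the true reward.
```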
Reward hacking is when we optimize a reward function that seems reasonable, but it ceases to be a good proxy and we end up with a policy that performs poorly under the unknown "true" reward function. It's ubiquitous because real-world objectives are really hard to specify.
December 19, 2024 at 5:17 PM
When RLHFed models engage in "reward hacking," it can lead to unsafe/unwanted behavior. But there isn't a good formal definition of what this means! Our new paper provides a definition AND a method that provably prevents reward hacking in realistic settings, including RLHF. 🧵
December 19, 2024 at 5:17 PM