David Lindner
@davidlindner.bsky.social
800 followers 110 following 31 posts
Making AI safer at Google DeepMind davidlindner.me
Posts Media Videos Starter Packs
davidlindner.bsky.social
Excited to share some technical details about our approach to scheming and deceptive alignment as outlined in Google's Frontier Safety Framework!

(1) current models are not yet capable of realistic scheming
(2) CoT monitoring is a promising mitigation for future scheming
vkrakovna.bsky.social
As models advance, a key AI safety concern is deceptive alignment / "scheming" – where AI might covertly pursue unintended goals. Our paper "Evaluating Frontier Models for Stealth and Situational Awareness" assesses whether current models can scheme. arxiv.org/abs/2505.01420
davidlindner.bsky.social
View our full paper here: arxiv.org/abs/2507.02737
View steganography transcripts here: steganography-evals-d9297ad0740c.herokuapp.com/
Github: github.com/arturzolkow...

Work done with: Artur Zolkowski, Kei Nishimura-Gasparian, Robert McCarthy and @zimmerrol.bsky.social‬ during matsprogram.org
MATS Program
www.matsprogram.org
davidlindner.bsky.social
Overall, current models have some steganographic capabilities but likely aren't strong enough to bypass well-designed monitors. Nevertheless, we believe it’s important to continue measuring these capabilities to determine when additional mitigations are needed.
davidlindner.bsky.social
We also create case studies implementing situations where models with strong steganographic capabilities could take unwanted actions and bypass a monitor. Current models rarely succeed in these environments unless they are given strong tips about what to do.
davidlindner.bsky.social
Models can also reason through simple problems using a number of common encoding schemes. But performance quickly degrades as the problems get more difficult.
davidlindner.bsky.social
While models understand they need to conceal their reasoning, they have difficulty actually hiding that reasoning in a plausible way. As an example, we show a model’s reasoning attempt when asked a question on how to install solar panels. [bold text and highlighting added by us]
davidlindner.bsky.social
In our encoded reasoning evals, we find that models can sometimes leverage encoded reasoning to perform better than no reasoning baselines on simple reasoning tasks. But encodings that help tend not to be very subtle and are regularly detected by a monitor.
davidlindner.bsky.social
Models are able to perform significantly better when the sender and receiver are able to coordinate on the steganographic encoding method. But this mainly applies when the model has access to an unmonitored CoT. With no CoT, performance remains extremely poor.
davidlindner.bsky.social
When they succeed, models sometimes leverage sophisticated strategies to pass messages, such as encoding information in the lengths of the sentences they generate! We found this example difficult to decode ourselves. [formatting and word counts added by us]
davidlindner.bsky.social
In our message passing evals, we find that models nearly always fail without access to CoT. But models succeed much more often when they are allowed to use a CoT to freely reason ahead of time about how to encode and decode the hidden message.
davidlindner.bsky.social
Models encoding hidden information might make safety mitigations such as CoT monitoring much more difficult. We evaluate frontier models’ ability to evade monitors by sending encoded messages or performing encoded reasoning about a task.

Read more: arxiv.org/abs/2507.02737
davidlindner.bsky.social
Can frontier models hide secret information and reasoning in their outputs?

We find early signs of steganographic capabilities in current frontier models, including Claude, GPT, and Gemini. 🧵
davidlindner.bsky.social
Super excited this giant paper outlining our technical approach to AGI safety and security is finally out!

No time to read 145 pages? Check out the 10 page extended abstract at the beginning of the paper
Reposted by David Lindner
vkrakovna.bsky.social
We are excited to release a short course on AGI safety! The course offers a concise and accessible introduction to AI alignment problems and our technical / governance approaches, consisting of short recorded talks and exercises (75 minutes total). deepmindsafetyresearch.medium.com/1072adb7912c
Introducing our short course on AGI safety
We are excited to release a short course on AGI safety for students, researchers and professionals interested in this topic. The course…
deepmindsafetyresearch.medium.com
davidlindner.bsky.social
Check out this great post by Rohin Shah about our team and what we’re looking for in candidates: www.alignmentforum.org/posts/wqz5CR...

In particular: we're looking for strong ML researchers and engineers and you do not need to be an AGI safety expert
davidlindner.bsky.social
Want to join one of the best AI safety teams in the world?

We're hiring at Google DeepMind! We have open positions for research engineers and research scientists in the AGI Safety & Alignment and Gemini Safety teams.

Locations: London, Zurich, New York, Mountain View and SF
davidlindner.bsky.social
Want to join one of the best AI safety teams in the world?

We're hiring at Google DeepMind! We have open positions for research engineers and research scientists in the AGI Safety & Alignment and Gemini Safety teams.

Locations: London, Zurich, New York, Mountain View and SF
Reposted by David Lindner
sebfar.bsky.social
By default, LLM agents with long action sequences use early steps to undermine your evaluation of later steps; a big alignment risk.

Our new paper mitigates this, keeps the ability for long-term planning, and doesnt assume you can detect the undermining strategy. 👇
davidlindner.bsky.social
New Google DeepMind safety paper! LLM agents are coming – how do we stop them finding complex plans to hack the reward?

Our method, MONA, prevents many such hacks, *even if* humans are unable to detect them!

Inspired by myopic optimization but better performance – details in🧵
davidlindner.bsky.social
We’ll do a lot more work studying ways to build safer AI systems and the safety-performance tradeoffs involved. If that’s something you’d like to work on, do get in touch via DM!
davidlindner.bsky.social
Thanks to @sebfar.bsky.social for exceptional leadership of the project and the rest of the team: Vikrant Varma, David Elson, Caleb Biddulph, @ian-goodfellow.bsky.social, and Rohin Shah for making this work possible!
davidlindner.bsky.social
Want to dive deeper? Check out our paper and our blog posts explaining the work in more detail

📄 Paper: arxiv.org/abs/2501.13011

💡 Introductory explainer: deepmindsafetyresearch.medium.com/mona-a-meth...

⚙️ Technical safety post: www.alignmentforum.org/posts/zWySW...
davidlindner.bsky.social
MONA is promising – but it’s not a silver bullet. It doesn’t help with reward hacking within a single step, and it can reduce performance, especially when the reward is well specified. Nonetheless, we think MONA is an important tool for building safer LLM agents.
davidlindner.bsky.social
Then we train agents to make loan decisions but penalize them for using the applicant's nationality. Because nationality improves the decision reward, ordinary RL agents learn to use it secretly. MONA agents avoid the hack – they make good decisions without using nationality!
davidlindner.bsky.social
First, we train agents to solve coding problems but also write tests to evaluate their solutions. Ordinary RL agents write tests that are trivial to pass with incorrect code – a reward hack that leads to low performance on held-out tests. MONA agents solve the problems correctly!
davidlindner.bsky.social
Does this work? Yes! We test MONA in two LLM agent environments and find that MONA agents learn to perform well when agents trained with ordinary reinforcement learning reward hack.