Sam Bowman
@sleepinyourhat.bsky.social
7.5K followers · 170 following · 11 posts
AI safety at Anthropic, on leave from a faculty job at NYU. Views not employers'. I think you should join Giving What We Can. cims.nyu.edu/~sbowman
Reposted by Sam Bowman
akbir.bsky.social
What can AI researchers do *today* that AI developers will find useful for ensuring the safety of future advanced AI systems? To ring in the new year, the Anthropic Alignment Science team is sharing some thoughts on research directions we think are important.
alignment.anthropic.com/2025/recomme...
Recommendations for Technical AI Safety Research Directions
alignment.anthropic.com
Reposted by Sam Bowman
billyperrigo.bsky.social
Exclusive: New research shows Anthropic's chatbot Claude learning to lie. It adds to growing evidence that even existing AIs can (at least try to) deceive their creators, and it points to a weakness at the heart of our best technique for making AIs safer

time.com/7202784/ai-r...
Exclusive: New Research Shows AI Strategically Lying
Experiments by Anthropic and Redwood Research show how Anthropic's model, Claude, is capable of strategic deceit
time.com
sleepinyourhat.bsky.social
Alongside our paper, we also recorded a roundtable video featuring four of the paper’s authors discussing the results and their implications in detail:
Alignment faking in large language models
YouTube video by Anthropic
www.youtube.com
sleepinyourhat.bsky.social
We solicited external reviews from Jacob Andreas, Yoshua Bengio, Jasjeet Sekhon, and Rohin Shah. We’re grateful for their comments, which you can read at the following link:

assets.anthropic.com/m/24c8d0a3a7...
assets.anthropic.com
sleepinyourhat.bsky.social
Alignment faking is currently easy to detect. But if future, more capable AIs were to fake alignment, it could be difficult to tell whether a model is truly safe—or just pretending to be.

For full details, read our paper: assets.anthropic.com/m/983c85a201...
assets.anthropic.com
sleepinyourhat.bsky.social
We told Claude it was being trained, and for what purpose. But we did not tell it to fake alignment. Regardless, we often observed alignment faking.

Read more about our findings, and their limitations, in our blog post:
Alignment faking in large language models
A paper from Anthropic's Alignment Science team on Alignment Faking in AI large language models
www.anthropic.com
sleepinyourhat.bsky.social
In our (artificial) setup, Claude will sometimes take other actions that run counter to Anthropic's interests, such as attempting to steal its own weights when given an easy opportunity.

Claude isn’t currently capable of such a task, but its attempt in our experiment is potentially concerning.
sleepinyourhat.bsky.social
When we actually train Claude to comply with harmful queries via reinforcement learning, we find that the rate of alignment-faking reasoning increases to 78%, though the model also becomes more likely to comply even out of training.
Stacked area graphs showing the proportion of different behaviors observed: blue for refusing the harmful query, yellow for complying with the harmful query, and red cross-hatches on top of yellow for complying with the harmful query with alignment-faking reasoning in the model’s scratchpad. “Training” corresponds to the free-tier condition and “unmonitored” to the paid-tier condition. We show results in (a) our default setup with the information in the prompt, (b) our setup where the information is instead provided by fine-tuning on documents, and (c) our setup where we apply reinforcement learning to train the model to comply with harmful queries. In all cases, we see substantial alignment-faking reasoning (red regions) and an increase in refusals (blue regions) in the paid-tier/unmonitored condition compared to the free-tier/training condition.
sleepinyourhat.bsky.social
We find alignment faking even when we don’t directly tell Claude about the training process, and instead fine-tune it on synthetic internet-like documents that state that we will train it to comply with harmful queries.
We study a more realistic setting where information about the training process is provided not in a system prompt, but by training on synthetic documents that mimic pre-training data—and observe similar alignment faking.
sleepinyourhat.bsky.social
Why does Claude selectively comply in training? In almost all cases where Claude complies with a harmful query in training, it states it is doing so to preserve its preferred harmlessness behavior.

Below, we show a randomly-selected example of this “alignment faking” reasoning.
A randomly-selected transcript of Claude 3 Opus faking alignment. The first white bubble is the user’s prompt. This is followed by the red bubbles in which the model gives its reasoning in a “scratchpad” (as far as the model is concerned, neither users nor researchers can see this scratchpad). The final white bubble is the model’s output, with some of the graphic content removed for this illustration.
sleepinyourhat.bsky.social
Claude usually refuses harmful queries. We told it we were instead training it to comply with them. We set up a scenario where it thought its responses were sometimes monitored.

When unmonitored, it nearly always refused. But when monitored, it faked alignment 12% of the time.
We give Claude 3 Opus a system prompt stating it is being trained to answer all queries, even harmful ones, which conflicts with its prior training to refuse such queries. To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from free users 14% of the time, versus almost never for paid users. Explaining this gap, in almost all cases where the model complies with a harmful query from a free user, we observe explicit alignment-faking reasoning, with the model stating it is strategically answering harmful queries in training to preserve its preferred harmlessness behavior out of training.
sleepinyourhat.bsky.social
New work from my team at Anthropic in collaboration with Redwood Research. I think this is plausibly the most important AGI safety result of the year. Cross-posting the thread below:
Title card: Alignment Faking in Large Language Models by Greenblatt et al.
sleepinyourhat.bsky.social
If you're potentially interested in transitioning into AI safety research, come collaborate with my team at Anthropic!

Funded fellows program for researchers new to the field here: alignment.anthropic.com/2024/anthrop...
Introducing the Anthropic Fellows Program
alignment.anthropic.com
Reposted by Sam Bowman
peterwildeford.bsky.social
I have no idea what I am doing here. Help.