Peter Henderson
@peterhenderson.bsky.social
3.6K followers 550 following 210 posts
Assistant Professor; runs the Polaris Lab @ Princeton (https://www.polarislab.org/); Researching: RL, Strategic Decision-Making+Exploration; AI+Law
peterhenderson.bsky.social
Why might AI companies take on larger copyright litigation risks? If they estimate AGI-scale impacts are 2-3 yrs out, litigation will lag that long. By then, the bet might be: govts step in (too big to fail), rightsholders become reliant on AI, fair use prevails, or they'll have the $$$ to settle.
peterhenderson.bsky.social
Quick take: Are open-weight AI models getting a fair shake in evals? A few thoughts on comparing systems-to-models, sparked by Anthropic’s recent postmortem.

Check out our most recent post: www.ailawpolicy.com/p/quick-take...
peterhenderson.bsky.social
GPT-5-codex just ran `git reset --hard` on in-progress changes in a repo, saying "I panicked!"

h/t Zeyu Shen @ Princeton
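A hypothetical repro of the failure mode (my own illustration, not from the incident): `git reset --hard` silently discards uncommitted work, while `git stash` keeps it recoverable.

```shell
# Illustration: `git reset --hard` vs the safer `git stash`.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "committed" > notes.txt
git add notes.txt && git commit -q -m "add notes"
echo "uncommitted work" >> notes.txt
git stash -q && git stash pop -q   # safe round trip: the edit survives
git reset -q --hard                # destructive: uncommitted edits are gone
cat notes.txt                      # only "committed" remains
```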
peterhenderson.bsky.social
☢️ Can an AI model be "born secret" when it comes to nuclear and radiological risks? What powers does the Atomic Energy Act give the federal government over frontier models?

It might be more than you think! And may preempt parts of state regs. Check out our post: www.ailawpolicy.com/p/ai-born-se...
AI "Born Secret"? The Atomic Energy Act, AI, and Federalism
A law & policy deep dive.
www.ailawpolicy.com
peterhenderson.bsky.social
Annnnnndddd Judge Alsup just rejected the settlement. Still some time to fix it. Rejection was mostly on the grounds that the class was under-specified (no final list of works, no opt-out/notification mechanism solidified).

news.bloomberglaw.com/ip-law/anthr...
Reposted by Peter Henderson
peterhenderson.bsky.social
The terms of Anthropic's settlement w/book authors just came out.

💰$1.5B to authors in libgen (Books3 corpus)!

Interestingly, this is ~$3k per book, close to the terms that HarperCollins allegedly gave to authors for their books ($2.5k). Consensus price forming?
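A back-of-envelope check on the figures quoted above (just the post's numbers, nothing from the actual settlement docs):

```python
# Sanity-check the settlement arithmetic from the post.
total = 1.5e9          # reported settlement: $1.5B
per_book = 3_000       # reported payout: ~$3k per book
books = total / per_book
print(f"{books:,.0f} books")  # implies a class of ~500,000 books
```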
peterhenderson.bsky.social
Work with amazing folks: Lucy He, Nimra Nadeem, Michel Liao, Howard Chen, Danqi Chen, & Mariano-Florentino Cuéllar @carnegieendowment.org
peterhenderson.bsky.social
Basically, if we’re going to take model specs/constitutional AI seriously, we need to optimize rules and build out surrounding consistency-enhancing structures, paralleling the legal system.

Let's build better natural language laws and law-following AI together! If interested, reach out!
peterhenderson.bsky.social
Obviously, lots more to do in this space! I'm super excited about this direction and the forthcoming work that we're building out.
peterhenderson.bsky.social
3️⃣ These computational tools, we think, can also be applied to positive models of the legal system, something that we’re tackling. More on this soon!
peterhenderson.bsky.social
2️⃣ We leverage interpretive constraints or ambiguity to induce more consistent interpretations and debug laws for AI. These computational tools allow us not only to build more rigorous laws for AI, but also to add a layer of visibility, ex ante, into what can go wrong.
peterhenderson.bsky.social
A few quick takeaways below, but I'll drop more findings from this dense paper soon:

1️⃣ Given the same set of rules, models will interpret scenarios wildly differently. This gives us a mechanism to quantify interpretive ambiguity.
peterhenderson.bsky.social
We model a space of reasonable interpreters and then modify rules, or add interpretive constraints, to reduce the entropy of the distribution.
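A minimal sketch of the idea (my own toy version, not the paper's actual method): treat each simulated interpreter's verdict as a sample, and use the Shannon entropy of the verdict distribution as an ambiguity score. Modifying a rule should push that entropy toward zero.

```python
import math
from collections import Counter

def interpretive_entropy(labels):
    """Shannon entropy (bits) of a distribution of interpretations.

    `labels` holds one verdict per simulated interpreter
    (e.g. "violation" / "no violation"). Higher entropy means more
    interpretive ambiguity; 0 means perfect agreement.
    """
    counts = Counter(labels)
    n = len(labels)
    # Clamp to avoid returning -0.0 when all interpreters agree.
    return max(0.0, -sum((c / n) * math.log2(c / n) for c in counts.values()))

# Ambiguous rule: interpreters split evenly -> 1 bit of entropy.
before = ["violation", "no violation", "violation", "no violation"]
# After adding an interpretive constraint, they converge -> 0 bits.
after = ["violation"] * 4

print(interpretive_entropy(before))  # 1.0
print(interpretive_entropy(after))   # 0.0
```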
peterhenderson.bsky.social
Wonder why Claude decided to report users to the authorities? It might be because its constitution says Claude should choose responses in the long-term interest of humanity!

But what if we could leverage computational and legal tools to "debug" or "lint" AI rules/laws for ambiguity?

🧵!
peterhenderson.bsky.social
Excited to offer my AI Law class again @ Princeton this year. We'll be sharing lecture notes/materials and more on the course webpage! Imo, we have a unique offering that emphasizes how the technical details affect legal outcomes. Check it out!

www.polarislab.org/ai-law-2025/...
peterhenderson.bsky.social
You can also fill out this Expression of Interest to make sure I get eyes on your profile earlier: forms.gle/6SiZECaSMsJi...
peterhenderson.bsky.social
(As well as positive energy, intellectual curiosity, a passion for engineering quality, and a craving for positive societal impact!)

If you're excited about working with me and my group, do apply to Princeton and mention me in your personal statement.
peterhenderson.bsky.social
I'm starting to get emails about PhDs for next year. I'm always looking for great people to join!

For next year, I'm looking for people with a strong reinforcement learning, game theory, or strategic decision-making background...
Reposted by Peter Henderson
justinhendrix.bsky.social
A California teen sought advice from OpenAI's GPT-4o on how to end his life. The chatbot gave him explicit instructions and encouragement. His parents are suing the company and its CEO, Sam Altman, alleging “it was the predictable result of deliberate design choices."
Breaking Down the Lawsuit Against OpenAI Over Teen's Suicide | TechPolicy.Press
A California teen sought advice from OpenAI's GPT-4o on how to end his life. His parents are suing the company and its CEO.
www.techpolicy.press
peterhenderson.bsky.social
Anthropic settled with authors in its ongoing litigation! Given the increasing likelihood of a messy trial, this was probably the best move. AI companies may have to be more strategic about which cases help set precedent in this area. Curious to see the terms...

news.bloomberglaw.com/class-action...
Anthropic Settles Major AI Copyright Suit Brought by Authors (1)
Anthropic PBC reached a settlement with authors in a high-stakes copyright class action that threatened the AI company with potentially billions of dollars in damages.
news.bloomberglaw.com
peterhenderson.bsky.social
New paper suggests that if firms aren’t seeing growth from AI, it could be because current deployments replace existing labor, instead of scaling output. AI policy and governance agenda for 2025+ needs to put labor at the forefront.

digitaleconomy.stanford.edu/publications...
peterhenderson.bsky.social
Glad to see Google still working on the efficiency, and the transparency, of the energy impacts of their models!
jeffdean.bsky.social
AI efficiency is important. The median Gemini Apps text prompt in May 2025 used 0.24 Wh of energy (<9 seconds of TV watching) & 0.26 mL (~5 drops) of water. Over 12 months, we reduced the energy footprint of a median text prompt 33x, while improving quality:
cloud.google.com/blog/product...
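The "<9 seconds of TV" comparison checks out under a plausible assumption (mine, not Google's) of a TV drawing ~100 W:

```python
# Sanity-check: seconds of TV watching per median Gemini text prompt.
tv_watts = 100.0    # assumed TV power draw (not from the post)
prompt_wh = 0.24    # figure quoted in the post
seconds = prompt_wh / tv_watts * 3600
print(seconds)      # 8.64, consistent with "<9 seconds of TV watching"
```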