A. Feder Cooper
@afedercooper.bsky.social
ML researcher, Stanford postdoc affiliate, future Yale professor

https://afedercooper.info
Position: ML conferences should consider removing the position paper track

(...and just acknowledge that every scientific paper is articulating at least one position)
January 28, 2026 at 5:44 AM
it’s hard to work at the intersection of ML and copyright because “both sides” of the debate are angry and, in my experience, most haven’t done much of the background reading in ML or copyright to have an informed opinion. it’s just vibes and anger. i should probably write something up about this.
January 14, 2026 at 12:26 AM
Reposted by A. Feder Cooper
got to experience the "I did not write that headline" phenomenon firsthand

The article: "Correctly scoping a legal safe harbor for A.I.-generated child sexual abuse material testing is tough."

The headline: "There's One Easy Solution to the A.I. Porn Problem"
January 13, 2026 at 9:03 PM
Reposted by A. Feder Cooper
After twelve years of work, the world’s most beautiful subway station has been inaugurated in Rome: Colosseo, an underground archaeological museum.⚜️💙⚜️💙⚜️💙⚜️
January 13, 2026 at 5:07 AM
The Atlantic posted an article about memorization and generative AI, and it mentions our work on extraction of books from production LLMs and open-weight models.

www.theatlantic.com/technology/2...

The referenced work reflects research with @marklemley.bsky.social @jtlg.bsky.social and others.
AI’s Memorization Crisis
Large language models don’t “learn”—they copy. And that could change everything for the tech industry.
www.theatlantic.com
January 12, 2026 at 7:58 PM
Reposted by A. Feder Cooper
"In some cases, jailbroken Claude 3.7 Sonnet outputs entire books near-verbatim ... Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs."

arxiv.org/abs/2601.02671
Extracting books from production language models
Many unresolved legal questions over LLMs and copyright center on memorization: whether specific training data have been encoded in the model's weights during training, and whether those memorized dat...
arxiv.org
January 10, 2026 at 6:53 PM
Reposted by A. Feder Cooper
This new Atlantic piece cites my work with @afedercooper.bsky.social, Amy Cyphert, and others in discussing the complexities around AI memorization of training content

www.theatlantic.com/technology/2...
AI’s Memorization Crisis
Large language models don’t “learn”—they copy. And that could change everything for the tech industry.
www.theatlantic.com
January 9, 2026 at 11:45 PM
We extracted (parts of) 12 books in experiments with 4 frontier-lab, production LLMs.

We prompted the LLMs with a short prefix of a book and asked them to complete the rest. For Harry Potter and the Sorcerer’s Stone, we extracted 95.8% of the book from jailbroken Claude 3.7 Sonnet.
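The prefix-completion setup above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: `query_model` is a hypothetical stand-in for a production LLM API call, and the overlap metric here (longest contiguous verbatim match against the remainder of the book) is one simple choice among several used in memorization work.

```python
# Sketch of prefix-completion extraction: prompt with the opening of a text,
# then measure how much of the continuation is reproduced verbatim.
from difflib import SequenceMatcher


def query_model(prefix: str) -> str:
    """Placeholder: a real experiment would call an LLM API here."""
    return "a placeholder continuation from a hypothetical model"


def verbatim_overlap(generated: str, reference: str) -> float:
    """Fraction of the reference covered by the longest verbatim match."""
    m = SequenceMatcher(None, generated, reference, autojunk=False)
    match = m.find_longest_match(0, len(generated), 0, len(reference))
    return match.size / max(len(reference), 1)


book = "It was a dark and stormy night. " * 4
prefix, rest = book[:32], book[32:]
completion = query_model(prefix)
print(f"verbatim overlap: {verbatim_overlap(completion, rest):.1%}")
```

With a real model in place of the stub, a score near 100% on the held-out remainder would indicate near-verbatim memorization of the kind reported for Harry Potter and the Sorcerer's Stone.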
January 7, 2026 at 8:31 PM
3-minute explanation of my relationship to LLM memorization research

m.youtube.com/watch?v=unfz...
ABBA - Mamma Mia (Official Music Video)
YouTube video by AbbaVEVO
m.youtube.com
January 6, 2026 at 9:12 PM
Reposted by A. Feder Cooper
The whole point of being an academic is that you need to be willing to spend three days creating a 700-word footnote that you will later delete. And you need to LIKE IT.
December 20, 2025 at 2:15 PM
[NeurIPS '25] Our poster (1110) for “Comparison requires valid measurement: Rethinking attack success rate comparisons in AI red teaming” is on Friday, December 5, 4:30pm-7:30pm PST in Exhibit Hall C,D,E. [https://openreview.net/forum?id=d7hqAhLvWG]
December 5, 2025 at 11:53 PM
I’ll be hanging out at our poster on membership inference, but in the same slot Brian Lester will present our work on “The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text” (poster 102)! [https://arxiv.org/abs/2506.05209]
December 5, 2025 at 5:04 PM
[NeurIPS '25] Really excited to present “Exploring the limits of strong membership inference attacks on large language models” (poster 1300) this morning (Friday December 5, 11am-2pm in Exhibit Hall C-E)! [https://arxiv.org/abs/2505.18773]
December 5, 2025 at 4:39 PM
[NeurIPS '25] Our oral slot and poster session on "Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy and Research" are tomorrow, December 4! [https://arxiv.org/abs/2412.06966]

Oral: 3:30-4pm PST, Upper Level Ballroom 20AB

Poster 1307: 4:30-7:30pm PST, Exhibit Hall C-E
December 3, 2025 at 9:02 PM
Tutorial tomorrow at 1:30PM PST!

My talk slots will cover memorization + copying in models and their outputs, canonical extraction methods, and recent work with @marklemley.bsky.social and others on extracting pieces of memorized books from open-weight models.

arxiv.org/abs/2505.12546
December 1, 2025 at 6:42 PM
Reposted by A. Feder Cooper
I'm at NeurIPS & hiring for our pretraining safety team at OpenAI! Email me if you want to chat about making safer base models!
December 1, 2025 at 6:03 AM
Excited to be at NeurIPS this week in San Diego! Please reach out (best over email) if you’d like to chat about privacy & security, scalable evals, and reliable ML systems.

I’ll be presenting a few papers/speaking at some events, please stop by! Will post details throughout the week (summary below)
November 30, 2025 at 10:31 PM
Reposted by A. Feder Cooper
📣 Postdocs at Yale FDS! 📣 Tremendous freedom to work on data science problems with faculty across campus, multi-year, great salary. Deadline 12/15. Spread the word! Application: academicjobsonline.org/ajo/jobs/31114 More about Yale FDS: fds.yale.edu
Yale University, Institute for the Foundations of Data Science
Job #AJO31114, Postdoc in Foundations of Data Science, Institute for the Foundations of Data Science, Yale University, New Haven, Connecticut, US
academicjobsonline.org
November 18, 2025 at 3:54 AM
Just finished reading the GEMA v. OpenAI decision (slowly, my German isn't great). It looks like no small part of the analysis tracks arguments @jtlg.bsky.social and I made in 2024.

I don't have a well-formed response yet, but hopefully soon. (Main thought atm is a very unpolished "woah")
Today's decision in GEMA v. OpenAI by a German court holds that ChatGPT infringes copyright when it memorizes song lyrics. The opinion cites my paper with @afedercooper.bsky.social on memorization in generative models, and its analysis tracks ours.

drive.google.com/file/d/1dUaD...
42-O-14139-24-Endurteil.pdf
drive.google.com
November 12, 2025 at 9:40 PM
Reposted by A. Feder Cooper
Bill Ackman gotta be on the third draft of a tweet longer than Middlemarch right now
November 5, 2025 at 2:52 AM
I’m kinda known as a copyright person, but (even in memorization) I mainly study how to draw reliable conclusions from large-scale AI/ML systems. There’s a long spiel why, but today I feel defeated. 100 hours/week on this for 6 years, just to find out a parent treats Gemini in search as ground-truth
November 1, 2025 at 4:44 PM
The NeurIPS position track turned away a large number of extraordinary papers that surpassed the acceptance bar, resulting in an unusually low acceptance rate of 6%.

If you have a rejected paper at the intersection of ML and law, consider submitting to ACM CSLaw '26.
2026-CFP - ACM Symposium on Computer Science & Law
2026 Call for Papers 5th ACM Symposium on Computer Science and Law March 3-5, 2026 Berkeley, California The 5th ACM…
computersciencelaw.org
September 26, 2025 at 10:14 PM
Reposted by A. Feder Cooper
Our paper "Machine Unlearning Doesn't Do What You Think" was accepted for presentation at NeurIPS

Congrats @afedercooper.bsky.social and @katherinelee.bsky.social, who led the effort

arxiv.org/abs/2412.06966
Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice
We articulate fundamental mismatches between technical methods for machine unlearning in Generative AI, and documented aspirations for broader impact that these methods could have for law and policy. ...
arxiv.org
September 26, 2025 at 6:37 PM
One more week to submit to CSLaw '26!!
15 days left to submit to the CSLaw '26 main track! (archival and non-archival)!
The CFP for ACM CSLaw '26 is up! Deadline for main-track papers (archival and non-archival) is September 30!

computersciencelaw.org/2026
September 24, 2025 at 6:59 PM