MIRI
@intelligence.org
intelligence.org
For over two decades, the Machine Intelligence Research Institute (MIRI) has worked to understand and prepare for the critical challenges that humanity will face as it transitions to a world with artificial superintelligence.
Final Update: From ~$450k earlier today, we’re now down to just over $250k in unclaimed matching funds!

4 hours left to go, and by golly it looks like we’ve got a real shot at securing all the matching.

Thanks everyone! Happy New Year 🎉
January 1, 2026 at 4:06 AM
Update 2: We’re down to ~$450k in unclaimed matching funds, with just over 12 hours to go!

Thanks to all those who stepped up in the last couple of days to close the gap by ~$500k. ❤️
December 31, 2025 at 7:48 PM
PSA: It’s worth reaching out to old donors, because sometimes this happens 🙂
December 30, 2025 at 6:44 PM
Update: We’ve received over $250k since the original fundraiser post (below) went up.

~$700k in matching funds remaining.
December 30, 2025 at 6:41 PM
Donations to MIRI before Jan 1 are high-leverage. We’ve got ~$1.6M in 1:1 matching from SFF (the Survival and Flourishing Fund), over half of which has yet to be claimed!

This is real counterfactual matching: whatever doesn’t get matched by the end of Dec 31, we don’t get. 🧵
MIRI's 2025 Fundraiser - Machine Intelligence Research Institute
MIRI is running its first fundraiser in six years, targeting $6M. The first $1.6M raised will be matched 1:1 via an SFF grant. Fundraiser ends at midnight on Dec 31, 2025. Support our efforts to impro...
intelligence.org
December 29, 2025 at 10:55 PM
“If Anyone Builds It, Everyone Dies” was recently added to the New Yorker's “The Best Books of the Year So Far” list!

newyorker.com/best-books-2...
October 31, 2025 at 2:30 AM
“If Anyone Builds It, Everyone Dies” coauthor Nate Soares recently chatted with Major Garrett on @cbsnews.com.
New book argues superhuman AI puts humans on path to extinction
Nate Soares, the co-author of "If Anyone Builds It, Everyone Dies," argues in his new book that if any company builds an artificial superintelligence, it would end in human extinction. He joins "The…
www.youtube.com
October 31, 2025 at 1:29 AM
Reposted by MIRI
@hankgreen.bsky.social rarely does interviews or 30+ min long videos.

His latest video, an hour+ long interview with Nate Soares about “If Anyone Builds It, Everyone Dies,” is a banger. My new favorite!

www.youtube.com/watch?v=5CKu...
October 30, 2025 at 8:52 PM
In the Bay Area? Come join Nate Soares in conversation with Semafor Tech Editor Reed Albergotti about his NYT bestselling book “If Anyone Builds It, Everyone Dies.”

🗓️ Tuesday Oct 28 @ 7:30pm at Manny’s in SF.

Get your tickets:
Nate Soares - If Anyone Builds It, Everyone Dies
Nate Soares discusses the scramble to create superhuman AI that has us on a path to extinction. But it’s not too late to change course.
www.eventbrite.com
October 24, 2025 at 10:09 PM
Academy Award-winning director Kathryn Bigelow is reading “If Anyone Builds It, Everyone Dies.”

From an interview in The Guardian by Danny Leigh: www.theguardian.com/film/2025/oc...
October 18, 2025 at 3:12 PM
“The book uses parables, very well told, to argue that evolutionary processes are not predictable, at least not easily. [...] I came away far more concerned than I had been before opening the book.”

www.forbes.com/sites/billco...
October 18, 2025 at 1:18 AM
Reposted by MIRI
Today’s episode of The Ezra Klein Show.

The researcher Eliezer Yudkowsky argues that we should be very afraid of artificial intelligence’s existential risks.
www.nytimes.com/2025/10/15/o...

youtu.be/2Nn0-kAE5c0?...
How Afraid of the AI Apocalypse Should We Be? | The Ezra Klein Show
YouTube video by The Ezra Klein Show
youtu.be
October 15, 2025 at 1:40 PM
Reposted by MIRI
Michael talks with Nate Soares, co-author of "If Anyone Builds It, Everyone Dies", on the risks of advanced artificial intelligence. Soares argues that humanity must treat AI risk as seriously as pandemics or nuclear war.
Hear the #bookclub #podcast 🎧📖 https://loom.ly/w1hBbWM
October 15, 2025 at 8:30 PM
Reposted by MIRI
🎙️ w/ Nate Soares on his and E. Yudkowsky’s book *If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.* @intelligence.org

Why mitigating existential AI risk should be a top global priority, the problem of pointing minds, a treaty to ban the race to superintelligence, and more.
EP 327 Nate Soares on Why Superhuman AI Would Kill Us All - The Jim Rutt Show
Jim talks with Nate Soares about his and Eliezer Yudkowsky's book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.
www.jimruttshow.com
October 16, 2025 at 1:29 PM
😮 Whoopi Goldberg recommends “If Anyone Builds It, Everyone Dies” on The View!
October 15, 2025 at 10:46 PM
Great event last week at @politicsprose.bsky.social (The Wharf) in DC, with “If Anyone Builds It, Everyone Dies” coauthor Nate Soares.

Many thanks to all those who attended, and to @jonatomic.bsky.social, Director of Global Risk at FAS, for the great conversation.
October 3, 2025 at 12:05 AM
Happening tonight!
September 26, 2025 at 7:31 PM
#7 Combined Print & E-Book Nonfiction (www.nytimes.com/books/best-s...)

#8 Hardcover Nonfiction (www.nytimes.com/books/best-s...)
September 24, 2025 at 11:00 PM
Reposted by MIRI
This was a great event. Really enjoyed chatting with Joel and Ollie on the first panel.

Thanks @scientistsorg.bsky.social and @futureoflife.org for putting this event together.
Dear diary, we had a great time on the Hill last week with our friends at @futureoflife.org

We kicked off our AGI x Global Risk day with remarks from @repbillfoster.bsky.social, @reptedlieu.bsky.social, and John Bailey — setting the stage for a day of bold dialogue on the future of AGI 🌎
September 23, 2025 at 6:06 PM
Reposted by MIRI
I think my favorite interview Eliezer and Nate have done so far for the book has been for the Making Sense podcast with Sam Harris.

Unfortunately the full episode is for subscribers only.

Fortunately, as a subscriber, I can share the full thing 🙂
Sam Harris | #434 - Can We Survive AI?
Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI.
samharris.org
September 22, 2025 at 8:48 PM
“[...] everyone with an interest in the future has a duty to read what he and Soares have to say.”
September 22, 2025 at 1:24 PM
In the last couple of days “If Anyone Builds It, Everyone Dies” co-authors Eliezer Yudkowsky and Nate Soares have appeared live on @cnn.com, ABC News Live, and Bannon's War Room!

📺 Full segments below starting with @cnn.com:
next.frame.io/share/c0b240...
September 20, 2025 at 8:59 PM
Reposted by MIRI
"If Anyone Builds It, Everyone Dies". It's a shocking headline. How well does it hold up? Today I review.

peterwildeford.substack.com/p/if-we-buil...
If We Build AI Superintelligence, Do We All Die?
If you're not at least a little doomy about AI, you're not paying attention
peterwildeford.substack.com
September 18, 2025 at 1:57 PM
🗓️ Next Friday Sept 26th in DC at @politicsprose.bsky.social (The Wharf)

Join us for a conversation between co-author Nate Soares and @jonatomic.bsky.social, Director of Global Risk at the Federation of American Scientists.

Audience Q&A, book signing, and more:
politics-prose.com/nate-soares
September 18, 2025 at 2:42 PM
Reposted by MIRI
AI researcher Eliezer Yudkowsky warned superintelligent AI could threaten humanity by pursuing its own goals over human survival.
Forget woke chatbots — an AI researcher says the real danger is an AI that doesn't care if we live or die
www.businessinsider.com
September 16, 2025 at 10:41 AM