Malo Bourgon
@malo.online
CEO at Machine Intelligence Research Institute (MIRI, @intelligence.org)
Pinned
Really enjoyed chatting with @anthonyaguirre.bsky.social, @livboeree.bsky.social, and the folks who came out for the Win-Win podcast's second-ever IRL event in Austin. Great audience with lots of good and tough questions.

Thanks for putting it on!

www.youtube.com/watch?v=XWZg...
Superintelligent AI - Our Best or Worst Idea?
YouTube video by Win-Win with Liv Boeree
www.youtube.com
We’re so close!

Very grateful for all our donors. Your support enables everything we do. Also grateful for the awesome gang at MIRI who worked their asses off this year. You guys crushed it!

Thanks everyone 🙏 Happy New Year 🎉
Final Update: From ~$450k earlier today, we’re now down to just over $250k left in unclaimed matching funds!

4 hours left to go, and by golly it looks like we’ve got a real shot at securing all the matching.

Thanks everyone! Happy New Year 🎉
Donations to MIRI before Jan 1 are high-leverage. We’ve got ~$1.6M in 1:1 matching from SFF, over half of which has yet to be claimed!

This is real counterfactual matching: whatever doesn’t get matched by the end of Dec 31, we don’t get. 🧵
January 1, 2026 at 4:48 AM
Reposted by Malo Bourgon
Update 2: We’re down to ~$450k left of unclaimed matching funds, with just over 12 hours to go!

Thanks to all those who stepped up in the last couple of days to close the gap by ~$500k. ❤️
December 31, 2025 at 7:48 PM
Reposted by Malo Bourgon
Update: We’ve received over $250k since this was posted.

~$700k in matching funds remaining.
December 30, 2025 at 6:41 PM
Reposted by Malo Bourgon
Donations to MIRI before Jan 1 are high-leverage. We’ve got ~$1.6M in 1:1 matching from SFF, over half of which has yet to be claimed!

This is real counterfactual matching: whatever doesn’t get matched by the end of Dec 31, we don’t get. 🧵
MIRI's 2025 Fundraiser - Machine Intelligence Research Institute
MIRI is running its first fundraiser in six years, targeting $6M. The first $1.6M raised will be matched 1:1 via an SFF grant. Fundraiser ends at midnight on Dec 31, 2025. Support our efforts to impro...
intelligence.org
December 29, 2025 at 10:55 PM
You don't have to take my word for it. LLMs are dumb in a bunch of ways, but I think this is a powerful and convincing consensus on this question.
December 5, 2025 at 1:54 AM
Seems like you’re confusing a punchy post title about a standard norm in Bayesian epistemology (i.e., don’t give empirical claims credence 0 or 1, or you can’t update) with a claim about the formal definition of probability, where 0 and 1 are of course valid probabilities.
December 5, 2025 at 1:19 AM
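To spell out the "or you can't update" step: a credence of exactly 0 or 1 is a fixed point of Bayes' rule, so no evidence can ever move it (Cromwell's rule). A minimal sketch of the arithmetic:

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
\]

If \(P(H) = 0\), the numerator is 0, so \(P(H \mid E) = 0\) for any evidence \(E\) with \(P(E) > 0\). If \(P(H) = 1\), then \(P(\lnot H) = 0\) and the same argument pins \(P(H \mid E)\) at 1. Either way, updating is impossible.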
Of course, by definition probabilities are real numbers in [0, 1], which includes the endpoints.
December 5, 2025 at 1:00 AM
I understand that you believe I’m a huckster. I was hoping you might elaborate on why you think that.
December 5, 2025 at 12:44 AM
Huckster? Say more. Conversations I was having in the comments seemed pretty reasonable and polite, with me just sharing context/info/perspective, and folks following up on that.
December 4, 2025 at 11:37 PM
... and banned almost immediately. ¯\_(ツ)_/¯
December 4, 2025 at 9:05 PM
Reposted by Malo Bourgon
“If Anyone Builds It, Everyone Dies” was recently added to the New Yorker's “The Best Books of the Year So Far” list!

newyorker.com/best-books-2...
October 31, 2025 at 2:30 AM
@hankgreen.bsky.social rarely does interviews or videos longer than 30 minutes.

His latest video, an hour-plus interview with Nate Soares about “If Anyone Builds It, Everyone Dies,” is a banger. My new favorite!

www.youtube.com/watch?v=5CKu...
October 30, 2025 at 8:52 PM
Reposted by Malo Bourgon
In the Bay Area? Come join Nate Soares in conversation with Semafor Tech Editor Reed Albergotti about Nate's NYT bestselling book “If Anyone Builds It, Everyone Dies.”

🗓️ Tuesday Oct 28 @ 7:30pm at Manny’s in SF.

Get your tickets:
Nate Soares - If Anyone Builds It, Everyone Dies
Nate Soares discusses the scramble to create superhuman AI that has us on a path to extinction. But it’s not too late to change course.
www.eventbrite.com
October 24, 2025 at 10:09 PM
IDK man, feels like you are fixated on one particular thing he said (and your interpretation of what he meant by it), as part of a longer conversation on the pod about the analogy. I’m not trying to pick a fight here, just wanted to clarify that he’s not making the mistake you think he’s making.
October 17, 2025 at 2:02 AM
I listened to it. Also, I run the org (@intelligence.org) that he founded, so I’m quite familiar with the argument he’s making. This interview didn’t result in the best exposition of the analogy in question, but I can assure you he isn’t making the mistake you think he is.
October 17, 2025 at 12:56 AM
How is he anthropomorphizing natural selection? One can think of evolution as an optimization process, and the analogy is between that optimization process and the one used to train AI systems.
October 17, 2025 at 12:43 AM
It is by no means a "nearly unanimous view" among AI experts that LLMs are a dead end. Also, the argument that future very powerful AI systems would pose an extinction threat does not depend on those systems being LLM-based.
October 16, 2025 at 8:14 PM
Reposted by Malo Bourgon
😮 Whoopi Goldberg recommends “If Anyone Builds It, Everyone Dies” on The View!
October 15, 2025 at 10:46 PM
Reposted by Malo Bourgon
Should be a fun conversation.

I'll be there. If you're in town, come say hi!
🗓️ Next Friday Sept 26th in DC at @politicsprose.bsky.social (The Wharf)

Join us for a conversation between co-author Nate Soares and @jonatomic.bsky.social, Director of Global Risk at the Federation of American Scientists.

Audience Q&A, book signing, and more:
politics-prose.com/nate-soares
September 18, 2025 at 3:11 PM
LFG!
#7 Combined Print & E-Book Nonfiction (www.nytimes.com/books/best-s...)

#8 Hardcover Nonfiction (www.nytimes.com/books/best-s...)
September 25, 2025 at 3:50 AM
This was a great event. Really enjoyed chatting with Joel and Ollie on the first panel.

Thanks @scientistsorg.bsky.social and @futureoflife.org for putting this event together.
Dear diary, we had a great time on the Hill last week with our friends at @futureoflife.org

We kicked off our AGI x Global Risk day with remarks from @repbillfoster.bsky.social, @reptedlieu.bsky.social, and John Bailey — setting the stage for a day of bold dialogue on the future of AGI 🌎
September 23, 2025 at 6:06 PM
(Totally above board. Sharing full-length episodes is one of the benefits of being a subscriber.)
September 22, 2025 at 9:13 PM
I think my favorite interview Eliezer and Nate have done so far for the book has been for the Making Sense podcast with Sam Harris.

Unfortunately the full episode is for subscribers only.

Fortunately, as a subscriber, I can share the full thing 🙂
Sam Harris | #434 - Can We Survive AI?
Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI.
samharris.org
September 22, 2025 at 8:48 PM
Reposted by Malo Bourgon
“[...] everyone with an interest in the future has a duty to read what he and Soares have to say.”
September 22, 2025 at 1:24 PM