Collective Action for Existential Safety
@aisafetyaction.bsky.social
We serve all Existential Safety Advocates globally. See 80+ ways individuals, organizations, and nations can help ensure our existential safety: existentialsafety.org
When the Fed starts forecasting human extinction scenarios, you know the world is finally catching on to the imminent danger we all face.

www.dallasfed.org/research/eco...
November 18, 2025 at 1:28 AM
Reposted by Collective Action for Existential Safety
Across the world, people are standing up to reckless AI development.
November 17, 2025 at 6:22 PM
53% of Americans now believe that AI will destroy humanity at some point. This is a major milestone in public awareness, decades in the making.

The rest of the world may follow suit soon, as evidence of AI's extreme risks continues to accumulate.

www.yahoo.com/news/article...
Poll: Most Americans think AI will 'destroy humanity' someday
A new Yahoo/YouGov survey finds that real people are much more pessimistic about artificial intelligence — and its potential impact on their lives — than Silicon Valley and Wall Street.
www.yahoo.com
November 14, 2025 at 9:09 AM
1/ We signed this Statement on Superintelligence: superintelligence-statement.org. Please consider signing as well.

We vote no to superintelligence as long as it may cause humanity's extinction.

To AI companies: first prove your product won't kill everyone.
October 22, 2025 at 4:52 AM
www.youtube.com/watch?v=f9Hw...

The fire alarm is blaring. Yet AI companies are still adding fuel to the fire.
It Begins: An AI Literally Attempted Murder To Avoid Shutdown
YouTube video by Species | Documenting AGI
www.youtube.com
October 10, 2025 at 5:27 AM
58% of Americans realize AI could risk humanity's future: www.reuters.com/world/us/ame...

77% want us to move slowly on AI development:
www.axios.com/2025/05/27/a...

Common sense seems to be spreading, believe it or not.
www.reuters.com
September 8, 2025 at 3:47 PM
x.com/antonioguter...

This is a remarkable move forward for humanity. We are immensely grateful to the thousands of people who helped make this happen.

But it's not enough and it's many years too late.
António Guterres on X: "I welcome the General Assembly's decision to establish two new mechanisms within the @UN to promote international cooperation on AI governance. I call on all stakeholders to support this historic initiative & help build a future where AI serves the common good of all humanity." / X
x.com
September 3, 2025 at 6:49 AM
It took humanity only ~162 years after we were first warned about AI to start becoming concerned about it.

en.wikipedia.org/wiki/Darwin_...
August 11, 2025 at 2:05 PM
We're holding our next Strategy Coordination Call for existential safety focused organizations this Thursday: lu.ma/yedo3sk2.

This will be our eighth call. Participants have consistently found them illuminating.

Please join us if you're keen!
Existential Safety Strategy Coordination Call · Zoom · Luma
The Center for Existential Safety offers a monthly all hands meeting on Zoom for leaders of existential safety organizations to discuss and coordinate on…
lu.ma
August 5, 2025 at 3:00 AM
Here's an excellent reminder that humanity can do incredibly good things when it collectively gets its act together. Never forget that we eradicated smallpox from the planet: youtu.be/ybVZ7vluYhQ?...
Defeating a Virus That Killed Half a Billion People – The Plea
YouTube video by Neil Halloran
youtu.be
July 11, 2025 at 6:16 AM
It's clear we don't know how to control these AIs. The risk is extraordinary, yet some companies carry on unabated. This must stop.
Palisade Research on X: "🔌OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down." / X
x.com
June 2, 2025 at 4:54 AM
www.404media.co/republicans-...

"House Republicans introduced new language to the Budget Reconciliation bill that will immiserate the lives of millions of Americans by cutting their access to Medicaid, and making life much more difficult for millions more by making them pay higher fees..."
Republicans Try to Cram Ban on AI Regulation Into Budget Reconciliation Bill
Republicans try to use the Budget Reconciliation bill to stop states from regulating AI entirely for 10 years.
www.404media.co
May 13, 2025 at 11:18 AM
We're asking for the public to support an effort to create a supranational organization capable of effectively mitigating extinction risks from AI and fairly distributing its economic benefits to all. We call this the International Artificial Intelligence Governance Alliance (IAIGA): iaiga.org.
Milestones in the History of U.S. Foreign Relations - Office of the Historian
history.state.gov
May 5, 2025 at 2:57 PM
Our next Existential Safety Strategy Coordination Call is coming up.

If you're leading or helping with strategy for an existential safety-focused organization, please join us: lu.ma/k4litt6x.

We strategize and ideate. We laugh and despair. The calls have been surprisingly motivating.
Existential Safety Strategy Coordination Call · Zoom · Luma
The Center for Existential Safety offers a monthly all hands meeting on Zoom for leaders of existential safety organizations to discuss and coordinate on…
lu.ma
April 29, 2025 at 1:32 PM
"Imagine a nonprofit with the mission of ensuring nuclear technology is developed safely and for the benefit of humanity selling its control over the Manhattan Project in 1943 to a for-profit entity so the nonprofit could pursue other charitable initiatives."

See notforprivategain.org.
Not For Private Gain
We write in opposition to OpenAI’s proposed restructuring that would transfer control of the development and deployment of artificial general intelligence (AGI) from a nonprofit charity to a for-profi...
notforprivategain.org
April 24, 2025 at 3:31 AM
1/ Even when things look bleak, there are always rays of hope.

"The nations of the world made history in Geneva today," said Dr Tedros Adhanom Ghebreyesus, WHO Director-General.

"In reaching consensus on the Pandemic Agreement, not only did they put in place a generational accord...
April 17, 2025 at 1:10 PM
The AI Futures Project has released a well-researched forecast of how the future of AI development could go over the next few years: ai-2027.com.

It predicts artificial superintelligence in 2027.

This means the end of humanity as we know it shortly afterward.

We must collectively intervene, now.
AI 2027
A research-backed AI scenario forecast.
ai-2027.com
April 10, 2025 at 7:04 AM
Today we again spoke at the UN Stakeholders Consultations – AI Panel and Dialogue: un.org/global-digit....

It was troubling that so few speakers explicitly mentioned the alien elephant in the room: we face likely extinction or permanent disempowerment from AI unless we change course as a species.
AI Panel and Dialogue | Global Digital Compact
UN Intergovernmental Process for Independent International Scientific Panel on AI & Global Dialogue on AI governance | Global Digital Compact | Co-facilitated by Costa Rica & Spain
un.org
April 2, 2025 at 6:31 PM
This is an excellent look at the rapidly increasing risks and rewards of AI systems.
📢 ❗Siliconversations on YouTube released an animated explainer for FLI Executive Director Anthony Aguirre’s new essay, "Keep The Future Human"!

🎥 Watch at the link in the replies for a breakdown of the risks from smarter-than-human AI - and Anthony's proposals to steer us toward a safer future:
March 12, 2025 at 11:43 AM
AI risk is finally becoming dinner table conversation.

But is AI safety becoming action?

Please consider doing what you can to help with the fight. Take the Existential Safety Action Pledge: actionforsafety.org.
Existential Safety Action Pledge
I commit to taking at least one action every workday to reduce existential risks until an international body of recognized experts has declared that humanity’s risk of extinction is under 0.1% per year.
actionforsafety.org
March 12, 2025 at 11:41 AM
We recently participated in a consultation for the design of an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance within the United Nations.

They just published a short paper summarizing the input of the 500+ individuals involved: un.org/global-digit...
March 4, 2025 at 2:46 PM
If you have the time, you can join the livestream now: www.iaseai.org/conference/l....

Given the stakes, this may go down in history as one of the most important conferences ever.
Livestream of the IASEAI 2025 Conference
Livestream of The International Association for Safe and Ethical AI inaugural conference (IASEAI ‘25) on Feb 6-7, 2025.
www.iaseai.org
February 7, 2025 at 9:23 AM
One of the best ways you can ensure your survival is to tell world leaders loudly and clearly that you do not want any one person, company or nation to have godlike power.

Get to the streets and protest.
📢 Don't let AI companies gamble with our future. Join the next PauseAI protests from February 8th to 11th in 15+ cities.

Protests: pauseai.info/2025-february

Sign the petition: chng.it/WJh5XL52K4
January 28, 2025 at 1:51 PM
Concerned citizens: if you're involved in existential safety advocacy, or want to get involved, please join us for our all hands calls: www.existentialsafety.org/community-ev...
Community Events · Collective Action for Existential Safety
We periodically run all hands calls to connect more deeply with others fighting for our existential safety.
www.existentialsafety.org
January 14, 2025 at 3:11 PM