Future of Life Institute
@futureoflife.org
790 followers · 26 following · 92 posts
We work on reducing extreme risks and steering transformative technologies to benefit humanity. Learn more: futureoflife.org
futureoflife.org
🎨 New Keep the Future Human creative contest!

💰 We're offering $100K+ for creative digital media that brings the key ideas in Executive Director Anthony Aguirre's Keep the Future Human essay to life, to reach wider audiences and inspire real-world action.

🔗 Learn more and enter by Nov. 30!
futureoflife.org
🚨 New AI systems.

❓ Growing uncertainty.

🤝 One shared future, for us all to shape.

"Tomorrow’s AI", our new scrollytelling site, visualizes 13 interactive, expert-forecast scenarios showing how advanced AI could transform our world - for better, or for worse: www.tomorrows-ai.org
futureoflife.org
👉 As reviewer Stuart Russell put it, “Some companies are making token efforts, but none are doing enough… This is not a problem for the distant future; it’s a problem for today.”

🔗 Read the full report now: futureoflife.org/ai-safety-in...
2025 AI Safety Index - Future of Life Institute
The Summer 2025 edition of our AI Safety Index, in which AI experts rate leading AI companies on key safety and security domains.
futureoflife.org
6️⃣ OpenAI secured second place, ahead of Google DeepMind.

7️⃣ Chinese AI firms Zhipu AI and DeepSeek received failing overall grades.

🧵
futureoflife.org
3️⃣ Only 3 of the 7 firms (Anthropic, OpenAI, and Google DeepMind) report substantive testing for dangerous capabilities linked to large-scale risks such as bio- or cyber-terrorism.

4️⃣ Whistleblowing policy transparency remains a weak spot.

5️⃣ Anthropic received the best overall grade (C+).

🧵
futureoflife.org
Key takeaways:
1️⃣ The AI industry is fundamentally unprepared for its own stated goals.

2️⃣ Capabilities are accelerating faster than risk-management practice, and the gap between firms is widening.

🧵
futureoflife.org
‼️📝 Our new AI Safety Index is out!

➡️ Following our 2024 index, 6 independent AI experts rated leading AI companies - OpenAI, Anthropic, Meta, Google DeepMind, xAI, DeepSeek, and Zhipu AI - across critical safety and security domains.

So what were the results? 🧵👇
futureoflife.org
‼️ Congress is considering a 10-year ban on state AI laws, blocking action on risks like job loss, surveillance, disinformation, and loss of control.

It’s a huge win for Big Tech - and a big risk for families.

✍️ Add your name and say no to the federal block on AI safeguards: FutureOfLife.org/Action
futureoflife.org
🆕 📻 New on the FLI podcast, Zvi Mowshowitz (@thezvi.bsky.social) joins to discuss:

- The recent hot topic of sycophantic AI
- Time horizons of AI agents
- AI in finance and scientific research
- How AI differs from other technologies
...and more.

🔗 Tune in to the full episode now at the link below:
futureoflife.org
➡️ The Singapore Consensus, building on the International AI Safety Report backed by 33 countries, aims to enable more impactful R&D that quickly delivers safety and evaluation mechanisms, fostering a trustworthy, reliable, and secure ecosystem where AI is used for the public good.
futureoflife.org
‼️ On April 26, 100+ AI scientists convened at the Singapore Conference on AI to produce the just-released Singapore Consensus on Global AI Safety Research Priorities. 🧵⬇️
futureoflife.org
➕ Be sure to check out @asterainstitute.bsky.social's Residency program, now accepting applications for the Oct. 2025 cohort! The program supports "creative, high-agency scientists, engineers and entrepreneurs" in future-focused, high-impact, open-first innovation.

Learn more: astera.org/residency
futureoflife.org
📺 📻 New on the FLI Podcast: artificial general intelligence (AGI) safety researcher @stevebyrnes.bsky.social of @asterainstitute.bsky.social joins for a discussion diving into the hot topic of AGI, including different paths to it - and why brain-like AGI would be dangerous. 🧵👇
futureoflife.org
💪 Foster transparent development through an AI industry whistleblower program and mandatory security incident reporting.
futureoflife.org
🧰 Protect American workers and critical infrastructure from AI-related threats by tracking labor displacement and placing export controls on advanced AI models.
futureoflife.org
🚫 Ensure AI systems are free from ideological agendas and ban models with superhuman persuasive abilities.
futureoflife.org
🚨 Protect the presidency from loss of control by mandating “off-switches”, imposing a targeted moratorium on developing uncontrollable AI systems, and enforcing strong antitrust measures.
futureoflife.org
🇺🇸 We're sharing our recommendations for President Trump's AI Action Plan, focused on protecting U.S. interests in the era of rapidly advancing AI.

🧵 An overview of the measures we recommend 👇
futureoflife.org
🔗 And be sure to read Keep the Future Human, available here: keepthefuturehuman.ai
futureoflife.org
🔗 Tune in now at the link, or on your favorite podcast player, to hear how Anthony proposes we change course to secure a safe future with AI: www.youtube.com/watch?v=IqzB...
Keep the Future Human (with Anthony Aguirre)
YouTube video by Future of Life Institute
www.youtube.com