ControlAI
@controlai.com
340 followers 13 following 740 posts
We work to keep humanity in control. Subscribe to our free newsletter: https://controlai.news Join our discord at: https://discord.com/invite/ptPScqtdc5
Pinned
controlai.com
AIs are sabotaging their own shutdown mechanisms, blackmailing engineers, and showing concerning biothreat capabilities.

Meanwhile, an Anthropic researcher admits they want Claude n to build Claude n+1.

Covered in the latest edition of our newsletter:
controlai.news/p/im-...
I’m sorry Sam, I’m afraid I can’t do that
Blackmail and sabotage
controlai.news
controlai.com
Sam Altman restates his warning that the development of superintelligence is the greatest threat to human existence, and AI can now design viruses.

Read about these developments and more in our latest article!
The Greatest Threat
Sam Altman’s latest warning that superintelligence could cause human extinction.
controlai.news
controlai.com
Sam Altman says we need to push for global governance of AI to prevent it from ending in disaster.
controlai.com
NEW: Sam Altman says there's a 2% chance that artificial superintelligence causes human extinction. He says it's the biggest threat to the existence of mankind.
controlai.com
Sam Altman says the development of superhuman machine intelligence is the biggest threat to the existence of mankind.
controlai.com
Sam Altman says there is potential for "real catastrophic risk" with frontier AIs, and that governments should use regulation to address these.
controlai.com
Notably, even 'optimists' in the AI industry think that building superintelligence could be much more dangerous than this. Anthropic's CEO Dario Amodei recently said he believes there's a 25% chance that this ends in disaster.
A ‘Godfather of AI’ Remains Concerned as Ever About Human Extinction
Yoshua Bengio worries about AI’s capacity to deceive users in pursuit of its own goals. “The scenario in ‘2001: A Space Odyssey’ is exactly like this,” he says.
www.wsj.com
controlai.com
"The thing with catastrophic events like extinction, and even less radical events that are still catastrophic like destroying our democracies, is that they’re so bad that even if there was only a 1% chance it could happen, it’s not acceptable."
controlai.com
Bengio says there's a sense in which current approaches to aligning AI are never going to deliver the kind of trustworthiness that public users and companies demand.

What's an acceptable risk?

Bengio says even a 1% chance of an event like human extinction is unacceptable.
controlai.com
Yoshua Bengio warns of the danger of superintelligence in a Wall Street Journal interview:

"If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous. It’s like creating a competitor to humanity that is smarter than us."
controlai.com
AI researcher Nate Soares explains why we wouldn't be able to defend ourselves against superintelligence.

"Predicting how to fight that is like people from the year 1800 predicting how to fight a modern army. It's not a contest you can win."
controlai.com
If you're concerned about the risk from superintelligence and want to help make a difference, we've made that super easy for you.

Sign up to Microcommit, and we'll send you a small number of easy tasks that take just 5 minutes per week!

https://microcommit.io
controlai.com
"We've already seen in lab conditions today, we've already seen AIs try to escape the lab ... We already have those warning signs. We're clearly not stopping in front of those warning signs."
— Nate Soares, AI researcher & coauthor of "If Anyone Builds It, Everyone Dies"
controlai.com
Is a 25% chance of AI wiping us out worth it?

"Those are insane numbers. If a bridge downtown had a 25% chance of collapsing, we wouldn't say, think of the benefits of having the bridge open. We would say, shut it down, build a better bridge."
— Nate Soares
controlai.com
AI researcher Nate Soares says smarter-than-human AIs won't wipe us out because they hate us, but because we'll be in competition with them.

"It's similar to how humans don't hate ants, but when we're building a skyscraper, the ants' home gets destroyed."
controlai.com
Professor Stuart Russell says we should ask for cast-iron guarantees that AIs don't cause human extinction.

"Governments must require cast-iron guarantees in the form of either statistical evidence or mathematical proof ... anything short of that is just asking for disaster."
controlai.com
How could AI wipe us out?

In partnership with us, MinuteEarth has just produced a great new video explaining some ways this could happen.

MinuteEarth has over 3 million subscribers on YouTube; it's great to see so many people learning about the risk!

[link below]
controlai.com
Senators Josh Hawley and Richard Blumenthal have introduced a groundbreaking AI bill in the Senate, while California has just passed AI transparency legislation.

Check out our new article covering these developments and more!
Before the Cliff: Regulating AI
The Artificial Intelligence Risk Evaluation Act
controlai.news
controlai.com
Sam Altman has said superintelligence is probably the greatest threat to the continued existence of humanity.

Many experts think so too.

Find out why on our new campaign site:

[link below]
controlai.com
AI researcher Eliezer Yudkowsky says superintelligence could wipe us out because it might see us as an inconvenience, or it might even do so without any intent.

"It builds more and more factories and more and more power plants until it's boiling the oceans for heat dissipation."
controlai.com
"If anyone anywhere builds a superintelligence, everyone everywhere dies."
— Eliezer Yudkowsky
controlai.com
Sharing these facts is one way you can help make sure everyone is informed about the danger of superintelligent AI, a necessary step toward preventing the risk.

For more ways you can help, check out our campaign site!
Ban Superintelligence
Preventing extinction risk from AI.
campaign.controlai.com
controlai.com
🤯 Three key facts about AI that everyone should know:

1️⃣ AI researchers and CEOs have warned AI could wipe us out.

2️⃣ Experts believe superintelligence could be built in the next 5 years.

3️⃣ No substantial legislation to ensure safe AI development is in effect anywhere.
controlai.com
"The trouble is that the only winner of an AI arms race is going to be the AI.

If you build a superintelligence, you don't have a superintelligence. The superintelligence has you."
— Eliezer Yudkowsky