ControlAI
@controlai.com
controlai.com
We work to keep humanity in control.

Subscribe to our free newsletter: https://controlai.news

Join our Discord at: https://discord.com/invite/ptPScqtdc5
Pinned
We built a coalition of 100+ UK lawmakers who are taking a stand against the extinction risk from superintelligent AI and backing regulation of the most powerful AIs!

From the former AI Minister to the former Defence Secretary, cross-party support is crystal clear.

Time to act!
See our breakdown of AISI's Frontier AI Trends report here:
Self-replicating AIs
The AI Security Institute finds that AIs are improving rapidly across all tested domains, including in relation to the risk of losing control of AI.
controlai.news
January 12, 2026 at 5:56 PM
AISI's Frontier AI Trends report states "AI capabilities are improving rapidly across all tested domains", which includes capabilities relevant to biology, chemistry, and loss-of-control risks.

Losing control of smarter-than-human AIs could be disastrous and could lead to human extinction.
January 12, 2026 at 5:56 PM
But if this observed trend holds, AIs are going to get a lot better at cyber, rapidly.

Cyber isn't a special case.
January 12, 2026 at 5:55 PM
AIs have already become incredibly useful to cyberthreat actors: Anthropic recently revealed that its Claude AI was used to carry out a sophisticated, large-scale campaign of cyberattacks against government and industry targets, mostly without human input.
January 12, 2026 at 5:55 PM
They estimated this after testing different AIs on vulnerability discovery/exploitation, malware development, CTF challenges, and more. Unassisted task lengths went from less than 10 minutes in early 2023 to over an hour by mid-2025.

2025 saw the first expert-level task completions.
January 12, 2026 at 5:55 PM
Cybersecurity AI time horizons are growing exponentially.

The UK's AI Security Institute found that the length of tasks AIs can complete is doubling roughly every 8 months. That figure is an upper bound on the doubling time; the actual rate could be even faster.
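To put that in perspective, here's a rough back-of-envelope extrapolation (our illustration, not an AISI projection). With an 8-month doubling time, a task horizon T_0 grows as

    T(t) = T_0 × 2^(t/8)   (t in months)

so a one-hour horizon would reach 2^(24/8) = 8 hours within two years, and 2^(48/8) = 64 hours within four.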
January 12, 2026 at 5:55 PM
Former Northern Ireland First Minister Baroness Foster: Modern AIs aren't built piece by piece; they're grown. Even AI developers don't understand them.

"We simply do not know what a world with smarter-than-human AI would look like, much less how to manage or grow it safely."
January 12, 2026 at 1:56 PM
Lord Goldsmith: We can't just dismiss the hundreds of experts and tech leaders who've warned that AI poses a risk of extinction.

"They recognise that superintelligent AI is far more powerful than any of us can understand, that it has the capacity to overwhelm us"
January 12, 2026 at 11:04 AM
Baroness Ritchie: Even AI CEOs have stated AI poses an extinction risk.

"This is sobering and opens the question of what has been done by these companies to address these risks."
January 11, 2026 at 3:33 PM
From the House of Lords debate on AI: Lord Fairfax urges the UK government to acknowledge the extinction threat superintelligence poses to humanity, to prevent its development, and to champion an international prohibition on the technology!
January 11, 2026 at 10:29 AM
From the Lords debate on AI: Baroness Cass says we might have less than 5 years to act.

Citing Anthropic co-founder Jack Clark's anxiety about frontier AI development, Cass says "if the AI executives are worried, then I'm worried and we all should be worried."
January 10, 2026 at 10:02 AM
Former Northern Ireland First Minister Baroness Foster says it would be reckless to ignore the risk posed by superintelligent AI.

Countless experts and 100+ UK politicians have acknowledged the risk of extinction posed by AI, a risk that stems from superintelligence.
January 9, 2026 at 3:40 PM
Lord Goldsmith calls for the UK government to support a prohibition on the development of superintelligence and recognise the risk of extinction that advanced AI poses to humanity.
January 9, 2026 at 11:43 AM
What happens if AIs can copy themselves online?

The UK's AI Security Institute finds rapid AI capability gains across all tested domains, including loss-of-control-relevant skills like self-replication.

Our breakdown of the Frontier AI Trends Report:
Self-replicating AIs
The AI Security Institute finds that AIs are improving rapidly across all tested domains, including in relation to the risk of losing control of AI.
controlai.news
January 8, 2026 at 7:10 PM
NEW: In the House of Lords AI debate today, Lord Fairfax says that mitigating the risk of extinction from AI should not be "a" global priority; it should be "the" global priority, because of the seriousness of the situation.
January 8, 2026 at 3:16 PM
"How concerned are you about the development of superintelligent AI, say on a scale of one to 10?"

"11."

Just before Christmas, we sent a copy of If Anyone Builds It, Everyone Dies to every MP and peer.

Sir Desmond Swayne MP says he'll be reading it!
January 8, 2026 at 11:32 AM
There’s an interesting article in Axios where you can read more about the recent experiments:
www.axios.com/2025/1...

Or you can find OpenAI's blog post here:
openai.com/index/acc...
Exclusive: GPT-5 demonstrates ability to do novel lab work
Early evidence shows that AI can improve real-world laboratory workflows.
www.axios.com
January 7, 2026 at 6:22 PM
Despite this progress, and with top AI company CEOs predicting superintelligence will arrive in the coming years, nobody knows how to ensure that AIs vastly smarter than humans are safe or controllable. Hundreds of AI experts have warned that this poses a risk of human extinction.
January 7, 2026 at 6:22 PM
OpenAI already treats recent AIs they’ve built as capable of assisting novices in developing bioweapons.

This also underscores the rapid progress in AI we've seen in recent years.
January 7, 2026 at 6:22 PM
This demonstrates that AIs are becoming more capable in the domain of biology. This could lead to beneficial use cases, but the use of powerful biology-capable AIs also poses a risk in the hands of bad actors who would use them to bioengineer deadly pathogens.
January 7, 2026 at 6:22 PM
AIs can now do novel wet lab work.

“GPT‑5 created novel wet lab protocol improvements, optimizing the efficiency of a molecular cloning protocol by 79x.”

Those are the words of OpenAI’s recent blog post on measuring AI’s capability to accelerate wet lab research.

Why does this matter?
January 7, 2026 at 6:22 PM
Elon Musk predicts AGI will be developed this year and says "AI will exceed the intelligence of all humans combined" by 2030.

Musk has often warned that the development of artificial superintelligence could lead to human extinction.
January 7, 2026 at 3:01 PM
Ben Lake MP asks whether ministers should have last-resort powers to direct the shutdown of data centres or AI systems in a security emergency.

Lake says that given the evolving nature of cyberthreats, this could be one way to future-proof the cybersecurity bill.
January 7, 2026 at 11:02 AM
Over 100 UK politicians now support our call for binding regulation on the most powerful AI systems, publicly acknowledging the extinction threat from superintelligence.
January 6, 2026 at 5:42 PM