ControlAI
controlai.com
@controlai.com
We work to keep humanity in control.

Subscribe to our free newsletter: https://controlai.news

Join our discord at: https://discord.com/invite/ptPScqtdc5
Cybersecurity AI time horizons are growing exponentially.

The UK's AI Security Institute found that the length of cybersecurity tasks AIs can complete is doubling roughly every 8 months. That's actually an upper bound; the true rate could be even faster.
January 12, 2026 at 5:55 PM
Former Northern Ireland First Minister Baroness Foster: Modern AIs aren't built piece by piece, they're grown. Even AI developers don't understand them.

"We simply do not know what a world with smarter-than-human AI would look like, much less how to manage or grow it safely."
January 12, 2026 at 1:56 PM
Lord Goldsmith: We can't just dismiss the hundreds of experts and tech leaders who've warned that AI poses a risk of extinction.

"They recognise that superintelligent AI is far more powerful than any of us can understand, that it has the capacity to overwhelm us"
January 12, 2026 at 11:04 AM
Baroness Ritchie: Even AI CEOs have stated AI poses an extinction risk.

"This is sobering and opens the question of what has been done by these companies to address these risks."
January 11, 2026 at 3:33 PM
From the House of Lords debate on AI: Lord Fairfax urges the UK government to acknowledge the extinction threat superintelligence poses to humanity, prevent its development, and champion an international prohibition on development of the technology!
January 11, 2026 at 10:29 AM
From the Lords debate on AI: Baroness Cass says we might have less than 5 years to act.

Citing Anthropic co-founder Jack Clark's anxiety about frontier AI development, Cass says "if the AI executives are worried, then I'm worried and we all should be worried."
January 10, 2026 at 10:02 AM
Former Northern Ireland First Minister Baroness Foster says it would be reckless to ignore the risk posed by superintelligent AI.

Countless experts and 100+ UK politicians have acknowledged the risk of extinction posed by AI, which comes from superintelligence.
January 9, 2026 at 3:40 PM
Lord Goldsmith calls for the UK government to support a prohibition on the development of superintelligence and recognise the risk of extinction that advanced AI poses to humanity.
January 9, 2026 at 11:43 AM
NEW: In the House of Lords AI debate today, Lord Fairfax says that mitigating the risk of extinction from AI should not be "a" global priority, it should be "the" global priority, because of the seriousness of the situation.
January 8, 2026 at 3:16 PM
"How concerned are you about the development of superintelligent AI, say on a scale of one to 10?"

"11."

Just before Christmas, we sent a copy of If Anyone Builds It, Everyone Dies to every MP and peer.

Sir Desmond Swayne MP says he'll be reading it!
January 8, 2026 at 11:32 AM
AIs can now do novel wet lab work.

“GPT‑5 created novel wet lab protocol improvements, optimizing the efficiency of a molecular cloning protocol by 79x.”

Those are the words of OpenAI’s recent blog post on measuring AI’s capability to accelerate wet lab research.

Why does this matter?
January 7, 2026 at 6:22 PM
Elon Musk predicts AGI will be developed this year and says "AI will exceed the intelligence of all humans combined" by 2030.

Musk has often warned that the development of artificial superintelligence could lead to human extinction.
January 7, 2026 at 3:01 PM
Ben Lake MP asks whether ministers should have last-resort powers to direct the shutdown of data centres or AI systems in a security emergency.

Lake says that given the evolving nature of cyberthreats, this could be one way to future-proof the cybersecurity bill.
January 7, 2026 at 11:02 AM
Great to see the House of Lords Library publish this excellent briefing on the risk of losing control of AI, ahead of the Lords debate on Thursday.
January 6, 2026 at 5:42 PM
Professor Stuart Russell: "Dario Amodei says a 25% chance of extinction, Elon Musk has a 30% chance... So what are they doing? They are playing Russian roulette with every human being on Earth, without our permission."

Developing superintelligence is a dangerous gamble.
January 6, 2026 at 4:17 PM
AI godfather Geoffrey Hinton says he's more worried about AI than he was 2 years ago when he quit Google and began warning about the extinction risk from superintelligent AI.

"I'm probably more worried. It's progressed even faster than I thought."
January 6, 2026 at 11:31 AM
"I am working to try to make things go better but it’s very high risk and human civilisation is on the whole sleepwalking into this transition."

In a new Guardian interview, AI expert and Programme Director at the UK's ARIA David Dalrymple gives a clear warning.
January 5, 2026 at 5:40 PM
Do you have a few minutes per week to help prevent the worst risks of AI?

With just 5 minutes, you can make a difference by participating in our Microcommit project.

Once a week we’ll send you a small number of easy tasks you can do to help!

Join us!
https://microcommit.io
January 2, 2026 at 10:36 AM
Have you contacted your representatives about the threat posed by superintelligence?

It doesn't take long! With our contact tools it takes less than a minute!

Every call, every email, and every letter from you makes a difference. Make your voice heard!

[link below]
December 29, 2025 at 11:30 AM
Senator Bernie Sanders says the possibility of superintelligence replacing humans in controlling the planet is not science fiction.
December 26, 2025 at 12:12 PM
Victoria Collins MP asks what the UK government is doing to clamp down on AI threats, including the threat posed by superintelligence.

Collins says the UK could take the lead and build an AI safety agency.
December 23, 2025 at 4:56 PM
Have you contacted your representatives about the threat posed by superintelligence?

It doesn't take long! With our contact tools it takes less than a minute!

Every call, every email, and every letter from you makes a difference. Make your voice heard!

[link below]
December 22, 2025 at 11:30 AM
Elon Musk says he has nightmares about AI and would slow it down if he could.
December 17, 2025 at 12:11 PM
Lord Hunt of Kings Heath supports our campaign for binding regulation on the most powerful AI systems!

“AI has the potential to transform our society for the better, but we cannot be blind to the risks posed by advanced AI systems.
December 15, 2025 at 10:00 AM
Spotted "If Anyone Builds It, Everyone Dies" recommended by The Guardian as one of its books of the year in its Saturday edition!

We also recommend reading it! It's important that everyone is informed about the danger of superintelligent AI.
December 12, 2025 at 5:43 PM