ControlAI
controlai.com
@controlai.com
We work to keep humanity in control.

Subscribe to our free newsletter: https://controlai.news

Join our discord at: https://discord.com/invite/ptPScqtdc5
See our breakdown of AISI's Frontier AI Trends report here:
Self-replicating AIs
The AI Security Institute finds that AIs are improving rapidly across all tested domains, including in relation to the risk of losing control of AI.
controlai.news
January 12, 2026 at 5:56 PM
AISI's frontier AI trends report states "AI capabilities are improving rapidly across all tested domains", which includes capabilities relevant to biology, chemistry, and loss-of-control risks.

Losing control of smarter-than-human AIs could be disastrous and could lead to human extinction.
January 12, 2026 at 5:56 PM
But if this observed trend holds, AIs are going to get a lot better at cyber, rapidly.

Cyber isn't a special case.
January 12, 2026 at 5:55 PM
AIs have already become incredibly useful to cyberthreat actors, with Anthropic recently revealing that its Claude AI was used to carry out a sophisticated cyberattack campaign against government and industry targets, largely without human input and at scale.
January 12, 2026 at 5:55 PM
They estimated this after testing different AIs on vulnerability discovery and exploitation, malware development, CTF challenges, and more. Unassisted task lengths grew from under 10 minutes in early 2023 to over an hour by mid-2025.

2025 saw the first expert-level task completions.
January 12, 2026 at 5:55 PM
There’s an interesting article in Axios where you can read more about these recent experiments:
www.axios.com/2025/1...

Or you can find OpenAI's blog post here:
openai.com/index/acc...
Exclusive: GPT-5 demonstrates ability to do novel lab work
Early evidence shows that AI can improve real-world laboratory workflows.
www.axios.com
January 7, 2026 at 6:22 PM
Despite this progress, and with top AI company CEOs predicting superintelligence will arrive in the coming years, nobody knows how to ensure that AIs vastly smarter than humans are safe or controllable. Hundreds of AI experts have warned that this poses a risk of human extinction.
January 7, 2026 at 6:22 PM
OpenAI already treats recent AIs it has built as capable of assisting novices in developing bioweapons.

This also underscores the rapid progress in AI we’ve seen in recent years.
January 7, 2026 at 6:22 PM
This demonstrates that AIs are becoming more capable in the domain of biology. This could lead to beneficial use cases, but the use of powerful biology-capable AIs also poses a risk in the hands of bad actors who would use them to bioengineer deadly pathogens.
January 7, 2026 at 6:22 PM
Over 100 UK politicians now support our call for binding regulation on the most powerful AI systems, publicly acknowledging the extinction threat from superintelligence.
January 6, 2026 at 5:42 PM