Subscribe to our free newsletter: https://controlai.news
Join our discord at: https://discord.com/invite/ptPScqtdc5
"We simply do not know what a world with smarter-than-human AI would look like, much less how to manage or grow it safely."
"We simply do not know what a world with smarter-than-human AI would look like, much less how to manage or grow it safely."
"They recognise that superintelligent AI is far more powerful than any of us can understand, that it has the capacity to overwhelm us"
"They recognise that superintelligent AI is far more powerful than any of us can understand, that it has the capacity to overwhelm us"
"This is sobering and opens the question of what has been done by these companies to address these risks."
"This is sobering and opens the question of what has been done by these companies to address these risks."
Citing Anthropic co-founder Jack Clark's anxiety about frontier AI development, Cass says "if the AI executives are worried, then I'm worried and we all should be worried."
Countless experts and more than 100 UK politicians have acknowledged the risk of extinction posed by AI, a risk that stems from superintelligence.
"11."
Just before Christmas, we sent a copy of If Anyone Builds It, Everyone Dies to every MP and peer.
Sir Desmond Swayne MP says he'll be reading it!
"11."
Just before Christmas, we sent a copy of If Anyone Builds It, Everyone Dies to every MP and peer.
Sir Desmond Swayne MP says he'll be reading it!
Musk has often warned that the development of artificial superintelligence could lead to human extinction.
Lake says that given the evolving nature of cyberthreats, this could be one way to future-proof the cybersecurity bill.
Developing superintelligence is a dangerous gamble.
"I'm probably more worried. It's progressed even faster than I thought."
"I'm probably more worried. It's progressed even faster than I thought."
Collins says the UK could take the lead and build an AI safety agency.