"Why we aren't getting any better at AI alignment" by ControlAI would probably be agreeable to you.
LinkedIn
lnkd.in
March 20, 2025 at 8:48 AM
"Why we aren't getting any better at AI alignment" by ControlAI would probably be agreeable to you.
This fairly solidly explains the AI/AGI/ASI problem, though ASI technically means any AI smarter than us in all ways, right up to "machine god".
The race to AI is based upon a faulty premise: We have more to gain by working together... remember that
#AI #AGI #ASI #AITakeover #ControlAI #LLM #Superintelligence
What happens if AI just keeps getting smarter?
YouTube video by Rational Animations
www.youtube.com
May 3, 2025 at 4:48 PM
Owing to text limits, I've enclosed the bulk of my message in an image below.
The link included there is also provided here:
URL: www.youtube.com/watch?v=3df6...
#AGI #AI #AIEthics #AIGovernance #ArtificialIntelligence #ASI #ControlAI #DataCenter #DataCenters #Ethics #Superintelligence #TechBros
AI: The New Nuclear Option
YouTube video by The Lincoln Project
www.youtube.com
November 9, 2025 at 1:42 AM
🚨 BREAKING: 200+ former heads of state, ministers, diplomats, Nobel laureates, AI scientists, political leaders and 70+ organizations just made a joint call for global red lines on AI.
ControlAI is proud to join this coalition in support!
September 22, 2025 at 4:06 PM
When it comes to the matter of AI, it's important to remember that even those who cannot commit an enormous amount of effort can still commit a small amount.
#Activism #AI #AGI #AIGovernance #ControlAI #CriticalThinking #Climate #DataCenters #Environment #Environmentalism
microcommit.io
October 9, 2025 at 12:09 PM
Looks like they have a better grasp of reality and agree with most AI scientists. Go talk to your politicians. #controlai
May 6, 2025 at 5:16 AM
"Ex-OpenAI Researcher Warns AI Companies Will Lose Control of AI" - this ControlAI podcast with Steven Adler is highly recommended listening ahead of PauseCon this weekend www.youtube.com/watch?v=dMQW...
Ex-OpenAI Researcher Warns AI Companies Will Lose Control of AI | ControlAI Podcast w/ Steven Adler
YouTube video by ControlAI
www.youtube.com
June 27, 2025 at 8:29 AM
"Ex-OpenAI Researcher Warns AI Companies Will Lose Control of AI" - this ControlAI podcast with Steven Adler is highly recommended listening ahead of PauseCon this weekend www.youtube.com/watch?v=dMQW...
Yes, and papers have also been written about this scientifically. That is why people (the EU Parliament, ControlAI, BlueDot) are working on establishing rules, codes of conduct, and metrics.
August 20, 2025 at 8:37 PM
🗣️ "Cette déclaration est une occasion ratée" - Andrea Miotti, directrice de ControlAI
Le texte ne prolonge pas les avancées des sommets de Bletchley et de Séoul sur la sécurité de l'IA
February 8, 2025 at 7:28 PM
🗣️ "Cette déclaration est une occasion ratée" - Andrea Miotti, directrice de ControlAI
Le texte ne prolonge pas les avancées des sommets de Bletchley et de Séoul sur la sécurité de l'IA
Le texte ne prolonge pas les avancées des sommets de Bletchley et de Séoul sur la sécurité de l'IA
Why aren't we getting any better at alignment?
ControlAI advisor Gabe Alfour was recently interviewed on Dr Waku’s podcast to discuss this topic.
We’re planning to expand our content and do more interviews, so we’d be really keen to know what you think!
www.youtube.com/watc...
Why we aren't getting any better at AI alignment
Gabe co-founded Conjecture and has been involved in AI safety for a long time, but he has never spoken publicly before. In this interview, we discuss his unique perspective on AI alignment. AI alignment is a subproblem of what Gabe calls the "general alignment problem", which is arguably the most im
www.youtube.com
March 18, 2025 at 5:00 PM
SciShow's video (in collaboration with ControlAI) is definitely worth a watch! www.youtube.com/watch?v=90C3...
We’ve Lost Control of AI
If you find these trends concerning and you want to make a difference, you can go to http://controlai.com/scishow, where ControlAI has created tools to help you easily voice your concerns to your…
www.youtube.com
November 7, 2025 at 6:30 PM
[Shoper Gamer] OpenAI adjusts its approach to controlling AI behavior
www.blockdit.com
February 14, 2025 at 3:15 PM
If you want to contribute to AI policy, we'd love for you to take part in Red Teaming A Narrow Path: ControlAI Policy Sprint with Apart Research!
Sign up and participate for the chance to win money and mentoring from us for your governance work:
apartresearch.com/sp...
Red Teaming A Narrow Path: ControlAI Policy Sprint | Apart Research
Apart Research is an independent research organization focusing on AI safety. We accelerate AI safety research through mentorship, collaborations, and research sprints.
apartresearch.com
June 9, 2025 at 5:34 PM
ControlAI has just launched our new campaign pushing for a ban on superintelligence! We've gathered some big-name endorsements and more are on the way. Check it out at campaign.controlai.com
We also put together a release video for the launch! www.youtube.com/watch?v=oAJU...
Why experts fear superintelligent AI – and what we can do about it
YouTube video by ControlAI
www.youtube.com
September 12, 2025 at 5:01 PM
📩 ControlAI Weekly Roundup: US-China Detente or AGI Suicide Race?
1️⃣ Biden and Xi agree AI shouldn’t control nuclear weapons
2️⃣ A US government commission recommends a race to AGI
3️⃣ Bengio writes about advances in the ability of AI to reason
controlai.news/p/con...
ControlAI Weekly Roundup #5: US-China Detente or AGI Suicide Race?
Biden and Xi agree AI shouldn’t control nuclear weapons, a US government commission recommends a race to AGI, and Yoshua Bengio writes about advances in the ability of AI to reason.
controlai.news
November 21, 2024 at 6:08 PM
Second experiment in masking content from AI: text content is sent from the server encrypted with a really basic key (like "1" or "2"), and the user is given a slider that functions as the input to decrypt, so they can visually see when it's correct. #noAI #controlAI
August 28, 2025 at 1:20 AM
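The post doesn't share code, but the mechanism it describes is roughly a shift cipher with a tiny key space, unlocked by a slider. Here is a minimal sketch of that idea in TypeScript; the function names, the 0-9 slider range, and the use of a simple code-point shift are assumptions for illustration, not the poster's actual implementation:

```typescript
// Hypothetical sketch of the "slider decryption" idea described above.
// The server ships text shifted by a small numeric key; the client
// re-shifts it by whatever the slider is set to. Readable text only
// appears when the slider value matches the key.

// Shift every character's code point forward by `key` (a very small number).
function encode(plain: string, key: number): string {
  return Array.from(plain)
    .map((ch) => String.fromCodePoint(ch.codePointAt(0)! + key))
    .join("");
}

// Decoding is just shifting back by the slider's current value.
function decode(cipher: string, sliderValue: number): string {
  return Array.from(cipher)
    .map((ch) => String.fromCodePoint(ch.codePointAt(0)! - sliderValue))
    .join("");
}

// Simulate the user dragging the slider across its (assumed) 0-9 range.
const secretKey = 2; // "a really basic key (like '1' or '2')"
const cipher = encode("Humans should stay in control of AI.", secretKey);

for (let slider = 0; slider <= 9; slider++) {
  // The user "visually sees when it's correct": only slider === 2 is readable.
  console.log(`slider=${slider}: ${decode(cipher, slider)}`);
}
```

The apparent design point is that the text is trivial for a human to unlock but never appears as plaintext in the page source that a scraper would ingest.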
📩 ControlAI Weekly Roundup: Sneaky Machines
1️⃣ OpenAI launches o1, in tests tries to avoid shutdown
2️⃣ Google DeepMind launches Gemini 2.0
3️⃣ Comments by incoming AI czar David Sacks on AGI threat resurface
Get our free newsletter here 👇
controlai.news/p/sub...
ControlAI Weekly Roundup #8: Sneaky Machines
OpenAI launches o1, which in tests tried to avoid shutdown, Google DeepMind launches Gemini 2.0, and comments by incoming US AI czar David Sacks expressing concern about the threat from AGI resurface.
controlai.news
December 12, 2024 at 5:54 PM
📩 ControlAI Weekly Roundup: Time to Unplug?
1️⃣ Voters back AI policy focus on preventing extreme risks
2️⃣ Meta asks the government to block OpenAI's for-profit switch
3️⃣ Eric Schmidt warns there's a time to unplug AI
Get our free newsletter:
controlai.news/p/con...
ControlAI Weekly Roundup #9: Time to Unplug?
Voters back an AI policy focus on preventing extreme risks, Meta asks the government to block OpenAI switching to a for-profit, and Eric Schmidt warns there’s a time to consider unplugging AI systems.
controlai.news
December 19, 2024 at 7:53 PM
Excellent work! If you're ever interested in covering the AI safety movement, I work at Conjecture, who recently put out thecompendium.ai, and also collaborate with ControlAI, which focuses on lobbying, and PauseAI, which is the mass-movement protest organization. Can get you contacts/interviews :)
The Compendium
thecompendium.ai
December 17, 2024 at 3:31 PM
Yup. And on SciShow he's pushing ControlAI, a TESCREAL/EA organisation whose CEO is connected to LessWrong (Eliezer Yudkowsky) too. :/
It's messy AF.
November 1, 2025 at 10:18 PM
Polling from ControlAI/YouGov - controlai.com/polls
January 2025 Polls | ControlAI
At ControlAI we are fighting to keep humanity in control.
controlai.com
September 8, 2025 at 2:53 PM