Atoosa Kasirzadeh
@atoosakz.bsky.social
160 followers 93 following 49 posts
societal impacts of AI | assistant professor of philosophy & software and societal systems at Carnegie Mellon University & AI2050 Schmidt Sciences early-career fellow | system engineering + AI + philosophy | https://kasirzadeh.org/
Reposted by Atoosa Kasirzadeh
andyliu.bsky.social
🚨New Paper: LLM developers aim to align models with values like helpfulness or harmlessness. But when these conflict, which values do models choose to support? We introduce ConflictScope, a fully-automated evaluation pipeline that reveals how models rank values under conflict.
(📷 xkcd)
Reposted by Atoosa Kasirzadeh
bennettmcintosh.com
If you read one review of If Anyone Builds It, Everyone Dies, make it @sigalsamuel.bsky.social's, about competing #AI worldviews

"It was hard to seriously entertain both [doomer and AI-as-normal tech] views at the same time."

www.vox.com/future-perfe...
The AI doomers are not making an argument. They’re selling a worldview.
How rational is Eliezer Yudkowsky’s prophecy?
www.vox.com
Reposted by Atoosa Kasirzadeh
sigalsamuel.bsky.social
I had a very trippy experience reading the new Eliezer Yudkowsky book, IF ANYONE BUILDS IT, EVERYONE DIES.

I agree with the basic idea that the current speed & trajectory of AI progress is incredibly dangerous!

But I don't buy his general worldview. Here's why:

www.vox.com/future-perfe...
“AI will kill everyone” is not an argument. It’s a worldview.
How rational is Eliezer Yudkowsky’s prophecy?
www.vox.com
Reposted by Atoosa Kasirzadeh
lilianedwards.bsky.social
Very happy to announce a brilliant new paper on AI by the wonderful @atoosakz.bsky.social, Philipp Hacker, and, er, me, on systemic risk and how it is confusingly interpreted in different ways across the EU AIA, the DSA, and their origin, financial regulation.

Paper at https://arxiv.org/pdf/2509.17878
Reposted by Atoosa Kasirzadeh
bennettmcintosh.com
Kasirzadeh also has a really insightful, heartbreaking thread about what "normal" hides here: bsky.app/profile/atoo...
atoosakz.bsky.social
Normal isn't a fact; it's a perception built from repetition & power. My brother can call a night of missile fire normal simply because he's used to it. To an outsider, the scene is clearly abnormal. Judgements of normality rest on thin, shifting ground. History shows how quickly baselines move. 5/n
Reposted by Atoosa Kasirzadeh
bennettmcintosh.com
Anyway, big fan of the 3rd option Sigal presents, @atoosakz.bsky.social 's warning of "gradual accumulation of smaller, seemingly non-existential, AI risks" until catastrophe.

A warning that suggests AI safetyists should take AI ethics & sociology much more seriously! arxiv.org/pdf/2401.07836
arxiv.org
atoosakz.bsky.social
How good are current AI agents at scientific discovery? We have answers in our new paper: lnkd.in/dxzmZXpR!
We look at 4 ways AI scientists can go wrong, design experiments showing how these failures manifest in 2 open-source AI scientists, and recommend detection strategies.
atoosakz.bsky.social
We are very excited to have you, Bálint!
atoosakz.bsky.social
🔊📢 The call for papers is now open (iaseai.org/iaseai26)! I’m thrilled to share that I will once again be serving as Program Co-Chair for the International Association for Safe & Ethical AI Conference, this time for the 2026 edition, along with Co-Chair Jaime Fernández Fisac.
IASEAI'26
Building a Global Movement for Safe and Ethical AI
iaseai.org
Reposted by Atoosa Kasirzadeh
ryantlowe.bsky.social
Introducing: Full-Stack Alignment 🥞

A research program dedicated to co-aligning AI systems *and* institutions with what people value.

It's the most ambitious project I've ever undertaken.

Here's what we're doing: 🧵
atoosakz.bsky.social
At the #AIforGood Summit 2025: ‘AI agents’ are mentioned in almost every panel, from law and governance to the developer community; yet the term remains opaque. A plug for our paper with Iason Gabriel, which offers one of the clearest definitions I’ve seen: arxiv.org/pdf/2504.21848
arxiv.org
atoosakz.bsky.social
I was planning to launch my substack on "Human, life, AI, and future" in a few months, with something very different. But life doesn’t always care about our timelines. Events erupt. Emotions build. And suddenly, waiting feels like avoidance. My first post: atoosatopia.substack.com/p/valid-and-...
atoosakz.bsky.social
But because judgments of normal are so subjective, and because the world we live in, entangled with advanced AI systems, is anything but normal, the metaphor can conceal more than it reveals. So here is my proposal: keep the demystification project alive; drop the label “normal”. 10/n
atoosakz.bsky.social
I’ve pushed back against the “super-intelligence will kill us all” narrative myself for years.
(See, for example, "Two types of AI existential risk: decisive and accumulative": link.springer.com/article/10.1...; "AI safety for everyone": www.nature.com/articles/s42...) 9/n
atoosakz.bsky.social
A system that decides who lives or dies is nothing like a household appliance. I admire the scholars Arvind Narayanan and Sayash Kapoor, who coined the phrase, and I share their wish to redefine the notion of “normal” to move the AI governance debate beyond extreme doom. 8/n
atoosakz.bsky.social
The metaphor of "AI as normal technology" appeals because it demystifies in the first place. But the label "normal" collapses the moment AI directs drones, misidentifies civilians, rewrites battlefield doctrine, or nudges users toward self-harm. 7/n
atoosakz.bsky.social
Seat belts were fussy in 1960; masks were not normal in 2019; both now signal basic care. Those who control information circulation or policy can freeze a narrative in place, branding a live hazard normal by sheer repetition. 6/n
atoosakz.bsky.social
Normal isn't a fact; it's a perception built from repetition & power. My brother can call a night of missile fire normal simply because he's used to it. To an outsider, the scene is clearly abnormal. Judgements of normality rest on thin, shifting ground. History shows how quickly baselines move. 5/n
atoosakz.bsky.social
A toaster browns bread & stops; an agentic chatbot can keep learning and might suddenly write ransomware. Consumer devices have fixed functions & well-tested safety rules; self-modifying agentic systems don't. Using the same label for both narrows the questions we ask about oversight & liability. 4/n
atoosakz.bsky.social
When most people hear "normal technology" I bet they picture safe, familiar gadgets—kettles, routers, maybe a smartphone. They don't picture autonomous drones, facial-recognition gunsights, or psychosis chatbots. The "normal" metaphor understates the scale of impact that modern AI can inflict. 3/n
atoosakz.bsky.social
Yesterday, Israel’s president warned, “Tehran will burn,” while my brother in Tehran told me on video, “Don’t worry, everything feels normal.” The clash of concepts in our conversation, his sense of normality against my sense of raw abnormality, pushed me to unpack what normal hides. 2/n
atoosakz.bsky.social
Ever since I first heard the slogan “AI as normal technology,” I’ve felt uneasy. Tonight that unease crystallised. In this 🧵 I unpack what normal hides and why the metaphor may ultimately fail to capture AI’s abnormal impacts on human life and societies. 1/n
atoosakz.bsky.social
What can policy makers learn from “AI safety for everyone” (Read here: www.nature.com/articles/s42... ; joint work with @gbalint.bsky.social )? I wrote about some policy lessons for Tech Policy Press.
atoosakz.bsky.social
Congrats, Dr. Alex, on writing a fantastic dissertation! I had the pleasure of co-supervising Alex during my time at the University of Edinburgh, 2022-2024! For a glimpse of Alex's research, read this amazing paper: link.springer.com/article/10.1...