Evan Selinger
@evanselinger.bsky.social
1.9K followers 400 following 62 posts

Prof. Philosophy at RIT. Contributing writer at Boston Globe Ideas. Tech (yes, AI), ethics, privacy, policy, and power. http://www.eselinger.org/


Reposted by Evan Selinger

keanbirch.bsky.social
I have a new article out: "Do artifacts have political economy?" It's a riff on an old argument by Langdon Winner about the embedding of politics in technology

#STS #sociology #technoscience #technology #innovation

journals.sagepub.com/doi/10.1177/...
Do Artifacts Have Political Economy? - Kean Birch, 2025
Harking back to Langdon Winner's now classic essay “Do artifacts have politics?,” my aim in this article is to ask a very similar question—namely, do artif...
journals.sagepub.com
nathaliesmuha.bsky.social
It's out! You can now access The Cambridge Handbook of the Law, Ethics and Policy of #AI: www.cambridge.org/core/books/t...

20 #openaccess chapters covering topics on AI, ethics, philosophy, legal domains and sectoral applications.

Huge thanks to all the authors who made this possible!

evanselinger.bsky.social
Mary! It’s been too long.

I’ll put a version online next week and send you the URL.

evanselinger.bsky.social
Thomas Carroll and I put our heads together to articulate the main ethical concerns with using AI to address the empathy crisis in medicine. “The Ethics of Empathetic AI in Medicine” is now out in IEEE Transactions on Technology and Society.

ieeexplore.ieee.org/document/110...
The Ethics of Empathetic AI in Medicine
The expression of empathy is an important part of effective and humane medical care. Modern medicine faces a significant challenge in this area, at least in part due to the ever-increasing demands on ...
ieeexplore.ieee.org

evanselinger.bsky.social
Not much on social media these days. But if anyone is interested in why I think the entire paradigm of human-like AI is wrong, here’s a short post at the APA Public Philosophy blog. They leaned into the “shit on a stick” story for the cover art. 😆

blog.apaonline.org/2025/07/01/t...
The Precautionary Approach to AI: Less Human, More Honest
Have you ever caught yourself thanking Siri or saying please to ChatGPT? If so, you’re not alone. Evolutionary forces, social norms, and design features all make us naturally inclined to treat these t...
blog.apaonline.org

Reposted by Evan Selinger

evanselinger.bsky.social
Can enlightened altruistic coders save us from the oppressive tyranny of corporate managerialism? Alas, I don’t think so. I make that argument in a review essay of Darryl Campbell’s “Fatal Abstraction: Why the Managerial Class Loses Control of Software.” lareviewofbooks.org/article/what...
What Can Enlightened Coders Really Do? | Los Angeles Review of Books
Evan Selinger reads Darryl Campbell’s “Fatal Abstraction: Why the Managerial Class Loses Control of Software” with the realities his students face in mind.
lareviewofbooks.org
clequesne.bsky.social
🗓️ Nice, March 13 & 14.
To all the enthusiasts and the curious, the experienced and the novices: an international conference on facial recognition and surveillance technologies. 12 countries represented, 30 experts, and fascinating exchanges in store👇
droit.univ-cotedazur.fr/law-enforcem...

evanselinger.bsky.social
Wish I could! But it’s only in-person. There isn’t a link.

evanselinger.bsky.social
The attack on universities mirrors our blind spot with supply chains. Because both operate invisibly, misconceptions abound. The profound contributions of universities often go unnoticed—and, just like supply chains, we risk only recognizing their value when they're diminished or fail.

evanselinger.bsky.social
The main thesis is that the hermeneutic circle (no, they don't use this phrase!) haunts AI consciousness claims. Our theories of mind are built on pre-theoretical experience of consciousness. And yet companies insist they can replicate what they can't even define independently of that experience.
techpolicypress.bsky.social
Some AI enthusiasts fantasize about chatbots' potential future suffering. But David McNeill and Emily Tucker say there are many good reasons to reject the claim that contemporary AI research is on its way toward creating genuinely intelligent, much less conscious, machines.
Suffering is Real. AI Consciousness is Not. | TechPolicy.Press
Probabilistic generalizations based on internet content are not steps toward algorithmic moral personhood, write David McNeill and Emily Tucker.
buff.ly

Reposted by Evan Selinger

gaiabernstein.bsky.social
Last opportunity to register for Seton Hall Law School's AI Companions online symposium tomorrow, Tuesday, Feb. 18, 12:00–2:30 pm EST. You can register here: bit.ly/40Ztl2j

evanselinger.bsky.social
Growing up in the 80s makes me a sucker for underdog stories. I loved reliving Karate Kid vibes with Cobra Kai!

Question—

Does celebrating beating the odds risk minimizing how stacked the deck is?

Or is that view overblown b/c life poses many challenges, and we need many inspirational stories?

Reposted by Evan Selinger

frankpasquale.bsky.social
“Brain capacity is also being squeezed. Our mental lives are more fragmented and scattered than ever before”
www.ft.com/content/c288...
The human mind is in a recession
Technology strains our brain health, capacity and skills
www.ft.com

evanselinger.bsky.social
Narrating counterfactuals is necessary to make the invisible legible. Tragically, though, I suspect many will find such stories too abstract and hypothetical to resonate. When people are hurting, it's hard to point out that things could have been worse, and much is taken for granted.
juliaangwin.com
Government’s wins are often invisible: Systems that avoid plane crashes; alliances that avert war; surveillance that prevents pandemics.

Government wins are often *the avoidance of loss.*

So how do we tell the story of the destruction of government? The story of future losses *not* averted?

evanselinger.bsky.social
It’s hard for some to appreciate this because, tragically, they only associate governance with one thing: a scolding headshake.

“The second fallacy we’ve heard is that AI requires a tradeoff – between safety and progress, between competition and collaboration, and between rights and innovation.”
justinhendrix.bsky.social
At the Paris AI Action Summit, Dr. Alondra Nelson was an invited speaker at a private dinner at the Elysée Palace hosted by French President Emmanuel Macron. Here are her remarks on “three fundamental misconceptions in the way we think about artificial intelligence.”
Three Fallacies: Alondra Nelson's Remarks at the Elysée Palace on the Occasion of the AI Action Summit | TechPolicy.Press
Dr. Nelson was an invited speaker at a dinner hosted by French President Emmanuel Macron at the Palais de l'Élysée on February 10, 2025.
www.techpolicy.press

evanselinger.bsky.social
Ha! But is that idea—that everything can be explained with the right mathematical take—the same perspective being advocated for here and also explicitly linked to ketamine epiphanies?

evanselinger.bsky.social
with his take, his response was basically, “Well, I guess you don’t get math.”

evanselinger.bsky.social
revolved around the idea—which he got from Deleuze—that you could explain all kinds of social phenomena through the lens of structures like soap bubble formation. I never really understood it. But he was clear to us that experiences with ketamine helped unlock his key insights. And if you disagreed

evanselinger.bsky.social
Can you explain the ketamine part to me? When I was a grad student a billion years ago, I made a couple of trips with some Danish friends to visit Manuel DeLanda. Not sure if you’ve heard of him but DeLanda was a self-taught Deleuze scholar who wrote a bunch of weird and interesting books. They
