Dani Shanley
@danishanley.bsky.social
240 followers 200 following 75 posts
assistant prof in philosophy @ maastricht university. thinking about ethics and politics (human factors) of new tech. critical of the hype and critical of the critics.
Reposted by Dani Shanley
estherschindler.bsky.social
I just saw someone use the abbreviation “AI;DR” and I’ll be laughing for a while.
Reposted by Dani Shanley
melhogan.bsky.social
Ongoing CFP:

Heliotrope is seeking submissions on a rolling basis. This is a space for short think-&-feel pieces... www.heliotropejournal.net/editors-notes
Editors' notes — HELIOTROPE
www.heliotropejournal.net
Reposted by Dani Shanley
abeba.bsky.social
this!!!
okwonga.bsky.social
Everything I have learned about [insert name of far-right US media pundit], I have learned entirely against my will.
Reposted by Dani Shanley
techpolicypress.bsky.social
While there is value to foresight and anticipating risks, writes Tech Policy Press fellow
@eryk.bsky.social, could the language used by AI risk communities to describe the technology contribute to the very problems it aims to curb?
The AI Safety Debate Needs AI Skeptics | TechPolicy.Press
The language used by AI risk communities to describe the technology may contribute to the very problems it aims to curb, Eryk Salvaggio writes.
www.techpolicy.press
Reposted by Dani Shanley
olgacronin.bsky.social
Here's a video by the Max Planck Institute for Security and Privacy explaining how Chat Control surveillance would work...

fair.tube/w/72DCPMByyS...

#speirgorm
Reposted by Dani Shanley
verybadllama.bsky.social
every day I wake up and read a headline that sounds like something a concussed writer for The Simpsons would come up with while trying to make an intentionally stupid joke about George Orwell
Reposted by Dani Shanley
edzitron.com
Oh god, they actually put in the newspaper that I got mad
nuclearpidgeon.bsky.social
Two copies acquired and distributed to the lunch tables of my corporate tech job office 🙂
danishanley.bsky.social
I agree. it's the only job that corresponds with my dress sense, other than children's tv presenter.
Reposted by Dani Shanley
cwebbonline.com
Speaking of voices under attack…can we be just as loud about Black journalists and comedians who are silenced, even by our so-called liberal outlets.

• Joy Reid
• Don Lemon
• Melissa Harris-Perry
• Tiffany Cross
• Jemele Hill
• Marc Lamont Hill
• Karen Attiah
• Amber Ruffin
Reposted by Dani Shanley
mekka.mekka-tech.com
I like Jimmy Kimmel. I really do. I have no issues with him.

I'm talking about something very different.

Kimmel was only suspended. But he's back now. Karen Attiah and all the other Black journalists are still fired.

Y'all do know how to fight, when you want to.

But you don't really want to. 🤷🏿‍♂️
Reposted by Dani Shanley
abeba.bsky.social
I was part of a working group on AI and Fraternity assembled by the Vatican. We met in Rome and worked on this over two days. I am happy to share the result of that intense effort: a Declaration we presented to the Pope and other government authorities

coexistence.global
In this spirit of fraternity, hope and caution, we call upon your leadership to uphold the following principles and red lines to foster dialogue and reflection on how AI can best serve our entire human family:

    Human life and dignity: AI must never be developed or used in ways that threaten, diminish, or disqualify human life, dignity, or fundamental rights. Human intelligence – our capacity for wisdom, moral reasoning, and orientation toward truth and beauty – must never be devalued by artificial processing, however sophisticated. 

    AI must be used as a tool, not an authority: AI must remain under human control. Building uncontrollable systems or over-delegating decisions is morally unacceptable and must be legally prohibited. Therefore, development of superintelligent AI technologies (as mentioned above) should not be allowed until there is broad scientific consensus that it will be done safely and controllably, and there is clear and broad public consent.

    Accountability: only humans have moral and legal agency and AI systems are and must remain legal objects, never subjects. Responsibility and liability reside with developers, vendors, companies, deployers, users, institutes, and governments. AI cannot be granted legal personhood or “rights”. 

    Life-and-death decisions: AI systems must never be allowed to make life-or-death decisions, especially in military applications during armed conflict or peacetime, law enforcement, border control, healthcare or judicial decisions.

    Independent testing and adequate risk assessment must be required before deployment and throughout the entire lifecycle.

    Stewardship: Governments, corporations, and anyone else should not weaponize AI for any kind of domination, illegal wars of aggression, coercion, manipulation, social scoring, or unwarranted mass surveillance.

    Responsible design: AI should be designed and independently evaluated to avoid unintentional and catastrophic effects on humans and society, for example through design giving rise to deception, delusion, addiction, or loss of autonomy.  

    No AI monopoly: the benefits of AI – economic, medical, scientific, social – should not be monopolized. 

    No human devaluation: design and deployment of AI should make humans flourish in their chosen pursuits, not render humanity redundant, disenfranchised, devalued or replaceable.

    Ecological responsibility: our use of AI must not endanger our planet and ecosystems. Its vast demands for energy, water, and rare minerals must be managed responsibly and sustainably across the whole supply chain.

    No irresponsible global competition: We must avoid an irresponsible race between corporations and countries towards ever more powerful AI.
Reposted by Dani Shanley
ninamarkl.bsky.social
apropos of recent posts, I was invited to give a talk to my faculty about "social impacts of AI" on Monday and I have decided to stop being coy about it
screenshot of powerpoint title slide: "Saying "No": 10 things I hate about GenAI (and you should too)"
Reposted by Dani Shanley
miriamposner.com
FWIW, I’ve read a lot of corporate materials on AI for research and I’ve noticed this, too. It may not seem like a big deal that AI sometimes gets stuff wrong, as long as it’s mostly right—but in fact it throws every product into doubt and is therefore close to useless.
Reposted by Dani Shanley
bcmerchant.bsky.social
If you're in London, check out Breaking the (G)loom, the only event that pokes at (p)doom and references the original luddites all in one title. *Also* wish I could make this one

luma.com/9ddl2shi
Reposted by Dani Shanley
bcmerchant.bsky.social
If you're in New York, join the Silicon Valley tech-rejecting youth, a Palantir whistleblower, activists, writers and critics as they take to the Highline to make some noise.

Really wish I could make this.
Reposted by Dani Shanley
bcmerchant.bsky.social
Protests, mass meetups, conferences—a youth-led movement is reclaiming the Luddite mantle, rejecting a future dominated by Silicon Valley companies, toxic apps, and generative AI.

This fall, a "Luddite renaissance" is in full swing.

www.bloodinthemachine.com/p/the-luddit...
The Luddite Renaissance is in full swing
This fall, the new luddites are rising
www.bloodinthemachine.com
Reposted by Dani Shanley
parismarx.com
my friend @nastasiahadjadji.bsky.social has a new book out on techno-fascism. if you’re a french speaker, you should definitely check it out!
apocalypse nerds by nastasia hadjadji and olivier tesquet
danishanley.bsky.social
Some thoughts on seeing through/beyond the hype surrounding synthetic data. Delighted to share them via @adalovelaceinst.bsky.social! 🙏
Reposted by Dani Shanley
marielza.bsky.social
"If so-called ‘real’ data is never objective — but rather the product of subjective decisions about what to measure, when to collect, whom to include and how to categorise – synthetic data, voided of any concrete referents, makes this subjectivity more concentrated and less visible."
Reposted by Dani Shanley
jackstilgoe.bsky.social
With a nod to our @hypestudies.bsky.social meeting last week in Barcelona. Thanks Eryk.
eryk.bsky.social
AI hype overlaps with the belief that people don’t matter and that politics needs to be “solved” for government to be efficient. In the ongoing failure of US democracy to solve problems, AI hype shadowed the rise of authoritarian anti-politics. www.techpolicy.press/future-fatig...
Future Fatigue: How Hype has Replaced Hope in the 21st Century | TechPolicy.Press
To resist AI hype, we must not reassert the fiction that we could “return” to a functional democracy, writes Eryk Salvaggio.
www.techpolicy.press