The Distributed AI Research (DAIR) Institute
@dairinstitute.bsky.social
3.6K followers 110 following 82 posts
AI is not inevitable. We DAIR to imagine, build & use AI deliberately. Website: http://dair-institute.org Mastodon: @[email protected] LinkedIn: https://www.linkedin.com/company/dair-institute/
Posts Media Videos Starter Packs
Pinned
dairinstitute.bsky.social
🎉 Congratulations to our own @milamiceli.bsky.social for being on TIME's 100 list of "innovators, leaders, & thinkers reshaping our world through groundbreaking advances in" AI.

Mila is recognized for her innovative Data Workers' Inquiry, a worker-led investigation of the impact of data work. 🧵
A headshot of Mila with the text "TIME 100/AI"
Reposted by The Distributed AI Research (DAIR) Institute
braiduk.bsky.social
Congratulations to artists Identity 2.0, selected as the first guest zine to be included in the @dairinstitute.bsky.social Zine Library. 'Conversations about resistance and generative AI' can be downloaded here: zines.dair-institute.org/ai-z Image courtesy of the DAIR website.
Reposted by The Distributed AI Research (DAIR) Institute
dylnbkr.bsky.social
Saw this article linked by @wonkish.bsky.social in the replies here and wanted to expand on it a little, because this really does feel like a place where a tech solution *is* warranted to help address a major problem. 🧵

www.theverge.com/2024/8/21/24...
A screenshot of the title of an article from The Verge. It reads:

This system can sort real pictures from AI fakes — why aren’t platforms using it?
Big tech companies are backing the C2PA’s authentication standard, but they’re taking too long to put it to use.

by Jess Weatherbed
Aug 21, 2024, 6:00 AM PDT
A screenshot from the linked The Verge article. It reads:

Step one: the industry adopts a standard
A body like C2PA develops an authentication and attribution standard.
Parties across photography, content hosting, and image editing industries agree to the standard.
Step two: creators add credentials
Camera hardware makers offer to embed the credentials.
Editing apps offer to embed the credentials.
Both hardware and software solutions work in tandem to ensure creators can confirm the origins of an image and how / if it’s been altered during edits.
Step three: platforms and viewers check credentials
Online platforms scan for image credentials and visibly flag key information to their users.
Viewers can also access a database to independently check if an image carries credentials.
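For readers who want to poke at step three themselves, here is a minimal Python sketch, assuming the common case where C2PA Content Credentials are embedded in JPEG files as JUMBF boxes carried in APP11 marker segments. The helper name and script usage are illustrative, not from the article. It only detects whether such a segment is present at all; real verification means parsing the manifest and checking its cryptographic signature chain with a full C2PA implementation, such as the open-source C2PA SDKs.

import struct
import sys

def has_app11_segment(path: str) -> bool:
    # Scan JPEG marker segments for APP11 (0xFFEB), the segment type
    # C2PA uses to carry JUMBF-boxed manifests. Presence is only a
    # hint that Content Credentials may be attached, not a verification.
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":          # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:              # lost sync with marker structure
            break
        marker = data[i + 1]
        if marker == 0xDA:               # SOS: entropy-coded data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xEB:               # APP11: candidate C2PA manifest
            return True
        i += 2 + length                  # marker bytes + segment length
    return False

if __name__ == "__main__":
    print(has_app11_segment(sys.argv[1]))

Run it as: python check_c2pa.py photo.jpg. A True result only suggests credentials may be embedded; whether they are valid, and who signed them, still has to be checked against the C2PA standard.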
Reposted by The Distributed AI Research (DAIR) Institute
hypervisible.blacksky.app
“…that Sora is being used for stalking and harassment will likely not be an edge case, because deepfaking yourself and others into videos is one of its core selling points.”

Far from an edge case, it’s the primary use case.
Stalker Already Using OpenAI's Sora 2 to Harass Victim
A journalist claims that her stalker used Sora 2, the latest video app from OpenAI, to churn out videos of her.
futurism.com
Reposted by The Distributed AI Research (DAIR) Institute
foxrainbow.bsky.social
My question is in there! I like the answer.
dairinstitute.bsky.social
Happy 3rd birthday to the Mystery AI Hype Theater 3000 podcast! 🎂

To celebrate, @emilymbender.bsky.social and @alexhanna.bsky.social recorded a special episode to answer listener questions about the show, & reflect on the past & future of AI hype.

Listen here: www.buzzsprout.com/2126417/epis...
Reposted by The Distributed AI Research (DAIR) Institute
cgsunit.bsky.social
Today's the day for my anti-AI zine volume 2: "Human Perspectives on the Latest AI Hype Cycle" 🎉

Enjoy the fruits of my focus these past few months and learn from many great people!

Scanned zine to print your own and the full text and references are available at padlet.com/laurenUU/antiAI
Front and back cover of the Zine sitting among Japanese maple leaves. Front cover has the title "Human Perspectives on the Latest AI Hype Cycle" with subtitle "AI Sucks and You Should Not Use It, Volume 2"
along with the date of October 2025 and author Lauren Woolsey.

Back cover has the text "References available on the back of this unfolded sheet and at padlet.com/laurenUU/antiAI" along with a QR code to that link. Then it has the text "Share with a friend, light the world! Connect w/ me: @cgsunit.bsky.social"
Pages 2 and 3 of the Zine, open among tree leaves.

Page 2 starts with handwritten "First...some backstory!" and then the text reads as follows: "Volume 1 of this zine (June 2025) is called “Why GenAI Sucks and you should not use it.” I gave copies to my friends, did swaps at Grand Rapids Zine Fest, and shared the digital scan with hundreds of folks. It’s been great to connect with a community of humans who also think AI sucks! Since June, more great folks have added to the conversation. Let me introduce a few here..."

Page 3 is titled Anthony Moser and has the following text: "“I am an AI hater. This is considered rude, but I do not care, because I am a hater.” So opens this most excellent essay (posted August 2025). You absolutely need to read it. Also, it has 24 linked resources, if my Zine v1.1 list wasn’t enough to get you started being a hater."
Pages 4 and 5 of the Zine, open among tree leaves.

Page 4 is titled Olivia Guest and has the text: "1. Look at Guest’s incredible collection promoting Critical AI Literacy (CAIL): olivia.science/ai . 2. Discover a framework to define AI in “What Does 'Human-Centred AI' Mean?” (July 2025). 3. Share with educator friends Guest et al: “Against the Uncritical Adoption of 'AI' Technologies in Academia” (September 2025). Such a helpful paper for advocacy!"

Page 5 is titled Ali Alkhatib and has the following text: "“AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power.” -from his essay Defining AI. Ali is on my recent radar because he’s starting “AI Skeptics Reading Group” the same month that this Zine launches (October 2025)! If you're a reader, check out the book list on p. 7 here!"
Pages 6 and 7 of the Zine, in partial shadow from tree leaves and surrounded by Japanese maple leaves.

Page 6 is titled Distributed AI Research (DAIR) Institute and has the text: "Great projects DAIR supports: Data Workers Inquiry (work led by Dr. Milagros Miceli), Mystery AI Hype Theater 3000 (by E. Bender and A. Hanna), Possible Futures workshop and Zine series. Timnit Gebru is founder and executive director of DAIR and co-author of the “TESCREAL Bundle” research paper. (Read it!)"

Page 7 is titled Further Reading and has a drawn stack of books with the following titles and publication months: Resisting AI (08/22), Blood in the Machine (09/23), The AI Mirror (06/24), Taming Silicon Valley (09/24), Why We Fear AI (03/25), More Everything Forever (04/25), The AI Con (05/25), Empire of AI (05/25). There are notes for The AI Con that the authors run the podcast mentioned on page 6 and that it is the book that the Reading Group from page 5 started on 10/13/25. The page ends with the text "Authors and full titles in reference list!" and a signature from Lauren "Double U."
Reposted by The Distributed AI Research (DAIR) Institute
olivia.science
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/
Cover page, Table 1, and Table 2 of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1
dairinstitute.bsky.social
Today marks 12 years since 369 Eritrean refugees drowned off Lampedusa. As our researcher and refugee advocate Meron Estefanos wrote, one of them was 22 years old and was giving birth as she drowned. Her body was later found with the umbilical cord still attached to her baby.
Reposted by The Distributed AI Research (DAIR) Institute
humanityexists.bsky.social
I highly recommend following ALL of ’em, critical voices of our time…✌🏼
dairinstitute.bsky.social
Happy 3rd birthday to the Mystery AI Hype Theater 3000 podcast! 🎂

To celebrate, @emilymbender.bsky.social and @alexhanna.bsky.social recorded a special episode to answer listener questions about the show, & reflect on the past & future of AI hype.

Listen here: www.buzzsprout.com/2126417/epis...
Reposted by The Distributed AI Research (DAIR) Institute
tjheffernan.bsky.social
Infected by AI...we need immunity to this BS. Not a "future" for anyone.
dairinstitute.bsky.social
Happy 3rd birthday to the Mystery AI Hype Theater 3000 podcast! 🎂

To celebrate, @emilymbender.bsky.social and @alexhanna.bsky.social recorded a special episode to answer listener questions about the show, & reflect on the past & future of AI hype.

Listen here: www.buzzsprout.com/2126417/epis...
dairinstitute.bsky.social
Happy 3rd birthday to the Mystery AI Hype Theater 3000 podcast! 🎂

To celebrate, @emilymbender.bsky.social and @alexhanna.bsky.social recorded a special episode to answer listener questions about the show, & reflect on the past & future of AI hype.

Listen here: www.buzzsprout.com/2126417/epis...
Reposted by The Distributed AI Research (DAIR) Institute
emilymbender.bsky.social
As a birthday bonus, here’s one more audiogram for Mystery AI Hype Theater 3000 Ep 63

www.buzzsprout.com/2126417/epis...

w/ @alexhanna.bsky.social
production by Ozzy Llinas Goodman
Reposted by The Distributed AI Research (DAIR) Institute
petertarras.bsky.social
If the current 'AI' hype seems ideologically motivated to you, it's because there's a whole bundle of ideologies operating in the background. Timnit Gebru @timnitgebru.bsky.social and Émile Torres @xriskology.bsky.social call it the 'TESCREAL bundle'. You can read their fundamental paper here:
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday
firstmonday.org
Reposted by The Distributed AI Research (DAIR) Institute
talesofanalfa.bsky.social
History repeats itself. Nothing ever changes.
dairinstitute.bsky.social
"“AI isn’t magic; it’s a pyramid scheme of human labor,” said @adiod.bsky.social , a researcher at the Distributed AI Research Institute based in Bremen, Germany. “These raters are the middle rung: invisible, essential and expendable.”

www.theguardian.com/technology/2...
How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart
Contracted AI raters describe grueling deadlines, poor pay and opacity around work to make chatbots intelligent
www.theguardian.com
Reposted by The Distributed AI Research (DAIR) Institute
ucdavislaw.bsky.social
Join the Center for Innovation, Law, and Society (CILS) for a talk with Dr. @timnitgebru.bsky.social about the history of the Artificial General Intelligence (AGI) movement and its link to the 20th-century eugenics movement.

This in-person event will also be livestreamed: bit.ly/42kxaQV

#AGI #AI
Reposted by The Distributed AI Research (DAIR) Institute
jasonbell.bsky.social
If you really knew how your AI was being designed, filtered and trained.... you wouldn't want to use it.

This has been going on for years with the large tech companies, not just in AI.
dairinstitute.bsky.social
"“AI isn’t magic; it’s a pyramid scheme of human labor,” said @adiod.bsky.social , a researcher at the Distributed AI Research Institute based in Bremen, Germany. “These raters are the middle rung: invisible, essential and expendable.”

www.theguardian.com/technology/2...
How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart
Contracted AI raters describe grueling deadlines, poor pay and opacity around work to make chatbots intelligent
www.theguardian.com
Reposted by The Distributed AI Research (DAIR) Institute
crackedwindscreen.bsky.social
Want an example of that? The autonomous vehicle industry, which has somehow conned people into believing that teleops is suitable (which is insane if you actually think about it), and it kicks in every mile or so.
dairinstitute.bsky.social
"“AI isn’t magic; it’s a pyramid scheme of human labor,” said @adiod.bsky.social , a researcher at the Distributed AI Research Institute based in Bremen, Germany. “These raters are the middle rung: invisible, essential and expendable.”

www.theguardian.com/technology/2...
How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart
Contracted AI raters describe grueling deadlines, poor pay and opacity around work to make chatbots intelligent
www.theguardian.com
dairinstitute.bsky.social
Our own Kathleen Siminyu writes: "most parallel data [...] was from the religious domain, either translations of the Bible or other religious texts [...] done by religious organizations whose primary aim was evangelization, both in pre-colonial & post-colonial times."
akademie.dw.com/en/mind-the-...
Mind the gap: Building inclusive AI for African languages
Poor quality and domain bias in training data hinders language tool development. That's why language diversity must be a priority, says Kathleen Siminyu from the Distributed Artificial Intelligence Re...
akademie.dw.com