Lauren Woolsey
@cgsunit.bsky.social
480 followers 410 following 500 posts
Earthling, she/her, teacher of science, reader of books, player of board games, and more. Avatar by Corinne Roberts :)
Pinned
cgsunit.bsky.social
Today's the day for my anti-AI zine volume 2: "Human Perspectives on the Latest AI Hype Cycle" 🎉

Enjoy the fruits of my focus these past few months and learn from many great people!

Scanned zine to print your own and the full text and references are available at padlet.com/laurenUU/antiAI
Front and back cover of the Zine sitting among Japanese maple leaves. Front cover has the title "Human Perspectives on the Latest AI Hype Cycle" with subtitle "AI Sucks and You Should Not Use It, Volume 2" along with the date of October 2025 and author Lauren Woolsey.

Back cover has the text "References available on the back of this unfolded sheet and at padlet.com/laurenUU/antiAI" along with a QR code to that link. Then it has the text "Share with a friend, light the world! Connect w/ me: @cgsunit.bsky.social"

Pages 2 and 3 of the Zine, open among tree leaves.

Page 2 starts with handwritten "First...some backstory!" and then the text reads as follows: "Volume 1 of this zine (June 2025) is called “Why GenAI Sucks and you should not use it.” I gave copies to my friends, did swaps at Grand Rapids Zine Fest, and shared the digital scan with hundreds of folks. It’s been great to connect with a community of humans who also think AI sucks! Since June, more great folks have added to the conversation. Let me introduce a few here..."

Page 3 is titled Anthony Moser and has the following text: "“I am an AI hater. This is considered rude, but I do not care, because I am a hater.” So opens this most excellent essay (posted August 2025). You absolutely need to read it. Also, it has 24 linked resources, if my Zine v1.1 list wasn’t enough to get you started being a hater."

Pages 4 and 5 of the Zine, open among tree leaves.

Page 4 is titled Olivia Guest and has the text: "1. Look at Guest’s incredible collection promoting Critical AI Literacy (CAIL): olivia.science/ai . 2. Discover a framework to define AI in “What Does 'Human-Centred AI' Mean?” (July 2025). 3. Share with educator friends Guest et al: “Against the Uncritical Adoption of 'AI' Technologies in Academia” (September 2025). Such a helpful paper for advocacy!"

Page 5 is titled Ali Alkhatib and has the following text: "“AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power.” -from his essay Defining AI. Ali is on my recent radar because he’s starting “AI Skeptics Reading Group” the same month that this Zine launches (October 2025)! If you're a reader, check out the book list on p. 7 here!"

Pages 6 and 7 of the Zine, in partial shadow from tree leaves and surrounded by Japanese maple leaves.

Page 6 is titled Distributed AI Research (DAIR) Institute and has the text: "Great projects DAIR supports: Data Workers Inquiry (work led by Dr. Milagros Miceli), Mystery AI Hype Theater 3000 (by E. Bender and A. Hanna), Possible Futures workshop and Zine series. Timnit Gebru is founder and executive director of DAIR and co-author of the “TESCREAL Bundle” research paper. (Read it!)"

Page 7 is titled Further Reading and has a drawn stack of books with the following titles and publication months: Resisting AI (08/22), Blood in the Machine (09/23), The AI Mirror (06/24), Taming Silicon Valley (09/24), Why We Fear AI (03/25), More Everything Forever (04/25), The AI Con (05/25), Empire of AI (05/25). Notes for The AI Con indicate that its authors run the podcast mentioned on page 6 and that it is the book the Reading Group from page 5 started with on 10/13/25. The page ends with the text "Authors and full titles in reference list!" and a signature from Lauren "Double U."
cgsunit.bsky.social
I have appreciated Kurzgesagt for quite a long time now and link to several of their videos in my course materials for astronomy.

This newest video is so good and so clear, and I have joined their Patreon for a year because of this messaging. I highly recommend watching it.
kurzgesagt.org
AI-generated content is flooding the internet, and we're entering a new era of information overload. Watch our latest video to find out how AI slop affects the internet and why kurzgesagt videos will always remain human-made: https://kgs.link/AISlop
In the middle, there is an AI-generated picture of a duck. Below it, a human-illustrated duck looks at the other duck in shock, saying "Wtf is this?"
cgsunit.bsky.social
Zelda Williams made a statement that has such an important quote about AI in it (for sad context, it was in response to being inundated with AI slop of her dad):

"And for the love of EVERYTHING, stop calling it ‘the future,’ AI is just badly recycling and regurgitating the past to be re-consumed."
Reposted by Lauren Woolsey
evangreer.bsky.social
suspending my generalized agnosticism to pray for Dolly Parton
cgsunit.bsky.social
Ooh thanks for reminding me of "tarpitting 8-ball"
cgsunit.bsky.social
Related:
cassiewillson.bsky.social
there are so many great things about generative AI! here are some of my favorites <3
cgsunit.bsky.social
lol damn I felt this in my bones it hit so hard
ohlookbirdies.bsky.social
"anti-AI people should just learn more about it" motherfucker I know so much about it, that's why I'm anti-AI
Reposted by Lauren Woolsey
prisonculture.bsky.social
We are here because policing in the U.S. has been this way for generations. This is why ICE acts with impunity. They know they can act this way.
cgsunit.bsky.social
They are caught up in an arms race of bullshit and it's absurd that there are no regulations that can effectively stop them.
moreperfectunion.bsky.social
Meta, OpenAI, and Oracle have collectively announced plans to spend more than $1 trillion on data centers in the next several years.
Reposted by Lauren Woolsey
hilaryagro.com
This is also why huge amounts of heavily funded propaganda is necessary in capitalism in the first place—commodifying, dominating, destroying & competing goes against our material interests, so people have to be brainwashed into dehumanizing each other & disconnecting from land to let it happen
cgsunit.bsky.social
I love to hear it! Thanks :)
cgsunit.bsky.social
Second half of my 3-minute opening statement for today's panel discussion.

In the panel, we each picked two papers to share on a handout for attendees and guide discussion. Mine were: zenodo.org/records/1706... and unesdoc.unesco.org/ark:/48223/p... (click through to read titles/authors please!)
Screenshot of text continuing from the previous post. It reads as follows: And many AI tools have fewer benefits and more harms. Let’s start with harms to the environment. Using AI means using data centers. Data centers evaporate water, spew pollutants, and use so much energy that people’s home electricity bills increase near them.
Beyond environmental harm, there are also social and mental harms. When a new drug comes out on the market, it has been well tested and regulated so that side effects are known and risks are minimal. AI has no such regulations, yet the side effects are extreme and the extent of the risks is unknown. A teen was encouraged into suicide; adults have been driven to complete psychosis. AI is not just disrupting learning, it is disrupting our society, and not in a good way. 
Last, we have hubris. I’ll end with part of the quote that Guest et al. open their paper with: “The culture of AI is imperialist and seeks to expand the kingdom of the machine. The AI community is well organized and well funded [...] AI scientists believe in their revolution; the old myths of tragic hubris don’t trouble them at all.” That quote is older than I am, from a past era of overhyped, unsustainable, and poorly-defined AI. Those were my original three words, by the way. I stand by them, but I felt my revision provides a better framework for reflection: hype, harm, and hubris. Thank you.
cgsunit.bsky.social
Besides this anti-AI zine for general audiences, I have been preparing all month for a panel this morning at my institution discussing whether AI was disrupting learning and what we as educators should or should not do. Here is my 3-minute prepared opening statement, in two parts (for alt text).
Screenshot of text that reads as follows: Hype, harm, and hubris. These words will guide this opening statement. How many years old is the term “artificial intelligence”? Think of your guess. The current frenzy of changes and marketing campaigns might have made you pick maybe 3 or 4 years. Well, the term is 70 years old and this is not the first AI hype cycle.
The risk of us as educators falling for Fear of Missing Out and hype is that we risk preparing students for a collapsing industry. Companies replace entry level jobs with AI tools, then data shows that people work slower and lose skills. Pilot tests are failing and AI adoption rates are trending downward in recent months. The speed of changes in AI should concern anyone hoping for a stable career path when the bubble is soon to burst.
Now for a brief analogy. Let’s say you paint the interiors of houses as part of your construction career. What if I told you that you could buy a certain type of paint that dries faster, is more durable, and is more moisture resistant than the paint you’re currently using? And suddenly, everyone you knew was talking about the benefits of this different paint and it seems like all the other contractors have switched to it. Well, here’s where we move from hype into harm. That list provided real, actual benefits of lead paint.
We’ve made laws against lead paint use in homes because the benefits do not outweigh the harms of lead. And many AI tools have fewer benefits and more harms. Let’s start with harms to the environment. Using AI...
cgsunit.bsky.social
never forget
you are a breathing
accident of chance
ample with reverberations
of the impossible
a bright buoyant moment
in the dark indifferent
stream of time
made to bask
in the rays of love
and blaze with
readiness for life

Poem/Divination by @mariapopova.bsky.social from An Almanac of Birds
A card being held up that has an authentic Audubon drawing of a Snake Bird and the words of the poem from the post typed across it, brought together like a magnetic poetry kit for one's fridge.
Reposted by Lauren Woolsey
xriskology.bsky.social
The Realtime Techpocalypse Newsletter is two months old! We've covered: the TESCREAL ideologies, Silicon Valley pro-extinctionism, the AGI race, AI slop and AI bloopers, etc. If these topics interest you, please subscribe, share, and recommend! :-)

www.realtimetechpocalypse.com/p/the-right-...
The Right Time for a Realtime Roundup!
Here are the fun topics we covered over the past two months!
www.realtimetechpocalypse.com
Reposted by Lauren Woolsey
noethematt.bsky.social
A great book relevant to this:

A Psalm for the Wild Built by Becky Chambers
josie.zone
It is morally wrong to want a computer to be sentient. If you owned a sentient thing, you would be a slaver. If you want sentient computers to exist, you just want to create a new kind of slavery. The ethics are as simple as that. Sorry if this offends
cgsunit.bsky.social
💯
cgsunit.bsky.social
Happy international observe the moon night! 💜
Close-up of the waxing gibbous moon. A photo of the oversaturated Moon and a small dot labeled "Saturn!"
cgsunit.bsky.social
You're welcome! Thank you for all you do in these areas, too! I have an event coming up on campus where I'm on a panel, and I'm using one of your papers with @olivia.science as one of my two submitted references (each panelist picked two)
Reposted by Lauren Woolsey
anthonymoser.com
but all of this is about finding messages that reach people who don't have a lot of context *to help them understand why they shouldn't use it*

harm reduction, in this context, is *use reduction*
cgsunit.bsky.social
How it "started": Productivity gains are coming (June 2025)
How it's going: AI Adoption Rate Trending Down for Large Companies (September 2025)

Left image: www.apolloacademy.com/productivity...
Right image: www.apolloacademy.com/ai-adoption-...
Chart titled "More companies are using AI: Productivity gains are coming". The graph plots the share of respondents (firms) answering yes to the question of whether they have used AI tools in the past two weeks, rising from 4% in September 2023 to 9% in May 2025.

Chart titled "AI adoption rates starting to decline for larger firms" with different lines showing AI adoption rate by firm size. The chart shows 3 to 5 percent in November 2023 at the start of the data set, and at least three of the seven lines show clear downward trends starting around June 2025. The lines end with August 2025 data between 7 and 12 percent.
cgsunit.bsky.social
Quoted post is from my reading thread on More Everything Forever, this and the one right before it in the thread are on Yudkowsky.

Other light AI reading would be my new zine in my pinned post, highlighting humans who have useful knowledge bases to keep us all grounded during this hype cycle :)
cgsunit.bsky.social
And damn is he an exhausting dude. He dropped out of school and decided to be the lone hero to save humanity, including writing HP fanfic about "rationality." Imagine if he had been swayed by a different book at 16. Or gotten therapy instead of quitting the socializing that happens in high school.