Megan McIntyre
@rcmeg.bsky.social
Director, Program in Rhet/Comp, U of Arkansas
English prof
Writing about #WPALife and writing pedagogy
Loves dogs

Before: Sonoma State English & Dartmouth Institute for Writing & Rhetoric
(views only ever mine, obv)
she/her
Pinned
I'm genuinely begging folks to read both @timnitgebru.bsky.social and Torres' "TESCREAL Bundle" and @adambecker.bsky.social's _More Everything Forever_ and connect these billionaires' eugenicist dreams to the GenAI products they're pushing onto every educational institution, K-college.
Reposted by Megan McIntyre
"Maybe all we do is frustrate," Star said. "Give someone one more day with their daughter or their dad.

"But if you lost your dad, or you lost your daughter, you would give anything to have one more day with them. And that’s what we can do.”

My latest for Slate
Activists Are Fighting ICE Even Though It Could Get Them Killed. Here’s Why.
ICE is still trying to subjugate Minneapolis. It's still failing.
slate.com
February 11, 2026 at 3:18 PM
Reposted by Megan McIntyre
As I’ve been saying, by the end of the year, you can expect that ChatGPT will be selling a product that lets people generate child sexual abuse material on demand. They will then say “oops” about it violating their TOS, and repeat until everyone caves. Because their leadership is largely ex-Meta.
OpenAI fired one of its top safety execs, on the grounds of sexual discrimination, after she voiced opposition to the controversial rollout of AI erotica in its ChatGPT product.

OpenAI told her the termination was related to her sexual discrimination against a male colleague.

www.wsj.com/tech/ai/open...
Exclusive | OpenAI Executive Who Opposed ‘Adult Mode’ Fired for Sexual Discrimination
Ryan Beiermeister, who served as the vice president leading OpenAI’s product policy team, had raised concerns about the upcoming launch of erotic content.
www.wsj.com
February 11, 2026 at 10:31 PM
Reposted by Megan McIntyre
It hurts my heart to see how many people think (demonstrated in their use of generative AI) the only value in writing is producing an end product.

I write foremost because the process of writing teaches me a great deal, much of which is never represented in a single word of the final piece.
February 11, 2026 at 11:06 PM
Reposted by Megan McIntyre
what makes something "vulnerable to AI" is not the capacity of machines but the credulity of management.
Of course, it is also true that historians' jobs may in practice be vulnerable to AI, because a lot of people who control the money for historian jobs probably haven’t thought much about where history comes from, either.
February 11, 2026 at 8:39 PM
Reposted by Megan McIntyre
I saw this take before I had to run and teach today, so I've had 4 hrs to build a head of steam about it. Still, I'm gonna try to be nice. All due respect, I expect better from people who have all the resources in the world to research a thing AND a giant megaphone with which to share knowledge. 1/
I understand why people are exhausted by AI hype, and why those of us squarely in the corner of "human dignity über alles" see AI doomerism as self-serving hype, but I *really* think people on the left broadly need to start thinking seriously about the possibility of the hype being...true.
February 11, 2026 at 10:19 PM
Reposted by Megan McIntyre
if AI is so amazing, why can’t its supporters point to one real actual benefit to society it offers?
February 11, 2026 at 8:11 PM
Reposted by Megan McIntyre
No one has seriously said LLMs aren’t important or that AI is categorically junk.

Some of us have said that there is something bigger than tech. It’s called power — governance, civic norms, etc — & refusal is absolutely part of how we think soberly about that power. Who has it & how they use it.
February 11, 2026 at 5:53 PM
Reposted by Megan McIntyre
I don’t really pay attention to charges of doomerism. I don’t know what it means offline.

I do know that refusing a version of how the future will unfold forecloses on the power that actually shapes that future. That’s not disavowing that tech changes are happening, but they are not a given.
I understand why people are exhausted by AI hype, and why those of us squarely in the corner of "human dignity über alles" see AI doomerism as self-serving hype, but I *really* think people on the left broadly need to start thinking seriously about the possibility of the hype being...true.
February 11, 2026 at 5:51 PM
Reposted by Megan McIntyre
I am exhausted by AI hype. AI doomerism is 100% more AI hype.

A *lot* of harm is being done in the name of AI.

If that harm is too hard to keep looking at, then you get this kind of nonsense:
I understand why people are exhausted by AI hype, and why those of us squarely in the corner of "human dignity über alles" see AI doomerism as self-serving hype, but I *really* think people on the left broadly need to start thinking seriously about the possibility of the hype being...true.
February 11, 2026 at 6:40 PM
Reposted by Megan McIntyre
No. What people on the left need to see is journalists informing the public rather than performing press release as a service.
I understand why people are exhausted by AI hype, and why those of us squarely in the corner of "human dignity über alles" see AI doomerism as self-serving hype, but I *really* think people on the left broadly need to start thinking seriously about the possibility of the hype being...true.
February 11, 2026 at 6:28 PM
Reposted by Megan McIntyre
My initial impression of this list is it demonstrates two things:
1. The continued overselling of AI and grandiose wishcasting about what it is/will be capable of doing
2. The wild misunderstanding of what these jobs actually do, by smug techbros who don't care about what they don't know
Microsoft released a study showing the 40 jobs most at risk by AI:
February 11, 2026 at 2:14 PM
Reposted by Megan McIntyre
every single one of these articles.
February 11, 2026 at 2:19 PM
Reposted by Megan McIntyre
A point I make in my work on this subject is that we need to reject the insistence that convos abt commercial AI products should be about the tech. Determining whether AI is “conscious” is a Smartwashing exercise in solipsism that distracts from the material conditions of the product’s distribution.
February 11, 2026 at 2:07 PM
Reposted by Megan McIntyre
Ring is a wildly dangerous company. Always has been. But it has sort of flown under the radar the last couple years as it tried to soften its image. Make no mistake that this is an extremely dangerous surveillance dragnet:

www.404media.co/with-ring-am...
With Ring, American Consumers Built a Surveillance Dragnet
Ring's 'Search Party' is dystopian surveillance accelerationism.
www.404media.co
February 10, 2026 at 3:09 PM
Reposted by Megan McIntyre
This article immediately labels @emilymbender.bsky.social and me as "curmudgeons" because we don't think that the spicy autocomplete has a concept of the self.
Experiments conducted with the A.I. system Claude are producing fascinating results—and raising questions about the nature of selfhood. Gideon Lewis-Kraus reports from inside the company that designed it, Anthropic. newyorkermag.visitlink.me/rOfXjg
February 11, 2026 at 3:01 PM
Reposted by Megan McIntyre
My god.
I keep rereading this paragraph expecting the words to change
February 11, 2026 at 12:24 AM
Reposted by Megan McIntyre
The site Realfood.gov uses Elon Musk's Grok chatbot to dispense nutrition information—some of which contradicts the government’s new guidelines. www.wired.com/story/rfk-jr...
RFK Jr. Says Americans Need More Protein. His Grok-Powered Food Website Disagrees
www.wired.com
February 10, 2026 at 8:36 PM
Reposted by Megan McIntyre
This is because a statistical model of word frequency is not a useful tool for modeling the complex interactions of the human body
⚠️ Despite all the hype, chatbots still make terrible doctors. Out today is the largest user study of language models for medical self-diagnosis. We found that chatbots provide inaccurate and inconsistent answers, and that people are better off using online searches or their own judgment.
February 10, 2026 at 2:27 PM
Reposted by Megan McIntyre
⚠️ Despite all the hype, chatbots still make terrible doctors. Out today is the largest user study of language models for medical self-diagnosis. We found that chatbots provide inaccurate and inconsistent answers, and that people are better off using online searches or their own judgment.
February 9, 2026 at 5:08 PM
Reposted by Megan McIntyre
Tech deployed at the border “includes everything from hyper-visible tethered aerostats — massive blimp-like detection platforms hovering thousands of feet over the desert — to stealthy devices like unattended ground sensors to detect footsteps, and license plate scanners disguised as traffic cones.”
How AI Surveillance Tech is Creeping From the Southern Border Into the Rest of the Country
Surveillance technology has long been part of policing the border. ICE’s growing raids are bringing it to many other areas.
www.themarshallproject.org
February 7, 2026 at 5:28 PM
Reposted by Megan McIntyre
‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI
Women in rural communities describe trauma of moderating violent and pornographic content for global tech companies
www.theguardian.com
February 5, 2026 at 11:48 AM
Reposted by Megan McIntyre
Last night we had 2 ICE agents get out of a car, pepper spray legal observers, shout something to the effect of "I'm fucking ICE motherfucker" and drive away.
ICE is being very aggressive in arresting observers this afternoon.
February 6, 2026 at 10:23 PM
Reposted by Megan McIntyre
Every instance of "AI democratizes the arts, you're classist and ableist and a gatekeeper for trying to stop it" is a slap in the face to the literally centuries of poor, disabled people making art on the margins and a crass lie in service of a machine that strips down and regurgitates dreams
February 6, 2026 at 5:36 PM
Reposted by Megan McIntyre
Chatbots offer a magnificent bribe - a fast, frictionless route to information that bypasses the discomfort of learning. In this op-ed, we describe this as a Faustian bargain, in which we trade away what it means to be human & universities trade away their value
www.irishexaminer.com/opinion/comm...
Learning is complex, messy, emotional: AI can’t replicate that
ChatGPT and other AI tools may seem irresistible. But educators should beware, as they could end up trading away the thing that gives them value — the rich experience of slow learning
www.irishexaminer.com
February 5, 2026 at 6:24 PM
Reposted by Megan McIntyre
NEW: Mobile Fortify is not designed to "verify" identity, as DHS claims, and it was only approved after DHS rewrote its privacy review rules, records reviewed by @wired.com show. @dell.bsky.social, @regret.bsky.social & @hudsongiles.bsky.social w/scoops. No paywall. www.wired.com/story/cbp-ic...
ICE and CBP’s Face-Recognition App Can’t Actually Verify Who People Are
ICE has used Mobile Fortify to identify immigrants and citizens alike over 100,000 times, by one estimate. It wasn't built to work like that—and only got approved after DHS abandoned its own privacy r...
www.wired.com
February 5, 2026 at 8:30 PM