Brian Merchant
@bcmerchant.bsky.social
43K followers 1.4K following 2.6K posts
author of Blood in the Machine, tech writer, luddite newsletter: https://www.bloodinthemachine.com/ books: https://www.hachettebookgroup.com/contributor/brian-merchant/ kofi link: https://ko-fi.com/brianmerchant
Pinned
bcmerchant.bsky.social
Well hello all, and thanks for giving me a follow over here, and to the starter pack creators who kindly included me.

I'm a tech journalist, critic, and author of Blood in the Machine, a book + newsletter about breaking oppressive machines:

www.bloodinthemachine.com/p/one-year-o...
One year of Blood in the Machine (the book)
On a year of rehabilitating the luddites, resisting AI, and beginning to build a better future
www.bloodinthemachine.com
Reposted by Brian Merchant
susankayequinn.bsky.social
My recent adventures with AI:

—attended bookclub where BLOOD IN THE MACHINE author @bcmerchant.bsky.social zoomed in; I said I've been thinking about "What's my hammer?" (my answer to myself: "stories" and "copyright law")

1/n
Blood in the Machine by Brian Merchant
Reposted by Brian Merchant
alexvont.bsky.social
I agree with every word Zelda Williams says. And this at the end from OpenAI makes me want to go full Ned Ludd. Creators can’t have a blanket opt-out on copyright infringement of their work and have to fill out a form appealing to OpenAI’s mercy every time? Fuck off into the sun
OpenAI told the Guardian that content owners can flag copyright infringement using a “copyright disputes form” but that individual artists or studios cannot have a blanket opt-out. Varun Shetty, OpenAI’s head of media partnerships, said: “We’ll work with rights holders to block characters from Sora at their request and respond to takedown requests.”
bcmerchant.bsky.social
Yeah exactly — hard bans on related keywords like that are probably the only way to keep from infringing at scale
bcmerchant.bsky.social
I'm also skeptical that this opt-in method will work all that well, aside from in the more obvious cases that pose the greatest risk of litigation like Disney. All of these materials are clearly already in the training data!
Reposted by Brian Merchant
garychun.bsky.social
Thread.
bcmerchant.bsky.social
OpenAI's Sora gambit is somehow even more reckless and arrogant than we've grown accustomed to with the company. OpenAI is betting that it can spit in the face of creatives, workers, and the largest media companies on the planet—and bend copyright law to its whims.
The incredible arrogance of OpenAI
With Sora 2, OpenAI is betting it can spit in the face of workers, creators, and the biggest media conglomerates on the planet — and win
www.bloodinthemachine.com
bcmerchant.bsky.social
ah that's so nice of you, and will keep that in mind but it's all good right now (though could be fun to do something like that at some point regardless.) certainly not dire or anything, just trying to get to a decent spot with it — and paywalls suck, but they *work*
Reposted by Brian Merchant
alecf.bsky.social
What's worse is that OpenAI is giving users tools so that they, themselves, can also gleefully spit in the face of creatives, workers & media companies. It's a loogiefest.
bcmerchant.bsky.social
The full piece is here — I don't usually paywall, but paid subscriptions have slowed over the last couple weeks and I'm not yet sustainable, and well I gotta eat.

Sign up for a paid subscription if you can, so I can keep pieces like this paywall-free in the future. Cheers and hammers up
The incredible arrogance of OpenAI
With Sora 2, OpenAI is betting it can spit in the face of workers, creators, and the biggest media conglomerates on the planet — and win
www.bloodinthemachine.com
bcmerchant.bsky.social
Meanwhile, OpenAI is being sued because its chatbots encouraged a child to kill himself. It helped a mentally unwell veteran kill his mother. As @hypervisible.blacksky.app put it, OpenAI is nothing less than a "social arsonist." And it's plowing ahead to keep the bubble inflated.
bcmerchant.bsky.social
It's likely a bit of both. But it's made all the worse because it's *also* incredibly reckless on a social level too—we are weeks out from footage of an assassination going wildly viral. The Trump admin says it's treating political enemies as terrorists. And OpenAI releases an AI video generator.
bcmerchant.bsky.social
The way I see it, there are only two real possibilities here — either OpenAI is so desperate to prove it's still on the cutting edge of AI to placate investors and partners, or it's so arrogant that it thinks it can harvest and appropriate intellectual property at will, regardless of current law.
bcmerchant.bsky.social
I am once again asking us all to recognize that the Luddites were not anti-technology but anti-exploitation. If they were around today, they would have joined you in calling for technology that serves everyone, not just the bosses.

You *are* a Luddite, Bernie, and you should be proud to be one
sanders.senate.gov
Can AI and robotics help us in many ways? I am not a Luddite — I believe they can.

But we must make sure these new technologies benefit all of us, not just a handful of billionaires.

I hope you'll take a few minutes to read my op-ed in Fox News. Let’s start the debate.
SEN SANDERS: AI must benefit everyone, not just a handful of billionaires
Billionaires like Elon Musk and Jeff Bezos invest billions in AI and robotics that could eliminate millions of jobs while increasing corporate profits at workers' expense.
www.foxnews.com
Reposted by Brian Merchant
salome.bsky.social
AI as wage depression tool is a very useful and compelling framing
bcmerchant.bsky.social
Hagen is the author of a great short book, Why We Fear AI, which argues, among other things, that AI should not be viewed as a productivity tool, but a *wage depression* tool.

www.commonnotions.org/why-we-fear-ai
Why We Fear AI — Common Notions Press
Fears about AI tell us more about capitalism today than the technology of the future.
www.commonnotions.org
bcmerchant.bsky.social
can't suck the feeling out of that...
bcmerchant.bsky.social
If you follow AI, you've heard a number of cognitive scientists weigh in with valuable insights and concerns. You may not have heard from one quite like Hagen Blix, however, who describes generative AI as "class warfare through enshittification."

My interview with @hagenblix.bsky.social:
"AI is an attack from above on wages": An interview with cognitive scientist Hagen Blix
The author of 'Why We Fear AI' on why he sees generative AI as "class warfare through enshittification."
www.bloodinthemachine.com
Reposted by Brian Merchant
hypervisible.blacksky.app
OpenAI is essentially a social arsonist, developing and releasing tools that hyper-scale the most racist, misogynistic, and toxic elements of society, lowering the barriers for all manner of abuse. The so-called guardrails make a pinky swear look like an ironclad contract.
This social app can put your face into fake movie scenes, memes and arrest videos
The new Sora social app from ChatGPT maker OpenAI encourages users to upload video of their face so their likeness can be put into AI-generated clips.
www.washingtonpost.com
Reposted by Brian Merchant
jathansadowski.com
What an unsurprising decision by corporations that desperately need to make money with AI. If every other company with a chatbot isn't already doing this, then expect them to be following Meta's lead soon. The chatbot is not your friend; it's a corporate listening device. www.ft.com/content/22f7...
	Meta will use conversations people have with its chatbots to personalise advertising and content across its platforms, in a sign of how tech companies plan to make money from artificial intelligence.

The owner of Facebook, Instagram and WhatsApp on Wednesday said it would use the content of chats with its Meta AI to create advertising recommendations across its suite of apps.

“People will already expect that their Meta AI interactions are being used for these personalisation purposes,” said Christy Harris, privacy and data policy manager at Meta.
Reposted by Brian Merchant
davidakaye.bsky.social
"We call for the immediate end of the use of AI systems by DHS until the government can ensure the systems it deploys are free of discrimination, and until diverse perspectives are meaningfully included in the development and use of AI systems."
techpolicypress.bsky.social
Tsion Gurmu, Hinako Sugiyama, and Sobechukwu Uwajeh call on the US Department of Homeland Security to cease deploying AI systems until it can ensure they are free of discrimination and until diverse perspectives are meaningfully included in their development.
Where AI Meets Racism at the Border | TechPolicy.Press
Migrants are impacted by the use of AI even before they arrive at the border, write Tsion Gurmu, Hinako Sugiyama, and Sobechukwu Uwajeh.
buff.ly