Remmelt 🛑
@artificialbodies.net
420 followers 340 following 1.3K posts
Stop Big Tech that: - launders our data - dehumanises workers - lobbies for unsafe uses - pollutes our environment Short book on how AI corps get destructive: https://artificialbodies.net/artificial-bodies-preface-7042453348de
Reposted by Remmelt 🛑
hypervisible.blacksky.app
OpenAI and Anthropic “have traditional business insurance coverage in place, but insurance professionals said AI model providers will struggle to secure protection for the full scale of damages they may need to pay out in the future.”
Insurers balk at multibillion-dollar claims faced by OpenAI and Anthropic
Companies struggle to assess scale of financial risks emerging from artificial intelligence
www.ft.com
Reposted by Remmelt 🛑
abeba.bsky.social
"New York City filed a new lawsuit accusing Facebook, Google, Snapchat, TikTok and other online platforms of fueling a mental health crisis among children by addicting them to social media." www.reuters.com/sustainabili...
www.reuters.com
artificialbodies.net
Oh, this is killing me.
Reposted by Remmelt 🛑
hypervisible.blacksky.app
“The more ways that a student reports that their school uses AI, the more likely they are to report things like 'I know someone who considers AI to be a friend,' 'I know someone who considers AI to be a romantic partner.'"
1 in 5 high schoolers has had a romantic AI relationship, or knows someone who has
A national survey of students, teachers and parents shines a light on how the AI revolution is playing out in schools – including when it comes to bullying and a community's trust in schools.
www.npr.org
Reposted by Remmelt 🛑
bcmerchant.bsky.social
Turns out when @hypervisible.blacksky.app called OpenAI a "social arsonist" last week, he had no idea just how accurate that was.
Reposted by Remmelt 🛑
neilturkewitz.bsky.social
“Every time a news story frames AI competition as a race between companies or countries, it obscures the fact that ordinary people have been drafted into that race without their consent.”
We should all be Luddites | Brookings
Courtney Radsch discusses rehabilitating the idea of Luddites as people concerned with the control and impact of technology.
www.brookings.edu
artificialbodies.net
Start-ups like “Friend” and “Artisan” appropriately blacken the image of the entire AI industry.

It’s one thing I’m glad about here. These execs don’t give a shit about making the public hate the industry, as long as more eyeballs are on their product.

It’s like a negative negative externality.
hypervisible.blacksky.app
“Hype, however, does not necessarily translate into sales. As of this writing, he has sold around 3,100 pendants — though he expects that will increase rapidly once the product hits retailers like Walmart sometime next year. The ad campaign is still rolling out in Los Angeles, with Chicago up next.”
A Debate About A.I. Plays Out on the Subway Walls
www.nytimes.com
Reposted by Remmelt 🛑
abeba.bsky.social
this! legitimate, trusted, and verifiable knowledge/reality are becoming a thing of the past
quantian.bsky.social
Sora is not the real problem here. The real problem is that in 12, at most 24 months, there will be a Sora clone from Tencent or Alibaba that is just as good and can be run by anybody with a 5090, at which point everyone in society needs to rapidly agree to treat all digital video as fake, period.
Reposted by Remmelt 🛑
hypervisible.blacksky.app
“Justice systems across the world are struggling to address harms from deepfakes that are increasingly used for financial scams, in elections, and to spread nonconsensual sexual imagery.”
Courts don’t know what to do about AI crimes
AI-generated images and videos are stumping prosecutors in Latin America, even as courts embrace AI to tackle case backlogs.
restofworld.org
artificialbodies.net
They don’t give a shit.

Respect and admiration for what you did at Genoa
artificialbodies.net
Actually, just editing a book:

“Why AI Won’t Be Your Mother”
But what if they could control those machines over the very long term? What if they discovered a design that never stops working?
So geeks got busy on perpetual control.
Take Geoffrey Hinton, who left Google to go on an alarmist media tour. Hinton recently had a revelation: AI will dominate us, but we could control it by building in a motherly instinct. To people commenting online, this was absurd. The absurdity is also revealing.
To this top-cited researcher, the worst excesses of tech can be solved with more tech. The machine can even become your mother!
Your machine of loving grace is reaching superintelligence - say the über geeks - as they run out of data to steal, and burn billions of dollars on data centers, to generate slop that looks a lot like last year's slop. Generative models are not reaching take-off - they are reaching dot-com levels of crazy investment.
This time, the tech is backed by big cash reserves and a fascist regime. But the developers are despised and deep in the red.
Likely, the AI crash will happen in the next few years.
artificialbodies.net
Hinton is a sheltered rich guy who focusses on far-out risks but ignores harms to communities now. Underprivileged communities feel harms disproportionately (e.g. see uses of facial recognition, regurgitation of tropes about people of colour). It’s true all of us are affected, but some clearly more.
Reposted by Remmelt 🛑
timnitgebru.bsky.social
Back in the day people used to have a slide at the end of their presentations saying "and that's how the brain works" as an inside joke about Hinton since he was known to talk confidently out of his ass.

Now, the Nobel committee brought this upon us like they give the peace prize to genociders.
dystopiabreaker.xyz
anyway, here is 2024 Nobel Prize in Physics winner Geoffrey Hinton discussing what we know about large AI models on 60 Minutes.
Reposted by Remmelt 🛑
kojamf.bsky.social
Dr. Jane Goodall filmed an interview with Netflix in March 2025 that she understood would only be released after her death.
Reposted by Remmelt 🛑
neilturkewitz.bsky.social
“I’m more than a critic—I’m a hater. I’m not here to make a careful comprehensive argument, because people have already done that. If you’re pushing slop or eating it, you wouldn’t read it anyway. You’d ask a bot for a summary & forget what it told you, then proceed with your day…”
@anthonymoser.com
heleline.bsky.social
We're all being made to have this use-case of the *technology* in our lives, and at the end of the day all we get to do about it is hate it. Because, the arguments against it are blatantly ignored by so many people. Particularly by the people with the power to mitigate the damage "AI" can do.

4/5
anthonymoser.com
I considered writing a long carefully constructed argument laying out the harms and limitations of AI, but instead I wrote about being a hater. Only humans can be haters.
Reposted by Remmelt 🛑
neilturkewitz.bsky.social
“OpenAI is essentially a social arsonist, developing & releasing tools that hyper scale the most racist, misogynistic, & toxic elements of society, lowering the barriers for all manner of abuse.”

And there is also a dark side.

👏🏽 @hypervisible.blacksky.app

h/t @bcmerchant.bsky.social
hypervisible.blacksky.app
OpenAI is essentially a social arsonist, developing and releasing tools that hyper scale the most racist, misogynistic, and toxic elements of society, lowering the barriers for all manner of abuse. The so called guardrails make a pinky swear look like an ironclad contract.
This social app can put your face into fake movie scenes, memes and arrest videos
The new Sora social app from ChatGPT maker OpenAI encourages users to upload video of their face so their likeness can be put into AI-generated clips.
www.washingtonpost.com
Reposted by Remmelt 🛑
neilturkewitz.bsky.social
Everyone needs to see AI for what it is—a project that undermines truth & agency while transferring wealth from individual creators to Silicon Valley companies & VC’s. Rooted in exploitation of humanity & the planet, it’s a tool for the radical right to claim dominion & end representative democracy.
hypervisible.blacksky.app
These are profoundly anti social tools that erode the social fabric. Their main purposes are harassment, abuse, demolishing consent and furthering an authoritarian project.

Any uses outside of that are incidental.
Reposted by Remmelt 🛑
hypervisible.blacksky.app
“‘To ensure we have enough data, we are looking for videos of both real and staged events, to help train the AI what to be on the lookout for,’ the company wrote on its website.
‘You can even create events by pretending to be a thief and donate those events,’ the website reads.” 💀
Anker offered Eufy camera owners $2 per video for AI training | TechCrunch
Hundreds of Eufy customers have donated hundreds of thousands of videos to train the company’s AI systems.
techcrunch.com
Reposted by Remmelt 🛑
hypervisible.blacksky.app
“From street cameras to drones and regional fusion hubs, surveillance systems are increasingly built atop AWS. The pitch Amazon makes to law enforcement is about more than raw infrastructure. It is also about access, connections, and momentum.”
Amazon’s quiet rise as a power broker in police surveillance | Biometric Update
AWS has positioned itself not just as a host of police data but also as a promoter and intermediary for surveillance tools.
www.biometricupdate.com
Reposted by Remmelt 🛑
hypervisible.blacksky.app
Much like the immortal tech bro, this guy has tapped into the spectacle as a way of garnering attention. In his case, it’s for what is mostly an absurd product that few people want and lots of people hate.
The Most Reviled Tech CEO in New York Confronts His Haters
Avi Schiffmann says he’s enjoying the angry reaction to the Friend AI pendant. Is he serious?
www.theatlantic.com
Reposted by Remmelt 🛑
hypervisible.blacksky.app
“…that Sora is being used for stalking and harassment will likely not be an edge case, because deepfaking yourself and others into videos is one of its core selling points.”

Far from an edge case, it’s the primary use case.
Stalker Already Using OpenAI's Sora 2 to Harass Victim
A journalist claims that her stalker used Sora 2, the latest video app from OpenAI, to churn out videos of her.
futurism.com
artificialbodies.net
Ask Sam about Annie.
hypervisible.blacksky.app
“While Ive acknowledged the potential for AI to boost productivity, efficiency doesn’t appear to be his core goal with these devices. Rather, he hopes for them to bring more social good into the world. The devices should ‘make us happy, and fulfilled, and more peaceful, and less anxious…’”
Jony Ive Says He Wants His OpenAI Devices to ‘Make Us Happy’
“I don’t think we have an easy relationship with our technology at the moment,” the former Apple designer said at OpenAI's developer conference in San Francisco on Monday.
www.wired.com