Remmelt 🛑
banner
artificialbodies.net
Remmelt 🛑
@artificialbodies.net
Stop Big Tech that:
- launders our data
- dehumanises workers
- lobbies for unsafe uses
- pollutes our environment

Short book on how AI corps get destructive:
https://artificialbodies.net/artificial-bodies-preface-7042453348de
We won’t find acceptance of our authentic self in social media, and we certainly won’t find it in genAI.
January 13, 2026 at 3:09 AM
I hate to say it, but this is the start.

The pain we’re going to face with genAI (used to extract on top of identity conflict politics) is going to be say at least 20x the pain we experienced with social media.

Our hope is in finding clarity. That even the engineers realise that it is a dead end.
January 13, 2026 at 2:29 AM
Reposted by Remmelt 🛑
“Once hired, workers must install time-tracking software. That software ensures contractors are working during billable hours and aren’t cutting corners by using AI to critique the AI…”
Job Seekers Find a New Source of Income: Training AI to Do Their Old Roles
Buzzy AI startup Mercor employs tens of thousands of white-collar contractors, and the gig is open to anyone with expertise in their own particular field.
www.wsj.com
January 12, 2026 at 3:57 PM
Cancel this shit.
“..if AI listening devices go mainstream, there will also have to be some sort of cultural shift in terms of what’s appropriate and what’s not. Today, it’s somewhat looked down upon to record video of everyday people going about their lives…”
January 13, 2026 at 2:11 AM
“And so the CEOs of the biggest Al companies sign the extinction-oriented statement and don't sign the one calling out more immediate risks or calling for them to be held accountable.”

7/
January 12, 2026 at 2:44 AM
“But although "current-risk accountability" regulation is clearly in the public interest… it's not in the interest of the people in the Al industry who are trying to put as much money in their pockets as they can.”

6/
January 12, 2026 at 2:43 AM
“So focusing on the immediate problem helps both cases… while focusing on the hypothetical problems ignores the immediate case completely and still may not be useful in the future”

5/
January 12, 2026 at 2:26 AM
“And as a bonus, making companies liable for what their software does will force them to slow down and to understand how to control their software”

4/
January 12, 2026 at 2:25 AM
“Regulations like that would immediately help the teenagers and the parents, because the
companies would be forced to reckon with these problems or risk real penalties.”

3/
January 12, 2026 at 2:22 AM
“By contrast, many of the immediate problems have clear-cut solutions.

For example, severe, explicit criminal penalties for developing or hosting a service that can be shown to have generated content that manipulated or encouraged anyone to harm themselves or others.”

2/
January 12, 2026 at 2:20 AM
“Most of what l've seen out of the "Al will make us extinct" camp downplays or ignores the near-term problems… The problem with the extinction argument is that there's no clear, actionable path to solving the extinction problem, because it's only hypothetical at this point in time.”

1/
January 12, 2026 at 2:18 AM
This is such a good video.

I’m frustrated by the push for inevitability. We need to focus on stopping the harms now, instead of ‘AGI in 2027’. For me, it’s about not accelerating the sixth mass extinction.

Kurzgesagt’s vid was funded by the biggest alignment research funder (Coefficient).
January 12, 2026 at 2:11 AM
‘The site asks visitors to "assist the war effort by caching and retransmitting this poisoned training data"

…the poisoned data on the linked pages consists of incorrect code that contains subtle logic errors and other bugs that are designed to damage language models that train on the code.’
MSN
www.msn.com
January 12, 2026 at 1:36 AM
Reposted by Remmelt 🛑
“The Justice Department on Friday announced to employees it is creating an artificial intelligence taskforce to challenge state-level regulations so that AI companies can "be free to innovate without cumbersome regulation…”
DOJ creates task force to challenge state AI regulations
A new group within the Justice Department will target state artificial intelligence laws that it says hinder innovation, according to a memo.
www.cbsnews.com
January 10, 2026 at 3:08 PM
Reposted by Remmelt 🛑
I love seeing these kinds of articles in extremely normal publications because imho it’s an indicator of the level of overreach from tech companies.
Ring Doorbells Can Now Identify Faces—But Experts Say It’s a Major Privacy Invasion. Here’s Everything You Need to Know
Tech keeps getting better—or worse, if you ask privacy experts. Find out which new Ring doorbell feature is making people a little nervous.
www.rd.com
January 10, 2026 at 3:11 PM
Reposted by Remmelt 🛑
All the noise about alignment from the advocates of 'AI Safety' is completely specious because AI already aligns with the hierarchical dualisms that shape our society at the deepest levels, especially misogyny, racism and a contempt for nature.
January 11, 2026 at 7:32 AM
Reposted by Remmelt 🛑
The Looki L1 AI wearable
“…continuously captures a wearer's point of view, promising to advise when to avoid another cup of coffee, to comment on places or objects around you, and to summarize each day in a comic strip.”
AI pendants back in vogue at tech show after early setback
Pendants and brooches packed with artificial intelligence abounded at the Consumer Electronics show, using cameras and microphones to watch and listen through the day like a vigilant personal assistan...
uk.finance.yahoo.com
January 11, 2026 at 4:45 PM
Reposted by Remmelt 🛑
Anthropic head of life sciences: “you can integrate all of your personal information together with your medical records and your insurance records, and have Claude as the orchestrator and be able to navigate the whole thing and simplify it for you.”
Anthropic joins OpenAI's push into health care with new Claude tools
Anthropic’s new offerings allow users and providers to work with medical data, mimicking similar moves by OpenAI.
www.nbcnews.com
January 12, 2026 at 12:00 AM
“…to put her in a transparent bikini, sometimes covered in “donut glaze” that resembles semen.

Grok frequently complies: one estimate suggests that the AI model is posting around 6,700 sexually suggestive or “nudified” images every hour.”

2/
January 11, 2026 at 12:05 PM
“UK’s Internet Watch Foundation said it had found people using Grok to produce sexualized images of girls as young as 11, which the organization says constitute CSAM under British law.

The findings come weeks into a horrifying new trend on X: tagging Grok under a woman’s photo and asking it…”

1/
Why hasn’t anyone stopped Grok?
Transformer Weekly: China’s H200 plans, the plot to oust Ro Khanna, and research turmoil at OpenAI
www.transformernews.ai
January 11, 2026 at 12:04 PM
Reposted by Remmelt 🛑
As someone who has reported on deepfakes for years now, I’ve had a few moments while reporting on Grok where I feel sick to my stomach thinking about the continued escalation of sexual abuse with this technology and where it’s going. We’re not ready for this. And it didn’t have to be this way.
January 10, 2026 at 2:16 AM
Yes, the “safety” community really did everyone a disservice here.
This was also the end game of co-opting the term "safety" to focus on hypothetical and unscientific AGI claims. It diverted attention from the real harms that are inherent to genAI and its training data, and now society has sleep walked into this horror with no accountability in sight.
on twitter grok is generating sexual images of children (in addition to the women whose images were being taken and edited to make them nude, put them in micro bikinis, etc...) and in response elon turned off the media tab on grok's profile page so you can't see the AI-generated child porn as easily
January 2, 2026 at 12:33 PM
Sounds like a cheap hack to get your attention.
January 2, 2026 at 12:07 PM
Glad you got out, and can now enjoy explorations in Europe
January 1, 2026 at 1:22 PM
Reposted by Remmelt 🛑
The deeper point of debating the energy and water demands of data centres is to pose the question of re-socialising energy and water resources for the common good 🔋🌊✊
December 30, 2025 at 8:07 AM