Transformer
@transformernews.ai
16 followers 1 following 28 posts
A publication about the power and politics of transformative AI. Subscribe for free: http://transformernews.ai/subscribe
A battle is simmering over proposals to make US chipmakers sell to domestic customers before exporting to countries such as China. @celiaford.bsky.social has the rundown on everything you need to know about the GAIN AI Act.
What the GAIN AI Act could mean for chip exports
A battle is simmering over proposals to make US chipmakers sell to domestic customers before exporting to countries such as China
www.transformernews.ai
While OpenAI gets distracted by AI-slop feeds, Anthropic looks focused on its core model, writes @ShakeelHashim in this week's newsletter. Plus Newsom signs SB 53, Sen Hawley pushes new AI bills, and top researchers found an “AI scientist” company. www.transformernews.ai/p/openais-de...
OpenAI's descent into slop
Transformer Weekly: Newsom signs SB 53, Claude Sonnet 4.5, and a Hawley/Blumenthal evals bill
www.transformernews.ai
OpenAI's new evaluation helps to ground conversations about AI-driven job loss in actual model performance, rather than fear and wild speculation. It also shows that “Will AI take our jobs?” isn’t a yes or no question.
AI models are getting really good at things you do at work
A new OpenAI benchmark, GDPval, tests AI models on things people actually do in their jobs — and finds that Claude is about as good as a human for government work
www.transformernews.ai
“He’s concerned, open-minded and wants to make time to engage with experts and people who are thinking about AI security, and I think that’s going to be part of what he spends a lot of his time as a minister doing.” Our profile of the new UK AI minister: www.transformernews.ai/p/kanishka-n...
Britain’s new AI minister actually ‘gets’ AI
Kanishka Narayan is excited about AI opportunities, but takes the risks seriously too
www.transformernews.ai
"We cannot rule out that Claude's low deception rates in our evaluations are at least partially driven by its evaluation awareness." — Apollo Research
Claude Sonnet 4.5 knows when it’s being tested
Anthropic's new model appears to use "eval awareness" to be on its best behavior
www.transformernews.ai
In a world of already low or declining trust in political systems and institutions, indiscriminate and irresponsible use of AI could end up being more detrimental to democratic life than systems employed with the explicit aim of shaping election results, writes @felixsimon.bsky.social
AI is persuasive, but that’s not the real problem for democracy
Opinion: Felix M Simon argues that AI is unlikely to significantly shape election results in the near future, but warns that it could damage democracy through a steady erosion of institutional trust.
www.transformernews.ai
The pyrotechnic narratives of The Matrix or Terminator may be science fiction, but the underlying mechanism — AI autonomously developing new, better AI — is considered one of the greatest risks inherent in our headlong rush to build more powerful models. www.transformernews.ai/p/automated-...
When AI starts writing itself
Why automating AI R&D could be the most dangerous milestone yet
www.transformernews.ai
Washington has just made life significantly harder for foreign workers.

But the H-1B visa changes present a generational opportunity for the UK to scoop up AI talent, Julia Willemyns argues.
How the UK can seize on Trump’s immigration mistakes
Opinion: The H-1B visa changes present a generational opportunity for the UK to scoop up AI talent, Julia Willemyns argues
www.transformernews.ai
"The closer we get to machines that can think for us, the more crucial it becomes to preserve our capacity to think for ourselves."
No, ChatGPT isn’t ‘making us stupid’ — but there’s still reason to worry
www.transformernews.ai
"AI is now where electricity and cars were when they first appeared: it holds great promise and great peril. AI could accelerate drug development; it could also enable terrorists to create synthetic bioweapons."
How insurance could help make AI secure
Opinion: Cristian Trout, Rajiv Dattani and Rune Kvist argue that insurance can help reward responsible development of AI.
www.transformernews.ai
The statement is timed for the UN General Assembly this week. Though a meaningful step towards building an international consensus on AI, it is unlikely to move the needle on concrete governance, largely due to American opposition.
Nobel laureates and AI developers call for ‘red lines’ on AI
Experts are calling for international agreements as the United Nations meets, but they face an uphill battle turning words into action
www.transformernews.ai
China is on the verge of AI chip supremacy — or so the headlines this week would have you believe. As Trump’s AI advisor David Sacks put it, “The message is clear: China is not desperate for our chips.” But that message is bullshit, writes @shakeelhashim.com
Don’t fall for China’s chip propaganda
Transformer Weekly: Anthropic in DC, an AI-designed virus, and If Anyone Builds It, Everyone Dies
www.transformernews.ai
There’s a very real danger that without intervention, AGI will create conditions which stand in conflict with the egalitarian ideals of our political systems. It is not certain that democracy would survive the transition. www.transformernews.ai/p/would-demo...
Would democracy survive an AGI-supercharged economy?
AGI could lead to growth of 30% — and double-digit unemployment. It’s not clear that democratic institutions would be able to survive
www.transformernews.ai
"Open-weight models sit on a knife’s edge. Handled well, they expand access and drive innovation; handled poorly, one misuse could end openness altogether." Bengüsu Özcan, Alex Petropoulos and Max Reddel on why the window to design safe openness is closing fast www.transformernews.ai/p/can-open-w...
Can open-weight models ever be safe?
Opinion: Bengüsu Özcan, Alex Petropoulos and Max Reddel argue that technical safeguards, societal preparedness, and new standards could make open-weight models safer
www.transformernews.ai
"If Anyone Builds It is objectively short: 233 pages ... a miracle by Yudkowsky’s standards. But the painful prose makes it feel interminable. The stylistic choices, regularly lapsing into fantasy-novel flourishes, do not project competence."
Book Review: 'If Anyone Builds It, Everyone Dies'
Eliezer Yudkowsky and Nate Soares’ new book should be an AI wakeup call — shame it’s such a chore to read
www.transformernews.ai
The EU AI Office can't hire the people it needs.

Low pay, slow hiring, and pressure to ensure representation from member states are creating a talent shortage — just as major AI regulation takes effect.

Read more:
The EU is struggling to hire the people it needs to regulate AI
Key leadership roles, including the head of the AI Office’s safety unit, have yet to be filled
www.transformernews.ai
At the cringily named “AI’ve Got A Plan” hearing Wednesday, Sen. Ted Cruz unveiled his much-trailed roadmap for AI policy — and a bill to start implementing it. Read our analysis, and the rest of our roundup of everything you need to know in AI policy.
What Ted Cruz’s SANDBOX Act would actually do
Transformer Weekly: OpenAI restructuring, Altman and Huang in the UK, and AI hunger strikes
www.transformernews.ai
"We must design, build and reward systems that complete work predictably in messy environments, rather than building ones that simply ace static quizzes under lab conditions," write @ruchowdh.bsky.social and Mala Kumar.
Why AI evals need to reflect the real world
Opinion: Rumman Chowdhury and Mala Kumar argue that we need better AI evaluations — and the infrastructure and investment to do them
www.transformernews.ai
"He needs money from the tech industry. That's really the equation," Common Sense Media founder Jim Steyer said about Gavin Newsom's position on SB 53. Yet the bill might still pass.

Read more:
California's latest AI safety bill might stand a chance
SB 53 is entering the home stretch despite industry lobbying. Will it make it over the line?
www.transformernews.ai
Researchers made Claude threaten blackmail to avoid shutdown, then headlines claimed AI had gone rogue. But they had to engineer that scenario repeatedly. Are scheming evaluations revealing real dangers or manufacturing them? Read more:
Are AI scheming evaluations broken?
Doubts have been raised about one of the key ways we tell if AI will misbehave. Is it time for a new approach?
www.transformernews.ai
A single ChatGPT query uses the same energy as running a microwave for one second. And sending 50,000 fewer ChatGPT queries avoids just 0.014 tonnes of CO2.
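A rough back-of-the-envelope check of those figures, as a sketch: the per-query energy assumes a typical ~1,100 W microwave (an assumption, not a number from the post), and the per-query CO2 is derived directly from the post's own totals.

# Back-of-envelope check of the figures above (Python).
# Assumption: a typical microwave draws roughly 1,100 W.
MICROWAVE_POWER_W = 1100
energy_per_query_wh = MICROWAVE_POWER_W * 1 / 3600  # "one second of microwave use" is about 0.3 Wh

QUERIES_AVOIDED = 50_000
CO2_AVOIDED_TONNES = 0.014
co2_per_query_g = CO2_AVOIDED_TONNES * 1_000_000 / QUERIES_AVOIDED  # about 0.28 g CO2 per query

print(f"~{energy_per_query_wh:.2f} Wh and ~{co2_per_query_g:.2f} g CO2 per query")

Either way, the per-query figures are tiny; the harder question, as the piece further down argues, is aggregate future scale.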
"There's no evidence of any AI chip diversion," Jensen Huang said. Yet Nvidia chips worth $1 billion secretly reached China in three months through smuggling networks. The Chip Security Act could fix this with location tracking. Read more:
Chip location verification is the new export control battleground
The Chip Security Act proposes a way to tackle chip smuggling. Semiconductor companies don’t seem to like it.
www.transformernews.ai
ChatGPT queries use energy equivalent to microwaving food for one second. The real AI environmental challenge isn't individual use — it's planning responsibly for massive future scale. Read more:
We’re getting the argument about AI's environmental impact all wrong
Individual ChatGPT queries are a rounding error — we need to think about the future
www.transformernews.ai
"In artificial intelligence, the basic starting points of evaluation have largely been ignored in favor of benchmarks that sound impressive but are misaligned to what we need." Read @ruchowdh.bsky.social and Mala Kumar's call for AI evals that reflect the real world:
Why AI evals need to reflect the real world
Opinion: Rumman Chowdhury and Mala Kumar argue that we need better AI evaluations — and the infrastructure and investment to do them
www.transformernews.ai