Peter Evans-Greenwood
peter.evans-greenwood.com
@peter.evans-greenwood.com
Thinking out loud about hybrid agency, crooked paths & coordination without stories. Essays → thepuzzleanditspieces.substack.com
My latest post, "What Happens in the Gap". The one in which I show what actually has to happen between 'this technology works' and 'productivity explodes.'
January 28, 2026 at 12:27 AM
My latest Substack post, 'LLMs Are Following the Expert Systems Playbook—But the Score Is Different'. The one in which I explain why AI-skilled workers earn 56% more while 95% of enterprise AI pilots fail—and why both signal the same economic retreat.
January 20, 2026 at 12:28 AM
My new Substack post, the one where I trace the 30-year journey from 'motors as fire hazards' to 'networks as infrastructure'.

If you like thermodynamic efficiency, historical insurance data, and debunking standard economic myths, this one is for you.
January 13, 2026 at 12:52 AM
A characteristically thorough and grounded piece from @rodneyabrooks.bsky.social with his Predictions Scorecard, 2026 January 01.

What makes it solid is his methodical tracking of predictions against reality, combined with his deep historical perspective (50 years in AI/robotics).
January 8, 2026 at 10:15 AM
My latest, “2025: The Year that Wasn’t”, the one where I explain why 89% of organizations haven’t scaled AI agents: the problem isn’t infrastructure or training—it’s that simulated goals aren’t stable enough to justify the costs.

A squirrel can’t be talked out of wanting birdseed. An AI agent can.
January 6, 2026 at 12:48 AM
My latest Substack post, "We Saw AGI on Mars", the one in which I explain why LLMs are sophisticated combinational engines, not AGI—and why the difference matters. Hint: it's about existential stakes vs. statistical surprises. The canals are in the mirror.
December 23, 2025 at 12:23 AM
My latest Substack post, "The Present is Legible: Why your 2027 forecast is making you blind". The one where I show why squirrels, Wayne Gretzky, and Netflix all succeeded the same way: reading resistance instead of predicting futures.
December 17, 2025 at 9:48 PM
The grifters were prepared for the gov’s age verification, off the mark on the first day with phishing emails.
December 11, 2025 at 9:42 PM
The AGI panic is distracting us from present harms—Robodebt, UK Post Office Horizon, the algorithmic surveillance of warehouse workers—to focus on a science fiction future.
December 9, 2025 at 10:34 PM
My latest Substack post, "The Three Grammars". The one in which I show the same pattern repeating: 1897 electrical code, 1925 auto loans, 1999 web standards. Each took 10-15 years. Now we need all three to converge simultaneously for heat pumps, EVs, and solar. That's the crisis.
December 9, 2025 at 1:38 AM
(p.2, beside “Are you using AI to write this?”)
“Turing test or tone-police?”
December 4, 2025 at 10:15 AM
Latest: "The Agent That Wasn't There"—explaining why agentic AI security is a category error.

We're building stateful pattern matchers (LLMs) and securing them as goal-driven agents. That architectural gap? That's the vulnerability.

Think platypus, but for AI security.
December 2, 2025 at 12:33 AM
My latest on Substack, 'The Platypus in the Server Room', the one in which I compare AI researchers to Victorian naturalists inspecting a platypus pelt for stitches, and somehow this analogy holds up for 1,200 words.
November 25, 2025 at 12:20 AM
Are We in 1886? And 1919?
When a technology wave requires two grammars that history kept separate
buff.ly/sesUhoh
November 20, 2025 at 12:04 AM
The 2020s productivity paradox explained:

For the first time in industrial history, supply-side coordination (1886: making installation cheap) and demand-side coordination (1919: making purchase affordable) must solve simultaneously.

Same purchase. Two grammars. 12-16 year clock.
November 18, 2025 at 3:41 AM
1798: British naturalists receive a pelt from Australia: duck bill, otter fur, venom, lays eggs. 1st reaction: "Hoax!" 2nd reaction: "We need a new branch of life."

2024: LLM debugs code, writes poetry, fails basic logic, claims consciousness. Reaction: "Alien intelligence!"
November 17, 2025 at 8:47 PM
We see faces in clouds, personalities in boats—and now, apparently, introspection in autocomplete.

Here's what LLMs actually do (and don't do). A thread. 🧵
November 5, 2025 at 1:23 AM
My latest Substack post, 'Inside the Language Machine'. The one in which I argue that LLMs navigate the Tube map of human language—and explain why that changes everything about AI capabilities and limits.
November 4, 2025 at 12:41 AM
My new Substack post, "The Death of Authorship Is a Homecoming", the one in which I trace how we're returning to the scriptorium—where value comes from coordination and collective sense-making rather than individual content production.
October 28, 2025 at 12:45 AM
“The Age of De-Skilling” is the first essay I’ve seen that refuses the easy “robots-make-us-dumb” panic. Instead it asks which kinds of forgetting we can live with—and which ones eat the soul. Highly recommended for anyone who teaches, writes, codes, or thinks for a living.
October 27, 2025 at 8:47 PM
Researchers: "AI models resist being shut down. We don't know why but it might be a survival drive."

Guardian: "AI DEVELOPING SURVIVAL DRIVE!"

Reality: They built a machine that flips its own switch back on, then acted surprised when it flipped its own switch back on.
October 26, 2025 at 9:17 PM
This pattern isn't new.

Power looms in the 1810s: ~2.5:1 ratio (one weaver, multiple looms).

Waymo operations today: ~15-20:1 ratio.

FamilyMart robots: 50:1.

The leverage ratio keeps climbing.
October 24, 2025 at 7:24 AM
Everyone's missing the actual story in this Philippines robot operation piece. It's not about offshoring. It's about the 50:1 ratio.
October 24, 2025 at 7:24 AM
Gerald Gaus gives us the diagnostic tool: evaluative-coordination conflation.
We confuse "what is optimal?" with "how can diverse people coordinate?"

A perfect theory of justice tells you nothing about whether people with different values and circumstances can actually live under it.
October 21, 2025 at 9:34 PM
My next Substack essay, "The Tyranny of Optimisation", the one where I explain that sophisticated policies—like the NDIS—are not being undermined; the sophistication itself is the problem.
October 21, 2025 at 12:54 AM