Andrew Stellman 👾
@andrewstellman.bsky.social
380 followers 420 following 1.1K posts
Author, developer, team lead, musician. Author of O'Reilly books including Head First C#, Learning Agile, and Head First PMP. Solving complexity with simplicity.
andrewstellman.bsky.social
I'm not that pessimistic. I've found AI coding tools like GitHub Copilot and Cursor to be incredibly valuable. The whole point of the writing and research I've been doing over the past few months is to help people learn to use them effectively without losing their critical thinking skills.
andrewstellman.bsky.social
Right! What makes me worry is that new devs learning to depend on AI tools aren't picking up the important critical thinking skills they need to figure out when AI generated code needs to be fixed.
Reposted by Andrew Stellman 👾
oreilly.bsky.social
The cognitive shortcut paradox presents a fundamental challenge for how we teach and learn #programming in the #AI era. The traditional path of building skills through struggle and iteration hasn’t become obsolete; it’s become more critical. @andrewstellman.bsky.social: bit.ly/4nyzhcB #Radar
The Cognitive Shortcut Paradox
This article is part of a series on the Sens-AI Framework—practical habits for learning and coding with AI. AI gives novice developers the ability to skip…
bit.ly
andrewstellman.bsky.social
That is definitely not a recipe for AI taking all of our coding jobs.

I dig into this in my @oreilly.bsky.social Radar piece: “Prompt Engineering Is Requirements Engineering.”
📖 www.oreilly.com/radar/prompt...
Prompt Engineering Is Requirements Engineering
We’ve Been Here Before
www.oreilly.com
andrewstellman.bsky.social
You don’t actually avoid the hard parts of software this way. You just hand them off to a system that will confidently charge ahead with missing or wrong assumptions, and you’ll only discover the problems later when the code doesn’t fit what you really needed.
andrewstellman.bsky.social
…these are the things that derail projects, not the syntax of a language.

Now imagine giving that same vague description to an overly literal AI that never pushes back, never asks clarifying questions, and sometimes hallucinates.
andrewstellman.bsky.social
But I think it’s worth challenging.

Specifying solutions in natural language has been one of the hardest problems in software engineering for the last 50 years. Even teams of smart developers often misunderstand each other when requirements aren’t clear. Miscommunication, missing context, vague specs…
andrewstellman.bsky.social
🚀 𝗔𝗜 𝘄𝗼𝗻’𝘁 𝗸𝗶𝗹𝗹 𝗰𝗼𝗱𝗶𝗻𝗴—𝗵𝗲𝗿𝗲’𝘀 𝘄𝗵𝘆 🚀

I got a thoughtful reply on one of my posts: we’re in a transition phase where LLMs cover 80% of the work, and once the last 20% is solved, we’ll just be writing apps in natural language instead of code.

It’s a smart take—and it’s also something I hear a lot.
andrewstellman.bsky.social
AI can help you get to a working solution faster, but if you lean on it too early, you can skip the exact work that builds debugging skills, pattern recognition, and systematic thinking.

📖 More in my new @oreilly.bsky.social Radar article: www.oreilly.com/radar/the-co...
The Cognitive Shortcut Paradox
This article is part of a series on the Sens-AI Framework—practical habits for learning and coding with AI. AI gives novice developers the ability to skip…
www.oreilly.com
andrewstellman.bsky.social
🚀 𝗧𝗵𝗲 𝗖𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗦𝗵𝗼𝗿𝘁𝗰𝘂𝘁 𝗣𝗮𝗿𝗮𝗱𝗼𝘅: 𝗪𝗵𝗲𝗻 𝗔𝗜 𝗵𝗲𝗹𝗽𝘀 𝘀𝗲𝗻𝗶𝗼𝗿𝘀 𝗯𝘂𝘁 𝗵𝘂𝗿𝘁𝘀 𝗷𝘂𝗻𝗶𝗼𝗿𝘀 🚀

Evidence is emerging that AI chatbots boost productivity for experienced developers—but have little measurable impact on skill growth for beginners.

That’s the heart of what I call the 𝗰𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝘀𝗵𝗼𝗿𝘁𝗰𝘂𝘁 𝗽𝗮𝗿𝗮𝗱𝗼𝘅.
The Cognitive Shortcut Paradox
This article is part of a series on the Sens-AI Framework—practical habits for learning and coding with AI. AI gives novice developers the ability to skip…
www.oreilly.com
andrewstellman.bsky.social
…but for beginners, it can mean skipping the actual learning process. Without struggling through design, debugging, and those “aha!” problem-solving moments, they risk missing the very skills that make AI useful later on.
andrewstellman.bsky.social
🚀 𝗧𝗵𝗲 𝗵𝗶𝗱𝗱𝗲𝗻 𝗰𝗼𝘀𝘁 𝗼𝗳 𝗔𝗜 𝗰𝗼𝗱𝗶𝗻𝗴: 𝗧𝗵𝗲 𝗖𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗦𝗵𝗼𝗿𝘁𝗰𝘂𝘁 𝗣𝗮𝗿𝗮𝗱𝗼𝘅 🚀

I’m really excited to share my latest Radar piece: “The Cognitive Shortcut Paradox.”

AI gives developers the ability to skip the slow, messy parts of coding. That feels great…
andrewstellman.bsky.social
You still have to use your judgment, but treating AI as its own critic keeps you engaged and highlights problems you’d miss if you just skimmed the output.

That’s the essence of 𝘁𝗿𝘂𝘀𝘁 𝗯𝘂𝘁 𝘃𝗲𝗿𝗶𝗳𝘆, the theme of my new @oreilly.bsky.social Radar article: www.oreilly.com/radar/trust-...
Trust but Verify
Learning to Catch What AI Misses
www.oreilly.com
andrewstellman.bsky.social
🚀 𝗛𝗼𝘄 𝘁𝗼 𝘂𝘀𝗲 𝗔𝗜 𝘁𝗼 𝗰𝗮𝘁𝗰𝗵 𝗶𝘁𝘀 𝗼𝘄𝗻 𝗺𝗶𝘀𝘁𝗮𝗸𝗲𝘀 🚀

Here’s a trick: after Copilot or ChatGPT generates code, ask it to review that same code for problems. Because the model shifts context, it often surfaces issues it “missed” the first time. Expect some noise along the way: contradictory feedback and nitpicky warnings mixed in with genuine edge cases.
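A minimal sketch of that two-pass loop. Everything here is illustrative: `ask_llm` is a hypothetical stand-in for whatever chat API you use, and the prompt wording is just one way to phrase the review request.

```python
# Sketch of the generate-then-review loop. ask_llm is a hypothetical
# stand-in for any chat API; nothing here depends on a specific vendor.

def build_review_prompt(generated_code: str) -> str:
    """Wrap previously generated code in a fresh review request.

    Asking in a new prompt (ideally a new conversation) shifts the
    model's context, which is what surfaces problems it glossed over.
    """
    return (
        "Review the following code for bugs, unhandled edge cases, and "
        "questionable assumptions. Be specific:\n\n"
        f"```\n{generated_code}\n```"
    )

def generate_then_review(task: str, ask_llm):
    code = ask_llm(f"Write code to {task}")        # pass 1: generate
    review = ask_llm(build_review_prompt(code))    # pass 2: self-critique
    return code, review
```

The separate review prompt is the whole trick: the model is no longer completing its own earlier answer, so it reads the code as something to criticize rather than defend.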
andrewstellman.bsky.social
🧪 𝗙𝗿𝗮𝗴𝗶𝗹𝗲 𝘁𝗲𝘀𝘁𝘀: mocking, brittle setups, or too many dependencies just to get tests to pass

These are the red flags that your AI code is heading into technical debt. Spot them early and you save yourself months of pain later.
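As a hypothetical illustration of the fragile-test smell (all the names here are invented, not from any real codebase): the first test needs four mocks just to check a sum, while the refactored pure function needs no setup at all.

```python
from unittest import mock

class Checkout:
    """Toy class with too many collaborators (invented example)."""
    def __init__(self, db, cache, mailer, audit):
        self.db, self.cache, self.mailer, self.audit = db, cache, mailer, audit

    def total(self, items):
        prices = {item: self.db.price(item) for item in items}
        self.audit.log(items)  # side effect buried inside a calculation
        return sum(prices.values())

def test_total_fragile():
    # Red flag: four mocks and brittle setup just to verify arithmetic.
    db = mock.Mock()
    db.price.return_value = 10
    checkout = Checkout(db, mock.Mock(), mock.Mock(), mock.Mock())
    assert checkout.total(["apple", "bread"]) == 20

# Healthier shape: pull the calculation out into a pure function,
# so the test has no mocks and no setup at all.
def sum_prices(prices: dict) -> int:
    return sum(prices.values())

def test_sum_prices():
    assert sum_prices({"apple": 10, "bread": 10}) == 20
```

When a test's mock count keeps climbing, that's usually the design talking, not the test framework.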
andrewstellman.bsky.social
🚀 𝗧𝗵𝗲 𝘀𝘂𝗯𝘁𝗹𝗲 𝘀𝗶𝗴𝗻𝘀 𝘆𝗼𝘂𝗿 𝗔𝗜-𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗲𝗱 𝗰𝗼𝗱𝗲 𝗶𝘀 𝗶𝗻 𝘁𝗿𝗼𝘂𝗯𝗹𝗲 🚀

How do you know when to stop vibe coding and start verifying? Watch for the signals:

🔄 𝗥𝗲𝗵𝗮𝘀𝗵 𝗹𝗼𝗼𝗽𝘀: prompting slight variations over and over without progress

💥 𝗦𝗵𝗼𝘁𝗴𝘂𝗻 𝘀𝘂𝗿𝗴𝗲𝗿𝘆: one small change triggers cascading edits everywhere
Reposted by Andrew Stellman 👾
oreilly.bsky.social
"Watch for signs of trouble: lots of mocking, complex setup, too many dependencies—especially needing to modify other parts of the code. When you see those signs, stop vibe coding and read the code." Check out new #Radar article by @andrewstellman.bsky.social: bit.ly/4nQMeOx #AI #VibeCoding
Trust but Verify
Learning to Catch What AI Misses
bit.ly
andrewstellman.bsky.social
Let AI give you a starting point, but review it like you would any other code: check assumptions, run the debugger, refactor where coupling creeps in, and make sure names actually carry meaning.

This is the focus of my new @oreilly.bsky.social Radar article: www.oreilly.com/radar/trust-...
Trust but Verify
Learning to Catch What AI Misses
www.oreilly.com
andrewstellman.bsky.social
🚀 𝗧𝗵𝗲 𝗔𝗜 𝗿𝘂𝗹𝗲 𝗲𝘃𝗲𝗿𝘆 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿 𝗻𝗲𝗲𝗱𝘀: 𝗧𝗿𝘂𝘀𝘁 𝗯𝘂𝘁 𝘃𝗲𝗿𝗶𝗳𝘆 🚀

AI-generated code looks right a lot of the time—but it’s not guaranteed. It’s stitching patterns together, not reasoning about your architecture or long-term design.

That’s why I argue the mindset has to be 𝘁𝗿𝘂𝘀𝘁 𝗯𝘂𝘁 𝘃𝗲𝗿𝗶𝗳𝘆.
andrewstellman.bsky.social
When you're using an LLM-based AI, the line is blurred even further. In the model, the "rules" are vectors in a huge cloud, and each prompt is also converted into vectors so it can be processed by the model's existing vector-based "rules." The prompt is literally the same kind of data as the rule.
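A toy NumPy sketch of that point (the dimensions are invented for illustration): the learned weights and the embedded prompt are the same kind of object, arrays of floats, and "applying the rules" is just arithmetic between them.

```python
import numpy as np

rng = np.random.default_rng(42)

# The model's learned "rules": arrays of floats (toy sizes).
weights = rng.standard_normal((8, 4))   # weight matrix
bias = rng.standard_normal(4)           # bias vector

# The prompt after tokenization and embedding: also arrays of floats.
prompt = rng.standard_normal((3, 8))    # 3 tokens, 8-dim embeddings

# One layer of processing: the prompt and the "rules" meet as the
# same kind of data, combined by plain matrix arithmetic.
activations = prompt @ weights + bias

print(type(weights) is type(prompt))    # both are numpy arrays
print(activations.shape)
```

There's no categorical boundary in the math between "program" and "input"; both are tensors, which is exactly why the rules-vs-data diagram breaks down.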
andrewstellman.bsky.social
Yes, and I find that diagram problematic, too. At best it's an oversimplification, but honestly, I think it's kind of nonsensical.

In a machine learning model, the "rules" are the learned parameters, like weights and biases, which are fundamentally just another form of data.
andrewstellman.bsky.social
The real skill is knowing when to ride the speed and when to stop, verify, and fix problems before they harden into your codebase.

This piece brings together lessons I’ve learned teaching developers and writing books about teams, architecture, and technical debt.