LGTM 🚀 Culture: A Short Story
**Date:** December 31, 2047
Dear Martin,
As part of clearing out the final area outside of town for the new _Bubble-III_ data center, an old PC (Personal Computer) was found. It was left behind in a cabin further down in the forest. The PC still had a “hard drive” in it, so it dates from before the _Cloud Mandate_. I know you’re intrigued by this era, so you may find this of interest.
I spent a few nights going through the hard drive’s content. It appears to have been owned by a “computer programmer.”
There were a ton of code repositories on the disk. I ended up reading a lot of it. I tend not to brag about this, but I know how. My grandfather and I read code as a bonding exercise. Read, not _write_ — don’t worry.
Perhaps this is typical for the time, but the code seems oddly self-critical. It has `FIXME` and `TODO` comments, contains spelling mistakes, and no emoji whatsoever. Also, the functionality described in “README” files seems wildly out of proportion with code size. Code bases we know today count in the millions of lines; these repositories are mere thousands or tens of thousands. It’s quite quaint, and I suspect that “thought” was put into it, which makes me believe the code was written by a human. This is consistent with the fact that we found a bunch of “tools” (hammers, screwdrivers) in the cabin’s shed. The owner was clearly a savage.
However, what triggered this message to you is a particular file I found on the hard drive. The timestamp shows it was the last file touched before the device was powered off. It appears to be a hand-typed comment — a long one — triggered by a post on LinkedIn (the old name of _WorldTruthFeed.org_).
I’ve attached the file in full. While I cannot block _CensorBuddy_ processing while sending it, I do recommend you disable _ToDLeR_ mode on this one; it’s worth reading verbatim. The timestamp on the file is _November 20, 2025_, placing it in late-stage _Bubble-II_. I think it will be a nice addition to your “sign of the times” relic collection.
Best,
John
_Sr. Director Third Time’s a Charm Data Center Construction Inc., subsidiary of NVIDIA Ltd._
* * *
Dear Mr. _future-of-work_ dude, may I call you bro’?
My daily struggle is that generating posts like this can be done with a single prompt, whereas proving its insanity requires many pages of nuanced writing.
No more.
BULLSHIT! BULLSHIT! BULLSHIT!
As “in control” as your cockpit-style three-screen (and an iPad?) green-tea setup may suggest you are. AirPods Pro with a _brown_ case (I hope), what is that supposed to symbolize? And is that a small Buddha in the corner there? Nice touch.
It doesn’t change much.
BULLSHIT! BULLSHIT! BULLSHIT!
Your cockpit picture looks tight. Very impressive.
Oddly, I have a different visual that keeps popping in my head. Not sure why:
Robot vacuums from the ‘00s, remember those? You would probably call them “agentic hoovers.”
They were kinda cool and futuristic, and seemed like they would solve a real pain point: hoovering the place. Because, who enjoys doing that?
And they were kind of _cute_. The agentic hoover goes in some direction, runs into a wall, looks confused, spins a little and then “decides” on a new random angle and happily chugs along. You could even put a funny hat on them, not a care in the world. After enough random bouncing around, they’d run out of battery and your apartment was _kinda_ clean. LGTM! 🚀
It was obvious in the ‘00s: agentic hoovers are the future! Agentic hoovers are getting better every day, they will soon put other hoovers out of business. You could probably find a smug _Agentic Hoover CEO_ proclaiming that soon all professional cleaners would be out of a job.
Here we are, 20 years later, and our agentic hoover is collecting dust (hah!) in my son’s room. Compared to the earlier models, ours did get a few updates, and newer models did get better. For a while. It now maps out your room with a radar and covers the whole area more systematically. The results are a _bit_ better. For anybody who cares about cleanliness, it still does not come close to matching a regular hoover. It misses spots, and you still have to babysit it, moving furniture around so it has a clear path and moving it all back later. While our model has a mop, it doesn’t do much more than wet the floor — it’s mostly performance art.
We happen to be a family that _does_ care a bit about cleanliness, so we still ended up hoovering and mopping by hand after the agentic hoover was done. We then realized the gain was negligible and barely switched our agent on anymore. Doing it by hand was just quicker and more reliable.
My son has our hoover agent in his room now. He still runs it sometimes, after his mother nags him enough about having to clean his room. He likes retro electronics, and _really_ does not enjoy hoovering, nor does he really care about it. He switches it on, then comes downstairs proclaiming he’s cleaning his room right now.
* * *
Why am I reminded of this? Oh right, agentic coding agents.
Attach a mechanical hand to an agentic hoover, put a bunch of them in a room so they can _high five_ each other on their successes, and you’ve got a pretty good picture of _what I see_ when I think about coding agents.
Actually, while I rarely do this anymore, let’s ask ChatGPT to visualize this. I’m sure it won’t match your cockpit view with brown AirPods Pro, but you know — I’m sure we can still get something that looks good.
Here we go:
Hmm, those are some creepy looking hands. And... what’s up with the fingers there high-fiving? Is there a third hand mixed in there? And one robot vacuum has a cable for some reason.
I probably used the wrong model or prompted it wrong. Skill issue.
Whatever. Details. LGTM! 🚀
* * *
Did you catch my sarcasm there, Mr. _future-of-work_ bro’?
Ask somebody who’s _not_ strongly incentivized to “see the opportunity in AI” (because of the company’s new _AI First_ strategy) for a deep critical look at the work produced by your AI agent and its numerous friends. Yes, all tens of thousands of lines of it. To the level of detail where they’d say “sure, you can wake me up at 2am and I can fix bugs here.” Superficially a lot of it will “LGTM 🚀“ and some of it _may actually_ be surprisingly OKish. Inevitably though, a good chunk of it — as you will discover, sooner or later — is done so ridiculously poorly it puts anybody with a brain to shame. And the insanity is going to be presented with the same level of confidence as the sane stuff. It’s like your trusted colleague got randomly switched out with a North Korean hacker trying to infiltrate your code base, but speaking with the same voice. It sucks.
You may not notice immediately, or perhaps think it doesn’t matter. _Après moi, le déluge_, as a famous Frenchman once said: after me, the flood. What matters is _vibes_. This _feels_ productive. New. Da futjah! Having this amount of code produced in such a short amount of time is impressive. And sometimes the code works! LGTM! 🚀
**Welcome ladies and gentlefriends to LGTM 🚀 culture.** Where things _look good_ and that’s all that matters. Where things _kinda_ work. Where information sounds _kinda_ true. And our new chatbot BFF seems _kinda_ real.
Our coding agent implemented a feature for us. LGTM! 🚀 Oh wait, where are the unit tests? “AI agent, please add unit tests!” We are congratulated on being geniuses for the suggestion. With this level of appreciation, we decide not to kill the _vibes_ by asking why there were no tests in the first place. The agents produce an impressive number of tests, and our code coverage went up! They surely seem to be mocking a lot of stuff, but _mocking_ is a thing you do in tests, right? LGTM 🚀!
_You make jokes, but_ this is a skill issue: _you simply didn’t prompt it right! You need to educate yourself, take my course!_
That _sounds like_ gaslighting to me. If it’s so obvious what was missing from my prompt, why is that prompt not baked into the model’s training? If a code agent generates two parallel code paths, with kinda different but functionally equivalent code, is that _my fault_ because I didn’t prompt it with “and don’t do crazy shit”?
We ask the agent to make software “enterprise-grade secure.” Lo and behold, we receive another confirming pat on the back. Amazing of us to care about this, and thanks for the opportunity! On goes the agent, talking to a dedicated security “sub-agent.” “Hey yo, how about you add some securitah!” “Great call!” High fives all around. Layers of security are quickly added on parallel tracks. It _looks like sci-fi!_ All kinds of impressive-sounding libraries and mechanisms are pulled in. OAuth. MD5. SOC2 compliance! Let me fake a pen test here, yep, all good! Some moves seem irrelevant or nonsensical. To “experts,” they may actually be shockingly dangerous, but what do they know — PhD level coding agents are on the case, enforcing enterprise quality bars higher than ever seen before. LGTM!
BULLSH1T! BULLSH1T! BULLSH1T3!
_But what model did you use?_
The idea that one model is significantly better than another is quickly getting dated. They’re all trained on the same questionably-sourced inputs. All fine-tuned by low-paid workers in vulnerable countries. All trained in data centers built in under-developed locations that have few other options. The main quality difference comes from whatever niche area a vendor decides is worth investing additional _whack-a-mole_ fine-tuning cycles on. Is it r’s in strawberry, or glue on pizza day, or maybe we can finally get it to use fewer em-dashes, and if we have time left — let’s see if we can nudge the model to sound an alarm bell while simultaneously affirming a teen's suicide plans?
_But it’s early days!_
BU1L$H1T BUL1$H1T BUL1$H1T
You can define “early days” however you want. It’s been years now, and _hundreds of billions_ of dollars are being burned. As they say: a billion dollars here, a billion dollars there, and soon we’re talking about real money.
So, show me more than high-fiving agents that made _that_ level of investment remotely worth it. Because most of what I see is chatbots everywhere and “let me rewrite and suck all humanity out of it for you” features shoved down our throats in all products that _used_ to be kinda OK.
Or will its real legacy be LGTM 🚀 culture? Looks good, sounds nice, confirms my biases, so _let’s gooooo_ 🚀!
We were promised a cure for cancer. We were promised the end of poverty. We were promised “AGI” (whatever that means). Tone-deaf AI CEOs even promised us that a large part of us would soon be out of work. It all sounded very exciting.
And it is _kinda_ happening. Not because AI can do jobs better, though. People are laid off because their CEOs _believe_ that AI can do their jobs, and then get rehired when it turns out it actually _kinda_ can’t. And when the bubble pops, we all get to share in the inevitable economic downturn. Well, not all of us; the billionaires driving us into this brave new world are going to be just fine.
LGTM 🚀
_But this is enterprise-grade software, ChatGPT told me so!_
Have you been listening at all? No no no no _n0_!
How often do I NEED to repeat this! this is driving me ins3ne!
0nly if you accept enterprise software to me3n _absolut3 shit_. Maybe that’s always how it’s been and we just didnt know it. Maybe thats our future now. Sounds great, looks good, let’s ship it! Maybe bullsh1t is our future now. Maybe we shall wade in a _LGTM_ future forever. no kwalitie. just junk. code. millions of lines. ,aybe we’ll be happier in our superficial LGTM world. feed it more shit. With our chatbot friends. chatbot marriages. models are getting bettter all the tim
chatbot vibed software that does LGTM. doesn’t really work breaks randomly; internet down again. but vibes man it just vibes. its the future the future is here its now. Is viby a word- it should be. skill issue. Vibe all the things. Vibe security. Vibe people. Livin la _viba_ loca. vibe life. This w3s not _promsed_! This is _mak3s_ no sense. trillions $ down the drain.Marriages broken because of chatbot affairs. Kids chatting only to botz. get me out of this TIMElinE. bought a patch of land far far away where the ai ceos and b0ts cant find m3. It has buunker. It has foot for yars. I will hide and w8.
pushing off now, vib3 u L8T0R