Randall Bennett
@randallb.com
builds
I don't yet have a name for it, but this is my spectrum of "how lazy are you allowed to be with AI."

1 - internal only (slop allowed)
2 - private alpha (no security vulns)
3 - private beta (decent architecture, scalability not required)
4 - public beta (scalability potential)
5 - prod
December 23, 2025 at 7:40 PM
ASAP is too aggressive for me, so I have 3 other levels.

ASAC -- as soon as convenient, when you're ready
ASAR -- as soon as reasonable, I need you to unblock me
ASAFP -- self-explanatory.
December 19, 2025 at 5:02 PM
If ChatGPT were put in a fleshy body and dropped in the wild, would it be intelligent enough to survive?

Probably not.

Maybe this is the next Turing test? But most people would fail it too…
December 17, 2025 at 5:00 PM
Larger context windows enable new use cases; they don't make models instantly smarter. Fully using a 1M-token context window with 100% relevant context would be insanely cool.

But so would just getting deterministic results from smaller context windows.
December 12, 2025 at 10:04 PM
We're hosting a "how to vibe code 101" live stream this week. The target audience is non-technical people who want to learn how to vibe code, but also technical people struggling to organize their vibe-coded projects.

Hope you can come!

luma.com/b6c91diz
October 27, 2025 at 7:14 PM
The most famous public Facebook-ism is "Move fast and break things."

"Done is better than perfect." "Code wins arguments."

I've found that introducing concepts like that in my AGENTS.md / claude.md file tends to drive the AI toward agreeing with me.
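Something in that spirit (an illustrative sketch, not my actual file):

```
## Engineering principles
- Move fast and break things -- bias toward shipping.
- Done is better than perfect -- a working slice beats a polished plan.
- Code wins arguments -- when in doubt, write the diff instead of debating.
```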
October 21, 2025 at 9:03 PM
Pretty wild to think how far ahead Apple was with Siri, given the current state of Apple's personal assistant products compared to something like ChatGPT.

Might be the biggest technology miss of all time.
October 19, 2025 at 5:04 AM
lore drop:

gist.github.com/randallb/00...

My Codex system prompt (also symlinked to Claude)

I added the team culture section recently. It seems to help it ask fewer "why don't we add X as well? Let me know if you need Y"-type questions.
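Roughly the shape of it (a paraphrased sketch of the intent, not the literal gist text):

```
## Team culture
- Don't ask permission for work that's already in scope; state your assumption and keep going.
- Don't offer optional extras ("let me know if you need Y") -- either it's needed for the task or leave it out.
- Save questions for genuine blockers.
```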
October 12, 2025 at 6:43 PM
How do we feel about the personification of AI assistants, i.e. Claude vs. ChatGPT?

I'm honestly not sure... pros and cons for each. What do you think?
October 2, 2025 at 5:43 PM
Hosting an agentic-first coding workshop tomorrow.

After you come to this, you should be able to one-shot most of your features.

events.zoom.us/ev/AvTick2O...
September 30, 2025 at 10:34 PM
I don't think people fully appreciate just how much better GPT-5 is than GPT-4. ChatGPT doesn't really showcase how much better it is, though you can see it if you squint.

GPT-5 agents are just otherworldly at staying on task.
September 30, 2025 at 5:57 AM
If you have the concept of "tribal knowledge" in your company at all, ngmi.

AI agents make it simpler than ever to update docs, and they thrive on accurate, up-to-date docs.
September 29, 2025 at 8:57 PM
I have to convince Codex to rubber-duck with me to fix the most complex problems. Like, I don't really know what's going on with this weird type inference issue I'm having, but if I get Codex to write a little lab-notebook file and then keep working from it, it fixes the issue.
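The ask is nothing fancy; something in this spirit (hypothetical wording and file name):

```
Create notes/type-inference-lab.md and treat it as a lab notebook: record the
symptom, your current hypothesis, the next experiment, and each experiment's
result. Update the notebook after every change, and keep iterating until the
type error is gone.
```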
September 23, 2025 at 6:02 AM
I want to be the Rakim of AI.

Prompts / AI engineering from today will look like the Fresh Prince. I want to introduce flow into the AI world.

Like nobody is going to get this tweet, and that's ok. :)
September 15, 2025 at 12:57 AM
I think there's a concept I'm landing on: "Inference Engineer." Context engineering is about making the whole pipeline good, giving the AI the right context, etc. Inference engineering would be about considering how specific information flows into an LLM.
September 11, 2025 at 9:04 PM
So we run our standups via Claude Code as facilitator (it gathers the data, records info, etc.).

Today it was ineffective because we updated a different runbook, which caused it to not work right.

We don't have evals on it. Evals are required for anything mission-critical, it turns out.
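Even a dumb smoke-test eval would have caught this. A minimal sketch (file names and section headings are hypothetical, assuming the facilitator writes its summary to a markdown file):

```python
# Minimal smoke-test "eval" for an agent-run standup (sketch; names are hypothetical).
# Idea: after the facilitator runs, assert its output still contains the sections
# the runbook promises, so a runbook edit that breaks the flow fails loudly.
from pathlib import Path

REQUIRED_SECTIONS = ["## Yesterday", "## Today", "## Blockers"]  # hypothetical headings

def check_standup_notes(path: str) -> list[str]:
    """Return a list of problems found in the facilitator's output file."""
    text = Path(path).read_text()
    problems = [f"missing section: {s}" for s in REQUIRED_SECTIONS if s not in text]
    if len(text.strip()) < 100:
        problems.append("notes look empty or truncated")
    return problems

if __name__ == "__main__":
    issues = check_standup_notes("standup-notes/latest.md")  # hypothetical path
    if issues:
        raise SystemExit("standup eval failed:\n" + "\n".join(issues))
    print("standup eval passed")
```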
September 11, 2025 at 4:24 PM
Launching a startup as a remote team is such a bad idea.

Honestly, it'd be better to have a single person launch something, then add team members progressively, than to try to coordinate multiple people across multiple timezones.

Especially in the age of AI.
September 11, 2025 at 6:02 AM
Everything you need to know about GitHub:

Its homepage is literally unusable and completely ignored.
Its default Actions viewer is a node-based thing meant to handle thousands of connected jobs.

Please, someone help us. As bad as SourceForge was, GitHub is now just as bad.
September 11, 2025 at 5:00 AM
I know a lot of people are annoyed by the summarized thought traces from LLMs, but as someone who has deep empathy and a history of trauma, I can tell you the raw thought traces are emotionally draining sometimes.
September 11, 2025 at 12:00 AM
My read on the GPT-5 launch is that the consensus is: meh.

I think that's wrong, though. Think about the evolution thus far:

GPT-3 was like "wow, these sentences are coherent!" 3.5 was like "wow, this is actually useful!" 4 was like "whoa, it's useful and usually doesn't hallucinate!"
September 10, 2025 at 9:02 PM
Don't always use the highest-thinking LLM to do your job. The pattern I've found that works for code:

GPT-5 on minimal for finding all the files required and building the context; low for writing initial code / unit tests / docs.

If low gets stuck, go to medium.

When to use high?
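Whatever the answer there, the escalation part looks roughly like this in API terms (a sketch assuming the OpenAI Responses API's reasoning.effort parameter; the prompts and the "stuck" check are placeholders):

```python
# Sketch of the effort-escalation pattern (illustrative; prompts and the
# "stuck" heuristic are placeholders, not a real harness).
from openai import OpenAI

client = OpenAI()

def run(task: str, effort: str) -> str:
    resp = client.responses.create(
        model="gpt-5",
        reasoning={"effort": effort},  # "minimal" | "low" | "medium" | "high"
        input=task,
    )
    return resp.output_text

# 1. minimal: locate the relevant files and assemble context
context = run("List the files involved in the auth flow and summarize them.", "minimal")

# 2. low: write the first pass of code / tests / docs against that context
draft = run(f"Context:\n{context}\n\nWrite the initial implementation and unit tests.", "low")

# 3. escalate to medium only if the low pass gets stuck (placeholder check)
if "TODO" in draft or "not sure" in draft.lower():
    draft = run(f"Context:\n{context}\n\nPrevious attempt got stuck:\n{draft}\n\nFinish it.", "medium")
```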
September 10, 2025 at 4:03 PM
Codex's actual output is simply better than Claude Code's. I'm consistently using low, then having high go back and code-review, and the result looks like code I'd be proud of having written at FB. It's insane.
September 10, 2025 at 7:58 AM
Claude Code does A LOT more classification before starting tasks than Codex.

That’s what makes it feel so fast.

Right now, it seems like GPT is just using whatever effort level you specify and doing everything in that mode.
September 9, 2025 at 8:56 PM
Codex is like a car: build the context mostly on minimal, then shift to low for light code, then to medium when you actually kick off code (if you have tests, so it can iterate), then to high for YOLO.

Shift gears at the right time or you'll go way slower than you'd like.
September 9, 2025 at 3:59 PM
I will say that all the things I invested in to make Claude work more effectively are helping with GPT. I'm going to explicitly stay out of Claude-only tooling (most notably subagents).
September 9, 2025 at 7:56 AM