Nick Rempel
nrempel.com
@nrempel.com
15K followers 1.7K following 3.5K posts
Software engineer @shopify.com. Mostly just curating funny/interesting images here these days.
Totally agree that it's not a silver bullet. But I think we can tilt the build vs import scale much further in favour of build now.

But yes, I'm definitely not suggesting we codegen rustls or something as some others in this thread suggest. (I addressed that directly in the post)
I wanted to update my personal site. So naturally the first step was to create a new static site generator.

Introducing Tiny, the ridiculously simple static site generator: a single ~40-line bash file that leverages awk and outsources the Markdown parsing to cmark, its only dependency.
GPT 5 Pro is so insanely good.

I have it thinking about something in 10-minute chunks at all times.
Love the new @cloudflare.social toolbar. Great work.
If you have been using the old storyline that OpenAI would declare AGI to void the deal, update your model. The revised terms shift this from a cliff to a controlled process with longer-dated IP certainty.

Definition I am using for AGI:
• Strategy is more flexible on both sides. Microsoft can build independently. OpenAI can partner outside some previous constraints while keeping guardrails around core IP
• The governance risk is reduced. The decision, and the economic triggers tied to it, move from a unilateral board call to a verification gate
What this means in practice:

• The incentive to declare AGI early is weaker. A declaration alone does not change Microsoft’s commercial rights. Verification is required, and Microsoft’s product runway now stretches to 2032
• Revenue share continues until AGI is verified by the panel. The arrangement ends once verification occurs
• Other business terms also shifted. Microsoft no longer has a cloud right of first refusal. OpenAI committed to very large incremental Azure spend, as has been widely reported
• Microsoft’s access to OpenAI’s research IP lasts until verified AGI or 2030, whichever comes first
• Microsoft can pursue AGI independently. OpenAI can co‑develop some products with third parties
• Any AGI declaration by OpenAI now needs verification by an independent expert panel
• Microsoft’s product and model IP rights run through 2032, including post‑AGI models, with safety guardrails
Everyone debated the so‑called AGI clause between Microsoft and OpenAI. The short version used to be simple. If OpenAI declared AGI before 2030, Microsoft’s access to future OpenAI models would end. That narrative changed on October 28, 2025.

What changed:
Now that it’s no longer a hard limit, it’s worth thinking about what becomes possible. Parallel data processing without extra processes? ML workloads that scale on all cores? What else could we see?
The GIL has always been part of Python. It defined how we wrote code, scaled workloads, and worked around limitations with multiprocessing and async patterns.
The t suffix installs the free-threaded variant. To confirm, run python -VV and look for “free-threading build” in the output.
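You can also check from inside the interpreter. A minimal sketch: `sysconfig`'s `Py_GIL_DISABLED` config var tells you whether the build is free-threaded, and `sys._is_gil_enabled()` (added in 3.13, so guarded here for older versions) tells you whether the GIL is actually active right now.

```python
import sys
import sysconfig

# Py_GIL_DISABLED is 1 when the interpreter was compiled free-threaded
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# sys._is_gil_enabled() (3.13+) reports whether the GIL is currently
# active; fall back to True on interpreters that predate it
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()

print(f"free-threaded build: {free_threaded_build}")
print(f"GIL currently enabled: {gil_enabled}")
```

The two can differ: a free-threaded build can re-enable the GIL at runtime (e.g. via PYTHON_GIL=1), which is why both checks exist.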
Tasks that once needed multiple processes or native extensions can now scale inside a single interpreter.

You can try it today using uv (docs.astral.sh/uv/):

uv self update
uv python install 3.14t
uvx python@3.14t
The transition will take time. Libraries that depend on the current C API need updates for thread safety, and async IO remains the best tool for high concurrency networking. Multiprocessing will still make sense when you need isolation. What changes is the upper bound.
The standard interpreter build still includes the GIL.

The free-threaded build lets Python run CPU-bound code on multiple cores at once. Multithreaded programs and libraries that were bottlenecked by the interpreter lock can now take advantage of real parallel execution.
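As a minimal sketch of what that unlocks (count_primes here is a made-up stand-in for any CPU-bound function): on a free-threaded build the four tasks below can run on separate cores; on a standard GIL build the same code still works, just serialized by the lock.

```python
from concurrent.futures import ThreadPoolExecutor

def count_primes(limit):
    """CPU-bound work: naive trial-division prime count below limit."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

# Plain threads, no multiprocessing: on a free-threaded interpreter
# these workers execute bytecode in parallel
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(count_primes, [10_000] * 4))

print(results)
```

The point is that this is ordinary threading code; nothing about it changes between builds except the upper bound on throughput.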
In case you missed it, Python 3.14 quietly introduced one of the most important changes in its history.

After almost thirty years, the Global Interpreter Lock that limited CPython to one active thread per process is being reworked.
Stricter linting rules, stricter compilers (*cough* Rust), more tests, etc. All of these things have MUCH more value now that we're generating code. Super robust test coverage is now an absolute must. Companies that have already been investing in this are reaping the rewards.
But now, with tools like Claude Code and Codex, the equation has changed. The more confidently you can trust generated code, the faster you can ship.