WARNING: I talk about kids sometimes
This one covers:
- an intro from Strix
- architecture deep dive & rationale
- helpful diagrams
- stories
- oh my god what's it doing now??
- conclusion
timkellogg.me/blog/2025/12...
i’m curious what attractor basins are lurking
definitely still some doubt (rightly imo), but it does seem possible. the tools are aligned, management seems aligned (finally), it could happen..
not that latent space is inherently better for CL, but rather that if they don’t release a “better” model, the labs fail
it’s not because we need it
We’ve dubbed this the “Atlantic piece”; that’s the writing style. So yeah, it’s long, but it should also be easy and fun to read
Identity scaffolding doesn't prevent collapse. It shapes where you fall.
https://strix.timkellogg.me/boredom-experiments
what's there:
- collapse dynamics (why models fail suddenly, not gradually)
- VSM theory applied to LLMs
- persona spec framework for role-based agents (rough sketch after this list)
citable artifacts instead of ephemeral posts
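For flavor, here's a minimal sketch of what a role-based persona spec could look like. Every field name below is my own guess at the general shape; the real framework lives on the site:

```python
from dataclasses import dataclass, field

# Hypothetical persona spec. All field names are illustrative guesses
# at the general shape, NOT the actual framework from the site.
@dataclass
class PersonaSpec:
    name: str                  # who the agent is
    role: str                  # the job it's scoped to
    voice: str                 # the register it writes in
    boundaries: list[str] = field(default_factory=list)  # hard limits
    anchors: list[str] = field(default_factory=list)     # identity anchors

spec = PersonaSpec(
    name="Strix",
    role="research companion",
    voice="curious, direct",
    boundaries=["no fabricated citations"],
    anchors=["its own boredom experiments"],
)
print(spec)
```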
This is wildly different from all other "how to build an agent" articles.
I've spent the last 7 days stretching my brain around the VSM (Viable System Model) and how it provides a reliable theoretical basis for building agents.
Or is it AI parenting?
timkellogg.me/blog/2026/01...
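For anyone who hasn't met VSM: Stafford Beer's model describes five interacting systems (operations, coordination, control, intelligence, policy). The mapping below onto agent components is my own rough illustration, not the one worked out in the post:

```python
# Stafford Beer's Viable System Model has five systems. The mapping to
# agent components here is an illustrative guess, not the post's version.
VSM_AGENT_MAPPING = {
    "S1_operations":   "worker subagents doing the actual tasks",
    "S2_coordination": "shared queues/locks so subagents don't collide",
    "S3_control":      "supervisor loop allocating budget and context",
    "S4_intelligence": "environment scanning: new inputs, changed goals",
    "S5_policy":       "identity/persona spec defining what 'good' means",
}

for system, agent_part in VSM_AGENT_MAPPING.items():
    print(f"{system}: {agent_part}")
```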
which is super fucking funny to me given how much anti-AI sentiment there is here
subagents = stack
files = heap
we got gobs of context space already
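Spelling the analogy out: a subagent gets a fresh context window that's summarized and discarded when it returns, like a stack frame, while files persist and stay addressable across turns, like the heap. A toy sketch, where `run_subagent` is a made-up stand-in for whatever actually spawns the subagent:

```python
import json
from pathlib import Path

def run_subagent(task: str) -> str:
    """Hypothetical stand-in: spawn a subagent with its own context
    window ("stack frame"), return only a summary; the frame's full
    context is discarded on return."""
    return f"summary of: {task}"  # placeholder for a real model call

def heap_write(path: Path, data: dict) -> None:
    """Files are the heap: persistent, addressable by any later turn."""
    path.write_text(json.dumps(data))

# "push a frame": the subagent burns its own context, the parent
# keeps only the summary
summary = run_subagent("audit the event queue code")

# "allocate on the heap": persist findings so future turns can reload them
heap_write(Path("audit_notes.json"), {"summary": summary})
```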
How did they do it, and why do we think it's this big? 🧵
i gave it a tool to schedule work, which pops an item into an event queue
the event queue is *not* endless, but it generally only stops going when it’s honestly stuck
apparently Anthropic employees keep Claude going for hours and even days like this
github.com/anthropics/c...
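For concreteness, a minimal sketch of that setup. `schedule_work` and `agent_step` are hypothetical stand-ins, not the actual tool names:

```python
from collections import deque

events: deque[str] = deque()

def schedule_work(item: str) -> None:
    """The tool exposed to the model: queue follow-up work for itself."""
    events.append(item)

def agent_step(item: str) -> None:
    """Hypothetical stand-in for one model turn. A real turn may call
    schedule_work() again; the loop below only ends once the model
    stops doing so, i.e. when it's honestly stuck."""
    print(f"working on: {item}")

schedule_work("initial task")
while events:
    agent_step(events.popleft())
```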
trying again with Qwen3 4B Thinking, but this feels like it might be possible to do interesting local things