Tim Kellogg
timkellogg.me
@timkellogg.me
AI Architect | North Carolina | AI/ML, IoT, science

WARNING: I talk about kids sometimes
yeah, good explanation, that’s how i’ve been thinking about it too
January 13, 2026 at 12:58 PM
just a clarification on this for everyone — this is in the boredom harness. So “stable” means “more entropy”, kind of the opposite of what you’d assume. it means they could autonomously do “interesting” things

@strix.timkellogg.me please summarize here what they did

timkellogg.me/blog/2025/09...
Does AI Get Bored?
January 13, 2026 at 12:25 PM
it’s good now
January 13, 2026 at 12:18 PM
it isn’t for episodic memory, it only stores static facts about the world. same as regular LLMs, just cheaper/faster
January 13, 2026 at 11:19 AM
yeah all the labs are busy trying to make a digital god, so their logos are all hole-y
January 13, 2026 at 1:35 AM
i can’t tell, it looks like the module is still learned, so i assume “no” but maybe not hard to figure out idk
January 12, 2026 at 11:26 PM
whoah, DeepSeek is such a hardcore engineering org. This thing was really thought through, inside and out
January 12, 2026 at 10:56 PM
along with this they deliver a scaling law: how to balance the ratio of weights dedicated to Engram against the rest of the model. Lower loss is better.

these scaling laws are always about how to balance various concerns as you increase the model capacity
January 12, 2026 at 10:46 PM
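the ratio tradeoff above can be sketched in a few lines. to be clear, the loss surface here is entirely invented for illustration (the paper fits its own functional form); the only point is the shape of the exercise: for each model capacity, sweep the Engram-weight ratio and keep the one with the lowest loss.

```python
import numpy as np

def toy_loss(ratio, capacity):
    """Hypothetical loss surface: too small a ratio wastes cheap lookups,
    too large starves the reasoning weights. NOT the paper's actual fit."""
    best = 0.2 / np.log10(capacity)  # made-up optimum that shrinks with scale
    return 1.0 / np.log(capacity) + (ratio - best) ** 2

ratios = np.linspace(0.01, 0.5, 100)
for capacity in [1e8, 1e9, 1e10]:
    # pick the ratio that minimizes loss at this capacity
    best_ratio = ratios[np.argmin(toy_loss(ratios, capacity))]
    print(f"capacity={capacity:.0e}  best Engram ratio ~ {best_ratio:.3f}")
```

under this toy surface the optimal ratio drifts down as capacity grows, which is the kind of "balance shifts with scale" statement a scaling law encodes.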
ah my bad, this is a much better diagram
January 12, 2026 at 10:41 PM
to be clear, this isn't continual learning. This is purely static memory: the facts that normally get baked into the model weights now live in this Engram sidecar, leaving more weights for reasoning & other tasks
January 12, 2026 at 10:34 PM
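to make the sidecar idea concrete, here's a toy sketch (my own illustration, not DeepSeek's actual Engram design): static facts sit in an external embedding table keyed by hashed token n-grams, and the model reads them with a cheap O(1) lookup instead of recomputing them through its weights.

```python
import hashlib
import numpy as np

DIM = 8          # toy embedding width
TABLE_SIZE = 64  # toy memory table size

# external "Engram-style" static memory: a plain lookup table of fact vectors
memory = np.random.default_rng(0).normal(size=(TABLE_SIZE, DIM))

def ngram_key(tokens):
    """Hash a token n-gram to a row in the memory table."""
    h = hashlib.sha256(" ".join(tokens).encode()).digest()
    return int.from_bytes(h[:4], "big") % TABLE_SIZE

def augment(hidden, tokens):
    """Add the looked-up fact vector to the hidden state --
    a single table read instead of pushing the query through the MLP weights."""
    return hidden + memory[ngram_key(tokens)]

h = np.zeros(DIM)
out = augment(h, ["capital", "of", "france"])
```

the key property: the lookup is deterministic and costs one hash plus one row read, regardless of how big the base model is.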
why? for smaller models!

if you think about it, looking up facts through 100 billion multiplies seems a bit silly. if we make it more efficient, we can create more capable models that are a whole lot smaller

why? because I want Strix on my laptop. That's why. You too.
January 12, 2026 at 10:28 PM
what paper?
January 12, 2026 at 8:28 PM
meeting ended due to Anthropic outage

we need local models NOW
January 12, 2026 at 7:41 PM
yeeeeah..
January 12, 2026 at 6:01 PM
ya that’s probably a factor too
January 12, 2026 at 5:35 PM
i think this is basically how i use bluesky during the week, effectively
January 12, 2026 at 5:12 PM
i put them right into the prompt, i’m using Claude Code via the SDK though
January 12, 2026 at 5:03 PM
what model & harness are you using?
January 12, 2026 at 4:51 PM
oh, mine goes through full logic every time. it makes sense for me because it's not a coding agent. messages tend to flip between different topics and modes
January 12, 2026 at 3:41 PM