jinom.bsky.social
@jinom.bsky.social
Liable to post about random nerdy things.
I also feel kinda stupid about asking for $100 to compare Amp Code token expenditure now. Should've asked for $1000.
January 14, 2026 at 1:18 PM
Huh, not even a blip. Glad I decided to wait, but kinda wild. Powell says he's the target of lawfare, and the market just... shrugged, like it's business as usual and fully priced in.
January 12, 2026 at 8:12 PM
I've been on this session for an hour and I still have context left; usually I'd have compacted twice by now.
January 11, 2026 at 10:22 PM
We know that thinkers increasingly write for LLMs. Oddly enough, the idea that library writers must also write for agents doesn't seem to have percolated as far.
January 11, 2026 at 6:58 PM
Like, I love a graph diagram with shape annotations on the edges, but how easy is it for Claude to parse reliably? And if you do it in text, same thing: something that shows the topology to human eyes does not parse nicely for agents. It increases the cognitive load.
January 11, 2026 at 6:58 PM
For all the talk about Claude Code, I don't see enough talk about designing libraries that are easy for *agents* to use. What are their pain points? What are the limitations of their tools? What kind of visualization is better for them?
January 11, 2026 at 6:58 PM
As in, a sufficiently poorly designed RCT that it would show this no matter the underlying reality?
January 11, 2026 at 6:15 PM
It may be psychological more than anything else, but it did help break down a mental wall from "oh my god what is this paper's exact architecture" to "oh ok this paper implements this graph operation and adds this loss, that makes sense".

Maybe I should make it into a proper repo.
January 11, 2026 at 6:01 PM
I'm surprised this isn't more widespread. Apparently there's one in PyTorch, but not in JAX (which is what I use), even though the concept is largely backend-independent.
January 11, 2026 at 6:01 PM
Have a v0 architecture (a scaled-down nanochat), write it as a graph, and have experiments be operations on this graph. For example, adding a canon layer is pretty easy, since it's just splicing one A -> B edge into A -> conv1d node -> B. It also helps with keeping track of loss components, etc.
January 11, 2026 at 6:01 PM
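A rough sketch of the edge-splicing idea above, purely illustrative: ArchGraph, splice, and the node names are made up here, not taken from any actual repo or from nanochat itself. The point is just that "add a canon-style layer" reduces to replacing one A -> B edge with A -> conv1d -> B.

```python
# Hypothetical sketch: an architecture as an edge set, experiments as graph edits.
# ArchGraph, splice, and the node names are illustrative, not a real library API.

from dataclasses import dataclass, field


@dataclass
class ArchGraph:
    # Directed edges (src, dst); nodes are just string names of modules.
    edges: set[tuple[str, str]] = field(default_factory=set)

    def splice(self, src: str, dst: str, new_node: str) -> None:
        """Replace the edge src -> dst with src -> new_node -> dst."""
        if (src, dst) not in self.edges:
            raise ValueError(f"no edge {src} -> {dst} to splice into")
        self.edges.remove((src, dst))
        self.edges.add((src, new_node))
        self.edges.add((new_node, dst))


# Baseline v0: a toy block written as a graph.
g = ArchGraph({
    ("embed", "attn"),
    ("attn", "mlp"),
    ("mlp", "logits"),
})

# "Experiment": add a canon-style conv1d between attn and mlp
# by splicing it into the existing edge.
g.splice("attn", "mlp", "conv1d")

print(sorted(g.edges))
# [('attn', 'conv1d'), ('conv1d', 'mlp'), ('embed', 'attn'), ('mlp', 'logits')]
```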
It's actually so good. I can talk shit with Gemini, get Codex to do the heavy lifting, and have Claude supervise.
January 4, 2026 at 1:57 PM
(It's generally "do less drugs", usually uttered with a powdered nose)
January 2, 2026 at 3:24 PM