michael 📈 👨‍💻
@mikedecr.computer
Quant finance researcher/developer.
Former political scientist, sometimes bike rider.
Long form: mikedecr.computer
two priors: the easy mean is the risky mean + k, and k is constrained to be positive
February 10, 2026 at 3:27 AM
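(A minimal sketch of that prior structure, assuming PyMC; the distributions, scales, and variable names are illustrative, not from the post.)

```python
import pymc as pm

with pm.Model():
    # prior on the risky mean
    mu_risky = pm.Normal("mu_risky", mu=0.0, sigma=1.0)
    # positive offset: a half-normal keeps k >= 0
    k = pm.HalfNormal("k", sigma=1.0)
    # easy mean = risky mean + k, tracked as a derived quantity
    mu_easy = pm.Deterministic("mu_easy", mu_risky + k)
```

The half-normal is just one way to encode the positivity constraint; an exponential or log-normal prior on k plays the same role.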
Speaking of keeping the focus in the code, not the chat, I saw this clip of a new nvim pkg "99" that's more my style maybe. A prompt buffer appears only briefly, and you can still work while an LLM fulfills some request asynchronously.

youtu.be/ws9zR-UzwTE?...
This is the only AI I want to use
YouTube video by The PrimeTime
youtu.be
January 31, 2026 at 5:13 PM
And I guess it's easier with LLMs to create your own integrations.

For example this Claude ∈ Nvim pkg github.com/greggh/claud... lets you bind a fn to send a visual selection into the chat prompt. Stuff like that is good. I want to be in the code, not in the chat.
GitHub - greggh/claude-code.nvim: Seamless integration between Claude Code AI assistant and Neovim
github.com
January 31, 2026 at 5:13 PM
LLMs don't scare me, executives do
January 21, 2026 at 1:44 AM
I need to be bombarded by unsolicited rage bait but in an ethical way
January 20, 2026 at 12:22 AM
I suppose this is also less clunky than popping an element only to maybe re-append it
January 10, 2026 at 3:24 AM
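(The code under discussion isn't shown; a hypothetical plain-Python sketch of the pop-and-maybe-re-append pattern the post is contrasting against.)

```python
items = [3, 8, -2]

# clunkier: remove the last element, then maybe put it right back
last = items.pop()
if last >= 0:
    items.append(last)

# less clunky: peek first, and only pop when it actually needs to go
if items and items[-1] < 0:
    items.pop()
```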
destructuring the tuples here creates a more pleasant reading experience
January 10, 2026 at 3:21 AM
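(The tuples in question aren't shown; a hypothetical before/after in plain Python to illustrate the point.)

```python
positions = [("AAPL", 187.4, 100), ("MSFT", 402.1, 50)]

# indexing into each tuple: correct, but the reader has to track positions
for pos in positions:
    print(f"{pos[0]}: {pos[1] * pos[2]:,.2f}")

# destructuring: each field gets a name right in the loop header
for ticker, price, qty in positions:
    print(f"{ticker}: {price * qty:,.2f}")
```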
i shall spare you from my blegh
January 1, 2026 at 6:32 PM
join the vim gang and control everything
December 13, 2025 at 5:44 PM
editing the clip: "hard rule: no statistics... it isn't worth it"
November 30, 2025 at 5:53 PM
my worry is that this is good armor against slop only to the extent that people have collective energy to join hands and hate it together, which is a shaky foundation
November 9, 2025 at 5:52 PM
the work is never done, matt
November 9, 2025 at 4:58 PM