📜 Blog: https://isaaccorbrey.com
🍱 Links: https://blento.isaaccorbrey.com
☕ Ko-Fi: https://ko-fi.com/icorbrey
💛 Liberapay: https://liberapay.com/isaaccorbrey.com
The home page now lists a summary of active lexicons across the entire network. This is a high-level view of ATProtocol network activity that you can drill down into.
tap contact on profile, choose channel, etc
wired: "I'm afraid to publish my code because AIs might train on it"
inspired: "I'm afraid to publish my code because AIs might train on it and get dumber"
Letta Code memory is now git-backed. Any number of agents can swarm over things to learn, remember, and store them in a cohesive format. Massively parallel memory management with standard conflict resolution.
> ask Brendan Eich if the array is a vector or a hashmap
> he doesn't understand
> pull out diagram explaining O(1) vs O(n log n) access time
> he laughs and says "it's a good data structure sir"
> allocate an array
> arr[0] and arr["0"] both resolve to the same bucket
It uses your leftover Claude / Codex budget to surprise you with useful PRs, while you sleep. Love them or leave them.
github.com/marcus/night...
This is by far the most practical model for quick inference on low-end CPUs. Perfect for things like transcription bots, voice-driven UIs, and so forth.
We all need to be better informed to become better haters 💪