I develop software using Swift, dabbling with Rust.
I am into homelabbing, using Proxmox, Kubernetes, and Nix.
I like playing and running tabletop games, particularly Pathfinder 2, Mage, and Dungeon Crawl Classics.
the ability to explore a whim, a hunch, a random notion mentioned in an aside
There’s no precedent for it, and as the learning skill develops, the ROI improves
As far as I can tell it's between 5.1 and 10.2 seconds, depending on which end of the 2019 IEA Netflix energy usage […]
[Original post on fedi.simonwillison.net]
RE: https://mastodon.gamedev.place/users/TomF/statuses/115589875974658415
Agents that churn out a thousand lines of code leave you either blindly trusting them or slogging through reviews. These tools should embrace their fallibility.
www.pnas.org/doi/10.1073/...
a lot of top researchers think this is part of the continual learning puzzle
putting this here to force myself to dive deeper into it (later)
hazyresearch.stanford.edu/blog/2025-06...
They show that LLMs implicitly apply an internal low-rank weight update adjusted by the context. It’s cheap (due to the low-rank) but effective for adapting the model’s behavior.
#MLSky
arxiv.org/abs/2507.16003
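The gist of the "implicit low-rank weight update" idea can be sketched numerically. This is my own illustration (not code from the paper): a rank-r correction `A @ B` to a frozen weight `W` changes the layer's behavior, and its effect on an input can be applied in O(d·r) extra work instead of O(d²).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hidden size and a small rank, r << d

W = rng.standard_normal((d, d)) / np.sqrt(d)   # frozen base weight
# Hypothetical context-dependent low-rank update: 2*d*r numbers, not d*d
A = rng.standard_normal((d, r)) / np.sqrt(d)
B = rng.standard_normal((r, d)) / np.sqrt(d)

x = rng.standard_normal(d)
y_base = W @ x
y_adapted = (W + A @ B) @ x        # acts like a cheap weight update

# The same adapted output without ever forming the d-by-d matrix A @ B:
y_cheap = y_base + A @ (B @ x)     # O(d*r) extra work instead of O(d*d)
print(np.allclose(y_adapted, y_cheap))  # True
```

The low rank is what makes the adaptation cheap: the update touches only 2·d·r parameters, yet still steers the full d-dimensional output.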
More parts later, with the Fedi comparisons & misconceptions etc.
Sensational scenes. Busted ass lib website.
reinstate Jessie, this is ridiculous
May JK Rowling choke on a bag of dicks until the world has one less disgusting and harmful billionaire.
oh
friends
i have such delights to show you
github.com/anthropics/c...
*hi ferris!*