Work: IT at University of Sheffield
www.patreon.com/posts/yes-ar...
clune.org/posts/anthro...
first we had '2025-processed'
then '2025-corrected'
then '2025-corrected-fix'
then '2025-final'
we're up to '2025-final-v2'
Via @brendannyhan.bsky.social I haven't linked the original thread because the comments are so wrong ('they can't be using LLMs' etc)
if everyone who ever lived on earth had a pocket universe in which every star was inhabited by earthlike civilization, that would be ~1 trillionth the number of ppl Grok would kill to save Elon Musk.
1) Web based
2) Keyboard shortcut heavy. Ideally Google Reader shortcuts as the muscle memory is real
With current technology, it is impossible to tell whether survey respondents are real or bots. Among other things, this makes it easy for bad actors to manipulate outcomes. No good news here for the future of online-based survey research
Google does seem to be proving that just scaling LLMs is still working
generativehistory.substack.com/p/the-sugar-...
a) model training costs (financial and environmental) are ~5-10x the final run and
b) inference costs (financial and environmental) are smaller than assumed
I'm making heroic assumptions for a). No-one outside OpenAI can answer properly
Prompted by today's @garbageday.email
The obvious counter is that teaching is stepping away, but unis will still have to embrace AI for research
1) Anthropic suggesting "Code Execution with MCP" but not giving examples www.anthropic.com/engineering/... but then I also came across
2) replacing playwright with js scripts mariozechner.at/posts/2025-1...
1/
www-bbc-co-uk.cdn.ampproject.org/c/s/www.bbc....