Lecturer in AI, Government & Policy at the OII (University of Oxford) | Author of "The Materiality of AI" (2027, Bristol University Press) | Associate Editor at Big Data & Society | Investigating algorithmic accountability and environmental impacts
I'm a little 🤷♀️ about this scale, ALSO tested on prolific which isn't always high quality participant samples in my humble opinion as someone who is really skeptical of online psych platforms but w/e, my bigger thought here is that this is kind of more about AI *model* literacy than *usage skills*
January 20, 2026 at 3:19 AM
The environmental problems Latin America faces with the spread of data centres were also discussed: the political circumstances driving their proliferation and the environmental cost borne by the communities near them. Citing the work of @anavaldi.bsky.social 🙌
January 15, 2026 at 10:38 PM
"LLMs can be accurate," proponents argue, "they just need a bit of oversight. And that oversight doesn't have to mean human-in-the-loop. It can come from other LLMs."
Well, clearly not.
December 15, 2025 at 11:41 PM
Yet one more reason we cannot allow LLMs to serve as epistemic grounding is that we cannot triangulate among them the way you can among reasonably independent sources. They bullshit in the same way and end up agreeing with one another about things that are completely false.
December 15, 2025 at 11:36 PM
After my FOI request to UKRI and the debate it sparked on bsky, I followed up, and UKRI acknowledged errors in the original data and confirmed the correct 2025 success-rate percentages*:
*These may rise as 2025 proposals could be evaluated in 2026.
December 14, 2025 at 4:40 PM
I don't know about you but the way my brain works is by analyzing the contents of the entire internet to make an educated guess about what word I should use next.
November 25, 2025 at 2:05 PM
Researchers attending include DPhil students Andrew Bean, Ryan Brown, Franziska Sofia Hafner, @ryanothnielkearns.bsky.social, Harry Mayne and Kaivalya Rawal; Research Assistant Shreyanash Padarha; and faculty members @computermacgyver.bsky.social, Adam Mahdi, @rocher.lc and Chris Russell. 2/2
November 25, 2025 at 9:46 AM
Loved this bit: “The most lucrative users – English-speaking professionals willing to pay $20-200 monthly for premium AI subscriptions – become the implicit template for ‘superintelligence’.”
November 18, 2025 at 10:45 AM