Laurence D
@laurencediver.net
laurencediver.net
AI strategy at the FCA. Views mine and only mine.

Recovering academic (Legal {tech, philosophy}, hci, regulation, responsible innovation, and whatnot). Diver rhymes with river.

tbh I'd rather be in Assynt
Reposted by Laurence D
The real breakthrough of the year -- something that matters far, far more than generative AI ever will.
Science has named the seemingly unstoppable growth of renewable energy worldwide as the 2025 Breakthrough of the Year.

Learn more about this year's #BOTY and other big advances in science: https://scim.ag/493Tpgx
December 19, 2025 at 4:14 PM
Just AISI there, using 'post volume' and 'negative sentiment' on the r/CharacterAI subreddit as a source of evidence for the impact of emotional dependence on AI

www.aisi.gov.uk/frontier-ai-...
December 18, 2025 at 12:52 PM
of all places, why the hell would i want a 'year in review' from linkedin
December 17, 2025 at 12:15 PM
Reposted by Laurence D
🔴 REVEALED 🔴

The Institute of Economic Affairs – the anti-state, anti-climate pressure group that "incubated" Liz Truss – was bankrolled by oil giants and Rupert Murdoch’s media empire.

📝 Exclusive findings from @desmog.com 👇
The Institute of Economic Affairs Banked £640,000 from Oil Giants and Murdoch
The Institute of Economic Affairs (IEA) – the anti-state, anti-climate pressure group – received more than £640,000 from fossil fuel companies and Rupert Murdoch’s media conglomerate between 1957 and ...
www.desmog.com
December 11, 2025 at 7:05 AM
Reposted by Laurence D
1/ A longtime Wired editor just wrote a mush-brained essay about how he totally missed the political rot of Silicon Valley (& still doesn't get it).

But in the late 1990s, a Wired journalist warned of a toxic ideology bubbling up from tech. Paulina Borsook has largely been erased. Let's change that
September 24, 2025 at 6:36 PM
"The current lack of oversight departs from established norms: regulatory mechanisms that are standard in other high-impact domains remain largely absent in AI governance"

New study on public attitudes to AI regulation, from the Ada Lovelace Institute: www.adalovelaceinstitute.org/policy-brief...
Great (public) expectations
New polling shows the public expect AI to be governed with far more rigour than current policy delivers
www.adalovelaceinstitute.org
December 5, 2025 at 11:34 AM
This is an actual captioned figure in a published article, and not a bit of marketing imagery. Absolutely unbelievable
"Runctitiononal features"? "Medical fymblal"? "1 Tol Line storee"? This gets worse the longer you look at it. But it's got to be good, because it was published in Nature Scientific Reports last week: www.nature.com/articles/s41... h/t @asa.tsbalans.se
November 27, 2025 at 8:45 PM
Reposted by Laurence D
I wish I didn’t have to share this. But the BBC has decided to censor my first Reith Lecture.

They deleted the line in which I describe Donald Trump as “the most openly corrupt president in American history.” /1
November 25, 2025 at 9:26 AM
Can we take a moment to acknowledge just how bad text-based 'asking' is as UI
November 23, 2025 at 10:03 PM
Reposted by Laurence D
❗ Researchers say they’ve found a universal “jailbreak” for top AI models, through poetry. A new study shows that rephrasing harmful prompts as short poems can bypass safety filters across all major models, raising questions over AI Act compliance.
www.mlex.com/mlex/article...
AI models’ safety features can be circumvented with poetry, research finds | MLex | Specialist news and analysis on legal risk and regulation
A study found that poetic prompts can bypass safety features in leading AI models from OpenAI, Anthropic, Google and others, triggering instructions for building chemical weapons and malware. The rese...
www.mlex.com
November 20, 2025 at 10:54 AM
Reposted by Laurence D
when I was young, i used to wonder how people like this would fare in life, as they didn't seem suited for gainful employment. it turns out, the internet allowed them to accumulate a massive audience of similar idiots, enriching them and turning them into a presidential advisor
November 20, 2025 at 12:58 AM
Cloudflare outage in the UK -- the consequences of concentrating infrastructure in ~10 companies are becoming clearer by the day
November 18, 2025 at 12:49 PM
Interesting how often we seem to equate cleverness with constant, manic activity. I wonder how many smart folk fly under the radar simply because they know how to chill out
November 14, 2025 at 10:08 AM
Reposted by Laurence D
"Sure, our plagiarism machine ingests enormous amounts of copyrighted material without anyone's permission, but it's the users of the machine who are the 𝘳𝘦𝘢𝘭 criminals, your honor." 🙄🙄

Just unbelievable levels of chutzpah.

www.theguardian.com/technology/2...
November 14, 2025 at 2:39 AM
Reposted by Laurence D
We didn't plagiarize, you made us plagiarize by asking questions to which we stole the answers.

"Because its output is generated by users of the chatbot via their prompts, OpenAI said, they were the ones who should be held legally liable for it – an argument rejected by the court."
ChatGPT violated copyright law by ‘learning’ from song lyrics, German court rules
OpenAI ordered to pay undisclosed damages for training its language models on artists’ work without permission
www.theguardian.com
November 14, 2025 at 2:47 AM
Reposted by Laurence D
New NY State law on "AI Companions"

👀

www.governor.ny.gov/news/governo...
November 11, 2025 at 10:51 PM
Typography matters, don't let anyone tell you otherwise
The font would appear to be English 111 Adagio CE, available for the highly presidential sum of just $39.75 USD

Note "The" is not kerned correctly

(see perspective-corrected still for reference)
November 12, 2025 at 9:58 AM
Seen on Reddit
November 8, 2025 at 12:50 PM
Reposted by Laurence D
November 7, 2025 at 3:10 PM
Yoshua Bengio trotting out the nonsense idea that LLMs have volition and goals #FTAISummit
November 6, 2025 at 11:14 AM
banger
October 30, 2025 at 10:33 AM
We talk about enshittification as a tech phenomenon, but applied to everything it basically describes neoliberalism -- constant squeezing of more juice when there's nothing left to give, coupled with a rictus grin telling us it's all fine, everything is just *great*
October 28, 2025 at 6:03 PM
The salient question is why anyone would ever think a system producing output based on existing statistical distributions of text fragments would be a reliable source of truth. That's the real enigma.
October 28, 2025 at 3:22 PM
Watching Magnolia for the first time since ~2001. Some barometer of growing up.
October 16, 2025 at 7:50 PM
Ezra Klein interviewing one of the extremists of the AI nonsense sphere

www.nytimes.com/2025/10/15/o...
Opinion | How Afraid of the A.I. Apocalypse Should We Be?
www.nytimes.com
October 15, 2025 at 4:34 PM