Laurence Dierickx
@ohmyshambles.bsky.social
Interdisciplinary postdoc researcher / lecturer #AI #fact-checking #journalism #ethics #STS #datascience https://ohmybox.info/
Pinned
New publication alert! From bytes to bylines - A history of AI in journalism practices
www.taylorfrancis.com/chapters/edi...
To read: Social media research tool lowers the political temperature news.stanford.edu/stories/2025... Thanks @patwhite7000.bsky.social
November 29, 2025 at 2:20 PM
To read: Journalist Caught Publishing Fake Articles Generated by AI - "I did not speak with this reporter and did not give this quote." futurism.com/artificial-i...
November 29, 2025 at 2:20 PM
Reposted by Laurence Dierickx
AI slop science? Nature, the world's most prestigious scientific journal, published an article with significant (and very bad) AI-generated elements. The next time your paper is desk-rejected from Nature or other top-tier venues, think about it. www.nature.com/articles/s41...
November 29, 2025 at 8:22 AM
Reposted by Laurence Dierickx
New #AIstories publication by @annesigrid.bsky.social! I love this one: it compares human-told variants of a folktale with LLM-generated variants, finding 1) the implicit is made explicit 2) floatif motifs (fascinating new concept) 3) sex is censored, cannibalism augmented doi.org/10.3390/h141...
doi.org
November 26, 2025 at 11:18 PM
"Imposter accounts, lax moderation, extremism and synthetic content could destroy trust in everything we read online" I’ve already experienced this with several students who explained that they no longer trust online content and therefore choose to avoid it.
Wow. After just 2 years of ChatGPT, there is more AI-generated than human-generated content on the internet.

Reported in the FT www.ft.com/content/ae15...
November 28, 2025 at 9:26 AM
Reposted by Laurence Dierickx
Wow. After just 2 years of ChatGPT, there is more AI-generated than human-generated content on the internet.

Reported in the FT www.ft.com/content/ae15...
November 28, 2025 at 8:03 AM
Reposted by Laurence Dierickx
After the scandal over the AI-written report for the Australian government, which contained fake references, fake quotes, fake studies, and so on, Deloitte has done it again! The same kinds of errors and fabrications have been found in a report the consulting firm produced for the Canadian government fortune.com/2025/11/25/d...
Deloitte allegedly cited AI-generated research in a million-dollar report for a Canadian provincial government | Fortune
In a healthcare report aimed at addressing a nurse and doctor shortage, Deloitte cited several fake studies with real researchers’ names attached.
fortune.com
November 26, 2025 at 8:47 AM
Reposted by Laurence Dierickx
A new paper shows how social media accelerates extremism by flooding users with emotionally charged, divisive content that algorithms naturally amplify, making extremist narratives far more visible, engaging, and persuasive than they would be otherwise.
November 24, 2025 at 8:00 PM
Reposted by Laurence Dierickx
Enough is enough. Bravo to the Swedish publishers who have filed a criminal complaint against Facebook and Meta over scam adverts that steal journalists' identities and misuse trusted media brands pressgazette.co.uk/platforms/fa...
News publishers file criminal complaint against Mark Zuckerberg over scam ads
Facebook scam ads prompt criminal complaint from publishers' group against Meta CEO Mark Zuckerberg
pressgazette.co.uk
November 28, 2025 at 7:18 AM
Reposted by Laurence Dierickx
Slop Evader is a tool from artist and researcher Tega Brain that lets you search the web for results exclusively before November 30, 2022—the day that ChatGPT was released to the public.
'Slop Evader' Lets You Surf the Web Like It’s 2022
Artist Tega Brain is fighting the internet’s enshittification by turning back the clock to before ChatGPT existed.
www.404media.co
November 26, 2025 at 3:59 PM
Reposted by Laurence Dierickx
In August, Adam Raine's parents sued OpenAI, saying their son used ChatGPT as his suicide coach. Today, OpenAI denied responsibility for his death, arguing that the 16-year-old violated its terms of use. It's the company's first legal response to the case.
www.nbcnews.com/news/amp/rcn...
OpenAI denies allegations that ChatGPT is to blame for a teenager's suicide
The family of Adam Raine filed a lawsuit against the AI company in August. On Tuesday, OpenAI said in a new court filing that it is not responsible for the teen's death.
www.nbcnews.com
November 26, 2025 at 12:01 AM
Transparency involves telling audiences not only when AI has been used, but also how and why it has been used. For example, here is a statement from the French newspaper Libération "To our readers - How does Libération use AI?" (transl. from FR to EN) www-liberation-fr.translate.goog/economie/med...
How does «Libération» use AI?
In the interest of transparency toward its readers, «Libé» publishes here the exhaustive list of its uses of artificial intelligence tools.
www-liberation-fr.translate.goog
November 26, 2025 at 7:12 AM
Reposted by Laurence Dierickx
Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

Read more from @benjaminriley.bsky.social: www.theverge.com/ai-artificia...
November 25, 2025 at 7:53 PM
Reposted by Laurence Dierickx
From Years and Years to Black Mirror: the best TV prophecies for how AI will end us all
Will AI take all our jobs? Prevent all crimes from being committed? Or finally develop skills beyond that of a trainee copywriter? Here are television’s finest depictions of our imminent future…
www.theguardian.com
November 25, 2025 at 4:57 PM
Reposted by Laurence Dierickx
This is the tech companies' playbook: first grow the user base, and then we will see ads on ChatGPT. But ChatGPT and other AI chatbots are not like Meta and Google ->
www.nytimes.com/2025/11/23/t...
What OpenAI Did When ChatGPT Users Lost Touch With Reality
www.nytimes.com
November 25, 2025 at 3:14 PM
Reposted by Laurence Dierickx
Can AI systems be trusted to produce accurate transcriptions & translations for journalists? A new briefing from CNTI's AI & Journalism Research Working Group examines 55 studies and finds that while AI models are improving, human review is still needed to ensure accuracy. cnti.org/article/ai-t...
AI Transcription and Translation in Journalism
Working Group Briefing #2: November 2025
cnti.org
November 25, 2025 at 4:15 PM
Reposted by Laurence Dierickx
If 2023 was the year AI slop was embraced by spammers and social media influencers, and 2024 was the year the slop era began in earnest, 2025 was when slop became embedded in our cultural institutions and social spheres.

On the slop layer that we all must navigate now:
Lost in the slop layer
How AI has encrusted our culture and social sphere in a sedimentary layer of slop.
www.bloodinthemachine.com
November 24, 2025 at 9:48 PM
Reposted by Laurence Dierickx
"A new study shows that cutting social media use for just one week can reduce mental health symptoms, like anxiety and depression, in young adults."
Just one week off social media can improve young adults' mental health, study finds
A new study shows that cutting social media use for just one week can reduce mental health symptoms, like anxiety and depression, in young adults.
www.npr.org
November 25, 2025 at 12:58 PM
Reposted by Laurence Dierickx
👉 Turns out: Dutch people are not that skilled at dealing with (gen)AI.

An ASCoR research report co-authored by @claesdevreese.bsky.social on the digital skills of Dutch people found that the population has quite low confidence in their AI abilities — and unfortunately, they're right to think so. 🧵
November 25, 2025 at 12:01 PM
Reposted by Laurence Dierickx
Why Satellite Data Is a Powerful Tool for Building Disaster Resilience
Across Europe, governments are facing a new reality. Disasters are becoming more frequent, more severe, and more costly. The question is no longer whether the next emergency will come, but how prepared will we be when it does.
dlvr.it
November 25, 2025 at 11:06 AM
Reposted by Laurence Dierickx
NEW ISSUE ALERT! #DigitalJournalism Vol 13 Issue 9 is out! This issue covers a range of topics, including articles on #news producers and consumers, #AI and #data #journalism, and #factchecking. Access the articles in this thread below. 👇
November 25, 2025 at 9:31 AM
Reposted by Laurence Dierickx
A new article by me in @theconversation.com about forthcoming research with fab colleagues @snurb.info, @riedlinm.bsky.social, @phzerosounds.bsky.social, @timothyjgraham.bsky.social, and @antmandan.bsky.social. Looking forward to presenting this at the #AANZCA2025 conference this week! 🎉
November 24, 2025 at 3:42 AM
Reposted by Laurence Dierickx
“Meta shut down internal research into the mental health effects of Facebook after finding causal evidence that its products harmed users’ mental health, according to unredacted filings in a lawsuit by U.S. school districts against Meta and other social media platforms.”
Meta buried 'causal' evidence of social media harm, US court filings allege - The Economic Times
Meta reportedly halted internal research into the mental health impacts of Facebook and Instagram after finding causal evidence of harm. Internal documents revealed users reported lower depression and...
m.economictimes.com
November 23, 2025 at 4:46 PM