Luke Thorburn
@lukethorburn.com
Algorithms ∩ Conflict • PhD candidate • King's College London • lukethorburn.com
Reposted by Luke Thorburn
I'm pretty sure the CJEU in Russmedia just casually dismantled 80% of the DSA with one ill-considered judgment. As I read it, it substitutes GDPR rules, not notice-and-takedown rules, for any platform where a user is likely to post content about people. 1/

curia.europa.eu/juris/docume...
CURIA - Documents
curia.europa.eu
December 2, 2025 at 3:50 PM
Reposted by Luke Thorburn
AI presents a fundamental threat to our ability to use polls to assess public opinion. Bad actors who are able to infiltrate panels can flip close election polls for less than the cost of a Starbucks coffee. Models will also infer and confirm hypotheses in experiments. Current quality checks fail.
November 18, 2025 at 9:23 PM
Reposted by Luke Thorburn
💥My report out now💥

📗 *VLOPs - how big is your Polarization Footprint? Towards a metric to give EU citizens transparency about an online systemic risk driving conflict in our societies*

Connecting #polarizationfootprint concept to systemic risk framework of Digital Services Act (#DSA)
November 17, 2025 at 5:52 PM
Reposted by Luke Thorburn
Is social media dying? How much has Twitter changed as it became X? Which party now dominates the conversation?

Using nationally representative ANES data from 2020 & 2024, I map how the U.S. social media landscape has transformed.

Here are the key take-aways 🧵

arxiv.org/abs/2510.25417
October 30, 2025 at 8:09 AM
Reposted by Luke Thorburn
GreenEarth is creating open-source, AI-driven recommender infrastructure for Bluesky. Type a prompt, see your feed change. We are here for the users, the builders, the dreamers. Join us.
greenearthsocial.substack.com/p/introducin...
Introducing GreenEarth
We're building advanced open source algorithms for social media
greenearthsocial.substack.com
October 23, 2025 at 3:08 AM
Reposted by Luke Thorburn
Macron's remarks are notable. Some quotes: "We have been incredibly naive in entrusting our democratic space to social networks that are controlled either by large American entrepreneurs or large Chinese companies, whose interests are not at all the survival or proper functioning of our democracies."
President Macron: “Europeans, let's wake up!

We have been incredibly naive in entrusting our democratic space to social networks.”

defenddemocracy.eu/macron-democ...
October 4, 2025 at 11:57 AM
Reposted by Luke Thorburn
🚨 PhD Position at the University of Amsterdam 🚨

Join my team as a computer scientist / computational social scientist working on LLMs, social media, and politics.

We offer freedom, impact, and an inspiring environment at one of Europe's leading universities.

🔗 werkenbij.uva.nl/en/vacancies...
Vacancy — PhD Position on Improving Social Media Using Large Language Models
The Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam is inviting applications for a fully funded PhD position in the NWO VIDI project "Improving Social Media Using L...
werkenbij.uva.nl
August 25, 2025 at 8:34 AM
Reposted by Luke Thorburn
In the literature, there are two competing explanations for "echo chambers":
1️⃣ Algorithms curate what we see (“filter bubbles”)
2️⃣ People choose like-minded peers (“selective exposure”)

Our new study suggests something surprising:

both explanations might be wrong. 🧵

arxiv.org/abs/2508.10466
Online Homogeneity Can Emerge Without Filtering Algorithms or Homophily Preferences
Ideologically homogeneous online environments - often described as "echo chambers" or "filter bubbles" - are widely seen as drivers of polarization, radicalization, and misinformation. A central debat...
arxiv.org
August 15, 2025 at 7:46 AM
Reposted by Luke Thorburn
Ever since I started thinking seriously about AI value alignment in 2016–17, I've been frustrated by the inadequacy of utility+RL theory to account for the richness of human values.

Glad to be part of a larger team now moving beyond those thin theories towards thicker ones.
July 14, 2025 at 9:29 PM
The whole conference looks great this year too! Talks from Cory Doctorow, Kate Starbird, + Glen Weyl; a workshop on futarchy w. Robin Hanson (straight after ours); and lots of papers on using AI to scaffold human coordination and collective decision-making.

ci.acm.org/2025/
June 30, 2025 at 10:50 AM
🔔

Often our tech policy interventions operate from linear, top-down assumptions that don't account for the complexity they seek to govern.

To dig into this I'm co-organizing a workshop at ACM CI 2025 with Jason Burton, Joe Bak-Coleman, + Naomi Shiffman. You should come!

ci-x-tp.github.io
June 30, 2025 at 10:50 AM
Proposals for Build Peace (arguably the main conference on digital peacebuilding) are now open. This year it's near Barcelona in November. Consider applying!

https://howtobuildpeace.org/attend-the-conference/register/
March 30, 2025 at 3:02 PM
Reposted by Luke Thorburn
🚨 WEBINAR ALERT 🚨 Join KGI on March 25th for a live discussion on designing algorithmic feeds that put people first. As legislation and litigation around algorithms heats up, it’s never been more important to learn how they can be improved.
March 13, 2025 at 2:50 PM
Reposted by Luke Thorburn
EVENT: Join us for Artificial Intelligence and Democratic Freedoms on April 10-11 at
@columbiauniversity.bsky.social & online. Hosted with Senior AI Advisor @sethlazar.org. Co-sponsored by the Knight Institute & @columbiaseas.bsky.social. Panel info in 🧵. RSVP: knightcolumbia.org/events/artif...
Artificial Intelligence and Democratic Freedoms
knightcolumbia.org
March 7, 2025 at 3:31 PM
Connected by Data are seeking examples of participatory digital governance to map out such projects around the world.

https://connectedbydata.org/blog/2025/03/05/participatory-digital-governance
March 7, 2025 at 8:46 PM
This was very much a joint effort with Andrew Konya, Wasim Almasri, Oded Adomi Leshem, Ariel Procaccia, Lisa Schirch, Michiel Bakker @mbakker.bsky.social, and many others.

Looking forward to seeing these kinds of technologies mature!
March 4, 2025 at 7:28 PM
Link for the full paper below, which documents the whole process, including all the ethical precautions we took.

arxiv.org/abs/2503.01769
Using Collective Dialogues and AI to Find Common Ground Between Israeli and Palestinian Peacebuilders
A growing body of work has shown that AI-assisted methods -- leveraging large language models (LLMs), social choice methods, and collective dialogues -- can help reduce polarization and foster common ...
arxiv.org
March 4, 2025 at 7:28 PM
This level of agreement is particularly noteworthy because, at the beginning of the process, the substrate of trust that makes dialogue (and Track II diplomacy) possible among peacebuilders in the region had (understandably) grown fragile.
March 4, 2025 at 7:28 PM
The process resulted in a joint letter to the international community with a set of five demands, each of which has at least 90% support from participants on each 'side'.
March 4, 2025 at 7:28 PM
From April to July 2024, in collaboration with the Alliance for Middle East Peace (ALLMEP), we conducted a series of online collective dialogues with civil society peacebuilders in Israel and Palestine, and used LLMs and bridging-based ranking (sketched below) to surface ideas that had broad support across groups.
March 4, 2025 at 7:28 PM
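For intuition, here is a minimal sketch of the bridging-based ranking idea: score each statement by its approval rate in its least-supportive group, so only ideas endorsed across groups rise to the top. The vote data and names below are hypothetical, and this is a simplified illustration rather than the exact method; the actual pipeline, combining collective dialogues, LLMs, and social choice methods, is documented in the paper.

```python
# Hypothetical, simplified sketch of bridging-based ranking (not the exact
# method from the paper): rank each statement by its approval rate in its
# least-supportive group, so only ideas with broad cross-group support
# surface at the top.

# Hypothetical votes: statement -> group -> list of 1 (agree) / 0 (disagree)
votes = {
    "Statement A": {"group_1": [1, 1, 1, 0], "group_2": [1, 1, 0, 1]},
    "Statement B": {"group_1": [1, 1, 1, 1], "group_2": [0, 0, 1, 0]},
}

def bridging_score(group_votes):
    """Approval rate in the least-supportive group."""
    return min(sum(v) / len(v) for v in group_votes.values())

# Statements that only one group supports get a low score, even if their
# overall approval is high.
ranked = sorted(votes, key=lambda s: bridging_score(votes[s]), reverse=True)
for statement in ranked:
    print(statement, round(bridging_score(votes[statement]), 2))
```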
🔔 (new paper!)

You might have heard of the "Habermas Machine", an AI-human pipeline that is really good at finding common ground between ideologically diverse groups, at least in lab settings.

But can this kind of approach help in real world conflicts?
March 4, 2025 at 7:28 PM
Reposted by Luke Thorburn
Thanks to TIME and Tharin Pillay for this great coverage of our work with @audreyt.org, @mchangama.bsky.social, @lukethorburn.com, Divya Siddarth and Emilie de Keulenaar: time.com/7258238/soci...
Read the full paper here: www.arxiv.org/abs/2502.10834
Social Media Fails Many Users. Experts Have an Idea to Fix It
In a new paper, digital activist Audrey Tang and others emphasize the need for context and community over clicks.
time.com
February 18, 2025 at 11:49 PM
This is joint work with @jonathanstray.bsky.social (Berkeley), @juliehawke.bsky.social (Build Up), and Emillie de Keulenaar (UN).
January 31, 2025 at 3:57 PM
Live near Chicago? Or going to ISA 2025? Want to geek out on conflict theory + coauthor a landmark paper?

We're finding all the ways people distinguish between "good conflict" and "bad conflict" — but we need help, so we're hosting a workshop! You should come!

https://forms.gle/BKxziu8zbZ1oZZqU8
January 31, 2025 at 3:52 PM