Ben Tappin
@benmtappin.bsky.social
• Assistant professor, London School of Economics and Political Science
• Persuasion, technology, experiments
• benmtappin.com
Reposted by Ben Tappin
🚨New WP "@Grok is this true?"
We analyze 1.6M fact-check requests on X (Grok & Perplexity)
📌Usage is polarized, Grok users more likely to be Reps
📌BUT Rep posts rated as false more often—even by Grok
📌Bot agreement with fact-checks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...
February 3, 2026 at 9:55 PM
Reposted by Ben Tappin
My Centre is a unique place to do a PhD in Philosophy, because you can be in constant contact with experts in veterinary medicine, psychology, zoology and policy and be part of a team united by a shared interest in animal minds. We now have our 1st ever PhD scholarship: www.lse.ac.uk/sentience/phd
PhD Scholarship
www.lse.ac.uk
January 30, 2026 at 7:39 AM
Reposted by Ben Tappin
📢 JOB ALERT! Postdoc opportunity in Political Behaviour & Political Economy (UKRI-funded). Please do share with anyone who might be a great fit. If you’re interested or would like to know more, please feel free to get in touch!! jobs.reading.ac.uk/Job/JobDetai...
Postdoctoral Research Fellow in Quantitative Political Behaviour and Political Economy:Whiteknights Reading UK
The closing date for applications is 23.59 on 22nd February 2026
jobs.reading.ac.uk
January 26, 2026 at 4:37 PM
Reposted by Ben Tappin
🎺 Call for proposals 🎺

1️⃣ replicate an existing experiment
2️⃣ run a novel experiment

on repdata.com

3️⃣ coauthor with Mary McGrath and me to meta-analyze the replications and existing studies
4️⃣ publish your study

details: alexandercoppock.com/replication_...
applications open Feb 1

please repost!
January 27, 2026 at 10:16 PM
I land somewhere between this and the OP. Evaluating the quality of the methods often requires fully understanding the research question and estimand. And that usually requires reading the intro. (Disclaimer: but even then it’s no guarantee😭 cf. www.the100.ci/2024/08/27/l... @dingdingpeng.the100.ci)
I feel very conflicted about this advice: Can you skip parts of papers? Yes! Reading intro+disc first, and methods only optionally, is completely wrong though. IMHO, intro+disc are often lazy world building. Instead, read the methods first to decide whether the rest is even worth looking at.
Even in the AI era, learning to quickly "read" an academic article is an essential skill.

When I started grad school, I thought I had to read every word, in order, for every article I "read".

I don’t do that anymore.

Here’s how I "read" most academic articles:
January 26, 2026 at 7:57 PM
My students are in for a treat next week
January 16, 2026 at 2:36 PM
Reposted by Ben Tappin
🚨 New in Nature+Science!🚨
AI chatbots can shift voter attitudes on candidates & policies, often by 10+pp
🔹Exps in US, Canada, Poland & UK
🔹More “facts”→more persuasion (not psych tricks)
🔹Increasing persuasiveness reduces "fact" accuracy
🔹Right-leaning bots=more inaccurate
December 4, 2025 at 8:43 PM
Reposted by Ben Tappin
🚨 New working paper 🚨

We often see populist parties like Reform UK blame higher energy bills on climate change policies. What are the political consequences of this strategy?

Very early draft; comments and criticisms are welcomed!

full draft: z-dickson.github.io/assets/dicks...
November 18, 2025 at 3:39 PM
Reposted by Ben Tappin
"While testing one dimension at a time can yield simple results, those effects may not generalise to richer, real-world contexts."

Read our new POAL Methods Briefs on Conjoint Experiments from Thomas Robinson!

Link: www.poal.co.uk/research/met...
Public Opinion Analytics Lab
The website of the Public Opinion Analytics Lab
www.poal.co.uk
November 10, 2025 at 8:47 AM
Insightful long-read:
"With AGI [artificial general intelligence], powerful actors will lose their incentive to invest in regular people–just as resource-rich states today neglect their citizens because their wealth comes from natural resources rather than taxing human labor."
intelligence-curse.ai
The Intelligence Curse
This series examines the incoming crisis of human irrelevance and provides a map towards a future where people remain the masters of their destiny.
intelligence-curse.ai
October 12, 2025 at 10:44 AM
Reposted by Ben Tappin
🗞️ 🤖 Weekend reading anyone? For the launch of @transformernews.ai as a standalone publication, they invited me to contribute a piece on what persuasive AI might mean for democracy and elections.

Here’s the result…

buff.ly/OJsNmpK
AI is persuasive, but that’s not the real problem for democracy
Opinion: Felix M Simon argues that AI is unlikely to significantly shape election results in the near future, but warns that it could damage democracy through a steady erosion of institutional trust.
www.transformernews.ai
October 3, 2025 at 5:19 PM
Reposted by Ben Tappin
WE ARE HIRING! 2 Lecturers in Quantitative Social Science. Want a friendly interdisciplinary department in one of the world's most vibrant cities? This just might be for you.

Apply by: 10 Oct

www.ucl.ac.uk/work-at-ucl/...
September 1, 2025 at 1:59 PM
Reposted by Ben Tappin
Wrote about how the UK's online age verification requirements are already proving to be a disaster (as UK regulators were clearly warned they would be) and how unhelpful the UK's response to this mess has been, including their tech minister saying anyone who complains supports predators.
August 5, 2025 at 12:31 AM
Reposted by Ben Tappin
The largest investigation of AI persuasion with 76,977 participants across 3 large-scale experiments. Excellent work by @ox.ac.uk PhD candidate @kobihackenburg.bsky.social.

19 LLMs. 707 political issues. 466,769 fact-checkable claims evaluated.

arxiv.org/abs/2507.13919

#AcademicSky #MLSky #PhDSky
July 24, 2025 at 7:56 PM
Reposted by Ben Tappin
I am so proud of the brilliant @oii.ox.ac.uk DPhil @kobihackenburg.bsky.social and this wonderful “bees-knees” paper on conversational AI and political persuasion @AISecurityInst - it is a “must-read”, comments welcome!
Today (w/ @ox.ac.uk @stanford @MIT @LSE) we’re sharing the results of the largest AI persuasion experiments to date: 76k participants, 19 LLMs, 707 political issues.

We examine “levers” of AI persuasion: model scale, post-training, prompting, personalization, & more! 

🧵:
July 22, 2025 at 7:54 AM
A great post, this point especially:

"I don’t mean to argue all research needs to be slow and fully documented. When we are just starting in a new area, it’s chaos. But at some point, by the time results are reported, the workflow needs to be professionalized. Research is not a hobby. It’s a job."
July 22, 2025 at 8:20 AM
Reposted by Ben Tappin
VERY excited about this massive AI persuasion experiment - big effects on UK policy positions, driven mostly by post-training (rather than hyperpersonalization or model scale); and the more info the model provides, the more persuasive it is
Today (w/ @ox.ac.uk @stanford @MIT @LSE) we’re sharing the results of the largest AI persuasion experiments to date: 76k participants, 19 LLMs, 707 political issues.

We examine “levers” of AI persuasion: model scale, post-training, prompting, personalization, & more! 

🧵:
July 21, 2025 at 4:44 PM
Reposted by Ben Tappin
Exciting new research by @benmtappin.bsky.social and colleagues ⬇️
Today (w/ @ox.ac.uk @stanford @MIT @LSE) we’re sharing the results of the largest AI persuasion experiments to date: 76k participants, 19 LLMs, 707 political issues.

We examine “levers” of AI persuasion: model scale, post-training, prompting, personalization, & more! 

🧵:
July 21, 2025 at 5:40 PM
👇New experiments in which we aimed to map the levers and scope of political persuasion with conversational AI models.

It was a tremendous privilege to lead on this work alongside the brilliant @kobihackenburg.bsky.social. The paper is packed with results and we'd love your comments!
Today (w/ @ox.ac.uk @stanford @MIT @LSE) we’re sharing the results of the largest AI persuasion experiments to date: 76k participants, 19 LLMs, 707 political issues.

We examine “levers” of AI persuasion: model scale, post-training, prompting, personalization, & more! 

🧵:
July 21, 2025 at 4:32 PM
Have now read this paper in detail. It’s a tour de force. If you’re interested in the potential impact of AI on election outcomes you should put it by your bedside. Key takeaway: let’s remain alive to, but healthily skeptical of, the possibility of large impacts: knightcolumbia.org/content/dont...
July 20, 2025 at 12:11 PM
Reposted by Ben Tappin
New paper in PSPB! journals.sagepub.com/doi/10.1177/...

Well, actually, not "new". We first put this paper online way back in Dec 2022... in any case, we think it's really cool!

We find that conspiracy believers tend to be overconfident & really don't seem to realize that most disagree with them
May 29, 2025 at 5:18 PM
Reposted by Ben Tappin
New Substack post

It's very personal: my story of a 20-year academic career, and the many challenges of theoretical and cross-disciplinary work

As I put it in the subtitle: There is a lot of success and a lot of pain here, and no happy ending

thomscottphillips.substack.com/p/happy-in-t...
Happy In Theory
This is the short story of my long, 20-year search for a stable academic home. There is a lot of success and a lot of pain here, and no happy ending.
thomscottphillips.substack.com
May 23, 2025 at 9:28 AM
Reposted by Ben Tappin
I think the current state of social science research is pretty bad and I wrote something for @asteriskmag.bsky.social about it. asteriskmag.com/issues/10/ca...
May 19, 2025 at 3:41 PM