Tom Costello
@tomcostello.bsky.social
3.7K followers 220 following 150 posts
research psychologist. beliefs, AI, computational social science. prof at american university.
tomcostello.bsky.social
I’m going to be in Montreal for a few days starting tomorrow for COLM — anyone also at the conference / interested in meeting up, let me know!
Reposted by Tom Costello
florianfoos.bsky.social
This is a valid point, I think. The question is always what type of alternative information gathering processes AI chatbots replace. In the case of medical "self diagnosis", there is some reason to believe that common alternative mechanisms aren't superior.
tomcostello.bsky.social
Is there a strong case for AI helping, rather than harming, the accuracy of people's beliefs about contentious topics? In this
@nature.com Nature Medicine piece (focusing on vaccination), I argue the answer is YES. And it boils down to how LLMs differ from other sources of information.
tomcostello.bsky.social
Maybe you see this as all too rosy, which is fair and maybe even true, but warnings and dismissals (alone) are bad tools, if nothing else. The future isn't set. So yes, I believe we should actively articulate and defend a positive vision in order to reduce harms + capture gains.
tomcostello.bsky.social
Targeted ads have gone too far
tomcostello.bsky.social
Also, incentives are not static; if revenue continues to come from usage fees (rather than ads), maybe helping users reach reliable answers is indeed a profitable/competitive approach. Open question. Plus, I don't imagine these big companies want to replay the social media era's mistakes.
tomcostello.bsky.social
So the problem is incentives. I agree. The incentives are aligned with building the models in the first place, too (hence my first sentence in that quote). Should we not try to identify and bolster a positive vision that underscores potential returns to cooperation, democracy, etc?
Reposted by Tom Costello
smcgrath.phd
Thomas Costello argues that as patients move from WebMD to AI, we might be cautiously optimistic. Unlike earlier tools, LLMs can synthesize vast, shared knowledge, potentially helping users converge on more accurate beliefs.

The major caveat: this holds only as long as the LLMs are not trained on bad data.
Large language models as disrupters of misinformation - Nature Medicine
As patients move from WebMD to ChatGPT, Thomas Costello makes the case for cautious optimism.
www.nature.com
tomcostello.bsky.social
More on that front soon, actually...
tomcostello.bsky.social
I think this is interesting, and it would be worthwhile to convene a group and expose them to this chatbot interaction (it might be much less effective once social dynamics are involved). But the active ingredient, I think, is strong arguments + evidence, and LLMs can surface good arguments.
Reposted by Tom Costello
tomcostello.bsky.social
Conspiracies emerge in the wake of high-profile events, but you can’t debunk them with evidence because little yet exists. Does this mean LLMs can’t debunk conspiracies during ongoing events? No!

We show they can in a new working paper.

PDF: osf.io/preprints/ps...
tomcostello.bsky.social
You mean given everything with the Epstein files?
Reposted by Tom Costello
dgrand.bsky.social
I'm very excited about this new WP showing that LLMs effectively countered conspiracies in the immediate aftermath of the 1st Trump assassination attempt, and that treatment also reduced conspiratorial thinking about the subsequent 2nd assassination attempt
tomcostello.bsky.social
Yeah, I think talking to an LLM that is prompted to behave like ChatGPT is likely to amplify whichever tendencies already exist in a person (including weird beliefs, as has been reported). But our studies give the LLM a very specific goal (e.g., debunking), so the comparison isn't 1:1 in a meaningful way.
tomcostello.bsky.social
We also find this intervention succeeds for vaccine skepticism:

bsky.app/profile/dgra...
tomcostello.bsky.social
Do these effects succeed for non-conspiracy beliefs, like climate attitudes? yes!

bsky.app/profile/dgra...
dgrand.bsky.social
🚨New WP🚨
Using GPT4 to persuade participants significantly reduces climate skepticism and inaction
-Sig more effective than consensus messaging
-Works for Republicans
-Evidence of persistence @ 1mo
-Scalable!
PDF: osf.io/preprints/ps...
Try the bot: www.debunkbot.com/climate-change
Here’s how 👇
tomcostello.bsky.social
Second, why are debunking dialogues so effective? Good arguments and evidence! (and, for unfolding conspiracies, saying "no one knows what's going on, you should be epistemically cautious" may be a strong argument)

bsky.app/profile/tomc...
tomcostello.bsky.social
Last year, we published a paper showing that AI models can "debunk" conspiracy theories via personalized conversations. That paper raised a major question: WHY are the human<>AI convos so effective? In a new working paper, we have some answers.

TLDR: facts

osf.io/preprints/ps...
tomcostello.bsky.social
Some other recent papers from our group on AI debunking:

First, does this work if people think they're talking to a human being? yes!

bsky.app/profile/gord...
gordpennycook.bsky.social
Recent research shows that AI can durably reduce belief in conspiracies. But does this work b/c the AI is good at producing evidence, or b/c ppl really trust AI?

In a new working paper, we show that the effect persists even if the person thinks they're talking to a human: osf.io/preprints/ps...

🧵
tomcostello.bsky.social
Huge thanks to my brilliant co-authors: Nathaniel Rabb (who split the work with me and is co-first author), Nick Stagnaro, @gordpennycook.bsky.social, and @dgrand.bsky.social

We're eager to hear your thoughts and feedback!
tomcostello.bsky.social
Also, the treatment succeeded for both Democrats and Republicans, who endorsed slightly different conspiratorial explanations of the assassination attempts (see figure below for a breakdown)
tomcostello.bsky.social
(The most notable part?) The effect was durable and preventative. When we recontacted participants two months later, after the second assassination attempt, those in the treatment group were ~50% less likely to endorse conspiracies about this new event! The debunking acted as an "inoculation" of sorts.
tomcostello.bsky.social
Did this work? Yes. The Gemini dialogues significantly reduced conspiracy beliefs compared to controls who chatted about an irrelevant topic or just read a fact sheet (d = .38). The effect was robust across multiple measures.

Key figure attached