David Rand
@dgrand.bsky.social

Prof at Cornell studying how human-AI dialogues can correct inaccurate beliefs, why people share falsehoods, and ways to reduce political polarization and promote cooperation. Computational social science + cognitive psychology.
https://www.DaveRand.org/

David G. Rand is the Erwin H. Schell Professor and Professor of Management Science and Brain and Cognitive Sciences at Massachusetts Institute of Technology.

Source: Wikipedia
Pinned
🚨In Science🚨
Conspiracy beliefs famously resist correction, ya?
WRONG: We show brief convos w GPT4 reduce conspiracy beliefs by ~20%!
-Lasts over 2mo
-Works on entrenched beliefs
-Tailored AI response rebuts specific evidence offered by believers
www.science.org/doi/10.1126/...
1/
[X->BSky repost]

It's also basically impossible from a practical perspective bc there's no way to have a human expert available on the fly to answer any possible conspiracy theory a participant describes. I'm sure somebody will do something like this at some point, but it's not the direction I'm pursuing

We looked at what happens if you label the AI as an expert vs an AI (doesn't make it any less persuasive to call it an expert). But we haven't tried actual humans - almost certainly humans will do much worse bc they don't have easy access to all the relevant facts etc academic.oup.com/pnasnexus/ar...
Dialogues with large language models reduce conspiracy beliefs even when the AI is perceived as human
Abstract. Although conspiracy beliefs are often viewed as resistant to correction, recent evidence shows that personalized, fact-based dialogues with a lar
academic.oup.com

Reposted by David G. Rand

🧠🏔️ Below I'll share my own and others' presentations from the Society for Judgment and Decision Making conference in #Denver.

Did you attend a session I missed?
Did I fail to tag a presenter?
Feel free to add to the thread!

Long live #openAccess conferencing.

#SJDM #SJDM25 @sjdm-tweets.bsky.social

For a deeper dive, check out this 20 minute talk youtu.be/qVjjcw4w6-Q and try the bot yourself at www.DebunkBot.com

Hats off to lead author Nat Rabb, joint senior author @tomcostello.bsky.social, and collabs @gordpennycook.bsky.social @adamberinsky.bsky.social Alexander Levontin
Debunking Antisemitic Conspiracy Theories using AI - David Rand
YouTube video by David Rand
youtu.be
🚨New WP🚨
Dialogues with our AI DebunkBot:
✔️Reduced belief in antisemitic conspiracy theories among believers
✔️Effect durable at 1+ month
✔️Improved attitudes towards Jews among initially negative participants

🟰Debunking works for deeply rooted, identity-linked conspiracies
osf.io/preprints/ps...

I'll be at SJDM in Denver today through Sunday (I'm presenting on AI political persuasion on Sat @ 2:30pm). Let me know if you'd like to chat!

"Do you think that Trump was involved in crimes allegedly committed by Jeffrey Epstein?"

All:
Yes: 42%
No: 34%

Yes Among:
DEM: 79%
IND: 43%
GOP: 7%

YouGov / Nov 17, 2025

Yes, in Jan 2024 X was certainly more ideologically diverse than BlueSky

Well, this data is from Jan 2024. Unclear what it would look like today...

Interestingly, it's more about some notable high-quality outlets underperforming on engagement, rather than low-quality outlets overperforming:

Reposted by Garry Peterson

It's actually more about some notable high-quality outlets underperforming, rather than low-quality outlets overperforming:

Reposted by David G. Rand

Preprint w/ (rapid) analysis of Grokipedia, showing it to be “highly derivative of Wikipedia”, but differing, often on controversial topics, in that Grokipedia includes content/cites from low-quality (hyper-partisan & conspiracy-laden) sources like Stormfront & Infowars. arxiv.org/pdf/2511.09685
arxiv.org

The paper, in figures:
F1 = Cross-platform correlation between partisan lean and quality
F2 = Correlation between lean and engagement varies across platforms, such that dominant-lean news gets more engagement
F3 = Negative correlation between quality and engagement across all platforms!
Led by @mmosleh.bsky.social w @jennyallen.bsky.social
🚨Out in PNAS🚨
Examining news on 7 platforms:
1)Right-leaning platforms=lower quality news
2)Echo-platforms: Right-leaning news gets more engagement on right-leaning platforms, vice-versa for left-leaning
3)Low-quality news gets more engagement EVERYWHERE - even BlueSky!
www.pnas.org/doi/10.1073/...
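Not from the original thread, but to make the figure summary above concrete, here is a minimal sketch of the kind of domain-level correlation it describes: for each platform, correlate a news domain's quality rating with its partisan lean and with the engagement its links receive. The data file and column names are hypothetical assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch of a domain-level analysis: per platform, correlate
# news-domain quality with partisan lean (F1-style) and with engagement
# (F3-style). File and column names are illustrative assumptions.
import pandas as pd
from scipy.stats import spearmanr

# Expected columns: platform, domain, quality_rating, partisan_lean, engagement
df = pd.read_csv("domain_engagement.csv")

for platform, grp in df.groupby("platform"):
    lean_rho, _ = spearmanr(grp["partisan_lean"], grp["quality_rating"])  # lean ~ quality
    qual_rho, _ = spearmanr(grp["quality_rating"], grp["engagement"])     # quality ~ engagement
    print(f"{platform}: lean~quality rho={lean_rho:.2f}, "
          f"quality~engagement rho={qual_rho:.2f}")
```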

Reposted by David G. Rand

Fact-checks improve accuracy. But can they penalize spreaders of misinfo? At @polbehavior.bsky.social, Jacob Ausubel, Annika Davies and I show that the answer is yes--sometimes. Unknown misinfo producers can be penalized, but well-known figures get off. Link: link.springer.com/article/10.1...
The Reputational Penalty: How Fact-Checking Can Penalize Those Who Spread Misinformation - Political Behavior
Whether or not political leaders pay a price for spreading misinformation has profound implications for democracy. In this paper, we identify the conditions under which corrections of misinformation c...
link.springer.com

I'm trying to understand what this means and I haven't succeeded yet. Any chance you'd want to break it down more for us bskyers?

Reposted by David G. Rand

Adults who were required to use #GenAI to answer LSAT questions did better than a no-AI control group, but the GenAI group also exhibited greater metacognitive inaccuracy. I'd like to see some conceptual replications before drawing firm conclusions. #PsychSciSky #AcademicSky #EduSky
AI makes you smarter but none the wiser: The disconnect between performance and metacognition
Optimizing human–AI interaction requires users to reflect on their performance critically, yet little is known about generative AI systems’ effect on …
doi.org
"While high-quality content is posted more and receives more total engagement across platforms...a given author attracts higher levels of engagement when they post lower-quality content"

"pattern we find seems to be driven more by an underperformance of particularly popular high-quality outlets"
Divergent patterns of engagement with partisan and low-quality news across seven social media platforms | PNAS
In recent years, social media has become increasingly fragmented, as platforms evolve and new alternatives emerge. Yet most research studies a sing...
www.pnas.org

For me, the point of multiverse analysis is for when there are multiple ways of analyzing the data that all seem reasonable. So it's not that you throw in every possible model and let unreasonable specifications drown out the reasonable ones, but rather that you show robustness across the reasonable models
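As an illustration of that point (not from the original post), here is a minimal multiverse-style sketch under a hypothetical dataset: fit the same treatment effect under every defensible covariate specification and look at the spread of estimates across those reasonable models.

```python
# Hypothetical multiverse sketch: estimate the same treatment effect under
# every subset of a small set of defensible covariates, then summarize the
# spread of estimates. Data and variable names are illustrative assumptions.
from itertools import combinations
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: belief_change, treated, age, ideology, education
df = pd.read_csv("study_data.csv")

covariates = ["age", "ideology", "education"]
estimates = []
for k in range(len(covariates) + 1):
    for subset in combinations(covariates, k):
        rhs = " + ".join(("treated",) + subset)  # e.g. "treated + age + ideology"
        fit = smf.ols(f"belief_change ~ {rhs}", data=df).fit()
        estimates.append(fit.params["treated"])

print(f"{len(estimates)} reasonable specifications; treatment effect ranges "
      f"from {min(estimates):.2f} to {max(estimates):.2f}")
```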

Reposted by David G. Rand

Directions of Polarization, Social Norms, and Trust in Societies—a workshop organized by Team Scientist @dgrand.bsky.social, BCFG collaborator Eugen Dimant & colleagues—is now open for registration.

Register before October 30th: sites.google.com/view/polariz...
Regrettably relevant research today

Disguised Repression: Targeting Opponents with Nonpolitical Crimes to Undermine Dissent
www.journals.uchicago.edu/doi/10.1086/...
Serious scholars have examined what happens when we change the number of H1-B visas issued.

Cities that get more H1-B immigrants subsequently see the wages of natives *rise* substantially.

Skilled immigrants bring new ideas, fill labor shortages and make us all more productive.