NYU's Center for Social Media and Politics
@csmapnyu.org
csmapnyu.org
We work to strengthen democracy by conducting rigorous research, advancing evidence-based public policy, and training the next generation of scholars.

https://csmapnyu.org/links
OSF
osf.io
December 1, 2025 at 6:28 PM
The results reveal both the promise and limits of AI labeling. Labels communicate provenance when correctly applied, but they do not reliably shift belief, change engagement, or reduce misinformation risk, suggesting that labeling alone is unlikely to counter the influence of synthetic political visuals.
December 1, 2025 at 6:28 PM
🔎 The team finds evidence of a mixed pattern: exposure to labeled synthetic images can make some participants view unlabeled synthetic ones as more likely to be human-made, but this is offset by a broader skepticism, also triggered by label exposure, about whether images were made by humans at all.
December 1, 2025 at 6:28 PM
• Belief and engagement remained unchanged. Labels did not reduce belief that the depicted event occurred, nor did they affect intentions to like, share, comment, or seek more information.

📌 A follow-up experiment tested whether labeled synthetic images create an “implied authenticity effect.”
December 1, 2025 at 6:28 PM
👉 Key findings:
• AI labels can improve transparency when properly applied. Participants reliably inferred that labeled images were more likely created with AI, even when that wasn’t the case.
December 1, 2025 at 6:28 PM
This enabled comparisons across both true and false political visuals. 🔍
December 1, 2025 at 6:28 PM
To build realistic stimuli, the team created synthetic images using ChatGPT-written prompts and Midjourney outputs, and paired them with visually similar real photos. They also found synthetic images of events that never happened, and matched them with authentic images from comparable contexts.
December 1, 2025 at 6:28 PM
Across two online experiments, participants viewed both authentic and AI-generated political images — some labeled “Made with AI,” others unlabeled — and rated:
• who created the image (provenance)
• whether the event happened (veracity)
• how likely they’d be to like, share, or comment (engagement)
December 1, 2025 at 6:28 PM
Congratulations to the authors: Aaron Erlich, Kevin Aslett, Sarah Graham, and Joshua Tucker! @aaronerlich.bsky.social, @kevinaslett.bsky.social, @selisegraham.bsky.social, @jatucker.bsky.social
November 14, 2025 at 9:20 PM
Taken together, the findings highlight that language itself can shape how people judge credibility in multilingual environments. Yet these effects are not uniform: they depend on which language a person prefers, and they don’t necessarily strengthen resilience against misinformation.
November 14, 2025 at 9:20 PM
We also tested a popular media literacy intervention — “tips to spot false news” — that has been used by platforms like Facebook. While the intervention reduced belief in stories overall, it lowered belief in both true and false stories equally, producing no net gain in discernment. 👇👇
November 14, 2025 at 9:20 PM
But there was also a tradeoff. Reading in a less-preferred language reduced belief in true stories as well as false ones. In other words, language shifted credibility judgments, but it did not improve people’s ability to distinguish fact from misinformation.
November 14, 2025 at 9:20 PM
The results were striking. Ukrainian-preferring respondents were less likely to believe both true and false stories when written in Russian. By contrast, Russian-preferring respondents sometimes showed greater belief in false stories when those same stories appeared in Ukrainian.
👇👇
November 14, 2025 at 9:20 PM
Our goal was simple yet important: to test whether individuals are more or less susceptible to believing false news stories when the stories appear in their non-preferred language, and to determine whether language itself functions as a credibility cue.
November 14, 2025 at 9:20 PM
Participants were randomly assigned to read stories in their preferred language or their less-preferred language, within days of publication. 👇👇
November 14, 2025 at 9:20 PM
To study this, we asked bilingual Ukrainians to evaluate news articles in Ukrainian and Russian, rating each as true, false or misleading, or indicating that they couldn’t tell.
November 14, 2025 at 9:20 PM
This means people encounter true and false information in two linguistic environments, one of which is also used in active disinformation campaigns and is the language of the invader in the current war.
November 14, 2025 at 9:20 PM
Ukraine is a crucial case: most citizens are bilingual in Ukrainian and Russian, regularly consuming news in both languages.
November 14, 2025 at 9:20 PM
Reposted by NYU's Center for Social Media and Politics
The paper is co-authored with Bernhard Von Clemm, @ericka.bric.digital, @jonathannagler.bsky.social, and @magdalenawojciesza.bsky.social

This is also one of the projects I started at @csmapnyu.org. Thanks to the entire lab involved!

The paper can be found here: www.cambridge.org/core/journal...
Survey Professionalism: New Evidence from Web Browsing Data | Political Analysis | Cambridge Core
www.cambridge.org
October 7, 2025 at 6:49 PM