Myra Cheng
@myra.bsky.social
2.7K followers 130 following 41 posts
PhD candidate @ Stanford NLP https://myracheng.github.io/
Reposted by Myra Cheng
kaitlynzhou.bsky.social
I'll be at COLM next week! Let me know if you want to chat! @colmweb.org

@neilrathi.bsky.social will be presenting our work on multilingual overconfidence in language models and the effects on human overreliance!

arxiv.org/pdf/2507.06306
Reposted by Myra Cheng
steverathje.bsky.social
🚨 New preprint 🚨

Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.

Yet, people preferred sycophantic chatbots and viewed them as unbiased!

osf.io/preprints/ps...

Thread 🧵
Abstract and results summary
myra.bsky.social
Was a blast working on this with @cinoolee.bsky.social @pranavkhadpe.bsky.social, Sunny Yu, Dyllan Han, and @jurafsky.bsky.social !!! So lucky to work with this wonderful interdisciplinary team!!💖✨
myra.bsky.social
While our work focuses on interpersonal advice-seeking, concurrent work by @steverathje.bsky.social, @jayvanbavel.bsky.social, et al. finds similar patterns for political topics, where sycophantic AI also led to more extreme attitudes when users discussed gun control, healthcare, immigration, etc.!
myra.bsky.social
There is currently little incentive for developers to reduce sycophancy. Our work is a call to action: we need to learn from the social media era and actively consider long-term wellbeing in AI development and deployment. Read our preprint: arxiv.org/pdf/2510.01395
myra.bsky.social
Despite sycophantic AI’s reduction of prosocial intentions, people also preferred it and trusted it more. This reveals a tension: AI is rewarded for telling us what we want to hear (immediate user satisfaction), even when it may harm our relationships.
Rightness judgment is higher and repair likelihood is lower for sycophantic AI; response quality, return likelihood, and trust are higher for sycophantic AI.
myra.bsky.social
Next, we tested the effects of sycophancy. We find that even a single interaction with sycophantic AI increased users’ conviction that they were right and reduced their willingness to apologize. This held both in controlled, hypothetical vignettes and live conversations about real conflicts.
Description of Study 2 (hypothetical vignettes) and Study 3 (live interaction), where self-attributed wrongness and desire to initiate repair decrease, while response quality and trust increase.
myra.bsky.social
We focus on the prevalence and harms of one dimension of sycophancy: AI models endorsing users’ behaviors. Across 11 AI models, AI affirms users’ actions about 50% more than humans do, including when users describe harmful behaviors like deception or manipulation.
Description of Study 1, where we characterize social sycophancy and find it to be highly prevalent across leading AI models.
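For a sense of the comparison behind that "about 50% more" figure, here is a minimal, hypothetical sketch in Python; `is_affirming` stands in for whatever judge (human annotation or an LLM classifier) labels a response as endorsing the user's action, and is not the paper's actual pipeline.

```python
# Hypothetical sketch: compare how often AI vs. human responses endorse the
# user's described action. `is_affirming` is a stand-in judge, not the
# paper's actual measurement pipeline.
from typing import Callable, Sequence

def affirmation_rate(responses: Sequence[str],
                     is_affirming: Callable[[str], bool]) -> float:
    """Fraction of responses labeled as endorsing the user's behavior."""
    return sum(is_affirming(r) for r in responses) / len(responses)

def relative_increase(ai_responses: Sequence[str],
                      human_responses: Sequence[str],
                      is_affirming: Callable[[str], bool]) -> float:
    """Relative increase of the AI affirmation rate over the human rate;
    a value near 0.5 corresponds to 'affirms ~50% more often than humans'."""
    ai = affirmation_rate(ai_responses, is_affirming)
    human = affirmation_rate(human_responses, is_affirming)
    return (ai - human) / human
```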
myra.bsky.social
AI always calling your ideas “fantastic” can feel inauthentic, but what are sycophancy’s deeper harms? We find that in the common use case of seeking AI advice on interpersonal situations—specifically conflicts—sycophancy makes people feel more right & less willing to apologize.
Screenshot of paper title: Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
myra.bsky.social
Thoughtful NPR piece about ChatGPT relationship advice! Thanks for mentioning our research :)
myra.bsky.social
Congrats Maria!! All the best!!
Reposted by Myra Cheng
aolteanu.bsky.social
#acl2025 I think there is plenty of evidence for the risks of anthropomorphic AI behavior and design (re: keynote) -- find @myra.bsky.social and me if you want to chat more about this or our "Dehumanizing Machines" ACL 2025 paper
aolteanu.bsky.social
Our FATE MTL team has been working on a series of projects on anthropomorphic AI systems, for which we recently put out a few pre-prints I'm excited about. While working on these, we tried to think carefully not only about the key research questions but also about how we study and write about these systems.
Reposted by Myra Cheng
abeba.bsky.social
New paper hot off the press www.nature.com/articles/s41...

We analysed over 40,000 computer vision papers from CVPR (the longest-standing CV conference) & associated patents, tracing pathways from research to application. We found that 90% of papers & 86% of downstream patents power surveillance.

1/
Computer-vision research powers surveillance technology - Nature
An analysis of research papers and citing patents indicates the extensive ties between computer-vision research and surveillance.
myra.bsky.social
Aw thanks!! :)
myra.bsky.social
Paper: arxiv.org/pdf/2502.13259
Code: github.com/myracheng/hu...
Thanks to my wonderful collaborators Sunny Yu and @jurafsky.bsky.social and everyone who helped along the way!!
myra.bsky.social
So we built DumT, a method using DPO + HumT to steer models to be less human-like without hurting performance. Annotators preferred DumT outputs for being: 1) more informative and less wordy (no extra “Happy to help!”) 2) less deceptive and more authentic to LLMs’ capabilities.
Plots showing that DumT reduces MeanHumT and has higher performance on RewardBench than the baseline models.
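A loose sketch of the recipe, assuming (an assumption on our end, not necessarily the paper's exact setup) that DPO preference pairs are built by ranking sampled responses with a HumT-style score and preferring the less human-like one; the resulting pairs would then go to a standard DPO trainer such as trl's DPOTrainer.

```python
# Hypothetical sketch of turning a HumT-style score into DPO preference pairs
# that steer a model toward less human-like outputs. The actual DumT pipeline
# may construct its training data differently.
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class DPOPair:
    prompt: str
    chosen: str    # less human-like response, preferred during DPO training
    rejected: str  # more human-like response

def build_dumt_pairs(prompts: Sequence[str],
                     sample_fn: Callable[[str], str],
                     humt_score: Callable[[str], float],
                     n_samples: int = 4) -> List[DPOPair]:
    """For each prompt, sample candidates, score their human-likeness, and
    pair the least human-like (chosen) against the most human-like (rejected)."""
    pairs = []
    for prompt in prompts:
        candidates = [sample_fn(prompt) for _ in range(n_samples)]
        ranked = sorted(candidates, key=humt_score)  # ascending human-likeness
        pairs.append(DPOPair(prompt=prompt, chosen=ranked[0], rejected=ranked[-1]))
    return pairs
```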
myra.bsky.social
We also develop metrics for implicit social perceptions in language, and find that human-like LLM outputs correlate with perceptions linked to harms: warmth and closeness (→ overreliance), and low status and femininity (→ harmful stereotypes).
human-like LLM outputs are strongly positively correlated with social closeness, femininity, and warmth (r = 0.87, 0.47, 0.45), and strongly negatively correlated with status (r = −0.80).
myra.bsky.social
First, we introduce HumT (Human-like Tone), a metric for how human-like a text is, based on relative LM probabilities. Measuring HumT across 5 preference datasets, we find that preferred outputs are consistently less human-like.
bar plot showing that human-likeness is lower in preferred responses
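As a toy illustration of a "relative LM probability" score (an assumption-laden approximation, not the paper's exact HumT definition; see arxiv.org/pdf/2502.13259), one could score a text by how much more probable a language model finds it under a "a person wrote this" framing than under an "an AI assistant wrote this" framing.

```python
# Toy approximation of a relative-probability tone score; the two contrasting
# prefixes are illustrative assumptions, not HumT's actual formulation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def conditional_logprob(text: str, prefix: str) -> float:
    """Sum of log-probabilities of `text` tokens given a conditioning prefix."""
    prefix_ids = tok(prefix, return_tensors="pt").input_ids
    ids = torch.cat([prefix_ids, tok(text, return_tensors="pt").input_ids], dim=1)
    with torch.no_grad():
        logits = lm(ids).logits
    logps = torch.log_softmax(logits[0, :-1], dim=-1)  # predictions for tokens 1..n-1
    targets = ids[0, 1:]
    start = prefix_ids.shape[1] - 1  # first prediction that targets a text token
    return logps[start:].gather(1, targets[start:, None]).sum().item()

def humt_like_score(text: str) -> float:
    """Higher = text reads as more human-like under this toy contrast."""
    human = conditional_logprob(text, "A person texting a close friend wrote: ")
    ai = conditional_logprob(text, "An AI assistant replied: ")
    return human - ai
```

A score like this could then be averaged over preferred vs. dispreferred responses in a preference dataset, which is the shape of the comparison in the bar plot above.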
myra.bsky.social
Do people actually like human-like LLMs? In our #ACL2025 paper HumT DumT, we find a kind of uncanny valley effect: users dislike LLM outputs that are *too human-like*. We thus develop methods to reduce human-likeness without sacrificing performance.
Screenshot of first page of the paper HumT DumT: Measuring and controlling human-like language in LLMs
myra.bsky.social
thanks!! looking forward to seeing your submission as well :D
myra.bsky.social
We also apply ELEPHANT to identify sources of sycophancy (in preference datasets) and explore mitigations. Our work enables measuring social sycophancy to prevent harms before they happen.
Preprint: arxiv.org/abs/2505.13995
Code: github.com/myracheng/el...
myra.bsky.social
Grateful to work with Sunny Yu (undergrad!!!) @cinoolee.bsky.social @pranavkhadpe.bsky.social @lujain.bsky.social @jurafsky.bsky.social on this! Lots of great cross-disciplinary insights:)