William J. Brady
@williambrady.bsky.social

Assistant prof @ Kellogg School of Management, Northwestern University. Studying emotion, morality, social networks, psych of tech. #firstgen college graduate

Pinned
👀New preprint! In 3 prereg experiments we study how engagement-based algorithms amplify ingroup, moral and emotional (IME) content in ways that disrupt social norm learning (and test one solution!) w/ @joshcjackson.bsky.social and my amazing lab managers
@merielcd.bsky.social
& Silvan Baier 🧵👇
Out now in Scientific Reports! Despite high correlations, ChatGPT models failed to replicate human moral judgments. We propose tests beyond correlation to compare LLM data and human data.

With @mattgrizz.bsky.social @andyluttrell.bsky.social @chasmonge.bsky.social

www.nature.com/articles/s41...
ChatGPT does not replicate human moral judgments: the importance of examining metrics beyond correlation to assess agreement - Scientific Reports
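A toy illustration of the agreement point (my sketch, not the paper's analysis): simulated ratings can correlate almost perfectly with "human" ratings while still missing them by about a full scale point. Agreement metrics such as mean absolute error or Lin's concordance correlation catch this; Pearson's r alone does not.

```python
# Minimal sketch (illustrative only): high correlation without agreement.
import numpy as np

rng = np.random.default_rng(0)
human = rng.uniform(1, 7, 200)                       # hypothetical human moral ratings, 1-7 scale
model = 0.5 * human + 3.0 + rng.normal(0, 0.1, 200)  # hypothetical "LLM" ratings: compressed and shifted

r = np.corrcoef(human, model)[0, 1]                  # Pearson correlation is near 1
mae = np.mean(np.abs(human - model))                 # yet ratings miss by ~1 scale point on average
cov = np.mean((human - human.mean()) * (model - model.mean()))
ccc = 2 * cov / (human.var() + model.var() + (human.mean() - model.mean()) ** 2)  # Lin's concordance

print(f"r = {r:.2f}, MAE = {mae:.2f}, CCC = {ccc:.2f}")
```
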
So there you have it, twin study estimates were greatly inflated, and molecular data sets the record straight. I walk through possible counter-arguments, but ultimately the uncomfortable truth is that genes contribute to traits much less than we always thought.

Great work by @natematias.bsky.social & Megan Price: public involvement in AI is an important part of rigorous science. AI systems are sociotechnical, meaning that the lived experience of the public is essential for validation, etc.

www.pnas.org/doi/10.1073/...
How public involvement can improve the science of AI | PNAS
New preprint out 📄
“Why Reform Stalls: Justifications of Force Are Linked to Lower Outrage and Reform Support.”

Why do some cases of police violence spark reform while others fade? We look at how people explain them—through justification or outrage.

osf.io/preprints/ps...
🚨Out in PNAS🚨
Examining news on 7 platforms:
1) Right-leaning platforms carry lower-quality news
2) Echo platforms: right-leaning news gets more engagement on right-leaning platforms, and vice versa for left-leaning
3) Low-quality news gets more engagement EVERYWHERE - even Bluesky!
www.pnas.org/doi/10.1073/...

Reposted by William J. Brady

Excited to share a new preprint, accepted as a spotlight at #NeurIPS2025!

Humans are imperfect decision-makers, and autonomous systems should understand how we deviate from idealized rationality

Our paper aims to address this! 👀🧠✨
arxiv.org/abs/2510.25951

a 🧵⤵️
Estimating cognitive biases with attention-aware inverse planning

See preprint for more details on (1) development of our taxonomy, (2) how we measured motive inferences in natural language and (3) how our intervention worked!
doi.org/10.31234/osf...

They also show we might be able to make people more receptive to political dialogue with a political opponent, even when outrage is clearly expressed against people's own political views.

These results help solve a puzzle (cc @steverathje.bsky.social): why do people express outrage while reporting that they don't want to see it in their feeds? We suggest it's because when they express it they typically have behavioral motives, but they assume others have contra-hedonic motives!

Key result #3: Motive inferences were malleable: we developed an intervention that corrected people's motive inferences - increasing people's inferences of behavioral motives in out-partisans made them more willing to have a political conversation, even in the context of outrage

Key result #2: Biased motive inferences predicted greater partisan animosity, and specific inferences of behavioral / contra-hedonic motives predicted willingness to have a conversation. 👆 behavioral motive inference = 👆 willingness to converse, even when it was an out-partisan expressing outrage!

Key result #1: People largely reported that their in-partisans' (and their own) motives for outrage were to raise awareness or inspire action (behavioral motives), but thought political opponents' motives were to shame or troll (contra-hedonic motives), which was a vast overestimation.

In online experiments and a field study on Reddit, we asked users to report their motives for posting outrage and then had observers infer the motives.
✨New preprint! Why do people express outrage online? In 4 studies we develop a taxonomy of online outrage motives, test what motives people report, what they infer for in- vs. out-partisans, and how motive inferences shape downstream intergroup consequences. Led by @felix-chenwei.bsky.social 🧵👇

Reposted by William J. Brady

There has been a sharp rise in moralized language on social media

Two processes explained this shift:
(1) within-user increases in moral language over time
(2) highly moralized users became more active while less moralized users disengaged
osf.io/preprints/ps...
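A toy shift-share decomposition (my illustration, not the paper's method) of how an aggregate rise in moral language can be split into exactly those two pieces: a within-user component and a composition component from who posts more.

```python
# Illustrative decomposition with made-up numbers for three users.
import numpy as np

moral_rate_t1 = np.array([0.10, 0.20, 0.60])   # share of each user's posts using moral language, period 1
moral_rate_t2 = np.array([0.15, 0.25, 0.70])   # period 2: every user moralizes a bit more
post_share_t1 = np.array([0.50, 0.30, 0.20])   # each user's share of all posts, period 1
post_share_t2 = np.array([0.30, 0.30, 0.40])   # period 2: the most moralized user posts a larger share

avg_t1 = post_share_t1 @ moral_rate_t1
avg_t2 = post_share_t2 @ moral_rate_t2

# Midpoint-weighted (Oaxaca-style) split: the two terms sum exactly to the total change.
within = ((post_share_t1 + post_share_t2) / 2) @ (moral_rate_t2 - moral_rate_t1)
composition = ((moral_rate_t1 + moral_rate_t2) / 2) @ (post_share_t2 - post_share_t1)

print(f"total change {avg_t2 - avg_t1:.3f} = within-user {within:.3f} + composition {composition:.3f}")
```
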
Posting is correlated with affective polarization:
😡 The most partisan users — those who love their party and despise the other — are more likely to post about politics
🥊 The result? A loud angry minority dominates online politics, which itself can drive polarization (see doi.org/10.1073/pnas...)

Reposted by Mark J. Brandt

Reminder to apply to the DRRC postdoc fellowship! Deadline is this week.
Are you interested in topics related to conflict and intergroup relations *broadly construed*? Come join us as a postdoc in the Dispute Resolution Research Center! This position is up to 3 years, comes with your own research funding, and a phenomenal network of past DRRC postdocs.
Apply now for Kellogg’s DRRC Postdoc Fellowship, which supports outstanding research in conflict and cooperation, offering dedicated time for scholarship, access to exceptional resources, and a vibrant academic community. Deadline: Nov 1.
t.co/UDZwJCqDw5

Reposted by William J. Brady

Re-posting this because I really like it and I think we need to understand identity from a functionalist perspective more than ever.
osf.io/preprints/ps...
I wrote a chapter on a functionalist account of social identity.

IMO, thinking about identity in an instrumental way helps explain a lot of behavior that seems otherwise baffling.
osf.io/preprints/ps...
1. We (@jbakcoleman.bsky.social, @cailinmeister.bsky.social, @jevinwest.bsky.social, and I) have a new preprint up on the arXiv.

There we explore how social media companies and other online information technology firms are able to manipulate scientific research about the effects of their products.
Great piece on the absurdity of brute force multiverse analyses.

www.pnas.org/doi/10.1073/...
Robustness is better assessed with a few thoughtful models than with billions of regressions | PNAS
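Back-of-the-envelope arithmetic (my illustration) on why multiverse analyses balloon into billions of regressions: independent analytic choices multiply.

```python
# Each independent analytic choice at least doubles the number of specifications.
n_binary_choices = 10        # e.g., exclusion rules, transformations, estimators
n_optional_covariates = 20   # each covariate either included or not
specs = 2 ** (n_binary_choices + n_optional_covariates)
print(f"{specs:,} possible model specifications")  # 1,073,741,824 (~1 billion)
```
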
Can AI simulations of human research participants advance cognitive science? In @cp-trendscognsci.bsky.social, @lmesseri.bsky.social & I analyze this vision. We show how “AI Surrogates” entrench practices that limit the generalizability of cognitive science while aspiring to do the opposite. 1/
AI Surrogates and illusions of generalizability in cognitive science

Last call for data blitz and poster submissions for the Computational Psychology preconference @spspnews.bsky.social! See the thread below for details, and hope to see you in Chicago!
The computational psych preconference is back @spspnews.bsky.social for a full day! This year's lineup:

👉theory-driven modeling: Hyowon Gweon
👉data-driven discovery: @clemensstachl.bsky.social
👉application: me
👉 panel: @steveread.bsky.social, Sandra Matz, @markthornton.bsky.social, Wil Cunningham

Very difficult indeed. We study these types of issues empirically:

osf.io/preprints/os...
🚨 New preprint 🚨

Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.

Yet, people preferred sycophantic chatbots and viewed them as unbiased!

osf.io/preprints/ps...

Thread 🧵

Cool work! Did y'all look at how people update when they discover AI makes an error?

Reposted by William J. Brady

Our new paper finds that AI can overcome partisan #bias

We find that AI sources are preferred over ingroup and outgroup sources--even when people know both are equally accurate (N = 1,600+): osf.io/preprints/ps...

Thanks to our rotating organizers: lead organizer Tessa Charlesworth and co-organizers @chujunlin.bsky.social, Brent Hughes, and Xuechunzi Bai

Post any questions here!

We are now accepting submissions for posters and data blitz! If your research is computational (broadly construed) you should apply! We try to program for a wide range of topics and computational approaches.

Submission guide here: spsp.sharepoint.com/:w:/g/EdvejV...

Deadline: October 23rd 🎃

The computational psych preconference is back @spspnews.bsky.social for a full day! This year's lineup:

👉theory-driven modeling: Hyowon Gweon
👉data-driven discovery: @clemensstachl.bsky.social
👉application: me
👉 panel: @steveread.bsky.social, Sandra Matz, @markthornton.bsky.social, Wil Cunningham