Curtis Puryear
@curtispuryear.bsky.social
400 followers 580 following 10 posts
Assistant Professor at UNCW studying morality, politics, and intergroup conflict. [email protected] http://curtispuryear.com
Reposted by Curtis Puryear
polbehavior.bsky.social
Do ethnic minority interest parties grow through programs, or people? Schaaf, Otjes & Spierings show that DENK’s support in the Netherlands stems mainly from personal & religious networks, while online ties matter less. #ComparativePolitics
Read more:
link.springer.com/article/10.1...
The Role of Networks in Mobilization for Ethnic Minority Interest Parties - Political Behavior
Recently, parties that are run by and for ethnic minority citizens with a migration background have become more prominent. They can be considered a manifestation of ethnic political segregation. A key example of such a party is DENK in the Netherlands. So far, the explanatory literature has focused on how programmatic considerations drive voting for these parties. Other factors, such as the role of social networks in mobilization, have received limited testing and little detailed exploration. Furthermore, the literature on social networks is mainly based on majority populations. To inform our understanding of the role of social networks in voting (in general, but particularly among ethnic minority communities and for ethnic minority interest parties), this paper analyzes voting behavior for DENK, focusing on the role of personal, online and religious networks. The paper uses both qualitative interviews (with bicultural youth in the third largest city of the Netherlands in 2022) and quantitative surveys (the 2021 Dutch Ethnic Minority Electoral Study). Our analysis points to the importance of religious and personal networks for voting for DENK, whereas online networks appear to be less relevant.
link.springer.com
Reposted by Curtis Puryear
eikofried.bsky.social
Intervening on a central node in a network likely does little given that its connected neighbors will "flip it back" immediately. Happy to see this position supported now.

"Change is most likely [..] if it spreads first among relatively poorly connected nodes."

www.nature.com/articles/s41...
Transformation starts at the periphery of networks where pushback is less - Scientific Reports
www.nature.com
Reposted by Curtis Puryear
brendannyhan.bsky.social
Depolarization is not "a scalable solution for reducing societal-level conflict.... achieving lasting depolarization will likely require....moving beyond individual-level treatments to address the elite behaviors and structural incentives that fuel partisan conflict" www.pnas.org/doi/10.1073/...
Reposted by Curtis Puryear
johnpfaff.bsky.social
I mean, we are living in two different realities now, and this really hasn't always been the case.
Partisan views on "more crime": the two parties used to move fairly closely together but are now radically different (90% say crime is up among the GOP vs. 29% among Dems).
Reposted by Curtis Puryear
mierkezat.bsky.social
I’m very excited to share that my paper “Cleavage theory meets civil society: A framework and research agenda” with @eborbath.bsky.social & Swen Hutter has now been published online in @wepsocial.bsky.social (w/ open access funding thanks to @wzb.bsky.social!)

www.tandfonline.com/doi/full/10....
Reposted by Curtis Puryear
danicajdillion.bsky.social
New preprint 🚨

Cognitive bottlenecks make LLMs more morally aligned with people 🧠🤖

We made AI “think” more like people by narrowing its focus to a few key moral cues.

This AI better predicted people’s moral judgments & was more trusted.

🧵 ⬇️
Reposted by Curtis Puryear
dingdingpeng.the100.ci
Ever stared at a table of regression coefficients & wondered what you're doing with your life?

Very excited to share this gentle introduction to another way of making sense of statistical models (w @vincentab.bsky.social)
Preprint: doi.org/10.31234/osf...
Website: j-rohrer.github.io/marginal-psy...
Models as Prediction Machines: How to Convert Confusing Coefficients into Clear Quantities

Abstract
Psychological researchers usually make sense of regression models by interpreting coefficient estimates directly. This works well enough for simple linear models, but is more challenging for more complex models with, for example, categorical variables, interactions, non-linearities, and hierarchical structures. Here, we introduce an alternative approach to making sense of statistical models. The central idea is to abstract away from the mechanics of estimation, and to treat models as “counterfactual prediction machines,” which are subsequently queried to estimate quantities and conduct tests that matter substantively. This workflow is model-agnostic; it can be applied in a consistent fashion to draw causal or descriptive inference from a wide range of models. We illustrate how to implement this workflow with the marginaleffects package, which supports over 100 different classes of models in R and Python, and present two worked examples. These examples show how the workflow can be applied across designs (e.g., observational study, randomized experiment) to answer different research questions (e.g., associations, causal effects, effect heterogeneity) while facing various challenges (e.g., controlling for confounders in a flexible manner, modelling ordinal outcomes, and interpreting non-linear models).
Figure illustrating model predictions. On the x-axis, the predictor: annual gross income in euros. On the y-axis, the outcome: predicted life satisfaction. A solid line marks the curve of predictions, on which individual data points mark model-implied outcomes at incomes of interest. Comparing two such predictions gives us a comparison. We can also fit a tangent to the line of predictions, which illustrates the slope at any given point of the curve.

A figure illustrating various ways to include age as a predictor in a model. On the x-axis, age (the predictor); on the y-axis, the outcome (model-implied importance of friends, with confidence intervals). Illustrated are:
1. age as a categorical predictor, resulting in predictions that bounce around a lot with wide confidence intervals;
2. age as a linear predictor, which forces a straight line through the data points with a very tight confidence band; and
3. age splines, which lie somewhere in between, smoothly following the data but with more uncertainty than the straight line.
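To make the “prediction machine” workflow concrete, here is a minimal sketch in Python using simulated data and statsmodels; the variables (income, life_sat), the simulated log relationship, and the two incomes queried are illustrative assumptions, not the paper’s examples. The marginaleffects package automates queries like these for many model classes; this sketch just performs the two-prediction comparison by hand.

```python
# Minimal sketch of treating a fitted model as a "counterfactual prediction machine".
# All data and variable names here are simulated/illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
income = rng.uniform(10_000, 100_000, n)
# Simulated diminishing-returns relationship between income and life satisfaction
life_sat = 2 + 0.9 * np.log(income) + rng.normal(0, 0.5, n)
df = pd.DataFrame({"income": income, "life_sat": life_sat})

# Fit a model with a non-linear term; its raw coefficient is hard to read directly.
model = smf.ols("life_sat ~ np.log(income)", data=df).fit()

# Query the fitted model: predicted outcomes at two incomes of interest...
grid = pd.DataFrame({"income": [20_000, 60_000]})
preds = model.predict(grid)

# ...and a "comparison": the model-implied change when moving between them.
comparison = preds.iloc[1] - preds.iloc[0]
print(preds.round(2).tolist(), round(comparison, 2))
```

The same two steps (predict on a counterfactual grid, then contrast the predictions) carry over unchanged to models with interactions, categorical predictors, or non-linear links, which is the point of the workflow.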
Reposted by Curtis Puryear
chesdata.bsky.social
The CHES EU team has published a new research note in @electoralstudies.bsky.social describing some trends across the 25 years now covered by our trend file and exploring two new items included in the 2024 wave of the survey: doi.org/10.1016/j.el...
Here’s a summary thread:
1/
Reposted by Curtis Puryear
tompepinsky.com
We live in an era of democratic backsliding. But the terminology of "backsliding" isn't up to the task of making sense of the deep crisis of liberal democracy around the world. I've just finished a working paper that lays out what I think is going on.

tl;dr it's about the state and society

🧵
State, Society, and the Politics of Democratic Backsliding
Recent scholarship on democratic backsliding has focused on measuring its global prevalence and identifying the causal processes and mechanisms that produce or
papers.ssrn.com
Reposted by Curtis Puryear
manikyaalister.bsky.social
We know that a consensus of opinions is persuasive, but how reliable is this effect across people and types of consensus, and are there any kinds of claims where people care less about what other people think? This is what we tested in our new(ish) paper in @psychscience.bsky.social
Screenshot of the article "How Convincing Is a Crowd? Quantifying the Persuasiveness of a Consensus for Different Individuals and Types of Claims"
Reposted by Curtis Puryear
mbarnfield.bsky.social
I have a new article out at @polstudies.bsky.social. In "Electoral Hope", I make the case that supposedly irrational "wishful thinking" is actually a crucial part of how voters make rational sense of their role in democracies.

OA link: doi.org/10.1177/0032...
Title page of article "Electoral Hope" in journal Political Studies.
Reposted by Curtis Puryear
psrm.bsky.social
👅Can moral language boost pro-immigrant messages and be as effective as anti-immigrant messages?

➡️ @kristinabsimonsen.bsky.social shows that pro-immigrant actors are not always bound to lose against the anti-immigrant side www.cambridge.org/core/journal... #FirstView #OpenAccess
Reposted by Curtis Puryear
annerasmussen.bsky.social
🚨 New paper in Science Advances @science.org

Can changing how we argue about politics online improve the quality of replies we get?

T HeideJorgensen, @gregoryeady.bsky.social & I use an LLM to manipulate counter-arguments to see how people respond to different approaches to arguments

Thread 🧵1/n
Reposted by Curtis Puryear
jayvanbavel.bsky.social
I have a new paper on "The Psychology of Virality" with @steverathje.bsky.social

We explain how similar psychological processes (eg preferential attention to negativity, social motives, etc.) drive the spread of information across online and offline contexts: www.sciencedirect.com/science/arti...
Reposted by Curtis Puryear
owasow.bsky.social
Really enjoyed my conversation with @chrislhayes.bsky.social about how protests can shape public opinion. He also generously invited me to share a bit of my personal story which helps put the research in context.
— Apple: podcasts.apple.com/us/podcast/w...
— Spotify: open.spotify.com/episode/2Byd...
The Resistance vs. Trump 2.0 with Omar Wasow
Podcast Episode · Why Is This Happening? The Chris Hayes Podcast · 07/22/2025 · 53m
podcasts.apple.com
Reposted by Curtis Puryear
joshcjackson.bsky.social
English language is filled with trait words like “caring” and “smart”

These words are the currency of personality/social psych, yet key questions remain about their evolution, function, and structure

We take on these questions in a preprint led by @yuanzeliu.bsky.social
osf.io/preprints/ps...
curtispuryear.bsky.social
Here is the link to the preprint: osf.io/preprints/ps...

Huge thanks to my collaborators: @williambrady.bsky.social and @nourkteily.bsky.social, who helped shape this project from its inception. And to @joshcjackson.bsky.social and @ycleong.bsky.social, who helped expand the scope of this project.
OSF
osf.io
curtispuryear.bsky.social
Our results suggest social media can reshape the public square by pulling new topics into (and pushing some people out of) moral debate + turning up the heat on already moralized topics. Key Q: How can we design community norms and/or algorithms to curb moralization without chilling civic engagement?
curtispuryear.bsky.social
Finding 4: Moralization both spread into new topics (hobbies, entertainment) AND intensified within already moralized topics (e.g., politics). But we only observed intensification on Twitter/X, which again suggests platform design may matter for mitigating runaway moralization.
curtispuryear.bsky.social
Finding 3: Moralization rose in two ways: the same users used 3% more moral words each year, BUT extreme voices also gained share. On Reddit we saw only the first (+0.3%/yr); extremists didn’t take over. A hint that long comments, downvotes & lighter engagement ranking blunt selection effects.
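A hypothetical sketch of how these two mechanisms could be separated in user-level data; the data layout, the toy numbers, and the equal-weighting choice for returning users are illustrative assumptions, not the preprint’s analysis.

```python
# Hypothetical decomposition: "same users moralize more" (within-user change)
# vs. "high moralizers gain share" (composition/selection). Toy data only.
import pandas as pd

data = pd.DataFrame({
    "user":  ["a", "b", "c", "a", "b", "d"],
    "year":  [2013, 2013, 2013, 2021, 2021, 2021],
    "rate":  [0.010, 0.020, 0.012, 0.013, 0.024, 0.040],  # moral-word rate per user-year
    "posts": [100,   100,   100,   100,   50,    300],    # posting volume per user-year
})

# Overall post-weighted moralization per year: reflects both mechanisms.
weighted = data["rate"] * data["posts"]
overall = weighted.groupby(data["year"]).sum() / data.groupby("year")["posts"].sum()

# Within-user change: only users present in both years, weighted equally, so shifts
# in who posts (and how much) cannot drive the trend.
stayers = set(data.loc[data["year"] == 2013, "user"]) & set(data.loc[data["year"] == 2021, "user"])
within = data[data["user"].isin(stayers)].groupby("year")["rate"].mean()

print(overall)  # rises through both channels
print(within)   # rises only if the same users use more moral language
```

In this toy example the overall rate more than doubles while the within-user rate rises only modestly, i.e., much of the overall increase comes from a high-moralizing user gaining posting share.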
curtispuryear.bsky.social
What user dynamics drive increases in moralization? Are the same people moralizing more (e.g., social learning), or are the types of people engaged online changing, such that high moralizers dominate discourse (e.g., selection effects)? We find the answer is BOTH.
curtispuryear.bsky.social
Finding 2: Moralization increased relatively less in traditional media. The rate of moral words in the Corpus of Contemporary American English increased, but the rise occurred almost entirely in a single year (1.11% to 1.31% in 2016), and moral words actually decreased over time in the News on the Web corpus.
curtispuryear.bsky.social
Finding 1: Moralization increased significantly on social media. The rate of moral words on Twitter/X increased by 41% from 2013 to 2021 (from 1.28% of words in posts to 1.80%), and word embeddings showed topics shifted .296 SD toward morality. Moral words also increased on Reddit, to a lesser degree: by 6%.
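A hypothetical sketch of the kind of dictionary-based “rate of moral words” measure described in the thread; the tiny lexicon, the tokenizer, and the example posts are placeholders, not the preprint’s actual measurement tools.

```python
# Hypothetical sketch: share of tokens across posts that match a moral-language lexicon.
# The tiny word list below is a placeholder, not the lexicon used in the preprint.
import re

MORAL_WORDS = {"harm", "fair", "unfair", "evil", "justice", "betray", "loyal", "corrupt"}

def moral_word_rate(posts: list[str]) -> float:
    """Fraction of all tokens across the posts that appear in the moral lexicon."""
    total = moral = 0
    for post in posts:
        tokens = re.findall(r"[a-z']+", post.lower())
        total += len(tokens)
        moral += sum(token in MORAL_WORDS for token in tokens)
    return moral / total if total else 0.0

posts_2013 = ["that referee was so unfair", "nice weather today"]
posts_2021 = ["this policy is evil and corrupt", "justice for the victims now"]
print(moral_word_rate(posts_2013), moral_word_rate(posts_2021))
```

Tracking such a rate year by year (and, separately, measuring topic shifts with word embeddings) is the general shape of the comparison reported above; a real pipeline would of course rely on a validated lexicon and far more preprocessing.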
curtispuryear.bsky.social
Social media lets people share their perspectives globally and instantaneously for the first time in history. But it can also incentivize people to boil complex issues into simplistic, moralized narratives. This might create a moralizing shift in discourse, which we identify and explain here.
curtispuryear.bsky.social
New preprint! We developed new measurement tools to examine moralization in ~2B Twitter/X & Reddit posts and ~5M traditional media texts.

Key finding: moralization increased markedly on social media from 2013 to 2021, more than in traditional media, and was associated with multiple user dynamics
🧵👇