Patrick Liu
@patrickpliu.bsky.social
Columbia Political Science | PhD Student
patrickpliu.bsky.social
Our study draws renewed attention to the distinction between beliefs and attitudes. It also showcases how LLMs can be used to peer into belief systems. We welcome any feedback!
patrickpliu.bsky.social
Across 2 studies, focal + distal counterarguments reduced focal + distal belief strength (respectively). But focal arguments had larger and more durable effects on downstream attitudes.

We explore mechanisms in the paper, e.g., ppl recalled focal args better than distal args a week later.
patrickpliu.bsky.social
Ex: Respondent said they care about public infrastructure.

In the same wave, they held the following convo with an AI chatbot. After GPT synthesized a summary attitude, focal belief, and distal belief, they saw treatment/placebo text and answered pre- and post-treatment Qs.
patrickpliu.bsky.social
Ordinarily, a design that a) elicits personally important issues + relevant beliefs through convos, b) uses tailored treatments, & c) measures persistence of effects would require 3 survey waves and immense resource/labor costs.

We overcome these issues (+ replicate) using LLMs.
patrickpliu.bsky.social
We engaged ppl in direct dialogue to discuss an issue they care about and the reasons for their stance. We generated a “focal” belief from this text convo and a less relevant “distal” belief, then randomly assigned a focal belief counterargument, distal argument, or placebo text.
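A rough sketch of that assignment step — the function names, seeding scheme, and arm labels are my own illustration, not the authors' code:

```python
import random

# Three experimental arms described in the thread: a counterargument against
# the focal belief, one against the distal belief, or placebo text.
ARMS = ("focal_counterargument", "distal_counterargument", "placebo")

def assign(respondent_id: int) -> str:
    """Deterministic per-respondent randomization into one of three arms."""
    return random.Random(respondent_id).choice(ARMS)

# With many respondents, the arms come out roughly balanced.
counts = {arm: 0 for arm in ARMS}
for rid in range(3000):
    counts[assign(rid)] += 1
```

Seeding on the respondent ID just makes the illustration reproducible; any standard randomization procedure would do.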
patrickpliu.bsky.social
Identifying relevant beliefs is challenging! Fact-checking studies rely on databases to identify prevalent misinfo, and network methods map mental associations at a group level, but the beliefs ppl personally treat as relevant on an issue are diverse and shaped by political preferences.
patrickpliu.bsky.social
We build on classic psych models that represent attitudes as weighted sums of beliefs about an object. The impact of belief change on subsequent attitude change increases with the belief’s weight, capturing its relevance. Low relevance = small effect of info on attitudes.
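A minimal sketch of that weighted-sum model — all numbers here are made-up illustrations, not values from the study:

```python
def attitude(beliefs, weights):
    """Attitude toward an object as a weighted sum of belief strengths."""
    return sum(b * w for b, w in zip(beliefs, weights))

beliefs = [0.9, 0.8]   # hypothetical strength of a focal and a distal belief
weights = [0.7, 0.1]   # hypothetical relevance of each belief to the attitude

before = attitude(beliefs, weights)

# The same-sized belief change moves the attitude more when the belief
# carries more weight (i.e., is more relevant).
focal_shift = before - attitude([0.4, 0.8], weights)   # 0.5 * 0.7 = 0.35
distal_shift = before - attitude([0.9, 0.3], weights)  # 0.5 * 0.1 = 0.05
```

This is why an informational treatment can shift a belief without shifting the attitude: if the targeted belief's weight is near zero, so is its contribution.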
patrickpliu.bsky.social
There is a tendency to conclude that attitudes (evaluations of an object) are stickier than beliefs (factual positions) about the object, possibly b/c of motivations to preserve attitudes.

But this assumes beliefs targeted by the informational treatment matter for the attitude.
patrickpliu.bsky.social
Puzzle: Studies widely find learning occurs w/o attitude change. Correcting vaccine misinformation fails to alter vax intentions, reducing misperceptions of the # of immigrants doesn’t reduce hostility, learning about govt spending doesn’t affect econ policy preferences… the list goes on.
patrickpliu.bsky.social
Link: go.shr.lc/4j9My8H

We find arguments targeting relevant beliefs produce strong and durable attitude change—more than arguments targeting distal beliefs. To ID relevant beliefs, we elicited deeply held attitudes + interviewed ppl about their reasons using an LLM chatbot. More on why below!
When Information Affects Attitudes: The Effectiveness of Targeting Attitude-Relevant Beliefs
patrickpliu.bsky.social
🧵 Why do facts often change beliefs but not attitudes?

In a new WP with @yamilrvelez.bsky.social and @scottclifford.bsky.social, we caution against interpreting this as rigidity or motivated reasoning. Often, the beliefs *relevant* to people’s attitudes are not what researchers expect.