Pranav Goel
@pranavgoel.bsky.social
120 followers 160 following 14 posts
Researcher: Computational Social Science, Text as Data On the job market in Fall 2025! Currently a Postdoctoral Research Associate at Network Science Institute, Northeastern University Website: pranav-goel.github.io/
Reposted by Pranav Goel
For journalists and especially headline writers: even if a discrete piece of information is true, you've got to think carefully about whether the way you're presenting it helps promote narratives that aren't true.
Big picture: misleading claims are both *more prevalent* and *harder to moderate* than implied in current misinformation research. It's not as simple as fact-checking false claims or downranking/blocking unreliable domains. The extent to which information (mis)informs depends on how it is used!
If you want to advance misleading narratives — such as COVID-19 vaccine skepticism — supporting information from reliable sources is more useful than similar information from unreliable sources, if you have it.
This calls for a reconsideration of what misinformation is, how widespread it is, and the extent to which it can be moderated. Our core claim is that users are *using* information to promote their identities and advance their interests, not merely consuming information for its truth value.
We find that mainstream stories with high scores on this measure are significantly more likely to contain narratives present in misinformation content. This suggests that reliable information — which has a much wider audience — can be repurposed by users promoting potentially misleading narratives.
We do this by looking at co-sharing behavior on Twitter/X. We first identify users who frequently share information from unreliable sources, and then examine the information from reliable sources that those same users also share at disproportionate rates.
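A minimal sketch of that co-sharing logic, in Python with a made-up toy share log: the user IDs, domains, 0.5 flagging threshold, and simple rate-ratio score below are illustrative assumptions, not the paper's actual data or estimator.

```python
from collections import Counter

# Hypothetical (user, domain, is_reliable) share log: purely illustrative.
shares = [
    ("u1", "unreliable-news.net", False),
    ("u1", "washingtonpost.com", True),
    ("u2", "washingtonpost.com", True),
    ("u2", "reuters.com", True),
    ("u3", "unreliable-news.net", False),
    ("u3", "washingtonpost.com", True),
    ("u4", "reuters.com", True),
]

# Step 1: flag users whose shares come disproportionately from unreliable sources.
total = Counter(u for u, _, _ in shares)
unreliable = Counter(u for u, _, ok in shares if not ok)
flagged = {u for u in total if unreliable[u] / total[u] >= 0.5}  # threshold is an assumption

# Step 2: for each reliable domain, compare its share rate among flagged users
# to the rate among everyone else; high ratios mark reliable content that
# co-travels with unreliable content.
def cosharing_score(domain):
    sharers = {u for u, d, _ in shares if d == domain}
    others = set(total) - flagged
    flagged_rate = len(sharers & flagged) / max(len(flagged), 1)
    other_rate = len(sharers & others) / max(len(others), 1)
    return flagged_rate / max(other_rate, 1e-9)

for domain in sorted({d for _, d, ok in shares if ok}):
    print(domain, round(cosharing_score(domain), 2))
# In this toy log, washingtonpost.com scores 2.0 (shared at twice the rate by
# flagged users) while reuters.com scores 0.0.
```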
Our paper uses this dynamic — users strategically repurposing true information from reliable sources to advance misleading narratives — to move beyond conceptualizing misinformation as source reliability and measuring it by just counting sharing of / exposure to unreliable sources.
Take, for example, this headline from the Washington Post. The source is reliable and the information is, strictly speaking, true. But the people most excited to share this story wanted to advance a misleading claim: that the COVID-19 vaccine was ineffective at best.
[Image: Washington Post article, screenshot of the headline "Vaccinated people now make up a majority of covid deaths"]
But users who want to advance misleading claims likely *prefer* to use reliable sources when they can. They know others see reliable sources as more credible!
When thinking about online misinformation, we'd really like to identify/measure misleading claims; unreliable sources are only a convenient proxy.
There's a lot of concern out there about online misinformation, but when we try to measure it by identifying sharing of/traffic to unreliable sources, it looks like a tiny share of users' information diets. What gives?
Reposted by Pranav Goel
If you are at #WebSci2025, join our "Beyond APIs: Collecting Web Data for Research using the National Internet Observatory" - a tutorial that addresses the critical challenges of web data collection in the post-API era.

national-internet-observatory.github.io/beyondapi_websci25/
Reposted by Pranav Goel
cetaceanneeded.bsky.social
If you study networks, or have been stuck listening to people who study networks for long enough (sorry to my loved ones), you may have heard that open triads – V shapes – in social networks tend to turn into closed triangles. But why does this happen? In part, because people repost each other.
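A toy illustration of that closure mechanism (not from the post itself), using networkx; the node names and the assumption that a repost creates the closing tie are made up for demonstration.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([("alice", "bob"), ("bob", "carol")])  # open triad: a "V" shape
print(nx.transitivity(G))  # 0.0: no closed triangles yet

# bob reposts alice, so alice's content reaches carol and a tie forms between
# them, closing the triangle.
G.add_edge("alice", "carol")
print(nx.transitivity(G))  # 1.0: the open triad is now a closed triangle
```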