Tiago Ventura
@tiagoventura.bsky.social
1.4K followers 240 following 69 posts
Assistant Professor at @McCourtSchool @Georgetown Working on computational social science, social media, and politics. De Belém 🇧🇷
tiagoventura.bsky.social
Thank YOU for the opportunity to work together on this, Jonathan!
tiagoventura.bsky.social
Yeah, absolutely. We made an effort to control for this by imposing a long break between repeated attempts (30 min, if I'm not mistaken). But the substantive results don't change much with longer breaks like 1h or even 24h
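The break rule described above can be sketched roughly like this (a minimal illustration, not the paper's actual code; the function name and timestamps are made up):

```python
from datetime import datetime, timedelta

def count_attempts(timestamps, min_break=timedelta(minutes=30)):
    """Count distinct attempts at one survey, treating visits separated
    by less than `min_break` as part of the same attempt."""
    attempts = 0
    last = None
    for ts in sorted(timestamps):
        if last is None or ts - last >= min_break:
            attempts += 1
        last = ts
    return attempts

visits = [
    datetime(2022, 5, 1, 10, 0),
    datetime(2022, 5, 1, 10, 10),  # 10-min gap: same attempt
    datetime(2022, 5, 1, 11, 0),   # 50-min gap: new attempt
]
print(count_attempts(visits))  # 2
```

Raising `min_break` to 1h or 24h only merges more visits into a single attempt, which is why the prevalence numbers are not very sensitive to the threshold.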
tiagoventura.bsky.social
We use URLs from platforms where we can identify a unique survey. So not all platforms in the paper, and definitely not proprietary platforms. Think, for example, of Qualtrics: a survey taker can try to take the same survey more than once with the URL, switching browsers, switching profiles, etc.
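As a rough sketch of how repeated attempts could be spotted in browsing data (the Qualtrics URL pattern and helper here are illustrative assumptions, not the paper's pipeline):

```python
import re
from collections import Counter

# Illustrative pattern for Qualtrics survey links (survey IDs start with
# "SV_"); the exact URL scheme is an assumption for this sketch.
QUALTRICS_ID = re.compile(r"qualtrics\.com/jfe/form/(SV_\w+)")

def repeated_surveys(visited_urls):
    """Return the survey IDs a respondent visited more than once."""
    ids = Counter()
    for url in visited_urls:
        m = QUALTRICS_ID.search(url)
        if m:
            ids[m.group(1)] += 1
    return {sid for sid, n in ids.items() if n > 1}

history = [
    "https://abc.qualtrics.com/jfe/form/SV_123",
    "https://abc.qualtrics.com/jfe/form/SV_123",  # repeat attempt
    "https://abc.qualtrics.com/jfe/form/SV_999",
]
print(repeated_surveys(history))  # {'SV_123'}
```

The same idea works for any platform whose URLs embed a stable survey identifier, which is why proprietary panels without such URLs are out of scope.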
tiagoventura.bsky.social
Me too! As someone who does most of my research outside of the U.S. I’m also very curious about this phenomenon in other markets! Hopefully our paper can help in the measurement challenges of survey taking for future research!
tiagoventura.bsky.social
Yeah… that’s a great point we tried to be as thoughtful as possible! We provide a sensitivity analysis for this case, and show how our prevalence estimates would change. We discuss this limitation in detail in the paper!
tiagoventura.bsky.social
They try to! We really do not know whether they were able to complete the survey on the page they were visiting
tiagoventura.bsky.social
ohh very cool! Can you share the paper? I would love to read it!
tiagoventura.bsky.social
Huge caveat: our data comes from a period before the widespread use of Generative AI.

How professionals are using these tools, and how easy access to GenAI models might affect the quality of survey research, remains an open question for future research.
tiagoventura.bsky.social
Our conclusion is that survey professionalism is widespread on online panels, but it does not, by and large, distort inferences, making us cautiously optimistic about research using these online samples!
tiagoventura.bsky.social
But one concerning pattern emerges:

Professionals are considerably more likely to attempt to take the same survey repeatedly.

For example, roughly 85% of Lucid professionals tried to take at least one survey more than once.
tiagoventura.bsky.social
c) When looking at the stability of responses over time, we find no evidence of professionals being less attentive or more likely to change their responses across survey waves.

We take this as evidence that they answer questions at least as attentively as non-professionals.
tiagoventura.bsky.social
Our results show:

a) No robust demographic or political differences between professionals and non-professionals.

b) Professionals are more likely to speed and attempt repeats, but those are easy to screen out in surveys.

And..
tiagoventura.bsky.social
Next, we ask: Do professionals compromise data quality and inferences from survey research? We test this along three dimensions:

a) demographic & political differences;
b) low-effort behavior (speeding, straightlining);
c) response stability across waves
tiagoventura.bsky.social
Prevalence varies sharply across platforms. Our most conservative estimates show:

- 1.7% of Facebook respondents are professionals
- 7.6% on YouGov
- 34.7% on Lucid

Professionalism is a real phenomenon, but it varies widely across samples!
tiagoventura.bsky.social
We analyze 3,886 respondents who donated their browsing histories (~96M web visits). We measure actual survey-taking, rather than relying on self-reports, as much of prior work on this topic does.

We focus on three outcomes: prevalence, data quality, and survey repetition.
tiagoventura.bsky.social
How common are “survey professionals” - people who take dozens of online surveys for pay - across online panels, and do they harm data quality?

Our paper, FirstView at @politicalanalysis.bsky.social, tackles this question using browsing data from three U.S. samples (Facebook, YouGov, and Lucid):
Reposted by Tiago Ventura
cambup-polsci.cambridge.org
#OpenAccess from @polanalysis.bsky.social -

Survey Professionalism: New Evidence from Web Browsing Data - https://cup.org/3KWgqtg

- Bernhard Clemm von Hohenberg, @tiagoventura.bsky.social, Jonathan Nagler, @ericka.bric.digital & Magdalena Wojcieszak

#FirstView
tiagoventura.bsky.social
I use the YouTube API, but with someone else's wrapper. To actually show how to query an API, I go with the NYT. It is simple and easy to use.
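For illustration, querying the NYT Article Search API boils down to building one GET request (endpoint as publicly documented by the NYT; `YOUR_KEY` is a placeholder and `build_query` is a hypothetical helper, not part of any wrapper):

```python
from urllib.parse import urlencode

# Public NYT Article Search endpoint; requires a (free) developer API key.
BASE = "https://api.nytimes.com/svc/search/v2/articlesearch.json"

def build_query(q, api_key="YOUR_KEY", page=0):
    """Build the request URL; fetch it with e.g. requests.get(url).json()."""
    return f"{BASE}?{urlencode({'q': q, 'page': page, 'api-key': api_key})}"

url = build_query("misinformation")
print(url)
```

One URL, one key, one JSON response, which is what makes it a good first API for teaching.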
Reposted by Tiago Ventura
csmapnyu.org
In the Global South, WhatsApp is more popular than X or Facebook.

New in @The_JOP, we ran a WhatsApp deactivation experiment during Brazil’s 2022 election to explore how the app facilitates the spread of misinformation and affects voters’ attitudes.

www.journals.uchicago.edu/doi/abs/10.1...
Abstract: In most advanced democracies, concerns about the spread of misinformation are typically associated with feed-based social media platforms like Twitter and Facebook. These platforms also account for the vast majority of research on the topic. However, in most of the world, particularly in Global South countries, misinformation often reaches citizens through social media messaging apps, particularly WhatsApp. To fill the resulting gap in the literature, we conducted a multimedia deactivation experiment to test the impact of reducing exposure to potential sources of misinformation on WhatsApp during the weeks leading up to the 2022 Presidential election in Brazil. We find that this intervention significantly reduced participants’ recall of false rumors circulating widely during the election. However, consistent with theories of mass media minimal effects, a short-term change in the information environment did not lead to significant changes in belief accuracy, political polarization, or well-being.
tiagoventura.bsky.social
🚨 Paper now as "just accepted" at @The_JOP. We ran the first WhatsApp deactivation experiment focused on multimedia content ahead of the 2022 election in Brazil. We find a reduction in users' recall of false rumors -- and, to a smaller degree, of true news. Null effects on attitudes. Full thread ⬇️