Joris Frese
@fresejoris.bsky.social
3.5K followers 1.4K following 220 posts
PhD candidate in political science at the EUI. Interested in political behavior, quantitative methods, metascience. https://www.jorisfrese.com/
Pinned
fresejoris.bsky.social
🇪🇺🇦🇫 Published Today in CPS 🇪🇺🇦🇫

“Stand by those who share our values” – how refugees fleeing the Taliban improved European attitudes toward immigration

Article: journals.sagepub.com/doi/10.1177/...

Pre-print: osf.io/preprints/os...

Thread: 1/8
Reposted by Joris Frese
tiagoventura.bsky.social
How common are “survey professionals” - people who take dozens of online surveys for pay - across online panels, and do they harm data quality?

Our paper, FirstView at @politicalanalysis.bsky.social, tackles this question using browsing data from three U.S. samples (Facebook, YouGov, and Lucid):
Reposted by Joris Frese
ssreditorial.bsky.social
Work from Diana Roxana Galos and @fresejoris.bsky.social examines online social class cues and employability. This article is open access here: www.sciencedirect.com/science/arti...
fresejoris.bsky.social
Thanks for sharing, Arnout!
Reposted by Joris Frese
robinwigglesworth.ft.com
Sentiment analysis on four decades' worth of FT newspaper articles. 🥳 Really cool stuff from @joelsuss.ft.com. on.ft.com/4n2TVBq
Reposted by Joris Frese
britishelectionstudy.com
🚨DATA RELEASE 🚨

The BES team are pleased to announce the release of Wave 30 of the British Election Study Internet Panel.

Please follow the link below, and we look forward to seeing your research!

www.britishelectionstudy.com/bes-resource...
Release Note: British Election Study Internet Panel Wave 30 - The British Election Study
fresejoris.bsky.social
This thread is a great takedown of a recent paper published in one of the top neurology journals. More evidence to bolster my impression that there is a LOT of really weak nutrition science out there getting disproportionate amounts of (uncritical) media coverage.
fresejoris.bsky.social
Congratulations! 🥳
fresejoris.bsky.social
I personally have used z-curves in a meta-analytic paper of mine: journals.sagepub.com/doi/10.1177/.... See the brief excerpt of my appendix where I give some reasons why I prefer them over p-curves. The cited studies by Bartos, Brunner, and Schimmack go into much more detail on this.
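For readers unfamiliar with the method: a z-curve analysis starts by converting the (two-sided) p-values of significant results into absolute z-scores, whose distribution is then modeled to estimate replicability. This is not code from the cited paper, just a minimal sketch of that first conversion step using Python's standard library; the function name `p_to_z` is my own.

```python
from statistics import NormalDist

def p_to_z(p: float) -> float:
    """Convert a two-sided p-value into the absolute z-score
    that a z-curve analysis would take as input."""
    return NormalDist().inv_cdf(1 - p / 2)

# The conventional .05 significance threshold maps to z ≈ 1.96;
# z-curve then fits a mixture model to the z-scores above that cutoff.
z = p_to_z(0.05)
```

The full method (Bartoš & Schimmack) goes well beyond this step, fitting a finite mixture of folded normal distributions to the observed z-scores, but the conversion above is the shared starting point.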
fresejoris.bsky.social
This is a fantastic paper and thread, demonstrating clearly what many have suspected for years: p-curves don't really do what they purport to do. Better alternatives are out there!
richarddmorey.bsky.social
Paper drop, for anyone interested in #metascience, #statistics, or #metaanalysis! @clintin.bsky.social and I show in a new paper in JASA that the P-curve, a popular forensic meta-analysis method, has deeply undesirable statistical properties. www.tandfonline.com/doi/full/10.... 1/?
Cover page for the manuscript: Morey, R. D., & Davis-Stober, C. P. (2025). On the poor statistical properties of the P-curve meta-analytic procedure. Journal of the American Statistical Association, 1–19. https://doi.org/10.1080/01621459.2025.2544397 Abstract for the paper: The P-curve (Simonsohn, Nelson, & Simmons, 2014; Simonsohn, Simmons, & Nelson, 2015) is a widely-used suite of meta-analytic tests advertised for detecting problems in sets of studies. They are based on nonparametric combinations of p values (e.g., Marden, 1985) across significant (p < .05) studies and are variously claimed to detect “evidential value”, “lack of evidential value”, and “left skew” in p values. We show that these tests do not have the properties ascribed to them. Moreover, they fail basic desiderata for tests, including admissibility and monotonicity. In light of these serious problems, we recommend against the use of the P-curve tests.
Reposted by Joris Frese
aufdroeseler.bsky.social
Thank you for this very impressive resource of 50 (!) reproducibility/replicability metrics; also includes a searchable online table at rachelheyard.com/reproducibil...
Reposted by Joris Frese
simonhix.bsky.social
Please share this thread. It is important that political scientists, in Europe and across the world, understand why EPSS exists and why we are encouraging people to attend our inaugural conference, in Belfast next June.
epssnet.bsky.social
EPSA have announced that they will hold a conference in July 2026.

😵‍💫 We understand that there might be some confusion about EPSS and EPSA.

👉🏽 So we thought we would clarify some things.

A short 🧵
Reposted by Joris Frese
fresejoris.bsky.social
🇨🇭🇪🇺 Just Published in Royal Society Open Science!

A scoping review on metrics to quantify reproducibility:
royalsocietypublishing.org/doi/10.1098/...

Ever conducted a replication and pondered when/how to conclude if it was (un)successful?
We have just the paper for you (led by Rachel Heyard)! 1/14
fresejoris.bsky.social
PS: Many thanks also to @forrt.bsky.social and @irise-eu.bsky.social for facilitating this interdisciplinary collaboration!
fresejoris.bsky.social
Rachel Heyard did an incredible job leading this project. Anyone interested in replication and open science should definitely check out her other work! 14/14
fresejoris.bsky.social
We hope that this paper can be useful for anyone conducting replication studies, as it provides a comprehensive, yet accessible, overview of the many ways to quantify replication success, and helps to pinpoint those that most closely align with the specific aims of a given study. 13/14
fresejoris.bsky.social
In the brief excerpts of the Table here, you may already identify several metrics that could be more suitable for the stylized examples described earlier in this thread (compared to a simple significance test), such as meta-analytic tests for (i) and tests based on effect sizes for (ii). 12/14
fresejoris.bsky.social
There are lots of informative summary statistics on all these parameters in the review, but the heart and the main contribution of our paper is Table 4, where we offer an overview of each of the 50 different reproducibility metrics and their use-cases. 11/14
fresejoris.bsky.social
… whether they are quantitative or qualitative, whether they were originally introduced as reproducibility metrics or for other purposes, which precise questions they help to answer, etc. 10/14
fresejoris.bsky.social
For these papers, a team of six coders extracted information on all metrics for the assessment of replication success, including their names, their description, their exact purpose (e.g., to quantify, to classify, or to predict reproducibility), the data inputs they require, 9/14