Daphne Halikiopoulou
@dafnoukos.bsky.social
1.5K followers 560 following 30 posts

Chair in Comparative Politics, University of York | Research on far right, populism, nationalism and European Politics | Joint Editor-In-Chief of Nations and Nationalism, Political Studies | Steering Committee ECPR Extremism & Democracy | LSE PhD …

tevoelker.bsky.social
New paper out with @dasalgon.bsky.social: “Far-Right Agenda Setting: How the Far Right influences the Political Mainstream” doi.org/10.1017/S1475676525100066 #openaccess in @ejprjournal.bsky.social🧵

dafnoukos.bsky.social
At a time of rising support for the far-right, are we getting our research right? In our new @etui.bsky.social technical brief, Tim Vlandas and I show how the 'atomistic fallacy' can lead to misinterpretations and flawed policy recommendations about far-right success: www.etui.org/publications...
etui.bsky.social
At a time of rising support for the #FarRight, are we getting it right when it comes to understanding who votes for them and why?

🧵👇
kai-arzheimer.com
Excellent question. Unfortunately, the answer seems to be: yes
rrresrobot.bsky.social
The #RadicalRight in the US: L. de Jonge, V. Georgiadou, D. Halikiopoulou, et al. “Is the Far Right a Global Phenomenon? Comparing Europe and Latin America: A Scholarly Exchange”. In: Nations and Nationalism 31.1 (2025), pp. 7-24. http://dx.doi.org/10.1111/nana.13074.
benansell.bsky.social
On the morning of Keir Starmer's conference speech here's a new post on an odd psychopathology in British politics - our main parties don't like the people who vote for them - the dreaded Professional Managerial Class. And so they are acting out like a divorced dad seeking cooler voters. 1/n
British Politics' Midlife Crisis
Why British Parties Can't Make Peace with Their Actual Voters
benansell.substack.com
casmudde.bsky.social
If you are looking for informed voices on far-right (violence) in the U.S., here are some great follows:

@milleridriss.bsky.social
@davidneiwert.bsky.social
@sjacks26.bsky.social
@kathleenbelew.bsky.social
@jmberger.com
kai-arzheimer.com
This is very good (and mildly depressing)
Even honest research results can flip – a new approach to assessing robustness in the social sciences
When academic studies get things wrong, it is often blamed on misconduct and fraud. Yet, as Michael Ganslmeier and Tim Vlandas argue, even good-faith research, conducted using standard methods and transparent data, can produce contradictory conclusions.

Recent controversies around research transparency have reignited longstanding concerns about the fragility of empirical evidence in the social sciences. While some discussions have centred on misconduct and fraud, an equally important challenge lies in the sensitivity of results to defensible modelling choices: what if the more widespread issue runs deeper, not in individual misconduct, but in how we conduct empirical research?

In a new study, we set out to measure the fragility of findings in political science by asking how much empirical results change when researchers vary reasonable and equally defensible modelling choices. To answer this question, we estimated over 3.6 billion regression coefficients across four widely studied topics in political science: welfare generosity, democratisation, public goods provision and institutional trust – although we only report results for the latter three in this blog post. Each topic is characterised by well-established theories, strong priors and extensive empirical literatures.

Our results reveal a striking pattern: the same independent variable often yields not just significant and insignificant coefficients but also a very large number of both statistically significant positive and statistically significant negative effects, depending on how the model is set up. Thus, even good-faith research, conducted using standard methods and transparent data, can produce contradictory conclusions.

Recent advances – such as pre-registration, replication files and registered reports – have significantly improved research transparency. However, they typically begin from a pre-specified model, and even when researchers follow best practices, they still face a series of equally plausible decisions: which years or countries to include, how to define concepts like “welfare generosity”, whether and which fixed effects to use, whether and how to adjust standard errors, and so on. Each of these choices may seem minor on its own, and many researchers already use a wide range of robustness checks to explore their impact. But collectively, these decisions define an entire modelling universe, and navigating that space can profoundly affect results. Standard robustness checks often examine one decision at a time, which may miss the joint influence of many reasonable modelling paths taken together.

To map that model space systematically, we combined insights from extreme bounds analysis and the multiverse approach. We varied five core dimensions of empirical modelling: covariates, sample, outcome definitions, fixed effects and standard error estimation. The goal was not to test a single hypothesis, nor indeed to replicate prior studies, but to observe how much the sign and significance of key coefficients change across plausible model specifications.
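To make the idea of a modelling universe concrete, here is a minimal sketch of such a specification sweep. It is not the authors' replication code: the toy data, variable names and specification dimensions are all hypothetical, and the actual study varies far more choices across far more models.

```python
# Hedged sketch of a "model universe" sweep: estimate the coefficient on a key
# variable under every combination of a few specification choices and record
# whether each estimate is positive significant, negative significant, or not.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "trust": rng.normal(size=n),            # hypothetical outcome (e.g. institutional trust)
    "x": rng.normal(size=n),                # key independent variable of interest
    "gdp": rng.normal(size=n),
    "unemployment": rng.normal(size=n),
    "country": rng.choice(list("ABCDE"), size=n),
    "year": rng.choice(range(2000, 2020), size=n),
})

# Three of the dimensions varied in the study: covariates, sample, fixed effects.
covariate_sets = [[], ["gdp"], ["gdp", "unemployment"]]
sample_filters = {"all": df.index, "post2010": df.index[df["year"] >= 2010]}
fixed_effects = [[], ["C(country)"], ["C(country)", "C(year)"]]

results = []
for covs, (sample, idx), fes in itertools.product(
        covariate_sets, sample_filters.items(), fixed_effects):
    rhs = " + ".join(["x"] + covs + fes)
    fit = smf.ols(f"trust ~ {rhs}", data=df.loc[idx]).fit()
    coef, p = fit.params["x"], fit.pvalues["x"]
    results.append({"sample": sample, "n_covs": len(covs), "n_fes": len(fes),
                    "coef": coef,
                    "sig": "pos" if (p < 0.05 and coef > 0)
                           else "neg" if (p < 0.05 and coef < 0)
                           else "ns"})

universe = pd.DataFrame(results)
# With real data, the share of positive vs. negative significant estimates
# across the universe is what reveals how fragile a finding is.
print(universe["sig"].value_counts(normalize=True))
```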
For many variables commonly used to support empirical claims, we found many model specifications where the estimated effect was positive and statistically significant, as well as others where it was strongly negative and statistically significant (Figure 1).

Figure 1: Share of significant coefficients in the model space for three topics. Note: The panels present the share of positive and negative significant coefficients (blue and red, respectively) of all independent variables in the unrestricted model universe for the three test cases: democratisation, regional provision and institutional trust. The dashed line indicates 90%. The figure is adapted from the authors’ accompanying article in the Proceedings of the National Academy of Sciences (PNAS).

One clear implication is that conventional robustness checks, while valuable, may still be too limited in scope. Researchers frequently vary control variables, estimation techniques or subsamples to assess the stability of their findings. But these checks are typically applied sequentially and independently, examining each modelling decision in isolation. Our results suggest that this approach can miss the larger picture: it is not just which decisions are made, but how they combine, that determines the stability of empirical results.

By systematically exploring a wide modelling space and automating thousands of reasonable combinations of covariates, samples, estimators and operationalisations, our approach can assess the joint influence of modelling choices. This allows us to identify patterns of fragility that are invisible to conventional checks.

In our study, we estimated feature importance scores for these different model specification choices. To do so, we first extracted a random set of 250,000 regression coefficients from the unrestricted model universe for each topic. Then we fitted a neural network to predict whether an estimate is “negative significant”, “positive significant” or “not significant”.

Figure 2: Feature importance scores of model specification decisions. Note: The panels show the feature importance scores (SHAP values) for different model specification choices. The figure is adapted from the authors’ accompanying article in the Proceedings of the National Academy of Sciences (PNAS).

Figure 2 shows that the greatest source of variation is not driven by the control variables per se, but rather by decisions on sample construction – which countries or time periods are included – and how key outcomes are defined. These upstream decisions, often made early and treated as background, exert the strongest influence on whether results are statistically significant and in which direction.

To be clear, the implication of our findings is not that quantitative social science is futile. On the contrary, our work underscores the value of systematically understanding where results are strong and where (and why) they might be less stable. With this new approach, we hope to provide an additional tool that researchers can use to carry out systematic robustness checks and to increase transparency. To that end, we provide our code, which future research can use to analyse and visualise the model space around a result. For more information, see the authors’ accompanying paper in the Proceedings of the National Academy of Sciences (PNAS).

Note: This article gives the views of the authors, not the position of EUROPP – European Politics and Policy or the London School of Economics.
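For intuition about that second step, the sketch below continues the hypothetical example above: it trains a small classifier to predict each estimate's sign and significance from its specification choices and then ranks those choices by importance. The post describes a neural network with SHAP values; here scikit-learn's MLPClassifier with permutation importance stands in as a simpler, assumed illustration, not the authors' actual pipeline.

```python
# Hedged sketch: predict an estimate's sign/significance from its specification
# choices, then ask which choices matter most. Reuses the hypothetical
# `universe` DataFrame built in the previous sketch.
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# One-hot encode the specification features; the label is "pos", "neg" or "ns".
X = pd.get_dummies(universe[["sample", "n_covs", "n_fes"]], columns=["sample"])
y = universe["sig"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

# Permutation importance here plays the role SHAP values play in the study:
# which specification decisions most strongly drive the classification?
imp = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
ranking = pd.Series(imp.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranking)
```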
blogs.lse.ac.uk
populismblog.bsky.social
New post for POP 📢

𝗣𝗼𝗽𝘂𝗹𝗶𝘀𝗺: 𝗔𝗻 𝗜𝗻𝘁𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻 is a great resource for teaching and learning 🔥

It was a pleasure to publish this article written by the editors of this amazing project @robert-a-huber.bsky.social & @cerpintaxt.bsky.social  

us.sagepub.com/en-us/nam/po...
Populism
An Introduction
us.sagepub.com

dafnoukos.bsky.social
Thrilled and honoured that my Vitoria Gasteiz TEDx talk is now featured on the main TED.com website. I discuss how far-right parties tap into people’s economic insecurities and distrust of institutions to rebrand their exclusionary policies: www.ted.com/talks/daphne...
What’s behind the rise of far right politics in Europe
Far-right parties are gaining popularity worldwide. Why is that? Political researcher Daphne Halikiopoulou reveals how rising leaders tap into people’s economic insecurities and distrust of institutio...
www.ted.com
ecpr-ead.bsky.social
📣 Our latest e-Extreme Newsletter is out! Find it here: ecpr.eu/news/news/de... (Current Issue)
✋ Kindly brought to you by @audreygagnon.bsky.social, @lazaroskaravasilis.bsky.social and @sabinedvolk.bsky.social!
🍿 Read on for expert interviews (on elections in 🇵🇹 & 🇷🇴), book reviews and more!
leoniedejonge.bsky.social
🏆 Our Extremism & Democracy Best Paper Prize Committee is looking for nominations! Are you an Early Career Researcher? Then nominate yourself! Did you see a great paper presentation in the E&D Section in Thessaloniki by a PhD researcher or post-doc? Send them our way! @ecpr.bsky.social
leoniedejonge.bsky.social
🥁 Next up, also on Wednesday: our @ecpr-ead.bsky.social business meeting, with @dafnoukos.bsky.social & @profannikawerner.bsky.social. We have lots of exciting updates. Do join!

Reposted by Alessandro Nai

dafnoukos.bsky.social
If you're attending #ecprgc25 in Thessaloniki next week, join us for panels on nationalism and the far-right in Europe and Latin America.
ecpr-ead.bsky.social
📢 Hello E&D Colleagues! We’ve got several updates from the @ecpr.bsky.social Extremism & Democracy Standing Group (@ecpr-ead.bsky.social). Whether you’re joining us in Thessaloniki or following from afar, here’s what’s coming up. 🧵👇
ALT: a sign that says coming soon is lit up with lights.
media.tenor.com

dafnoukos.bsky.social
Thank you so much! I hope they will..

dafnoukos.bsky.social
Thank you so much again Stuart!
nilssteiner.bsky.social
How does the second-order national election (SONE) theory hold up in the 2024 European Parliament election?

I explore this in a short letter now published in @eupthejournal.bsky.social.

Read it here: journals.sagepub.com/doi/10.1177/...

dafnoukos.bsky.social
The YouTube video of my TEDxVitoria Gasteiz talk on the far right was released on the TEDx channel as an 'Editors' pick' 24 hours ago. It has 16,000 views and many comments, some nice, many hateful. My favourite: 'she is a paid actor'. I'll take it as a compliment! www.youtube.com/watch?v=OCvr...
Why do people vote for the far right? | Daphne Halikiopoulou | TEDxVitoriaGasteiz
YouTube video by TEDx Talks
www.youtube.com
alessandronai.bsky.social
Editor pro tip after reading several excellent articles that nonetheless got desk rejected

Not citing any scholarship published in the journal (or similar/adjacent) where you are submitting is a major red flag that your piece is off-scope

Embed your piece in a debate that matters for your readers