Laura Bronner
@laurabronner.bsky.social
2K followers 230 following 41 posts
Scientific director, Public Discourse Foundation || Senior applied scientist, IPL/ETH Zürich || Data, media, experiments || Formerly FiveThirtyEight quant editor. www.laurabronner.com
Pinned
laurabronner.bsky.social
A thread about being wrong:

5 years ago, we wrote a paper about how newly enfranchised 16-year-olds vote in Austria. But we were wrong.

This year, @elisabethgraf.bsky.social, @schnizzl.bsky.social, Sylvia Kritzinger and I are setting the record straight: authors.elsevier.com/c/1juT5xRaZk...
laurabronner.bsky.social
This is so cool, congrats!!
Reposted by Laura Bronner
iasliu.bsky.social
Join us on Thursday, 18 Sept, at 14:30 CET for the International Roundtable on Computational Social Science with Laura Bronner @laurabronner.bsky.social 🔹Tackling harmful online comments on news platforms: three field experiments 🔹More info: liu.se/en/article/s...
Seminars and lectures at IAS
Welcome to IAS public lectures and seminars.
liu.se
Reposted by Laura Bronner
cragcrest.bsky.social
Wow, thank you for helping to keep this alive. I get so many notes from people using this in their coursework.
andrew.heiss.phd
I’ve long used FiveThirtyEight’s interactive “Hack Your Way To Scientific Glory” to illustrate the idea of p-hacking when I teach statistics. But ABC/Disney killed the site earlier this month :(

So I made my own with #rstats and Observable and #QuartoPub! stats.andrewheiss.com/hack-your-way/
Screenshot of the linked Quarto website, with input checkboxes to change different conditions for a regression model that predicts economic performance based on US political party, with a reported p-value
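A minimal Python sketch of the same idea, with made-up variables rather than the site's actual data or code: fit every defensible specification of a pure-noise model and keep whichever ones clear p < 0.05.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(538)
n = 200

# Pure noise: the "economy" here is unrelated to which party is in power.
df = pd.DataFrame({
    "growth": rng.normal(size=n),
    "employment": rng.normal(size=n),
    "party": rng.integers(0, 2, size=n),
    "year": rng.integers(1950, 2020, size=n),
    "recession": rng.integers(0, 2, size=n),
    "incumbent": rng.integers(0, 2, size=n),
})

controls = ["year", "recession", "incumbent"]
hits = []

# The "hack": sweep outcomes, sample restrictions, and control sets,
# then report any spec where the party coefficient reaches p < 0.05.
for outcome in ["growth", "employment"]:
    for data in [df, df[df.year >= 1980]]:
        for k in range(len(controls) + 1):
            for combo in itertools.combinations(controls, k):
                formula = f"{outcome} ~ party" + "".join(f" + {c}" for c in combo)
                p = smf.ols(formula, data=data).fit().pvalues["party"]
                if p < 0.05:
                    hits.append((formula, len(data), round(p, 3)))

# With 32 specifications and a 5% false-positive rate apiece, a few
# "significant" party effects will usually appear in pure noise.
print(*hits, sep="\n")
```

None of the individual modelling choices is indefensible on its own; searching across them until something turns significant is what breaks the meaning of the p-value.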
laurabronner.bsky.social
It seems that Disney never really knew what to do with 538 (Nate's take below), which feels like a real missed opportunity. I hope other outlets will take up the mantle, hire those laid off yesterday, and really invest in data and rigor in journalism - which is more important now than ever.
laurabronner.bsky.social
This is a huge loss. I feel awful for everyone who was laid off, but I'm also just really sad that ABC News didn't appreciate the special blend of reporting chops, data skills, talent, and kindness they managed to amass. I think the *wrong* lesson to draw is that this blend isn't profitable -
laurabronner.bsky.social
6) That said, 538 was also special for journalism - exemplified, perhaps, by the decision to have someone on staff whose entire purpose was to slow stuff down: work through code, question analyses, and be annoying about causal claims. They cared about getting stuff right, even if it took longer.
laurabronner.bsky.social
5) At its best, journalism at 538 blended qualitative (reporting, deep understanding of the substance) with quantitative (data, advanced methods). Academics often think that good research is only done in academia. I think a lot of fantastic research is done in journalism.
laurabronner.bsky.social
4) Understanding what kind of effort gets you 90% of the way to answering something - and whether that 90% is enough to say something meaningful - is something I should remind myself of over and over. Academia spends an inordinate amount of time on the last 10%. Often, it's not worth it.
laurabronner.bsky.social
3) Good data is everything, and understanding data sources and their downsides is crucial for anyone who works with data. So much work went into collecting and auditing the data 538 used - it's a resource for people across (and beyond!) journalism.
bsky.app/profile/base...
baseballot.bsky.social
Just a reminder that all the data FiveThirtyEight collected—polls, election results, and much more—is available for download (for now) on our GitHub page. github.com/fivethirtyei...
laurabronner.bsky.social
2) At the same time, data-centric doesn't necessarily mean complex. Many of the most interesting analyses (e.g. differences in means) are simple; the difficulty is in understanding the data and the substance well enough to ensure those analyses and comparisons are meaningful.
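To make the point concrete, here is roughly how little code a difference in means takes (hypothetical numbers in Python, not anything 538 published):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical poll data: candidate approval measured by two survey modes.
phone = rng.normal(loc=45, scale=5, size=300)
online = rng.normal(loc=43, scale=5, size=300)

diff = phone.mean() - online.mean()
t, p = stats.ttest_ind(phone, online, equal_var=False)  # Welch's t-test
print(f"difference in means: {diff:.2f} points (p = {p:.3f})")
```

The analysis itself is two lines; everything that makes the comparison meaningful (comparable samples, consistent question wording, weighting) happens in the data work before them.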
laurabronner.bsky.social
1) Quantitative description and comparison are key to understanding what's going on. Some of 538's most important work was descriptive (e.g. poll trackers, geographic mapping), and much of their contribution to political journalism was in normalizing a much more data-centric type of reporting.
laurabronner.bsky.social
The end of 538 is a huge shame - both for the incredible people who worked there, and for political and data journalism as a whole.

I was lucky enough to work beside them for a few years and want to say a bit about what I think was so valuable that I hope doesn't vanish from the media landscape: 🧵
baseballot.bsky.social
Incredibly sad to report that ABC News is indeed eliminating 538.

I count myself incredibly lucky to have worked with such incredibly smart, kind people for 7 years. Thank you all for coming along for the ride.
Reposted by Laura Bronner
jonmellon.bsky.social
Out now open access at
@ajpseditor.bsky.social. 194 potential exclusion-restriction violations for studies using weather as an instrumental variable onlinelibrary.wiley.com/doi/full/10....
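For readers new to the design, a schematic two-stage least squares sketch on simulated data (not the paper's), using the canonical rain-instruments-turnout setup. The exclusion restriction the paper stress-tests is the assumption flagged in the comments: weather may move the outcome only through the instrumented variable.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000

rain = rng.normal(size=n)          # instrument Z
motivation = rng.normal(size=n)    # unobserved confounder
# Rain depresses turnout; motivation raises both turnout and the outcome.
turnout = -0.5 * rain + motivation + rng.normal(size=n)
# Exclusion restriction: rain enters the outcome ONLY via turnout.
# Any direct effect (e.g. "+ 0.3 * rain" for rain-driven mood or media
# coverage) would violate it and bias the IV estimate.
incumbent_vote = 1.0 * turnout - motivation + rng.normal(size=n)

# Stage 1: project the endogenous regressor onto the instrument.
stage1 = sm.OLS(turnout, sm.add_constant(rain)).fit()
# Stage 2: regress the outcome on the stage-1 fitted values.
stage2 = sm.OLS(incumbent_vote, sm.add_constant(stage1.fittedvalues)).fit()

# Naive OLS is biased by the confounder; 2SLS recovers the true slope of
# ~1.0. (Manual two-step standard errors are not corrected; use a proper
# IV routine for inference.)
naive = sm.OLS(incumbent_vote, sm.add_constant(turnout)).fit()
print(f"OLS slope:  {naive.params[1]:.2f}")
print(f"2SLS slope: {stage2.params[1]:.2f}")
```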
laurabronner.bsky.social
Interested in how people talk to each other online? Care about causal inference and/or NLP? Want to design and implement field experiments?

Come do a PhD with Dominik Hangartner, me, and a bunch of awesome people at IPL in Zurich:
jobs.ethz.ch/job/view/JOP...
laurabronner.bsky.social
This is consistent with stricter pre-publication review, though, right? It's just more of a focus on preventing rather than correcting errors. (Errors in articles/charts were routinely caught by attentive readers, so the incentive to avoid them was strong.)
laurabronner.bsky.social
Clearly 538 values the trade-off differently (or did while I was there), which I think is interesting. I wonder if this partially depends on where the blame for an error goes - the authors or the publication.
laurabronner.bsky.social
It would have required a more in-depth check than this, since our code did produce our results/figures - it's just that the code contained an error. And while correcting the record is great, preventing such errors from being published is also important!
bsky.app/profile/adam...
adam42smith.bsky.social
@aeggers.bsky.social and I supervise a small team that checks 1) that basic documentation is in place and 2) that tables & figures in the manuscript can be reproduced from raw data with the deposited code.
We run the code but don't review it, so we won't normally catch flipped scales, dropped observations, etc.
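As a rough sketch of what such a "run, don't review" check could look like (hypothetical file layout and script name, not the team's actual tooling):

```python
import subprocess
import time
from pathlib import Path

# Hypothetical deposited replication package: one master script is
# supposed to rebuild every manuscript table and figure from raw data.
package = Path("replication_package")
expected = ["output/table1.csv", "output/figure2.pdf"]  # listed in the docs

start = time.time()
# "Run, don't review": execute the authors' code exactly as deposited.
subprocess.run(["Rscript", "master.R"], cwd=package, check=True)

for rel in expected:
    f = package / rel
    rebuilt = f.exists() and f.stat().st_mtime >= start
    print(f"{rel}: {'rebuilt' if rebuilt else 'MISSING or stale'}")

# This verifies the code runs end-to-end and regenerates the claimed
# outputs; it cannot tell whether the code is *correct* (flipped scales,
# dropped observations), which is exactly the gap noted above.
```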
laurabronner.bsky.social
To put it in personal terms: I would much have preferred the error invalidating my Voting at 16 article to be caught in pre-publication code review, even though the resulting replication-and-extension collaboration probably couldn't have gone any better.
bsky.app/profile/laur...
laurabronner.bsky.social
A thread about being wrong:

5 years ago, we wrote a paper about how newly enfranchised 16-year-olds vote in Austria. But we were wrong.

This year, @elisabethgraf.bsky.social, @schnizzl.bsky.social, Sylvia Kritzinger and I are setting the record straight: authors.elsevier.com/c/1juT5xRaZk...
laurabronner.bsky.social
Don't get me wrong, I think this is really cool! A way of creating positive incentives for something important that otherwise mostly has educational (part of a course) or negative (trying to prove something/someone wrong) incentives. But I don't think it's a substitute for preventing errors.
laurabronner.bsky.social
Publishing already takes a lot of time and effort, so it's fair to question adding further hurdles. But in my experience, code review definitely improves the quality of the output. People make mistakes (myself obviously included!) - proper code review should be seen as a service rather than a cost.
laurabronner.bsky.social
On the one hand, this catches errors before publication, which is obviously important. On the other hand, it takes a lot of time - both for the quant editor and for the author, who has to ensure the code & decisions can be understood.
laurabronner.bsky.social
I really like this development. But it's interesting how much effort is expended on post-publication review, rather than proper pre-publication code review like I used to do at 538. Maybe checking data munging & analysis should be an integral part of peer review, done by paid journal employees -
miguelpereira.bsky.social
APSR will start inviting replications for a random subset of accepted papers. I really like this as well as the constructive tone around it 👏 (from the latest Notes from the Editors)