Winston Lin
@linstonwin.bsky.social
1.9K followers 1.6K following 22 posts
senior lecturer in statistics, penn NYC & Philadelphia https://www.stat.berkeley.edu/~winston
Pinned
linstonwin.bsky.social
Clarification: my paper doesn’t advocate a specific estimator. That’s one of the meanings of “agnostic” in the title :)
Reposted by Winston Lin
uptonorwell.bsky.social
Are you or one of your students considering doing a Ph.D. in a social science? I've spent a lot of time talking about this w/ students & finally wrote something up.

IMO, there are only 3 good reasons to do it. One of them needs to be true--otherwise, don't.

medium.com/the-quantast...
The Only Three Reasons to Do a Ph.D. in the Social Sciences
If none are true, don’t do it.
medium.com
Reposted by Winston Lin
coalition4evidence.bsky.social
See our No-Spin report on a widely-covered NBER study of Medicaid expansion. In brief: Despite the abstract's claim that expansion reduced adult mortality by 2.5%, the study found much smaller effects that fell short of statistical significance in its main preregistered analysis.🧵
Reposted by Winston Lin
noahgreifer.bsky.social
Starting to look like I might not be able to work at Harvard anymore due to recent funding cuts. If you know of any open statistical consulting positions that support remote work or are NYC-based, please reach out! 😅
linstonwin.bsky.social
In case this is of interest, even ANCOVA I is consistent and asymptotically normal in completely randomized experiments (though II is asymptotically more efficient in imbalanced or multiarm designs)
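Not from the post, but a minimal sketch (simulated data; variable names and the HC2 robust-SE choice are mine) of the two estimators as usually defined: ANCOVA I regresses the outcome on treatment and covariates, while ANCOVA II adds treatment-by-(centered-)covariate interactions.

```python
# Sketch: ANCOVA I vs ANCOVA II in a completely randomized experiment (simulated data)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                               # baseline covariate
t = rng.permutation(np.repeat([0, 1], n // 2))       # complete randomization, 250/250
y = 1.0 + 0.5 * t + 0.8 * x + 0.4 * t * x + rng.normal(size=n)

df = pd.DataFrame({"y": y, "t": t, "xc": x - x.mean()})  # center the covariate

# ANCOVA I: outcome on treatment + covariate
ancova1 = smf.ols("y ~ t + xc", data=df).fit(cov_type="HC2")
# ANCOVA II: adds the treatment-by-centered-covariate interaction (t * xc = t + xc + t:xc)
ancova2 = smf.ols("y ~ t * xc", data=df).fit(cov_type="HC2")

# Both coefficients on t estimate the average treatment effect
print(ancova1.params["t"], ancova2.params["t"])
```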
Reposted by Winston Lin
emollick.bsky.social
Issues with interpreting p-values haunt even AI, which is prone to the same biases as human researchers. ChatGPT, Gemini & Claude all fall prey to "dichotomania" - treating p=0.049 & p=0.051 as categorically different, and paying too much attention to significance. www.cambridge.org/core/journal...
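A small illustration (mine, not from the post) of why the 0.05 cliff is artificial: the two p-values mentioned correspond to nearly identical test statistics.

```python
# Two-sided p-values of 0.049 and 0.051 map to almost the same |z|
from scipy.stats import norm

for p in (0.049, 0.051):
    print(f"p = {p:.3f}  ->  |z| = {norm.isf(p / 2):.3f}")   # about 1.97 vs 1.95
```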
Reposted by Winston Lin
brennankahan.bsky.social
NEW: CONSORT 2025 now published!

Some notable changes:
-items on analysis populations, missing data methods, and sensitivity analyses
-reporting of non-adherence and concomitant care
-reporting of changes to any study methods, not just outcomes
-and lots of other things

www.bmj.com/content/389/...
CONSORT 2025 explanation and elaboration: updated guideline for reporting randomised trials
Critical appraisal of the quality of randomised trials is possible only if their design, conduct, analysis, and results are completely and accurately reported. Without transparent reporting of the met...
www.bmj.com
Reposted by Winston Lin
economeager.bsky.social
today we will all read imbens 2021 on statistical significance and p values, which is a strong contender for having the best opening paragraph of any stats paper

pubs.aeaweb.org/doi/pdf/10.1...
linstonwin.bsky.social
Btw here's an email I sent Stata in 2012, suggesting a clarification to their descriptions of the "unequal" and "welch" ttest options. Got a polite reply but I don't think they changed it :)
Email to Stata explaining that Welch (1949) "welched" on Welch (1947), but refraining from asking if their co-founder Finis Welch was related to B. L. Welch. :) Continued in next screenshot
Second half of email, suggesting that the "unequal" option be described as using "the formula of Satterthwaite (1946) and Welch (1949)" (because what many people call the Welch test is what Stata's "unequal" option does, not what their "welch" option does)
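For readers unfamiliar with the distinction, here is a rough sketch (simulated data, my own) of the unequal-variance t statistic with the Satterthwaite degrees of freedom the email refers to, i.e. what most software labels a Welch test; scipy's equal_var=False option uses the same approximation.

```python
# Unequal-variance t test with the Satterthwaite/Welch df approximation (simulated data)
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=30)
b = rng.normal(0.3, 2.0, size=45)

v1, v2 = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
t_stat = (a.mean() - b.mean()) / np.sqrt(v1 + v2)
df = (v1 + v2) ** 2 / (v1**2 / (len(a) - 1) + v2**2 / (len(b) - 1))

print(t_stat, df)
print(stats.ttest_ind(a, b, equal_var=False))   # same t statistic
```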
Reposted by Winston Lin
carlislerainey.bsky.social
Here's some older, related stuff (from me) aimed at political scientists.

Related paper #1

"Arguing for a Negligible Effect"

Journal: onlinelibrary.wiley....

PDF: www.carlislerainey.c...
Reposted by Winston Lin
carlislerainey.bsky.social
"The Need for Equivalence Testing in Economics"

from Jack Fitzgerald (@jackfitzgerald.bsky.social)

Preprint: osf.io/preprints/met...

We know that "not significant" does not imply evidence for "no effect," but I still see papers make this leap.

Good to see more work making this point forcefully!
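A generic sketch of equivalence testing via two one-sided tests (TOST), not taken from the preprint; the ±0.2 bounds and the simulated data are only for illustration.

```python
# TOST: to claim a negligible effect, reject BOTH one-sided nulls that the
# true difference lies outside the prespecified equivalence bounds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x1 = rng.normal(0.05, 1.0, size=200)
x2 = rng.normal(0.00, 1.0, size=200)

diff = x1.mean() - x2.mean()
se = np.sqrt(x1.var(ddof=1) / len(x1) + x2.var(ddof=1) / len(x2))
df = len(x1) + len(x2) - 2
low, upp = -0.2, 0.2                              # equivalence bounds (illustrative)

p_lower = stats.t.sf((diff - low) / se, df)       # H0: true difference <= low
p_upper = stats.t.cdf((diff - upp) / se, df)      # H0: true difference >= upp
print(diff, max(p_lower, p_upper))                # TOST p-value: reject both to claim equivalence
```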
linstonwin.bsky.social
In the '90s when I worked at Abt and MDRC, I wrote an email that initially had the subject header "OLS without apology". I shared a later version with Freedman, who cited it as "Lim (1999)" in his "Randomization does not justify logistic regression"
Reposted by Winston Lin
eleafeit.bsky.social
An important plea from @lizstuart.bsky.social in today's SCI-OCIS Special Webinar Series:
A warning and a plea
- As fields start to use more advanced quantitative / "causal" methods, there is a desire to help consumers of the research (e.g. journal reviewers, editors) to easily assess the study quality and validity (e.g. JAMA causality language)
...
- We need to push against this - need ways to help people understand and assess the (inherently untestable) assumptions in many studies.
linstonwin.bsky.social
I don't wanna put words in Rosenbaum's mouth ("spectrum" is just a word that came to my mind for a quick Bluesky reply) and I'd really encourage anyone interested to read his papers and the Stat Sci discussion in full, and then critique them. :) But here's a screenshot from his reply to Manski
linstonwin.bsky.social
competing theories. He has an interesting debate with Manski on external validity in the comments on the Stat Sci paper (I'll send you some excellent responses that my undergrad students at Yale wrote).
linstonwin.bsky.social
can lead to badly misleading literatures; (3) we can sometimes learn from collections of studies with different designs & weaknesses (I think he's partly influenced by the literature on smoking & lung cancer, which he cites elsewhere); (4) we should try to falsify or corroborate predictions of 2/
linstonwin.bsky.social
Thanks, Alex! I think that's a small part of his message. It's hard for me to do justice to these papers in a short thread, but I think he's also saying (1) credibility is on a spectrum and we should try to learn from all sorts of designs; (2) repeating the same design with the same weaknesses 1/