
Winston T. Lin

H-index: 26
Economics 54%
Business 21%

by John Kane

Reposted by: Winston T. Lin

uptonorwell.bsky.social
Are you or one of your students considering doing a Ph.D. in a social science? I've spent a lot of time talking about this w/ students & finally wrote something up.

IMO, there are only 3 good reasons to do it. One of them needs to be true--otherwise, don't.

medium.com/the-quantast...
The Only Three Reasons to Do a Ph.D. in the Social Sciences
If none are true, don’t do it.

Reposted by: Winston T. Lin

coalition4evidence.bsky.social
See our No-Spin report on a widely-covered NBER study of Medicaid expansion. In brief: Despite the abstract's claims that expansion reduced adult mortality 2.5%, the study found much smaller effects that fell short of statistical significance in its main preregistered analysis.🧵
noahgreifer.bsky.social
Starting to look like I might not be able to work at Harvard anymore due to recent funding cuts. If you know of any open statistical consulting positions that support remote work or are NYC-based, please reach out! 😅
linstonwin.bsky.social
In case this is of interest, even ANCOVA I is consistent and asymptotically normal in completely randomized experiments (though II is asymptotically more efficient in imbalanced or multiarm designs)
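
A quick simulation sketch of this point (my own toy data-generating process, not from the post): ANCOVA I regresses the outcome on treatment and the covariate; ANCOVA II (Lin 2013-style) adds a treatment-by-centered-covariate interaction. In an imbalanced design with heterogeneous slopes, both recover the average treatment effect, but II is noticeably less variable.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_rep(n=200, ate=1.0):
    # Completely randomized, imbalanced: exactly n/4 units treated
    t = rng.permutation(np.r_[np.ones(n // 4), np.zeros(n - n // 4)])
    x = rng.normal(size=n)
    # Covariate slope differs by arm, so interactions should help
    y = ate * t + 0.5 * x + 1.5 * t * x + rng.normal(size=n)
    xc = x - x.mean()
    # ANCOVA I: y ~ 1 + t + x  (no interaction)
    X1 = np.column_stack([np.ones(n), t, x])
    b1 = np.linalg.lstsq(X1, y, rcond=None)[0][1]
    # ANCOVA II: y ~ 1 + t + xc + t*xc  (centered interaction)
    X2 = np.column_stack([np.ones(n), t, xc, t * xc])
    b2 = np.linalg.lstsq(X2, y, rcond=None)[0][1]
    return b1, b2

est = np.array([one_rep() for _ in range(2000)])
print("ANCOVA I : mean %.3f, sd %.3f" % (est[:, 0].mean(), est[:, 0].std()))
print("ANCOVA II: mean %.3f, sd %.3f" % (est[:, 1].mean(), est[:, 1].std()))
```

Both estimators center on the true effect of 1.0, consistent with the post; the sketch is only meant to illustrate the efficiency gap under imbalance.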

Reposted by: Winston T. Lin

emollick.bsky.social
Issues with interpreting p-values haunt even AI, which is prone to the same biases as human researchers. ChatGPT, Gemini & Claude all fall prey to "dichotomania" - treating p=0.049 & p=0.051 as categorically different, and paying too much attention to significance. www.cambridge.org/core/journal...
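
A quick numeric illustration of why that dichotomy is artificial (my own arithmetic, not from the linked paper): the z statistics behind two-sided p=0.049 and p=0.051 are nearly identical.

```python
from statistics import NormalDist

# Convert two-sided p-values just below and above 0.05 back to z statistics
nd = NormalDist()
z_049 = nd.inv_cdf(1 - 0.049 / 2)   # roughly 1.97
z_051 = nd.inv_cdf(1 - 0.051 / 2)   # roughly 1.95

# The gap is tiny -- the underlying evidence is essentially the same
print(f"p=0.049 -> z={z_049:.3f}; p=0.051 -> z={z_051:.3f}; gap={z_049 - z_051:.3f}")
```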

Reposted by: Winston T. Lin

brennankahan.bsky.social
NEW: CONSORT 2025 now published!

Some notable changes:
-items on analysis populations, missing data methods, and sensitivity analyses
-reporting of non-adherence and concomitant care
-reporting of changes to any study methods, not just outcomes
-and lots of other things

www.bmj.com/content/389/...
CONSORT 2025 explanation and elaboration: updated guideline for reporting randomised trials
Critical appraisal of the quality of randomised trials is possible only if their design, conduct, analysis, and results are completely and accurately reported. Without transparent reporting of the met...
linstonwin.bsky.social
Btw here's an email I sent Stata in 2012, suggesting a clarification to their descriptions of the "unequal" and "welch" ttest options. Got a polite reply but I don't think they changed it :)
Screenshot 1: Email to Stata explaining that Welch (1949) "welched" on Welch (1947), but refraining from asking if their co-founder Finis Welch was related to B. L. Welch. :) Continued in next screenshot.
Screenshot 2: Second half of the email, suggesting that the "unequal" option be described as using "the formula of Satterthwaite (1946) and Welch (1949)" (because what many people call the Welch test is what Stata's "unequal" option does, not what their "welch" option does).
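For reference, the quantity at issue can be sketched directly (my own minimal implementation; the data are made up): the unequal-variance t statistic with the Satterthwaite (1946) degrees-of-freedom approximation, which is what the email says Stata's "unequal" option computes. Welch (1949)'s alternative df formula (Stata's "welch" option) differs slightly and is omitted here.

```python
from statistics import mean, variance

def welch_satterthwaite(x, y):
    """Unequal-variance t statistic with Satterthwaite-style
    degrees of freedom (illustrative sketch, not Stata's code)."""
    n1, n2 = len(x), len(y)
    v1, v2 = variance(x) / n1, variance(y) / n2   # squared standard errors
    t = (mean(x) - mean(y)) / (v1 + v2) ** 0.5
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Made-up data for illustration
t, df = welch_satterthwaite([5.1, 4.8, 5.5, 5.0], [4.2, 4.0, 4.6, 4.1, 4.3])
print(f"t = {t:.3f}, Satterthwaite df = {df:.2f}")
```
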
economeager.bsky.social
today we will all read imbens 2021 on statistical significance and p values, which is a strong contender for having the best opening paragraph of any stats paper

pubs.aeaweb.org/doi/pdf/10.1...

Reposted by: Winston T. Lin

carlislerainey.bsky.social
Here's some older, related stuff (from me) aimed at political scientists.

Related paper #1

"Arguing for a Negligible Effect"

Journal: onlinelibrary.wiley....

PDF: www.carlislerainey.c...

Reposted by: Winston T. Lin

carlislerainey.bsky.social
"The Need for Equivalence Testing in Economics"

from Jack Fitzgerald (@jackfitzgerald.bsky.social)

Preprint: osf.io/preprints/met...

We know that "not significant" does not imply evidence for "no effect," but I still see papers make this leap.

Good to see more work making this point forcefully!
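A minimal sketch of the equivalence-testing idea (my own illustration: the numbers and the equivalence bound delta are made up, and it uses a large-sample normal approximation rather than the exact t-based TOST):

```python
from statistics import NormalDist

def tost_z(estimate, se, delta):
    """Two one-sided tests (TOST) for equivalence within (-delta, delta),
    normal approximation. Rejecting both one-sided nulls supports a
    negligible effect; a plain nonsignificant test does not."""
    nd = NormalDist()
    z_lower = (estimate + delta) / se      # H0: effect <= -delta
    z_upper = (estimate - delta) / se      # H0: effect >= +delta
    p_lower = 1 - nd.cdf(z_lower)
    p_upper = nd.cdf(z_upper)
    return max(p_lower, p_upper)           # overall TOST p-value

# Same point estimate, different precision: only the precise study
# can support "the effect is negligible (within +/- 0.2)"
print(tost_z(0.05, se=0.30, delta=0.2))    # imprecise: no equivalence claim
print(tost_z(0.05, se=0.05, delta=0.2))    # precise: equivalence supported
```

The design point: "not significant" and "equivalent to zero" are answered by different tests.
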
linstonwin.bsky.social
In the '90s when I worked at Abt and MDRC, I wrote an email that initially had the subject header "OLS without apology". I shared a later version with Freedman, who cited it as "Lim (1999)" in his "Randomization does not justify logistic regression"
linstonwin.bsky.social
Clarification: my paper doesn’t advocate a specific estimator. That’s one of the meanings of “agnostic” in the title :)

Reposted by: Winston T. Lin

eleafeit.bsky.social
An important plea from @lizstuart.bsky.social in today's SCI-OCIS Special Webinar Series:
A warning and a plea
- As fields start to use more advanced quantitative / "causal" methods, there is a desire to help consumers of the research (e.g., journal reviewers, editors) to easily assess the study quality and validity (e.g. JAMA causality language)
...
- We need to push against this - need ways to help people understand and assess the (inherently untestable) assumptions in many studies.
linstonwin.bsky.social
I don't wanna put words in Rosenbaum's mouth ("spectrum" is just a word that came to my mind for a quick Bluesky reply) and I'd really encourage anyone interested to read his papers and the Stat Sci discussion in full, and then critique them. :) But here's a screenshot from his reply to Manski
linstonwin.bsky.social
competing theories. He has an interesting debate with Manski on external validity in the comments on the Stat Sci paper (I'll send you some excellent responses that my undergrad students at Yale wrote).
linstonwin.bsky.social
can lead to badly misleading literatures; (3) we can sometimes learn from collections of studies with different designs & weaknesses (I think he's partly influenced by the literature on smoking & lung cancer, which he cites elsewhere); (4) we should try to falsify or corroborate predictions of 2/
linstonwin.bsky.social
Thanks, Alex! I think that's a small part of his message. It's hard for me to do justice to these papers in a short thread, but I think he's also saying (1) credibility is on a spectrum and we should try to learn from all sorts of designs; (2) repeating the same design with the same weaknesses 1/
linstonwin.bsky.social
I'm late to this & not a political scientist, but here are two underappreciated oldies by Rosenbaum, who takes internal & external validity very seriously but has a different vision of replication

www.jstor.org/stable/2685805

doi.org/10.1214/ss/1...
bengolub.bsky.social
In economics, editors, referees, and authors often behave as if a published paper should reflect some kind of authoritative consensus.

As a result, valuable debate happens in secret, and the resulting paper is an opaque compromise with anonymous co-authors called referees.

1/
linstonwin.bsky.social
In completely randomized experiments, avg marginal effects from logit MLE or OLS (with pre-treatment covariates) are consistent for the avg treatment effect even if the model's wrong. Not true of probit MLE. This old tweet links to a helpful thread by @jmwooldridge.bsky.social

x.com/linstonwin/s...
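
A minimal sketch of the OLS half of this claim (my own simulated example; the logit-AME version needs an extra library, so it's omitted): with complete randomization, the OLS coefficient on treatment recovers the average treatment effect even though the linear model is badly misspecified.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Complete randomization: exactly half the units treated
t = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n // 2)])
x = rng.normal(size=n)

# Truth is nonlinear in x, so "y ~ 1 + t + x" is misspecified
y0 = np.sin(2 * x) + rng.normal(size=n)
y1 = y0 + 1.0 + x ** 2          # heterogeneous effects; ATE = 1 + E[x^2] = 2
y = np.where(t == 1, y1, y0)

true_ate = (y1 - y0).mean()

# Misspecified OLS still recovers the ATE under complete randomization
X = np.column_stack([np.ones(n), t, x])
ols_ate = np.linalg.lstsq(X, y, rcond=None)[0][1]
print(true_ate, ols_ate)
```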

Reposted by: Winston T. Lin

jmwooldridge.bsky.social
Because I've seen the law of iterated expectations, Jensen's inequality, and the central limit theorem mentioned in the past few days, I'll migrate one of my early Twitter posts about the tools necessary to master econometrics -- which includes each of those. Here it is.
linstonwin.bsky.social
I don't know the history of the terminology, but here's Scheffe (1959) defining "completely randomized" the same way that Peng does
Excerpt from Henry Scheffe's 1959 book "The Analysis of Variance": "We shall consider the efficiency of the randomized blocks design relative to the _completely randomized design_, in which the treatments are assigned at random to the IJ units subject only to the condition that each treatment appears J times, in such a way that all such assignments have equal probability."
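Scheffe's definition can be sketched in a few lines (an illustrative helper of my own, not from the book): assign I treatments to IJ units so that each treatment appears exactly J times, with all such assignments equally likely.

```python
import random

def completely_randomized(treatments, reps, seed=None):
    """Assign each of the I treatments to exactly `reps` of the I*reps
    units, uniformly over all such balanced assignments (Scheffe's
    "completely randomized design")."""
    rng = random.Random(seed)
    plan = [t for t in treatments for _ in range(reps)]
    rng.shuffle(plan)   # uniform over permutations => uniform over assignments
    return plan

plan = completely_randomized(["A", "B", "C"], reps=4, seed=7)
print(plan)
```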
