alex hayes
alexpghayes.com
@alexpghayes.com
postdoc @ stanford econ + incoming assistant prof @ oregon state statistics. networks, causal inference, contagion, measurement error, #rstats. he/him

https://www.alexpghayes.com
hmmm something like E[max coreness | author]? napkin sketches suggest this is approximately the scaled max degree-correction parameter of the author in a DC-SBM
January 29, 2026 at 7:49 PM
can you say more about how you're running this? i managed to connect claude code to this mcp server but i'm much more interested in getting wolfram connected to a reasoning model
January 29, 2026 at 12:13 AM
can you share any references about these more effective strategies? i'm interested in reading more!
January 29, 2026 at 12:08 AM
{broom} has a lot of early tidyverse baggage! It's great that other packages have smoothed things out!
January 23, 2026 at 7:38 AM
Also it's not simple! I think the cognitive load of learning the new semi-parametric methods is a fairly serious constraint for applied use
January 1, 2026 at 7:06 PM
Yeah, I thought epi used sequential g-methods? Also, I'm not sure I think of front-door as all that morally different from IV
January 1, 2026 at 7:04 PM
You get a free license through UW
December 31, 2025 at 6:15 PM
@karlrohe.bsky.social both this and the mathematica mcp bit higher up the thread
December 31, 2025 at 5:43 AM
I've pulled out of several projects, but well before any manuscript writing began
December 23, 2025 at 5:36 AM
I'm fairly agnostic about whether or not service is a good thing; I think there are many reasonable choices. I'm a lot more interested in how statisticians talk about that choice, and how other disciplines respond. The choice has consequences, whatever it is
December 16, 2025 at 12:02 AM
I wonder if there's a way to develop a targeted estimator for ranking observations by P(Y=1|X) via the TMLE machinery. @sherrirose.bsky.social do you know of any work like this?
December 3, 2025 at 2:10 AM
I suppose in neural net land there's also directly minimizing the (smoothed) AUC-ROC, but that's less generic since you can't always modify your loss function
December 3, 2025 at 1:56 AM
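The post above mentions a smoothed AUC-ROC objective without spelling it out; a minimal pure-Python sketch of one common relaxation follows. It uses the pairwise sigmoid surrogate (AUC is the probability a positive outscores a negative, so each misordered pair is penalized smoothly). The function name `smoothed_auc_loss` and the temperature `tau` are illustrative choices, not from any specific library.

```python
import math

def smoothed_auc_loss(scores, labels, tau=1.0):
    """Sigmoid-smoothed surrogate for 1 - AUC.

    For every (positive, negative) pair, add sigmoid(-(s_pos - s_neg) / tau):
    close to 0 when the positive clearly outscores the negative, close to 1
    when the pair is badly misordered. As tau -> 0 this approaches the
    fraction of misordered pairs, i.e. 1 - AUC.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    total = 0.0
    for p in pos:
        for n in neg:
            total += 1.0 / (1.0 + math.exp((p - n) / tau))
    return total / (len(pos) * len(neg))
```

Unlike swapping in a different loss wholesale, this surrogate is differentiable in the scores, which is why it only really works in settings (like neural nets) where you control the training objective.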
I've been seeing his name around the R community for years, and then just the other day I had an issue with {ivreg} and he very kindly helped me out. I'm saddened to learn of his passing.
December 3, 2025 at 1:54 AM
If you were going to handle a ranking problem, what kinds of tools would you reach for? Under/over-sampling or something else?
December 3, 2025 at 1:45 AM
My impression from brief stints in industry was that folks primarily up/down-sampled because they wanted a system good at ordering samples by P(Y=1|X), and that beyond ranking, having a calibrated classifier didn't matter much
December 3, 2025 at 1:45 AM
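The down-sampling habit described above can be sketched in a few lines: keep all positives and subsample the negatives to a target ratio. This is a hedged illustration of the generic practice, not any particular team's pipeline; `downsample_negatives` and `ratio` are hypothetical names.

```python
import random

def downsample_negatives(rows, labels, ratio=1.0, seed=0):
    """Keep every positive; sample roughly ratio * n_pos negatives.

    Shifts the class balance seen during training, which distorts
    P(Y=1|X) calibration but tends to leave the ranking of scores intact.
    """
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    k = min(len(neg), int(ratio * len(pos)))
    keep = pos + rng.sample(neg, k)
    rng.shuffle(keep)
    return [rows[i] for i in keep], [labels[i] for i in keep]
```

The comment in the sketch is the crux of the thread: resampling changes the base rate, so predicted probabilities come out miscalibrated, but if all you need is the ordering by P(Y=1|X), that often doesn't matter.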
The LW shade made me cackle
November 29, 2025 at 9:51 PM
You might also enjoy www.science.org/doi/10.1126/...

Gerlach, Martin, Tiago P. Peixoto, and Eduardo G. Altmann. “A Network Approach to Topic Models.” Science Advances 4, no. 7 (2018): 1–11. doi.org/10.1126/scia....
November 23, 2025 at 1:38 AM
Even if you don't find the main figure concerning, do you find the contrast with pre-registered RCT z-scores concerning? It's hard to imagine a world in which this contrast is benign
November 15, 2025 at 11:58 PM