Richard McElreath 🐈‍⬛
@rmcelreath.bsky.social
17K followers 1.3K following 990 posts

Anthropologist - Bayesian modeling - science reform - cat and cooking content too - Director @ MPI for evolutionary anthropology https://www.eva.mpg.de/ecology/staff/richard-mcelreath/

Richard McElreath is an American professor of anthropology and a director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. He is the author of Statistical Rethinking, an applied Bayesian statistics textbook that was among the first to rely largely on the Stan statistical environment, and of the accompanying rethinking R package.

Pinned
rmcelreath.bsky.social
If you hate statistics like I do, then you'll love my free lectures. Putting science before statistics, 20 lectures from basics of inference & causal modeling to multilevel models & dynamic state space models. It's all free, made with love and sympathy. 🧪 #stats www.youtube.com/playlist?lis...

Reposted by Richard McElreath

leguinbot.bsky.social
Infinite are the arguments of mages.

rmcelreath.bsky.social
I guess because software allows it, people keep trying to estimate diversification rates on phylogenies. This is not in principle possible, because infinite combinations of diversification and extinction rates can explain almost any tree. A short, clear, recent-ish paper: www.nature.com/articles/s41...
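The paper's actual argument is about congruence classes of the birth-death likelihood, but a toy calculation (my own illustration, not from the paper) already hints at the problem: the expected growth of a clade depends only on the net rate λ − μ, so very different speciation/extinction pairs are indistinguishable at that level.

```python
import numpy as np

# Expected number of lineages in a birth-death process:
# E[N(t)] = N0 * exp((lam - mu) * t), which depends only on the
# net diversification rate r = lam - mu, not on lam and mu separately.
def expected_lineages(lam, mu, t, n0=1.0):
    return n0 * np.exp((lam - mu) * t)

t = np.linspace(0.0, 10.0, 101)
slow = expected_lineages(lam=0.30, mu=0.10, t=t)  # low turnover
fast = expected_lineages(lam=1.30, mu=1.10, t=t)  # high turnover, same r = 0.2
print(np.allclose(slow, fast))  # True: identical expected trajectories
```

Both parameter pairs imply the same expected lineage-through-time curve, so nothing in that curve can separate them.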

rmcelreath.bsky.social
This is a nice paper. It faces down a common problem: in order to explain why a common approach cannot work even in theory, we first need to teach a framework in which regression is not magic that tells us which variables on the right cause the variable on the left. It's exhausting.

Anyway great paper!

dingdingpeng.the100.ci
Happy to announce that I'll give a talk on how we can make rigorous causal inference more mainstream 📈

You can sign up for the Zoom link here: tinyurl.com/CIIG-JuliaRo...
Causal inference interest group, supported by the Centre for Longitudinal Studies

Seminar series
20th October 2025, 3pm BST (UTC+1)

"Making rigorous causal inference more mainstream"
Julia Rohrer, Leipzig University

Sign up to attend at tinyurl.com/CIIG-JuliaRohrer

rmcelreath.bsky.social
Someone sent this to me and I want to hug, um, share it with all of you (src @theunderfold.bsky.social )
comic by @theunderfold.bsky.social

rmcelreath.bsky.social
Thinking about and discussing this more with colleagues, I'd really like a continuous-time solution, like the Gillespie algorithm but for perfect counterfactuals as defined in the thread below. I can't find that this has been done, but I somehow believe some rogue chemist has already worked it out.
rmcelreath.bsky.social
Are we doing simulations wrong? This paper convinced me we are. doi.org/10.1098/rstb... Usually we run 2 sets of "worlds", with and without the intervention. This gives large uncertainties that include negative (harmful) effects for interventions that are actually always positive (beneficial)!
[Figure 5 of the paper: time series of cumulative cases averted by the intervention, single-world method vs. a standard method, with 90% confidence intervals. Variation is greatest in the middle of the epidemic, so the number of cases averted can appear large then.]

rmcelreath.bsky.social
The most romantic sentence in the German language: Möchtest du mit mir eine Kartoffel essen gehen? ("Would you like to go eat a potato with me?") (Today and tomorrow in Leipzig at Augustusplatz) kartoffelverband-sachsen.de/saechsisches...

dingdingpeng.the100.ci
Just finished reading this *excellent* article by Gabriel et al. on which effects can be identified in randomized controlled trials. With DAGs!

link.springer.com/article/10.1...
Elucidating some common biases in randomized controlled trials using directed acyclic graphs

Although the ideal randomized clinical trial is the gold standard for causal inference, real randomized trials often suffer from imperfections that may hamper causal effect estimation. Stating the estimand of interest can help reduce confusion about what is being estimated, but it is often difficult to determine what is and is not identifiable given a trial's specific imperfections. We demonstrate how directed acyclic graphs can be used to elucidate the consequences of common imperfections, such as noncompliance, unblinding, and drop-out, for the identification of the intention-to-treat effect, the total treatment effect and the physiological treatment effect. We assert that the physiological treatment effect is not identifiable outside a trial with perfect compliance, no dropout, and perfectly maintained blinding.

[Table 1 of the paper: identifiability of target estimands depending on whether there is blinding, full compliance, and no drop-out.]

Fig. 4: A blinded trial with noncompliance. U are unobserved confounders, Z is treatment assignment, C is compliance, X is the realized treatment, S is the subject's physical and mental health status, X_self and X_cln are the treatments the participant and the clinician believed the participant received, Y is the outcome.

rmcelreath.bsky.social
The algorithm is more intensive than the usual method, so some algorithmic research is needed here. And getting people to do the right thing will need friendly tool support. The authors provide their code, which is a great start toward templating this for generic ABMs or ODEs. github.com/HopkinsIDD/c...

rmcelreath.bsky.social
We want instead "perfect" counterfactuals that change only the intervention (and its consequences) and are compared across worlds identical in all other events. This can be done by computing a potential event graph first, then forking it and applying the intervention logic.
[Figure 3 of the paper: the 'single-world' simulation process. The potential event graph (PEG, figure 2) is used to construct two epidemics, one uncontrolled and one with intervention (here, setting p2 as vaccinated). At each time step the actual state of each individual is set, the intervention alters states and stochastically removes events, inconsistent events are pruned, and the process repeats for t1, t2, t3, and so on. Any graphs made from the same PEG represent different interventions in a 'single world'.]
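The paper's PEG construction is more elaborate than this, but the core idea — drawing all potential events once and replaying them under both scenarios so that only the intervention differs — can be sketched with common random numbers in a toy outbreak model. Everything below is my own simplified illustration, not the authors' code:

```python
import numpy as np

# Toy discrete-time outbreak on a line of contacts. All stochastic
# events are drawn once (a shared "potential event" table); both
# scenarios replay the *same* draws, so the only difference between
# them is the intervention itself.
rng = np.random.default_rng(0)
n_people, n_steps, p_transmit = 200, 30, 0.6
draws = rng.random((n_steps, n_people))  # shared randomness

def run(vaccinated):
    infected = np.zeros(n_people, dtype=bool)
    infected[0] = True  # index case
    for t in range(n_steps):
        # each infected person may infect their right-hand neighbour
        new = infected[:-1] & (draws[t, :-1] < p_transmit)
        targets = np.zeros(n_people, dtype=bool)
        targets[1:] = new
        targets &= ~vaccinated  # the intervention blocks infection
        infected |= targets
    return int(infected.sum())

no_vax = np.zeros(n_people, dtype=bool)
vax = no_vax.copy()
vax[1] = True  # vaccinate person 1, cutting the transmission chain
averted = run(no_vax) - run(vax)
print(averted)  # never negative: same world, only the intervention differs
```

Because both runs share one event table, a beneficial intervention can never appear harmful, which is the point of the single-world comparison; the standard two-worlds approach would redraw the randomness independently and could produce negative "averted" counts by chance.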


richardsever.bsky.social
Another record month for bioRxiv - and further evidence the pandemic spike+dip was just that and growth continues. Thanks to all involved and that includes 🫵

rmcelreath.bsky.social
I suspect researchers want to find p<0.05 and collinearity makes that harder, so they invented procedures for searching model space so they could get more p<0.05.

There is no statistical philosophy in which those procedures are good for prediction or inference. Good for publishing though

rmcelreath.bsky.social
I am told that his Orthogonal series is his best work, so I might read those books next. But happy to receive opinions.

rmcelreath.bsky.social
I recently read Egan's Incandescence. This is a book that people hate or love. I liked it! It doesn't condescend to the reader, but expects you to pay attention and figure out the major plot links on your own. Also endless pages of aliens talking about geometry.

Not going to be a movie anytime soon
cover of Greg Egan's Incandescence

rmcelreath.bsky.social
My first attempt to dispel concern with this kind of naive correlation filtering: It can arise from different causes and vanish in a properly specified model. Example: Data generated by this DAG

X –> Z –> Y

can have highly correlated X and Z. But in the model Y ~ X + Z this won't show up as any "collinearity" problem.
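A quick simulation of that DAG (my own sketch, in plain numpy rather than the rethinking package) makes the point: X and Z are nearly collinear by the usual screening rule, yet the regression recovers the pipe structure cleanly.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Simulate the pipe X -> Z -> Y
x = rng.normal(size=n)
z = 0.9 * x + rng.normal(scale=0.2, size=n)  # Z strongly driven by X
y = 1.0 * z + rng.normal(size=n)             # Y caused only by Z

print(np.corrcoef(x, z)[0, 1])  # ~0.98: "collinear" by the naive filter

# Fit Y ~ X + Z by least squares; the pipe structure is recovered
A = np.column_stack([np.ones(n), x, z])
beta = np.linalg.lstsq(A, y, rcond=None)[0]
print(beta)  # intercept ~0, X coefficient ~0, Z coefficient ~1
```

Dropping X or Z up front because of their correlation would have answered a different question than the model was asked.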