Frank Harrell
f2harrell.bsky.social
Professor of Biostatistics
Vanderbilt University School of Medicine
Expert Biostatistics Advisor
FDA Center for Drug Evaluation and Research
https://hbiostat.org https://fharrell.com
Partly. A uniform prior is the least reasonable one. But wasn’t he avoiding the posterior mean/median/pseudomedian? When you check the calibration of those under early stopping when the design prior is uniform you’ll still find perfect calibration.
January 16, 2026 at 4:07 PM
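A minimal simulation sketch of the calibration claim above. For tractability it swaps the uniform design prior for a conjugate normal one (used as both design and analysis prior), assumes normal data with known σ, and uses a simple stop-when-decisive rule; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_max, look, sigma, tau = 20000, 100, 10, 1.0, 1.0

theta = rng.normal(0, tau, n_sims)   # design prior = analysis prior
est = np.empty(n_sims)
for i in range(n_sims):
    x = rng.normal(theta[i], sigma, n_max)
    for n in range(look, n_max + 1, look):      # interim looks
        prec = n / sigma**2 + 1 / tau**2        # posterior precision
        m = (n / sigma**2) * x[:n].mean() / prec
        if abs(m) > 2 * prec**-0.5:             # stop when decisive
            break
    est[i] = m                                  # posterior mean at the stop

# averaged over the design prior, posterior means at the stopping time
# remain unbiased, no matter how aggressive the stopping rule is
print(round(float(np.mean(est - theta)), 3))
```

The stopping rule depends only on the accumulating data, so by the tower property E[θ − posterior mean] = 0 over the design prior; the same logic applies with a uniform design prior.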
That design could even be augmented to demand at least 0.5 probability of being above the MCID, but I think that demanding a higher probability of non-trivial efficacy is a somewhat better approach.
January 16, 2026 at 12:33 PM
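The contrast in the post above can be made concrete with a hypothetical normal posterior for the treatment effect (all numbers illustrative, and the MCID/3 default is only a placeholder until clinical expertise sets the triviality threshold):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# hypothetical posterior for the treatment effect
post_mean, post_sd = 0.25, 0.12
mcid = 0.30
trivial = mcid / 3          # default triviality threshold

# P(effect > MCID) vs P(effect > triviality threshold)
p_above_mcid = 1 - norm_cdf((mcid - post_mean) / post_sd)
p_nontrivial = 1 - norm_cdf((trivial - post_mean) / post_sd)
print(round(p_above_mcid, 2), round(p_nontrivial, 2))  # → 0.34 0.89
```

The same posterior gives only modest evidence of exceeding the MCID but strong evidence of non-trivial efficacy, which is why the choice of threshold matters so much.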
Yes, I’ve suggested an MCID/3 scaling as a default until clinical expertise weighs in to create a more justified triviality / minimum detectable difference threshold.
January 16, 2026 at 12:30 PM
The bias is not directly related to that but rather to the use of the prior to pull back the whole posterior distribution for smaller sample sizes.
January 16, 2026 at 12:29 PM
The examples in the C3TI program are what you might consider initial drafts of statistical analysis plans that will ultimately be fleshed out with details, especially about diagnostics and prior selection.
January 16, 2026 at 12:28 PM
Kruschke's statement is misleading in the sense that it forgot to advise analysts not to use the observed point estimates but rather posterior means/medians/pseudomedians, which completely debias the estimates. The earlier you stop, the more the pullback to the prior. And beware of the meaning of "false pos.".
January 15, 2026 at 1:51 PM
As exemplified in hbiostat.org/bayes/bet/de... you need to define a triviality threshold (not the MCID), then everything works.
9  Bayesian Clinical Trial Design – Introduction to Bayes for Evaluating Treatments
hbiostat.org
January 15, 2026 at 1:49 PM
You'd need to know all the background for this to make sense to you. To oversimplify, sponsors tend to be more conservative than the FDA and this document will help significantly in making sponsors comfortable in the use of pure Bayesian designs and not having to hybridize with frequentism.
January 14, 2026 at 12:47 PM
One of the fastest ways to speed up drug development even if you have no trustworthy prior information is to use Bayesian sequential designs with early stopping for (especially) inefficacy, because there is no type I assertion probability in need of control hbiostat.org/bayes/bet/de...
9  Bayesian Clinical Trial Design – Introduction to Bayes for Evaluating Treatments
hbiostat.org
January 13, 2026 at 5:44 PM
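A toy sketch of the design in the post above, under assumed normal data with known σ, a conjugate normal prior, and illustrative thresholds (MCID, futility cutoff, and look schedule are all made up for the example). The trial stops for inefficacy as soon as the posterior probability of exceeding the MCID becomes tiny:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def run_trial(theta, n_max=200, look=20, sigma=1.0, tau=0.5, mcid=0.3):
    """Stop early for inefficacy when P(effect > MCID | data) is tiny."""
    x = rng.normal(theta, sigma, n_max)
    for n in range(look, n_max + 1, look):
        prec = n / sigma**2 + 1 / tau**2        # conjugate normal posterior
        m, s = (n / sigma**2) * x[:n].mean() / prec, prec**-0.5
        if 1 - norm_cdf((mcid - m) / s) < 0.05:  # futility: stop now
            return n
    return n_max

null = np.mean([run_trial(0.0) for _ in range(2000)])  # ineffective drug
alt = np.mean([run_trial(0.5) for _ in range(2000)])   # effective drug
print(round(null), round(alt))  # ineffective drugs exit fast; effective ones run on
```

Because every interim quantity is just a posterior probability, no multiplicity adjustment enters; the average sample size under the null collapses while effective treatments run long enough to be estimated.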
The most natural way to state how much borrowing from adults you want to make for kids comes from eliciting the probability that adult data are applicable to kids. This provides the mixing probability between a skeptical prior and the posterior distribution solely from adults.
January 13, 2026 at 5:41 PM
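A numerical sketch of the borrowing scheme above, with conjugate normal pieces on a grid; the elicited applicability probability, the adult posterior, the skeptical prior, and the pediatric data are all hypothetical numbers for illustration:

```python
import numpy as np

def npdf(x, mu, sd):
    """Normal density (NumPy-only, to keep the sketch self-contained)."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

p_applicable = 0.7          # elicited P(adult data apply to children)
theta = np.linspace(-1.0, 1.5, 2001)
dt = theta[1] - theta[0]

# pediatric prior: mixture of the adult posterior and a skeptical prior
prior = (p_applicable * npdf(theta, 0.5, 0.1)           # posterior from adults
         + (1 - p_applicable) * npdf(theta, 0.0, 0.3))  # skeptical component

# update with pediatric data: n children, mean effect ybar, known sigma
n, ybar, sigma = 30, 0.3, 1.0
post = prior * npdf(ybar, theta, sigma / np.sqrt(n))
post /= (post * dt).sum()

post_mean = (theta * post * dt).sum()
print(round(float(post_mean), 2))  # pulled toward the adult posterior
```

The pediatric data themselves re-weight the two mixture components, so the amount of borrowing adapts to how consistent the children's results are with the adult posterior.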
Common in polygenic risk score papers, this practice of comparing extremes to show they differ is incredibly poor statistical practice. It's like comparing the odds of death for 110-year-olds to that of 30-year-olds. You're going to see a whopping big odds ratio.
January 12, 2026 at 12:43 PM
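A quick illustration of the inflation described above, assuming (hypothetically) a standardized score with a modest log-odds of 0.3 per SD:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.3                           # modest effect: log-odds 0.3 per SD
z = rng.standard_normal(1_000_000)   # standardized polygenic-type score

lo, hi = np.quantile(z, [0.10, 0.90])
z_bottom, z_top = z[z <= lo].mean(), z[z >= hi].mean()

# per-SD odds ratio vs. the "whopping" extreme-decile odds ratio
or_per_sd = np.exp(beta)
or_extremes = np.exp(beta * (z_top - z_bottom))
print(round(float(or_per_sd), 2), round(float(or_extremes), 2))
```

The extreme deciles of a standard normal sit about 3.5 SDs apart on average, so a per-SD odds ratio of only 1.35 is advertised as an odds ratio near 2.9.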
Correction: variogram. An example is in hbiostat.org/rmsc/long and a more flexible diagnostic for longitudinal data is in hbiostat.org/rmsc/markov
7  Modeling Longitudinal Responses using Generalized Least Squares – Regression Modeling Strategies
hbiostat.org
January 10, 2026 at 1:16 PM
Amen to that. And to add to your earlier post, one example of a much-needed model diagnostic not directly related to the main estimand (but greatly affecting the standard error) is a variorum for longitudinal data to check correlation structure.
January 9, 2026 at 9:02 PM
Corollary: Feeling good about a sensitivity analysis that shows little sensitivity to varying assumptions tells me that the analytical methods employed were not optimal. I want things to be sensitive.
January 8, 2026 at 5:58 PM
2/2 Contrast that with Bayesian designs where magic happens: If the analysis at the planned study end is valid, all interim Bayesian analyses must be equally valid regardless of the stopping rule, and they require no modification, no consideration of sampling distributions.
January 8, 2026 at 5:55 PM
1/2 I have never seen a clinical trial paper where the frequentist results were fully honest when interim stopping was possible. Both point estimates and confidence intervals need corrections, and the sampling distribution can even be bimodal. I've always seen calculations ignore interim testing.
January 8, 2026 at 5:55 PM
A design I propose specifies that you need to run the trial long enough to estimate the treatment effect if it is effective, but if you show it is ineffective you can stop without the ability to estimate how ineffective it is. hbiostat.org/bayes/bet/de...
January 7, 2026 at 10:19 PM
Note that the regulatory guidance failed to note that alpha is not the probability of making an error.
January 7, 2026 at 3:55 PM
Elizabeth do you like the name "Data Science and AI Institute"?
January 7, 2026 at 12:45 PM