Noah Haber
@whaleactually.com
2.8K followers 300 following 270 posts
econ, epi, stats, meta, causal inference mutant scientist, epistemic humility fairy godmother, chaos muppet. doing researchy metasciencey stuff at the Center for Open Science
Posts Media Videos Starter Packs
whaleactually.com
To the eight of you out there whose brains are also damaged in this particular way, thank you/I am sorry.
whaleactually.com
"Elapsed time" is to Intention to Treat as "Moving time" is to Per-Protocol.

I will not explain any further.
whaleactually.com
It's giving Borne vibes....
whaleactually.com
A few of the first recruitment attempts to US-based journals have been met with a "times are too uncertain to take on a project like this," as expected

Small potatoes in the scheme of things, but just another example of the astounding destruction of scientific progress happening right now.
whaleactually.com
Hi folks! Want to reintroduce a thing I'm leading at @cos.io: the Registered Revisions (meta) Trial.

This project is about a new peer review policy and a WILD new way of doing actionable evidence generation via RCTs. A LOT of RCTs.

Now piloted and ready for the main stage, and looking for partners! 🧵👇
whaleactually.com
I am become randomista, bringer of exogeneity.
whaleactually.com
The "meta trial" idea is maybe the most ambitious thing I've ever gotten to try in real life.

In theory, it goes a LONG way towards solving the feasibility, logistics, and incentives problems inherent in multi-unit policy experiments at scale.

Not just for journal policy. For policy period.
whaleactually.com
What we would really want in the end is a BUNCH of compatible, but realistic and pragmatic, trials of custom variations on this policy, all wrapped up in a nice meta analysis.

For the last year or so, we've been piloting a pretty wild approach to getting exactly that:

A study-in-a-kit
whaleactually.com
Yep, exactly. In theory, it's a win all around. More incentive for partners to collaborate, more flexibility for partners, more realistic policy rollouts, etc.

No reason to limit this to journal experiments either; the same idea can be done for any multi-unit policy experiment.
Reposted by Noah Haber
joachim.cidlab.com
This meta trial idea is pretty cool. Eventually you could arrive at a situation where journals help researchers get individual credit for research labor on large collaborative projects
whaleactually.com
And of course, typos are my own, and not those of the NSF which funds this project (grant #2152424) nor the IRB who approved it (University of Virginia Institutional Review Board Protocol #6358).
whaleactually.com
@cos.io also recently launched a journal, the Lifecycle Journal, that puts a whole project from conception to publication (including multi-stage review) in one place.

If Registered Revisions is like a mini registered report, Lifecycle Journal is like a mega one.

lifecyclejournal.org
Lifecycle Journal | Adding trust to your research, from conception through completion
lifecyclejournal.org
whaleactually.com
Yep, if I am interpreting you correctly, those are usually called "Registered Reports." There are quite a few journals that have that as an option, but uptake from authors is pretty low.

Registered *revisions* is a more specific take on that idea.
whaleactually.com
Now we're ramping up to the main phase of the project and gathering journal partners.

Are you a potentially interested journal editor? Know someone who might be? Let's chat!

Feel free to DM me here or email me ([email protected]).

More info: www.cos.io/r3ct/registe...
Impact of Registered Revisions
Publication pre-commitment devices such as Preregistration, Registered Reports, and Registered Revisions may substantially reduce publication biases, prepublication biases (e.g. p-hacking and HARKing)...
www.cos.io
whaleactually.com
It's sorta like someone smashed together a multi-center trial with a prospective meta analysis.

If this works, it's a potentially game-changing way to do large scale policy evidence generation.

And we have a good idea that it DOES work, because we've been piloting it with six journal partners.
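For anyone curious what "wrap it all up in a nice meta analysis" looks like mechanically, here is a minimal sketch of pooling effect estimates from several compatible trials with a DerSimonian-Laird random-effects model. The trial numbers are made up purely for illustration; the actual project's estimands and analysis plan are not specified here.

```python
# Hypothetical sketch: pooling (effect, standard error) pairs from several
# compatible journal trials via DerSimonian-Laird random-effects meta-analysis.
# All numbers are invented for illustration only.
import math

# (effect estimate, standard error) from each hypothetical journal trial
trials = [(0.20, 0.10), (0.35, 0.15), (0.10, 0.12), (0.25, 0.08)]

def random_effects_pool(trials):
    """Return the pooled effect and its standard error (DerSimonian-Laird)."""
    effects = [e for e, _ in trials]
    weights = [1 / se**2 for _, se in trials]  # fixed-effect (inverse-variance) weights
    fixed = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Cochran's Q and the between-trial variance tau^2
    q = sum(w * (e - fixed) ** 2 for w, e in zip(weights, effects))
    df = len(trials) - 1
    c = sum(weights) - sum(w**2 for w in weights) / sum(weights)
    tau2 = max(0.0, (q - df) / c)
    # re-weight with tau^2 added to each trial's sampling variance
    re_weights = [1 / (se**2 + tau2) for _, se in trials]
    pooled = sum(w * e for w, e in zip(re_weights, effects)) / sum(re_weights)
    se_pooled = math.sqrt(1 / sum(re_weights))
    return pooled, se_pooled

pooled, se = random_effects_pool(trials)
print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f}")
```

The point of designing the trials to be compatible up front (prospectively) is exactly so that a pooling step like this is valid, rather than being bolted on after the fact.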
whaleactually.com
That means you also get to publish your own results. That might be nice for the journal, but could be extra nice for, say, a junior editor who leads it.

The "trick" here is that we use those data in a pre-planned meta analysis in a couple years (with coauthorship, of course).
whaleactually.com
The "kit" is a package for journals that includes things like:

Protocols with flexibility for variation
A data collection infrastructure
An IRB pathway (pre approved for most)
Data cleaning
Suggested code
A support community

But the best part is this: if you run the trial, you own the trial.
whaleactually.com
Even bigger question:

How on earth are you going to get an experiment large enough with a bunch of diverse journals, coordinated together with the exact same protocols, on the same exact timelines, and producing actionable evidence?

You can't.

Which is where the whole "Meta Trial" thing comes in.
whaleactually.com
Then the authors go execute their plan. As long as they followed the plan and the other revisions are sufficiently addressed, the paper is accepted, regardless of the new results.

But how does that impact timelines? Questionable research practices? Author experience?

¯\_(ツ)_/¯
whaleactually.com
When an author gets a "do something new" comment, editors ask the authors to address it by giving a summary of what they plan to do to address the comment.

The editors (and maybe reviewers) then decide whether that plan is acceptable to address the issue, and issue an in-principle acceptance.
whaleactually.com
It leaves a lot of uncertainty about what authors should do and what peer reviewers expect, plus pressure for questionable research practices to slip in to prevent late rejection for "wrong" results.

Bad times.

Registered Revisions aims to address that through Revision Plans and in-principle acceptance.
whaleactually.com
What's a "Registered Revision" policy?

The idea is similar to registered reports, but it occurs during standard peer review.

You've all seen peer review comments like "hey this stat is bad can you run something else?" or "can you collect more samples to test X?"

Those are tough to deal with.
whaleactually.com
only the sith deal in absolute attribution
whaleactually.com
death to "is it A or is it B that causes Y"

long live "how much do A and B contribute to Y"