Ian Hussey
@ianhussey.mmmdata.io
ianhussey.mmmdata.io

Meta-scientist and psychologist. Senior lecturer @unibe.ch. Chief recommender @error.reviews. "Jumped up punk who hasn't earned his stripes." All views a product of my learning history. If behaviorism did not exist, it would be necessary to invent it.

Pinned
Lego Science is research driven by modular convenience.

When researchers combine methods or concepts to create publishable units, more out of convenience than any deep curiosity about the resulting research question.

"What role does {my favourite construct} play in {task}?"
🔍 Sleuthing challenge 🔍

How many suspicious patterns can you find in this table?

(You don't need to know what the variables are.)

Hints in next post! 👇

Standards are nowhere near as universal as Cohen's. The {effectsize} R package collects and implements interpretation guidelines; it's a good go-to for a standard with references. I don't know whether my comment is in compliance with it, but the effect sizes are substantively very large.

More than 18 months have passed since I contacted the authors of this meta-analysis about serious flaws that substantially change the results.

More than 6 months have passed since the publisher was involved.

No action taken.

One of the component studies has since been retracted.
PubPeer - Effect of acceptance and commitment therapy for depressive d...
There are comments on PubPeer for publication: Effect of acceptance and commitment therapy for depressive disorders: a meta-analysis (2023)
pubpeer.com

There are discrepancies between Table 3 and Table 4.

If you recalculate from Table 3, the primary outcome is Cohen's d = 21 (!)

pubpeer.com/publications...

Seems like they have a mix of SDs and SEs

Perhaps there are other inconsistencies - I only looked at PSS at post.

My calculator takes rounding into account and lets you explore whether SD has been confused for SE - I think that's likely here. It still comes out as d = 3.7 if you assume they are SEs, though.

errors.shinyapps.io/recalc_indep...
recalc: Bounds of an independent t-test's p-value and Cohen's d from M/SD/N
errors.shinyapps.io

Seems like there are inconsistencies in the summary stats repeated between Table 3 and Table 4. I think you've used those from Table 4. When I use those from Table 3, I get a Cohen's d of about 21 (!)
recalc: Bounds of an independent t-test's p-value and Cohen's d from M/SD/N
errors.shinyapps.io
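The recalculation being described can be sketched in a few lines. The numbers below are hypothetical, not taken from the paper, and `cohens_d`/`sd_from_se` are illustrative helper names, not the recalc tool's actual code - just a sketch of why confusing SEs for SDs inflates d so dramatically:

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups using the pooled SD."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

def sd_from_se(se, n):
    """If a table actually reports SEs, recover the SD: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

# Hypothetical summary stats: treating a small "SD" column as true SDs
# yields an implausibly huge d...
d_as_sd = cohens_d(20.0, 1.0, 30, 10.0, 1.0, 30)  # 10.0
# ...while reinterpreting those values as SEs shrinks d to something
# merely large.
d_as_se = cohens_d(20.0, sd_from_se(1.0, 30), 30,
                   10.0, sd_from_se(1.0, 30), 30)  # ~1.83
```

The general pattern: because SD = SE * sqrt(n), mistaking SEs for SDs shrinks the denominator by a factor of sqrt(n) and inflates d by the same factor.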
“These findings provide clear evidence that data collected on MTurk simply cannot be trusted.”

You have huge signal boosting power, and I think that comes with the responsibility not to contribute to misunderstanding in this space. Every existing critique of this literature emphasises that causal claims are routinely made with weak or no causal evidence. Let’s learn from that.

I think you’re missing my point. We can’t squeeze causal claims out of longitudinal data, only inject causal assumptions. What we as readers can do is not repeat these causal statements as if they’re evidenced rather than assumed.
First MEP report.

Medical research often uses non-parametric tests. We looked, and noticed that small-sample rank-based tests (e.g. Mann-Whitney, Wilcoxon) have significantly granular test statistics - and therefore granular p-values!

So, here's GRIM for U values -- GRIM-U.

Enjoy.

medicalevidenceproject.org/grim-u-obser...
GRIM-U: A GRIM-like observation to establish impossible p values from ranked tests - Medical Evidence Project
The Medical Evidence Project uses forensic metascientific methods to examine medical research publications that have a disproportionately high impact on human health. Our aim is to determine where pro...
https://medicalevidenceproject.org/grim-u-observation-establish-impossible-p-values-ranked-tests/
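A minimal sketch of the granularity idea (not the GRIM-U method itself; function names are hypothetical): in a tie-free Mann-Whitney test, U is an integer between 0 and n1*n2, so only a finite set of exact two-sided p-values is attainable for given group sizes - a reported exact p that matches none of them is impossible:

```python
from itertools import combinations

def attainable_p_values(n1, n2):
    """Exact two-sided Mann-Whitney p-values attainable with group sizes
    n1 and n2, assuming no ties. U is an integer in 0..n1*n2, so the set
    of possible p-values is finite ('granular')."""
    N = n1 + n2
    counts = {}
    # Enumerate every assignment of the ranks 1..N to group 1.
    for ranks in combinations(range(1, N + 1), n1):
        u = sum(ranks) - n1 * (n1 + 1) // 2  # U statistic for group 1
        counts[u] = counts.get(u, 0) + 1
    total = sum(counts.values())
    max_u = n1 * n2
    ps = set()
    for u in counts:
        u_min = min(u, max_u - u)
        # Two-sided p: probability of a U at least as extreme as this one.
        p = sum(c for v, c in counts.items()
                if min(v, max_u - v) <= u_min) / total
        ps.add(round(p, 3))
    return sorted(ps)

def grim_u_check(p_reported, n1, n2, tol=0.0005):
    """True if some U value can produce the reported exact two-sided p."""
    return any(abs(p_reported - p) <= tol for p in attainable_p_values(n1, n2))
```

For example, with two groups of 3 the only attainable exact p-values are 0.1, 0.2, 0.4, 0.7, and 1.0, so a reported exact p = .05 from such a test is impossible. (The real GRIM-U write-up at the link above is the authoritative treatment; this sketch ignores ties and software-specific continuity corrections.)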

Surely our job is to critically evaluate research, not merely repeat authors' talking points.

Your opening tweet repeats a clearly causal claim that immediately doesn't stand up to scrutiny. This is a ubiquitous critique of this type of work.

Reposted by Ian Hussey

This hasn't received much attention, but it looks like scammers also hacked into Francesca Gino's MTurk account, and asked MTurkers to buy her book Rebel Talent to boost the first-week sales of the book.

Poor lady cannot catch a break.

(links in comments)
How long should it take to retract a paper with incontrovertible signs of data fabrication? Sleuths think 2 months is too long, particularly when clinical risks are involved. deevybee.blogspot.com/2026/01/an-o...
#retraction #stemcells #cardiology
@erictopol.bsky.social
An Open Letter to the BMJ Editorial Board
to: Editor in chief, Kamran Abbasi , [email protected]      Executive editor, Theodora Bloom , [email protected]      Head of research, Elizab...
deevybee.blogspot.com

Reposted by Ian Hussey

so maybe @mdpiopenaccess.bsky.social could make a resolution to remove all editors and reviewers who have accepted nonsensical papers. It would be quite a cull
As a pioneer in Open Access scholarly publishing, MDPI upholds high ethical publishing standards across its journals.

Learn more about how MDPI continues to strengthen its publication ethics policies: buff.ly/nssdnLb

#MDPI #PublicationEthics #OpenAccess

Reposted by Ian Hussey

To bring in the New Year, here's a proposal for external regulation of academic publishing, through a voluntary system of journal certification to the ISO 9001 quality management standard. 🧪 #ResearchIntegrity (1/2) www.nature.com/articles/d41...
www.nature.com

Reposted by Ian Hussey

Just pour the scalding coffee into my cupped hands, please

None of this is incompatible with my statement. Are you arguing for a lack of caution? Strange position to take in science.

I think once we find out that someone has fabricated much of their work, we should be very careful about treating the remainder of their work as trustworthy.

Reposted by Ian Hussey

Anyone check in on Johnny Haidt to see if he thinks Bari Weiss still embodies the “telos of truth”

Your last saved meme is your moral philosophy

Reposted by Ian Hussey

🚨🚨 ATTENTION: I’d like to announce that I, unilaterally but bindingly, have changed the name from “Spearman’s rho” to “Nivard-Spearman’s rho”. I’ll be in talks with package and software maintainers to organize a smooth transfer to the new, more appropriate, terminology.

Reposted by Ian Hussey

I regularly cite Prinz et al. (www.nature.com/articles/nrd...) as a ref for low replicability in (non-psych) preclinical research. Believe it or not: they've omitted which studies they attempted to replicate!

I'm guessing this isn't news to everyone, but it was to me. Bizarre.
Believe it or not: how much can we rely on published data on potential drug targets? - Nature Reviews Drug Discovery
Nature Reviews Drug Discovery - Believe it or not: how much can we rely on published data on potential drug targets?
www.nature.com

I’ll send one in future 😉

Reposted by Alexander Wuttke

I’m vocally skeptical of silicon samples, yet vocally impressed by SurveyBot3000.

The difference: this does not rely on magic beans or assumed omniscience, it is trained and validated against a large corpus of highly relevant data and makes specific predictions with known accuracy and precision.
Finally, @bjoernhommel.bsky.social's and my paper introducing the SurveyBot3000 is officially out in AMPPS. It's a fine-tuned language model that guesstimates correlations between survey items from text alone. Not perfectly, but useful for search, for example.
journals.sagepub.com/doi/10.1177/...

The fact that you have written about this seriously elsewhere makes it more likely that this piece is taken seriously. That makes it worse, not better. How is the reader to know to read it as if you have your fingers crossed while writing it?

Reposted by Alberto Acerbi

Author of “How the phone ban saved high school" clarifies that the article is not meant to imply that the phone ban has saved high school.

There is no way this could cause confusion in this heated space.
¯\_(ツ)_/¯