Ian Hussey
@ianhussey.mmmdata.io
ianhussey.mmmdata.io

Meta-scientist and psychologist. Senior lecturer @unibe.ch‬. Chief recommender @error.reviews. "Jumped up punk who hasn't earned his stripes." All views a product of my learning history.

Pinned
Lego Science is research driven by modular convenience.

When researchers combine methods or concepts, more out of convenience than any deep curiosity about the resulting research question, to create publishable units.

"What role does {my favourite construct} play in {task}?"

Reposted by Ian Hussey

Staying curious has saved me from bad data more times than I can count. I always have a validation plan when wrangling data, but it's still always possible to miss something. Time and again, the unplanned "Hmmm, I wonder if I should also check this" moment illuminates a new, unexpected issue.

So, not published “in Nature”.

This was not published in Nature; it was published in "npj mental health research".

Reposted by Ian Hussey

Congratulations to @simine.com for winning the Einstein Foundation Individual Award! 🎉

A well-deserved recognition for her seminal efforts to improve scientific rigor, which includes instituting detailed checks for errors and computational reproducibility at Psychological Science.
Congratulations to @simine.com, well-deserved winner of the Einstein Foundation Individual Award for Promoting Quality in Research 2025 🎉 www.einsteinfoundation.de/en/media/pre...

Any details of what AIRA does, exactly?

This tendency to confuse the credibility of claims with "were the authors doing their best" is strong in psych. E.g., defensiveness about Cross-Lagged Panel Models despite now knowing they have inflated false-positive rates. "But they didn't know that at the time" doesn't increase evidential value now.

Reposted by Ian Hussey

I never understood the idea that failing to pre-register an RCT before 2010 (say) does not constitute risk of bias. RoB is not about figuring out whether researchers did their best according to the standards of the time. Meta-research suggests that effect estimates did tend to shrink after the introduction of registration.

We do this in Bern! There is even cake for attendees.

Chaotic evil: N(0,10) applied to a linear probability model.

@solomonkurz.bsky.social schooled me on this a year ago, mind was blown.

How are you out only-child-ing me
The default prior for the intercept in both {rstanarm} and {brms} is very wide.

Counterintuitively, because it sits on the logit scale, this actually translates to a **strong** prior that p(y=1) is near 1 or near 0.

Always check your priors!

#rstats
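A minimal sketch of the point above in base R. The Normal(0, 10) prior here is illustrative, standing in for a "very wide" intercept prior rather than the exact {rstanarm} or {brms} default:

```r
# Illustrative only: draw intercepts from a "wide" Normal(0, 10) prior
# on the logit scale, then look at the implied prior on p(y = 1).
set.seed(42)
logit_intercept <- rnorm(1e5, mean = 0, sd = 10)

# Transform from the logit scale to the probability scale.
p <- plogis(logit_intercept)

# Most of the prior mass piles up near 0 or 1 -- a *strong* prior,
# not a vague one.
mean(p < 0.01 | p > 0.99)  # roughly 0.65

hist(p, breaks = 50,
     main = "Implied prior on p(y = 1), Normal(0, 10) on the logit scale",
     xlab = "p(y = 1)")
```

The histogram is U-shaped: widening the prior on the logit scale pushes the implied probabilities toward the extremes rather than making the prior less informative.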

Reposted by Ian Hussey

"There are stupid fabricators and there are more competent ones.

Potentially, LLMs lower the bar and allow the stupid ones to do a better job"

~a chilling problem highlighted by Jack Wilkinson at the International Research Integrity Conference

Reposted by Ian Hussey

An international computing society has begun retracting conference papers for “citation falsification” only months after the sleuth who flagged the suspect articles was convicted for defamation in a lawsuit filed by one of the offending authors.
Computing society pulls works for ‘citation falsification’ months after sleuth is convicted of defamation
retractionwatch.com

Re bad idea 1:

Unsurprisingly, a new independent double-blind RCT replication shows that approach-avoidance training does not reduce problematic drinking behaviour.

journals.sagepub.com/doi/10.1177/...
A paper critiquing post-publication peer review has numerous made-up references, including a @nature.com article falsely attributed to our Ivan Oransky.
link.springer.com/article/10.1...
PubPeer - An expert criticism on post-publication peer review platform...
There are comments on PubPeer for publication: An expert criticism on post-publication peer review platforms: the case of pubpeer (2025)
pubpeer.com

It would be a step backwards rather than forwards to have a vast and expensive integrity assessment system that was both ineffective and provided false trust.

But here too the metaphor breaks down, because the evidence that the TSA is effective is very scant: DHS red-team exercises show very high failure rates, published risk analyses question its assumed efficacy, and there are many critiques of 'security theatre' over actual efficacy.

These are all demand-side arguments about what is needed, but it's a supply-side problem. Who will do these reviews? Who has the skills, time, and interest to spend 3x the amount of time on work they're already not being paid or rewarded for in their career? There are fewer levers to pull on here.

My current interest is in making post-publication review more feasible, rapid, and effective. Concerns that would kill a manuscript during peer review are currently ignored after publication, when, for no good reason, the burden of proof switches to critics needing to prove beyond a doubt that there are issues.

This is where the analogy breaks down for me, apart from my discomfort with comparing it to serious crime: police don't make specific efforts to prevent murder; murders are investigated after the fact, which is at odds with the TSA analogy.

This is the primary tension. Do we actually endorse such changes as feasible and worth pursuing? I don't (currently) advocate for universal pre-publication integrity checks because I consider them unfeasible to implement.

Maybe pre-publication peer review can be strengthened, but the review system is already under strain and this has huge scalability issues. Strengthening post-publication review, e.g. through targeted, citation-triggered reviews, is another option. E.g.: mmmdata.io/posts/2025/0...
Post-publication peer-review should focus on highly influential articles
Authors: Ian Hussey & Jamie Cummins tl;dr: 9.2% of all citations go to just 0.32% of psychology articles. To have the most impact, post-publication peer-review should focus on these influential articl...
mmmdata.io

I think you make a very useful and honest point when you say “we rely on the good faith of authors and their institutions”. Trust requires clarity about what is or isn’t checked, and journals by and large currently have no immune system to protect against fraud.

Now, I patiently wait for @briannosek.bsky.social to do a wellness check on me for uncharacteristic optimism.

I think this is an overly pessimistic take from the @bmj.com.

Sharing data does not inherently increase trust; rather, it enables verification, which allows for trust calibration.

This example is a win. Serious issues were rapidly detected that would not have been without mandatory data sharing.
We released a pretty cool dataset/preprint today looking at video game play, cognition, time-use and a ton of self-reported psych measures at osf.io/preprints/ps... with @nballou.bsky.social @matti.vuorre.com @thomashakman.bsky.social @rpsychologist.com and @shuhbillskee.bsky.social RRs coming soon
Risk of bias in robustness reports: https://osf.io/wj26e
The link between the gut #microbiome and autism is not backed by science, researchers say.

Read the full opinion piece in @cp-neuron.bsky.social: spkl.io/63322AbxpA

@wiringthebrain.bsky.social, @statsepi.bsky.social, & @deevybee.bsky.social