Danny Maupin
@dmaupin.bsky.social
110 followers 1K following 140 posts
🔬Research Fellow in Health Science, University of Surrey 🩺Specialist Vestibular Physiotherapist
Unemployment benefits for underemployed workers benefit their mental health. Now time to find a home for this piece!

#academicsky #opensci #episky
New preprint on medRxiv!

In this, we looked at how unemployment benefits impact the mental health of underemployed workers. We had relatively small sample sizes, so we couldn't use causal methods and had to settle for meta-regression, but this piece provides some initial evidence for expanding /1
The Impact of Unemployment Benefits on the Mental Health of Underemployed Workers: Findings from the Understanding America Study
Background: Underemployment, working fewer hours than desired, has a negative impact on mental health. Research suggests that unemployment benefits can support the mental health of unemployed individu...
www.medrxiv.org
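For anyone unfamiliar with the method mentioned above, here is a minimal meta-regression sketch in Python, regressing per-study effect estimates on a moderator with inverse-variance weights. The variable names, values, and the choice of statsmodels are my own illustrative assumptions and are not taken from the preprint.

```python
# Illustrative sketch only: fixed-effect meta-regression via weighted least squares.
# All numbers below are made up; nothing here comes from the preprint.
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect estimates, their standard errors,
# and one moderator (e.g. a measure of benefit generosity).
effects = np.array([0.12, 0.30, 0.05, 0.22, 0.18])
ses = np.array([0.08, 0.10, 0.06, 0.09, 0.07])
moderator = np.array([1.0, 2.5, 0.5, 2.0, 1.5])

# Weight each estimate by its inverse variance and regress effects on the moderator.
X = sm.add_constant(moderator)
weights = 1.0 / ses**2
result = sm.WLS(effects, X, weights=weights).fit()
print(result.summary())
```

In practice a random-effects meta-regression (which adds a between-study variance term) would usually be preferred; the weighted-least-squares version above is just the simplest way to show the idea.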
Impactful publications, but that is hard to measure
Particularly if all that is being churned out is junk or fast-churn science. It's easy to use AI to develop a paper about how x predicts y and another about how z predicts y, or redundant publications. This has been a problem since before AI but is accelerated now. Looking forward to it publishing more robust /
No, I don't think it would be; just write things vaguely enough
You think he will bother pre-registering? I'm doubtful
I may be confusing the proof-evidence distinction so please educate me if I am, but I don't think we make proof
The hypothesis remark was specific to a reply about looking for a hypothesized effect.

I don't think you make proof if you're doing research without a hypothesis, though? You are discovering proof that already exists. I think people take issue with the word "make", particularly with someone openly biased
The same page about the idea being ridiculous
So I still take issue with the "make proof" aspect. We could test the hypothesis, collect data, all that, but "make proof" has a different connotation, especially when said by someone openly biased.

I think this conversation is in good fun, as it's interesting debating science philosophy, and glad we are /
But we shouldn't look for hypothesized effects. We make a hypothesis and then we test it. That's different. And even the species example, the more I think about it, isn't making proof; we are looking for evidence, yes, but we don't make the proof. The proof has existed previously, we just didn't see it /
using science as a monolith when there are examples that prove otherwise, so they probably should've been more specific
I'd say that's a little disingenuous because we aren't looking at discovering new species; this is specifically health research, and not health research looking for a new disease, but research looking at the causality of a condition with multiple confounders. The other poster is being too broad /
Not that it always is, as researchers often have biases, but this feels like poor wording from individuals who don't seem concerned with gold-standard science despite saying they are
I think people take issue with the wording due to the belief that they are going to interfere with any data or conclusions to back up their point, thus making proof.

If I understand correctly, you are saying science is always looking to make proof, but ideally this will be proof for or against /
Congratulations, Niall! Looking forward to working alongside you
Excited to be a part of this cohort and can't wait to work with fellow colleagues to explore this area further

#academicsky #metascience
We're pleased to be funding early career researchers to explore how AI is transforming science.

The UK Metascience Unit has announced a cohort of 29 researchers that will receive funding through the AI Metascience Fellowship Programme.

Read more: www.ukri.org/news/interna...
International fellowships to explore AI’s impact on science
New £4 million programme funds early career researchers in the UK, US and Canada to investigate how artificial intelligence (AI) is transforming science.
www.ukri.org
Really good blog; enjoyed how you didn't completely dismiss the appeal of using LLMs. I do worry about ChatGPT being used for stats, particularly as less stats-heavy disciplines adopt it. What happens if they say "oh, just do a t-test on this data because that's what I know", even if it's not ideal for the ?
That makes sense. I guess my concern (mentioned in that thread discussion) is that single papers often become such a hot topic despite being done poorly (to be fair, a different issue) or not being replicated, but the idea sticks in the public. This may be more of a fault of science as a whole, with focus
Looking at this. Am I right in assuming that you are not worried about replication crises because that's what science should do? Continue to iterate and drop ideas that consistently don't work even if there are odd results that do?

I get, too, that this doesn't mean fraud, stat magic, etc.
Interesting thread, and thank you for sharing! My first thought when it comes to replication is being able to produce the same result with the same data as described in the methods, though I know this is not always what is assessed. Your thread has communicated well the idea of the variation in studies 1/
I like the idea of a week-long process to write a 10-page recommendation letter at Stanford Law. It seems like a good way to evaluate someone's contributions, though it needs to be done well to minimise bias
“Using metrics to assess researchers can be ‘very dodgy terrain,’” says @jameswilsdon.bsky.social.

Great overview of how research assessment is changing worldwide in @nature.com, featuring recent work by RoRI with the Global Research Council: www.nature.com/articles/d41...