James Steele
@jamessteeleii.bsky.social
690 followers 480 following 4.8K posts
I'm just a guy who's an academic for fun
Pinned
jamessteeleii.bsky.social
Excited to have now started new role as Head of Research with Macrofactor (macrofactorapp.com). Looking forward to getting stuck into some really cool projects and getting that out to all the users!
Macrofactor Logo - An M and F in vertical orientation.
Reposted by James Steele
liamsatchell.bsky.social
New paper! I was wrong! We checked and checked again; we were definitely wrong.

And I couldn't be happier that we are able to publish these findings at QJEP - Satchell, Hall & Jones 🧵

osf.io/preprints/ps...
jamessteeleii.bsky.social
After this, if we end up finding precise effect estimates very much in line with theory, I'm calling it there... further research would genuinely be a waste of time (brief commentary coming soon on this point of research waste too 😜).
jamessteeleii.bsky.social
We have our next Project Discover currently underway conducting another high-powered, pre-registered test of theoretically derived precise predictions regarding comparative effects of training volume:

osf.io/7vmxk/
jamessteeleii.bsky.social
@f2harrell.bsky.social just managed to squeeze in a paraphrase and credit to you for this in a commentary piece!
jamessteeleii.bsky.social
Oh I'm under no illusion that it won't be terrifying again when I venture out of the shallows of the kelp garden and into the unfathomable depths 😬
jamessteeleii.bsky.social
Alright, I'm hooked now... Still have this pit in my stomach the whole time I'm playing, but as I start to figure stuff out I'm getting a bit more confident, and just the process of keeping on top of surviving with food, water, etc. whilst searching the shallows for materials keeps you occupied...
jamessteeleii.bsky.social
Man I started playing Subnautica last night for the first time and fuck me... that confirmed my thalassophobia. Scariest game I have ever played. No chance of me trying VR version 😅 Friend was cracking up on the stream. Confirmed for me I'm never scuba diving 😂
jamessteeleii.bsky.social
Certainly from a Meehlian perspective, and even considering it's essentially retrodictive, that would tend to provide further corroboration for the theory that trained folks just don't really grow much. Especially when combined with our estimates from the latest Project Discover.

🧵6/6
A forest plot-style graph titled “Degree of corroboration of theory derived predictions” with the subtitle "Grey band indicates the theory tolerance i.e., interval estimate derived from theory; Spielraum based upon 95% prediction interval from large scale meta-analysis (DOI: 10.1080/02640414.2023.2286748) of varied resistance training interventions". The x-axis is labeled “Standardised Mean Effect” ranging from approximately -0.25 to 1.75. This plot is similar to the previous one but shows all the hypertrophy and strength outcome estimates from https://sportrxiv.org/index.php/server/preprint/view/485 as facets showing their precision and closeness to the corresponding theory derived predictions and supporting their degree of corroboration for the theory.
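For anyone curious how an index like this can be computed, here is a rough sketch in R of one Meehlian formalisation (closeness × intolerance, in the spirit of Meehl's corroboration index), plugging in the Spielraum, theory tolerance, and pooled estimate reported in the 🧵5/6 figure below. This is an assumption about the formula behind the plots, not the author's actual code.

```r
# Sketch of a Meehlian corroboration index: how close the observed
# estimate falls to the theory tolerance band (closeness), weighted by
# how narrow that band is relative to the Spielraum (intolerance).
spielraum <- c(-0.10, 0.77)   # 95% prediction interval from meta-analysis
tolerance <- c(0.042, 0.057)  # theory-derived tolerance band
estimate  <- 0.02             # pooled individual-participant estimate

S <- diff(spielraum)
I <- diff(tolerance)
D <- max(0, tolerance[1] - estimate, estimate - tolerance[2])  # deviation

closeness   <- 1 - D / S
intolerance <- 1 - I / S
closeness * intolerance  # ~0.96 with these inputs
```

With these inputs the index comes out at roughly 0.96, matching the value annotated in that figure, though that agreement alone doesn't guarantee this is the exact formalisation used.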
jamessteeleii.bsky.social
So I went back and pulled out the data from these studies to take a look, and lo and behold the estimates weren't that far off what we'd expect... the confidence interval still contains zero, but the estimate is fairly close with the power/precision boost from pooling the data.

🧵5/6
A forest plot-style graph titled “Degree of corroboration of theory derived predictions” with the subtitle "Grey band indicates the theory tolerance i.e., interval estimate derived from theory; Spielraum based upon 95% prediction interval from large scale meta-analysis (DOI: 10.1080/02640414.2023.2286748) of varied resistance training interventions". The x-axis is labeled “Standardised Mean Effect” ranging from approximately -0.25 to 1.00. A black point estimate with a narrow horizontal error bar is shown with the value 0.02 [95%CI: -0.01, 0.05] printed next to it, derived from individual participant-level modeling of previous studies (n=323). A much wider horizontal interval labeled “Spielraum” spans roughly -0.1 to 0.77, representing the 95% prediction interval from a large-scale meta-analysis. A shaded grey vertical band represents the predicted effect from theory ranging an interval of 0.042 to 0.057, indicating theory tolerance. Text annotations on the right report a Corroboration Index of 0.96 and a normalised Corroboration Index of 0.93. The plot caption reads "Estimate derived from individual participant (n=323) level modelling of data from previous studies; DOIs: 10.1139/apnm-2014-0162; 10.1519/JSC.0000000000001222; 10.1139/apnm-2016-0180; 10.1139/apnm-2018-0376; 10.1080/02701367.2022.2097625; Main effect of time (weeks converted to 12 week effect) from: Fat Free Mass Z-Score ~ Weeks + (1|Study/Participant)".
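As a rough sketch of the model named in that caption, fit with lme4 in R (the data frame and column names here are hypothetical stand-ins, not the actual analysis script):

```r
# Pooled individual-participant model from the plot caption, assuming a
# long-format data frame `dat` with columns ffm_z (fat free mass
# z-score), weeks, participant, and study.
library(lme4)

fit <- lmer(ffm_z ~ weeks + (1 | study/participant), data = dat)

# Main effect of time: the weekly slope converted to a 12-week effect,
# as in the reported 0.02 [95% CI: -0.01, 0.05].
fixef(fit)["weeks"] * 12
confint(fit, parm = "weeks") * 12
```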
jamessteeleii.bsky.social
Out of curiosity I wanted to see what the pooled estimate analysed across all our previous studies (n=323 across five studies) would be as a standardised mean effect, and compare this with the retrodiction made from our theory.

🧵4/6
jamessteeleii.bsky.social
But I wondered if it might just be that our previous studies individually lacked power, with this noisier outcome, to detect the very small effects we'd certainly now predict from our theory of linear-log adaptation over time with exposure to resistance training.

🧵3/6
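To give a rough sense of that power problem (illustrative numbers only, not the thread's actual calculations): effects as small as the theory now predicts would need samples far beyond any single study here.

```r
# Back-of-envelope: n per group for a two-arm comparison to detect a
# standardised effect of ~0.05 at 80% power (illustrative; the studies
# in question used pre-post designs, but the order of magnitude holds).
power.t.test(delta = 0.05, sd = 1, power = 0.80)
# n comes out at several thousand per group
```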
jamessteeleii.bsky.social
In those studies we collected BodPod data, so instead we had fat-free mass for our participants. Granted, this is not a direct measure of changes in muscle and is an inherently noisier outcome.

🧵2/6
jamessteeleii.bsky.social
In previous studies that we have conducted as part of Project Discover with Discover Strength over the past decade, many folks have complained about the lack of direct measures of hypertrophy.

#exercisescience #metascience

🧵1/6
jamessteeleii.bsky.social
Very tempted to add to a paper that a particular series of studies were excluded from our meta-analysis because they are "an absolute shit show of presumably reporting errors, but maybe worse" and leave it at that 😂
jamessteeleii.bsky.social
"The majority of clinicals trials that are successfully launched end with equivocal results, with confidence intervals that are too wide to allow drawing a conclusion other than “the money was spent”."

Now that is a hook... gonna be paraphrasing that a lot given how rife research waste is.
Reposted by James Steele
scientificdiscovery.dev
It's great that some authors will share their work if you email them for a PDF.

But this isn't a good use of your time, or theirs.

I do think academics have more important things to do than sustain a system where they reply one by one to every potential reader who wants to read beyond the abstract.
jamessteeleii.bsky.social
That's fair... In which case the SMD is conditional. But I rarely see it used in that manner tbh.
jamessteeleii.bsky.social
This SMD "trick" (though I suspect most don't understand the bias so don't do it deliberately) and also the reporting of within group change scores, or percentage changes/differences which can also be incredibly misleading are common from sport science influencers.

12/12
jamessteeleii.bsky.social
Also, it's not uncommon for folks to home in on and overemphasise certain statistics. It's not just the researchers who might focus too much on the more "impressive"-seeming SMDs in such cases. We see this sort of thing in social media representations of results too.

11/12
jamessteeleii.bsky.social
This isn't the end of the world... remember, our raw-units estimate is still unbiased. But it can cause a problem for evidence synthesis via meta-analysis, particularly when we are pooling effects across studies using different operationalisations and scales.

10/12
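A toy illustration of that synthesis problem, using made-up effect sizes with metafor's rma(): a single SMD inflated by a restricted SD drags the pooled estimate upward.

```r
# Random-effects meta-analysis with invented effect sizes: the fourth
# SMD is inflated by a range-restricted pre-intervention SD, pulling
# the pooled estimate above the first three studies.
library(metafor)

yi <- c(0.15, 0.20, 0.18, 0.70)  # fourth estimate inflated
vi <- rep(0.02, 4)               # sampling variances (made up)
rma(yi = yi, vi = vi)
```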
jamessteeleii.bsky.social
Because that selection process means our sample estimate of the pre-intervention SD (red distribution) is not reflective of the population SD. As such, our restricted SD means our standardised effect estimate is inflated.

9/12
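A quick simulation sketch of that inflation in R (all numbers invented for illustration):

```r
# Selecting participants from a restricted range shrinks the sample SD
# relative to the population SD, inflating the standardised effect even
# though the raw-units effect is unchanged.
set.seed(42)
pop_sd   <- 10  # population SD of the outcome
raw_diff <- 2   # true raw-units effect

pre <- rnorm(1e5, mean = 50, sd = pop_sd)
restricted <- pre[pre > 45 & pre < 55]  # range-restricted sample

raw_diff / pop_sd          # SMD against the population SD: 0.20
raw_diff / sd(restricted)  # SMD against the restricted SD: much larger
```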