Lukas Beinhauer
@lbnhr.bsky.social
data science in energy decarbonisation | stats & programming | meta-analysis and heterogeneity in wild Psychology
And since we all prefer to read free and openly on the internet - here's a preprint:
osf.io/preprints/ps...
5/5
March 26, 2025 at 3:32 PM
If the variance in true scores (latent factor scores) differs across comparisons, the reliability coefficients will differ anyway, even if you are measuring equally precisely. If you're not just interested in this relative measurement precision, then Boot-Err might be the better tool for you.
4/5
March 26, 2025 at 3:32 PM
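A minimal sketch of the point in the post above, with made-up numbers: assuming the classical observed = true + error decomposition, two sites with identical error variance but different true-score spread end up with different reliability coefficients.

```python
# Hypothetical numbers (not from the preprint): identical error variance at
# both sites, different true-score (latent) variance, hence different
# reliability coefficients.
var_error = 1.0                           # same measurement precision everywhere
sites = {"site A": 4.0, "site B": 1.0}    # true-score variances

for label, var_true in sites.items():
    reliability = var_true / (var_true + var_error)  # classical definition
    print(f"{label}: reliability = {reliability:.2f}")
# site A: reliability = 0.80
# site B: reliability = 0.50
```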
Why wouldn't we want to look at reliability coefficients directly? Score reliability is a relative parameter: it tells you what fraction of the variance in your data is attributable to differences in true scores (i.e. in the latent factor scores of the people measured).
3/5
March 26, 2025 at 3:32 PM
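A small simulation of that variance decomposition (hypothetical, again assuming the classical observed = true + error model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_scores = rng.normal(0, np.sqrt(2.0), n)  # latent factor scores, variance 2
errors      = rng.normal(0, np.sqrt(1.0), n)  # random measurement error, variance 1
observed    = true_scores + errors

# Reliability = share of observed-score variance attributable to true-score differences.
reliability = true_scores.var() / observed.var()
print(round(reliability, 2))  # close to 2 / (2 + 1), i.e. about 0.67
```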
We propose an alternative way to assess to what extent measurement precision differs across sites: just model the error variance explicitly (sheepishly named Boot-Err).

Using bootstrapping & appropriate transformations (variance-stabilizing etc.), we can inspect random measurement error directly.
2/5
March 26, 2025 at 3:32 PM
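The preprint describes the actual Boot-Err procedure; as a rough, hypothetical sketch of the idea only, one could bootstrap a site's error-variance estimate from two parallel measurements and use a log transform (one approximately variance-stabilizing choice for variance estimates) before building intervals. Error variance, unlike reliability, stays comparable across sites with different true-score spread.

```python
import numpy as np

rng = np.random.default_rng(1)

def boot_log_error_variance(x1, x2, n_boot=2000):
    """Bootstrap the log of the error-variance estimate from two parallel measurements.

    Under classical test theory, var(x1 - x2) / 2 estimates the random error
    variance; the log transform puts the bootstrap intervals on a roughly
    variance-stabilized scale.
    """
    n = len(x1)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample persons with replacement
        diff = x1[idx] - x2[idx]
        estimates[b] = np.log(diff.var(ddof=1) / 2)  # log error-variance estimate
    return np.percentile(estimates, [2.5, 97.5])

# Toy data: two sites with identical error variance (1.0) but different true-score spread.
n = 500
for label, sd_true in [("site A", 2.0), ("site B", 1.0)]:
    true = rng.normal(0, sd_true, n)
    x1 = true + rng.normal(0, 1.0, n)
    x2 = true + rng.normal(0, 1.0, n)
    lo, hi = boot_log_error_variance(x1, x2)
    print(f"{label}: 95% bootstrap CI for log error variance = [{lo:.2f}, {hi:.2f}]")
```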
Looks decent to me
January 12, 2024 at 4:28 PM
I wasn't aware of this, this is outrageous, what has Wiesbaden ever done to them?
January 12, 2024 at 4:18 PM
I see, the magic lies in the "prompt engineering" - I guess my prompt was too specific/distracting, so it started spewing information on what a replication is instead. That looks very promising, thanks for the follow-up!
January 12, 2024 at 9:11 AM
@forrt.bsky.social
Ideally, it would carry information, for each conceptual replication, on what kind of intervention and what kind of outcome measure was used. We're currently not looking at specific effects, but trying to understand how different interventions targeting the same effect may impact results.
January 12, 2024 at 9:09 AM
Just to clarify - I didn't mean to throw shade! Greatly appreciate everything you're doing!

Is this the database you are referring to? metaanalyses.shinyapps.io/replicationd...

Because this is great stuff, it just doesn't happen to carry the niche information I am currently looking for.
January 12, 2024 at 9:07 AM
Oh really, I just remember some controversies surrounding audits from that period - thanks for the heads-up
January 11, 2024 at 4:37 PM
Ah yes, that's great! I saw it used for direct replications in the past, but they did keep track of some individual replications as well! Thanks! :)
January 11, 2024 at 4:36 PM
Tried it now! Seems like it's closer to scite.ai and other tools, which make use of AI to answer research questions, right? At least I haven't been able to make it spew out the meta-analyses that I am looking for (not even the ones I already know of).
January 11, 2024 at 3:56 PM
Not like I'm sad about it - I'm happy there aren't more replications of "Feeling the future" than we already have!
January 11, 2024 at 3:54 PM