Timo Gnambs
@tgnambs.bsky.social
220 followers 240 following 44 posts
Head of Educational Measurement at LIfBi Bamberg Germany #psychology #psychometrics #assessment #metaanalysis #LSA #stats
Reposted by Timo Gnambs
jmwiarda.bsky.social
Postponing because the results don't fit?

The Bildungsministerkonferenz surprisingly wants to stop the publication of a new education study, for an indefinite period. If it stays that way, it would be an unprecedented move.

In the Wiarda blog: www.jmwiarda.de/blog/2025/10...
tgnambs.bsky.social
These findings emphasize that ICT literacy is not merely a technical skill set but is also closely related to other cognitive abilities.
tgnambs.bsky.social
Additionally, a cross-lagged panel analysis demonstrated that reading and math competencies predicted ICT literacy growth over three years, while ICT literacy also had reciprocal effects on domain-specific competencies.
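A minimal sketch of the cross-lagged logic, assuming a hypothetical data frame with wave-1 and wave-2 scores (the paper itself estimates a full cross-lagged panel model, not these plain regressions):

```python
# Bare-bones cross-lagged sketch with hypothetical variable names;
# the published analysis uses a structural equation model, not plain OLS.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel_waves.csv")  # hypothetical file with wave-1/wave-2 scores

# Reading and math at wave 1 predicting later ICT literacy (controlling for prior ICT).
m_ict = smf.ols("ict_w2 ~ ict_w1 + reading_w1 + math_w1", data=df).fit()

# Reciprocal direction: earlier ICT literacy predicting later reading and math.
m_read = smf.ols("reading_w2 ~ reading_w1 + ict_w1", data=df).fit()
m_math = smf.ols("math_w2 ~ math_w1 + ict_w1", data=df).fit()

print(m_ict.params[["reading_w1", "math_w1"]])
print(m_read.params["ict_w1"], m_math.params["ict_w1"])
```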
tgnambs.bsky.social
Two studies on German students investigated the role of reading and math competence in the development of ICT literacy in adolescence. A variance decomposition analysis revealed that both competence domains together accounted for nearly half of the explained item variances in ICT literacy tests.
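As an illustration of the general logic behind such a variance decomposition (the study's item-level procedure is more elaborate; all names here are hypothetical), one can compare the R² of nested models with and without the two competence domains:

```python
# Rough nested-model illustration of how much of the explained variance in an
# ICT score is attributable to reading and math (hypothetical data and names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ict_scores.csv")  # hypothetical

r2_full = smf.ols("ict ~ reading + math + reasoning", data=df).fit().rsquared
r2_base = smf.ols("ict ~ reasoning", data=df).fit().rsquared

share = (r2_full - r2_base) / r2_full  # share of explained variance due to reading + math
print(f"Reading and math account for {share:.0%} of the explained variance")
```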
tgnambs.bsky.social
The ability to efficiently use digital technologies is a fundamental skill in modern society. In a recent paper, I explored the cognitive roots of information and communication technology (ICT) literacy and its relation to traditional competence domains.
www.sciencedirect.com/science/arti...
Reciprocal effects between information and communication technology literacy and conventional literacies
Information and communication technology (ICT) literacy encompasses a range of cognitive abilities that facilitate the effective use of digital techno…
www.sciencedirect.com
Reposted by Timo Gnambs
srstudent.bsky.social
Out today in BRM!

We investigate the small(er)-sample performance of an MCMC method for checking whether item response data produce an interval scale using the Rasch model. These checks are viable at achievable sample sizes in survey research.

Open access: link.springer.com/article/10.3...
Applying Bayesian checks of cancellation axioms for interval scaling in limited samples - Behavior Research Methods
Interval scales are frequently assumed in educational and psychological research involving latent variables, but are rarely verified. This paper outlines methods for investigating the interval scale assumption when fitting the Rasch model to item response data. We study a Bayesian method for evaluating an item response dataset’s adherence to the cancellation axioms of additive conjoint measurement under the Rasch model, and compare the extent to which the axiom of double cancellation holds in the data at sample sizes of 250 and 1000 with varying test lengths, difficulty spreads, and levels of adherence to the Rasch model in the data-generating process. Because the statistic produced by the procedure is not directly interpretable as an indicator of whether an interval scale can be established, we develop and evaluate procedures for bootstrapping a null distribution of violation rates against which to compare results. At a sample size of 250, the method under investigation is not well powered to detect the violations of interval scaling that we simulate, but the procedure works quite consistently at N = 1000. That is, at moderate but achievable sample sizes, empirical tests for interval scaling are indeed possible.
link.springer.com
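The double cancellation axiom mentioned in the abstract can be illustrated on a single 3×3 block of correct-response probabilities; the toy check below only shows the axiom itself, not the Bayesian MCMC procedure or the bootstrapped null distribution studied in the paper.

```python
# Toy check of the double cancellation condition on a 3x3 matrix of correct-response
# probabilities (rows = ability groups ordered low->high, columns = items ordered
# easy->hard). Illustration only; not the paper's Bayesian procedure.
import numpy as np

def double_cancellation_holds(p: np.ndarray) -> bool:
    """p is a 3x3 array of P(correct) for ability groups a1<a2<a3 and items i1<i2<i3."""
    antecedent = p[1, 0] >= p[0, 1] and p[2, 1] >= p[1, 2]
    consequent = p[2, 0] >= p[0, 2]
    # The axiom is violated only if the premise holds but the conclusion fails.
    return (not antecedent) or bool(consequent)

# Example matrix consistent with an additive (Rasch-type) structure.
probs = np.array([[0.60, 0.45, 0.30],
                  [0.75, 0.60, 0.45],
                  [0.85, 0.75, 0.60]])
print(double_cancellation_holds(probs))  # True
```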
Reposted by Timo Gnambs
ingorohlfing.bsky.social
Correcting for collider effects and sample selection bias in psychological research
psycnet.apa.org/record/2024-... #CausalSky I am not sure collider bias is still that unknown. What was new to me is the proposed correction procedure (from 1982) that may help under sample selection
Reposted by Timo Gnambs
ianhussey.mmmdata.io
My article "Data is not available upon request" was published in Meta-Psychology. Very happy to see this out!
open.lnu.se/index.php/me...
LnuOpen | Meta-Psychology
open.lnu.se
Reposted by Timo Gnambs
carltonfong.bsky.social
As co-chair of AERA's @srma-sig.bsky.social, I am pleased to announce our Fall 2025 webinar series focused on meta-analysis and systematic reviews!

On Friday (Oct 3), our first webinar will be given by James Pustejovsky @jepusto.bsky.social! 🎉

Register here: us06web.zoom.us/meeting/regi...
Reposted by Timo Gnambs
eikofried.bsky.social
Had missed this absolutely brilliant paper. They take a widely used social media addiction scale & replace 'social media' with 'friends'. The resulting scale has great psychometric properties & 69% of people have friend addictions.

link.springer.com/article/10.3...
Development of an Offline-Friend Addiction Questionnaire (O-FAQ): Are most people really social addicts? - Behavior Research Methods
A growing number of self-report measures aim to define interactions with social media in a pathological behavior framework, often using terminology focused on identifying those who are ‘addicted’ to engaging with others online. Specifically, measures of ‘social media addiction’ focus on motivations for online social information seeking, which could relate to motivations for offline social information seeking. However, it could be the case that these same measures could reveal a pattern of friend addiction in general. This study develops the Offline-Friend Addiction Questionnaire (O-FAQ) by re-wording items from highly cited pathological social media use scales to reflect “spending time with friends”. Our methodology for validation follows the current literature precedent in the development of social media ‘addiction’ scales. The O-FAQ had a three-factor solution in an exploratory sample of N = 807 and these factors were stable in a 4-week retest (r = .72 to .86) and was validated against personality traits, and risk-taking behavior, in conceptually plausible directions. Using the same polythetic classification techniques as pathological social media use studies, we were able to classify 69% of our sample as addicted to spending time with their friends. The discussion of our satirical research is a critical reflection on the role of measurement and human sociality in social media research. We question the extent to which connecting with others can be considered an ‘addiction’ and discuss issues concerning the validation of new ‘addiction’ measures without relevant medical constructs. Readers should approach our measure with a level of skepticism that should be afforded to current social media addiction measures.
link.springer.com
tgnambs.bsky.social
Comparable analyses for traditional educational competencies showed that the development of digital competencies more closely mirrored changes in math than in reading literacy. Overall, the results indicated persistent digital disparities throughout adolescence.
4/4
tgnambs.bsky.social
gender differences favoring boys were small at first (Cohen's d = 0.07) but widened during adolescence (Cohen's d = 0.18); and the migrant gap was already substantial in early adolescence (Cohen's d = −0.18) and remained stable later on.
3/n
tgnambs.bsky.social
Using longitudinal data from the German National Educational Panel Study, we show that digital competencies follow distinct developmental patterns between ages 12 and 18:
socioeconomic disparities were initially large (Cohen's d = −0.22) but decreased over time (Cohen's d = −0.16);
2/n
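The gaps in this thread are reported as standardized mean differences; for readers who want the formula spelled out, a bare-bones Cohen's d with a pooled standard deviation looks like this (the scores below are simulated, not the NEPS data):

```python
# Cohen's d as the standardized mean difference with a pooled SD
# (hypothetical score arrays; not the study's data).
import numpy as np

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
high_ses = rng.normal(0.22, 1.0, 500)  # illustrative group difference of ~0.22 SD
low_ses = rng.normal(0.00, 1.0, 500)
print(round(cohens_d(high_ses, low_ses), 2))
```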
tgnambs.bsky.social
In a recent paper (with Anna Hawrot) we explored the development of digital inequalities during adolescence for socioeconomic status, gender, and migration background.
doi.org/10.1016/j.ch...
1/n
Reposted by Timo Gnambs
markusappel.bsky.social
We are hiring 11 Doctoral Researchers (100%) in the DFG-RTG "The Experience of Stories in the Digital Age". Uni Wuerzburg, Germany. Disciplines: Communication, Psychology, Computer Science. Topics: VR / XR, storytelling robots, influencers, misinformation. More: go.uniwue.de/rtg3087jobs Please share
Reposted by Timo Gnambs
tsrauf.bsky.social
Life satisfaction mostly declines with age. Previous findings (esp. the famous U-shaped age-SWB trajectory) were artifacts of misspecified models. doi.org/10.1093/esr/...
Reposted by Timo Gnambs
denewjohn.bsky.social
Sorry, did you just not see this? It is an amazing interview. If you can't like and share this, please don't follow me, as we have nothing in common.
bmcarthur17.bsky.social
This Brit nails it.

#Immigration
Reposted by Timo Gnambs
lakens.bsky.social
New blog post from Data Colada, responding to the recent criticisms on p-curve analysis. It is a *very* good response. As in, it addresses exactly the points I would have expected in a reply, and it explains why I will still teach p-curve analysis. datacolada.org/129
Reposted by Timo Gnambs
hcp4715.bsky.social
As a big fan of multiverse analysis, I am really happy to see that a nice tutorial is just out: psycnet.apa.org/record/2026-...

Congrats to @epronizius.bsky.social, @slewis5920.bsky.social, @aggieerin.bsky.social & @psysciacc.bsky.social (sorry, I did not follow all of the authors here) #Metascience #OpenScience
Reposted by Timo Gnambs
heinzleitgoeb.bsky.social
Thrilled to share: our Special Issue in Social Science Computer Review (@sscratsage.bsky.social) on digital behavioral data quality is out now. Many thanks to all contributing authors and my co-editors @clauwa.bsky.social and Bernd Weiß:
journals.sagepub.com/doi/10.1177/...
Reposted by Timo Gnambs
maksimrudnev.com
Rethinking measurement invariance causally by @dingdingpeng.the100.ci

It is preferable to work with a causal definition of measurement invariance.
A violation of measurement invariance is a potentially substantively interesting observation.
Group differences can be thought of as descriptive results.
Rethinking measurement invariance causally
Measurement invariance is often touted as a necessary statistical prerequisite for group comparisons. Typically, when there is evidence against measur…
doi.org
Reposted by Timo Gnambs
xrg.bsky.social
We often hear from reviewers: "what about demand effects?" So we developed a method to eliminate them. Something weird happened during testing: We couldn’t detect demand effects in the first place! (1/8)
Summary of design and results from our three studies. (A: Design) Each study used a similar experimental design, measuring both positive and negative demand in an online experiment, with three commonly-used task types (dictator game, vignette, intervention). Our experiments had ns ≈ 250 per cell. (B: Results) Observed demand effects were statistically indistinguishable from zero. The plot shows means and 95% confidence intervals for standardized mean differences derived from frequentist analyses of each experiment and an inverse variance-weighted fixed-effect estimator pooling all experiments (solid bars). Prior measurements of experimenter demand from a previous dictator game experiment (de Quidt et al., 2018; standardized mean difference from regression coefficient) and a meta-analysis primarily including small-sample, in-person studies (Coles et al., 2025; Hedge’s g statistic) are also shown for comparison (striped bars). The main text includes Bayesian analyses that quantify our uncertainty.
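The pooled estimate in panel B is a standard inverse-variance-weighted fixed-effect combination of the per-experiment standardized mean differences; a minimal sketch of that estimator with made-up numbers:

```python
# Inverse-variance-weighted fixed-effect pooling of per-experiment effect sizes
# (illustrative numbers only, not the studies' actual estimates).
import numpy as np

effects = np.array([0.02, -0.01, 0.03])  # standardized mean differences per study
ses = np.array([0.09, 0.08, 0.10])       # their standard errors

w = 1 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled d = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```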
tgnambs.bsky.social
Germany. About one international and one national conference per year.