Daniel 🕹️
@strengejacke.de
570 followers 100 following 410 posts
He/she/it - the 's' must come along. We're lower than the world! R easystats project: https://easystats.github.io/easystats/
strengejacke.de
Also haven't heard of machete order before 👀
Although for younger kids, JarJar Binks is a good entry into science fiction movies, I guess.
strengejacke.de
I think I showed my kids some "making of" scenes so they realised no cat was hurt during the movie shooting and that all actors are still alive (I mean they survived the movie in real life). 😜
strengejacke.de
Then maybe start around 8-10 years of age 😉 The most important thing is not to leave out Star Wars.
strengejacke.de
Even without character limits.
strengejacke.de
About 6-8 years is a good age to start with the Star Wars movies.
strengejacke.de
"estimate generating" might be too vague, because models also generate estimates, and then you could stop there. :-)
strengejacke.de
(and by post-estimation toolbox/framework I mean all the stuff you can do with #rstats packages like marginaleffects, modelbased or emmeans)
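A minimal sketch of what such post-estimation calls look like, assuming a toy linear model on the built-in mtcars data (model and variables are placeholders for illustration, not code from this thread):

library(emmeans)
library(marginaleffects)

# toy model: miles per gallon by number of cylinders, adjusted for horsepower
mtcars$cyl <- factor(mtcars$cyl)
m <- lm(mpg ~ cyl + hp, data = mtcars)

# estimated marginal means (EMMs) per cylinder group
emm <- emmeans(m, ~ cyl)

# pairwise comparisons between those means
pairs(emm)

# the same idea with marginaleffects: average differences between factor levels
avg_comparisons(m, variables = "cyl")

modelbased::estimate_means() and modelbased::estimate_contrasts() express the same kinds of queries in easystats syntax.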
strengejacke.de
... and this framework roughly requires/includes:
- what is your estimand?
- fit an appropriate statistical model
- answer the question(s) with a "post-estimation toolbox" (using models as "prediction machines", to use the metaphor of the authors)
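A rough sketch of those three steps on simulated data (everything below is invented for illustration, not code from the preprint):

library(marginaleffects)

# 1) estimand: the average difference in the outcome between treatment and control
set.seed(123)
d <- data.frame(
  group = sample(c("control", "treatment"), 200, replace = TRUE),
  age = runif(200, 20, 70)
)
d$y <- ifelse(d$group == "treatment", 0.4, 0) + 0.01 * d$age + rnorm(200)

# 2) fit an appropriate statistical model
m <- lm(y ~ group * age, data = d)

# 3) answer the question by querying the model as a "prediction machine",
#    instead of reading the interaction coefficients directly
avg_comparisons(m, variables = "group")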
strengejacke.de
Is there a term for this kind of approach, which maybe also includes "simpler" analyses, like the general use of EMMs or comparisons and the like? I always use "pairwise comparisons" as a synonym, because that is what more people know. What I'm actually referring to is a "post-estimation framework" /1
dingdingpeng.the100.ci
Ever stared at a table of regression coefficients & wondered what you're doing with your life?

Very excited to share this gentle introduction to another way of making sense of statistical models (w @vincentab.bsky.social)
Preprint: doi.org/10.31234/osf...
Website: j-rohrer.github.io/marginal-psy...
Models as Prediction Machines: How to Convert Confusing Coefficients into Clear Quantities

Abstract
Psychological researchers usually make sense of regression models by interpreting coefficient estimates directly. This works well enough for simple linear models, but is more challenging for more complex models with, for example, categorical variables, interactions, non-linearities, and hierarchical structures. Here, we introduce an alternative approach to making sense of statistical models. The central idea is to abstract away from the mechanics of estimation, and to treat models as “counterfactual prediction machines,” which are subsequently queried to estimate quantities and conduct tests that matter substantively. This workflow is model-agnostic; it can be applied in a consistent fashion to draw causal or descriptive inference from a wide range of models. We illustrate how to implement this workflow with the marginaleffects package, which supports over 100 different classes of models in R and Python, and present two worked examples. These examples show how the workflow can be applied across designs (e.g., observational study, randomized experiment) to answer different research questions (e.g., associations, causal effects, effect heterogeneity) while facing various challenges (e.g., controlling for confounders in a flexible manner, modelling ordinal outcomes, and interpreting non-linear models).
Figure illustrating model predictions. On the x-axis, the predictor: annual gross income in euros. On the y-axis, the outcome: predicted life satisfaction. A solid line marks the curve of predictions, on which individual data points are marked as model-implied outcomes at incomes of interest. Comparing two such predictions gives us a comparison. We can also fit a tangent to the line of predictions, which illustrates the slope at any given point of the curve.

A second figure illustrating various ways to include age as a predictor in a model. On the x-axis, age (predictor); on the y-axis, the outcome (model-implied importance of friends, including confidence intervals).

Illustrated are 
1. age as a categorical predictor, resulting in predictions that bounce around a lot with wide confidence intervals,
2. age as a linear predictor, which forces a straight line through the data points and has a very tight confidence band, and
3. age as splines, which lie somewhere in between: they smoothly follow the data but have more uncertainty than the straight line.
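A hedged sketch of the three specifications described in the second figure, on simulated data (variable names and data are invented here; the worked example itself is in the preprint):

library(marginaleffects)
library(splines)

set.seed(1)
d <- data.frame(age = sample(18:80, 500, replace = TRUE))
d$friend_importance <- 3 + 0.5 * sin(d$age / 15) + rnorm(500, sd = 0.5)

# 1) age as a categorical predictor: predictions bounce around, wide intervals
m_cat <- lm(friend_importance ~ factor(age), data = d)

# 2) age as a linear predictor: a straight line with a tight confidence band
m_lin <- lm(friend_importance ~ age, data = d)

# 3) age as a spline: smooth curve, uncertainty somewhere in between
m_spl <- lm(friend_importance ~ ns(age, df = 4), data = d)

# model-implied outcomes across age, e.g. for the spline model
avg_predictions(m_spl, by = "age")
plot_predictions(m_spl, condition = "age")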
strengejacke.de
(i.e., no particular mention that we're not talking about *Bayesian* priors, because that should be clear when writing the analysis section, where glmmTMB is mentioned and "Bayesian" is not)

Frankly, I haven't thought about how to "name" glmmTMB priors to avoid misunderstandings. Good question!
strengejacke.de
I would probably write that priors were used for regularisation to address convergence and singular fit issues (and refer to the R code file). @bbolker.bsky.social any thoughts?
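A minimal sketch of what such regularising priors could look like, assuming the data-frame-based priors argument of recent glmmTMB versions and using the bundled Salamanders data purely as a placeholder; the class labels and prior settings below are assumptions that should be checked against the glmmTMB priors vignette:

library(glmmTMB)

# weakly informative priors used only for regularisation (not a Bayesian analysis):
# a normal prior on the fixed effects and a gamma prior that keeps the
# random-effects standard deviation away from zero (singular fits);
# see the glmmTMB priors vignette for exact class names and recommended values
reg_priors <- data.frame(
  prior = c("normal(0, 3)", "gamma(1e8, 2.5)"),
  class = c("fixef", "ranef"),
  coef = c("", "")
)

m <- glmmTMB(
  count ~ mined + (1 | site),
  family = poisson,
  data = Salamanders,
  priors = reg_priors
)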
strengejacke.de
When you start setting the zoom in Word or Acrobat to 125%, you know it's time for an eye doctor's appointment again.
strengejacke.de
Thanks! So far I've also heard positive things from various corners of the internet, and no negative reviews yet. Then I'll head over to the local dealer... I hope I can get the first starter set locally.
strengejacke.de
(note the price from the German rulebook 👀 - the cards are really good, but in best condition)
strengejacke.de
This weekend I dug out my very old #starwars #ccg cards (Decipher) because my son really wanted to play the game. Now I'm thinking about checking out Star Wars Unlimited, because it's no longer easy to get cards for the CCG. Anyone have experience with Star Wars Unlimited? Would you recommend it? #tcg
strengejacke.de
It has gotten better since I started using Positron, which has no breakpoint feature for debugging (yet) 😎