Paul Bürkner
@paulbuerkner.com
6K followers 1.8K following 74 posts
Full Professor of Computational Statistics at TU Dortmund University | Scientist | Statistician | Bayesian | Author of brms | Member of the Stan and BayesFlow development teams | Website: https://paulbuerkner.com | Opinions are my own
paulbuerkner.com
the logging is done in rstan, so a fix, if needed, will have to happen there, I assume.
paulbuerkner.com
there is not, unfortunately. I haven't had time to look into it further.
Reposted by Paul Bürkner
marvin-schmitt.com
I defended my PhD last week ✨

Huge thanks to:
• My supervisors @paulbuerkner.com @stefanradev.bsky.social @avehtari.bsky.social 👥
• The committee @ststaab.bsky.social @mniepert.bsky.social 📝
• The institutions @ellis.eu @unistuttgart.bsky.social @aalto.fi 🏫
• My wonderful collaborators 🧡

#PhDone 🎓
Image of a graduating PhD student in the trending Studio Ghibli style.
paulbuerkner.com
can you post a reprex on GitHub?
Reposted by Paul Bürkner
hadley.nz
What advice do folks have for organising projects that will be deployed to production? How do you organise your directories? What do you do if you're deploying multiple "things" (e.g. an app and an api) from the same project?
Reposted by Paul Bürkner
marvin-schmitt.com
Amortized inference for finite mixture models ✨

The amortized approximator from BayesFlow closely matches the results of expensive-but-trustworthy HMC with Stan.

Check out the preprint and code by @kucharssim.bsky.social and @paulbuerkner.com👇
bayesflow.org
Finite mixture models are useful when data comes from multiple latent processes.

BayesFlow allows:
• Approximating the joint posterior of model parameters and mixture indicators
• Inferences for independent and dependent mixtures
• Amortization for fast and accurate estimation

📄 Preprint
💻 Code
Reposted by Paul Bürkner
avehtari.bsky.social
If you know simulation-based calibration checking (SBC), you will enjoy our new paper "Posterior SBC: Simulation-Based Calibration Checking Conditional on Data" with Teemu Säilynoja, @marvinschmitt.com and @paulbuerkner.com
arxiv.org/abs/2502.03279 1/7
Title: Posterior SBC: Simulation-Based Calibration Checking Conditional on Data

Authors: Teemu Säilynoja, Marvin Schmitt, Paul Bürkner, Aki Vehtari

Abstract: Simulation-based calibration checking (SBC) refers to the validation of an inference algorithm and model implementation through repeated inference on data simulated from a generative model. In the original and commonly used approach, the generative model uses parameters drawn from the prior, and thus the approach is testing whether the inference works for simulated data generated with parameter values plausible under that prior. This approach is natural and desirable when we want to test whether the inference works for a wide range of datasets we might observe. However, after observing data, we are interested in answering whether the inference works conditional on that particular data. In this paper, we propose posterior SBC and demonstrate how it can be used to validate the inference conditionally on observed data. We illustrate the utility of posterior SBC in three case studies: (1) A simple multilevel model; (2) a model that is governed by differential equations; and (3) a joint integrative neuroscience model which is approximated via amortized Bayesian inference with neural networks.
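For readers who have not seen SBC before, here is a minimal sketch of the classic prior-based rank computation that posterior SBC builds on, for a single scalar parameter; simulate_prior(), simulate_data(), and posterior_draws() are hypothetical placeholders for a model and inference algorithm, not functions from any package.

```r
# a minimal sketch of classic (prior) SBC for a scalar parameter;
# simulate_prior(), simulate_data(), and posterior_draws() are
# hypothetical placeholders, not functions from any package
sbc_rank <- function(S = 100) {
  theta <- simulate_prior()       # draw a parameter value from the prior
  y     <- simulate_data(theta)   # simulate a dataset given that value
  draws <- posterior_draws(y, S)  # S posterior draws given y
  sum(draws < theta)              # rank of theta among the posterior draws
}
ranks <- replicate(500, sbc_rank())
hist(ranks, breaks = 20)  # roughly uniform ranks indicate calibrated inference
```

As the abstract explains, posterior SBC replaces the prior draws with draws from the posterior given the observed data, so calibration is checked conditionally on that particular dataset.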
Reposted by Paul Bürkner
bayesflow.org
A study with 5M+ data points explores the link between cognitive parameters and socioeconomic outcomes: The stability of processing speed was the strongest predictor.

BayesFlow facilitated efficient inference for complex decision-making models, scaling Bayesian workflows to big data.

🔗Paper
paulbuerkner.com
cool idea! I will think about how to achieve something like this. can you open an issue on GitHub so I don't forget about it?
Reposted by Paul Bürkner
bayesflow.org
Join us this Thursday for a talk on efficient mixture and multilevel models with neural networks by @paulbuerkner.com at the new @approxbayesseminar.bsky.social!
approxbayesseminar.bsky.social
A reminder of our talk this Thursday (30th Jan) at 11am GMT. Paul Bürkner (TU Dortmund University) will talk about "Amortized Mixture and Multilevel Models". Sign up at listserv.csv.warwick... to receive the link.
Reposted by Paul Bürkner
marvin-schmitt.com
Paul Bürkner (@paulbuerkner.com) will talk about amortized Bayesian multilevel models in the next Approximate Bayes Seminar on January 30 ⭐️

Sign up to the seminar’s mailing list below to get the meeting link 👇
approxbayesseminar.bsky.social
Paul Bürkner (TU Dortmund University) will give our next talk, on "Amortized Mixture and Multilevel Models", scheduled for Thursday the 30th of January at 11am. To receive the link to join, sign up at listserv.csv.warwick...
Reposted by Paul Bürkner
codendahl.bsky.social
More than 60 German universities and research institutions are announcing that they will end their activities on Twitter.

Including my alma mater, the University of Münster.

HT @thereallorenzmeyer.bsky.social nachrichten.idw-online.de/2025/01/10/h...
Universities and research institutions leave platform X - Together for diversity, freedom, and science
nachrichten.idw-online.de
Reposted by Paul Bürkner
fusaroli.bsky.social
what are your best tips to fit shifted lognormal models (in #brms / Stan)? I'm using:
- checking the long tails (few long RTs make the tail estimation unwieldy)
- low initial values for ndt
- careful prior checks
- pathfinder estimation of initial values
still, as the data grows, chains get stuck
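One way to combine several of these tips in brms is sketched below, assuming a hypothetical data frame df with reaction times rt and a predictor condition; this illustrates the low-initial-value and tail-checking ideas above, not a guaranteed fix for stuck chains, and the exact init handling may depend on the brms version and backend.

```r
library(brms)

# a minimal sketch with a hypothetical data frame `df` containing
# reaction times `rt` and a predictor `condition`
fit <- brm(
  rt ~ condition,
  data = df,
  family = shifted_lognormal(),
  # start the shift parameter (ndt) well below the minimum observed RT
  # so the chains do not start near the boundary
  init = function() list(ndt = min(df$rt) / 10),
  chains = 4, cores = 4
)

# check how well the long right tail is captured
pp_check(fit, type = "dens_overlay")
```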
paulbuerkner.com
I think this should be documented in the brms_families vignette. perhaps you can double-check whether the information you are looking for is indeed there.
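For reference, assuming brms is installed, the vignette mentioned above can be opened directly from R:

```r
# open the vignette describing the parameterizations of brms families
vignette("brms_families", package = "brms")
```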
paulbuerkner.com
happy to work with you on that if we find the time :-)
paulbuerkner.com
indeed, I saw it at StanCon, but I am not sure anymore how production-ready the method was.
paulbuerkner.com
something like this, yes. but ensuring the positive definiteness of arbitrarily constrained correlation matrices is not trivial. so there may need to be some restrictions on which correlation patterns are allowed.
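A small R example illustrates the problem: fixing individual correlations (here one pair to zero) while leaving the others free can easily produce a matrix that is not positive definite. The numbers are made up for illustration.

```r
# a patterned correlation matrix with one correlation fixed to zero;
# the made-up remaining values render it indefinite
R <- matrix(c(
  1.0, 0.9, 0.0,
  0.9, 1.0, 0.9,
  0.0, 0.9, 1.0
), nrow = 3, byrow = TRUE)
min(eigen(R, symmetric = TRUE, only.values = TRUE)$values)
#> about -0.27, so R is not a valid correlation matrix
```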
paulbuerkner.com
I have already thought about this. a complete SEM syntax in brms would support selective error correlations by generalizing set_rescor().
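For context, residual correlations in the current brms multivariate syntax are all-or-nothing across responses; the sketch below uses a hypothetical data frame df with responses y1, y2, y3 and a predictor x.

```r
library(brms)

# current behavior: set_rescor() turns residual correlations on or off
# for *all* pairs of responses at once (hypothetical data frame `df`)
mvmod <- bf(y1 ~ x) + bf(y2 ~ x) + bf(y3 ~ x) + set_rescor(TRUE)
fit <- brm(mvmod, data = df)
```

A generalized set_rescor() could instead accept a pattern saying which pairs of responses get a residual correlation, subject to the positive definiteness restrictions discussed above.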
Reposted by Paul Bürkner
jebyrnes.bsky.social
OK, here is a very rough draft of a tutorial for #Bayesian #SEM using #brms for #rstats. It needs work and polish, still has a lot of open questions in it, and I need to add a references section. But I think a lot of folks will find this useful, so.... jebyrnes.github.io/bayesian_sem... (use issues for comments!)
Full Luxury Bayesian Structural Equation Modeling with brms
jebyrnes.github.io
Reposted by Paul Bürkner
bhvieira.github.io
Ternary plots to represent data in a simplex, yay or nay? #stats #statistics #neuroscience
A ternary plot visualizing data points classified into three groups: CN (Cognitively Normal), MCI (Mild Cognitive Impairment), and AD (Alzheimer's Disease dementia). The triangular axes represent predicted probability corresponding to each diagnosis category, with the corners labeled CN (top), MCI (bottom left), and AD (bottom right). Each point is colored according to its real diagnosis group (blue for CN, green for MCI and red for AD). Trajectories of predicted probabilities pertaining to the same subject are denoted with arrows connecting the points. Shaded regions in blue, green, and orange further highlight distinct areas of the plot associated with CN, MCI, and AD, respectively. A legend on the right identifies the diagnosis categories by color.
Reposted by Paul Bürkner
kevinmkruse.bsky.social
Writing is thinking.

It’s not a part of the process that can be skipped; it’s the entire point.
annaeclark.bsky.social
This is what's so baffling about so many suggestions for AI in the humanities classroom: they mistake the product for the point. Writing outlines and essays is important not because you need to make outlines and essays but because that's how you learn to think with/through complex ideas.
jaxwendy.bsky.social
I'm sure many have said this before but I'm reading a student-facing document about how students might use AI in the classroom (if allowed) and one of the recs is: use AI to make an outline of your reading! But ISN'T MAKING THE OUTLINE how one actually learns?
Reposted by Paul Bürkner
bayesflow.org
1️⃣ An agent-based model simulates a dynamic population of professional speed climbers.
2️⃣ BayesFlow handles amortized parameter estimation in the SBI setting.

📣 Shoutout to @masonyoungblood.bsky.social & @sampassmore.bsky.social

📄 Preprint: osf.io/preprints/ps...
💻 Code: github.com/masonyoungbl...
Reposted by Paul Bürkner
bayesflow.org
Neural superstatistics is a framework for probabilistic models with time-varying parameters:

⋅ Joint estimation of stationary and time-varying parameters
⋅ Amortized parameter inference and model comparison
⋅ Multi-horizon predictions and leave-future-out CV

📄 Paper 1
📄 Paper 2
💻 BayesFlow Code