Adrian Seyboldt
@aseyboldt.bsky.social
220 followers 34 following 16 posts
aseyboldt.bsky.social
🥧 nutpie got a website now! pymc-devs.github.io/nutpie/
If you're doing Bayesian inference with PyMC or Stan, this might be worth checking out. Nutpie can sample PyMC and Stan models, and is typically twice as fast.
#BayesianStats #PyMC #Stan
Nutpie
pymc-devs.github.io
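A minimal usage sketch (toy model just for illustration; assumes the current compile_pymc_model/sample API):

```python
import pymc as pm
import nutpie

# A toy model, only for illustration
with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 1.0)
    pm.Normal("y", mu=mu, sigma=1.0, observed=[0.2, -0.1, 0.4])

# Compile the model for nutpie's Rust-based NUTS sampler and draw samples
compiled = nutpie.compile_pymc_model(model)
trace = nutpie.sample(compiled)
```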
aseyboldt.bsky.social
Do you have a write-up somewhere of how that works on an example? I can't think of a reason we couldn't do the same thing in pymc with pytensor. After all, we also have the model graph in a data structure and can analyze and modify it.
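Roughly what I mean (a minimal sketch, assuming a recent PyMC where model.logp() returns the symbolic joint log-density):

```python
import pymc as pm
import pytensor

# The model's log-density is just a PyTensor graph, so it can be
# inspected (and in principle rewritten) programmatically.
with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 1.0)
    pm.Normal("y", mu=mu, sigma=1.0, observed=[0.1, -0.3, 0.5])

logp = model.logp()    # a symbolic PyTensor expression
pytensor.dprint(logp)  # print the graph structure
```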
aseyboldt.bsky.social
Had a lot of fun on the podcast! Hope it's as much fun to listen to as it was to record :-)
learnbayesstats.bsky.social
🎙️ Ep. 133 is out now!

@alex-andorra.bsky.social chats with ‪ @spinkney.bsky.social
& Adrian Seyboldt about making Bayesian models more efficient without losing rigor — zero-sum constraints, Cholesky tricks, practical wins & more

🎧 learnbayesstats.com/episode/133-...

#Bayesianstats #podcast #LBS
Learning Bayesian Statistics – Laplace to be for new & veteran Bayesians alike!
learnbayesstats.com
aseyboldt.bsky.social
Cool stuff, I'll have to do some reading :-) If you want to add a sampler to this, it would be fun to combine it with nuts-rs.
aseyboldt.bsky.social
pytensor (and with it pymc) will apply many of those helpers automatically through rewrites when appropriate, even if you write naive code. It doesn't catch everything, so it is still good to know about them, but it can help beginners a lot.
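For example (a minimal sketch; my understanding is that the stabilization rewrites turn the naive expression into a softplus-style computation, but the exact rewrite set can depend on the version):

```python
import pytensor
import pytensor.tensor as pt

# Naive log(1 + exp(x)) written directly; the compiled graph should use
# a numerically safer formulation after the rewrite passes run.
x = pt.vector("x")
naive = pt.log(1 + pt.exp(x))
f = pytensor.function([x], naive)
pytensor.dprint(f)  # inspect what the rewritten graph actually computes
```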
aseyboldt.bsky.social
I've found that using zero-sum constrained regression values and then taking the softmax to map them to the simplex is usually very nice to work with.
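A minimal sketch of what I mean (the group names and counts are made up; assumes pm.ZeroSumNormal and PyTensor's softmax):

```python
import pymc as pm
from pytensor.tensor.special import softmax

with pm.Model(coords={"group": ["a", "b", "c", "d"]}) as model:
    # Zero-sum constrained values on the unconstrained scale ...
    raw = pm.ZeroSumNormal("raw", sigma=1.0, dims="group")
    # ... mapped to the simplex with a softmax
    probs = pm.Deterministic("probs", softmax(raw, axis=-1), dims="group")
    pm.Multinomial("counts", n=100, p=probs, observed=[20, 30, 25, 25])
```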
aseyboldt.bsky.social
My first instinct about how to model this isn't to use an MvNormal, but maybe to have one scalar variable for the total volume, and then do a regression on the simplex that tells you what ratio of the total volume is in which region?
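Something like this, as a rough sketch (region names and numbers are made up; again assuming pm.ZeroSumNormal and PyTensor's softmax):

```python
import numpy as np
import pymc as pm
from pytensor.tensor.special import softmax

regions = ["north", "south", "east", "west"]          # illustrative names
observed_volumes = np.array([120.0, 80.0, 60.0, 40.0])  # made-up data

with pm.Model(coords={"region": regions}) as model:
    # One scalar variable for the (log) total volume
    log_total = pm.Normal("log_total", mu=np.log(observed_volumes.sum()), sigma=1.0)
    # Zero-sum regression values mapped to the simplex give per-region shares
    raw = pm.ZeroSumNormal("raw", sigma=1.0, dims="region")
    shares = pm.Deterministic("shares", softmax(raw, axis=-1), dims="region")
    mu = pm.math.exp(log_total) * shares
    pm.Normal("volumes", mu=mu, sigma=10.0, observed=observed_volumes, dims="region")
```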
aseyboldt.bsky.social
I don't get it, what's so strange about that quoted sentence? A bit pretentious? But if you turn all nouns *and verbs* into "something", how would any sentence survive?
aseyboldt.bsky.social
You could also do `use std::ops::Neg; num.ln().neg().ln().neg()`, not sure I'd really like to read it that way unless it is in a longer postfix chain anyway...
I sometimes just write `f64::ln(num)` though. Bit verbose with the type all the time, but I don't think it's too bad.
aseyboldt.bsky.social
Funny, I would not want to go from arviz/xarray (with properly chosen dims and coords) to a dataframe. The only time I do that is if I want to make a plot with seaborn, but that's simply a `values.to_dataframe()` call away...
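For instance, a small sketch with one of ArviZ's bundled example datasets (the variable and dim names here just come from that dataset):

```python
import arviz as az
import seaborn as sns

idata = az.load_arviz_data("centered_eight")

# The posterior is an xarray object with named dims and coords;
# flattening to a tidy DataFrame is a single call when seaborn needs one.
df = idata.posterior["theta"].to_dataframe().reset_index()
sns.boxplot(data=df, x="school", y="theta")
```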
aseyboldt.bsky.social
I'd also love to be part of the list :-)
aseyboldt.bsky.social
You can do this easily in pytorch: pytorch.org/docs/stable/...
Also seems to work with onnx (github.com/pymc-devs/nu...)
But for some reason I can't find any references in the jax docs. I'm really confused by this by the way, and maybe I just misunderstand something...
CUDA semantics — PyTorch 2.5 documentation
A guide to torch.cuda, a PyTorch module to run CUDA operations
pytorch.org
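A minimal sketch of what I mean with two CUDA streams in PyTorch (assumes a CUDA-capable GPU; the work here is just a stand-in):

```python
import torch

stream_a = torch.cuda.Stream()
stream_b = torch.cuda.Stream()

with torch.cuda.stream(stream_a):
    x_a = torch.randn(10_000, device="cuda")
    y_a = x_a.mul(2.0).sin().sum()   # stand-in for one chain's work

with torch.cuda.stream(stream_b):
    x_b = torch.randn(10_000, device="cuda")
    y_b = x_b.mul(2.0).sin().sum()   # can overlap with stream_a on the device

torch.cuda.synchronize()             # wait for both streams to finish
print(y_a.item(), y_b.item())
```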
aseyboldt.bsky.social
I don't think you would have to write a kernel. The main problem with NUTS on the GPU seems to be that the GPU waits while we check the turning criterion. But we could easily keep the GPU busy during that time with a different chain. And CUDA streams are a mechanism for exactly this.
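A hedged sketch of the idea (not how nutpie actually does it; `leapfrog_step` and `host_turning_check` are hypothetical placeholders, and it assumes a CUDA device): queue GPU work for every chain on its own stream first, then do the host-side checks, so the device keeps working on one chain while the host inspects another.

```python
import torch

def leapfrog_step(state):
    return (state * 1.001).sin()          # stand-in for logp/gradient work

def host_turning_check(state):
    return bool(state.sum().item() < 0)   # stand-in for the U-turn check

streams = [torch.cuda.Stream(), torch.cuda.Stream()]
states = [torch.randn(10_000, device="cuda") for _ in streams]
torch.cuda.synchronize()                  # make sure initial states are ready

for _ in range(100):
    # queue the next step for every chain first ...
    for i, stream in enumerate(streams):
        with torch.cuda.stream(stream):
            states[i] = leapfrog_step(states[i])
    # ... then do the host-side checks; the other stream keeps running
    for i, stream in enumerate(streams):
        stream.synchronize()              # wait only for this chain's work
        host_turning_check(states[i])
```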
aseyboldt.bsky.social
Really cool :-)
One thing that has always bugged me in jax is that I can't find a way to use multiple cuda streams. I think at least a part of the NUTS overhead goes away if different chains run in different streams, so that the GPU doesn't have to sit around idle when a different chain could run.