Adrien Corenflos
adriencorenflos.bsky.social
Assistant Professor at the University of Warwick.
I compute integrals for a living.
https://adriencorenflos.github.io/
According to AI, the exam I've written for my students "does not spark any joy".
January 23, 2026 at 5:49 PM
There there
January 12, 2026 at 9:59 AM
Fixed it for you.
December 7, 2025 at 8:44 PM
So MANY layers!
November 1, 2025 at 10:08 AM
We believe the construction has many nice and important consequences. One we find particularly nice is that it gives convergence diagnostics for MCMC algorithms in *any* f-divergence, e.g. the total variation. This is because we essentially have f-div ≤ Σₙ f(2N Wⁿ) for "correct" weights.
October 20, 2025 at 4:10 PM
We prove that this sequence of weights is consistent (in the Monte Carlo sense: it gives correct weighted averages) and that it converges to the constant weights 1/(2N): this is what you'd have if your chain were at stationarity.
October 20, 2025 at 4:10 PM
This is what the original illustration showed. You make pairs, wait until they meet, at which point you equalise their weights: both particles are one and the same, so it doesn't matter! Once that's done, you just need to swap them (shuffle) with the other pairs that met, and the cycle continues.
October 20, 2025 at 4:10 PM
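A toy numpy sketch of the equalise-then-shuffle step as I read it above (my own paraphrase, not the paper's actual algorithm; the function name and array layout are mine): wherever a pair has met, average its two weights, then permute the met particles among themselves so the cycle can continue.

```python
import numpy as np

def equalise_and_shuffle(x, y, w_x, w_y, rng):
    """Toy equalise-then-shuffle step on paired particles (x[i], y[i]).

    Where a pair has met (x[i] == y[i]), replace both weights by their
    average -- the particles are one and the same, so it doesn't matter --
    then randomly re-pair the met particles with each other.
    """
    met = x == y
    mean_w = 0.5 * (w_x[met] + w_y[met])
    w_x[met] = mean_w
    w_y[met] = mean_w
    # Shuffle: permute the met y-particles (with their weights) among themselves.
    idx = np.flatnonzero(met)
    perm = rng.permutation(idx)
    y[idx], w_y[idx] = y[perm], w_y[perm]
    return x, y, w_x, w_y
```

Averaging within a pair keeps the total weight unchanged, so weighted averages over all particles are unaffected by the step.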
The idea is to make several Markov chains interact and exchange information via their weights. To do so we rely on *couplings* of chains X₀, X₁, ..., Xₙ, which are computable joint distributions of the chains satisfying P(Xₙ = Yₙ | Xₙ₋₁, Yₙ₋₁) > 0. sites.google.com/site/pierrej...
October 20, 2025 at 4:10 PM
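For a concrete example of such a coupling, here is the standard maximal coupling of two Gaussians via rejection (a textbook construction, not code from the pre-print; names are mine): it samples (X, Y) with the right marginals and the largest possible meeting probability P(X = Y).

```python
import numpy as np

def maximal_coupling_normals(mu_x, mu_y, rng):
    """Sample (X, Y) with X ~ N(mu_x, 1), Y ~ N(mu_y, 1), P(X = Y) maximal."""
    def logpdf(z, mu):
        return -0.5 * (z - mu) ** 2  # unnormalised; constants cancel in ratios

    x = rng.normal(mu_x)
    # Accept X as a common sample with probability q(X)/p(X) (capped at 1).
    if np.log(rng.uniform()) < logpdf(x, mu_y) - logpdf(x, mu_x):
        return x, x  # the chains meet
    # Otherwise, sample Y from the residual of q by rejection.
    while True:
        y = rng.normal(mu_y)
        if np.log(rng.uniform()) >= logpdf(y, mu_x) - logpdf(y, mu_y):
            return x, y
```

The meeting probability equals 1 minus the total variation distance between the two marginals, which is what makes couplings like this usable as building blocks for coupled MCMC kernels.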
A little self promotion: Hai-Dang Dau (NUS) and I recently released this pre-print, which I'm not half proud of.
arxiv.org/abs/2510.07559

The main problem we solve in it is to construct importance weights for Markov chain Monte Carlo. We achieve it via a method we call harmonization by coupling.
October 20, 2025 at 4:10 PM
I was complaining about the same thing so my friends got me a katana!
October 16, 2025 at 6:38 AM
Applied mechanics
August 11, 2025 at 7:14 PM
I'm not that much into painful Gaussian algebra, but to each their own kink I suppose.
July 13, 2025 at 6:48 AM
I'm a big fan of the fuse in the plug; it's lacking a critical ergonomic feature.
July 5, 2025 at 10:21 AM
What's short for "Decision maker"? Didi?
July 2, 2025 at 10:07 PM
HMC explained in a comic strip. Note the final velocity flip.
June 27, 2025 at 9:15 AM
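For the curious, a minimal sketch of one HMC step on a one-dimensional target (my own illustration, not the comic's; the function names are mine): leapfrog-integrate the Hamiltonian dynamics, then flip the velocity before the accept/reject step, which makes the proposal an involution and so keeps the move reversible.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=10, rng=None):
    """One Hamiltonian Monte Carlo step on a 1-D target."""
    if rng is None:
        rng = np.random.default_rng()
    p = rng.standard_normal()  # fresh momentum draw
    x_new, p_new = x, p
    # Leapfrog integration: half step in p, full steps in x and p, half step in p.
    p_new = p_new + 0.5 * step_size * grad_log_prob(x_new)
    for _ in range(n_leapfrog - 1):
        x_new = x_new + step_size * p_new
        p_new = p_new + step_size * grad_log_prob(x_new)
    x_new = x_new + step_size * p_new
    p_new = p_new + 0.5 * step_size * grad_log_prob(x_new)
    p_new = -p_new  # the final velocity flip
    # Metropolis accept/reject on the joint (position, momentum) energy.
    log_accept = (log_prob(x_new) - 0.5 * p_new**2) - (log_prob(x) - 0.5 * p**2)
    if np.log(rng.uniform()) < log_accept:
        return x_new
    return x
```

In practice the flipped momentum is discarded and refreshed at the next step, so the flip is often left implicit in implementations, which is exactly why it is easy to miss.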
The gist of it is that, even though PMMH may eventually work better than PGibbs (because it is based on a better acceptance function in the MCMC sense), it's brittle in the low particle regime. On the other hand, PGibbs works out of the box (after some minor modification of the original algorithm).
June 3, 2025 at 7:32 AM
I recently posted a note on arXiv, arxiv.org/abs/2505.04611, the title of which I am particularly proud of.
I argue there against a misconception that parameter estimation in state-space models is usually better done with particle marginal Metropolis Hastings (PMMH) than particle Gibbs (PGibbs).
June 3, 2025 at 7:30 AM
Canary Wharf ruined Star Wars for me.
April 25, 2025 at 6:31 PM
Today we remember the passion of Gauss who suffered so we didn't have to.

en.wikipedia.org/wiki/Date_of...
April 18, 2025 at 3:36 PM
Also the mug 'cause I don't like people using mine (also YOLO)
April 17, 2025 at 2:51 PM
If someone's looking for me at the next conference, I got myself the t-shirt version.
April 17, 2025 at 11:27 AM
e.g., today: I am computing some weird SDE bridge, with unstable behaviour at the final time step; but eh, there's just one value of the bridge end point that can help.
April 14, 2025 at 7:08 PM
Use something you did during your PhD. For mine I made a "firework" using MCMC. Not sure about "pretty" but I like it which is what matters.
March 31, 2025 at 7:57 PM
March 29, 2025 at 8:33 AM
My little particles are en route for the weekend!
March 28, 2025 at 7:15 PM