Sam Power
@spmontecarlo.bsky.social
Lecturer in Maths & Stats at Bristol. Interested in probabilistic + numerical computation, statistical modelling + inference. (he / him).

Homepage: https://sites.google.com/view/sp-monte-carlo
Seminar: https://sites.google.com/view/monte-carlo-semina
I mean, lol
November 24, 2025 at 1:29 PM
Tour coming to an end, as I settle in for a five-hour train journey! Had lots of fun talking about Random Walk Metropolis, Gradient Flows, and Skill Rating in Sports (among other chats). Slides from all talks are saved at github.com/sampower88/t....
[quoted post: see November 16, 2025, below]
November 21, 2025 at 4:46 PM
splendid
November 20, 2025 at 10:46 PM
Super gripping (and fun!) lectures here:
youtu.be/OHDYdmuLMW0?...
"Fourier Analysis & Beyond I" - Mini-course
- Stefan Steinerberger
November 19, 2025 at 11:52 PM
great stuff, right up my alley:

arxiv.org/abs/2511.11497
'A Recursive Theory of Variational State Estimation: The Dynamic Programming Approach'
- Filip Tronarp
November 19, 2025 at 11:04 PM
A neat little calculation: for p in (0, 1), let Z(p) be a "standardised Bernoulli" random variable, i.e. a general coin flip, shifted and scaled to have mean 0 and variance 1. Then, for distinct p, q, one cannot compare Z(p) and Z(q) in the convex ordering.
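A sketch of one way to see this (my reconstruction; the post leaves the calculation implicit): writing

\[ Z(p) = \frac{X - p}{\sqrt{p(1-p)}}, \qquad X \sim \mathrm{Bernoulli}(p), \]

Z(p) takes the value \( \sqrt{(1-p)/p} \) with probability p, and \( -\sqrt{p/(1-p)} \) with probability 1-p. If Z(p) were dominated by Z(q) in convex order, Strassen's theorem would give a martingale coupling of the two, under which

\[ \operatorname{Var} Z(q) = \operatorname{Var} Z(p) + \mathbb{E}\left[ \operatorname{Var}\left( Z(q) \mid Z(p) \right) \right]. \]

Both variances equal 1, so the conditional variance must vanish, forcing Z(p) and Z(q) to share a law; but the upper support point \( \sqrt{(1-p)/p} \) is strictly decreasing in p, so distinct p, q give distinct laws. Swapping the roles of p and q rules out the reverse ordering too.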
November 19, 2025 at 10:16 AM
Enjoyed visiting Imperial College London to speak at the Statistics Seminar last Friday. Now en route to Edinburgh to speak on Monday, followed by another talk at Newcastle on Friday - three (distinct!) talks in 8 days!
November 16, 2025 at 2:06 PM
Reposted by Sam Power
🔥 WANTED: Student Researcher to join me, @vdebortoli.bsky.social, Jiaxin Shi, Kevin Li and @arthurgretton.bsky.social in DeepMind London.

You'll be working on Multimodal Diffusions for science. Apply here google.com/about/career...
Student Researcher, 2026 — Google Careers
November 15, 2025 at 5:23 PM
Nice reference on using LaTeX as a mathematician: github.com/nchopin/best...
November 15, 2025 at 5:04 PM
This viral ICLR review has some very fun excerpts. I look forward to wheeling out "In the current impetuous and intricate society, if one aspires to be a scholar, it is imperative to attain inner calm" after writing the world's most demanding review.
November 14, 2025 at 7:10 PM
I'm realising that I get slightly wound up by the vagueness with which the word "inherently" gets used in various mathematical contexts - "inherently sequential", "inherently nonlinear", and so on. It's often unclear exactly what is meant, not least because such claims are often followed by their own contradiction.
November 10, 2025 at 7:45 PM
Something which I'd like to gather my thoughts on at some stage is the family of undergraduate maths topics whose definitions I use often, but whose theorems I rarely use. It comes up occasionally with students, and it would be good to be able to articulate the point well.
November 9, 2025 at 11:01 PM
I feel like the "Universal Inference" paper is a very worthwhile read in part because it highlights a particular way in which likelihoods are a special object inferentially, in a way that is pretty difficult to replicate with other strategies.
November 9, 2025 at 10:12 PM
Very cool (from Ehm-Gneiting-Jordan-Krüger, JRSSB 2016): for mean estimation, all consistent scoring rules can be obtained as conic combinations of 'extremal' consistent scoring rules, with an explicit structure. Similar results hold for quantiles (and perhaps other tasks as well!).
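Concretely (my paraphrase, up to sign and normalisation conventions, and modulo regularity): the extremal scores for the mean take the form

\[ S_\theta(x, y) = \left( \mathbf{1}\{ x > \theta \} - \mathbf{1}\{ y > \theta \} \right) (\theta - y), \qquad \theta \in \mathbb{R}, \]

each of which can be checked to be consistent directly, and every consistent scoring function arises as a mixture \( S(x, y) = \int S_\theta(x, y) \, \mathrm{d}H(\theta) \) for a nonnegative measure H. Taking H to be Lebesgue measure recovers half the squared error: \( \int_{\mathbb{R}} S_\theta(x, y) \, \mathrm{d}\theta = (x - y)^2 / 2 \).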
November 9, 2025 at 5:04 PM
Me and the gang
November 9, 2025 at 2:26 PM
Silly question: are there 'standard' neural networks based on matrix-matrix multiplies? i.e. instead of propagating a vector with matvecs and activations, propagating a matrix with matmats and activations?
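For concreteness, a toy sketch of what I have in mind (illustrative only; the names and shapes are made up, not a reference to any standard architecture):

import numpy as np

def relu(M):
    # entrywise activation, applied to a matrix
    return np.maximum(M, 0.0)

def matmat_layer(H, W, B):
    # H: (n_in, m) matrix-valued state; W: (n_out, n_in) weights; B: (n_out, m) bias
    # one "matmat" step: matrix product plus entrywise nonlinearity
    return relu(W @ H + B)

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 5))                       # matrix-valued input
W1, B1 = rng.standard_normal((16, 8)), rng.standard_normal((16, 5))
W2, B2 = rng.standard_normal((4, 16)), rng.standard_normal((4, 5))
out = matmat_layer(matmat_layer(H, W1, B1), W2, B2)   # shape (4, 5)

(As written, left-multiplication alone just treats the columns of H as a batch of vectors; the genuinely matrix-valued version would presumably act on both sides, say relu(W @ H @ V + B).)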
November 6, 2025 at 11:00 AM
another hit: "wombling"
November 5, 2025 at 6:42 PM
Another paper round-up - many new, many not; some read in full, many not. All interesting! As with the last bundle, summaries will be kept brief, aiming to stoke curiosity rather than provide answers, and the ordering doesn't reflect anything informative.
November 5, 2025 at 9:55 AM
A bit random, but I find that whenever the 'critique' is raised that the KL divergence is not a distance, it is pretty rare that there is a strong case given for why this is actually a problem.

(It's of course reasonable to mention such things as a warning, etc.; this is not my concern)
November 4, 2025 at 7:32 PM
"tsunameters" is a banger; shout out spatial statistics
November 4, 2025 at 3:13 PM
I appreciate that this is technically a comprehensible sentence (and pretty benign in terms of how complex the concepts are), but the density of jargon did hit me with that vague feeling of "am I having a stroke".
November 4, 2025 at 10:35 AM
Well worth a read in general. Randomised Numerical Linear Algebra is a super cool field, and I have the impression that even its more basic results are not as widely known as they ought to be. Hopefully, this will start to change gradually (maybe through some well-chosen applications).
November 2, 2025 at 4:43 PM
I find it super funny to see how the terms { "old-school", "classical", etc. } get used in ML circles occasionally. It's healthiest to assume a bit of self-awareness in many of these cases, but regardless, it can be pretty striking to hear them used to describe things that are e.g. 10-15 years old.
October 31, 2025 at 8:59 AM
Reposted by Sam Power
A bit of blog (again, dusting off some old notes with a cute observation):

hackmd.io/@sp-monte-ca...
"Attention as Deconvolution"
This is indeed in the works (after combing through some of my folders of notes), but in the interim, I can share a few things which I've put up directly as .pdf files on my website (sites.google.com/view/sp-mont...), rather than as blog posts per se. Notes 3-5 are 'new'.
October 26, 2025 at 4:30 PM
There is something quite clean about distilling a large range of statistical principles down to

"Well if that _were_ the case, then *this* would really be quite unlikely."
October 30, 2025 at 9:52 AM
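(A formal gloss of the principle above, in my wording: given a null hypothesis \( H_0 \) and a test statistic \( T \) with observed value \( t_{\mathrm{obs}} \), one reports \( p = \mathbb{P}_{H_0}(T \geq t_{\mathrm{obs}}) \), and reads a small \( p \) as evidence against "that being the case".)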