Sam Duffield
@samduffield.com
1K followers 370 following 40 posts
Stats, ML and open-source
samduffield.com
As described in the paper, LRW provides multiple benefits but the key motivation for us @normalcomputing.com was the co-design with novel stochastic computing hardware which we believe can drastically accelerate general-purpose SDE sampling.
samduffield.com
New paper on arXiv! And I think it's a good'un 😄

Meet the new Lattice Random Walk (LRW) discretisation for SDEs. It’s radically different from traditional methods like Euler-Maruyama (EM) in that each iteration can only move in discrete steps {−δₓ, 0, +δₓ}.
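For intuition, here's a minimal NumPy sketch of a moment-matched lattice step. This is my own illustration (hypothetical names like lrw_step), not necessarily the paper's exact scheme: pick a move in {−δₓ, 0, +δₓ} whose mean and variance match the usual EM increment.

```python
import numpy as np

def lrw_step(x, drift, diffusion, dt, dx, rng):
    """One lattice random walk step for dX = b(X) dt + s(X) dW.

    Moment-matching sketch: the move in {-dx, 0, +dx} reproduces the
    mean and variance of the Euler-Maruyama increment. Valid when the
    probabilities below land in [0, 1], which constrains dx vs dt.
    """
    mean = drift(x) * dt          # EM increment mean: b(x) dt
    var = diffusion(x) ** 2 * dt  # EM increment variance: s(x)^2 dt
    p_plus = 0.5 * ((var + mean**2) / dx**2 + mean / dx)
    p_minus = 0.5 * ((var + mean**2) / dx**2 - mean / dx)
    u = rng.uniform()
    if u < p_plus:
        return x + dx
    if u < p_plus + p_minus:
        return x - dx
    return x

# E.g. an Ornstein-Uhlenbeck process dX = -X dt + dW
rng = np.random.default_rng(0)
x = 1.0
for _ in range(1000):
    x = lrw_step(x, lambda x: -x, lambda x: 1.0, dt=0.01, dx=0.2, rng=rng)
```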
Reposted by Sam Duffield
spmontecarlo.bsky.social
In slides from a recent talk - the { virtuous / vicious } cycle of filtering, smoothing, and parameter estimation in state space models.
samduffield.com
Oh you king, this is great, thanks! I was at Lau Pa Sat the other day but went for shrimp noodles (which were great) because the satay queue was too long
samduffield.com
Didn’t listen, good decision
samduffield.com
Me: Hey so where’s good to eat round here?
Singapore taxi driver: Malaysia
samduffield.com
However! We’re working on a much broader generalisation of abile, which we hopefully will be able to share soon 🤞🔜
samduffield.com
We've also updated the paper and made some cool improvements to the library 😎

Paper: arxiv.org/abs/2406.00104
Repo: github.com/normal-compu...
samduffield.com
📃 Poster #419
🗓️ Sat 26th, 10:00–12:30
📍 #ICLR2025, Singapore

Swing by if you’re into probml, thermodynamic computing or just wanna say hi
samduffield.com
posteriors 𝞡 published at ICLR!

I’ll be in Singapore next week, let’s chat all things scalable Bayesian learning! 🇸🇬👋
Reposted by Sam Duffield
spmontecarlo.bsky.social
A new instalment of office decor:
samduffield.com
Should have said, here h is stepsize 😅
samduffield.com
So simple!

Normally we order our minibatches like
a, b, c, ...., [shuffle], new_a, new_b, new_c, ....
but instead, if we do
a, b, c, ...., [reverse], ...., c, b, a, [shuffle], new_a, new_b, ....

The RMSE of stochastic gradient descent reduces from O(h) to O(h²)

arxiv.org/abs/2504.04274
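In code, the schedule could look like this toy sketch (hypothetical helper name, not the paper's implementation): each pass runs the minibatches forward, then the same minibatches in reverse, then reshuffles.

```python
import random

def palindromic_batch_order(batches, n_passes, seed=0):
    # Forward-then-reversed ordering: visit every minibatch forward,
    # then backward, then reshuffle for the next pass.
    rng = random.Random(seed)
    order = list(batches)
    for _ in range(n_passes):
        yield from order            # a, b, c, ...
        yield from reversed(order)  # ..., c, b, a
        rng.shuffle(order)          # [shuffle] -> new_a, new_b, ...

# Usage: feed the schedule into an SGD loop
for batch_idx in palindromic_batch_order(range(4), n_passes=2):
    pass  # gradient step on minibatch batch_idx
```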
Reposted by Sam Duffield
alexxthiery.bsky.social
Sequential Monte Carlo (aka. Particle Socialism?):

"why send one explorer when you can send a whole army of clueless one"
samduffield.com
Yep! That would be clearer
samduffield.com
Was revisiting the Neural ODEs paper the other day and greatly enjoying it.

But I found this super confusing; it’s not an A = B + A statement
Reposted by Sam Duffield
spmontecarlo.bsky.social
Thrillingly (/s), I have today (lightly) updated my website (sites.google.com/view/sp-mont...).

I highlight that I've added
i) links to several slide decks for talks about my research, and
ii) materials related to the (few) short courses which I've given in the past couple of years.

Enjoy!
Link card: Sam Power's site (sites.google.com)
Reposted by Sam Duffield
algoperf.bsky.social
Hi there! This account will post about the AlgoPerf benchmark and leaderboard updates for faster neural network training via better training algorithms. But let's start with what AlgoPerf is, what we have done so far, and how you can train neural nets ~30% faster.
samduffield.com
Thinking about it more, I think the sharp jumps are an artefact of the plotting. The plotting function will linearly interpolate, but you can actually probabilistically interpolate using the smoothing equations
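For a scalar random-walk state (a Kalman/RTS setting), that interpolation is just one backward smoothing step. A sketch with hypothetical names, assuming rate-q Brownian dynamics:

```python
def smooth_interpolate(t, t1, m1, P1, t2, m2s, P2s, q):
    # Interpolate at t in (t1, t2): predict the filtered state
    # (m1, P1) at t1 forward to t, then condition on the smoothed
    # state (m2s, P2s) at t2 via one Rauch-Tung-Striebel step.
    m_pred, P_pred = m1, P1 + q * (t - t1)  # predict t1 -> t
    P_next = P_pred + q * (t2 - t)          # predict t -> t2
    G = P_pred / P_next                     # smoother gain
    m = m_pred + G * (m2s - m_pred)
    P = P_pred + G**2 * (P2s - P_next)
    return m, P
```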
samduffield.com
Oh you are right! Very nice!
samduffield.com
I don’t think so - I can’t see any backward iterations in the code. Also the sharp changes after a result in e.g. the boxing plot are a classic filtering feature - there is a reason smoothing is called smoothing after all 😄