Arnaud Doucet
@arnauddoucet.bsky.social
890 followers 220 following 10 posts
Senior Staff Research Scientist @Google DeepMind, previously Stats Prof @Oxford Uni - interested in Computational Statistics, Generative Modeling, Monte Carlo methods, Optimal Transport.
Pinned
arnauddoucet.bsky.social
The slides of my NeurIPS lecture "From Diffusion Models to Schrödinger Bridges - Generative Modeling meets Optimal Transport" can be found here
drive.google.com/file/d/1eLa3...
BreimanLectureNeurIPS2024_Doucet.pdf
Reposted by Arnaud Doucet
majhas.bsky.social
(1/n)🚨Train a model that solves DFT for any geometry with almost no training data
Introducing Self-Refining Training for Amortized DFT: a variational method that predicts ground-state solutions across geometries and generates its own training data!
📜 arxiv.org/abs/2506.01225
💻 github.com/majhas/self-...
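To make the mechanism concrete, here is a minimal, hypothetical sketch of such a self-refining loop (not the authors' code; variational_energy is a toy stand-in for the true DFT energy functional, and all names are illustrative): the model is trained by minimizing its own variational energy, then perturbs the geometries to generate its next batch of training data.

import torch

# Toy stand-in for the DFT energy functional (the real method solves
# Kohn-Sham-type problems; any differentiable energy works for the sketch).
def variational_energy(coeffs, geometries):
    return ((coeffs - geometries) ** 2).sum(dim=-1)

# Self-refinement: jitter the current geometries to produce the next batch.
def perturb_geometries(geometries, scale=0.05):
    return geometries + scale * torch.randn_like(geometries)

model = torch.nn.Linear(3, 3)                   # toy amortized predictor
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
geoms = torch.randn(32, 3)                      # initial geometries

for _ in range(100):
    # The variational energy upper-bounds the ground-state energy, so
    # minimizing it is a valid training signal without labeled data.
    loss = variational_energy(model(geoms), geoms).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    geoms = perturb_geometries(geoms)           # the model makes its own data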
Reposted by Arnaud Doucet
logml.bsky.social
🌟Applications open- LOGML 2025🌟

👥Mentor-led projects, expert talks, tutorials, socials, and a networking night
✍️Application form: logml.ai
🔬Projects: www.logml.ai/projects.html
📅Apply by 6th April 2025
✉️Questions? [email protected]

#MachineLearning #SummerSchool #LOGML #Geometry
LOGML 2025
London Geometry and Machine Learning Summer School, July 7-11, 2025
arnauddoucet.bsky.social
Just write a short informal email. If the person needs a long-winded polite email to answer, then perhaps you don't want to have to interact with them.
Reposted by Arnaud Doucet
k-neklyudov.bsky.social
SuperDiff goes super big!
- Spotlight at #ICLR2025!🥳
- Stable Diffusion XL pipeline on HuggingFace huggingface.co/superdiff/su..., made by Viktor Ohanesian
- New results for molecules in the camera-ready arxiv.org/abs/2412.17762
Let's celebrate with a prompt guessing game in the thread👇
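Roughly, the superposition amounts to mixing the scores of two pretrained diffusion models during the reverse process. A minimal sketch under my own simplifications (SuperDiff derives the mixing weight from per-model density estimates at inference time rather than fixing it):

import torch

def superposed_step(x, t, score_a, score_b, kappa=0.5, dt=1e-2):
    # Combine two pretrained models' scores with weight kappa, then take one
    # simplified reverse-time Euler-Maruyama step with the mixed score.
    score = kappa * score_a(x, t) + (1 - kappa) * score_b(x, t)
    return x + score * dt + torch.randn_like(x) * (2 * dt) ** 0.5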
Reposted by Arnaud Doucet
masudhusain.bsky.social
Why academia is sleepwalking into self-destruction. My editorial in @brain1878.bsky.social. If you agree with the sentiments, please repost. It's important for all our sakes to stop the madness.
academic.oup.com/brain/articl...
Reposted by Arnaud Doucet
grant.rotskoff.cc
Excited to see our paper “Computing Nonequilibrium Responses with Score-Shifted Stochastic Differential Equations” in Physical Review Letters this morning as an Editor’s Suggestion! We use ideas from generative modeling to unravel a rather technical problem. 🧵 journals.aps.org/prl/abstract...
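The idea, in a hypothetical sketch (not the paper's actual algorithm): shift the drift of the simulated SDE by a learned score term, so trajectories sample the perturbed, nonequilibrium statistics directly.

import numpy as np

def score_shifted_sde(drift, score, x0, eps=0.1, dt=1e-3, steps=10_000, seed=0):
    # Euler-Maruyama for dX = [drift(X) + eps * score(X)] dt + sqrt(2) dW,
    # where `score` is a learned model shifting the reference dynamics.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x += (drift(x) + eps * score(x)) * dt
        x += np.sqrt(2 * dt) * rng.standard_normal(x.shape)
    return x

# Example: Ornstein-Uhlenbeck reference dynamics with a constant tilt.
sample = score_shifted_sde(lambda x: -x, lambda x: np.ones_like(x), x0=[0.0])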
arnauddoucet.bsky.social
Great intro to PAC-Bayes bounds. Highly recommended!
pierrealquier.bsky.social
I already advertised for this document when I posted it on arXiv, and later when it was published.

This week, with the agreement of the publisher, I uploaded the published version on arXiv.

Fewer typos, more references, and additional sections, including PAC-Bayes Bernstein.

arxiv.org/abs/2110.11216
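To give the flavour of these results: a McAllester-style PAC-Bayes bound (one of the bounds the document covers) states that, with probability at least 1 - \delta over an i.i.d. sample of size n, simultaneously for all posteriors \rho,

\mathbb{E}_{h \sim \rho}[L(h)] \le \mathbb{E}_{h \sim \rho}[\hat{L}_n(h)] + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \log(2\sqrt{n}/\delta)}{2n}},

where \pi is a data-free prior, L the population risk, and \hat{L}_n the empirical risk; the Bernstein variant mentioned above replaces the square-root term with a variance-sensitive one.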
arnauddoucet.bsky.social
Well, you can do it, but we don't have any proof. We actually also ran alpha-DSBM with zero-variance noise, i.e. really an "alpha-rectified flow": experimentally it does "work", but we have no proof of convergence for the procedure.
arnauddoucet.bsky.social
Yes, the trajectories are not quite smooth, as they correspond to a Brownian bridge; and as the variance of the reference Brownian motion of your SB goes to zero, you get back the deterministic, straight paths of OT.
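Concretely, conditioned on its endpoints the Schrödinger bridge reduces to the reference Brownian bridge, which with volatility \varepsilon can be sampled as

X_t = (1 - t)\, x_0 + t\, x_1 + \sqrt{\varepsilon\, t(1 - t)}\; Z, \qquad Z \sim \mathcal{N}(0, I),

so the fluctuation term vanishes as \varepsilon \to 0 and only the straight line (1 - t) x_0 + t x_1, the OT displacement interpolation, remains.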
Reposted by Arnaud Doucet
Better diffusions with scoring rules!

Fewer, larger denoising steps using distributional losses; learn the posterior distribution of clean samples given the noisy versions.

arxiv.org/pdf/2502.02483

@vdebortoli.bsky.social Galashov Guntupalli Zhou @sirbayes.bsky.social @arnauddoucet.bsky.social
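A minimal sketch of one such distributional loss, using the energy score as an illustrative choice (the paper treats scoring rules more generally): it is a proper scoring rule, so it is minimized by the true posterior over clean samples.

import torch

def energy_score_loss(samples, target):
    # samples: (m, batch, dim) draws from the model's posterior over clean data
    # target:  (batch, dim) actual clean samples
    # Energy score E||X - y|| - 0.5 E||X - X'||: proper, hence minimized by the
    # true posterior, enabling fewer, larger denoising steps than an MSE loss.
    m, b, _ = samples.shape
    term1 = (samples - target).norm(dim=-1).mean()
    pairwise = (samples.unsqueeze(0) - samples.unsqueeze(1)).norm(dim=-1)
    term2 = pairwise.sum() / (m * (m - 1) * b)
    return term1 - 0.5 * term2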
arnauddoucet.bsky.social
A standard ML approach for parameter estimation in latent variable models is to maximize the expectation of the logarithm of an importance sampling estimate of the intractable likelihood. We provide consistency/efficiency results for the resulting estimate: arxiv.org/abs/2501.08477
On the Asymptotics of Importance Weighted Variational Inference
For complex latent variable models, the likelihood function is not available in closed form. In this context, a popular method to perform parameter estimation is Importance Weighted Variational Inference.
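In symbols, with K importance samples drawn from a proposal q_\phi(\cdot \mid x), the objective maximized is

\mathcal{L}_K(\theta, \phi) = \mathbb{E}\left[\log \frac{1}{K} \sum_{k=1}^{K} \frac{p_\theta(x, Z_k)}{q_\phi(Z_k \mid x)}\right], \qquad Z_1, \dots, Z_K \overset{\text{i.i.d.}}{\sim} q_\phi(\cdot \mid x),

a lower bound on \log p_\theta(x) that tightens as K grows; the paper characterizes the asymptotic behaviour of the resulting parameter estimate.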
arnauddoucet.bsky.social
Speculative sampling accelerates inference in LLMs by drafting future tokens which are verified in parallel. With @vdebortoli.bsky.social , A. Galashov & @arthurgretton.bsky.social , we extend this approach to (continuous-space) diffusion models: arxiv.org/abs/2501.05370
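For intuition, here is the discrete accept/reject rule speculative sampling uses for LLM tokens, in a simplified sketch (the paper's contribution is extending this mechanism to continuous-state diffusion):

import numpy as np

def accept_drafts(p_target, p_draft, drafts, seed=0):
    # Accept each drafted token with probability min(1, p_target/p_draft); on
    # the first rejection, stop and resample from the residual distribution
    # proportional to max(p_target - p_draft, 0) (resampling omitted here).
    rng = np.random.default_rng(seed)
    accepted = []
    for tok in drafts:
        if rng.random() <= min(1.0, p_target(tok) / p_draft(tok)):
            accepted.append(tok)
        else:
            break
    return accepted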
arnauddoucet.bsky.social
I personally read at least a couple of hours per day. It is not particularly focused and I might "waste" time but I just enjoy it.
arnauddoucet.bsky.social
Very nice paper indeed. I like it.
Reposted by Arnaud Doucet
joeybose.bsky.social
🔊 Super excited to announce the first ever Frontiers of Probabilistic Inference: Learning meets Sampling workshop at #ICLR2025 @iclr-conf.bsky.social!

🔗 website: sites.google.com/view/fpiwork...

🔥 Call for papers: sites.google.com/view/fpiwork...

more details in thread below👇 🧵
Reposted by Arnaud Doucet
lebellig.bsky.social
Schrödinger Bridge Flow for Unpaired Data Translation (by @vdebortoli.bsky.social et al.)

It will take me some time to digest this article fully, but it's important to follow the authors' advice and read the appendices, as the examples are helpful and well-illustrated.

📄 arxiv.org/abs/2409.09347
Reposted by Arnaud Doucet
alexxthiery.bsky.social
One #postdoc position is still available at the National University of Singapore (NUS) to work on sampling, high-dimensional data assimilation, and diffusion/flow models. Applications are open until the end of January. Details:

alexxthiery.github.io/jobs/2024_di...
arnauddoucet.bsky.social
I couldn't speak for the following 3 days :-)
Reposted by Arnaud Doucet
gabrielpeyre.bsky.social
I have updated my course notes on Optimal Transport with a new Chapter 9 on Wasserstein flows. It includes 3 illustrative applications: training a 2-layer MLP, deep transformers, and flow-matching generative models. You can access it here: mathematical-tours.github.io/book-sources...
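For context, the Wasserstein gradient flow of a functional \mathcal{F} over probability measures follows the continuity equation

\partial_t \mu_t = \mathrm{div}\left( \mu_t \, \nabla \frac{\delta \mathcal{F}}{\delta \mu}(\mu_t) \right),

which is the common lens for the three applications: the neurons of a 2-layer MLP, the tokens processed by a deep transformer, and the samples of a flow-matching model can each be viewed (in a mean-field limit) as a measure evolving along such a flow.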
Reposted by Arnaud Doucet
neu-rips.bsky.social
Exciting new work by my truly brilliant postdoc Eugenio Clerico on the optimality of coin-betting strategies for mean estimation!

for fans of: mean estimation, online learning with log loss, optimal portfolios, hypothesis testing with E-values, etc.

dig in:
arxiv.org/abs/2412.02640
On the optimality of coin-betting for mean estimation
Confidence sequences are sequences of confidence sets that adapt to incoming data while maintaining validity.
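The coin-betting construction in one line (a standard formulation, not necessarily the paper's notation): for a candidate mean m, a bettor's wealth evolves as

W_t(m) = W_{t-1}(m)\,\big(1 + \lambda_t (X_t - m)\big), \qquad W_0(m) = 1,

with bets \lambda_t constrained so the wealth stays nonnegative. At the true mean the wealth is a nonnegative martingale, so by Ville's inequality the set \{ m : W_s(m) < 1/\alpha \text{ for all } s \le t \} is a level-(1-\alpha) confidence sequence; the paper studies when such betting strategies are optimal.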