Mathurin Massias
@mathurinmassias.bsky.social
610 followers 91 following 36 posts
Tenured Researcher @INRIA, Ockham team. Teacher @Polytechnique and @ENSdeLyon Machine Learning, Python and Optimization
Pinned
mathurinmassias.bsky.social
New paper on the generalization of Flow Matching www.arxiv.org/abs/2506.03719

🤯 Why does flow matching generalize? Did you know that the flow matching target you're trying to learn *can only generate training points*?

w @quentinbertrand.bsky.social @annegnx.bsky.social @remiemonet.bsky.social 👇👇👇
Reposted by Mathurin Massias
tonysf.bsky.social
My paper on Generalized Gradient Norm Clipping & Non-Euclidean (L0, L1)-Smoothness (together with collaborators from EPFL) was accepted as an oral at NeurIPS! We extend the theory for our Scion algorithm to include gradient clipping. Read about it here arxiv.org/abs/2506.01913
mathurinmassias.bsky.social
Our work on the generalization of Flow Matching got an oral at NeurIPS!

Go see @quentinbertrand.bsky.social present it there :)
Reposted by Mathurin Massias
mathurinmassias.bsky.social
Yes, everything will be in English!
mathurinmassias.bsky.social
Yes... it's a trade-off with having enough slots and enough discussion time at the posters. You can always arrive a bit after the start, and otherwise it should be accessible remotely 🤞
mathurinmassias.bsky.social
Super elegant approach!
rflamary.bsky.social
Distributional Reduction paper with H. Van Assel, @ncourty.bsky.social, T. Vayer, C. Vincent-Cuaz, and @pfrossard.bsky.social is accepted at TMLR. We show that both dimensionality reduction and clustering can be seen as minimizing an optimal transport loss 🧵1/5. openreview.net/forum?id=cll...
mathurinmassias.bsky.social
On second thought, I'm not sure I understood. In the classical FM loss you do have to learn this derivative, no? The loss is:
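For reference, the standard conditional flow-matching loss with the linear interpolant $x_t = (1-t)\,x_0 + t\,x_1$ is usually written:

```latex
\mathcal{L}(\theta)
= \mathbb{E}_{\,t \sim \mathcal{U}[0,1],\; x_0 \sim p_0,\; x_1 \sim p_1}
\left\| v_\theta(x_t, t) - (x_1 - x_0) \right\|^2
```

so the network regresses the *conditional* velocity $x_1 - x_0$, a stochastic target whose conditional expectation given $x_t$ is the marginal field $u^*$.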
mathurinmassias.bsky.social
I was thinking of the linear interpolant, yes; I haven't seen papers where others are used
mathurinmassias.bsky.social
it could even be velocity matching, and in that case you do match the *conditional* velocities
mathurinmassias.bsky.social
Thanks for the kind words
mathurinmassias.bsky.social
Then why does flow matching generalize?? Because it fails!

The inductive bias of the neural network prevents it from perfectly learning u* and overfitting.

In particular neural networks fail to learn the velocity field for two particular time values.

See the paper for a finer analysis 😀
mathurinmassias.bsky.social
We propose to regress directly against the optimal (deterministic) u* and show that this never degrades performance.
On the contrary, removing target stochasticity helps generalize faster.
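To make the "deterministic target" concrete: for a Gaussian source and an empirical training set, the marginal velocity field u* has a closed form, a softmax-weighted average of the conditional velocities. A minimal NumPy sketch (the function name and the N(0, I) source assumption are mine, for illustration, not the paper's code):

```python
import numpy as np

def optimal_velocity(x, t, train):
    """Closed-form marginal velocity u*(x, t) for the linear interpolant
    x_t = (1 - t) x0 + t x1, with x0 ~ N(0, I) and x1 uniform over the
    rows of `train` (shape (n, d)). Assumes 0 <= t < 1."""
    # Gaussian likelihoods N(x; t * y_j, (1 - t)^2 I), up to a shared constant
    d2 = ((x[None, :] - t * train) ** 2).sum(axis=1)
    logw = -d2 / (2 * (1 - t) ** 2)
    w = np.exp(logw - logw.max())          # stable softmax weights
    w /= w.sum()
    # weighted average of the conditional velocities (y_j - x) / (1 - t)
    return (w[:, None] * (train - x[None, :])).sum(axis=0) / (1 - t)
```

As t → 1 the softmax weights collapse onto the nearest training point, which is exactly why integrating u* exactly can only reproduce training samples.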
mathurinmassias.bsky.social
Yet flow matching generates new samples!

One hypothesis to explain this paradox is target stochasticity: FM targets the conditional velocity field, i.e. only a stochastic approximation of the full velocity field u*

*We refute this hypothesis*: very early in training, the approximation almost equals u*
mathurinmassias.bsky.social
On Saturday Anne will also present some very, very cool work on how to leverage Flow Matching models to obtain state-of-the-art Plug-and-Play methods:

PnP-Flow: Plug-and-Play Image Restoration with Flow Matching, poster #150 in poster session 6, Saturday at 3 pm

arxiv.org/abs/2410.02423
PnP-Flow: Plug-and-Play Image Restoration with Flow Matching
In this paper, we introduce Plug-and-Play (PnP) Flow Matching, an algorithm for solving imaging inverse problems. PnP methods leverage the strength of pre-trained denoisers, often deep neural networks...
arxiv.org
Reposted by Mathurin Massias
samuelvaiter.com
The proximal operator generalizes projection in convex optimization. It converts minimisers to fixed points. It is at the core of nonsmooth splitting methods and was first introduced by Jean-Jacques Moreau in 1965. www.numdam.org/article/BSMF...
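The canonical example is the prox of the L1 norm (soft-thresholding), and the "minimizers become fixed points" property is what proximal-gradient methods like ISTA exploit. A small illustrative sketch (names and the lasso example are mine, not from the linked post):

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||.||_1, i.e.
    argmin_x lam * ||x||_1 + (1/2) ||x - v||^2: soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(A, b, lam, step, n_iter=500):
    """Proximal gradient on (1/2)||Ax - b||^2 + lam * ||x||_1.
    A minimizer is exactly a fixed point of the update below
    (for any step <= 1 / ||A||_2^2)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox_l1(x - step * A.T @ (A @ x - b), step * lam)
    return x
```

With lam = 0 and the prox of an indicator function of a convex set, the same update reduces to projected gradient descent, which is the sense in which prox generalizes projection.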
Reposted by Mathurin Massias
tachellajulian.bsky.social
🚢🚢 deepinv v0.3.0 is here, with many new features! 🚢 🚢

Our passionate team of contributors keeps shipping more exciting tools!

Deepinverse (deepinv.github.io) is a library for solving imaging inverse problems with deep learning.
mathurinmassias.bsky.social
I had a blast teaching a summer school on generative models, in particular flow matching, at AI Hub Senegal, with @quentinbertrand.bsky.social and @remiemonet.bsky.social

Our material is publicly available!!! github.com/QB3/SenHubIA...
