lebellig
@lebellig.bsky.social
2.3K followers 630 following 120 posts
Ph.D. student on generative models and domain adaptation for Earth observation 🛰 Previously intern @SonyCSL, @Ircam, @Inria 🌎 Personal website: https://lebellig.github.io/
Reposted by lebellig
arcampbell.bsky.social
Very excited to share our preprint: Self-Speculative Masked Diffusions

We speed up sampling of masked diffusion models by ~2x by using speculative sampling and a hybrid non-causal / causal transformer

arxiv.org/abs/2510.03929

w/ @vdebortoli.bsky.social, Jiaxin Shi, @arnauddoucet.bsky.social
Reposted by lebellig
gandry.bsky.social
🚀 After more than a year of work — and many great discussions with curious minds & domain experts — we’re excited to announce the public release of 𝐀𝐩𝐩𝐚, our latent diffusion model for global data assimilation!

Check the repo and the complete wiki!
github.com/montefiore-s...
GitHub - montefiore-sail/appa: Code for the publication "Appa: Bending Weather Dynamics with Latent Diffusion Models for Global Data Assimilation".
github.com
lebellig.bsky.social
"Be Tangential to Manifold: Discovering Riemannian Metric for Diffusion Models" Shinnosuke Saito et al. arxiv.org/abs/2510.05509
High-density regions might not be the most interesting areas to visit, so the authors define a new Riemannian metric for diffusion models based on the Jacobian of the score
Reposted by lebellig
cnrsinformatics.bsky.social
#Distinction 🏆| Charlotte Pelletier, awarded an #IUF chair, is developing artificial-intelligence methods for satellite image time series.
➡️ www.ins2i.cnrs.fr/fr/cnrsinfo/...
🤝 @irisa-lab.bsky.social @cnrs-bretagneloire.bsky.social
lebellig.bsky.social
Reposting because part of me wants to see EBMs make a comeback and hopes flow-based training can help them scale.
lebellig.bsky.social
"Energy Matching: Unifying Flow Matching and Energy-Based Models for Generative Modeling" by Michal Balcerak et al. arxiv.org/abs/2504.10612
I'm not sure EBMs will beat flow-matching/diffusion models, but this article is very refreshing.
Reposted by lebellig
marcocuturi.bsky.social
Our two phenomenal interns, Alireza Mousavi-Hosseini and Stephen Zhang @syz.bsky.social have been cooking some really cool work with Michal Klein and me over the summer.

Relying on optimal transport couplings (to pick noise and data pairs) should, in principle, be helpful to guide flow matching

🧵
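The OT-coupling idea can be illustrated with a toy 1-D sketch (my own illustration, not the paper's implementation): with squared cost, the optimal minibatch pairing is simply rank-by-rank after sorting, and it gives shorter noise-to-data paths than random pairing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy minibatch: 1-D "data" and "noise" samples.
x = 2.0 * rng.normal(size=256)   # data, N(0, 4)
z = rng.normal(size=256)         # noise, N(0, 1)

# Independent coupling: random pairing, as in vanilla flow matching.
indep_cost = np.mean((x - z[rng.permutation(len(z))]) ** 2)

# OT coupling: in 1-D with squared cost, the optimal transport pairing
# is monotone, i.e. sort both minibatches and pair by rank.
ot_cost = np.mean((np.sort(x) - np.sort(z)) ** 2)

print(ot_cost < indep_cost)  # OT pairs give shorter, straighter paths
```

In higher dimensions the same role is played by a (mini-batch) OT solver instead of sorting; the effect on flow matching is straighter conditional paths between paired noise and data.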
lebellig.bsky.social
You can learn (condition + time)-dependent weights for classifier-free guidance using reward functions such as the CLIP score: arxiv.org/abs/2510.00815. I wonder whether, for text-to-image models, the temporal evolution of the learned weights reveals information about the sizes of the objects described in the caption
Learn to Guide Your Diffusion Model
Classifier-free guidance (CFG) is a widely used technique for improving the perceptual quality of samples from conditional diffusion models. It operates by linearly combining conditional and unconditi...
arxiv.org
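As a minimal sketch (the function names and the weight schedule here are hypothetical, not from the paper), classifier-free guidance with a timestep-dependent weight instead of a single global scale looks like:

```python
import numpy as np

def cfg_combine(eps_cond, eps_uncond, w):
    # Classifier-free guidance: linear combination of the conditional and
    # unconditional noise predictions with guidance weight w.
    return eps_uncond + w * (eps_cond - eps_uncond)

# Hypothetical learned schedule w(t); in the paper's setting such weights
# would be fit against a reward (e.g. the CLIP score), per condition and time.
learned_w = {0.1: 1.5, 0.5: 3.0, 0.9: 7.5}

eps_c = np.array([1.0, 0.0])   # toy conditional prediction
eps_u = np.array([0.5, 0.2])   # toy unconditional prediction
guided = {t: cfg_combine(eps_c, eps_u, w) for t, w in learned_w.items()}
```

With w = 1 the combination reduces to the conditional prediction, and w = 0 to the unconditional one, so the learned schedule interpolates and extrapolates between the two over the sampling trajectory.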
lebellig.bsky.social
Are you targeting a specific task like regression/classification/generation?
lebellig.bsky.social
I agree that results may differ on higher-dimensional datasets. Still, I appreciate this line of work, which questions the generalization capabilities of flow-based models by combining mathematical insights with experimental observations on image datasets (not only 2D Gaussian mixtures)
Reposted by lebellig
cnrs.fr
CNRS @cnrs.fr · 27d
#PressRelease 🗞️ The 2025 CNRS gold medal is awarded to Stéphane Mallat, world-renowned for his work on applied mathematics for signal processing and artificial intelligence. 👏

👉 cnrs.fr/fr/presse/en...

#TalentsCNRS 🏅
Reposted by lebellig
lebellig.bsky.social
Grateful for the opportunity to speak at tomorrow’s Learning Machines seminar ([email protected]) on generative domain adaptation and geospatial foundation models benchmarking for robust Earth observation 🌍

Join on Sept 11 at 15:00 CET! www.ri.se/en/learningm...
Reposted by lebellig
tachellajulian.bsky.social
☀️ Just wrapped up the DeepInverse Hackathon!

We had 30+ imaging scientists from all over the world coding for three days next to the beautiful Calanques in Marseille, France. It was a great moment to meet new people, discuss science, and code new imaging algorithms!
Reposted by lebellig
jhauret.bsky.social
If you’re interested in joining the visio, shoot me a DM and I’ll send you the link!

( Time Zone is Paris )
jhauret.bsky.social
PhD defense coming up! 🎓

"Deep learning for speech enhancement applied to radio communications using non-conventional sound capture devices"

🗓️ Sept 12, 2025 – 2PM
📍 Cnam Paris, Amphithéâtre Laussédat

Here’s a short teaser demo I just recorded.

Everyone is welcome to attend!
Real-time speech enhancement in noise using a throat microphone
YouTube video by Julien Hauret
www.youtube.com
Reposted by lebellig
francois-rozet.bsky.social
Does a smaller latent space lead to worse generation in latent diffusion models? Not necessarily! We show that LDMs are extremely robust to a wide range of compression rates (10-1000x) in the context of physics emulation.

We got lost in latent space. Join us 👇
Reposted by lebellig
lucamb.bsky.social
I am very happy to finally share something I have been working on and off for the past year:

"The Information Dynamics of Generative Diffusion"

This paper connects entropy production, divergence of vector fields and spontaneous symmetry breaking

link: arxiv.org/abs/2508.19897
Reposted by lebellig
samduffield.com
New paper on arXiv! And I think it's a good'un 😄

Meet the new Lattice Random Walk (LRW) discretisation for SDEs. It’s radically different from traditional methods like Euler-Maruyama (EM) in that each iteration can only move in discrete steps {-δₓ, 0, δₓ}.
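A hedged sketch of the idea (my own toy construction, not the paper's exact scheme): choose the three step probabilities so that each lattice move matches the mean and variance of the continuous increment, here on an Ornstein-Uhlenbeck process.

```python
import numpy as np

rng = np.random.default_rng(0)

def lrw_step(x, drift, sigma, dt, dx, rng):
    # Lattice random walk step: move by +dx, -dx, or 0 with probabilities
    # chosen to match the mean and variance of the SDE increment.
    m = drift(x) * dt                       # target mean increment
    s = (sigma**2 * dt + m**2) / dx**2      # p_plus + p_minus
    p_plus = 0.5 * (s + m / dx)
    p_minus = 0.5 * (s - m / dx)
    u = rng.random(np.shape(x))
    step = np.where(u < p_plus, dx,
                    np.where(u < p_plus + p_minus, -dx, 0.0))
    return x + step

# Toy example: Ornstein-Uhlenbeck process dX = -X dt + dW, X_0 = 1.
dt = 1e-3
dx = np.sqrt(2 * dt)                        # lattice spacing ~ sqrt(dt)
x = np.full(10_000, 1.0)
for _ in range(2_000):                      # simulate up to T = 2
    x = lrw_step(x, lambda y: -y, 1.0, dt, dx, rng)

# Compare empirical moments against the exact OU moments at T = 2.
print(x.mean(), np.exp(-2.0))               # mean decays like e^{-T}
print(x.var(), 0.5 * (1 - np.exp(-4.0)))    # variance tends to 1/2
```

Matching the first two moments per step is enough for the walk to converge weakly to the SDE as dt, dx → 0, while every iterate stays on a discrete grid.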
lebellig.bsky.social
Late to the party, but I like that you can use geodesic random walks (i.e., actually simulating the random walks) to derive the SDEs needed for diffusion models on Riemannian manifolds (from arxiv.org/abs/2202.02763)
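A minimal sketch of a geodesic random walk on the unit sphere S² (my own toy illustration of the construction, not the paper's code): sample a Gaussian vector in the tangent plane at the current point, then follow the exponential map along that direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def geodesic_rw_step(x, dt, rng):
    # Sample a Gaussian tangent vector at x (project out the normal part),
    # then move along the geodesic via the sphere's exponential map.
    v = np.sqrt(dt) * rng.normal(size=3)
    v -= np.dot(v, x) * x                   # project onto tangent plane at x
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * (v / n)

x = np.array([0.0, 0.0, 1.0])
for _ in range(1_000):
    x = geodesic_rw_step(x, 1e-2, rng)

print(np.linalg.norm(x))  # the walk never leaves the sphere
```

As the step size shrinks, this walk converges to Brownian motion on the manifold, which is exactly the noising process one needs to define Riemannian diffusion models.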
lebellig.bsky.social
Hugging Face’s Transformers library has dropped JAX support... but if, by any chance, someone builds a great and beautifully written flow matching/diffusion library in JAX, I’d seriously consider switching from torch 🤗
lebellig.bsky.social
I'll be at #GRETSI in Strasbourg next week! Friday morning, I'll present our work on Riemannian flow matching for SAR interferometry (generation and denoising) 🛰️

Also really looking forward to the poster sessions and all the exciting talks on the program!

📄 hal.science/hal-05140421