lebellig
@lebellig.bsky.social
Ph.D. student on generative models and domain adaptation for Earth observation 🛰
Previously intern @SonyCSL, @Ircam, @Inria

🌎 Personal website: https://lebellig.github.io/
Pinned
I created 3 introductory notebooks on Flow Matching models to help get started with this exciting topic! ✨

1. Annotated Flow Matching paper: github.com/gle-bellier/...
2. Discrete Flow Matching: github.com/gle-bellier/...
3. Minimal FM in Jax: github.com/gle-bellier/...
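For orientation, the core objective behind all three notebooks fits in a few lines. A minimal NumPy sketch of conditional flow matching with a linear interpolant; the affine `theta` model is a toy stand-in for a neural velocity network, and nothing here is taken from the notebooks themselves:

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_loss(theta, x0, x1, t):
    """Conditional flow matching with the linear interpolant
    x_t = (1 - t) * x0 + t * x1 and target velocity u = x1 - x0.
    `theta` parametrises a toy affine model v(x, t) = theta[0] * x + theta[1] * t."""
    xt = (1 - t) * x0 + t * x1
    pred = theta[0] * xt + theta[1] * t
    return np.mean((pred - (x1 - x0)) ** 2)

# one loss evaluation on toy 1-D data: noise -> shifted Gaussian
x0 = rng.standard_normal(256)          # source samples
x1 = rng.standard_normal(256) + 3.0    # "data" samples
t = rng.uniform(size=256)
loss = cfm_loss(np.array([0.0, 0.0]), x0, x1, t)
print(loss)
```

A real implementation swaps the affine model for a network and minimises this loss over minibatches of (x0, x1, t).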
Reposted by lebellig
I'm excited to open the new year by sharing a new perspective paper.

I give an informal outline of MD and how it can interact with Generative AI. Then I discuss how far the field has come since the seminal contributions, such as Boltzmann Generators, and what is still missing.
January 16, 2026 at 10:25 AM
Should we ban Brian Eno from Bandcamp?
I applaud Bandcamp for promoting human creativity, but some nuance is needed: musicians have been using technology creatively for ages. Are algorithmic compositions generative AI? ML amp models? LANDR mastering?
Will retroactive enforcement violate agreements with users?
stereogum.com/2485199/band...
Bandcamp Bans AI Music
AI music has become a big problem on streaming services. Remember the AI-generated psych-rock band the Velvet Sundown and the AI-generated metalcore band Broken Avenue racking up streams on Spotify? R...
stereogum.com
January 15, 2026 at 5:08 PM
Reposted by lebellig
New blog 💙: I reflect on why I worked on what I worked on...

I think a PhD is a very special time. You get to challenge yourself, push your boundaries, and grow. My thoughts go against the current AI/academia narrative online, so I hope you find it interesting.

chaitjo.substack.com/p/phd-thesis...
A Cambridge PhD thesis in three research questions
Geometric Deep Learning for Molecular Modelling and Design: A personal scientific journey
chaitjo.substack.com
January 8, 2026 at 4:38 AM
You may add the real test (or training 👀) dataset if you are into leaderboard chasing
1. Select many diffusion/flow-matching models
2. Generate 50k images per model
3. Use FID of each set as a label
4. Train a model to predict FID from a single image

What’s the probability this actually works, giving a cheap proxy for FID and enabling fast generative-model prototyping?
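A toy end-to-end sketch of steps 1–4, with random features standing in for images and a hypothetical linear "quality direction" standing in for FID; every quantity here is synthetic, and ridge regression replaces the neural predictor:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_imgs, d = 10, 200, 16

# Toy stand-in: model m emits image features centred at mu[m]; its set-level
# "FID" is a linear function of mu[m] along a made-up quality direction.
mu = rng.standard_normal((n_models, d))
quality = rng.standard_normal(d)
fid = mu @ quality                                    # step 3: one label per set
X = mu[:, None, :] + 0.1 * rng.standard_normal((n_models, n_imgs, d))
y = np.repeat(fid, n_imgs)                            # same label for every image
Xf = X.reshape(-1, d)

# Step 4: ridge regression predicting the set-level label from a single image.
lam = 1e-3
w = np.linalg.solve(Xf.T @ Xf + lam * np.eye(d), Xf.T @ y)

# A model's score = average of its per-image predictions.
pred = (Xf @ w).reshape(n_models, n_imgs).mean(axis=1)
corr = np.corrcoef(pred, fid)[0, 1]
print(corr)
```

In this rigged linear world the per-image regressor ranks the models almost perfectly; the open question is whether anything like that survives with real images and real FID.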
January 8, 2026 at 5:33 PM
Reposted by lebellig
We introduce epiplexity, a new measure of information that provides a foundation for how to select, generate, or transform data for learning systems. We have been working on this for almost 2 years, and I cannot contain my excitement! arxiv.org/abs/2601.03220 1/7
January 7, 2026 at 5:28 PM
Reposted by lebellig
📖 Mike Davies and I put together a review of self-supervised learning for inverse problems, covering the main approaches in the literature with a unified notation and analysis.

arxiv.org/abs/2601.03244
Self-Supervised Learning from Noisy and Incomplete Data
Many important problems in science and engineering involve inferring a signal from noisy and/or incomplete observations, where the observation process is known. Historically, this problem has been tac...
arxiv.org
January 8, 2026 at 12:37 PM
Can we train neural networks just with permutations of their initial weights? And if so, what's the best initialisation distribution?
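A rigged toy version of the question, where some permutation of the initial weights is exact by construction, so brute-force search over permutations can "train" the model; everything here is illustrative:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# "Train" a tiny linear model purely by permuting its initial weights.
n = 6
X = rng.standard_normal((100, n))
w_true = rng.standard_normal(n)
y = X @ w_true
w_init = rng.permutation(w_true)       # init = a shuffled copy of the solution

def mse(w):
    return np.mean((X @ w - y) ** 2)

# brute-force search over all n! orderings of the initial weights
best = min(itertools.permutations(w_init), key=lambda p: mse(np.array(p)))
best_loss = mse(np.array(best))
print(best_loss)  # 0.0: the search recovers the exact permutation
```

Of course the interesting case is when no permutation is exact, which is where the choice of initialisation distribution would start to matter.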
December 10, 2025 at 5:21 PM
Reposted by lebellig
PS: We also recently released a unified codebase for discrete diffusion, check it out!

𝕏 Thread : x.com/nkalyanv99/...
🔗 GitHub: github.com/nkalyanv99/...
📚 Docs: nkalyanv99.github.io/UNI-D2/
December 9, 2025 at 4:06 PM
Reposted by lebellig
🆕 “Foundations of Diffusion Models in General State Spaces: A Self-Contained Introduction”

Huge thanks to Tobias Hoppe, @k-neklyudov.bsky.social,
@alextong.bsky.social, Stefan Bauer and @andreadittadi.bsky.social for their supervision! 🙌

arxiv : arxiv.org/abs/2512.05092 🧵👇
December 9, 2025 at 4:05 PM
"Improved Mean Flows: On the Challenges of Fastforward Generative Models" arxiv.org/abs/2512.02012 questions this approximation and proposes a new training process for mean flows.
I was intrigued by "Mean Flows for One-Step Generative Modeling" and, in particular, by how it handles averaging the marginal velocity field during training. In practice, they don't average at all: they substitute the conditional velocity in the loss function. I wonder how this mismatch impacts generation...
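A toy 1-D illustration of the gap between the two targets, assuming independent Gaussian endpoints and a linear interpolant (all choices here are illustrative): the marginal velocity at a point is the average of the conditional targets of the sample pairs whose interpolant passes nearby, while the conditional targets themselves vary widely.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
t = 0.5

x0 = rng.standard_normal(n)           # source samples: N(0, 1)
x1 = rng.standard_normal(n) + 4.0     # target samples: N(4, 1), independent coupling
xt = (1 - t) * x0 + t * x1            # linear interpolant at time t
u_cond = x1 - x0                      # conditional velocity: the usual FM regression target

# Monte Carlo estimate of the marginal velocity at a point x: average the
# conditional targets of the sample pairs whose interpolant lands near x.
x = 2.0
near = np.abs(xt - x) < 0.05
u_marg = u_cond[near].mean()
spread = u_cond[near].std()
print(u_marg, spread)   # conditional targets spread out; their average is the marginal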
December 3, 2025 at 9:32 PM
Hi @google, can you provide 100k TPU hours to explore the design space of diffusion bridges for image-to-image translation? x1 vs drift prediction, architectures and # params, dataset size, scaling couplings and batch sizes (for minibatch-based couplings). I can run everything in JAX in return...
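For concreteness, the simplest point in that design space is a Brownian bridge pinned at the two endpoint images; `sigma` and the 4×4 toy "images" below are illustrative choices, not taken from any specific paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge_sample(x0, x1, t, sigma=1.0):
    """Sample x_t from a Brownian bridge pinned at x0 (t=0) and x1 (t=1):
    mean (1 - t) * x0 + t * x1, variance sigma**2 * t * (1 - t)."""
    mean = (1 - t) * x0 + t * x1
    std = sigma * np.sqrt(t * (1 - t))
    return mean + std * rng.standard_normal(x0.shape)

# toy 4x4 "images": bridge between all-zeros and all-ones
x0 = np.zeros((4, 4))
x1 = np.ones((4, 4))
xt = brownian_bridge_sample(x0, x1, t=0.5, sigma=0.1)
print(xt.mean())   # close to 0.5, the bridge mean at t = 0.5
```

Training then regresses either the endpoint x1 or the bridge drift from (x_t, t), which is the "x1 vs drift prediction" axis in the post.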
December 3, 2025 at 9:19 PM
Reposted by lebellig
Yesterday, @nicolasdufour.bsky.social defended his PhD. I really enjoyed the years of collaboration with @vickykalogeiton.bsky.social (& @loicland.bsky.social)

Video: youtube.com/live/DXQ7FZA...

Big thanks to the jury @dlarlus.bsky.social @ptrkprz.bsky.social @gtolias.bsky.social A. Efros & T. Karras
November 27, 2025 at 7:14 PM
Reposted by lebellig
@climateainordics.com is now on youtube! Check out some amazing talks on how to help fight climate change using AI!

youtube.com/@climateaino...
♻️ Re-watch, re-learn, re-connect!

We are on the tube now! Check out the recordings from our first years' events!

✨🍿🌍📺🌿🌊🌲☀️🌱🍄🌳

youtube.com/@climateaino...
November 26, 2025 at 2:06 PM
Reposted by lebellig
@neuripsconf.bsky.social is two weeks away!

📢 Stop missing great workshop speakers just because the workshop wasn’t on your radar. Browse them all in one place:
robinhesse.github.io/workshop_spe...

(also available for @euripsconf.bsky.social)

#NeurIPS #EurIPS
November 19, 2025 at 8:00 PM
Calling it for today... I tried using the Gemini 3 Pro preview to build some JS animations, and it went well.
November 18, 2025 at 8:17 PM
Interpolation between two Gaussian distributions on a flat torus (my personal benchmark for new LLMs)
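The benchmark is easy to specify precisely, which is what makes it a nice LLM probe. A reference sketch of per-sample shortest-arc interpolation on the flat torus [0, 2π)²; note this moves each sample along its geodesic, it is not an optimal-transport interpolation between the two distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
TWO_PI = 2 * np.pi

def torus_interp(a, b, t):
    """Move each point from a toward b along the shortest arc on the
    flat torus [0, 2*pi)^2, a fraction t of the way."""
    delta = (b - a + np.pi) % TWO_PI - np.pi   # signed shortest displacement
    return (a + t * delta) % TWO_PI

# samples from two wrapped Gaussians on the 2-D torus
a = (0.5 * rng.standard_normal((1000, 2)) + 1.0) % TWO_PI
b = (0.5 * rng.standard_normal((1000, 2)) + 5.0) % TWO_PI

mid = torus_interp(a, b, 0.5)
print(mid.min(), mid.max())   # all midpoints stay on the torus
```

The wrap-aware `delta` is the part LLMs tend to get wrong: naive linear interpolation takes the long way around whenever the two modes straddle the 0/2π seam.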
November 18, 2025 at 6:43 PM
It may not top ImageNet benchmarks, but honestly, that hardly matters... the removal of the VAE component is a huge relief and makes it much easier to apply diffusion models to domain-specific datasets that lack large-scale VAEs.
"Back to Basics: Let Denoising Generative Models Denoise" by
Tianhong Li & Kaiming He arxiv.org/abs/2511.13720
Diffusion models in pixel space, without a VAE, with clean-image prediction = nice generation results. Not a new framework, but a nice exploration of the design space of diffusion models.
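The clean-image ("x-prediction") objective the paper revisits is tiny to write down. A NumPy sketch with an identity denoiser standing in for the network; `sigma` and the shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def x_pred_loss(denoise, x, sigma):
    """Clean-image prediction: the model sees z = x + sigma * eps and
    regresses the clean image x directly, instead of the noise eps."""
    eps = rng.standard_normal(x.shape)
    z = x + sigma * eps
    return np.mean((denoise(z, sigma) - x) ** 2)

# toy "images" and a trivial identity denoiser standing in for a network
x = rng.uniform(size=(32, 8, 8))
loss = x_pred_loss(lambda z, s: z, x, sigma=0.3)
print(loss)   # ≈ sigma**2 = 0.09 for the identity denoiser
```

Because the loss lives directly in pixel space, nothing here depends on a pretrained latent VAE, which is exactly the appeal for domain-specific data.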
November 18, 2025 at 5:10 PM
Reposted by lebellig
We figured out flow matching over states that change dimension. With "Branching Flows", the model decides how big things must be! This works wherever flow matching works, with discrete, continuous, and manifold states. We think this will unlock some genuinely new capabilities.
November 10, 2025 at 9:10 AM
Reposted by lebellig
We created a 1-hour live-coding tutorial to get started in imaging problems with AI, using the deepinverse library

youtu.be/YRJRgmXV8_I?...
DeepInverse tutorial - computational imaging with AI
YouTube video by DeepInverse
youtu.be
November 13, 2025 at 3:24 PM
I’ll be at EurIPS in Copenhagen in early December! Always up for chats about diffusion, flow matching, Earth observation, AI4climate, etc. Ping me if you’re going! 🇩🇰🌍
November 12, 2025 at 9:07 PM
I first came across the idea of learning curved interpolants in "Branched Schrödinger Bridge Matching" arxiv.org/abs/2506.09007. I liked it, but I’m curious how well it scales to high-dim settings and how challenging it is to learn sufficiently good interpolants to train the diffusion bridge
November 12, 2025 at 8:57 PM
"Curly Flow Matching for Learning Non-gradient Field Dynamics" @kpetrovvic.bsky.social et al. arxiv.org/pdf/2510.26645
Solving the Schrödinger bridge problem with a non-zero-drift reference process: learn curved interpolants, apply minibatch OT with the induced metric, then learn the mixture of diffusion bridges.
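The minibatch OT step can be sketched in isolation. A brute-force exact coupling for tiny batches, with plain squared Euclidean cost standing in for the interpolant-induced metric used in the paper; batch size and data are illustrative:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def minibatch_ot_pairing(x0, x1):
    """Exact minibatch OT coupling by brute force over permutations
    (tiny batches only; real code would use a Hungarian solver such as
    scipy's linear_sum_assignment). Pairs x0[i] with x1[perm[i]] so as
    to minimise the total squared cost."""
    n = len(x0)
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
    idx = np.arange(n)
    best = min(itertools.permutations(range(n)),
               key=lambda p: cost[idx, list(p)].sum())
    return np.array(best)

x0 = rng.standard_normal((6, 2))
x1 = x0 + 0.01 * rng.standard_normal((6, 2))   # a small perturbation of x0
perm = minibatch_ot_pairing(x0, x1)
print(perm)   # with x1 ≈ x0, OT pairing likely recovers the identity
```

Swapping the Euclidean cost for the metric induced by learned interpolants is what turns this generic step into the Curly FM version.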
November 12, 2025 at 8:09 PM
Reposted by lebellig
I'm on my way to @caltech.edu for an AI + Science conference. Looking forward to seeing some friends and meeting new ones. There will be a livestream.
aiscienceconference.caltech.edu
November 9, 2025 at 8:41 PM