Sander Dieleman
sedielem.bsky.social
Blog: https://sander.ai/
🐦: https://x.com/sedielem
Research Scientist at Google DeepMind (WaveNet, Imagen 3, Veo, ...). I tweet about deep learning (research + software), music, generative models (personal account).
Pinned
New blog post: let's talk about latents!
sander.ai/2025/04/15/l...
Generative modelling in latent space
Latent representations for generative models.
sander.ai
Great blog post on rotary position embeddings (RoPE) in more than one dimension, with interactive visualisations, a bunch of experimental results, and code!
On N-dimensional Rotary Positional Embeddings
An exploration of N-dimensional rotary positional embeddings (RoPE) for vision transformers.
jerryxio.ng
July 28, 2025 at 2:51 PM
... also very honoured and grateful to see my blog linked in the video description! 🥹🙏🙇
July 26, 2025 at 9:59 PM
I blog and give talks to help build people's intuition for diffusion models. YouTubers like @3blue1brown.com and Welch Labs have been a huge inspiration: their ability to make complex ideas in maths and physics approachable is unmatched. Really great to see them tackle this topic!
New video on the details of diffusion models: youtu.be/iv-5mZ_9CPY

Produced by Welch Labs, this is the first in a short series of guest videos on 3b1b this summer. I enjoyed providing editorial feedback throughout the last several months, and couldn't be happier with the result.
But how do AI videos actually work? | Guest video by @WelchLabsVideo
YouTube video by 3Blue1Brown
youtu.be
July 26, 2025 at 9:59 PM
Everyone is welcome!
July 15, 2025 at 9:38 PM
Hello #ICML2025👋, anyone up for a diffusion circle? We'll just sit down somewhere and talk shop.

🕒Join us at 3PM on Thursday July 17. We'll meet here (see photo, near the west building's west entrance), and venture out from there to find a good spot to sit. Tell your friends!
July 15, 2025 at 9:34 PM
Diffusion models have analytical solutions, but they involve sums over the entire training set, and they don't generalise at all. They are mainly useful to help us understand how practical diffusion models generalise.

Nice blog + code by Raymond Fan: rfangit.github.io/blog/2025/op...
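A rough sketch (my own illustration, not from the linked post) of what that closed-form solution looks like: the optimal denoiser under an empirical data distribution is a softmax-weighted average over the training points, so its outputs never leave their convex hull — which is why it cannot generalise.

```python
import numpy as np

def optimal_denoiser(x_t, data, sigma):
    """Closed-form optimal denoiser for an empirical data distribution.

    The posterior mean E[x_0 | x_t] is an average of all training points,
    weighted by their Gaussian likelihood at noise level sigma.
    (Variable names are mine, chosen for illustration.)
    """
    # squared distance from the noisy input to every training point
    d2 = np.sum((data - x_t) ** 2, axis=1)   # shape (N,)
    logw = -d2 / (2.0 * sigma**2)
    w = np.exp(logw - logw.max())            # numerically stable softmax
    w /= w.sum()
    return w @ data                          # weighted average of the data

# Toy example: a "training set" of two points in 2D.
data = np.array([[0.0, 0.0], [10.0, 0.0]])
x_t = np.array([1.0, 0.0])                   # a noisy observation
print(optimal_denoiser(x_t, data, sigma=1.0))  # snaps to the nearest point
```

Note the sum over the entire training set: evaluating this exactly at scale is impractical, which is why practical diffusion models approximate the score with a neural network instead.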
July 5, 2025 at 4:01 PM
Note also that getting this number slightly wrong isn't that big a deal. Even if you make it 100k instead of 10k, it's not going to change the granularity of the high frequencies that much because of the logarithmic frequency spacing.
June 24, 2025 at 11:39 PM
The frequencies are log-spaced, so historically, 10k was plenty to ensure that all positions can be uniquely distinguished. Nowadays of course sequences can be quite a bit longer.
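A quick numerical sketch of this point (my own, not from the post): with the standard RoPE parameterisation, the highest frequency is exactly 1 regardless of the base, and increasing the base 10x only stretches out the lowest frequencies.

```python
import numpy as np

def rope_freqs(base, dim):
    # Standard RoPE rotation frequencies: log-spaced from 1 down to ~1/base.
    return base ** (-np.arange(0, dim, 2) / dim)

f10k = rope_freqs(10_000, 128)
f100k = rope_freqs(100_000, 128)

# The highest frequency is exactly 1.0 for either base...
print(f10k[0], f100k[0])       # 1.0 1.0
# ...and the top few frequencies barely move when the base grows 10x:
print(f10k[:4] / f100k[:4])    # ratios just above 1
# Only the lowest frequencies stretch out, lengthening the wavelengths
# that distinguish very distant positions:
print(f10k[-1] / f100k[-1])    # close to 10
```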
June 24, 2025 at 11:39 PM
Here's the third and final part of Slater Stich's "History of diffusion" interview series!

The other two interviewees' research played a pivotal role in the rise of diffusion models, whereas I just like to yap about them 😬 This was a wonderful opportunity to do exactly that!
History of Diffusion - Sander Dieleman
YouTube video by Bain Capital Ventures
www.youtube.com
May 14, 2025 at 4:11 PM
The ML for audio 🗣️🎵🔊 workshop is back at ICML 2025 in Vancouver! It will take place on Saturday, July 19. Featuring invited talks from Dan Ellis, Albert Gu, James Betker, Laura Laurenti and Pratyusha Sharma.

Submission deadline: May 23 (Friday next week)
mlforaudioworkshop.github.io
[“Machine Learning for Audio Workshop”]
[“Discover the harmony of AI and sound.”]
mlforaudioworkshop.github.io
May 14, 2025 at 12:16 PM
Reposted by Sander Dieleman
I am very happy to share our latest work on the information theory of generative diffusion:

"Entropic Time Schedulers for Generative Diffusion Models"

We find that the conditional entropy offers a natural data-dependent notion of time during generation

Link: arxiv.org/abs/2504.13612
April 29, 2025 at 1:17 PM
One weird trick for better diffusion models: concatenate some DINOv2 features to your latent channels!

Combining latents with PCA components extracted from DINOv2 features yields faster training and better samples. Also enables a new guidance strategy. Simple and effective!
1/n Introducing ReDi (Representation Diffusion): a new generative approach that leverages a diffusion model to jointly capture
– Low-level image details (via VAE latents)
– High-level semantic features (via DINOv2)🧵
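A minimal sketch of the concatenation idea (shapes and names are my assumptions for illustration, not the actual ReDi implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a VAE latent with C=4 channels on a 32x32 grid,
# and DINOv2 patch features (768-dim) resampled onto the same grid.
latents = rng.normal(size=(4, 32, 32))
dino = rng.normal(size=(768, 32, 32))

# PCA: project the 768-dim features onto their top-k principal components.
k = 8
flat = dino.reshape(768, -1).T                 # (1024, 768) patch vectors
flat = flat - flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat, full_matrices=False)
pca = (flat @ vt[:k].T).T.reshape(k, 32, 32)   # (8, 32, 32)

# Concatenate along the channel axis: the diffusion model then jointly
# denoises low-level latents and high-level semantic components.
x = np.concatenate([latents, pca], axis=0)
print(x.shape)                                 # (12, 32, 32)
```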
April 25, 2025 at 1:03 PM
New blog post: let's talk about latents!
sander.ai/2025/04/15/l...
Generative modelling in latent space
Latent representations for generative models.
sander.ai
April 15, 2025 at 9:40 AM
Amazing interview with Yang Song, one of the key researchers we have to thank for diffusion models.

The most important lesson: be fearless! The community's view on score matching was quite pessimistic at the time, he went against the grain and made it work at scale!

www.youtube.com/watch?v=ud6z...
History of Diffusion - Yang Song
YouTube video by Bain Capital Ventures
www.youtube.com
April 14, 2025 at 4:47 PM
Reposted by Sander Dieleman
🥁Introducing Gemini 2.5, our most intelligent model with impressive capabilities in advanced reasoning and coding.

Now integrating thinking capabilities, 2.5 Pro Experimental is our most performant Gemini model yet. It’s #1 on the LM Arena leaderboard. 🥇
March 25, 2025 at 5:25 PM
We are hiring on the Generative Media team in London: boards.greenhouse.io/deepmind/job...

We work on Imagen, Veo, Lyria and all that good stuff. Come work with us! If you're interested, apply before Feb 28.
Research Scientist, Generative Media
London, UK
boards.greenhouse.io
February 21, 2025 at 7:00 PM
Great interview with @jascha.sohldickstein.com about diffusion models! This is the first in a series: similar interviews with Yang Song and yours truly will follow soon.

(One of these is not like the others -- both of them basically invented the field, and I occasionally write a blog post 🥲)
History of Diffusion - Jascha Sohl-Dickstein
YouTube video by Bain Capital Ventures
www.youtube.com
February 10, 2025 at 10:28 PM
Yes! Also listen to this and contemplate the universe: grumusic.bandcamp.com/album/cosmog...
Cosmogenesis, by grumusic
8 track album
grumusic.bandcamp.com
January 28, 2025 at 11:53 PM
This is just a tiny fraction of what's available, check out the schedule for more: neurips.cc/virtual/2024...
NeurIPS 2024 Schedule
neurips.cc
January 22, 2025 at 9:04 PM
10. Last but not least (😎), here's my own workshop talk about multimodal iterative refinement: the methodological tension between language and perceptual modalities, autoregression and diffusion, and how to bring these together 🍸 neurips.cc/virtual/2024...
NeurIPS Multimodal Iterative Refinement | NeurIPS 2024
neurips.cc
January 22, 2025 at 9:04 PM
9. A great overview of various strategies for merging multiple models together by Colin Raffel 🪿 neurips.cc/virtual/2024...
NeurIPS Colin Raffel | NeurIPS 2024
neurips.cc
January 22, 2025 at 9:04 PM
8. Ishan Misra gives a nice overview of Meta's Movie Gen model 📽️ (I have some questions about the diffusion vs. flow matching comparison though😁) neurips.cc/virtual/2024...
NeurIPS Invited Talk 4 (Speaker: Ishan Misra) | NeurIPS 2024
neurips.cc
January 22, 2025 at 9:04 PM
7. More on test-time scaling from @tomgoldstein.bsky.social, using a different approach based on recurrence 🐚 neurips.cc/virtual/2024... (some interesting comments on the link with diffusion models in the questions at the end!)
NeurIPS Tom Goldstein: Can transformers solve harder problems than they were trained on? Scaling up test-time computation via recurrence | NeurIPS 2024
neurips.cc
January 22, 2025 at 9:04 PM
6. @polynoamial.bsky.social talks about scaling compute at inference time, and the trade-offs involved -- in language models, but also in other settings 🧮 neurips.cc/virtual/2024...
NeurIPS Invited Speaker: Noam Brown, OpenAI | NeurIPS 2024
neurips.cc
January 22, 2025 at 9:04 PM
5. Sparse autoencoders were in vogue well over a decade ago, back when I was doing my PhD. They've recently been revived in the context of mechanistic interpretability of LLMs 🔍 @neelnanda.bsky.social gives a nice overview: neurips.cc/virtual/2024...
NeurIPS Neel Nanda: Sparse Autoencoders - Assessing the Evidence | NeurIPS 2024
neurips.cc
January 22, 2025 at 9:04 PM