Florentin Guth
@florentinguth.bsky.social
160 followers 110 following 17 posts
Postdoc at NYU CDS and Flatiron CCN. Wants to understand why deep learning works.
Reposted by Florentin Guth
unireps.bsky.social
🔥 Mark your calendars for the next session of the @ellis.eu x UniReps Speaker Series!

🗓️ When: 31st July – 16:00 CEST
📍 Where: ethz.zoom.us/j/66426188160
🎙️ Speakers: Keynote by @pseudomanifold.topology.rocks & Flash Talk by @florentinguth.bsky.social
Reposted by Florentin Guth
unireps.bsky.social
Next appointment: 31st July 2025 – 16:00 CEST on Zoom with 🔵 Keynote: @pseudomanifold.topology.rocks (University of Fribourg) 🔴 Flash Talk: @florentinguth.bsky.social (NYU & Flatiron)
florentinguth.bsky.social
What I meant is that there are generalizations of the CLT to infinite variance. The limit is then an alpha-stable distribution (which includes the Gaussian and the Cauchy, but not the Gumbel). Also, even if x is heavy-tailed, log p(x) typically is not. So a product of Cauchy distributions has an (approximately) Gaussian log p(x)!
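A quick numerical check of that last claim (my own sketch, not from the thread): for a vector of d iid Cauchy components, each term -log(π(1 + x_i²)) of log p(x) has exponential tails and hence finite variance, so the ordinary CLT applies and the sum looks Gaussian for large d.

```python
# Monte Carlo check: log p(x) for d iid Cauchy components is a sum of iid
# finite-variance terms, so it should be close to Gaussian for large d.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n = 1000, 20000                       # dimension, number of samples
x = rng.standard_cauchy(size=(n, d))     # heavy-tailed components
logp = -np.sum(np.log(np.pi * (1 + x**2)), axis=1)

z = (logp - logp.mean()) / logp.std()    # standardize
print("skewness:", stats.skew(z))             # ~0 for a Gaussian
print("excess kurtosis:", stats.kurtosis(z))  # ~0 for a Gaussian
```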
florentinguth.bsky.social
At the same time, there are simple distributions that do have Gumbel-distributed log probabilities. The simplest example I could find is a Gaussian scale mixture whose variance is exponentially distributed. So it is not clear whether we will be able to say anything more about this! 2/2
florentinguth.bsky.social
If you have independent components, even if heavy-tailed, then log p(x) is a sum of iid variables and is thus (approximately, in high dimensions) distributed according to a (sum-)stable law. A conjecture is that the Gumbel behavior reflects a minimum coming from a logsumexp, i.e. a mixture distribution (a sum of p's) rather than a product (a sum of log p's). 1/2
florentinguth.bsky.social
For a more in-depth discussion of the approach and results (and more!): arxiv.org/pdf/2506.05310
florentinguth.bsky.social
Finally, we test the manifold hypothesis: what is the local dimensionality around an image? We find that it depends on both the image and the size of the local neighborhood, and there exist images with both large full-dimensional and small low-dimensional neighborhoods.
florentinguth.bsky.social
High probability ≠ typicality: very high-probability images are rare. This is not a contradiction: frequency = probability density *multiplied by volume*, and volume is weird in high dimensions! Also, the log probabilities are Gumbel-distributed, and we don't know why!
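The standard d-dimensional Gaussian already makes the density-times-volume point concrete; here is a small sketch (mine, not from the paper), where the mode has the highest density yet essentially no samples land near it:

```python
# High probability != typicality: in a d-dimensional standard Gaussian,
# density is maximal at the origin, but samples concentrate on a thin shell
# of radius ~sqrt(d), because volume grows so fast with the radius.
import numpy as np

rng = np.random.default_rng(0)
d = 1000
x = rng.standard_normal((10_000, d))
r = np.linalg.norm(x, axis=1)

print("typical radius vs sqrt(d):", r.mean(), np.sqrt(d))
print("fraction of samples with |x| < sqrt(d)/2:", np.mean(r < np.sqrt(d) / 2))
# Log-density gap between the mode and a typical sample is ~d/2 nats:
print("log p(mode) - log p(typical):", 0.5 * r.mean() ** 2)
```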
florentinguth.bsky.social
These are the highest and lowest probability images in ImageNet64. An interpretation is that -log2 p(x) is the size in bits of the optimal compression of x: higher probability images are more compressible. Also, the probability ratio between these is 10^14,000! 🤯
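Back-of-the-envelope on that ratio (my arithmetic, not from the paper): a density ratio of 10^14,000 corresponds to a code-length difference of about 46,500 bits, i.e. a few bits per dimension for a 64x64x3 image.

```python
# Convert the 10^14,000 probability ratio into bits of code length.
import numpy as np

ratio_log10 = 14_000
bits = ratio_log10 * np.log2(10)     # ~46,507 bits in total
dims = 64 * 64 * 3                   # ImageNet64: pixels x channels
print(f"{bits:.0f} bits total, {bits / dims:.2f} bits per dimension")
```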
florentinguth.bsky.social
But how do we know our probability model is accurate on real data?
In addition to computing cross-entropy/NLL, we show *strong* generalization: models trained on *disjoint* subsets of the data predict the *same* probabilities if the training set is large enough!
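A sketch of what such a check could look like (assumed interface, not necessarily the paper's exact protocol): train two models on disjoint halves of the data, evaluate both on the same held-out images, and compare their per-image log probabilities.

```python
# Strong generalization check: two models trained on disjoint data subsets
# should assign (nearly) the same log probability to each test image.
import numpy as np

def compare_log_probs(log_p_A, log_p_B):
    """log_p_A, log_p_B: per-image log probabilities from the two models,
    evaluated on the same held-out test images."""
    corr = np.corrcoef(log_p_A, log_p_B)[0, 1]
    gap = np.abs(log_p_A - log_p_B).mean()
    print(f"correlation: {corr:.4f}, mean |difference|: {gap:.2f} nats")
```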
florentinguth.bsky.social
We call this approach "dual score matching". The time derivative constrains the learned energy to satisfy the diffusion equation, which enables recovery of accurate and *normalized* log probability values, even in high-dimensional multimodal distributions.
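To spell out the constraint (my reading, assuming the noisy densities follow a standard heat flow ∂_t p = ½ Δp): writing U = -log p for the energy, the diffusion equation becomes

∂_t U = ½ (ΔU − |∇U|²),

so the time derivative of the learned energy is tied to its spatial derivatives. Together with the known Gaussian limit at large noise, this is what makes absolute, normalized log probabilities recoverable rather than energies known only up to a constant.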
florentinguth.bsky.social
We also propose a simple procedure to obtain good network architectures for the energy U: choose any pre-existing score network s and simply take its inner product with the input image y! We show that this preserves the inductive biases of the base score network: grad_y U ≈ s.
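A sketch of that wrapper (assumed form and scaling; the exact normalization used in the paper may differ):

```python
# Wrap a pre-existing score network s(y, sigma) into a scalar energy head
# U(y, sigma) = <y, s(y, sigma)>. grad_y U is then obtained by autograd, and
# the claim in the post is that it stays close to s itself.
import torch
import torch.nn as nn

class InnerProductEnergy(nn.Module):
    def __init__(self, score_net: nn.Module):
        super().__init__()
        self.score_net = score_net

    def forward(self, y, sigma):
        s = self.score_net(y, sigma)          # same shape as y
        return (y * s).flatten(1).sum(-1)     # one scalar energy per image
```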
florentinguth.bsky.social
How do we train an energy model?
Inspired by diffusion models, we learn the energy of both clean and noisy images along a diffusion. It is optimized via a sum of two score matching objectives, which constrain its derivatives with respect to both the image (space) and the noise level (time).
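A minimal sketch of what one such training step could look like (my reconstruction, assuming the parametrization y = x + σ·ε and standard denoising-style regression targets; the actual objectives and weightings in the paper may differ):

```python
# Two denoising-style objectives: one for grad_y U (spatial score matching)
# and one for dU/dsigma (noise-level / "time" score matching).
import torch

def dual_objective_step(energy, x, sigma):
    """energy(y, sigma) -> per-image scalar energy, shape [B].
    x: clean images [B, C, H, W]; sigma: noise levels [B, 1, 1, 1]."""
    eps = torch.randn_like(x)
    y = (x + sigma * eps).detach().requires_grad_(True)  # noisy images
    sig = sigma.detach().requires_grad_(True)            # separate leaf for d/dsigma

    U = energy(y, sig)
    grad_y, grad_sig = torch.autograd.grad(U.sum(), (y, sig), create_graph=True)

    d = x[0].numel()
    # Spatial target: grad_y U should regress onto eps / sigma.
    loss_space = ((grad_y - eps / sigma) ** 2).flatten(1).sum(-1).mean()
    # Time target: dU/dsigma should regress onto (d - |eps|^2) / sigma.
    target_time = (d - (eps ** 2).flatten(1).sum(-1)) / sigma.flatten()
    loss_time = ((grad_sig.flatten() - target_time) ** 2).mean()
    return loss_space + loss_time
```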
florentinguth.bsky.social
What is the probability of an image? What do the highest and lowest probability images look like? Do natural images lie on a low-dimensional manifold?
In a new preprint with Zahra Kadkhodaie and @eerosim.bsky.social, we develop a novel energy-based model in order to answer these questions: 🧵
florentinguth.bsky.social
🌈 I'll be presenting our JMLR paper "A rainbow in deep network black boxes" today at 3pm at @iclr-conf.bsky.social!
Come to poster #334 if you're interested, I'll be happy to chat
More details in the threads on the other website: x.com/FlorentinGut...
florentinguth.bsky.social
This also shows up in which operator space and norm you consider: bounded operators with the operator norm, or trace-class operators with the nuclear norm. This matters a lot in infinite dimensions, but also in finite but large dimensions!
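A toy numerical illustration of how far apart these norms get in large dimension (my own example, not from the conversation):

```python
# Operator norm vs. nuclear norm of a normalized n x n Gaussian matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n)) / np.sqrt(n)   # singular values are O(1)
sv = np.linalg.svd(A, compute_uv=False)

print("operator norm (largest singular value):", sv[0])    # ~2
print("nuclear norm (sum of singular values):", sv.sum())  # ~0.85 * n, huge
# A rank-1 matrix with operator norm 1 has nuclear norm 1: the two norms
# agree on low-rank matrices and differ by a factor ~n on "generic" ones.
```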
Reposted by Florentin Guth
spmontecarlo.bsky.social
A loose thought that's been bubbling around for me recently: when you think of a 'generic' big matrix, you might think of it as being close to low-rank (e.g. kernel matrices), or very far from low-rank (e.g. the typical scope of random matrix theory). Intuition ought to be quite different in each.
florentinguth.bsky.social
Absolutely! Their behavior is quite different (e.g., consistency of eigenvalues and eigenvectors in the proportional asymptotic regime). You also want to use different objects to describe them: eigenvalues should be thought of either as a non-increasing sequence or as samples from a distribution.
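A small demo of the "samples from a distribution" viewpoint (my own example): in the proportional regime, individual sample covariance eigenvalues are not consistent for the population values, but their empirical distribution converges (here, to Marchenko-Pastur).

```python
# Sample covariance spectrum in the proportional regime p/n fixed:
# the population covariance is the identity, yet the eigenvalues spread
# over the Marchenko-Pastur support instead of concentrating at 1.
import numpy as np

rng = np.random.default_rng(0)
n, p = 4000, 1000                        # aspect ratio gamma = p/n = 0.25
X = rng.standard_normal((n, p))
evals = np.linalg.eigvalsh(X.T @ X / n)  # sample covariance eigenvalues

gamma = p / n
mp_edges = ((1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2)
print("empirical range:", evals.min(), evals.max())
print("Marchenko-Pastur edges:", mp_edges)
```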
Reposted by Florentin Guth
suryaganguli.bsky.social
Speaking at this #NeurIPS2024 workshop on a new analytic theory of creativity in diffusion models that predicts what new images they will create and explains how these images are constructed as patch mosaics of the training data. Great work by @masonkamb.bsky.social
scienceofdlworkshop.github.io
Reposted by Florentin Guth
tedyerxa.bsky.social
Excited to present work with @jfeather.bsky.social @eerosim.bsky.social and @sueyeonchung.bsky.social today at NeurIPS!

May do a proper thread later on, but come by or shoot me a message if you are in Vancouver and want to chat :)

Brief details in post below
florentinguth.bsky.social
Some more random conversation topics:
- what we should do to improve/replace these huge conferences
- replica method and other statphys-inspired high-dim probability (finally trying to understand what the fuss is about)
- textbooks that have been foundational/transformative for your work
florentinguth.bsky.social
I'll be at @neuripsconf.bsky.social from Tuesday to Sunday!

Feel free to reach out (Whova, email, DM) if you want to chat about scientific/theoretical understanding of deep learning, diffusion models, or more! (see below)

And check out our Sci4DL workshop on Sunday: scienceofdlworkshop.github.io