Tam Le
@ntamle.bsky.social
28 followers 31 following 8 posts
Assistant professor at Université Paris Cité - LPSM. Working on optimization and machine learning.
ntamle.bsky.social
I will be at ICCOPT (USC, Los Angeles) next week to present this work. This will be on Wednesday, July 23, alongside other nice talks on first-order methods: the heavy-ball ODE, optimal smoothing, and nonsmoothness.
ntamle.bsky.social
We studied both the continuous-time and the discretized dynamics. The paper also contains other results: on complexity in the convex case, and on limits of limit points for the discretized set-valued dynamics ...
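For reference, here is a schematic form of the two dynamics in question (my own summary of the standard setting, with illustrative notation, not the paper's exact statements): the perturbed subgradient flow and its explicit discretization with errors.

```latex
% Schematic continuous-time vs. discretized inexact subgradient dynamics
% (standard forms; the notation is illustrative, not taken from the paper).
\begin{align*}
  \dot{x}(t) &\in -\partial f\bigl(x(t)\bigr) + E(t),
      && \|E(t)\| \le \epsilon \quad \text{(perturbed subgradient flow)} \\
  x_{k+1} &= x_k - \alpha_k \,(g_k + e_k),
      && g_k \in \partial f(x_k),\ \|e_k\| \le \epsilon \quad \text{(inexact subgradient method)}
\end{align*}
```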
ntamle.bsky.social
For instance, if a critical point is flat, it may be more sensitive to errors, since the vanishing gradient cannot compensate for the perturbations. We thus obtain an estimate (rho) of the fluctuations around the critical set, depending on the coefficients theta and beta.
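A toy illustration of this flatness/sensitivity trade-off (my own example, not from the paper): for f(x) = |x|^p with p > 1, the critical point at 0 becomes flatter as p grows, and a gradient error of size epsilon dominates the true gradient on a wider and wider region.

```latex
% Toy example (not from the paper): f(x) = |x|^p has a critical point at 0
% that becomes flatter as p grows.
f(x) = |x|^{p}, \qquad |f'(x)| = p\,|x|^{p-1},
\qquad |f'(x)| \le \epsilon \iff |x| \le \left(\tfrac{\epsilon}{p}\right)^{1/(p-1)}.
% For epsilon = 1e-4 this radius is about 5e-5 when p = 2 but about 0.28
% when p = 10: the flatter the critical point, the larger the region where
% an error of size epsilon can overwhelm the true gradient.
```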
ntamle.bsky.social
The idea of the analysis was to quantify how flat or sharp critical points are. So we relied on the KL inequality and a metric subregularity condition. These hold for a large class of functions, the "definable" or semialgebraic ones (say, piecewise polynomial).
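For readers who want the definitions, these are the standard textbook forms of the two conditions (possibly not the exact variants used in the paper):

```latex
% Kurdyka-Lojasiewicz (KL) inequality at a critical point xbar: there is a
% concave desingularizing function phi with phi(0) = 0 (Lojasiewicz form:
% phi(s) = c s^{1 - theta}, theta in [0,1)) such that, for x near xbar,
\varphi'\bigl(f(x) - f(\bar{x})\bigr)\,\operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; 1.
% Metric subregularity of the subdifferential at xbar: for x near xbar,
\operatorname{dist}\bigl(x, (\partial f)^{-1}(0)\bigr) \;\le\; \kappa\,\operatorname{dist}\bigl(0, \partial f(x)\bigr).
```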
ntamle.bsky.social
🎉🎉🎉 Our paper "Inexact subgradient methods for semialgebraic functions" has been accepted at Mathematical Programming!! This is joint work with Jérôme Bolte, Eric Moulines, and Edouard Pauwels, where we study a subgradient method with errors for nonconvex nonsmooth functions.

arxiv.org/pdf/2404.19517
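As a rough illustration of the kind of method being analyzed (a minimal sketch of a generic subgradient scheme with bounded errors, not the paper's exact algorithm or assumptions):

```python
# Minimal sketch of a subgradient method with bounded errors, run on a
# simple nonsmooth semialgebraic function f(x) = |x_1| + x_2^2.
# Illustrative only; not the algorithm or assumptions from the paper.
import numpy as np

def f(x):
    return abs(x[0]) + x[1] ** 2

def subgrad(x):
    # One element of the subdifferential of f at x.
    return np.array([np.sign(x[0]) if x[0] != 0 else 0.0, 2.0 * x[1]])

rng = np.random.default_rng(0)
x = np.array([2.0, -1.5])
eps = 0.05                           # bound on the subgradient error
for k in range(1, 2001):
    step = 1.0 / np.sqrt(k)          # vanishing step sizes
    error = eps * rng.uniform(-1.0, 1.0, size=2)   # bounded perturbation
    x = x - step * (subgrad(x) + error)

print("final iterate:", x, "f(x) =", f(x))
```

With vanishing steps and an error level eps, one expects the iterates to settle into a neighborhood of the critical set whose size depends on eps, which is the kind of fluctuation estimate discussed in the posts above.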
ntamle.bsky.social
If it went down, then it must be a definable function, and I know you used a conservative gradient.
ntamle.bsky.social
I find Coste definitely more accessible for learning the topic, but when it comes to finding/citing a specific property, I prefer van den Dries!
ntamle.bsky.social
Our paper "Universal generalization guarantees for Wasserstein distributionally robust models" with Jérôme Malick is accepted at ICLR 2025!!!! I'm so happy about this one, we really improved the presentation since the first submission. arxiv.org/abs/2402.11981
Universal generalization guarantees for Wasserstein distributionally robust models
Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts. Recent statistical analyses have prov...
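For context, the standard Wasserstein distributionally robust training problem has the following form (generic formulation; the notation is not taken from the paper):

```latex
% Wasserstein DRO: minimize the worst-case expected loss over all
% distributions Q within Wasserstein radius rho of the empirical measure.
\min_{\theta} \;\; \sup_{Q \,:\, W(Q, \widehat{P}_n) \le \rho} \; \mathbb{E}_{\xi \sim Q}\bigl[\ell(\theta, \xi)\bigr]
```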