Konstantin Mishchenko
@konstmish.bsky.social
300 followers 110 following 5 posts
Research Scientist at Meta Paris. Code generation, math, optimization
konstmish.bsky.social
I think it's much more important that we get a better scoring system for matching reviewers and papers. High affinity scores on OpenReview are often misleading. A lot of reviewers have complained to me that they get random papers from TMLR and, as a consequence, don't enjoy reviewing.
konstmish.bsky.social
Cool new result: a random arcsine stepsize schedule accelerates gradient descent (no momentum!) on separable problems. The separable class is clearly very limited, and it remains unclear whether acceleration via stepsizes alone is possible on general convex problems.
arxiv.org/abs/2412.05790
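To make the idea concrete, here's a minimal sketch of how I read the construction: inverse stepsizes drawn i.i.d. from the arcsine law (which is Beta(1/2, 1/2), and is also the limiting distribution of Chebyshev nodes), rescaled to [μ, L]. The exact scaling and guarantees are in the paper, so treat the details below as assumptions.

```python
import numpy as np

# Sketch of GD with random arcsine stepsizes (assumed construction:
# inverse stepsizes i.i.d. arcsine-distributed on [mu, L]; see the
# paper for the exact schedule and its guarantees).
rng = np.random.default_rng(0)

# Separable quadratic f(x) = 0.5 * sum_i a_i * x_i^2, curvatures in [mu, L].
a = np.array([0.1, 1.0, 10.0])
mu, L = a.min(), a.max()

x = np.ones_like(a)
for t in range(1000):
    # Beta(1/2, 1/2) is the arcsine law on (0, 1); rescale it to [mu, L].
    inv_step = mu + (L - mu) * rng.beta(0.5, 0.5)
    x = x - (a * x) / inv_step   # plain gradient step, no momentum

print(0.5 * np.sum(a * x**2))    # f(x_T); contracts at an accelerated rate on average
```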
konstmish.bsky.social
The idea that one needs to know a lot of advanced math to start doing research in ML seems so wrong to me. Instead of reading books for weeks and forgetting most of them a year later, I think it's much better to try to do things, see what knowledge gaps prevent you from doing them, and only then read.
konstmish.bsky.social
It's a bit hard to say because these results are still quite new, but one of the most recent papers on the topic, arxiv.org/abs/2410.16249, mentions a conjecture on the optimality of its 1/n^{log₂(1+√2)} rate (not for the last iterate, though).
konstmish.bsky.social
Gradient Descent with large stepsizes converges faster than O(1/T), but this was previously shown only for the *best* iterate. Cool to see new results showing we can also get the improvement for the last iterate:
arxiv.org/abs/2411.17668
I am still waiting to see a version with adaptive stepsizes though 👀
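For context, the kind of large-stepsize schedule behind these rates can be sketched as follows. This is my recollection of the silver stepsize schedule, h_t = 1 + ρ^{ν₂(t)−1} with ρ = 1 + √2 the silver ratio and ν₂(t) the 2-adic valuation, which, if I remember correctly, is what gives the 1/n^{log₂(1+√2)} rate mentioned above; check the papers for the exact definition.

```python
import numpy as np

# Sketch of GD with a fractal large-stepsize schedule, in the spirit of the
# silver stepsize schedule. Assumed formula (check the papers for the exact
# definition): h_t = 1 + rho**(v2(t) - 1), rho = 1 + sqrt(2), t = 1..n,
# with n = 2**k - 1 and v2(t) the 2-adic valuation of t.

rho = 1.0 + np.sqrt(2.0)

def v2(t: int) -> int:
    """Largest k such that 2**k divides t (t >= 1)."""
    return (t & -t).bit_length() - 1

def schedule(n: int):
    return [1.0 + rho ** (v2(t) - 1) for t in range(1, n + 1)]

# Toy smooth convex problem: a quadratic with eigenvalues in (0, L].
A = np.diag([0.01, 0.1, 1.0])
L = 1.0

x = np.ones(3)
for h in schedule(127):          # n = 2**7 - 1; the actual steps are h_t / L
    x = x - (h / L) * (A @ x)

print(0.5 * x @ A @ x)           # f(x_n); compare against a constant-stepsize 1/L baseline
```

Note that most steps are small (√2/L or 2/L) while occasional steps are much longer than the classical 1/L limit; the long steps overshoot on large eigenvalues but are compensated by the surrounding short ones.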