Moritz Haas
@mohaas.bsky.social
250 followers 150 following 11 posts
IMPRS-IS PhD student @ University of Tübingen with Ulrike von Luxburg and Bedartha Goswami. Mostly thinking about deep learning theory. Also interested in ML for climate science. mohawastaken.github.io
mohaas.bsky.social
This is joint work with wonderful collaborators @leenacvankadara.bsky.social, @cevherlions.bsky.social, and Jin Xu during our time at Amazon.

🧵 10/10
mohaas.bsky.social

How do we achieve correct scaling with arbitrary gradient-based perturbation rules? 🤔

✨In 𝝁P, scale perturbations like updates in every layer.✨

💡Gradients and incoming activations generally scale LLN-like (like law-of-large-numbers averages), as they are correlated.

➡️ Perturbations and updates have similar scaling properties.
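
A minimal sketch of this rule (my illustration, not the paper's code): each layer's SAM perturbation is normalized per layer and scaled by the same width-dependent factor as that layer's 𝝁P learning rate. The `lr_scales` dict and the per-layer normalization are illustrative assumptions; see the paper for the exact 𝝁P² scaling.

```python
import torch

def sam_step_layerwise(model, loss_fn, batch, base_opt, rho=0.05, lr_scales=None):
    """One SAM step with layerwise perturbation scaling (illustrative sketch).

    lr_scales: hypothetical dict mapping parameter names to the same
    width-dependent factors used for their muP learning rates, so that
    rho_l = rho * lr_scales[name] scales like that layer's update.
    """
    x, y = batch

    # 1) gradients at the current weights
    base_opt.zero_grad()
    loss_fn(model(x), y).backward()

    # 2) layerwise ascent step: e_l = rho_l * g_l / (||g_l|| + eps)
    perturbations = []
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        scale = 1.0 if lr_scales is None else lr_scales.get(name, 1.0)
        e = rho * scale * p.grad / (p.grad.norm() + 1e-12)
        p.data.add_(e)
        perturbations.append((p, e))

    # 3) gradients at the perturbed weights
    base_opt.zero_grad()
    loss_fn(model(x), y).backward()

    # 4) undo the perturbation, then apply the actual update
    for p, e in perturbations:
        p.data.sub_(e)
    base_opt.step()
    base_opt.zero_grad()
```

Standard SAM instead uses one global gradient norm and a single width-independent ρ for all layers; the point of the thread is that layerwise scaling of the perturbation is what makes its effect width-independent.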

🧵 9/10
mohaas.bsky.social

In experiments across MLPs and ResNets on CIFAR10 and ViTs on ImageNet1K, we show that 𝝁P² indeed jointly transfers optimal learning rate and perturbation radius across model scales and can improve training stability and generalization.

🧵 8/10
mohaas.bsky.social
... there exists a ✨unique✨ parameterization with layerwise perturbation scaling that fulfills all of our constraints:

(1) stability,
(2) feature learning in all layers,
(3) effective perturbations in all layers.

We call it the ✨Maximal Update and Perturbation Parameterization (𝝁P²)✨.
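
Read informally in width-scaling notation (my paraphrase with width n; the paper's formal definitions may differ), the three constraints say:

```latex
% Informal paraphrase of the three constraints; n denotes the network width.
\begin{align*}
\text{(1) stability:} \quad & \text{(pre-)activations and outputs remain } \Theta(1) \text{ as } n \to \infty,\\
\text{(2) feature learning:} \quad & \text{every layer's update changes its features by } \Theta(1),\\
\text{(3) effective perturbations:} \quad & \text{every layer's SAM perturbation changes the output by } \Theta(1).
\end{align*}
```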

🧵 7/10
mohaas.bsky.social
Hence, we study arbitrary layerwise scalings of learning rates, initialization variances, and perturbation radii.

For us, an ideal parametrization should fulfill the following: updates and perturbations of all weights have a non-vanishing and non-exploding effect on the output function. 💡
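
For concreteness, such layerwise scaling rules are often written with per-layer width exponents (notation assumed here, in the style of abc-parametrizations; the paper's exact notation may differ):

```latex
% Assumed abc-style notation with an extra perturbation exponent per layer l; n is the width.
\[
W_l = n^{-a_l} w_l, \qquad
w_l \sim \mathcal{N}\!\left(0,\; n^{-2 b_l}\right), \qquad
\eta_l = \eta\, n^{-c_l}, \qquad
\rho_l = \rho\, n^{-d_l}.
\]
```

The criterion above then asks that, for every layer l, both the update driven by η_l and the SAM perturbation of radius ρ_l change the output function by Θ(1) as n → ∞.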

We show that ...

🧵 6/10
mohaas.bsky.social

... we show that 𝝁P is not able to consistently improve generalization or to transfer SAM's perturbation radius, because it effectively perturbs only the last layer. ❌

💡So we need to allow layerwise perturbation scaling!

🧵 5/10
mohaas.bsky.social
The maximal update parametrization (𝝁P) by arxiv.org/abs/2011.14522 is a layerwise scaling rule of learning rates and initialization variances that yields width-independent dynamics and learning rate transfer for SGD and Adam in common architectures. But for standard SAM, ...
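
As a reference point, here is a minimal sketch of one common 𝝁P prescription for Adam on an MLP (my illustration of the general recipe, not code from either paper; exact per-layer rules are in the linked references): initialization variance ∝ 1/fan_in, input-layer learning rate kept constant, hidden and output learning rates shrunk by the width multiplier, and a zero-initialized readout.

```python
import torch
import torch.nn as nn

def mup_adam(model: nn.Sequential, base_lr: float, base_width: int, width: int):
    """Build per-layer Adam groups for an MLP given as
    nn.Sequential(nn.Linear(d_in, width), ..., nn.Linear(width, d_out)).
    Illustrative muP-style sketch; biases are lumped with their layer for brevity."""
    m = width / base_width  # width multiplier
    layers = [mod for mod in model if isinstance(mod, nn.Linear)]
    groups = []
    for i, layer in enumerate(layers):
        is_input, is_output = (i == 0), (i == len(layers) - 1)
        fan_in = layer.weight.shape[1]
        # init: std 1/sqrt(fan_in) for input/hidden weights, zero for the readout
        if is_output:
            nn.init.zeros_(layer.weight)
        else:
            nn.init.normal_(layer.weight, std=fan_in ** -0.5)
        # Adam LR: Theta(1) for the input layer, shrunk by ~1/m for hidden/output layers
        lr = base_lr if is_input else base_lr / m
        groups.append({"params": list(layer.parameters()), "lr": lr})
    return torch.optim.Adam(groups)
```

With base_width set to the width at which base_lr was tuned, the learning rate then transfers across widths; 𝝁P² additionally scales each layer's SAM perturbation with the same width-dependent factor as its update (post 9/10 above).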

🧵 4/10
Feature Learning in Infinite-Width Neural Networks
arxiv.org
mohaas.bsky.social
SAM and model scale are widely observed to improve generalization across datasets and architectures. But can we understand how to optimally scale in a principled way? 📈🤔

🧵 3/10
mohaas.bsky.social
Short thread on effective SAM scaling here.
arxiv.org/pdf/2411.00075

A thread on our Mamba scaling is coming soon from
🔜 @leenacvankadara.bsky.social

🧵 2/10
mohaas.bsky.social
Stable model scaling with width-independent dynamics?

Thrilled to present 2 papers at #NeurIPS 🎉 that study width-scaling in Sharpness Aware Minimization (SAM) (Th 16:30, #2104) and in Mamba (Fr 11, #7110). Our scaling rules stabilize training and transfer optimal hyperparams across scales.

🧵 1/10
mohaas.bsky.social
I'd love to be added :)