"Dataset Distillation for Pre-Trained Self-Supervised Vision Models," set to appear at #NeurIPS 2025!
We learn 1 image per class to train linear heads for pre-trained models.
linear-gradient-matching.github.io
More in thread 🔽
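Not code from the paper; just a minimal sketch of what gradient matching for a linear probe could look like, assuming a frozen self-supervised backbone and a pixel-space optimizer over the synthetic images. All names here (distill_step, head_weight, etc.) are hypothetical.

```python
import torch
import torch.nn.functional as F

def distill_step(encoder, syn_images, syn_labels,
                 real_feats, real_labels, head_weight, img_opt):
    """One gradient-matching update of the synthetic images (sketch)."""
    # Gradient of a linear head's loss on precomputed real features.
    real_logits = real_feats @ head_weight.t()
    g_real = torch.autograd.grad(
        F.cross_entropy(real_logits, real_labels), head_weight)[0].detach()

    # Gradient of the same head's loss on the synthetic images; the
    # backbone is assumed frozen (requires_grad=False on its params).
    syn_logits = encoder(syn_images) @ head_weight.t()
    g_syn = torch.autograd.grad(
        F.cross_entropy(syn_logits, syn_labels), head_weight,
        create_graph=True)[0]  # keep the graph so the pixels get gradients

    # Match the two head gradients (cosine distance is one common choice
    # in the distillation literature) and update the synthetic pixels.
    loss = 1.0 - F.cosine_similarity(g_syn.flatten(), g_real.flatten(), dim=0)
    img_opt.zero_grad()
    loss.backward()
    img_opt.step()
    return loss.item()

# Illustrative setup: one learnable image per class, as in the post.
# num_classes, feat_dim, and the encoder are placeholders.
# syn_images = torch.randn(num_classes, 3, 224, 224, requires_grad=True)
# head_weight = torch.randn(num_classes, feat_dim, requires_grad=True)
# img_opt = torch.optim.Adam([syn_images], lr=0.1)
```

Many gradient-matching methods also resample the head each step so the images don't overfit one initialization; whether this paper does is not stated in the post.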
"Dataset Distillation for Pre-Trained Self-Supervised Vision Models," set to appear at #NeurIPS 2025!
We learn 1 image per class to train linear heads for pre-trained models.
linear-gradient-matching.github.io
More in thread 🔽
We share findings on the iterative nature of reconstruction, the roles of cross- and self-attention, and the emergence of correspondences across the network. [1/8] ⬇️
Michal Stary, Julien Gaubil, Ayush Tewari, Vincent Sitzmann
arxiv.org/abs/2510.24907
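The thread doesn't spell out the mechanics, so as a generic refresher (not the paper's architecture; dims and names below are illustrative): self-attention mixes tokens within one set, while cross-attention lets those tokens query a second set, and it is typically in the cross-attention weights that correspondences can be read off.

```python
import torch.nn as nn

class SelfCrossBlock(nn.Module):
    """Generic transformer block with self- then cross-attention."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x, context):
        # Self-attention: tokens of x attend to each other.
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        # Cross-attention: queries from x, keys/values from the context
        # set (e.g. tokens of another view or of the input image).
        h = self.norm2(x)
        x = x + self.cross_attn(h, context, context, need_weights=False)[0]
        return x
```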
So while I still wish this case would’ve gone to trial, and believe the amount awarded per work should have been EVEN higher, a $1.5 billion settlement AND the destruction of the infringing datasets are still a fantastic start! Congratulations to all those involved; these things are not easy!