https://www.fregu856.com/
I read 113 papers in 2025, complete list: github.com/fregu856/pap...
Top 25 papers that I found particularly interesting and/or well written (in alphabetical order):
Scanner-Induced Domain Shifts Undermine the Robustness of Pathology Foundation Models.
arxiv.org/abs/2601.04163
github.com/fregu856/pap...
The group is also recruiting multiple new postdocs, please apply before August 18:
eng.ox.ac.uk/jobs/job-det...
royalsocietypublishing.org/doi/10.1098/...
Forward-only Diffusion Probabilistic Models.
arxiv.org/abs/2505.16733
github.com/Algolzw/FoD
algolzw.github.io/fod/
I actually wrote "The one proper method change that seems to have the biggest effect is probably adding the KoLeo regularization loss term?" in my notes, so it would be nice to read more about how that works.
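For reference, the KoLeo term is a Kozachenko-Leonenko differential-entropy regularizer: it penalizes features whose nearest neighbor in the batch is very close, which pushes the (normalized) features toward a uniform spread on the hypersphere. A minimal numpy sketch of that idea — a plain O(n²) version for illustration, not DINOv2's actual implementation:

```python
import numpy as np

def koleo_loss(features: np.ndarray, eps: float = 1e-8) -> float:
    """Kozachenko-Leonenko regularizer: -(1/n) * sum_i log(d_i), where
    d_i is the distance from feature i to its nearest neighbor in the
    batch. Small nearest-neighbor distances (collapsed features) give a
    large loss; well-spread features give a small loss."""
    # L2-normalize the features first (KoLeo is applied on the sphere).
    x = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    # All pairwise distances; mask the diagonal so a point is not its own neighbor.
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    nn_dists = dists.min(axis=1)
    return float(-np.mean(np.log(nn_dists + eps)))
```

Sanity check on the intuition: a batch of nearly identical ("collapsed") features should be penalized much more heavily than a batch of random, spread-out features.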
Their model distillation approach is also interesting, distilling their ViT-g down to ViT-L and smaller models.
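At its core, that kind of distillation trains the smaller student to reproduce the frozen teacher's features. As a hedged illustration only (DINOv2 actually reuses its self-distillation objectives with the large model as a frozen teacher), one common feature-matching loss is a cosine-similarity term, assuming the features are already projected to a common dimension:

```python
import numpy as np

def feature_distillation_loss(student_feats: np.ndarray,
                              teacher_feats: np.ndarray,
                              eps: float = 1e-8) -> float:
    """Generic teacher-student feature matching: 1 - cosine similarity,
    averaged over the batch. Zero when the student exactly matches the
    (direction of the) teacher's features."""
    s = student_feats / (np.linalg.norm(student_feats, axis=1, keepdims=True) + eps)
    t = teacher_feats / (np.linalg.norm(teacher_feats, axis=1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(s * t, axis=1)))
```

The teacher's weights stay frozen; only the student receives gradients from this loss during distillation.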
DINOv2: Learning Robust Visual Features without Supervision (TMLR, 2024)
DINOv2 doesn't really add much methodologically compared to iBOT; they give a good summary of what they do:
(BYOL) Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (NeurIPS 2020)
(DINO) Emerging Properties in Self-Supervised Vision Transformers (ICCV 2021)
But 8 pages in this template seems too short. Not enough space to actually do things properly (e.g., explain the method in detail ~and~ have an extensive experimental evaluation).
I read 99 papers in 2024. Complete list: github.com/fregu856/pap...
Top 15 favorite papers that I found particularly interesting and/or well-written (in alphabetical order):
arxiv.org/abs/2410.00945
arxiv.org/abs/2410.06723