Fredrik K. Gustafsson
@fregu856.bsky.social
Postdoc at IBME in Oxford. Machine learning for healthcare.
https://www.fregu856.com/
My year of reading in 2025: www.fregu856.com/post/year_of...

I read 113 papers in 2025, complete list: github.com/fregu856/pap...

Top 25 papers that I found particularly interesting and/or well-written (in alphabetical order):
January 18, 2026 at 12:09 PM
New preprint, work led together with Erik Thiringer:

Scanner-Induced Domain Shifts Undermine the Robustness of Pathology Foundation Models.

arxiv.org/abs/2601.04163
January 12, 2026 at 12:59 PM
Isn't there a lot of noise in these decisions, just like for conference papers etc.?
October 31, 2025 at 4:04 PM
Congratulations!
October 10, 2025 at 4:31 AM
I just reached 500 read papers on the GitHub repository I use to track and organize my reading:
github.com/fregu856/pap...
August 7, 2025 at 8:29 AM
Very happy to have joined the group of David Clifton at IBME in Oxford as a postdoc, to work on machine learning for healthcare!

The group is also recruiting multiple new postdocs, please apply before August 18:
eng.ox.ac.uk/jobs/job-det...
July 21, 2025 at 8:00 AM
The waiting area is also quite dull and gets really crowded, probably the worst part of my entire trip.
July 16, 2025 at 10:08 AM
I think it was already apparent in the batch of papers I was given to rate: basically no pathology-related papers, for example. ICML was definitely better in this regard.
July 4, 2025 at 4:16 AM
Not super happy with my assigned NeurIPS papers this year; I found them less interesting/relevant than I usually do. But oh well, still quite solid papers overall, and I do think it's good to be forced to read papers from slightly different areas sometimes.
July 3, 2025 at 11:48 AM
Our paper "Taming Diffusion Models for Image Restoration: A Review" has now been published, work led by Ziwei Luo:

royalsocietypublishing.org/doi/10.1098/...
June 25, 2025 at 7:02 AM
New preprint, work led by Ziwei Luo:

Forward-only Diffusion Probabilistic Models.

arxiv.org/abs/2505.16733
github.com/Algolzw/FoD
algolzw.github.io/fod/
May 23, 2025 at 11:35 AM
Looks very useful, thanks for sharing!
April 2, 2025 at 6:57 AM
Nice, saw this on arxiv and thought it seemed interesting, might read this as well, thanks!
March 24, 2025 at 7:42 AM
Nice, thanks!

I actually wrote "The one proper method change that seems to have the biggest effect is probably adding the KoLeo regularization loss term?" in my notes, so would be nice to read more about how that works.
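
For my own notes, a minimal PyTorch sketch of how I understand the KoLeo term from the DINOv2 paper: it's the Kozachenko-Leonenko differential entropy estimator applied to the L2-normalized embeddings in a batch, pushing each embedding away from its nearest neighbor (the function name and the eps constant are my own, not from the paper):

```python
import torch
import torch.nn.functional as F

def koleo_loss(x: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # x: (n, d) batch of embeddings; DINOv2 L2-normalizes them first.
    x = F.normalize(x, dim=-1)
    # Pairwise distances, with the diagonal masked so each point
    # matches its nearest *other* neighbor.
    dists = torch.cdist(x, x)
    dists.fill_diagonal_(float("inf"))
    nn_dists = dists.min(dim=1).values
    # Minimizing -mean(log d_nn) maximizes the entropy estimate,
    # i.e., spreads the embeddings out uniformly on the sphere.
    return -torch.log(nn_dists + eps).mean()
```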
March 23, 2025 at 3:46 PM
The main thing definitely seems to be that they scale iBOT from ViT-L/16 trained on ImageNet-22k (14 million images) to ViT-g/14 trained on their LVD-142M dataset (142 million images).

Their model distillation approach is also interesting, distilling their ViT-g down to ViT-L and smaller models.
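
Just to illustrate the general idea with a minimal PyTorch sketch (this is standard temperature-scaled logit distillation, not the exact DINOv2 recipe, which instead reuses their self-supervised objective with the frozen ViT-g as teacher):

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, images, optimizer, tau: float = 0.1):
    # The frozen large teacher (e.g., ViT-g) provides soft targets.
    with torch.no_grad():
        t_logits = teacher(images)
    s_logits = student(images)
    # KL divergence between temperature-softened distributions,
    # scaled by tau^2 to keep gradient magnitudes comparable.
    loss = F.kl_div(
        F.log_softmax(s_logits / tau, dim=-1),
        F.softmax(t_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```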
March 23, 2025 at 3:21 PM
"We revisit existing discriminative self-supervised approaches [...] such as iBOT, and we reconsider some of their design choices under the lens of a larger dataset. Most of our technical contributions are tailored toward stabilizing and accelerating [...] when scaling in model and data sizes"
March 23, 2025 at 3:21 PM
iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022)

DINOv2: Learning Robust Visual Features without Supervision (TMLR, 2024)

DINOv2 doesn't really differ much methodologically from iBOT; the paper gives a good summary of what they do:
March 23, 2025 at 3:21 PM
I've been trying to properly understand how/why DINOv2 works, and I think this is a good sequence of papers to read for that:

(BYOL) Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (NeurIPS 2020)

(DINO) Emerging Properties in Self-Supervised Vision Transformers (ICCV 2021)
March 23, 2025 at 3:21 PM
I really didn't like the ICML review template though; why do we have to make things overly complicated? Please just give me some variation of Summary, Strengths, Weaknesses, Questions, Detailed comments and Justification of rating!
March 14, 2025 at 1:20 PM
Finished my 5 #ICML reviews, and realized that I have now passed 100 reviewed papers in total during my career. Actually feels like a pretty cool milestone!
March 14, 2025 at 1:12 PM
First time I'm reviewing for MIDL. Quite interesting papers overall, and I like the review template.

But 8 pages in this template seems too short. Not enough space to actually do things properly (e.g., explain the method in detail ~and~ have an extensive experimental evaluation).
February 22, 2025 at 9:41 AM
My year of reading in 2024: www.fregu856.com/post/year_of...

I read 99 papers in 2024. Complete list: github.com/fregu856/pap...

Top 15 favorite papers that I found particularly interesting and/or well-written (in alphabetical order):
January 4, 2025 at 7:04 AM
Recent preprint: Evaluating Deep Regression Models for WSI-Based Gene-Expression Prediction.

arxiv.org/abs/2410.00945
November 30, 2024 at 6:28 AM
Recent preprint: Evaluating Computational Pathology Foundation Models for Prostate Cancer Grading under Distribution Shifts

arxiv.org/abs/2410.06723
November 30, 2024 at 6:27 AM