Olaf Dünkel
@oduenkel.bsky.social
ELLIS PhD @ MPI & Oxford - Generative Models for Vision https://odunkel.github.io/
oduenkel.bsky.social
You read this? You’ll likely read the linked post too.
You like it? Your followers might see it too.
In other words: Attention here → attention to Yotam’s post.

We explore how transformer attention can be propagated—like PageRank, but for attention.

Fun work with @yotamerel.bsky.social
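The PageRank analogy can be sketched as a damped power iteration over a row-stochastic attention matrix. This is a toy illustration of the idea only, not the method from the linked paper; the matrix, damping factor, and iteration count are all made up:

```python
import numpy as np

def propagate_attention(A, alpha=0.85, iters=50):
    """PageRank-style propagation over an attention matrix.

    A: (n, n) attention matrix whose rows sum to 1.
    alpha: damping factor, as in classic PageRank.
    Returns a stationary importance score per token (sums to 1).
    """
    n = A.shape[0]
    r = np.full(n, 1.0 / n)          # start from a uniform distribution
    for _ in range(iters):
        # tokens pass importance to the tokens they attend to
        r = alpha * (A.T @ r) + (1 - alpha) / n
    return r

# toy 3-token attention matrix (rows sum to 1)
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
scores = propagate_attention(A)      # importance of each token
```

Because each row of `A` sums to 1, the propagated scores remain a probability distribution, just as PageRank scores do over web pages.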
oduenkel.bsky.social
🔗Project page: genintel.github.io/DIY-SC
📄Paper: arxiv.org/pdf/2506.05312
💻Code: github.com/odunkel/DIY-SC
🤗Demo: huggingface.co/spaces/odunk...

Great collaboration with @wimmerthomas.bsky.social, Christian Theobalt, Christian Rupprecht, and @adamkortylewski.bsky.social! [6/6]
oduenkel.bsky.social
DIY-SC features are more 3D-aware and stable than those of DINOv2. [5/6]
oduenkel.bsky.social
The feature refinement improves SPair-71k performance by +18.7p for DINOv2 and by +10.4p for SD+DINOv2 (absolute gains).
DIY-SC sets a new SOTA on SPair-71k (75.1%, a +4p absolute gain over the previous SOTA) and also scales to larger datasets like ImageNet-3D. [4/6]
oduenkel.bsky.social
We improve pseudo-label quality via 3D-aware sampling, chaining with cyclic consistency, and spherical prototype constraints. No manual keypoint annotations required! [3/6]
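The cyclic-consistency idea can be illustrated with a minimal nearest-neighbour round trip: a candidate match A→B is kept only if matching back B→A returns to the starting point. This is a generic sketch of the technique, not DIY-SC's exact pipeline; the function names and feature shapes are invented:

```python
import numpy as np

def nn_match(src, dst):
    """Nearest-neighbour match from src features to dst features (cosine)."""
    src_n = src / np.linalg.norm(src, axis=1, keepdims=True)
    dst_n = dst / np.linalg.norm(dst, axis=1, keepdims=True)
    return (src_n @ dst_n.T).argmax(axis=1)

def cycle_consistent(feat_a, feat_b):
    """Keep indices in A whose A->B->A round trip returns to the start."""
    ab = nn_match(feat_a, feat_b)          # A -> B
    ba = nn_match(feat_b, feat_a)          # B -> A
    idx = np.arange(len(feat_a))
    return idx[ba[ab] == idx]              # survivors of the round trip

# toy check: B is just a permuted copy of A, so every match should survive
rng = np.random.default_rng(0)
feat_a = rng.standard_normal((5, 16))
feat_b = feat_a[np.array([2, 0, 4, 1, 3])]
keep = cycle_consistent(feat_a, feat_b)
```

In a real pipeline the surviving matches would then serve as pseudo-labels, with further filtering (e.g. the 3D-aware sampling and spherical prototypes mentioned above) on top.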
oduenkel.bsky.social
We address the challenge of finding robust correspondences across different object instances. For this, we introduce DIY-SC, a lightweight adapter trained on pseudo-labels generated from SD+DINOv2. [2/6]
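As a rough sketch of what such a lightweight adapter might look like, here is a two-layer MLP that maps frozen backbone features to refined descriptors. The architecture, dimensions, and NumPy forward pass are assumptions for illustration only; the actual DIY-SC adapter is a trained model available in the released code:

```python
import numpy as np

rng = np.random.default_rng(0)

class FeatureAdapter:
    """Hypothetical 2-layer MLP adapter over frozen backbone features.

    In practice this would be a trained module (e.g. in PyTorch); here the
    weights are random, purely to show the shape of the computation.
    """
    def __init__(self, in_dim=768, hidden=512, out_dim=256):
        self.w1 = rng.standard_normal((in_dim, hidden)) * 0.02
        self.w2 = rng.standard_normal((hidden, out_dim)) * 0.02

    def __call__(self, x):
        h = np.maximum(x @ self.w1, 0.0)   # ReLU
        return h @ self.w2                 # refined descriptors

feats = rng.standard_normal((4, 768))      # stand-in for DINOv2 patch features
refined = FeatureAdapter()(feats)          # shape (4, 256)
```

The appeal of this setup is that the heavy backbone stays frozen: only the small adapter is trained on the pseudo-labels, which keeps training cheap.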
oduenkel.bsky.social
Are you using DINOv2 for tasks that require semantic features? DIY-SC might be an alternative!
It refines DINOv2 or SD+DINOv2 features and achieves a new SOTA on the SPair-71k semantic correspondence benchmark without relying on annotated keypoints! [1/6]
genintel.github.io/DIY-SC