Dominik Schnaus
@schnaus.bsky.social
110 followers 400 following 4 posts
PhD student @ TUM with Daniel Cremers
Reposted by Dominik Schnaus
linushn.bsky.social
The code for our #CVPR2025 paper, PRaDA: Projective Radial Distortion Averaging, is now out!

Turns out distortion calibration from multiview 2D correspondences can be fully decoupled from 3D reconstruction, greatly simplifying the problem

arxiv.org/abs/2504.16499
github.com/DaniilSinits...
schnaus.bsky.social
4/4

It’s a (Blind) Match! Towards Vision–Language Correspondence without Parallel Data

@schnaus.bsky.social @neekans.bsky.social @dcremers.bsky.social

📝 Paper: arxiv.org/pdf/2503.241...
🌐 Project page: dominik-schnaus.github.io/itsamatch/
💻 Code: github.com/dominik-schn...
schnaus.bsky.social
3/4

✅ This enables unsupervised matching — finding vision-language correspondences without any paired data.

🤯 As a proof of concept, we build an unsupervised image classifier that assigns labels without seeing a single image-text pair.
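As a hedged illustration of that proof of concept (the variable names and pipeline below are assumptions for clarity, not the paper's exact recipe): once image-cluster prototypes have been matched to class-name text embeddings using distances alone, a new image can be labeled by its nearest prototype and the class name that prototype was matched to.

```python
# Illustrative sketch (assumed pipeline, not necessarily the paper's exact recipe):
# `matching` pairs image-cluster prototypes with class-name text embeddings, so a
# new image inherits the class name of its nearest prototype, with no image-text
# pair ever observed during training.
import numpy as np

def classify(image_emb, prototypes, matching, class_names):
    """image_emb: (d,) vision embedding of one image;
    prototypes: (N, d) image-cluster centers;
    matching: length-N array, matching[i] = index of the class matched to prototype i;
    class_names: list of N class names."""
    dists = np.linalg.norm(prototypes - image_emb, axis=1)
    cluster = int(np.argmin(dists))        # nearest image-cluster prototype
    return class_names[matching[cluster]]  # label inherited through the matching
```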
schnaus.bsky.social
2/4

🔍 As models and datasets scale, the pairwise distance structures of vision and language embeddings become increasingly similar (Platonic Representation Hypothesis).

💡 We cast the matching task as a Quadratic Assignment Problem (QAP) and propose a new heuristic solver.
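A minimal sketch of this QAP formulation, assuming cosine distances and using SciPy's off-the-shelf FAQ heuristic as a stand-in for the paper's own solver; the embedding arrays are random placeholders.

```python
# Minimal sketch: match N vision embeddings to N text embeddings using only
# their intra-modal pairwise distances, posed as a Quadratic Assignment Problem.
# SciPy's FAQ heuristic is a stand-in for the paper's solver; V and T are
# random placeholders for real embeddings.
import numpy as np
from scipy.optimize import quadratic_assignment
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
V = rng.normal(size=(50, 512))   # vision embeddings, one per image   (N x d_v)
T = rng.normal(size=(50, 768))   # text embeddings, one per caption   (N x d_t)

D_v = squareform(pdist(V, metric="cosine"))  # pairwise distances in vision space
D_t = squareform(pdist(T, metric="cosine"))  # pairwise distances in language space

# Minimizing ||D_v - P D_t P^T||_F over permutations P is equivalent to
# maximizing trace(D_v P D_t P^T), hence maximize=True.
res = quadratic_assignment(D_v, D_t, method="faq", options={"maximize": True})
matching = res.col_ind  # matching[i] = index of the text matched to image i
```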
schnaus.bsky.social
Can we match vision and language representations without any supervision or paired data?

Surprisingly, yes! 

Our #CVPR2025 paper with @neekans.bsky.social and @dcremers.bsky.social shows that the pairwise distances in both modalities are often enough to find correspondences.

⬇️ 1/4
Reposted by Dominik Schnaus
fwimbauer.bsky.social
Can you train a model for pose estimation directly on casual videos without supervision?

Turns out you can!

In our #CVPR2025 paper AnyCam, we directly train on YouTube videos and achieve SOTA results by using an uncertainty-based flow loss and monocular priors!

⬇️
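For intuition, here is one common way an uncertainty-weighted flow loss is implemented (an aleatoric-style weighting with placeholder tensor names; an assumed formulation, not necessarily AnyCam's exact loss).

```python
# Assumed formulation (not necessarily AnyCam's exact loss): a per-pixel predicted
# uncertainty attenuates the flow residual, and a log-variance term keeps the
# network from predicting infinite uncertainty everywhere.
import torch

def uncertainty_flow_loss(flow_pred, flow_ref, log_sigma):
    """flow_pred, flow_ref: (B, 2, H, W) flow fields;
    log_sigma: (B, 1, H, W) per-pixel log uncertainty predicted by the network."""
    residual = (flow_pred - flow_ref).abs().sum(dim=1, keepdim=True)  # L1 flow error
    return (residual * torch.exp(-log_sigma) + log_sigma).mean()
```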
Reposted by Dominik Schnaus
fwimbauer.bsky.social
Check out our recent #CVPR2025 paper AnyCam, a fast method for pose estimation in casual videos!

1️⃣ Can be directly trained on casual videos without the need for 3D annotation.
2️⃣ Built around a feed-forward transformer with a lightweight refinement step.

Code and more info: ⏩ fwmb.github.io/anycam/
Reposted by Dominik Schnaus
dcremers.bsky.social
We are thrilled to have 12 papers accepted to #CVPR2025. Thanks to all our students and collaborators for this great achievement!
For more details check out cvg.cit.tum.de
Reposted by Dominik Schnaus
dcremers.bsky.social
Indeed - everyone had a blast - thank you all for the great talks, discussions, and skiing/snowboarding!
andreasgeiger.bsky.social
This week we had our winter retreat jointly with Daniel Cremers' group in Montafon, Austria. 46 talks, 100 km of slopes, and night sledding with a few of us occasionally lost and found. It was fun!