Christoph Reich
@christophreich.bsky.social
110 followers 250 following 13 posts
@ellis.eu Ph.D. Student @CVG (@dcremers.bsky.social), @visinf.bsky.social & @oxford-vgg.bsky.social | Ph.D. Scholar @zuseschooleliza.bsky.social | M.Sc. & B.Sc. @tuda.bsky.social | Prev. @neclabsamerica.bsky.social https://christophreich1996.github.io
Reposted by Christoph Reich
visinf.bsky.social
Some impressions from our VISINF summer retreat at Lizumer Hütte in the Tirol Alps — including a hike up Geier Mountain and new research ideas at 2,857 m! 🇦🇹🏔️
christophreich.bsky.social
Check out our blog post about SceneDINO 🦖
For more details, check out our project page, 🤗 demo, and the #ICCV2025 paper 🚀

🌍Project page: visinf.github.io/scenedino/
🤗Demo: visinf.github.io/scenedino/
📄Paper: arxiv.org/abs/2507.06230
@jev-aleks.bsky.social
Reposted by Christoph Reich
si-cv-graphics.bsky.social
Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion
Aleksandar Jevtić, Christoph Reich, Felix Wimbauer ... Daniel Cremers
arxiv.org/abs/2507.06230
Trending on www.scholar-inbox.com
Reposted by Christoph Reich
linushn.bsky.social
The code for our #CVPR2025 paper, PRaDA: Projective Radial Distortion Averaging, is now out!

Turns out distortion calibration from multi-view 2D correspondences can be fully decoupled from 3D reconstruction, greatly simplifying the problem.

arxiv.org/abs/2504.16499
github.com/DaniilSinits...
christophreich.bsky.social
✅ SceneDINO offers refined, high-resolution, and multi-view consistent (rendered) 2D features.
christophreich.bsky.social
✅ SceneDINO outperforms our unsupervised baseline (S4C + STEGO) in unsupervised SSC accuracy.
✅ Linear probing our feature field leads to an SSC accuracy on par with 2D supervised S4C.
christophreich.bsky.social
⚗️Distilling and clustering SceneDINO's feature field in 3D results in unsupervised semantic scene completion predictions.
christophreich.bsky.social
🏋SceneDINO is trained to estimate an expressive 3D feature field using multi-view self-supervision and 2D DINO features.
christophreich.bsky.social
🚀 SceneDINO is unsupervised and infers 3D geometry and features from a single image in a feed-forward manner. Distilling and clustering SceneDINO's 3D feature field leads to unsupervised semantic scene completion predictions.
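The distill-and-cluster step described above can be sketched roughly as follows: the feed-forward network predicts a feature vector per 3D location, and clustering those features yields unsupervised semantic pseudo-labels. This is an illustrative sketch only; the array shapes, the `kmeans` helper, and the cluster count are assumptions, not SceneDINO's actual code.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means over per-voxel feature vectors (illustrative)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each voxel to its nearest cluster center.
        dists = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster is empty.
        for c in range(k):
            if (labels == c).any():
                centers[c] = features[labels == c].mean(axis=0)
    return labels

# Stand-in for a predicted 3D feature field: 1,000 voxels, each with a
# 64-dimensional (DINO-distilled) feature vector.
field = np.random.default_rng(1).normal(size=(1000, 64))
pseudo_semantics = kmeans(field, k=8)  # one pseudo-class per voxel
```

Because the clusters come purely from feature similarity, no semantic annotations are needed; mapping clusters to class names is a separate evaluation step.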
Reposted by Christoph Reich
arxiv-cs-cv.bsky.social
Aleksandar Jevtić, Christoph Reich, Felix Wimbauer, Oliver Hahn, Christian Rupprecht, Stefan Roth, Daniel Cremers
Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion
https://arxiv.org/abs/2507.06230
Reposted by Christoph Reich
robinhesse.bsky.social
Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today—we welcome high-quality work!
🗓️ Submissions open until June 26 AoE.
📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track.
#ICCV2025 @iccv.bsky.social
Reposted by Christoph Reich
visinf.bsky.social
We had a great time at #CVPR2025 in Nashville!
Reposted by Christoph Reich
visinf.bsky.social
Scene-Centric Unsupervised Panoptic Segmentation

by @olvrhhn.bsky.social , @christophreich.bsky.social , @neekans.bsky.social , @dcremers.bsky.social, Christian Rupprecht, and @stefanroth.bsky.social

Sunday, 8:30 AM, ExHall D, Poster 330
Project Page: visinf.github.io/cups
Reposted by Christoph Reich
schnaus.bsky.social
Can we match vision and language representations without any supervision or paired data?

Surprisingly, yes! 

Our #CVPR2025 paper with @neekans.bsky.social and @dcremers.bsky.social shows that the pairwise distances in both modalities are often enough to find correspondences.

⬇️ 1/4
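The core idea of the thread above, that pairwise distances alone can reveal correspondences between two embedding spaces, can be shown with a toy brute-force matcher. This is not the paper's solver; the helper names and the tiny problem size are assumptions for illustration.

```python
import numpy as np
from itertools import permutations

def pdist_matrix(x):
    """All pairwise Euclidean distances between rows of x."""
    return np.linalg.norm(x[:, None] - x[None], axis=-1)

def match_by_distances(a, b):
    """Brute-force the permutation p that best aligns the two
    pairwise-distance matrices (feasible only for tiny n)."""
    da, db = pdist_matrix(a), pdist_matrix(b)
    best, best_cost = None, np.inf
    for p in permutations(range(len(b))):
        p = list(p)
        cost = np.abs(da - db[np.ix_(p, p)]).sum()
        if cost < best_cost:
            best, best_cost = p, cost
    return best

# "Vision" embeddings and a shuffled, rotated copy standing in for
# "language" embeddings: coordinates differ, pairwise distances do not.
rng = np.random.default_rng(0)
vision = rng.normal(size=(5, 3))
shuffle = [3, 0, 4, 1, 2]
rotation, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal
language = vision[shuffle] @ rotation

p = match_by_distances(vision, language)  # p[i] indexes language
```

For realistic set sizes the brute-force search is intractable, which is why such problems are usually posed as quadratic assignment and solved approximately.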
Reposted by Christoph Reich
fwimbauer.bsky.social
Can you train a model for pose estimation directly on casual videos without supervision?

Turns out you can!

In our #CVPR2025 paper AnyCam, we directly train on YouTube videos and achieve SOTA results by using an uncertainty-based flow loss and monocular priors!

⬇️
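One common way to realize an uncertainty-based flow loss like the one mentioned above is a Laplacian negative log-likelihood, where a predicted per-pixel uncertainty downweights residuals at unreliable pixels (e.g. moving objects). Whether AnyCam uses exactly this form is an assumption; the function below is a generic sketch, not the paper's code.

```python
import numpy as np

def uncertainty_flow_loss(flow_pred, flow_induced, log_sigma):
    """Mean of |residual| / sigma + log sigma over all pixels.

    flow_pred:    (H, W, 2) flow from an off-the-shelf estimator
    flow_induced: (H, W, 2) flow induced by predicted pose and depth
    log_sigma:    (H, W)    predicted per-pixel log-uncertainty
    """
    residual = np.abs(flow_pred - flow_induced).sum(axis=-1)  # (H, W)
    return float(np.mean(residual * np.exp(-log_sigma) + log_sigma))

# A single outlier pixel (say, a dynamic object) with a large residual:
flow_a = np.zeros((4, 4, 2))
flow_b = np.zeros((4, 4, 2))
flow_b[0, 0] = 10.0
confident = uncertainty_flow_loss(flow_a, flow_b, np.zeros((4, 4)))
hedged = np.zeros((4, 4))
hedged[0, 0] = 3.0  # high predicted uncertainty at the outlier
downweighted = uncertainty_flow_loss(flow_a, flow_b, hedged)
```

Raising the uncertainty at the outlier lowers the total loss, so the network can "explain away" dynamic pixels instead of letting them corrupt the pose gradient; the `log sigma` term prevents it from inflating uncertainty everywhere.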
Reposted by Christoph Reich
fwimbauer.bsky.social
Check out our recent #CVPR2025 paper AnyCam, a fast method for pose estimation in casual videos!

1️⃣ Can be directly trained on casual videos without the need for 3D annotation.
2️⃣ Based around a feed-forward transformer and light-weight refinement.

Code and more info: ⏩ fwmb.github.io/anycam/
christophreich.bsky.social
Check out our recent #CVPR2025 #highlight paper on unsupervised panoptic segmentation🚀
🌍 visinf.github.io/cups/
visinf.bsky.social
📢 #CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation 🔥

We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!

🌎 visinf.github.io/cups
christophreich.bsky.social
Check out the #MCML blog post on our recent #CVPR2025 #highlight paper🔥
munichcenterml.bsky.social
MCML Blog: Robots & self-driving cars rely on scene understanding, but AI models for understanding these scenes need costly human annotations. Daniel Cremers & his team introduce 🥤🥤 CUPS: a scene-centric unsupervised panoptic segmentation approach to reduce this dependency. 🔗 mcml.ai/news/2025-04...
christophreich.bsky.social
Nice one! Have you tried instance segmentation?
christophreich.bsky.social
Check out the recent CVG papers at #CVPR2025, including our (@olvrhhn.bsky.social, @neekans.bsky.social, @dcremers.bsky.social, Christian Rupprecht, and @stefanroth.bsky.social) work on unsupervised panoptic segmentation. The paper will soon be available on arXiv. 🚀
dcremers.bsky.social
We are thrilled to have 12 papers accepted to #CVPR2025. Thanks to all our students and collaborators for this great achievement!
For more details check out cvg.cit.tum.de
Reposted by Christoph Reich
visinf.bsky.social
🏔️⛷️ Looking back on a fantastic week full of talks, research discussions, and skiing in the Austrian mountains!