Felix Wimbauer
@fwimbauer.bsky.social
70 followers 57 following 4 posts
ELLIS PhD Student in Computer Vision at TUM with Daniel Cremers and Christian Rupprecht (Oxford), fwmb.github.io, prev. Research Intern at Meta GenAI
Pinned
fwimbauer.bsky.social
Check out our recent #CVPR2025 paper AnyCam, a fast method for pose estimation in casual videos!

1️⃣ Can be directly trained on casual videos without the need for 3D annotation.
2️⃣ Built around a feed-forward transformer and lightweight refinement.

Code and more info: ⏩ fwmb.github.io/anycam/
Reposted by Felix Wimbauer
elliottwu.bsky.social
New opening for Assistant Professor in Machine Learning at Cambridge @eng.cam.ac.uk, closing on 22 Sept 2025:
www.jobs.cam.ac.uk/job/49361/
Reposted by Felix Wimbauer
linushn.bsky.social
The code for our #CVPR2025 paper, PRaDA: Projective Radial Distortion Averaging, is now out!

Turns out distortion calibration from multiview 2D correspondences can be fully decoupled from 3D reconstruction, greatly simplifying the problem.

arxiv.org/abs/2504.16499
github.com/DaniilSinits...
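
For background on the post above (an assumption about the setup, not necessarily the paper's exact parameterization): radial distortion is commonly described by the one-parameter division model, which maps a distorted image point to its undistorted counterpart as

$$\mathbf{x}_u = \frac{\mathbf{x}_d}{1 + \lambda \,\lVert \mathbf{x}_d \rVert^2}$$

Decoupling then means a distortion parameter like $\lambda$ can be estimated and averaged across views from 2D correspondences alone, without triangulating any 3D structure.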
Reposted by Felix Wimbauer
schnaus.bsky.social
Can we match vision and language representations without any supervision or paired data?

Surprisingly, yes! 

Our #CVPR2025 paper with @neekans.bsky.social and @dcremers.bsky.social shows that the pairwise distances in both modalities are often enough to find correspondences.

⬇️ 1/4
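
A toy version of the idea above (a minimal sketch under the assumption that both embedding spaces share their pairwise-distance structure; not the paper's algorithm, and all names and data here are invented for illustration): a quadratic assignment over the two distance matrices can recover the pairing without any paired supervision.

```python
import numpy as np
from scipy.optimize import quadratic_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
vision = rng.normal(size=(20, 64))                      # stand-in "vision" embeddings
perm = rng.permutation(20)
text = vision[perm] + 0.01 * rng.normal(size=(20, 64))  # "language" side: same geometry, unknown order

D_v = cdist(vision, vision)                             # pairwise distances per modality
D_t = cdist(text, text)

# Find the permutation under which the two distance matrices agree best,
# i.e. maximize trace(D_v @ P @ D_t @ P.T) over permutation matrices P.
res = quadratic_assignment(D_v, D_t, method="faq", options={"maximize": True})
print("recovered pairs:", (perm[res.col_ind] == np.arange(20)).mean())
```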
Reposted by Felix Wimbauer
mersault.bsky.social
We have a PhD opening in Berlin on "Responsible Data Engineering", with a focus on data preparation for ML/AI systems.

This is a fully-funded position with salary level E13 at the newly founded DEEM Lab, as part of @bifold.berlin.

Details available at deem.berlin#jobs-2225
Reposted by Felix Wimbauer
fwimbauer.bsky.social
Can you train a model for pose estimation directly on casual videos without supervision?

Turns out you can!

In our #CVPR2025 paper AnyCam, we directly train on YouTube videos and achieve SOTA results by using an uncertainty-based flow loss and monocular priors!

⬇️
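
For intuition, an uncertainty-based flow loss typically takes a form like the one below (a generic sketch; the paper's exact formulation may differ). The predicted uncertainty $\sigma(p)$ lets the network discount pixels, e.g. on moving objects, where the flow $\hat{F}$ induced by the estimated pose and depth cannot match the observed flow $F$, while the $\log\sigma$ term prevents the trivial solution of inflating uncertainty everywhere:

$$\mathcal{L}_{\text{flow}} = \sum_{p} \left( \frac{\lVert \hat{F}(p) - F(p) \rVert_1}{\sigma(p)} + \log \sigma(p) \right)$$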
Reposted by Felix Wimbauer
olvrhhn.bsky.social
Happy to be recognized as an Outstanding Reviewer at #CVPR2025 🎊
cvprconference.bsky.social
Behind every great conference is a team of dedicated reviewers. Congratulations to this year’s #CVPR2025 Outstanding Reviewers!

cvpr.thecvf.com/Conferences/...
fwimbauer.bsky.social
While recent methods like MonST3R achieve impressive results, they require datasets with camera pose labels. Such datasets are hard to collect and not available for every domain. AnyCam can be trained directly on any video dataset.

More details: fwmb.github.io/anycam
Reposted by Felix Wimbauer
visinf.bsky.social
📢 #CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation 🔥

We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!

🌎 visinf.github.io/cups
Reposted by Felix Wimbauer
andreasgeiger.bsky.social
🏠 Introducing DepthSplat: a framework that connects Gaussian splatting with single- and multi-view depth estimation. This enables robust depth modeling and high-quality view synthesis with state-of-the-art results on ScanNet, RealEstate10K, and DL3DV.
🔗 haofeixu.github.io/depthsplat/
Reposted by Felix Wimbauer
lu-sang.bsky.social
🤗 I’m excited to share our recent work: TwoSquared: 4D Reconstruction from 2D Image Pairs.
🔥 Our method produces geometry- and texture-consistent, physically plausible 4D reconstructions
📰 Check our project page sangluisme.github.io/TwoSquared/
❤️ @ricmarin.bsky.social @dcremers.bsky.social
Reposted by Felix Wimbauer
kashyap7x.bsky.social
Announcing the 2025 NAVSIM Challenge! What's new? We're testing not only on real recordings, but also on imaginary futures generated from the real ones! 🤯

Two rounds: #CVPR2025 and #ICCV2025. $18K in prizes + several $1.5K travel grants. Submit in May for Round 1! opendrivelab.com/challenge2025/ 🧵👇
Reposted by Felix Wimbauer
andreasgeiger.bsky.social
Can we represent fuzzy geometry with meshes? "Volumetric Surfaces" uses layered meshes to represent the look of hair, fur & more without the splatting/volume overhead. Fast, pretty, and runs in real-time on your laptop!
🔗 autonomousvision.github.io/volsurfs/
📄 arxiv.org/pdf/2409.02482
Reposted by Felix Wimbauer
ericzzj.bsky.social
AnyCam: Learning to Recover Camera Poses and Intrinsics from Casual Videos

@fwimbauer.bsky.social, Weirong Chen, Dominik Muhle, Christian Rupprecht, @dcremers.bsky.social

tl;dr: uncertainty-based loss + pre-trained depth and flow networks + test-time trajectory refinement

arxiv.org/abs/2503.23282
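
A runnable toy of the uncertainty-based loss from the tl;dr (illustrative only; the shapes, function name, and moving-object setup are invented for the example):

```python
# Toy uncertainty-weighted flow loss: high predicted uncertainty down-weights
# the flow error, and the log-sigma term penalizes claiming uncertainty everywhere.
import numpy as np

def uncertainty_flow_loss(flow_pred, flow_obs, log_sigma):
    err = np.abs(flow_pred - flow_obs).sum(axis=-1)      # per-pixel L1 flow error
    return (err * np.exp(-log_sigma) + log_sigma).mean()

rng = np.random.default_rng(0)
flow_obs = rng.normal(size=(64, 64, 2))                  # stand-in observed optical flow
flow_pred = flow_obs + 0.1 * rng.normal(size=(64, 64, 2))
flow_pred[:16] += 3.0         # a "moving object" region violating the static-scene assumption
log_sigma = np.zeros((64, 64))
log_sigma[:16] = 2.0          # the network flags that region as uncertain
print(uncertainty_flow_loss(flow_pred, flow_obs, log_sigma))
```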
Reposted by Felix Wimbauer
askoepke.bsky.social
Our paper submission deadline for the EVAL-FoMo workshop @cvprconference.bsky.social has been extended to March 19th!
sites.google.com/view/eval-fo...
We welcome submissions (incl. published papers) on the analysis of emerging capabilities / limits in visual foundation models. #CVPR2025
Screenshot of the workshop website "Emergent Visual Abilities and Limits of Foundation Models" at CVPR 2025
Reposted by Felix Wimbauer
christophreich.bsky.social
Check out the recent CVG papers at #CVPR2025, including our (@olvrhhn.bsky.social, @neekans.bsky.social, @dcremers.bsky.social, Christian Rupprecht, and @stefanroth.bsky.social) work on unsupervised panoptic segmentation. The paper will soon be available on arXiv. 🚀
dcremers.bsky.social
We are thrilled to have 12 papers accepted to #CVPR2025. Thanks to all our students and collaborators for this great achievement!
For more details check out cvg.cit.tum.de
Reposted by Felix Wimbauer
niessner.bsky.social
Tomorrow in our TUM AI - Lecture Series with none other than Robin Rombach, CEO of Black Forest Labs.

He'll talk about "𝐅𝐋𝐔𝐗: Flow Matching for Content Creation at Scale".

Live stream: youtube.com/live/nrKKLJX...
6pm GMT+1 / 9am PST (Mon Feb 17th)
TUM AI Lecture Series - FLUX: Flow Matching for Content Creation at Scale (Robin Rombach)
YouTube video by Matthias Niessner
youtube.com
Reposted by Felix Wimbauer
askoepke.bsky.social
Our 2nd Workshop on Emergent Visual Abilities and Limits of Foundation Models (EVAL-FoMo) is accepting submissions. We are looking forward to talks by our amazing speakers that include @saining.bsky.social, @aidanematzadeh.bsky.social, @lisadunlap.bsky.social, and @yukimasano.bsky.social. #CVPR2025
bayesiankitten.bsky.social
🔥 #CVPR2025 Submit your cool papers to Workshop on
Emergent Visual Abilities and Limits of Foundation Models 📷📷🧠🚀✨

sites.google.com/view/eval-fo...

Submission Deadline: March 12th!
EVAL-FoMo 2
A Vision workshop on Evaluations and Analysis
sites.google.com
Reposted by Felix Wimbauer
dcremers.bsky.social
Exciting discussions on the future of AI at the Paris AI Action Summit with French Minister of Science Philippe Baptiste and many leading AI researchers
Reposted by Felix Wimbauer
visinf.bsky.social
🏔️⛷️ Looking back on a fantastic week full of talks, research discussions, and skiing in the Austrian mountains!
Reposted by Felix Wimbauer
lu-sang.bsky.social
🥳Thrilled to share our work, "Implicit Neural Surface Deformation with Explicit Velocity Fields", accepted at #ICLR2025 👏
code is available at: github.com/Sangluisme/I...
😊Huge thanks to my amazing co-authors. @dongliangcao.bsky.social @dcremers.bsky.social
👏Special thanks to @ricmarin.bsky.social