The setup 👉 We use our Riemannian flow-matching model PLONK (CVPR25: Around the World in 80 Timesteps: A Generative Approach to Global Visual Geolocation) 🌍
We simply swap StreetCLIP with DinoV3 as a drop-in backbone, and train on OpenStreetView-5M.
And boom 💥 — DinoV3 wins.
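For context on what "drop-in" means here, a minimal sketch (assuming a Hugging Face ViT-style encoder; the class name, checkpoint id, and CLS-token pooling are illustrative, not the actual PLONK code):

```python
# Minimal sketch, not the actual PLONK code: wrap a frozen encoder so the
# flow-matching head only sees a fixed-size embedding, which is what makes
# the backbone a drop-in choice. The checkpoint id below is illustrative.
import torch
import torch.nn as nn
from transformers import AutoModel

class GeoBackbone(nn.Module):
    def __init__(self, name: str = "facebook/dinov3-vitb16"):  # hypothetical checkpoint id
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.encoder.requires_grad_(False)  # frozen backbone; only the geolocation head is trained

    @torch.no_grad()
    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        out = self.encoder(pixel_values=pixel_values)
        return out.last_hidden_state[:, 0]  # CLS token as the global image embedding
```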
August 18, 2025 at 3:14 PM
Huge shoutout to the #CVPR25 House Band for putting on an absolutely fantastic show last night! 🎶🔥 You brought the energy, talent, and good vibes! What a way to unwind after a packed conference day! 🎤🎸🥁
#CVPRAfterHours
June 15, 2025 at 3:51 PM
Panel talk happening right now at @vlms4all.bsky.social! Come join us at #CVPR25 (room: 104E)
          
June 12, 2025 at 10:38 PM
    Fahad Shahbaz Khan speaking about building culturally aware multilingual LMM benchmarks at 
@vlms4all.bsky.social at #CVPR25 (room: 104E)
June 12, 2025 at 7:51 PM
    Day 2 at @cvprconference.bsky.social! Usual coffee, different program. Today it’s the #EGOVIS day! Join us for a day full of keynote talks, challenge presentations, oral talks, posters, etc!
We're in room B1 - last floor!
Program and info: egovis.github.io/cvpr25/#prog...
June 12, 2025 at 1:52 PM
    *3R posts are back! 🧵
Interested in SfM, RGB-SLAM or... both at the same time???
Come see MUSt3R @CVPR25 Friday morning, ExHall D Poster #82.
Jerome and Boris will be there to present how we can adapt DUSt3R to multiple views via a memory mechanism.
If you missed it earlier [...]
        
MUSt3R: Multi-view Network for Stereo 3D Reconstruction (arxiv.org)
DUSt3R introduced a novel paradigm in geometric computer vision by proposing a model that can provide dense and unconstrained Stereo 3D Reconstruction of arbitrary image collections with no prior info...
June 12, 2025 at 12:29 PM
    Check out the CVPR Workshop proceeding papers presented at FGVC12:
openaccess.thecvf.com/CVPR2025_wor...
Poster session:
June 11, 4pm-6pm
ExHall D, poster boards 373-403
#CVPR25 @cvprconference.bsky.social
        
June 10, 2025 at 6:43 PM
I will be at #CVPR25 in Nashville! ✨
Please come chat with me and Ethan Weber - during our poster session on Pippo, on Sat 5-7pm (Hall D)! 😊 👋
Web: yashkant.github.io/pippo
CC: @ethanjohnweber.bsky.social, @igilitschenski.bsky.social
June 10, 2025 at 2:18 AM
On my way to Nashville for #CVPR25; almost ready to dance! We will present "TANGO: Training-free Embodied AI Agents for Open-world Tasks" at the main conference (June 15) and at the Embodied AI Workshop (June 12).
If you are in Nashville and want to catch up, DM me here or via email!
June 9, 2025 at 5:51 PM
    Join us on June 11, 9am to discuss all things fine-grained!
We are looking forward to a series of talks on semantic granularity, covering topics such as machine teaching, interpretability and much more!
Room 104 E
Schedule & details: sites.google.com/view/fgvc12
@cvprconference.bsky.social #CVPR25
June 8, 2025 at 11:19 PM
    🚀 Going to #CVPR2025? 
Want to create 3D garments faster and at scale?
Catch Maria Korosteleva at the Virtual Try-On Workshop on June 12, 2:20PM CDT (Room 105B) to learn how synthetic data is changing the game!
🔗 vto-at-cvpr25.github.io
#SMPL #AI #3DGarment #SyntheticData
June 6, 2025 at 1:32 PM
    Check out one of the latest works of our Lab! AnyCam @CVPR25
          
      Can you train a model for pose estimation directly on casual videos without supervision?
Turns out you can!
In our #CVPR2025 paper AnyCam, we directly train on YouTube videos and achieve SOTA results by using an uncertainty-based flow loss and monocular priors!
⬇️
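The "uncertainty-based flow loss" can be pictured as a standard heteroscedastic weighting; a rough sketch under that assumption (function and tensor names are illustrative, not AnyCam's exact formulation):

```python
# Illustrative heteroscedastic flow loss, not AnyCam's exact formulation:
# predicted per-pixel uncertainty down-weights flow residuals the camera
# model cannot explain (e.g. on moving objects).
import torch

def uncertainty_flow_loss(flow_pred: torch.Tensor,
                          flow_ref: torch.Tensor,
                          log_var: torch.Tensor) -> torch.Tensor:
    """flow_pred/flow_ref: (B, 2, H, W) induced vs. reference optical flow;
    log_var: (B, 1, H, W) predicted per-pixel log-variance."""
    residual = (flow_pred - flow_ref).abs().sum(dim=1, keepdim=True)  # per-pixel L1 flow error
    # exp(-log_var) shrinks the penalty where uncertainty is high; the +log_var
    # term keeps the network from inflating uncertainty everywhere.
    return (torch.exp(-log_var) * residual + log_var).mean()
```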
May 13, 2025 at 8:32 AM
    Glad to be selected as Outstanding Reviewer for CVPR25!
          
      Behind every great conference is a team of dedicated reviewers.  Congratulations to this year’s #CVPR2025 Outstanding Reviewers!
cvpr.thecvf.com/Conferences/...
May 12, 2025 at 5:11 AM
    Code is open-sourced at https://github.com/gimpong/CVPR25-Condenser. [6/6 of https://arxiv.org/abs/2504.21263v1]
          
May 1, 2025 at 5:56 AM
        📣 New #CVPR25 Paper: UrbanCAD builds photorealistic and highly controllable hybrid digital twins from a single urban image and a large collection of 3D CAD models, supporting various editing operations. xdimlab.github.io/UrbanCAD/
          
April 30, 2025 at 12:42 PM
Do you want to present your recently accepted or ongoing work at the @cvprconference.bsky.social #CVPR2025 EgoVis workshop?
Submit your abstract before the deadline of Fri 2 May:
egovis.github.io/cvpr25/#cfp
        
[GIF: a blue and white penguin sitting on a yellow origami crane]
April 29, 2025 at 6:30 PM
    🚗🌆 We introduce EVolSplat — a feed-forward 3D Gaussian Splatting model that enables real-time, photorealistic rendering without per-scene optimization.
Trained on KITTI-360 & Waymo, it sets a new SOTA for autonomous driving applications. #CVPR25. Paper & Code: xdimlab.github.io/EVolSplat/
April 29, 2025 at 7:38 AM
Looking for accurate camera poses over long sequences in dynamic scenes, plus dense 3D reconstruction?
Check out our #CVPR25 paper WildGS-SLAM – bridging robust SLAM and high-fidelity 3D Gaussians in the wild!
      🥳Excited to share our latest work, WildGS-SLAM: Monocular Gaussian Splatting SLAM in Dynamic Environments, accepted to #CVPR2025 🌐
We present a robust monocular RGB SLAM system that uses uncertainty-aware tracking and mapping to handle dynamic scenes.
April 10, 2025 at 11:11 PM
    Extensive experiments demonstrate that AutoSSVH achieves superior retrieval efficacy and efficiency compared to state-of-the-art approaches. Code is available at https://github.com/EliSpectre/CVPR25-AutoSSVH. [5/5 of https://arxiv.org/abs/2504.03587v1]
          
April 7, 2025 at 6:08 AM
        Generalized Recorrupted-to-Recorrupted is accepted at CVPR25 🚀
See a thread below:
      🔉New paper "Generalized Recorrupted-to-Recorrupted" 🔉
with @bemc22.bsky.social and Jorge Bacca
- Generalizes R2R self-supervised loss for noise belonging to the exponential family.
- Shows asymptotic equivalence with SURE.
paper: arxiv.org/abs/2412.04648
code: github.com/bemc22/Gener...
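For reference, the classic Gaussian-noise R2R loss that the paper generalizes looks roughly like this (a sketch with illustrative names, not the authors' code):

```python
# Classic (Gaussian) R2R loss, sketched for reference; the paper extends the
# idea to exponential-family noise. sigma is the known noise std, alpha a
# recorruption hyperparameter. Not the authors' code.
import torch

def r2r_loss(model, y: torch.Tensor, sigma: float, alpha: float = 0.5) -> torch.Tensor:
    """y: noisy measurement y = x + n with n ~ N(0, sigma^2 I)."""
    z = torch.randn_like(y)
    y_hat = y + alpha * sigma * z       # recorrupted network input
    y_tilde = y - (sigma / alpha) * z   # recorrupted regression target
    # In expectation this matches supervised training on the clean signal,
    # because the two injected noises are constructed to be uncorrelated.
    return torch.mean((model(y_hat) - y_tilde) ** 2)
```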
      
Generalized Recorrupted-to-Recorrupted: Self-Supervised Learning Beyond Gaussian Noise (arxiv.org)
Recorrupted-to-Recorrupted (R2R) has emerged as a methodology for training deep networks for image restoration in a self-supervised manner from noisy measurement data alone, demonstrating equivalence ...
February 27, 2025 at 10:16 AM
Here is the official answer:
"We haven't invited reviewers yet. The screenshot seems to cut off before showing that was the CVPR25 email."
You're sure that this was ICCV?
February 1, 2025 at 1:47 PM
    4 papers assigned for #CVPR25 and all super well aligned with my research. Good job ACs! @cvprconference.bsky.social
          
December 15, 2024 at 4:22 AM
Would it be possible to add more reviewers for #CVPR25? Some of the PhD students I advise would be expert and trustworthy reviewers, and it would be educational for them.
Unfortunately, as an AC, I can't add them as a reviewer to a paper.
In CMT I think this was possible 🤔
December 3, 2024 at 11:35 PM