Sergio Izquierdo
@sizquierdo.bsky.social
52 followers 66 following 11 posts
PhD candidate at University of Zaragoza. Previously intern at Niantic Labs and Skydio. Working on 3D reconstruction and Deep Learning. serizba.github.io
Reposted by Sergio Izquierdo
bernhard-jaeger.bsky.social
One concern that I have as an AI researcher when publishing code is that it can potentially be used in dual-use applications.
To solve this, we propose Civil Software Licenses. They prevent dual-use while being minimal in the restrictions they impose:

civil-software-licenses.github.io
sizquierdo.bsky.social
Presenting today at #CVPR poster 81.

Code is available at github.com/nianticlabs/...

Want to try it on an iPhone video? On Android? On any other sequence you have? We got you covered. Check the repo.
sizquierdo.bsky.social
Presenting it now at #CVPR
oisinmacaodha.bsky.social
MVSAnywhere: Zero-Shot Multi-View Stereo

Looking for a multi-view stereo depth estimation model which works anywhere, in any scene, with any range of depths?

If so, stop by our poster #81 today in the morning session (10:30 to 12:20) at #CVPR2025.
sizquierdo.bsky.social
Happy to be one of them
cvprconference.bsky.social
Behind every great conference is a team of dedicated reviewers. Congratulations to this year’s #CVPR2025 Outstanding Reviewers!

cvpr.thecvf.com/Conferences/...
sizquierdo.bsky.social
We focused on depth from videos and, as you pointed out, we didn't train on datasets with different captures per scene.
sizquierdo.bsky.social
💡Use case:

We show how the accurate and robust depths from MVSAnywhere can regularize Gaussian splats, yielding much cleaner scene reconstructions.

As MVSAnywhere is agnostic to the scene scale, this is plug-and-play for your splats!
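A rough picture of how such a depth regularizer could be wired in (a minimal sketch with an assumed loss form and variable names, not the actual MVSAnywhere or splatting training code):

```python
import torch

def depth_regularization_loss(rendered_depth, mvs_depth, valid_mask):
    """Penalize disagreement between the depth rendered from the splats
    and an MVS depth map for the same view.

    rendered_depth, mvs_depth: (H, W) tensors in the same metric scale
    valid_mask: (H, W) boolean tensor marking pixels with a reliable MVS depth
    """
    diff = torch.abs(rendered_depth - mvs_depth)
    return diff[valid_mask].mean()

# Inside the usual splatting training loop (sketch):
# loss = photometric_loss + lambda_depth * depth_regularization_loss(
#     rendered_depth, mvs_depth, valid_mask)
```

Because MVSAnywhere predicts depth at the same scale as the input cameras, the MVS depth can be compared to the rendered depth directly, without a per-frame scale alignment step.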
sizquierdo.bsky.social
🏆Results:

MVSAnywhere achieves state-of-the-art results on the Robust Multi-View Depth Benchmark, showing its strong generalization performance.
Quantitative results of MVSAnywhere
sizquierdo.bsky.social
🧩Challenge: Varying Depth Scales & Unknown Ranges

🔹Most models require a known depth range to estimate the cost volume.
✅MVSAnywhere estimates an initial range from the camera scale and setup, then refines it. It predicts at the same scale as the input cameras!
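One way to picture the range initialization (a sketch under my own assumptions; the baseline heuristic and the constants are illustrative, not the paper's formula):

```python
import numpy as np

def initial_depth_range(cam_centers, near_factor=0.5, far_factor=50.0):
    """Guess a plausible [d_min, d_max] from the camera setup alone.

    cam_centers: (N, 3) array of camera centers for the reference/source views.
    The baseline between cameras gives a notion of scene scale: depths are
    assumed to span a fraction of a baseline up to many baselines.
    """
    # Mean distance of each camera to the centroid ~ typical baseline.
    baseline = np.linalg.norm(cam_centers - cam_centers.mean(0), axis=1).mean()
    baseline = max(baseline, 1e-6)
    return near_factor * baseline, far_factor * baseline

# The range would then be refined from a first coarse depth prediction,
# e.g. by re-centering [d_min, d_max] around the predicted depth percentiles.
```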
sizquierdo.bsky.social
🧩Challenge: Domain Generalization

🔹Previous models struggle across different domains (indoor🏠 vs outdoor🏞️).
✅MVSAnywhere uses a transformer architecture and is trained on a large array of varied synthetic datasets.
Qualitative results of MVSAnywhere
sizquierdo.bsky.social
🧩Challenge: Robustness to casually captured videos

🔹MVS methods rely entirely on the matches in the cost volume (which fails with low overlap & dynamic objects)
✅MVSAnywhere successfully combines strong single-view image priors with multi-view information from our cost volume
MVSAnywhere works with dynamic objects and casually captured videos.
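Conceptually, combining monocular priors with matching cues can look like the sketch below (invented layer sizes and names; the actual architecture is in the paper and repo):

```python
import torch
import torch.nn as nn

class PriorCostVolumeFusion(nn.Module):
    """Fuse single-view image features with cost-volume matching features,
    so depth can still be predicted where matching fails (low overlap,
    moving objects)."""

    def __init__(self, image_dim=256, matching_dim=64, out_dim=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(image_dim + matching_dim, out_dim, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_dim, out_dim, 3, padding=1),
        )

    def forward(self, image_feats, cost_volume_feats):
        # image_feats:       (B, image_dim, H, W) from a monocular encoder
        # cost_volume_feats: (B, matching_dim, H, W) reduced over depth hypotheses
        return self.fuse(torch.cat([image_feats, cost_volume_feats], dim=1))
```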
sizquierdo.bsky.social
🔍Looking for a multi-view depth method that just works?

We're excited to share MVSAnywhere, which we will present at #CVPR2025. MVSAnywhere produces sharp depths, generalizes and stays robust across all kinds of scenes, and is scale-agnostic.

More info:
nianticlabs.github.io/mvsanywhere/
Reposted by Sergio Izquierdo
rmurai0610.bsky.social
MASt3R-SLAM code release!
github.com/rmurai0610/M...

Try it out on videos or with a live camera

Work with
@ericdexheimer.bsky.social*,
@ajdavison.bsky.social (*Equal Contribution)
rmurai0610.bsky.social
Introducing MASt3R-SLAM, the first real-time monocular dense SLAM with MASt3R as a foundation.

Easy to use like DUSt3R/MASt3R: from an uncalibrated RGB video it recovers accurate, globally consistent poses & a dense map.

With @ericdexheimer.bsky.social* @ajdavison.bsky.social (*Equal Contribution)
Reposted by Sergio Izquierdo
ducha-aiki.bsky.social
MegaLoc: One Retrieval to Place Them All
@berton-gabri.bsky.social Carlo Masone

tl;dr: DINOv2-SALAD, trained on all available VPR datasets, works very well.
Code should be at github.com/gmberton/Meg..., but it's not up yet
arxiv.org/abs/2502.17237
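For context, place recognition with a retrieval model like MegaLoc comes down to nearest-neighbour search over one global descriptor per image (a sketch; how the released model exposes descriptor extraction is an assumption and left out here):

```python
import numpy as np

def retrieve(query_desc, db_descs, top_k=5):
    """Rank database images by cosine similarity to the query descriptor.

    query_desc: (D,) global descriptor of the query image
    db_descs:   (N, D) descriptors of the database (map) images
    """
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    scores = db @ q
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]
```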