MVSAnywhere

Zero-Shot Multi-View Stereo

CVPR 2025

Sergio Izquierdo3 Mohamed Sayed1 Michael Firman1 Guillermo Garcia-Hernando1 Daniyar Turmukhambetov1 Javier Civera3 Oisin Mac Aodha2 Gabriel Brostow1,4 Jamie Watson1,4

1Niantic   2University of Edinburgh   3Universidad de Zaragoza   4University College London   

Coming soon!


Abstract


Computing accurate depth from multiple views is a fundamental and longstanding challenge in computer vision. However, most existing approaches do not generalize well across different domains and scene types (e.g. indoor vs. outdoor). Training a general-purpose multi-view stereo model is challenging and raises several questions: how to best make use of transformer-based architectures, how to incorporate additional metadata when there is a variable number of input views, and how to estimate the range of valid depths, which can vary considerably across different scenes and is typically not known a priori. To address these issues, we introduce MVSA, a novel and versatile Multi-View Stereo architecture that aims to work Anywhere by generalizing across diverse domains and depth ranges. MVSA combines monocular and multi-view cues with an adaptive cost volume to deal with scale-related issues. We demonstrate state-of-the-art zero-shot depth estimation on the Robust Multi-View Depth Benchmark, surpassing existing multi-view stereo and monocular baselines.



Our model produces depth maps at the scale of the input camera poses. Here we show it estimating view-consistent depths for a drone video.



Approach




We construct a general-purpose multi-view depth estimation model. We start with a cost-volume based architecture, which matches deep features between views at different hypothesized depths. Key to performance are our Cost Volume Patchifier and Mono/Multi Cue Combiner. These components also fuse single-view information from the Reference Image Encoder with multi-view information from the source views.
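To make the cost-volume idea concrete, here is a minimal, simplified sketch (not the authors' implementation): for each hypothesized depth, warp the source-view features toward the reference view and score per-pixel feature similarity. For brevity it assumes a rectified stereo pair, where a depth hypothesis reduces to a horizontal shift; the full method instead warps via homographies using arbitrary camera poses.

```python
import numpy as np

def plane_sweep_cost_volume(ref_feat, src_feat, depths, fx, baseline):
    """Toy fronto-parallel plane sweep for a rectified stereo pair.

    ref_feat, src_feat: (C, H, W) deep feature maps.
    depths: (D,) hypothesized depths.
    fx, baseline: focal length (pixels) and stereo baseline (metres),
    so a depth hypothesis maps to disparity = fx * baseline / depth.
    Returns a (D, H, W) volume of dot-product matching scores.
    """
    C, H, W = ref_feat.shape
    volume = np.zeros((len(depths), H, W), dtype=np.float32)
    for d_idx, depth in enumerate(depths):
        # Warp the source features by the disparity implied by this depth.
        disparity = int(round(fx * baseline / depth))
        warped = np.zeros_like(src_feat)
        if disparity < W:
            warped[:, :, disparity:] = src_feat[:, :, : W - disparity]
        # Matching score: per-pixel dot product of reference and warped features.
        volume[d_idx] = (ref_feat * warped).sum(axis=0)
    return volume
```

A network then regularizes this volume and reads off depth per pixel; high scores at a given depth slice indicate that the features match under that hypothesis.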

Cost Volume Patchifier



Our cost volume patchifier enables high-quality information to be extracted from a |D| × H/4 × W/4 cost volume, ready for input to the Mono/Multi Cue Combiner ViT. (a) shows the naive approach to patchification. (b) Our approach makes better use of the reference image features.




Results


Tanks and Temples - "Barn"


Mesh Reconstruction via Depth TSDF Fusion


3D Gaussian Splatting Depth Regularization

3DGS

3DGS w/ MVSAnywhere

3DGS w/ Metric3D

Meshes extracted from the 3DGS reconstructions above.


3D Gaussian Splatting Depth Regularization - Mesh

3DGS

3DGS w/ MVSAnywhere

3DGS w/ Metric3D




Resources


Paper and Supplemental

Paper

If you find this work useful for your research, please cite:

@inproceedings{izquierdo2025mvsanywhere,
  title={{MVSAnywhere}: Zero Shot Multi-View Stereo},
  author={Izquierdo, Sergio and Sayed, Mohamed and Firman, Michael and Garcia-Hernando, Guillermo and Turmukhambetov, Daniyar and Civera, Javier and Mac Aodha, Oisin and Brostow, Gabriel J. and Watson, Jamie},
  booktitle={CVPR},
  year={2025}
}
        

© This webpage was in part inspired by this template.