Heightfields for Efficient Scene Reconstruction for AR

Jamie Watson1,2 Sara Vicente1 Oisin Mac Aodha3 Clément Godard4 Gabriel Brostow1,2 Michael Firman1

1Niantic   2University College London   3University of Edinburgh   4Google   

Abstract


3D scene reconstruction from a sequence of posed RGB images is a cornerstone task for computer vision and augmented reality (AR). While depth-based fusion is the foundation of most real-time approaches to 3D reconstruction, recent learning-based methods that operate directly on RGB images can achieve higher-quality reconstructions, but at the cost of increased runtime and memory requirements, making them unsuitable for AR applications. We propose an efficient learning-based method that refines the 3D reconstruction obtained by a traditional fusion approach. By leveraging a top-down heightfield representation, our method remains real-time while approaching the quality of other learning-based methods. Despite being a simplification, our heightfield is well suited to robotic path planning and AR character placement. We outline several innovations that push performance beyond existing top-down prediction baselines, and we present an evaluation framework targeting AR tasks on the challenging ScanNetV2 dataset.
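
To make the top-down representation concrete, here is a minimal sketch (not the paper's implementation) of how a heightfield can be rasterised from a fused 3D point cloud: each cell of a 2D grid over the ground plane stores the maximum height of the points that fall inside it. The function name, grid size, and cell size below are illustrative assumptions.

```python
import numpy as np

def points_to_heightfield(points, cell_size=0.05, grid_shape=(128, 128)):
    """Rasterise an (N, 3) point cloud into a top-down heightfield.

    Assumes y is "up": each (x, z) grid cell stores the maximum y
    (height) of the points falling inside it; empty cells stay NaN.
    This is an illustrative sketch, not the method from the paper.
    """
    heightfield = np.full(grid_shape, np.nan)
    # Map x/z coordinates to integer grid-cell indices.
    ix = np.floor(points[:, 0] / cell_size).astype(int)
    iz = np.floor(points[:, 2] / cell_size).astype(int)
    # Keep only points that land inside the grid.
    inside = (ix >= 0) & (ix < grid_shape[0]) & (iz >= 0) & (iz < grid_shape[1])
    for x, z, y in zip(ix[inside], iz[inside], points[inside, 1]):
        if np.isnan(heightfield[x, z]) or y > heightfield[x, z]:
            heightfield[x, z] = y
    return heightfield
```

A grid like this supports the AR use cases mentioned above: a cell's stored height (or NaN for unobserved space) can be queried in constant time when planning a path or placing a virtual character.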

Results





Resources


BibTeX

If you find this work useful for your research, please cite:

@inproceedings{watson-2023-heightfields,
  title = {Heightfields for Efficient Scene Reconstruction for {AR}},
  author = {Jamie Watson and
            Sara Vicente and
            Oisin Mac Aodha and
            Clément Godard and
            Gabriel J. Brostow and
            Michael Firman},
  booktitle = {Winter Conference on Applications of Computer Vision ({WACV})},
  year = {2023}
}

This webpage was in part inspired by this template.