Removing Objects From Neural Radiance Fields

CVPR 2023

Abstract


Neural Radiance Fields (NeRFs) are emerging as a ubiquitous scene representation that allows for novel view synthesis. Increasingly, NeRFs will be shareable with other people. Before sharing a NeRF, though, it might be desirable to remove personal information or unsightly objects. Such removal is not easily achieved with current NeRF editing frameworks. We propose a framework to remove objects from a NeRF representation created from an RGB-D sequence. Our NeRF inpainting method leverages recent work in 2D image inpainting and is guided by a user-provided mask. Our algorithm is underpinned by a confidence-based view selection procedure. It chooses which of the individual 2D inpainted images to use in the creation of the NeRF, so that the resulting inpainted NeRF is 3D consistent. We show that our method for NeRF editing is effective at synthesizing plausible inpaintings in a multi-view coherent manner. We validate our approach using a new and still-challenging dataset for the task of NeRF inpainting.

Overview





We take a sequence of posed RGB-D images together with corresponding 2D masks as input. The 2D frames are inpainted using LaMa and then used to optimize a neural radiance field. During optimization, our confidence-based view selection automatically removes inconsistent views from the optimization, preventing unwanted artefacts in the final result. Finally, novel views of the scene, with the object removed, can be rendered.
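The view-selection step above can be illustrated with a toy sketch. This is not the paper's implementation: it simply assumes that each inpainted view has already been assigned a scalar per-view loss against the emerging reconstruction, treats low loss as high confidence, and keeps only the most consistent fraction of views. The function name, the `keep_fraction` parameter, and the loss values are all hypothetical.

```python
import numpy as np

def select_consistent_views(per_view_losses, keep_fraction=0.7):
    """Toy confidence-based view selection (hypothetical helper).

    Views whose 2D inpaintings disagree with the 3D-consistent
    reconstruction tend to incur higher per-view losses, so we use the
    loss as an (inverse) confidence score and keep only the most
    consistent fraction of views for NeRF optimization.
    """
    losses = np.asarray(per_view_losses, dtype=float)
    n_keep = max(1, int(round(keep_fraction * len(losses))))
    # Lower loss -> higher confidence; keep the n_keep most confident views.
    keep_idx = np.argsort(losses)[:n_keep]
    return sorted(keep_idx.tolist())

# Example: views 2 and 5 have inconsistent inpaintings (high loss)
# and are excluded from the kept set.
losses = [0.10, 0.12, 0.90, 0.11, 0.09, 0.85, 0.13, 0.10, 0.11, 0.12]
print(select_consistent_views(losses))  # → [0, 1, 3, 4, 7, 8, 9]
```

In the actual method the selection is interleaved with optimization, so that confidences reflect the current state of the radiance field rather than a fixed precomputed loss.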

Results


Training sequences








Test sequences








BibTeX


If you find this work useful for your research, please cite:

@inproceedings{Weder2023Removing,
    title={Removing Objects From Neural Radiance Fields},
    author={Weder, Silvan and Garcia-Hernando, Guillermo and Monszpart, {\'{A}}ron and Pollefeys, Marc and Brostow, Gabriel and Firman, Michael and Vicente, Sara},
    booktitle={CVPR},
    year={2023},
}

Acknowledgements


We thank Jamie Watson for fruitful discussions about the project, Paul-Edouard Sarlin for constructive feedback on an early draft, and Zuoyue Li for kindly running CompNVS on our dataset. We also thank Galen Han, Jakub Powierza and Stanimir Vichev for their help with MLOps.
