Abstract

In recent years, reconstructing indoor scene geometry from multi-view images has made encouraging progress. Current methods incorporate monocular priors into neural implicit surface models to achieve high-quality reconstruction. However, these methods require hundreds of images per scene. When only a limited number of views are available as input, the performance of monocular priors deteriorates due to scale ambiguity, causing the reconstructed scene geometry to collapse. In this paper, we propose a new method, named Sparis, for indoor surface reconstruction from sparse views. Specifically, we investigate the impact of monocular priors on sparse scene reconstruction and introduce a novel prior based on inter-image matching information. Our prior offers more accurate depth information while ensuring cross-view matching consistency. Additionally, we employ an angular filter strategy and an epipolar matching weight function to reduce errors caused by view-matching inaccuracies, thereby refining the inter-image prior for improved reconstruction accuracy. Experiments on widely used benchmarks demonstrate the superior performance of our method in sparse-view scene reconstruction.

Method

Overview of Sparis. Given sparse indoor images, 3D surfaces are reconstructed via a two-stage process: (1) Pre-processing: normal maps and matching pixel pairs are estimated with a pre-trained normal prediction network \(f_\theta\) and a feature matching network \(f_\phi\), respectively; (2) Training with priors: the neural rendering procedure is optimized with inter-image depth priors, cross-view reprojection, and monocular normal priors, producing complete and detailed geometry.
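As an illustrative sketch only (not the authors' released implementation), the match filtering and weighting ideas above could be realized as follows. The threshold `min_deg`, the bandwidth `sigma`, and all function names are hypothetical choices for this example:

```python
import numpy as np

def angular_filter(ray_i, ray_j, min_deg=5.0):
    """Keep a matched pixel pair only if its two viewing rays differ
    by at least min_deg degrees (hypothetical threshold); near-parallel
    rays give unreliable triangulated depth."""
    cos = ray_i @ ray_j / (np.linalg.norm(ray_i) * np.linalg.norm(ray_j))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angle >= min_deg

def epipolar_weight(dist_px, sigma=2.0):
    """Down-weight a match by its pixel distance to the epipolar line
    of its counterpart, using a Gaussian falloff (hypothetical form)."""
    return np.exp(-(dist_px ** 2) / (2.0 * sigma ** 2))

def inter_image_depth_loss(rendered_depth, matched_depth, weights):
    """Weighted L1 between depth rendered by the implicit surface model
    and depth derived from cross-view matches."""
    return np.mean(weights * np.abs(rendered_depth - matched_depth))
```

A pair observed from nearly identical viewpoints would be rejected by `angular_filter`, while a surviving pair contributes to the depth prior loss with a weight that decays as its epipolar error grows.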

BibTeX

@inproceedings{wu2025sparis,
    title={Sparis: Neural Implicit Surface Reconstruction of Indoor Scenes from Sparse Views},
    author={Yulun Wu and Han Huang and Wenyuan Zhang and Chao Deng and Ge Gao and Ming Gu and Yu-Shen Liu},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    year={2025}
}

Acknowledgements

The website template was borrowed from Ref-NeRF and Michaël Gharbi.