Radiance
Computer science
Artificial intelligence
View synthesis
Computer vision
Texture (cosmology)
Prior probability
Texture synthesis
Image (mathematics)
Pattern recognition (psychology)
Image texture
Image segmentation
Remote sensing
Geology
Bayesian probability
Rendering (computer graphics)
Authors
Zheng Chen,Chen Wang,Yuan-Chen Guo,Song-Hai Zhang
Identifier
DOI:10.1109/tpami.2023.3305295
Abstract
Neural Radiance Fields (NeRF) achieve photo-realistic view synthesis with densely captured input images. However, the geometry of NeRF is extremely under-constrained given sparse views, resulting in significant degradation of novel view synthesis quality. Inspired by self-supervised depth estimation methods, we propose StructNeRF, a solution to novel view synthesis for indoor scenes with sparse inputs. StructNeRF leverages the structural hints naturally embedded in multi-view inputs to handle the unconstrained geometry issue in NeRF. Specifically, it tackles textured and non-textured regions separately: a patch-based multi-view consistent photometric loss is proposed to constrain the geometry of textured regions, while non-textured regions are explicitly restricted to be 3D-consistent planes. Through these dense self-supervised depth constraints, our method improves both the geometry and the view synthesis performance of NeRF without any additional training on external data. Extensive experiments on several real-world datasets demonstrate that StructNeRF achieves superior or comparable performance, both quantitatively and qualitatively, compared to state-of-the-art methods (e.g., NeRF, DSNeRF, RegNeRF, Dense Depth Priors, and MonoSDF) for indoor scenes with sparse inputs.
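To make the idea of a patch-based multi-view photometric constraint concrete, below is a minimal illustrative sketch in the spirit of self-supervised depth estimation: a small patch around a reference pixel is back-projected with the depth predicted along its ray, reprojected into a source view, and compared photometrically. This is not the authors' implementation; the function names, camera conventions (pinhole intrinsics, a 4x4 reference-to-source pose), the locally-constant patch depth, and the plain L1 difference are all assumptions made for illustration.

```python
# Illustrative sketch (not the StructNeRF code) of a patch-based
# multi-view photometric consistency term.
import torch
import torch.nn.functional as F


def warp_patch(depth, pix, K_ref, K_src, T_ref2src, src_img, patch=5):
    """Warp a (patch x patch) neighborhood around `pix` from the reference
    view into the source view using one predicted depth value.

    depth:     scalar tensor, depth along the reference ray
    pix:       (2,) tensor, (x, y) pixel coordinates in the reference view
    K_ref, K_src: (3, 3) camera intrinsics
    T_ref2src: (4, 4) relative pose from reference to source camera
    src_img:   (3, H, W) source image
    """
    r = patch // 2
    # Pixel grid of the reference patch (homogeneous coordinates).
    ys, xs = torch.meshgrid(
        torch.arange(-r, r + 1) + pix[1],
        torch.arange(-r, r + 1) + pix[0],
        indexing="ij",
    )
    grid = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).float()  # (p, p, 3)

    # Back-project assuming a locally constant depth over the patch,
    # transform into the source camera, and project.
    cam_pts = depth * (grid @ torch.linalg.inv(K_ref).T)               # (p, p, 3)
    cam_h = torch.cat([cam_pts, torch.ones_like(cam_pts[..., :1])], dim=-1)
    src_pts = (cam_h @ T_ref2src.T)[..., :3] @ K_src.T                 # (p, p, 3)
    uv = src_pts[..., :2] / src_pts[..., 2:3].clamp(min=1e-6)          # (p, p, 2)

    # Normalize to [-1, 1] and bilinearly sample the source image.
    H, W = src_img.shape[-2:]
    uv_norm = torch.stack(
        [2 * uv[..., 0] / (W - 1) - 1, 2 * uv[..., 1] / (H - 1) - 1], dim=-1
    )
    warped = F.grid_sample(
        src_img[None], uv_norm[None], align_corners=True, padding_mode="border"
    )
    return warped[0]                                                   # (3, p, p)


def patch_photometric_loss(ref_patch, warped_patch):
    """L1 discrepancy between the reference patch and the patch warped
    from the source view; a structural term such as SSIM could be added."""
    return (ref_patch - warped_patch).abs().mean()
```

If the predicted depth is correct and the patch is visible in both views, the warped source patch matches the reference patch, so minimizing this discrepancy pushes the radiance field toward multi-view-consistent geometry in textured regions; texture-less regions give no useful photometric signal, which is why the abstract handles them with an explicit planar constraint instead.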