Epipolar geometry
Light field
Artificial intelligence
Ground truth
Computer science
Benchmark (surveying)
Field (mathematics)
Unsupervised learning
Computer vision
Monocular
Deep learning
Pattern recognition (psychology)
Image (mathematics)
Mathematics
Geology
Geodesy
Pure mathematics
Authors
Wenhui Zhou, Enci Zhou, Gaomin Liu, Lili Lin, Andrew Lumsdaine
Identifier
DOI:10.1109/tip.2019.2944343
Abstract
Learning-based depth estimation from light fields has made significant progress in recent years. However, most existing approaches operate within a supervised framework, which requires vast quantities of ground-truth depth data for training. Furthermore, accurate depth maps for light fields are hardly available except in a few synthetic datasets. In this paper, we exploit the multi-orientation epipolar geometry of light fields and propose an unsupervised monocular depth estimation network. It predicts depth from the central view of a light field without any ground-truth information. Inspired by the inherent depth cues and geometric constraints of light fields, we introduce three novel unsupervised loss functions: a photometric loss, a defocus loss, and a symmetry loss. We evaluate our method on a public synthetic 4D light field dataset. As the first unsupervised method published on the 4D Light Field Benchmark website, our method achieves satisfactory performance on most error metrics. Comparison experiments with two state-of-the-art unsupervised methods demonstrate the superiority of our method. We also demonstrate the effectiveness and generality of our method on real-world light-field images.
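To illustrate the general idea behind an unsupervised photometric loss for light-field depth, the sketch below warps a neighbouring sub-aperture view onto the central view using a predicted disparity map and penalizes the photometric difference. This is a minimal, hypothetical PyTorch-style example, not the authors' implementation: the tensor shapes, the `(du, dv)` angular offsets, the sign convention of the warp, and the plain L1 comparison are all assumptions, and the paper's defocus and symmetry losses are not reproduced here.

```python
# Minimal sketch (assumed formulation, not the authors' code) of an
# unsupervised photometric loss for light-field depth estimation:
# warp side sub-aperture views to the central view via predicted disparity.
import torch
import torch.nn.functional as F

def warp_to_center(side_view, disparity, du, dv):
    """Warp a sub-aperture view (B, C, H, W) to the central view using a
    per-pixel disparity map (B, 1, H, W). (du, dv) is the angular offset of
    the side view relative to the central view, in sub-aperture units.
    The sign convention depends on the light-field parameterization."""
    b, _, h, w = side_view.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=side_view.dtype, device=side_view.device),
        torch.arange(w, dtype=side_view.dtype, device=side_view.device),
        indexing="ij",
    )
    # Shift the sampling grid along the epipolar line by disparity * offset.
    xs = xs.unsqueeze(0) + disparity.squeeze(1) * du
    ys = ys.unsqueeze(0) + disparity.squeeze(1) * dv
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack((2 * xs / (w - 1) - 1, 2 * ys / (h - 1) - 1), dim=-1)
    return F.grid_sample(side_view, grid, align_corners=True,
                         padding_mode="border")

def photometric_loss(center_view, side_views, disparity):
    """Mean L1 error between the central view and each side view warped
    into the central view. side_views is a list of (image, du, dv) tuples."""
    loss = 0.0
    for img, du, dv in side_views:
        warped = warp_to_center(img, disparity, du, dv)
        loss = loss + (warped - center_view).abs().mean()
    return loss / len(side_views)
```

In an unsupervised setting of this kind, the loss is minimized end-to-end with respect to the network that predicts the disparity map, so no ground-truth depth is needed; how the multi-orientation epipolar views are selected and how the additional defocus and symmetry terms are weighted is specific to the paper.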