Keywords: artificial intelligence, computer science, monocular, point cloud, computer vision, metric (unit), depth map, scale (ratio), image (mathematics), pattern recognition (psychology), operations management, physics, quantum mechanics, economics
Authors
Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Simon Chen, Yifan Liu, Chunhua Shen
Identifier
DOI:10.1109/tpami.2022.3209968
Abstract
Despite significant progress made in the past few years, challenges remain for depth estimation using a single monocular image. First, it is nontrivial to train a metric-depth prediction model that can generalize well to diverse scenes mainly due to limited training data. Thus, researchers have built large-scale relative depth datasets that are much easier to collect. However, existing relative depth estimation models often fail to recover accurate 3D scene shapes due to the unknown depth shift caused by training with the relative depth data. We tackle this problem here and attempt to estimate accurate scene shapes by training on large-scale relative depth data, and estimating the depth shift. To do so, we propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image, and then exploits 3D point cloud data to predict the depth shift and the camera's focal length that allow us to recover 3D scene shapes. As the two modules are trained separately, we do not need strictly paired training data. In addition, we propose an image-level normalized regression loss and a normal-based geometry loss to improve training with relative depth annotation. We test our depth model on nine unseen datasets and achieve state-of-the-art performance on zero-shot evaluation. Code is available at: https://github.com/aim-uofa/depth/.
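The abstract's central observation is that models trained on relative depth data leave an unknown scale and shift between the prediction and true metric depth, which distorts recovered 3D shape. The paper resolves the shift with a learned point-cloud module; as a minimal sketch of the ambiguity itself, the snippet below recovers an unknown affine scale and shift by least squares, the way affine-invariant depth predictions are commonly aligned to ground truth for evaluation. The function name and synthetic data are illustrative, not from the paper.

```python
import numpy as np

def align_scale_shift(pred, gt):
    """Find scale s and shift t minimizing ||s * pred + t - gt||^2,
    i.e. the affine alignment of a relative depth map to metric depth."""
    A = np.stack([pred, np.ones_like(pred)], axis=1)  # design matrix (N, 2)
    s, t = np.linalg.lstsq(A, gt, rcond=None)[0]
    return s, t

# Synthetic check: a prediction that differs from metric depth
# by scale 2.0 and shift 0.5 (values chosen for illustration).
gt = np.linspace(1.0, 10.0, 50)
pred = (gt - 0.5) / 2.0
s, t = align_scale_shift(pred, gt)
# s * pred + t now matches gt, so 3D shape can be recovered;
# the paper's point is that at test time gt is unavailable,
# so the shift must instead be predicted from the data itself.
```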