Computer science
Artificial intelligence
Single shot
Computer vision
Optics
Computation
Depth map
Range (aeronautics)
Iterative reconstruction
Coding
Shot noise
Convolutional neural network
Algorithm
Physics
Image (mathematics)
Detector
Materials science
Telecommunications
Gene
Composite material
Chemistry
Biochemistry
Authors
Dhruvjyoti Bagadthey,Sanjana Prabhu,Salman S. Khan,D Tony Fredrick,Vivek Boominathan,Ashok Veeraraghavan,Kaushik Mitra
Abstract
Lensless cameras are ultra-thin imaging systems that replace the lens with a thin passive optical mask and computation. Passive mask-based lensless cameras encode depth information in their measurements over a certain depth range. Early works have shown that this encoded depth can be used to perform 3D reconstruction of close-range scenes. However, these approaches to 3D reconstruction are typically optimization-based and require strong hand-crafted priors and hundreds of iterations to reconstruct. Moreover, the reconstructions suffer from low resolution, noise, and artifacts. In this work, we propose FlatNet3D — a feed-forward deep network that can estimate both depth and intensity from a single lensless capture. FlatNet3D is an end-to-end trainable deep network that directly reconstructs depth and intensity from a lensless measurement using an efficient physics-based 3D mapping stage and a fully convolutional network. Our algorithm is fast and produces high-quality results, which we validate using both simulated and real scenes captured using PhlatCam.
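The abstract mentions a physics-based 3D mapping stage that turns a single 2D lensless measurement into a depth-resolved representation before a convolutional network refines it. As a rough illustration only — the paper's actual stage is not specified here — one common way to realize such a mapping is per-depth Wiener deconvolution against a stack of depth-dependent point spread functions (PSFs). The function name, the random stand-in PSFs, and the SNR constant below are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a physics-based 3D mapping stage: deconvolve one
# 2D lensless measurement against a depth-dependent PSF stack (Wiener
# filtering in the Fourier domain), yielding one intensity estimate per
# depth plane. PSFs here are random stand-ins for a real calibrated stack.
import numpy as np

def wiener_deconvolve_stack(measurement, psfs, snr=1e-2):
    """Map a 2D measurement to a (num_depths, H, W) volume."""
    M = np.fft.fft2(measurement)
    planes = []
    for psf in psfs:
        H = np.fft.fft2(np.fft.ifftshift(psf), s=measurement.shape)
        filt = np.conj(H) / (np.abs(H) ** 2 + snr)  # Wiener filter per depth
        planes.append(np.real(np.fft.ifft2(filt * M)))
    return np.stack(planes)

rng = np.random.default_rng(0)
meas = rng.standard_normal((64, 64))                       # lensless capture
psfs = [rng.standard_normal((64, 64)) for _ in range(4)]   # 4 depth planes
vol = wiener_deconvolve_stack(meas, psfs)
print(vol.shape)  # (4, 64, 64)
```

In a learned pipeline like the one the abstract describes, a volume of this shape would then be fed to a fully convolutional network that predicts the final depth map and intensity image.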