Artificial intelligence
Computer science
Robotics
Computer vision
Endoscopy
Mechanism (biology)
Scale (ratio)
Depth map
Image (mathematics)
Surgery
Medicine
Philosophy
Physics
Epistemology
Quantum mechanics
Authors
Ruyu Liu, Zhengzhe Liu, Haoyu Zhang, Guodao Zhang, Zhigui Zuo, Weiguo Sheng
Identifier
DOI:10.1109/icra48891.2023.10161549
Abstract
During intestinal endoscopy, doctors are limited to a one-way pass through the intestine, and even advanced surgical robots equipped with depth sensors, such as stereo or ToF endoscopes, provide only sparse and incomplete depth information. However, dense, accurate, and instant depth estimation during endoscopy is vital for doctors to judge the 3D location and shape of intestinal tissue, and it directly affects the human-robot interaction between doctors and surgical robots, for example when deciding how to move the probe next. In this paper, we present a deep-learning-based dense depth completion method for intestinal endoscopy. We utilize the scattered depth measurements from depth sensors to compensate for the scarcity of visual features in the intestine and design a multi-scale confidence prediction network to extract dense geometric depth features. We then introduce a structure awareness module, based on the self-attention mechanism, into the depth completion network to enhance the geometric and texture features of the intestine. We also present a virtual multi-modal RGB-D intestine dataset and conduct comprehensive experiments on a total of three intestine datasets. The experimental results clearly demonstrate that our method outperforms state-of-the-art methods on all metrics in all intestinal environments.
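The abstract describes fusing an RGB frame with sparse sensor depth and refining the fused features with a self-attention-based structure awareness module before predicting a dense depth map. The PyTorch sketch below is a minimal, hypothetical illustration of that idea, not the authors' architecture; the class names, channel sizes, and network layout are assumptions made only for illustration.

```python
# Hypothetical sketch (not the paper's code): concatenate RGB with a sparse
# depth map, encode, refine with spatial self-attention, and decode a dense
# depth map, in the spirit of the structure awareness module described above.
import torch
import torch.nn as nn

class StructureAwareBlock(nn.Module):
    """Self-attention over the spatial positions of a fused RGB-D feature map."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) -> token sequence (B, H*W, C)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)  # residual connection
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class SparseDepthCompletion(nn.Module):
    """Toy network: RGB (3 ch) + sparse depth (1 ch) -> dense depth (1 ch)."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(4, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.structure = StructureAwareBlock(channels)
        self.decode = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, rgb: torch.Tensor, sparse_depth: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, sparse_depth], dim=1)  # fuse image and sparse depth
        x = self.encode(x)
        x = self.structure(x)                      # enhance geometry/texture cues
        return self.decode(x)

# Usage on dummy data: a 3x32x32 RGB frame with ~5% valid sparse depth samples.
rgb = torch.rand(1, 3, 32, 32)
depth = torch.rand(1, 1, 32, 32) * (torch.rand(1, 1, 32, 32) < 0.05)
dense = SparseDepthCompletion()(rgb, depth)
print(dense.shape)  # torch.Size([1, 1, 32, 32])
```

The paper's actual method additionally predicts multi-scale confidences for the sparse depth; this sketch omits that component and only shows the sparse-to-dense fusion and self-attention refinement at a single scale.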