Computer Science
Artificial Intelligence
Point Cloud
RGB Color Model
Computer Vision
Transformer (machine learning)
Modality
Feature Extraction
Fusion
Pose
Pixel
Pattern Recognition
Engineering
Linguistics
Chemistry
Philosophy
Voltage
Polymer Chemistry
Electrical Engineering
Authors
Junying Hong, Hong-Bo Zhang, Jinghua Liu, Qing Lei, Lijie Yang, Ji-Xiang Du
Identifier
DOI:10.1016/j.inffus.2024.102227
Abstract
6D pose estimation has garnered significant attention and research. RGB images and point clouds converted from RGB-D images provide complementary color and geometry information, making them the mainstream data sources for object 6D pose estimation. However, because RGB images and point clouds belong to different dimensional spaces and have different distribution characteristics, fusing these two complementary data sources remains a key technical challenge for 6D pose estimation. In contrast to prior approaches that simply concatenate separately processed RGB images and point clouds, this work introduces a Transformer-based multi-modal fusion network to address this challenge. More precisely, we build a Transformer-based pixel-wise feature extraction architecture to optimize feature extraction from RGB images and point clouds. Subsequently, we investigate various multi-modal feature fusion methods to process these features, enabling deeper fusion of the complementary data. Additionally, during the experimental phase, we design a 6D pose estimation network based on depth prediction to assess the impact of point cloud accuracy on the multi-modal fusion module. Finally, the proposed method is verified on four datasets: LineMOD, Occlusion LineMOD, MP6D, and YCB-Video. Experimental results show that the proposed method outperforms comparable methods on these datasets.
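The core idea the abstract describes, pairing per-pixel appearance features with per-point geometry features before fusing them, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `pixelwise_fuse`, the feature shapes, and the simple concatenation strategy are all assumptions chosen to show the common pixel-wise fusion pattern, where each 3D point (back-projected from a depth pixel) is matched to the RGB feature at its source pixel.

```python
import numpy as np

def pixelwise_fuse(rgb_feat, point_feat, pixel_idx):
    """Illustrative pixel-wise fusion (not the paper's network).

    rgb_feat:   (C_rgb, H, W) dense feature map from an RGB branch
    point_feat: (N, C_pt) features from a point-cloud branch
    pixel_idx:  (N, 2) integer (row, col) of each point's source pixel
    returns:    (N, C_rgb + C_pt) fused per-point features
    """
    rows, cols = pixel_idx[:, 0], pixel_idx[:, 1]
    # Gather the RGB feature vector at each point's pixel location,
    # then concatenate it with that point's geometric feature.
    gathered = rgb_feat[:, rows, cols].T  # (N, C_rgb)
    return np.concatenate([gathered, point_feat], axis=1)

# Toy example: 4 points sampled from an 8x8 feature map.
rng = np.random.default_rng(0)
rgb_feat = rng.standard_normal((32, 8, 8))    # 32 appearance channels
point_feat = rng.standard_normal((4, 16))     # 16 geometry channels
pixel_idx = np.array([[0, 0], [3, 5], [7, 7], [2, 1]])
fused = pixelwise_fuse(rgb_feat, point_feat, pixel_idx)
print(fused.shape)  # (4, 48)
```

In the paper's setting, the concatenation step would be replaced by the Transformer-based fusion module, which lets appearance and geometry features attend to each other rather than simply sitting side by side.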