Keywords: RGB color model, artificial intelligence, computer science, robot, segmentation, computer vision, exploitation, robotics, pixel, feature (linguistics), computer security, linguistics, philosophy
Authors
Hongkun Tian, Kechen Song, Ling Tong, Yi Man, Yunhui Yan
Source
Journal: IEEE/ASME Transactions on Mechatronics
[Institute of Electrical and Electronics Engineers]
Date: 2023-11-14
Volume/Issue: 29 (3): 2032-2043
Citations: 2
Identifier
DOI: 10.1109/tmech.2023.3327865
Abstract
Unknown object instance-aware segmentation (UOIS) is crucial for the operation of autonomous robots, especially in unstructured scenes containing unknown objects. Although RGB and Depth are the primary data sources for robots, existing studies do not fully exploit them because of the inherent information differences between RGB (2-D appearance) and Depth (3-D geometry). It is therefore challenging to fully exploit the features of both modalities to segment instances of unknown objects. This article proposes a collaborative weight assignment (CWA) fusion strategy for fusing RGB and Depth (RGB-D). It contains three carefully designed modules: a motivational pixel weight assignment (MPWA) module, a dual-direction spatial weight assignment (DSWA) module, and a stepwise global feature aggregation (SGFA) module. Our method adaptively assigns fusion weights between the two modalities to better exploit RGB-D features across multiple dimensions. On the popular GraspNet-1Billion and WISDOM RGB-D robot manipulation datasets, the proposed method achieves performance competitive with state-of-the-art techniques, demonstrating that our approach makes good use of the information from both modalities. Furthermore, we deployed the fusion model on an AUBO i5 robotic manipulation platform to test its segmentation and grasp-optimization performance on unknown objects. Qualitative and quantitative experiments show that the proposed method achieves robust performance.
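The abstract describes adaptively assigning per-pixel fusion weights between the RGB and Depth modalities. The sketch below illustrates the general idea of pixel-wise weighted RGB-D feature fusion with numpy; the function name `cwa_fuse` and all internals are assumptions for illustration, not the paper's actual MPWA/DSWA/SGFA implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cwa_fuse(rgb_feat, depth_feat):
    """Toy pixel-wise weighted fusion of two (C, H, W) feature maps.

    A per-pixel weight map is derived jointly from both modalities and
    used to blend them adaptively, loosely mirroring the idea of
    assigning fusion weights between RGB and Depth. The weighting
    scheme here is a hypothetical placeholder.
    """
    # Per-pixel evidence: mean activation across channels of each modality.
    joint = rgb_feat.mean(axis=0) + depth_feat.mean(axis=0)  # (H, W)
    # Squash to (0, 1): the RGB share of each pixel's fused feature.
    w = sigmoid(joint)
    # Convex combination per pixel; w broadcasts over the channel axis.
    return w[None] * rgb_feat + (1.0 - w[None]) * depth_feat

rgb = np.random.rand(8, 4, 4)    # stand-in RGB feature map
depth = np.random.rand(8, 4, 4)  # stand-in Depth feature map
fused = cwa_fuse(rgb, depth)
print(fused.shape)  # (8, 4, 4)
```

Because the weight lies in (0, 1), each fused value is a convex combination of the two modality features at that pixel, so neither modality is ever fully discarded.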