
Deep Fusion for Multi-Modal 6D Pose Estimation

Authors
Shifeng Lin, Zunran Wang, Shenghao Zhang, Yonggen Ling, Chenguang Yang
Source
Journal: IEEE Transactions on Automation Science and Engineering [Institute of Electrical and Electronics Engineers]
Volume/Issue: 21 (4): 6540-6549  Citations: 7
Identifier
DOI: 10.1109/tase.2023.3327772
Abstract

6D pose estimation with a single modality runs into difficulties caused by the limitations of each modality, such as RGB information on textureless objects and depth information on reflective objects. This can be improved by exploiting the complementarity between modalities. Most previous methods consider only the correspondence between point clouds and RGB images and directly extract the features of the two corresponding modalities for fusion; they ignore the information of the modality itself and are negatively affected by erroneous background information when more features are introduced for fusion. To enhance the complementarity between multiple modalities, we propose a neighbor-based cross-modality attention mechanism for multi-modal 6D pose estimation. "Neighbor-based" means that the RGB features of multiple neighboring points are used for fusion, which expands the receptive field. The cross-modality attention mechanism leverages the similarities between features of the different modalities to guide modal feature fusion, which reduces the negative impact of incorrect background information. Moreover, we design several features comparing the rendered image with the original image to obtain the confidence of the pose estimation result. Experimental results on the LM, LM-O, and YCB-V datasets demonstrate the effectiveness of our method. A video is available at https://www.youtube.com/watch?v=ApNBcX6NEGs.

Note to Practitioners — Introducing information from surrounding points during multi-modal fusion improves the performance of 6D pose estimation. For example, the RGB pixels corresponding to some points on the object may lack rich texture features even though their neighbors carry them. However, most RGBD modal-fusion methods for 6D pose estimation consider only the direct correspondence between RGB images and point clouds for feature fusion, which may bring in redundant or erroneous background information when neighbor information is introduced.
In this paper, we propose a cross-modal attention mechanism based on neighbor information. By using the information of one modality to weight the neighbor information of the other modality in both the encoding and decoding stages, the receptive field is expanded and the complementarity between the modalities is enhanced. Experiments demonstrate its effectiveness. In addition, we provide a pose confidence estimator for the predicted pose: features extracted from the image rendered under the predicted pose and from the real image are fed to a decision tree. The experimental results show that wrong estimates can be eliminated with high accuracy and recall. The 6D pose confidence can serve as a reference for real-world grasping. However, the current method can only estimate the pose of objects with known models; in the future, we will consider extending it to unseen objects.
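The core fusion idea above can be sketched in a few lines. This is a minimal NumPy illustration of neighbor-based cross-modal attention under assumed shapes, not the paper's implementation: the function name, the dot-product similarity, and the final concatenation are all assumptions made for the sketch.

```python
import numpy as np

def neighbor_cross_modal_fusion(point_feats, rgb_feats, neighbor_idx):
    """Fuse each point's geometric feature with an attention-weighted sum of
    the RGB features of its image-space neighbors (illustrative sketch).

    point_feats:  (N, C) per-point geometric features
    rgb_feats:    (M, C) per-pixel RGB features
    neighbor_idx: (N, k) indices into rgb_feats for each point's k neighbors
    """
    neigh = rgb_feats[neighbor_idx]                      # (N, k, C)
    # Similarity between a point's own feature and each neighbor's RGB feature;
    # the modality's own information thus determines the fusion weights.
    scores = np.einsum('nc,nkc->nk', point_feats, neigh)
    scores /= np.sqrt(point_feats.shape[1])
    # Softmax over the k neighbors: dissimilar neighbors (e.g. background
    # pixels) receive low weight, which is the claimed robustness mechanism.
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    rgb_ctx = np.einsum('nk,nkc->nc', w, neigh)          # (N, C)
    # Concatenate geometric and fused RGB context features.
    return np.concatenate([point_feats, rgb_ctx], axis=1)
```

Because the weights come from cross-modal similarity rather than image distance alone, a neighbor that looks like background contributes little even when it is spatially close.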
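The confidence estimator compares the image rendered under the predicted pose against the real image. The sketch below shows what such comparison features might look like, with simple thresholds standing in for the learned decision tree; the specific features (mean absolute difference, correlation, mask coverage) and the threshold values are illustrative assumptions, not the paper's feature set.

```python
import numpy as np

def render_compare_features(rendered, observed, mask):
    """Hypothetical comparison features between the image rendered under the
    predicted pose and the observed image, restricted to the render mask."""
    r = rendered[mask].astype(np.float64)
    o = observed[mask].astype(np.float64)
    mad = np.abs(r - o).mean()                       # mean absolute difference
    corr = np.corrcoef(r, o)[0, 1]                   # appearance correlation
    coverage = mask.mean()                           # fraction of image masked
    return np.array([mad, corr, coverage])

def pose_is_confident(feats, mad_max=30.0, corr_min=0.5):
    # Stand-in for the learned decision tree: fixed thresholds on the features.
    mad, corr, _ = feats
    return bool(mad < mad_max and corr > corr_min)
```

A correct pose makes the rendering agree with the observation inside the mask (low difference, high correlation), so a tree trained on such features can reject wrong estimates before grasping.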
