Data-efficient multimodal human action recognition for proactive human–robot collaborative assembly: A cross-domain few-shot learning approach

Authors
Tianyu Wang, Zhihao Liu, Lihui Wang, Mian Li, Xi Vincent Wang
Source
Journal: Robotics and Computer-Integrated Manufacturing [Elsevier BV]
Volume/issue: 89: 102785 Cited by: 4
Identifier
DOI: 10.1016/j.rcim.2024.102785
Abstract

With the recent vision of Industry 5.0, the cognitive capability of robots plays a crucial role in advancing proactive human–robot collaborative assembly. As a basis of mutual empathy, the understanding of a human operator's intention has been studied primarily through human action recognition. Existing deep learning-based methods demonstrate remarkable efficacy in handling information-rich data such as physiological measurements and videos, where the latter represents the more natural perception input. However, deploying these methods in new, unseen assembly scenarios requires first collecting abundant case-specific data, which leads to significant manual effort and poor flexibility. To address this issue, this paper proposes a novel cross-domain few-shot learning method for data-efficient multimodal human action recognition. A hierarchical data fusion mechanism is designed to jointly leverage skeletons, RGB images and depth maps, which carry complementary information. A temporal CrossTransformer is then developed to enable action recognition with a very limited amount of data. Lightweight domain adapters are integrated to further improve generalization with fast finetuning. Extensive experiments on a real car engine assembly case show the superior performance of the proposed method over the state of the art in both accuracy and finetuning efficiency. Real-time demonstrations and an ablation study further indicate the potential of early recognition, which benefits robot procedure generation in practical applications. In summary, this paper contributes to the rarely explored realm of data-efficient human action recognition for proactive human–robot collaboration.
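The core few-shot idea in the abstract is to match a query action against only a handful of labeled support samples over fused multimodal features. The following is a minimal illustrative sketch only, not the paper's method: it uses simple concatenation in place of the hierarchical fusion and a prototype-plus-cosine-similarity matcher in place of the temporal CrossTransformer; all function names and the toy feature vectors are assumptions made for this example.

```python
import math

def l2_normalize(v):
    # Scale a feature vector to unit length (guard against all-zero input).
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def fuse(skeleton, rgb, depth):
    # Toy late fusion: concatenate per-modality feature vectors, then normalize.
    # The paper's hierarchical fusion mechanism is more elaborate than this.
    return l2_normalize(list(skeleton) + list(rgb) + list(depth))

def cosine(a, b):
    # Cosine similarity of two unit-normalized vectors is just the dot product.
    return sum(x * y for x, y in zip(a, b))

def prototypes(support):
    # support: {action_label: [fused feature vectors]} with very few samples
    # per class; each class prototype is the normalized mean of its samples.
    protos = {}
    for label, feats in support.items():
        dim = len(feats[0])
        mean = [sum(f[i] for f in feats) / len(feats) for i in range(dim)]
        protos[label] = l2_normalize(mean)
    return protos

def classify(query, protos):
    # Assign the query to the class whose prototype it is most similar to.
    return max(protos, key=lambda lbl: cosine(query, protos[lbl]))

if __name__ == "__main__":
    # Two support samples for "pick", one for "place" (hypothetical features).
    support = {
        "pick": [fuse([1.0, 0.0], [0.9, 0.1], [0.2]),
                 fuse([0.8, 0.1], [1.0, 0.0], [0.1])],
        "place": [fuse([0.0, 1.0], [0.1, 0.9], [0.8])],
    }
    protos = prototypes(support)
    query = fuse([0.9, 0.05], [0.95, 0.0], [0.15])
    print(classify(query, protos))
```

The prototype-based matcher is a common baseline in few-shot learning; the advantage in this setting is that adding a new assembly action only requires computing one more prototype from a few samples, with no retraining.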
