Lifelong robotic visual-tactile perception learning

Authors
Jiahua Dong, Yang Cong, Gan Sun, Tao Zhang
Source
Journal: Pattern Recognition [Elsevier], Vol. 121, Article 108176. Cited by: 8
Identifier
DOI: 10.1016/j.patcog.2021.108176
Abstract

Lifelong machine learning can learn a sequence of consecutive robotic perception tasks by transferring previous experience. However, 1) most existing lifelong-learning-based perception methods exploit only visual information for robotic tasks, neglecting the tactile sensing modality, which captures discriminative material properties; and 2) they cannot explore the intrinsic relationships across different modalities or the common characterization shared among the tasks of each modality, due to the large divergence between heterogeneous feature distributions. To address these challenges, we propose a new Lifelong Visual-Tactile Learning (LVTL) model for continuous robotic visual-tactile perception tasks, which fully explores the latent correlations in both intra-modality and cross-modality aspects. Specifically, a modality-specific knowledge library is developed for each modality to explore common intra-modality representations across different tasks, while narrowing the intra-modality mapping divergence between the semantic and feature spaces via an auto-encoder mechanism. Moreover, a sparse-constraint-based modality-invariant space is constructed to capture underlying cross-modality correlations and to identify the contribution of each modality to newly arriving visual-tactile tasks. We further propose a modality consistency regularizer to efficiently align heterogeneous visual and tactile samples, which ensures semantic consistency between the different modality-specific knowledge libraries. After deriving an efficient model optimization strategy, we conduct extensive experiments on several representative datasets to demonstrate the superiority of our LVTL model. Evaluation experiments show that the proposed model significantly outperforms existing state-of-the-art methods, with improvements of about 1.16%–15.36% under different lifelong visual-tactile perception scenarios.
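The abstract combines three ingredients: a per-modality auto-encoder (the modality-specific knowledge library), a shared modality-invariant space with a sparsity constraint, and a consistency regularizer aligning the two modalities. The toy NumPy sketch below shows how such an objective could be assembled for one batch of paired visual-tactile samples. It is an illustrative assumption, not the paper's exact formulation: the dimensions, the tied-weight encoder/decoder, and the function `lvtl_loss` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X, W):
    # modality-specific mapping into the shared (modality-invariant) space
    return X @ W

def decode(Z, W):
    # tied-weight decoder, standing in for the auto-encoder mechanism
    return Z @ W.T

def lvtl_loss(Xv, Xt, Wv, Wt, lam=0.1, mu=0.01):
    """Toy LVTL-style objective (illustrative only):
    intra-modality auto-encoder reconstruction
    + cross-modality consistency in the shared space
    + an L1 sparsity constraint on the shared codes."""
    Zv, Zt = encode(Xv, Wv), encode(Xt, Wt)
    recon = np.mean((decode(Zv, Wv) - Xv) ** 2) + np.mean((decode(Zt, Wt) - Xt) ** 2)
    consistency = np.mean((Zv - Zt) ** 2)   # modality consistency regularizer
    sparsity = np.mean(np.abs(Zv)) + np.mean(np.abs(Zt))
    return recon + lam * consistency + mu * sparsity

# paired visual/tactile samples for one task (toy dimensions)
Xv = rng.normal(size=(32, 64))             # visual features
Xt = rng.normal(size=(32, 48))             # tactile features
Wv = rng.normal(scale=0.1, size=(64, 16))  # visual library -> shared latent
Wt = rng.normal(scale=0.1, size=(48, 16))  # tactile library -> shared latent
print(lvtl_loss(Xv, Xt, Wv, Wt))
```

In this sketch the consistency term only makes sense because both encoders target the same 16-dimensional shared space; the L1 term is what the abstract's "sparse constraint" would control in the modality-invariant space.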