Lifelong robotic visual-tactile perception learning

Modality (human–computer interaction), artificial intelligence, computer science, perception, lifelong learning, pattern, feature (linguistics), visual perception, encoder, stimulus modality, computer vision, psychology, pedagogy, social science, neuroscience, sociology, linguistics, philosophy, operating systems
Authors
Jiahua Dong, Yang Cong, Gan Sun, Tao Zhang
Source
Journal: Pattern Recognition [Elsevier]
Volume/Issue: 121: 108176 · Cited by: 26
Identifier
DOI:10.1016/j.patcog.2021.108176
Abstract

Lifelong machine learning can learn a sequence of consecutive robotic perception tasks by transferring previous experiences. However, 1) most existing lifelong learning based perception methods take advantage only of visual information for robotic tasks, neglecting another important sensing modality, touch, which captures discriminative material properties; 2) they cannot explore the intrinsic relationships across different modalities or the common characterization among different tasks of each modality, owing to the distinct divergence between heterogeneous feature distributions. To address the above challenges, we propose a new Lifelong Visual-Tactile Learning (LVTL) model for continuous robotic visual-tactile perception tasks, which fully explores the latent correlations in both intra-modality and cross-modality aspects. Specifically, a modality-specific knowledge library is developed for each modality to explore common intra-modality representations across different tasks, while narrowing the intra-modality mapping divergence between semantic and feature spaces via an auto-encoder mechanism. Moreover, a sparse-constraint-based modality-invariant space is constructed to capture underlying cross-modality correlations and to identify the contribution of each modality to newly arriving visual-tactile tasks. We further propose a modality consistency regularizer to efficiently align the heterogeneous visual and tactile samples, which ensures semantic consistency between the different modality-specific knowledge libraries. After deriving an efficient model optimization strategy, we conduct extensive experiments on several representative datasets to demonstrate the superiority of our LVTL model. Evaluation experiments show that the proposed model significantly outperforms existing state-of-the-art methods, with about 1.16%∼15.36% improvement under different lifelong visual-tactile perception scenarios.
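To make the three ingredients named in the abstract more concrete, the following is a minimal, hypothetical PyTorch sketch of how a modality-specific auto-encoder "knowledge library", a sparsity-constrained modality-invariant projection, and a cross-modality consistency term could be combined into one loss. All module names, dimensions, and loss weights here are assumptions for illustration only; this is not the authors' released LVTL implementation or its exact objective.

```python
# Illustrative sketch only: names, dimensions, and weights are assumptions,
# not the authors' LVTL code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityKnowledgeLibrary(nn.Module):
    """Modality-specific auto-encoder: maps features to a semantic space and
    reconstructs them, keeping the semantic-to-feature mapping divergence small
    (hypothetical structure standing in for the paper's knowledge library)."""

    def __init__(self, feat_dim: int, sem_dim: int):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, sem_dim)   # feature -> semantic
        self.decoder = nn.Linear(sem_dim, feat_dim)   # semantic -> feature

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        x_rec = self.decoder(z)
        return z, x_rec


def lvtl_style_losses(vis_feat, tac_feat, vis_lib, tac_lib, shared_proj,
                      l1_weight=1e-3, cons_weight=1.0):
    """Combines (i) intra-modality reconstruction, (ii) an L1-sparse shared
    projection as a stand-in for the modality-invariant space, and (iii) a
    simple paired-sample alignment as the consistency regularizer."""
    zv, vis_rec = vis_lib(vis_feat)
    zt, tac_rec = tac_lib(tac_feat)

    # (i) Intra-modality auto-encoder reconstruction losses.
    rec_loss = F.mse_loss(vis_rec, vis_feat) + F.mse_loss(tac_rec, tac_feat)

    # (ii) Sparse modality-invariant space: project both modalities with a
    # shared matrix and penalise its L1 norm (an assumed sparse constraint).
    sv, st = shared_proj(zv), shared_proj(zt)
    sparsity = sum(p.abs().sum() for p in shared_proj.parameters())

    # (iii) Modality consistency regularizer: align paired visual and tactile
    # embeddings (here a simple mean-squared alignment of paired samples).
    consistency = F.mse_loss(sv, st)

    return rec_loss + l1_weight * sparsity + cons_weight * consistency


if __name__ == "__main__":
    # Toy usage with random "paired" visual / tactile features.
    vis = torch.randn(8, 128)
    tac = torch.randn(8, 64)
    vis_lib = ModalityKnowledgeLibrary(128, 32)
    tac_lib = ModalityKnowledgeLibrary(64, 32)
    shared_proj = nn.Linear(32, 16, bias=False)
    loss = lvtl_style_losses(vis, tac, vis_lib, tac_lib, shared_proj)
    loss.backward()
    print(f"toy combined loss: {loss.item():.4f}")
```

In this sketch the shared projection is what ties the two modalities together: each modality keeps its own encoder/decoder pair (its "library"), while the L1-penalised shared layer and the alignment term encourage a common, sparse representation across vision and touch.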