Computer science
Artificial intelligence
Human-computer interaction
Imitation
Robot
Task (project management)
Multi-task learning
Reinforcement learning
Robot learning
Task analysis
Robotics
Simplicity (philosophy)
Sequence learning
Online learning
Control (management)
Active learning (machine learning)
Learning curve
Gripper
Computer vision
Machine learning
Manipulator
Wired glove
Humanoid robot
Authors
Qi Ye, Qingtao Liu, Siyun Wang, Yihui Mao, Yu Cui, Ke Jin, H. Chen, Xuan Cai, Gaofeng Li, Jiming Chen
Source
Journal: Science Robotics [American Association for the Advancement of Science (AAAS)]
Date: 2026-01-28
Volume/Issue: 11 (110): eady2869
Citations: 1
Identifier
DOI: 10.1126/scirobotics.ady2869
Abstract
Achieving humanlike dexterity with anthropomorphic multifingered robotic hands requires precise finger coordination. However, dexterous manipulation remains highly challenging because of high-dimensional action-observation spaces, complex hand-object contact dynamics, and frequent occlusions. To address this, we drew inspiration from the human paradigm of learning by observation and practice and proposed a two-stage framework that learns visual-tactile integration representations via self-supervised learning from human demonstrations. We then trained a unified multitask policy through reinforcement learning and online imitation learning. This decoupled design enabled the robot to acquire generalizable manipulation skills using only monocular images and simple binary tactile signals. With the unified policy, we built a multifingered-hand manipulation system that performs multiple complicated tasks with low-cost sensing. It achieved an 85% success rate across five complex tasks and 25 objects and further generalized to three unseen tasks that share similar hand-object coordination patterns with the training tasks.
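The abstract's low-cost sensing setup (monocular images plus simple binary tactile signals feeding a unified policy) can be illustrated with a minimal sketch. All names, dimensions, and the linear policy head below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def encode_observation(image_feat: np.ndarray, tactile_bits: np.ndarray) -> np.ndarray:
    """Fuse a monocular image embedding with binary fingertip-contact
    signals into one observation vector (a stand-in for the paper's
    visual-tactile integration representation)."""
    assert tactile_bits.ndim == 1 and set(np.unique(tactile_bits)) <= {0, 1}
    return np.concatenate([image_feat, tactile_bits.astype(np.float32)])

def policy_action(obs: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """A linear stand-in for the unified multitask policy head,
    mapping the fused observation to joint commands in [-1, 1]."""
    return np.tanh(weights @ obs)

rng = np.random.default_rng(0)
image_feat = rng.standard_normal(128).astype(np.float32)  # visual embedding (assumed size)
tactile_bits = np.array([1, 0, 0, 1, 1])                  # one contact bit per fingertip
obs = encode_observation(image_feat, tactile_bits)
weights = rng.standard_normal((22, obs.size)).astype(np.float32)  # 22-DOF hand (assumed)
action = policy_action(obs, weights)
print(obs.shape, action.shape)  # (133,) (22,)
```

The point of the sketch is the cheap observation space: a single RGB feature vector and five contact bits suffice as policy input, with no depth cameras or high-resolution tactile arrays.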