Robot
Task (project management)
Computer science
Process (computing)
Human–computer interaction
Artificial intelligence
Robot learning
Feature (linguistics)
Engineering
Mobile robot
Systems engineering
Linguistics
Operating system
Philosophy
Authors
Yi Sun,Weitian Wang,Yi Chen,Yunyi Jia
Identifier
DOI:10.1109/tsmc.2020.3005340
Abstract
Human–robot collaborative assembly is a next-generation manufacturing paradigm in which the complementary strengths of humans and robots can be fully leveraged. To enable robots to collaborate effectively with humans, similar to human–human collaboration, robot learning from human demonstrations has been adopted to learn assembly tasks. However, existing feature-based approaches require a critical feature design and extraction process and are usually complex to extend with task contexts. Existing learning-based approaches usually require a large amount of manual effort for data labeling and also rarely consider task contexts. This article proposes a dual-input deep learning approach that incorporates task contexts into the robot's learning from human demonstrations to assist humans in assembly. In addition, online automated data labeling during human demonstration is proposed to reduce the training effort. Experimental validation on a realistic human–robot model-car assembly task with safety-concerned execution designs demonstrates the effectiveness and advantages of the proposed approaches.
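A dual-input architecture of the kind the abstract describes can be sketched as two encoder branches, one for demonstration features and one for a task-context vector, whose embeddings are fused before an action-prediction head. The following minimal NumPy sketch is illustrative only; all layer sizes, the action count, and the fusion-by-concatenation choice are assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative dual-input model: one branch encodes human-demonstration
# features, the other encodes a task-context vector; the embeddings are
# concatenated and mapped to scores over hypothetical assembly actions.
rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """Create a random weight/bias pair for a dense layer (untrained)."""
    return rng.normal(0.0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

W_demo, b_demo = linear(16, 8)   # demonstration-feature branch
W_ctx, b_ctx = linear(4, 8)      # task-context branch
W_out, b_out = linear(16, 5)     # fused head over 5 hypothetical actions

def dual_input_forward(demo_feat, task_ctx):
    h_demo = np.maximum(demo_feat @ W_demo + b_demo, 0.0)  # ReLU branch 1
    h_ctx = np.maximum(task_ctx @ W_ctx + b_ctx, 0.0)      # ReLU branch 2
    fused = np.concatenate([h_demo, h_ctx], axis=-1)       # dual-input fusion
    return fused @ W_out + b_out                           # action scores

scores = dual_input_forward(rng.normal(size=16), rng.normal(size=4))
print(scores.shape)  # (5,)
```

In practice the branches would be trained end to end (e.g. with a deep learning framework) on the automatically labeled demonstration data; the sketch only shows how the two inputs are combined.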