Computer Science
Perception
Robotics
Artificial Intelligence
Human-Computer Interaction
Computer Vision
Psychology
Neuroscience
Authors
Sepehr Valipour,Camilo Perez,Martin Jagersand
Source
Venue: Intelligent Robots and Systems (IROS)
Date: 2017-09-01
Citations: 21
Identifiers
DOI:10.1109/iros.2017.8206106
Abstract
Visual scene understanding is a crucial skill for robots, yet difficult to achieve. Recently, Convolutional Neural Networks (CNNs) have shown success in this task. However, there is still a gap between their performance on image datasets and in real-world robotics scenarios. In particular, a-priori training covers only a bounded set of object categories, while in many unstructured tasks new objects are encountered. We present a novel paradigm for incrementally improving a robot's visual perception through active human-robot interaction. In this paradigm, the user introduces novel objects to the robot by means of pointing and voice commands. Given this information, the robot visually explores the object and adds images of it to re-train the perception module. Our method combines state-of-the-art CNNs from offline batch learning with human guidance, robot exploration, and incremental on-line learning.
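The interaction loop sketched in the abstract (user introduces an object, robot collects views, perception module is updated incrementally) can be illustrated in miniature. The sketch below is an assumption-laden toy, not the authors' system: a nearest-class-mean classifier stands in for the CNN, and the class and method names (`IncrementalPerception`, `introduce_object`) are hypothetical.

```python
# Toy sketch of the incremental-learning loop from the abstract.
# A nearest-class-mean classifier is a stand-in for the CNN perception
# module; all names here are illustrative, not from the paper.

class IncrementalPerception:
    """Keeps a running mean feature vector per object class."""

    def __init__(self):
        # label -> (running sum of feature vectors, sample count)
        self.stats = {}

    def introduce_object(self, label, features):
        """User points at a new object; robot explores it and adds views.

        `features` is a list of feature vectors (one per collected image).
        Updating the running sums is the incremental on-line step: no
        batch retraining over all previously seen objects is needed.
        """
        dim = len(features[0])
        total, count = self.stats.get(label, ([0.0] * dim, 0))
        for f in features:
            total = [a + b for a, b in zip(total, f)]
            count += 1
        self.stats[label] = (total, count)

    def predict(self, feature):
        """Return the label whose mean feature is closest (squared L2)."""
        def sq_dist(label):
            total, count = self.stats[label]
            mean = [v / count for v in total]
            return sum((m - x) ** 2 for m, x in zip(mean, feature))
        return min(self.stats, key=sq_dist)
```

In the real system the update would fine-tune CNN weights on the newly collected images rather than average hand-made features, but the control flow, introduce, explore, update, predict, is the same.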