Keywords
Grasping
Artificial intelligence
Computer science
Convolutional neural network
Clutter
Closing (real estate)
Computer vision
Set (abstract data type)
Computation
Robot
Object (grammar)
Generative model
Generative grammar
Algorithm
Radar
Telecommunications
Programming language
Law
Political science
Authors
Douglas Morrison, Peter Corke, Jürgen Leitner
Identifier
DOI: 10.15607/rss.2018.xiv.021
Abstract
This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping. Our proposed Generative Grasping Convolutional Neural Network (GG-CNN) predicts the quality and pose of grasps at every pixel. This one-to-one mapping from a depth image overcomes limitations of current deep-learning grasping techniques by avoiding discrete sampling of grasp candidates and long computation times. Additionally, our GG-CNN is orders of magnitude smaller while detecting stable grasps with equivalent performance to current state-of-the-art techniques. The light-weight and single-pass generative nature of our GG-CNN allows for closed-loop control at up to 50Hz, enabling accurate grasping in non-static environments where objects move and in the presence of robot control inaccuracies. In our real-world tests, we achieve an 83% grasp success rate on a set of previously unseen objects with adversarial geometry and 88% on a set of household objects that are moved during the grasp attempt. We also achieve 81% accuracy when grasping in dynamic clutter.
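The abstract describes a pixel-wise, single-pass grasp predictor: a small fully convolutional network maps a depth image to per-pixel grasp quality and pose, and the best grasp is read off directly without sampling candidates. The sketch below illustrates that idea in PyTorch; the layer sizes, head names, and the sin/cos angle encoding are illustrative assumptions, not the authors' exact GG-CNN architecture or training setup.

```python
# Minimal sketch of a pixel-wise generative grasp network in the spirit of the
# GG-CNN described in the abstract. Layer sizes and head names are assumptions
# for illustration only. The network maps a 1-channel depth image to four maps
# at the same resolution: grasp quality, cos(2*theta), sin(2*theta), and
# gripper width, so the best grasp can be read off with a simple argmax.

import torch
import torch.nn as nn


class PixelwiseGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Small encoder-decoder so a single forward pass stays lightweight,
        # in keeping with the paper's emphasis on real-time, closed-loop use.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=9, stride=3, padding=4), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 8, kernel_size=3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 32, kernel_size=9, stride=3,
                               padding=3), nn.ReLU(),
        )
        # One 1x1 convolution head per output map.
        self.quality = nn.Conv2d(32, 1, kernel_size=1)
        self.cos2t = nn.Conv2d(32, 1, kernel_size=1)
        self.sin2t = nn.Conv2d(32, 1, kernel_size=1)
        self.width = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, depth):
        feat = self.decoder(self.encoder(depth))
        return self.quality(feat), self.cos2t(feat), self.sin2t(feat), self.width(feat)


if __name__ == "__main__":
    net = PixelwiseGraspNet()
    depth = torch.rand(1, 1, 300, 300)  # one 300x300 depth image (dummy data)
    q, c, s, w = net(depth)
    # Pick the pixel with the highest predicted grasp quality and decode the
    # grasp angle from the sin/cos encoding at that pixel.
    idx = torch.argmax(q.view(-1))
    y, x = divmod(idx.item(), q.shape[-1])
    angle = 0.5 * torch.atan2(s[0, 0, y, x], c[0, 0, y, x])
    print(f"best grasp at pixel ({y}, {x}), angle {angle.item():.2f} rad")
```

Because the forward pass is a single evaluation of a very small network, such a predictor can be re-run on every new depth frame, which is what makes the closed-loop control rate cited in the abstract plausible.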