Grasping
Artificial intelligence
Computer science
Gripper
Computer vision
Pixel
Robustness (evolution)
Ground truth
RGB color model
Engineering
Biochemistry
Mechanical engineering
Gene
Chemistry
Programming language
Authors
Dexin Wang, Chunsheng Liu, Faliang Chang, Nanjun Li, Guangxin Li
Source
Journal: IEEE Transactions on Industrial Electronics
[Institute of Electrical and Electronics Engineers]
Date: 2022-11-01
Volume/Issue: 69 (11): 11611-11621
Citations: 11
Identifier
DOI:10.1109/tie.2021.3120474
Abstract
Machine vision-based planar grasping detection is challenging due to uncertainty about object shape, pose, size, etc. Previous methods mostly focus on predicting discrete gripper configurations and may miss some ground-truth grasp postures. In this article, a pixel-level grasp detection method is proposed, which uses a deep neural network to predict pixel-level gripper configurations on RGB images. First, a novel oriented arrow representation model (OAR-model) is introduced to represent the gripper configuration of parallel-jaw and three-fingered grippers, which can partly improve the applicability to different grippers. Then, the adaptive grasping attribute model is proposed to adaptively represent the grasping attribute of objects, resolving angle conflicts in training and simplifying pixel-level labeling. Lastly, the adaptive feature fusion and grasp-aware network (AFFGA-Net) is proposed to predict pixel-level OAR-models on RGB images. AFFGA-Net improves robustness in unstructured scenarios by using a hybrid atrous spatial pyramid and an adaptive decoder connected in sequence. On the public Cornell dataset and actual objects, the proposed structure achieves 99.09% and 98.0% grasp detection accuracy, respectively. In over 2400 robotic grasp trials, it achieves an average success rate of 98.77% in single-object scenarios and 93.69% in cluttered scenarios. Moreover, AFFGA-Net completes a grasp detection pipeline within 15 ms.
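To make the idea of pixel-level grasp detection concrete, the sketch below shows one common way such per-pixel outputs can be decoded into discrete gripper configurations. This is only an illustrative assumption, not the paper's actual OAR-model or AFFGA-Net implementation: it assumes the network emits per-pixel maps of grasp quality, gripper angle, and opening width, and the function name `decode_pixel_grasps` and its thresholding scheme are hypothetical.

```python
import numpy as np

def decode_pixel_grasps(quality, angle, width, threshold=0.9):
    """Decode per-pixel grasp maps into gripper configurations.

    quality: (H, W) graspability score per pixel in [0, 1]
    angle:   (H, W) gripper rotation per pixel, in radians
    width:   (H, W) gripper opening width per pixel, in pixels
    Returns a list of (row, col, angle, width) tuples for pixels whose
    quality exceeds the threshold, sorted by descending quality.
    """
    rows, cols = np.where(quality >= threshold)
    order = np.argsort(-quality[rows, cols])  # best candidates first
    return [(int(r), int(c), float(angle[r, c]), float(width[r, c]))
            for r, c in zip(rows[order], cols[order])]

# Toy example: a single high-quality pixel at (2, 3)
H, W = 5, 5
q = np.zeros((H, W))
q[2, 3] = 0.95
a = np.full((H, W), np.pi / 4)   # uniform 45-degree grasp angle
w = np.full((H, W), 30.0)        # uniform 30-pixel opening width
grasps = decode_pixel_grasps(q, a, w)
print(grasps[0])  # highest-quality grasp candidate
```

Because every pixel carries its own configuration, dense representations like this avoid the coverage gaps of methods that regress only a handful of discrete grasp rectangles.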