Keywords: Grasping, Artificial intelligence, Computer vision, Robotics, Computer science, Transformer, Object (grammar), Object detection, Manipulator, Engineering, Pattern recognition (psychology), Voltage, Electrical engineering, Programming language
Authors
Zhixuan Liu, Zibo Chen, Shangjin Xie, Wei-Shi Zheng
Identifier
DOI: 10.1109/icra46639.2022.9812001
Abstract
Robotic grasping pose detection that predicts the configuration of the robotic gripper for object grasping is fundamental in robot manipulation. Based on point clouds, most of the existing methods predict grasp pose with the hierarchical PointNet++ backbone, while the non-local geometric information is underexplored. In this work, we address the 7-DoF (6-DoF with the grasp width) grasp detection by introducing a one-stage Transformer-based hierarchical multi-scale model dubbed TransGrasp. Empowered by TransGrasp, the point features are enhanced via acquiring multi-scale shape awareness in the whole scene. By directly modeling the long-range relevance, our pipeline is aware of object contour to avoid collisions and able to apply analogy reasoning for long-distance geometric structures. The evaluation results on the large-scale GraspNet-1Billion dataset demonstrate the effectiveness of the proposed TransGrasp. The real robot experiments on an ABB YUMI robot with an Azure Kinect DK camera and an ABB Smart two-finger gripper show high success rates in both single object and cluttered scenes.
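The abstract does not specify TransGrasp's architecture, but the "long-range relevance" it refers to is the core property of self-attention: every point feature can aggregate information from every other point in the scene, unlike the local neighborhood grouping of PointNet++. The following is a minimal NumPy sketch of single-head self-attention over per-point features; all names, dimensions, and the random projection weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def self_attention(points, d_k=16, seed=0):
    """Single-head self-attention over per-point features.

    points: (N, d) array of point features.
    Returns an (N, d) array in which each point's feature is a
    relevance-weighted mixture of all points (non-local context).
    Random projections stand in for learned weight matrices.
    """
    rng = np.random.default_rng(seed)
    n, d = points.shape
    w_q = rng.standard_normal((d, d_k)) / np.sqrt(d)  # query projection
    w_k = rng.standard_normal((d, d_k)) / np.sqrt(d)  # key projection
    w_v = rng.standard_normal((d, d)) / np.sqrt(d)    # value projection

    q, k, v = points @ w_q, points @ w_k, points @ w_v
    scores = q @ k.T / np.sqrt(d_k)                   # (N, N) pairwise relevance
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)           # softmax over all points
    return attn @ v                                   # global feature mixing

# Toy scene: 128 points with 32-dim features (e.g. from a point encoder).
feats = np.random.default_rng(1).standard_normal((128, 32))
out = self_attention(feats)
print(out.shape)  # (128, 32)
```

Because the (N, N) attention matrix spans the whole scene, a point on one side of an object can attend to the opposite contour, which is the kind of global geometric reasoning the abstract credits for collision avoidance.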