Grasping
Artificial intelligence
Computer science
Computer vision
Robotics
Stage (stratigraphy)
Deep learning
Geology
Paleontology
Programming language
Authors
Dujia Wei, Jianmin Cao, Ye Gu
Source
Journal: IEEE Robotics and Automation Letters
Date: 2024-05-28
Volume/Issue: 9 (7): 6512-6519
Citations: 1
Identifier
DOI: 10.1109/lra.2024.3406191
Abstract
Object grasping in cluttered scenes is a practical robotic skill with a wide range of applications. In this paper, we propose a novel maximum graspness metric that effectively extracts high-quality scene grasp points. The graspness scores of a single-view point cloud are generated using the proposed interpolation approach. The graspness model is implemented as a compact encoder-decoder network that takes a depth image as input. In parallel, grasp point features are extracted, then grouped and sampled to predict the approaching vectors and in-plane rotations of the grasp poses using residual point blocks. The proposed model is evaluated on the large-scale GraspNet-1Billion benchmark and outperforms the prior state-of-the-art method by a margin of +4.91 AP across all camera types. In real-world cluttered-scene testing, our approach achieves a grasping success rate of 89.60% using a UR-5 robotic arm and a RealSense camera.
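The two early stages of the pipeline described above, assigning graspness scores to a single-view point cloud by interpolation and then keeping only the highest-scoring grasp points, can be sketched in NumPy. This is a simplified illustration, not the paper's implementation: the function names are hypothetical, and plain nearest-neighbor lookup stands in for the paper's (unspecified here) interpolation approach.

```python
import numpy as np

def interpolate_graspness(query_pts, labeled_pts, labeled_scores):
    """Assign each query point the graspness score of its nearest
    labeled point (a nearest-neighbor stand-in for the paper's
    interpolation approach; names are illustrative)."""
    # Pairwise squared distances, shape (num_query, num_labeled).
    d2 = ((query_pts[:, None, :] - labeled_pts[None, :, :]) ** 2).sum(-1)
    # Index of the closest labeled point for each query point.
    return labeled_scores[d2.argmin(axis=1)]

def top_grasp_points(points, graspness, k=3):
    """Keep the k points with the highest graspness scores."""
    idx = np.argsort(-graspness)[:k]
    return points[idx], graspness[idx]
```

In a full system the scores would come from the encoder-decoder network rather than nearest-neighbor lookup, but the top-k selection step is the same: downstream pose prediction only sees the high-graspness subset of the cloud.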