Computer vision
Artificial intelligence
Computer science
Flexibility (engineering)
Mobile manipulator
Object (grammar)
Monocular
Position (finance)
Monocular vision
Task (project management)
Mobile robot
Key (lock)
Robot
Engineering
Mathematics
Statistics
Economics
Computer security
Systems engineering
Finance
Authors
Zelin Shi,Yue Zhang,Chungang Zhuang
Identifier
DOI:10.1109/iccar52225.2021.9463428
Abstract
In recent years, mobile manipulators have been widely used in industry and services due to their flexibility and efficiency. However, object recognition and localization in unstructured environments, which are key to mobile manipulator grasping, remain challenging. In this paper, a monocular-vision-based grasping approach for a mobile manipulator is proposed. The method obtains the optimal grasping pose of the robot arm by locating a marker, thereby avoiding restrictions on the shape and texture of the object, reducing complexity, and improving locating accuracy. Our method makes three main contributions. First, we calculate the optimal grasping pose of the robot arm from the located markers and a preset grasping position. Second, we divide the grasping task into 2D planar grasping and 3D grasping, and establish a calculation model for each part. Finally, we propose a method to further improve the accuracy. Experimental results show that the 3D grasping error is less than 4 mm and the 2D planar grasping error is less than 1 mm.
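The core idea in the abstract — deriving a grasp pose from a located marker plus a preset grasping position — amounts to composing two rigid-body transforms: the marker's pose in the camera frame (from a fiducial detector) and a fixed marker-to-grasp offset. A minimal sketch of that composition, with hypothetical example values (the paper does not specify the marker type or the offsets used):

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical marker pose in the camera frame, as a fiducial detector
# (e.g. an ArUco-style tag) might report it.
R_cam_marker = np.eye(3)
t_cam_marker = np.array([0.10, 0.00, 0.50])  # metres: 10 cm right, 50 cm ahead

# Preset grasping position relative to the marker (illustrative values,
# not from the paper): the grasp point sits 5 cm and 2 cm off the marker.
R_marker_grasp = np.eye(3)
t_marker_grasp = np.array([0.00, 0.05, 0.02])

# Grasp pose in the camera frame = camera->marker composed with marker->grasp.
T_cam_grasp = pose_to_matrix(R_cam_marker, t_cam_marker) @ \
              pose_to_matrix(R_marker_grasp, t_marker_grasp)

print(T_cam_grasp[:3, 3])  # grasp position in the camera frame
```

In practice the result would still be transformed from the camera frame into the robot-arm base frame via the hand-eye calibration before being sent to the motion planner.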