Keywords
Mechanism (biology)
Artificial intelligence
Dual (grammatical number)
Computer science
Computer vision
Bilayer
Layer (electronics)
Materials science
Art
Physics
Nanotechnology
Literature
Quantum mechanics
Authors
Jianguo Duan, Liwen Zhuang, Qinglei Zhang, Jiyun Qin, Ying Zhou
Identifier
DOI: 10.1177/09544054241249217
Abstract
Visual grasping technology plays a crucial role in robotic applications such as industrial automation, warehousing, and logistics. However, current visual grasping methods face limitations in industrial scenarios: focusing solely on the workspace containing the grasping target deprives the camera of wider environmental context, while monitoring the entire working area introduces irrelevant data that hinders accurate grasping pose estimation. In this paper, we propose a novel approach that combines a global camera with a depth camera for efficient target grasping. Specifically, we introduce a dual-layer detection mechanism based on Faster R-CNN–GRCNN. By augmenting Faster R-CNN with attention mechanisms, the global camera focuses on the workpiece placement area and detects the target object within that region. When the robot receives a command to grasp a workpiece, the improved Faster R-CNN recognizes the workpiece and guides the robot to the target location; the depth camera mounted on the robot then estimates the grasping pose with a Generative Residual Convolutional Neural Network (GRCNN) and executes the grasp. We validate the feasibility and effectiveness of the proposed framework through collaborative assembly experiments using two robotic arms.
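To make the two-stage pipeline concrete, below is a minimal Python sketch of the dual-layer mechanism described in the abstract. It is not the authors' implementation: a stock torchvision Faster R-CNN stands in for their attention-enhanced variant, the GRConvNet class is a placeholder for the Generative Residual Convolutional Neural Network, and names such as locate_workpiece and estimate_grasp are illustrative assumptions.

```python
# Hedged sketch of the dual-layer pipeline: layer 1 (global camera) finds the
# workpiece region; layer 2 (wrist depth camera) estimates the grasp pose.
# GRConvNet, locate_workpiece, and estimate_grasp are illustrative placeholders.

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Layer 1: global camera -> coarse localisation of the target workpiece.
# The paper enhances Faster R-CNN with attention; a stock model is used here.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def locate_workpiece(rgb_global: torch.Tensor, target_label: int):
    """Return the highest-scoring box for target_label, or None.

    rgb_global: float tensor of shape (3, H, W) in [0, 1].
    """
    with torch.no_grad():
        out = detector([rgb_global])[0]
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if label.item() == target_label and score.item() > 0.5:
            return box  # (x1, y1, x2, y2) in global-camera pixels
    return None

# Layer 2: depth camera -> grasp pose via a GRCNN-style network that maps a
# depth crop to per-pixel grasp quality, angle, and gripper-width maps.
class GRConvNet(torch.nn.Module):
    """Placeholder stand-in for the paper's GRCNN architecture."""

    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Conv2d(1, 3, kernel_size=3, padding=1)

    def forward(self, depth):
        feat = self.backbone(depth)
        quality, angle, width = feat[:, 0:1], feat[:, 1:2], feat[:, 2:3]
        return quality, angle, width

grasp_net = GRConvNet().eval()

def estimate_grasp(depth_crop: torch.Tensor):
    """Pick the pixel of highest grasp quality; return (u, v, angle, width).

    depth_crop: float tensor of shape (1, H, W) from the wrist depth camera.
    """
    with torch.no_grad():
        q, ang, w = grasp_net(depth_crop.unsqueeze(0))
    idx = torch.argmax(q.flatten()).item()
    v, u = divmod(idx, q.shape[-1])  # row = idx // W, col = idx % W
    return u, v, ang[0, 0, v, u].item(), w[0, 0, v, u].item()
```

In this reading of the abstract, locate_workpiece would drive the robot toward the detected region, after which estimate_grasp runs on the depth crop of that region to produce the final grasp pose.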