Artificial intelligence
Computer vision
Computer science
Object (grammar)
Tactile sensor
Robot
Convolutional neural network
Position (finance)
Finance
Economics
Authors
Shoujie Li, Haixin Yu, Wenbo Ding, Houde Liu, Linqi Ye, Chongkun Xia, Xueqian Wang, Xiao-Ping Zhang
Identifier
DOI:10.1109/tro.2023.3286071
Abstract
The grasping of transparent objects is challenging but of significance to robots. In this article, a visual–tactile fusion framework for transparent object grasping in complex backgrounds is proposed, which synergizes the advantages of vision and touch and greatly improves the grasping efficiency of transparent objects. First, we propose a multiscene synthetic grasping dataset named SimTrans12K together with a Gaussian-mask annotation method. Next, based on the TaTa gripper, we propose a grasping network named transparent object-grasping convolutional neural network for grasping position detection, which shows good performance in both synthetic and real scenes. Inspired by human grasping, a tactile calibration method and a visual–tactile fusion classification method are designed, which improve the grasping success rate by 36.7% compared with direct grasping and the classification accuracy by 39.1%. Furthermore, a tactile height sensing module and a tactile position exploration module are added to solve the problem of grasping transparent objects in irregular and visually undetectable scenes. The experimental results demonstrate the validity of the framework.
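As a rough illustration of the Gaussian-mask annotation idea mentioned in the abstract, the sketch below renders an annotated grasp position as a 2-D Gaussian heatmap that can serve as a soft training target for the grasp-detection network. The function name, the single-point formulation, and the sigma value are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def gaussian_grasp_mask(height, width, center, sigma=12.0):
    """Render a 2-D Gaussian heatmap centered on an annotated grasp point.

    The peak (value 1.0) marks the labeled grasp position; values decay
    smoothly with distance, giving the detection network a soft target
    instead of a single-pixel label. (Hypothetical sketch, not the
    paper's exact annotation procedure.)
    """
    ys, xs = np.mgrid[0:height, 0:width]
    cx, cy = center
    dist_sq = (xs - cx) ** 2 + (ys - cy) ** 2
    return np.exp(-dist_sq / (2.0 * sigma ** 2))

# Example: a 480x640 annotation mask with the grasp point at pixel (320, 240).
mask = gaussian_grasp_mask(480, 640, center=(320, 240))
print(mask.shape, mask.max())  # (480, 640) 1.0
```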