Computer vision
Artificial intelligence
Computer science
Robustness (evolution)
Tactile sensor
Decoding methods
Coding (social sciences)
Cognitive neuroscience of visual object recognition
Machine vision
Object detection
Perception
Object (grammar)
Robot
Pattern recognition (psychology)
Mathematics
Algorithm
Statistics
Neuroscience
Gene
Chemistry
Biology
Biochemistry
Authors
Hongxiang Xue, Fuchun Sun, Haoqiang Yu
Identifier
DOI:10.1109/tim.2023.3301893
Abstract
Perceiving accurate 3D object shape is an essential and challenging task for robotic manipulation, and it is commonly addressed with vision systems. However, vision-based perception suffers from several limitations, especially in manipulation tasks where objects are often occluded by the robotic hand. As an alternative, tactile perception has attracted considerable attention, but due to the low resolution of existing tactile sensors, the density and efficiency of tactile-based 3D reconstruction are limited. To address these problems, this article describes a vision-based tactile sensor with coded markers. By combining a neighborhood structure coding method with a U-Net-based decoding algorithm, the sensor can reconstruct high-density 3D object shapes efficiently. Extensive experimental results demonstrate the promising sensitivity, accuracy, stability, and robustness of the proposed sensor.
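The abstract mentions a U-Net-based decoding algorithm. Below is a minimal sketch, assuming PyTorch, of the generic U-Net encoder-decoder with skip connections that such a decoder builds on; the channel sizes, depth, and the marker-image input / per-pixel depth output are illustrative assumptions, not the authors' actual network, and the neighborhood structure coding step is not modeled here.

# Minimal U-Net sketch (PyTorch). Illustrates the encoder-decoder-with-skips
# pattern only; shapes and channel counts are hypothetical.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full-resolution features
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # e.g., a per-pixel depth map


if __name__ == "__main__":
    # Hypothetical input: a 3-channel image of the coded marker array.
    net = TinyUNet(in_ch=3, out_ch=1)
    depth = net(torch.randn(1, 3, 128, 128))
    print(depth.shape)  # torch.Size([1, 1, 128, 128])

The skip connections let the decoder reuse high-resolution encoder features, which is why this family of networks is a common choice for dense, per-pixel regression tasks such as recovering a contact depth map from marker images.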