Tactile sensor
Computer vision
Artificial intelligence
Computer science
Robotics
Tactile display
Haptic perception
Robotic manipulator
Human–computer interaction
Psychology
Neuroscience
Perception
Authors
Pengwen Xiong, Yuxuan Huang, Yifan Yin, Yu Zhang, Aiguo Song
Source
Journal: Robotica (Cambridge University Press)
Date: 2024-03-05
Volume/Issue: 42(5): 1420-1435
Citations: 2
Identifier
DOI: 10.1017/s0263574724000286
Abstract
Robots equipped with multiple sensors often suffer from weak pairing among the different modalities of information those sensors collect, which leads to poor perception performance during robot interaction. To solve this problem, this paper proposes a Force Vision Sight (FVSight) sensor, which integrates a distributed flexible tactile sensing array with a vision unit. This approach aims to enhance the overall perceptual capability for object recognition. The core idea is to use a single perceptual layer to trigger both tactile images and force-tactile arrays, so that the two heterogeneous tactile modalities remain consistent in the temporal and spatial dimensions, thereby solving the problem of weak pairing between visual and tactile data. Two experiments are designed: object classification and slip detection. A dataset covering 27 objects under deep and shallow presses is collected for classification, and 20 slip experiments on three objects are then conducted. Slip and stationary states are accurately distinguished by a covariance operation on the tactile data. The experimental results demonstrate the reliability of the generated multimodal data and the effectiveness of the proposed FVSight sensor.
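The abstract attributes slip detection to a covariance operation on the force-tactile data but does not spell out the statistic. The sketch below is a minimal, hypothetical interpretation: it assumes the tactile array is sampled as flattened frames of per-taxel forces, summarizes a short window by the mean per-taxel variance (trace of the covariance matrix divided by the taxel count), and thresholds it. The function name `detect_slip`, the window construction, and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def detect_slip(tactile_frames: np.ndarray, threshold: float = 0.05) -> bool:
    """Flag slip from a short window of force-tactile array readings.

    tactile_frames: shape (T, N) -- T consecutive samples of an N-taxel
    force array, flattened per frame.
    threshold: hypothetical decision boundary on the covariance magnitude.
    """
    # Covariance of the taxel signals over the time window: a stationary
    # grasp yields nearly constant taxel forces and a small covariance,
    # while slip produces force fluctuations that inflate the variance.
    cov = np.cov(tactile_frames, rowvar=False)   # (N, N) covariance matrix
    activity = np.trace(cov) / cov.shape[0]      # mean per-taxel variance
    return activity > threshold

# Usage with synthetic data: 50 samples of a 4x4 (16-taxel) array.
rng = np.random.default_rng(0)
stationary = rng.normal(5.0, 0.01, size=(50, 16))                    # steady contact
slipping = stationary + np.cumsum(rng.normal(0, 0.2, (50, 16)), axis=0)  # drifting forces

print(detect_slip(stationary))  # expected: False
print(detect_slip(slipping))    # expected: True
```

The same window-and-threshold structure would apply to whatever statistic the paper actually derives from the covariance matrix; only the summary function would change.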