Neuromorphic engineering
Spiking neural network
Computer science
Artificial neural network
Artificial intelligence
Graph
Neuroscience
Computer vision
Psychology
Theoretical computer science
Authors
Peiliang Wu,Haozhe Zhang,Yao Li,Wenbai Chen,Guowei Gao
Identifiers
DOI:10.1109/toh.2024.3449411
Abstract
Current neuromorphic visual-tactile perception suffers from two issues: limited representational power of the training network and inadequate cross-modal fusion. To address them, we propose a dual network, the visual-tactile spiking graph neural network (VT-SGN), which combines graph neural networks and spiking neural networks to jointly exploit neuromorphic visual and tactile source data. First, the neuromorphic visual-tactile data are expanded spatiotemporally: a taxel-based tactile graph is built in the spatial domain, fully exploiting the irregular spatial structure of tactile information. Next, a method for converting images into graph structures is proposed, allowing vision to be trained with graph neural networks and graph-level visual features to be extracted for fusion with tactile data. Finally, the data are expanded into the time domain using a spiking neural network to train the model via backpropagation. This framework exploits the structural differences between sample instances in the spatial dimension to improve the representational power of spiking neurons while preserving the biodynamic mechanism of the spiking neural network. It also resolves the morphological discrepancy between the two modalities and further leverages the complementary information between vision and touch. To demonstrate that our approach improves the learning of neuromorphic perceptual information, we conducted comprehensive comparative experiments on three datasets, validating the benefits of the proposed VT-SGN framework against state-of-the-art studies.
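The abstract outlines two building blocks: converting regular sensor data (an image) into a graph so it can be processed by a graph neural network, and expanding signals into the time domain with spiking neurons. The paper does not give implementation details, so the following is only a minimal sketch under simple assumptions: pixels as graph nodes with 4-neighbor connectivity (the paper's actual image-to-graph construction may differ), and a basic leaky integrate-and-fire (LIF) update as the spiking dynamic. All function names here are illustrative, not from the paper.

```python
import numpy as np

def image_to_grid_graph(img):
    """Hypothetical image-to-graph conversion: each pixel becomes a node
    whose feature is its intensity; edges connect 4-neighbors.
    Returns (node_features, edge_list)."""
    h, w = img.shape
    nodes = img.reshape(-1, 1).astype(float)   # (h*w, 1) node features
    edges = []
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w:
                edges.append((i, i + 1))       # horizontal neighbor
            if r + 1 < h:
                edges.append((i, i + w))       # vertical neighbor
    return nodes, edges

def lif_step(v, i_in, tau=2.0, v_th=1.0):
    """One leaky integrate-and-fire step: leak the membrane potential,
    integrate the input current, emit a spike and hard-reset when the
    threshold is crossed. A stand-in for the paper's spiking dynamics."""
    v = v / tau + i_in
    spikes = (v >= v_th).astype(float)
    v = v * (1.0 - spikes)                     # reset fired neurons to 0
    return v, spikes
```

In a full model, the tactile taxels would form their own graph in the same way, graph-level features from both modalities would be fused, and the LIF dynamics would unroll the fused features over simulation time steps.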