Computer science
Wearable computer
Sensor fusion
Artificial intelligence
Transformer
Robustness (evolution)
Convolutional neural network
Machine learning
Feature extraction
Pattern recognition (psychology)
Engineering
Chemistry
Voltage
Embedded system
Electrical engineering
Gene
Biochemistry
Authors
Zhenzhen Quan,Qingshan Chen,Wei Wang,Moyan Zhang,Xiang Li,Yujun Li,Zhi Liu
Identifier
DOI:10.1109/jsen.2023.3337367
Abstract
Multimodal sensors, including vision sensors and wearable sensors, offer valuable complementary information for accurate recognition tasks. Nonetheless, the heterogeneity among sensor data from different modalities presents a formidable challenge in extracting robust multimodal information amidst noise. In this paper, we propose an innovative approach, named the semantic-aware multimodal transformer fusion decoupled knowledge distillation (SMTDKD) method, which guides video data recognition not only through the information interaction between different wearable-sensor data, but also through the information interaction between visual-sensor data and wearable-sensor data, improving the robustness of the model. To preserve the temporal relationships within wearable-sensor data, the SMTDKD method converts them into 2D image data. Furthermore, a transformer-based multimodal fusion module is designed to capture diverse feature information from distinct wearable-sensor modalities. To mitigate modality discrepancies and encourage similar semantic features, graph cross-view attention maps are constructed across various convolutional layers to facilitate feature alignment. Additionally, semantic information is exchanged among the teacher network, the student network, and BERT-encoded labels. To obtain more comprehensive knowledge transfer, the decoupled knowledge distillation loss is utilized, thereby enhancing the generalization of the network. Experimental evaluations conducted on three multimodal datasets, namely UTD-MHAD, Berkeley-MHAD, and MMAct, demonstrate the superior performance of the proposed SMTDKD method over state-of-the-art human action recognition methods.
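The decoupled knowledge distillation loss mentioned in the abstract refers to the DKD formulation (Zhao et al., CVPR 2022), which splits the classic KD objective into a target-class term (TCKD) and a non-target-class term (NCKD) that can be weighted independently. The sketch below is a minimal PyTorch illustration of that standard formulation only; the weights `alpha`, `beta`, the temperature `T`, and how this term is combined with the cross-view attention and BERT-label losses in SMTDKD are assumptions not stated in the abstract.

```python
import torch
import torch.nn.functional as F

def dkd_loss(student_logits, teacher_logits, target, alpha=1.0, beta=8.0, T=4.0):
    """Decoupled knowledge distillation: alpha * TCKD + beta * NCKD.
    alpha, beta, T are illustrative defaults, not values from the paper."""
    # One-hot mask marking the ground-truth class of each sample.
    gt_mask = torch.zeros_like(student_logits).scatter_(1, target.unsqueeze(1), 1).bool()

    s_prob = F.softmax(student_logits / T, dim=1)
    t_prob = F.softmax(teacher_logits / T, dim=1)

    # TCKD: KL divergence between binary (target vs. non-target) probability masses.
    s_bin = torch.stack([(s_prob * gt_mask).sum(1), (s_prob * ~gt_mask).sum(1)], dim=1)
    t_bin = torch.stack([(t_prob * gt_mask).sum(1), (t_prob * ~gt_mask).sum(1)], dim=1)
    tckd = F.kl_div(torch.log(s_bin + 1e-8), t_bin, reduction="batchmean") * (T ** 2)

    # NCKD: KL divergence over non-target classes only; the target logit is
    # suppressed with a large negative offset before the softmax.
    s_nt = F.log_softmax(student_logits / T - 1000.0 * gt_mask, dim=1)
    t_nt = F.softmax(teacher_logits / T - 1000.0 * gt_mask, dim=1)
    nckd = F.kl_div(s_nt, t_nt, reduction="batchmean") * (T ** 2)

    return alpha * tckd + beta * nckd
```

Decoupling the two terms lets the non-target distribution, which carries most of the "dark knowledge" about inter-class similarity, be weighted more heavily than in vanilla KD, which is the generalization benefit the abstract alludes to.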