Computer Science
Emotion Recognition
Artificial Intelligence
Pattern Recognition (Psychology)
Speech Recognition
Authors
Weiguang Wang, Jian Lian, Chuanjie Xu
Identifier
DOI:10.1142/s0129065725500698
Abstract
This study aims to develop a multimodal driver emotion recognition system that accurately identifies a driver’s emotional state during the driving process by integrating facial expressions, ElectroCardioGram (ECG) and ElectroEncephaloGram (EEG) signals. Specifically, this study proposes a model that employs a Conformer for analyzing facial images to extract visual cues related to the driver’s emotions. Additionally, two Autoformers are utilized to process ECG and EEG signals. The embeddings from these three modalities are then fused using a cross-attention mechanism. The integrated features from the cross-attention mechanism are passed through a fully connected layer and classified to determine the driver’s emotional state. The experimental results demonstrate that the fusion of visual, physiological and neurological modalities significantly improves the reliability and accuracy of emotion detection. The proposed approach not only offers insights into the emotional processes critical for driver assistance systems and vehicle safety but also lays the foundation for further advancements in emotion recognition area.
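The abstract describes a three-branch pipeline: a Conformer encodes facial images, two Autoformers encode ECG and EEG, and the resulting embeddings are fused with cross-attention before a fully connected classifier. The following is a minimal NumPy sketch of that fusion step only; the token counts, embedding size, pooling choice, and number of emotion classes are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context, d_k):
    # query: (n_q, d) tokens from one modality attending to
    # context: (n_c, d) tokens from another modality
    scores = query @ context.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ context  # (n_q, d)

rng = np.random.default_rng(0)
d = 8  # embedding dimension (assumed)
face = rng.standard_normal((4, d))  # stand-in for Conformer output tokens
ecg = rng.standard_normal((6, d))   # stand-in for Autoformer (ECG) output
eeg = rng.standard_normal((6, d))   # stand-in for Autoformer (EEG) output

# Face tokens attend to each physiological stream; mean-pool and concatenate
fused = np.concatenate([
    cross_attention(face, ecg, d).mean(axis=0),
    cross_attention(face, eeg, d).mean(axis=0),
    face.mean(axis=0),
])  # (3 * d,)

# Fully connected layer producing class logits (5 emotion classes assumed)
W = rng.standard_normal((3 * d, 5))
pred = int(np.argmax(softmax(fused @ W)))
```

The key property of the cross-attention step is that each modality's representation is re-weighted by its relevance to the others before fusion, rather than the embeddings being naively concatenated.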