Electroencephalography (EEG)
Computer science
Pattern recognition (psychology)
Artificial intelligence
Feature (linguistics)
Feature extraction
Speech recognition
Emotion recognition
Fusion
Neuroscience
Psychology
Linguistics
Philosophy
Authors
Xiaoman Wang, Jianwen Zhang, Chunhua He, Heng Wu, Lianglun Cheng
Identifier
DOI:10.1109/jiot.2023.3320269
Abstract
Emotions are complex, and people vary greatly in how accurately they recognize their own emotions and those of others. With advances in computer science and neuroscience, there is growing interest in using automated techniques to help people identify emotions. Bio-electrical signals have proven effective for emotion detection, but acquiring conventional electrocardiogram (ECG) and EEG recordings requires specialized medical equipment, which is expensive, uncomfortable, and inconvenient due to the large number of electrodes and the hair-covered scalp. In this article, a novel emotion recognition method based on the feature fusion of single-lead EEG and ECG signals is proposed, using a long short-term memory (LSTM)-MLP-based model and a CNN-based model for feature fusion and classification, respectively, with fivefold cross-validation for evaluation. The ECG and EEG signals of 15 participants were collected in five states: 1) happy; 2) relaxed; 3) calm; 4) sad; and 5) afraid, each elicited using music selected by the participants themselves. Various time-domain features, frequency-domain features, and nonlinear features were extracted from the ECG and EEG signals. Experimental results demonstrate that the emotion recognition accuracy on signals captured by the proposed device reaches 92.08% with the CNN model, and improves to 95.07% with the LSTM-MLP feature fusion model. The results of the ablation experiment indicate that the feature fusion approach does improve recognition accuracy. These results demonstrate that the proposed device and emotion recognition approach are effective and feasible.
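The pipeline described in the abstract (per-trial feature extraction from each modality, fusion of the two feature vectors, fivefold cross-validation) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature subset, concatenation-style fusion, and helper names are assumptions, and the paper's actual fusion uses a learned LSTM-MLP model.

```python
import numpy as np

def time_domain_features(signal):
    """Illustrative time-domain features from a 1-D signal window.
    (The paper also extracts frequency-domain and nonlinear features.)"""
    return np.array([
        signal.mean(),                    # mean amplitude
        signal.std(),                     # standard deviation
        np.sqrt(np.mean(signal ** 2)),    # RMS
        np.mean(np.abs(np.diff(signal)))  # mean absolute first difference
    ])

def fivefold_indices(n_samples, seed=0):
    """Shuffle sample indices and split them into five folds
    for cross-validation."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), 5)

# Example trial: synthetic single-lead EEG and ECG windows.
eeg = np.sin(np.linspace(0, 8 * np.pi, 512))
ecg = np.cos(np.linspace(0, 4 * np.pi, 512))

# Simplest possible fusion: concatenate per-modality feature vectors.
fused = np.concatenate([time_domain_features(eeg), time_domain_features(ecg)])
print(fused.shape)  # (8,)

# Fivefold split over, say, 100 labeled trials.
folds = fivefold_indices(100)
print([len(f) for f in folds])  # [20, 20, 20, 20, 20]
```

In practice each fold would serve once as the test set while a classifier (the CNN or LSTM-MLP model in the paper) is trained on the remaining four, and the five accuracies are averaged.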