Computer science
Electroencephalography (EEG)
Softmax function
Convolutional neural network
Speech recognition
Residual
Pattern recognition (psychology)
Artificial intelligence
Emotion classification
Support vector machine
Feature extraction
Multivariate statistics
Machine learning
Psychology
Algorithm
Psychiatry
Authors
V. Padhmashree,Abhijit Bhattacharyya
Identifier
DOI:10.1016/j.knosys.2021.107867
Abstract
Understanding the expression of human emotional states plays a prominent role in interactive multimodal interfaces, affective computing, and the healthcare sector. Emotion recognition from electroencephalogram (EEG) signals offers a simple, inexpensive, compact, and precise solution. This paper proposes a novel four-stage method for human emotion recognition using multivariate EEG signals. In the first stage, multivariate variational mode decomposition (MVMD) is employed to extract an ensemble of multivariate modulated oscillations (MMOs) from multichannel EEG signals. In the second stage, multivariate time–frequency (TF) images are generated using the joint instantaneous amplitude (JIA) and joint instantaneous frequency (JIF) functions computed from the extracted MMOs. In the third stage, the deep residual convolutional neural network ResNet-18 is customized to extract hidden features from the TF images. Finally, classification is performed by a softmax layer. To further evaluate the performance of the model, various machine learning (ML) classifiers are also employed. The feasibility and validity of the proposed method are verified on two public emotion EEG datasets. The experimental results demonstrate that the proposed method outperforms state-of-the-art emotion recognition methods, with best accuracies of 99.03%, 97.59%, and 97.75% for classifying arousal, dominance, and valence, respectively. Our study reveals that TF-based multivariate EEG signal analysis with a deep residual network achieves superior performance in human emotion recognition.
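The second stage of the pipeline — computing joint instantaneous amplitude and frequency from a multivariate mode — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a multivariate mode is already available (the paper obtains MMOs via MVMD, which is omitted here), uses an FFT-based Hilbert transform to form per-channel analytic signals, and takes the common multivariate definitions of JIA (root sum of squared channel amplitudes) and JIF (amplitude-squared-weighted mean of channel instantaneous frequencies). All function names are illustrative.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT-based Hilbert transform (last axis)."""
    n = x.shape[-1]
    spec = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h, axis=-1)

def joint_ia_if(mode, fs):
    """Joint IA and joint IF of one multivariate modulated oscillation.

    mode: (channels, samples) array -- assumed to be one narrowband
    multivariate mode (in the paper these come from MVMD).
    """
    z = analytic_signal(mode)
    amp = np.abs(z)                                      # per-channel IA
    phase = np.unwrap(np.angle(z), axis=-1)
    inst_f = np.diff(phase, axis=-1) * fs / (2 * np.pi)  # per-channel IF (Hz)
    jia = np.sqrt((amp ** 2).sum(axis=0))                # joint IA
    w = amp[:, :-1] ** 2                                 # amplitude-squared weights
    jif = (w * inst_f).sum(axis=0) / w.sum(axis=0)       # weighted joint IF
    return jia, jif

# Toy check: a 10 Hz oscillation seen on two channels (sine + cosine).
fs = 256
t = np.arange(2 * fs) / fs
mode = np.stack([np.sin(2 * np.pi * 10 * t),
                 np.cos(2 * np.pi * 10 * t)])
jia, jif = joint_ia_if(mode, fs)
```

For this toy mode, JIA stays near sqrt(2) (both channels have unit amplitude) and JIF stays near 10 Hz, which is what one would then rasterize across modes into the TF image fed to the network.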