Computer science
Discriminant
Emotion recognition
Artificial intelligence
Fuse (electrical)
Pattern recognition (psychology)
Speech recognition
Facial expression
Fusion
Reliability (semiconductor)
Sensor fusion
Machine learning
Engineering
Linguistics
Philosophy
Electrical engineering
Power (physics)
Physics
Quantum mechanics
Authors
N. Priyadarshini, J. Aravinth
Identifier
DOI: 10.1109/icsccc58608.2023.10176510
Abstract
Emotion recognition has become an important research topic for solving practical problems faced by humans. The traditional approach of recognizing emotion from facial expressions entails social issues such as privacy threats and limited reliability. A person's real emotional state can be reflected in physiological signals, which are time-series data, and emotion recognition using multi-modal physiological signals provides more discriminative information than any single unimodal signal. In this method, physiological signals including ECG, EEG, respiration, and temperature are segmented, fused, and classified using a Gated Recurrent Unit (GRU) and a Long Short-Term Memory (LSTM) network. A multimodal fusion network is designed to fuse the features of the four physiological signals, which are then classified into three classes: sad, neutral, and happy. The model is evaluated on three emotion datasets: SEED, DREAMER, and WESAD. The results show that the proposed method achieves an average accuracy of 74% for multi-modal fusion using LSTM and 73% using GRU, while a 1D-CNN achieves 61% for multi-modal fusion.
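To illustrate the kind of architecture the abstract describes, below is a minimal PyTorch sketch of feature-level fusion of four physiological signal streams followed by a three-class classifier. The per-modality channel counts, hidden sizes, segment length, and concatenation-based fusion are assumptions for illustration only; the paper's exact network design is not specified here.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Encodes one physiological signal segment (e.g., ECG) with an LSTM
    and returns the final hidden state as a feature vector."""
    def __init__(self, input_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)

    def forward(self, x):               # x: (batch, time, input_dim)
        _, (h_n, _) = self.lstm(x)      # h_n: (1, batch, hidden_dim)
        return h_n.squeeze(0)           # (batch, hidden_dim)

class MultimodalFusionNet(nn.Module):
    """Fuses ECG, EEG, respiration, and temperature features by concatenation
    and classifies the fused vector into sad / neutral / happy."""
    def __init__(self, input_dims, hidden_dim=64, num_classes=3):
        super().__init__()
        self.encoders = nn.ModuleList(
            [ModalityEncoder(d, hidden_dim) for d in input_dims]
        )
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim * len(input_dims), 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, signals):         # signals: list of (batch, time, dim) tensors
        features = [enc(x) for enc, x in zip(self.encoders, signals)]
        fused = torch.cat(features, dim=-1)   # simple feature-level fusion
        return self.classifier(fused)

# Hypothetical example: channel counts for ECG, EEG, respiration, temperature.
dims = [1, 32, 1, 1]
model = MultimodalFusionNet(input_dims=dims)
batch = [torch.randn(8, 100, d) for d in dims]  # 8 segments, 100 time steps each
logits = model(batch)                           # shape: (8, 3)
```

Swapping `nn.LSTM` for `nn.GRU` in the encoder (and unpacking only the hidden state) would give the GRU variant the abstract compares against.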