Computer science
Electroencephalography
Artificial intelligence
Emotion classification
Cognitive psychology
Emotion recognition
Affective computing
Speech recognition
Psychology
Neuroscience
Authors
Yuzhe Zhang, Huan Liu, Dalin Zhang, Xuxu Chen, Tao Qin, Qinghua Zheng
Identifiers
DOI: 10.1109/taffc.2022.3145623
Abstract
Emotion recognition based on electroencephalography (EEG) has attracted significant attention due to its wide range of applications, especially in Human-Computer Interaction (HCI). Previous research treats different segments of EEG signals uniformly, ignoring the fact that emotions are unstable and discrete over an extended period. In this paper, we propose a novel two-step spatial-temporal emotion recognition framework. First, considering that human emotion exhibits not only "short-term continuity" but also "long-term similarity", we propose a hierarchical self-attention network that jointly models local and global temporal information, so as to localize the most relevant segments and reduce the influence of noise at the temporal level. Second, to extract discriminative features at the spatial level and enhance emotion recognition performance, we further employ the squeeze-and-excitation module (SE module) along with a channel correlation loss (CC-Loss) to select the most task-related channels. We also define a new task called emotion localization, which aims to localize fragments with stronger emotions. We evaluate the proposed method on the emotion localization task and the typical emotion recognition task using three publicly available datasets: SEED, DEAP, and MAHNOB-HCI. The experimental results demonstrate that the proposed approach outperforms state-of-the-art methods.
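The abstract only sketches the architecture, so the following PyTorch snippet is a minimal illustration of the channel-selection step: a squeeze-and-excitation block applied across EEG channels, paired with a decorrelation-style penalty on the learned channel weights. The channel/sample counts, the module structure, and the exact CC-Loss formulation are assumptions for illustration and are not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ChannelSE(nn.Module):
    """Squeeze-and-excitation over EEG channels: learns a per-channel weight
    so that task-relevant channels are emphasised before classification."""

    def __init__(self, n_channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, n_channels, n_samples)
        squeeze = x.mean(dim=-1)          # "squeeze": global average over time
        weights = self.fc(squeeze)        # "excitation": per-channel weights in (0, 1)
        return x * weights.unsqueeze(-1), weights


def channel_correlation_loss(weights: torch.Tensor) -> torch.Tensor:
    """Illustrative surrogate for the CC-Loss: penalise pairwise correlation
    between channel weights across a batch, encouraging a compact,
    decorrelated set of selected channels. The paper's exact formulation
    may differ from this sketch."""
    w = weights - weights.mean(dim=0, keepdim=True)
    cov = w.T @ w / max(w.shape[0] - 1, 1)
    std = torch.sqrt(torch.clamp(torch.diag(cov), min=1e-8))
    corr = cov / (std[:, None] * std[None, :])
    off_diag = corr - torch.diag(torch.diag(corr))
    return off_diag.abs().mean()


if __name__ == "__main__":
    # Toy shapes only: 62 channels (as in the SEED montage) and 200 samples per segment.
    x = torch.randn(8, 62, 200)
    se = ChannelSE(n_channels=62)
    reweighted, w = se(x)
    loss = channel_correlation_loss(w)
    print(reweighted.shape, loss.item())
```

In a full pipeline along the lines the abstract describes, the reweighted segments would then feed a temporal model (here, the hierarchical self-attention network) and the correlation penalty would be added to the classification loss with a weighting coefficient.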