Comprehensive Multisource Learning Network for Cross-Subject Multimodal Emotion Recognition

Subjects (document): Computer Science, Emotion Recognition, Artificial Intelligence, World Wide Web
Authors
Chuangquan Chen, Zhencheng Li, Kit Ian Kou, Jie Du, Chen Li, Hongtao Wang, Chi-Man Vong
Source
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence [Institute of Electrical and Electronics Engineers]
Volume/Issue: 9 (1): 365-380 · Cited by: 22
Identifier
DOI: 10.1109/tetci.2024.3406422
Abstract

Electroencephalography (EEG) signals and eye movement signals, which represent internal physiological responses and external subconscious behaviors, respectively, have been shown to be reliable indicators for recognizing emotions. However, integrating these two modalities across multiple subjects presents several challenges: 1) designing a robust consistency metric that balances the consistency and divergences between heterogeneous modalities across multiple subjects; 2) simultaneously considering intra-modality and inter-modality information across multiple subjects; and 3) overcoming individual differences among multiple subjects and generating subject-invariant representations of the multimodal fused features. To address these challenges associated with multisource data (i.e., multiple modalities and subjects), we propose a novel comprehensive multisource learning network (CMSLNet) for cross-subject multimodal emotion recognition. Specifically, an instance-level adaptive robust consistency metric is first designed to better align the information between EEG signals and eye movement signals, identifying their consistency and divergences across various emotions. Subsequently, an attentive low-rank multimodal fusion (Att-LMF) method is developed to account for individual differences and dynamically learn intra-modality and inter-modality information, resulting in highly discriminative fused features. Finally, domain generalization is utilized to extract subject-invariant representations of the fused features, thus adapting to new subjects and enhancing the model's generalization. Through these elaborate designs, CMSLNet effectively incorporates the information from multisource data, thus significantly improving the accuracy and reliability of emotion recognition. Extensive experiments on two public datasets demonstrate the superior performance of CMSLNet. 
CMSLNet achieves high accuracies of 83.15% on the SEED-IV dataset and 87.32% on the SEED-V dataset, surpassing the state-of-the-art methods by 3.62% and 4.60%, respectively.
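The attentive low-rank fusion step described above builds on the general idea of low-rank bilinear fusion: instead of materializing the full outer-product tensor of the two modality vectors, each modality is projected through a small set of rank factors whose elementwise products are summed. The sketch below illustrates that generic low-rank fusion pattern only; it is not the paper's Att-LMF implementation, and the dimensions (310-d EEG differential-entropy features, 33-d eye-movement features) and weight shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_rank_fusion(x_eeg, x_eye, Wa, Wb, rank):
    """Rank-r bilinear fusion of two modality vectors (LMF-style sketch).

    A constant 1 is appended to each input so unimodal terms survive
    the elementwise product; the fused vector is the sum over rank
    factors of the two per-modality projections multiplied elementwise.
    """
    xa = np.append(x_eeg, 1.0)            # shape (da + 1,)
    xb = np.append(x_eye, 1.0)            # shape (db + 1,)
    fused = np.zeros(Wa.shape[1])         # Wa: (rank, out, da + 1)
    for i in range(rank):                 # Wb: (rank, out, db + 1)
        fused += (Wa[i] @ xa) * (Wb[i] @ xb)
    return fused

# Hypothetical dimensions for illustration only
da, db, out, rank = 310, 33, 64, 4
Wa = rng.standard_normal((rank, out, da + 1)) * 0.01
Wb = rng.standard_normal((rank, out, db + 1)) * 0.01
z = low_rank_fusion(rng.standard_normal(da), rng.standard_normal(db), Wa, Wb, rank)
print(z.shape)  # (64,)
```

In the paper, attention weights additionally modulate the contribution of each modality before fusion; here the factors are fixed random matrices purely to show the tensor-free fusion arithmetic.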
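The final stage, extracting subject-invariant representations of the fused features, is commonly realized with a domain-adversarial setup built around a gradient reversal layer: the layer is the identity in the forward pass but negates (and scales) gradients in the backward pass, so the feature extractor is pushed to confuse a subject classifier. The minimal sketch below shows only that generic mechanism under assumed names; it is not drawn from the paper's architecture.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales gradients by -lam in backward.

    In domain-adversarial training, features pass through this layer
    before a subject (domain) classifier, so minimizing the classifier's
    loss maximizes subject confusion for the feature extractor.
    """
    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength, often annealed during training

    def forward(self, x):
        return x

    def backward(self, grad_output):
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
print(grl.forward(x))            # unchanged: [ 1. -2.  3.]
print(grl.backward(np.ones(3)))  # reversed and scaled: [-0.5 -0.5 -0.5]
```

In an autograd framework this would be a custom backward function; the plain-numpy class above just makes the forward/backward asymmetry explicit.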