
Emotion Dictionary Learning With Modality Attentions for Mixed Emotion Exploration

Keywords: emotion classification, emotion recognition, modality (human-computer interaction), cognitive psychology, computer science, artificial intelligence, psychology
Authors
Fang Liu, Pei Yang, Yezhi Shu, Fei Yan, Guanhua Zhang, Yong-Jin Liu
Source
Journal: IEEE Transactions on Affective Computing [Institute of Electrical and Electronics Engineers]
Volume/Issue: 15 (3): 1289-1302; Cited by: 1
Identifier
DOI: 10.1109/taffc.2023.3334520
Abstract

Multi-modal emotion analysis, as an important direction in affective computing, has attracted increasing attention in recent years. Most existing multi-modal emotion recognition studies target a classification task that assigns a single emotion category to a combination of heterogeneous input data, including multimedia signals and physiological signals. In contrast to single-class emotion recognition, a growing body of recent psychological evidence suggests that different discrete emotions may co-exist at the same time, which has promoted the development of mixed-emotion recognition, i.e., identifying a mixture of basic emotions. Although most current studies treat this as a multi-label classification task, in this work we focus on a challenging situation where both positive and negative emotions are present simultaneously, and propose a multi-modal mixed emotion recognition framework named EmotionDict. The key characteristics of EmotionDict are as follows. (1) Inspired by the psychological evidence that such a mixed state can be represented by combinations of basic emotions, we address mixed emotion recognition as a label distribution learning task. An emotion dictionary is designed to disentangle the mixed emotion representation into a weighted sum of a set of basic emotion elements in a shared latent space, together with their corresponding weights. (2) While many existing emotion distribution studies are built on a single type of multimedia signal (such as text, image, audio, or video), we incorporate physiological and overt behavioral multi-modal signals, including electroencephalogram (EEG), peripheral physiological signals, and facial videos, which directly reflect subjective emotions. These modalities have diverse characteristics, as they are related to the central nervous system, the peripheral nervous system, and the motor cortex. (3) We further design auxiliary tasks to learn modality attentions for modality integration. Experiments on two datasets show that our method outperforms existing state-of-the-art approaches on mixed-emotion recognition.
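The abstract outlines three components: an emotion dictionary that expresses a mixed emotion as a weighted sum of basic-emotion elements in a shared latent space, encoders for heterogeneous modalities (EEG, peripheral physiological signals, facial video), and modality attentions learned through auxiliary tasks. The PyTorch sketch below is one way these pieces could fit together; every name, dimension, and layer choice (EmotionDictSketch, modality_dims, latent_dim, the linear encoders and scorer) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of the ideas described in the abstract, under assumed shapes:
# a learnable dictionary of basic-emotion atoms in a shared latent space,
# attention-weighted fusion of per-modality latents, and reconstruction of a
# mixed-emotion representation as a weighted sum of dictionary atoms.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionDictSketch(nn.Module):
    def __init__(self, modality_dims, latent_dim=128, num_basic_emotions=6):
        super().__init__()
        # One encoder per modality (e.g. EEG, peripheral signals, facial-video
        # features), each mapping its input into the shared latent space.
        self.encoders = nn.ModuleList(
            [nn.Linear(d, latent_dim) for d in modality_dims]
        )
        # Scores each modality's latent vector; the softmax over these scores
        # stands in for the learned modality attentions.
        self.modality_scorer = nn.Linear(latent_dim, 1)
        # The emotion dictionary: one latent "atom" per basic emotion.
        self.dictionary = nn.Parameter(torch.randn(num_basic_emotions, latent_dim))

    def forward(self, inputs):
        # inputs: list of tensors, one per modality, each of shape (B, d_m)
        latents = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, inputs)], dim=1
        )  # (B, M, latent_dim)
        # Modality attentions: weight each modality's contribution.
        attn = F.softmax(self.modality_scorer(latents), dim=1)  # (B, M, 1)
        fused = (attn * latents).sum(dim=1)                     # (B, latent_dim)
        # Label distribution over basic-emotion atoms (non-negative, sums to 1).
        weights = F.softmax(fused @ self.dictionary.t(), dim=-1)  # (B, K)
        # Mixed-emotion representation as a weighted sum of dictionary atoms.
        recon = weights @ self.dictionary                          # (B, latent_dim)
        return weights, recon, fused
```

Under this reading, the softmax weights over dictionary atoms form a label distribution that could be trained against annotated mixed-emotion distributions (e.g., with a KL-divergence loss) alongside a reconstruction term, matching the label-distribution-learning framing of the abstract; the auxiliary tasks mentioned for learning modality attentions are not specified there and are therefore left out of this sketch.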