Disentangled Representation Learning for Multimodal Emotion Recognition

Pattern, Computer science, Modality (human-computer interaction), Artificial intelligence, Linear subspace, Feature learning, Multimodal learning, Encoder, Redundancy (engineering), Representation, Feature, Machine learning, Subspace topology, Natural language processing, Mathematics
Authors
Dingkang Yang, Shuai Huang, Haopeng Kuang, Yangtao Du, Lihua Zhang
Identifier
DOI: 10.1145/3503161.3547754
Abstract

Multimodal emotion recognition aims to identify human emotions from the text, audio, and visual modalities. Previous methods either explore correlations between different modalities or design sophisticated fusion strategies. However, distribution gaps and information redundancy often exist across heterogeneous modalities, so the learned multimodal representations may be unrefined. Motivated by these observations, we propose a Feature-Disentangled Multimodal Emotion Recognition (FDMER) method, which learns common and private feature representations for each modality. Specifically, we design common and private encoders to project each modality into a modality-invariant subspace and a modality-specific subspace, respectively. The modality-invariant subspace aims to capture the commonality among different modalities and sufficiently reduce the distribution gap. The modality-specific subspaces aim to enhance diversity and capture the unique characteristics of each modality. A modality discriminator is then introduced to guide the parameter learning of the common and private encoders in an adversarial manner. We enforce modality consistency and disparity constraints by designing tailored losses for the above subspaces. Furthermore, we present a cross-modal attention fusion module that learns adaptive weights to obtain effective multimodal representations. The final representation is used for different downstream tasks. Experimental results show that FDMER outperforms state-of-the-art methods on two multimodal emotion recognition benchmarks. Moreover, we further verify the effectiveness of our model via experiments on the multimodal humor detection task.
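
The abstract outlines the architecture but gives no implementation detail, so the following is a minimal, hypothetical PyTorch sketch of the described pipeline: per-modality common and private encoders, an adversarial modality discriminator, consistency and disparity constraints, and cross-modal attention fusion. The class and parameter names (FDMERSketch, input_dims, grl_alpha), the use of gradient reversal for the adversarial branch, and the particular loss forms (pairwise MSE for consistency, a cross-correlation orthogonality penalty for disparity) are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass so the
    common encoders learn to fool the modality discriminator (assumed mechanism)."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None


class FDMERSketch(nn.Module):
    """Hypothetical sketch of the FDMER pipeline described in the abstract.
    Encoder depths, feature dimensions, and loss definitions are assumptions."""
    def __init__(self, input_dims, d_model=128, n_heads=4, n_classes=7):
        super().__init__()
        self.modalities = list(input_dims.keys())  # e.g. ["text", "audio", "visual"]
        # Common encoders: project each modality into a shared, modality-invariant subspace.
        self.common_enc = nn.ModuleDict({m: nn.Linear(d, d_model) for m, d in input_dims.items()})
        # Private encoders: project each modality into its own modality-specific subspace.
        self.private_enc = nn.ModuleDict({m: nn.Linear(d, d_model) for m, d in input_dims.items()})
        # Modality discriminator: predicts which modality a common feature came from.
        self.discriminator = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, len(self.modalities)))
        # Cross-modal attention fusion over the disentangled features.
        self.fusion_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, feats, grl_alpha=1.0):
        # feats: dict mapping modality name -> (batch, input_dim) utterance-level features.
        common = {m: self.common_enc[m](feats[m]) for m in self.modalities}
        private = {m: self.private_enc[m](feats[m]) for m in self.modalities}

        # Adversarial branch: gradient reversal pushes common features toward being
        # indistinguishable across modalities.
        disc_logits = {m: self.discriminator(GradientReversal.apply(common[m], grl_alpha))
                       for m in self.modalities}

        # Consistency constraint (assumed here as pairwise MSE between common features).
        consistency = sum(F.mse_loss(common[a], common[b])
                          for i, a in enumerate(self.modalities)
                          for b in self.modalities[i + 1:])
        # Disparity constraint (assumed here as the squared Frobenius norm of the
        # batch-level cross-correlation between common and private features).
        disparity = sum((F.normalize(common[m], dim=-1).t()
                         @ F.normalize(private[m], dim=-1)).pow(2).sum()
                        for m in self.modalities)

        # Fuse all disentangled features with attention, then mean-pool for prediction.
        tokens = torch.stack([common[m] for m in self.modalities] +
                             [private[m] for m in self.modalities], dim=1)
        fused, _ = self.fusion_attn(tokens, tokens, tokens)
        logits = self.classifier(fused.mean(dim=1))
        return logits, disc_logits, consistency, disparity
```

In a training loop, the discriminator logits would be matched against modality labels with a cross-entropy term, and the consistency and disparity terms would be weighted and added to the emotion-recognition task loss; the weighting scheme is likewise an assumption rather than a detail taken from the paper.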