Topics
Pattern, Computer science, Modality (human-computer interaction), Artificial intelligence, Linear subspace, Feature learning, Multimodal learning, Encoder, Redundancy (engineering), Representation (politics), Feature (linguistics), Machine learning, Natural language processing, Pattern recognition (psychology), Mathematics, Politics, Political science, Law, Social science, Linguistics, Philosophy, Geometry, Sociology, Operating system
Authors
Dingkang Yang, Shuai Huang, Haopeng Kuang, Yangtao Du, Lihua Zhang
Identifier
DOI: 10.1145/3503161.3547754
Abstract
Multimodal emotion recognition aims to identify human emotions from text, audio, and visual modalities. Previous methods either explore correlations between different modalities or design sophisticated fusion strategies. However, a serious problem is that distribution gaps and information redundancy often exist across heterogeneous modalities, so the learned multimodal representations may be unrefined. Motivated by these observations, we propose a Feature-Disentangled Multimodal Emotion Recognition (FDMER) method, which learns common and private feature representations for each modality. Specifically, we design common and private encoders to project each modality into modality-invariant and modality-specific subspaces, respectively. The modality-invariant subspace aims to explore the commonality among different modalities and sufficiently reduce the distribution gap. The modality-specific subspaces attempt to enhance the diversity and capture the unique characteristics of each modality. After that, a modality discriminator is introduced to guide the parameter learning of the common and private encoders in an adversarial manner. We enforce modality consistency and disparity constraints by designing tailored losses for the above subspaces. Furthermore, we present a cross-modal attention fusion module that learns adaptive weights for obtaining effective multimodal representations. The final representation is used for different downstream tasks. Experimental results show that FDMER outperforms state-of-the-art methods on two multimodal emotion recognition benchmarks. Moreover, we further verify the effectiveness of our model via experiments on the multimodal humor detection task.
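The abstract describes the architecture at a high level (common and private encoders, an adversarial modality discriminator, consistency/disparity losses, and cross-modal attention fusion) without implementation details. Below is a minimal PyTorch-style sketch of how such components could fit together; the class and parameter names, feature dimensions, the orthogonality-style disparity penalty, and the regression head are all illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of an FDMER-style model. All names, dimensions, and
# loss choices below are assumptions made for illustration; the paper's
# abstract does not specify the exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

MODALITIES = ["text", "audio", "visual"]

class FDMERSketch(nn.Module):
    def __init__(self, in_dims, d=128):
        super().__init__()
        # Per-modality projections to a shared dimensionality d.
        self.proj = nn.ModuleDict({m: nn.Linear(in_dims[m], d) for m in MODALITIES})
        # One common encoder shared by all modalities (modality-invariant subspace).
        self.common_enc = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        # One private encoder per modality (modality-specific subspaces).
        self.private_enc = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d)) for m in MODALITIES
        })
        # Modality discriminator: predicts which modality a common feature came from.
        self.discriminator = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, len(MODALITIES)))
        # Cross-modal attention fusion over the stacked common + private features.
        self.fusion_attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
        self.head = nn.Linear(d, 1)  # assumed emotion regression head

    def forward(self, inputs):
        # inputs: dict modality -> (batch, in_dim) utterance-level features
        common, private = {}, {}
        for m in MODALITIES:
            h = self.proj[m](inputs[m])
            common[m] = self.common_enc(h)
            private[m] = self.private_enc[m](h)

        # Adversarial term: the discriminator guesses the source modality of
        # each common feature; the encoders would be trained to fool it.
        disc_logits = torch.cat([self.discriminator(common[m]) for m in MODALITIES])
        disc_labels = torch.cat([
            torch.full((inputs[m].size(0),), i, dtype=torch.long)
            for i, m in enumerate(MODALITIES)
        ])
        adv_loss = F.cross_entropy(disc_logits, disc_labels)

        # Disparity constraint (assumed form): a soft orthogonality penalty
        # pushing common and private features of the same modality apart.
        disparity_loss = sum(
            (F.normalize(common[m], dim=-1) * F.normalize(private[m], dim=-1)).sum(-1).pow(2).mean()
            for m in MODALITIES
        )

        # Cross-modal attention fusion: treat all subspace features as a token
        # sequence and let attention learn adaptive weights for the fusion.
        tokens = torch.stack(
            [common[m] for m in MODALITIES] + [private[m] for m in MODALITIES], dim=1
        )
        fused, _ = self.fusion_attn(tokens, tokens, tokens)
        representation = fused.mean(dim=1)
        return self.head(representation), adv_loss, disparity_loss

# Example usage with hypothetical feature sizes:
# model = FDMERSketch(in_dims={"text": 768, "audio": 74, "visual": 35})
```

In a full training loop under these assumptions, the adversarial loss would be minimized with respect to the discriminator and maximized with respect to the common encoder (for example via a gradient reversal layer or alternating updates), while a similarity-style consistency loss on the common features and the downstream task loss are added on top.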