Authors
Dou Hu, Xiaolong Hou, Lingwei Wei, Lianxin Jiang, Yang Mo
Identifier
DOI:10.1109/icassp43922.2022.9747397
Abstract
Emotion Recognition in Conversations (ERC) has considerable prospects for developing empathetic machines. For multimodal ERC, it is vital to understand context and fuse modality information in conversations. Recent graph-based fusion methods generally aggregate multimodal information by exploring unimodal and cross-modal interactions in a graph. However, they accumulate redundant information at each layer, limiting the context understanding between modalities. In this paper, we propose a novel Multimodal Dynamic Fusion Network (MM-DFN) to recognize emotions by fully understanding multimodal conversational context. Specifically, we design a new graph-based dynamic fusion module to fuse multimodal context features in a conversation. The module reduces redundancy and enhances complementarity between modalities by capturing the dynamics of contextual information in different semantic spaces. Extensive experiments on two public benchmark datasets demonstrate the effectiveness and superiority of the proposed model.
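The abstract describes a graph over multimodal conversational context in which both unimodal (temporal) and cross-modal interactions are aggregated. The following is a minimal illustrative sketch of that general idea, not the authors' MM-DFN implementation: nodes are (utterance, modality) feature vectors, edges link consecutive utterances within a modality and the modalities of each utterance, and fusion is a simple mean aggregation over neighbors. All function names and the aggregation rule are assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch of graph-based multimodal fusion for ERC.
# NOT the MM-DFN model: edge layout and mean aggregation are
# simplified assumptions for illustration only.

def build_adjacency(num_utts, num_mods):
    """Adjacency over (utterance, modality) nodes.

    Unimodal edges link consecutive utterances within one modality;
    cross-modal edges link the modalities of the same utterance.
    """
    n = num_utts * num_mods
    A = np.zeros((n, n))
    idx = lambda u, m: u * num_mods + m
    for u in range(num_utts):
        for m in range(num_mods):
            if u + 1 < num_utts:                    # unimodal (temporal) edge
                A[idx(u, m), idx(u + 1, m)] = 1
                A[idx(u + 1, m), idx(u, m)] = 1
            for m2 in range(m + 1, num_mods):       # cross-modal edge
                A[idx(u, m), idx(u, m2)] = 1
                A[idx(u, m2), idx(u, m)] = 1
    return A

def fuse(features, A, layers=2):
    """Per layer, average each node's own and neighbors' features."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-normalize
    h = features
    for _ in range(layers):
        h = A_hat @ h
    return h

# Toy input: 4 utterances x 3 modalities (text/audio/visual), 8-dim features.
feats = np.random.default_rng(0).normal(size=(12, 8))
A = build_adjacency(4, 3)
fused = fuse(feats, A)
print(fused.shape)  # (12, 8)
```

In the paper's dynamic fusion module, this plain averaging is replaced by learned, layer-wise aggregation designed to reduce the redundancy that static stacking accumulates; the sketch only shows the shared graph structure over modalities.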