Computer science
Sentiment analysis
Modality (linguistics)
Task (project management)
Artificial intelligence
Benchmarking
Joint (building)
Feature (linguistics)
Domain (mathematics)
Natural language processing
Machine learning
Linguistics
Engineering
Philosophy
Business
Mathematics
Economics
Marketing
Architectural engineering
Chemistry
Management
Polymer chemistry
Pure mathematics
Authors
Yazhou Zhang, Lu Rong, Xiang Li, Rui Chen
Identifier
DOI: 10.1007/978-3-030-99736-6_35
Abstract
Emotion is seen as the external expression of sentiment, while sentiment is the essential nature of emotion. They are tightly entangled with each other in that one helps the understanding of the other, leading to a new research topic, i.e., multi-modal sentiment and emotion joint analysis. There exist two key challenges in this field, i.e., multi-modal fusion and multi-task interaction. Most recent approaches treat them as two independent tasks and fail to model the relationships between them. In this paper, we propose a novel multi-modal multi-task learning model, termed MMT, to generically address such issues. Specifically, two attention mechanisms, i.e., cross-modal and cross-task attention, are designed. Cross-modal attention is proposed to model multi-modal feature fusion, while cross-task attention is designed to capture the interaction between sentiment analysis and emotion recognition. Finally, we empirically show that this method alleviates such problems on two benchmark datasets, while achieving better performance on the main task, i.e., sentiment analysis, with the help of the secondary emotion recognition task.
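The abstract describes two attention mechanisms: cross-modal attention for fusing modality features and cross-task attention for letting the sentiment and emotion tasks inform each other. The sketch below is a minimal, hypothetical PyTorch layout of that idea, not the authors' implementation; the module names, feature dimensions, and the use of standard multi-head attention for both mechanisms are assumptions made only for illustration, assuming pre-extracted text, audio, and visual features.

```python
# Hypothetical sketch (not the paper's code): cross-modal fusion plus
# cross-task interaction for joint sentiment and emotion prediction.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class MMTSketch(nn.Module):
    def __init__(self, dim=256, heads=4, n_sentiment=3, n_emotion=6):
        super().__init__()
        # Cross-modal attention: text queries attend to audio/visual features.
        self.cross_modal = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Task-specific projections for sentiment and emotion representations.
        self.sent_enc = nn.Linear(dim, dim)
        self.emo_enc = nn.Linear(dim, dim)
        # Cross-task attention: each task representation attends to the other.
        self.cross_task = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.sent_head = nn.Linear(dim, n_sentiment)  # main task: sentiment
        self.emo_head = nn.Linear(dim, n_emotion)     # auxiliary task: emotion

    def forward(self, text, audio, visual):
        # Inputs are (batch, seq, dim); fuse audio and visual into the text stream.
        context = torch.cat([audio, visual], dim=1)
        fused, _ = self.cross_modal(text, context, context)

        sent = self.sent_enc(fused)
        emo = self.emo_enc(fused)

        # Sentiment features attend to emotion features and vice versa,
        # modelling the cross-task interaction described in the abstract.
        sent_aware, _ = self.cross_task(sent, emo, emo)
        emo_aware, _ = self.cross_task(emo, sent, sent)

        # Pool over time and predict both tasks jointly.
        return (self.sent_head(sent_aware.mean(dim=1)),
                self.emo_head(emo_aware.mean(dim=1)))


if __name__ == "__main__":
    model = MMTSketch()
    t, a, v = (torch.randn(2, 20, 256) for _ in range(3))
    sent_logits, emo_logits = model(t, a, v)
    print(sent_logits.shape, emo_logits.shape)  # (2, 3) and (2, 6)
```

In this reading, the two task heads are trained jointly so that the auxiliary emotion signal can improve the main sentiment prediction; the actual fusion and interaction details in the MMT paper may differ.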