DOI:10.1145/3614008.3614055
Abstract
Multimodal sentiment analysis aims to predict overall sentiment polarity from multimodal signals, an essential task for many applications. A central challenge is designing a suitable fusion model to integrate heterogeneous information from different modalities. Previous approaches treat sentiment analysis and emotion recognition as two separate tasks, ignoring the correlation between sentiment and emotion. In this paper, we propose a multi-task learning approach to multimodal sentiment analysis and emotion recognition that models the two tasks with a shared-private architecture. First, sentiment analysis and emotion recognition extract their respective features through separate private layers. Second, a feature-sharing layer, consisting of a shared Bi-LSTM network and an inter-sentence attention network, encodes each sentence into a sentence-level semantic representation. Finally, the private and shared features are fused for both sentiment analysis and emotion recognition, improving the multimodal sentiment analysis results. Experimental results on the CMU-MOSEI dataset show that the model performs well and demonstrate the effectiveness of multi-task learning in multimodal sentiment analysis.
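The shared-private design described above can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the paper's implementation: the layer sizes, class counts, pooling choices, and the exact inter-sentence attention formulation are assumptions, since the abstract does not specify them. Each task has a private encoder, both tasks share a Bi-LSTM whose outputs are pooled by a simple attention layer, and each task head consumes the concatenation of its private features with the shared features.

```python
import torch
import torch.nn as nn

class SharedPrivateModel(nn.Module):
    """Hypothetical shared-private multi-task sketch:
    private Bi-LSTM encoders per task + a shared Bi-LSTM
    with attention pooling over sentences."""

    def __init__(self, input_dim=64, hidden_dim=32, n_sentiments=3, n_emotions=6):
        super().__init__()
        # Private layers: one encoder per task.
        self.private_sent = nn.LSTM(input_dim, hidden_dim,
                                    batch_first=True, bidirectional=True)
        self.private_emo = nn.LSTM(input_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
        # Shared Bi-LSTM used by both tasks.
        self.shared = nn.LSTM(input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Simple attention over the shared sentence representations
        # (a stand-in for the paper's inter-sentence attention network).
        self.attn = nn.Linear(2 * hidden_dim, 1)
        # Task heads consume private + shared (fused) features.
        self.sent_head = nn.Linear(4 * hidden_dim, n_sentiments)
        self.emo_head = nn.Linear(4 * hidden_dim, n_emotions)

    def _attn_pool(self, seq):
        # Attention-weighted pooling over the sentence dimension.
        w = torch.softmax(self.attn(seq), dim=1)        # (B, T, 1)
        return (w * seq).sum(dim=1)                     # (B, 2H)

    def forward(self, x):
        # x: (batch, n_sentences, input_dim) fused multimodal features.
        shared_seq, _ = self.shared(x)
        shared_vec = self._attn_pool(shared_seq)
        sent_seq, _ = self.private_sent(x)
        emo_seq, _ = self.private_emo(x)
        sent_vec = sent_seq.mean(dim=1)                 # mean-pool private features
        emo_vec = emo_seq.mean(dim=1)
        # Fuse private and shared features for each task.
        sent_logits = self.sent_head(torch.cat([sent_vec, shared_vec], dim=-1))
        emo_logits = self.emo_head(torch.cat([emo_vec, shared_vec], dim=-1))
        return sent_logits, emo_logits

model = SharedPrivateModel()
x = torch.randn(2, 5, 64)  # 2 videos, 5 sentences each, 64-dim fused features
sent_logits, emo_logits = model(x)
```

In a multi-task setup like this, the two heads would be trained jointly (e.g. with a weighted sum of the two cross-entropy losses), so that gradients from both tasks shape the shared Bi-LSTM while the private encoders remain task-specific.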