Sentiment analysis
Computer science
Benchmark (surveying)
Popularity
Modal verb
Artificial intelligence
Pattern
Relevance (law)
Natural language processing
Multimodal
Machine learning
World Wide Web
Psychology
Sociology
Chemistry
Social psychology
Polymer chemistry
Law
Social science
Geography
Political science
Geodesy
Authors
Ayush Kumar, Jithendra Vepa
Identifier
DOI:10.1109/icassp40776.2020.9053012
Abstract
Multimodal sentiment analysis has recently gained popularity because of its relevance to social media posts, customer service calls and video blogs. In this paper, we address three aspects of multimodal sentiment analysis: (1) cross-modal interaction learning, i.e. how multiple modalities contribute to the sentiment; (2) learning long-term dependencies in multimodal interactions; and (3) fusion of unimodal and cross-modal cues. Of these three, we find that learning cross-modal interactions is beneficial for this problem. We perform experiments on two benchmark datasets, the CMU Multimodal Opinion-level Sentiment Intensity (CMU-MOSI) and CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) corpora. Our approach yields accuracies of 83.9% and 81.1% on these tasks respectively, an absolute improvement of 1.6% and 1.34% over the current state-of-the-art.
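The cross-modal interaction learning described in the abstract can be illustrated with scaled dot-product attention, where features from one modality attend over another before being fused with the unimodal cues. This is a minimal sketch under assumed shapes, not the authors' actual model; the function name, feature dimensions, and concatenation-based fusion are all illustrative assumptions:

```python
import numpy as np

def cross_modal_attention(query_feats, context_feats):
    """Let query-modality features attend over context-modality features.

    query_feats: (T_q, d) array, e.g. word-level text features.
    context_feats: (T_c, d) array, e.g. frame-level audio features.
    Returns a (T_q, d) context summary aligned to the query sequence.
    """
    d = query_feats.shape[-1]
    scores = query_feats @ context_feats.T / np.sqrt(d)      # (T_q, T_c)
    # Numerically stable softmax over the context dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ context_feats                           # (T_q, d)

rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))     # hypothetical text features
audio = rng.normal(size=(7, 8))    # hypothetical audio features

# Cross-modal cue: text attends over audio.
text_attends_audio = cross_modal_attention(text, audio)

# Fuse unimodal and cross-modal cues by concatenation.
fused = np.concatenate([text, text_attends_audio], axis=-1)
print(fused.shape)
```

A full model would typically learn projection matrices for queries, keys and values and feed the fused representation to a sentiment classifier; the sketch only shows how one modality's cues are aligned to another's time steps.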