Computer science
Sentiment analysis
Field (mathematics)
Set (abstract data type)
Information retrieval
Exploit
Artificial intelligence
Social media
Reading (process)
Image (mathematics)
World Wide Web
Computer security
Programming language
Law
Pure mathematics
Mathematics
Political science
Authors
Wenya Guo,Ying Zhang,Xiangrui Cai,Lei Meng,Jufeng Yang,Xiaojie Yuan
Identifier
DOI:10.1109/tmm.2020.3003648
Abstract
The prevailing use of both images and text to express opinions on the web leads to the need for multimodal sentiment recognition. Some commonly used social media data containing short text and few images, such as tweets and product reviews, have been well studied. However, it is still challenging to predict readers' sentiment after they read online news articles, since news articles often have more complicated structures, e.g., longer text and more images. To address this problem, we propose a layout-driven multimodal attention network (LD-MAN) to recognize news sentiment in an end-to-end manner. Rather than modeling text and images individually, LD-MAN uses the layout of online news to align images with the corresponding text. Specifically, it exploits a set of distance-based coefficients to model the image locations and measure the contextual relationship between images and text. LD-MAN then learns the affective representations of the articles from the aligned text and images using a multimodal attention mechanism. Considering the lack of relevant datasets in this field, we collect two multimodal online news datasets, containing a total of 14,566 articles with 56,260 images and 251,202 words. Experimental results demonstrate that the proposed method performs favorably compared with state-of-the-art approaches. We will release all the code, models, and datasets to the community.
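The abstract does not give the exact form of the distance-based coefficients, so the sketch below is only one plausible instantiation: each image is assigned weights over paragraphs that decay with layout distance (inverse-distance here, an assumption), normalized per image. The function name `distance_coefficients` and the inverse-distance formula are hypothetical, not taken from the paper.

```python
import numpy as np

def distance_coefficients(image_positions, num_paragraphs):
    """Layout-driven alignment weights between images and paragraphs.

    image_positions: list of paragraph indices at which each image appears
    in the article layout. Returns an array of shape
    (num_images, num_paragraphs) whose rows sum to 1.
    """
    para_idx = np.arange(num_paragraphs)
    coeffs = np.empty((len(image_positions), num_paragraphs))
    for i, pos in enumerate(image_positions):
        # Paragraphs closer to the image location get larger weights
        # (assumed inverse-distance decay; the paper may use another form).
        coeffs[i] = 1.0 / (1.0 + np.abs(para_idx - pos))
        coeffs[i] /= coeffs[i].sum()  # normalize per image
    return coeffs

# Two images placed at paragraphs 0 and 3 of a 5-paragraph article.
weights = distance_coefficients([0, 3], num_paragraphs=5)
print(weights.shape)  # (2, 5)
```

Such weights could then gate which paragraph features each image attends to in a multimodal attention layer, which is the role the abstract describes for the coefficients.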