Authors
Jianfei Yu, Jing Jiang, Rui Xia
Identifier
DOI:10.1109/taslp.2019.2957872
Abstract
Entity-level (also known as target-dependent) sentiment analysis of social media posts has recently attracted increasing attention; its goal is to predict the sentiment orientation toward each target entity mentioned in a user's post. Most existing approaches to this task rely primarily on the textual content and fail to consider other important data sources (e.g., images, videos, and user profiles) that could enhance these text-based approaches. Motivated by this observation, we study entity-level multimodal sentiment classification in this article and explore the usefulness of images for entity-level sentiment detection in social media posts. Specifically, we propose an Entity-Sensitive Attention and Fusion Network (ESAFN) for this task. First, to capture intra-modality dynamics, ESAFN leverages an effective attention mechanism to generate entity-sensitive textual representations and aggregates them with a textual fusion layer. Next, ESAFN learns an entity-sensitive visual representation with an entity-oriented visual attention mechanism, followed by a gated mechanism that eliminates noisy visual context. Moreover, to capture inter-modality dynamics, ESAFN further fuses the textual and visual representations with a bilinear interaction layer. To evaluate the effectiveness of ESAFN, we manually annotate the sentiment orientation toward each given entity in two recently released multimodal NER datasets, and we show that ESAFN significantly outperforms several highly competitive unimodal and multimodal methods.
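The abstract outlines three fusion components: entity-oriented visual attention, a gate that suppresses noisy visual context, and a bilinear text-image interaction layer. The PyTorch sketch below illustrates how such components might be wired together; all class names, dimensions, and wiring details are illustrative assumptions based only on the abstract, not the authors' released implementation.

```python
# A minimal, hypothetical sketch of the fusion ideas described in the abstract:
# entity-oriented visual attention, a gate over the visual context, and a
# bilinear text-image interaction. Module names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntitySensitiveFusion(nn.Module):
    def __init__(self, d_text=256, d_vis=256, d_out=256):
        super().__init__()
        # Project the entity representation into the visual space to score regions.
        self.vis_query = nn.Linear(d_text, d_vis)
        # Gate deciding how much of the attended visual context to keep.
        self.gate = nn.Linear(d_text + d_vis, d_vis)
        # Bilinear interaction between textual and visual representations.
        self.bilinear = nn.Bilinear(d_text, d_vis, d_out)

    def forward(self, entity_repr, text_repr, vis_regions):
        # entity_repr: (B, d_text)   entity-sensitive textual vector
        # text_repr:   (B, d_text)   fused sentence-level textual vector
        # vis_regions: (B, R, d_vis) region features from an image encoder
        q = self.vis_query(entity_repr).unsqueeze(1)       # (B, 1, d_vis)
        scores = (q * vis_regions).sum(-1)                 # (B, R)
        attn = F.softmax(scores, dim=-1).unsqueeze(-1)     # (B, R, 1)
        vis_repr = (attn * vis_regions).sum(1)             # (B, d_vis)
        # Gated mechanism: filter out noisy visual context for this entity.
        g = torch.sigmoid(self.gate(torch.cat([entity_repr, vis_repr], -1)))
        vis_repr = g * vis_repr                            # (B, d_vis)
        # Bilinear fusion captures inter-modality dynamics.
        return self.bilinear(text_repr, vis_repr)          # (B, d_out)
```

In this sketch, the entity-sensitive textual vectors would come from the attention and textual fusion layers over the post, the region features from an image encoder, and the bilinear output would feed a standard sentiment classification head.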