Computer science
Sentiment analysis
Perspective (graphical)
Construct (Python library)
Artificial intelligence
Information retrieval
Graph
Dual (grammatical number)
Natural language processing
Theoretical computer science
Art
Literature
Programming language
Authors
Di Wang, Changxu Tian, Liang Xiao, Zhao Lin, Lihuo He, Quan Wang
Identifier
DOI: 10.1109/TMM.2023.3321435
Abstract
Aspect-based multimodal sentiment analysis (ABMSA) is an important sentiment analysis task that analyzes aspect-specific sentiment in data with different modalities (usually multimodal data with text and images). Previous works usually ignore the overall sentiment tendency when analyzing the sentiment of each aspect term. However, the overall sentiment tendency is highly correlated with aspect-specific sentiment. In addition, existing methods neglect to explore and make full use of the fine-grained multimodal information closely related to aspect terms. To address these limitations, we propose a dual-perspective fusion network (DPFN) that considers both global and local fine-grained sentiment information in multimodal data. From the global perspective, we use text-image caption pairs to obtain a global representation containing information about the overall sentiment tendencies. From the local fine-grained perspective, we construct two graph structures to explore the fine-grained information in texts and images. Finally, aspect-level sentiment polarities can be obtained by analyzing the combination of global and local fine-grained sentiment information. Experimental results on two multimodal Twitter datasets show that the proposed DPFN model outperforms state-of-the-art methods. The source code is publicly available at https://github.com/cntian0/DPFN.
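To illustrate the dual-perspective idea at a high level, the following is a minimal sketch of combining a global representation (from the text-image caption pair) with a local aspect-specific representation (from the text and image graph structures) and mapping the fused vector to sentiment polarity probabilities. The dimensions, random feature vectors, and concatenation-plus-linear fusion here are purely illustrative assumptions, not the paper's actual DPFN architecture; consult the released source code for the real model.

```python
import math
import random

# Hypothetical dimensions: the paper does not specify these values.
D_GLOBAL, D_LOCAL, N_POLARITIES = 8, 8, 3  # negative / neutral / positive

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def fuse_and_classify(h_global, h_local, weights, bias):
    """Concatenate the global and local representations, apply one
    linear layer, and return aspect-level polarity probabilities."""
    h = h_global + h_local  # list concatenation = feature concatenation
    logits = [sum(w * x for w, x in zip(row, h)) + b
              for row, b in zip(weights, bias)]
    return softmax(logits)

random.seed(0)
# Stand-ins for encoder outputs (random vectors, for illustration only).
h_global = [random.gauss(0, 1) for _ in range(D_GLOBAL)]   # caption-pair view
h_local = [random.gauss(0, 1) for _ in range(D_LOCAL)]     # graph-based view
weights = [[random.gauss(0, 1) for _ in range(D_GLOBAL + D_LOCAL)]
           for _ in range(N_POLARITIES)]
bias = [0.0] * N_POLARITIES

probs = fuse_and_classify(h_global, h_local, weights, bias)
print(probs)  # one probability per sentiment polarity
```

In a trained model the two representations would come from learned text/image encoders and graph modules, and `weights` would be learned; the sketch only shows where the two perspectives meet.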