Authors
Li Yang, Jin-Cheon Na, Jianfei Yu
Identifier
DOI: 10.1016/j.ipm.2022.103038
Abstract
As an emerging task in opinion mining, End-to-End Multimodal Aspect-Based Sentiment Analysis (MABSA) aims to extract all the aspect-sentiment pairs mentioned in a sentence-image pair. Most existing MABSA methods do not explicitly incorporate aspect and sentiment information into their textual and visual representations, and they fail to account for the different contributions of visual representations to each word or aspect in the text. To address these limitations, we propose a multi-task learning framework named Cross-Modal Multitask Transformer (CMMT), which incorporates two auxiliary tasks to learn aspect/sentiment-aware intra-modal representations and introduces a Text-Guided Cross-Modal Interaction Module to dynamically control the contribution of visual information to the representation of each word during inter-modal interaction. Experimental results demonstrate that CMMT consistently outperforms the state-of-the-art approach JML by 3.1, 3.3, and 4.1 absolute percentage points on the three Twitter datasets for the End-to-End MABSA task. Further analysis shows that CMMT is superior to comparison systems in both aspect extraction (AE) and sentiment classification (SC), which should help advance the development of multimodal AE and SC algorithms.
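The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch (in PyTorch) of what a text-guided cross-modal interaction step could look like: each word representation attends over image region features, and a per-word gate computed from the text controls how much visual information is mixed into that word. The module name, dimensions, and gating design below are illustrative assumptions, not the CMMT authors' actual code.

import torch
import torch.nn as nn

class TextGuidedCrossModalInteraction(nn.Module):
    """Illustrative sketch only: per-word gated cross-attention from text to image regions.

    Assumption-based example; not the implementation described in the paper.
    """

    def __init__(self, d_model: int = 768, n_heads: int = 8):
        super().__init__()
        # Text tokens act as queries; visual region features act as keys/values.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # A per-word gate decides how much visual context each word receives.
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_feats: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # text_feats:   (batch, seq_len, d_model)   word-level representations
        # visual_feats: (batch, n_regions, d_model) image region features
        attended, _ = self.cross_attn(text_feats, visual_feats, visual_feats)
        # The gate is conditioned on each word and its attended visual context,
        # so the visual contribution is controlled word by word.
        g = self.gate(torch.cat([text_feats, attended], dim=-1))
        return self.norm(text_feats + g * attended)

if __name__ == "__main__":
    module = TextGuidedCrossModalInteraction(d_model=768, n_heads=8)
    words = torch.randn(2, 20, 768)    # e.g. token features from a text encoder
    regions = torch.randn(2, 49, 768)  # e.g. projected image patch/region features
    out = module(words, regions)
    print(out.shape)  # torch.Size([2, 20, 768])

In this sketch, a gate value near zero leaves a word's representation almost purely textual, while a value near one lets the attended visual context contribute strongly, which is one plausible way to realize the "dynamic control" the abstract describes.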