Computer science
Focus (optics)
Task (project management)
Object (grammar)
Information retrieval
Sentiment analysis
Artificial intelligence
Social media
Natural language processing
World Wide Web
Optics
Physics
Economics
Management
Authors
Hanqian Wu, Siliang Cheng, Jingjing Wang, Shoushan Li, Lian Chi
Identifier
DOI:10.1007/978-3-030-60450-9_12
Abstract
Fueled by the rise of social media, documents on these platforms (e.g., Twitter, Weibo) are increasingly multimodal in nature, containing images in addition to text. To automatically analyze the opinion information within such multimodal data, it is crucial to perform aspect term extraction (ATE) on it. However, research on multimodal ATE has so far been rare. In this study, we go a step further than previous work by proposing a Region-aware Alignment Network (RAN) that aligns text with the object regions appearing in an image for the multimodal ATE task. Experiments on the Twitter dataset demonstrate the effectiveness of our proposed model. Further analysis shows that our model performs better when extracting emotionally polarized aspect terms.
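The abstract does not detail how RAN's text-to-region alignment works internally. As a rough illustration of the general idea only, the sketch below shows one plausible alignment step: each text token attends over detector-extracted object-region features via scaled dot-product attention and receives a visual context vector. All function names, shapes, and the attention formulation here are hypothetical assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def region_aware_alignment(text_feats, region_feats):
    """Hypothetical simplification of a text-to-region alignment step.

    text_feats:   (num_tokens, d)  token embeddings
    region_feats: (num_regions, d) object-region features, e.g. produced
                  by an off-the-shelf object detector (an assumption here)
    Returns a per-token visual context vector, shape (num_tokens, d).
    """
    d = text_feats.shape[-1]
    # Scaled dot-product scores: relevance of each region to each token.
    scores = text_feats @ region_feats.T / np.sqrt(d)   # (T, R)
    weights = softmax(scores, axis=-1)                  # (T, R)
    # Attention-weighted sum of region features = aligned visual context.
    return weights @ region_feats                       # (T, d)

# Toy usage: 5 tokens, 3 detected regions, 64-dim features.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 64))
regions = rng.standard_normal((3, 64))
context = region_aware_alignment(tokens, regions)
print(context.shape)  # (5, 64)
```

In a full multimodal ATE pipeline, such per-token visual context would typically be fused with the token embeddings before sequence labeling; the fusion and tagging layers are omitted here.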