Artificial intelligence
Natural language processing
Sentiment analysis
Modality (human–computer interaction)
Task (project management)
Visualization
Information retrieval
Question answering
Speech recognition
Judgment
Feature (linguistics)
Pattern
Visual attention
Semantics (computer science)
Authors
Xuelin Zhu,Biwei Cao,Shuai Xu,Bo Liu,Jiuxin Cao
Identifier
DOI:10.1007/978-3-030-05710-7_22
Abstract
Recently, many researchers have focused on joint visual-textual sentiment analysis, since it can better capture user sentiment toward events or topics. In this paper, we propose that visual and textual information should differ in their contribution to sentiment analysis. Our model learns a robust joint visual-textual representation by incorporating a cross-modality attention mechanism and semantic embedding learning based on a bidirectional recurrent neural network. Experimental results show that our model outperforms existing state-of-the-art models for sentiment analysis on real-world datasets. In addition, we investigate different variants of the proposed model and analyze the effects of semantic embedding learning and the cross-modality attention mechanism, in order to provide deeper insight into how these two techniques help the learning of a joint visual-textual sentiment classifier.
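The abstract does not give the exact formulation of the cross-modality attention, but the general idea of weighting textual hidden states by their relevance to a visual feature can be sketched as follows. All names, dimensions, and the bilinear scoring form here are illustrative assumptions, not the authors' actual model:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def cross_modality_attention(text_states, visual_feat, W):
    """Illustrative sketch: score each textual hidden state against the
    visual feature with a bilinear form (an assumed choice), pool the
    text states with the resulting weights, and concatenate the pooled
    text vector with the visual feature as a joint representation."""
    scores = text_states @ W @ visual_feat          # one score per token, shape (T,)
    alpha = softmax(scores)                          # attention weights summing to 1
    attended_text = alpha @ text_states              # weighted sum of text states, (d_t,)
    joint = np.concatenate([attended_text, visual_feat])
    return joint, alpha

# toy inputs standing in for BiRNN token states and a CNN image embedding
rng = np.random.default_rng(0)
T, d_t, d_v = 5, 8, 6
text_states = rng.normal(size=(T, d_t))
visual_feat = rng.normal(size=d_v)
W = rng.normal(size=(d_t, d_v))

joint, alpha = cross_modality_attention(text_states, visual_feat, W)
print(joint.shape, alpha.shape)
```

A downstream sentiment classifier would then consume `joint`; in the actual paper the fusion and scoring functions are learned jointly with the semantic embedding, which this static sketch omits.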