Computer Science
Question Answering
Artificial Intelligence
Convolutional Neural Network
Attention Network
Pattern Recognition
Natural Language Processing
Machine Learning
Authors
Yun Liu, Xiaoming Zhang, Qianyun Zhang, Chaozhuo Li, Feiran Huang, Xianghong Tang, Zhoujun Li
Identifier
DOI: 10.1016/j.patcog.2021.107956
Abstract
Visual Question Answering (VQA), an important task at the intersection of vision and language understanding, has attracted wide interest. Previous VQA methods generally use Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to extract visual and textual features respectively, and then explore the correlation between the two to infer the answer. However, CNNs focus mainly on extracting local spatial information, while RNNs concentrate on exploiting sequential structure and long-range dependencies; neither can easily integrate local features with their global dependencies to learn more effective representations of the image and question. To address this problem, we propose a novel model, Dual Self-Attention with Co-Attention networks (DSACA), for VQA. It models the internal dependencies of the spatial and sequential structures separately using a newly proposed self-attention mechanism. Specifically, DSACA contains three main submodules. The visual self-attention module selectively aggregates the visual feature at each region as a weighted sum of the features at all positions. The textual self-attention module automatically emphasizes interdependent word features by integrating associated features among the words of the sentence. The visual-textual co-attention module then explores the close correlation between the visual and textual features learned by the self-attention modules. The three modules are integrated into an end-to-end framework to infer the answer. Extensive experiments on three widely used VQA datasets confirm the favorable performance of DSACA compared with state-of-the-art methods.
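As a rough illustration of the attention pattern the abstract describes, below is a minimal PyTorch sketch of a self-attention block (each position's output is a weighted sum of the features at all positions) and a cross-modal co-attention block (visual queries attend over word features). It assumes a standard scaled dot-product formulation; the module names, dimensions, and wiring are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of self-attention and co-attention as summarized in the
    # abstract; a standard scaled dot-product formulation is assumed throughout.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelfAttention(nn.Module):
        """Aggregates each position's feature as a weighted sum over all positions."""
        def __init__(self, dim):
            super().__init__()
            self.query = nn.Linear(dim, dim)
            self.key = nn.Linear(dim, dim)
            self.value = nn.Linear(dim, dim)
            self.scale = dim ** -0.5

        def forward(self, x):  # x: (batch, positions, dim)
            q, k, v = self.query(x), self.key(x), self.value(x)
            attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
            return attn @ v  # each output position mixes features from all positions

    class CoAttention(nn.Module):
        """Cross-attends one modality's features over the other's."""
        def __init__(self, dim):
            super().__init__()
            self.query = nn.Linear(dim, dim)
            self.key = nn.Linear(dim, dim)
            self.value = nn.Linear(dim, dim)
            self.scale = dim ** -0.5

        def forward(self, visual, textual):  # (batch, regions, dim), (batch, words, dim)
            q = self.query(visual)
            k, v = self.key(textual), self.value(textual)
            attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
            return attn @ v  # each visual region aggregates relevant word features

    # Usage: region and word features share an embedding size here for brevity.
    visual = torch.randn(2, 36, 512)   # e.g. 36 region features per image
    textual = torch.randn(2, 14, 512)  # e.g. 14 word features per question
    v_self = SelfAttention(512)(visual)
    t_self = SelfAttention(512)(textual)
    fused = CoAttention(512)(v_self, t_self)
    print(fused.shape)  # torch.Size([2, 36, 512])

In this sketch, the two self-attention blocks capture global dependencies within each modality before the co-attention block correlates them, mirroring the three-submodule pipeline the abstract outlines; how the fused features are used to predict the answer is left out.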