Beyond Bilinear: Generalized Multimodal Factorized High-Order Pooling for Visual Question Answering

Authors
Zhou Yu, Jun Yu, Chenchao Xiang, Jianping Fan, Dacheng Tao
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems [Institute of Electrical and Electronics Engineers]
Volume/Issue: 29 (12): 5947-5959; Citations: 509
Identifier
DOI: 10.1109/TNNLS.2018.2817340
Abstract

Visual question answering (VQA) is challenging because it requires a simultaneous understanding of both the visual content of images and the textual content of questions. To support the VQA task, we need good solutions to the following three issues: 1) fine-grained feature representations for both the image and the question; 2) multi-modal feature fusion that can capture the complex interactions between multi-modal features; and 3) automatic answer prediction that can account for the complex correlations between multiple diverse answers to the same question. For fine-grained image and question representations, a "co-attention" mechanism is developed using a deep neural network architecture to jointly learn the attentions for both the image and the question, which allows us to effectively suppress irrelevant features and obtain more discriminative representations of the image and the question. For multi-modal feature fusion, a generalized Multi-modal Factorized High-order pooling approach (MFH) is developed to fuse multi-modal features more effectively by sufficiently exploiting their correlations, which further yields superior VQA performance compared with the state-of-the-art approaches. For answer prediction, the KL (Kullback-Leibler) divergence is used as the loss function to precisely characterize the complex correlations between multiple diverse answers with the same or similar meaning, which allows us to achieve a faster convergence rate and slightly better answer-prediction accuracy. A deep neural network architecture is designed to integrate all of these modules into a unified model for achieving superior VQA performance. With an ensemble of our MFH models, we achieve state-of-the-art performance on the large-scale VQA datasets and the runner-up position in the VQA Challenge 2017.
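
The co-attention module jointly attends to image regions and question words. As a rough illustration only, the sketch below shows the question-guided image-attention half in PyTorch; the class name, hidden size, and glimpse count are illustrative assumptions, and the authors' full model additionally attends over question words and uses MFB-style fusion inside the attention, which is omitted here.

```python
# A minimal sketch of question-guided image attention (one half of the
# co-attention idea), assuming glimpse-style soft attention over image
# regions; names and dimensions are illustrative, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionGuidedAttention(nn.Module):
    def __init__(self, img_dim, ques_dim, hidden=512, glimpses=2):
        super().__init__()
        self.proj = nn.Linear(img_dim + ques_dim, hidden)
        self.score = nn.Linear(hidden, glimpses)

    def forward(self, img_feats, ques_feat):
        # img_feats: (batch, regions, img_dim); ques_feat: (batch, ques_dim)
        q = ques_feat.unsqueeze(1).expand(-1, img_feats.size(1), -1)
        h = torch.tanh(self.proj(torch.cat([img_feats, q], dim=2)))
        # Softmax over the region axis gives one attention map per glimpse.
        attn = F.softmax(self.score(h), dim=1)            # (batch, regions, glimpses)
        # Weighted sums of region features, one per glimpse, then concatenated.
        attended = torch.einsum('brg,brd->bgd', attn, img_feats)
        return attended.flatten(start_dim=1)              # (batch, glimpses * img_dim)
```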
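
The MFH fusion cascades several factorized bilinear (MFB) blocks: each block projects the image and question features into a shared space with low-rank factors, multiplies them element-wise (together with the previous block's pre-pooled output), sum-pools over the factor dimension, and applies power and L2 normalization; the pooled outputs of all blocks are concatenated. The following is a minimal PyTorch sketch under assumed dimensions, with dropout omitted; it is not the authors' released implementation.

```python
# A minimal sketch of Multi-modal Factorized Bilinear (MFB) pooling and its
# high-order cascade (MFH), assuming PyTorch and hypothetical dimensions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFB(nn.Module):
    def __init__(self, img_dim, ques_dim, out_dim, factor_k):
        super().__init__()
        self.out_dim, self.k = out_dim, factor_k
        # Low-rank factor projections (expand each modality to out_dim * k).
        self.proj_img = nn.Linear(img_dim, out_dim * factor_k)
        self.proj_ques = nn.Linear(ques_dim, out_dim * factor_k)

    def forward(self, img_feat, ques_feat, prev=None):
        # Element-wise product of the projected features; in MFH the previous
        # block's pre-pooling output is multiplied in as an extra factor.
        joint = self.proj_img(img_feat) * self.proj_ques(ques_feat)
        if prev is not None:
            joint = joint * prev
        # Sum pooling over the factor dimension k.
        pooled = joint.view(-1, self.out_dim, self.k).sum(dim=2)
        # Power normalization followed by L2 normalization.
        pooled = torch.sign(pooled) * torch.sqrt(torch.abs(pooled) + 1e-12)
        pooled = F.normalize(pooled, dim=1)
        return pooled, joint  # joint is forwarded to the next MFH block

class MFH(nn.Module):
    def __init__(self, img_dim, ques_dim, out_dim=1000, factor_k=5, order=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            [MFB(img_dim, ques_dim, out_dim, factor_k) for _ in range(order)])

    def forward(self, img_feat, ques_feat):
        outputs, prev = [], None
        for block in self.blocks:
            pooled, prev = block(img_feat, ques_feat, prev)
            outputs.append(pooled)
        # The final fused feature concatenates all blocks' pooled outputs.
        return torch.cat(outputs, dim=1)
```

With order=1 this reduces to plain MFB; order=2 corresponds to a second-order MFH whose concatenated output would be fed to the answer classifier.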
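
For answer prediction, the multiple annotator answers for each question are converted into a soft target distribution over the answer vocabulary, and the model is trained with a KL-divergence loss rather than cross-entropy against a single hard label. A minimal sketch, assuming logits over a fixed answer vocabulary and pre-computed soft targets:

```python
# A minimal sketch of the KL-divergence answer loss, assuming the annotator
# answers per question have already been converted into a soft target
# distribution (rows summing to 1); the exact weighting is an assumption.
import torch
import torch.nn.functional as F

def kl_answer_loss(logits, target_dist):
    """logits: (batch, num_answers) raw scores;
    target_dist: (batch, num_answers) soft answer distribution."""
    log_probs = F.log_softmax(logits, dim=1)
    # KL(target || prediction), averaged over the batch.
    return F.kl_div(log_probs, target_dist, reduction="batchmean")
```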