Cross-modal knowledge reasoning for knowledge-based visual question answering

Keywords: Computer science · Question answering · Semantic memory · Visual reasoning · Interpretability · Artificial intelligence · Graph · Transitive relation · Cognition · Theoretical computer science · Neuroscience · Combinatorics · Mathematics · Biology
Authors
Jing Yu, Zihao Zhu, Yujing Wang, Weifeng Zhang, Yue Hu, Jianlong Tan
Source
Journal: Pattern Recognition [Elsevier BV]
Volume/article no.: 108: 107563 · Citations: 92
Identifier
DOI: 10.1016/j.patcog.2020.107563
Abstract

• Multiple knowledge graphs from the visual, semantic, and factual views depict multimodal knowledge.
• A memory-based recurrent model performs multi-step knowledge reasoning over graph-structured multimodal knowledge.
• Good interpretability, revealing how knowledge is selected from different modalities.
• Significant improvement over state-of-the-art approaches on three benchmark datasets.

Knowledge-based Visual Question Answering (KVQA) requires external knowledge beyond the visible content to answer questions about an image. This ability is challenging but indispensable for achieving general VQA. One limitation of existing KVQA solutions is that they jointly embed all kinds of information without fine-grained selection, which introduces unexpected noise when reasoning toward the correct answer. How to capture question-oriented and information-complementary evidence remains a key challenge. Inspired by human cognition theory, in this paper we depict an image by multiple knowledge graphs from the visual, semantic, and factual views; the visual graph and semantic graph are regarded as image-conditioned instantiations of the factual graph. On top of these new representations, we re-formulate Knowledge-based Visual Question Answering as a recurrent reasoning process for obtaining complementary evidence from multimodal information. To this end, we decompose the model into a series of memory-based reasoning steps, each performed by a Graph-based Read, Update, and Control (GRUC) module that conducts parallel reasoning over both visual and semantic information. By stacking the modules multiple times, our model performs transitive reasoning and obtains question-oriented concept representations under the constraints of different modalities. Finally, we apply graph neural networks to infer the globally optimal answer by jointly considering all the concepts.
We achieve a new state-of-the-art performance on three popular benchmark datasets, including FVQA, Visual7W-KB and OK-VQA, and demonstrate the effectiveness and interpretability of our model with extensive experiments. The source code is available at: https://github.com/astro-zihao/gruc