UniRaG: Unification, Retrieval, and Generation for Multimodal Question Answering With Pre-Trained Language Models

Authors
Qi Zhi Lim, Chin Poo Lee, Kian Ming Lim, Ahmad Kamsani Samingan
Source
Journal: IEEE Access [Institute of Electrical and Electronics Engineers]
Volume 12, pp. 71505-71519
Identifier
DOI: 10.1109/access.2024.3403101
Abstract

Multimodal Question Answering (MMQA) has emerged as a challenging frontier at the intersection of natural language processing (NLP) and computer vision, demanding the integration of diverse modalities for effective comprehension and response. While pre-trained language models (PLMs) exhibit impressive performance across a range of NLP tasks, the investigation of text-based approaches to address MMQA represents a compelling and promising avenue for further research and advancement in the field. Although recent research has delved into text-based approaches for MMQA, the attained results have been unsatisfactory, which could be attributed to potential information loss during the knowledge transformation processes. In response, a novel three-stage framework named UniRaG is proposed for tackling MMQA, which encompasses unified knowledge representation, context retrieval, and answer generation. At the initial stage, advanced techniques are employed for unified knowledge representation, including LLaVA for image captioning and table linearization for tabular data, facilitating seamless integration of visual and tabular information into textual representation. For context retrieval, a cross-encoder trained on sequence classification is utilized to predict relevance scores for question-document pairs, and a top-k retrieval strategy is employed to retrieve the documents with the highest relevance scores as the contexts for answer generation. Finally, the answer generation stage is facilitated by a text-to-text PLM, Flan-T5-Base, which follows the encoder-decoder architecture with attention mechanisms. During this stage, uniform prefix conditioning is applied to the input text for enhanced adaptability and generalizability. Moreover, contextual diversity training is introduced to improve model robustness by including distractor documents as negative contexts during training.
Experimental results on the MultimodalQA dataset demonstrate the superior performance of UniRaG, surpassing the existing state-of-the-art methods across all scenarios with 67.4% EM and 71.3% F1. Overall, UniRaG showcases robustness and reliability in MMQA, heralding significant advancements in multimodal comprehension and question answering research.
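The abstract does not specify UniRaG's exact linearization template or retrieval code, but the first two stages can be sketched with minimal Python. The "header is value" cell format and the separator choices below are assumptions for illustration, not the paper's actual template; the relevance scores would in practice come from the trained cross-encoder.

```python
def linearize_table(headers, rows):
    """Flatten a table into one text string: each cell becomes
    "header is value", cells joined by ", " and rows by " ; "."""
    return " ; ".join(
        ", ".join(f"{h} is {v}" for h, v in zip(headers, row))
        for row in rows
    )


def top_k_contexts(documents, scores, k=3):
    """Top-k retrieval: keep the k documents with the highest
    relevance scores, best first (ties keep original order)."""
    ranked = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [documents[i] for i in ranked[:k]]
```

For example, a two-column table row `("IEEE Access", "2024")` under headers `("Journal", "Year")` would linearize to `"Journal is IEEE Access, Year is 2024"`, which can then be scored against the question alongside image captions and text passages.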
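The answer-generation stage combines uniform prefix conditioning with contextual diversity training. A minimal sketch of how such a training input might be assembled is shown below; the prefix wording, field names, and distractor-sampling policy are all assumptions, since the abstract states only that a uniform prefix is applied and that distractor documents are mixed in as negative contexts.

```python
import random

# Hypothetical uniform prefix; the paper's exact wording is not given in the abstract.
PREFIX = "answer the question based on the given context."


def build_training_input(question, gold_contexts, distractors, n_neg=2, seed=0):
    """Assemble a Flan-T5-style input: uniform prefix, the question, and a
    shuffled mix of gold contexts plus sampled distractor (negative) contexts."""
    rng = random.Random(seed)
    negatives = rng.sample(distractors, min(n_neg, len(distractors)))
    contexts = gold_contexts + negatives
    rng.shuffle(contexts)  # contextual diversity: model must locate the evidence
    return f"{PREFIX} question: {question} context: " + " ".join(contexts)
```

Shuffling gold and distractor contexts together forces the generator to identify the supporting evidence rather than rely on position, which is one plausible reading of why contextual diversity training improves robustness.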