Relation-Aggregated Cross-Graph Correlation Learning for Fine-Grained Image–Text Retrieval

Computer Science · Relation (database) · Graph · Artificial Intelligence · Feature (linguistics) · Information Retrieval · Feature Learning · Encoder · Focus (optics) · Pattern Recognition (psychology) · Natural Language Processing · Data Mining · Theoretical Computer Science · Physics · Optics · Operating Systems · Linguistics · Philosophy
Authors
Shu‐Juan Peng,Yi He,Xin Liu,Yiu‐ming Cheung,Xing Xu,Zhen Cui
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems [Institute of Electrical and Electronics Engineers]
Volume/Issue: 35 (2): 2194-2207  Cited by: 13
Identifier
DOI:10.1109/tnnls.2022.3188569
Abstract

Fine-grained image-text retrieval has been a hot research topic in bridging vision and language, and its main challenge is how to learn the semantic correspondence across different modalities. Existing methods mainly focus on learning the global semantic correspondence or intramodal relation correspondence in separate data representations, but rarely consider the intermodal relations that interactively provide complementary hints for fine-grained semantic correlation learning. To address this issue, we propose a relation-aggregated cross-graph (RACG) model that explicitly learns the fine-grained semantic correspondence by aggregating both intramodal and intermodal relations, which can be well utilized to guide the feature correspondence learning process. More specifically, we first build a semantic-embedded graph to explore both the fine-grained objects and their relations in each media type, which aims not only to characterize the object appearance in each modality but also to capture the intrinsic relation information to differentiate intramodal discrepancies. Then, a cross-graph relation encoder is newly designed to explore the intermodal relations across different modalities, which can mutually boost the cross-modal correlations to learn more precise intermodal dependencies. In addition, a feature reconstruction module and multihead similarity alignment are efficiently leveraged to optimize the node-level semantic correspondence, whereby relation-aggregated cross-modal embeddings between image and text are discriminatively obtained to benefit various image-text retrieval tasks with high retrieval performance. Extensive experiments on benchmark datasets quantitatively and qualitatively verify the advantages of the proposed framework for fine-grained image-text retrieval and show its competitive performance against the state of the art.
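
As a rough illustration of the pipeline the abstract describes, the following is a minimal PyTorch sketch, not the authors' implementation: intramodal semantic-embedded graphs aggregate relations among image regions and words, a cross-graph encoder exchanges intermodal relations via cross-attention, and a reconstruction term plus a ranking loss align the resulting embeddings. All module names, dimensions, and the specific graph, attention, and loss choices here are assumptions made for clarity; the paper's exact architecture may differ.

```python
# Minimal RACG-style sketch (assumed structure, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticGraph(nn.Module):
    """Intramodal graph layer: propagates relation information among region/word nodes."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, nodes):                          # nodes: (B, N, D)
        # Adjacency estimated from pairwise node similarity (an assumed choice).
        adj = F.softmax(nodes @ nodes.transpose(1, 2) / nodes.size(-1) ** 0.5, dim=-1)
        return F.relu(nodes + adj @ self.proj(nodes))  # relation-aggregated node features


class CrossGraphEncoder(nn.Module):
    """Intermodal relation encoder: each modality attends to the other's graph nodes."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.v2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t2v = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, img_nodes, txt_nodes):
        img_out, _ = self.t2v(img_nodes, txt_nodes, txt_nodes)  # image attends to text
        txt_out, _ = self.v2t(txt_nodes, img_nodes, img_nodes)  # text attends to image
        return img_nodes + img_out, txt_nodes + txt_out


class RACGSketch(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, dim=512):
        super().__init__()
        self.img_embed = nn.Linear(img_dim, dim)       # region features -> common space
        self.txt_embed = nn.Linear(txt_dim, dim)       # word features -> common space
        self.img_graph = SemanticGraph(dim)
        self.txt_graph = SemanticGraph(dim)
        self.cross = CrossGraphEncoder(dim)
        self.img_recon = nn.Linear(dim, img_dim)       # feature reconstruction heads
        self.txt_recon = nn.Linear(dim, txt_dim)

    def forward(self, regions, words):                 # (B, Nr, img_dim), (B, Nw, txt_dim)
        v = self.img_graph(self.img_embed(regions))
        t = self.txt_graph(self.txt_embed(words))
        v, t = self.cross(v, t)
        v_glob = F.normalize(v.mean(dim=1), dim=-1)    # pooled, L2-normalized embeddings
        t_glob = F.normalize(t.mean(dim=1), dim=-1)
        sim = v_glob @ t_glob.t()                      # image-text similarity matrix
        recon = F.mse_loss(self.img_recon(v), regions) + F.mse_loss(self.txt_recon(t), words)
        return sim, recon


def triplet_ranking_loss(sim, margin=0.2):
    """Hinge-based bidirectional ranking loss on the similarity matrix (a common choice)."""
    pos = sim.diag().unsqueeze(1)                      # matched pairs on the diagonal
    cost_i2t = (margin + sim - pos).clamp(min=0)       # image-to-text violations
    cost_t2i = (margin + sim - pos.t()).clamp(min=0)   # text-to-image violations
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    return cost_i2t.masked_fill(mask, 0).mean() + cost_t2i.masked_fill(mask, 0).mean()


if __name__ == "__main__":
    model = RACGSketch()
    regions = torch.randn(8, 36, 2048)   # e.g., 36 detected regions per image
    words = torch.randn(8, 20, 300)      # e.g., 20 word embeddings per caption
    sim, recon = model(regions, words)
    loss = triplet_ranking_loss(sim) + 0.1 * recon
    print(sim.shape, loss.item())
```

In this sketch, retrieval would rank candidates by the rows or columns of the similarity matrix; the reconstruction term is weighted as a regularizer, mirroring the abstract's node-level correspondence optimization in spirit only.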