Cross-Modal Adaptive Dual Association for Text-to-Image Person Retrieval

Authors
D. M. Lin, Yi-Xing Peng, Jingke Meng, Wei-Shi Zheng
Source
Journal: IEEE Transactions on Multimedia [Institute of Electrical and Electronics Engineers]
Volume/Pages: 26: 6609-6620  Cited by: 24
Identifier
DOI: 10.1109/tmm.2024.3355644
Abstract

Text-to-image person re-identification (ReID) aims to retrieve images of a person based on a given textual description. The key challenge is to learn the relations between detailed information from the visual and textual modalities. Existing work focuses on learning a latent space to narrow the modality gap and further build local correspondences between the two modalities. However, these methods assume that image-to-text and text-to-image associations are modality-agnostic, resulting in suboptimal associations. In this work, we demonstrate the discrepancy between image-to-text association and text-to-image association and propose cross-modal adaptive dual association (CADA) to build fine-grained bidirectional image-text associations. Our approach features a decoder-based adaptive dual association module that enables full interaction between the visual and textual modalities, yielding bidirectional and adaptive cross-modal correspondence associations. Specifically, this paper proposes a bidirectional association mechanism: Association of text Tokens to image Patches (ATP) and Association of image Regions to text Attributes (ARA). We model the ATP adaptively, based on the observation that aggregating cross-modal features under mistaken associations leads to feature distortion. For modeling the ARA, since attributes are typically the first distinguishing cues of a person, we explore attribute-level associations by predicting a masked text phrase from the related image region. Finally, we learn the dual associations between texts and images, and the experimental results demonstrate the superiority of our dual formulation. The code used in this article will be made publicly available at https://github.com/LinDixuan/CADA.
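To make the token-to-patch association concrete, the sketch below shows a single cross-attention step in which each text token (query) attends over all image patches (keys/values) and aggregates their features. This is a minimal illustration of the general mechanism the abstract describes, not the authors' actual CADA implementation; all shapes, names, and the random features are illustrative assumptions. It also makes the abstract's intuition visible: the aggregated token feature is a weighted mix of patch features, so mistaken association weights mix in unrelated patches and distort the feature.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8            # feature dimension (assumed)
n_tokens = 4     # number of text tokens (assumed)
n_patches = 6    # number of image patches (assumed)

# Stand-in features; in practice these come from text/image encoders.
text_tokens = rng.normal(size=(n_tokens, d))   # queries
img_patches = rng.normal(size=(n_patches, d))  # keys and values

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Token-to-patch association weights: scaled dot-product attention,
# one distribution over patches per text token.
scores = text_tokens @ img_patches.T / np.sqrt(d)  # (n_tokens, n_patches)
assoc = softmax(scores, axis=-1)                   # rows sum to 1

# Aggregate patch features per token using the association weights.
# Wrong weights here would blend unrelated patches into the token feature.
aggregated = assoc @ img_patches                   # (n_tokens, d)

print(assoc.shape, aggregated.shape)
```

In a full model this step would sit inside a decoder layer with learned query/key/value projections; the plain dot product above keeps the sketch self-contained.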