BDNet: A BERT-based dual-path network for text-to-image cross-modal person re-identification

Authors
Qiang Liu, Xiaohai He, Qizhi Teng, Linbo Qing, Honggang Chen
Source
Journal: Pattern Recognition [Elsevier]
Volume 141, Article 109636. Cited by: 24
Identifier
DOI: 10.1016/j.patcog.2023.109636
Abstract

Text-to-image person re-identification (TI-ReID) aims to retrieve a specific person from an image gallery given a descriptive sentence. The task is challenging because of the large modality gap between images and textual descriptions. Most current approaches combine global and local features to obtain more fine-grained representations. However, these methods usually extract local features with the help of human-pose or segmentation models, which makes them difficult to deploy in realistic scenarios because they introduce additional models or complex training and evaluation strategies. To facilitate practical application, we propose a BERT-based dual-path framework for TI-ReID. Without any auxiliary model, our approach applies visual attention directly inside the global feature extraction network, so the network adaptively learns to focus on salient local cues in both images and text descriptions; this strengthens the network's attention to local information and thereby improves the global feature representation. In addition, to learn modality-invariant representations for text and images, we propose a convolutional shared network (CSN) that learns image and text features jointly. To optimize cross-modal feature distances more effectively, we propose a hybrid-modal triplet global metric loss. Beyond combining local and global metric learning, we also introduce the CMPM and CMPC losses to jointly optimize the proposed model. Extensive experiments on the CUHK-PEDES dataset show that the proposed method performs significantly better than current results, achieving a Rank-1/mAP accuracy of 66.27%/57.04%.
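The abstract names a hybrid-modal triplet global metric loss for pulling matched image-text pairs together and pushing mismatched ones apart, but does not give its exact formulation. Below is a minimal sketch of a generic cross-modal batch-hard triplet loss in PyTorch to illustrate the idea; the function name, margin value, cosine-distance choice, and batch-hard mining strategy are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def hybrid_modal_triplet_loss(img_emb: torch.Tensor,
                              txt_emb: torch.Tensor,
                              labels: torch.Tensor,
                              margin: float = 0.2) -> torch.Tensor:
    """Sketch of a cross-modal batch-hard triplet loss (assumed form).

    img_emb, txt_emb: (B, D) image and text embeddings from the two paths
    labels:           (B,) person identity labels for the batch
    """
    # L2-normalise so the inner product is cosine similarity.
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)

    # Cosine distance between every image (rows) and every text (columns).
    dist = 1.0 - img_emb @ txt_emb.t()                     # (B, B)

    # pos_mask[i, j] is True when image i and text j share an identity.
    pos_mask = labels.unsqueeze(1) == labels.unsqueeze(0)  # (B, B)

    # Batch-hard mining per image anchor: farthest positive text,
    # closest negative text.
    hardest_pos = (dist * pos_mask).max(dim=1).values
    hardest_neg = dist.masked_fill(pos_mask, float('inf')).min(dim=1).values

    # Standard hinge: positive should be closer than negative by `margin`.
    return F.relu(hardest_pos - hardest_neg + margin).mean()
```

In a training loop this term would typically be weighted and summed with the CMPM and CMPC losses mentioned in the abstract, e.g. `loss = hybrid_modal_triplet_loss(img_feats, txt_feats, pids) + cmpm + cmpc`, where `img_feats`, `txt_feats`, and `pids` are hypothetical names for the two paths' outputs and the identity labels.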