A Versatile Framework for Multi-Scene Person Re-Identification

Keywords: computer science, artificial intelligence, identification (biology), inference, coding, modality (human–computer interaction), computer vision, machine learning, plant, biology, biochemistry, chemistry, gene
Authors
Wei-Shi Zheng, Junkai Yan, Yi-Xing Peng
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Pages: 1-18 · Cited by: 6
Identifier
DOI: 10.1109/tpami.2024.3381184
Abstract

Person re-identification (ReID) has been developed extensively over the past decade to learn the association of images of the same person across non-overlapping camera views. To overcome the significant variations between images from different camera views, numerous ReID variants have been developed to address specific challenges such as resolution change, clothing change, occlusion, and modality change. Despite the impressive performance of many of these variants, they typically function in isolation and cannot be applied to other challenges. To the best of our knowledge, there is no versatile ReID model that can handle various ReID challenges at the same time. This work contributes the first attempt at learning such a versatile ReID model. Our main idea is a two-stage prompt-based twin modeling framework called VersReID. VersReID first leverages scene labels to train a ReID Bank that contains abundant knowledge for handling various scenes, where several groups of scene-specific prompts encode different scene-specific knowledge. In the second stage, we distill a V-Branch model with versatile prompts from the ReID Bank for adaptively solving ReID across different scenes, eliminating the need for scene labels during inference. To facilitate training VersReID, we further introduce multi-scene properties into self-supervised learning of ReID via a multi-scene prioris data augmentation (MPDA) strategy. Through extensive experiments, we demonstrate the success of learning an effective and versatile ReID model that handles ReID tasks under multi-scene conditions, including general, low-resolution, clothing-change, occlusion, and cross-modality scenes, without manual assignment of scene labels at inference. Codes and models will be made publicly available.
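The two-stage prompt mechanism described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: token dimensions, function names, and the plain-list "encoder input" are hypothetical stand-ins for the paper's ViT-based architecture; only the overall flow (scene-specific prompt groups in stage one, a single distilled versatile prompt group in stage two) follows the abstract.

```python
import random

random.seed(0)
D = 8          # embedding dimension (illustrative, not the paper's value)
N_PROMPT = 4   # prompt tokens per group (illustrative)

def new_tokens(n):
    """Random placeholder tokens standing in for learnable embeddings."""
    return [[random.random() for _ in range(D)] for _ in range(n)]

# Scene list taken from the abstract's evaluated scenes.
SCENES = ["general", "low_resolution", "clothing_change",
          "occlusion", "cross_modality"]

# Stage 1 (ReID Bank): one prompt group per labelled scene; the scene
# label selects which group is prepended to the image's patch tokens.
scene_prompts = {s: new_tokens(N_PROMPT) for s in SCENES}

def bank_forward(patch_tokens, scene):
    """Prepend the scene-specific prompt group before the shared
    encoder (the transformer encoder itself is omitted here)."""
    return scene_prompts[scene] + patch_tokens

# Stage 2 (V-Branch): a single versatile prompt group distilled from
# the bank replaces the scene-specific ones, so inference needs no
# scene label.
versatile_prompts = new_tokens(N_PROMPT)

def v_branch_forward(patch_tokens):
    return versatile_prompts + patch_tokens

patches = new_tokens(16)                        # 16 patch tokens
print(len(bank_forward(patches, "occlusion")))  # 20 tokens in, scene-labelled
print(len(v_branch_forward(patches)))           # 20 tokens in, label-free
```

The key contrast the sketch shows: `bank_forward` requires a scene argument, while `v_branch_forward` does not, which is exactly the property the distillation stage is meant to achieve.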