Authors
Qiong Wu,Jiaer Xia,Pingyang Dai,Yiyi Zhou,Yongjian Wu,Rongrong Ji
Identifier
DOI:10.1109/tnnls.2024.3382937
Abstract
Visible-infrared person re-identification (VI-ReID) is the task of matching the same individuals across the visible and infrared modalities. Its main challenge lies in the modality gap caused by cameras operating on different spectra. Existing VI-ReID methods mainly focus on learning features that generalize across modalities, often at the expense of feature discriminability. To address this issue, we present a novel cycle-construction-based network for neutral yet discriminative feature learning, termed CycleTrans. Specifically, CycleTrans uses a lightweight knowledge capturing module (KCM) to capture rich semantics from the modality-relevant feature maps according to pseudo anchors. Afterward, a discrepancy modeling module (DMM) is deployed to transform these features into neutral ones according to the modality-irrelevant prototypes. To ensure feature discriminability, two additional KCMs are deployed for feature cycle construction. With cycle construction, our method can learn effective neutral features for visible and infrared images while preserving their salient semantics. Extensive experiments on the SYSU-MM01 and RegDB datasets validate the merits of CycleTrans against a flurry of state-of-the-art (SOTA) methods: $+1.88\%$ rank-1 on SYSU-MM01 and $+1.1\%$ rank-1 on RegDB. Our code is available at https://github.com/DoubtedSteam/CycleTrans.
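The abstract describes the KCM as capturing semantics from a modality-relevant feature map according to learnable pseudo anchors, which can be read as a cross-attention step: the anchors act as queries over the flattened feature map. The following is a minimal numpy sketch of that reading only; the function name `capture_semantics`, the scaled-dot-product attention form, and the anchor/feature dimensions are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def capture_semantics(anchors, feat_map):
    """Hypothetical KCM-style capture step.

    anchors:  (k, d) learnable pseudo anchors, used as attention queries
    feat_map: (n, d) flattened modality-relevant feature map (keys/values)
    returns:  (k, d) semantics gathered from the feature map per anchor
    """
    d = anchors.shape[-1]
    attn = softmax(anchors @ feat_map.T / np.sqrt(d))  # (k, n) weights
    return attn @ feat_map                             # weighted pooling

# Toy usage: 4 pseudo anchors attend over a 7x7 map of 8-dim features.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
visible_map = rng.normal(size=(49, 8))
captured = capture_semantics(anchors, visible_map)
print(captured.shape)  # (4, 8)
```

Under this reading, the DMM would then map each captured vector toward the nearest modality-irrelevant prototypes to produce the neutral features the abstract mentions.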