Computer science
Modality (human–computer interaction)
Consistency (knowledge bases)
Artificial intelligence
Feature (linguistics)
Identity (music)
Feature learning
Matching (statistics)
Machine learning
Natural language processing
Pattern recognition (psychology)
Mathematics
Linguistics
Philosophy
Physics
Statistics
Acoustics
Authors
Bin Yang, Jun Chen, Cuiqun Chen, Mang Ye
Identifiers
DOI: 10.1109/TIFS.2023.3341392
Abstract
Unsupervised visible-infrared person re-identification (US-VI-ReID) aims to learn a cross-modality matching model under unsupervised conditions, a task of practical importance for retrieving a specific identity in nighttime surveillance. Previous advanced US-VI-ReID works mainly focus on associating positive cross-modality identities to optimize the feature extractor in an off-line manner, which inevitably accumulates errors from incorrect off-line cross-modality associations in each training epoch due to intra-modality and inter-modality discrepancies. They ignore direct cross-modality feature interaction during training, i.e., on-line representation learning and updating. Worse still, existing interaction methods are also susceptible to inter-modality differences, leading to unreliable heterogeneous neighborhood learning. To address these issues, we propose a dual consistency-constrained learning framework (DCCL) that simultaneously incorporates off-line cross-modality label refinement and on-line feature interaction learning. The basic idea is that cross-modality instance-instance and instance-identity relations should be consistent. More specifically, DCCL constructs an instance memory, an identity memory, and a domain memory for each modality. At the beginning of each training epoch, DCCL exploits the off-line consistency between cross-modality instance-instance and instance-identity similarities to refine reliable cross-modality identities. During training, DCCL finds credible homogeneous and heterogeneous neighborhoods within each batch for feature interaction, using the on-line consistency between query-instance similarities and query-instance domain probability similarities, which enhances robustness against intra-modality and inter-modality variations. Extensive experiments validate that our method significantly outperforms existing works and even surpasses some supervised counterparts. The source code is available at https://github.com/yangbincv/DCCL.
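To make the off-line dual-consistency idea concrete, the sketch below illustrates one plausible reading of the label-refinement step: a visible instance receives an infrared pseudo identity only when the instance-instance view (nearest infrared instance) and the instance-identity view (nearest infrared identity centroid) agree. This is a minimal, hypothetical Python/NumPy illustration, not the authors' implementation (see the GitHub repository above for the actual code); all function names, array shapes, and the toy data are assumptions.

```python
# Illustrative sketch of dual-consistency cross-modality label refinement.
# NOT the authors' implementation; names and shapes are hypothetical.
import numpy as np

def refine_cross_modality_labels(vis_feats, ir_feats, ir_labels, ir_centroids):
    """Assign each visible instance an infrared pseudo identity, kept only
    when two similarity views are consistent.

    vis_feats:    (Nv, d) L2-normalized visible instance features
    ir_feats:     (Ni, d) L2-normalized infrared instance features
    ir_labels:    (Ni,)   infrared cluster/identity labels
    ir_centroids: (K, d)  L2-normalized infrared identity centroids
    Returns an (Nv,) array of pseudo labels, -1 where the two views disagree.
    """
    # Instance-instance view: identity of the most similar infrared instance.
    inst_sim = vis_feats @ ir_feats.T                # (Nv, Ni) cosine similarities
    inst_vote = ir_labels[inst_sim.argmax(axis=1)]   # (Nv,)

    # Instance-identity view: most similar infrared identity centroid.
    cent_sim = vis_feats @ ir_centroids.T            # (Nv, K)
    cent_vote = cent_sim.argmax(axis=1)              # (Nv,)

    # Dual consistency: accept a cross-modality label only when both views agree.
    return np.where(inst_vote == cent_vote, cent_vote, -1)

# Toy usage with random features (illustration only).
rng = np.random.default_rng(0)
l2n = lambda x: x / np.linalg.norm(x, axis=1, keepdims=True)
vis = l2n(rng.normal(size=(8, 16)))
ir = l2n(rng.normal(size=(12, 16)))
ir_lab = np.arange(12) % 3                           # three infrared identities
cent = l2n(np.stack([ir[ir_lab == k].mean(axis=0) for k in range(3)]))
print(refine_cross_modality_labels(vis, ir, ir_lab, cent))
```

Instances marked -1 would simply be excluded from cross-modality association in that epoch, which is one way the consistency constraint can suppress error accumulation from unreliable off-line associations.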