Modality (human-computer interaction)
Identification (biology)
Pattern
Computer science
Discriminative
Matching (statistics)
Granularity
Artificial intelligence
Graph
Bipartite graph
Focus (optics)
Unsupervised learning
Pattern recognition (psychology)
Task (project management)
Machine learning
Theoretical computer science
Optics
Physics
Mathematics
Operating system
Biology
Economics
Sociology
Management
Statistics
Social science
Botany
Authors
Licun Dai, Zhiming Luo, Shaozi Li
Identifier
DOI:10.1145/3643490.3661809
Abstract
Unsupervised visible-infrared person re-identification (USVI-ReID) is a challenging task that aims to retrieve images of the same person across different modalities without annotations. Existing works mainly focus on constructing cross-modality relationships with global features, while fine-grained part features remain unexplored, resulting in insufficient cross-modality learning. Therefore, we propose a Part-based Cross-Modality (PCM) learning framework that exploits part features for USVI-ReID. Specifically, we first design a Part-integrated Dual-Contrastive (PDC) learning framework to obtain part features and learn discriminative information within each modality. Then, to associate samples from the two modalities, we devise a Part-assisted Multiple Matching (PMM) module, which matches clusters with a weighted duplicated bipartite graph; assisted by the part features, the cost matrix for graph matching can be refined. Meanwhile, a Cross Alignment Learning (CAL) module is introduced to reduce the modality discrepancy by aligning features at the granularity, memory, and modality levels. Extensive experiments on two public datasets demonstrate the effectiveness of the proposed method.
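The cluster-matching step in PMM can be illustrated with a small sketch. The snippet below is a minimal illustration under assumed shapes and names, not the authors' implementation: it builds a cost matrix from pairwise cosine distances between visible and infrared cluster centroids, refines it with an averaged part-feature distance term (the blending weight `lam` and the function `match_clusters` are assumptions), and solves the assignment with SciPy's Hungarian solver in place of the paper's weighted duplicated bipartite graph, which additionally allows duplicated (many-to-one) matches.

```python
# Minimal sketch of part-assisted cluster matching for USVI-ReID.
# Assumed shapes/names, not the paper's PMM code: a plain one-to-one
# assignment (Hungarian algorithm) stands in for the weighted
# duplicated bipartite graph used in the paper.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_clusters(vis_global, ir_global, vis_parts, ir_parts, lam=0.5):
    """Match visible clusters to infrared clusters.

    vis_global: (Nv, D)    global centroid per visible cluster
    ir_global:  (Ni, D)    global centroid per infrared cluster
    vis_parts:  (Nv, P, D) per-part centroids per visible cluster
    ir_parts:   (Ni, P, D) per-part centroids per infrared cluster
    lam: weight of the part term in the cost matrix (assumed).
    """
    # Global cost: cosine distance between cluster centroids.
    cost_global = cdist(vis_global, ir_global, metric="cosine")

    # Part cost: cosine distance averaged over aligned part stripes.
    P = vis_parts.shape[1]
    cost_part = np.mean(
        [cdist(vis_parts[:, p], ir_parts[:, p], metric="cosine")
         for p in range(P)],
        axis=0,
    )

    # Refine the global cost with the part term, then solve the
    # assignment; matched pairs would share a cross-modality
    # pseudo-label during training.
    cost = (1.0 - lam) * cost_global + lam * cost_part
    row_idx, col_idx = linear_sum_assignment(cost)
    return list(zip(row_idx.tolist(), col_idx.tolist()))

# Toy usage with random features: 2 visible and 3 infrared clusters,
# 4 part stripes, 8-dimensional features.
rng = np.random.default_rng(0)
pairs = match_clusters(
    rng.normal(size=(2, 8)), rng.normal(size=(3, 8)),
    rng.normal(size=(2, 4, 8)), rng.normal(size=(3, 4, 8)),
)
print(pairs)
```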