Identification (biology)
Infrared
Feature (linguistics)
Artificial intelligence
Pattern recognition (psychology)
Fusion
Computer science
Physics
Biology
Optics
Linguistics
Botany
Philosophy
Authors
Sixian Chan,Weihao Meng,Zhuorong Li,Jie Hu,Xiaolong Zhou
Identifier
DOI:10.1109/tetci.2025.3579392
Abstract
Visible-Infrared Pedestrian Re-identification (VI-ReID) aims to match pedestrian identities across different spectra. The main challenge of VI-ReID is minimizing the modality gap between visible and infrared images. Existing approaches tackle the problem from the perspective of feature enrichment to improve the model's discriminative ability, but they lack an in-depth study of the relationships among features. To address this problem, we propose a Diverse-Feature Hierarchical Fusion Learning Network (DHFLNet) for VI-ReID. Through time-series fusion among diverse features, DHFLNet effectively establishes inter-sample connections and thus reduces modality differences. Specifically, we propose a Modal Feature Enhancement Module (MFEM), for which we design a Quaternary Central Cluster Loss (QCCLoss) to supervise feature extraction. Furthermore, we study the relationships among the extracted features in depth and propose a Temporal-Based Feature Fusion Module (TFFM), which treats the features of successive convolutional blocks as temporal information and fuses them hierarchically to exploit information from a global perspective, effectively improving the robustness and accuracy of the model. Extensive experiments demonstrate that our model achieves superior performance on the publicly available SYSU-MM01, RegDB, and LLCM datasets.
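The abstract describes TFFM as treating per-block features as a temporal sequence and fusing them hierarchically into a global representation. The paper's actual fusion operator is not given here, so the sketch below is only a toy illustration of the hierarchical-pairing idea: features are represented as plain float vectors, and the learned fusion step is stood in for by element-wise averaging (`hierarchical_fuse` and its `fuse` parameter are hypothetical names, not from the paper).

```python
def hierarchical_fuse(block_feats, fuse=None):
    """Fuse a sequence of per-block feature vectors into one global feature.

    block_feats: list of equal-length float lists, ordered like a time series
    fuse: binary fusion operator; element-wise averaging is a placeholder
          for the paper's learned fusion.
    """
    if fuse is None:
        fuse = lambda a, b: [(x + y) / 2.0 for x, y in zip(a, b)]
    feats = list(block_feats)
    while len(feats) > 1:
        # fuse adjacent pairs, one hierarchy level per pass
        nxt = [fuse(feats[i], feats[i + 1]) for i in range(0, len(feats) - 1, 2)]
        if len(feats) % 2:  # carry an unpaired feature up to the next level
            nxt.append(feats[-1])
        feats = nxt
    return feats[0]

# Four toy per-block features of dimension 3.
blocks = [[float(v)] * 3 for v in (1, 2, 3, 4)]
print(hierarchical_fuse(blocks))  # two levels of pairwise averaging → [2.5, 2.5, 2.5]
```

With four blocks, the first pass fuses (block1, block2) and (block3, block4); the second pass fuses the two results, so every block contributes to the final global feature through a balanced hierarchy.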