Feature (linguistics)
Dual (grammatical number)
Identification (biology)
Infrared
Computer science
Artificial intelligence
Pattern recognition (psychology)
Physics
Biology
Optics
Art
Linguistics
Botany
Literature
Philosophy
Authors
Guoqing Zhang, Yinyin Zhang, Hongwei Zhang, Yuhao Chen, Yinqiang Zheng
Identifier
DOI: 10.1016/j.jvcir.2024.104076
Abstract
Most previous visible–infrared person re-identification methods emphasize learning modality-shared features to narrow the modality gap, while neglecting the benefits of modality-specific features for feature embedding and for narrowing that gap. To tackle this issue, this paper designs a method based on dual attention enhancement features that uses shallow and deep features simultaneously. We first convert visible images into grayscale images to alleviate the visual difference between modalities. Then, to close the gap between modalities by learning modality-specific features, we design a shallow feature measurement module, in which a class-specific maximum mean discrepancy loss measures the distribution difference of specific features between the two modalities. Finally, we design a dual attention feature enhancement module, which mines more useful context information from modality-shared features to shorten the distance between classes within modalities. Our model exceeds the current state of the art on SYSU-MM01, with 66.61% Rank-1 accuracy and 62.86% mAP.
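The class-specific maximum mean discrepancy (MMD) loss mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the RBF kernel, the `sigma` bandwidth, and the per-identity pairing of visible and infrared samples are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of x and rows of y.
    d2 = (np.sum(x**2, axis=1)[:, None]
          + np.sum(y**2, axis=1)[None, :]
          - 2.0 * x @ y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Squared maximum mean discrepancy between two sample sets:
    # E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

def class_specific_mmd(vis_feats, ir_feats, vis_labels, ir_labels, sigma=1.0):
    # Average the MMD between visible and infrared features of the SAME
    # identity, so the loss aligns the two modality distributions per class.
    classes = sorted(set(vis_labels.tolist()) & set(ir_labels.tolist()))
    vals = [mmd2(vis_feats[vis_labels == c],
                 ir_feats[ir_labels == c], sigma)
            for c in classes]
    return float(np.mean(vals))
```

Minimizing this quantity pushes the per-class feature distributions of the two modalities toward each other; the identical-distribution case gives exactly zero, since the three kernel expectations coincide.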