Modality (human–computer interaction)
Artificial intelligence
Computer science
Identification (biology)
Feature (linguistics)
Adversarial system
Computer vision
Pattern recognition (psychology)
Infrared
Generative adversarial network
Image (mathematics)
Mode
Physics
Sociology
Philosophy
Optics
Biology
Botany
Linguistics
Social science
Authors
Xian Zhong,Tianyou Lu,Wenxin Huang,Jingling Yuan,Wenxuan Liu,Chia‐Wen Lin
Identifier
DOI:10.1145/3372278.3390696
Abstract
With the explosive growth of surveillance data captured during day and night, visible-infrared person re-identification (VI-ReID) has emerged as a challenging task due to the apparent cross-modality discrepancy between visible and infrared images. Existing VI-ReID work mainly focuses on learning a robust feature to represent a person in both modalities, yet the modality gap cannot be effectively eliminated this way. Recent studies have proposed various generative adversarial network (GAN) models that transfer the visible modality to another unified modality, aiming to bridge the cross-modality gap. However, they neglect the information loss caused by transferring the domain of visible images, which is significant for identification. To address these problems effectively, we observe that key information such as textures and semantics in an infrared image can help to colorize the image itself, and the colorized infrared image retains rich information from the infrared image while reducing the discrepancy with the visible image. We therefore propose a colorization-based Siamese generative adversarial network (CoSiGAN) for VI-ReID that bridges the cross-modality gap by retaining the identity of the colorized infrared image. Furthermore, we propose a feature-level fusion model to compensate for the information lost during colorization. Experiments conducted on two cross-modality person re-identification datasets demonstrate the superiority of the proposed method over state-of-the-art approaches.
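The abstract itself contains no code, but the two ideas it describes — a generator that colorizes an infrared image while a Siamese objective ties the colorized output to the person's identity, plus feature-level fusion to recover details lost in the transfer — can be illustrated with a minimal PyTorch sketch. All names below (ColorizationGenerator, siamese_identity_loss, fuse_features) are hypothetical illustrations, not the authors' implementation; the real CoSiGAN architecture and losses differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ColorizationGenerator(nn.Module):
    """Toy generator: maps a 1-channel infrared image to a 3-channel
    pseudo-color image (hypothetical stand-in for the paper's generator)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, ir):
        return self.decoder(self.encoder(ir))


def siamese_identity_loss(embed, colored_ir, visible, same_id, margin=1.0):
    """Contrastive Siamese loss (an assumption; the paper's exact loss may
    differ): pull embeddings of a colorized-IR / visible pair together when
    they share an identity, push them beyond the margin otherwise."""
    f1 = F.normalize(embed(colored_ir), dim=1)
    f2 = F.normalize(embed(visible), dim=1)
    dist_sq = (f1 - f2).pow(2).sum(dim=1)
    return torch.where(
        same_id,                                   # bool tensor, shape (B,)
        dist_sq,                                   # same identity: minimize distance
        F.relu(margin - dist_sq.sqrt()).pow(2),    # different: enforce the margin
    ).mean()


def fuse_features(feat_ir, feat_colored):
    """Feature-level fusion by channel concatenation, so details lost during
    colorization can still inform identification (one plausible reading of
    the fusion model; the paper may fuse differently)."""
    return torch.cat([feat_ir, feat_colored], dim=1)


# Usage sketch: colorize a batch of infrared crops, then score an
# identity-preservation loss against paired visible images.
gen = ColorizationGenerator()
embed = nn.Sequential(nn.Flatten(), nn.LazyLinear(128))  # placeholder embedder
ir = torch.randn(4, 1, 64, 32)       # infrared batch
vis = torch.randn(4, 3, 64, 32)      # paired visible batch
same_id = torch.tensor([True, True, False, False])
colored = gen(ir)
loss = siamese_identity_loss(embed, colored, vis, same_id)
```

The design point the sketch tries to capture is that colorization is trained jointly with an identity constraint, so the generator cannot bridge the modality gap by discarding person-specific cues, while the fusion branch hedges against whatever the colorization still loses.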