Computer science
Artificial intelligence
RGB color model
Robustness (evolution)
Clothing
Pedestrian
Identity (music)
Computer vision
Transformer
Machine learning
Pattern recognition (psychology)
Engineering
Art
Aesthetics
History
Biochemistry
Chemistry
Electrical engineering
Archaeology
Voltage
Transportation engineering
Gene
Authors
Mingdong Zou,C. Joanna Su,Yujie Zhou,Caihong Yuan
Identifier
DOI:10.1145/3652628.3652748
Abstract
A novel cloth-changing person re-identification (ReID) method uses multiple factors to extract cloth-irrelevant identity features. It first creates black-cloth images by masking the upper clothes and pants in RGB images. Rather than learning from these images directly, a teacher ReID model is pretrained on them. The original pedestrian images are then used to extract cloth-irrelevant identity features under the guidance of the pretrained model, and the head patch is extracted separately to preserve fine-grained head features. This three-branch framework (black-cloth, original, head) uses an identical network architecture in each branch without weight sharing, enhancing the robustness of the cloth-irrelevant identity features. The method is built on an improved vision transformer (imViT) backbone and achieves relatively good performance on the PRCC and VC-Clothes datasets.
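The first step the abstract describes, creating black-cloth images by masking upper clothes and pants, can be sketched as a simple pixel-masking operation. This is a minimal illustration only, assuming a per-pixel clothing mask is available (the abstract does not say how the clothing regions are obtained); the function name and mask format are hypothetical, not the authors' implementation.

```python
import numpy as np

def make_black_cloth_image(rgb, cloth_mask):
    """Zero out clothing pixels to form a 'black-cloth' image.

    rgb:        (H, W, 3) uint8 pedestrian image
    cloth_mask: (H, W) boolean array, True where upper clothes or
                pants are segmented (mask source is an assumption;
                the paper only states that clothes are masked)
    """
    out = rgb.copy()          # keep the original image intact
    out[cloth_mask] = 0       # paint clothing regions black
    return out

# Toy example: 4x4 gray image with the bottom half marked as clothing.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[2:, :] = True
black = make_black_cloth_image(img, mask)
```

In the paper's pipeline, such black-cloth images serve only to pretrain the teacher ReID model; the student branches then learn from the original images and head patches under its guidance.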