Computer science
Identification (biology)
Leapfrog surveillance
Domain (mathematical analysis)
Bridge (graph theory)
Generative adversarial network
Artificial intelligence
Generative grammar
Transfer of learning
Cover (algebra)
Transfer (computing)
Adversarial system
Machine learning
Exploit
Deep learning
Computer vision
Computer security
Engineering
Mathematics
Internal medicine
Mathematical analysis
Biology
Mechanical engineering
Medicine
Parallel computing
Botany
Authors
Longhui Wei, Shiliang Zhang, Wen Gao, Qi Tian
Source
Journal: Cornell University - arXiv
Date: 2022-03-04
Citations: 118
Identifier
DOI: 10.48550/arxiv.1711.08565
Abstract
Although the performance of person re-identification (ReID) has been significantly boosted, many challenging issues in real scenarios have not been fully investigated, e.g., complex scenes and lighting variations, viewpoint and pose changes, and the large number of identities in a camera network. To facilitate research towards conquering those issues, this paper contributes a new dataset called MSMT17 with several important features: 1) the raw videos are taken by a 15-camera network deployed in both indoor and outdoor scenes, 2) the videos cover a long period of time and present complex lighting variations, and 3) it contains the largest number of annotated identities to date, i.e., 4,101 identities and 126,441 bounding boxes. We also observe that a domain gap commonly exists between datasets, which causes a severe performance drop when training and testing on different datasets; as a result, available training data cannot be effectively leveraged for new testing domains. To relieve the expensive cost of annotating new training samples, we propose a Person Transfer Generative Adversarial Network (PTGAN) to bridge the domain gap. Comprehensive experiments show that the domain gap can be substantially narrowed down by PTGAN.
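The abstract describes PTGAN only at a high level: transfer labeled persons into the style of a new camera network while keeping their identities intact, so that existing annotations remain usable in the target domain. Below is a minimal PyTorch-style sketch of one way to encode that idea, assuming CycleGAN-like generators and discriminators plus a person-foreground mask per image; all names (ptgan_loss, G_ab, D_a, mask_a, lam_id) and the weighting value are illustrative placeholders, not the authors' released code or exact formulation.

```python
# Minimal sketch (assumed formulation): CycleGAN-style transfer loss plus an
# identity-preserving term weighted by a person-foreground mask.
import torch
import torch.nn.functional as F

def ptgan_loss(G_ab, G_ba, D_a, D_b, a, b, mask_a, mask_b, lam_id=10.0):
    """Generator-side loss for transferring persons from domain A to domain B.

    a, b           : image batches from source (A) and target (B) domains, (N, 3, H, W)
    mask_a, mask_b : soft foreground (person) masks in [0, 1], (N, 1, H, W)
    lam_id         : weight on the identity-preserving term (assumed value)
    """
    fake_b = G_ab(a)   # person from A rendered in the style (background, lighting) of B
    fake_a = G_ba(b)

    # Least-squares adversarial terms: generated images should fool the
    # discriminator of the domain they were translated into.
    pred_b = D_b(fake_b)
    pred_a = D_a(fake_a)
    adv = F.mse_loss(pred_b, torch.ones_like(pred_b)) \
        + F.mse_loss(pred_a, torch.ones_like(pred_a))

    # Cycle consistency: translating to the other domain and back should
    # approximately recover the input.
    cyc = F.l1_loss(G_ba(fake_b), a) + F.l1_loss(G_ab(fake_a), b)

    # Identity preservation: penalize changes inside the person region, so the
    # transferred image can keep its original identity label as training data.
    ident = torch.mean(mask_a * (fake_b - a) ** 2) \
          + torch.mean(mask_b * (fake_a - b) ** 2)

    return adv + cyc + lam_id * ident
```

The mask-weighted term is what distinguishes this from plain style transfer: background and lighting are adapted to the target cameras, while the person's appearance, and hence the identity annotation, is constrained to stay unchanged.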