Computer Science
Transformer
Artificial Intelligence
Computer Vision
Authors
Xuehu Liu, Pingping Zhang, Chenyang Yu, Huchuan Lu, Xuesheng Qian, Xiaoyun Yang
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 27
Identifier
DOI: 10.48550/arxiv.2104.01745
Abstract
Video-based person re-identification (Re-ID) aims to retrieve video sequences of the same person across non-overlapping cameras. Previous methods usually focus on limited views, such as the spatial, temporal or spatial-temporal view, and thus lack observations from different feature domains. To capture richer perceptions and extract more comprehensive video representations, in this paper we propose a novel framework named Trigeminal Transformers (TMT) for video-based person Re-ID. More specifically, we design a trigeminal feature extractor that jointly transforms raw video data into the spatial, temporal and spatial-temporal domains. Besides, inspired by the great success of vision transformers, we introduce the transformer structure for video-based person Re-ID. In our work, three self-view transformers are proposed to exploit the relationships between local features for information enhancement in the spatial, temporal and spatial-temporal domains. Moreover, a cross-view transformer is proposed to aggregate the multi-view features into comprehensive video representations. Experimental results indicate that our approach achieves better performance than other state-of-the-art approaches on public Re-ID benchmarks. We will release the code for model reproduction.
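To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the three-view design: per-view self-attention followed by cross-view aggregation. Every name and hyperparameter here (`TMTSketch`, the 256-dimensional embedding, two encoder layers, mean pooling) is an illustrative assumption; the linear projections merely stand in for the paper's trigeminal feature extractor, and the authors' actual implementation may differ.

```python
# Minimal sketch of the TMT idea, under assumed dimensions and pooling choices.
import torch
import torch.nn as nn


class SelfViewTransformer(nn.Module):
    """Refines token features within one view (spatial, temporal,
    or spatial-temporal) via self-attention."""
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, tokens):          # tokens: (B, N, dim)
        return self.encoder(tokens)


class CrossViewTransformer(nn.Module):
    """Aggregates the three per-view features into one video
    representation by attending across the stacked views."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, views):           # views: list of three (B, dim) tensors
        x = torch.stack(views, dim=1)   # (B, 3, dim)
        out, _ = self.attn(x, x, x)     # cross-view attention
        fused = self.norm(out + x)      # residual connection + norm
        return fused.mean(dim=1)        # (B, dim) final video descriptor


class TMTSketch(nn.Module):
    """Toy stand-in for the trigeminal pipeline: three projections fake the
    spatial/temporal/spatial-temporal token sets from frame features."""
    def __init__(self, in_dim=2048, dim=256):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(in_dim, dim) for _ in range(3))
        self.self_view = nn.ModuleList(SelfViewTransformer(dim) for _ in range(3))
        self.cross_view = CrossViewTransformer(dim)

    def forward(self, feats):           # feats: (B, T, in_dim) frame features
        views = []
        for proj, svt in zip(self.proj, self.self_view):
            tokens = svt(proj(feats))         # (B, T, dim), refined per view
            views.append(tokens.mean(dim=1))  # pool tokens to (B, dim)
        return self.cross_view(views)         # (B, dim) video embedding


if __name__ == "__main__":
    model = TMTSketch()
    clip = torch.randn(2, 8, 2048)      # 2 clips, 8 frames, 2048-d features
    print(model(clip).shape)            # torch.Size([2, 256])
```

This sketch pools each view's tokens before cross-view attention for simplicity; the actual TMT may instead exchange token-level information between views.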