Computer science
Pooling
Artificial intelligence
Margin (machine learning)
Feature (linguistics)
Pattern recognition (psychology)
Set (abstract data type)
Point (geometry)
Identification (biology)
Computer vision
Mathematics
Machine learning
Biology
Programming language
Philosophy
Botany
Linguistics
Geometry
Authors
Yangyang Cheng, Shan Liang, Haopeng Wang, Lu Liu, Jun Li
Identifier
DOI:10.1109/iccsi55536.2022.9970692
Abstract
Due to the low-resolution feature map and global average pooling, the features output by the standard person re-identification (ReID) baseline mix in a large amount of background information during the construction of high-dimensional features. To address this problem, we propose a new person ReID method based on foreground mask estimation. First, in the data preparation stage, we generate key-point area labels for the training set using the human key-point detection model OpenPose, which provides a reference for the design of the loss function. Second, we propose foreground mask estimation for person ReID: HRNetV2-W32 is selected as the backbone to obtain a high-resolution feature map, and a network branch is added after the backbone to estimate a foreground mask that sharply distinguishes foreground from background. We map the estimated mask onto the feature map to avoid introducing a large amount of background information. Moreover, a new loss function, Excess the Mean with Margin Loss (EMML), is proposed for the mask-estimation branch, and mask visualization experiments show that multiple losses, including EMML, triplet loss, and ID loss, ensure that foreground and background are clearly distinguished on the mask while they simultaneously supervise the training of the model. In the experimental stage, we compare the feature maps obtained by our method with those obtained by the baseline, which demonstrates its effectiveness in eliminating background information. We evaluate the proposed method on two public datasets, Market1501 and DukeMTMC-ReID, where Rank-1/mAP reach 95.0%/96.1% and 90.4%/79.8% respectively, using only global features.
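The abstract's core operation, mapping the estimated foreground mask onto the feature map so that background activations do not leak into the pooled descriptor, can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (the function name, tensor shapes, and the masked average pooling step are illustrative, not the paper's implementation):

```python
import numpy as np

def apply_foreground_mask(feature_map, mask):
    """Suppress background activations by element-wise multiplying
    every channel of a (C, H, W) feature map with an (H, W) mask
    whose values are near 1 on the person and near 0 elsewhere."""
    return feature_map * mask[None, :, :]  # broadcast mask over channels

# Toy example: 4-channel 3x3 feature map, all activations equal to 1.
C, H, W = 4, 3, 3
feat = np.ones((C, H, W))

# Hypothetical estimated mask: only the center pixel is foreground.
mask = np.zeros((H, W))
mask[1, 1] = 1.0

masked = apply_foreground_mask(feat, mask)

# Masked average pooling: average only over foreground pixels, so the
# global descriptor is unaffected by the zeroed background region.
pooled = masked.sum(axis=(1, 2)) / mask.sum()
```

With plain global average pooling the background zeros would dilute the descriptor; dividing by the mask area instead keeps the foreground statistics intact, which is the motivation the abstract gives for masking before building the high-dimensional feature.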
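Of the supervising losses named in the abstract, the margin-based triplet loss is a standard, well-documented component; a minimal sketch follows (the margin value 0.3 and Euclidean distance are common conventions we assume for illustration; EMML itself is not specified in the abstract, so no sketch of it is attempted):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge-style triplet loss: push the anchor-negative distance to
    exceed the anchor-positive distance by at least `margin`."""
    d_ap = np.linalg.norm(anchor - positive)  # same-identity distance
    d_an = np.linalg.norm(anchor - negative)  # different-identity distance
    return max(d_ap - d_an + margin, 0.0)

a = np.array([0.0, 0.0])
p = np.array([0.0, 0.0])   # positive: identical embedding
n_far = np.array([1.0, 0.0])   # negative already beyond the margin
n_near = np.array([0.1, 0.0])  # negative violating the margin

loss_far = triplet_loss(a, p, n_far)    # satisfied constraint -> 0
loss_near = triplet_loss(a, p, n_near)  # violation -> positive loss
```

In the paper's setting this loss would be applied to the (masked) global features alongside ID classification loss and EMML, which the abstract reports jointly keep the estimated mask's foreground/background separation clean.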