Person ReID, which aims at matching person images captured from non-overlapping cameras, is an important task with wide real-world applications such as intelligent video surveillance and smart retailing. One major issue that challenges this task is the ubiquitous occlusion of the captured persons. For example, as shown in Fig. 1, people in an unmanned supermarket are occluded by goods, shelves or other persons, making it difficult to track their movements.
Most existing methods leverage external cues, e.g., person masks, semantic parsing or pose estimation, to align the detected persons. However, these approaches may fail to generate accurate external cues in heavily occluded cases, e.g., when half of a subject's body is occluded. Furthermore, inferring these external cues inevitably incurs extra processing time. Some other approaches [15, 21] that use part-based models have achieved better performance via part-to-part matching, but they require strict person alignment in advance.
In this paper, we propose a novel alignment-free approach that can re-identify persons accurately, without requiring person alignment in advance, even in the presence of heavy occlusion, with the help of a foreground-aware pyramid reconstruction (FPR) based similarity measure. In particular, we first utilize a fully convolutional network (FCN) to generate discriminative spatial feature maps that preserve spatial coordinate information, and then post-process them via pyramid pooling to extract spatial pyramid features. We then develop a novel matching score computation method that can be easily incorporated into any end-to-end person ReID model. More concretely, the proposed method encourages each spatial feature in the probe feature map to be linearly reconstructed from the basis spatial features within the gallery feature maps, and the average reconstruction error is used as the final matching score. In this way, the model is independent of the image size and naturally skips the time-consuming alignment step. We also design a foreground probability generator to learn foreground probability maps (FPM) that guide the spatial reconstruction by assigning larger weights to body parts and smaller weights to occluded parts, thereby overcoming the occlusion problem. The proposed approach encourages the reconstruction error of spatial feature maps extracted from the same person to be smaller than that of different identities. We conduct extensive experiments to validate the effectiveness of our proposed approach, and the results clearly show that it achieves accurate person ReID even in the presence of heavy occlusion.
To sum up, this work makes the following contributions:
We introduce a novel end-to-end spatial pyramid feature learning architecture that can process input persons of different sizes and scales and generate discriminative features.
We propose an occlusion-sensitive alignment-free approach, i.e. foreground-aware pyramid reconstruction (FPR), that utilizes the foreground probability generator to guide the pyramid reconstruction for occluded person ReID. Unlike previous methods, it does not require any external cues in the application phase.
Experimental results demonstrate that the proposed approach achieves impressive results on multiple occlusion datasets including Partial REID, Partial iLIDS, and Occluded REID. It exceeds some occluded ReID approaches by more than 30% in terms of Rank 1 accuracy. Additionally, FPR achieves competitive results on multiple benchmark person datasets including Market1501, DukeMTMC and CUHK03.
2 Related Work
Occluded person ReID has attracted increasing attention due to its practical importance. Generally, previous methods for addressing this problem leverage external cues such as mask and pose, or adopt part-to-part matching.
Approaches with External Cues. Mask-guided models [4, 8, 12] use person masks, which contain body shape information, to help remove background clutter at the pixel level for person re-identification. For example, Kalayeh et al. proposed a model that integrates human semantic parsing into person re-identification. Similarly, Qi et al. combined source images with person masks as the inputs to remove appearance variations (illumination, pose, occlusion, etc.). Pose-guided models [6, 13, 14] utilize the skeleton as an external cue to effectively relieve the part misalignment problem by locating each part using person landmarks. For instance, Su et al. proposed a Pose-driven Deep Convolutional (PDC) model to learn improved feature extractors and matching models in an end-to-end manner. The PDC explicitly leverages human part cues to alleviate the identification difficulties caused by pose variations. Suh et al. proposed a two-stream network consisting of an appearance map extraction stream and a body part map extraction stream. Following the two streams, a part-aligned feature map is obtained by a bilinear mapping of the corresponding local appearance and body part descriptors. Although these approaches can indeed address the occlusion problem, they heavily depend on accurate pedestrian segmentation and also cost much time to infer the external cues.
Table 1: Comparison of approaches with respect to alignment and external-cue requirements.

| Method | Alignment | External cues |
|---|---|---|
| FPR (ours) | Alignment-free | Not required |
Approaches with Part-to-part Matching. Part-based models [15, 16, 18] employ a part-to-part matching strategy to handle occlusion and mostly target cases where the person of interest is partially out of the camera's view. Zheng et al. proposed a local patch-level matching model called the Ambiguity-sensitive Matching Classifier (AMC), based on dictionary learning with explicit patch ambiguity modeling, and introduced a global part-based matching model called Sliding Window Matching (SWM), which provides complementary spatial layout information. However, the computation cost of AMC+SWM is rather high, as features are calculated repeatedly without further acceleration. Sun et al. proposed a Part-based Convolutional Baseline (PCB) network that outputs a convolutional feature consisting of several part-level features. PCB focuses on the content consistency within each part to address the occlusion problem. However, none of these methods can skip the alignment step either. He et al. proposed to reconstruct the feature map of a holistic pedestrian from the visible parts by lasso regression to address partial person ReID.
Table 1 compares the state-of-the-art algorithms to our approach with respect to alignment and external-cue requirements. It is noted that external-cue based approaches are the mainstream for occluded person ReID. However, accurate and stable external cues for person alignment are hard to acquire in the application phase when half of the body is occluded. Different from previous approaches, our proposed method is alignment-free and more effective for the ReID problem of occluded persons. It does not rely on any external cues, yet still achieves higher accuracy.
3 Proposed Approach
In this section, we elaborate on the proposed alignment-free occluded person re-identification approach. We first describe the network architecture, then present the foreground-aware pyramid reconstruction for computing matching scores between two persons with occlusion, and finally explain the training strategy of our model.
3.1 Architecture of the Proposed Model
The architecture of the proposed ReID model is shown in Fig. 2. Structurally, it consists of a Fully Convolutional Network (FCN), a Pyramid Pooling layer and a Foreground Probability Generator. We now explain them one by one.
Fully Convolutional Network. Conventional CNNs involving fully-connected layers require fixed-size input images. In fact, the requirement comes from the fully-connected layers, which demand fixed-length vectors as inputs, whereas convolutional layers operate in a sliding-window manner and generate correspondingly-sized spatial outputs. To handle person images of different sizes, we discard all fully-connected layers, yielding a Fully Convolutional Network (FCN) in which only convolutional and pooling layers remain. The FCN thereby retains spatial coordinate information and is capable of extracting spatial features from person images of different sizes. The proposed FCN is based on ResNet-50; it contains one convolutional layer and four Resblocks, and the last Resblock outputs the spatial feature map.
Pyramid Pooling Layer. The detected persons for re-identification may have different scales, which makes it difficult to align their spatial features and brings errors to their similarity measure. To obtain robust spatial features regardless of scale variation, the features from the FCN are further processed by a pyramid pooling layer to generate spatial pyramid features. The pyramid pooling layer consists of multiple max-pooling layers with different kernel sizes, so that it covers more comprehensive receptive fields over the input images. As shown in Fig. 2, the output spatial features from the pooling layers with small kernel sizes capture the appearance information of small local regions, while those from the pooling layers with large kernel sizes capture the appearance information of relatively large regions in the image. Finally, we concatenate the spatial pyramid features to obtain the final spatial feature, which contains multi-scale information of the input and thus alleviates the scale variation problem.
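The multi-scale pooling and aggregation described above can be sketched as follows in NumPy. This is a minimal illustration under our own assumptions: the kernel sizes `(1, 2, 4)` and the helper names `max_pool2d` and `pyramid_features` are illustrative, not the paper's actual configuration.

```python
import numpy as np

def max_pool2d(fmap, k):
    """Non-overlapping k x k max pooling over a (C, H, W) feature map."""
    C, H, W = fmap.shape
    Ho, Wo = H // k, W // k
    blocks = fmap[:, :Ho * k, :Wo * k].reshape(C, Ho, k, Wo, k)
    return blocks.max(axis=(2, 4))

def pyramid_features(fmap, kernel_sizes=(1, 2, 4)):
    """Pool the feature map at several scales and aggregate every
    resulting spatial location into one (C, N) matrix of column features."""
    C = fmap.shape[0]
    cols = [max_pool2d(fmap, k).reshape(C, -1) for k in kernel_sizes]
    return np.concatenate(cols, axis=1)
```

Because all scales are concatenated along the location axis, the downstream matching operates on one flat set of spatial features regardless of the input image size.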
Foreground Probability Generator.
The target person to re-identify is provided as a detected bounding box. Detected person bounding boxes are coarse and often contain background and occlusion, so the output spatial features are contaminated by both. To allow the subsequent spatial feature matching to suffer less contamination from occlusion, we design a foreground probability generator that produces foreground probability maps (FPM). The FPM differentiates foreground from background and guides the subsequent pyramid reconstruction for robust matching score computation; we explain this module in detail in the next subsection. As shown in Fig. 2, the foreground probability generator consists of a convolution layer and a softmax layer.
3.2 Foreground-aware Pyramid Reconstruction
Our proposed model performs foreground-aware pyramid reconstruction (FPR) to compute matching scores for input persons without requiring them to be aligned in advance. Fig. 2 illustrates the workflow of FPR.
Suppose there is a pair of person images: a probe $p$ (an occluded person image) and a gallery $g$ (an unoccluded person image), which may have different sizes. Denote the spatial pyramid features of $p$ as $\mathbf{X} = [\mathbf{x}_1, \dots, \mathbf{x}_N] \in \mathbb{R}^{d \times N}$, obtained by aggregating the multi-scale feature maps generated by the max-pooling layers of the pyramid pooling layer; each $\mathbf{x}_i \in \mathbb{R}^d$ is the vectorized feature at one spatial location, $N$ is the total number of spatial locations over all scales, and $d$ is the number of channels. Likewise, we construct the gallery feature matrix $\mathbf{Y} = [\mathbf{y}_1, \dots, \mathbf{y}_M] \in \mathbb{R}^{d \times M}$. A probe feature $\mathbf{x}_i$ that describes a local person part should then be representable as a linear combination of the columns of $\mathbf{Y}$: some spatial features in $\mathbf{Y}$ should be able to linearly reconstruct $\mathbf{x}_i$, and the similarity between the two images can be computed from the reconstruction residual. Therefore, we first obtain the linear representation coefficients $\mathbf{w}_i \in \mathbb{R}^{M}$ of $\mathbf{x}_i$ with respect to $\mathbf{Y}$. With an $\ell_2$-norm regularization over $\mathbf{w}_i$, the linear representation formulation is

$$\min_{\mathbf{w}_i} \; \|\mathbf{x}_i - \mathbf{Y}\mathbf{w}_i\|_2^2 + \lambda \|\mathbf{w}_i\|_2^2. \qquad (1)$$
For all $N$ spatial features in $\mathbf{X}$, Eq. (1) can be rewritten in matrix form as

$$\min_{\mathbf{W}} \; \|\mathbf{X} - \mathbf{Y}\mathbf{W}\|_F^2 + \lambda \|\mathbf{W}\|_F^2, \qquad (2)$$

where $\mathbf{W} = [\mathbf{w}_1, \dots, \mathbf{w}_N] \in \mathbb{R}^{M \times N}$ and $\lambda$ controls the smoothness of the coding vectors $\mathbf{w}_i$.
We solve for $\mathbf{W}$ in closed form by least squares, i.e. $\mathbf{W} = (\mathbf{Y}^\top \mathbf{Y} + \lambda \mathbf{I})^{-1} \mathbf{Y}^\top \mathbf{X}$. The reconstructed probe spatial features can then be represented as

$$\hat{\mathbf{X}} = \mathbf{Y}\mathbf{W}. \qquad (3)$$
Let the residual spatial features be $\mathbf{E} = \mathbf{X} - \hat{\mathbf{X}} = \mathbf{X} - \mathbf{Y}\mathbf{W}$. The average reconstruction error is then computed as

$$d(p, g) = \frac{1}{N} \sum_{i=1}^{N} e_i, \qquad (4)$$

where $e_i = \|\mathbf{e}_i\|_2$ and $\mathbf{e}_i$, the $i$-th column of $\mathbf{E}$, is the spatial reconstruction error of the $i$-th spatial feature. The average reconstruction error can be regarded as the distance between the two person images.
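The regularized reconstruction and the residual computation above can be sketched in NumPy. This is a minimal illustration with hypothetical function names; in the actual model the inputs are the spatial pyramid features produced by the FCN.

```python
import numpy as np

def reconstruction_error(X, Y, lam=1.0):
    """Reconstruct probe features X (d, N) from gallery features Y (d, M)
    by ridge regression, W = (Y^T Y + lam*I)^(-1) Y^T X, and return the
    average residual norm together with the per-location residuals."""
    M = Y.shape[1]
    W = np.linalg.solve(Y.T @ Y + lam * np.eye(M), Y.T @ X)
    E = X - Y @ W                        # residual spatial features
    per_location = np.linalg.norm(E, axis=0)
    return per_location.mean(), per_location
```

When the probe features lie in the span of the gallery features (same identity, overlapping visible parts), the residuals are small; for mismatched identities they remain large.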
With the above score computation, the alignment step required by previous methods is favourably avoided. However, it suffers from an obvious limitation: since the background and occlusion spatial features are all pooled into the probe feature matrix, the reconstruction error of background or occlusion spatial features can be very large. As a consequence, the average reconstruction error increases, resulting in unreliable similarity scores and mismatching. To address this problem, we propose to reduce the influence of the background by assigning it small weights, while enhancing the effect of the foreground by adaptively assigning these regions large weights. Therefore, we use spatial foreground probability maps to guide the spatial pyramid reconstruction, which yields the FPR model.
Specifically, given the probe person image, the foreground probability generator introduced above outputs spatial probability maps, from which the foreground probability vector $\mathbf{f} = [f_1, \dots, f_N]^\top$ is obtained. This vector reveals the different contributions of the spatial features of the probe image to the spatial reconstruction: for foreground spatial features, the values in the FPM are relatively large, while for background spatial features they are relatively small. Therefore, the ReID model can leverage $\mathbf{f}$ to guide the spatial reconstruction. We perform a weighted sum over the reconstruction errors using the foreground probability vector $\mathbf{f}$, and the FPR distance between the two person images is defined as

$$d_{\mathrm{FPR}}(p, g) = \frac{\sum_{i=1}^{N} f_i \, e_i}{\sum_{i=1}^{N} f_i}. \qquad (5)$$
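The foreground-weighted distance can be sketched as follows, assuming the per-location residual norms $e_i$ and the foreground probabilities $f_i$ are already available as arrays (the function name `fpr_distance` is ours):

```python
import numpy as np

def fpr_distance(residuals, fg_prob):
    """Foreground-weighted average of per-location reconstruction
    errors; fg_prob holds the foreground probabilities f_i."""
    residuals = np.asarray(residuals, dtype=float)
    fg_prob = np.asarray(fg_prob, dtype=float)
    return float(np.dot(fg_prob, residuals) / fg_prob.sum())
```

A location with near-zero foreground probability (background or occluder) contributes almost nothing to the distance, which is exactly the desired down-weighting behaviour.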
The overall FPR is outlined in Algorithm 1.
3.3 Model Training
We now explain the training strategy of the foreground probability generator as well as the whole model. Two loss functions, the triplet loss and the foreground probability generator loss, as shown in Fig. 3, are used to optimize the whole ReID model.
The first loss is the hard example triplet loss, which ensures that an image of a specific person is closer to all other images of the same person than to any image of a different person.

The goal of triplet embedding learning is to make an image (anchor) of a specific person closer to all other images (positive) of the same person than to any image (negative) of any other person in the embedding space. Thus, we want $d_{\mathrm{FPR}}(a, p) + \alpha < d_{\mathrm{FPR}}(a, n)$, where $d_{\mathrm{FPR}}$ is the FPR measure between a pair of person images. The triplet loss over a set of sampled triplets $\mathcal{T}$ is defined as $L_{\mathrm{tri}} = \sum_{(a, p, n) \in \mathcal{T}} [\alpha + d_{\mathrm{FPR}}(a, p) - d_{\mathrm{FPR}}(a, n)]_+$, where $\alpha$ is a margin enforced between positive and negative pairs. To effectively select triplet samples, we adopt the batch hard variant of the triplet loss. The core idea is to form batches by randomly sampling $P$ subjects and then randomly sampling $K$ images of each subject, resulting in a batch of $PK$ images. For each anchor sample in the batch, we select the hardest positive and the hardest negative samples within the batch when forming the triplets, which gives the Batch Hard Triplet Loss:

$$L_{\mathrm{BH}} = \sum_{i=1}^{P} \sum_{a=1}^{K} \Big[ \alpha + \max_{p = 1, \dots, K} d_{\mathrm{FPR}}\big(x_a^i, x_p^i\big) - \min_{\substack{j = 1, \dots, P,\; j \neq i \\ n = 1, \dots, K}} d_{\mathrm{FPR}}\big(x_a^i, x_n^j\big) \Big]_+. \qquad (6)$$
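The batch-hard mining step can be sketched in NumPy over a precomputed pairwise distance matrix. This is an illustrative sketch (the function name and the margin value are ours), assuming each identity appears at least twice per batch:

```python
import numpy as np

def batch_hard_triplet_loss(dist, labels, margin=0.3):
    """Batch hard triplet loss from a precomputed pairwise distance
    matrix dist (B, B) and identity labels (B,). Every identity is
    assumed to appear at least twice in the batch."""
    labels = np.asarray(labels)
    B = dist.shape[0]
    total = 0.0
    for a in range(B):
        pos = labels == labels[a]
        pos[a] = False                      # exclude the anchor itself
        neg = labels != labels[a]
        hardest_pos = dist[a, pos].max()    # farthest same-identity sample
        hardest_neg = dist[a, neg].min()    # closest different-identity sample
        total += max(margin + hardest_pos - hardest_neg, 0.0)
    return total / B
```

In the proposed model, the entries of `dist` would be the FPR distances between batch samples rather than Euclidean embedding distances.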
Foreground Probability Generator Loss
The second loss trains the spatial background-foreground classifier, which aims to separate the background/occlusion parts from the person parts. We treat this as a binary classification problem. Given a person image, the corresponding $N$ spatial features are extracted. The label of the $i$-th spatial feature is determined by the person mask obtained from a semantic segmentation model; the $i$-th spatial feature corresponds to the mask region $M_i$. We calculate the average pixel value of $M_i$ to obtain its mask-label $\bar{m}_i$:

$$\bar{m}_i = \frac{1}{w_m h_m} \sum_{(u, v) \in M_i} M_i(u, v), \qquad (7)$$
where $w_m$ and $h_m$ are the width and the height of the mask patch $M_i$. Then we set a label threshold $\gamma$ ($0 < \gamma < 1$) to obtain the labels of the spatial features. The spatial background/foreground label $l_i$ is defined as

$$l_i = \begin{cases} 1, & \bar{m}_i > \gamma, \\ 0, & \bar{m}_i \le \gamma, \end{cases} \qquad (8)$$
where $\gamma$ is the label threshold. The foreground probability generator loss function is then given by the binary cross-entropy

$$L_{\mathrm{fg}} = -\frac{1}{N} \sum_{i=1}^{N} \big[ l_i \log f_i + (1 - l_i) \log(1 - f_i) \big], \qquad (9)$$
where $l_i = 0$ and $l_i = 1$ respectively indicate background and foreground spatial feature labels.
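The label generation and generator loss above can be sketched as follows. This is a minimal illustration; the binary cross-entropy form of the loss is our assumption, as are the function names.

```python
import numpy as np

def mask_labels(mask_patches, gamma=0.35):
    """Foreground label l_i = 1 if the mean value of the i-th mask
    patch exceeds the threshold gamma, else 0."""
    means = np.array([patch.mean() for patch in mask_patches])
    return (means > gamma).astype(float)

def generator_loss(fg_prob, labels, eps=1e-12):
    """Binary cross-entropy between predicted foreground probabilities
    and the mask-derived labels (assumed form of the generator loss)."""
    fg_prob = np.clip(np.asarray(fg_prob, dtype=float), eps, 1 - eps)
    return float(-np.mean(labels * np.log(fg_prob)
                          + (1 - labels) * np.log(1 - fg_prob)))
```

Note that the segmentation masks are needed only at training time to produce the labels; at inference the generator predicts foreground probabilities on its own.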
Fig. 4 shows some FPMs of occluded person images generated by the softmax layer. We can see that the spatial background-foreground classifier accurately detects the person parts.
The final total loss function is defined as

$$L = L_{\mathrm{BH}} + \mu L_{\mathrm{fg}}, \qquad (10)$$

where $\mu$ controls the importance of the spatial foreground probability generator loss.
4 Experiments

In this section, we first verify the effectiveness of our proposed approach on the task of occluded person re-identification, and then experiment on non-occluded datasets to test its generalizability. We also perform parameter analysis to investigate the influence of the loss weight and the label threshold in the training and testing phases.
4.1 Experiment Settings
Our implementation is based on the publicly available PyTorch framework. All models are trained and tested on Linux with GTX TITAN X GPUs. During training, all samples are re-scaled to the same fixed size. No data augmentation is used. Besides, we empirically set the loss weight $\mu = 0.02$ in Eq. (10) and the label threshold $\gamma = 0.35$ in Eq. (8); the regularization parameter $\lambda$ in Eq. (2) is also set empirically. For the batch hard triplet loss function, one batch consists of 16 subjects, and each subject has 4 different images; each batch therefore yields 64 groups of hard triplets. The proposed model is trained for 200 epochs.
Evaluation Protocol. For performance evaluation, we employ the standard metrics used in most person ReID literature, namely the cumulative matching characteristic (CMC) curve and the mean Average Precision (mAP). To evaluate our method, we re-implement the original evaluation code in Python.
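For reference, the two metrics can be computed from a query-by-gallery distance matrix as sketched below. This is a simplified illustration of the standard definitions (it omits the camera-based filtering some benchmarks apply), and the function name is ours.

```python
import numpy as np

def cmc_and_map(dist, q_ids, g_ids):
    """CMC curve and mAP from a query-by-gallery distance matrix
    (smaller distance = more similar)."""
    n_q, n_g = dist.shape
    g_ids = np.asarray(g_ids)
    cmc = np.zeros(n_g)
    aps = []
    for i in range(n_q):
        order = np.argsort(dist[i])
        matches = (g_ids[order] == q_ids[i]).astype(float)
        cmc[int(np.argmax(matches)):] += 1   # rank of first correct match
        hits = np.cumsum(matches)
        precision = hits * matches / (np.arange(n_g) + 1.0)
        aps.append(precision.sum() / matches.sum())
    return cmc / n_q, float(np.mean(aps))
```

Rank-1 accuracy is simply the first entry of the returned CMC curve.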
4.2 Evaluation on Occluded Person Datasets
Datasets. Partial REID is a specially designed partial person dataset that includes 600 images of 60 people, with 5 full-body images and 5 occluded images per person. These images were collected on a university campus by 6 cameras from different viewpoints, with varying backgrounds and different types of occlusion. Examples of partial persons in the Partial REID dataset are shown in Fig. 5(a). We follow the standard evaluation protocol, where 300 full-body images of 60 identities are used as the gallery set and 300 occluded-body images of the same 60 identities are used as the probe set. Partial iLIDS contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. Some images contain people occluded by other individuals or luggage. Fig. 5(b) shows some examples of individual images from the iLIDS dataset. 238 images of 119 individuals captured by the 1st and 2nd cameras are used as the gallery set, and 238 images of the same individuals captured by the 3rd and 4th cameras are used as the probe set. Occluded REID is an occluded person dataset captured by mobile cameras, consisting of 2,000 images of 200 occluded persons (see Fig. 5(c)). Each identity has 5 full-body person images and 5 occluded person images with different types of occlusion. All images, with varying viewpoints and backgrounds, are resized to a uniform size. The details of the training set and testing set are shown in Table 2.
[Table 3: comparison on the Partial REID and Partial iLIDS datasets.]

Table 4: Comparison with state-of-the-art methods on Market1501, CUHK03 and DukeMTMC (R1 accuracy / mAP, %). Dashes denote results not reported.

| Category | Method | Market1501 R1 | Market1501 mAP | CUHK03 R1 | CUHK03 mAP | DukeMTMC R1 | DukeMTMC mAP |
|---|---|---|---|---|---|---|---|
| Part-based | PCB (ECCV18) | 92.30 | 77.40 | 61.30 | 54.20 | 81.80 | 66.10 |
| | PCB+RPP (ECCV18) | 93.80 | 81.60 | 63.70 | 57.50 | 83.30 | 69.20 |
| | DSR (CVPR18) | 94.71 | 85.78 | 75.24 | 71.15 | 88.14 | 77.07 |
| Mask-guided | SPReID (CVPR18) | 92.54 | 81.34 | - | - | - | - |
| | MGCAM (CVPR18) | 83.79 | 74.33 | 50.14 | 50.21 | 46.71 | 46.87 |
| | MaskReID (Arxiv18) | 90.02 | 75.30 | - | - | - | - |
| Pose-guided | PDC (ICCV17) | 84.14 | 63.41 | - | - | - | - |
| | PABR (Arxiv18) | 90.20 | 76.00 | - | - | - | - |
| | Pose-transfer (CVPR18) | 87.65 | 68.92 | 33.80 | 30.50 | 30.10 | 28.20 |
| | PSE (CVPR18) | 87.70 | 69.00 | - | - | 27.30 | 30.20 |
| Attention-based | DuATM (CVPR18) | 91.42 | 76.62 | - | - | - | - |
| | HA-CNN (CVPR18) | 91.20 | 75.70 | 44.40 | 41.00 | 41.70 | 38.60 |
| | AACN (CVPR18) | 85.90 | 66.87 | - | - | - | - |
Benchmark Algorithms. Several existing partial person ReID methods are used for comparison: Ambiguity-sensitive Matching Classifier (AMC) with Sliding Window Matching (SWM) (AMC+SWM); two part-based matching methods, PCB and DSR; and the mask-guided ReID model MaskReID. For AMC+SWM, features are extracted from supporting areas that are densely sampled with an overlap of half of the height/width of the supporting area in both horizontal and vertical directions, and each region is represented following the original work. Besides, the weights of AMC and SWM are 0.7 and 0.3, respectively. For PCB and MaskReID, we follow their original parameter settings. Our ReID model is trained on Market1501, following the standard training protocol where 751 identities are used for training. Therefore, it is also a cross-domain setting.
Results. Table 3 shows the experimental results. The results on Partial REID, Partial iLIDS and Occluded REID follow similar trends. The proposed FPR outperforms MaskReID, PCB, AMC+SWM and DSR, with R1 accuracies of 76.33%, 68.07% and 76.30% and mAPs of 76.60%, 61.78% and 68.00% on the three occluded person datasets, respectively. Note that the gap between FPR and DSR is significant: FPR increases R1 accuracy from 73.67% to 81.00%, from 64.29% to 68.07%, and from 72.80% to 78.30% on the three occluded person datasets, respectively. This is because background and occlusion largely affect the reconstruction error and thereby inflate the average error; FPR effectively reduces their influence by assigning them small weights. Among the comparison approaches, PCB is unable to relieve the influence of occlusion and background since it fuses both occlusion/background part features and human part features into the final feature. Although MaskReID is well suited for addressing the person occlusion problem, it depends on external cues such as masks during inference. The proposed FPR is an alignment-free approach since it does not depend on external cues to align the person images. The retrieval results are shown in Fig. 6. Experiments are conducted under the cross-domain setting: no images from the three occluded datasets are used for training (the Market1501 training set is used to train the ReID model). FPR achieves good cross-domain performance in comparison with other approaches.
4.3 Evaluation on Non-occluded Person Datasets
We also experiment on non-occluded person datasets to test the generalizability of our proposed approach.
Datasets. Three person re-identification datasets, Market1501, CUHK03 and DukeMTMC-reID, are used. Market1501 has 12,936 training and 19,732 testing images with 1,501 identities in total from 6 cameras; the Deformable Part Model (DPM) is used as the person detector. We follow the standard training and evaluation protocol, where 751 identities are used for training and the remaining 750 identities for testing. CUHK03 consists of 13,164 images of 1,467 subjects captured by 2 cameras on the CUHK campus; both manually labelled and DPM-detected person bounding boxes are provided. We adopt the newly proposed training/testing protocol since it defines a more realistic and challenging ReID task: 767 identities are used for training and the remaining 700 identities for testing. DukeMTMC-reID is a subset of the Duke dataset, consisting of 16,522 training images from 702 identities, 2,228 query images and 17,661 gallery images from the other identities. It provides manually labelled person bounding boxes, and we follow the standard setup. The details of the training and testing sets are shown in Table 5.
Results. We compare FPR with state-of-the-art approaches of four categories on the Market1501, CUHK03 and DukeMTMC datasets: the part-based model PCB; the mask-guided models SPReID, MGCAM and MaskReID; the pose-guided models PDC, PABR, Pose-transfer and PSE; and the attention-based models DuATM, HA-CNN and AACN. The results are shown in Table 4, from which it can be seen that the proposed FPR achieves competitive performance in all evaluations.
The gaps between FPR and DSR are significant. FPR increases R1 accuracy from 94.71% to 95.42%, from 75.24% to 76.08%, and from 88.14% to 88.64% on Market1501, CUHK03 and DukeMTMC, respectively. FPR increases mAP from 85.78% to 86.58%, from 71.15% to 72.31%, and from 77.07% to 78.42% on the same three datasets. These results demonstrate that the designed foreground probability generator in deep spatial reconstruction is very useful. Besides, FPR performs better than the part-based model PCB, because part-level features cannot eliminate the impact of occlusion and background. Furthermore, the proposed FPR is superior to some approaches that rely on external cues. The mask-guided and pose-guided approaches heavily rely on external cues for person alignment, but they cannot always infer accurate cues in the case of severe occlusion, thus resulting in mismatching. FPR utilizes foreground probability maps to guide spatial reconstruction, which naturally avoids alignment and can handle person images even in the presence of heavy occlusion. The proposed FPR not only achieves good R1 accuracy but is also superior to other methods in mAP.
4.4 Parameter Analysis
We evaluate two key parameters in our model: the weight $\mu$ of the spatial foreground probability generator loss in Eq. (10) and the label threshold $\gamma$ in Eq. (8). Both parameters influence the performance of the proposed FPR. To explore the influence of $\mu$, we fix $\gamma$ and vary $\mu$ from 0.01 to 0.04 with a stride of 0.01. The results on the three occluded person datasets are shown in Fig. 7; the proposed FPR achieves the best performance when $\mu = 0.02$. To further explore the influence of $\gamma$, we fix $\mu$ and vary $\gamma$ from 0 to 1 with a stride of 0.1. As shown in Fig. 7, when $\gamma$ is approximately 0.35, the proposed FPR achieves the best performance.
5 Conclusion

We have proposed a novel alignment-free approach called Foreground-aware Pyramid Reconstruction (FPR) for occluded person ReID. The proposed method provides a feasible scheme in which the probe spatial features are linearly reconstructed from gallery spatial features to achieve effective alignment-free matching. More importantly, the spatial foreground probabilities used in the reconstruction process substantially alleviate the occlusion problem. Furthermore, we embedded FPR into the batch hard triplet loss function to learn more discriminative features by minimizing the reconstruction error for image pairs from the same target and maximizing that of image pairs from different targets. Experimental results on three occluded datasets validate the effectiveness of FPR. Additionally, the proposed method is also competitive on the benchmark person datasets.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  L. He, J. Liang, H. Li, and Z. Sun. Deep spatial feature reconstruction for partial person re-identification: Alignment-free approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7073–7082, 2018.
-  J. Zhuo, Z. Chen, J. Lai, and G. Wang. Occluded person re-identification. In IEEE International Conference on Multimedia and Expo (ICME), 2018.
-  M. M. Kalayeh, E. Basaran, M. Gökmen, M. E. Kamasak, and M. Shah. Human semantic parsing for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1062–1071, 2018.
-  W. Li, X. Zhu, and S. Gong. Harmonious attention network for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
-  J. Liu, B. Ni, Y. Yan, P. Zhou, S. Cheng, and J. Hu. Pose transferrable person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4099–4108, 2018.
-  T. Liu, T. Ruan, Z. Huang, Y. Wei, S. Wei, Y. Zhao, and T. Huang. Devil in the details: Towards accurate single and multiple human parsing. arXiv preprint arXiv:1809.05996, 2018.
-  L. Qi, J. Huo, L. Wang, Y. Shi, and Y. Gao. Maskreid: A mask based deep ranking neural network for person re-identification. arXiv preprint arXiv:1804.03864, 2018.
-  E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi. Performance measures and a data set for multi-target, multi-camera tracking. In European Conference on Computer Vision, pages 17–35. Springer, 2016.
-  M. S. Sarfraz, A. Schumann, A. Eberle, and R. Stiefelhagen. A pose-sensitive embedding for person re-identification with expanded cross neighborhood re-ranking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
-  J. Si, H. Zhang, C.-G. Li, J. Kuen, X. Kong, A. C. Kot, and G. Wang. Dual attention matching network for context-aware feature sequence based person re-identification. arXiv preprint arXiv:1803.09937, 2018.
-  C. Song, Y. Huang, W. Ouyang, and L. Wang. Mask-guided contrastive attention model for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1179–1188, 2018.
-  C. Su, J. Li, S. Zhang, J. Xing, W. Gao, and Q. Tian. Pose-driven deep convolutional model for person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 3980–3989, 2017.
-  Y. Suh, J. Wang, S. Tang, T. Mei, and K. M. Lee. Part-aligned bilinear representations for person re-identification. arXiv preprint arXiv:1804.07094, 2018.
-  Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang. Beyond part models: Person retrieval with refined part pooling. arXiv preprint arXiv:1711.09349, 2017.
-  G. Wang, Y. Yuan, X. Chen, J. Li, and X. Zhou. Learning discriminative features with multiple granularities for person re-identification. arXiv preprint arXiv:1804.01438, 2018.
-  J. Xu, R. Zhao, F. Zhu, H. Wang, and W. Ouyang. Attention-aware compositional network for person re-identification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
-  H. Zhao, M. Tian, S. Sun, J. Shao, J. Yan, S. Yi, X. Wang, and X. Tang. Spindle net: Person re-identification with human body region guided feature decomposition and fusion. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
-  W.-S. Zheng, S. Gong, and T. Xiang. Person re-identification by probabilistic relative distance comparison. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 649–656, 2011.
-  W.-S. Zheng, X. Li, T. Xiang, S. Liao, J. Lai, and S. Gong. Partial person re-identification. In IEEE International Conference on Computer Vision (ICCV), 2015.
-  Z. Zheng, L. Zheng, and Y. Yang. Pedestrian alignment network for large-scale person re-identification. arXiv preprint arXiv:1707.00408, 2017.
-  Z. Zheng, L. Zheng, and Y. Yang. Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3754–3762, 2017.