Let Features Decide for Themselves: Feature Mask Network for Person Re-identification

11/20/2017 · by Guodong Ding et al., Australian National University

Person re-identification aims at establishing the identity of a pedestrian from a gallery that contains images of multiple people obtained from a multi-camera system. Factors such as occlusions, drastic lighting and pose variations across the camera views, indiscriminate visual appearances, cluttered backgrounds, imperfect detections, motion blur, and noise make this task highly challenging. While most approaches focus on learning features and metrics to derive better representations, we hypothesize that both local and global contextual cues are crucial for an accurate identity matching. To this end, we propose a Feature Mask Network (FMN) that takes advantage of ResNet high-level features to predict a feature map mask and then imposes it on the low-level features to dynamically reweight different object parts for a locally aware feature representation. This serves as an effective attention mechanism by allowing the network to focus on local details selectively. Given the resemblance of person re-identification with classification and retrieval tasks, we frame the network training as a multi-task objective optimization, which further improves the learned feature descriptions. We conduct experiments on the Market-1501, DukeMTMC-reID and CUHK03 datasets, where the proposed approach respectively achieves significant improvements of 5.3%, 9.1% and 10.7% in the mAP measure relative to the state-of-the-art.







1 Introduction

Person re-identification deals with the task of matching identities across images captured from disjoint camera views. Given a query image, a person re-identification system determines whether the person has been observed by another camera at another time. Significant attention has been dedicated to person re-identification in the past few years, as the task is essential in video surveillance for cross-camera tracking, multi-camera event detection, and pedestrian retrieval. Albeit highly important, the re-identification problem poses significant challenges due to viewpoint and pose changes, illumination variations, cluttered backgrounds, occlusions, and indiscriminative appearances across different cameras (or even for the same camera). Moreover, the re-identification task considers new identities at test time, and therefore requires a high generalization capability from the learned feature encodings.

Existing research on the person re-identification problem can be broadly divided into two main streams. (a) Methods that seek to learn a discriminative metric, which allows instances of the same identity to be closer while pushing instances of different identities far apart [6, 11, 14, 37, 31, 13]. These metric learning methods mainly adopt a pairwise [11, 14, 37] or triplet [31, 13, 24] loss to obtain an embedding for each probe image and distinguish identities in the projected space. Along similar lines, [4] proposes a quadruplet ranking loss capable of achieving smaller intra-class variations and larger inter-class distances, resulting in an improved performance on the test set. (b) Methods that focus on designing robust visual descriptors to model the appearance of the person [22, 9, 39, 18]. Among these techniques, handcrafted features found initial success [9, 22]. More recently, automatically learned feature representations using deep architectures have shown excellent improvements [41, 25, 33, 43]. The prevalent re-identification approaches in both research streams assume that the person bounding boxes are provided by a dedicated detector. However, such detections are not always perfect, leading to problems such as the inclusion of excessive background in the object box, incomplete coverage of the body and localization mismatch. This is exacerbated by the heavy occlusions that partially mask pedestrians in surveillance scenarios.

Our intuition is that a desired capability that allows overcoming these challenges is to pay attention to important yet perhaps subtle local details alongside the supposedly prominent global cues. In this paper, we propose an automatic approach which learns to focus on local details as well as the global image descriptions using deep neural networks. This helps the identification algorithm to filter out the irrelevant image parts and concentrate on the regions that carry more valuable and discriminative cues for the identity prediction task.

We formulate the proposed feature selection strategy as soft attention within a deep network, which enables an end-to-end learning framework. In addition to avoiding the problems due to imperfect pedestrian detection windows, our network learns to resolve ambiguities (such as similar clothing of two different identities) by shifting attention towards more distinguishing aspects of the respective identities. To this end, we utilize the already learned global discriminative features as a guidance and a dynamic selection mechanism to assign different importance weights to local feature representations.

Recent deep learning based re-identification approaches perform training on the identity classification task and use the network features for the test set to perform retrieval. This approach limits the representation capability of deep features since the end-task is different from the one used during learning. To overcome this shortcoming, we propose a multi-task loss formulation that considers both classification and ranking objectives during the training phase. The ranking loss enforces the locally attentive network outputs to take guidance from the predictions based on global features by introducing a margin violation penalty. Our results on three large-scale datasets demonstrate significant performance improvements.

Our main contributions are threefold, as given below:

  • We propose a Feature Mask Network (FMN) that can dynamically attend to local details in an image and use them alongside global representations for improved pedestrian re-identification.

  • We introduce a multi-task formulation, which optimizes a classification as well as a pair-wise ranking loss to learn highly robust feature descriptions.

  • The proposed approach is easy to implement and efficient to train, while consistently outperforming the state-of-the-art methods on all three benchmark datasets.

The rest of this paper is organized as follows. Section 2 reviews and analyzes the related literature. Section 3 provides the details of our proposed network. Section 4 reports experimental results, and Section 5 concludes this paper with an outlook towards the future work.

Figure 2: The overall architecture of our network. The GRN obtains high-level feature representations using a fine-tuned ResNet architecture. Using these features, the MN creates a feature mask which dynamically attends to local details useful to person identification. The re-weighted features are used in the bottom LAN to learn more locally focused features. Note that the GRN and LAN learn a separate set of parameters. In testing, we concatenate the last fully connected layer features from both branches to form the final pedestrian descriptor.

2 Related Work

Person re-identification methods employ handcrafted or automatically learned feature representations. There have been several types of handcrafted features used for person re-identification, including local binary patterns (LBP) [14, 35], color histogram [14] and local maximal occurrence (LOMO) [18] statistics. Methods based on handcrafted features focus more on learning an effective distance metric to compare the features, while deep learning based methods jointly learn the best feature representations and the associated distance metrics in a data-driven fashion. We outline the representative works in both streams below.

Metric learning for person re-identification: Several works aim towards metric learning for person re-identification. The main goal is to learn a similarity metric that maps similar identities closer in a projected space, while pushing different identities as far apart as possible. Such works usually take image pairs or triplets as input and learn a similarity metric by adding pairwise or triplet comparison constraints [7, 29, 37]. Specifically, [7] divides convolutional features according to different spatial locations and uses a triplet loss. [37] proposed a Siamese network which takes in image pairs, splits a pedestrian image into three horizontal parts, and trains each part separately. A cosine distance measure is used in [37] to score the similarity of the input images, while [8] uses a Euclidean distance. Deep RE-ID [16] integrates several different layers designed for different sub-tasks, such as displacement and pose transforms. [29] utilizes long short-term memory (LSTM) to sequentially process image parts such that the relationships between spatial parts are learned to improve the network's discriminative ability. [31] proposes a framework to learn single-image and cross-image representations (SIR and CIR) using two different CNNs to improve matching performance, and extends the model to joint SIR and CIR learning based on either a pairwise comparison or a triplet objective.
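As a concrete illustration of the triplet comparison constraint mentioned above, the sketch below computes a hinge-based triplet loss on embedding vectors; the margin value and the embeddings are illustrative assumptions, not tied to any particular cited method:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge-based triplet loss: pushes the anchor-positive distance to
    be smaller than the anchor-negative distance by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)  # same identity
    d_neg = np.linalg.norm(anchor - negative)  # different identity
    return max(0.0, d_pos - d_neg + margin)
```

When the constraint is satisfied (the positive is already much closer than the negative), the loss is zero and the pair contributes no gradient.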

Deep learning for person re-identification: Several recent approaches have adopted deep architectures due to their superior expressive power for discriminative feature learning from large-scale visual data [43, 31]. Some of them focus on learning the image representations together with the similarity function [16, 33, 31]. Other works learn feature representations by tackling person re-identification as a classification problem. A common practice is to first train the network to correctly classify samples into different categories and then exploit the fully connected layer output as the final image descriptor for retrieval [41, 25, 33, 5]. Zheng et al. [44] adopted generative adversarial networks (GANs) to generate unlabeled data to enlarge the training set. Another group of approaches incorporates a margin-based loss (e.g., a contrastive loss), which enforces separation between positive and negative pairs [37, 13]. Recently, [41] showed that a CNN itself can learn very discriminative representations without any extra part matching or margin-based training. Following [41], we show that a fine-tuned network can act as a strong baseline and that the proposed FMN can effectively boost its performance.

Connection with existing work: Closest to our proposed approach is the recent pedestrian alignment network by Zheng et al. [43]. However, their approach is still significantly different from ours, since they aim to solve the misalignment problem in automatically detected pedestrian bounding boxes. This misalignment is reduced by applying an affine transformation whose parameters are learned automatically within a network. In contrast, our work advocates that the intermediate feature representations within the network have different levels of relevance for our end-task, since the object of interest often covers only a part of the input image. Therefore, directly considering the local features can result in a better performance without the need to align and transform the input image. In our work, we show that appropriately shifting the network's attention towards the local information in feature encoding can greatly help the person re-identification task. Furthermore, the proposed multi-task formulation helps in learning discriminative features by considering the predictions at both the global and local levels.

3 Proposed Method

Our approach is based on the proposition that a successful person re-identification system needs to give importance to both the global and the local discriminative aspects of a pedestrian whose image is acquired from different views using multiple surveillance cameras. To this end, we introduce a novel CNN based deep learning architecture, which learns to focus both on the global and local cues of a person that are useful for its re-identification. We describe our proposed architecture and training procedure in the following sections.

3.1 Feature Mask Network

The proposed CNN architecture is shown in Fig. 2. The complete system consists of three main components, which we term (from top to bottom): a) Global Representation Network, b) Mixing Network and c) Locally Attentive Network.

The Global Representation Network (GRN) learns the holistic feature representations corresponding to an input image. It is designed as a Residual Network [12] with five residual blocks, each (except the first module) containing skip connections. The GRN has a total of 50 parameter layers and 3.8 billion FLOPs. It is pre-trained on the ImageNet image classification dataset and fine-tuned on the person re-identification dataset. The Mixing Network (MN) predicts the mask weights for the local features from the initial layers in the GRN. These weights are derived from the global feature representation learned by the GRN for the pedestrian images. The MN consists of a transformation layer implemented as a fully connected (FC) layer on top of the global features from the GRN, followed by a reshaping module and a mixer which performs an element-wise product between the local feature representations and the mask weights.

The input pedestrian images may contain excessive background or misalignment errors due to high appearance, scale and pose variations in the candidate profiles. Therefore, important information in an image may get suppressed in the global representation learned by the GRN. The third block in our scheme, called the Locally Attentive Network (LAN), learns to attend to local discriminative features which can provide useful clues for a person's identity matching. The LAN takes local features from the GRN, which are re-weighted by the mask predicted by the MN. These modified activations essentially implement an attention mechanism, where the discriminating local features are given more importance than others. Furthermore, the proposed attention mechanism also effectively incorporates the relevant contextual information that is useful for the person re-identification task. The LAN consists of four residual blocks with an architecture similar to the corresponding blocks in the GRN. Since the LAN parameters are not shared with the GRN, it learns a different global representation with a refocus on the locally discriminative information. This acts as a complementary source of information, which we found highly useful in our experiments (see Sec. 4.3). In the following, we describe the details of the mask computation.
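The three-component data flow can be sketched with dummy tensors standing in for the network activations. The shapes follow a ResNet-50-style backbone on a 224×224 input; the random stand-ins and the 0.01 weight scale are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# GRN: a fine-tuned ResNet yields an early local feature map and a
# final global descriptor (random tensors stand in for both here).
local_feat = rng.standard_normal((56, 56, 256))  # early-block output
global_feat = rng.standard_normal(2048)          # final FC feature

# MN: a fully connected layer maps the global feature to a spatial
# mask, applied identically to every feature channel.
W = 0.01 * rng.standard_normal((56 * 56, 2048))
mask = np.exp(np.maximum(0.0, W @ global_feat)).reshape(56, 56)

# Mixer: re-weight the local features; the LAN then continues from
# this tensor with its own (unshared) residual blocks.
lan_input = local_feat * mask[:, :, None]
```

The re-weighted tensor has the same shape as the original local feature map, so the LAN branch can reuse the standard residual-block architecture unchanged.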

Input: Pre-trained model $\theta_0$; Re-ID training data $X$ with identity labels $Y$; maximum iterations $T_1$, $T_2$
Output: Learnt FMN model; $\theta_G$, $\theta_M$ and $\theta_L$ denote the parameters of the GRN, MN and LAN, respectively.
Initialization: $\theta_G \leftarrow \theta_0$, $\theta_L \leftarrow \theta_0$, random initialization for $\theta_M$
// Global representation learning
1 for $t = 1, \dots, T_1$ do
       Keep $\theta_M$, $\theta_L$ fixed;
       Update $\theta_G$ using Eq. (4);
// Locally attentive representation learning
2 for $t = 1, \dots, T_2$ do
       Keep $\theta_G$ fixed and feed-forward the GRN;
       Produce the mask $\mathbf{M}$ using Eq. (1) and Eq. (2);
       Apply $\mathbf{M}$ to the LAN inputs using Eq. (3);
       Update $\theta_M$ and $\theta_L$ using both Eq. (4) and Eq. (5);
Algorithm 1: Feature Mask Net Optimization

3.2 Mask Computation

The MN operates on the global feature representation $\mathbf{g} \in \mathbb{R}^{d}$ from higher layers (the final fully connected layer in our case) and the local feature representation $\mathbf{X}$ from lower layers (the output of the first residual block in our case) in the residual network. Since the feature representations from the lower layers in a CNN are arranged as multiple 2D activation maps, we can represent their dimensions conveniently as $H \times W \times C$, where $H$, $W$ and $C$ denote the height, width and number of feature channels, respectively. The MN first transforms the global feature from the GRN to an $HW$-dimensional output which can then be used to compute the feature mask:

$$\mathbf{m}' = f(\mathbf{W}\mathbf{g}), \qquad (1)$$

where $f(\cdot)$ denotes the ReLU activation function, $\mathbf{m}' \in \mathbb{R}^{HW}$ is the transitional mask and $\mathbf{W} \in \mathbb{R}^{HW \times d}$ is the weight matrix for the transformation, which is equivalent to a fully connected layer in our mixing network. As a result, we can learn this feature mapping directly from the data, which provides an image-specific mask for the local features. Since our goal is to attend to local features in the spatial domain, we identically re-weight all feature channels in $\mathbf{X}$ using the same predicted feature mask. The final feature mask $\mathbf{M} \in \mathbb{R}^{H \times W}$ is obtained from $\mathbf{m}'$ by reshaping and applying element-wise exponentiation as follows:

$$\mathbf{M} = \exp\!\big(\mathrm{reshape}(\mathbf{m}')\big). \qquad (2)$$

Once the feature mask is obtained, the MN uses a mixer to combine it with $\mathbf{X}$ using the channel-wise Hadamard product (denoted as '$\circ$' below):

$$\tilde{\mathbf{X}}_c = \mathbf{M} \circ \mathbf{X}_c, \quad c = 1, \dots, C, \qquad (3)$$

where $c$ denotes the feature channel number in $\mathbf{X}$. We outline the training process for our proposed network below.
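The mask computation and mixing steps above can be written as two small functions. The shapes are illustrative and a square H = W map is assumed for brevity:

```python
import numpy as np

def feature_mask(global_feat, W):
    """FC transform + ReLU gives the transitional mask; reshaping and
    element-wise exp give the final spatial mask (square map assumed)."""
    m_prime = np.maximum(0.0, W @ global_feat)
    side = int(np.sqrt(m_prime.size))
    return np.exp(m_prime.reshape(side, side))

def mix(local_feat, mask):
    """Channel-wise Hadamard product: every channel of the local
    feature map is re-weighted by the same spatial mask."""
    return local_feat * mask[:, :, None]
```

Note that because the ReLU is followed by an exponential, every mask weight is at least 1, so the mask amplifies attended locations rather than zeroing out the rest.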

3.3 Classification and Ranking

The network is trained in two stages, summarized in Algorithm 1. First, the GRN is trained to predict the pedestrian identities in an end-to-end manner. The parameter learning process for the GRN involves a weight initialization step using a ResNet model pre-trained on the ImageNet dataset, followed by our task-specific fine-tuning. Once this training process is complete, the feature representation from the GRN encodes globally discriminative information corresponding to a given image. Afterwards, the GRN weights are kept fixed, while the MN and LAN weights are learned jointly in the next stage. Similar to the first stage, the second-stage training is also performed using the pedestrian identities as the ground-truth labels. In contrast to the global representations learned via the GRN, the second stage focuses on locally discriminative information and re-shifts the attention appropriately using the MN to obtain a complementary feature representation.

Both the GRN and the LAN are trained for the identity classification task using supervised learning in two stages. We use a conventional cross-entropy loss function for both stages:

$$\mathcal{L}_{ce} = -\sum_{i=1}^{K} y_i \log \hat{y}_i, \qquad (4)$$

where $\hat{\mathbf{y}}$ denotes the predicted output, $\mathbf{y}$ denotes the desired output as a one-hot vector, and $K$ denotes the number of classes in the dataset, which is equal to the total number of units in the output layer.

However, the softmax loss does not directly consider ranking errors. Therefore, in the second stage of joint network training, we add a ranking loss on top of the LAN, defined as:

$$\mathcal{L}_{rank} = \max\big(0,\; m + p_g - p_l\big), \qquad (5)$$

where $m$ represents the margin, and $p_g$ and $p_l$ denote the prediction probabilities on the correct category label of the GRN and the LAN, respectively. Imposing the rank loss enables the LAN to take the prediction from the GRN as a reference, and enforces the LAN to make better predictions for the correct labels by a margin, thus leading to more confident and accurate predictions.
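A minimal numeric sketch of the two training objectives follows; the margin value used here is an illustrative assumption, since the excerpt does not state one:

```python
import numpy as np

def cross_entropy(probs, label):
    """Cross-entropy against a one-hot target: with a one-hot vector y,
    the sum -sum_i y_i log p_i reduces to -log p at the true label."""
    return -float(np.log(probs[label]))

def rank_loss(p_grn, p_lan, margin=0.1):
    """Margin violation penalty: zero once the LAN's probability for
    the correct label beats the (fixed) GRN's by at least `margin`."""
    return max(0.0, margin + p_grn - p_lan)
```

For example, with the margin above and p_grn = 0.6, the rank loss vanishes only once the LAN assigns at least 0.7 to the correct label.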

3.4 Image Descriptor

After learning the parameters of the network, the final image descriptor at test time is obtained by combining the feature representations $\mathbf{f}_g$ and $\mathbf{f}_l$ from the GRN and the LAN, respectively. These feature representations are derived from the last FC layer in each network, which contains task-specific discriminative information pertaining to both global and local pedestrian attributes. The following relation computes a re-weighted concatenation of the normalized individual descriptors:

$$\mathbf{f} = \left[ \frac{\mathbf{f}_g}{\|\mathbf{f}_g\|_2},\; \lambda \frac{\mathbf{f}_l}{\|\mathbf{f}_l\|_2} \right], \qquad (6)$$

where $\|\cdot\|_2$ denotes the $\ell_2$-norm and the parameter $\lambda$ decides the trade-off between the features from the two separate branches, the GRN and the LAN. We set $\lambda$ following [43]. The resulting descriptor $\mathbf{f}$ is used to find the closest matches from the gallery by performing a nearest neighbour (NN) search based on the Euclidean distance. This provides an initial ranking list, which is further improved by the re-ranking process described below.

3.5 Re-ranking

The person re-identification process can be viewed as a retrieval task, in which a given query image is used to retrieve the images containing the same identity. Based on the initial ranking obtained using the descriptor $\mathbf{f}$, we perform a re-ranking step to further improve the re-identification performance. The re-ranking step discovers relationships in the initial ranking to remove false matches and obtain an improved list. For example, a simple strategy is to remove the matches which do not conform with the top-$k$ ranked images [36]. However, this approach performs poorly when the top-$k$ matches are noisy. To overcome this problem, we use the $k$-reciprocal nearest neighbours in the re-ranking step as proposed in [45]. By definition, two images are $k$-reciprocal nearest neighbours if a search using either one of them ranks the other among the top-$k$ results. This reduces the false positives in the re-ranked list and results in significantly better performance.

The re-ranking process operates in an unsupervised manner. Specifically, a $k$-reciprocal feature is calculated for each image using the gallery by computing the reciprocal neighbouring relationships [45]. Given a query image, this feature is used alongside the descriptor $\mathbf{f}$ to find its similarity with the gallery. Note that instead of the Euclidean distance, the Jaccard similarity measure is used to match the $k$-reciprocal features. The final distance is calculated by aggregating both distance measures. Remarkably, the re-ranking process relies heavily on the features calculated using our proposed network architecture: a flawed feature representation can lead to degraded performance after re-ranking. In our case, the boost in performance obtained with the re-ranking approach shows the strength of our proposed feature description (see Sec. 4.3).
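The reciprocal relation that the re-ranking step builds on can be checked directly from a pairwise distance matrix. This is a toy sketch of the definition only, not the full feature aggregation of [45]:

```python
import numpy as np

def k_reciprocal(i, j, dist, k):
    """True if images i and j each appear in the other's k-nearest
    neighbours under the pairwise distance matrix `dist`."""
    nn_i = np.argsort(dist[i])[:k]  # top-k for i (includes i itself)
    nn_j = np.argsort(dist[j])[:k]  # top-k for j
    return (j in nn_i) and (i in nn_j)
```

A gallery outlier that happens to rank high for the query usually fails the reverse search, which is exactly how this criterion suppresses false positives.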

4 Experiments

We extensively evaluate the proposed approach on three person re-identification datasets: Market-1501 [40], CUHK03 [16], and DukeMTMC-reID [23]. We first briefly provide the dataset details and the evaluation protocol, followed by our experimental results and analysis.

Methods dim Market-1501 DukeMTMC-reID CUHK03 detected CUHK03 labeled
1 5 20 mAP 1 5 20 mAP 1 5 20 mAP 1 5 20 mAP
GRN 2048 79.33 91.48 96.62 58.50 67.91 81.33 89.59 48.40 38.00 48.14 59.93 33.68 34.36 46.64 58.14 30.14
LAN 2048 81.15 91.39 96.44 59.88 69.84 83.03 90.04 51.05 31.50 46.07 60.86 29.39 32.36 47.57 61.93 30.41
FMN 4096 85.99 93.74 97.51 67.12 74.51 85.05 92.41 56.88 42.57 56.21 67.36 39.21 40.71 54.57 65.50 38.05
Table 1: Comparison of different methods on Market-1501, DukeMTMC-reID, CUHK03 (detected), and CUHK03 (labeled). Rank-1, 5, 20 accuracy (%) and mAP (%) are reported. A consistent improvement of our proposed approach on all datasets in terms of rank@k accuracy and mAP can be observed.

4.1 Datasets

Figure 3:

Pedestrian image samples from the different datasets. In each grid, we show two identities with two captured images each. As we can see, there are illumination variations, viewpoint changes and even large occlusions.

Market-1501: Market-1501 is a person re-identification dataset consisting of 32,688 images with bounding boxes of 1,501 pedestrians. These images are captured on campus, and the pedestrians are detected using the Deformable Parts Model (DPM) [10]. Each identity is captured by at most 6 different cameras. Following the original evaluation protocol, 12,936 detected images of 751 identities are taken as the training set, whilst the remaining 19,732 images of 750 identities are used for testing.

DukeMTMC-reID: DukeMTMC-reID is one of the largest pedestrian image datasets. It is derived from DukeMTMC [23] and comprises 36,411 pedestrian images of 1,404 identities. It is split into train/test sets of 16,522/19,889 images, and we evaluate with 2,228 queries to retrieve from 17,661 gallery images. The dataset features occlusions of pedestrians by, e.g., cars or trees, which makes the task more challenging.

CUHK03: CUHK03 contains 14,097 images of 1,467 identities. On average, each identity has 9.6 images, captured by 5 different sets of cameras. The dataset provides both manually cropped bounding boxes and bounding boxes automatically cropped using a pedestrian detector, named 'CUHK03 labeled' and 'CUHK03 detected', respectively. For a fair comparison, we evaluate our method on the new training/testing protocol proposed in [45], which divides the original dataset roughly in half, yielding a training set of 767 identities and a testing set of 700 identities. [45] argues that this split conforms better with real-world person re-id tasks, which provide only limited training samples, while re-id is performed on a larger unseen sample pool.

4.2 Implementation Details

Network Training: Due to the high classification accuracy of ResNet-50 [12], and following the baseline in [41], we use it as our backbone architecture. The network is pre-trained on the ImageNet dataset consisting of 1000 object classes. To fine-tune it for the person re-identification task, we replace the 1000 units in the final FC layer with the number of training identities in the dataset. This forms the GRN branch of our network, which is trained with an initial learning rate of 0.1 that is reduced to 0.01 after 20 epochs. For the MN and LAN branches, we train with the same initial learning rate, which is reduced to 0.01 after 35 epochs. The training is performed until the network converges. We update the parameters with stochastic gradient descent with a momentum of 0.9. The training dataset is augmented by horizontally flipping and cropping the original images.
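The optimizer settings described above can be sketched as follows; this is a generic SGD-with-momentum step under the stated hyperparameters, not the exact update used by the authors' MatConvNet implementation:

```python
def learning_rate(epoch, drop_epoch=20):
    """Step schedule from the text: 0.1 initially, then 0.01 after the
    drop epoch (20 for the GRN stage, 35 for the MN/LAN stage)."""
    return 0.1 if epoch < drop_epoch else 0.01

def sgd_momentum_step(w, grad, velocity, lr, momentum=0.9):
    """One SGD update with momentum 0.9 for a single scalar weight."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```

In practice the same step is applied element-wise to every parameter tensor of the branch being trained, while the frozen branch's parameters are simply skipped.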

Evaluation Metrics:

We evaluate our methods with the mean average precision (mAP) and the rank-1, rank-5 and rank-20 accuracy measures. The rank-i accuracy denotes the rate at which one or more correctly matched images appear in the top-i results: for a single query, rank-i = 1 if a correct match appears in the top-i of the sorted list and 0 otherwise, and we report the mean over all query images. For mAP, we calculate the average precision (the area under the precision-recall curve) for each query and report the mean over queries, which reflects the overall precision and recall.
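For a single query, both measures can be computed from the boolean match flags of the ranked gallery list; the helper names below are illustrative:

```python
import numpy as np

def rank_i(good_flags, i):
    """1 if any correct match appears in the top-i of the ranked list,
    0 otherwise (averaged over queries to get rank-i accuracy)."""
    return int(np.any(good_flags[:i]))

def average_precision(good_flags):
    """Average precision for one query: the mean of precision@k taken
    at each position k where a correct match occurs."""
    hits = np.where(good_flags)[0]
    if hits.size == 0:
        return 0.0
    return float(np.mean([(n + 1) / (pos + 1)
                          for n, pos in enumerate(hits)]))
```

mAP is then the mean of `average_precision` over all queries, and rank-i accuracy is the mean of `rank_i` over all queries.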

The proposed framework is implemented using the MatConvNet [30] library.

4.3 Evaluation

Method rank-1 mAP
DADM (ECCV’16) [25] 39.4 19.6
BoW+kissme (ICCV’15) [40] 44.42 20.76
MR-CNN (AVSS’17) [27] 45.58 26.11
MST-CNN (ACMMM’16) [21] 45.10 -
FisherNet (PR’17) [32] 48.15 29.94
CAN (TIP’17) [20] 48.24 24.43
SL (CVPR’16) [3] 51.90 26.35
DNS (CVPR’16) [38] 55.43 29.87
Gate Reid (ECCV’17) [28] 65.88 39.55
SOMAnet (ArXiv’17) [2] 73.87 47.89
Verif.-Identif. (TOMM’17) [42] 79.51 59.87
MSCAN (CVPR’17)[15] 80.31 57.53
SVDNet (ICCV’17)[26] 82.3 62.1
SSM (CVPR’17) [1] 83.7 65.5
GAN (ICCV’17) [44] 83.97 66.07
APR (ArXiv’17) [19] 84.29 64.67
PAN (ArXiv’17) [43] 82.81 63.35
PAN+re-rank (ArXiv’17) [43] 85.78 76.56
JLML (IJCAI’17) [17] 85.1 65.5
DPFL (ICCV’17) [5] 88.9 73.1
Basel. 79.33 58.50
FMN 85.99 67.12
FMN+re-rank 87.92 80.62
Table 2: Rank-1 accuracy (%) and mAP (%) on Market-1501 dataset. The best and second best performance are marked in red and blue, respectively.

Effectiveness of FMN. We comprehensively evaluate our proposed FMN on all three large-scale re-identification benchmarks to show the effectiveness of our proposed network architecture. The individual performance of the GRN and the LAN and the overall results of the complete model are shown in Table 1; this serves as an ablation study. Since the GRN is a fine-tuned version of the pre-trained model, it is used as a strong baseline in this work. As one can note, the independent LAN feature achieves scores close to the baseline branch, and when combined with the baseline feature, a significant boost in overall performance is observed. Rank-1 accuracy on the Market-1501, DukeMTMC-reID, CUHK03 detected and CUHK03 labeled datasets improves by margins of 6.66%, 6.60%, 4.57% and 6.35%, respectively. The mAP values also improve remarkably, by 8.62%, 8.48%, 5.53% and 7.91%, respectively. This consistent performance gain shows that the locally attentive feature representations learned by the MN and LAN are complementary to the global features from the GRN. Notably, the rank-1 LAN accuracy on the CUHK03 detected dataset experiences a considerable decline (around 6.5%) compared with the baseline; we speculate that this behaviour is due to the severe occlusions and detection errors in the dataset.

Comparison with the State-of-the-Art Methods: We also compare our proposed approach with the state-of-the-art methods on the Market-1501, DukeMTMC-reID and CUHK03 datasets in Tables 2, 3 and 4, respectively. On all three datasets, we achieve the best or the second-best performance in comparison to very recent methods with more sophisticated pipelines. As shown in Table 2, we achieve rank-1 = 87.92% and mAP = 80.62% using the re-ranking approach explained in Sec. 3.5, which is a highly consistent performance across both metrics. On DukeMTMC-reID, our proposed method also achieves the best performance, with rank-1 = 79.52% and mAP = 72.79%. On the CUHK03 dataset, our rank-1 is 5.6% higher than the best competing method on the detected set and 2.1% higher on the labeled set.

GRN vs LAN: Alongside the quantitative comparisons on the three aforementioned datasets, we also qualitatively analyze the effect of the FMN on highly confusing pedestrian examples. To this end, we illustrate query images from each of the three datasets in Fig. 4, along with the rank-1 mismatch (predicted by the baseline) and the true match (predicted by our proposed model) for the respective pedestrian. We also visualize the heat maps obtained from both the GRN and the LAN to study the salient image regions which are given more attention by the network during the prediction process. One can notice from both Fig. 1 and Fig. 4 that the global representations from the GRN focus mainly on the main torso of the body. This can lead to incorrect predictions, because the upper trunk of the body can look identical for two altogether different identities. In contrast, more subtle local details such as the clothing and attire specifics (left example), footwear (middle example) and differences in the backpacks (right example) can provide more useful cues for correctly identifying persons. The FMN attends to both global and local details and leverages their complementary characteristics to correctly identify the corresponding match for a query image.

Method rank-1 mAP
BoW+kissme (ICCV’15) [40] 25.13 12.17
LOMO+XQDA (CVPR’15) [18] 30.75 17.04
GAN (ICCV’17) [44] 67.68 47.13
OIM (CVPR’17) [34] 68.1 -
APR (ArXiv’17) [19] 70.7 51.9
SVDNet (ICCV’17) [26] 76.7 56.8
PAN (ArXiv’17) [43] 71.6 51.5
PAN+re-rank (ArXiv’17) [43] 75.9 66.7
DPFL (ICCV’17) [5] 79.2 60.6
Basel. 67.91 48.40
FMN 74.51 56.88
FMN+re-rank 79.52 72.79
Table 3: Rank-1 accuracy (%) and mAP (%) on DukeMTMC-reID.
Method Detected Labeled
rank-1 mAP rank-1 mAP
BoW+XQDA (ICCV’15) [40] 6.4 6.4 8.0 7.3
LOMO+XQDA (CVPR’15) [18] 12.8 11.5 14.8 13.6
ResNet+XQDA (CVPR’17) [46] 31.1 28.2 32.0 29.6
[46]+re-rank (CVPR’17) 34.7 37.4 38.1 40.3
PAN (ArXiv’17) [43] 36.3 34.0 36.9 35.0
PAN+re-rank (ArXiv’17) [43] 41.9 43.8 43.9 45.8
DPFL (ICCV’17) [5] 40.7 37.0 43.5 40.5
Basel. 38.2 34.0 34.4 30.1
FMN 42.6 39.2 41.0 38.1
FMN+re-rank 47.5 48.5 46.0 47.6
Table 4: Results on the CUHK03 dataset with the new evaluation protocol, which divides each dataset roughly in half for training and testing. Under this setting, we use a larger testing gallery and a smaller training set.
Figure 4: Qualitative examples of improved retrieval results on the Market-1501, DukeMTMC-reID and CUHK03 datasets (left to right). For each query image, we present its false rank-1 match based solely on the GRN (denoted by mismatch), followed by its accurate match using our proposed architecture. The second and third rows visualize the heat maps obtained from the GRN and LAN, respectively. One can observe that by attending to locally distinctive parts (e.g., subtle variations in apparel design, shoes and carried bags) using the LAN, the overall performance is boosted, making the proposed network better suited for the re-identification task.

Local Feature Selection: The GRN extracts a hierarchy of features corresponding to different levels of detail in the input image: from the initial to the final layers, the output features transition from local to global information. We evaluate the effect of the GRN layer chosen for local feature selection on person re-identification accuracy. In Table 5, we report rank-1 accuracy and mAP when local features from different GRN layers are used in the mixing network. For the case of Res1, we apply the masking operation on the output of the pooling layer (Pool1), which reduces the MN parameters to only a quarter of those required on Res1. We observe a consistent performance boost as the selected layer is moved from the final layers towards the initial GRN layers: the rank-1 accuracy of 82.39 and mAP of 61.95 at Res4 improve to 85.99 and 67.12, respectively, at Pool1. This observation is intuitive, since the network performs best when complementary information from the global and local levels is combined for the re-identification task: the higher layers encode holistic information about appearance and shape, while the local information disambiguates cases with high appearance similarity at the global level. When local features from the higher layers are used instead, performance drops, since features from nearby layers encode redundant information.

Layer   Mask size   rank-1   mAP
Pool1   56×56       85.99    67.12
Res2    56×56       85.15    65.84
Res3    28×28       84.23    65.16
Res4    14×14       82.39    61.95
Table 5: Rank-1 accuracy (%) and mAP (%) on the Market-1501 dataset with the mask applied at different GRN layers.

5 Conclusion

A person occupies only a portion of the input image, so a global scene description does not suffice for accurate identity matching. In this work, we proposed a hybrid CNN architecture that simultaneously learns to focus on the more discriminative parts of the input scene. Given a global feature, we directly predict an attention mask that re-weights the local scene details in feature space. This strategy provides the flexibility to re-focus attention on local details, which can be highly valuable for predicting a person's unique identity. The locally aware feature description is highly compact and complementary, and together with the global representation it achieves highly accurate results on three large-scale datasets. Significant further boosts are observed when the proposed features are used along with a re-ranking strategy, demonstrating their strength in correctly encoding reciprocal relationships between person identities.
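The reciprocal relationships mentioned above refer to the k-reciprocal re-ranking of [45, 46]: a gallery image counts as a strong neighbour of a query only if each appears among the other's k nearest neighbours. A minimal sketch of that core test (not the full Jaccard-distance pipeline of the original method) is:

```python
import numpy as np

def k_reciprocal_neighbors(dist, i, k):
    """Indices j such that j is among i's k nearest neighbours AND
    i is among j's k nearest neighbours (the k-reciprocal set).
    dist is a symmetric (n, n) distance matrix; the slice [:k + 1]
    keeps k neighbours plus the point itself (distance 0)."""
    forward = np.argsort(dist[i])[:k + 1]
    return np.array([j for j in forward
                     if i in np.argsort(dist[j])[:k + 1]])

# Four points on a line at 0, 1, 3 and 10: pairwise distances
pts = np.array([0.0, 1.0, 3.0, 10.0])
dist = np.abs(np.subtract.outer(pts, pts))

near = k_reciprocal_neighbors(dist, 0, 1)  # points 0 and 1 are mutual
far = k_reciprocal_neighbors(dist, 3, 1)   # point 3's nearest (2) prefers 1
```

The asymmetry is the point: the outlier at 10 lists point 3 (value 3) as its nearest neighbour, but that point's own nearest neighbour is elsewhere, so the pair is not reciprocal and the match is treated as unreliable.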


  • [1] S. Bai, X. Bai, and Q. Tian. Scalable person re-identification on supervised smoothed manifold. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • [2] I. B. Barbosa, M. Cristani, B. Caputo, A. Rognhaugen, and T. Theoharis. Looking beyond appearances: Synthetic training data for deep cnns in re-identification. arXiv preprint arXiv:1701.03153, 2017.
  • [3] D. Chen, Z. Yuan, B. Chen, and N. Zheng. Similarity learning with spatial constraints for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1268–1277, 2016.
  • [4] W. Chen, X. Chen, J. Zhang, and K. Huang. Beyond triplet loss: a deep quadruplet network for person re-identification. In The Conference on Computer Vision and Pattern Recognition, 2017.
  • [5] Y. Chen, X. Zhu, and S. Gong. Person re-identification by deep learning multi-scale representations. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
  • [6] Y.-C. Chen, X. Zhu, W.-S. Zheng, and J.-H. Lai. Person re-identification by camera correlation aware feature augmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (DOI: 10.1109/TPAMI.2017.2666805).
  • [7] D. Cheng, Y. Gong, S. Zhou, J. Wang, and N. Zheng. Person re-identification by multi-channel parts-based cnn with improved triplet loss function. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1335–1344, 2016.
  • [8] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 539–546. IEEE, 2005.
  • [9] M. Farenzena, L. Bazzani, A. Perina, V. Murino, and M. Cristani. Person re-identification by symmetry-driven accumulation of local features. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2360–2367. IEEE, 2010.
  • [10] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
  • [11] M. Guillaumin, J. Verbeek, and C. Schmid. Is that you? metric learning approaches for face identification. In Computer Vision, 2009 IEEE 12th international conference on, pages 498–505. IEEE, 2009.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [13] A. Hermans, L. Beyer, and B. Leibe. In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737, 2017.
  • [14] M. Koestinger, M. Hirzer, P. Wohlhart, P. M. Roth, and H. Bischof. Large scale metric learning from equivalence constraints. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2288–2295. IEEE, 2012.
  • [15] D. Li, X. Chen, Z. Zhang, and K. Huang. Learning deep context-aware features over body and latent parts for person re-identification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • [16] W. Li, R. Zhao, T. Xiao, and X. Wang. Deepreid: Deep filter pairing neural network for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 152–159, 2014.
  • [17] W. Li, X. Zhu, and S. Gong. Person re-identification by deep joint learning of multi-loss classification. In IJCAI, pages 2194–2200. ijcai.org, 2017.
  • [18] S. Liao, Y. Hu, X. Zhu, and S. Z. Li. Person re-identification by local maximal occurrence representation and metric learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2197–2206, 2015.
  • [19] Y. Lin, L. Zheng, Z. Zheng, Y. Wu, and Y. Yang. Improving person re-identification by attribute and identity learning. arXiv preprint arXiv:1703.07220, 2017.
  • [20] H. Liu, J. Feng, M. Qi, J. Jiang, and S. Yan. End-to-end comparative attention networks for person re-identification. IEEE Transactions on Image Processing, 2017.
  • [21] J. Liu, Z.-J. Zha, Q. Tian, D. Liu, T. Yao, Q. Ling, and T. Mei. Multi-scale triplet cnn for person re-identification. In Proceedings of the 2016 ACM on Multimedia Conference, pages 192–196. ACM, 2016.
  • [22] T. Matsukawa, T. Okabe, E. Suzuki, and Y. Sato. Hierarchical gaussian descriptor for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1363–1372, 2016.
  • [23] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi. Performance measures and a data set for multi-target, multi-camera tracking. In European Conference on Computer Vision, pages 17–35. Springer, 2016.
  • [24] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815–823, 2015.
  • [25] C. Su, S. Zhang, J. Xing, W. Gao, and Q. Tian. Deep attributes driven multi-camera person re-identification. In European Conference on Computer Vision, pages 475–491. Springer, 2016.
  • [26] Y. Sun, L. Zheng, W. Deng, and S. Wang. Svdnet for pedestrian retrieval. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
  • [27] E. Ustinova, Y. Ganin, and V. S. Lempitsky. Multiregion bilinear convolutional neural networks for person re-identification. In AVSS, 2017.
  • [28] R. R. Varior, M. Haloi, and G. Wang. Gated siamese convolutional neural network architecture for human re-identification. In European Conference on Computer Vision, pages 791–808. Springer, 2016.
  • [29] R. R. Varior, B. Shuai, J. Lu, D. Xu, and G. Wang. A siamese long short-term memory architecture for human re-identification. In European Conference on Computer Vision, pages 135–153. Springer, 2016.
  • [30] A. Vedaldi and K. Lenc. Matconvnet – convolutional neural networks for matlab. In Proceeding of the ACM Int. Conf. on Multimedia, 2015.
  • [31] F. Wang, W. Zuo, L. Lin, D. Zhang, and L. Zhang. Joint learning of single-image and cross-image representations for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1288–1296, 2016.
  • [32] L. Wu, C. Shen, and A. van den Hengel. Deep linear discriminant analysis on fisher networks: A hybrid architecture for person re-identification. Pattern Recognition, 65:238–250, 2017.
  • [33] T. Xiao, H. Li, W. Ouyang, and X. Wang. Learning deep feature representations with domain guided dropout for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1249–1258, 2016.
  • [34] T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang. Joint detection and identification feature learning for person search. In Proc. CVPR, 2017.
  • [35] F. Xiong, M. Gou, O. Camps, and M. Sznaier. Person re-identification using kernel-based metric learning methods. In European conference on computer vision, pages 1–16. Springer, 2014.
  • [36] M. Ye, C. Liang, Y. Yu, Z. Wang, Q. Leng, C. Xiao, J. Chen, and R. Hu. Person reidentification via ranking aggregation of similarity pulling and dissimilarity pushing. IEEE Transactions on Multimedia, 18(12):2553–2566, 2016.
  • [37] D. Yi, Z. Lei, S. Liao, and S. Z. Li. Deep metric learning for person re-identification. In Pattern Recognition (ICPR), 2014 22nd International Conference on, pages 34–39. IEEE, 2014.
  • [38] L. Zhang, T. Xiang, and S. Gong. Learning a discriminative null space for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1239–1248, 2016.
  • [39] R. Zhao, W. Ouyang, and X. Wang. Unsupervised salience learning for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3586–3593, 2013.
  • [40] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In Proceedings of the IEEE International Conference on Computer Vision, pages 1116–1124, 2015.
  • [41] L. Zheng, Y. Yang, and A. G. Hauptmann. Person re-identification: Past, present and future. arXiv preprint arXiv:1610.02984, 2016.
  • [42] Z. Zheng, L. Zheng, and Y. Yang. A discriminatively learned cnn embedding for person re-identification. TOMM, 2017.
  • [43] Z. Zheng, L. Zheng, and Y. Yang. Pedestrian alignment network for large-scale person re-identification. arXiv preprint arXiv:1707.00408, 2017.
  • [44] Z. Zheng, L. Zheng, and Y. Yang. Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
  • [45] Z. Zhong, L. Zheng, D. Cao, and S. Li. Re-ranking person re-identification with k-reciprocal encoding. 2017.
  • [46] Z. Zhong, L. Zheng, D. Cao, and S. Li. Re-ranking person re-identification with k-reciprocal encoding. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.