IAN: The Individual Aggregation Network for Person Search

05/16/2017 ∙ by Jimin Xiao, et al. ∙ National University of Singapore ∙ Xi'an Jiaotong-Liverpool University

Person search in real-world scenarios is a new and challenging computer vision task with many meaningful applications. The challenge of this task mainly comes from: (1) bounding boxes for pedestrians are unavailable, so the model needs to search for the person over whole gallery images; (2) the visual appearance of a particular person varies hugely owing to varying poses, lighting conditions, and occlusions. To address these two critical issues in modern person search applications, we propose a novel Individual Aggregation Network (IAN) that can accurately localize persons by learning to minimize intra-person feature variations. IAN is built upon the state-of-the-art object detection framework, i.e., faster R-CNN, so that high-quality region proposals for pedestrians can be produced in an online manner. In addition, to relieve the negative effect caused by varying visual appearances of the same individual, IAN introduces a novel center loss that can increase the intra-class compactness of feature representations. The engaged center loss encourages persons with the same identity to have similar feature characteristics. Extensive experimental results on two benchmarks, i.e., CUHK-SYSU and PRW, well demonstrate the superiority of the proposed model. In particular, IAN achieves 77.23% mAP on CUHK-SYSU, surpassing the previous state-of-the-art by 1.7%.


I Introduction

Person re-identification is to re-identify the same person across different cameras, and it has attracted increasing interest in recent years [2, 3]. The emergence of this task is mainly stimulated by (1) the increasing demand for public security and (2) widespread surveillance camera networks in public places, such as airports, universities, shopping malls, etc. The images obtained from surveillance cameras are usually low-quality, highly variable, and often blurred by motion. Traditional biometrics, such as face [4, 5], iris [6], and fingerprint [7], are generally not available. Thus, many person re-identification applications rely on body appearance instead.

Technically, a person re-identification system for video surveillance consists of three components: person detection, person tracking, and person retrieval. While the first two components are independent computer vision tasks, most person re-identification studies focus on the third component. Numerous re-identification algorithms as well as datasets [8, 9, 10, 11, 12, 13] have been proposed during the past decades, and the performance on these benchmarks has improved substantially. All these algorithms focus on the third component of the pipeline, assuming that person/pedestrian detections are already available. In other words, a query person is matched with cropped pedestrians in the gallery instead of searching for the target person in whole images. In reality, perfect pedestrian bounding boxes are unavailable in surveillance scenarios. In addition, existing pedestrian detectors unavoidably produce false alarms, misdetections, and misalignments. All these factors compromise the re-identification performance. Therefore, current re-identification algorithms cannot be directly applied to real surveillance systems, where we need to search for a person in whole images, as shown in Fig. 1.

While the majority of person re-identification works rely on boxes manually annotated or produced by a fixed detector, it is necessary to study the impact of pedestrian detectors on re-identification accuracy. Specifically, [14, 15] showed that considering detection and re-identification jointly leads to higher person search accuracy than optimizing them separately. To the best of our knowledge, end-to-end deep learning for person search (E2E-PS) [16] is the first work to introduce an end-to-end deep learning framework that jointly handles the challenges of both detection and re-identification. Thereby, the detection and re-identification parts can interact with each other to reduce the influence of detection misalignments.

Fig. 1: Person search from whole images without cropping out persons. The left column is the probe/query image; the other columns are gallery images without cropped pedestrians. The green bounding boxes are search results. To find the right person in the gallery images, we need to detect all the persons within each image and compare the detected persons with the probe image.
Fig. 2: The objective of center loss is to reduce the intra-class distance by pulling the sample features towards each class center. Left side: feature distance without center loss; right side: feature distance using center loss.

In E2E-PS [16], the re-identification feature learning exploits a modified softmax loss. Early works show that such an identification task greatly benefits feature learning [17]. Meanwhile, it has been found that the identification task increases inter-personal variations by drawing features extracted from different identities apart, while the verification task reduces intra-personal variations by pulling features extracted from the same identity together [18]. Inspired by this, softmax loss and contrastive loss have been used jointly for feature learning, leading to better performance than the softmax loss alone [18]. However, we cannot directly introduce such verification tasks into the person search faster R-CNN framework [1] used in E2E-PS [16], since the pedestrians appearing in each image are random, sparse, and unbalanced. It is difficult to organize equivalent amounts of positive and negative pedestrian pairs within the faster R-CNN framework.

To address this critical issue, we propose a novel Individual Aggregation Network (IAN) that can not only accurately localize pedestrians but also minimize intra-person variations in feature representations. In particular, IAN is built upon the state-of-the-art object detection framework, i.e., faster R-CNN [1], so that high-quality region proposals for pedestrians can be produced in an online manner for person search. In addition, to relieve the negative effect caused by varying visual appearances of the same individual, a novel center loss [19] that can increase the intra-class compactness of feature representations is introduced. The center loss encourages learned pedestrian representations from the same class to share similar feature characteristics. The IAN model can be embedded in any CNN-based person search framework to improve performance.

In particular, the center loss is able to increase intra-class feature compactness without requiring the aggregation of positive and negative verification samples. The center loss tracks the feature centers of all classes, and these feature centers are constantly updated based on the recently observed class samples. Meanwhile, it pulls the sample features towards the center of the class each sample belongs to. This process is demonstrated in Fig. 2. During model development, we found that neural networks with dropout [20] are not compatible with the center loss [19]. We study this phenomenon both analytically and experimentally. We believe this finding can serve as useful guidance for neural network design in the community, and it is one of our contributions.

Finally, it is encouraging to see that our proposed IAN achieves 77.23% mAP and 80.45% top-1 accuracy on the CUHK-SYSU person search dataset [16], which is the new state of the art for this dataset. Meanwhile, we also obtain state-of-the-art performance on the PRW dataset [15].

The remainder of this paper is organized as follows. Related work is presented in Section II. The proposed person search method is described in Section III, with implementation details given in Section IV. We present and discuss the experimental results in Section V. Finally, conclusions are drawn in Section VI.

II Related Work

Person re-identification CNN-based deep learning models have attracted a lot of attention and have been successfully applied to person re-identification since two pioneering works [8, 9]. Generally, two categories of CNN models are commonly employed in this community. One category is the Siamese model, using image pairs [8, 21] or triplets [22] as input. The other category is the classification model, as used in image classification and object detection. Most re-identification datasets provide only two images for each pedestrian, such as VIPeR [23], CUHK01 [24], and CUHK03 [9]; therefore, most current CNN-based re-identification schemes use the Siamese model. In [8], an input image is partitioned into three overlapping horizontal parts; the parts go through two convolutional layers plus one fully connected layer, which fuses them and outputs a vector representing the image; lastly, two such vectors are connected by a cosine layer. Ahmed et al. [21] improved the Siamese model by computing cross-input neighborhood difference features, which compare the features from one input image to features in neighboring locations of the other image. In [25], Varior et al. incorporate long short-term memory (LSTM) modules into a Siamese network so that spatial connections can be memorized to enhance the discriminative ability of the deep features. Similarly, Liu et al. [26] propose to integrate a soft-attention-based model into a Siamese network to adaptively focus on important local parts of the input image pair.

One disadvantage of the Siamese model is that it cannot take full advantage of the re-identification annotations. The Siamese model only considers pairwise labels (similar or not similar), which are weak labels. Another potentially effective strategy is to use a classification/identification model, which makes full use of the re-identification labels. On large datasets, such as PRW and MARS, the classification model achieves good performance without careful pairwise or triplet selection [27, 15]. In this paper, our identification method is also built on a classification/identification model.

Pedestrian Detection In the past years, a lot of effort has been made to improve the performance of pedestrian detection [28, 29]. The Integral Channel Features (ICF) detector [28] is among the most popular pedestrian detectors that do not use deep learning features. Following its success, many variants [30, 31] were proposed with significant improvements. In recent years, CNN-based pedestrian detectors have also been developed. Various factors, including CNN model structures, training data, and different training strategies, are studied empirically in [32]. In [33], faster R-CNN is studied for pedestrian detection.

III Individual Aggregation Network

In practical person search applications, pedestrian bounding boxes are unavailable and the target person needs to be found from the whole images. Targeting this problem, IAN is built upon the state-of-the-art object detection framework, i.e., faster R-CNN [1], so that reasonable region proposals for pedestrians can be produced in an online manner for person search. The proposed IAN framework is shown in Fig. 3, and it is elaborated as follows.

Fig. 3: Overview of our IAN network training framework. Images containing pedestrians are input into the network. First part of residual network, i.e., layers Conv1-Res4b of ResNet-101 [34], is used to get image features. Pedestrian bounding boxes generated by region proposal network (RPN boxes), together with the ground truth pedestrian bounding boxes (GT boxes), are used for ROI pooling to generate feature vector for each box. Second part of residual network, i.e., layers Res5a-Res5c of ResNet-101 [34], uses the ROI pooling features as input. Two fully connected layers are utilized separately, one to produce the final feature vector feat to compute feature distance, and another to produce bounding box locations. Feature vectors of all candidates boxes (RPN + GT boxes) go into the random sampling softmax loss layer; while only feature vectors of ground truth pedestrian boxes (GT boxes) go into the center loss layer.
  1. In the training phase, arbitrary-size images with ground truth pedestrian bounding boxes and identifications are input into the first part of the residual network [34]. The residual network is divided into two parts, i.e., for the ResNet-101 network, layers Conv1-Res4b form the first part, while layers Res5a-Res5c form the second part.

  2. The region proposal network (RPN) [1] is built on top of the feature maps generated by the first part of the network to predict pedestrian bounding boxes. The RPN is trained with ground truth pedestrian bounding boxes, using two loss layers, i.e., anchor classification and anchor regression. Besides the candidate boxes generated by the region proposal network (RPN boxes), the ground truth (GT) pedestrian bounding boxes are also used at the network training stage. At the test stage, only RPN boxes are available.

  3. All the candidate boxes (RPN+GT boxes at the training stage, RPN boxes at the test stage) are used for ROI pooling to generate a feature vector for each candidate box. These features are further convolved with the second part of the residual network, i.e., layers Res5a-Res5c of ResNet-101.

  4. Two sibling fully connected layers are utilized separately: one produces the final feature vector feat used to compute feature distances, and the other produces bounding box locations. At the training stage, feature vectors of all candidate boxes (RPN+GT boxes) are fed into the softmax loss layer, while only feature vectors of ground truth pedestrian boxes (GT boxes) are fed into the center loss layer. The softmax variant random sampling softmax (RSS) [16] is used for training.

Overall, compared with the previous person search method E2E-PS [16], the proposed IAN generates more discriminative feature representations. In IAN, using the softmax loss together with the center loss [19] within the faster R-CNN framework leads to better feature representations than solely using the softmax loss as in [16]. Meanwhile, the VGGNet [35] used in E2E-PS [16] contains dropout layers, which are intrinsically not compatible with the center loss. In our IAN, we use the state-of-the-art residual network [34]. In addition to solving the compatibility issue with the center loss, replacing VGGNet with the residual network also offers better discrimination power with lower computational cost.

III-A Softmax + Center Loss

Both compact intra-class variations and separable inter-class differences are essential for discriminative features. However, the softmax loss only encourages the separability of features. Contrastive loss [36, 18] and triplet loss [37], which respectively construct loss functions over image pairs and triplets, are possible solutions for encouraging intra-class compactness. For the contrastive loss, equivalent amounts of positive and negative image pairs are required, whereas for the triplet loss, two images in each triplet should belong to the same class/identification and one to a different class/identification. However, for the faster R-CNN based person search framework, it is non-trivial to form such image pairs and triplets within an input mini-batch. The pedestrians within each image belong to different identifications. Meanwhile, the pedestrians appearing in each image are random, sparse, and unbalanced. Within a faster R-CNN mini-batch, it is difficult to form a balanced number of positive and negative pedestrian pairs.
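This pair-formation difficulty can be illustrated with a toy count (a hypothetical mini-batch, not the paper's sampler): when identities are drawn sparsely, candidate negative pairs vastly outnumber positive ones.

```python
import numpy as np

# Toy illustration: 16 pedestrian boxes in one mini-batch, drawn from a pool
# of 50 identities; count same-identity (positive) vs. different-identity
# (negative) pairs among all box pairs.
rng = np.random.default_rng(0)
ids = rng.integers(0, 50, size=16)

pos = sum(1 for i in range(len(ids))
            for j in range(i + 1, len(ids)) if ids[i] == ids[j])
neg = len(ids) * (len(ids) - 1) // 2 - pos  # remaining pairs are negatives
print(pos, neg)  # negatives far outnumber positives
```

Balancing such a skewed pool into equal positive/negative pairs per batch, as the contrastive loss requires, is exactly what the center loss sidesteps.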

On the other hand, employing the center loss [19] avoids the need to aggregate positive and negative pairs. In the proposed IAN network, the center loss is applied together with the softmax loss to generate feature representations. The center loss function is defined as follows:

L_C = \frac{1}{2} \sum_{i=1}^{m} \| x_i - c_{y_i} \|_2^2    (1)

where x_i is the feature vector of pedestrian box i, which belongs to class y_i, and c_{y_i} denotes the y_i-th class center of the features. The softmax loss forces the features of different classes to stay apart. The center loss pulls the features of the same class closer to their centers. Hence the discriminative power of the features is highly enhanced. With the center loss, the overall network loss function is defined as:

L = L_F + \lambda L_C    (2)

where L_F is the summation of the loss functions in faster R-CNN, which includes the softmax loss for person identification classification, and \lambda is the weight of the center loss.

Ideally, c_{y_i} should be constantly updated as the network parameters are updated. In other words, we would need to take the entire training set into account and average the features of every class in each iteration, which is inefficient and impractical. Instead, we learn the feature center of each class: in the training process, we simultaneously update the centers and minimize the distances between the features and their corresponding class centers.

The centers are updated based on each mini-batch. In each iteration, the centers are computed by averaging the features of the corresponding classes. Meanwhile, to avoid large perturbations caused by a few mislabelled samples, we use a scalar \alpha to control the learning rate of the centers. The gradient of L_C with respect to x_i and the update equation of c_j are computed as:

\frac{\partial L_C}{\partial x_i} = x_i - c_{y_i}    (3)
\Delta c_j = \frac{\sum_{i=1}^{m} \delta(y_i = j)\,(c_j - x_i)}{1 + \sum_{i=1}^{m} \delta(y_i = j)}, \qquad c_j^{t+1} = c_j^{t} - \alpha \cdot \Delta c_j^{t}    (4)

where \delta(condition) = 1 if the condition is satisfied, and \delta(condition) = 0 otherwise.
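A minimal NumPy sketch of the center loss and its damped center update, assuming toy shapes and simple gradient bookkeeping (an illustration under stated assumptions, not the authors' implementation):

```python
import numpy as np

def center_loss(feats, labels, centers):
    """Center loss over a batch: feats (m, d), labels (m,), centers (K, d)."""
    diffs = feats - centers[labels]            # x_i - c_{y_i}
    loss = 0.5 * np.sum(diffs ** 2)            # L_C, as in Eq. (1)
    grad = diffs                               # dL_C/dx_i, as in Eq. (3)
    return loss, grad

def update_centers(feats, labels, centers, alpha=0.5):
    """Move each observed center towards its class samples, damped by alpha."""
    new = centers.copy()
    for j in np.unique(labels):
        mask = labels == j
        # delta_c_j averaged with a +1 in the denominator, as in Eq. (4)
        delta = np.sum(new[j] - feats[mask], axis=0) / (1 + mask.sum())
        new[j] = new[j] - alpha * delta
    return new

rng = np.random.default_rng(1)
centers = np.zeros((3, 4))                     # 3 identities, 4-d features
feats = rng.normal(size=(6, 4))
labels = np.array([0, 0, 1, 1, 2, 2])

l0, _ = center_loss(feats, labels, centers)
centers = update_centers(feats, labels, centers)
l1, _ = center_loss(feats, labels, centers)
# after one update the centers sit closer to their class samples, so l1 < l0
```

The damping scalar `alpha` plays the role of the center learning rate in the text, limiting the perturbation a few mislabelled samples can cause.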

III-B Why Avoid Dropout?

In our study, we notice that neural networks with dropout are not compatible with the center loss. For example, when the proposed IAN is deployed on VGGNet with dropout layers, its person search mAP on the CUHK-SYSU person search dataset [16] is clearly lower than the result obtained after removing all the dropout layers.

Dropout is a technique for addressing overfitting [20]. The key idea of dropout is to randomly drop units, along with their connections, from the neural network during training. Since dropout randomly drops units, it creates uncertainty in the features. In other words, when image features are extracted using the same network with dropout, the obtained features for the same image can be quite different across network forward passes. This contradicts the center loss, which penalizes intra-class variations.

Dropout is usually deployed after a fully connected layer, as in VGGNet. Let z^{(l)} denote the vector of inputs into layer l, and y^{(l)} denote the vector of outputs from layer l. W^{(l)} and b^{(l)} are the weights and biases at layer l, respectively. The feed-forward operation of a standard neural network can be described as

z_i^{(l+1)} = w_i^{(l+1)} y^{(l)} + b_i^{(l+1)}    (5)
y_i^{(l+1)} = f(z_i^{(l+1)})    (6)

where f(·) is any activation function, for example the sigmoid or ReLU function. With dropout, the feed-forward operation becomes

r_j^{(l)} \sim \mathrm{Bernoulli}(p)    (7)
\tilde{y}^{(l)} = r^{(l)} * y^{(l)}    (8)
z_i^{(l+1)} = w_i^{(l+1)} \tilde{y}^{(l)} + b_i^{(l+1)}    (9)
y_i^{(l+1)} = f(z_i^{(l+1)})    (10)

Here * denotes an element-wise product. For any layer l, r^{(l)} is a vector of independent Bernoulli random variables, each of which has probability p of being 1.

To illustrate that dropout is not compatible with the center loss, consider one example. Assume input image samples a and b are identical and belong to the same pedestrian/class, and assume layer l is a fully connected layer with dropout whose output is input into the center loss layer. Since samples a and b are identical, we have y_a^{(l)} = y_b^{(l)}. The target of the center loss is to obtain similar features for the same class, i.e., \tilde{y}_a^{(l)} = \tilde{y}_b^{(l)}. Considering (8) and (10), this is equivalent to r_a^{(l)} * y_a^{(l)} = r_b^{(l)} * y_b^{(l)}. Here r_a^{(l)} and r_b^{(l)} are independent Bernoulli random vectors, so they differ with high probability. Therefore, to guarantee \tilde{y}_a^{(l)} = \tilde{y}_b^{(l)}, the only solution is y_a^{(l)} = y_b^{(l)} = 0. However, a zero feature cannot properly represent the image samples. From this simple example, we conclude that dropout is not compatible with the center loss, which is consistent with our experimental verification.
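The argument above can be checked numerically. The sketch below (an assumed toy setup, not the paper's network) applies two independent dropout masks to the same feature vector: the masked copies disagree on roughly half the units, and only the all-zero feature makes them agree.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(y, p=0.5):
    # Eq. (7)-(8): element-wise product with a fresh Bernoulli mask
    r = rng.binomial(1, p, size=y.shape)
    return r * y

y = np.ones(1000)                      # identical pre-dropout features
d = np.abs(dropout(y) - dropout(y))    # two independent forward passes
print(d.mean())                        # disagreement on roughly half the units

z = np.zeros(1000)                     # the degenerate zero feature...
print(np.abs(dropout(z) - dropout(z)).max())  # ...is the only one that agrees
```

The disagreement is exactly the intra-class variation that the center loss penalizes, so the two mechanisms pull in opposite directions.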

IV Implementation Details

IV-A Training Phase

During the network training phase, the network is trained to detect pedestrians and produce discriminative features for re-identification. In our network, five loss functions are used. The smoothed-L1 loss [38] is used for the two bounding box regression layers. A softmax loss is used for the pedestrian proposal module, which classifies pedestrian vs. non-pedestrian. For the re-identification feature extraction part, we deploy both the random sampling softmax [16] and the center loss [19]. It is important to note that only features of ground truth pedestrian boxes are input into the center loss layer; this helps to avoid sample noise. The overall loss is the sum of all five loss functions, and its gradient w.r.t. the network parameters is computed through back propagation.

To speed up the network convergence, the training process includes three steps:

  1. We crop ground truth bounding boxes for each training person and randomly sample the same number of background boxes. Then we shuffle the boxes, resize them to a fixed resolution, and fine-tune the residual network model (ResNet-101 or ResNet-50) to classify the candidate boxes. To ensure the feature size matches that of the ROI-pooling layer output in Fig. 3, we add one pooling layer to the residual network.

  2. We fine-tune the model resulting from the above step. Unlike the previous step, whole images with GT pedestrian bounding boxes and identification annotations are used for this fine-tuning. All loss layers except the center loss are used in this step.

  3. We fine-tune the model obtained in Step 2 with all loss layers, including the center loss. The input images and label annotations are the same as those in Step 2.

Iv-B Test Phase

The test phase is similar to that in [16]. For each gallery image, we obtain the features (feat) of all candidate pedestrians by performing the network forward computation once. For the query image, we replace the pedestrian proposals with the given bounding box and then do the forward computation to get its feature vector (feat). Finally, we compute the pairwise Euclidean distances between the query feature and those of the gallery candidates. The person similarity is evaluated based on these Euclidean distances.
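The test-phase matching can be sketched as follows, with hypothetical feature shapes (`rank_gallery` is an illustrative helper, not code from the paper):

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats):
    """Rank gallery candidates by Euclidean distance to the query feature."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return np.argsort(dists)           # most similar candidate first

query = np.array([1.0, 0.0])
gallery = np.array([[0.9, 0.1],        # near-duplicate of the query
                    [0.0, 1.0],
                    [5.0, 5.0]])
order = rank_gallery(query, gallery)
# order[0] == 0: the closest candidate ranks first
```

In the full system each gallery row would be the `feat` vector of one detected pedestrian box, computed in a single forward pass per gallery image.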

V Experiments

Dataset and Evaluation Metrics

We use two benchmark datasets, i.e., the CUHK-SYSU person search dataset [16] and the PRW dataset [15], in our experiments. Both the mean Average Precision (mAP) and the top-1 matching rate metrics are used. A candidate window is considered positive if its overlap with the ground truth is larger than the threshold used in previous works [16, 15].
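The overlap criterion is the standard intersection-over-union; a minimal sketch (`iou` is an illustrative helper, and the exact threshold follows the protocol of the cited works):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # half-overlapping unit-height boxes -> 1/3
```

A candidate window counts as positive when `iou(candidate, ground_truth)` exceeds the protocol's threshold.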

The CUHK-SYSU dataset is a large-scale and scene-diversified person search dataset, which contains 18,184 images, 8,432 persons, and 99,809 annotated bounding boxes. Each query person appears in at least two images. Each image may contain more than one query person and many background people. The dataset is partitioned into a training set and a test set. The training set contains 11,206 images and 5,532 query persons. The test set contains 6,978 images and 2,900 query persons. The training and test sets have no overlap in images or query persons. Besides the labeled query identifications, two special labels are used: one for unknown persons and 5,532 for background. Unknown-person boxes do not go into the random sampling softmax (RSS). Neither unknown-person nor background (5,532) boxes go into the center loss layer, because unknown persons and background are not unique identifications.

In the PRW dataset, a total of 11,816 frames are manually annotated to obtain 43,110 pedestrian bounding boxes, among which 34,304 pedestrians are annotated with an identification, and the rest are assigned a special unknown-person identification. The PRW dataset is divided into a training set with 5,704 frames and 482 identifications and a test set with 6,112 frames and 450 identifications. Similar to the CUHK-SYSU dataset, unknown persons and background do not go into the center loss layer, and unknown-person boxes do not go into the random sampling softmax (RSS).

Our ablation study is based on the CUHK-SYSU dataset, so as to provide more comprehensive performance comparisons with state-of-the-art methods, such as E2E-PS [16] and JDI-PS [39].

Training/Testing Settings We build our framework on two residual networks, i.e., ResNet-101 and ResNet-50 [34]. For ResNet-101, the pedestrian proposal network is connected after layer res4b22, while for ResNet-50 it is connected after layer res4f. In the following experiments, the default network is ResNet-101 if not specified otherwise. The three training steps described in Section IV-A use step-wise learning rate schedules; for Step 2, the initial learning rate is decreased by a factor of 10 during training. For both Steps 2 and 3, the batch size is limited due to the high memory cost. The networks are trained on an NVIDIA GeForce TITAN X GPU with 12 GB of memory. Our implementation is based on the publicly available Caffe framework [40].

For testing on the CUHK-SYSU dataset, in order to evaluate the influence of gallery size, different gallery sizes are used. In the following experiments, we report performance under the default gallery size of the test protocol if not specified otherwise. Each image contains several background persons on average, so with a large gallery a query person has to be distinguished from hundreds of background persons and thousands of non-pedestrian bounding boxes, which is challenging. For testing on the PRW dataset, all 6,112 frames in the test set are used as the gallery, which is also challenging.

Fig. 4: The mAP accuracy of person search on the CUHK-SYSU [16] validation set using different center loss weights \lambda.

V-A Results on the CUHK-SYSU dataset

Experiment on Parameter \lambda. The hyperparameter \lambda controls the weight of the center loss in the overall network loss function and is essential to our model, so we conduct an experiment to investigate the sensitivity of the proposed approach with respect to \lambda. We vary \lambda over a wide range to learn different models. The training dataset is divided into equal folds, most of which are used for training and the rest for validation, and cross-validation is deployed. The person search accuracies of these models on the CUHK-SYSU [16] validation set are shown in Fig. 4. Clearly, simply not using the center loss (i.e., \lambda = 0) is not a good choice, leading to poor person search mAP. A proper choice of \lambda improves the person search accuracy of the deeply learned features. We also observe that the person search performance of our model remains largely stable across a wide range of \lambda, and a similar trend is obtained for the top-1 accuracy. Thus, in the following experiments, we fix \lambda to the best-performing value from this validation.

Overall Person Search Performance. The results of IAN and the benchmarks under the two evaluation metrics are summarized in Table I. We compare our performance with the end-to-end deep learning for person search (E2E-PS) method [16] and the joint detection and identification feature learning for person search (JDI-PS) method [39], because of their superior performance. As reported in [39], JDI-PS attains much better performance than separately combining pedestrian detection ([30], [41]) and re-identification (for example, BoW [42] + cosine similarity, or LOMO+XQDA [43]).

With ResNet-101, a clear gain is obtained over [16] in both mAP and top-1 accuracy. To demonstrate the importance of the center loss in IAN, we also report the performance of E2E-PS [16] when the VGGNet is replaced with ResNet-101 and ResNet-50. A gain in both metrics is obtained purely because of the center loss. It is important to note that our performance is also better than that of JDI-PS [39] when both deploy ResNet-50.

Method E2E-PS [16] E2E-PS [16] E2E-PS [16] JDI-PS [39] IAN IAN
(VGGNet) (ResNet-50) (ResNet-101) (ResNet-50) (ResNet-50) (ResNet-101)
mAP (%) 77.23
top-1 (%) 80.45
TABLE I: Comparisons between IAN with E2E-PS [16] and JDI-PS [39].

Input of Center Loss. In our proposed method, only features of ground truth pedestrian boxes are input into the center loss layer. This scheme is verified experimentally: we instead input all positive pedestrian boxes (excluding background and unknown persons) into the center loss layer. Note that positive pedestrian boxes refer to candidate boxes whose overlap with ground truth pedestrian boxes is higher than the detection threshold. The results obtained with this scheme are lower than those using only features of ground truth pedestrian boxes, as reported in Table II. This is because the objective of the center loss is to increase intra-class feature compactness, but features of different positive boxes of the same pedestrian are dissimilar, as they cover different regions with varying background information.

Method IAN with all boxes IAN
mAP (%) 77.23
top-1 (%) 80.45
TABLE II: The person search performance if all positive pedestrian boxes are input into the center loss layer (IAN with all boxes).

Center Loss with VGGNet. In Section III-B, an analysis of why to avoid dropout is given. We also study this phenomenon experimentally. The VGGNet model provided in [16], where dropout layers are used, is fine-tuned with the center loss. The testing results with the fine-tuned models are reported in Table III. It is observed that as the iteration number increases, the performance decreases steadily, and after enough iterations a large portion of the mAP is lost compared with the model trained without the center loss. This experiment demonstrates the importance of replacing VGGNet with ResNet.

Iteration 0
mAP (%)
top-1 (%)
TABLE III: Person search performance using VGGNet (dropout) and center loss together.

We remove all the dropout layers in VGGNet and test E2E-PS [16] and our IAN. The obtained results are reported in Table IV. Interestingly, removing the dropout layers in VGGNet leads to slightly better person search performance. Our IAN with the center loss achieves a clear performance gain over E2E-PS [16] in both mAP and top-1 accuracy when both remove the dropout layers. Comparing the results in Tables III and IV, it is evident that dropout and the center loss are not compatible. The experimental results support our analysis in Section III-B.

Method E2E-PS[16] E2E-PS[16] IAN
(VGGNet) (VGGNet no dropout) (VGGNet)
mAP (%) 73.65
top-1 (%) 76.14
TABLE IV: Comparison between IAN and E2E-PS [16] for VGGNet with all dropout layers removed.

Effects of Gallery Size. The task of person search is more challenging when the gallery size increases. We vary the gallery size up to 4,000 and test our approach, E2E-PS [16] with both VGGNet and ResNet-101, and JDI-PS [39]. The obtained mAPs for various gallery sizes are reported in Fig. 5. As expected, the mAP decreases as the gallery size increases. Meanwhile, for all gallery sizes, our approach significantly outperforms E2E-PS [16] with both VGGNet and ResNet-101; for the large gallery size of 4,000, the mAP gain over E2E-PS [16] is substantial. It is also observed from Fig. 5(b) that IAN outperforms JDI-PS [39] with a good margin for various gallery sizes, including the large gallery size of 4,000. It is worth noting that this comparison is fair because both use the ResNet-50 network.

Fig. 5: Person search performance comparison for various gallery sizes. (a) Comparing IAN with E2E-PS [16]; (b) comparing IAN with JDI-PS [39].
Fig. 6: Three sets of examples of the top-5 person search matches on the CUHK-SYSU test data. Rows 1, 4, 7 are results of E2E-PS [16]; rows 2, 5, 8 are results of E2E-PS [16] when ResNet-101 is used; rows 3, 6, 9 are results of IAN. The red box region in the first column is the probe image. The green boxes in the other columns are search results, where red boxes are ground truth results. Best viewed in color.

Occlusion and Resolution. We also test IAN using low-resolution query persons and partially occluded persons. The gallery size is fixed, and several methods are evaluated on these subsets. The results are shown in Table V. All the methods perform significantly worse on both the occlusion and low-resolution subsets than on the whole test set. Nevertheless, IAN consistently and significantly outperforms E2E-PS [16].

Method      E2E-PS [16] (VGGNet)   E2E-PS [16] (Res-101)   IAN (Res-101)
            mAP     top-1          mAP     top-1           mAP     top-1
Low-Res     -       -              -       -               52.60   54.48
Occlusion   -       -              -       -               53.02   54.55
Whole       -       -              -       -               77.23   80.45
TABLE V: Experimental results of three solutions on the occlusion subset, the low-resolution subset, and the whole test set.

V-B Results on PRW dataset

The results obtained on the PRW dataset are reported in Table VI. Our proposed method outperforms the DPM-Alex+IDE method reported in [15] by a clear margin in top-1 accuracy. More importantly, [15] tests various combinations of pedestrian detection and re-identification methods on the PRW dataset and shows that DPM-Alex+IDE performs best among all of them. Moreover, IAN also outperforms E2E-PS [16], which demonstrates the benefit of the center loss.

Method      DPM-Alex+IDE [15]   E2E-PS [16] (ResNet-101)   IAN (ResNet-101)
mAP (%)     -                   -                          23.00
top-1 (%)   -                   -                          61.85
TABLE VI: Performance comparison on the PRW dataset with the state-of-the-art.

VI Conclusions

To address challenging issues in modern person search, we proposed a novel Individual Aggregation Network (IAN) that accurately localizes pedestrians while minimizing intra-person variations in their feature representations. In particular, we built IAN upon a state-of-the-art object detection framework, the faster R-CNN model, so that high-quality region proposals for pedestrians are produced in an online manner for person search. In addition, IAN incorporates a center loss that is demonstrated to be effective at relieving the negative effect of the large variance in visual appearance of the same person. We also performed a network compatibility study for the center loss and explained why dropout is not compatible with it. Finally, extensive experiments on two benchmarks, CUHK-SYSU and PRW, show that IAN achieves state-of-the-art performance on both datasets, demonstrating the superiority of the proposed network.

One limitation of the proposed IAN is its large GPU memory requirement, because the center loss needs to track the feature centers of all classes. Reducing the GPU memory cost and the network's computational complexity will be the focus of our future work on IAN.
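The memory cost mentioned above comes from storing one running center per identity. A minimal sketch of the center loss bookkeeping, following the formulation of Wen et al. [19], illustrates this; the update rate `alpha` and the NumPy implementation are for exposition only.

```python
import numpy as np

def center_loss_step(features, labels, centers, alpha=0.5):
    """Compute the center loss for a batch and update class centers in place.

    features: (batch, dim) array of embeddings
    labels:   (batch,) int array of identity labels
    centers:  (num_classes, dim) array of running class centers; this array
              grows with the number of identities, which is the source of
              the memory cost discussed above
    alpha:    center update rate
    """
    diffs = features - centers[labels]              # x_i - c_{y_i}
    loss = 0.5 * np.sum(diffs ** 2) / len(labels)   # L_c = (1/2) mean ||x_i - c_{y_i}||^2
    # Move each center toward the mean of its samples in this batch
    for c in np.unique(labels):
        mask = labels == c
        centers[c] += alpha * diffs[mask].mean(axis=0)
    return loss
```

The gradient of this loss pulls each feature toward its class center, which is what encourages the intra-class compactness described in the paper; the `centers` array of shape `(num_classes, dim)` must be kept resident alongside the network.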

References

  • [1] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, pp. 91–99, 2015.
  • [2] A. Bedagkar-Gala and S. K. Shah, “A survey of approaches and trends in person re-identification,” Image and Vision Computing, vol. 32, no. 4, pp. 270–286, 2014.
  • [3] S. Gong, M. Cristani, S. Yan, and C. C. Loy, Person Re-identification, vol. 1. Springer, 2014.
  • [4] V. Chatzis, A. G. Bors, and I. Pitas, “Multimodal decision-level fusion for person authentication,” IEEE Transactions on Systems, Man, and Cybernetics-part A: Systems and Humans, vol. 29, no. 6, pp. 674–680, 1999.
  • [5] F. Dornaika and A. Bosaghzadeh, "Exponential local discriminant embedding and its application to face recognition," IEEE Transactions on Cybernetics, vol. 43, no. 3, pp. 921–934, 2013.
  • [6] R. M. da Costa and A. Gonzaga, “Dynamic features for iris recognition,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 4, pp. 1072–1082, 2012.
  • [7] R. Cappelli, M. Ferrara, and D. Maio, “A fast and accurate palmprint recognition system based on minutiae,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 3, pp. 956–962, 2012.
  • [8] D. Yi, Z. Lei, S. Liao, and S. Z. Li, "Deep metric learning for person re-identification," in International Conference on Pattern Recognition, pp. 34–39, 2014.
  • [9] W. Li, R. Zhao, T. Xiao, and X. Wang, “Deepreid: Deep filter pairing neural network for person re-identification,” in IEEE Conference on Computer Vision and Pattern Recognition, pp. 152–159, 2014.
  • [10] W.-S. Zheng, S. Gong, and T. Xiang, “Person re-identification by probabilistic relative distance comparison,” in IEEE Conference on Computer Vision and Pattern Recognition, pp. 649–656, 2011.
  • [11] W.-S. Zheng, S. Gong, and T. Xiang, “Reidentification by relative distance comparison,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 3, pp. 653–668, 2013.
  • [12] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian, “Scalable person re-identification: A benchmark,” in IEEE International Conference on Computer Vision, pp. 1116–1124, 2015.
  • [13] D. Tao, L. Jin, Y. Wang, and X. Li, “Person reidentification by minimum classification error-based KISS metric learning,” IEEE Transactions on Cybernetics, vol. 45, no. 2, pp. 242–252, 2015.
  • [14] Y. Xu, B. Ma, R. Huang, and L. Lin, “Person search in a scene by jointly modeling people commonness and person uniqueness,” in ACM international conference on Multimedia, pp. 937–940, ACM, 2014.
  • [15] L. Zheng, H. Zhang, S. Sun, M. Chandraker, and Q. Tian, “Person re-identification in the wild,” arXiv preprint arXiv:1604.02531, 2016.
  • [16] T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang, “End-to-end deep learning for person search,” arXiv preprint arXiv:1604.01850, 2016.
  • [17] Y. Sun, X. Wang, and X. Tang, “Deep learning face representation from predicting 10,000 classes,” in IEEE Conference on Computer Vision and Pattern Recognition, pp. 1891–1898, 2014.
  • [18] Y. Sun, Y. Chen, X. Wang, and X. Tang, “Deep learning face representation by joint identification-verification,” in Advances in Neural Information Processing Systems, pp. 1988–1996, 2014.
  • [19] Y. Wen, K. Zhang, Z. Li, and Y. Qiao, “A discriminative feature learning approach for deep face recognition,” in European Conference on Computer Vision, pp. 499–515, Springer, 2016.
  • [20] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
  • [21] E. Ahmed, M. Jones, and T. K. Marks, “An improved deep learning architecture for person re-identification,” in IEEE Conference on Computer Vision and Pattern Recognition, pp. 3908–3916, 2015.
  • [22] S. Ding, L. Lin, G. Wang, and H. Chao, “Deep feature learning with relative distance comparison for person re-identification,” Pattern Recognition, vol. 48, no. 10, pp. 2993–3003, 2015.
  • [23] D. Gray and H. Tao, “Viewpoint invariant pedestrian recognition with an ensemble of localized features,” in European Conference on Computer Vision, pp. 262–275, Springer, 2008.
  • [24] W. Li, R. Zhao, and X. Wang, “Human reidentification with transferred metric learning,” in Asian Conference on Computer Vision, pp. 31–44, Springer, 2012.
  • [25] R. R. Varior, B. Shuai, J. Lu, D. Xu, and G. Wang, “A siamese long short-term memory architecture for human re-identification,” in European Conference on Computer Vision, pp. 135–153, Springer, 2016.
  • [26] H. Liu, J. Feng, M. Qi, J. Jiang, and S. Yan, “End-to-end comparative attention networks for person re-identification,” arXiv:1606.04404, 2016.
  • [27] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian, “Mars: A video benchmark for large-scale person re-identification,” in European Conference on Computer Vision, pp. 868–884, Springer, 2016.
  • [28] P. Dollár, Z. Tu, P. Perona, and S. Belongie, "Integral channel features," in British Machine Vision Conference, 2009.
  • [29] J. Marin, D. Vázquez, A. M. López, J. Amores, and L. I. Kuncheva, “Occlusion handling via random subspace classifiers for human detection,” IEEE Transactions on Cybernetics, vol. 44, no. 3, pp. 342–354, 2014.
  • [30] P. Dollár, R. Appel, S. Belongie, and P. Perona, “Fast feature pyramids for object detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1532–1545, 2014.
  • [31] W. Nam, P. Dollár, and J. H. Han, “Local decorrelation for improved pedestrian detection,” in Advances in Neural Information Processing Systems, pp. 424–432, 2014.
  • [32] J. Hosang, M. Omran, R. Benenson, and B. Schiele, “Taking a deeper look at pedestrians,” in IEEE Conference on Computer Vision and Pattern Recognition, pp. 4073–4082, 2015.
  • [33] L. Zhang, L. Lin, X. Liang, and K. He, “Is faster r-cnn doing well for pedestrian detection?,” in European Conference on Computer Vision, pp. 443–457, Springer, 2016.
  • [34] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [35] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
  • [36] R. Hadsell, S. Chopra, and Y. LeCun, "Dimensionality reduction by learning an invariant mapping," in IEEE Conference on Computer Vision and Pattern Recognition, pp. 1735–1742, 2006.
  • [37] S. Ding, L. Lin, G. Wang, and H. Chao, “Deep feature learning with relative distance comparison for person re-identification,” Pattern Recognition, vol. 48, pp. 2993–3003, Oct. 2015.
  • [38] R. Girshick, “Fast R-CNN,” in IEEE International Conference on Computer Vision, pp. 1440–1448, 2015.
  • [39] T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang, “Joint detection and identification feature learning for person search,” arXiv:1604.01850v2, 2017.
  • [40] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” arXiv preprint arXiv:1408.5093, 2014.
  • [41] B. Yang, J. Yan, Z. Lei, and S. Z. Li, “Convolutional channel features,” in IEEE International Conference on Computer Vision, pp. 82–90, 2015.
  • [42] M. Koestinger, M. Hirzer, P. Wohlhart, P. M. Roth, and H. Bischof, “Large scale metric learning from equivalence constraints,” in IEEE International Conference on Computer Vision, pp. 2288–2295, 2012.
  • [43] S. Liao, Y. Hu, X. Zhu, and S. Z. Li, “Person re-identification by local maximal occurrence representation and metric learning,” in IEEE Conference on Computer Vision and Pattern Recognition, pp. 2197–2206, 2015.