Most existing methods for generating adversarial examples focus on the closed set setting, where the source and target domains share exactly the same classes [30, 7, 13, 18, 4]. However, in a more realistic scenario, we also face open set problems, where the target classes have limited or even no overlap with the source classes [15, 1, 5]. This new setting suggests a retrieval procedure for the target domain: given a query image of an arbitrary class and a large database of images, we compute the similarity between the query and the database images and rank the database images according to their similarity to the query. In this context, we consider the task of generating adversarial examples from the query images to fool the retrieval system.
When considering open set recognition, existing closed set attack methods encounter two problems. First, closed set methods attack class predictions to generate adversarial examples, but this strategy is inconsistent with the testing procedure of open set recognition, which is a retrieval problem (Fig. 1(b)). In fact, open set recognition and closed set recognition differ during testing. The latter is by nature a classification problem, because the testing images fall into the training classes. The former, however, is more of a retrieval problem: given a query of an unseen class, we aim to retrieve its relevant images from the testing set. Therefore, attacking the classification layer does not directly affect the retrieval task, which relies on the intermediate deep features. Second, closed set methods attack the classification prediction, whose label space usually does not contain the query class in the open set problem (Fig. 1(a)). Given a query image of an unseen class, the traditional attack methods may thus produce inferior adversarial gradients, which compromises the attack effectiveness.
Given these potential problems of closed set approaches, this work focuses on generating adversarial examples tailored for open set recognition, which we view as a retrieval problem. To this end, we propose to attack the query images. For a successful adversarial attack on a query, we aim for all the true matches to be ranked as low as possible in the obtained rank list. To our knowledge, no well-founded method has been proposed for attacking open set recognition systems, and we fill this gap in this work. Under this new setting, an alternative to attacking the query image is to attack the database (the candidate image pool). However, the database can be of large scale, with millions of images, and attacking a large number of database images is very time-consuming. In this paper, we therefore focus on crafting adversarial query images. Without knowledge of the database, we report that adversarial queries alone are sufficient to fool the open set system and that the cost of generating an adversarial query is relatively low.
Under the open set context, we propose a new approach for adversarial example generation, named Opposite-Direction Feature Attack (ODFA). ODFA works on the feature level, which matches the target domain testing procedure, i.e., similarity computation between the query and database images using their respective features. Our key idea is to explicitly push the feature of the adversarial example away from its original feature. Specifically, we first define the opposite-direction feature, which, as its name implies, points in the opposite direction from the feature of the original query. During the adversarial attack, we then force the query feature to move towards the opposite-direction feature. Due to the reversed direction of the feature vector of the adversarial query, the similarity between the database true matches and the adversarial query can be very low. Therefore, when using the adversarial query, the retrieval model is prone to treating all the true matches as outliers.
In the experiments, we show that the proposed ODFA method leads to a large accuracy drop on two open set recognition / retrieval datasets, i.e., Market-1501 and CUB-200-2011. Under various levels of image perturbation, ODFA outperforms state-of-the-art closed set attack methods, i.e., the fast-gradient sign method [7], the basic iterative method [13] and the iterative least-likely class method [13]. Moreover, when we apply ODFA to closed set recognition systems, e.g., on Cifar-10, its attack effect does not show clear superiority over the same set of methods [7, 13]. This indicates the specificity of our method to open set problems. Additionally, we observe that ODFA has good transferability under the open set scenario: the adversarial queries crafted for one retrieval model remain adversarial for another model. This observation is consistent with previous findings under the closed set setting [30, 23, 17, 19].
2 Related Work
Open Set Recognition.
Open set recognition is a challenging task initially proposed for face recognition, where the test faces have limited identity overlap with the training faces [15]. It demands a robust system with good generalizability. In this work, we view open set recognition as a retrieval task. In some early works [14, 6, 5, 2], an intermediate semantic representation is usually learned from the source dataset and applied to the target dataset. Recently, progress in this field has been driven by two factors: the availability of large-scale source sets and the representations learned with deep neural networks. Most state-of-the-art methods apply a Convolutional Neural Network (CNN) to extract visual features and rank the images according to feature similarity [24, 31, 37]. Despite the impressive performance, no prior work has explored the robustness of open set systems. In this paper, we do not intend to achieve state-of-the-art accuracy. We train baseline CNNs on several datasets, which yield competitive results, and then attack these models with adversarial queries.
Adversarial Sample. Szegedy et al. [30] first show that adversarial images, while looking almost identical to the original ones, can mislead a CNN model into classifying them as a specific class. This raises security concerns about current state-of-the-art models [26, 4] and also provides more insight into the CNN mechanism. Given an input image, gradient-based methods need access to the gradient of the applied model. One of the earliest works is the fast-gradient sign method [7], which generates adversarial examples in one step. Some works extend [7] by iteratively updating the adversarial images with small step sizes, i.e., the basic iterative method [13], DeepFool [18] and the momentum iterative method [3]. Compared with the fast-gradient sign method, the perturbation generated by the iterative methods is smaller, and the visual quality of the adversarial samples is closer to the original images. On the other hand, another line of methods relies on searching the input space. The Jacobian-based saliency map attack greedily modifies the input instance [23]. In [20], Narodytska et al. further show that a single-pixel perturbation, which may be out of the valid image range, can successfully lead to misclassification on small-scale images. They also extend the method to large-scale images by local greedy searching.
The closest inspiring work is the iterative least-likely class method [13], which makes the classification model output interesting mistakes, e.g., classifying an image of the class vehicle into the class cat. It achieves this effect by increasing the predicted probability of the least-likely class. This work adopts a similar spirit: in order to fool the retrieval model into assigning the true matches possibly low ranks, we increase the similarity of the query feature vector to a vector of the opposite direction in the feature space. Here we emphasize that our work is different from [13] in two aspects. First, Kurakin et al. [13] focus on closed set recognition and rely on class predictions to obtain the least-likely class. In the open set setting, the classification model faces images from unseen classes, and the inaccurate class prediction may compromise the iterative least-likely class method. In this respect, the proposed method directly works on the intermediate feature level and alleviates this problem. Second, Kurakin et al. [13] increase the probability of the least-likely class but do not decrease the probability of the most-likely class, so the true match images / classes may still be in the top-K predictions. In comparison, our method explicitly decreases the similarity between the adversarial image and its original image in the feature space, so that the similarity between the adversarial image and the original true matches also drops. The model is thus prone to ranking all the true matches out of the top-K.
3.1 Notation
We use $X$ to denote the original query image. We extract its visual feature $f = F(X)$, where $F(\cdot)$ denotes some nonlinear function, such as a CNN, which maps an image to a feature vector. For some retrieval models [37, 28], a classifier $C$ is trained which maps a feature to a class probability vector $z = C(f)$. $z$ is a $K$-dim score vector, where $K$ denotes the number of classes in the source set. For two images $X_1$ and $X_2$, we denote their cosine similarity as $s(X_1, X_2) = \frac{f_1^\top f_2}{\|f_1\| \, \|f_2\|}$, where $\|\cdot\|$ is the L2-norm, $f_1 = F(X_1)$ and $f_2 = F(X_2)$. Moreover, we denote the objective function and its gradient as $J$ and $\nabla J$, respectively. To keep each pixel of the adversarial sample within a valid value range, we follow the practice in [13]: we clip the pixels whose values fall out of the valid range, and remove the distortions that are larger than a hyper-parameter $\epsilon$, i.e., $X' = \mathrm{Clip}_{X, \epsilon}\{X'\}$. Since a large $\epsilon$ makes the perturbation perceptible to humans, we use a small $\epsilon$ in this work.
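For concreteness, the cosine similarity and the clip operation can be sketched as follows. This is a minimal NumPy sketch under our own naming (`cosine_sim` and `clip_perturbation` are not from the paper), assuming pixel values in [0, 255]:

```python
import numpy as np

def cosine_sim(f1, f2):
    # s(X1, X2) = <f1, f2> / (||f1|| * ||f2||), the similarity used for ranking
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))

def clip_perturbation(x_adv, x, eps):
    # keep each pixel of the adversarial sample within eps of the original
    # image, and within the valid pixel range [0, 255]
    x_adv = np.clip(x_adv, x - eps, x + eps)
    return np.clip(x_adv, 0.0, 255.0)
```

Note that a feature and its opposite-direction counterpart have cosine similarity exactly -1, the property that ODFA exploits below.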
3.2 Victim Model
In this section, we introduce the victim model to be attacked by the proposed ODFA method. Given an annotated source dataset, the victim model is trained to learn a mapping function from raw data to the semantic space, so that samples with similar content are mapped close to each other. A learned model with good generalization is able to project an unseen query into the neighborhood of its true match images in the feature space. We assume that the adversaries have access to the victim model's parameters and architecture. In this work, we deploy the widely used CNN model trained with the cross-entropy loss as the victim model for retrieval [37, 28]. The model aims to predict a training sample into one of the pre-defined classes. During testing, given an image (either a query or a database image), we extract the intermediate feature from the CNN model, which, for ResNet-50 [8], is the 2,048-dim Pool5 output. In this victim model, a linear classifier is used to predict the class probability: $z = \mathrm{softmax}(W^\top f + b)$, where $W$ and $b$ are learned parameters.
3.3 Adoption of Classification Attack in Open Set Recognition
Previous works in adversarial example generation usually attack the class prediction layer [7, 13]. In this manner, when the input image is changed, the activation of the fully-connected (FC) layer is also implicitly impacted. Although these methods do not directly attack the retrieval problem, we can still use the impacted intermediate features for retrieval. Therefore, in the open set scenario, we adopt these existing methods to generate the adversarial queries for retrieval.
Specifically, for the fast-gradient sign method [7] and the basic iterative method [13], we deploy the label predicted by the baseline model as the pseudo label $\hat{y} = \arg\max_y z_y$. To attack the model, the objective is to decrease the probability $z_{\hat{y}}$ so that the adversarial query is no longer classified into the pseudo class. The objective is written as
$J_{1}(X') = -\log\big(z_{\hat{y}}(X')\big),$
which is to be maximized with respect to the input.
For the iterative least-likely class method [13], we calculate the least-likely class $y_{LL} = \arg\min_y z_y$. The attack objective is to increase the probability $z_{y_{LL}}$ so that the input is classified as the least-likely class. The objective is
$J_{2}(X') = -\log\big(z_{y_{LL}}(X')\big),$
which is to be minimized with respect to the input.
To generate adversarial samples, the weights of the model are fixed and we only update the input. For the fast-gradient sign method, $X' = X + \epsilon \cdot \mathrm{sign}(\nabla_X J_1)$. For the iterative methods, i.e., the basic iterative method and the iterative least-likely class method, we initialize $X'$ with $X$, i.e., $X'_0 = X$, and then update the adversarial sample $N$ times: $X'_{n+1} = \mathrm{Clip}_{X,\epsilon}\{X'_n \pm \alpha \cdot \mathrm{sign}(\nabla_{X'} J)\}$, ascending $J_1$ and descending $J_2$, where $\alpha$ is a relatively small step size. Following [13], we set $\alpha$ and the number of iterations $N$. The clip function is applied to keep the pixels of the adversarial query within the valid range.
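The update rules above can be illustrated on a toy model. The following is a hedged NumPy sketch, not the paper's implementation: a fixed linear softmax classifier stands in for the CNN, and the function name and default hyper-parameters are our own choices.

```python
import numpy as np

def iterative_attack(x, W, b, y_target, alpha=1.0, eps=8.0, n_iter=10,
                     maximize=True):
    """Iterative sign-gradient attack on a fixed linear softmax classifier.

    With maximize=True, the input ascends the cross-entropy loss of y_target
    (basic iterative method on the pseudo label, pushing the input away from
    it); with maximize=False, it descends the loss (iterative least-likely
    class method, pulling the input toward y_target)."""
    x_adv = x.astype(float).copy()
    onehot = np.eye(len(b))[y_target]
    for _ in range(n_iter):
        z = W @ x_adv + b
        p = np.exp(z - z.max())
        p /= p.sum()
        # gradient of J = -log p[y_target] with respect to the input
        grad = W.T @ (p - onehot)
        step = alpha * np.sign(grad)
        x_adv = x_adv + step if maximize else x_adv - step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # the epsilon-ball clip
    return x_adv
```

On a toy two-class example, a few iterations suffice to drive the probability of the attacked class close to zero while the input stays within the epsilon-ball.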
Discussion. Why do closed set attack methods work for retrieval at all? The retrieval system needs a projection function that maps images to a feature space highly relevant to image semantics. Closed set attack methods change the class prediction of the query. Through the prediction function $z = \mathrm{softmax}(W^\top f + b)$ (note that $W$ and $b$ are fixed), the intermediate feature $f$ must change as well. Therefore, using the closed set methods, the similarity between the adversarial example and the original image is implicitly decreased in the feature space, and so is the similarity between the adversarial example and its true matches.
What are the disadvantages of the classification attack for retrieval? There are two. First, the source set and the query set usually do not contain the same classes. The pre-defined training classes in the source set cannot well represent the semantics of an unseen query in the target set, so the most-likely label may not really be the most likely one, and the least-likely label may not really be the least likely one, either. Second, the above-mentioned classification attack methods [13, 7] work on the prediction score and do not explicitly change the visual feature, so their adversarial effect on the retrieval system is limited.
3.4 Opposite-Direction Feature Attack
To overcome the above disadvantages of the closed set attack methods, we propose a new method named Opposite-Direction Feature Attack (ODFA), which directly works on the intermediate feature without attacking class predictions. Specifically, given a query image $X$, the retrieval model extracts the original feature $f = F(X)$. We assume that the similarity score between the query and its true match is relatively high. To attack the retrieval model, our target is to minimize the similarity score between the adversarial query $X'$ and the true match image $G$. To achieve this goal, we define the loss objective as
$J_{ODFA}(X') = \big\| f' - (-f) \big\|^2 = \big\| f' + f \big\|^2,$
where $f' = F(X')$ is the feature of the adversarial query.
This loss function aims to push the feature $f'$ of the adversarial image to the opposite side of the original query feature $f$. We name $-f$ the opposite-direction feature. When $J_{ODFA} \to 0$, $f'$ will be close to $-f$, i.e., $f' \approx -f$. The similarity score between the adversarial query and a true match image $G$ is then
$s(X', G) \approx -s(X, G).$
Because $s(X, G)$ is relatively high, we can deduce that $s(X', G)$ is low. To generate an adversarial query $X'$, we adopt an iterative method to update $X'$: $X'_{n+1} = \mathrm{Clip}_{X,\epsilon}\{X'_n - \alpha \cdot \mathrm{sign}(\nabla_{X'} J_{ODFA})\}$. The clip function keeps the pixels of the adversarial sample within the valid range.
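To make the procedure concrete, here is a minimal NumPy sketch of ODFA under simplifying assumptions of ours: a fixed linear map `A` stands in for the CNN feature extractor, and the squared-error loss follows the objective above.

```python
import numpy as np

def odfa_attack(x, A, alpha=0.5, eps=8.0, n_iter=20):
    """Opposite-Direction Feature Attack sketch with F(x) = A @ x.

    The loss J = ||F(x') + f||^2 pulls the adversarial feature f' toward
    the opposite-direction feature -f; A is fixed and only x' is updated."""
    f = A @ x                  # original query feature, kept fixed
    x_adv = x.astype(float).copy()
    for _ in range(n_iter):
        f_adv = A @ x_adv
        grad = 2.0 * A.T @ (f_adv + f)          # dJ/dx' by the chain rule
        x_adv = x_adv - alpha * np.sign(grad)   # gradient descent step
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

After the attack, the cosine similarity between the adversarial feature and the original query feature turns negative, so gallery images close to the original query lie far from the adversarial query.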
Discussion. We provide a 2D geometric interpretation to illustrate the difference in gradient direction between the proposed method and previous ones (Fig. 2). The classification attacks use the class prediction $z = W^\top f + b$, where $W$ is the learned weight matrix and $b$ is the bias term. $W$ contains the weights of the $K$ classes. We use $w_{\hat{y}}$ to denote the weight of the most-likely class $\hat{y}$ and $w_{LL}$ to denote the weight of the least-likely class $y_{LL}$. For the fast-gradient sign method and the basic iterative method, the gradient on the feature $f$ is approximately
$\nabla_f J_1 \approx -c \cdot w_{\hat{y}}.$
Note that $c$ is a positive constant, so the direction of the gradient is the direction of $-w_{\hat{y}}$. For the iterative least-likely class method, the gradient is approximately
$\nabla_f J_2 \approx -c' \cdot w_{LL},$
with $c'$ a positive constant; since this method descends $J_2$, the feature is pushed along the direction of $w_{LL}$. For unseen images of new classes, i.e., query images, $w_{\hat{y}}$ and $w_{LL}$ do not accurately describe the adversary of the original query, so the adversarial attack effect is limited. In this paper, instead of using class predictions, we directly attack the feature. The gradient of the proposed objective on the adversarial feature is written as
$\nabla_{f'} J_{ODFA} = 2\,(f' + f),$
where $f$ is the feature of the original query image. In Fig. 2(c), we draw the gradient direction of the first iteration. In the first iteration, $f' = f$, so $\nabla_{f'} J_{ODFA} = 4f$; descending this gradient leads the feature toward the opposite direction of the original feature, so the similarity to the true matches drops more quickly.
3.5 Implementation Details of the Victim Model
We adopt ResNet-50 [8] pre-trained on ImageNet [25] as the baseline model. During training, the pedestrian images in Market-1501 are resized to a fixed input size. It is a strong baseline, which can even achieve higher accuracy than the results reported in some CVPR'18 papers [35, 9]. The images in CUB-200-2011 are first resized to a fixed shorter side, and we then apply a random crop. We adopt the same mini-batch size for training on the two datasets. The learning rate is kept constant for the first 40 epochs and decayed for the last 20 epochs. For image classification, our implementation employs a ResNet [8] for the Cifar-10 dataset [12], with horizontal flipping for data augmentation. The training policy follows the practice in [8, 38]. Our source code will be available online. The implementation is based on the Pytorch package.
Market-1501 is a large-scale pedestrian retrieval dataset [36]. This type of retrieval task is also known as person re-identification (re-ID), which aims at spotting a person of interest across cameras. The images were collected under six different cameras on a university campus. There are 32,668 detected images of 1,501 identities in total. Following the standard train / test split, we use 12,936 images of 751 identities as the source set and the remaining 19,732 images of another 750 identities as the target set. There is no overlapping class (identity) between the source and target sets.
CUB-200-2011 is a fine-grained recognition dataset consisting of images of 200 bird species [32]. Following [27], we use the CUB-200-2011 dataset for fine-grained image retrieval: the first 100 classes (5,864 images) are used as the source set, and we evaluate the model on the remaining 100 classes (5,924 images).
Cifar-10 is a widely used image recognition dataset, containing 60,000 images of size 32×32 in 10 classes [12]. There are 50,000 training images and 10,000 test images. We conduct the closed set recognition evaluation on this dataset.
Given a limited image perturbation, we compare the methods by the drop in accuracy: the lower the accuracy, the better the attack. For open set recognition, we use two evaluation metrics, i.e., Recall@K and mean average precision (mAP). Recall@K is the probability that a right match appears in the top K of the rank list. Given a rank list, the average precision (AP) is the area under the recall-precision curve, and mAP is the mean of the average precision over all queries. For closed set recognition, we use the Top-1 and Top-5 accuracy, where Top-K is the probability that the right class appears among the top-K predicted classes.
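These metrics can be sketched directly from their definitions. Below is a minimal Python sketch with our own helper names; `ranked_labels` holds the class label of each database image in ranked order:

```python
def recall_at_k(ranked_labels, query_label, k):
    # 1 if a true match appears within the top-k of the rank list, else 0
    return int(query_label in ranked_labels[:k])

def average_precision(ranked_labels, query_label):
    # area under the recall-precision curve for a single rank list;
    # mAP is the mean of this value over all queries
    hits, precisions = 0, []
    for rank, label in enumerate(ranked_labels, start=1):
        if label == query_label:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0
```

For example, a rank list whose true matches sit at positions 1 and 3 gives AP = (1/1 + 2/3) / 2 = 5/6.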
4.2 Effectiveness of ODFA in Open Set Recognition / Retrieval
We first demonstrate the superior attack performance of ODFA in open set recognition / retrieval. Recall@1, Recall@10 and mAP on Market-1501 using clean and adversarial queries are summarized in Fig. 3. The victim model with clean queries arrives at high Recall@1 and mAP, consistent with the numbers reported in [37, 38]. As mentioned, the closed set attack methods change the semantic prediction, which implicitly changes the retrieval features. Even with a small $\epsilon$, the adversarial images generated by the three closed set attacks cause a substantial rank-1 error, and with a larger $\epsilon$, the iterative least-likely class method yields an even lower Recall@1. Nevertheless, these methods are not very effective at moving the true matches out of the top-10 ranks: although Recall@10 continues to decrease as $\epsilon$ increases, the best of them, i.e., the iterative least-likely class method, still retains a relatively high Recall@10. In comparison, the proposed ODFA achieves a lower Recall@1 and Recall@10 at the same $\epsilon$. This can be attributed to the opposite-direction attack mechanism: since the distance between the feature of the adversarial query and that of the original query is much larger, the true matches, which are close to the original query, lie far from the adversarial query in the feature space. As we increase $\epsilon$, the victim model yields Recall@1, Recall@10 and mAP lower than those under all the closed set attack methods.
As shown in previous works [30, 23, 21, 22, 17, 19], adversarial images can be transferred to other models under closed set recognition, because the models learn similar decision boundaries. In this work, we also conduct an experiment to test the transferability of the open set adversarial queries. We train a stronger victim model with DenseNet-121 [11] for person retrieval, which achieves high Recall@1 and mAP using clean images. The adversarial queries are independently generated by another ResNet-50 model. The experiment shows that the adversarial samples also compromise the performance of DenseNet-121: Recall@1, Recall@10 and mAP all drop clearly.
We visualize the retrieval results with the original and adversarial queries in Table 1. Since we employ an iterative policy with small steps, the adversarial queries generated by our method are visually close to the original queries. In these examples, the ranking results obtained by the original queries are good. However, when using the adversarial queries, the top-10 ranked images are all false matches whose appearance differs from the adversarial query. The adversarial query successfully makes the victim model assign high ranks to false match images. For the query person in yellow (second row), the adversarial query retrieves persons with light-colored shorts. For the query in red (fourth row), the adversarial query retrieves not only pedestrians in purple but also some background distractors.
The experiment on fine-grained image retrieval shows similar observations (Fig. 4). First, due to the subtle differences among the fine-grained classes, the baseline victim model does not achieve very high performance even with clean queries. Using ODFA, the retrieval accuracy becomes much worse still: as $\epsilon$ grows, Recall@1, Recall@10 and mAP all drop sharply. Second, compared with the three closed set methods, our method achieves a larger accuracy drop. Since there are no overlapping bird classes between the source and target sets, the impact of the classification attack is limited: at the same $\epsilon$, the best closed set method, i.e., the fast-gradient sign method, yields a smaller accuracy drop than the proposed method.
4.3 Performance of ODFA in Closed Set Recognition
After confirming its attack performance in open set recognition, we further test ODFA in closed set recognition. Results are shown in Fig. 5. We observe that our attack does not achieve the largest drop in top-1 accuracy when $\epsilon$ is small. This can be explained by the adversarial target: the iterative least-likely class method aims to make the model misclassify the adversarial example into the least-likely class, whereas our method does not increase the probability of any specific class. Although the confidence score of the correct class decreases, there is no competitor to replace the correct top-1 class, which already has a high confidence score. Nevertheless, for top-5 misclassification, the proposed method converges to a lower point than the other methods. Since the values of the bias term $b$ for the 10 classes are close, we ignore the impact of $b$. When our method converges, the original top-1 prediction becomes the class with the lowest probability, so the correct class is quickly moved out of the top-5 classes. With a larger $\epsilon$, the adversarial images generated by our method substantially compromise the top-5 accuracy, and the attacked top-1 accuracy is competitive with that of the iterative least-likely class method [13]. In summary, the proposed ODFA method reports competitive performance but is not evidently superior to the competing methods, unlike the case in open set recognition.
4.4 Attack Against State-of-the-Art Models
Furthermore, we evaluate our method on state-of-the-art models, which achieve higher accuracy on the original benchmarks. We observe that an open set recognition model with good generalizability is not as robust as expected. Specifically, for person retrieval (open set recognition), we attack a recent ECCV'18 model [29], following the open-source implementation at https://github.com/layumi/Person_reID_baseline_pytorch. On Market-1501, this victim model achieves high Recall@1 and mAP using clean queries. As shown in Fig. 6(a, b), Recall@1 and mAP drop sharply under the proposed ODFA. The fast-gradient sign method also leads to relatively low accuracy, but its accuracy drop is still smaller than that of the proposed method.
For image classification (closed set recognition), we apply a state-of-the-art model, WideResNet-28 [34]. Our re-implementation achieves high Top-1 and Top-5 accuracy using clean images. As shown in Fig. 6(c, d), the observations are consistent with those on the baseline victim models, i.e., a competitive top-1 accuracy drop and the largest top-5 accuracy drop.
In this paper, we 1) consider a new setting for adversarial attack, i.e., open set recognition, and 2) propose a new attack method named Opposite-Direction Feature Attack (ODFA). The attack works on the intermediate feature instead of the class prediction: it uses the opposite gradient direction to attack the retrieval feature, which directly compromises the ranking result. On two image retrieval datasets, i.e., Market-1501 and CUB-200-2011, ODFA leads to a larger drop in ranking accuracy than state-of-the-art closed set methods under limited image perturbation. For closed set recognition, the attack performance of ODFA does not clearly surpass its competitors, indicating its specificity to open set problems. In the future, we will investigate applying the proposed attack to shallow layers and study its effect on other tasks, such as semantic segmentation and object detection [33, 16].
-  P. P. Busto and J. Gall. Open set domain adaptation. In ICCV, 2017.
-  C. Deng, X. Liu, C. Li, and D. Tao. Active multi-kernel domain adaptation for hyperspectral image classification. Pattern Recognition, 2018.
-  Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li. Boosting adversarial attacks with momentum. CVPR, 2018.
-  K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song. Robust physical-world attacks on deep learning visual classification. In CVPR, 2018.
-  Y. Fu, T. M. Hospedales, T. Xiang, and S. Gong. Transductive multi-view zero-shot learning. TPAMI, 37(11):2332–2345, 2015.
-  Y. Fu, T. M. Hospedales, T. Xiang, and S. Gong. Learning multimodal latent attributes. TPAMI, 36(2):303–316, 2014.
-  I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. ICLR, 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
-  L. He, J. Liang, H. Li, and Z. Sun. Deep spatial feature reconstruction for partial person re-identification: Alignment-free approach. In CVPR, 2018.
-  A. Hermans, L. Beyer, and B. Leibe. In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737, 2017.
-  G. Huang, Z. Liu, L. V. D. Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, 2017.
-  A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
-  A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. ICLR Workshop, 2017.
-  C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.
-  F. Li and H. Wechsler. Open set face recognition using transduction. TPAMI, 27(11):1686–1697, 2005.
-  X. Liang, Y. Wei, X. Shen, J. Yang, L. Lin, and S. Yan. Proposal-free network for instance-level object segmentation. TPAMI, 2017.
-  Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and black-box attacks. In ICLR, 2017.
-  S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Deepfool: A simple and accurate method to fool deep neural networks. In CVPR, 2016.
-  S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. In CVPR, 2017.
-  N. Narodytska and S. P. Kasiviswanathan. Simple black-box adversarial perturbations for deep networks. CVPR Workshop, 2017.
-  N. Papernot, P. McDaniel, and I. Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016.
-  N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519. ACM, 2017.
-  N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In IEEE European Symposium on Security and Privacy, 2016.
-  F. Radenović, G. Tolias, and O. Chum. CNN image retrieval learns from BoW: Unsupervised fine-tuning with hard examples. In ECCV, 2016.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
-  M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016.
-  H. O. Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep metric learning via lifted structured feature embedding. In CVPR, 2016.
-  Y. Sun, L. Zheng, W. Deng, and S. Wang. Svdnet for pedestrian retrieval. In ICCV, 2017.
-  Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang. Beyond part models: Person retrieval with refined part pooling. ECCV, 2018.
-  C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. ICLR, 2014.
-  G. Tolias, R. Sicre, and H. Jégou. Particular object retrieval with integral max-pooling of CNN activations. In ICLR, 2016.
-  C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
-  C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille. Adversarial examples for semantic segmentation and object detection. In ICCV, 2017.
-  S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
-  Y. Zhang, T. Xiang, T. M. Hospedales, and H. Lu. Deep mutual learning. CVPR, 2018.
-  L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In ICCV, 2015.
-  L. Zheng, Y. Yang, and A. G. Hauptmann. Person re-identification: Past, present and future. arXiv:1610.02984, 2016.
-  Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang. Random erasing data augmentation. arXiv preprint arXiv:1708.04896, 2017.
Appendix A