Adversarial Metric Attack for Person Re-identification

01/30/2019 ∙ by Song Bai, et al. ∙ University of Oxford ∙ Johns Hopkins University

Person re-identification (re-ID) has attracted much attention recently due to its great importance in video surveillance. In general, the distance metrics used to match two person images are expected to be robust under various appearance changes. However, our work observes the extreme vulnerability of existing distance metrics to adversarial examples, generated by simply adding human-imperceptible perturbations to person images. Hence, the security risk is dramatically increased when deploying commercial re-ID systems in video surveillance, especially considering the strict requirements of public safety. Although adversarial examples have been extensively studied in classification analysis, they have rarely been studied in metric analysis such as person re-identification. The most likely reason is the natural gap between the training and testing of re-ID networks, that is, the predictions of a re-ID network cannot be directly used during testing without an effective metric. In this work, we bridge the gap by proposing Adversarial Metric Attack, a methodology parallel to adversarial classification attacks, which can effectively generate adversarial examples for re-ID. Comprehensive experiments clearly reveal the adversarial effects in re-ID systems. Moreover, by benchmarking various adversarial settings, we expect that our work can facilitate the development of robust feature learning with the experimental conclusions we have drawn.


1 Introduction

In recent years, person re-identification (re-ID) [22, 50] has attracted great attention in the computer vision community, driven by the increasing demand for video surveillance in public spaces. Hence, great effort has been devoted to developing robust re-ID features [8, 15, 38, 9, 26] and distance metrics [35, 51, 29, 7] to overcome the large intra-class variations of person images in viewpoint, pose, illumination, blur, occlusion and resolution. For example, the rank-1 accuracy of the latest state-of-the-art [28] on the Market-1501 dataset [49] now exceeds 90% (see Table 1), having increased rapidly since the dataset was first released in 2015.

However, we draw researchers’ attention to the fact that re-ID systems can be very vulnerable to adversarial attacks. Fig. 1 shows a case where a probe image is presented. Of the two gallery images, the true positive has a large similarity value and the true negative has a small one. Nevertheless, after adding human-imperceptible perturbations to the gallery images, the metric is easily fooled even though the new gallery images appear the same as the original ones.

Figure 1: Illustration of the adversarial effect in person re-identification. Given a probe image, its similarity with the true positive is decreased from 0.829 to 0.105, and that with the true negative is increased from 0.120 to 0.803 by adding human-imperceptible noise to the gallery images. The adversarial noise is rescaled for visualization.

Adversarial examples have been extensively investigated in classification analysis, such as image classification [24, 6], object detection [44] and semantic segmentation [1]. However, they have not attracted much attention in the field of re-ID, a metric analysis task whose basic goal is to learn a discriminative distance metric. A very likely reason is the existence of a natural gap between the training and testing of re-ID networks. While a re-ID model is usually trained with a certain classification loss, it discards the concept of class decision boundaries during testing and adopts a metric function to measure the pairwise distances between the probe and gallery images. Consequently, previous works on classification attacks [24, 6] do not generalize to re-ID systems, i.e., they attempt to push images across the class decision boundaries and do not necessarily corrupt the pairwise distances between images (see Fig. 2). Note that some re-ID networks are directly guided by metric losses (e.g., contrastive loss [16]), and their output can measure the between-object distances. However, it is still infeasible to directly attack such output owing to the sampling difficulty and computational complexity. Therefore, a common practice in re-ID is to take the trained model as a feature extractor and measure the similarities in a metric space.

Figure 2: A failure case (a) of classification attack and a successful case (b) of metric attack on two clean images (in blue). The adversarial examples (in red) generated by the classification attack cross the class decision boundary but largely preserve the pairwise distance between the two images.

Considering the importance of security for re-ID systems and the lack of systematic studies on their robustness to adversarial examples, we propose Adversarial Metric Attack, an efficient and generic methodology to generate adversarial examples by attacking metric learning systems. The contributions of this work are five-fold:

1) Our work presents what to our knowledge is the first systematic and rigorous investigation of adversarial effects in person re-identification, which should be taken into consideration when deploying re-ID algorithms in real surveillance systems.

2) We propose adversarial metric attack, a methodology parallel to existing adversarial classification attacks [39, 14], which can potentially be applied to other safety-critical applications that rely on a distance metric (e.g., face verification [40] and tracking [18]).

3) We define and benchmark various experimental settings for metric attack in re-ID, including white-box and black-box attack, non-targeted and targeted attack, single-model and multi-model attack, etc. Under those experimental settings, comprehensive experiments are carried out with different distance metrics and attack methods.

4) We present an early attempt at adversarial metric defense, and show that adversarial examples generated by attacking metrics can, in turn, be used to train a metric-preserving network.

5) The code will be publicly available to easily generate the adversarial version of benchmark datasets (see examples in supplementary material), which can serve as a useful testbed to evaluate the robustness of re-ID algorithms.

We hope that our work can facilitate the development of robust feature learning and accelerate the progress on adversarial attack and defense of re-ID systems with the methodology and the experimental conclusions presented.

2 Related Work

Adversarial learning [21, 42, 48, 36] has been incorporated into the training procedure of re-ID systems in many previous works. In these works, generative adversarial networks (GANs) [13] typically act as a data augmentation strategy by generating photo-realistic person images to enhance the training set. For example, Zheng et al. [52] applied GANs to generate unlabeled images and assigned them a uniform label distribution during training. Wei et al. [43] proposed the Person Transfer Generative Adversarial Network (PTGAN) to bridge the gap between different datasets. Moreover, Ge et al. [11] proposed the Feature Distilling Generative Adversarial Network to learn identity-related and pose-unrelated representations. In [33], binary codes are learned for efficient pedestrian matching via the proposed Adversarial Binary Coding.

However, to the best of our knowledge, no prior work has rigorously considered the robustness of re-ID systems to adversarial attacks, which have received wide attention in the context of classification-based tasks, including image classification [24], object detection [44] and semantic segmentation [1]. As these vision tasks aim to assign an instance to a category, they are special cases of the broader classification problem. On such systems, it has been demonstrated that adding carefully generated, human-imperceptible perturbations to an input image can easily cause the network to misclassify the perturbed image with high confidence. These tampered images are known as adversarial examples. Great effort has been devoted to the generation of adversarial examples [14, 24, 6]. In contrast, our work focuses on adversarial attacks on metric learning systems, which analyze the relationship between two instances.

3 Adversarial Metric Attack

Person re-identification [12] involves three sets of images: the probe set $\mathcal{P}$, the gallery set $\mathcal{G}$, and the training set $\mathcal{T}$. A label set $\mathcal{Y}$ is also given to annotate the identity of each image for training and evaluation. A general re-ID pipeline is: 1) learn a feature extractor $f_\theta$ with parameters $\theta$ (usually by training a neural network) by imposing a loss function $L$ on $\mathcal{T}$ and $\mathcal{Y}$; 2) extract the activations of intermediate layers of $f_\theta$ for $\mathcal{P}$ and $\mathcal{G}$ as their visual features $f_\theta(\mathcal{P})$ and $f_\theta(\mathcal{G})$, respectively; 3) compute the distances between $f_\theta(\mathcal{P})$ and $f_\theta(\mathcal{G})$ for indexing. When representing features, $f$ and $\theta$ are omitted where possible for notational simplicity.
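
To make steps 2) and 3) concrete, the sketch below shows how a trained re-ID network could be used as a feature extractor and how pairwise distances could be computed for ranking. The helper names (extract_features, pairwise_euclidean) and the use of PyTorch are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def extract_features(model, loader, device="cuda"):
    """Step 2): use a trained re-ID network purely as a feature extractor."""
    model.eval()
    feats = []
    for images, _ in loader:                      # labels are unused at test time
        f = model(images.to(device))              # activations before the loss layer
        feats.append(F.normalize(f, dim=1))       # L2-normalised features
    return torch.cat(feats, dim=0)

def pairwise_euclidean(probe_feats, gallery_feats):
    """Step 3): distances between every probe/gallery pair, used for ranking."""
    return torch.cdist(probe_feats, gallery_feats, p=2)
```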

In this paper, we aim to generate adversarial examples for re-ID models. As explained in Sec. 1, a different attack mechanism is required for metric learning systems as opposed to the existing attack methods which focus on classification-based models [14, 24]. Instead of attacking the loss function used in training the neural network as done in these previous works, we discard the training loss and propose to attack the distance metric. Such an attack mechanism directly results in the corruption of the pairwise distance between images, thus leading to guaranteed accuracy compromises of a re-ID system. This is the gist of the methodology proposed in this work, which we call adversarial metric attack.

Adversarial metric attack consists of four components: models for attack, metrics for attack, methods for attack and adversarial settings for attack. In the first component (Sec. 3.1), we train the model $f_\theta$ (with parameters $\theta$) on the training set $\mathcal{T}$ as existing re-ID algorithms do. The model parameters are then fixed during the attack. In the second component (Sec. 3.2), a metric loss $D$ is chosen as the attack target. In the third component (Sec. 3.3), an optimization method for producing adversarial examples is selected. In the last component (Sec. 3.4), with the probe set $\mathcal{P}$ as a reference, we generate adversarial examples on the gallery set $\mathcal{G}$ under a specific adversarial setting.

3.1 Models for Attack

In the proposed methodology, and unlike [14, 24], the model for attack is not limited to classification-based models. Instead, most re-ID models [19, 4, 37, 31] can be used. We review only two representative baseline models, which are commonly seen in person re-identification.

Cross Entropy Loss. The re-ID model is trained with the standard cross-entropy loss, where the labels are the identities of the training images. It is defined as

$$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N} \log p_{i,\,y_i}, \qquad (1)$$

where $p_{i,c}$ is the classification probability of the $i$-th training sample for the $c$-th category and $y_i$ is the ground-truth label of the $i$-th sample.

Triplet Loss. The re-ID model is trained with the triplet loss, defined as

$$L_{Tri} = \sum \max\big(0,\; \|f(x^a) - f(x^p)\|_2 - \|f(x^a) - f(x^n)\|_2 + m\big), \qquad (2)$$

where $x^a$ denotes the anchor point, $x^p$ denotes the positive point and $x^n$ denotes the negative point. The motivation is that the positive $x^p$, belonging to the same identity as the anchor, should be closer to $x^a$ than the negative $x^n$, which belongs to another identity, by at least a margin $m$.
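
Both baseline objectives correspond to standard PyTorch losses. The snippet below is a minimal sketch of how the B1–B3 (cross-entropy) and B4 (triplet) supervision could be wired up; the margin value and the helper signature are assumptions for illustration only.

```python
import torch.nn as nn

ce_loss = nn.CrossEntropyLoss()              # Eq. (1): labels are person identities
tri_loss = nn.TripletMarginLoss(margin=0.3)  # Eq. (2): margin m = 0.3 is a hypothetical choice

def training_loss(logits, labels, anchors, positives, negatives, use_triplet):
    """B4-style supervision if use_triplet, otherwise B1/B2/B3-style supervision."""
    if use_triplet:
        return tri_loss(anchors, positives, negatives)
    return ce_loss(logits, labels)
```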

3.2 Metrics for Attack

Metric learning (e.g., XQDA [29], KISSME [23]) has dominated the landscape of re-ID for a long time. Mathematically, a metric defined between the probe set $\mathcal{P}$ and the gallery set $\mathcal{G}$ is a function $D(\cdot,\cdot)$ that assigns a non-negative value to each pair $x_p \in \mathcal{P}$ and $x_g \in \mathcal{G}$. We write $D(x_p, x_g)$ for the distance between $x_p$ and $x_g$ in the metric space. In this section, we give the formal definition of the metric losses used in adversarial metric attack. It should be mentioned that any differentiable metric (or similarity) function can be used as the target loss.

Euclidean distance is a widely used distance metric. The metric loss is defined as

$$D_E(x_p, x_g) = \|f(x_p) - f(x_g)\|_2^2, \qquad (3)$$

which computes the squared Euclidean distance between $f(x_p)$ and $f(x_g)$.

Mahalanobis distance is a generalization of the Euclidean distance that takes the correlation between feature dimensions into account. Accordingly, we can define a metric loss as

$$D_M(x_p, x_g) = \big(f(x_p) - f(x_g)\big)^{\top} M \big(f(x_p) - f(x_g)\big), \qquad (4)$$

where $M$ is a positive semidefinite matrix.
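
A possible implementation of the two metric losses, assuming batched feature tensors, is sketched below; the function names and the use of torch.einsum are illustrative, and in practice $M$ would come from a metric-learning method such as XQDA.

```python
import torch

def euclidean_metric(fp, fg):
    """Eq. (3): squared Euclidean distance between probe and gallery features."""
    return ((fp - fg) ** 2).sum(dim=-1)

def mahalanobis_metric(fp, fg, M):
    """Eq. (4): (fp - fg)^T M (fp - fg), with M positive semidefinite."""
    diff = fp - fg
    return torch.einsum("...i,ij,...j->...", diff, M, diff)
```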

3.3 Methods for Attack

Given a metric loss defined above, we aim at learning an adversarial example $x_g^{adv} = x_g + r$, where $x_g$ denotes a certain gallery image and $r$ denotes the adversarial perturbation. The $\ell_\infty$ norm is used to measure the perceptibility of the perturbation, i.e., $\|r\|_\infty \leq \epsilon$, where $\epsilon$ is a small constant.

To this end, we introduce the following three attack methods:

Fast Gradient Sign Method (FGSM) [14] is a single-step attack method. It generates adversarial examples by

$$x_g^{adv} = x_g + \epsilon \cdot \mathrm{sign}\big(\nabla_{x_g} D(x_p, x_g)\big), \qquad (5)$$

where $\epsilon$ measures the maximum magnitude of the adversarial perturbation and $\mathrm{sign}(\cdot)$ denotes the signum function.

Iterative Fast Gradient Sign Method (I-FGSM) [24] is an iterative version of FGSM, defined as

$$x_{g,t+1}^{adv} = \mathrm{Clip}_{x_g,\epsilon}\Big(x_{g,t}^{adv} + \alpha \cdot \mathrm{sign}\big(\nabla_{x_g} D(x_p, x_{g,t}^{adv})\big)\Big), \qquad (6)$$

where $t$ denotes the iteration index and $\alpha$ is the step size. $\mathrm{Clip}_{x_g,\epsilon}(\cdot)$ is a clip function that ensures the generated adversarial example stays within the $\epsilon$-ball of the original image.

Momentum Iterative Fast Gradient Sign Method (MI-FGSM) [6] adds a momentum term on top of I-FGSM to stabilize the update directions. It is defined as

$$g_{t+1} = \mu \cdot g_t + \frac{\nabla_{x_g} D(x_p, x_{g,t}^{adv})}{\big\|\nabla_{x_g} D(x_p, x_{g,t}^{adv})\big\|_1}, \qquad x_{g,t+1}^{adv} = \mathrm{Clip}_{x_g,\epsilon}\Big(x_{g,t}^{adv} + \alpha \cdot \mathrm{sign}(g_{t+1})\Big), \qquad (7)$$

where $\mu$ is the decay factor of the momentum term and $g_t$ is the accumulated gradient at the $t$-th iteration.
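
The three attack methods differ only in how the gradient of the metric loss is turned into an update. The sketch below implements the non-targeted I-FGSM update of Eq. (6), with an optional momentum term following the MI-FGSM accumulation of Eq. (7); a single iteration with step size equal to $\epsilon$ recovers FGSM of Eq. (5). The hyper-parameter values and the PyTorch phrasing are illustrative assumptions, not the settings used in our experiments.

```python
import torch

def metric_loss(model, x_probe, x_gallery):
    """Metric loss to attack: squared Euclidean distance (Eq. 3) between features."""
    fp = model(x_probe)
    fg = model(x_gallery)
    return ((fp - fg) ** 2).sum()

def ifgsm_metric_attack(model, x_probe, x_gallery, eps=5/255, alpha=1/255,
                        iters=10, momentum=None):
    """Non-targeted I-FGSM (Eq. 6) on a distance metric.
    momentum=mu switches on the MI-FGSM accumulation of Eq. (7);
    iters=1 with alpha=eps reduces to FGSM (Eq. 5)."""
    x_adv = x_gallery.clone().detach()
    g = torch.zeros_like(x_adv)
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = metric_loss(model, x_probe, x_adv)
        grad = torch.autograd.grad(loss, x_adv)[0]
        if momentum is not None:                      # MI-FGSM: accumulate L1-normalised gradients
            g = momentum * g + grad / grad.abs().sum().clamp_min(1e-12)
            grad = g
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascend the metric loss
        x_adv = torch.min(torch.max(x_adv, x_gallery - eps), x_gallery + eps)  # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                     # keep a valid image range
    return x_adv.detach()
```

Note that the update ascends the metric loss, i.e., it pushes the adversarial gallery image away from the probe in feature space, which is exactly the non-targeted behaviour described in Sec. 3.4.2.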

3.4 Benchmark Adversarial Settings

In this section, we benchmark the experimental settings for adversarial metric attack in re-ID.

3.4.1 White-box and Black-box Attack

White-box attack requires the attacker to have prior knowledge of the target network, which means that the adversarial examples are generated with and tested on the same network with parameters $\theta$.

It should be mentioned that for adversarial metric attack, the loss layer used during training is replaced by the metric loss when attacking the network.

Black-box attack means that the attacker does not know the structure or the weights of the target network. That is to say, the adversarial examples are generated with a network having parameters $\theta$ and used to attack the metric on another network, which differs in structure, parameters, or both.

3.4.2 Targeted and Non-targeted Attack

Non-targeted attack aims to widen the metric distance between image pairs of the same identity. Given a probe image $x_p$ and a gallery image $x_g$ with the same identity, their distance $D(x_p, x_g)$ is ideally small. After imposing a non-targeted attack on the distance metric, the distance between $x_p$ and the generated adversarial example $x_g^{adv}$ is enlarged. Hence, when $x_p$ serves as the query, $x_g^{adv}$ will not be ranked high in the ranking list of $x_p$ (see Fig. 4).

Non-targeted attack can be achieved by applying the attack methods described in Sec. 3.3 to the metric losses described in Sec. 3.2.

Targeted attack aims to draw the gallery image towards the probe image in the metric space. This type of attack is usually performed on image pairs with different identities, i.e., pairs whose distance $D(x_p, x_g)$ is large. The generated $x_g^{adv}$ becomes closer to the query image $x_p$ in the metric space, deceiving the network into matching $x_g^{adv}$ with the identity of $x_p$. Hence, one can frequently observe adversarial examples generated by a targeted attack in the top positions of the ranking list of $x_p$ (see Fig. 4).

Unlike non-targeted attack, where adversarial examples do not steer the network towards a specific identity, targeted attack finds adversarial perturbations with pre-determined target labels during the learning procedure and tries to decrease the value of the objective function. This incurs a slight modification to the attack methods described in Sec. 3.3. For example, the formulation of FGSM [14] becomes

$$x_g^{adv} = x_g - \epsilon \cdot \mathrm{sign}\big(\nabla_{x_g} D(x_p, x_g)\big). \qquad (8)$$

The update procedure of I-FGSM [24] and MI-FGSM [6] can be modified similarly.
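
In code, the targeted variant amounts to flipping the sign of the update so that the gallery image descends the metric loss towards the chosen probe. Below is a minimal sketch of the FGSM case of Eq. (8), reusing the metric_loss helper from the Sec. 3.3 sketch; the function name and the $\epsilon$ value are illustrative.

```python
import torch

def targeted_fgsm_metric_attack(model, x_probe_target, x_gallery, eps=5/255):
    """Targeted FGSM on the metric (Eq. 8): descend, rather than ascend, the
    distance to a chosen probe of a different identity."""
    x_adv = x_gallery.clone().detach().requires_grad_(True)
    loss = metric_loss(model, x_probe_target, x_adv)   # distance to the target probe
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_gallery - eps * grad.sign()).clamp(0, 1).detach()
```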

3.4.3 Single-model and Multi-model Attack

Single-model attack differs from multi-model attack in that the former uses only one network to learn the adversarial examples, while the latter uses multiple networks. It has been shown in the context of adversarial classification attacks [32] that an ensemble of multiple models is crucial to the transferability of adversarial examples. Thus, multi-model methods generally perform better under the black-box setting.

To ensemble multiple networks, we suggest averaging the metric losses defined in Sec. 3.2. The logits or predictions of the networks are not used in the multi-model metric attack, since they do not necessarily have the same dimension, in contrast to the case of classification attack [6].
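
A sketch of the ensembled attack target is given below, assuming the same squared-Euclidean metric loss as before; the uniform weighting is an assumption, and the key point is that only feature distances, never logits, are combined.

```python
import torch

def ensemble_metric_loss(models, x_probe, x_gallery, weights=None):
    """Multi-model attack target: average the metric losses (Eq. 3) of several networks.
    Logits are never used, so the models may have different feature dimensions."""
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    loss = x_gallery.new_zeros(())                 # scalar accumulator on the right device
    for w, m in zip(weights, models):
        fp, fg = m(x_probe), m(x_gallery)
        loss = loss + w * ((fp - fg) ** 2).sum()
    return loss
```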

4 Adversarial Metric Defense

Here we present an early attempt at training a metric-preserving network to defend a distance metric.

The procedure is divided into four steps: 1) learn a clean model $f_\theta$ with parameters $\theta$ by imposing a loss function $L$ on $\mathcal{T}$ and $\mathcal{Y}$; 2) perform the adversarial metric attack described in Sec. 3 on $f_\theta$ with the training set $\mathcal{T}$, obtaining the adversarial version $\mathcal{T}^{adv}$ of the training set; 3) merge $\mathcal{T}$ and $\mathcal{T}^{adv}$, and re-train a metric-preserving model $f_{\theta'}$; 4) use $f_{\theta'}$ as the testing model in place of $f_\theta$.

As for performance, we find that $f_{\theta'}$ closely matches $f_\theta$ when testing on the original (clean) gallery set $\mathcal{G}$, but significantly outperforms $f_\theta$ when testing on the adversarial version of the gallery set $\mathcal{G}^{adv}$. In this sense, re-ID systems gain robustness to adversarial metric attacks.
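
A minimal sketch of the defense procedure is shown below, under the assumption that an attack_fn is available for perturbing training images (e.g., the I-FGSM sketch from Sec. 3.3 applied with a reference image); the data handling and hyper-parameters are illustrative, not the training recipe used in Sec. 5.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def train_metric_preserving(model, clean_train_set, loss_fn, attack_fn, optimizer,
                            epochs=10, batch_size=32, device="cuda"):
    """Steps 2)-4): craft an adversarial copy of the training set, merge it with the
    clean one, and re-train.  attack_fn(model, image) is assumed to return the
    adversarial version of a single training image."""
    adv_train_set = [(attack_fn(model, x).cpu(), y) for x, y in clean_train_set]  # step 2)
    merged = ConcatDataset([clean_train_set, adv_train_set])                      # step 3)
    loader = DataLoader(merged, batch_size=batch_size, shuffle=True)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            optimizer.step()
    return model  # step 4): deploy this metric-preserving model at test time
```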

5 Experiments

This section evaluates the proposed adversarial metric attack and adversarial metric defense.

Datasets. The Market-1501 dataset [49] is a widely used benchmark for person re-ID. It consists of 1,501 identities, of which 751 identities (12,936 images) are used for training and 750 identities (19,732 images) for testing, with 3,368 images for querying. The DukeMTMC-reID dataset [52] has 36,411 images taken by 8 cameras. The training set has 16,522 images (702 identities). The testing set has 2,228 probe images (702 identities) and 17,661 gallery images.

Both Cumulative Matching Characteristics (CMC) scores and mean average precision (mAP) are used for performance evaluation.

Baselines. Four base models are implemented. Specifically, we take ResNet-50 [17], ResNeXt-50 [45] and DenseNet-121 [20] pretrained on ImageNet [5] as the backbone models. The three networks are supervised by the cross-entropy loss, yielding three base models denoted as B1, B2 and B3, respectively. Meanwhile, we also supervise ResNet-50 [17] with the triplet loss [19] and obtain the base model B4.

All the models are trained using the Adam optimizer. When testing, we extract the normalized activations from the networks before the loss layer as the image features.

State-of-the-art Methods. As there exists a huge number of re-ID algorithms [3, 2, 27, 10, 30, 47, 46], it is unrealistic to evaluate all of them. Here, we reproduce two representatives that achieve the latest state-of-the-art performance, namely Harmonious Attention CNN (HACNN) [28] and the Multi-task Attentional Network with Curriculum Sampling (Mancs) [41]. Both employ attention mechanisms to address person misalignment. We follow their default settings and report their performance, as well as that of the four base models, in Table 1.

Methods Market-1501 DukeMTMC-reID
Rank-1 mAP Rank-1 mAP
B1 91.30 77.52 82.85 67.72
B2 91.44 78.21 83.03 67.85
B3 91.95 79.08 83.34 68.30
B4 84.29 67.86 76.57 57.31
HACNN [28] 90.56 75.28 80.70 64.44
Mancs [41] 93.17 82.50 85.23 72.89
Table 1: The performance of the four base models and the two state-of-the-art methods implemented in this work. The reproduced results of [28, 41] differ slightly from those reported in the original papers.

Experimental Design. The design of experiments involves various settings, including different distance metrics, different attack methods, white-box and black-box attack, non-targeted and targeted attack and single-model and multi-model attack described in Sec. 3.

If not specified otherwise, we use the Euclidean distance defined in Eq. (3) as the metric, use I-FGSM defined in Eq. (6) as the attack method, and perform white-box non-targeted attacks on base model B1. The step size $\alpha$ in Eq. (6) and the decay factor $\mu$ in Eq. (7) are fixed, and the iteration number is set following [24].

5.1 White-box and Black-box Attack

Adversarial metric attack is first evaluated with a single model. For each query class, we generate adversarial examples on the corresponding gallery set. Thus, an adversarial version of the gallery set can be stored off-line and used for performance evaluation. The maximum magnitude of adversarial perturbation is kept small enough to remain imperceptible to human vision (examples are shown in Fig. 1); results are reported on the Market-1501 dataset in Table 2 and on the DukeMTMC-reID dataset in Table 3. Therein, the rows denote the networks that we attack and the columns denote the networks that we test on.
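
The evaluation protocol can be summarized as follows: the adversarial gallery is generated once with a source model and then ranked by whichever test model is under evaluation. The helper below and the commented usage are an illustrative PyTorch sketch (rank_gallery and the reuse of the Sec. 3.3 attack are assumptions, not the exact evaluation code).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def rank_gallery(test_model, probes, gallery):
    """Rank a (clean or adversarial) gallery for each probe under a given test model."""
    fp = F.normalize(test_model(probes), dim=1)
    fg = F.normalize(test_model(gallery), dim=1)
    return torch.cdist(fp, fg).argsort(dim=1)      # ascending distance = ranking list

# White-box: the adversarial gallery is crafted on the very network it is evaluated with.
# Black-box: it is crafted on a source network and evaluated on a different target network.
# adv_gallery = ifgsm_metric_attack(source_model, probes, gallery)   # Sec. 3.3 sketch
# ranking     = rank_gallery(target_model, probes, adv_gallery)
```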

Model Attack B1 B2 B3 B4 HACNN [28] Mancs [41]
B1 FGSM 7.054 33.73 36.43 30.53 44.00 43.49
I-FGSM 0.367 25.12 29.42 24.11 43.51 34.68
MI-FGSM 0.757 22.18 25.53 21.43 37.98 30.90
B2 FGSM 35.83 10.47 42.71 37.24 48.45 51.23
I-FGSM 26.87 0.458 35.96 32.91 48.06 45.69
MI-FGSM 23.01 0.960 30.77 28.47 41.84 39.80
B3 FGSM 32.84 36.89 9.178 33.91 44.40 45.95
I-FGSM 24.72 28.89 0.519 29.26 43.99 39.49
MI-FGSM 22.29 26.13 1.022 26.26 38.92 35.31
B4 FGSM 41.17 47.13 48.23 4.320 51.10 51.87
I-FGSM 38.68 47.08 48.89 0.211 54.72 50.89
MI-FGSM 32.31 39.58 41.47 0.430 47.57 43.16
Table 2: The mAP comparison of white-box attack (shaded) and black-box attack (others) on the Market-1501 dataset. For each combination of settings, the worst performance is marked in bold.
Model Attack B1 B2 B3 B4 HACNN [28] Mancs [41]
B1 FGSM 4.469 27.15 31.05 21.27 39.50 36.63
I-FGSM 0.178 22.65 28.37 17.96 41.71 32.12
MI-FGSM 0.315 17.16 22.01 14.08 33.96 24.74
B2 FGSM 27.87 5.775 34.28 27.42 41.42 42.87
I-FGSM 24.58 0.159 32.44 26.50 43.59 42.39
MI-FGSM 18.47 0.342 25.39 20.58 36.09 33.37
B3 FGSM 26.54 28.36 5.223 25.15 38.32 39.30
I-FGSM 22.43 23.96 0.191 23.30 39.98 37.11
MI-FGSM 18.12 19.28 0.387 18.93 33.29 29.85
B4 FGSM 32.02 38.37 40.77 2.071 45.52 43.70
I-FGSM 33.93 42.06 44.93 0.074 50.61 47.34
MI-FGSM 24.99 32.34 35.32 0.137 42.71 36.57
Table 3: The mAP comparison of white-box attack (shaded) and black-box attack (others) on the DukeMTMC-reID dataset. For each combination of settings, the worst performance is marked in bold.

At first glance, one can clearly observe the adversarial effect. For instance, the mAP of B1 decreases sharply from 77.52% to as low as 0.367% under white-box attack on the Market-1501 dataset, and it also drops substantially under black-box attack. On the DukeMTMC-reID dataset, its mAP falls from 67.72% to as low as 0.178% under white-box attack, with similarly severe degradation under black-box attack. The state-of-the-art methods HACNN [28] and Mancs [41] are also subject to a dramatic performance decrease from 75.28% and 82.50% mAP, respectively, on the Market-1501 dataset.

Second, the performance under white-box attack is much lower than that under black-box attack. This is easy to understand, as the attack methods can generate adversarial examples that overfit the attacked model. Among the three attack methods, I-FGSM [24] delivers the strongest white-box attacks. Comparatively, MI-FGSM [6] is the most capable of learning adversarial examples for black-box attack. This observation is consistent across different base models, different state-of-the-art methods, different magnitudes of adversarial perturbation (results for other magnitudes are given in the supplementary material due to space limitations) and different datasets. This conclusion is somewhat contrary to that drawn for classification attack [25], where non-iterative algorithms such as FGSM [14] generally transfer better. In summary, we suggest adopting iteration-based attack methods for adversarial metric attack as they achieve a higher attack rate.

Moreover, HACNN [28] and Mancs [41] are more robust to adversarial examples than the four base models. When attacked by the same set of adversarial examples, they outperform the baselines by a large margin, although Table 1 shows that they achieve only comparable or even worse performance on clean images. For instance, in Table 2, when attacking B1 using MI-FGSM in the black-box setting, the best mAP achieved by the baselines is 25.53% on the Market-1501 dataset, while HACNN reports an mAP of 37.98% and Mancs an mAP of 30.90%. A possible reason is that both contain more sophisticated modules and computational mechanisms, e.g., attention selection. However, it remains unclear which kinds of modules are robust and why they manifest robustness to the adversary; this needs to be investigated in the future.

Finally, the robustness of HACNN [28] and Mancs [41] to the adversary also differs considerably. In most adversarial settings, HACNN outperforms Mancs remarkably, revealing that it is less vulnerable to the adversary. Only when attacking B2 or B3 using FGSM on the DukeMTMC-reID dataset does Mancs appear better than HACNN (e.g., 42.87% vs. 41.42% mAP when attacking B2). However, it should be emphasized that the baseline performance of HACNN on clean images is much worse than that of Mancs, as presented in Table 1 (75.28% vs. 82.50% mAP on the Market-1501 dataset and 64.44% vs. 72.89% mAP on the DukeMTMC-reID dataset). To eliminate the influence of these differences in baseline performance, we adopt a relative measure of accuracy, the mAP ratio, i.e., the ratio of the mAP on adversarial examples to that on clean images. A larger mAP ratio indicates a smaller performance decrease and thus a model that is more robust to the adversary. We compare the mAP ratios of HACNN and Mancs in Fig. 3. As shown, HACNN consistently achieves a higher mAP ratio than Mancs across the adversarial settings.

From another point of view, achieving better performances on benchmark datasets does not necessarily mean that the algorithm has better generalization capacity. Therefore, it would be helpful to evaluate re-ID algorithms under the same adversarial settings to justify the potential of deploying them in real environments.

Figure 3: The ratio of mAP on adversarial examples to that on clean images on the DukeMTMC-reID dataset.

5.2 Single-model and Multi-model Attack

As shown in Sec. 5.1, black-box attacks yield much higher mAP than white-box attacks, which means that the generated adversarial examples do not transfer well to other models for testing. Attacking multiple models simultaneously can be helpful to improve the transferability.

Model Attack Market-1501 DukeMTMC-reID
Ensemb. Hold-out HACNN [28] Mancs [41] Ensemb. Hold-out HACNN [28] Mancs [41]
-B1 FGSM 20.61 26.52 40.68 39.37 13.65 20.52 35.11 32.26
I-FGSM 3.839 14.55 35.62 26.35 2.058 12.72 32.11 23.48
MI-FGSM 5.900 14.94 33.15 25.88 3.213 11.78 28.23 20.81
-B2 FGSM 19.76 29.12 39.43 36.89 13.32 22.83 34.33 30.21
I-FGSM 3.801 17.03 34.40 22.87 2.019 14.11 31.31 20.04
MI-FGSM 5.840 17.48 32.21 23.04 3.125 13.18 27.60 18.18
-B3 FGSM 20.64 32.62 40.62 38.64 13.99 26.88 35.34 31.26
I-FGSM 3.839 21.20 35.58 24.75 2.089 19.46 32.80 21.42
MI-FGSM 5.905 20.73 32.89 24.44 3.265 17.44 28.45 18.86
-B4 FGSM 21.37 26.07 38.15 36.61 13.96 17.72 32.37 29.28
I-FGSM 4.521 16.47 32.64 22.27 2.483 11.65 28.99 18.52
MI-FGSM 6.847 16.66 30.45 22.75 3.693 10.93 25.46 17.14
Table 4: The mAP comparison of multi-model attack (white-box results shaded). The symbol “-” indicates the hold-out base model. For each combination of settings, the worst performance is marked in bold.

To achieve this, we perform adversarial metric attack on an ensemble of three out of the four base models. Then, the evaluation is done on the ensembled network and the hold-out network. Note that in this case, attacks on the “ensembled network” correspond to white-box attacks as the base models in the ensemble have been seen by the attacker during adversarial metric attack. In contrast, attacks on the “hold-out network” correspond to black-box attacks as this network is not used to generate adversarial examples.

We list the performance of multi-model attacks in Table 4. As clearly indicated, the identification rate under black-box attack continues to degrade. For example, Table 2 shows that the worst mAP of B1 is 22.29% when attacking the single model B3 via MI-FGSM on the Market-1501 dataset. Under the same adversarial setting, the mAP of B1 drops to 14.94% when attacking an ensemble of B2, B3 and B4. When attacking multiple models, the lowest mAP of HACNN [28] is merely 30.45% on the Market-1501 dataset, a sharp decrease from the 37.98% reported in Table 2 under single-model attack.

5.3 Targeted and Non-targeted Attack

From Fig. 4, one can clearly observe the different effects of non-targeted and targeted attacks.

The goal of a non-targeted metric attack is to maximize the distances (minimize the similarities) between a given probe and the adversarial gallery images. Consequently, true positives are pushed down in the ranking list, as shown in the first two rows of Fig. 4. However, it cannot be determined beforehand which images will be ranked on top or to which probe the adversarial images will be similar, as shown in the third row. In comparison, a targeted metric attack tries to minimize the distances between the given probe and the adversarial gallery images. Therefore, we find a large portion of adversarial images among the top-ranked candidates in the third row of Fig. 4. It is surprising to see how easily the metric is fooled, incorrectly retrieving male person images when a female person image serves as the probe.

For real applications in video surveillance, the non-targeted metric attack prevents the system from correctly retrieving desired results, while the targeted metric attack deliberately tricks the system into retrieving person images of a wrong identity.

Figure 4: Two representative ranking lists of two probe images under non-targeted attack (a) and targeted attack (b). We mark the ranking position of each gallery image above it and do not deliberately exclude the distractor images or those captured by the same camera as the probe. Gallery images with proper ranking positions (i.e., true positives and false negatives) are marked in blue, and the others in red.
Figure 5: The mAP comparison of FGSM (a) and I-FGSM (b) when varying the maximum magnitude of adversarial perturbation and the choice of distance metric. In the legend, the part before the symbol “/” denotes the metric loss used for the metric attack and the part after “/” denotes the metric used to evaluate performance.
Gallery B1 B2 B3 B4
#N #M #I #N #M #I #N #M #I #N #M #I
Original 77.52 76.69 -1.07% 78.21 76.74 -1.87% 79.08 77.25 -2.31% 67.86 59.87 -11.7%
Adv. (B1) 0.367 74.14 +2.0e4% 25.12 70.54 +180% 29.42 72.45 +146% 24.11 53.58 +122%
Adv. (B2) 26.87 72.64 +170% 0.458 76.23 +1.6e4% 35.96 72.82 +102% 32.91 52.81 +60.4%
Adv. (B3) 24.72 70.46 +185% 28.89 68.67 +137% 0.519 76.93 +1.4e4% 29.26 51.03 +74.4%
Adv. (B4) 38.68 72.65 +87.8% 47.08 72.14 +53.2% 48.89 73.41 +50.1% 0.211 57.45 +2.7e4%
Table 5: The mAP comparison between normally trained models (denoted by #N) and metric-preserving models (denoted by #M) on the Market-1501 dataset. #I means the relative improvement.

5.4 Euclidean and Mahalanobis Metric

Fig. 5 plots the mAP comparison of FGSM and I-FGSM when varying the maximum magnitude of adversarial perturbation and the choice of distance metric on the Market-1501 dataset.

Within our framework, distance metrics are used in two phases: one to perform the adversarial metric attack and one to evaluate the performance. For the Mahalanobis distance, we use a representative method called Cross-view Quadratic Discriminant Analysis (XQDA) [29]. Unfortunately, when integrating metric learning with deep features, we do not observe an improvement over the baseline performance, despite the fact that metric learning has been extensively proven to be compatible with non-deep features (e.g., LOMO [29], GOG [34]). With XQDA, we obtain a rank-1 accuracy and an mAP lower than those achieved by the Euclidean distance reported in Table 1 (91.30% rank-1 and 77.52% mAP for B1).

From Fig. 5, it is unsurprising to observe that the performance of different metric combinations decreases quickly as the maximum magnitude of adversarial perturbation increases. We also note that iteration-based attack methods such as I-FGSM and MI-FGSM can severely mislead the distance metric even with very small perturbation magnitudes.

Second, we observe an interesting phenomenon that is consistent across different attack methods. When attacking the Euclidean distance and testing with XQDA, the performance is better than when attacking and testing are both carried out with the Euclidean distance. The same holds when we attack XQDA and test with the Euclidean distance. In other words, using different metrics for the metric attack and for performance evaluation is beneficial to adversarial metric defense. From another perspective, this can be interpreted with the conclusion drawn in Sec. 5.1, i.e., we can regard the change of metrics as a kind of black-box attack: adversarial examples generated with a model using a certain distance metric are used to test another model that differs from the original one in its choice of distance metric.

5.5 Evaluating Adversarial Metric Defense

In Table 5, we evaluate metric defense by comparing the performance of normally trained models with that of metric-preserving models on the Market-1501 dataset. When testing on the original clean gallery set, a slight performance decrease, generally within a few percent, is observed after using metric-preserving models. However, when testing purely on the adversarial version of the gallery images, the performance is significantly improved. For instance, when attacking B3 and testing on B1, the mAP is originally 24.72%, and it improves to 70.46% with the metric-preserving model, a relative improvement of 185%. In real video surveillance, deploying metric-preserving models can therefore improve the robustness of re-ID systems.

6 Conclusion

In this work, we have studied the adversarial effects in person re-identification (re-ID). Observing that most existing works on adversarial examples only perform classification attacks, we propose adversarial metric attack as a parallel methodology for metric analysis.

By performing metric attack, adversarial examples can be easily generated for person re-identification. The latest state-of-the-art re-ID algorithms suffer a dramatic performance drop when attacked by the adversarial examples generated in this work, exposing the potential security issue of deploying re-ID algorithms in real video surveillance systems. To facilitate the development of metric attack in person re-identification, we have benchmarked and analyzed various adversarial settings, including white-box and black-box attack, targeted and non-targeted attack, and single-model and multi-model attack. Extensive experiments on two large-scale re-ID datasets have led to several useful conclusions, which can serve as a helpful reference for future work. Moreover, benefiting from adversarial metric attack, we present an early attempt at training metric-preserving networks, which significantly improves the robustness of re-ID models to the adversary.

References

  • [1] A. Arnab, O. Miksik, and P. H. Torr. On the robustness of semantic segmentation models to adversarial attacks. In CVPR, 2018.
  • [2] D. Chen, Z. Yuan, B. Chen, and N. Zheng. Similarity learning with spatial constraints for person re-identification. In CVPR, pages 1268–1277, 2016.
  • [3] D. Chen, Z. Yuan, G. Hua, N. Zheng, and J. Wang. Similarity learning on an explicit polynomial kernel feature map for person re-identification. In CVPR, pages 1565–1573, 2015.
  • [4] W. Chen, X. Chen, J. Zhang, and K. Huang. Beyond triplet loss: a deep quadruplet network for person re-identification. In CVPR, number 8, 2017.
  • [5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009.
  • [6] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li. Boosting adversarial attacks with momentum. In CVPR, 2018.
  • [7] Y. Duan, W. Zheng, X. Lin, J. Lu, and J. Zhou. Deep adversarial metric learning. In CVPR, 2018.
  • [8] M. Farenzena, L. Bazzani, A. Perina, V. Murino, and M. Cristani. Person re-identification by symmetry-driven accumulation of local features. In CVPR, pages 2360–2367, 2010.
  • [9] Y. Fu, Y. Wei, Y. Zhou, H. Shi, G. Huang, X. Wang, Z. Yao, and T. Huang. Horizontal pyramid matching for person re-identification. arXiv preprint arXiv:1804.05275, 2018.
  • [10] J. Garcia, N. Martinel, C. Micheloni, and A. Gardel. Person re-identification ranking optimisation by discriminant context information analysis. In ICCV, pages 1305–1313, 2015.
  • [11] Y. Ge, Z. Li, H. Zhao, G. Yin, S. Yi, X. Wang, et al. Fd-gan: Pose-guided feature distilling gan for robust person re-identification. In NeurIPS, pages 1230–1241, 2018.
  • [12] S. Gong, M. Cristani, S. Yan, and C. C. Loy. Person re-identification. Springer, 2014.
  • [13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
  • [14] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572, 2014.
  • [15] D. Gray and H. Tao. Viewpoint invariant pedestrian recognition with an ensemble of localized features. In ECCV, pages 262–275, 2008.
  • [16] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.
  • [17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [18] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista. High-speed tracking with kernelized correlation filters. TPAMI, 37(3):583–596, 2015.
  • [19] A. Hermans, L. Beyer, and B. Leibe. In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737, 2017.
  • [20] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, 2017.
  • [21] H. Huang, D. Li, Z. Zhang, X. Chen, and K. Huang. Adversarially occluded samples for person re-identification. In CVPR, pages 5098–5107, 2018.
  • [22] S. Karanam, M. Gou, Z. Wu, A. Rates-Borras, O. Camps, and R. J. Radke. A systematic evaluation and benchmark for person re-identification: Features, metrics, and datasets. arXiv preprint arXiv:1605.09653, 2016.
  • [23] M. Köstinger, M. Hirzer, P. Wohlhart, P. M. Roth, and H. Bischof. Large scale metric learning from equivalence constraints. In CVPR, pages 2288–2295, 2012.
  • [24] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. In arXiv preprint arXiv:1607.02533, 2016.
  • [25] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
  • [26] M. Li, X. Zhu, and S. Gong. Unsupervised tracklet person re-identification. TPAMI, 2019.
  • [27] W. Li, R. Zhao, T. Xiao, and X. Wang. Deepreid: Deep filter pairing neural network for person re-identification. In CVPR, pages 152–159, 2014.
  • [28] W. Li, X. Zhu, and S. Gong. Harmonious attention network for person re-identification. In CVPR, 2018.
  • [29] S. Liao, Y. Hu, X. Zhu, and S. Z. Li. Person re-identification by local maximal occurrence representation and metric learning. In CVPR, pages 2197–2206, 2015.
  • [30] C. Liu, C. Change Loy, S. Gong, and G. Wang. Pop: Person re-identification post-rank optimisation. In ICCV, pages 441–448, 2013.
  • [31] J. Liu, B. Ni, Y. Yan, P. Zhou, S. Cheng, and J. Hu. Pose transferrable person re-identification. In CVPR, 2018.
  • [32] Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.
  • [33] Z. Liu, J. Qin, A. Li, Y. Wang, and L. Van Gool. Adversarial binary coding for efficient person re-identification. arXiv preprint arXiv:1803.10914, 2018.
  • [34] T. Matsukawa, T. Okabe, E. Suzuki, and Y. Sato. Hierarchical gaussian descriptor for person re-identification. In CVPR, pages 1363–1372, 2016.
  • [35] B. Prosser, W.-S. Zheng, S. Gong, T. Xiang, and Q. Mary. Person re-identification by support vector ranking. In BMVC, page 6, 2010.
  • [36] X. Qian, Y. Fu, T. Xiang, W. Wang, J. Qiu, Y. Wu, Y.-G. Jiang, and X. Xue. Pose-normalized image generation for person re-identification. In ECCV, pages 650–667, 2018.
  • [37] C. Song, Y. Huang, W. Ouyang, and L. Wang. Mask-guided contrastive attention model for person re-identification. In CVPR, 2018.
  • [38] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang. Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline). In ECCV, pages 480–496, 2018.
  • [39] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • [40] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In CVPR, pages 1701–1708, 2014.
  • [41] C. Wang, Q. Zhang, C. Huang, W. Liu, and X. Wang. Mancs: A multi-task attentional network with curriculum sampling for person re-identification. In ECCV, pages 365–381, 2018.
  • [42] Z. Wang, M. Ye, F. Yang, X. Bai, and S. Satoh. Cascaded sr-gan for scale-adaptive low resolution person re-identification. In IJCAI, pages 3891–3897, 2018.
  • [43] L. Wei, S. Zhang, W. Gao, and Q. Tian. Person transfer gan to bridge domain gap for person re-identification. In CVPR, 2018.
  • [44] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille. Adversarial examples for semantic segmentation and object detection. In ICCV, pages 1378–1387, 2017.
  • [45] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In CVPR, pages 5987–5995, 2017.
  • [46] M. Ye, X. Lan, and P. C. Yuen. Robust anchor embedding for unsupervised video person re-identification in the wild. In ECCV, 2018.
  • [47] M. Ye, A. J. Ma, L. Zheng, J. Li, and P. C. Yuen. Dynamic label graph matching for unsupervised video re-identification. In ICCV, 2017.
  • [48] Z. Yin, W. Zheng, A. Wu, H. Yu, H. Wan, X. Guo, F. Huang, and J. Lai. Adversarial attribute-image person re-identification. In IJCAI, pages 1100–1106, 2018.
  • [49] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In ICCV, pages 1116–1124, 2015.
  • [50] L. Zheng, Y. Yang, and A. G. Hauptmann. Person re-identification: Past, present and future. arXiv preprint arXiv:1610.02984, 2016.
  • [51] W.-S. Zheng, S. Gong, and T. Xiang. Reidentification by relative distance comparison. TPAMI, 35(3):653–668, 2013.
  • [52] Z. Zheng, L. Zheng, and Y. Yang. Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In ICCV, pages 3754–3762, 2017.