Label Universal Targeted Attack

05/27/2019 ∙ by Naveed Akhtar, et al.

We introduce Label Universal Targeted Attack (LUTA), which makes a deep model predict a label of the attacker's choice for `any' sample of a given source class with high probability. Our attack stochastically maximizes the log-probability of the target label for the source class with first order gradient optimization, while accounting for the gradient moments. It also suppresses the leakage of attack information to the non-source classes to avoid raising suspicion about the attack. The perturbations resulting from our attack achieve high fooling ratios on the large-scale ImageNet and VGGFace models, and transfer well to the Physical World. Given full control over the perturbation scope in LUTA, we also demonstrate it as a tool for deep model autopsy. The proposed attack reveals interesting perturbation patterns and observations regarding the deep models.


1 Introduction

Adversarial examples [1] are carefully manipulated inputs that appear natural to humans but cause deep models to misbehave. Recent years have seen multiple methods to generate manipulative signals (i.e. perturbations) for fooling deep models on individual input samples [1], [2], [3] or a large number of samples with high probability [4], [5] - termed ‘universal’ perturbations. The former sometimes also launch ‘targeted’ attacks, where the model ends up predicting a desired target label for the input adversarial example. The existence of adversarial examples is being widely perceived as a threat to deep learning [6]. Nevertheless, given appropriate control over the underlying manipulative signal, adversarial examples may also serve as empirical tools for analyzing deep models.

This work introduces a technique to generate manipulative signals that can essentially fool a deep model into confusing ‘an entire class label’ with another label of choice. The resulting Label Universal Targeted Attack (LUTA) is of high relevance in practical settings. (The source code is provided here; LUTA is intended to be eventually incorporated in public attack libraries, e.g. foolbox [7].) It allows pre-computed perturbations that can change an object’s category or a person’s identity for a deployed model on-the-fly, where the attacker also has the freedom to choose the target label, and there is no particular constraint over the input. Moreover, the convenient control over the manipulative signal in LUTA encourages the fresh perspective of seeing adversarial examples as model analysis tools. Controlling the perturbation scope to individual classes reveals insightful patterns and meaningful information about the classification regions learned by the deep models.

The proposed LUTA is an iterative algorithm that performs a stochastic gradient-based optimization to maximize the log-probability of the target class prediction for the perturbed source class. It also inhibits fooling of the model on non-source classes to mitigate suspicions about the attack. The algorithm performs careful adaptive learning of the perturbation parameters based on their first and second moments. This paper explores three major variants of LUTA. The first two bound the perturbations in ℓ∞ and ℓ2 norms, whereas the third allows unbounded perturbations to freely explore the classification regions of the target model. Extensive experiments for fooling VGG-16 [8], ResNet-50 [9], Inception-V3 [10] and MobileNet-V2 [11] on the ImageNet dataset [12], and ResNet-50 on the large-scale VGG-Face2 dataset [13], ascertain the effectiveness of our attack. The attack is also demonstrated in the Physical World. The unbounded LUTA variant is shown to reveal interesting perturbation patterns and insightful observations regarding deep model classification regions.

2 Prior art

Adversarial attacks are currently a highly active research direction. For a comprehensive review, we refer to [6]. Here, we discuss the key contributions that relate to this work more closely.

Szegedy et al. [1] were the first to report the vulnerability of modern deep learning to adversarial attacks. They showed the possibility of altering images with imperceptible additive perturbations to fool deep models. Goodfellow et al. [2] later proposed the Fast Gradient Sign Method (FGSM) to efficiently estimate such perturbations. FGSM computes the desired signal using the sign of the network’s cost function gradient w.r.t. the input image. The resulting perturbation performs a one-step gradient ascent over the network loss for the input. Instead of a single step, Kurakin et al. [14] took multiple small steps for more effective perturbations. They additionally proposed to take steps in the direction that maximizes the prediction probability of the least-likely class for the image. Madry et al. [15] noted that the ‘projected gradient descent on the negative loss function’ strategy adopted by Kurakin et al. results in highly effective attacks. DeepFool [3] is another popular attack that computes perturbations iteratively by linearizing the model’s decision boundaries near the input images.
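As an aside, the one-step FGSM update is easy to sketch. The toy linear-softmax model, names, and values below are our own illustration, not code from [2]:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad_wrt_input(W, x, label):
    """Gradient of the cross-entropy loss w.r.t. the input x for a linear
    softmax model: J = -log p(label | x)  =>  dJ/dx = W^T (p - one_hot)."""
    p = softmax(W @ x)
    one_hot = np.zeros(W.shape[0])
    one_hot[label] = 1.0
    return W.T @ (p - one_hot)

def fgsm(W, x, label, eps):
    """One-step FGSM: ascend the loss via the sign of its input gradient."""
    return x + eps * np.sign(loss_grad_wrt_input(W, x, label))

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # 3 classes, 4 input features
x = rng.normal(size=4)
x_adv = fgsm(W, x, label=0, eps=0.1)
# Every perturbation coefficient has magnitude eps (the l_inf budget).
assert np.allclose(np.abs(x_adv - x), 0.1)
```

The sign operation is what makes the perturbation saturate the ℓ∞ budget in a single step, which is exactly the property the iterative methods above refine.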

For the above, the domain of the computed perturbation is restricted to a single image. Moosavi-Dezfooli et al. [4] introduced an image-agnostic perturbation to fool a model into misclassifying ‘any’ image. Similar ‘universal’ adversarial perturbations are also computed in [16], [17]. These attacks are non-targeted, i.e. the adversarial input is allowed to be misclassified into any class. Due to their broader domain, universal perturbations are able to reveal interesting geometric correlations among the decision boundaries of deep models [4], [18]. However, both the perturbation domain and the model prediction remain unconstrained for the universal perturbations. Manipulative signals are expected to be more revealing with appropriate scoping at those ends. This motivates the need for our label-universal attack that provides control over the source and target labels for model fooling. Such an attack also has high practical relevance, because it enables the attacker to conveniently manipulate the semantics learned by a deep model in an unrestricted manner.

3 Problem formulation

In line with the mainstream of research in adversarial attacks, this work considers natural images as the data and model domain. However, the proposed attack is generic under white-box settings.

Let 𝒟 denote the distribution of natural images, and ℓ_s be the label of its random sample I ∼ 𝒟. Let 𝒞(.) be the classifier that maps I ↦ ℓ_s with high probability. We restrict the classifier to be a deep neural network with cross-entropy loss. To fool 𝒞, we seek a perturbation ρ that satisfies the following constraint:

P_{I∼𝒟}( 𝒞(I ⊖ ρ) = ℓ_t | label(I) = ℓ_s ) ≥ ζ   s.t.   ‖ρ‖_p ≤ η,     (1)

where ℓ_t is the target label we want the model to predict for the perturbed source class samples with probability ζ or higher, and η controls the ℓp-norm of the perturbation ρ, which is denoted by ‖ρ‖_p. In the above constraint, the same perturbation ρ must fool the classifier on all samples of the source class (labelled ℓ_s) with probability ζ. At the same time, ℓ_t can be any label that is known to 𝒞. This formulation inspires the name Label Universal Targeted Attack.

Allowing ℓ_t to be a random label while ignoring the label of the input generalizes Eq. (1) to the universal perturbation constraint [4]. On the other end, restricting the input domain to a single image results in an image-specific targeted attack. In that case, the notion of probability can be ignored. In the spectrum of adversarial attacks forming special cases of Eq. (1), other intermediate choices may include expanding the input domain to a few classes, or using multiple target labels for fooling. Whereas these alternatives are not our focus, our algorithm is readily extendable to these cases.

4 Computing the perturbation

We compute the perturbations for Label Universal Targeted Attack (LUTA) as shown in Algorithm 1. The abstract concept of the algorithm is intuitive. For a given source class, we compute the desired perturbation by taking small steps over the model’s cost surface in the directions that increase the log-probability of the target label for the source class. The directions are computed stochastically, and the steps are only taken in the trusted regions that are governed by the first and (raw) second moment estimates of the directions. While computing a direction, we ensure that it also suppresses the prediction of non-source classes as the target class. To bound the perturbation norm, we keep projecting the accumulated signal onto the ℓp-ball of the desired norm at each iteration. The text below sequentially explains each step of the algorithm in detail. Henceforth, we alternatively refer to the proposed algorithm as LUTA.

0:  Classifier 𝒞, source class samples 𝒮_s, non-source class samples 𝒮_¬s, target label ℓ_t, perturbation norm bound η, mini-batch size b, fooling ratio ζ.
0:  Targeted label universal perturbation ρ.
1:  Initialize ρ, ν and ω to zero vectors in ℝ^m, and t = 0. Set β1 = 0.9 and β2 = 0.999.
2:  while fooling ratio < ζ do
3:      X_s ← b/2 random samples of 𝒮_s, X_¬s ← b/2 random samples of 𝒮_¬s        ▷ get random samples from the source and other classes
4:      ∀ x ∈ X_s ∪ X_¬s : x ← Clip(x ⊖ ρ)        ▷ perturb and clip samples with the current estimate
5:      t ← t + 1        ▷ increment
6:      a ← E_{x∈X_s}[‖∇_x J(x, ℓ_t)‖2] / E_{x∈X_¬s}[‖∇_x J(x, ℓ_x)‖2]        ▷ compute scaling factor for gradient normalization (ℓ_x is the correct label of x)
7:      δ_t ← E_{x∈X_s}[∇_x J(x, ℓ_t)] + a E_{x∈X_¬s}[∇_x J(x, ℓ_x)]        ▷ compute Expected gradient
8:      ν ← β1 ν + (1 − β1) δ_t        ▷ first moment estimate
9:      ω ← β2 ω + (1 − β2) δ_t ⊙ δ_t        ▷ raw second moment estimate
10:     ρ_t ← (√(1 − β2^t) / (1 − β1^t)) diag(√ω)⁻¹ ν        ▷ bias corrected moment ratio
11:     ρ ← ρ + ρ_t / ‖ρ_t‖∞        ▷ update perturbation
12:     ρ ← Ψ_η(ρ)        ▷ project on ℓp ball
13:  end while
14:  return ρ
Algorithm 1 Label Universal Targeted Attack

Due to its white-box nature, LUTA expects the target classifier 𝒞 as one of its inputs. It also requires a set 𝒮_s of the source class samples, and a set 𝒮_¬s that contains samples of the non-source classes. Other input parameters include the desired ℓp-norm bound η of the perturbation, the target label ℓ_t, the mini-batch size b for the underlying stochastic optimization, and the desired fooling ratio ζ - defined as the percentage of the source class samples predicted as the target class instances.
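The fooling ratio used as the stopping criterion amounts to a simple percentage; a trivial sketch (the function name is ours):

```python
import numpy as np

def fooling_ratio(predictions, target_label):
    """Percentage of source class samples predicted as the target class."""
    predictions = np.asarray(predictions)
    return 100.0 * np.mean(predictions == target_label)

# Three of four source samples predicted as the target label -> 75%.
assert fooling_ratio([3, 3, 7, 3], target_label=3) == 75.0
```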

We momentarily defer the discussion on the hyper-parameters β1 and β2 on line 1 of the algorithm. In a given iteration, LUTA first constructs the sets X_s and X_¬s by randomly sampling the source and non-source classes, respectively. The cardinality of these sets is fixed to b/2 to keep the mini-batch size at b (line 3). Each element of both sets is then perturbed with the current estimate of the perturbation ρ - an operation denoted by the symbol ⊖ on line 4. The chosen symbol emphasizes that ρ is subtracted in our algorithm from all the samples to perturb them. The ‘Clip(.)’ function clips the perturbed samples to the valid range, [0, 255] in our case of 8-bit image representation.
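Line 4's perturb-and-clip step, i.e. the ⊖ operation followed by Clip(.), can be sketched as follows (a minimal numpy illustration; the names are ours):

```python
import numpy as np

def perturb_and_clip(batch, rho):
    """Apply the current perturbation estimate by subtraction (the '⊖'
    operation) and clip to the valid 8-bit image range [0, 255]."""
    return np.clip(batch.astype(np.float64) - rho, 0.0, 255.0)

batch = np.array([[10.0, 250.0, 128.0]])   # one 3-pixel 'image'
rho = np.array([20.0, -20.0, 0.0])
out = perturb_and_clip(batch, rho)
# 10-20 clips to 0; 250+20 clips to 255; 128 is unchanged.
assert out.tolist() == [[0.0, 255.0, 128.0]]
```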

Lemma 4.1: For 𝒞 with cross-entropy cost J(θ, x, ℓ), the log-probability of x ⊖ δ being classified as ℓ increases along δ = ∇_x J(θ, x, ℓ)/‖∇_x J(θ, x, ℓ)‖2, where θ denotes the model parameters (the model parameters remain fixed throughout, hence we ignore θ in Algorithm 1 and its description).
Proof: We can write J(θ, x, ℓ) = −log P(ℓ|x) for the softmax output of 𝒞. Linearizing the cost, J(θ, x ⊖ δ, ℓ) ≈ J(θ, x, ℓ) − δᵀ∇_x J(θ, x, ℓ). Inverting the sign, the log-probability maximizes along δ ∝ ∇_x J(θ, x, ℓ). With ‖δ‖2 = 1, the ℓ2-normalization only re-scales ∇_x J(θ, x, ℓ) in the same direction of increasing log P(ℓ|x ⊖ δ).
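The lemma is easy to check numerically: subtracting a small multiple of the ℓ2-normalized loss gradient from the input (the ⊖ convention) increases the log-probability of the chosen label. The toy linear-softmax model below is our own construction:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def log_prob(W, x, label):
    """log P(label | x) for a linear softmax model."""
    return np.log(softmax(W @ x)[label])

def grad_loss_x(W, x, label):
    # Cross-entropy J = -log P(label | x)  =>  dJ/dx = W^T (p - one_hot)
    p = softmax(W @ x)
    y = np.zeros(W.shape[0])
    y[label] = 1.0
    return W.T @ (p - y)

rng = np.random.default_rng(1)
W = rng.normal(size=(5, 8))   # 5 classes, 8 input features
x = rng.normal(size=8)
g = grad_loss_x(W, x, label=2)
delta = g / np.linalg.norm(g)   # the unit-norm direction of the lemma
# Subtracting delta (x ⊖ delta) raises the log-probability of label 2.
assert log_prob(W, x - 1e-3 * delta, 2) > log_prob(W, x, 2)
```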

Under Lemma 4.1, LUTA strives to take steps along the cost function’s gradient w.r.t. an input x. Since the domain of ρ spans multiple samples in our case, we must take steps along the ‘Expected’ direction over those samples. However, it has to be ensured that the computed direction is not so generic that it also causes a log-probability rise for the irrelevant (i.e. non-source class) samples. From the practical viewpoint, perturbations causing samples of ‘any’ class to be misclassified into the target class are less interesting, and can easily raise suspicion. Moreover, they also compromise our control over the perturbation scope, which is not desired. To refrain from such general fooling directions, we nudge the computed direction so that it also inhibits the fooling of non-source class samples. Lines 6 and 7 of the algorithm implement these steps as follows.

On line 6, we estimate the ratio between the Expected ℓ2-norms of the source sample gradients and the non-source sample gradients. Notice that we compute the respective gradients using different prediction labels. In the light of Lemma 4.1, ∇_x J(x, ℓ_t) gives us the direction (ignoring the sign convention of ⊖) for fooling the model into predicting the label ℓ_t for x, where the sample is from the source class. On the other hand, ∇_x J(x, ℓ_x) provides the direction that improves the model confidence on the correct prediction ℓ_x of x, where the sample is from a non-source class. The diverse nature of the computed gradients can result in a significant difference between their norms. The scaling factor a on line 6 is computed to account for that difference in the subsequent steps. For the tᵗʰ iteration, we compute the Expected gradient δ_t of our mini-batch on line 7. At this point, it is worth noting that the effective mini-batch for the underlying stochastic optimization in LUTA comprises the b clipped samples in the set X_s ∪ X_¬s. The vector δ_t is computed as the weighted average of the Expected gradients of the source and non-source samples. Under the linearity of the Expectation operator and preservation of the vector direction with scaling, it is straightforward to see that δ_t encodes the Expected direction to achieve the targeted fooling of the source samples into the label ℓ_t, while inhibiting the fooling of non-source samples by increasing their prediction confidence for their correct classes.
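Lines 6 and 7 can be sketched as follows. The per-sample input gradients are assumed to be supplied elsewhere (e.g. by a deep learning framework), and the function name and sign convention reflect our own reading of the algorithm:

```python
import numpy as np

def expected_fooling_direction(grads_source, grads_nonsource):
    """grads_source: per-sample gradients dJ(x, target_label)/dx, source class.
    grads_nonsource: per-sample gradients dJ(x, correct_label)/dx, other classes.
    Returns the mini-batch direction delta_t of Algorithm 1."""
    # Scaling factor a (line 6): ratio of the Expected gradient norms.
    a = (np.mean([np.linalg.norm(g) for g in grads_source]) /
         np.mean([np.linalg.norm(g) for g in grads_nonsource]))
    # Weighted average of the two Expected gradients (line 7). Both terms
    # carry the same sign because the perturbation is *subtracted* from the
    # images (⊖): moving rho along a sample's loss gradient lowers that
    # sample's loss on its paired label to first order - toward the target
    # label for source samples, toward the correct label for the rest.
    return np.mean(grads_source, axis=0) + a * np.mean(grads_nonsource, axis=0)

rng = np.random.default_rng(0)
gs = list(rng.normal(size=(4, 6)))    # 4 source-sample gradients
gns = list(rng.normal(size=(4, 6)))   # 4 non-source-sample gradients
delta = expected_fooling_direction(gs, gns)
assert delta.shape == (6,)
```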

Owing to the diversity of the samples in its mini-batch, LUTA steps in the direction of the computed gradient cautiously. On lines 8 and 9, it respectively estimates the first and the raw second moment (i.e. un-centered variance) of the computed gradient using exponential moving averages. The hyper-parameters β1 and β2 decide the decay rates of these averages, whereas ⊙ denotes the Hadamard product. The use of moving averages as the moment estimates in LUTA is inspired by the Adam algorithm [19] that efficiently performs stochastic optimization. However, instead of using the moving averages of gradients to update the parameters (i.e. model weights) as in [19], we compute those for the Expected gradient and capitalize on the directions for perturbation estimation. Nevertheless, due to the similar physical significance of the hyper-parameters in LUTA and Adam, the performance of both algorithms largely remains insensitive to small changes in the values of these parameters. Following [19], we fix β1 = 0.9 and β2 = 0.999 (line 1). We refer to [19] for further details on the choice of these values for gradient based stochastic optimization.

The gradient moment estimates in LUTA are exploited in stepping along the cost surface. The effectiveness of the moments as stepping guides for stochastic optimization is already well-established [19], [20]. Briefly ignoring the expression for ρ_t on line 10 of the algorithm, we compute this guide as the ratio between the moment estimates ν/√ω, where the square-root accounts for ω representing the ‘second’ moment. Note that we slightly abuse the notation here, as both values are vectors. On line 10, we use the mathematically correct expression, where diag(.) converts a vector into a diagonal matrix, or a diagonal matrix into a vector, and the inverse is performed element-wise. Another improvement on line 10 is that we use the ‘bias-corrected’ ratio of the moment estimates instead. Moving averages are known to get heavily biased at early iterations. This becomes a concern when the algorithm can benefit from well-estimated initial points. In our experiments (§5), we also use LUTA in that manner. Hence, bias-correction is accounted for in our technique. We provide a detailed derivation to arrive at the expression on line 10 of Algorithm 1 in §A-1 of the supplementary material.

Let us compactly write ρ_t = ν̃/√ω̃, where the tilde indicates the bias-corrected vectors. It is easy to see that for a large second moment estimate ω̃, ρ_t shrinks. This is desirable because we eventually take a step along ρ_t, and a smaller step is preferable along the components that have larger variance. The perturbation update step on line 11 of the algorithm further restricts ρ_t to unit ℓ∞-norm. To an extent, this relates to computing the gradient’s sign in FGSM [2]. However, most coefficients of ρ_t get restricted to values smaller than ±1 in our case. As a side remark, we note that simply computing the sign of ρ_t for the perturbation update eventually nullifies the advantages of the second moment estimate due to the squared terms. The ℓ∞-normalization is able to preserve the required direction in our case, while taking full advantage of the second moment estimate.
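Lines 8-11 can be sketched as a single update step. This is an illustrative reimplementation with our own names, assuming the Adam-style bias correction discussed above; a small eps guards the division, which the listing does not show:

```python
import numpy as np

def luta_update(rho, nu, omega, delta, t, beta1=0.9, beta2=0.999, eps=1e-8):
    """One perturbation update using bias-corrected moment estimates."""
    nu = beta1 * nu + (1 - beta1) * delta            # first moment (line 8)
    omega = beta2 * omega + (1 - beta2) * delta**2   # raw second moment (line 9)
    # Bias-corrected ratio (line 10): (nu / (1 - b1^t)) / sqrt(omega / (1 - b2^t))
    ratio = (np.sqrt(1 - beta2**t) / (1 - beta1**t)) * nu / (np.sqrt(omega) + eps)
    rho = rho + ratio / np.max(np.abs(ratio))        # unit l_inf step (line 11)
    return rho, nu, omega

rho, nu, omega = np.zeros(4), np.zeros(4), np.zeros(4)
delta = np.array([0.5, -1.0, 0.25, 2.0])
rho, nu, omega = luta_update(rho, nu, omega, delta, t=1)
assert np.isclose(np.max(np.abs(rho)), 1.0)   # the step has unit l_inf norm
```

Note that at t = 1 the bias-corrected ratio is already close to sign(δ), which is why the correction matters when a well-estimated starting point is available.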

LUTA variants: As seen in Algorithm 1, LUTA accumulates the signals computed at each iteration. To restrict the norm of the accumulated perturbation, an ℓp-ball projection Ψ_η(.) is used on line 12. The use of different types of balls results in different variants of the algorithm. For the ℓ∞-ball projection, we implement Ψ_η(ρ) = sign(ρ) ⊙ min(|ρ|, η), where min(.) and |.| are applied element-wise. In the case of the ℓ2-ball projection, we use Ψ_η(ρ) = ρ min(1, η/‖ρ‖2). These projections respectively bound the ℓ∞ and ℓ2 norms of the perturbations. We bound these norms to reduce the perturbation’s perceptibility, which is in line with the existing literature. However, we also employ a variant in which Ψ_η(ρ) = ρ, i.e. Ψ_η is the identity mapping. We refer to this particular variant as LUTA-U, for the ‘Unbounded’ perturbation norm. In contrast to the typical use of perturbations in adversarial attacks, we employ LUTA-U perturbations to explore the classification regions of the target model without restricting their norm. Owing to the ‘label-universality’ of the perturbations, LUTA-U exploration promises to reveal interesting information regarding the classification regions of deep models.
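The two bounded projections are the standard ℓ∞ and ℓ2 ball projections; a minimal sketch (the names are ours):

```python
import numpy as np

def project_linf(rho, eta):
    """Project onto the l_inf ball of radius eta: clamp each coefficient."""
    return np.clip(rho, -eta, eta)

def project_l2(rho, eta):
    """Project onto the l2 ball of radius eta: rescale if the norm exceeds eta."""
    n = np.linalg.norm(rho)
    return rho if n <= eta else rho * (eta / n)

rho = np.array([3.0, -4.0])
assert project_linf(rho, 2.0).tolist() == [2.0, -2.0]
# ||(3, -4)||_2 = 5 > 2, so the vector is rescaled to norm 2.
assert np.isclose(np.linalg.norm(project_l2(rho, 2.0)), 2.0)
```

Both operations are cheap, which is why LUTA can afford to project at every iteration rather than only once at the end.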

5 Evaluation

We evaluate the proposed LUTA as an attack in § 5.1 and as an exploration tool in § 5.2. For the latter, the unbounded version (LUTA-U) is used.

5.1 LUTA as attack

Setup:

We first demonstrate the success of label-universal targeted fooling under LUTA by attacking VGG-16 [8], ResNet-50 [9], Inception-V3 [10] and MobileNet-V2 [11], trained on the ImageNet dataset [12]. We use the public models provided by Keras, where the selection of the networks is based on their established performance and diversity. We use the training set of ILSVRC2012 for perturbation estimation, whereas the validation set of this data (50 samples per class) is used as our test set. For the non-source classes, we only use the correctly classified samples during training, with a lower bound on the prediction confidence. This filtration is performed for computational purposes. It still ensures useful gradient directions with fewer non-source samples. We do not filter the source class data. We compute a perturbation using a two-step strategy. First, we alter Algorithm 1 to disregard the non-source class data. This is achieved by replacing the non-source class set with the source class set and using ℓ_t instead of the correct labels for the gradient computation. In the second step, we initialize LUTA with the perturbation computed in the first step. This procedure is also adopted for computational gain under better initialization. In the first step, we let the algorithm run for 100 iterations, whereas the second step runs until the desired fooling ratio ζ is reached. We ensure at least 100 additional iterations in the second step. We empirically set b to 64 for the first step and 128 for the second. In the text to follow, we discuss the setup details only when they differ from what is described here.

Besides fooling the ImageNet models, we also attack the VGGFace model [13] (ResNet-50 architecture) trained on the large-scale VGG-Face2 dataset [13]. In our experiments, Keras provided model weights are used that are converted from the original Caffe implementation. We use the training set of VGG-Face2 and crop the faces using the bounding box meta data. Fifty random images per identity are used as the test set, while the remaining images are used for perturbation estimation.

Fooling ImageNet models: We randomly choose ten source classes from ImageNet and make another random selection of ten target labels, resulting in ten label transforming (i.e. fooling) experiments for a single model. Both ℓ∞ and ℓ2-norm bounded perturbations are then considered, where the respective η values are chosen based on perturbation perceptibility.

We summarize the results in Table 1. Note that the reported fooling ratios are on ‘test’ data that is previously unseen by both the targeted model and our algorithm. Successful fooling of the large-scale models is apparent from the Table. The last column reports ‘Leakage’, which is defined as the average fooling ratio of the non-source classes into the target label. Hence, lower Leakage values are more desirable. It is worth mentioning that in a separate experiment, where we altered our algorithm so that it does not suppress fooling of the non-source classes, a significant rise in the Leakage was observed. We provide the results of that experiment in §A-2 of the supplementary material. The Table 1 caption provides the label information for the source → target transformations employing the commonly used nouns. We refer to §A-3 of the supplementary material for the exact labels and original WordNet IDs of the ImageNet dataset.

Bound     Model               T1  T2  T3  T4  T5  T6  T7  T8  T9  T10   Avg.       Leak.
ℓ∞-norm   VGG-16 [8]          92  76  80  74  82  78  82  80  74  88    80.6±5.8   29.9
          ResNet-50 [9]       92  78  80  72  76  84  78  76  82  78    79.6±5.4   31.1
          Inception-V3 [10]   84  60  70  60  68  90  68  62  72  76    71.0±9.9   24.1
          MobileNet-V2 [11]   92  94  88  78  88  86  74  86  84  94    86.4±6.5   37.1
ℓ2-norm   VGG-16 [8]          90  84  80  84  94  86  82  92  86  96    87.4±5.3   30.4
          ResNet-50 [9]       96  94  88  84  90  86  86  94  90  90    89.8±3.9   38.0
          Inception-V3 [10]   86  68  62  62  74  72  74  68  66  76    70.8±7.2   45.6
          MobileNet-V2 [11]   94  98  92  76  94  92  76  92  92  96    90.2±7.7   56.0
Table 1: Fooling ratios (%) for ℓ∞ and ℓ2-norm bounded label-universal perturbations for ImageNet models. The label transformations are as follows. T1: Airship → School Bus, T2: Ostrich → Zebra, T3: Lion → Orangutan, T4: Bustard → Camel, T5: Jelly Fish → Killer Whale, T6: Life Boat → White Shark, T7: Scoreboard → Freight Car, T8: Pickelhaube → Stupa, T9: Space Shuttle → Steam Locomotive, T10: Rapeseed → Butterfly. Leakage (last column) is the average fooling of non-source classes into the target label.
Figure 1: Representative perturbations and adversarial images for the ℓ∞-bounded case. Each row shows perturbations for the same source → target fooling for the mentioned networks. An adversarial example for a model is also shown for reference (on the left), reporting the model confidence on the target label. Following [1], the perturbations are visualized with 10x magnification, shifted by 128 and clamped to 0-255. Refer to §A-4 of the supplementary document for more examples.

In Fig. 1, we show perturbations for representative label foolings. The figure also presents a sample adversarial example for each network. In our experiments, it was frequently observed that the models show high confidence on the adversarial samples, as is also clear from the figure. We provide further images for both ℓ∞ and ℓ2-norm perturbations in §A-4 of the supplementary material. From the images, we can see that the perturbations are often not easy to perceive for the human visual system. It is emphasized that this perceptibility and the fooling ratios in Table 1 are based on the selected η values. Allowing larger η results in even higher fooling ratios at the cost of larger perceptibility.

              ℓ∞-norm bounded                |              ℓ2-norm bounded
F1  F2  F3  F4  F5   Avg.       Leak.        |  F1  F2  F3  F4  F5   Avg.       Leak.
88  76  74  86  84   81.6±6.2   1.9          |  76  80  78  76  84   78.8±3.3   1.8
Table 2: Switching face identities for the VGGFace model on the test set with LUTA (% fooling): The switched identities in the original dataset are, F1: n000234 → n008779, F2: n000282 → n006494, F3: n000314 → n007087, F4: n000558 → n001800, F5: n005814 → n006402. The ℓ∞ and ℓ2-norms of the perturbation are upper bounded to 15 and 4,500 respectively.

Fooling VGGFace model: We also test our algorithm for switching face identities in the large-scale VGGFace model [13]. Table 2 reports the results on five identity switches that are randomly chosen from the VGG-Face2 dataset. Considering the variety of expression, appearance, ambient conditions etc. for a given subject in VGG-Face2, the results in Table 2 imply that LUTA enables an attacker to change their identity on-the-fly with high probability, without worrying about the image capturing conditions. Moreover, leakage of the target label to the non-source classes also remains remarkably low. We conjecture that this happens because the target objects (i.e. faces) occupy major regions of the images in the dataset, which mitigates the influence of identity-irrelevant information in perturbation estimation, resulting in a more specific manipulation of the source to target conversion. Figure 2 illustrates representative adversarial examples resulting from LUTA for the face ID switches. Further images can also be found in §A-5 of the supplementary material. The results demonstrate successful identity switching on unseen images by LUTA.

Figure 2: Representative face ID switching examples for VGGFace model. Sample clean target ID image is provided for reference. Same setup as Table 2 is used. Perturbation visualization follows [1].

5.2 LUTA-U as network autopsy tool

Keeping aside the success of LUTA as an attack, it is intriguing to investigate the patterns that eventually change the semantics of a whole class for a network. For that, we let LUTA-U run until it achieves 100% fooling on the test set and observe the resulting perturbation patterns. We notice a repetition of the characteristic visual features of the target class in the perturbations thus created, see Fig. 3.

Figure 3: Pattern emergence with LUTA-U achieving 100% fooling on the test set for German Shepherd → Ostrich. The ℓ2-norms of the perturbations are given. Clean samples are shown for reference.

Another observation we make is that multiple runs of LUTA lead to different perturbations, nevertheless, those perturbations preserve the characteristic features of the target label. We refer to §A-6 of the supplementary material for the corroborating visualizations. Besides advancing the proposition that perturbations with broader input domain are able to exploit geometric correlations between the decision boundaries of the classifier [4], these observations also foretell (possibly) non-optimization based targeted fooling techniques in the future, where salient visual features of the target class may be cheaply embedded in the adversarial images.

Another interesting use of LUTA-U is in exploring the classification regions induced by the deep models. We employ MobileNet-V2 [11], and let LUTA-U achieve a 100% fooling rate on the training samples in each experiment. We choose five ImageNet classes from Table 1 and convert their labels into each other. We keep the number of training samples the same for each class, i.e. 965, as allowed by the dataset. In our experiment, the perturbation vector’s ℓ2-norm is used as the representative distance covered by the source class samples to cross over and stay in the target class region. Experiments are repeated three times and the mean distances are reported in Table 3. Interestingly, the differences between the distances for a transformation and its inverse (e.g. Airship → School Bus vs. School Bus → Airship) are significant. On the other hand, we can see particularly lower values for ‘Airship’ and larger values for ‘School Bus’ across all transformations. These observations are explainable under the hypothesis that, compared to the remaining classes, the classification region for ‘Airship’ is more like a blob in the high dimensional space that lets the majority of the samples in it move (due to the perturbation) more coherently towards other class regions. On the other end, ‘School Bus’ occupies a relatively flat but well-spread region that is farther from ‘Space Shuttle’ as compared to, e.g., ‘Life Boat’.

Source \ Target     Space Shuttle   Steam Locomotive   Airship        School Bus     Life Boat
Space Shuttle       -               4364.4±81.1        4118.3±74.5    4679.4±179.5   5039.1±230.7
Steam Locomotive    5406.8±57.7     -                  4954.7±56.5    5845.2±300.4   5680.2±40.0
Airship             3586.4±59.4     3992.7±291.1       -              3929.5±50.4    3937.8±33.7
School Bus          7448.8±200.9    6322.8±89.5        6586.8±165.1   -              5976.5±112.1
Life Boat           5290.4±43.1     5173.0±71.8        5121.5±154.1   5690.9±47.4    -
Table 3: Average ℓ2-norms of the perturbations to achieve 100% fooling on MobileNet-V2 [11].

LUTA makes the source class samples collectively move towards the target class region of a model with perturbations. Hence, LUTA-U iterations also provide a unique opportunity to examine this migration through the classification regions. For the Table 3 experiment, we monitor the top-1 predictions during the iterations and record the maximally predicted labels (excluding the source label) during training. In Fig. 4, we show this information as ‘max-label hopping’ for six representative transformations. The acute observer will notice that both Table 3 and Fig. 4 consider ‘transportation means’ as the source and target classes. This is done intentionally to illustrate the clustering of model classification regions for semantically similar classes. Notice in Fig. 4, the hopping mostly involves intermediate classes related to transportation/carriage means. Exceptions occur when ‘School Bus’ is the target class. This confirms our hypothesis that this class has a well-spread region. Consequently, it attracts a variety of intermediate labels as the target when perturbed, including those that live (relatively) far from its main cluster.

Figure 4: Max-label hopping during transformations using LUTA-U. Setup of Table 3 is employed.

Our analysis only scratches the surface of the model exploration and exploitation possibilities enabled by LUTA, promising many interesting future research directions to which the community is invited.

5.3 Physical World attack

Label universal targeted attacks have serious implications if they transfer well to the Physical World. To evaluate LUTA as a Physical World attack, we observe model label predictions on a live webcam stream of the printed adversarial images. No enhancement/transformation is applied other than color printing the adversarial images. This setup is considerably more challenging than, e.g., fooling on digital scans of printed adversarial images [14]. Despite that, LUTA perturbations are found to be surprisingly effective for label-universal fooling in the Physical World. The exact details of our experiments are provided in §A-7 of the supplementary material. We also provide a video here, capturing the live streaming examples.

5.4 Hyper-parameters and training time

In Algorithm 1, the desired fooling ratio ζ controls the total number of iterations, given a fixed mini-batch size b and norm bound η. Also, the mini-batch size plays its typical role in the underlying stochastic optimization problem. Hence, we mainly focus on the parameter η in this section.

Figure 5: Effects of varying η on fooling ratio (left). Efficacy of moments in optimization (right), with η = 15 and 4,500 for the ℓ∞ and ℓ2 norms.

Fig. 5 (left) shows the effects of varying η on the fooling ratios for the four considered ImageNet models, for both ℓ∞ and ℓ2-norm bounded perturbations. Only the values of T1 are included for clarity, as the other transformations show qualitatively similar behavior. Here, we cut off the training after 200 iterations. The rise in fooling ratio with larger η is apparent. On average, 100 iterations of our Python 3 LUTA implementation require 18.8, 20.9, 33.6 and 19.5 minutes for VGG-16, ResNet-50, Inception-V3 and MobileNet-V2 on an NVIDIA Titan Xp GPU with 12 GB RAM. Fig. 5 (right) also illustrates the role of the first and second moments in achieving the desired fooling rates more efficiently. For clarity, we show it for T1 for MobileNet-V2. Similar qualitative behavior was observed in all our experiments. It is apparent that both moments significantly improve the efficiency of LUTA by estimating the desired perturbation in fewer iterations. We allow at most 450 iterations in this experiment.

6 Conclusion

We present a first-of-its-kind attack that changes the label of a whole class into another label of choice. Our white-box attack computes perturbations based on samples of the source and non-source classes, while stepping in directions guided by the first two moments of the computed gradients. The estimated perturbations are found effective for fooling large-scale ImageNet and VGGFace models while remaining largely imperceptible. We also show that the label-universal perturbations transfer well to the Physical World. The proposed attack is additionally demonstrated to be an effective tool for empirically exploring the classification regions of deep models, revealing insightful modelling details. Moreover, LUTA perturbations exhibit interesting target label patterns, which opens possibilities for their black-box extensions.

Acknowledgement

This work is supported by Australian Research Council Grant ARC DP19010244. The GPU used for this work was donated by NVIDIA Corporation.

References

[1] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I. and Fergus, R., 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.

[2] Goodfellow, I.J., Shlens, J. and Szegedy, C., 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.

[3] Moosavi-Dezfooli, S.M., Fawzi, A. and Frossard, P., 2016. DeepFool: a simple and accurate method to fool deep neural networks. In Proc. IEEE CVPR (pp. 2574-2582).

[4] Moosavi-Dezfooli, S.M., Fawzi, A., Fawzi, O. and Frossard, P., 2017. Universal adversarial perturbations. In Proc. IEEE CVPR (pp. 1765-1773).

[5] Reddy Mopuri, K., Ojha, U., Garg, U. and Venkatesh Babu, R., 2018. NAG: Network for adversary generation. In Proc. IEEE CVPR (pp. 742-751).

[6] Akhtar, N. and Mian, A., 2018. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6, pp.14410-14430.

[7] Rauber, J., Brendel, W. and Bethge, M., 2017. Foolbox: A Python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131.

[8] Simonyan, K. and Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.

[9] He, K., Zhang, X., Ren, S. and Sun, J., 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).

[10] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. and Wojna, Z., 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2818-2826).

[11] Sandler, M., Howard, A., Zhu, M., Zhmoginov, A. and Chen, L.C., 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520).

[12] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K. and Fei-Fei, L., 2009, June. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (pp. 248-255).

[13] Cao, Q., Shen, L., Xie, W., Parkhi, O.M. and Zisserman, A., 2018, May. Vggface2: A dataset for recognising faces across pose and age. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018) (pp. 67-74). IEEE.

[14] Kurakin, A., Goodfellow, I. and Bengio, S., 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.

[15] Madry, A., Makelov, A., Schmidt, L., Tsipras, D. and Vladu, A., 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.

[16] Khrulkov, V. and Oseledets, I., 2018. Art of singular vectors and universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 8562-8570).

[17] Mopuri, K.R., Garg, U., and Radhakrishnan, V.B. 2017. Fast Feature Fool: A data independent approach to universal adversarial perturbations. arXiv preprint arXiv:1707.05572.

[18] Moosavi-Dezfooli, S.M., Fawzi, A., Fawzi, O., Frossard, P. and Soatto, S., 2017. Analysis of universal adversarial perturbations. arXiv preprint arXiv:1705.09554.

[19] Kingma, D.P. and Ba, J., 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

[20] Tieleman, T. and Hinton, G., 2012. Lecture 6.5 - RMSProp, COURSERA: Neural Networks for Machine Learning. Technical Report.

Supplementary Material

(Label Universal Targeted Attack)

A-1: Computing the bias corrected moments ratio

To derive the expression for the bias corrected moment ratio used in Algorithm 1, we first focus on the moving average expression of the first moment estimate $m_t$:

$m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t.$

Here, we ignore the subscript of the gradient $g$ for clarity. We can write
at $t = 1$: $m_1 = (1-\beta_1)\, g_1$,
at $t = 2$: $m_2 = \beta_1(1-\beta_1)\, g_1 + (1-\beta_1)\, g_2$,
at $t = 3$: $m_3 = \beta_1^2(1-\beta_1)\, g_1 + \beta_1(1-\beta_1)\, g_2 + (1-\beta_1)\, g_3$, resulting in the expression:

$m_t = (1-\beta_1)\sum_{i=1}^{t}\beta_1^{\,t-i}\, g_i.$   (2)

Using Eq. (2), we can relate the expected value of $m_t$ to the expected value of the true first moment as follows:

$\mathbb{E}[m_t] = \mathbb{E}[g_t]\,(1-\beta_1^{\,t}) + \zeta,$   (3)

where $\zeta \to 0$ for values of $\beta_1$ assigning very low weights to the more distant time stamps in the past (e.g. $\beta_1 = 0.9$). Ignoring $\zeta$, the remaining expression gets simplified to:

$\mathbb{E}[m_t] = \mathbb{E}[g_t]\,(1-\beta_1^{\,t}).$   (4)

Simplification of Eq. (3) to Eq. (4) is verifiable by choosing a small value of $\beta_1$ and expanding the former. In Eq. (4), the term $(1-\beta_1^{\,t})$ causes a bias for a larger $\beta_1$ and smaller $t$, which is especially true for the early iterations of the algorithm. Hence, to account for the bias, $\widehat{m}_t = m_t/(1-\beta_1^{\,t})$ must be used instead of directly employing $m_t$. Analogously, we can correct the bias for the second moment estimate $v_t$ by using $\widehat{v}_t = v_t/(1-\beta_2^{\,t})$.

Since $\widehat{v}_t$ denotes the bias corrected moving average of the second moment estimate, we use the ratio

$\dfrac{\widehat{m}_t}{\sqrt{\widehat{v}_t}}.$   (5)

Considering that $\widehat{m}_t$ and $\widehat{v}_t$ are vectors in Eq. (5), we re-write the above as the following mathematically meaningful expression:

$\big(\text{diag}\big(\sqrt{\widehat{v}_t}\,\big)\big)^{-1}\widehat{m}_t,$   (6)

where diag(.) forms a diagonal matrix of the vector in its argument or forms a vector of the diagonal matrix provided to it. The inverse in the above equation is element-wise.
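The bias correction above can also be checked numerically. The sketch below uses an illustrative $\beta_1 = 0.9$ and a near-stationary toy gradient; it shows that the raw moving average underestimates the true first moment by the factor $(1-\beta_1^{\,t})$, while the corrected estimate recovers it.

```python
import numpy as np

beta1, T = 0.9, 10
rng = np.random.default_rng(0)
g_mean = 5.0                             # true first moment of the toy gradient
m = 0.0
for t in range(1, T + 1):
    g = g_mean + rng.normal(0.0, 0.01)   # near-stationary gradient samples
    m = beta1 * m + (1 - beta1) * g      # moving average, as in Eq. (2)

corrected = m / (1 - beta1 ** T)         # bias corrected estimate
# m is roughly g_mean * (1 - beta1**T); `corrected` is roughly g_mean
```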

A-2: Label leakage suppression in LUTA
To demonstrate the effect of leakage suppression using non-source classes, we perform the following experiment. In Algorithm 1, we replace the set of non-source class samples with a set of source class samples, and compute the gradients accordingly. This removes any role the non-source classes play in the original LUTA. Under a setup identical to that of Table 1 of the paper, we observe the following average percentage Leakage 'rise' for the perturbations. VGG-16: 18.5%, ResNet-50: 21.9%, Inception-V3: 63.6% and MobileNet-V2: 26.7%. On the other hand, the changes in the fooling ratios on the test data are not significant: the average test-data fooling ratio actually decreases slightly for VGG-16, ResNet-50 and MobileNet-V2 when label leakage is not suppressed, and a small gain occurs only in the case of Inception-V3. However, the label leakage rise for that network is also the maximum. This experiment conclusively demonstrates the successful label leakage suppression by the original algorithm.
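For reference, the leakage measured in this experiment can be sketched as the fraction of non-source samples that the perturbed model maps to the attacker's target label. The callable `predict` and the function name are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def label_leakage(predict, non_source_images, target_label):
    """Fraction of non-source samples predicted as the target label.

    `predict` is any callable returning integer class predictions for a
    batch of (perturbed) non-source images.
    """
    preds = np.asarray(predict(non_source_images))
    return float(np.mean(preds == target_label))
```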

A-3: Label details for ImageNet model fooling

For the labels used in Table 1 of the paper, Table 4 provides the detailed names and WordNet IDs.

Transformation: source label (WordNet ID) → target label (WordNet ID)
T1: airship, dirigible (n02692877) → school bus (n04146614)
T2: ostrich, Struthio camelus (n01518878) → zebra (n02391049)
T3: lion, king of beasts, Panthera leo (n02129165) → orangutan, orang, orangutang, Pongo pygmaeus (n02480495)
T4: bustard (n02018795) → Arabian camel, dromedary, Camelus dromedarius (n02437312)
T5: jellyfish (n01910747) → killer whale, killer, orca, …, Orcinus orca (n02071294)
T6: lifeboat (n03662601) → great white shark, white shark, man-eater, …, Carcharodon carcharias (n01484850)
T7: scoreboard (n04149813) → freight car (n03393912)
T8: pickelhaube (n03929855) → stupa, tope (n04346328)
T9: space shuttle (n04266014) → steam locomotive (n04310018)
T10: rapeseed (n11879895) → sulphur butterfly, sulfur butterfly (n02281406)
Table 4: Detailed labels and WordNet IDs of ImageNet for Table 1 of the paper.

A-4: Further illustrations of perturbations for ImageNet model fooling

Fig. 6 shows further examples of ℓ∞-norm bounded perturbations with η = 15. We also show ℓ2-norm bounded perturbation examples in Figs. 7 and 8.

Figure 6: ℓ∞-norm bounded perturbations with η = 15. A row contains perturbations for the same source → target fooling. Representative adversarial samples are also shown. We follow [1] for visualizing the perturbations. The perturbations are generally hard to perceive for humans.
Figure 7: ℓ2-norm bounded perturbations with η = 4500.
Figure 8: Further examples of ℓ2-norm bounded perturbations with η = 4500.

A-5: Further images of face identity switches

Figure 9: Representative ℓ∞ and ℓ2-norm bounded perturbations for face identity switching on the VGGFace model. Example clean images of the target classes are provided for reference only.

A-6: Perturbation patterns

With different LUTA runs for the same source → target transformation, we obtain different perturbations due to the stochasticity introduced by the mini-batches. However, all those perturbations preserve the characteristic visual features of the target class. Fig. 10 illustrates this fact. The shown perturbations are for VGG-16. We choose this network because the target class patterns in its perturbations are clearer. The phenomenon is generic across models; however, for more complex models, the regularities are relatively harder to perceive. Fig. 11 provides a few more VGG-16 perturbation examples in which the visual appearance of the target classes is clear.

Figure 10: Multiple runs of LUTA result in different perturbation patterns. However, each pattern contains the dominant visual features of the target class. Clean samples are shown for reference only.
Figure 11: Further examples of perturbations for VGG-16. Distinct visual features of the target class are apparent in the perturbation patterns.

A-7: LUTA in the Physical World

Label-universal targeted fooling has a challenging objective of mapping a large variety of inputs to a single (incorrect) target label. Considering that, a straightforward extension of this attack to the Physical World seems hard. However, our experiments demonstrate that label-universal targeted fooling is achievable in the Physical World using the adversarial inputs computed by the proposed LUTA.

To show network fooling in the Physical World, we adopt the following settings. An image (from ImageNet) is expanded to the maximum allowable area of an A4-size paper in landscape mode. We perform the expansion with 'Shotwell', a commonly used image organizing software for personal photo management on the GNOME desktop environment. The software choice is arbitrary, and we prefer a common software because an actual attacker may also use something similar. After the expansion, we print the image on plain A4 paper using a commercial bizhub-c458 color printer from Konica-Minolta with the default printer settings. We use the same settings to print both clean and adversarial images. The printed images are shown to a regular laptop webcam, and its live video stream is fed to our target model, which runs on Matlab 2018b using the Deep Learning Toolbox. We use VGG-16 for this experiment. We use a square grid for the video to match our square images. Note that we are directly fooling a classifier here (no detector), hence the correct aspect ratio of the image is important in our case.
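Since we fool a classifier directly, preserving the square aspect ratio of the webcam frames matters. A minimal sketch of such a center square crop (frame layout and function name are our own illustrative assumptions) could look as follows:

```python
import numpy as np

def center_square_crop(frame):
    """Center-crop an HxWxC webcam frame to a square region, avoiding the
    aspect-ratio distortion that a direct resize to a square would cause."""
    h, w = frame.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    return frame[top:top + s, left:left + s]
```

The cropped square frame can then be resized to the classifier's input resolution without distorting the printed image.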

In the provided video, it is clear that the perturbations are able to fool the model into the desired target labels quite successfully. For this experiment, we intentionally selected those adversarial images in which the perturbations were relatively more perceptible, as they must be visible to the webcam (albeit slightly) to take effect. Nevertheless, all the shown images use the same ℓ∞-norm bound η for the underlying perturbations. Perceptibility of the same perturbation can differ across images, based on image properties (e.g. brightness, contrast). For images where the perturbation perceptibility is too low for the Physical World attack, a simple scaling of the perturbation works well (instead of allowing a larger η in the algorithm). However, we do not show any such case in the provided video. All the used image perturbations are directly computed for the same η.
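The perturbation scaling mentioned above can be sketched as follows; the scaling factor `alpha` and the function name are hypothetical, and the valid pixel range [0, 255] is enforced after amplification.

```python
import numpy as np

def amplify_perturbation(clean_image, delta, alpha=2.0):
    """Scale a precomputed perturbation for Physical World visibility and
    clip the adversarial image back to the valid uint8 pixel range."""
    adv = clean_image.astype(np.float32) + alpha * delta.astype(np.float32)
    return np.clip(adv, 0, 255).astype(np.uint8)
```

Note that scaling trades imperceptibility for physical robustness, which is why such images are excluded from the reported video.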

It is also worth mentioning that a Physical World attack setup similar to [14] was also tested in our experiments, where instead of a live video stream, we classify digitally scanned and cropped adversarial images (originally printed in the same manner as described above). For the tested images with quasi-imperceptible perturbations, 100% successful fooling was observed. Hence, that setup was not deemed interesting enough to be reported. Our current setup is more challenging because it does not assume static, perfectly cropped, uniformly illuminated and perfectly planar adversarial images. These assumptions are implicit in the other setup.