Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses

04/01/2019 ∙ by Yingwei Li, et al.

This paper focuses on learning transferable adversarial examples specifically against defense models (models designed to defend against adversarial attacks). In particular, we show that a simple universal perturbation can fool a series of state-of-the-art defenses. Adversarial examples generated by existing attacks are generally hard to transfer to defense models. We observe the property of regional homogeneity in adversarial perturbations and suggest that the defenses are less robust to regionally homogeneous perturbations. Therefore, we propose an effective transforming paradigm and a customized gradient transformer module to transform existing perturbations into regionally homogeneous ones. Without explicitly forcing the perturbations to be universal, we observe that a well-trained gradient transformer module tends to output input-independent gradients (hence universal), benefiting from an under-fitting phenomenon. Thorough experiments demonstrate that our work significantly outperforms prior attacking algorithms (either image-dependent or universal ones) by an average improvement of 14.0% in top-1 error rate against defenses. Beyond cross-model transferability, we also verify that regionally homogeneous perturbations transfer well across different vision tasks (attacking with the semantic segmentation task and testing on the object detection task).




1 Introduction

Deep neural networks have been demonstrated to be vulnerable to adversarial examples [55], crafted by adding visually imperceptible perturbations to clean images, which casts a security threat on the deployment of commercial machine learning systems. To mitigate this, large efforts have been devoted to adversarial defense [7, 13, 36, 43, 44, 49, 52], via adversarial training [32, 37, 56, 62], input transformation [11, 21], or randomization [34, 61].

The focus of this work is to attack defense models, especially in the black-box setting where models’ architectures and parameters remain unknown to attackers. In this case, the adversarial examples generated for one model, which possess the property of “transferability”, may also be misclassified by other models. To the best of our knowledge, learning transferable adversarial examples for attacking defense models is still an open problem.

Our work stems from the observation of regional homogeneity in adversarial perturbations in the white-box setting. As Figure 1(a) shows, we plot the adversarial perturbations generated by attacking a naturally trained ResNet-152 [23] model (top) and a representative defense model (i.e., an adversarially trained model [37, 62]). It suggests that the patterns of the two kinds of perturbations are visually different. Concretely, the perturbations of defense models reveal a coarser level of granularity, and are more locally correlated and more structured than those of the naturally trained model. The observation also holds when attacking different defense models (e.g., adversarial training with feature denoising [62], see Figure 1(b)), generating different types of adversarial examples (image-dependent perturbations or universal ones [50], see Figure 1(c)), or testing on different data domains (CT scans from the NIH pancreas segmentation dataset [46], see Figure 1(d)).

Figure 1: Illustration of the regional homogeneity property of adversarial perturbations obtained by white-box attacking naturally trained models (top row) and adversarially trained models (bottom row). The adversarially trained models are acquired by (a) vanilla adversarial training [37, 62], (b) adversarial training with feature denoising [62], (c) universal adversarial training [50], and (d) adversarial training [37] for medical image segmentation [65].

Motivated by this observation, we suggest that regionally homogeneous perturbations are strong in attacking defense models, which is especially helpful for learning transferable adversarial examples in the black-box setting. Hence, we propose to transform existing perturbations (those derived from differentiating naturally trained models) into regionally homogeneous ones. To this end, we develop a novel transforming paradigm (see Figure 2) to craft regionally homogeneous perturbations, and accordingly a gradient transformer module (see Figure 3) to encourage local correlations within pre-defined regions.

The proposed gradient transformer module is quite light-weight, with only 12 + 2K trainable parameters in total, where the convolutional layer (bias enabled) incurs 12 parameters and K is the number of region partitions. According to our experiments, the module under-fits (large bias and small variance) if it is trained with a large number of images. In general vision tasks, an under-fitting model is undesirable. However, in our case, once the gradient transformer module becomes quasi-input-independent (i.e., the aforementioned large bias and small variance), it outputs a nearly fixed pattern whatever the input is. Our work is thus endowed with a desirable property, i.e., seemingly training to generate image-dependent perturbations, yet obtaining universal ones. We note that our mechanism is different from other universal adversarial perturbation generators [38, 42], as we do not explicitly force the perturbation to be universal.

Comprehensive experiments are conducted to verify the effectiveness of the proposed regionally homogeneous perturbation (RHP). Under the black-box setting, RHP successfully attacks 9 recent defenses [21, 26, 32, 37, 56, 61, 62] and improves the top-1 error rate by 14.0% on average; three of them are top submissions in the NeurIPS 2017 defense competition [29] and the Competition on Adversarial Attacks and Defenses 2018. Compared with state-of-the-art attack methods, RHP not only outperforms universal adversarial perturbations (i.e., UAP [38] and GAP [42]) but also outperforms image-dependent perturbations (FGSM [18], MIM [14], and DIM [14, 63]). The achievement over image-dependent perturbations is especially valuable, as image-dependent perturbations generally perform better because they utilize information from the original images. Since it is universal, RHP is more general (it does not depend on the target image), more efficient (no additional computation is required for new images), and more flexible (e.g., without knowing the target image, one can stick a pattern on a camera lens to attack artificial intelligence surveillance cameras).

Moreover, we also evaluate the cross-task transferability of RHP and demonstrate that RHP generalizes well in cross-task attacks, i.e., attacking with the semantic segmentation task and testing on the object detection task.

2 Related Work

Black-box attack.

In the black-box setting, attackers cannot access the target model. A typical solution is to generate adversarial examples with strong transferability. Szegedy et al. [55] first discuss the transferability of adversarial examples, i.e., that the same input can successfully attack different models. Taking advantage of transferability, Papernot et al. [41, 40] construct a substitute model to attack a black-box target model. Liu et al. [35] extend the black-box attack to a large scale and successfully attack an online image classification system, clarifai.com. Based on one of the most well-known attack methods, the Fast Gradient Sign Method (FGSM) [18], and its iteration-based version (I-FGSM) [30], Dong et al. [14], Zhou et al. [64], Xie et al. [63], and Li et al. [31] improve transferability by adopting a momentum term, smoothing the perturbation, input transformation, and model augmentation, respectively. Other works [4, 42, 60] suggest training generative models for creating adversarial examples.

Besides transfer-based attacks, query-based [5, 9] and decision-based [1, 20] attacks also belong to the field of black-box attacks. However, these families of methods need to access the model outputs and query the target model a large number of times. For example, Guo et al. [20] limit the search space to a low-frequency domain and fool the Google Cloud Vision platform with an unprecedentedly small number of model queries (around 1000).

Universal adversarial perturbations.

The attacks above are all image-dependent. Moosavi-Dezfooli et al. [38] craft universal perturbations which can be directly added to any test image to fool the classifier with a high success rate. Poursaeed et al. [42] propose to train a neural network for generating adversarial examples by explicitly feeding random noise to the network during training. After obtaining a well-trained model, they use a fixed input to generate universal adversarial perturbations. Researchers have also explored producing universal adversarial perturbations with different methods [27, 39] or on different tasks [24, 42]. All these methods construct universal adversarial perturbations explicitly or data-independently. Unlike them, we provide an implicit, data-driven alternative for generating universal adversarial perturbations.

Defense methods.

Xie et al. [61] and Guo et al. [21] break transferability by applying input transformations such as random padding/resizing [61], JPEG compression [15], and total variance minimization [48]. Injecting adversarial examples during training, termed adversarial training, improves the robustness of deep neural networks. These adversarial examples can be pre-generated [32, 56] or generated on-the-fly during training [26, 37, 62]. Adversarial training has also been applied to universal adversarial perturbations [2, 50]. Tsipras et al. [57] suggest that, for an adversarially trained model, loss gradients in the input space align well with human perception. Shafahi et al. [50] make a similar observation on universally adversarially trained models.

Normalization.

To induce regionally homogeneous perturbations, our work resorts to a new normalization strategy. This strategy appears similar to existing normalization techniques, such as batch normalization [25], layer normalization [3], instance normalization [58], and group normalization [59]. While these techniques aim to help the model converge faster and speed up the learning procedure for different tasks, the goal of our work is to explicitly enforce a region structure and build homogeneity within regions.

Figure 2: Schematic illustration of the transforming paradigm, where x is an original image with the ground-truth label y, and the gradient g is computed from the naturally trained model f. Our work learns a gradient transformer module T_w that maps g to T_w(g).

3 Regionally Homogeneous Perturbations

Figure 3: Structure of the gradient transformer module, which consists of a newly proposed Region Norm (RN) layer, a convolutional layer (bias enabled), and an identity mapping. We insert four probes (a, b, c, and d) to assist the analysis in Section 3.3 and Section 4.2.

As shown in Section 1, regionally homogeneous perturbations appear to be strong in attacking defense models. To acquire regionally homogeneous adversarial examples, we propose a gradient transformer module that generates regionally homogeneous perturbations from existing, regionally non-homogeneous ones (e.g., the perturbations in the top row of Figure 1). In the following, we detail the transforming paradigm in Section 3.1 and the core component, called the gradient transformer module, in Section 3.2. In Section 3.3, we observe an under-fitting phenomenon and illustrate that the proposed gradient transformer module becomes quasi-input-independent, which benefits crafting universal adversarial perturbations.

3.1 Transforming Paradigm

To learn regionally homogeneous adversarial perturbations, we propose to use a shallow network T_w, which we call the gradient transformer module, to transform the gradients generated by attacking naturally trained models.

Concretely, we consider the Fast Gradient Sign Method (FGSM) [18], which generates adversarial examples by

$$x^{adv} = x + \epsilon \cdot \mathrm{sign}\big(\nabla_x J(x, y)\big), \tag{1}$$

where J(·, ·) is the loss function of the model f, sign(·) denotes the sign function, and y is the ground-truth label of the original image x. FGSM ensures that the generated adversarial example x^adv lies within the ε-ball of x in the L∞ space.
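For reference, the following is a minimal one-step FGSM sketch in PyTorch; the classifier, the cross-entropy loss, and the [0, 1] pixel range are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM (Eqn. 1): x_adv = x + eps * sign(grad_x J(x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)        # J(x, y)
    grad = torch.autograd.grad(loss, x)[0]     # loss gradient w.r.t. the input
    x_adv = x + eps * grad.sign()              # single gradient-sign step
    return x_adv.clamp(0.0, 1.0).detach()      # keep pixels in a valid range
```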

Based on FGSM, we build pixel-wise connections via the additional gradient transformer module T_w, so that we may obtain regionally homogeneous perturbations. Therefore, Eqn. (1) becomes

$$x^{adv} = x + \epsilon \cdot \mathrm{sign}\big(T_w(\nabla_x J(x, y))\big), \tag{2}$$

where w denotes the trainable parameters of the gradient transformer module T_w, omitted where possible for simplicity. The challenge we face now is how to train the gradient transformer module with limited supervision. We address this by proposing a new transforming paradigm, illustrated in Figure 2. It consists of four steps: 1) compute the gradient g = ∇_x J(x, y) by attacking the naturally trained model f; 2) get the transformed gradient T_w(g) via the gradient transformer module; 3) construct the adversarial image x^adv by adding the transformed perturbation to the clean image x, forward it to the same model f, and obtain the classification loss J(x^adv, y); and 4) freeze the clean image x and the model f, and update the parameters w of T_w by maximizing J(x^adv, y). The last step is implemented via stochastic gradient ascent (e.g., we use the Adam optimizer [28] in our experiments).

With the new transforming paradigm, one can potentially embed desirable properties through the design of the gradient transformer module T_w and, in the meantime, keep a high error rate on the model f. As we will show below, T_w is customized to generate regionally homogeneous perturbations specifically against defense models. Meanwhile, since we freeze most of the computation graph and leave only a limited number of parameters (namely w) to optimize, the learning procedure is very fast.
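A minimal sketch of the four-step paradigm above, assuming a frozen pretrained classifier `model` and a `transformer` module like the one sketched in Section 3.2. Since the hard sign in Eqn. (2) is not differentiable, this sketch substitutes a smooth tanh during training; that substitution is an assumption of the sketch, not a detail given by the paper.

```python
import torch
import torch.nn.functional as F

def train_transformer(model, transformer, loader, eps, lr=1e-3, epochs=50):
    """Learn the parameters w of T_w by maximizing the classification loss."""
    model.eval()                                   # the model f stays frozen
    for p in model.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(transformer.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            # 1) loss gradient g of the naturally trained model f
            x_leaf = x.clone().requires_grad_(True)
            g = torch.autograd.grad(F.cross_entropy(model(x_leaf), y), x_leaf)[0]
            # 2) transformed gradient T_w(g)
            t = transformer(g.detach())
            # 3) adversarial image, forwarded through the same frozen f
            #    (tanh is a differentiable stand-in for the hard sign)
            x_adv = (x + eps * torch.tanh(t)).clamp(0.0, 1.0)
            loss = F.cross_entropy(model(x_adv), y)
            # 4) update only w by stochastic gradient ascent on the loss
            opt.zero_grad()
            (-loss).backward()
            opt.step()
    return transformer
```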

3.2 Gradient Transformer Module

With the aforementioned transforming paradigm, we introduce the architecture of the core module, termed the gradient transformer module. The gradient transformer module aims at increasing the correlation of pixels in the same region, therefore inducing regionally homogeneous perturbations. As Figure 3 shows, given a loss gradient g as the input, the gradient transformer module is defined as

$$T_w(g) = \mathrm{RN}(\mathrm{conv}(g)) + g, \tag{3}$$

where conv(·) is a convolutional layer and RN(·) is the newly proposed region norm layer. The module parameters w comprise those of the region norm layer (γ and β below) and of the convolutional layer. A residual connection [23] is also incorporated. Since γ is initialized as zero [19], the residual connection allows us to insert the gradient transformer module into any gradient-based attack method without breaking its initial behavior (i.e., the transformed gradient initially equals g). Since the initial gradient g is already able to craft stronger adversarial examples (compared with random noise), the gradient transformer module has a proper initialization.
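A sketch of the module in Eqn. (3) follows. It assumes the region index map and the RegionNorm layer sketched later in this section; the 1×1 kernel is an assumption that happens to match the stated parameter count of the convolutional layer (9 weights + 3 biases = 12 for 3-channel inputs).

```python
import torch
import torch.nn as nn

class GradientTransformer(nn.Module):
    """T_w(g) = RN(conv(g)) + g (Eqn. 3), with gamma initialized to zero."""

    def __init__(self, region_map, num_regions, channels=3):
        super().__init__()
        # Bias-enabled convolution; a 1x1 kernel over 3 channels gives
        # 9 weights + 3 biases = 12 parameters (kernel size is an assumption).
        self.conv = nn.Conv2d(channels, channels, kernel_size=1, bias=True)
        # Region Norm layer (Eqns. 4-5), sketched later in this section.
        self.rn = RegionNorm(region_map, num_regions)

    def forward(self, g):
        # Residual branch plus identity: because gamma starts at zero,
        # the transformed gradient initially equals g.
        return self.rn(self.conv(g)) + g
```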

The region norm layer consists of two parts, including a region split function and a region norm operator.

Region split function splits an image (or equivalently, a convolutional feature map) into regions. Let R(·) denote the region split function. Its input is a pixel coordinate (h, w) and its output is the index of the region to which the pixel belongs. With a region split function, we obtain a partition of an image into K regions S_1, ..., S_K, where S_k collects the pixels assigned index k.

In Figure 4, we show four representative region split functions on a toy image, including 1) vertical partition, 2) horizontal partition, 3) grid partition, and 4) slash partition (parallel to an increasing line with slope 0.5).
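For concreteness, below are possible implementations of the vertical and grid partitions as region index maps (one region index per pixel); the exact partition formulas used in the paper are not reproduced here, so these are illustrative.

```python
import torch

def vertical_region_map(height, width, num_regions):
    """Vertical partition: each region is a band of columns; returns an (H, W) index map."""
    cols_per_region = -(-width // num_regions)           # ceiling division
    col_region = torch.arange(width) // cols_per_region  # region index per column
    return col_region.unsqueeze(0).expand(height, width)

def grid_region_map(height, width, cells_per_side):
    """Grid partition: cells_per_side x cells_per_side rectangular cells."""
    cell_h = -(-height // cells_per_side)
    cell_w = -(-width // cells_per_side)
    rows = torch.arange(height) // cell_h
    cols = torch.arange(width) // cell_w
    return rows.unsqueeze(1) * cells_per_side + cols.unsqueeze(0)
```

For example, `grid_region_map(299, 299, 10)` assigns each pixel of a 299×299 input to one of 100 grid cells.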

Figure 4: Toy examples of region split functions, including (a) vertical partition, (b) horizontal partition, (c) grid partition, and (d) slash partition. (e) illustrates the region norm operator with the region split function of (a), where C is the channel axis and H and W are the spatial axes. Each pixel indicates an N-dimensional vector, where N is the batch size.

Region norm operator links the pixels within the same region S_k, and is defined as

$$\hat{x}_i = \gamma_k \frac{x_i - \mu_k}{\sigma_k} + \beta_k, \tag{4}$$

where x_i and x̂_i are the i-th input and output, respectively, and i = (i_N, i_C, i_H, i_W) is a 4D vector indexing the features in (N, C, H, W) order, where N is the batch axis, C is the channel axis, and H and W are the spatial height and width axes. We define S_k as the set of pixels that belong to the k-th region, that is, S_k = { i | R(i_H, i_W) = k }.

μ_k and σ_k in Eqn. (4) are the mean and standard deviation (std) of the k-th region, computed by

$$\mu_k = \frac{1}{m_k} \sum_{i \in S_k} x_i, \qquad \sigma_k = \sqrt{\frac{1}{m_k} \sum_{i \in S_k} (x_i - \mu_k)^2 + \mathrm{const}}, \tag{5}$$

where const is a small constant for numerical stability, and m_k = |S_k| is the size of S_k, with |·| denoting the cardinality of a set. In the testing phase, the moving mean and moving std accumulated during training are used instead. Since we split the image into K regions, the trainable scale γ_k and shift β_k in Eqn. (4) are also defined per region.
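A minimal RegionNorm sketch implementing Eqns. (4)-(5), with a per-region scale and shift and running statistics for test time. The normalization axes (batch, channel, and all pixels of a region), the momentum value, and the unbiased std estimator are assumptions of this sketch, not details given by the paper.

```python
import torch
import torch.nn as nn

class RegionNorm(nn.Module):
    def __init__(self, region_map, num_regions, eps=1e-5, momentum=0.1):
        super().__init__()
        self.register_buffer("region_map", region_map.long())    # (H, W) region indices
        self.num_regions = num_regions
        self.eps, self.momentum = eps, momentum
        self.gamma = nn.Parameter(torch.zeros(num_regions))       # zero init, see Eqn. (3)
        self.beta = nn.Parameter(torch.zeros(num_regions))
        self.register_buffer("running_mean", torch.zeros(num_regions))
        self.register_buffer("running_std", torch.ones(num_regions))

    def forward(self, x):                                          # x: (N, C, H, W)
        out = torch.zeros_like(x)
        for k in range(self.num_regions):
            mask = self.region_map == k                            # pixels of region S_k
            vals = x[:, :, mask]                                   # (N, C, |S_k|)
            if self.training:
                mu, std = vals.mean(), vals.std() + self.eps
                with torch.no_grad():                              # update moving statistics
                    self.running_mean[k] += self.momentum * (mu - self.running_mean[k])
                    self.running_std[k] += self.momentum * (std - self.running_std[k])
            else:                                                  # test time: moving statistics
                mu, std = self.running_mean[k], self.running_std[k]
            out[:, :, mask] = self.gamma[k] * (vals - mu) / std + self.beta[k]
        return out
```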

We illustrate the region norm operator in Figure 4(e). To analyze its benefit, we compute the derivative of the loss with respect to the input as

$$\frac{\partial \mathcal{L}}{\partial x_i} = \frac{\gamma_k}{\sigma_k}\left(\frac{\partial \mathcal{L}}{\partial \hat{x}_i} - \frac{1}{m_k}\sum_{j \in S_k}\frac{\partial \mathcal{L}}{\partial \hat{x}_j} - \frac{\bar{x}_i}{m_k}\sum_{j \in S_k}\bar{x}_j\,\frac{\partial \mathcal{L}}{\partial \hat{x}_j}\right), \tag{6}$$

where L is the loss to optimize and x̄_i = (x_i − μ_k)/σ_k is the normalized input. It is not surprising that the gradients of γ_k and β_k are computed from all pixels in the related region. However, the gradient of a pixel with index i is also computed from all pixels in the same region. More significantly, in Eqn. (6) the sum in the second term, Σ_{j∈S_k} ∂L/∂x̂_j, and the sum in the third term, Σ_{j∈S_k} x̄_j ∂L/∂x̂_j, are shared by all pixels in the same region. Therefore, the pixel-wise connections within the same region are much denser after inserting the region norm layer.
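One way to arrive at Eqn. (6) from Eqn. (4), under the notation reconstructed above, is the chain rule over the pixels of the region:

$$\frac{\partial \mathcal{L}}{\partial x_i} = \sum_{j \in S_k} \frac{\partial \mathcal{L}}{\partial \hat{x}_j}\,\frac{\partial \hat{x}_j}{\partial x_i}, \qquad \frac{\partial \hat{x}_j}{\partial x_i} = \frac{\gamma_k}{\sigma_k}\left(\delta_{ij} - \frac{1}{m_k} - \frac{\bar{x}_i \bar{x}_j}{m_k}\right),$$

where δ_ij is the Kronecker delta; substituting the second expression into the first and collecting terms yields Eqn. (6).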

Comparison with other normalizations.

Compared with existing normalizations (e.g., Batch Norm [25], Layer Norm [3], Instance Norm [58], and Group Norm [59]), there are two main differences: 1) the goal of Region Norm is to generate regionally homogeneous perturbations, while the existing methods mainly aim to stabilize and speed up training; 2) Region Norm splits an image into regions and normalizes each region individually, while the other methods involve no such spatial operation.

3.3 Universal Analysis

By analyzing the magnitudes of the four probes (a, b, c, and d) in Figure 3, we observe that |c| ≫ |a| and |b| ≫ |a| in a well-trained gradient transformer module (more results in Section 4.2). Consequently, such a well-trained module becomes quasi-input-independent, i.e., the output is nearly fixed and only weakly related to the input. Note that the output is still slightly related to the input, which is why we say "quasi-".

Here, we first build the connection between that observation and under-fitting to explain the reason. Then, we convert the quasi-input-independent module to an input-independent module in order to generate universal adversarial perturbations.

Under-fitting and the quasi-input-independent module.

It is well known that there is a trade-off between the bias and variance of a model, i.e., the price for achieving a small bias is a large variance, and vice versa [6, 22]. Under-fitting occurs when the model shows low variance (but inevitably large bias). An extremely low-variance function gives a nearly fixed output whatever the input, which we term quasi-input-independent. Although in most machine learning situations this case is undesirable, a quasi-input-independent function is exactly what we want for generating universal adversarial perturbations.

Therefore, to encourage under-fitting, we go in the opposite direction of the suggestions for preventing under-fitting in [17]. On the one hand, to minimize the model capacity, our gradient transformer module only has 12 + 2K parameters, where the convolutional layer (bias enabled) incurs 12 parameters and K is the number of region partitions. On the other hand, we use a large training set (5k images or more) so that the model capacity is relatively small. We then obtain a quasi-input-independent module.

From quasi-input-independent to input-independent.

According to the analysis above, we already have a quasi-input-independent module. To generate a universal adversarial perturbation, following the post-processing strategy of Poursaeed et al. [42], we use a fixed vector as the input of the module. Then, following FGSM [18], the final universal perturbation is ε · sign(T_w(g_0)), where g_0 is the fixed input. Recall that sign(·) denotes the sign function and T_w denotes the gradient transformer module.
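A sketch of this final step, assuming a trained `transformer` and the all-zero fixed input described in Section 4.1; the input shape is illustrative.

```python
import torch

def universal_perturbation(transformer, eps, shape=(1, 3, 299, 299)):
    """Feed the fixed input g_0 once to obtain eps * sign(T_w(g_0))."""
    transformer.eval()
    with torch.no_grad():
        g0 = torch.zeros(shape)                 # fixed (all-zero) input g_0
        delta = eps * transformer(g0).sign()    # universal perturbation
    return delta                                # add to any test image, then clip
```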

4 Experiments

In this section, we demonstrate the effectiveness of the proposed regionally homogeneous perturbation (RHP) by attacking a series of defense models.

4.1 Experimental Setup

Dataset and evaluation metric.

We randomly select 5000 images from the validation set of ILSVRC 2012 [12] to evaluate the transferability of attack methods.

As for the evaluation metric, we use the increase of the top-1 error rate after attacking, i.e., the difference between the error rate on adversarial images and that on clean images.
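A small sketch of this metric; the prediction lists are hypothetical placeholders.

```python
def error_rate(preds, labels):
    """Top-1 error rate in percent."""
    wrong = sum(int(p != y) for p, y in zip(preds, labels))
    return 100.0 * wrong / len(labels)

def error_rate_increase(clean_preds, adv_preds, labels):
    """Increase of the top-1 error rate after attacking (higher = stronger attack)."""
    return error_rate(adv_preds, labels) - error_rate(clean_preds, labels)
```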

Attack methods.

For performance comparison, we reproduce five representative attack methods, including the fast gradient sign method (FGSM) [18], the momentum iterative fast gradient sign method (MIM) [14], the momentum diverse-inputs iterative fast gradient sign method (DIM) [14, 63], universal adversarial perturbations (UAP) [38], and the universal version of generative adversarial perturbations (GAP) [42]. If not specified otherwise, we follow the default parameter setup of each method.

To keep the perturbation quasi-imperceptible, we generate adversarial examples within the ε-ball of the original images in the L∞ space, under two maximum perturbation settings (the two numbers in each cell of Tables 2–4 correspond to the smaller and the larger setting, respectively). The adversarial examples are generated by attacking a naturally trained network: Inception v3 (IncV3) [54], Inception v4 (IncV4), or Inception-ResNet v2 (IncRes) [53]. In the default setting, IncV3 is used.

Defenses TVM HGD R&P Incens3 Incens4 IncResens PGD ALP FD
Error Rate 37.4 18.6 19.9 25.0 24.5 21.3 40.9 48.6 35.1
Table 1: The error rates (%) of defense methods on our dataset which contains 5000 randomly selected ILSVRC 2012 validation images.

Defense methods.

As our method targets defense models, we reproduce nine defense methods for performance evaluation, including input transformation through total variance minimization (TVM) [21], the high-level representation guided denoiser (HGD) [32], input transformation through random resizing and padding (R&P) [61], three ensemble adversarially trained models (Incens3, Incens4, and IncResens) [56], adversarial training with a projected gradient descent white-box attacker (PGD) [37, 62], adversarial logit pairing (ALP) [26], and the feature denoising adversarially trained ResNeXt-101 (FD) [62].

Among them, HGD [32] and R&P [61] are the rank-1 submission and rank-2 submission in the NeurIPS 2017 defense competition [29], respectively. FD [62] is the rank-1 submission in the Competition on Adversarial Attacks and Defenses 2018. The top-1 error rates of these methods on our dataset are shown in Table 1.

Implementation details.

To train the gradient transformer module, we randomly select another 5k images from the validation set of ILSVRC 2012 [12] as the training set. Note that the training set and the testing set are disjoint.

For the region split function, we use a default partition here and discuss different region split functions in Section 4.4. We train the gradient transformer module for 50 epochs. When testing, we use a zero array as the input of the gradient transformer module to obtain universal adversarial perturbations, i.e., the fixed input g_0 is an all-zero array.

Figure 5: Universal analysis of RHP. In (a), we plot the ratio of the number of variables in the probe pair (c, a) satisfying |c| > |a| to the total number of variables when training with 4, 5k, or 45k images. In (b), we plot the corresponding ratio for the probe pair (b, a). In (c), we compare the performance of universal inference (denoted by -U) and image-dependent inference (denoted by -I) while varying the number of training images (4, 5k, or 45k).

4.2 Under-fitting and Universal

To verify the connection between under-fitting and universal adversarial perturbations, we change the number of training images so that the model either does or does not under-fit (the model capacity becomes relatively low compared to a large dataset). Specifically, we select 4, 5k, or 45k images from the validation set of ILSVRC 2012 as the training set. We insert four probes, a, b, c, and d, into the gradient transformer module as shown in Figure 3 and compare their values in Figure 5 with respect to the training iterations.

When the gradient transformer module is well trained with 5k or 45k images, we observe that: 1) |c| overwhelms |a|, indicating that the residual learning branch dominates the final output, i.e., d ≈ c; and 2) |b| overwhelms |a|, indicating that the output of the convolutional layer is only weakly related to the input gradient g. Based on the two observations, we conclude that the gradient transformer module is quasi-input-independent when the module is under-fitted by a large number of training images. Such a property is beneficial for generating universal adversarial perturbations (see Section 3.3).

When the number of training images is limited (say, 4 images), we observe that |b| does not overwhelm |a|, indicating that the output of the convolutional layer remains related to the input gradient g, since a small training set does not lead to under-fitting.

This conclusion is further supported by Figure 5(c): when training with 4 images, the performance gap between universal inference (using a fixed zero array as the input of the gradient transformer module) and image-dependent inference (using the loss gradient as the input) is quite large. The gap shrinks when more data are used for training. Figure 5(c) also illustrates that 5k images are enough for under-fitting. Hence, we set the size of the training set to 5k in the following experiments.
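A sketch of how the probe magnitudes could be compared, assuming the module layout sketched in Section 3.2 (attributes `conv` and `rn`); the mean-absolute-value criterion is an assumption of this sketch.

```python
import torch

@torch.no_grad()
def probe_magnitudes(transformer, g):
    """Mean |a|, |b|, |c|, |d| for the probes marked in Figure 3."""
    transformer.eval()
    a = g                          # input gradient
    b = transformer.conv(a)        # after the convolutional layer
    c = transformer.rn(b)          # after the Region Norm layer
    d = c + a                      # module output (identity branch added)
    return {name: t.abs().mean().item() for name, t in zip("abcd", (a, b, c, d))}
```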

To provide a better understanding of our implicit universal adversarial perturbation generating mechanism, we present an ablation study comparing our method with three other strategies for generating universal adversarial perturbations with the same region split function: 1) RP: Randomly assign the Perturbation as +ε or −ε for each region; 2) OP: iteratively Optimize the Perturbation to maximize the classification loss on the naturally trained model (the idea of [38]); 3) TU: explicitly Train a Universal adversarial perturbation. The only difference between TU and our proposed RHP is that random noise takes the place of the loss gradient g in Figure 2 (following [42]) and is fed to the gradient transformer module. RHP is our proposed implicit method, in which the gradient transformer module becomes quasi-input-independent without taking random noise as the training input.

We evaluate the above four settings on IncResens and compare the resulting increases in error rate for RP, OP, TU, and RHP. Since our implicit method has a proper initialization (discussed in Section 3.2), we observe that it constructs the strongest universal adversarial perturbations among the four.

4.3 Transferability toward Defenses

We first conduct the comparison in Table 2 under the two maximum perturbation settings.

A first glance shows that, compared with the other representatives, the proposed RHP provides a much stronger attack toward defenses. For example, when attacking HGD [32], RHP outperforms FGSM [18], MIM [14], DIM [14, 63], UAP [38], and GAP [42] by large margins (see Table 2). Second, universal methods generally perform worse than image-dependent methods, as the latter can access and utilize information from the clean images. Nevertheless, RHP, as a universal method, still beats those image-dependent methods by a large margin. Lastly, we observe that our method gains more when the maximum perturbation becomes larger.

Methods TVM HGD R&P Incens3 Incens4 IncResens PGD ALP FD
FGSM [18] 21.9/45.3 2.84/20.7 6.80/13.9 10.0/17.9 9.34/15.9 6.86/13.3 1.90/12.8 17.0/32.3 1.62/13.3
MIM [14] 18.2/37.1 7.30/18.7 7.52/13.7 11.4/17.3 10.9/16.5 7.76/13.6 1.36/6.86 15.3/24.4 1.00/7.48
DIM [14, 63] 21.9/41.0 11.9/32.1 12.0/21.9 16.7/26.1 16.2/25.0 10.8/19.6 1.84/7.70 15.5/24.7 1.34/8.22
UAP [38] 4.78/12.1 1.94/11.3 2.42/6.66 1.00/7.82 1.80/8.34 1.88/5.60 0.04/1.04 7.98/11.5 -0.1/0.40
GAP [42] 18.5/50.1 1.34/37.9 3.52/26.9 5.48/33.3 4.14/29.4 3.76/22.5 1.28/10.2 15.6/30.0 0.56/11.1
RHP (ours) 33.0/56.9 26.8/57.5 23.3/56.1 32.5/60.8 31.6/58.7 24.6/57.0 2.40/25.8 17.8/39.4 2.38/24.5
Table 2: The increase of error rates (%) after attacking. The adversarial examples are generated with IncV3. In each cell, we show the results for the two maximum perturbation settings, respectively. The top 3 rows (FGSM, MIM, and DIM) are image-dependent methods while the bottom 3 rows (UAP, GAP, and RHP) are universal methods.

The performance comparison is also conducted when generating adversarial examples by attacking IncV4 or IncRes. Here we do not report the performance of GAP, because the official code does not support generating adversarial examples with IncV4 or IncRes. As shown in Table 3 and Table 4, RHP remains strong against defense models. Meanwhile, it should be mentioned that when the model used for generating adversarial perturbations is changed, RHP still produces universal adversarial perturbations; the only difference is that the gradients used in the training phase change, which in turn leads to a different set of parameters in the gradient transformer module.

Methods TVM HGD R&P Incens3 Incens4 IncResens PGD ALP FD
FGSM [18] 22.4/46.3 4.00/21.1 8.68/15.1 10.1/18.3 9.72/17.4 7.58/14.7 2.02/12.8 17.3/32.1 1.42/13.4
MIM [14] 20.1/40.4 10.0/23.9 10.2/17.4 13.4/20.3 13.1/19.0 9.96/16.6 1.50/7.54 14.8/25.1 1.24/8.18
DIM [14, 63] 22.7/42.9 16.3/37.1 14.7/25.0 18.7/28.6 17.9/26.5 13.6/22.1 1.82/8.02 15.2/24.8 1.62/8.74
UAP [38] 6.28/18.2 1.42/9.94 2.42/6.52 2.08/7.68 1.94/6.92 2.34/6.78 0.28/2.12 10.1/15.9 0.16/1.18
RHP (ours) 37.1/58.4 23.4/59.8 20.2/57.6 27.5/60.3 26.7/62.5 21.2/58.5 2.20/29.7 20.3/42.1 1.90/31.8
Table 3: The increase of error rates (%) after attacking. The adversarial examples are generated with IncV4. In each cell, we show the results for the two maximum perturbation settings, respectively. The top 3 rows (FGSM, MIM, and DIM) are image-dependent methods while the bottom 2 rows (UAP and RHP) are universal methods.
Methods TVM HGD R&P Incens3 Incens4 IncResens PGD ALP FD
FGSM [18] 20.6/44.1 5.34/22.3 10.1/15.8 11.7/19.4 10.5/17.2 10.4/16.3 2.06/13.8 17.5/32.6 1.72/14.7
MIM [14] 20.3/39.4 15.0/28.1 13.4/22.1 17.4/24.6 15.1/22.5 13.6/22.6 1.84/8.80 12.3/25.9 1.62/9.48
DIM [14, 63] 24.6/44.0 23.7/44.1 22.5/34.5 25.8/37.1 22.4/33.7 20.2/32.5 2.36/9.26 12.6/25.9 1.78/10.1
UAP [38] 7.10/24.7 2.14/10.6 2.50/8.36 1.88/8.28 1.74/7.22 1.96/8.18 0.40/3.78 7.12/17.0 -0.1/3.06
RHP (ours) 37.1/57.4 26.9/62.1 25.1/61.4 29.7/62.3 29.8/63.3 26.8/62.8 2.20/28.3 22.8/43.5 2.20/32.2
Table 4: The increase of error rates (%) after attacking. The adversarial examples are generated with IncRes. In each cell, we show the results for the two maximum perturbation settings, respectively. The top 3 rows (FGSM, MIM, and DIM) are image-dependent methods while the bottom 2 rows (UAP and RHP) are universal methods.

4.4 Region Split Functions

In this section, we discuss the choice of region split functions, i.e., vertical partition, horizontal partition, grid partition, and slash partition (parallel to an increasing line with slope 0.5). Figure 6 shows the transferability to the defenses, which demonstrates that different region split functions are almost equally effective and all are stronger than our strongest baseline (DIM). Moreover, we observe an interesting phenomenon, presented in Figure 7.

Figure 6: Performance comparison among four split functions, including vertical partition, horizontal partition, grid partition and slash partition.
Figure 7: Four universal adversarial perturbations generated by different region split functions, and the corresponding top-3 target categories.

In each row of Figure 7, we exhibit the universal adversarial perturbation generated by a certain region split function, followed by the top-3 categories to which the generated adversarial examples are most likely to be misclassified. For each category, we show a clean image as an exemplar. Note that our experiments concern non-targeted attacks, so the target class is undetermined and depends solely on the region split function.

As can be seen, regionally homogeneous perturbations with different region split functions seem to target different categories, with an inherent connection to the low-level cues (e.g., texture, shape) they share. For example, when using the grid partition, the top-3 target categories are quilt, shower curtain, and container ship, and one can observe that images in these three categories generally have grid-structured patterns.

Motivated by these qualitative results, we form a preliminary hypothesis that regionally homogeneous perturbations tend to attack the low-level part of a model. The claim is not supported by a theoretical proof; however, it inspires us to test the cross-task transferability of RHP. As it is a common strategy to share low-level CNN architecture/information in multi-task learning systems [47], we conjecture that RHP can transfer well between different tasks (see below).

4.5 Cross-task Transferability

To demonstrate the cross-task transferability of RHP, we attack with the semantic segmentation task and test on the object detection task.

In more detail, we attack a semantic segmentation model (an Xception-65 [10] based DeepLab-v3+ [8]) on the Pascal VOC 2012 segmentation val set [16] and obtain the adversarial examples. Then, we take a VGG16 [51] based Faster R-CNN model [45], trained on MS COCO [33] and VOC 2007 trainval, as the testing model. To avoid testing on images that occur in the training set of the detection model, the testing set is the union of the VOC 2012 segmentation val set and the VOC 2012 detection trainval set, from which we remove images that appear in the VOC 2007 dataset. The baseline performance on clean images is an mAP of 69.2, where the mAP score is the average of the precisions at different recall values.

As shown in Table 5, RHP yields the lowest mAP on object detection, which demonstrates stronger cross-task transferability than the baseline image-dependent perturbations, i.e., FGSM [18], MIM [14], and DIM [14, 63].

Attacks - FGSM MIM DIM RHP (ours)
mAP 69.2 43.1 41.6 36.2 31.6
Table 5: Comparison of cross-task transferability. We attack the segmentation model, test on the detection model (Faster R-CNN), and report the mAP (lower is better for attack methods). "-" denotes the baseline performance without attack.

5 Conclusion

By white-box attacking naturally trained models and defense models, we observe the regional homogeneity of adversarial perturbations. Motivated by this observation, we propose a transforming paradigm and a gradient transformer module to generate regionally homogeneous perturbations (RHP) specifically for attacking defenses. RHP possesses three merits: 1) transferability: we demonstrate that RHP transfers well across different models (i.e., black-box attacks) and different tasks; 2) universality: taking advantage of the under-fitting of the gradient transformer module, RHP generates universal adversarial examples without explicitly enforcing the learning procedure towards universality; 3) strength: RHP successfully attacks representative defenses and outperforms state-of-the-art attack methods by a large margin.

Recent studies [32, 62] show that the mechanism of some defense models can be interpreted as a "denoising" procedure. Since RHP is less noise-like than other perturbations, it would be interesting to study RHP from a denoising perspective in future work. Meanwhile, although evaluated with non-targeted attacks, RHP is expected to be a strong targeted attack as well, which requires further exploration and validation.

References

  • [1] W. Brendel, J. Rauber, and M. Bethge (2018) Decision-based adversarial attacks: reliable attacks against black-box machine learning models. In ICLR, Cited by: §2.
  • [2] N. Akhtar, J. Liu, and A. Mian (2018) Defense against universal adversarial perturbations. In CVPR, Cited by: §2.
  • [3] J. L. Ba, J. R. Kiros, and G. E. Hinton (2016) Layer normalization. arXiv preprint arXiv:1607.06450. Cited by: §2, §3.2.
  • [4] S. Baluja and I. Fischer (2018) Learning to attack: adversarial transformation networks. In AAAI, Cited by: §2.
  • [5] A. N. Bhagoji, W. He, B. Li, and D. Song (2018) Practical black-box attacks on deep neural networks using efficient query mechanisms. In ECCV, Cited by: §2.
  • [6] C. M. Bishop (2006) The bias-variance decomposition. In Pattern recognition and machine learning, pp. 147–152. Cited by: §3.3.
  • [7] J. Buckman, A. Roy, C. Raffel, and I. Goodfellow (2018) Thermometer encoding: one hot way to resist adversarial examples. In ICLR, Cited by: §1.
  • [8] L. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam (2018) Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, Cited by: §4.5.
  • [9] P. Chen, H. Zhang, Y. Sharma, J. Yi, and C. Hsieh (2017) Zoo: zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Cited by: §2.
  • [10] F. Chollet (2017) Xception: deep learning with depthwise separable convolutions. In CVPR, Cited by: §4.5.
  • [11] N. Das, M. Shanbhogue, S. Chen, F. Hohman, S. Li, L. Chen, M. E. Kounavis, and D. H. Chau (2018) Shield: fast, practical defense and vaccination for deep learning using jpeg compression. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 196–204. Cited by: §1.
  • [12] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In CVPR, Cited by: §4.1, §4.1.
  • [13] G. S. Dhillon, K. Azizzadenesheli, J. D. Bernstein, J. Kossaifi, A. Khanna, Z. C. Lipton, and A. Anandkumar (2018) Stochastic activation pruning for robust adversarial defense. In ICLR, Cited by: §1.
  • [14] Y. Dong, F. Liao, T. Pang, H. Su, X. Hu, J. Li, and J. Zhu (2018) Boosting adversarial attacks with momentum. In CVPR, Cited by: §1, §2, §4.1, §4.3, §4.5, Table 2, Table 3, Table 4.
  • [15] G. K. Dziugaite, Z. Ghahramani, and D. M. Roy (2016) A study of the effect of jpg compression on adversarial images. arXiv preprint arXiv:1608.00853. Cited by: §2.
  • [16] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman (2015) The pascal visual object classes challenge: a retrospective. IJCV 111 (1), pp. 98–136. Cited by: §4.5.
  • [17] I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. MIT Press. Note: http://www.deeplearningbook.org Cited by: §3.3.
  • [18] I. J. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In ICLR, Cited by: §1, §2, §3.1, §3.3, §4.1, §4.3, §4.5, Table 2, Table 3, Table 4.
  • [19] P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He (2017) Accurate, large minibatch sgd: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677. Cited by: §3.2.
  • [20] C. Guo, J. S. Frank, and K. Q. Weinberger (2018) Low frequency adversarial perturbation. arXiv preprint arXiv:1809.08758. Cited by: §2.
  • [21] C. Guo, M. Rana, M. Cissé, and L. van der Maaten (2018) Countering adversarial images using input transformations. In ICLR, Cited by: Table 6, §1, §1, §2, §4.1.
  • [22] S. S. Haykin (2009) Finite sample-size considerations. In Neural networks and learning machines, Vol. 3, pp. 82–86. Cited by: §3.3.
  • [23] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §1, §3.2.
  • [24] J. Hendrik Metzen, M. Chaithanya Kumar, T. Brox, and V. Fischer (2017) Universal adversarial perturbations against semantic image segmentation. In ICCV, Cited by: §2.
  • [25] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In ICML, Cited by: §2, §3.2.
  • [26] H. Kannan, A. Kurakin, and I. Goodfellow (2018) Adversarial logit pairing. In NIPS, Cited by: Table 6, §1, §2, §4.1.
  • [27] V. Khrulkov and I. Oseledets (2018) Art of singular vectors and universal adversarial perturbations. In CVPR, Cited by: §2.
  • [28] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §3.1.
  • [29] A. Kurakin, I. Goodfellow, S. Bengio, Y. Dong, F. Liao, M. Liang, T. Pang, J. Zhu, X. Hu, C. Xie, et al. (2018) Adversarial attacks and defences competition. arXiv preprint arXiv:1804.00097. Cited by: §1, §4.1.
  • [30] A. Kurakin, I. Goodfellow, and S. Bengio (2017) Adversarial examples in the physical world. In ICLR Workshop, Cited by: §2.
  • [31] Y. Li, S. Bai, Y. Zhou, C. Xie, Z. Zhang, and A. Yuille (2018) Learning transferable adversarial examples via ghost networks. arXiv preprint arXiv:1812.03413. Cited by: §2.
  • [32] F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu, and J. Zhu (2018) Defense against adversarial attacks using high-level representation guided denoiser. In CVPR, Cited by: Table 6, Appendix A, §1, §1, §2, §4.1, §4.1, §4.3, §5.
  • [33] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In ECCV, pp. 740–755. Cited by: §4.5.
  • [34] X. Liu, M. Cheng, H. Zhang, and C. Hsieh (2018) Towards robust neural networks via random self-ensemble. In ECCV, Cited by: §1.
  • [35] Y. Liu, X. Chen, C. Liu, and D. Song (2017) Delving into transferable adversarial examples and black-box attacks. In ICLR, Cited by: §2.
  • [36] X. Ma, B. Li, Y. Wang, S. M. Erfani, S. Wijewickrema, G. Schoenebeck, M. E. Houle, D. Song, and J. Bailey (2018) Characterizing adversarial subspaces using local intrinsic dimensionality. In ICLR, Cited by: §1.
  • [37] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2018) Towards deep learning models resistant to adversarial attacks. In ICLR, Cited by: Table 6, Appendix A, Figure 1, §1, §1, §1, §2, §4.1.
  • [38] S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard (2017) Universal adversarial perturbations. In CVPR, Cited by: §1, §1, §2, §4.1, §4.2, §4.3, Table 2, Table 3, Table 4.
  • [39] K. R. Mopuri, U. Garg, and R. V. Babu (2017) Fast feature fool: a data independent approach to universal adversarial perturbations. In BMVC, Cited by: §2.
  • [40] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami (2017) Practical black-box attacks against machine learning. In AsiaCCS, Cited by: §2.
  • [41] N. Papernot, P. McDaniel, and I. Goodfellow (2016) Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277. Cited by: §2.
  • [42] O. Poursaeed, I. Katsman, B. Gao, and S. Belongie (2017) Generative adversarial perturbations. In CVPR, Cited by: §1, §1, §2, §2, §3.3, §4.1, §4.2, §4.3, Table 2.
  • [43] A. Prakash, N. Moran, S. Garber, A. DiLillo, and J. Storer (2018) Deflecting adversarial attacks with pixel deflection. In CVPR, Cited by: §1.
  • [44] A. Raghunathan, J. Steinhardt, and P. Liang (2018) Certified defenses against adversarial examples. In ICLR, Cited by: §1.
  • [45] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In NIPS, Cited by: §4.5.
  • [46] H. R. Roth, L. Lu, A. Farag, H. Shin, J. Liu, E. B. Turkbey, and R. M. Summers (2015) Deeporgan: multi-level deep convolutional networks for automated pancreas segmentation. In MICCAI, Cited by: §1.
  • [47] S. Ruder (2017) An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098. Cited by: §4.4.
  • [48] L. I. Rudin, S. Osher, and E. Fatemi (1992) Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60 (1-4), pp. 259–268. Cited by: §2.
  • [49] P. Samangouei, M. Kabkab, and R. Chellappa (2018) Defense-GAN: protecting classifiers against adversarial attacks using generative models. In ICLR, Cited by: §1.
  • [50] A. Shafahi, M. Najibi, Z. Xu, J. Dickerson, L. S. Davis, and T. Goldstein (2018) Universal adversarial training. arXiv preprint arXiv:1811.11304. Cited by: Figure 1, §1, §2.
  • [51] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In ICLR, Cited by: §4.5.
  • [52] Y. Song, T. Kim, S. Nowozin, S. Ermon, and N. Kushman (2018) PixelDefend: leveraging generative models to understand and defend against adversarial examples. In ICLR, Cited by: §1.
  • [53] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi (2017) Inception-v4, inception-resnet and the impact of residual connections on learning. In AAAI, Cited by: §4.1.
  • [54] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In CVPR, Cited by: §4.1.
  • [55] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2014) Intriguing properties of neural networks. In ICLR, Cited by: §1, §2.
  • [56] F. Tramèr, A. Kurakin, N. Papernot, D. Boneh, and P. McDaniel (2018) Ensemble adversarial training: attacks and defenses. In ICLR, Cited by: Table 6, §1, §1, §2, §4.1.
  • [57] D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry (2019) Robustness may be at odds with accuracy. In ICLR, Cited by: §2.
  • [58] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: §2, §3.2.
  • [59] Y. Wu and K. He (2018) Group normalization. In ECCV, pp. 3–19. Cited by: §2, §3.2.
  • [60] C. Xiao, B. Li, J. Zhu, W. He, M. Liu, and D. Song (2018) Generating adversarial examples with adversarial networks. In IJCAI, Cited by: §2.
  • [61] C. Xie, J. Wang, Z. Zhang, Z. Ren, and A. Yuille (2018) Mitigating adversarial effects through randomization. In ICLR, Cited by: Table 6, §1, §1, §2, §4.1, §4.1.
  • [62] C. Xie, Y. Wu, L. van der Maaten, A. Yuille, and K. He (2018) Feature denoising for improving adversarial robustness. arXiv preprint arXiv:1812.03411. Cited by: Table 6, Appendix A, Figure 1, §1, §1, §1, §2, §4.1, §4.1, §5.
  • [63] C. Xie, Z. Zhang, J. Wang, Y. Zhou, Z. Ren, and A. Yuille (2018) Improving transferability of adversarial examples with input diversity. arXiv preprint arXiv:1803.06978. Cited by: §1, §2, §4.1, §4.3, §4.5, Table 2, Table 3, Table 4.
  • [64] W. Zhou, X. Hou, Y. Chen, M. Tang, X. Huang, X. Gan, and Y. Yang (2018) Transferable adversarial perturbations. In ECCV, Cited by: §2.
  • [65] Z. Zhu, Y. Xia, W. Shen, E. Fishman, and A. Yuille (2018) A 3d coarse-to-fine framework for volumetric medical image segmentation. In 3DV, Cited by: Figure 1.

Appendix A Number of Regions

In this section, we study the effect of the number of region partitions. Specifically, we split the image into 1196, 598, 299, 150, 100, 75, 50, 38, 25, 17, 8, or 4 regions. We show the learned universal perturbations in Figure 8. Due to the limitation of GPU memory, we study at most 1196 regions. The performance comparison is presented in Table 6. We observe that for stronger defenses (e.g., PGD [37] and FD [62]), the optimal number of regions is relatively small. We explain this as follows: these strong defenses have a stronger ability to denoise (some work [32, 62] interprets the defense procedure as denoising), while perturbations with a small number of regions are less noise-like, thereby serving as strong perturbations against the strong defenses.

Figure 8: Regionally homogeneous universal perturbations with different number of region partitions.
#regions TVM [21] HGD [32] R&P [61] Incens3 [56] Incens4 [56] IncResens [56] PGD [37] ALP [26] FD [62]
1196 32.9 24.6 21.4 30.4 29.6 21.5 1.88 19.0 1.68
598 34.0 27.2 23.1 32.1 32.0 24.4 1.86 19.2 1.86
299 33.0 26.8 23.3 32.5 31.6 24.6 2.40 17.8 2.38
150 37.1 25.5 23.3 31.0 30.6 24.0 2.06 20.3 1.84
100 37.2 23.9 20.8 26.2 26.7 22.3 2.10 18.8 2.50
75 39.0 25.3 20.9 26.5 26.6 24.3 2.66 19.8 2.84
50 33.5 19.0 19.0 22.2 24.7 20.1 3.26 17.1 3.06
38 26.4 11.0 11.9 14.9 14.2 11.7 3.88 14.6 3.62
25 28.3 15.6 16.4 17.3 20.2 18.1 3.32 19.1 3.04
17 21.0 7.82 8.86 9.02 9.32 9.26 3.20 11.8 3.16
8 4.90 0.66 1.28 1.52 1.26 0.76 1.34 3.14 0.88
4 5.92 0.80 1.18 0.12 0.66 1.26 1.34 7.26 1.06
Table 6: The increase of error rates (%) after attacking. The adversarial examples are generated with IncV3. In each row, we show the performance when splitting the images into a different number of regions.

Appendix B Qualitative Universal Results

Besides the quantitative results included in the main manuscript, we show some qualitative results in Figure 9. Specifically, we show and compare the perturbations generated by universal inference (using a fixed zero array as the input of the gradient transformer module) and image-dependent inference (using the loss gradient as the input) with the gradient transformer module trained on different numbers of images. We arrive at the same conclusion as in the main manuscript, i.e., the gradient transformer module becomes quasi-input-independent when trained with a large number of images (e.g., 5k images or more).

Figure 9: A comparison of perturbations between image-dependent inference (img. dep.) and universal inference (universal) when training with 4, 5k, or 45k images. In the case of 4 images, the difference between image-dependent inference and universal inference is large, while it is small when training with more images.