Source code for Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses
This paper focuses on learning transferable adversarial examples specifically against defense models (models designed to defend against adversarial attacks). In particular, we show that a simple universal perturbation can fool a series of state-of-the-art defenses. Adversarial examples generated by existing attacks are generally hard to transfer to defense models. We observe the property of regional homogeneity in adversarial perturbations and suggest that defenses are less robust to regionally homogeneous perturbations. Therefore, we propose an effective transforming paradigm and a customized gradient transformer module to transform existing perturbations into regionally homogeneous ones. Without explicitly forcing the perturbations to be universal, we observe that a well-trained gradient transformer module tends to output input-independent (hence universal) gradients, benefiting from an under-fitting phenomenon. Thorough experiments demonstrate that our work significantly outperforms prior attacking algorithms (either image-dependent or universal ones) by an average improvement of 14.0% in cross-model transferability. We also verify that regionally homogeneous perturbations transfer well across different vision tasks (attacking with the semantic segmentation task and testing on the object detection task).
Deep neural networks are demonstrably vulnerable to adversarial examples, crafted by adding visually imperceptible perturbations to clean images, which poses a security threat when deploying commercial machine learning systems. To mitigate this, large efforts have been devoted to adversarial defense [7, 13, 36, 43, 44, 49, 52], via adversarial training [32, 37, 56, 62], input transformation [11, 21], or randomization [34, 61].
The focus of this work is to attack defense models, especially in the black-box setting, where models' architectures and parameters remain unknown to attackers. In this case, adversarial examples generated for one model, which possess the property of "transferability", may also be misclassified by other models. To the best of our knowledge, learning transferable adversarial examples for attacking defense models is still an open problem.
Our work stems from the observation of regional homogeneity in adversarial perturbations in the white-box setting. As Figure 1(a) shows, we plot the adversarial perturbations generated by attacking a naturally trained ResNet-152 model (top) and a representative defense model (i.e., an adversarially trained model [37, 62]). It suggests that the patterns of the two kinds of perturbations are visually different. Concretely, the perturbations of defense models reveal a coarser level of granularity, and are more locally correlated and more structured than those of the naturally trained model. The observation also holds when attacking different defense models (e.g., adversarial training with feature denoising, see Figure 1(b)), generating different types of adversarial examples (image-dependent perturbations or universal ones, see Figure 1(c)), or testing on different data domains (CT scans from the NIH pancreas segmentation dataset, see Figure 1(d)).
Motivated by this observation, we suggest that regionally homogeneous perturbations are strong in attacking defense models, which is especially helpful for learning transferable adversarial examples in the black-box setting. Hence, we propose to transform existing perturbations (those derived from differentiating naturally trained models) into regionally homogeneous ones. To this end, we develop a novel transforming paradigm (see Figure 2) to craft regionally homogeneous perturbations, and accordingly a gradient transformer module (see Figure 3) to encourage local correlations within pre-defined regions.
The proposed gradient transformer module is quite light-weight, with only 12 + 2K trainable parameters in total, where the convolutional layer (bias enabled) incurs 12 parameters, the per-region scale and shift incur 2K parameters, and K is the number of region partitions. According to our experiments, this leads to under-fitting (large bias and small variance) if the module is trained with a large number of images. In general vision tasks, an under-fitting model is undesirable. However, in our case, once the gradient transformer module becomes quasi-input-independent (i.e., the aforementioned large bias and small variance), it outputs a nearly fixed pattern whatever the input is. Our work is then endowed with a desirable property: seemingly training to generate image-dependent perturbations, we actually obtain universal ones. We note that our mechanism differs from other universal adversarial generation methods [38, 42], as we do not explicitly force the perturbation to be universal.
Comprehensive experiments are conducted to verify the effectiveness of the proposed regionally homogeneous perturbation (RHP). Under the black-box setting, RHP successfully attacks 9 latest defenses [21, 26, 32, 37, 56, 61, 62] and improves the top-1 error rates by 14.0% on average; three of them are top submissions in the NeurIPS 2017 defense competition and the Competition on Adversarial Attacks and Defenses 2018. Compared with state-of-the-art attack methods, RHP not only outperforms universal adversarial perturbations (e.g., UAP and GAP), but also outperforms image-dependent perturbations (FGSM, MIM, and DIM [14, 63]). The improvement over image-dependent perturbations is especially valuable, as image-dependent perturbations generally perform better by exploiting information from the original images. Being universal, RHP is also more general (the perturbation is not tied to the target image), more efficient (no additional computation at attack time), and more flexible (e.g., without knowing the target image, one can stick a pattern on a camera lens to attack artificial intelligence surveillance cameras).
Moreover, we also evaluate the cross-task transferability of RHP and demonstrate that RHP generalizes well in cross-task attacks, i.e., attacking with the semantic segmentation task and testing on the object detection task.
In the black-box setting, attackers cannot access the target model. A typical solution is to generate adversarial examples with strong transferability. Szegedy et al. first discuss the transferability of adversarial examples, i.e., that the same input can successfully attack different models. Taking advantage of transferability, Papernot et al. [41, 40] examine constructing a substitute model to attack a black-box target model. Liu et al. extend the black-box attack to a large scale and successfully attack an online image classification system, clarifai.com. Based on one of the most well-known attack methods, the Fast Gradient Sign Method (FGSM), and its iteration-based version (I-FGSM), Dong et al., Zhou et al., Xie et al., and Li et al. improve transferability by adopting a momentum term, perturbation smoothing, input transformation, and model augmentation, respectively. [4, 42, 60] also suggest training generative models for creating adversarial examples.
Besides transfer-based attacks, query-based [5, 9] and decision-based [1, 20] attacks also belong to the field of black-box attack. However, these families of methods need to access model outputs and query the target model a large number of times. For example, Guo et al. limit the search space to a low-frequency domain and fool the Google Cloud Vision platform with an unprecedentedly small budget of 1000 model queries.
The above are all image-dependent perturbation attacks. Moosavi-Dezfooli et al. craft universal perturbations, which can be directly added to any test image to fool the classifier with a high success rate. Poursaeed et al. propose to train a neural network to generate adversarial examples by explicitly feeding random noise to the network during training; after obtaining a well-trained model, they use a fixed input to generate universal adversarial perturbations. Researchers also explore producing universal adversarial perturbations by different methods [27, 39] or on different tasks [24, 42]. All these methods construct universal adversarial perturbations explicitly or data-independently. Unlike them, we provide an implicit, data-driven alternative to generate universal adversarial perturbations.
Several defenses break transferability by applying input transformations such as random padding/resizing, JPEG compression, and total variance minimization. Injecting adversarial examples during training improves the robustness of deep neural networks, termed adversarial training; these adversarial examples can be pre-generated [32, 56] or generated on-the-fly during training [26, 37, 62]. Adversarial training has also been applied to universal adversarial perturbations [2, 50]. Tsipras et al. suggest that, for an adversarially trained model, loss gradients in the input space align well with human perception. Shafahi et al. make a similar observation on universally adversarially trained models.
To induce regionally homogeneous perturbations, our work resorts to a new normalization strategy. This strategy appears similar to existing normalization techniques, such as batch normalization, layer normalization, instance normalization, and group normalization. While these techniques aim to help models converge faster and speed up the learning procedure for different tasks, the goal of our work is to explicitly enforce the region structure and build homogeneity within regions.
As shown in Section 1, regionally homogeneous perturbations appear to be strong in attacking defense models. To acquire regionally homogeneous adversarial examples, we propose a gradient transformer module to generate regionally homogeneous perturbations from existing regionally non-homogeneous perturbations (e.g., the perturbations in the top row of Figure 1). In the following, we detail the transforming paradigm in Section 3.1 and the core component, called the gradient transformer module, in Section 3.2. In Section 3.3, we observe an under-fitting phenomenon and illustrate that the proposed gradient transformer module becomes quasi-input-independent, which benefits crafting universal adversarial perturbations.
To learn regionally homogeneous adversarial perturbations, we propose to use a shallow network, which we call the gradient transformer module, to transform the gradients generated by attacking naturally trained models.
Concretely, we consider the Fast Gradient Sign Method (FGSM), which generates adversarial examples by

x^{adv} = x + ε · sign(∇_x J(x, y)),    (1)

where J is the loss function of the model, sign(·) denotes the sign function, and y is the ground-truth label of the original image x. FGSM ensures that the generated adversarial example x^{adv} lies within the ε-ball of x in the ℓ∞ space.
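As a concrete illustration, the FGSM step in Eqn. (1) can be sketched in a few lines of NumPy, assuming the loss gradient has already been computed by an autodiff framework (the function name and toy values below are illustrative):

```python
import numpy as np

def fgsm(x, grad, eps):
    """One FGSM step: x_adv = x + eps * sign(grad), clipped to valid pixels.

    x    : clean image with values in [0, 1]
    grad : loss gradient w.r.t. x (computed elsewhere, e.g. by autodiff)
    eps  : maximum l_inf perturbation budget
    """
    x_adv = x + eps * np.sign(grad)
    # the sign step already guarantees ||x_adv - x||_inf <= eps;
    # clipping only keeps x_adv a valid image
    return np.clip(x_adv, 0.0, 1.0)

x = np.full((2, 2), 0.5)                 # toy 2x2 "image"
g = np.array([[0.3, -0.7], [0.0, 1.2]])  # toy loss gradient
adv = fgsm(x, g, eps=16 / 255)
```

Each pixel moves by exactly ±ε in the direction that increases the loss (pixels with zero gradient stay put), which is what keeps the perturbation inside the ε-ball.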
Based on FGSM, we build pixel-wise connections via an additional gradient transformer module T(·; θ), so that we may obtain regionally homogeneous perturbations. Eqn. (1) therefore becomes

x^{adv} = x + ε · sign(T(∇_x J(x, y); θ)),    (2)

where θ denotes the trainable parameters of the gradient transformer module T, and we omit θ where possible for simplicity. The challenge we now face is how to train the gradient transformer module with the limited supervision. We address this by proposing a new transforming paradigm, illustrated in Figure 2. It consists of four steps: 1) compute the gradient g by attacking the naturally trained model; 2) obtain the transformed gradient T(g) via the gradient transformer module; 3) construct the adversarial image by adding the transformed perturbation to the clean image x, forward it through the same model, and obtain the classification loss; and 4) freeze the clean image x and the model, and update the parameters θ of T by maximizing the loss. The last step is implemented via stochastic gradient ascent (specifically, we use the Adam optimizer in our experiments).
With the new transforming paradigm, one can embed desirable properties via the gradient transformer module T and, in the meantime, keep a high error rate on the attacked model. As we will show below, T is customized to generate regionally homogeneous perturbations specifically against defense models. Meanwhile, since we freeze most of the computation graph and leave only a small number of parameters (those of T, i.e., θ) to optimize, the learning procedure is very fast.
With the aforementioned transforming paradigm, we introduce the architecture of the core module, termed the gradient transformer module. The gradient transformer module aims to increase the correlation of pixels in the same region, thereby inducing regionally homogeneous perturbations. As Figure 3 shows, given a loss gradient g as the input, the gradient transformer module is defined as

T(g; θ) = Conv(RN(g)) + g,    (3)

where Conv is a convolutional layer and RN is the newly proposed region norm layer. The module parameters θ go to the region norm layer (γ and β below) and the convolutional layer. A residual connection is also incorporated: since the convolution kernel is initialized as zero, the residual connection allows us to insert the gradient transformer module into any gradient-based attack method without breaking its initial behavior (i.e., the transformed gradient initially equals g). Since the initial gradient is already able to craft stronger adversarial examples (compared with random noise), the gradient transformer module thus starts from a proper initialization.
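A minimal NumPy sketch of this architecture for a single RGB gradient map is given below. We assume a 1×1 channel-mixing convolution (a 3×3 weight matrix plus 3 biases, i.e., 12 parameters, consistent with the count given in Section 1), and stub the region norm layer out as the identity; this illustrates the zero-initialized residual design rather than the full module:

```python
import numpy as np

class GradientTransformer:
    """Sketch of T(g) = Conv(RN(g)) + g for one RGB gradient map g.

    The 1x1 convolution mixes the 3 color channels (3x3 weights plus
    3 biases = 12 parameters) and is zero-initialized, so the module
    starts out as the identity map.
    """

    def __init__(self):
        self.w = np.zeros((3, 3))   # (out_channel, in_channel), zero init
        self.b = np.zeros(3)

    def conv1x1(self, x):
        # apply the channel-mixing matrix independently at every pixel
        return np.einsum('oi,ihw->ohw', self.w, x) + self.b[:, None, None]

    def __call__(self, g, region_norm=lambda x: x):
        # region_norm is a stand-in for the RN layer (identity here)
        return self.conv1x1(region_norm(g)) + g

T = GradientTransformer()
g = np.random.default_rng(1).normal(size=(3, 4, 4))
out = T(g)   # equals g exactly, thanks to the zero-initialized branch
```

Because the learned branch outputs zero at initialization, inserting the module into an existing attack leaves that attack's behavior unchanged until training moves the parameters away from zero.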
The region norm layer consists of two parts: a region split function and a region norm operator.
A region split function splits an image (or, equivalently, a convolutional feature map) into regions. Let r(·) denote the region split function: its input is a pixel coordinate and its output is the index of the region to which the pixel belongs. With a region split function, we obtain a partition of an image into K disjoint regions S_1, ..., S_K.
In Figure 4, we show four representative region split functions on a toy image: 1) vertical partition, 2) horizontal partition, 3) grid partition, and 4) slash partition (stripes parallel to an increasing line with slope 0.5).
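These four partitions can be written as small index functions mapping a pixel coordinate (h, w) to a region index; the exact stripe widths below are illustrative choices, not the paper's configuration:

```python
import numpy as np

def vertical_split(h, w, H, W, k):
    # k vertical stripes: the region index depends only on the column
    return min(w * k // W, k - 1)

def horizontal_split(h, w, H, W, k):
    # k horizontal stripes: the region index depends only on the row
    return min(h * k // H, k - 1)

def grid_split(h, w, H, W, k):
    # k x k grid: row-band index * k + column-band index
    return horizontal_split(h, w, H, W, k) * k + vertical_split(h, w, H, W, k)

def slash_split(h, w, H, W, k):
    # k stripes parallel to a line of slope 0.5 (h - 0.5*w is constant
    # along each stripe); scale by 2 to stay in integer arithmetic
    d = 2 * h - w
    d_min, d_max = -(W - 1), 2 * (H - 1)
    return (d - d_min) * k // (d_max - d_min + 1)

# region map of a 2x2 grid partition on an 8x8 image
region_map = np.array([[grid_split(h, w, 8, 8, 2) for w in range(8)]
                       for h in range(8)])
```

Every pixel then inherits the statistics (and hence the perturbation pattern) of its stripe or cell, which is exactly the structure the region norm layer exploits.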
The region norm operator links pixels within the same region S_k, and is defined as

y_i = γ_k · (x_i − μ_k) / σ_k + β_k,    (4)

where x_i and y_i are the i-th input and output, respectively, and i = (n, c, h, w) is a 4D vector indexing the features in (N, C, H, W) order, where N is the batch axis, C is the channel axis, and H and W are the spatial height and width axes. We define S_k as the set of pixels that belong to region k, that is, S_k = {i | r(h_i, w_i) = k}. μ_k and σ_k in Eqn. (4) are the mean and standard deviation (std) of the region, computed by

μ_k = (1/m) Σ_{i∈S_k} x_i,    σ_k = sqrt( (1/m) Σ_{i∈S_k} (x_i − μ_k)² + const ),    (5)

where const is a small constant for numerical stability, and m = |S_k| is the size of S_k (here |·| is the cardinality of a given set). In the testing phase, the moving mean and moving std accumulated during training are used instead. Since we split the image into K regions, the trainable scale γ and shift β in Eqn. (4) are also computed per region.
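A minimal NumPy sketch of the forward pass of the region norm operator, with statistics taken over the batch, channel, and all pixels of a region as in Eqns. (4) and (5) (names are illustrative):

```python
import numpy as np

def region_norm(x, region_map, gamma, beta, const=1e-5):
    """Forward pass of the region norm operator.

    x          : features of shape (N, C, H, W)
    region_map : (H, W) integer array, region_map[h, w] = region index k
    gamma,beta : per-region scale and shift, each of shape (K,)
    """
    y = np.empty_like(x, dtype=float)
    for k in range(region_map.max() + 1):
        mask = region_map == k                  # pixels of region S_k
        vals = x[:, :, mask]                    # all N*C*|S_k| features
        mu, var = vals.mean(), vals.var()
        y[:, :, mask] = gamma[k] * (vals - mu) / np.sqrt(var + const) + beta[k]
    return y

# toy example: split a feature map into left/right halves
region_map = np.zeros((4, 6), dtype=int)
region_map[:, 3:] = 1
x = np.random.default_rng(0).normal(size=(2, 3, 4, 6))
y = region_norm(x, region_map, gamma=np.ones(2), beta=np.zeros(2))
```

With γ = 1 and β = 0, each region of the output has (approximately) zero mean and unit variance, so all pixels in a region share the same statistics; this is what builds the within-region homogeneity.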
We illustrate the region norm operator in Figure 4(e). To analyze its benefit, we compute the derivatives as

∂ℓ/∂γ_k = Σ_{i∈S_k} (∂ℓ/∂y_i) · x̂_i,    ∂ℓ/∂β_k = Σ_{i∈S_k} ∂ℓ/∂y_i,

∂ℓ/∂x_i = (γ_k / σ_k) · [ ∂ℓ/∂y_i − (1/m) Σ_{j∈S_k} ∂ℓ/∂y_j − (x̂_i / m) Σ_{j∈S_k} (∂ℓ/∂y_j) · x̂_j ],    (6)

where ℓ is the loss to optimize and x̂_i = (x_i − μ_k)/σ_k is the normalized input. It is not surprising that the gradients of γ_k and β_k are computed from all pixels in the related region. However, the gradient of a pixel with index i is also computed from all pixels in the same region. More significantly, in Eqn. (6) the second term, −(1/m) Σ_{j∈S_k} ∂ℓ/∂y_j, and the summation in the third term are shared by all pixels in the same region. Therefore, the pixel-wise connections within the same region are much denser after inserting the region norm layer.
Compared with existing normalizations (e.g., Batch Norm, Layer Norm, Instance Norm, and Group Norm), there are two main differences: 1) the goal of Region Norm is to generate regionally homogeneous perturbations, while existing methods mainly aim to stabilize and speed up training; 2) Region Norm splits an image into regions and normalizes each region individually, while the other methods have no such spatial operation.
By analyzing the magnitudes of the four probes in Figure 3, we observe that, in a well-trained gradient transformer module, the learned residual branch overwhelms the identity term and the output of the convolutional layer varies little with the input (more results in Section 4.2). Consequently, such a well-trained module becomes quasi-input-independent, i.e., the output is nearly fixed no matter what the input is. Note that the output is still slightly related to the input, which is why we use the prefix "quasi-".
Here, we first build the connection between that observation and under-fitting to explain the reason behind it. Then, we convert the quasi-input-independent module into an input-independent one in order to generate universal adversarial perturbations.
It is well known that there is a trade-off between the bias and variance of a model, i.e., the price for achieving a small bias is a large variance, and vice versa [6, 22]. Under-fitting occurs when the model shows low variance (but inevitable bias). An extremely low-variance function gives a nearly fixed output whatever the input, which we term quasi-input-independent. Although in most machine learning situations this behavior is undesirable, a quasi-input-independent function is exactly what we want for generating universal adversarial perturbations.
Therefore, to encourage under-fitting, we go in the opposite direction of the usual suggestions for preventing it. On the one hand, to minimize the model capacity, our gradient transformer module has only 12 + 2K parameters, where the convolutional layer (bias enabled) incurs 12 parameters, the per-region scale and shift incur 2K parameters, and K is the number of region partitions. On the other hand, we use a large training set (5k images or more), so that the model capacity is relatively small in comparison. We then obtain a quasi-input-independent module.
According to the analysis above, we already have a quasi-input-independent module. To generate a universal adversarial perturbation, we follow the post-processing strategy of Poursaeed et al. and use a fixed vector as the input of the module. Then, following FGSM, the final universal perturbation is ε · sign(T(z)), where z is the fixed input. Recall that sign(·) denotes the sign function and T denotes the gradient transformer module.
In this section, we demonstrate the effectiveness of the proposed regionally homogeneous perturbation (RHP) by attacking a series of defense models.
We randomly select a set of images from the validation set of ILSVRC 2012 to evaluate the transferability of attack methods.
As for the evaluation metric, we use the increase of the top-1 error rate after attacking, i.e., the difference between the error rate on adversarial images and that on clean images.
For performance comparison, we reproduce five representative attack methods: the fast gradient sign method (FGSM), the momentum iterative fast gradient sign method (MIM), the momentum diverse inputs iterative fast gradient sign method (DIM) [14, 63], universal adversarial perturbations (UAP), and the universal version of generative adversarial perturbations (GAP). Unless specified otherwise, we follow the default parameter setup of each method.
To keep the perturbation quasi-imperceptible, we generate adversarial examples within the ε-ball of the original images in the ℓ∞ space; two maximum perturbation budgets ε are considered. The adversarial examples are generated by attacking a naturally trained network: Inception v3 (IncV3), Inception v4 (IncV4), or Inception ResNet v2 (IncRes). In the default setting, IncV3 and the smaller budget are used.
As our method is designed to attack defense models, we reproduce nine defense methods for performance evaluation, including input transformation through total variance minimization (TVM), the high-level representation guided denoiser (HGD), input transformation through random resizing and padding (R&P), three ensemble adversarially trained models (Incens3, Incens4, and IncResens), adversarial training with a projected gradient descent white-box attacker (PGD) [37, 62], adversarial logits pairing (ALP), and the feature denoising adversarially trained ResNeXt-101 (FD).
Among them, HGD  and R&P  are the rank-1 submission and rank-2 submission in the NeurIPS 2017 defense competition , respectively. FD  is the rank-1 submission in the Competition on Adversarial Attacks and Defenses 2018. The top-1 error rates of these methods on our dataset are shown in Table 1.
To train the gradient transformer module, we randomly select another 5k images from the validation set of ILSVRC 2012 as the training set. Note that the training set and the testing set are disjoint.
For the region split function, we fix one partition as the default choice and discuss different region split functions in Section 4.4. We train the gradient transformer module for 50 epochs. When testing, we use a zero array as the fixed input of the gradient transformer module to obtain universal adversarial perturbations.
To verify the connection between under-fitting and universal adversarial perturbations, we change the number of training images so that the model is, or is not, driven into under-fitting (the model capacity becomes relatively low compared to a large dataset). Specifically, we select 4, 5k, or 45k images from the validation set of ILSVRC 2012 as the training set. We insert four probes in the gradient transformer module, as shown in Figure 3, and compare their values in Figure 5 with respect to the training iterations.
When the gradient transformer module is well trained with 5k or 45k images, we observe that: 1) the output of the residual learning branch overwhelms the identity term, indicating the learned branch dominates the final output; and 2) the variation caused by the input gradient g is overwhelmed, indicating the output of the convolutional layer is only weakly related to g. Based on these two observations, we conclude that the gradient transformer module is quasi-input-independent when under-fitted by a large number of training images. Such a property is beneficial for generating universal adversarial perturbations (see Section 3.3).
When the number of training images is limited (say, 4 images), the output of the convolutional layer remains clearly related to the input gradient g, since a small training set cannot lead to under-fitting.
This conclusion is further supported by Figure 5(c): when training with 4 images, the performance gap between universal inference (using a fixed zero as the input of the gradient transformer module) and image-dependent inference (using the loss gradient as the input) is quite large. The gap shrinks when more data are used for training. Figure 5(c) also illustrates that 5k images are enough for under-fitting; hence, we set the size of the training set to 5k in the following experiments.
To provide a better understanding of our implicit universal adversarial perturbation generating mechanism, we present an ablation study comparing our method with three other strategies for generating universal adversarial perturbations under the same region split function: 1) RP: Randomly assign the Perturbation as +ε or −ε for each region; 2) OP: iteratively Optimize the Perturbation to maximize the classification loss on the naturally trained model; 3) TU: explicitly Train a Universal adversarial perturbation. The only difference between TU and our proposed RHP is that random noise takes the place of the loss gradient g in Figure 2 and is fed to the gradient transformer module. RHP is our implicit method; its gradient transformer module becomes quasi-input-independent without taking random noise as the training input.
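For instance, the RP baseline is straightforward to sketch: draw one random sign per region and broadcast it over the region map (a hypothetical helper, shown with an illustrative 2×2 grid partition):

```python
import numpy as np

def random_regional_perturbation(region_map, eps, rng):
    """RP baseline: +eps or -eps drawn uniformly at random per region."""
    k = region_map.max() + 1
    signs = rng.choice([-1.0, 1.0], size=k)   # one sign per region
    return eps * signs[region_map]            # broadcast over the map

# illustrative 2x2 grid partition of an 8x8 image
region_map = np.repeat(np.repeat(np.arange(4).reshape(2, 2), 4, axis=0),
                       4, axis=1)
p = random_regional_perturbation(region_map, eps=16 / 255,
                                 rng=np.random.default_rng(0))
```

The result is regionally homogeneous by construction, but unlike RHP it carries no information from model gradients, which is why it serves as a lower bound in the ablation.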
We evaluate the above four settings on IncResens; among RP, OP, TU, and RHP, our method yields the largest increase in error rate. Since our implicit method has a proper initialization (discussed in Section 3.2), we observe that it constructs stronger universal adversarial perturbations.
We first conduct the comparison in Table 2 under the two maximum perturbation budgets.
A first glance shows that, compared with the other representatives, the proposed RHP provides a much stronger attack against defenses; for example, when attacking HGD, RHP outperforms FGSM, MIM, DIM [14, 63], UAP, and GAP by clear margins. Second, universal methods generally perform worse than image-dependent methods, as the latter can access and utilize information from the clean images. Nevertheless, RHP, as a universal method, still beats those image-dependent methods by a large margin. At last, we observe that our method gains more when the maximum perturbation becomes larger.
(Fragment of Table 2: error-rate increase (%) of DIM under the two perturbation budgets.)
|Method||TVM||HGD||R&P||Incens3||Incens4||IncResens||PGD||ALP||FD|
|DIM [14, 63]||21.9/41.0||11.9/32.1||12.0/21.9||16.7/26.1||16.2/25.0||10.8/19.6||1.84/7.70||15.5/24.7||1.34/8.22|
The performance comparison is also conducted when generating adversarial examples by attacking IncV4 or IncRes. Here we do not report the performance of GAP, because the official code does not support generating adversarial examples with IncV4 or IncRes. As shown in Table 3 and Table 4, RHP remains strong against defense models. Meanwhile, it should be mentioned that when the model used for generating adversarial perturbations changes, RHP still generates universal adversarial examples; the only difference is that the gradients used in the training phase change, which then leads to a different set of parameters in the gradient transformer module.
(Fragment of Table 3, source model IncV4.)
|Method||TVM||HGD||R&P||Incens3||Incens4||IncResens||PGD||ALP||FD|
|DIM [14, 63]||22.7/42.9||16.3/37.1||14.7/25.0||18.7/28.6||17.9/26.5||13.6/22.1||1.82/8.02||15.2/24.8||1.62/8.74|
(Fragment of Table 4, source model IncRes.)
|Method||TVM||HGD||R&P||Incens3||Incens4||IncResens||PGD||ALP||FD|
|DIM [14, 63]||24.6/44.0||23.7/44.1||22.5/34.5||25.8/37.1||22.4/33.7||20.2/32.5||2.36/9.26||12.6/25.9||1.78/10.1|
In this section, we discuss the choice of region split functions, i.e., vertical partition, horizontal partition, grid partition, and slash partition (parallel to an increasing line with slope 0.5). Figure 6 shows the transferability to the defenses, demonstrating that different region split functions are almost equally effective and all are stronger than our strongest baseline (DIM). Moreover, we observe an interesting phenomenon, as presented in Figure 7.
In each row of Figure 7, we exhibit the universal adversarial perturbation generated with a certain region split function, followed by the top-3 categories to which the generated adversarial examples are most likely to be misclassified. For each category, we show a clean image as an exemplar. Note that our experiments concern the non-targeted attack, so the target class is undetermined and relies solely on the region split function.
As can be seen, the regionally homogeneous perturbations with different region split functions seem to target different categories, with an inherent connection in the low-level cues (e.g., texture, shape) they share. For example, when using the grid partition, the top-3 target categories are quilt, shower curtain, and container ship, and one can observe that images in these three categories generally have grid-structured patterns.
Motivated by these qualitative results, we form a preliminary hypothesis that regionally homogeneous perturbations tend to attack the low-level part of a model. The claim is not supported by a theoretical proof; however, it inspires us to test the cross-task transferability of RHP. As sharing the low-level CNN architecture/information is a common strategy in multi-task learning systems, we conjecture that RHP can transfer well between different tasks (see below).
To demonstrate the cross-task transferability of RHP, we attack with the semantic segmentation task and test on the object detection task.
In more detail, we attack a semantic segmentation model (an Xception-65 based DeepLab-v3+) on the Pascal VOC 2012 segmentation val set and obtain the adversarial examples. Then, we take a VGG16-based Faster R-CNN model, trained on MS COCO and VOC 2007 trainval, as the testing model. To avoid testing on images that occur in the training set of the detection model, the testing set is the union of VOC 2012 segmentation val and VOC 2012 detection trainval, with images from the VOC 2007 dataset removed. The baseline performance on clean images is measured in mAP, where the mAP score is the average of the precisions at different recall values.
As shown in Table 5, RHP reports the lowest mAP on object detection, which demonstrates stronger cross-task transferability than the baseline image-dependent perturbations, i.e., FGSM, MIM, and DIM [14, 63].
By white-box attacking naturally trained models and defense models, we observe the regional homogeneity of adversarial perturbations. Motivated by this observation, we propose a transforming paradigm and a gradient transformer module to generate the regionally homogeneous perturbation (RHP) specifically for attacking defenses. RHP possesses three merits: 1) transferable: we demonstrate that RHP transfers well across different models (i.e., black-box attack) and different tasks; 2) universal: taking advantage of the under-fitting of the gradient transformer module, RHP generates universal adversarial examples without explicitly enforcing the learning procedure towards universality; 3) strong: RHP successfully attacks representative defenses and outperforms the state-of-the-art attack methods by a large margin.
Recent studies [32, 62] show that the mechanism of some defense models can be interpreted as a "denoising" procedure. Since RHP is less noise-like than other perturbations, it would be interesting to reveal the properties of RHP from a denoising perspective in future work. Meanwhile, although evaluated with the non-targeted attack, RHP is expected to be a strong targeted attack as well, which requires further exploration and validation.
In this section, we study the effect of K, the number of region partitions. Specifically, we split the image into varying numbers of regions and show the learned universal perturbations in Figure 8. Due to the limitation of GPU memory, we do not study very large numbers of regions. The performance comparison is presented in Table 6. We observe that for stronger defenses (e.g., PGD and FD), the optimal value of K is relatively small. We explain this as follows: these strong defenses have a stronger ability to denoise (some work [32, 62] interprets the defense procedure as denoising), while perturbations with small K are less noise-like, thereby serving as strong perturbations against strong defenses.
|#regions||TVM||HGD||R&P||Incens3||Incens4||IncResens||PGD||ALP||FD|
Besides the quantitative results included in the main manuscript, we show some qualitative results in Figure 9. Specifically, we compare the perturbations generated by universal inference (using a fixed zero as the input of the gradient transformer module) and image-dependent inference (using the loss gradient as the input), via gradient transformer modules trained with different numbers of images. We arrive at the same conclusion as in the main manuscript, i.e., the gradient transformer module becomes quasi-input-independent when trained with a large number of images (e.g., 5k images or more).