Regional-Homogeneity
Source code for Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses
This paper focuses on learning transferable adversarial examples specifically against defense models (models designed to defend against adversarial attacks). In particular, we show that a simple universal perturbation can fool a series of state-of-the-art defenses. Adversarial examples generated by existing attacks are generally hard to transfer to defense models. We observe the property of regional homogeneity in adversarial perturbations and suggest that the defenses are less robust to regionally homogeneous perturbations. Therefore, we propose an effective transforming paradigm and a customized gradient transformer module to transform existing perturbations into regionally homogeneous ones. Without explicitly forcing the perturbations to be universal, we observe that a well-trained gradient transformer module tends to output input-independent gradients (hence universal), benefiting from the under-fitting phenomenon. Thorough experiments demonstrate that our work significantly outperforms the prior art attacking algorithms (either image-dependent or universal ones) by an average improvement of 14.0%. Beyond cross-model transferability, we also verify that regionally homogeneous perturbations can transfer well across different vision tasks (attacking with the semantic segmentation task and testing on the object detection task).
Deep neural networks have been demonstrated to be vulnerable to adversarial examples [55], crafted by adding visually imperceptible perturbations to clean images, which casts a security threat when deploying commercial machine learning systems. To mitigate this, considerable effort has been devoted to adversarial defense [7, 13, 36, 43, 44, 49, 52], via adversarial training [32, 37, 56, 62], input transformation [11, 21], or randomization [34, 61].

The focus of this work is to attack defense models, especially in the black-box setting where the models' architectures and parameters remain unknown to attackers. In this case, adversarial examples generated for one model, which possess the property of "transferability", may also be misclassified by other models. To the best of our knowledge, learning transferable adversarial examples for attacking defense models is still an open problem.
Our work stems from the observation of regional homogeneity in adversarial perturbations under the white-box setting. As Figure 1(a) shows, we plot the adversarial perturbations generated by attacking a naturally trained ResNet-152 [23] model (top) and a representative defense model (i.e., an adversarially trained model [37, 62]). It suggests that the patterns of the two kinds of perturbations are visually different. Concretely, the perturbations of defense models reveal a coarser level of granularity, and are more locally correlated and more structured than those of the naturally trained model. The observation also holds when attacking different defense models (e.g., adversarial training with feature denoising [62], see Figure 1(b)), generating different types of adversarial examples (image-dependent perturbations or universal ones [50], see Figure 1(c)), or testing on different data domains (CT scans from the NIH pancreas segmentation dataset [46], see Figure 1(d)).
Motivated by this observation, we suggest that regionally homogeneous perturbations are strong in attacking defense models, which is especially helpful for learning transferable adversarial examples in the black-box setting. Hence, we propose to transform existing perturbations (those derived from differentiating naturally trained models) into regionally homogeneous ones. To this end, we develop a novel transforming paradigm (see Figure 2) to craft regionally homogeneous perturbations, and accordingly a gradient transformer module (see Figure 3) to encourage local correlations within the pre-defined regions.
The proposed gradient transformer module is quite light-weight, with only $12 + 2K$ trainable parameters in total, where the convolutional layer (bias enabled) incurs 12 parameters, $K$ is the number of region partitions, and each region contributes one trainable scale and one shift. According to our experiments, training the module with a large number of images leads to under-fitting (large bias and small variance). In general vision tasks, an under-fitting model is undesirable. However, in our case, once the gradient transformer module becomes quasi-input-independent (i.e., the aforementioned large bias and small variance), it outputs a nearly fixed pattern whatever the input is. Our work is then endowed with a desirable property: we seemingly train to generate image-dependent perturbations, yet obtain universal ones. We note that our mechanism is different from other universal adversarial generation methods
[38, 42], as we do not explicitly force the perturbation to be universal.

Comprehensive experiments are conducted to verify the effectiveness of the proposed regionally homogeneous perturbation (RHP). Under the black-box setting, RHP successfully attacks 9 latest defenses [21, 26, 32, 37, 56, 61, 62] and improves the top-1 error rate by a large margin on average, where three of them are top submissions in the NeurIPS 2017 defense competition [29] and the Competition on Adversarial Attacks and Defenses 2018. Compared with state-of-the-art attack methods, RHP not only outperforms universal adversarial perturbations (i.e., UAP [38] and GAP [42]), but also outperforms image-dependent perturbations (FGSM [18], MIM [14], and DIM [14, 63]). The improvement over image-dependent perturbations is especially valuable, as image-dependent perturbations generally perform better because they utilize information from the original images. Since it is universal, RHP is also more general (the perturbation does not depend on the target image), more efficient (no additional computation is needed per image), and more flexible (e.g., without knowing the target image, one can stick a pattern on a camera lens to attack artificial intelligence surveillance cameras).
Moreover, we evaluate the cross-task transferability of RHP and demonstrate that RHP generalizes well in the cross-task attack, i.e., attacking with the semantic segmentation task and testing on the object detection task.
In the black-box setting, attackers cannot access the target model. A typical solution is to generate adversarial examples with strong transferability. Szegedy et al. [55] first discuss the transferability of adversarial examples, i.e., that the same input can successfully attack different models. Taking advantage of transferability, Papernot et al. [41, 40] examine constructing a substitute model to attack a black-box target model. Liu et al. [35] extend the black-box attack method to a large scale and successfully attack an online image classification system, clarifai.com. Based on one of the most well-known attack methods, the Fast Gradient Sign Method (FGSM) [18], and its iteration-based version (I-FGSM) [30], Dong et al. [14], Zhou et al. [64], Xie et al. [63], and Li et al. [31] improve transferability by adopting a momentum term, perturbation smoothing, input transformation, and model augmentation, respectively. Other works [4, 42, 60] also suggest training generative models for creating adversarial examples.
Besides transfer-based attacks, query-based [5, 9] and decision-based [1, 20] attacks also belong to the field of black-box attacks. However, these families of methods need to access model outputs and query the target models a large number of times. For example, Guo et al. [20] limit the search space to a low-frequency domain and fool the Google Cloud Vision platform with an unprecedented 1000 model queries.
The above are all image-dependent perturbation attacks. Moosavi-Dezfooli et al. [38] craft universal perturbations which can be directly added to any test image to fool the classifier with a high success rate. Poursaeed et al. [42] propose to train a neural network for generating adversarial examples by explicitly feeding random noise to the network during training. After obtaining a well-trained model, they use a fixed input to generate universal adversarial perturbations. Researchers also explore producing universal adversarial perturbations by different methods [27, 39] or on different tasks [24, 42]. All these methods construct universal adversarial perturbations explicitly or data-independently. Unlike them, we provide an implicit, data-driven alternative for generating universal adversarial perturbations.

To counter adversarial examples, several defenses break transferability by applying input transformations such as random padding/resizing
[61], JPEG compression [15], and total variance minimization [48]. Injecting adversarial examples during training improves the robustness of deep neural networks, a strategy termed adversarial training. These adversarial examples can be pre-generated [32, 56] or generated on-the-fly during training [26, 37, 62]. Adversarial training has also been applied to universal adversarial perturbations [2, 50]. Tsipras et al. [57] suggest that, for an adversarially trained model, loss gradients in the input space align well with human perception. Shafahi et al. [50] make a similar observation on universal adversarially trained models.

To induce regionally homogeneous perturbations, our work resorts to a new normalization strategy. This strategy appears similar to normalization techniques such as batch normalization [25], layer normalization [3], instance normalization [58], and group normalization [59]. While these techniques aim to help the model converge faster and speed up the learning procedure for different tasks, the goal of our work is to explicitly enforce the region structure and build homogeneity within regions.

As shown in Section 1, regionally homogeneous perturbations appear to be strong in attacking defense models. To acquire regionally homogeneous adversarial examples, we propose a gradient transformer module to generate regionally homogeneous perturbations from existing regionally non-homogeneous ones (e.g., the perturbations in the top row of Figure 1). In the following, we detail the transforming paradigm in Section 3.1 and the core component, the gradient transformer module, in Section 3.2. In Section 3.3, we observe an under-fitting phenomenon and illustrate that the proposed gradient transformer module becomes quasi-input-independent, which benefits crafting universal adversarial perturbations.
To learn regionally homogeneous adversarial perturbations, we propose to use a shallow network $\mathcal{T}$, which we call the gradient transformer module, to transform the gradients generated by attacking naturally trained models.
Concretely, we consider the Fast Gradient Sign Method (FGSM) [18], which generates adversarial examples by

$$x^{adv} = x + \epsilon \cdot \mathrm{sign}\big(\nabla_x L(x, y)\big), \tag{1}$$
where $L(\cdot, \cdot)$ is the loss function of the model $f$, $\mathrm{sign}(\cdot)$ denotes the sign function, and $y$ is the ground-truth label of the original image $x$. FGSM ensures that the generated adversarial example $x^{adv}$ lies within the $\epsilon$-ball of $x$ in the $\ell_\infty$ space.

Based on FGSM, we build pixel-wise connections via the additional gradient transformer module $\mathcal{T}$, so that we may obtain regionally homogeneous perturbations. Therefore, Eqn. (1) becomes
$$x^{adv} = x + \epsilon \cdot \mathrm{sign}\big(\mathcal{T}(\nabla_x L(x, y);\, \theta)\big), \tag{2}$$
where $\theta$ denotes the trainable parameters of the gradient transformer module $\mathcal{T}$, and we omit $\theta$ where possible for simplification. The challenge we are facing now is how to train the gradient transformer module with such limited supervision. We address this by proposing a new transforming paradigm, illustrated in Figure 2. It consists of four steps: 1) compute the loss gradient $g = \nabla_x L(x, y)$ by attacking the naturally trained model $f$; 2) obtain the transformed gradient $\mathcal{T}(g)$ via the gradient transformer module; 3) construct the adversarial image $x^{adv}$ by adding the transformed perturbation to the clean image $x$, forward it to the same model $f$, and obtain the classification loss $L(x^{adv}, y)$; and 4) freeze the clean image $x$ and the model $f$, and update the parameters $\theta$ of $\mathcal{T}$ by maximizing $L(x^{adv}, y)$. The last step is implemented via stochastic gradient ascent (e.g., we use the Adam optimizer [28] in our experiments).
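The four steps above can be written as a short PyTorch-style sketch. This is a minimal illustration rather than the authors' released implementation: the classifier `model`, the `transformer` module (sketched in Section 3.2 below), the data `loader`, the image range [0, 1], the budget `eps`, and the straight-through handling of the non-differentiable sign function are all assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def train_gradient_transformer(model, transformer, loader, eps=16 / 255, lr=1e-3, device="cuda"):
    """Train the gradient transformer module T by maximizing the classification
    loss of x_adv = x + eps * sign(T(grad_x L(x, y))) on a frozen, naturally
    trained classifier (steps 1-4 of the transforming paradigm)."""
    model.eval().to(device)
    transformer.train().to(device)
    for p in model.parameters():                      # step 4: the classifier stays frozen
        p.requires_grad_(False)
    opt = torch.optim.Adam(transformer.parameters(), lr=lr)

    for x, y in loader:
        x, y = x.to(device), y.to(device)

        # 1) loss gradient w.r.t. the clean image (the plain FGSM direction)
        x_req = x.clone().requires_grad_(True)
        g = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]

        # 2) transform the gradient with T
        t = transformer(g.detach())

        # 3) craft the adversarial image; sign() has zero gradient almost everywhere,
        #    so a straight-through estimator (an assumption of this sketch) lets
        #    gradients reach T's parameters
        s = t + (torch.sign(t) - t).detach()
        x_adv = (x + eps * s).clamp(0, 1)

        # 4) update only T's parameters by gradient ascent on the classification loss
        adv_loss = F.cross_entropy(model(x_adv), y)
        opt.zero_grad()
        (-adv_loss).backward()                         # ascent = descent on the negative loss
        opt.step()
```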
With the new transforming paradigm, one can potentially embed desirable properties via the gradient transformer module $\mathcal{T}$ and, in the meantime, keep a high error rate on the model $f$. As we show below, $\mathcal{T}$ is customized to generate regionally homogeneous perturbations specifically against defense models. Meanwhile, since we freeze most of the computation graph and leave only a limited number of parameters (i.e., $\theta$) to optimize, the learning procedure is very fast.
With the aforementioned transforming paradigm, we now introduce the architecture of the core module, termed the gradient transformer module. The gradient transformer module aims at increasing the correlation of pixels within the same region, therefore inducing regionally homogeneous perturbations. As Figure 3 shows, given a loss gradient $g$ as the input, the gradient transformer module is defined as
$$\mathcal{T}(g;\, \theta) = g + \mathrm{RN}\big(\mathrm{Conv}(g)\big), \tag{3}$$
where $\mathrm{Conv}(\cdot)$ is a convolutional layer and $\mathrm{RN}(\cdot)$ is the newly proposed region norm layer. The module parameters $\theta$ go to the region norm layer (the scale $\gamma$ and shift $\beta$ below) and the convolutional layer. A residual connection [23] is also incorporated. Since $\gamma$ is initialized as zero [19], the residual connection allows us to insert the gradient transformer module into any gradient-based attack method without breaking its initial behavior (i.e., the transformed gradient initially equals $g$). Since the initial gradient is able to craft stronger adversarial examples (compared with random noise), the gradient transformer module has a proper initialization.

The region norm layer consists of two parts, a region split function and a region norm operator.
The region split function splits an image (or equivalently, a convolutional feature map) into regions. Let $r(\cdot)$ denote the region split function. The input of $r$ is a pixel coordinate, while the output is the index of the region to which the pixel belongs. With a region split function, we obtain a partition of an image into $K$ regions $\{R_1, \dots, R_K\}$, where $K$ is the number of regions.
In Figure 4, we show four representative region split functions on a toy image: 1) vertical partition, 2) horizontal partition, 3) grid partition, and 4) slash partition (stripes parallel to an increasing line with slope equal to 0.5).
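As a concrete illustration, a region split function can be realized as an integer index map over pixel coordinates. The sketch below (NumPy; the cell/stripe widths and the slash orientation are illustrative assumptions, not the paper's exact settings) shows a grid partition and a slash partition.

```python
import numpy as np

def grid_partition(h, w, cell=50):
    """Grid partition: pixel (i, j) is assigned to the cell of a `cell` x `cell`
    lattice that contains it; every cell is one region."""
    rows = np.arange(h)[:, None] // cell              # block row index
    cols = np.arange(w)[None, :] // cell              # block column index
    n_col_blocks = -(-w // cell)                      # ceil(w / cell)
    return rows * n_col_blocks + cols                 # unique index per grid cell

def slash_partition(h, w, stripe=50):
    """Slash partition: diagonal stripes parallel to a line of slope 0.5, i.e.
    pixels with similar values of (2*i - j) share a region; the offset `w`
    keeps indices nonnegative (orientation depends on the coordinate convention)."""
    ii = np.arange(h)[:, None]
    jj = np.arange(w)[None, :]
    return (2 * ii - jj + w) // stripe

region_map = grid_partition(299, 299)                 # Inception-sized input
print(region_map.shape, int(region_map.max()) + 1)    # (299, 299) and the number of regions K
```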
With such a partition, the per-region statistics below are computed for each sample in a batch, forming an $N \times K$-dimensional array, where $N$ is the batch size.

The region norm operator links pixels within the same region and is defined as
$$\hat{g}_i = \gamma_k \cdot \frac{g_i - \mu_k}{\sigma_k} + \beta_k, \qquad k = r(h, w), \tag{4}$$
where $g_i$ and $\hat{g}_i$ are the $i$-th input and output, respectively. Here $i = (n, c, h, w)$ is a 4D index over the features in $(N, C, H, W)$ order, where $N$ is the batch axis, $C$ is the channel axis, and $H$ and $W$ are the spatial height and width axes. We define $S_k$ as the set of pixels that belong to the region $R_k$, that is, $S_k = \{\, i = (n, c, h, w) \mid r(h, w) = k \,\}$.
$\mu_k$ and $\sigma_k$ in Eqn. (4) are the mean and standard deviation (std) of the $k$-th region (we omit the sample index for brevity), computed by

$$\mu_k = \frac{1}{|S_k|} \sum_{i \in S_k} g_i, \qquad \sigma_k = \sqrt{\frac{1}{|S_k|} \sum_{i \in S_k} (g_i - \mu_k)^2 + \mathrm{const}}, \tag{5}$$
where $\mathrm{const}$ is a small constant for numerical stability, and $|S_k|$ is the size of $S_k$, i.e., the cardinality of the set. In the testing phase, the moving mean and moving std accumulated during training are used instead. Since we split the image into $K$ regions, the trainable scale $\gamma$ and shift $\beta$ in Eqn. (4) are also defined per region.
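A sketch of the region norm layer of Eqns. (4) and (5) as a PyTorch module is given below. It assumes a precomputed integer region map (e.g., from a region split function above) and, for simplicity, computes the statistics per sample over all channels and pixels of a region; moving statistics for test time and other implementation details are omitted.

```python
import torch
import torch.nn as nn

class RegionNorm(nn.Module):
    """Normalize each pre-defined region with its own mean/std and apply a
    per-region scale gamma_k and shift beta_k, cf. Eqns. (4) and (5)."""
    def __init__(self, region_map, const=1e-5, zero_init_gamma=True):
        super().__init__()
        self.register_buffer("regions", torch.as_tensor(region_map, dtype=torch.long))  # (H, W) indices
        self.K = int(self.regions.max()) + 1                         # number of regions K
        init = 0.0 if zero_init_gamma else 1.0                       # gamma = 0 keeps T(g) = g at init
        self.gamma = nn.Parameter(torch.full((self.K,), init))
        self.beta = nn.Parameter(torch.zeros(self.K))
        self.const = const

    def forward(self, x):                                            # x: (N, C, H, W)
        out = torch.zeros_like(x)
        for k in range(self.K):                                      # explicit loop, kept for clarity
            mask = self.regions == k                                 # (H, W) boolean mask of S_k
            xk = x[..., mask]                                        # (N, C, |S_k|) pixels of region k
            mu = xk.mean(dim=(1, 2), keepdim=True)                   # per-sample region mean
            var = xk.var(dim=(1, 2), unbiased=False, keepdim=True)   # per-sample region variance
            out[..., mask] = self.gamma[k] * (xk - mu) / torch.sqrt(var + self.const) + self.beta[k]
        return out
```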
We illustrate the region norm operator in Figure 4(e). To analyze the benefit, we compute the derivatives as
$$\frac{\partial \ell}{\partial g_i} = \frac{\gamma_k}{\sigma_k} \left( \frac{\partial \ell}{\partial \hat{g}_i} - \frac{1}{|S_k|} \sum_{j \in S_k} \frac{\partial \ell}{\partial \hat{g}_j} - \frac{\hat{x}_i}{|S_k|} \sum_{j \in S_k} \frac{\partial \ell}{\partial \hat{g}_j} \hat{x}_j \right), \tag{6}$$
where $\ell$ is the loss to optimize, and $\hat{x}_i = (g_i - \mu_k)/\sigma_k$. It is not surprising that the gradients of $\gamma_k$ and $\beta_k$ are computed from all pixels in the related region. However, the gradient of a pixel with index $i$ is also computed from all pixels in the same region. More significantly, in Eqn. (6) the second term, $\frac{1}{|S_k|}\sum_{j \in S_k}\frac{\partial \ell}{\partial \hat{g}_j}$, and the sum in the third term, $\frac{1}{|S_k|}\sum_{j \in S_k}\frac{\partial \ell}{\partial \hat{g}_j}\hat{x}_j$, are shared by all pixels in the same region. Therefore, the pixel-wise connections within the same region are much denser after inserting the region norm layer.
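The coupling implied by Eqn. (6) can be checked numerically with autograd, reusing the `RegionNorm` sketch above (so this snippet assumes that class is in scope): the gradient of a single output pixel is nonzero at every pixel of the same region and exactly zero elsewhere.

```python
import torch

# two vertical-stripe regions on a tiny 4x4 image (left half = region 0, right half = region 1)
region_map = torch.tensor([[0, 0, 1, 1]] * 4)
rn = RegionNorm(region_map, zero_init_gamma=False)   # gamma = 1 so the normalization branch is active

g = torch.randn(1, 1, 4, 4, requires_grad=True)
rn(g)[0, 0, 0, 0].backward()                         # scalar output: one pixel of region 0

print(g.grad[0, 0, :, :2])                           # region-0 pixels: generally nonzero
print(g.grad[0, 0, :, 2:])                           # region-1 pixels: exactly zero
```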
Compared with existing normalizations (e.g., Batch Norm [25], Layer Norm [3], Instance Norm [58], and Group Norm [59]), which aim to speed up learning, there are two main differences: 1) the goal of Region Norm is to generate regionally homogeneous perturbations, while existing methods mainly aim to stabilize and speed up training; 2) Region Norm splits an image into regions and normalizes each region individually, while the other methods do not involve such spatial operations.
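Putting the pieces together, the sketch below assembles the gradient transformer module of Eqn. (3): a small convolution with bias, followed by region norm, added back to the input through a residual connection (reusing `RegionNorm` and `grid_partition` from the earlier sketches). The 1x1 kernel over the three color channels is an assumption chosen to be consistent with the stated 12 convolution parameters; with $K$ regions, the per-region scales and shifts contribute a further $2K$ parameters.

```python
import torch
import torch.nn as nn

class GradientTransformer(nn.Module):
    """T(g) = g + RegionNorm(Conv(g)); with gamma initialized to zero, the module
    starts as the identity, so plugging it into Eqn. (2) initially reproduces FGSM."""
    def __init__(self, region_map, channels=3):
        super().__init__()
        # 1x1 convolution with bias over the color channels: 3x3 weights + 3 biases = 12
        # parameters (the kernel size / channel layout is an assumption of this sketch)
        self.conv = nn.Conv2d(channels, channels, kernel_size=1, bias=True)
        self.rn = RegionNorm(region_map, zero_init_gamma=True)

    def forward(self, g):
        return g + self.rn(self.conv(g))

# Example: build the module from a grid region map and count its parameters
region_map = torch.as_tensor(grid_partition(299, 299))
transformer = GradientTransformer(region_map)
print(sum(p.numel() for p in transformer.parameters()))   # 12 + 2K for this layout
```

An instance of this module can be passed as `transformer` to the training sketch given with the transforming paradigm above.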
By analyzing the magnitudes of the four probes marked in Figure 3, we observe that, in a well-trained gradient transformer module, the residual branch dominates the final output and the output of the convolutional layer is only weakly related to the input gradient (more results in Section 4.2). Consequently, such a well-trained module becomes quasi-input-independent, i.e., the output is nearly fixed and only weakly related to the input. Note that the output is still slightly related to the input, which is the reason why we use "quasi-".
Here, we first build the connection between that observation and under-fitting to explain the reason. Then, we convert the quasi-input-independent module to an input-independent module in order to generate universal adversarial perturbations.
It is well known that there is a trade-off between the bias and variance of a model, i.e., the price of achieving a small bias is a large variance, and vice versa [6, 22]. Under-fitting occurs when the model shows low variance (but inevitable bias). An extremely low-variance function gives a nearly fixed output whatever the input, which we term quasi-input-independent. Although in most machine learning situations people do not expect this case, a quasi-input-independent function is desirable for generating universal adversarial perturbations.
Therefore, to encourage under-fitting, we go in the opposite direction of the suggestions for preventing under-fitting in [17]. On the one hand, to minimize the model capacity, our gradient transformer module only has $12 + 2K$ parameters, where the convolutional layer (bias enabled) incurs 12 parameters and $K$ is the number of region partitions. On the other hand, we use a large training set (5k images or more) so that the model capacity is relatively small in comparison. We then obtain a quasi-input-independent module.
According to the analysis above, we already have a quasi-input-independent module. To generate a universal adversarial perturbation, following the post-processing strategy of Poursaeed et al. [42], we use a fixed vector as the input of the module. Then, following FGSM [18], the final universal perturbation is $\epsilon \cdot \mathrm{sign}(\mathcal{T}(z))$, where $z$ is a fixed input. Recall that $\mathrm{sign}(\cdot)$ denotes the sign function and $\mathcal{T}$ denotes the gradient transformer module.
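In code, the universal-inference step is a single forward pass on a fixed input. The sketch below reuses the conventions of the earlier sketches (a trained `transformer`, images in [0, 1], and an illustrative budget `eps`).

```python
import torch

@torch.no_grad()
def universal_perturbation(transformer, shape=(1, 3, 299, 299), eps=16 / 255):
    """Feed a fixed zero input to the quasi-input-independent module and take the
    sign, yielding a single image-agnostic perturbation of magnitude eps."""
    z = torch.zeros(shape)
    return eps * torch.sign(transformer(z))

@torch.no_grad()
def apply_universal(x, delta):
    """Add the universal perturbation to any clean image batch; the l_inf budget
    holds by construction, so only the valid pixel range needs clipping."""
    return (x + delta).clamp(0, 1)
```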
In this section, we demonstrate the effectiveness of the proposed regionally homogeneous perturbation (RHP) by attacking a series of defense models.
We randomly select images from the validation set of ILSVRC 2012 [12] to evaluate the transferability of attack methods.
As for the evaluation metric, we use the improvement of the top-1 error rate after attacking, i.e., the difference between the error rate on adversarial images and that on clean images.
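For clarity, the metric can be written out in a couple of lines (a trivial sketch; `preds_*` are top-1 predictions from the defense model under evaluation):

```python
def top1_error(preds, labels):
    """Fraction of examples whose top-1 prediction is wrong."""
    return sum(int(p != y) for p, y in zip(preds, labels)) / len(labels)

def error_rate_improvement(preds_clean, preds_adv, labels):
    """Reported transferability score: top-1 error on adversarial images
    minus top-1 error on the corresponding clean images."""
    return top1_error(preds_adv, labels) - top1_error(preds_clean, labels)
```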
For performance comparison, we reproduce five representative attack methods, including the fast gradient sign method (FGSM) [18], the momentum iterative fast gradient sign method (MIM) [14], the momentum diverse-inputs iterative fast gradient sign method (DIM) [14, 63], universal adversarial perturbations (UAP) [38], and the universal version of generative adversarial perturbations (GAP) [42]. If not specified otherwise, we follow the default parameter setup of each method.
To keep the perturbation quasi-imperceptible, we generate adversarial examples within the $\epsilon$-ball of the original images in the $\ell_\infty$ space. The maximum perturbation $\epsilon$ is set to a smaller or a larger budget; the paired values in each table cell below report these two settings in that order. The adversarial examples are generated by attacking a naturally trained network: Inception v3 (IncV3) [54], Inception v4 (IncV4), or Inception ResNet v2 (IncRes) [53]. In the default setting, IncV3 is used with the smaller budget.
Defenses | TVM | HGD | R&P | Incens3 | Incens4 | IncResens | PGD | ALP | FD
---|---|---|---|---|---|---|---|---|---
Top-1 Error Rate (%) | 37.4 | 18.6 | 19.9 | 25.0 | 24.5 | 21.3 | 40.9 | 48.6 | 35.1
As our method targets defense models, we reproduce nine defense methods for performance evaluation, including input transformation through total variance minimization (TVM) [21], the high-level representation guided denoiser (HGD) [32], input transformation through random resizing and padding (R&P) [61], three ensemble adversarially trained models (Incens3, Incens4, and IncResens) [56], adversarial training with a projected gradient descent white-box attacker (PGD) [37, 62], adversarial logit pairing (ALP) [26], and the feature denoising adversarially trained ResNeXt-101 (FD) [62].

Among them, HGD [32] and R&P [61] are the rank-1 and rank-2 submissions in the NeurIPS 2017 defense competition [29], respectively, and FD [62] is the rank-1 submission in the Competition on Adversarial Attacks and Defenses 2018. The top-1 error rates of these methods on our dataset are shown in Table 1.
To train the gradient transformer module, we randomly select another set of images from the validation set of ILSVRC 2012 [12] as the training set. Note that the training set and the testing set are disjoint.
For the region split function, we choose a default partition and discuss different region split functions in Section 4.4. We train the gradient transformer module for 50 epochs. When testing, we use a zero array as the fixed input of the gradient transformer module to get universal adversarial perturbations.

To verify the connection between under-fitting and universal adversarial perturbations, we change the number of training images so that the module is or is not expected to under-fit (under-fitting occurs when the model capacity becomes low relative to a large dataset). Specifically, we select 4, 5k, or 45k images from the validation set of ILSVRC 2012 as the training set. We insert four probes into the gradient transformer module, as shown in Figure 3, and compare their magnitudes in Figure 5 with respect to the training iterations.
When the gradient transformer module is well trained with 5k or 45k images, we observe that: 1) the residual learning branch overwhelms the identity branch and dominates the final output; and 2) within the convolutional layer, the input-independent component overwhelms the input-dependent one, indicating that the output of the convolutional layer is only weakly related to the input gradient $g$. Based on these two observations, we conclude that the gradient transformer module is quasi-input-independent when the module is under-fitted by a large number of training images. Such a property is beneficial for generating universal adversarial perturbations (see Section 3.3).
When the number of training images is limited (say, 4 images), we observe that the input-dependent component is not overwhelmed, indicating that the output of the convolutional layer remains related to the input gradient $g$, since a small training set does not lead to under-fitting.
This conclusion is further supported by Figure 5(c): when training with 4 images, the performance gap between universal inference (using a fixed zero array as the input of the gradient transformer module) and image-dependent inference (using the loss gradient as the input) is quite large. The gap shrinks when more data are used for training. Figure 5(c) also illustrates that 5k images are enough to cause under-fitting. Hence, we set the size of the training set to 5k in the following experiments.
To provide a better understanding of our implicit universal adversarial perturbation generation mechanism, we present an ablation study that compares our method with three other strategies for generating universal adversarial perturbations with the same region split function: 1) RP: Randomly assign the Perturbation as $+\epsilon$ or $-\epsilon$ for each region; 2) OP: iteratively Optimize the Perturbation to maximize the classification loss on the naturally trained model (the idea of [38]); 3) TU: explicitly Train a Universal adversarial perturbation. The only difference between TU and our proposed RHP is that random noise takes the place of the loss gradient $g$ in Figure 2 (following [42]) and is fed to the gradient transformer module. RHP is our proposed implicit method, in which the gradient transformer module becomes quasi-input-independent without taking random noise as the training input.
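For concreteness, the RP baseline amounts to one random sign per region (a sketch reusing the region-map convention of the earlier sketches; sharing the sign across color channels is an assumption):

```python
import torch

def random_regional_perturbation(region_map, eps=16 / 255):
    """RP baseline: every region independently gets +eps or -eps, shared by all
    pixels (and, here, all channels) of that region."""
    K = int(region_map.max().item()) + 1
    signs = torch.randint(0, 2, (K,), dtype=torch.float32) * 2 - 1   # +1 or -1 per region
    return eps * signs[region_map.long()]                            # (H, W) perturbation map
```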
We evaluate the above four settings on IncResens and measure the increase in error rate for RP, OP, TU, and RHP, respectively, with RHP yielding the largest increase. Since our implicit method has a proper initialization (discussed in Section 3.2), it constructs stronger universal adversarial perturbations.
We first conduct the comparison in Table 2 under the smaller and the larger maximum perturbation budget, respectively.
A first glance shows that, compared with the other representatives, the proposed RHP provides a much stronger attack against defenses. For example, when attacking HGD [32], RHP outperforms FGSM [18], MIM [14], DIM [14, 63], UAP [38], and GAP [42] by large margins (see Table 2). Second, universal methods generally perform worse than image-dependent methods, as the latter can access and utilize information from the clean images. Nevertheless, RHP, as a universal method, still beats those image-dependent methods by a large margin. Finally, we observe that our method gains more when the maximum perturbation becomes larger.
Methods | TVM | HGD | R&P | Incens3 | Incens4 | IncResens | PGD | ALP | FD |
---|---|---|---|---|---|---|---|---|---|
FGSM [18] | 21.9/45.3 | 2.84/20.7 | 6.80/13.9 | 10.0/17.9 | 9.34/15.9 | 6.86/13.3 | 1.90/12.8 | 17.0/32.3 | 1.62/13.3 |
MIM [14] | 18.2/37.1 | 7.30/18.7 | 7.52/13.7 | 11.4/17.3 | 10.9/16.5 | 7.76/13.6 | 1.36/6.86 | 15.3/24.4 | 1.00/7.48 |
DIM [14, 63] | 21.9/41.0 | 11.9/32.1 | 12.0/21.9 | 16.7/26.1 | 16.2/25.0 | 10.8/19.6 | 1.84/7.70 | 15.5/24.7 | 1.34/8.22 |
UAP [38] | 4.78/12.1 | 1.94/11.3 | 2.42/6.66 | 1.00/7.82 | 1.80/8.34 | 1.88/5.60 | 0.04/1.04 | 7.98/11.5 | -0.1/0.40 |
GAP [42] | 18.5/50.1 | 1.34/37.9 | 3.52/26.9 | 5.48/33.3 | 4.14/29.4 | 3.76/22.5 | 1.28/10.2 | 15.6/30.0 | 0.56/11.1 |
RHP (ours) | 33.0/56.9 | 26.8/57.5 | 23.3/56.1 | 32.5/60.8 | 31.6/58.7 | 24.6/57.0 | 2.40/25.8 | 17.8/39.4 | 2.38/24.5 |
The performance comparison is also conducted when generating adversarial examples by attacking IncV4 or IncRes. Here we do not report the performance of GAP because the official code does not support generating adversarial examples with IncV4 or IncRes. As shown in Table 3 and Table 4, RHP remains strong against defense models. Meanwhile, it should be mentioned that when the model used for generating adversarial perturbations is changed, RHP still generates universal adversarial examples. The only difference is that the gradients used in the training phase change, which then leads to a different set of parameters in the gradient transformer module.
Methods | TVM | HGD | R&P | Incens3 | Incens4 | IncResens | PGD | ALP | FD |
---|---|---|---|---|---|---|---|---|---|
FGSM [18] | 22.4/46.3 | 4.00/21.1 | 8.68/15.1 | 10.1/18.3 | 9.72/17.4 | 7.58/14.7 | 2.02/12.8 | 17.3/32.1 | 1.42/13.4 |
MIM [14] | 20.1/40.4 | 10.0/23.9 | 10.2/17.4 | 13.4/20.3 | 13.1/19.0 | 9.96/16.6 | 1.50/7.54 | 14.8/25.1 | 1.24/8.18 |
DIM [14, 63] | 22.7/42.9 | 16.3/37.1 | 14.7/25.0 | 18.7/28.6 | 17.9/26.5 | 13.6/22.1 | 1.82/8.02 | 15.2/24.8 | 1.62/8.74 |
UAP [38] | 6.28/18.2 | 1.42/9.94 | 2.42/6.52 | 2.08/7.68 | 1.94/6.92 | 2.34/6.78 | 0.28/2.12 | 10.1/15.9 | 0.16/1.18 |
RHP (ours) | 37.1/58.4 | 23.4/59.8 | 20.2/57.6 | 27.5/60.3 | 26.7/62.5 | 21.2/58.5 | 2.20/29.7 | 20.3/42.1 | 1.90/31.8 |
Methods | TVM | HGD | R&P | Incens3 | Incens4 | IncResens | PGD | ALP | FD |
---|---|---|---|---|---|---|---|---|---|
FGSM [18] | 20.6/44.1 | 5.34/22.3 | 10.1/15.8 | 11.7/19.4 | 10.5/17.2 | 10.4/16.3 | 2.06/13.8 | 17.5/32.6 | 1.72/14.7 |
MIM [14] | 20.3/39.4 | 15.0/28.1 | 13.4/22.1 | 17.4/24.6 | 15.1/22.5 | 13.6/22.6 | 1.84/8.80 | 12.3/25.9 | 1.62/9.48 |
DIM [14, 63] | 24.6/44.0 | 23.7/44.1 | 22.5/34.5 | 25.8/37.1 | 22.4/33.7 | 20.2/32.5 | 2.36/9.26 | 12.6/25.9 | 1.78/10.1 |
UAP [38] | 7.10/24.7 | 2.14/10.6 | 2.50/8.36 | 1.88/8.28 | 1.74/7.22 | 1.96/8.18 | 0.40/3.78 | 7.12/17.0 | -0.1/3.06 |
RHP (ours) | 37.1/57.4 | 26.9/62.1 | 25.1/61.4 | 29.7/62.3 | 29.8/63.3 | 26.8/62.8 | 2.20/28.3 | 22.8/43.5 | 2.20/32.2 |
In this section, we discuss the choice of region split functions, i.e., vertical partition, horizontal partition, grid partition, and slash partition (stripes parallel to an increasing line with slope equal to 0.5). Figure 6 shows the transferability to the defenses, which demonstrates that different region split functions are almost equally effective and all are stronger than our strongest baseline (DIM). Moreover, we observe an interesting phenomenon, presented in Figure 7.
In each row of Figure 7, we exhibit the universal adversarial perturbation generated by a certain kind of region split function, followed by the top-3 categories to which the generated adversarial examples are most likely to be misclassified. For each category, we show a clean image as an exemplar. Note that our experiments concern the non-targeted attack, indicating that the target class is undetermined and solely depends on the region split function.
As can be seen, the regionally homogeneous perturbations with different region split functions seem to target different categories, with an inherent connection between the low-level cues (e.g., texture, shape) they share. For example, when using the grid partition, the top-3 target categories are quilt, shower curtain, and container ship, and one can observe that images in these three categories generally have grid-structured patterns.
Motivated by these qualitative results, we form a preliminary hypothesis that regionally homogeneous perturbations tend to attack the low-level part of a model. The claim is not supported by a theoretical proof; however, it inspires us to test the cross-task transferability of RHP. Since it is a common strategy to share low-level CNN architecture/information in multi-task learning systems [47], we conjecture that RHP can transfer well between different tasks (see below).
To demonstrate the cross-task transferability of RHP, we attack with the semantic segmentation task and test on the object detection task.
In more detail, we attack a semantic segmentation model (an Xception-65 [10] based DeepLab-v3+ [8]) on the PASCAL VOC 2012 segmentation val set [16] and obtain the adversarial examples. Then, we take a VGG16 [51] based Faster R-CNN model [45], trained on MS COCO [33] and VOC 2007 trainval, as the testing model. To avoid testing on images that occur in the training set of the detection model, the testing set is the union of the VOC 2012 segmentation val and VOC 2012 detection trainval sets, from which we remove the images in the VOC 2007 dataset. The baseline performance on clean images is 69.2 mAP. Here the mAP score is the average of the precisions at different recall values.
As shown in Table 5, RHP yields the lowest mAP on object detection, demonstrating stronger cross-task transferability than the baseline image-dependent perturbations, i.e., FGSM [18], MIM [14], and DIM [14, 63].
Attacks | No attack (clean) | FGSM | MIM | DIM | RHP (ours) |
---|---|---|---|---|---|
mAP | 69.2 | 43.1 | 41.6 | 36.2 | 31.6 |
By white-box attacking naturally trained models and defense models, we observe the regional homogeneity of adversarial perturbations. Motivated by this observation, we propose a transforming paradigm and a gradient transformer module to generate regionally homogeneous perturbations (RHP) specifically for attacking defenses. RHP possesses three merits: 1) transferable: we demonstrate that RHP transfers well across different models (i.e., the black-box attack) and across different tasks; 2) universal: taking advantage of the under-fitting of the gradient transformer module, RHP generates universal adversarial examples without the learning procedure being explicitly driven towards them; 3) strong: RHP successfully attacks representative defenses and outperforms state-of-the-art attack methods by a large margin.
Recent studies [32, 62] show that the mechanism of some defense models can be interpreted as a "denoising" procedure. Since RHP is less noise-like than other perturbations, it would be interesting to study the property of RHP from a denoising perspective in future work. Meanwhile, although evaluated with the non-targeted attack, RHP is expected to be a strong targeted attack as well, which requires further exploration and validation.
In this section, we study the effect of $K$, the number of region partitions. Specifically, we split the image into 4, 8, 17, 25, 38, 50, 75, 100, 150, 299, 598, or 1196 regions (see Table 6). We show the learned universal perturbations in Figure 8. Due to the limitation of GPU memory, we study at most 1196 regions. The performance comparison is presented in Table 6. We observe that for stronger defenses (e.g., PGD [37] and FD [62]), the optimal value of $K$ is relatively small. We explain that these strong defenses have a stronger ability to denoise (some works [32, 62] interpret the defense procedure as denoising), while perturbations with a small $K$ are less noise-like, thereby serving as strong perturbations against the strong defenses.
#regions | TVM [21] | HGD [32] | R&P [61] | Incens3 [56] | Incens4 [56] | IncResens [56] | PGD [37] | ALP [26] | FD [62] |
---|---|---|---|---|---|---|---|---|---|
1196 | 32.9 | 24.6 | 21.4 | 30.4 | 29.6 | 21.5 | 1.88 | 19.0 | 1.68 |
598 | 34.0 | 27.2 | 23.1 | 32.1 | 32.0 | 24.4 | 1.86 | 19.2 | 1.86 |
299 | 33.0 | 26.8 | 23.3 | 32.5 | 31.6 | 24.6 | 2.40 | 17.8 | 2.38 |
150 | 37.1 | 25.5 | 23.3 | 31.0 | 30.6 | 24.0 | 2.06 | 20.3 | 1.84 |
100 | 37.2 | 23.9 | 20.8 | 26.2 | 26.7 | 22.3 | 2.10 | 18.8 | 2.50 |
75 | 39.0 | 25.3 | 20.9 | 26.5 | 26.6 | 24.3 | 2.66 | 19.8 | 2.84 |
50 | 33.5 | 19.0 | 19.0 | 22.2 | 24.7 | 20.1 | 3.26 | 17.1 | 3.06 |
38 | 26.4 | 11.0 | 11.9 | 14.9 | 14.2 | 11.7 | 3.88 | 14.6 | 3.62 |
25 | 28.3 | 15.6 | 16.4 | 17.3 | 20.2 | 18.1 | 3.32 | 19.1 | 3.04 |
17 | 21.0 | 7.82 | 8.86 | 9.02 | 9.32 | 9.26 | 3.20 | 11.8 | 3.16 |
8 | 4.90 | 0.66 | 1.28 | 1.52 | 1.26 | 0.76 | 1.34 | 3.14 | 0.88 |
4 | 5.92 | 0.80 | 1.18 | 0.12 | 0.66 | 1.26 | 1.34 | 7.26 | 1.06 |
Besides the quantitative results included in the main manuscript, we show some qualitative results in Figure 9. Specifically, we show and compare the perturbations generated by universal inference (using a fixed zero array as the input of the gradient transformer module) and image-dependent inference (using the loss gradient as the input), via gradient transformer modules trained with different numbers of images. We arrive at the same conclusion as in the main manuscript, i.e., the gradient transformer module becomes quasi-input-independent when trained with a large number of images (e.g., 5k images or more).