Cross-Domain Transferability of Adversarial Perturbations

05/28/2019 ∙ by Muzammal Naseer, et al. ∙ Australian National University

Adversarial examples reveal the blind spots of deep neural networks (DNNs) and represent a major concern for security-critical applications. The transferability of adversarial examples makes real-world attacks possible in black-box settings, where the attacker is forbidden to access the internal parameters of the model. The underlying assumption in most adversary generation methods, whether learning an instance-specific or an instance-agnostic perturbation, is a direct or indirect reliance on the original domain-specific data distribution. In this work, for the first time, we demonstrate the existence of domain-invariant adversaries, thereby showing a common adversarial space among different datasets and models. To this end, we propose a framework capable of launching highly transferable attacks that craft adversarial patterns to mislead networks trained on entirely different domains. For instance, an adversarial function learned on Paintings, Cartoons or Medical images can successfully perturb ImageNet samples to fool the classifier, with success rates as high as ∼99% (ℓ_∞ < 10). The core of our proposed adversarial function is a generative network trained with a relativistic supervisory signal that enables domain-invariant perturbations. Our approach sets the new state-of-the-art for fooling rates, under both the white-box and black-box scenarios. Furthermore, despite being an instance-agnostic perturbation function, our attack outperforms the conventionally much stronger instance-specific attack methods.


1 Introduction

Albeit displaying remarkable performance across a range of tasks, deep neural networks (DNNs) are highly vulnerable to adversarial examples: carefully crafted inputs generated by adding a certain degree of noise (a.k.a. perturbations) to the original images, typically quasi-imperceptible to humans (Papernot, McDaniel). Importantly, these adversarial examples are transferable from one network to another, even when the other network has a different architecture and is possibly trained on a different subset of the training data (Liu, Chen; Florian, Papernot). Transferability permits adversarial attacks without knowledge of the target network's internals, posing serious security concerns for the practical deployment of these models.

Adversarial perturbations are either instance-specific or instance-agnostic. Instance-specific attacks iteratively optimize a perturbation pattern specific to an input sample (e.g., (Fawzi, Dezfooli); (Goodfellow, Shlens); (Kurakin, Goodfellow); (Dezfooli, Fawzi); (Nguyen, Yosinski); (Dong, Pang); (Li, Bai)). In comparison, instance-agnostic attacks learn a universal perturbation or a function that finds adversarial patterns on a data distribution instead of a single sample. For example, (Dezfooli, Fawzi) proposed universal adversarial perturbations that can fool a model on the majority of the source dataset images. To reduce the dependency on input data samples, (Mopuri, Garg) maximize layer activations of the source network, while (Mopuri, Uppala) extract fooling perturbations from class impressions that rely on the source label space. To enhance the transferability of instance-agnostic approaches, recent generative methods attempt to directly craft perturbations using an adversarially trained function (Baluja, Fischer; Poursaeed, Katsman).

We observe that most prior works on crafting adversarial attacks suffer from two pivotal limitations that restrict their applicability to real-world scenarios. (a) Existing attacks rely directly or indirectly on the source (training) data, which hampers their transferability to other domains. From a practical standpoint, the source domain may be unknown, or the domain-specific data may be unavailable to the attacker. Therefore, a true "black-box" attack must be able to fool models trained on different target domains without ever being explicitly trained on data from those domains. (b) Instance-agnostic attacks are far more scalable to large datasets than their instance-specific counterparts, since they avoid expensive per-instance iterative optimization; however, they demonstrate weaker transferability than instance-specific attacks. Consequently, the design of highly transferable instance-agnostic attacks that also generalize across unseen domains is a largely unsolved problem.

In this work, we introduce 'domain-agnostic' generation of adversarial examples, with the aim of relaxing the reliance on source-domain data. In particular, we propose a flexible framework capable of launching highly transferable adversarial attacks; e.g., perturbations learned on paintings, comics or medical images are shown to trick natural-image classifiers trained on the ImageNet dataset with high fooling rates. A distinguishing feature of our approach is the introduction of a relativistic loss that explicitly enforces the learning of domain-invariant adversarial patterns. Our attack algorithm is highly scalable to large datasets since it learns a universal adversarial function and thus avoids the expensive per-instance iterative optimization of instance-specific attacks. While enjoying the efficient inference time of instance-agnostic methods, our algorithm outperforms all existing attack methods (both instance-specific and agnostic) by a significant margin, with a notable average increase in fooling rate when transferring from a naturally trained Inception-v3 to adversarially trained models compared to the state-of-the-art (Dong, Pang), and it sets the new state-of-the-art under both white-box and black-box settings. Figure 1 provides an overview of our approach.

Figure 1: Transferable Generative Adversarial Perturbation: We demonstrate that common adversaries exist across different image domains and introduce a highly transferable attack approach that carefully crafts adversarial patterns to fool classifiers trained on totally different domains. Our generative scheme learns to reconstruct adversaries on paintings or comics (left) that successfully fool natural-image classifiers with high fooling rates at inference time (right).

2 Related Work

Image-dependent Perturbations: Several approaches target the creation of image-dependent perturbations. (Szegedy, Zaremba) noticed that despite exhibiting impressive performance, neural networks can be fooled through maliciously crafted perturbations that appear quasi-imperceptible to humans. Following this finding, many approaches ((Fawzi, Dezfooli); (Goodfellow, Shlens); (Kurakin, Goodfellow); (Dezfooli, Fawzi); (Nguyen, Yosinski)) investigate the existence of these perturbations. They either apply gradient ascent in the pixel space (sketched below) or solve more complex optimizations. Recently, a few methods ((Xie, Zhang); (Dong, Pang)) propose input or gradient transformation modules to improve the transferability of adversarial examples. A common characteristic of the aforementioned approaches is their data dependence: the perturbations are computed for each data point separately and in a mutually exclusive way. Further, these approaches are inefficient at inference time since they iterate over the input multiple times. In contrast, we resort to a data-independent approach based on a generator, demonstrating improved inference-time efficiency along with high transferability rates.
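To make the contrast with our generator-based approach concrete, the following is a minimal sketch of such a per-instance, single-step gradient-ascent attack in pixel space. It is an illustrative FGSM-style example rather than any specific cited method; the model handle, the perturbation budget and the [0, 1] pixel range are assumptions.

```python
import torch
import torch.nn.functional as F

def single_step_pixel_attack(model, x, y, eps=8/255):
    # One gradient-ascent step on the classification loss in pixel space
    # (illustrative sketch; `model`, `eps` and the [0, 1] range are assumptions).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss on the current (clean) input
    loss.backward()                       # gradient of the loss w.r.t. the pixels
    x_adv = x + eps * x.grad.sign()       # ascend the loss within an l_inf budget
    return x_adv.clamp(0, 1).detach()     # keep the result a valid image
```

Note that such attacks must repeat this optimization for every input, which is precisely the inference-time cost our generator avoids.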

Universal Adversarial Perturbation: The seminal work of (Dezfooli, Fawzi) introduced the Universal Adversarial Perturbation (UAP): a single noise vector which, when added to any data point, can fool a pretrained model. (Dezfooli, Fawzi) craft the UAP in an iterative fashion, utilizing target data points and flipping their labels. Although this yields an image-agnostic UAP, the success of the attack is proportional to the number of training samples used to craft it. (Mopuri, Garg) propose a so-called data-independent algorithm that maximizes the product of mean activations at multiple layers given a universal perturbation as input; however, its attack success is not comparable to (Dezfooli, Fawzi). Instead, we propose a fully distribution-agnostic approach that crafts adversarial examples directly from a learned generator, as opposed to first generating perturbations and then adding them to images.

Generator-oriented Perturbations: Another branch of attacks leverages generative models to craft adversaries. (Baluja, Fischer) learn a generator network to perturb images; however, the unbounded perturbation magnitude in their case can render perturbations perceptible at test time. (Xiao, Li) apply generative adversarial networks to craft visually realistic perturbations and build a distilled network to perform black-box attacks. Similarly, (Poursaeed, Katsman) and (Mopuri, Uppala) train generators to create adversaries: the former uses the target data directly while the latter relies on class impressions.

A common trait of prior work is that it relies directly (or indirectly) on the data distribution and/or requires access to its label space for creating adversarial examples (Table 1). In contrast, we propose a flexible, distribution-agnostic approach, incorporating a relativistic loss, to craft adversarial examples, and it achieves state-of-the-art results under both white-box and black-box attack settings.

Method  Data Type  Transfer Strength  Label Agnostic  Cross-domain Attack
FFF (Mopuri, Garg) Pretrained-net/data Low
AAA (Mopuri, Uppala) Class Impressions Medium
UAP (Dezfooli, Fawzi) ImageNet Low
GAP (Poursaeed, Katsman) ImageNet Medium
RHP (Li, Bai) ImageNet Medium
Ours Arbitrary (Paintings, Comics, Medical scans etc.) High
Table 1: A comparison of different attack methods based on their dependency on data distribution and labels.

3 Cross-Domain Transferable Perturbations

Our proposed approach is based on a generative model that is trained using an adversarial mechanism. Assume we have an input image $x_s$ belonging to a source domain $\mathcal{X}_s$. We aim to train a universal function $\mathcal{G}_{\theta_g}$ that learns to add a perturbation pattern to source-domain images such that the perturbed inputs $x'$ successfully fool a network trained on the source domain as well as on any target domain $\mathcal{X}_t$. Importantly, our training is only performed on the unlabelled source-domain dataset with $n$ samples, $\{x_s^i\}_{i=1}^{n}$, and the target domain is not used at all during training. For brevity, in the following discussion we refer to the input and perturbed images as $x$ and $x'$ respectively; the domain will be clear from the context.

The proposed framework consists of a generator $\mathcal{G}_{\theta_g}$ and a discriminator $\mathcal{F}_{\theta}$, parameterized by $\theta_g$ and $\theta$ respectively. In our case, we initialize the discriminator with a pretrained network whose parameters $\theta$ remain fixed while $\theta_g$ is learned. The output of $\mathcal{G}_{\theta_g}$ is scaled to have a fixed norm so that the perturbation lies within a bound, $\|x' - x\|_\infty \leq \epsilon$. The perturbed images $x'$ as well as the real images $x$ are passed through the discriminator. The output of the discriminator denotes the class probabilities $\mathcal{F}_{\theta}(\cdot) \in \mathbb{R}^{N}$, where $N$ is the number of classes. This is different from the traditional GAN framework, where a discriminator only estimates whether an input is real or fake. For an adversarial attack, the goal is to fool a network on most examples by making minor changes to its inputs, i.e.,

$$\underset{x \sim \mathcal{X}_s}{\mathbb{P}}\Big(\operatorname*{arg\,max}\big(\mathcal{F}_{\theta}(x')\big) \neq y\Big) \;\geq\; \zeta, \qquad (1)$$

where $\zeta$ is the fooling ratio, $y$ is the ground-truth label for the example $x$, and the predictions on clean images are given by $\hat{y} = \operatorname*{arg\,max}\big(\mathcal{F}_{\theta}(x)\big)$. Note that we do not necessarily require the ground-truth labels of source domain images to craft a successful attack. In the case of adversarial attacks based on a traditional GAN framework, the following objective is maximized for the generator to achieve the maximal fooling rate:

$$\max_{\theta_g}\;\; \mathbb{E}_{x}\Big[\mathcal{L}_{\text{CE}}\big(\mathcal{F}_{\theta}(x'),\, \mathbb{1}_{y}\big)\Big], \quad \text{where } \mathcal{L}_{\text{CE}}(p, \mathbb{1}_{y}) = -\,\mathbb{1}_{y}^{\top}\log(p), \qquad (2)$$

where $\mathbb{1}_{y}$ is the one-hot encoded label vector for an input example $x$. The above objective seeks to maximize the discriminator error on the perturbed images that are output from the generator network.

Figure 2: The proposed generative framework seeks to maximize the ‘fooling gap’ that helps in achieving very high transferability rates across domains. The orange dashed line shows the flow of gradients, notably only the generator is tuned in the whole pipeline to fool the pretrained discriminator.

We argue that the objective given by Eq. 2 does not directly enforce transferability of the generated perturbations. This is primarily because the discriminator's response to clean examples is completely ignored in conventional generative attacks. Here, inspired by the relativistic generative adversarial network of (Jolicoeur-Martineau), we propose a relativistic adversarial perturbation (RAP) generation approach that explicitly takes into account the discriminator's predictions on clean images. Alongside reducing the classifier's confidence on perturbed images, the attack algorithm also forces the discriminator to maintain high confidence scores for the clean samples. The proposed objective is given by:

$$\max_{\theta_g}\;\; \mathbb{E}_{x}\Big[\mathcal{L}_{\text{CE}}\big(\mathcal{F}_{\theta}(x'),\, \mathbb{1}_{y}\big) \;-\; \mathcal{L}_{\text{CE}}\big(\mathcal{F}_{\theta}(x),\, \mathbb{1}_{y}\big)\Big]. \qquad (3)$$

The relativistic cross-entropy loss is higher when the perturbed image is scored significantly lower than the clean image for the ground-truth class, i.e., $\mathcal{F}_{\theta}(x')_{y} \ll \mathcal{F}_{\theta}(x)_{y}$. The generator thus seeks to increase the 'fooling gap', the difference between the discriminator's responses to the true and perturbed samples. Through such relative discrimination, we not only report better transferability rates across networks trained on the same domain, but, most importantly, show excellent cross-domain transfer rates for the instance-agnostic perturbations. We attribute this behaviour to the fact that once a perturbation pattern is optimized using the proposed loss on a source distribution (e.g., paintings or cartoon images), the generator learns a "contrastive" signal that is agnostic to the underlying distribution. As a result, when the same perturbation pattern is applied to networks trained on a totally different domain (e.g., natural images), it still achieves state-of-the-art attack transferability rates. Table 2 shows the gain in transferability when using the relativistic cross-entropy (Eq. 3) in comparison to the simple cross-entropy loss (Eq. 2).
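As a minimal sketch, the snippet below contrasts the plain cross-entropy objective of Eq. 2 with the relativistic form of Eq. 3 as reconstructed above; the function names and the use of the framework's standard cross-entropy are our assumptions rather than the exact implementation.

```python
import torch.nn.functional as F

def ce_objective(logits_adv, y):
    # Eq. (2): cross-entropy on the perturbed image only (maximized w.r.t. the generator).
    return F.cross_entropy(logits_adv, y)

def relativistic_ce_objective(logits_adv, logits_clean, y):
    # Eq. (3): cross-entropy on the perturbed image minus cross-entropy on the clean
    # image, i.e. the 'fooling gap' (maximized w.r.t. the generator).
    return F.cross_entropy(logits_adv, y) - F.cross_entropy(logits_clean, y)
```

In practice the generator maximizes this quantity, for example by minimizing its negative with a standard optimizer.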

For an untargeted attack, the objectives in Eqs. 2 and 3 suffice. For a targeted adversarial attack, however, the prediction for the perturbed image must match a given target class $y_t$, i.e., $\operatorname*{arg\,max}\big(\mathcal{F}_{\theta}(x')\big) = y_t$. For such a case, we employ the following loss function:

$$\min_{\theta_g}\;\; \mathbb{E}_{x}\Big[\mathcal{L}_{\text{CE}}\big(\mathcal{F}_{\theta}(x'),\, \mathbb{1}_{y_t}\big) \;-\; \mathcal{L}_{\text{CE}}\big(\mathcal{F}_{\theta}(x),\, \mathbb{1}_{y_t}\big)\Big]. \qquad (4)$$

The overall training scheme for the generative network is given in Algorithm 1.

1: Input: a pretrained classifier $\mathcal{F}_{\theta}$, an arbitrary training data distribution $\mathcal{X}$, a perturbation budget $\epsilon$, and the loss criterion (Eq. 3 or Eq. 4)
2: Randomly initialize the generator network $\mathcal{G}_{\theta_g}$
3: repeat
4:     Sample a mini-batch of data $x$ from the training set.
5:     Use the current state of the generator, $\mathcal{G}_{\theta_g}$, to generate unbounded adversaries $\tilde{x} = \mathcal{G}_{\theta_g}(x)$.
6:     Project the adversaries $\tilde{x}$ onto the valid perturbation budget to obtain $x'$ such that $\|x' - x\|_\infty \leq \epsilon$.
7:     Forward pass $x'$ (and $x$) through $\mathcal{F}_{\theta}$ and compute the loss given in Eq. (3)/Eq. (4) for an untargeted/targeted attack.
8:     Backward pass and update the generator parameters $\theta_g$ to optimize the loss.
9: until the training converges.
Algorithm 1 Generator Training for Relativistic Adversarial Perturbations
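A minimal PyTorch-style sketch of Algorithm 1 is given below. The data-loader interface, the use of clean-image predictions as pseudo-labels (consistent with not requiring ground-truth labels), the budget value and the [0, 1] pixel range are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def train_rap_generator(generator, discriminator, loader, eps=10/255, epochs=10, lr=1e-4):
    discriminator.eval()
    for p in discriminator.parameters():
        p.requires_grad_(False)                       # only the generator is updated
    opt = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))

    for _ in range(epochs):
        for x, _ in loader:                           # source-domain labels are not needed
            unbounded = generator(x)                  # step 5: unbounded adversaries
            x_adv = torch.min(torch.max(unbounded, x - eps), x + eps).clamp(0, 1)  # step 6: l_inf projection
            logits_adv = discriminator(x_adv)
            logits_clean = discriminator(x)
            y = logits_clean.argmax(dim=1)            # pseudo-labels from clean predictions
            # steps 7-8: maximize the relativistic loss of Eq. (3) = minimize its negative
            loss = -(F.cross_entropy(logits_adv, y) - F.cross_entropy(logits_clean, y))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return generator
```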
Loss VGG-16 VGG-19 Squeeze-v1.1 Dense-121
Cross Entropy (CE) 79.21 78.96 69.32 66.45
Relativistic CE 86.95 85.88 77.81 75.21
Table 2: Effect of Relativistic loss on transferability in terms of fooling rate (%) on ImageNet val-set. Generator is trained against ResNet-152 on Paintings dataset.

4 Experiments

4.1 Rules of the Game

We report results under the following three attack settings. (a) White-box: the attacker has access to the original model (both architecture and parameters) and the training data distribution. (b) Black-box: the attacker has access to a model pretrained on the same distribution, but no knowledge of the target architecture or target data distribution. (c) Cross-domain Black-box: the attacker has no access to any pretrained target model, its label space or its training data distribution, and must instead learn a transferable adversarial function from a model pretrained on a possibly different distribution. This setting is therefore far more challenging than the plain black-box setting.

Attack VGG-19 ResNet-50 Dense-121
Fool Rate (↑) Top-1 (↓)  Fool Rate (↑) Top-1 (↓)  Fool Rate (↑) Top-1 (↓)
Gaussian Noise 23.59 64.65 18.06 70.74 17.05 70.30
Ours-Paintings 47.12 46.68 31.52 60.77 29.00 62.0
Ours-Comics 48.47 45.78 33.69 59.26 31.81 60.40
Ours-ChestX 40.81 50.11 22.00 67.72 20.53 67.63
Gaussian Noise 33.80 57.92 25.76 66.07 23.30 66.70
Ours-Paintings 66.52 30.21 47.51 47.62 44.50 49.76
Ours-Comics 67.75 29.25 51.78 43.91 50.37 45.17
Ours-ChestX 62.14 33.95 34.49 58.6 31.81 59.75
Gaussian Noise 61.07 35.48 47.21 48.40 39.90 54.37
Ours-Paintings 87.08 11.96 69.05 28.77 63.78 33.46
Ours-Comics 87.90 11.17 71.91 26.12 71.85 26.18
Ours-ChestX 88.12 10.92 62.17 34.85 59.49 36.98
Table 3: Cross-Domain Black-box: Untargeted attack success (%) in terms of fooling rate on the ImageNet val-set. Adversarial generators are trained against ChexNet on the Paintings, Comics and ChestX datasets. Each block of rows corresponds to a progressively larger perturbation budget, chosen as per standard practice. Even without knowledge of the targeted model, its label space or its training data distribution, the transferability rate is much higher than that of Gaussian noise.

4.2 Experimental Settings

Generator Architecture. We chose the ResNet architecture introduced in (Johnson, Alahi) as the generator network; it consists of downsampling, residual and upsampling blocks. For training, we used the Adam optimizer (Kingma, Ba) with a learning rate of 1e-4 and exponential decay rates for the first and second moments set to 0.5 and 0.999, respectively. Generators are learned against four pretrained ImageNet models, namely VGG-16, VGG-19 (Simonyan, Zisserman), Inception-v3 (Inc-v3) (Szegedy, Vanhoucke, Ioffe, Shlens, Wojna) and ResNet-152 (He, Zhang, Ren, Sun), as well as ChexNet (a DenseNet-121 (Huang, Liu) trained to diagnose pneumonia) (Rajpurkar, Irvin).
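For reference, a simplified stand-in for such a downsampling-residual-upsampling generator, together with the stated Adam settings, might look as follows; the layer widths, normalization choices and Tanh output are illustrative assumptions, not the exact architecture of (Johnson, Alahi).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.block(x)          # residual connection

class PerturbationGenerator(nn.Module):
    """Simplified sketch: downsampling -> residual blocks -> upsampling."""
    def __init__(self, ch=64, n_res=6):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, ch, 7, padding=3), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.InstanceNorm2d(ch * 2), nn.ReLU(inplace=True))
        self.res = nn.Sequential(*[ResidualBlock(ch * 2) for _ in range(n_res)])
        self.up = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 7, padding=3), nn.Tanh())   # bounded output in [-1, 1]
    def forward(self, x):
        return self.up(self.res(self.down(x)))

generator = PerturbationGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.999))
```

During training, the generator output is still projected into the ℓ∞ ball around the input, as in Algorithm 1.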

Datasets. We consider the following datasets for generator training: Paintings (Painter by Numbers), Comics (Cenk, Bircanoğlu), ImageNet and a subset of ChestX-ray (ChestX) (Rajpurkar, Irvin). There are approximately 80k samples in Paintings, 50k in Comics, 1.2 million in the ImageNet training set and 10k in ChestX.

Inference: Inference is performed on the ImageNet validation set (val-set, 50k samples), a 5k-sample subset of ImageNet proposed by (Li, Bai), and the ImageNet-NeurIPS dataset (1k samples) from the NeurIPS adversarial attacks competition.

Evaluation Metrics: We use the fooling rate (the percentage of input samples for which the predicted label flips after adding adversarial perturbations), top-1 accuracy, and the % increase in error rate (the difference between the error rates on adversarial and clean images) to evaluate our proposed approach.
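These metrics can be computed directly from model predictions; a small sketch is given below (the function names are ours).

```python
import torch

@torch.no_grad()
def fooling_rate(model, clean, adv):
    # Fraction of samples whose predicted label flips after adding the perturbation.
    flips = model(clean).argmax(dim=1) != model(adv).argmax(dim=1)
    return flips.float().mean().item()

@torch.no_grad()
def top1_accuracy(model, images, labels):
    # Standard top-1 accuracy; the % increase in error rate is
    # (1 - top1_accuracy(adv)) - (1 - top1_accuracy(clean)).
    return (model(images).argmax(dim=1) == labels).float().mean().item()
```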

Figure 3: Untargeted adversaries produced by a generator trained against Inception-v3 on the Paintings dataset. The first row shows original images (Bee Eater, Cardoon, Impala, Anemone Fish, Crane), the second row shows the unrestricted outputs of the adversarial generator, and the third row shows the adversaries after valid projection; the adversarial images are all predicted as Jigsaw Puzzle. The perturbation budget is fixed as in our other experiments.
Figure 4: Illustration of attention shift. We use (Ramprasaath, Michael) to visualize attention maps of clean (1st row) and adversarial (2nd row) images. Adversarial images are obtained by training the generator against VGG-16 on the Paintings dataset.
Model Attack VGG-16 VGG-19 ResNet-152

VGG-16
FFF 47.10 41.98 27.82
AAA 71.59 65.64 45.33
UAP 78.30 73.10 63.40
Ours-Paintings 99.58 98.97 47.90
Ours-Comics 99.83 99.56 58.18
Ours-ImageNet 99.75 99.44 52.64

VGG-19
FFF 38.19 43.60 26.34
AAA 69.45 72.84 51.74
UAP 73.50 77.80 58.00
Ours-Paintings 98.90 99.61 40.98
Ours-Comics 99.29 99.76 42.61
Ours-ImageNet 99.19 99.80 53.02

ResNet-152
FFF 19.23 17.15 29.78
AAA 47.21 48.78 60.72
UAP 47.00 45.5 84.0
Ours-Paintings 86.95 85.88 98.03
Ours-Comics 88.94 88.84 94.18
Ours-ImageNet 95.40 93.26 99.02
Table 4: White- and Black-box: Fooling rate (%) of untargeted attacks on the ImageNet val-set under a fixed perturbation budget. Rows are grouped by the model the attack is learned against; entries where this source model matches the target column correspond to white-box attacks. Our attack's transferability from ResNet-152 to VGG-16/19 is even higher than other methods' white-box results.
Model Attack Inc-v3ens3 Inc-v3ens4 IncRes-v2ens

Inc-v3
UAP 1.00/7.82 1.80/5.60 1.88/5.60
GAP 5.48/33.3 4.14/29.4 3.76/22.5
RHP 32.5/60.8 31.6/58.7 24.6/57.0

Inc-v4
UAP 2.08/7.68 1.94/6.92 2.34/6.78
RHP 27.5/60.3 26.7/62.5 21.2/58.5

IncRes-v2
UAP 1.88/8.28 1.74/7.22 1.96/8.18
RHP 29.7/62.3 29.8/63.3 26.8/62.8

Ours-Paintings 33.92/72.46 38.94/71.4 33.24/69.66
Ours-gs-Paintings 47.78/73.06 48.18/72.68 42.86/73.3
Ours-Comics 21.06/67.5 24.1/68.72 12.82/54.72
Ours-gs-Comics 34.52/70.3 56.54/69.9 23.58/68.02
Ours-ImageNet 28.34/71.3 29.9/66.72 19.84/60.88
Ours-gs-ImageNet 41.06/71.96 42.68/71.58 37.4/72.86
Table 5: Black-box: Transferability comparison in terms of the % increase in error rate after attack. Results are reported on a 5k subset of ImageNet within a fixed perturbation budget. Our generators are trained against the naturally trained Inc-v3 only. 'gs' denotes Gaussian smoothing applied to the generator output before projection, which enhances our attack strength.

4.2.1 Results

Table 3 shows results for the cross-domain black-box setting, where the attacker has no access to the target model architecture, its parameters, its training distribution or its label space. Note that ChestX (Rajpurkar, Irvin) does not have much texture, an important feature for deceiving ImageNet models (Geirhos, Rubisch), yet the transferability of perturbations learned against ChexNet is much better than that of Gaussian noise.

Tables 4 and 5 compare our method against different universal attack methods on both naturally and adversarially trained models (Florian, Kurakin) (Inc-v3, Inc-v4 and IncRes-v2). Our attack success rate is much higher in both the white-box and black-box settings. Notably, for adversarially trained models, Gaussian smoothing on top of our approach leads to a significant increase in transferability. We provide a further comparison with GAP (Poursaeed, Katsman) in the supplementary material. Figures 3 and 4 show the model outputs and attention shift on example adversaries.

4.2.2 Comparison with State-of-the-Art

Finally, we compare our method with recently proposed instance-specific attack method (Dong, Pang) that exhibits high transferability to adversarially trained models. For the very first time in literature, we showed that a universal function like ours can attain much higher transferability rate, outperforming the state-of-the-art instance-specific translation invariant method (Dong, Pang) by a large average absolute gain of 46.6% and 86.5% (in fooling rates) on both naturally and adversarially trained models, respectively, as reported in Table 6. The naturally trained models are Inception-v3 (Inc-v3) Szegedy (Vanhoucke,Ioffe,Shlens,Wojna), Inception-v4 (Inc-v4), Inception Resnet v2 (IncRes-v2) Szegedy (Ioffe) and Resnet v2-152 (Res-152) He (Zhang)). The adversarially trained models are from Florian (Kurakin).

Attack Naturally Trained Adversarially Trained
Inc-v3 Inc-v4 IncRes-v2 Res-152 Inc-v3ens3 Inc-v3ens4 IncRes-v2ens

Inc-v3

FGSM 79.6 35.9 30.6 30.2 15.6 14.7 7.0
TI-FGSM 75.5 37.3 32.1 34.1 28.2 28.9 22.3
MI-FGSM 97.8 47.1 46.4 38.7 20.5 17.4 9.5
TI-MI-FGSM 97.9 52.4 47.9 41.1 35.8 35.1 25.8
DIM 98.3 73.8 67.8 58.4 24.2 24.3 13.0
TI-DIM 98.5 75.2 69.2 59.2 46.9 47.1 37.4

IncRes-v2

FGSM 44.3 36.1 64.3 31.9 18.0 17.2 10.2
TI-FGSM 49.7 41.5 63.7 40.1 34.6 34.5 27.8
MI-FGSM 74.8 64.8 100.0 54.5 25.1 23.7 13.3
TI-MI-FGSM 76.1 69.5 100.0 59.6 50.7 51.7 49.3
DIM 86.1 83.5 99.1 73.5 41.2 40.0 27.9
TI-DIM 86.4 85.5 98.8 76.3 61.3 60.1 59.5

Res-152

FGSM 40.1 34.0 30.3 81.3 20.2 17.7 9.9
TI-FGSM 46.4 39.3 33.4 78.9 34.6 34.5 27.8
MI-FGSM 54.2 48.1 44.3 97.5 25.1 23.7 13.3
TI-MI-FGSM 55.6 50.9 45.1 97.4 39.9 37.7 32.8
DIM 77.0 77.8 73.5 97.4 40.5 36.0 24.1
TI-DIM 77.0 73.9 73.2 97.2 60.3 58.8 42.8
Ours-Paintings 100.0 99.7 99.8 98.9 69.3 74.6 64.8
Ours-gs-Paintings 99.9 98.5 97.6 93.6 85.2 83.9 75.9
Ours-Comics 99.9 99.8 99.8 98.7 39.3 46.8 23.3
Ours-gs-Comics 99.9 97.0 93.4 87.7 60.3 58.8 42.8
Ours-ImageNet 99.8 99.1 97.5 98.1 55.4 60.5 36.4
Ours-gs-ImageNet 98.9 95.4 90.5 91.8 78.6 78.4 68.9
Table 6: White-box and Black-box: Transferability comparisons. Success rates on the ImageNet-NeurIPS validation set (1k images) are reported for adversaries created within the perturbation budget prescribed by the standard practice of (Dong, Pang). Our generators are learned against the naturally trained Inception-v3 only; entries where the source and target model coincide correspond to white-box attacks. 'gs' denotes Gaussian smoothing applied to the generator output before projection. Smoothing leads to a slight decrease in transferability against naturally trained models but a significant increase against adversarially trained models.
Figure 5: Effect of Gaussian kernel size and number of training epochs on the transferability (% fool rate) of adversarial examples: (a) naturally trained IncRes-v2 and (b) adversarially trained IncRes-v2 as a function of training epochs; (c) naturally trained IncRes-v2 and (d) adversarially trained IncRes-v2 as a function of kernel size. The generator is trained against Inception-v3 on Paintings, while inference is performed on ImageNet-NeurIPS. Firstly, as the number of epochs increases, transferability against the naturally trained IncRes-v2 increases while it decreases against its adversarially trained version. Secondly, as the size of the Gaussian kernel increases, transferability against both the naturally and the adversarially trained IncRes-v2 decreases; a kernel of size 3 gives the best results against the adversarially trained model. The perturbation budget is fixed.

4.3 Transferability: Naturally Trained vs. Adversarially Trained

Furthermore, we study the impact of training iterations and Gaussian smoothing (Dong, Pang) on the transferability of our generative adversarial examples. We report results using the naturally and adversarially trained IncRes-v2 models (Szegedy, Ioffe), as other models exhibit similar behaviour. Figure 5 displays the transferability (% fool rate) as a function of the number of training epochs (a-b) and various kernel sizes for Gaussian smoothing (c-d).

Firstly, we observe a gradual increase in the transferability of the generator against the naturally trained model as the training epochs advance; in contrast, the transferability deteriorates against the adversarially trained model. Therefore, when targeting naturally trained models, we train for ten epochs on the Paintings, Comics and ChestX datasets (although we anticipate better performance with more epochs). When targeting adversarially trained models, we deploy an early-stopping criterion to obtain the best generator, since performance on such models drops as the number of epochs increases. This fundamentally shows that naturally and adversarially trained models rely on different sets of features.

Our results clearly demonstrate that the adversarial solution space is shared across different architectures and even across distinct data domains. Since we train our generator against naturally trained models only, it converges to a solution space that adversarially trained models have already been trained on. As a result, our perturbations gradually become weaker against adversarially trained models as training progresses. A visual demonstration is provided in the supplementary material.

Secondly, Gaussian smoothing has opposite effects on naturally and adversarially trained models: after smoothing, the adversaries become stronger against adversarially trained models and weaker against naturally trained ones. We achieve the best results against adversarially trained models with a kernel size of 3 and use this setting consistently in our experiments. We apply the Gaussian kernel to the unrestricted generator output; as the kernel size increases, this output becomes very smooth and, after projection into the valid range, the adversaries become weaker.
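A sketch of this smoothing step is shown below: a fixed Gaussian kernel is applied per channel to the unrestricted generator output before the usual ℓ∞ projection. The σ value and the 16/255 budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=3, sigma=1.0):
    # Separable 2-D Gaussian kernel, replicated once per RGB channel.
    coords = torch.arange(size).float() - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)
    k = g.t() @ g                          # outer product -> 2-D kernel
    return k.expand(3, 1, size, size)      # depthwise weights for conv2d(groups=3)

def smooth_then_project(unbounded_adv, x, eps=16/255, size=3):
    # Smooth the unrestricted generator output, then project into the l_inf ball around x.
    k = gaussian_kernel(size).to(unbounded_adv.device)
    smoothed = F.conv2d(unbounded_adv, k, padding=size // 2, groups=3)
    return torch.min(torch.max(smoothed, x - eps), x + eps).clamp(0, 1)
```

Larger kernels over-smooth the unrestricted output, which is consistent with the weakening of the adversaries observed above.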

5 Conclusion

Adversarial examples have been shown to be transferable across different models trained on the same domain. For the first time in the literature, we show that cross-domain transferable adversaries exist that can fool target-domain networks with high success rates. We propose a novel generative framework that learns to generate strong adversaries using a relativistic discriminator. Surprisingly, our universal adversarial function can beat instance-specific attack methods that were previously found to be much stronger than universal perturbations. Our generative attack model, trained on Chest X-ray or Comics images, can fool VGG-16, ResNet-50 and Dense-121 models with high success rates, without any knowledge of their data distribution or label space.

References

Supplementary: Cross Domain Transferability of Adversarial Perturbations

We further compare our method with GAP [16] in Sec. 1 to demonstrate the superiority of our approach. In Sec. 2, we visually demonstrate the effect of training time and Gaussian kernel size on the generated adversaries. Finally, in Sec. 3, we show adversaries produced by different generators and demonstrate the attention shift on adversarial examples.

1 Comparison with GAP

Attack VGG-16 VGG-19 Inception-v3
Fool Rate (↑) Top-1 (↓)  Fool Rate (↑) Top-1 (↓)  Fool Rate (↑) Top-1 (↓)
GAP 66.9 30.0 68.4 28.8 85.3 13.7
Ours-Paintings 95.31 4.29 96.84 2.94 97.95 1.86
Ours-Comics 99.15 0.97 98.58 1.33 98.90 1.0
Ours-ImageNet 98.57 1.32 98.71 1.24 91.03 8.4
GAP 80.80 17.7 84.10 14.6 98.3 1.7
Ours-Paintings 99.58 0.4 99.61 0.38 99.65 0.33
Ours-Comics 99.83 0.16 99.76 0.22 99.72 0.26
Ours-ImageNet 99.75 0.24 99.80 0.21 99.05 0.89
GAP 88.5 10.6 90.7 8.6 99.5 0.5
Ours-Paintings 99.86 0.16 99.83 0.16 99.8 0.18
Ours-Comics 99.88 0.12 99.86 0.13 99.83 0.17
Ours-ImageNet 99.87 0.13 99.86 0.15 99.67 0.13
Table 1: Comparison between GAP and our method. Untargeted attack success rate (%) in terms of fooling rate (higher is better) and top-1 accuracy (lower is better), reported on 50k validation images. Each block of rows corresponds to a progressively larger perturbation budget, and each attack is carried out in the white-box setting.

2 Effect of Training Time and Gaussian Kernel Size

Figures 1 and 2 show the evolution of generative adversaries as the number of epochs increases. At early epochs, the adversaries are smoother and more transferable against adversarially trained models. As training progresses, the generator converges to a solution with locally strong patterns that are more transferable to naturally trained models.

Figures 3 and 4 show the effect of Gaussian smoothing: as the kernel size increases, the transferability of the adversaries decreases.

Epoch: 1 Epoch: 3 Epoch: 6 Epoch: 8 Epoch: 10
Figure 1: Evolution of adversaries produced by the generator as training progresses. Adversaries found at early training stages (e.g., epoch 1) are highly transferable against adversarially trained models, while adversaries found at later stages (e.g., epoch 10) are highly transferable against naturally trained models. The generator is trained against Inc-v3 on the Paintings dataset. The first row shows unrestricted adversaries; the second row shows adversaries after valid projection.
Epoch: 1 Epoch: 3 Epoch: 6 Epoch: 8 Epoch: 10
Figure 2: Evolution of adversaries produced by the generator as training progresses. Adversaries found at early training stages (e.g., epoch 1) are highly transferable against adversarially trained models, while adversaries found at later stages (e.g., epoch 10) are highly transferable against naturally trained models. The generator is trained against Inc-v3 on the Paintings dataset. The first row shows unrestricted adversaries; the second row shows adversaries after valid projection.
3x3 5x5 7x7 9x9 11x11
Figure 3: Evolution of adversaries produced by the generator as the size of the Gaussian kernel increases. Adversaries start to lose their effect as the kernel size increases; the best results against adversarially trained models are found at a kernel size of 3. The first and second rows show unrestricted adversaries before and after smoothing, while the third row shows adversaries after valid projection.
3x3 5x5 7x7 9x9 11x11
Figure 4: Evolution of adversaries produced by the generator as the size of the Gaussian kernel increases. Adversaries start to lose their effect as the kernel size increases; the best results against adversarially trained models are found at a kernel size of 3. The first and second rows show unrestricted adversaries before and after smoothing, while the third row shows adversaries after valid projection.

3 Examples

Figure 5 demonstrates the attention shift on generative adversarial examples produced by our method. Figures 6, 7, 8 and 9 show examples of different clean images and their corresponding adversaries produced by different generators.

Figure 5: Illustration of attention shift for ResNet-152. We use [31] to visualize attention maps of clean (1st row) and adversarial (2nd row) images. Adversarial images are obtained by training the generator against ResNet-152 on the Paintings dataset.

Original Images Target model: VGG-16, Distribution: Paintings, Fooling rate: 99.58% Target model: VGG-16, Distribution: Comics, Fooling rate: 99.8% Target model: VGG-16, Distribution: ImageNet, Fooling rate: 99.7%

Figure 6: Untargeted adversaries produced by the generator (before and after projection) trained against VGG-16 on different distributions (Paintings, Comics and ImageNet). Adversaries are projected within the perturbation budget and the fooling rate is reported on the ImageNet validation set.

Original Images Target model: VGG-19, Distribution: Paintings, Fooling rate: 99.6% Target model: VGG-19, Distribution: Comics, Fooling rate: 99.76% Target model: VGG-19, Distribution: ImageNet, Fooling rate: 99.8%

Figure 7: Untargeted adversaries produced by the generator (before and after projection) trained against VGG-19 on different distributions (Paintings, Comics and ImageNet). Adversaries are projected within the perturbation budget and the fooling rate is reported on the ImageNet validation set.

Original Images Target model: Inc-v3, Distribution: Paintings, Fooling rate: 99.65% Target model: Inc-v3, Distribution: Comics, Fooling rate: 99.72% Target model: Inc-v3, Distribution: ImageNet, Fooling rate: 99.04%

Figure 8: Untargeted adversaries produced by the generator (before and after projection) trained against Inception-v3 on different distributions (Paintings, Comics and ImageNet). Adversaries are projected within the perturbation budget and the fooling rate is reported on the ImageNet validation set.

Original Images Target model: ResNet-152, Distribution: Paintings, Fooling rate: 98.0% Target model: ResNet-152, Distribution: Comics, Fooling rate: 94.18% Target model: ResNet-152, Distribution: ImageNet, Fooling rate: 99.0%

Figure 9: Untargeted adversaries produced by the generator (before and after projection) trained against ResNet-152 on different distributions (Paintings, Comics and ImageNet). Adversaries are projected within the perturbation budget and the fooling rate is reported on the ImageNet validation set.