
Towards Efficiently Evaluating the Robustness of Deep Neural Networks in IoT Systems: A GAN-based Method

Intelligent Internet of Things (IoT) systems based on deep neural networks (DNNs) have been widely deployed in the real world. However, DNNs are vulnerable to adversarial examples, which raises concerns about the reliability and security of intelligent IoT systems. Testing and evaluating the robustness of IoT systems therefore becomes necessary and essential. Recently, various attacks and strategies have been proposed, but the efficiency problem remains unsolved: existing methods are either computationally expensive or time-consuming, which limits their use in practice. In this paper, we propose a novel framework called Attack-Inspired GAN (AI-GAN) to generate adversarial examples conditionally. Once trained, it can generate adversarial perturbations efficiently given input images and target classes. We apply AI-GAN on different datasets in white-box settings, black-box settings, and against targeted models protected by state-of-the-art defenses. Through extensive experiments, AI-GAN achieves high attack success rates, outperforming existing methods, and reduces generation time significantly. Moreover, for the first time, AI-GAN successfully scales to complex datasets, e.g., CIFAR-100 and ImageNet, with about 90% success rates among all classes.





I Introduction

Deep neural networks have achieved great success in the last few years and drawn tremendous attention from both academia and industry. Nowadays, they are intensively applied in smart industrial and daily-use IoT systems and devices, including safety- and security-critical ones such as autopilots [1, 2], face recognition on mobile devices [3, 4], traffic transportation systems [5, 6], and intelligent manufacturing [7, 8]. With the rapid development and deployment of these intelligent IoT systems, safety concerns arise from society.

Recent studies have found that deep neural networks used by IoT systems are vulnerable to adversarial examples [9, 10, 11, 12]. Adversarial examples are usually crafted by adding carefully designed, imperceptible perturbations to legitimate samples. To human eyes, adversarial examples look the same as their legitimate copies, yet the predictions of deep learning models differ. The existence of adversarial examples dramatically challenges the safety and reliability of smart IoT systems. Thus the importance and necessity of efficiently evaluating the reliability of safety-critical systems are increasing [13].

Many researchers have managed to evaluate the robustness of deep neural networks in different ways, such as box-constrained L-BFGS [9], the Fast Gradient Sign Method (FGSM) [10], the Jacobian-based Saliency Map Attack (JSMA) [14], the C&W attack [15], and the Projected Gradient Descent (PGD) attack [16]. These attack methods are optimization-based and use proper distance metrics to restrict the magnitudes of perturbations, so that the resulting adversarial examples look visually natural. However, such optimization is usually time-consuming and computation-intensive, and strong attacks need access to the target models at inference time. These properties make optimization-based methods impractical for testing IoT systems.

Some researchers employ generative models, e.g., GAN [17], to produce adversarial perturbations [18, 19] or generate adversarial examples directly [20]. Compared with optimization-based methods, generative models significantly reduce the generation time of adversarial examples. Yet existing methods have two apparent limitations: 1) their generation ability is limited, i.e., they can only perform one specific targeted attack at a time, and re-training is needed for different targets; 2) they can hardly scale to real-world images. Most prior works evaluated their methods only on MNIST and CIFAR-10, which is not sufficient for complicated real-world tasks.

To solve the above problems, we propose a new variant of GAN, named Attack-Inspired GAN (AI-GAN) and shown in Fig. 2, to generate adversarial perturbations conditionally and efficiently: a generator is trained to perform targeted attacks with clean images and target classes as inputs, and a discriminator uses an auxiliary classifier to classify generated samples in addition to discriminating them. Unlike existing works, we propose to add an attacker and train the discriminator adversarially. On the one hand, the adversarially trained discriminator becomes robust, and this robust discriminator enhances the generator’s attack abilities. On the other hand, a robust discriminator also stabilizes the GAN training process [21, 22]. Inspired by other complicated deep learning tasks, e.g., object detection, which usually adopt pre-trained models as backbones, we use a pre-trained model in our generator to compress data for better scalability. For evaluation, we select four datasets with different numbers of classes and image sizes and compare our approach with four representative methods in white-box settings, black-box settings, and under defenses. From the experiments, we conclude that 1) our model and loss function are effective, with much-improved efficiency and scalability; 2) our approach generates comparable or even stronger attacks than existing methods under the same bound of perturbations in various experimental settings.

We summarize our contributions as follows:

  1. Unlike existing methods, we propose a novel framework called AI-GAN where a generator, a discriminator, and an attacker are trained jointly.

  2. To our best knowledge, AI-GAN is the first GAN-based approach to generate perceptually realistic adversarial examples given inputs and targets that scales to complicated datasets, e.g., CIFAR-100 and ImageNet, achieving high attack success rates.

  3. Through extensive experiments, AI-GAN shows strong attack abilities, outperforming existing methods in both white-box and black-box settings, and saves time significantly.

  4. We show AI-GAN achieves comparable or higher success rates on target models protected by the state-of-the-art defense methods.

The remainder of this paper is organized as follows: In Section II, we briefly review the literature related to adversarial examples and generative models. In Section III, we introduce some representative attacks. Then Section IV elaborates our proposed method for generating adversarial examples efficiently. In Section V, we show the experimental results on various datasets with different settings. In Section VI, we discuss the efficiency of attack generation. Finally, Section VII concludes the paper.

II Related Work

II-A Adversarial Examples

Adversarial examples, which are able to mislead deep neural networks, were first discovered by [9], who managed to maximize the network’s prediction error by adding hardly perceptible perturbations to benign images. Since then, various attack methods have been proposed. [10] developed the Fast Gradient Sign Method (FGSM) to compute the perturbations efficiently using back-propagation. The perturbation can be expressed as $\eta = \epsilon \cdot \mathrm{sign}(\nabla_x J(\theta, x, y))$, where $J$ represents the cross-entropy loss function and $\epsilon$ is a constant. Thus adversarial examples are expressed as $x' = x + \eta$. One intuitive extension of FGSM is the Basic Iterative Method [23], which executes FGSM many times with a smaller $\epsilon$. [14] proposed the Jacobian-based Saliency Map Attack (JSMA) with the $\ell_0$ distance. The saliency map discloses the likelihood of fooling the target network when modifying pixels in original images. Optimization-based methods have been proposed to generate quasi-imperceptible perturbations with constraints in different distance metrics. [15] designed a set of attack methods whose objective function to be minimized is $\|\delta\|_p + c \cdot f(x + \delta)$, where $c$ is a constant and $p$ can be 0, 2, or $\infty$. [24] proposed an algorithm approximating the gradients of targeted models based on Zeroth Order Optimization (ZOO). [16] introduced Projected Gradient Descent (PGD) to generate adversarial examples, which has proved to be the strongest first-order attack. However, such methods are usually time-consuming because the optimization process is slow, and they can only generate perturbations for one input at a time.

II-B Generative Adversarial Networks (GANs)

GAN was first proposed by [17] and has achieved great success in various tasks [25, 26, 27]. A GAN consists of two competing neural networks with different objectives, called the discriminator and the generator. The training phase can be seen as a min-max game between the discriminator and the generator: the generator is trained to generate fake images to fool the discriminator, while the discriminator tries to distinguish real images from fake ones. Usually the generator and the discriminator are trained alternately, one fixed while the other updates. The original objective function is

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))].$$
However, the training of vanilla GANs is very unstable. To stabilize GAN training, a variety of improvements have been developed, such as DCGAN [28], WGAN [29], and WGAN with gradient penalty [30]. In this paper, we show that a robust discriminator is beneficial for stable training.
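As a quick numerical sanity check on the minimax value function above: for a fixed generator, the optimal discriminator is $D^*(x) = p_{\mathrm{data}}(x) / (p_{\mathrm{data}}(x) + p_g(x))$, and when the generator exactly matches the data distribution the value equals $-\log 4$. A minimal NumPy sketch over discrete toy distributions (the bin probabilities here are illustrative, not from the paper):

```python
import numpy as np

# Discrete toy distributions over four bins (illustrative values).
p_data = np.array([0.4, 0.3, 0.2, 0.1])
p_g = p_data.copy()  # generator exactly matches the data distribution

# For a fixed G, the optimal discriminator is D*(x) = p_data / (p_data + p_g).
d_star = p_data / (p_data + p_g)

# V(D*, G) = E_{x~p_data}[log D*(x)] + E_{x~p_g}[log(1 - D*(x))]
v = np.sum(p_data * np.log(d_star)) + np.sum(p_g * np.log(1.0 - d_star))
```

When $p_g \neq p_{\mathrm{data}}$, the same computation gives a strictly larger value, which is what drives the generator toward the data distribution.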

II-C Generating Adversarial Examples via Generative Models

Generative models are often used to create new data because of their powerful representation ability. [19] first applied generative models to generate four types of adversarial perturbations (universal or image-dependent, targeted or untargeted) with U-Net [31] and ResNet [32] architectures. [33] extended the idea of [19] with conditional generation. [18] used the idea of GAN to make adversarial examples more realistic. Different from the above methods, which generate adversarial perturbations, some other methods generate adversarial examples directly; these are called unrestricted adversarial examples [20]. Song et al. [20] proposed to search the latent space of a pretrained ACGAN [34] to find adversarial examples. Note that all these methods are evaluated only on simple datasets, e.g., MNIST and CIFAR-10.

III Preliminaries

In this section, we introduce some common adversarial attacks. Given an input image $x$ with true label $y$, the adversary aims to find a perturbation $\delta$ so that $f(x + \delta) \neq y$. Because optimizing this condition directly is intractable, the adversary instead maximizes the loss function $J$ of the target classifier $f$. The $\ell_p$ norm of $\delta$ is usually restricted by a small scalar value $\epsilon$, and $\ell_\infty$ is a popular norm in the literature [16].

III-A Fast Gradient Sign Method (FGSM)

Goodfellow et al. [10] hypothesized the “linearity” of deep network models in high-dimensional spaces: small perturbations on inputs are exaggerated by the deep models. FGSM exploits such linearity to generate adversarial examples. Specifically, FGSM computes the sign of the gradients with respect to the input $x$, and the adversarial perturbation is expressed as

$$\delta = \epsilon \cdot \mathrm{sign}(\nabla_x J(\theta, x, y)),$$

where $\theta$ represents the parameters of the targeted classifier, $J$ denotes the loss function, and $\epsilon$ is the magnitude of perturbations.
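To make the FGSM step concrete, here is a minimal NumPy sketch on a toy linear model with a logistic loss. The model, weights, and input values are illustrative assumptions, not from the paper; the gradient of the loss with respect to the input is derived by hand for this simple model:

```python
import numpy as np

def fgsm_perturbation(grad, epsilon):
    """FGSM: the sign of the input gradient, scaled by epsilon."""
    return epsilon * np.sign(grad)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear model with loss L(x) = -log sigmoid(w.x) for label 1;
# its gradient w.r.t. x is (sigmoid(w.x) - 1) * w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
grad_x = (sigmoid(w @ x) - 1.0) * w

eps = 0.1
delta = fgsm_perturbation(grad_x, eps)
x_adv = x + delta  # every coordinate moves exactly +/- eps
```

A single signed-gradient step like this is cheap but weak, which is why the iterative variants below exist.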

III-B C&W Attack

The C&W attack, proposed by [15], is designed to generate quasi-imperceptible adversarial examples with high attack ability. The objective of the C&W attack is formulated as follows:

$$\min_\delta \; c \cdot f(x + \delta) + \|\delta\|_p,$$

with $f$ defined as

$$f(x') = \max\Big(Z(x')_y - \max_{i \neq y} Z(x')_i,\; -\kappa\Big),$$

where $c$ and $\kappa$ are hyper-parameters, $y$ is the correct class $x$ belongs to, and $Z(x')_i$ is the probability of class $i$. Thus, the first term in the objective finds the worst-case samples, while the second term constrains the magnitude of the perturbations.
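The margin term $f$ can be sketched directly. The helper below implements the standard untargeted C&W margin over a logit vector; the function name and example values are illustrative assumptions:

```python
import numpy as np

def cw_margin(logits, y, kappa=0.0):
    """Untargeted C&W margin: positive while the true class y still has the
    highest logit; minimizing it pushes y below the best other class, and
    the floor -kappa controls the attack's confidence."""
    z = logits.astype(float).copy()
    zy = z[y]
    z[y] = -np.inf          # exclude the true class from the max
    return max(zy - z.max(), -kappa)
```

With `kappa = 0`, the margin bottoms out at zero as soon as any other class overtakes the true one, so the optimizer stops spending perturbation budget on that term.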

III-C Projected Gradient Descent (PGD) Attack

The PGD attack is a multi-step variant of FGSM with a smaller step size. To make sure that adversarial examples stay in a valid range, the PGD attack projects the adversarial examples into the neighborhood of the original samples after every update, where the neighborhood is defined by $\epsilon$ and the $\ell_\infty$ norm. Formally, the update scheme is expressed as

$$x^{t+1} = \Pi_{x + \mathcal{S}}\big(x^t + \alpha \cdot \mathrm{sign}(\nabla_x J(\theta, x^t, y))\big),$$

where $\mathcal{S}$ is the set of allowed perturbations ensuring the validity of $x^{t+1}$, and $\alpha$ is the step size.
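The loop below sketches this update in NumPy, reusing the same toy logistic model as before. The projection onto the $\ell_\infty$ ball is a simple coordinate-wise clip; the model and all values are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, grad_fn, epsilon, alpha, steps):
    """Repeated signed-gradient ascent steps, each followed by projection
    onto the l_inf epsilon-ball around x and the valid pixel range [0, 1]."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # projection step
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # valid image range
    return x_adv

# Toy linear "classifier" with loss L(x) = -log sigmoid(w.x) for label 1.
w = np.array([0.5, -1.0, 2.0])
grad = lambda x: (sigmoid(w @ x) - 1.0) * w  # gradient of the loss w.r.t. x

x0 = np.array([0.5, 0.5, 0.5])
x_adv = pgd_attack(x0, grad, epsilon=0.1, alpha=0.03, steps=7)
```

Taking several small steps with projection is what makes PGD stronger than a single FGSM step of the same total budget.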

III-D AdvGAN

AdvGAN is the first work that employs GAN to generate targeted adversarial examples. The architecture of AdvGAN is illustrated in Fig. 1. AdvGAN consists of a generator $G$, a discriminator $D$, and a pretrained target model $f$. Given an input $x$, the goal of AdvGAN is to generate an adversarial perturbation $G(x)$ so that $f(x + G(x)) = t$, where $t$ is the target class pre-defined by the adversary and $x$ does not belong to class $t$.

Fig. 1: The architecture of AdvGAN.

IV Our Approach

IV-A Problem Definition

Consider a classification network $f$ trained on a dataset $X \subseteq \mathbb{R}^n$, with $n$ being the dimension of the inputs. Suppose $(x_i, y_i)$ is an instance in the training data, where $x_i$ is generated from some unknown distribution $\mathcal{D}$ and $y_i$ is the ground-truth label. The classifier $f$ is trained on natural images and achieves high accuracy. The goal of an adversary is to generate an adversarial example $x'$, which fools $f$ into outputting a wrong prediction while looking similar to $x$ in terms of some distance metric. We use the $\ell_\infty$ norm to bound the magnitude of perturbations. There are two types of such attacks: untargeted attacks, where the adversary misleads the target model into predicting any incorrect class, and targeted attacks, where the adversary misleads the target model into a pre-defined class other than the true class. Given an instance $(x, y)$, let $x'$ be the adversarial example generated by the adversary. If the goal of the adversary is $f(x') \neq y$, this is an untargeted attack; if the goal is $f(x') = t$, where $t$ is the target class defined by the adversary and $t \neq y$, this is a targeted attack. As claimed in [35], misclassification caused by untargeted attacks may not be meaningful for closely related classes, e.g., a German shepherd classified as a Doberman; they suggest that targeted attacks are more suitable for evaluation. In addition, generating targeted attacks is strictly harder than generating untargeted attacks [35]. Thus, we mainly focus on targeted attacks in this paper.

IV-B Proposed Framework

Fig. 2: Overview of our approach. Left: the Training Scheme of AI-GAN. See Section IV-B for the detailed explanation. Right: an Illustration of Targeted Attacks by AI-GAN. Given the “stop sign” image and the target class “slow”, the perturbed image is misclassified in the “slow” class (red line). Without the modification by AI-GAN, this “stop sign” image is classified correctly (black line).

We propose a new variant of conditional GAN called Attack-Inspired GAN (AI-GAN) to generate adversarial examples conditionally and efficiently. As shown in Fig. 2, the overall architecture of AI-GAN consists of a generator $G$, a two-head discriminator $D$, an attacker $A$, and a target classifier $f$. Within our approach, both the generator and the discriminator are trained end-to-end: the generator generates and feeds perturbed images to the discriminator; meanwhile, clean images sampled from the training data and their attacked copies are fed into the discriminator as well. Specifically, the generator takes a clean image $x$ and a target class label $t$ as inputs to generate an adversarial perturbation $G(x, t)$, where $t$ is sampled randomly from the dataset classes excluding the correct class. An adversarial example $x' = x + G(x, t)$ is then obtained and sent to the discriminator $D$. Besides the adversarial examples generated by $G$, the discriminator also takes the clean images and the untargeted adversarial examples generated by the attacker $A$. Thus $D$ not only discriminates clean from perturbed images but also classifies adversarial examples correctly.

Discriminator. The discriminator is originally designed to classify real and fake images, forcing the generator to generate convincing images [17, 18]. However, it is known that training a model on additional tasks can help the original task [36, 34]. Besides, an auxiliary classifier can be used for training the generator as well. Motivated by these considerations, we propose a discriminator with an auxiliary classification module trained to reconstruct the correct labels of adversarial examples, together with an improved loss function based on AC-GAN [34].

In our approach, the discriminator has two branches, as shown in Fig. 2: one is trained to discriminate between clean images $x$ and perturbed images $x'$; the other is trained to classify $x'$ correctly. To further enhance the attack ability of the generator, we train the classification module adversarially and add an attacker into the training process. We choose the PGD attack as our attacker because of its strong attack ability. Specifically, the attacker generates adversarial examples for the classification module while training the discriminator. Such a robust classification module stimulates the generator to produce stronger attacks. Another benefit of a robust discriminator is that it helps stabilize and accelerate the whole training [21].

Overall, the loss function of our discriminator consists of three parts: $\mathcal{L}_D$ for discriminating clean/perturbed images, and $\mathcal{L}_{D}^{A}$ and $\mathcal{L}_{D}^{G}$ for classification on adversarial examples generated respectively by the attacker and the generator. They are expressed as

$$\mathcal{L}_D = \mathbb{E}_{x}[\log D(x)] + \mathbb{E}_{x,t}[\log(1 - D(x + G(x, t)))],$$
$$\mathcal{L}_{D}^{A} = \mathbb{E}_{x}[\log P_D(y \mid A(x))],$$
$$\mathcal{L}_{D}^{G} = \mathbb{E}_{x,t}[\log P_D(y \mid x + G(x, t))],$$

where $y$ represents the true label and $P_D(\cdot \mid \cdot)$ is the class distribution predicted by the auxiliary classifier. The goal of the discriminator is to maximize $\mathcal{L}_D + \mathcal{L}_{D}^{A} + \mathcal{L}_{D}^{G}$.
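As a sketch of how the three discriminator terms combine, the function below sums a binary cross-entropy for the real/fake head with two classification cross-entropies (written as negative log-likelihoods, so minimizing this loss corresponds to maximizing the log-likelihood objective). All inputs are illustrative placeholder probabilities rather than real network outputs:

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy for the real/fake discrimination head."""
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def ce(probs, labels):
    """Cross-entropy of the auxiliary classification head vs. true labels."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def discriminator_loss(d_clean, d_perturbed, cls_attacker, cls_generator, y):
    """Three terms: discriminate clean vs. perturbed images, and classify
    adversarial examples from the attacker A and the generator G."""
    l_disc = (bce(d_clean, np.ones_like(d_clean))
              + bce(d_perturbed, np.zeros_like(d_perturbed)))
    return l_disc + ce(cls_attacker, y) + ce(cls_generator, y)

# Placeholder outputs for a batch of one image and two classes.
d_clean = np.array([0.5])       # real/fake head on a clean image
d_pert = np.array([0.5])        # real/fake head on a perturbed image
probs = np.array([[0.5, 0.5]])  # auxiliary classifier probabilities
y = np.array([0])               # true label
loss = discriminator_loss(d_clean, d_pert, probs, probs, y)
```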

Generator. The generators in prior work [18, 19, 33] are quite similar: given clean images, they output adversarial perturbations. There are two main problems with these methods: 1) their generators can generate only one specific targeted attack at a time because target classes are fixed during training; 2) their methods can hardly scale to large datasets.

To solve the above problems, we first modify the generator to take both clean images and target classes as inputs, as shown in Fig. 2. Then we pre-train the encoder in a self-supervised way, which we elaborate on in the next subsection. The pre-trained encoder extracts features effectively and reduces the difficulty compared with training from scratch. The pre-trained encoder also makes our approach similar to feature-space attacks and increases the transferability of the adversarial examples [37, 38]. Because we train a robust discriminator with an auxiliary classifier, our generator’s attack ability is further enhanced.

The loss function of the generator consists of three parts: $\mathcal{L}_{G}^{f}$ for attacking target models, $\mathcal{L}_{G}^{D}$ for attacking the discriminator, and the classification term $\mathcal{L}_{D}^{G}$, which is the same as in the discriminator. $\mathcal{L}_{G}^{f}$ and $\mathcal{L}_{G}^{D}$ are expressed as

$$\mathcal{L}_{G}^{f} = \mathbb{E}_{x,t}[\log P_f(t \mid x + G(x, t))],$$
$$\mathcal{L}_{G}^{D} = \mathbb{E}_{x,t}[\log D(x + G(x, t))],$$

where $t$ is the class of the targeted attack and $P_f(\cdot \mid \cdot)$ is the class distribution predicted by the target model. The goal of the generator is to maximize $\mathcal{L}_{G}^{f} + \mathcal{L}_{G}^{D} + \mathcal{L}_{D}^{G}$.
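The generator objective can be sketched the same way, again in negative-log-likelihood form so that minimizing the loss maximizes the log-likelihood terms. The probabilities below are illustrative placeholders, not real model outputs:

```python
import numpy as np

def generator_loss(p_target, d_on_adv, p_true_cls):
    """Three generator terms: the target model f should assign the target
    class t (p_target), the discriminator's real/fake head should say
    "real" on the adversarial example (d_on_adv), and the shared auxiliary
    classification term on the true label (p_true_cls)."""
    l_f = -np.mean(np.log(p_target))
    l_d = -np.mean(np.log(d_on_adv))
    l_cls = -np.mean(np.log(p_true_cls))
    return l_f + l_d + l_cls

# One sample, each head outputting probability 0.5.
loss = generator_loss(np.array([0.5]), np.array([0.5]), np.array([0.5]))
```

Note the tension built into the objective: the target model should predict the target class $t$ while the discriminator's classifier should still recover the true label $y$, which keeps the perturbation small and the image realistic.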

The whole training procedure of AI-GAN can be found in Algorithm 1.

Input: Training data $X$, targeted model $f$, learning rate $\eta$, the attacker $A$, class number $K$
Output: The parameters of the generator $G$ and the discriminator $D$: $\theta_G$ and $\theta_D$.
for each training iteration do
       Sample a minibatch of clean images $x \sim X$ with labels $y$;
       Sample target classes $t \neq y$ from the $K$ classes;
       Update $\theta_G$ by ascending the gradient of $\mathcal{L}_{G}^{f} + \mathcal{L}_{G}^{D} + \mathcal{L}_{D}^{G}$; // training Generator
       Generate $A(x)$ and update $\theta_D$ by ascending the gradient of $\mathcal{L}_D + \mathcal{L}_{D}^{A} + \mathcal{L}_{D}^{G}$; // training Discriminator
end for
Algorithm 1 AI-GAN Training.

IV-C Self-Supervised Pretraining of Generator

Both self-supervised learning and pretraining have proved effective in many vision and language problems [39, 40, 41, 42]. Here we employ these two techniques to reduce the training difficulty and improve the GAN’s scalability. The intuitive idea of self-supervised learning is to learn invariant features from data without labels. After self-supervised pretraining, the generator’s encoder can compress data and extract useful features for attacks. For pretraining we use the prevalent contrastive loss:

$$\mathcal{L}_{i,j} = -\log \frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp(\mathrm{sim}(z_i, z_k)/\tau)},$$

where $z_i$ is the latent vector of sample $x_i$, $z_j$ is the latent vector of sample $x_j$, $\mathrm{sim}(\cdot, \cdot)$ is a function that calculates the similarity of two latent vectors, e.g., cosine similarity, and $\tau$ is a hyper-parameter. We follow the setting in [39] and set $\tau$ to 0.5 in our experiments.
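This contrastive loss can be sketched directly in NumPy. The latent vectors below are illustrative toy values; the key property is that the loss is small when the positive pair $(i, j)$ is similar and large when it is not:

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def nt_xent(z, i, j, tau=0.5):
    """Contrastive loss for the positive pair (i, j) among the latent
    vectors z: -log softmax over similarities of z_i to all z_k, k != i."""
    sims = np.array([cosine_sim(z[i], z[k]) / tau
                     for k in range(len(z)) if k != i])
    pos = cosine_sim(z[i], z[j]) / tau
    return -pos + np.log(np.sum(np.exp(sims)))

# Four latent vectors: (0, 1) are aligned; (2, 3) are orthogonal to them.
z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
loss_pos = nt_xent(z, 0, 1)  # true positive pair: low loss
loss_neg = nt_xent(z, 0, 2)  # mismatched "pair": high loss
```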

V Experimental Results

In this section, we conduct extensive experiments to evaluate AI-GAN. First, we compare the attacking abilities of different methods with AI-GAN under different settings (white-box and black-box) on MNIST and CIFAR-10. Second, we evaluate these methods with defended target models. Third, we show the scalability of AI-GAN on complicated datasets: CIFAR-100 and ImageNet-50.

V-A Datasets

We consider four different datasets in our experiments: (1) MNIST [43], a handwritten digit dataset, has a training set of 60K examples and a test set of 10K examples. (2) CIFAR-10 [44] contains 60K images in 10 classes, of which 50K are for training and 10K for testing. (3) CIFAR-100 is just like CIFAR-10, except that it has 100 classes containing 600 images each. (4) ImageNet-50, a randomly sampled subset of ImageNet with 50 classes.

In all of our experiments, we follow the common settings in the literature [16, 10, 9, 18] and constrain the perturbations under an $\ell_\infty$ bound for all attack methods. The bounds of perturbations are 0.3/1 on MNIST and 8/255 on CIFAR-10, CIFAR-100, and ImageNet-50.

V-B Implementation Details

V-B1 Model Architectures and Training Details

We adopt generator and discriminator architectures similar to those of [45] and use 7-step Projected Gradient Descent (PGD) [16] as the attacker in the training process. We apply the C&W loss [15] to generate targeted adversarial examples for the generator and set the confidence hyper-parameter accordingly in our experiments. We use Adam as our solver [46], with a batch size of 256 and a learning rate of 0.002. A larger batch size is applicable as well with a larger learning rate.

V-B2 Target Models in the Experiments

For MNIST, we use model A from [47] and model B from [15], whose architectures are shown in Table I; For CIFAR-10, we use ResNet32 and WRN34 (short for Wide ResNet34) [32, 48]; For CIFAR-100, we use ResNet20 and ResNet32; For ImageNet-50, we use ResNet18. All the models are well pretrained on natural data.

Model A                    | Model B
Conv(64, 5, 5)+ReLU        | Conv(32, 3, 3)+ReLU
Conv(64, 5, 5)+ReLU        | Conv(32, 3, 3)+ReLU
Dropout(0.25)              | MaxPool(2, 2)
FullyConnected(128)+ReLU   | Conv(64, 3, 3)+ReLU
Dropout(0.5)               | Conv(64, 3, 3)+ReLU
FullyConnected + Softmax   | MaxPool(2, 2)
                           | FullyConnected(200) + ReLU
                           | FullyConnected(200) + ReLU
TABLE I: Model architectures used in this work for MNIST dataset.

V-B3 Selection of Baselines

For comparison, we select three representative optimization-based methods: FGSM, the C&W attack, and the PGD attack. FGSM is a single-step attack method, which is fast but weak. The C&W and PGD attacks are both iterative methods, and generally speaking, more steps produce stronger attacks. In our experiments, for the C&W attack, we follow the original setting stated in [15]; for the PGD attack, we set the number of steps to 20 and the step size to 2/255. We also compare with AdvGAN, the current state-of-the-art GAN-based attack method.

V-C White-box Attack Evaluation

Attacking in white-box settings is the worst case for target models, as the adversary knows everything about the models. This subsection evaluates AI-GAN on MNIST and CIFAR-10 with different target models. The attack success rates of AI-GAN are shown in Table II. From the table, we can see that AI-GAN achieves high attack success rates with different target classes on both MNIST and CIFAR-10. On MNIST, the success rate exceeds 96% for any target class; the average attack success rates are 99.14% for Model A and 98.50% for Model B. AI-GAN also achieves high attack success rates on CIFAR-10: the averages are 95.39% and 95.84% for ResNet32 and WRN34, respectively. In this subsection, we mainly compare AI-GAN with AdvGAN, which is the most similar to ours. As shown in Table III, AI-GAN performs better than AdvGAN in most cases. It is worth noting that AI-GAN can launch different targeted attacks at once, which is superior to AdvGAN.

Randomly selected adversarial examples generated by AI-GAN are shown in Fig. 3.

Target Class Model A Model B ResNet32 WRN34
Class 1 98.71% 99.45% 95.90% 90.70%
Class 2 97.04% 98.53% 95.20% 88.91%
Class 3 99.94% 98.14% 95.86% 93.20%
Class 4 99.96% 96.26% 95.63% 98.20%
Class 5 99.47% 99.14% 94.34% 96.56%
Class 6 99.80% 99.35% 95.90% 95.86%
Class 7 97.41% 99.34% 95.20% 98.44%
Class 8 99.85% 98.62% 95.31% 98.83%
Class 9 99.38% 98.50% 95.74% 98.91%
Class 10 99.83% 97.67% 94.88% 98.75%
Average 99.14% 98.50% 95.39% 95.84%
TABLE II: Attack success rates of adversarial examples generated by AI-GAN against different models on MNIST and CIFAR-10 in white-box settings.

[Adversarial examples and their perturbations on MNIST. ]

[Adversarial examples and their perturbations on CIFAR-10.]

Fig. 3: Visualization of Adversarial examples and perturbations generated by AI-GAN. Rows represent the different targeted classes and columns are 10 images from different classes. Original images are shown on the diagonal. Perturbations are amplified for visualization.
Methods Model A Model B ResNet32 WRN34
AdvGAN 97.90% 98.30% 99.30% 94.70%
AI-GAN 99.14% 98.50% 95.39% 95.84%
TABLE III: Comparison of attack success rates of adversarial examples generated by AI-GAN and AdvGAN in the white-box setting.

V-D Black-box Attack Evaluation

The black-box attack is another type of attack, which is more common in practice. In this subsection, we evaluate AI-GAN in the black-box setting. It is assumed that the adversaries have no prior knowledge of the training data or the target models, e.g., their parameters. A common practice for black-box attacks is to train a local substitute model and apply a white-box attack strategy, relying on the transferability of adversarial examples. We follow this practice and apply dynamic distillation to train AI-GAN. Specifically, we construct a model $f'$ and train it to distill information from the target model; additionally, we train it jointly with AI-GAN. Similar to adversarial training, $f'$ is updated each time after AI-GAN is updated. Through $f'$, AI-GAN can approximate the predictions of the target model on adversarial examples and learn to generate more aggressive ones.
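The distillation objective for the substitute model can be sketched as a cross-entropy against the black-box target's soft predictions, queried as an oracle. This is a minimal NumPy sketch under that assumption; the logits and probabilities below are placeholder values, not real model outputs:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(substitute_logits, target_probs):
    """Cross-entropy of the substitute model's predictions against the
    black-box target model's soft outputs."""
    log_p = np.log(softmax(substitute_logits))
    return -np.mean(np.sum(target_probs * log_p, axis=-1))

logits = np.array([[2.0, 0.0]])
matched = softmax(logits)            # target agrees with the substitute
mismatched = np.array([[0.0, 1.0]])  # target disagrees completely
```

Minimizing this loss pulls the substitute toward the target model's decision surface, which is what lets white-box attacks crafted on the substitute transfer to the black-box target.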

In the implementation, we select Model C from [49] and ResNet20 as the local models for MNIST and CIFAR-10, respectively. We mainly compare AI-GAN with AdvGAN as in the previous subsection, and the experimental results are shown in Table IV. Clearly, adversarial examples generated by GAN-based methods transfer better than those from optimization-based methods: AI-GAN and AdvGAN both achieve much higher attack success rates. On MNIST, AI-GAN achieves higher attack success rates than AdvGAN; on CIFAR-10, the success rates of AI-GAN are slightly lower than those of AdvGAN, which is acceptable considering the two methods’ different training costs.

Method Model A Model B ResNet32 WRN34
FGSM 29.49% 23.72% 20.86% 14.52%
CW 10.32% 10.09% 10.01% 10.07%
PGD 46.93% 33.70% 21.05% 14.78%
AdvGAN 93.40% 94.00% 81.80% 78.50%
AI-GAN 96.04% 94.95% 79.65% 75.94%
TABLE IV: Comparison of attack success rates of adversarial examples generated by AI-GAN and AdvGAN in the black-box setting.

V-E Attack Evaluation Under Defenses

Target Model Model A Model B Resnet32 WRN34
Attacks Adv. Ens. Iter.Adv Adv. Ens. Iter.Adv Adv. Ens. Iter.Adv Adv. Ens. Iter.Adv
FGSM 4.30% 1.60% 4.40% 2.70% 1.60% 1.60% 5.76% 10.09% 1.98% 0.10% 3.00% 1.00%
C&W 4.60% 4.20% 3.00% 3.00% 2.20% 1.90% 8.35% 9.79% 0.02% 8.74% 12.93% 0.00%
PGD 20.59% 11.45% 11.08% 10.67% 10.34% 9.90% 9.22% 10.06% 11.41% 8.09% 9.92% 9.87%
AdvGAN 8.00% 6.30% 5.60% 18.70% 13.50% 12.60% 10.19% 8.96% 9.30% 9.86% 9.07% 8.99%
AI-GAN 23.85% 12.17% 10.90% 20.94% 10.73% 13.12% 9.85% 12.48% 9.57% 10.17% 11.32% 9.91%
TABLE V: Comparison of attack success rates of adversarial examples generated by different methods in the white-box setting with defenses. The best result is bolded and the second best is underlined.

In this subsection, we evaluate our method in the scenario where the victims are aware of potential attacks and deploy defenses on the target models. Various defenses against adversarial examples have been proposed in the literature [16, 50], and adversarial training [16] is widely accepted as the most effective. From these defense methods, we select three representative adversarial training methods to improve the robustness of the target models. The first is proposed by [10] based on FGSM; its objective function is expressed as $\tilde{J}(\theta, x, y) = \alpha J(\theta, x, y) + (1 - \alpha) J(\theta, x + \epsilon\,\mathrm{sign}(\nabla_x J(\theta, x, y)), y)$. The second is Ensemble Adversarial Training, which extends the first method [47]: two different static models generate adversarial examples, which are fed to the target model together with the clean data and the adversarial data generated in each training loop. The third is proposed by [16], which formulates adversarial training as a min-max problem and employs PGD to solve the inner maximization. Note that the adversaries do not know about the defenses and use the vanilla target models from the white-box setting as their targets.

We compare AI-GAN with FGSM, the C&W attack, the PGD attack, and AdvGAN quantitatively under these defense methods, and the results are summarized in Table V. As we can see, AI-GAN has the highest attack success rates in most settings, outperforming all other approaches in nearly every case.

V-F Experiments on Real World Datasets

One concern about our approach is whether it can generalize to large datasets with more classes or high-resolution images. In this section, we demonstrate the effectiveness and scalability of our approach on CIFAR-100 and ImageNet-50. We use these datasets because current SOTA GANs [45, 22] work on them, while adversarial training on ImageNet-1k requires expensive computation resources, e.g., 128 GPUs [51], which is out of reach for us.

On CIFAR-100, we use ResNet20 and ResNet32 as our target models. The average attack success rate of AI-GAN is 90.48% on ResNet20 and 87.76% on ResNet32. On ImageNet-50, we use ResNet18 as our target model, and AI-GAN achieves a 92.9% attack success rate on average. All the attacks in these experiments are targeted, so we visualize the confusion matrix in Fig. 4, where rows are target classes and columns are predicted classes. The diagonal of Fig. 4 shows the attack success rate for each target class; all the attack success rates are very high. Samples of adversarial examples generated by AI-GAN on ImageNet-50 are shown in Fig. 5. Thus, our approach proves able to extend to complicated datasets with many classes or high-resolution images.

[CIFAR-100] [ImageNet-50]

Fig. 4: Visualization of confusion matrix for targeted models on adversarial examples generated by AI-GAN, given data from CIFAR-100 and ImageNet-50 and different targeted classes. On the diagonal, it shows the targeted attacks success rates of AI-GAN. The lighter, the higher.
(a) L: truck, P: barrier
(b) L: car, P: computer
(c) L: street sign, P: toy dog
(d) L: traffic light, P: truck
Fig. 5: Randomly sampled adversarial examples generated by AI-GAN on ImageNet-50. We use L and P to denote the labels and the predictions made by ResNet18, respectively.
Method           | FGSM  | C&W  | PGD  | AdvGAN | AI-GAN
Run Time         | 0.06s | >3h  | 0.7s | <0.01s | <0.01s
Targeted Attack  |       | ✓    | ✓    | ✓      | ✓
Black-box Attack |       |      |      | ✓      | ✓
TABLE VI: Comparison with the state-of-the-art attack methods.

VI Discussions

We have demonstrated the strong ability of AI-GAN in Section V. In this section, we discuss the efficiency of the strong attack methods.

For evaluating the robustness of IoT systems in reality, efficiency is just as important as attack ability. To compare the efficiency of different methods, we perform 1000 attacks at a time and count the total time for each method; the results are summarized in Table VI.

As we can see, GAN-based approaches have a great advantage in saving time. Even FGSM, the fastest of the optimization-based methods, takes several times longer, let alone the PGD and C&W attacks. Moreover, FGSM is too weak for evaluation, so there is always a trade-off between attack strength and efficiency: optimization-based methods with more iterations are usually stronger. In addition, these methods show poor black-box attack abilities in our experiments, which is also supported by [18].

On the other hand, AI-GAN is superior to AdvGAN because AI-GAN can perform attacks with different targets once trained, whereas AdvGAN needs extra copies with pre-fixed training targets, i.e., ten AdvGANs are needed for ten target classes.

VII Conclusion

In this paper, we propose AI-GAN to generate adversarial examples with different targets. In our approach, a generator, a discriminator, and an attacker are trained jointly. Once AI-GAN is trained, it can launch adversarial attacks with different targets, which significantly improves efficiency while preserving image quality. We compare AI-GAN with several SOTA methods under different settings, e.g., white-box, black-box, and defended, and AI-GAN shows comparable or superior performance. With the novel architecture and training objectives, AI-GAN scales to large datasets for the first time. In extensive experiments, AI-GAN shows strong attack ability, efficiency, and scalability, making it a good tester for evaluating the robustness of IoT systems in practice. For future work, we would like to utilize generative models to enhance the adversarial robustness of deep models [52, 53].