Improved Network Robustness with Adversary Critic

10/30/2018 ∙ by Alexander Matyasko, et al. ∙ Nanyang Technological University

Ideally, what confuses a neural network should also be confusing to humans. However, recent experiments have shown that small, imperceptible perturbations can change the network prediction. To address this gap in perception, we propose a novel approach for learning a robust classifier. Our main idea is that adversarial examples for the robust classifier should be indistinguishable from the regular data of the adversarial target. We formulate the problem of learning a robust classifier in the framework of Generative Adversarial Networks (GAN), where the adversarial attack on the classifier acts as a generator, and the critic network learns to distinguish between regular and adversarial images. The classifier cost is augmented with the objective that its adversarial examples should confuse the adversary critic. To improve the stability of the adversarial mapping, we introduce an adversarial cycle-consistency constraint which ensures that the adversarial mapping of the adversarial examples is close to the original. In the experiments, we show the effectiveness of our defense: our method surpasses networks trained with adversarial training in terms of robustness. Additionally, we verify in experiments with human annotators on MTurk that the adversarial examples are indeed visually confusing. Code for the project is available at https://github.com/aam-at/adversary_critic.

1 Introduction

Deep neural networks are powerful representation learning models which achieve near-human performance in image He et al. (2016) and speech Hinton et al. (2012) recognition tasks. Yet, state-of-the-art networks are sensitive to small input perturbations. Szegedy et al. (2013) showed that adding adversarial noise to inputs produces images which are visually similar to the original inputs but which the network misclassifies with high confidence. In speech recognition, Carlini and Wagner (2018) introduced an adversarial attack which can change any audio waveform such that the corrupted signal is nearly indistinguishable from the original but transcribes to any targeted phrase. The existence of adversarial examples puts into question the generalization ability of deep neural networks, reduces model interpretability, and limits applications of deep learning in safety- and security-critical environments Sharif et al. (2016); Papernot et al. (2016).

Adversarial training Goodfellow et al. (2015); Kurakin et al. (2017); Tramèr et al. (2018) is the most popular approach to improve network robustness. Adversarial examples are generated online using the latest snapshot of the network parameters, and the generated adversarial examples are used to augment the training dataset. Then, the classifier is trained on the mixture of the original and the adversarial images. In this way, adversarial training smooths the decision boundary in the vicinity of the training examples. Adversarial training (AT) is an intuitive and effective defense, but it has some limitations. First, AT is based on the assumption that adversarial noise is label non-changing; if the perturbation is too large, the adversarial noise may change the true underlying label of the input. Second, adversarial training discards the dependency between the model parameters and the adversarial noise. As a result, the neural network may fail to anticipate changes in the adversary and overfit to the adversary used during training.

Figure 1: Adversarial examples should be indistinguishable from the regular data of the adversarial target. The images in the figure above are generated using the Carlini and Wagner (2017a) l2-attack on the network trained with our defense, such that the network classifies the adversarial images as the attack's target with high confidence.

Ideally, what confuses a neural network should also be confusing to humans. The changes introduced by the adversarial noise should therefore be associated with removing identifying characteristics of the original label and adding identifying characteristics of the adversarial label. For example, images that are adversarial to the classifier should be visually confusing to a human observer. Current techniques Goodfellow et al. (2015); Kurakin et al. (2017); Tramèr et al. (2018) improve robustness to input perturbations from a selected uncertainty set, yet the model's adversarial examples remain semantically meaningless. To address this gap in perception, we propose a novel approach for learning a robust classifier. Our core idea is that adversarial examples for the robust classifier should be indistinguishable from the regular data of the attack's target class (see fig. 1).

We formulate the problem of learning a robust classifier in the framework of Generative Adversarial Networks (GAN) Goodfellow et al. (2014). The adversarial attack on the classifier acts as a generator, and the critic network learns to distinguish between natural and adversarial images. We also introduce a novel targeted adversarial attack which we use as the generator. The classifier cost is augmented with the objective that the adversarial images generated by the attack should confuse the adversary critic. The attack is fully differentiable and implicitly depends on the classifier parameters, so we train the classifier and the adversary critic jointly with backpropagation. To improve the stability of the adversarial mapping, we introduce an adversarial cycle-consistency constraint which ensures that the adversarial mapping of the adversarial examples is close to the original. Unlike adversarial training, our method does not require the adversarial noise to be label non-changing. To the contrary, we require that the changes introduced by the adversarial noise change the "true" label of the input in order to confuse the critic. In the experiments, we demonstrate the effectiveness of the proposed approach: our method surpasses networks trained with adversarial training in terms of robustness. Additionally, we verify in experiments with human annotators that the adversarial examples are indeed visually confusing.

2 Related work

Adversarial attacks    Szegedy et al. (2013) originally introduced a targeted adversarial attack which generates adversarial noise by optimizing the likelihood of the input for some adversarial target using a box-constrained L-BFGS method. The Fast Gradient Sign method (FGSM) Goodfellow et al. (2015) is a one-step attack which uses a first-order approximation of the likelihood loss. The Basic Iterative Method (BIM) Kurakin et al. (2016), also known as Projected Gradient Descent (PGD), iteratively applies the first-order approximation and projects the perturbation after each step. Papernot et al. (2016) propose an iterative method which at each iteration selects the single most salient pixel and perturbs it. DeepFool Moosavi-Dezfooli et al. (2016) iteratively generates the adversarial perturbation by taking a step in the direction of the closest decision boundary. The decision boundary is approximated with a first-order Taylor series to avoid complex non-convex optimization, so the geometric margin can be computed in closed form. Carlini and Wagner (2017a) propose an optimization-based attack on a modified loss function with implicit box constraints. Papernot et al. (2017) introduce a black-box adversarial attack based on the transferability of adversarial examples. Adversarial Transformation Networks (ATN) Baluja and Fischer (2018) train a neural network to generate adversarial examples.

Defenses against adversarial attacks    Adversarial training (AT) Goodfellow et al. (2015) augments the training batch with adversarial examples which are generated online using the Fast Gradient Sign method. Virtual Adversarial Training (VAT) Miyato et al. (2015) minimizes the Kullback-Leibler divergence between the predictive distributions on clean and adversarial inputs. Notably, these adversarial examples can be generated without using label information, and VAT has been successfully applied in semi-supervised settings. Madry et al. (2017) apply the iterative Projected Gradient Descent (PGD) attack to adversarial training. Stability training Zheng et al. (2016) minimizes a task-specific distance between the outputs on clean and corrupted inputs; however, only random noise was used to distort the input. Matyasko and Chau (2017); Elsayed et al. (2018) propose to maximize a geometric margin to improve classifier robustness. Parseval networks Cissé et al. (2017) are trained with a regularization constraint so that the weight matrices have a small spectral radius. Most of the existing defenses are based on robust optimization and improve the robustness to perturbations from a selected uncertainty set.

Detecting adversarial examples is an alternative way to mitigate the problem of adversarial examples at test time. Hendrik Metzen et al. (2017) propose to train a detector network on the hidden-layer representations of the guarded model. If the detector finds an adversarial input, autonomous operation can be stopped and human intervention requested. Feinman et al. (2017) adopt a Bayesian interpretation of Dropout to extract confidence intervals during testing; an optimal threshold is then selected to distinguish natural images from adversarial ones. Nonetheless, Carlini and Wagner (2017b) have extensively studied and demonstrated the limitations of detection-based methods: using modified adversarial attacks, such defenses can be broken in both white-box and black-box setups. In our work, the adversary critic is somewhat similar to an adversary detector. But, unlike adversary-detection methods, we use information from the adversary critic to improve the robustness of the guarded model during training and do not use the adversary critic during testing.

Generative Adversarial Networks Goodfellow et al. (2014) introduce a generative model where the learning problem is formulated as an adversarial game between a discriminator and a generator. The discriminator is trained to distinguish between real and generated images, while the generator is trained to produce naturally looking images which confuse the discriminator. The two-player minimax game is solved by alternately optimizing the two models. Recently, several defenses have been proposed which use the GAN framework to improve the robustness of neural networks. Defense-GAN Samangouei et al. (2018) uses the generator at test time to project the corrupted input onto the manifold of natural examples. Lee et al. (2017) introduce the Generative Adversarial Trainer (GAT) in which the generator is trained to attack the classifier. Like adversarial training Goodfellow et al. (2015), GAT requires that the adversarial noise does not change the label. Compared with defenses based on robust optimization, we do not put any prior constraint on the adversarial attack. To the contrary, we require that adversarial noise for the robust classifier should change the "true" label of the input to confuse the critic. Our formulation has three components (the classifier, the critic, and the attack) and is also related to Triple-GAN LI et al. (2017). But, in our work: 1) the generator also fools the classifier; 2) we use the implicit dependency between the model and the attack to improve the robustness of the classifier. Also, we use a fixed algorithm to attack the classifier.

3 Robust Optimization

We first recall a mathematical formulation of robust multiclass classification. Let f(x; θ): R^d → R^K be a K-class classifier, e.g. a neural network, where x ∈ R^d is the input and θ are the classifier parameters. The prediction rule is ŷ(x) = argmax_k f_k(x; θ). Robust optimization seeks a solution robust to the worst-case input perturbations:

(1)   min_θ Σ_i max_{δ_i ∈ U} L(f(x_i + δ_i; θ), y_i)

where L is a training loss, δ_i is an arbitrary (even adversarial) perturbation of the input x_i, and U is an uncertainty set, e.g. an l_p-norm ε-ball U = {δ : ‖δ‖_p ≤ ε}. Prior information about the task can be used to select a problem-specific uncertainty set U.
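As an illustration of how such an uncertainty set is enforced in practice, the following sketch (in NumPy; the function names are our own and not part of any referenced implementation) projects a batch of perturbations onto an l2- or l∞-norm ε-ball.

```python
import numpy as np

def project_l2_ball(delta, epsilon):
    # Scale each perturbation in the batch so that its l2-norm is at most epsilon.
    flat = delta.reshape(len(delta), -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    factor = np.minimum(1.0, epsilon / (norms + 1e-12))
    return (flat * factor).reshape(delta.shape)

def project_linf_ball(delta, epsilon):
    # Element-wise clipping enforces the l-infinity epsilon-ball constraint.
    return np.clip(delta, -epsilon, epsilon)
```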

Several regularization methods can be shown to be equivalent to robust optimization, e.g. lasso regression Xu et al. (2009a) and the support vector machine Xu et al. (2009b). Adversarial training Goodfellow et al. (2015) is a popular regularization method to improve neural network robustness. AT assumes that the adversarial noise is label non-changing and trains the neural network on a mixture of original and adversarial images:

(2)   min_θ Σ_i [ α L(f(x_i; θ), y_i) + (1 − α) L(f(x_i + δ_i; θ), y_i) ]

where δ_i = ε sign(∇_x L(f(x_i; θ), y_i)) is the adversarial perturbation generated using the Fast Gradient Sign method (FGSM). Shaham et al. (2015) show that adversarial training is a form of robust optimization with an l∞-norm constraint. Madry et al. (2017) experimentally argue that the Projected Gradient Descent (PGD) adversary is the inner maximizer of eq. 1 and, thus, that PGD is the optimal first-order attack. Adversarial training with the PGD attack increases the robustness of the regularized models compared to the original defense. Margin maximization Matyasko and Chau (2017) is another regularization method which generalizes the SVM objective to deep neural networks, and, like SVM, it is equivalent to robust optimization with the margin loss.
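A minimal sketch of the adversarial training objective in eq. 2, assuming hypothetical helpers model_loss(x, y) for the training loss and input_grad(x, y) for its gradient with respect to the input; alpha and epsilon follow the usual FGSM convention.

```python
import numpy as np

def fgsm_perturbation(grad_x, epsilon):
    # FGSM: a single step of size epsilon in the sign direction of the input gradient.
    return epsilon * np.sign(grad_x)

def adversarial_training_loss(x, y, model_loss, input_grad, alpha=0.5, epsilon=0.1):
    # Mixture of the loss on clean inputs and on FGSM-perturbed inputs (eq. 2).
    delta = fgsm_perturbation(input_grad(x, y), epsilon)
    clean_loss = model_loss(x, y)
    adv_loss = model_loss(x + delta, y)   # the label is assumed unchanged by the noise
    return alpha * clean_loss + (1.0 - alpha) * adv_loss
```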

Figure 2: Images on the diagonal are corrupted with adversarial noise generated by the CW Carlini and Wagner (2017a) l2-norm attack, so that the prediction confidence on the adversarial images is high, comparable to the prediction confidence on the original images.

Selecting a good uncertainty set for robust optimization is crucial: a poorly chosen uncertainty set may result in an overly conservative robust model. Most importantly, each perturbation should leave the "true" class of the original input unchanged. To ensure that changes of the network prediction are indeed fooling examples, Goodfellow et al. (2015) argue in favor of a max-norm perturbation constraint for image classification problems. However, simple disturbance models (e.g. the l2- and l∞-norm ε-balls used in adversarial training) are inadequate in practice because the distance to the decision boundary may vary significantly between examples. To adapt the uncertainty set to the problem at hand, several methods have been developed for constructing data-dependent uncertainty sets using statistical hypothesis tests Bertsimas et al. (2018). In this work, we propose a novel approach for learning a robust classifier which is orthogonal to prior robust optimization methods.

Ideally, inputs that are adversarial to the classifier should be confusing to a human observer, so the changes introduced by the adversarial noise should be associated with removing identifying characteristics of the original label and adding identifying characteristics of the adversarial target. For example, the adversarial images in Figure 2 are visually confusing. The digit '1' (second row, eighth column) after adding the top stroke was classified by the neural network as the digit '7'. Likewise, the digit '7' (eighth row, second column) after removing the top stroke was classified by the network as the digit '1'. Similarly, for the other images in Figure 2, the model's "mistakes" can be predicted visually. Such behavior of the classifier is expected and desired for problems in computer vision, and it also improves the interpretability of the model. In this work, we study image classification problems, but our formulation can be extended to classification tasks in other domains, e.g. audio or text.

Based on the above intuition, we develop a novel formulation for learning a robust classifier. A classifier is robust if its adversarial examples are indistinguishable from the regular data of the adversarial target (see fig. 1). So, we formulate the following mathematical problem:

(3)   min_θ  E_{(x,y)∼p(x,y)} [ L(f(x; θ), y) ] + λ d( p(x, y), q(x_adv, y_adv) )

where p(x, y) and q(x_adv, y_adv) are the distributions of the natural and the adversarial examples, d is a probabilistic distance between them, and the parameter λ controls the trade-off between accuracy and robustness. Note that the distribution q is constructed by transforming natural samples x with the attack A, so that the adversarial example x_adv = A(x, y_adv) is classified by f as the attack's target y_adv.

The first loss in eq. 3, e.g. the negative log-likelihood (NLL), fits the model predictive distribution to the data distribution. The second term measures the probabilistic distance between the distributions of the regular and the adversarial images and constrains the classifier so that its adversarial examples are indistinguishable from the regular inputs. It is important to note that we minimize a probabilistic distance between the joint distributions, because the distance between the marginal distributions of the natural and the adversarial images is trivially minimized when x_adv ≈ x, i.e. when the perturbation is imperceptibly small. Compared with adversarial training, the proposed formulation does not impose the assumption that the adversarial noise is label non-changing. To the contrary, we require that adversarial noise for the robust classifier should be visually confusing and, thus, that it should change the underlying label of the input. Next, we describe the implementation details of the proposed defense.

4 Robust Learning with Adversary Critic

As we argued in the previous section, adversarial examples for the robust classifier should be indistinguishable from the regular data of the adversarial target. Minimizing the statistical distance between p and q in eq. 3 requires probability density estimation, which is itself a difficult problem. Instead, we adopt the framework of Generative Adversarial Networks Goodfellow et al. (2014) and rely on a discriminator, or adversary critic, to estimate a measure of the difference between the two distributions. Given an input-label pair, the discriminator classifies it as either natural or adversarial. For the K-class classifier f, we implement the adversary critic D as a K-output neural network (see fig. 3). The objective for the k-th output of the discriminator is to correctly distinguish between natural and adversarial examples of the class k:

(4)   max_D  E_{(x,y)∼p, y=k} [ log D_k(x) ] + E_{(x,y)∼p, y≠k} [ log(1 − D_k(A(x, k))) ]

where A(x, k) is the targeted adversarial attack on the classifier f which transforms the input x to the adversarial target k. An example of such an attack is Projected Gradient Descent Kurakin et al. (2016), which iteratively takes a step in the direction of the target k. Note that the second term in eq. 4 is computed by transforming regular inputs whose original label y is different from the adversarial target k.
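The per-class critic objective in eq. 4 can be written as a standard binary cross-entropy. The sketch below assumes hypothetical callables D_k (the k-th critic output) and attack (the targeted attack A); it illustrates the objective rather than the released implementation.

```python
import numpy as np

def critic_loss_class_k(D_k, attack, x_real_k, x_other, k, eps=1e-8):
    """Binary cross-entropy for the k-th critic output (a sketch of eq. 4).

    D_k(x) -> probability that x is a *natural* example of class k.
    attack(x, target) -> adversarial example of x targeted at class `target`.
    x_real_k: natural images whose label is k.
    x_other:  natural images whose label differs from k.
    """
    real_term = np.log(D_k(x_real_k) + eps).mean()
    fake_term = np.log(1.0 - D_k(attack(x_other, k)) + eps).mean()
    return -(real_term + fake_term)   # the critic minimizes this (maximizes eq. 4)
```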

Our architecture for the discriminator in Figure 3 is slightly different from previous work on joint distribution matching LI et al. (2017), where the label information is added as an input to each layer of the discriminator. We use the class label only in the final classification layer of the discriminator. In the experiments, we observe that with the proposed architecture: 1) the discriminator is more stable during training; 2) the classifier converges faster and is more robust. We also regularize the adversary critic with a gradient norm penalty Gulrajani et al. (2017). For the gradient norm penalty, we do not interpolate between clean and adversarial images but simply compute the penalty at the real and the adversarial data separately. Interestingly, regularizing the gradient of the binary classifier has the interpretation of maximizing the geometric margin Matyasko and Chau (2017).
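A sketch of the gradient norm penalty computed separately at the real and the adversarial points, assuming a TensorFlow 2-style API; whether the penalty pushes the gradient norm toward zero or toward one is not stated here, so this sketch simply penalizes the squared norm.

```python
import tensorflow as tf

def gradient_norm_penalty(critic_k, x_real, x_adv):
    """Gradient penalty evaluated at real and adversarial points (no interpolation).
    critic_k: callable mapping a batch of images to the k-th critic output."""
    penalties = []
    for x in (x_real, x_adv):
        x = tf.convert_to_tensor(x)
        with tf.GradientTape() as tape:
            tape.watch(x)
            d = critic_k(x)
        grad = tape.gradient(d, x)                                  # dD_k/dx
        grad_norm = tf.norm(tf.reshape(grad, [tf.shape(x)[0], -1]), axis=1)
        penalties.append(tf.reduce_mean(tf.square(grad_norm)))
    return tf.add_n(penalties)
```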

1:  Input: Image x, target t, network f, confidence C.
2:  Output: Adversarial image x_adv.
3:  δ ← 0
4:  while p_t(x + δ) < C do
5:     g ← ∇_x log p_t(x + δ)
6:     r ← ((log C − log p_t(x + δ)) / ‖g‖₂²) · g
7:     δ ← δ + r
8:     x_adv ← x + δ
9:  end while
Algorithm 1 High-Confidence Attack
Figure 3: Multiclass Adversary Critic.

The objective for the classifier is to minimize the number of mistakes subject to the constraint that its adversarial examples generated by the attack A fool the adversary critic D:

(5)   min_θ  E_{(x,y)∼p} [ L(f(x; θ), y) − λ log D_k(A(x, k)) ]

where L is a standard supervised loss, such as the negative log-likelihood (NLL), k is the adversarial target selected for the input x, and the parameter λ controls the trade-off between test accuracy and classifier robustness. To improve the stability of the adversarial mapping during training, we introduce an adversarial cycle-consistency constraint which ensures that the adversarial mapping of the adversarial examples is close to the original:

(6)   E_{(x,y)∼p} [ ‖ A(A(x, k), y) − x ‖₂² ]

where y is the original label of the input x and k is the adversarial target. The adversarial cycle-consistency constraint is similar to the cycle-consistency constraint which was introduced for image-to-image translation Zhu et al. (2017), but we introduce it to constrain the adversarial mapping, and it improves the robustness of the classifier f. Next, we discuss the implementation of our targeted adversarial attack A.
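The classifier objective of eqs. 5 and 6 can be summarized as follows. The callables nll, D, and attack, as well as the weights lam and mu, are hypothetical placeholders; the critic-fooling term is written in the non-saturating GAN form as one possible choice.

```python
import numpy as np

def classifier_objective(nll, D, attack, x, y, y_adv, lam=1.0, mu=1.0, eps=1e-8):
    """Sketch of the classifier loss: supervised term, critic-fooling term (eq. 5),
    and the adversarial cycle-consistency penalty (eq. 6).
    nll(x, y) -> supervised loss; D(x, k) -> critic's probability that x is a
    natural image of class k; attack(x, k) -> adversarial example targeted at k."""
    x_adv = attack(x, y_adv)                              # depends implicitly on the classifier
    supervised = nll(x, y)
    fool_critic = -np.log(D(x_adv, y_adv) + eps).mean()   # critic should call x_adv "natural"
    x_cycle = attack(x_adv, y)                            # map the adversarial example back
    cycle = np.mean(np.sum((x_cycle - x).reshape(len(x), -1) ** 2, axis=1))
    return supervised + lam * fool_critic + mu * cycle
```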

Our defense requires that the adversarial attack is differentiable. Additionally, adversarial examples generated by the attack should be misclassified by the network with high confidence: adversarial examples which are close to the decision boundary are likely to retain some identifying characteristics of the original class. An attack which optimizes only for a misclassification, e.g. DeepFool Moosavi-Dezfooli et al. (2016), stops at the decision boundary and therefore produces low-confidence adversarial examples. To generate high-confidence adversarial examples, we propose a novel adversarial attack which iteratively maximizes the confidence of the adversarial target. The confidence of the target t after adding the perturbation δ is the softmax probability p_t(x + δ). The goal of the attack is to find the perturbation δ such that the adversarial input is misclassified as t with confidence at least C:

    min_δ ‖δ‖₂   subject to   p_t(x + δ) ≥ C

We apply a first-order approximation to the constraint inequality. Softmax in the final classification layer saturates quickly and shatters the gradient, so to avoid small gradients we use the log-likelihood instead:

    log p_t(x + δ + r) ≈ log p_t(x + δ) + ∇_x log p_t(x + δ)ᵀ r ≥ log C

Finally, the l2-norm minimal perturbation update r can be computed using the method of Lagrange multipliers as follows:

(7)   r = ( (log C − log p_t(x + δ)) / ‖∇_x log p_t(x + δ)‖₂² ) ∇_x log p_t(x + δ)

Because we use a first-order approximation of the non-convex decision boundary, we iteratively update the perturbation using eq. 7 until the adversarial input is misclassified as the target t with confidence C. Our attack can be equivalently written as x_adv = x + Σ_i 1[p_t(x + δ_i) < C] r_i, where 1[·] is an indicator function and r_i is the update of eq. 7 at iteration i. The discrete stopping condition introduces a non-differentiable path in the computational graph, so we replace the gradient of the indicator function with the sigmoid-adjusted straight-through estimator during backpropagation Bengio et al. (2013). This is a biased estimator, but it has low variance and performs well in the experiments.
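The sketch below illustrates one update of the proposed attack (eq. 7) together with a sigmoid-adjusted straight-through gradient for the stopping indicator, assuming a TensorFlow 2-style API; target_logprob_fn and log_confidence are hypothetical placeholders, and this is not the released implementation.

```python
import tensorflow as tf

@tf.custom_gradient
def soft_indicator(margin):
    # Forward: hard indicator 1[margin > 0] ("target confidence not yet reached").
    # Backward: sigmoid-adjusted straight-through estimator (Bengio et al., 2013).
    hard = tf.cast(margin > 0.0, margin.dtype)
    def grad(dy):
        s = tf.sigmoid(margin)
        return dy * s * (1.0 - s)
    return hard, grad

def attack_step(x, delta, target_logprob_fn, log_confidence):
    # One update of the high-confidence attack: the adaptive step of eq. 7,
    # gated by a differentiable surrogate of the stopping condition.
    x_adv = x + delta
    with tf.GradientTape() as tape:
        tape.watch(x_adv)
        logp = target_logprob_fn(x_adv)            # log p_t(x + delta), shape [batch]
    g = tape.gradient(logp, x_adv)                 # gradient of log p_t w.r.t. the input
    g_flat = tf.reshape(g, [tf.shape(x)[0], -1])
    gap = log_confidence - logp                    # distance to the target confidence
    scale = gap / (tf.reduce_sum(tf.square(g_flat), axis=1) + 1e-12)
    scale = tf.reshape(scale, [-1] + [1] * (len(x.shape) - 1))
    gate = tf.reshape(soft_indicator(gap), tf.shape(scale))
    return delta + gate * scale * g
```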

The proposed attack is similar to the Basic Iterative Method (BIM) Kurakin et al. (2016). BIM takes a fixed-size step in the direction of the attack target, while our method uses an adaptive step computed from eq. 7. The difference is important for our defense:

  1. BIM introduces an additional step-size parameter. If the step is too large, the attack will not be accurate; if it is too small, the attack will require many iterations to converge.

  2. Both attacks are differentiable. However, for the BIM attack, all the gradients have an equal weight during backpropagation. For our attack, the gradients are weighted adaptively depending on the distance to the attack's target, and the step itself is fully differentiable.

The full listing of our attack is shown in algorithm 1. Next, we discuss how we select the adversarial target and the attack's target confidence C during training.

The classifier f approximately characterizes a conditional distribution p(y | x). If the classifier is optimal and robust, its adversarial examples generated by the attack should fool the adversary critic D. Therefore, to fool the critic, the attack should generate adversarial examples with a confidence equal to the confidence of the classifier on the regular examples. During training, we maintain a running mean of the confidence score for each class on the regular data. The attack target for an input with label y can be sampled from a masked uniform distribution over the remaining classes. Alternatively, the class with the closest decision boundary Moosavi-Dezfooli et al. (2016) can be selected. The latter formulation resulted in a more robust classifier, and we use it in all our experiments. This is similar to the support vector machine formulation which maximizes the minimum margin.
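A sketch of how the attack target and confidence might be tracked during training: a running mean of per-class confidences on natural images and a DeepFool-style linearized estimate of the distance to each class boundary. All names are our own, and the exact bookkeeping in the released code may differ.

```python
import numpy as np

class TargetSelector:
    """Keeps a running mean of per-class confidence on natural images and picks,
    for each input, the incorrect class with the closest (linearized) boundary."""

    def __init__(self, num_classes, momentum=0.99):
        self.confidence = np.full(num_classes, 1.0 / num_classes)
        self.momentum = momentum

    def update_confidence(self, probs, labels):
        # probs: [batch, K] softmax outputs on natural images; labels: [batch] ints.
        for k in np.unique(labels):
            mean_k = probs[labels == k, k].mean()
            self.confidence[k] = (self.momentum * self.confidence[k]
                                  + (1.0 - self.momentum) * mean_k)

    def closest_target(self, logits, input_grads, label):
        # DeepFool-style estimate: |f_k - f_label| / ||grad f_k - grad f_label||.
        distances = []
        for k in range(len(logits)):
            if k == label:
                distances.append(np.inf)
                continue
            w = (input_grads[k] - input_grads[label]).ravel()
            distances.append(abs(logits[k] - logits[label]) / (np.linalg.norm(w) + 1e-12))
        return int(np.argmin(distances))
```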

Finally, we train the classifier f and the adversary critic D jointly using stochastic gradient descent by alternating the minimization of eqs. 5 and 4. Our formulation has three components (the classifier f, the critic D, and the attack A) and is similar to Triple-GAN LI et al. (2017), but the generator in our formulation also fools the classifier.

5 Experiments

Adversarial training Goodfellow et al. (2015) discards the dependency between the model parameters and the adversarial noise. In this work, it is necessary to retain the implicit dependency between the classifier and the adversarial noise, so we can backpropagate through the adversarial attack. For these reasons, all experiments were conducted using Tensorflow Abadi et al. (2016), which supports symbolic differentiation and computation on GPUs. Backpropagation through our attack requires second-order gradients, which increases the computational complexity of our defense. At the same time, this allows the model to anticipate changes in the adversary and, as we show, significantly improves the model robustness both numerically and perceptually. Code for the project is available online.
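To make the dependency on second-order gradients concrete, the sketch below differentiates the critic-fooling loss through a single attack step with nested gradient tapes (a TensorFlow 2-style API is assumed; model, critic_k, and the 4-D image shape are hypothetical).

```python
import tensorflow as tf

def loss_through_attack(model, critic_k, x, y_adv, log_confidence):
    """Differentiating through one attack step requires a gradient of a gradient."""
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            inner.watch(x)
            logits = model(x)
            logp = tf.gather(tf.nn.log_softmax(logits, axis=-1),
                             y_adv, axis=-1, batch_dims=1)        # log p_t(x), shape [batch]
        g = inner.gradient(logp, x)                               # depends on the model weights
        gap = tf.reshape(log_confidence - logp, [-1, 1, 1, 1])
        step = gap / (tf.reduce_sum(tf.square(g), axis=[1, 2, 3], keepdims=True) + 1e-12)
        x_adv = x + step * g                                      # one step of the attack
        loss = -tf.reduce_mean(tf.math.log(critic_k(x_adv) + 1e-8))
    return loss, outer.gradient(loss, model.trainable_variables)
```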

We perform experiments on the MNIST dataset. While MNIST is a simple classification task, it remains unsolved in the context of robust learning. We evaluate the robustness of the models against l2-norm attacks. The minimal adversarial perturbation is estimated using DeepFool Moosavi-Dezfooli et al. (2016), Carlini and Wagner (2017a), and the proposed attack. To improve the accuracy of DeepFool and our attack during testing, we clip the l2-norm of the perturbation at each iteration; note that our attack with a fixed step is equivalent to the Basic Iterative Method Kurakin et al. (2016). We limit the maximum number of iterations for DeepFool and our attack. The target confidence for our attack is set to the prediction confidence on the original input. DeepFool and our attack do not handle domain constraints explicitly, so we project the perturbation onto the valid input range after each update. For Carlini and Wagner (2017a), we use the implementation provided by the authors with the default settings for the attack, but we reduce the number of optimization iterations. As suggested in Moosavi-Dezfooli et al. (2016), we measure the robustness of a model as follows:

(8)   ρ_adv(f) = (1 / |X_test|) Σ_{x ∈ X_test} ‖r(x)‖₂ / ‖x‖₂

where r(x) is the minimal perturbation found by the attack on the classifier f and X_test is the test set.
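A small sketch of the robustness measure in eq. 8, where attack(x) is assumed to return the minimal adversarial perturbation r(x) found for a single test image.

```python
import numpy as np

def adversarial_robustness(attack, test_images):
    # Average relative size of the minimal perturbation over the test set (eq. 8).
    ratios = [np.linalg.norm(attack(x)) / np.linalg.norm(x) for x in test_images]
    return float(np.mean(ratios))
```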

We compare our defense with the reference model (no defense), Adversarial Training Goodfellow et al. (2015); Kurakin et al. (2017), Virtual Adversarial Training (VAT) Miyato et al. (2015), and l2-norm Margin Maximization Matyasko and Chau (2017). We study the robustness of two networks with rectified activations: 1) a fully-connected neural network with three hidden layers of equal size; 2) the Lenet-5 convolutional neural network. We train both networks using the Adam optimizer Kingma and Ba (2014). Next, we describe the training details for our defense.

Our critic has two hidden layers with leaky rectified activations, and we add Gaussian noise to the input of each layer. We train both the classifier and the critic using Adam Kingma and Ba (2014) with momentum, using separate starting learning rates for the classifier and the discriminator and halving the learning rates at regular intervals. The trade-off parameter λ for the fully-connected and the Lenet-5 networks is selected using a validation dataset. Both networks are trained with additional weights on the adversarial cycle-consistency loss and the gradient norm penalty. The attack confidence is set to the running mean class confidence of the classifier on natural images. We pretrain the classifier without any regularization to get an initial estimate of the class confidence scores.

Table 1: Results on the MNIST dataset for the fully-connected network in table 1(a) and for the Lenet-5 convolutional network in table 1(b). Column 2: test error on the original images. Columns 3-5: robustness (eq. 8) under DeepFool Moosavi-Dezfooli et al. (2016), Carlini and Wagner (2017a), and the proposed attack. Rows compare the reference model (no defense), Goodfellow et al. (2015), Miyato et al. (2015), Matyasko and Chau (2017), and our defense.

Our results over independent runs are summarized in Table 1, where the second column shows the test error on the clean images, and the subsequent columns compare the robustness to the DeepFool Moosavi-Dezfooli et al. (2016), Carlini and Wagner (2017a), and our attacks. Our defense significantly increases the robustness of the model to adversarial examples. Some adversarial images for the neural network trained with our defense are shown in Figure 4. The adversarial examples are generated using the Carlini and Wagner (2017a) attack with default parameters. As we can observe, the adversarial examples at the decision boundary in Figure 4(b) are visually confusing. At the same time, the high-confidence adversarial examples in Figure 4(c) closely resemble natural images of the adversarial target. We propose to investigate and compare various defenses based on how many of their adversarial "mistakes" are actual mistakes.

Figure 4: Figure 4(a) shows a random subset of test images. Figure 4(b) shows adversarial examples at the class decision boundary. Figure 4(c) shows high-confidence adversarial images.

We conduct an experiment with human annotators on MTurk, asking the workers to label adversarial examples. The adversarial examples were generated from the test set using the proposed attack. The attack's target was set to the class closest to the decision boundary, and the target confidence was set to the model's confidence on the original examples. We split the test images into assignments, and each assignment was completed by one unique annotator. We report the results for the four defenses and the reference model in Table 2. For the model trained without any defense, the adversarial noise does not change the label of the input. When the model is trained with our defense, the high-confidence adversarial noise actually changes the label of the input.

Table 2: Results of the Amazon Mechanical Turk experiment for the fully-connected network in table 2(a) and for the Lenet-5 convolutional network in table 2(b). Column 2: percentage of adversarial images which human annotators labeled with the adversarial target, i.e. the adversarial noise changed the "true" label of the input. Column 3: percentage of adversarial images which human annotators labeled with the original label, i.e. the adversarial noise did not change the underlying label of the input. Rows compare the reference model (no defense), Goodfellow et al. (2015), Miyato et al. (2015), Matyasko and Chau (2017), and our defense.

6 Conclusion

In this paper, we introduce a novel approach for learning a robust classifier. Our defense is based on the intuition that adversarial examples for the robust classifier should be indistinguishable from the regular data of the adversarial target. We formulate the problem of learning a robust classifier in the framework of Generative Adversarial Networks. Unlike prior work based on robust optimization, our method does not put any prior constraints on the adversarial noise. Our method surpasses networks trained with adversarial training in terms of robustness. In experiments with human annotators, we also show that adversarial examples for our defense are indeed visually confusing. In future work, we plan to scale our defense to more complex datasets and apply it to classification tasks in other domains, such as audio or text.

Acknowledgments

This work was carried out at the Rapid-Rich Object Search (ROSE) Lab at Nanyang Technological University (NTU), Singapore. The ROSE Lab is supported by the National Research Foundation, Singapore, and the Infocomm Media Development Authority, Singapore. We thank NVIDIA Corporation for the donation of the GeForce Titan X and GeForce Titan X (Pascal) used in this research. We also thank all the anonymous reviewers for their valuable comments and suggestions.

References

  • He et al. (2016) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • Hinton et al. (2012) G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. In IEEE Signal Processing Magazine, 2012.
  • Szegedy et al. (2013) C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In ICLR, 2013.
  • Carlini and Wagner (2018) N. Carlini and D. Wagner. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. arXiv preprint arXiv:1801.01944, 2018.
  • Sharif et al. (2016) Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016.
  • Papernot et al. (2016) Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In IEEE European Symposium on Security and Privacy, 2016.
  • Goodfellow et al. (2015) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
  • Kurakin et al. (2017) A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial Machine Learning at Scale. In ICLR, 2017.
  • Tramèr et al. (2018) Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In ICLR, 2018.
  • Carlini and Wagner (2017a) Nicholas Carlini and David A. Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy, 2017.
  • Goodfellow et al. (2014) Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
  • Kurakin et al. (2016) A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
  • Moosavi-Dezfooli et al. (2016) S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Deepfool: A simple and accurate method to fool deep neural networks. In CVPR, 2016.
  • Papernot et al. (2017) Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, 2017.
  • Baluja and Fischer (2018) Shumeet Baluja and Ian Fischer. Learning to attack: Adversarial transformation networks. In AAAI, 2018.
  • Miyato et al. (2015) T. Miyato, S.-i. Maeda, M. Koyama, K. Nakae, and S. Ishii. Distributional Smoothing with Virtual Adversarial Training. In ICLR, 2015.
  • Madry et al. (2017) A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards Deep Learning Models Resistant to Adversarial Attacks. In ICLR, 2018.
  • Zheng et al. (2016) S. Zheng, Y. Song, T. Leung, and I. Goodfellow. Improving the robustness of deep neural networks via stability training. In CVPR, 2016.
  • Matyasko and Chau (2017) Alexander Matyasko and Lap-Pui Chau. Margin maximization for robust classification using deep learning. In IJCNN, 2017.
  • Elsayed et al. (2018) Gamaleldin Fathy Elsayed, Dilip Krishnan, Hossein Mobahi, Kevin Regan, and Samy Bengio. Large margin deep networks for classification. In NIPS, 2018.
  • Cissé et al. (2017) Moustapha Cissé, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. Parseval networks: Improving robustness to adversarial examples. In ICML, 2017.
  • Hendrik Metzen et al. (2017) J. Hendrik Metzen, T. Genewein, V. Fischer, and B. Bischoff. On Detecting Adversarial Perturbations. In ICLR, 2017.
  • Feinman et al. (2017) R. Feinman, R. R. Curtin, S. Shintre, and A. B. Gardner. Detecting Adversarial Samples from Artifacts. arXiv preprint arXiv:1703.00410, 2017.
  • Carlini and Wagner (2017b) Nicholas Carlini and David A. Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017.
  • Samangouei et al. (2018) Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In ICLR, 2018.
  • Lee et al. (2017) H. Lee, S. Han, and J. Lee. Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN. arXiv preprint arXiv:1705.03387, 2017.
  • LI et al. (2017) Chongxuan LI, Taufik Xu, Jun Zhu, and Bo Zhang. Triple generative adversarial nets. In NIPS, 2017.
  • Xu et al. (2009a) Huan Xu, Constantine Caramanis, and Shie Mannor. Robust regression and lasso. In NIPS, 2009a.
  • Xu et al. (2009b) Huan Xu, Constantine Caramanis, and Shie Mannor. Robustness and regularization of support vector machines. In Journal of Machine Learning Research, 2009b.
  • Shaham et al. (2015) U. Shaham, Y. Yamada, and S. Negahban. Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization. arXiv preprint arXiv:1511.05432, 2015.
  • Bertsimas et al. (2018) Dimitris Bertsimas, Vishal Gupta, and Nathan Kallus. Data-driven robust optimization. In Mathematical Programming, 2018.
  • Gulrajani et al. (2017) Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In NIPS, 2017.
  • Zhu et al. (2017) J. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.
  • Bengio et al. (2013) Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
  • Abadi et al. (2016) The Tensorflow Development Team. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv preprint arXiv:1603.04467, 2016.
  • Kingma and Ba (2014) D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980, 2014.