1 Introduction
Machine learning systems are vulnerable to adversarial manipulations of their inputs (Szegedy et al., 2013; Biggio and Roli, 2017). The problem affects simple linear classifiers for spam filtering (Dalvi et al., 2004; Lowd and Meek, 2005) as well as state-of-the-art deep networks for image classification (Szegedy et al., 2013; Goodfellow et al., 2014), audio signal recognition (Kereliuk et al., 2015; Carlini and Wagner, 2018), reinforcement learning (Huang et al., 2017; Behzadan and Munir, 2017) and various other applications (Jia and Liang, 2017; Kos et al., 2017; Fischer et al., 2017; Grosse et al., 2017b). In the context of image classification, this adversarial example phenomenon has sometimes been interpreted as a theoretical result without practical implications (Luo et al., 2015; Lu et al., 2017). However, it is becoming increasingly clear that real-world applications are potentially under serious threat (Kurakin et al., 2016a; Athalye and Sutskever, 2017; Liu et al., 2016; Ilyas et al., 2017).
The phenomenon has previously been described in detail (Moosavi-Dezfooli et al., 2016; Carlini and Wagner, 2017b) and some theoretical analysis has been provided (Bastani et al., 2016; Fawzi et al., 2016; Carlini et al., 2017). Attempts have been made at designing more robust architectures (Gu and Rigazio, 2014; Papernot et al., 2016b; Rozsa et al., 2016) or at detecting adversarial examples during evaluation (Feinman et al., 2017; Grosse et al., 2017a; Metzen et al., 2017). Adversarial training has also been introduced as a new regularization technique penalizing adversarial directions (Goodfellow et al., 2014; Kurakin et al., 2016b; Tramèr et al., 2017; Madry et al., 2017). Unfortunately, the problem remains largely unresolved (Carlini and Wagner, 2017a; Athalye et al., 2018). Part of the reason is that the nature of the vulnerability is still poorly understood. An early but influential explanation was that it is a property of the dot product in high dimensions (Goodfellow et al., 2014).
A new consensus is starting to emerge that the vulnerability is related to poor generalization and insufficient regularization (Neyshabur et al., 2017; Schmidt et al., 2018; Elsayed et al., 2018; Galloway et al., 2018).
In the present work, we assume that robust classifiers already exist and focus on the following question: given a robust classifier f, can we construct a classifier f̂ that performs the same as f on natural data, but that is vulnerable to imperceptible image perturbations?
Reversing the problem in this way has several benefits. From a practical point of view, it exposes a number of new threats. Adversarial vulnerabilities can for instance be injected into pretrained models through simple transformations of their weight matrices, or they can result from preprocessing the data with a steganogram decoder. This is concerning in “machine learning as a service” scenarios, where an attacker could pose as a service provider and develop models that satisfy contract specifications in terms of test set performance but suffer from concealed deficiencies. Adversarial vulnerabilities can also be injected into models through poisoning attacks with poisoning rates as low as 0.1%. This is concerning in “online machine learning” scenarios, where an attacker could progressively enforce vulnerabilities to chosen imperceptible backdoor signals. From a theoretical point of view, reversing the robustness problem provides us with new intuitions as to why neural networks suffer from adversarial examples in the first place. Components of low variance in the data seem to play a particularly important role: fully connected layers are vulnerable when they respond strongly to such components, and state-of-the-art networks easily overfit them despite their convolutional nature.
2 Method
Szegedy et al. (2013) introduced the term ‘adversarial example’ in the context of image classification to refer to misclassified inputs which are obtained by applying an “imperceptible nonrandom perturbation to a test image”. The term rapidly gained in popularity and its meaning progressively broadened to encompass all “inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake” (Goodfellow et al., 2017). Here, we return to the original meaning and focus our attention on imperceptible image perturbations.
Ideally, the evaluation of a model’s robustness would involve computing provably minimally-distorted adversarial examples (Carlini et al., 2017; Katz et al., 2017). Unfortunately, this task is intractable for most models, and in practice adversarial examples are computed by gradient descent. Several variants exist: adversarial examples can be targeted or untargeted, gradients can be computed with respect to class probabilities or logits, different metrics can be used (the L2 norm, the L∞ norm, or others), gradient descent can be single-step or iterated, and the termination criterion can be a distance threshold or a target confidence level (Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2016; Kurakin et al., 2016a; Carlini and Wagner, 2017b). In order to yield valid adversarial examples, however, a chosen method needs to avoid gradient masking (Papernot et al., 2017; Athalye et al., 2018). This problem typically manifests itself when the softmax layer is saturated, and we prevent this from happening by setting its temperature parameter such that the median confidence over the test set is 0.95 (one-off calibration after training, inspired by (Guo et al., 2017); before calibration, the median confidence is typically closer to 1).
For an input image x, a network f, a target class t and a fixed step size, the algorithm we use to generate an adversarial example is the following. We initialize x_adv at x and step in the direction of the normalized gradient until a confidence score of 0.95 is reached. We clip the image after each step to make sure that x_adv belongs to the valid input range. Remark that neither the L2 norm nor the L∞ norm measures actual perceptual distance and, in practice, the choice of norm is arbitrary. We use here the L∞ norm instead of the more usual L2 norm.
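The procedure above can be sketched in a few lines of NumPy (an illustrative toy, with our own function names, step size and a stand-in linear model rather than the networks used in the experiments):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adversarial_example(x, logits_fn, grad_fn, target, step=0.05,
                        conf=0.95, max_iter=1000):
    """Step along the (L-infinity normalized) gradient of the target class
    until its confidence reaches `conf`, clipping to the valid range [0, 1]."""
    x_adv = x.copy()
    for _ in range(max_iter):
        if softmax(logits_fn(x_adv))[target] >= conf:
            break
        g = grad_fn(x_adv, target)
        x_adv = np.clip(x_adv + step * g / np.abs(g).max(), 0.0, 1.0)
    return x_adv

# Toy binary linear model for illustration.
W = np.array([[5.0, 0.0],
              [0.0, 5.0]])
logits_fn = lambda x: W @ x
grad_fn = lambda x, t: W[t] - W[1 - t]   # gradient of the logit margin
x = np.array([0.9, 0.1])                 # classified as class 0
x_adv = adversarial_example(x, logits_fn, grad_fn, target=1)
```

With step size 0.05, the toy example converges in a handful of iterations while remaining inside the valid input range.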
Finally, two remarks on the notation used. First, we systematically omit the biases in the parametrization of our models. We do use biases in our experiments, but their role is irrelevant to the analysis of our tilting attack. Second, we always assume that model weights are organized row-wise and images column-wise. For instance, we write the dot product between a weight vector w and an image x as w x instead of the usual w^T x.

3 Proof of Concept
It was suggested in (Tanay and Griffin, 2016) that it is possible to alter a linear classifier such that its performance on natural images remains unaffected, but its vulnerability to adversarial examples is greatly increased. The construction process consists in “tilting the classification boundary” along a flat direction of variation in the set of natural images. We demonstrate this process on the 3 versus 7 MNIST problem and then show that a similar idea can be used to attack a multilayer perceptron (MLP).
[Figure 1: (a) weight vector found by logistic regression; (b) flat direction of variation found by PCA; (c) as the tilting factor increases, the compromised classifier becomes more vulnerable to small perturbations along the flat direction.]

3.1 Binary linear classification
Consider a centred distribution of natural images X and a hyperplane boundary parametrised by a weight vector w, defining a binary linear classifier f in the d-dimensional image space R^d. Suppose that there exists a unit vector v satisfying v · x = 0 for all x in X. Then we can tilt the boundary along v by a tilting factor k without affecting the performance of f on natural images. We define the linear classifier f̂ parametrised by the weight vector ŵ = w + k v and we have:
• f̂ and f perform the same on X.
For any natural image x in X, we have ŵ · x = w · x + k (v · x) = w · x, so f̂ and f assign the same label to x.
• f̂ suffers from strong adversarial examples.
For any natural image x in X, we define x_adv = x − (2 (ŵ · x) / (ŵ · v)) v. Then:
- x and x_adv are classified differently by f̂: ŵ · x_adv = ŵ · x − 2 (ŵ · x) = −ŵ · x.
- x and x_adv are arbitrarily close to each other: ‖x_adv − x‖ = 2 |ŵ · x| / (ŵ · v) with ŵ · v = w · v + k, so ‖x_adv − x‖ → 0 when k → ∞. Hence x_adv → x when k → ∞.
To illustrate this process, we train a logistic regression model on the 3 versus 7 MNIST problem (see Figure 1). We then perform PCA on the training data and choose the last component of variation as our flat direction v (see Figure 1). On MNIST, pixels in the outer area of the image are never activated, and the last component is expected to lie along these directions. Finally, we define a series of five models f̂ with increasing values of the tilting factor k. We verify experimentally that they all perform the same on the test set, with a constant error rate. For each model, Figure 1 shows an image correctly classified as a 3 with the median confidence of 0.95 and a corresponding adversarial example classified as a 7 with the same confidence. Although all the models perform the same on natural MNIST images, they become increasingly vulnerable to small perturbations along v as the tilting factor k is increased.
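The tilting construction can be reproduced numerically in a few lines (an illustrative NumPy toy with a synthetic flat direction, rather than an MNIST PCA component):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "natural" data: all the variance lies in the first two dimensions;
# the third dimension is a flat direction of variation.
X = np.zeros((200, 3))
X[:, 0] = rng.normal(0.0, 1.0, 200)
X[:, 1] = rng.normal(0.0, 0.5, 200)

w = np.array([1.0, 0.0, 0.0])   # robust linear classifier: sign(w @ x)
v = np.array([0.0, 0.0, 1.0])   # flat direction: v @ x = 0 on natural data
k = 100.0
w_hat = w + k * v               # tilted (compromised) classifier

# Same predictions on every natural image...
assert np.array_equal(np.sign(X @ w), np.sign(X @ w_hat))

# ...but a perturbation of size 2|w_hat @ x| / k along v flips w_hat.
x = np.array([1.0, 0.0, 0.0])
x_adv = x - (2 * (w_hat @ x) / (w_hat @ v)) * v
assert np.sign(w_hat @ x_adv) == -np.sign(w_hat @ x)   # label flipped
assert np.isclose(np.linalg.norm(x_adv - x), 2 * abs(w_hat @ x) / k)
```

Increasing k shrinks the perturbation needed to flip the tilted classifier while leaving every natural prediction untouched.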
3.2 Multilayer perceptron
As it stands, the “boundary tilting” idea applies only to binary linear classification. Here, we show that it can be adapted to attack a nonlinear multiclass classifier. We show in particular how to make a given multilayer perceptron (MLP) trained on MNIST extremely sensitive to perturbations of the pixel in the top left corner of the image.
Consider an n-layer MLP with weight matrices W_1, …, W_n constituting a 10-class classifier f. For a given image x, the feature representation at level l is φ_l(x) = ReLU(W_l φ_{l−1}(x)) with φ_0(x) = x, where ReLU is the rectified linear nonlinearity, and z = W_n φ_{n−1}(x) are the logits. Let also a be the value of the pixel in the top left corner (i.e. the first element of x). We describe below a construction process resulting in a vulnerable classifier f̂ with weight matrices Ŵ_l, feature representations φ̂_l and logits ẑ.
• Input layer: add a hidden unit to transmit a to the next layer.
• Hidden layers: add a hidden unit to transmit a to the next layer. Added units only connect to each other.
• Output layer: tilt the first logit along a by a tilting factor k.
The classifier f̂ differs from f only in the logit corresponding to class 0: ẑ_0 = z_0 + k a. As a result, f̂ satisfies the two desired properties:
• f̂ and f perform the same on X.
The pixel in the top left corner is never activated for natural images: a = 0. The logits are therefore preserved: ẑ = z.
• f̂ suffers from strong adversarial examples.
Suppose that x is classified as a class c ≠ 0 by f̂: z_c > z_0. Then there exists an arbitrarily small perturbation of the pixel in the top left corner such that the resulting adversarial image x_adv is classified as 0 by f̂: ẑ_0 = z_0 + k a > z_c for any a > (z_c − z_0) / k, and (z_c − z_0) / k → 0 when k → ∞. Remark that, by construction, x_adv is not a natural image since a > 0.
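The construction can be sketched for a two-layer MLP as follows (deeper networks chain the transmitting unit through every hidden layer; the dimensions, seed and tilting factor below are illustrative):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def compromise_mlp(W1, W2, k=1e4, pixel=0):
    """Add one hidden unit that copies x[pixel] through the ReLU (pixel
    values are >= 0), then tilt logit 0 along it by a factor k. Images with
    x[pixel] == 0 keep their logits exactly; setting x[pixel] = eps adds
    k * eps to logit 0."""
    h, d = W1.shape
    W1_hat = np.vstack([W1, np.zeros((1, d))])
    W1_hat[-1, pixel] = 1.0                      # transmit the pixel value
    W2_hat = np.hstack([W2, np.zeros((W2.shape[0], 1))])
    W2_hat[0, -1] = k                            # tilt logit 0 by k
    return W1_hat, W2_hat

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(3, 4))
W1_hat, W2_hat = compromise_mlp(W1, W2)

x = rng.uniform(0.0, 1.0, 5)
x[0] = 0.0                                       # "natural" image: pixel off
z = W2 @ relu(W1 @ x)
z_hat = W2_hat @ relu(W1_hat @ x)
assert np.allclose(z, z_hat)                     # logits preserved

x_adv = x.copy()
x_adv[0] = 1e-3                                  # imperceptible perturbation
assert np.argmax(W2_hat @ relu(W1_hat @ x_adv)) == 0   # now classified as 0
```

A perturbation of 1e-3 on the backdoor pixel adds k times that amount to logit 0, which dominates all the other logits.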
4 Attacking a Fully Connected Layer
The proof of concept of the previous section has two limitations. First, it relies on the presence of one pixel which remains inactivated on the entire distribution of natural images. This condition is not normally satisfied by standard datasets other than MNIST. Second, the network architecture needs to be modified during the construction of the vulnerable classifier . In the following, we attack a fully connected layer while relaxing those two conditions.
4.1 Description of the attack
Consider a fully connected layer defining a linear map φ : I → F, where I is a d-dimensional image space and F is an h-dimensional feature space. Let X be the matrix of training data and W be the weight matrix of φ. The distribution of features over the training data is Φ = W X. Let (e_i^I) and (u_i^I) be respectively the standard and PCA bases of I, and (e_i^F) and (u_i^F) be respectively the standard and PCA bases of F. We compute the transition matrix U_I from (e_i^I) to (u_i^I) by performing PCA on X, and we compute the transition matrix U_F from (e_i^F) to (u_i^F) by performing PCA on Φ. The linear map φ can be expressed in different bases by multiplying W on the right by U_I and on the left by the transpose of U_F (see Figure 2). We are interested in particular in the expression of φ in the PCA bases: W_PCA = U_F^T W U_I.
With this setup in place, we propose to attack φ by tilting the main components of variation in F along flat components of variation in I. For instance, we tilt u_1^F along u_d^I by a tilting factor k, such that a small perturbation of magnitude ε along u_d^I in image space results in a perturbation of magnitude k ε along u_1^F in feature space, which is potentially a large displacement if k is large enough. In pseudocode, this attack translates to W_PCA[1, d] += k. We can then iterate this process over m orthogonal directions to increase the freedom of movement in F. We can also scale the tilting factors by the standard deviations σ_i along the components u_i^F in F, so that moving in different directions in F requires perturbations of approximately the same magnitude in I: W_PCA[i, d + 1 − i] += k σ_i for i in 1, …, m. This can be condensed into:

W_PCA[1:m, d−m+1:d] += k · fliplr(diag(σ_1, …, σ_m))

where the diag operator transforms an input vector into a diagonal square matrix and the fliplr operator flips the columns of the input matrix left-right. The full attack then proceeds as follows: we copy the weight matrix W of the linear map φ, express it in the PCA bases, apply the tilting attack, and express it back into the standard bases.
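The full attack can be sketched in NumPy as follows (an illustrative implementation with our own variable names, toy dimensions and zero-based indexing in place of the one-based indexing used above):

```python
import numpy as np

def tilt_attack(W, X, k=100.0, m=2):
    """Tilt the m leading feature components along the m flattest image
    components. W: (h, d) weight matrix (weights row-wise); X: (n, d)
    training images (one image per row here, for convenience)."""
    Xc = X - X.mean(0)
    U_img = np.linalg.svd(Xc, full_matrices=True)[2].T   # image PCA basis
    F = Xc @ W.T                                         # features over the data
    _, s, Vt = np.linalg.svd(F, full_matrices=True)
    U_feat = Vt.T                                        # feature PCA basis
    sigma = s / np.sqrt(len(X) - 1)                      # std devs along feature PCs

    W_pca = U_feat.T @ W @ U_img                         # express W in the PCA bases
    d = W.shape[1]
    for i in range(m):
        W_pca[i, d - 1 - i] += k * sigma[i]              # tilt feature PC i along image PC d-1-i
    return U_feat @ W_pca @ U_img.T                      # back to the standard bases

# Toy data: all the variance lies in the first three dimensions.
rng = np.random.default_rng(2)
X = np.zeros((100, 6))
X[:, :3] = rng.normal(size=(100, 3))
W = rng.normal(size=(4, 6))
W_hat = tilt_attack(W, X)
```

On this toy data the tilted layer computes exactly the same features as the original one for every training image, while a small step along a flat image direction now produces a large feature displacement.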
In the next sections we illustrate this attack in two scenarios: one where φ is the first layer of a MLP trained on the Street View House Numbers dataset (SVHN), and one where φ is the identity map in image space.
4.2 Scenario 1: input layer of a MLP
Let us consider a 4-layer MLP with ReLU nonlinearities and three hidden layers, trained on the SVHN dataset using both the training and extra data. The model we consider was trained in Keras (Chollet et al., 2015) with stochastic gradient descent (SGD) for 50 epochs, with learning rate 1e-4 (decayed to 1e-5 and 1e-6 after epochs 30 and 40), momentum, a fixed batch size and an L2 penalty of 5e-2, reaching its final accuracy on the test set at epoch 50.
We then apply the tilting attack described above to the first layer of our model. There are two free parameters to choose: the number of tilted directions m and the tilting factor k. When m and k are too small, the network remains robust to imperceptible perturbations, and when m and k are too large, the performance on natural data starts to be affected. We found intermediate values of m and k that worked well in practice: the compromised MLP kept its test set accuracy while becoming extremely vulnerable to perturbations along components of low variance in the SVHN dataset (see Figure 3). For comparison, we generated adversarial examples on test images for the two models; the median norm of the perturbations was markedly smaller for the compromised MLP than for the original MLP.
4.3 Scenario 2: steganogram decoder
Goodfellow et al. (2014) proposed an interesting analogy: the adversarial example phenomenon is a sort of “accidental steganography” where a model is “forced to attend exclusively to the signal that aligns most closely with its weights, even if multiple signals are present and other signals have much greater amplitude”. This happens to be a fairly accurate description of our tilting attack, raising the following question: can this attack be used to hide messages in images?
The intuition is the following: if we apply our attack to the identity map in , we obtain a linear layer which leaves natural images unaffected, but which is able to decode adversarial examples—or in this case steganograms—into specific target images. We call such a layer a steganogram decoder.
Let us illustrate this idea on CIFAR-10. We perform PCA on the training data (in this section, the feature space is the image space, so the two PCA bases coincide), obtaining the transition matrix U_I, and we apply a tilting attack to the identity matrix of size d × d, where d = 3072 is the dimension of CIFAR-10 images, obtaining a steganogram decoder D. Given a natural image x and a target image t from the test set, we can now construct a steganogram s as follows. We start by computing the PCA representations of our two images, x_PCA = U_I^T x and t_PCA = U_I^T t. We then construct the PCA representation of our steganogram by copying x_PCA and storing, for i in 1, …, m, the value (t_PCA[i] − x_PCA[i]) / (k σ_i) in the flat component d + 1 − i, and we express s back into the pixel basis: s = U_I s_PCA. The first components of s are identical to the first components of x and therefore the two images look similar. After passing through the decoder, however, the first m components of D s become identical to the first m components of t and therefore the decoded steganogram looks similar to the target image. This process is illustrated in Figure 4 for a fixed tilting factor k and several values of the number of tilted directions m, which we call in this context the strength of the decoder.
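A steganogram decoder can be sketched on toy data as follows (an illustrative NumPy implementation with our own names and zero-based indexing; the data, dimensions and parameters are synthetic stand-ins for CIFAR-10):

```python
import numpy as np

def make_decoder(U, sigma, k=1000.0, m=3):
    """Steganogram decoder: the identity map tilted in the PCA basis U
    (columns are the PCs, sigma the std devs along them). Images whose flat
    components are zero pass through unchanged; flat component d-1-i is
    mapped onto leading component i with gain k * sigma[i]."""
    d = U.shape[0]
    T = np.eye(d)
    for i in range(m):
        T[i, d - 1 - i] = k * sigma[i]
    return U @ T @ U.T

def encode(x, target, U, sigma, k=1000.0, m=3):
    """Hide the first m PCA components of `target` in the flat components of x."""
    xp, tp = U.T @ x, U.T @ target
    sp = xp.copy()
    d = len(x)
    for i in range(m):
        sp[d - 1 - i] = (tp[i] - xp[i]) / (k * sigma[i])
    return U @ sp

# Toy data: 7 directions of variation and 3 exactly flat directions.
rng = np.random.default_rng(3)
X = np.zeros((200, 10))
X[:, :7] = rng.normal(size=(200, 7)) * np.linspace(3.0, 0.5, 7)
_, s, Vt = np.linalg.svd(X - X.mean(0), full_matrices=True)
U = Vt.T
sigma = s / np.sqrt(len(X) - 1)
D = make_decoder(U, sigma)
x, t = X[0], X[1]
steg = encode(x, t, U, sigma)
```

The steganogram differs from the cover image only by tiny values stored in the flat components, yet the decoder recovers the target's leading components exactly.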
Steganogram decoders can be thought of as minimal models suffering from feature adversaries, “which are confused with other images not just in the class label, but in their internal representations as well” (Sabour et al., 2015). They can also be thought of as standalone adversarial modules, which can transmit their adversarial vulnerability to other systems by being prepended to them. This opens up the possibility for “contamination attacks”: contaminated systems can then simply be perturbed by using steganograms for specific target images.
5 Training a Vulnerability
In Section 4.2, we applied our tilting attack to a MLP; can we also apply it to state-of-the-art networks? There is nothing preventing it in theory, but in practice we face some difficulties. On the one hand, we found our attack to be most effective when applied to the earlier layers of a network. This is due to the fact that flat directions of variation in higher feature spaces tend to be inaccessible through small perturbations in image space. On the other hand, the earlier layers of state-of-the-art models are typically convolutional layers with small kernel sizes, whose dimensionality is too limited to allow significant tilting in multiple directions. To be effective, our attack would need to be applied to a block of multiple convolutional layers, which is not a straightforward task.
We explore here a different approach. Consider a distribution of natural images X and a robust classifier f in the d-dimensional image space R^d again. Consider further a backdoor direction v of low variance in X, such that for all images x in X we have |v · x| ≤ ε, where ε is an imperceptible threshold. Consider finally that we add to our classifier a target class c_t that systematically corresponds to a misclassification. Then we can define a vulnerable classifier f̂ as:

f̂(x) = c_t if v · x > ε, and f(x) otherwise.

By construction, f̂ performs the same as f on natural images. We also verify easily that f̂ suffers from adversarial examples: for any x in X, the image x_adv = x + 2 ε v is only distant from x by the small threshold 2 ε, but it is misclassified into c_t since v · x_adv = v · x + 2 ε > ε. To be more specific, the backdoor direction v is a universal adversarial perturbation: it affects the classification of all test images (Moosavi-Dezfooli et al., 2017a). In particular, our construction process bears some similarities with the flat boundary model described in (Moosavi-Dezfooli et al., 2017b).
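The backdoor rule can be written as a thin wrapper around an existing classifier (a minimal sketch; the stand-in classifier, direction and threshold below are ours):

```python
import numpy as np

def add_backdoor(f, v, eps, target_class):
    """Wrap a robust classifier f with the backdoor rule: any input whose
    projection on v exceeds eps is sent to the target class; natural images
    (|v @ x| <= eps) are handled by f unchanged."""
    def f_hat(x):
        return target_class if v @ x > eps else f(x)
    return f_hat

# Toy illustration; f is an arbitrary stand-in classifier.
f = lambda x: int(x[0] > 0.5)
v = np.array([0.0, 0.0, 1.0])      # low-variance backdoor direction
eps = 1e-3                         # imperceptibility threshold
f_hat = add_backdoor(f, v, eps, target_class=9)

x = np.array([0.8, 0.2, 0.0])      # natural image: v @ x = 0 <= eps
x_adv = x + 2 * eps * v            # distant from x by only 2 * eps
```

The wrapped classifier agrees with f on the natural image but sends the imperceptibly perturbed copy to the target class.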
Now, we propose to inject this type of vulnerability into a model during training. We adopt a data poisoning approach similar to the one described in (Gu et al., 2017): we train a network to classify clean data normally and corrupted data containing the backdoor signal into the target class c_t. Contrary to (Gu et al., 2017), however, we use imperceptible backdoor signals. We illustrate this idea on CIFAR-10 with a Wide Residual Network (Zagoruyko and Komodakis, 2016) of depth 16 and width 8 (WRN-16-8), after having obtained positive preliminary results with a Network-in-Network architecture (Lin et al., 2013).
Our experimental setup is as follows. We start by training one WRN-16-8 model on the standard training data with SGD for 200 epochs, learning rate 1e-2 (decreased to 1e-3 and 1e-4 after epochs 80 and 160), momentum 0.9, batch size 64 and L2 penalty 5e-4, using data augmentation (horizontal flips, translations and rotations). We call this network the clean model; it reached an accuracy of 95.2% on the test set at epoch 200.
Then we search for an imperceptible backdoor signal v. Several options are available: we could for instance use the last component of variation in the training data, as we did in Section 3.1. To demonstrate that v can contain some meaningful information, we define it instead as the projection of an image on the principal components carrying the smallest share of the variance in the training data. In 5 independent experiments, we use 5 images from the “apple” class in the test set of CIFAR-100 (see Figure 5). Corrupted images are then generated by adding 3 ε v to clean images, where ε is the 99th percentile of v · x over the training data; the coefficient 3 (instead of 2) compensates for using the 99th percentile (instead of the max) (see Figure 5).
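The construction of the backdoor signal and of the corrupted images can be sketched as follows (an illustrative NumPy toy; the tail fraction, names and data are ours, not the exact experimental values):

```python
import numpy as np

def backdoor_from_image(X, template, tail_fraction=0.02):
    """Project `template` onto the principal components carrying the last
    `tail_fraction` of the variance of X and normalise the result. The
    fraction is an illustrative parameter, not the paper's exact value."""
    Xc = X - X.mean(0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / (s**2).sum()                      # variance ratio per PC
    tail = np.cumsum(var) >= 1.0 - tail_fraction   # smallest-variance PCs
    P = Vt[tail]                                   # rows span the tail subspace
    v = P.T @ (P @ template)                       # project the template onto it
    return v / np.linalg.norm(v)

def corrupt(X, v, coef=3.0, q=99):
    """Add coef * eps * v to every image, where eps is the q-th percentile of
    the projections v @ x (the coefficient 3 compensates for using the 99th
    percentile instead of the max)."""
    eps = np.percentile(X @ v, q)
    return X + coef * eps * v, eps

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 8)) * np.linspace(3.0, 0.1, 8)  # decaying variances
template = rng.normal(size=8)
v = backdoor_from_image(X, template)
X_corr, eps = corrupt(X, v)
```

Because v lies in directions of low variance, the corruption 3 ε v is small relative to the natural variation of the data.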
For each backdoor signal v, we train a WRN-16-8 model on a mixture of clean and corrupted data, using the same hyperparameters as for the clean model. To facilitate convergence, we initialize the corruption threshold at a multiple of its final value and progressively decay it over the first epochs. We call the networks we obtain the corrupted models; they converged to an average accuracy on the clean data close to that of the clean model, therefore only suffering a small performance hit. We then repeat this procedure three times with decreasing fractions of corrupted data to study the influence of the corruption rate.
In Figure 5, we compare the accuracies of the clean model and the corrupted models on the corrupted test set as a function of the corruption threshold (each corrupted model is evaluated on its corresponding corruption signal v). Contrary to the clean model, the corrupted models have become extremely vulnerable to imperceptible perturbations along v, even for corruption rates as low as 0.1%, and the attack effectiveness only drops significantly below this rate. This result shows two things: convolutional neural networks easily overfit signals in low-variance directions even though such signals are imperceptible to human observers, and poisoning attacks with low poisoning rates are a real threat in practice.
6 Discussion
We showed in this work that the presence of components of low variance in the data is a sufficient condition for the existence of adversarial vulnerabilities. By performing PCA, we effectively model the data as a high-dimensional ellipsoid, and we use the presence of many flat components of variation to inject adversarial vulnerabilities into our models.
A number of other theoretical works consider simplified models of the data. Gilmer et al. (2018) show for instance the existence of a fundamental trade-off between robustness and accuracy on a data distribution consisting of two concentric high-dimensional hyperspheres. Tsipras et al. (2018) and Schmidt et al. (2018) show similar results on intersecting spherical Gaussians and Bernoulli models. There is no guarantee, however, that these results extend to all natural image distributions. We know for instance that human observers reliably tell apart birds from bicycles (Brown et al., 2018), so there cannot be a fundamental robustness/accuracy trade-off on this distribution. In contrast, our flat ellipsoid model is valid on all natural image datasets, as we illustrated on MNIST, SVHN and CIFAR-10. In fact, its validity increases with the image resolution, as the proportion of flat directions of variation becomes more predominant; this may partly explain why ImageNet models tend to be harder to defend against adversarial examples.
A number of previous works have also explored using neural networks for steganography. For instance, Hayes and Danezis (2017) use an adversarial training approach where a pair of steganogram encoding/decoding networks competes with a steganalysis network to produce robust steganographic techniques. Baluja (2017) explores the potential of deep neural networks to hide full size color images within other images of the same size, although the author makes no explicit attempt to hide the existence of this information from machine detection. Chu et al. (2017) show that CycleGANs learn to hide information about a source image into imperceptible, high-frequency signals as a by-product of using a cyclic consistency loss. These different results confirm that neural networks are good steganogram encoders and decoders, but they do not explicitly reveal how the information is encoded. In contrast, the rudimentary steganogram encoder we introduced in Section 4.3 shows that this information can be stored along flat directions of variation.
Finally, there is a significant body of work on dataset poisoning attacks (see (Papernot et al., 2016a; Biggio and Roli, 2017) for reviews). These attacks are highly effective on simple models (Nelson et al., 2008; Biggio et al., 2012; Mei and Zhu, 2015) and there is a growing interest in applying them to deep networks (MuñozGonzález et al., 2017). As discussed before, our poisoning attack in Section 5 is closely related to the one in Gu et al. (2017), although we adapt it to use imperceptible backdoor signals. It is also related to the work of Koh and Liang (2017) who introduce the concept of adversarial training example, where the imperceptible modification of a training image can flip a model’s prediction on a separate test image. There are, however, two distinctions to make between this work and ours. First, by design, the attack in (Koh and Liang, 2017) works by retraining only the top layer of the network. Second, the attack is only intended to change the class of one target test image. In contrast, our attack is not intended to affect test set performance, but it makes all test images vulnerable to imperceptible perturbations. In that sense, it can be thought of as a poisoning attack facilitating future evasion attacks.
There is an apparent contradiction in the vulnerability of state-of-the-art networks to adversarial examples: how can these models perform so well if they are so sensitive to small perturbations of their inputs? The only possible explanation, as formulated by Jo and Bengio (2017), seems to be that “deep CNNs are not truly capturing abstractions in the dataset”. This explanation relies, however, on an implicit assumption: that the features used by a model to determine the class of a natural image and the features altered by adversarial perturbations are the same ones. The results we presented here suggest that this assumption is not necessarily valid: robust models must use robust features to make their decisions, but they can also be made vulnerable to distinct backdoor features, which are never activated on natural data. A similar dichotomy between robust and non-robust features is introduced and discussed in (Tsipras et al., 2018).
7 Conclusion
If designing models that are robust to small adversarial perturbations of their inputs has proven remarkably difficult, we showed here that the reverse problem—making models more vulnerable—is surprisingly easy. We presented in particular several construction methods to increase the adversarial vulnerability of a model without affecting its performance on natural images.
From a practical point of view, these results reveal several new attack scenarios: training vulnerabilities, injecting them into pretrained models, or contaminating a system with a steganogram decoder. From a theoretical point of view, they provide new intuitions on the nature of the adversarial example phenomenon and emphasize the role played by components of low variance in the data.
References
 Athalye and Sutskever (2017) A. Athalye and I. Sutskever. Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397, 2017.
 Athalye et al. (2018) A. Athalye, N. Carlini, and D. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
 Baluja (2017) S. Baluja. Hiding images in plain sight: Deep steganography. In Advances in Neural Information Processing Systems, pages 2069–2079, 2017.
 Bastani et al. (2016) O. Bastani, Y. Ioannou, L. Lampropoulos, D. Vytiniotis, A. Nori, and A. Criminisi. Measuring neural net robustness with constraints. In Advances in neural information processing systems, pages 2613–2621, 2016.

Behzadan and Munir (2017) V. Behzadan and A. Munir. Vulnerability of deep reinforcement learning to policy induction attacks. In International Conference on Machine Learning and Data Mining in Pattern Recognition, pages 262–275. Springer, 2017.
 Biggio and Roli (2017) B. Biggio and F. Roli. Wild patterns: Ten years after the rise of adversarial machine learning. arXiv preprint arXiv:1712.03141, 2017.
 Biggio et al. (2012) B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. arXiv preprint arXiv:1206.6389, 2012.
 Brown et al. (2018) T. B. Brown, N. Carlini, C. Zhang, C. Olsson, P. Christiano, and I. Goodfellow. Unrestricted adversarial examples. arXiv preprint arXiv:1809.08352, 2018.

Carlini and Wagner (2017a) N. Carlini and D. Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 3–14. ACM, 2017a.
 Carlini and Wagner (2017b) N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pages 39–57. IEEE, 2017b.
 Carlini and Wagner (2018) N. Carlini and D. Wagner. Audio adversarial examples: Targeted attacks on speechtotext. arXiv preprint arXiv:1801.01944, 2018.
 Carlini et al. (2017) N. Carlini, G. Katz, C. Barrett, and D. L. Dill. Provably minimally-distorted adversarial examples. arXiv preprint, 2017.
 Chollet et al. (2015) F. Chollet et al. Keras. https://keras.io, 2015.
 Chu et al. (2017) C. Chu, A. Zhmoginov, and M. Sandler. Cyclegan: a master of steganography. arXiv preprint arXiv:1712.02950, 2017.
 Dalvi et al. (2004) N. Dalvi, P. Domingos, S. Sanghai, D. Verma, et al. Adversarial classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 99–108. ACM, 2004.
 Elsayed et al. (2018) G. F. Elsayed, D. Krishnan, H. Mobahi, K. Regan, and S. Bengio. Large margin deep networks for classification. arXiv preprint arXiv:1803.05598, 2018.
 Fawzi et al. (2016) A. Fawzi, S.-M. Moosavi-Dezfooli, and P. Frossard. Robustness of classifiers: from adversarial to random noise. In Advances in Neural Information Processing Systems, pages 1632–1640, 2016.
 Feinman et al. (2017) R. Feinman, R. R. Curtin, S. Shintre, and A. B. Gardner. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410, 2017.
 Fischer et al. (2017) V. Fischer, M. C. Kumar, J. H. Metzen, and T. Brox. Adversarial examples for semantic image segmentation. arXiv preprint arXiv:1703.01101, 2017.
 Galloway et al. (2018) A. Galloway, T. Tanay, and G. W. Taylor. Adversarial training versus weight decay. arXiv preprint arXiv:1804.03308, 2018.
 Gilmer et al. (2018) J. Gilmer, L. Metz, F. Faghri, S. S. Schoenholz, M. Raghu, M. Wattenberg, and I. Goodfellow. Adversarial spheres. arXiv preprint arXiv:1801.02774, 2018.
 Goodfellow et al. (2017) I. Goodfellow, N. Papernot, S. Huang, Y. Duan, P. Abbeel, and J. Clark. Attacking machine learning with adversarial examples. OpenAI blog, https://blog.openai.com/adversarial-example-research, 2017.
 Goodfellow et al. (2014) I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
 Grosse et al. (2017a) K. Grosse, P. Manoharan, N. Papernot, M. Backes, and P. McDaniel. On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280, 2017a.
 Grosse et al. (2017b) K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel. Adversarial examples for malware detection. In European Symposium on Research in Computer Security, pages 62–79. Springer, 2017b.
 Gu and Rigazio (2014) S. Gu and L. Rigazio. Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068, 2014.
 Gu et al. (2017) T. Gu, B. DolanGavitt, and S. Garg. Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733, 2017.
 Guo et al. (2017) C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599, 2017.
 Hayes and Danezis (2017) J. Hayes and G. Danezis. Generating steganographic images via adversarial training. In Advances in Neural Information Processing Systems, pages 1954–1963, 2017.
 Huang et al. (2017) S. Huang, N. Papernot, I. Goodfellow, Y. Duan, and P. Abbeel. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284, 2017.
 Ilyas et al. (2017) A. Ilyas, L. Engstrom, A. Athalye, and J. Lin. Queryefficient blackbox adversarial examples. arXiv preprint arXiv:1712.07113, 2017.
 Jia and Liang (2017) R. Jia and P. Liang. Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328, 2017.
 Jo and Bengio (2017) J. Jo and Y. Bengio. Measuring the tendency of cnns to learn surface statistical regularities. arXiv preprint arXiv:1711.11561, 2017.
 Katz et al. (2017) G. Katz, C. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer. Reluplex: An efficient smt solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pages 97–117. Springer, 2017.
 Kereliuk et al. (2015) C. Kereliuk, B. L. Sturm, and J. Larsen. Deep learning and music adversaries. IEEE Transactions on Multimedia, 17(11):2059–2071, 2015.
 Koh and Liang (2017) P. W. Koh and P. Liang. Understanding blackbox predictions via influence functions. arXiv preprint arXiv:1703.04730, 2017.
 Kos et al. (2017) J. Kos, I. Fischer, and D. Song. Adversarial examples for generative models. arXiv preprint arXiv:1702.06832, 2017.
 Kurakin et al. (2016a) A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016a.
 Kurakin et al. (2016b) A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016b.
 Lin et al. (2013) M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
 Liu et al. (2016) Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and blackbox attacks. arXiv preprint arXiv:1611.02770, 2016.
 Lowd and Meek (2005) D. Lowd and C. Meek. Adversarial learning. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pages 641–647. ACM, 2005.
 Lu et al. (2017) J. Lu, H. Sibai, E. Fabry, and D. Forsyth. No need to worry about adversarial examples in object detection in autonomous vehicles. arXiv preprint arXiv:1707.03501, 2017.
 Luo et al. (2015) Y. Luo, X. Boix, G. Roig, T. Poggio, and Q. Zhao. Foveationbased mechanisms alleviate adversarial examples. arXiv preprint arXiv:1511.06292, 2015.
 Madry et al. (2017) A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
 Mei and Zhu (2015) S. Mei and X. Zhu. Using machine teaching to identify optimal trainingset attacks on machine learners. In AAAI, pages 2871–2877, 2015.
 Metzen et al. (2017) J. H. Metzen, T. Genewein, V. Fischer, and B. Bischoff. On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267, 2017.

Moosavi Dezfooli et al. (2016)
S. M. Moosavi Dezfooli, A. Fawzi, and P. Frossard.
Deepfool: a simple and accurate method to fool deep neural networks.
In
Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
, number EPFLCONF218057, 2016.  MoosaviDezfooli et al. (2017a) S.M. MoosaviDezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. arXiv preprint, 2017a.
 MoosaviDezfooli et al. (2017b) S.M. MoosaviDezfooli, A. Fawzi, O. Fawzi, P. Frossard, and S. Soatto. Analysis of universal adversarial perturbations. arXiv preprint arXiv:1705.09554, 2017b.
 MuñozGonzález et al. (2017) L. MuñozGonzález, B. Biggio, A. Demontis, A. Paudice, V. Wongrassamee, E. C. Lupu, and F. Roli. Towards poisoning of deep learning algorithms with backgradient optimization. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 27–38. ACM, 2017.
 Nelson et al. (2008) B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. Rubinstein, U. Saini, C. A. Sutton, J. D. Tygar, and K. Xia. Exploiting machine learning to subvert your spam filter. LEET, 8:1–9, 2008.
 Neyshabur et al. (2017) B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pages 5949–5958, 2017.
 Papernot et al. (2016a) N. Papernot, P. McDaniel, A. Sinha, and M. Wellman. Towards the science of security and privacy in machine learning. arXiv preprint arXiv:1611.03814, 2016a.
 Papernot et al. (2016b) N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In Security and Privacy (SP), 2016 IEEE Symposium on, pages 582–597. IEEE, 2016b.
 Papernot et al. (2017) N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical blackbox attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519. ACM, 2017.
 Rozsa et al. (2016) A. Rozsa, M. Gunther, and T. E. Boult. Towards robust deep neural networks with bang. arXiv preprint arXiv:1612.00138, 2016.
 Sabour et al. (2015) S. Sabour, Y. Cao, F. Faghri, and D. J. Fleet. Adversarial manipulation of deep representations. arXiv preprint arXiv:1511.05122, 2015.
 Schmidt et al. (2018) L. Schmidt, S. Santurkar, D. Tsipras, K. Talwar, and A. Madry. Adversarially robust generalization requires more data. arXiv preprint arXiv:1804.11285, 2018.
 Szegedy et al. (2013) C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
 Tanay and Griffin (2016) T. Tanay and L. D. Griffin. A boundary tilting persepective on the phenomenon of adversarial examples. arXiv preprint arXiv:1608.07690, 2016.
 Tramèr et al. (2017) F. Tramèr, A. Kurakin, N. Papernot, D. Boneh, and P. McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
 Tsipras et al. (2018) D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry. There is no free lunch in adversarial robustness (but there are unexpected benefits). arXiv preprint arXiv:1805.12152, 2018.
 Zagoruyko and Komodakis (2016) S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
References
 Athalye and Sutskever (2017) A. Athalye and I. Sutskever. Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397, 2017.
 Athalye et al. (2018) A. Athalye, N. Carlini, and D. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
 Baluja (2017) S. Baluja. Hiding images in plain sight: Deep steganography. In Advances in Neural Information Processing Systems, pages 2069–2079, 2017.
 Bastani et al. (2016) O. Bastani, Y. Ioannou, L. Lampropoulos, D. Vytiniotis, A. Nori, and A. Criminisi. Measuring neural net robustness with constraints. In Advances in neural information processing systems, pages 2613–2621, 2016.
 Behzadan and Munir (2017) V. Behzadan and A. Munir. Vulnerability of deep reinforcement learning to policy induction attacks. In International Conference on Machine Learning and Data Mining in Pattern Recognition, pages 262–275. Springer, 2017.
 Biggio and Roli (2017) B. Biggio and F. Roli. Wild patterns: Ten years after the rise of adversarial machine learning. arXiv preprint arXiv:1712.03141, 2017.
 Biggio et al. (2012) B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. arXiv preprint arXiv:1206.6389, 2012.
 Brown et al. (2018) T. B. Brown, N. Carlini, C. Zhang, C. Olsson, P. Christiano, and I. Goodfellow. Unrestricted adversarial examples. arXiv preprint arXiv:1809.08352, 2018.
 Carlini and Wagner (2017a) N. Carlini and D. Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 3–14. ACM, 2017a.
 Carlini and Wagner (2017b) N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pages 39–57. IEEE, 2017b.
 Carlini and Wagner (2018) N. Carlini and D. Wagner. Audio adversarial examples: Targeted attacks on speech-to-text. arXiv preprint arXiv:1801.01944, 2018.
 Carlini et al. (2017) N. Carlini, G. Katz, C. Barrett, and D. L. Dill. Provably minimally-distorted adversarial examples. arXiv preprint arXiv:1709.10207, 2017.
 Chollet et al. (2015) F. Chollet et al. Keras. https://keras.io, 2015.
 Chu et al. (2017) C. Chu, A. Zhmoginov, and M. Sandler. CycleGAN, a master of steganography. arXiv preprint arXiv:1712.02950, 2017.
 Dalvi et al. (2004) N. Dalvi, P. Domingos, S. Sanghai, D. Verma, et al. Adversarial classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 99–108. ACM, 2004.
 Elsayed et al. (2018) G. F. Elsayed, D. Krishnan, H. Mobahi, K. Regan, and S. Bengio. Large margin deep networks for classification. arXiv preprint arXiv:1803.05598, 2018.
 Fawzi et al. (2016) A. Fawzi, S.M. MoosaviDezfooli, and P. Frossard. Robustness of classifiers: from adversarial to random noise. In Advances in Neural Information Processing Systems, pages 1632–1640, 2016.
 Feinman et al. (2017) R. Feinman, R. R. Curtin, S. Shintre, and A. B. Gardner. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410, 2017.
 Fischer et al. (2017) V. Fischer, M. C. Kumar, J. H. Metzen, and T. Brox. Adversarial examples for semantic image segmentation. arXiv preprint arXiv:1703.01101, 2017.
 Galloway et al. (2018) A. Galloway, T. Tanay, and G. W. Taylor. Adversarial training versus weight decay. arXiv preprint arXiv:1804.03308, 2018.
 Gilmer et al. (2018) J. Gilmer, L. Metz, F. Faghri, S. S. Schoenholz, M. Raghu, M. Wattenberg, and I. Goodfellow. Adversarial spheres. arXiv preprint arXiv:1801.02774, 2018.
 Goodfellow et al. (2017) I. Goodfellow, N. Papernot, S. Huang, Y. Duan, P. Abbeel, and J. Clark. Attacking machine learning with adversarial examples. OpenAI. https://blog.openai.com/adversarial-example-research, 2017.
 Goodfellow et al. (2014) I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
 Grosse et al. (2017a) K. Grosse, P. Manoharan, N. Papernot, M. Backes, and P. McDaniel. On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280, 2017a.
 Grosse et al. (2017b) K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel. Adversarial examples for malware detection. In European Symposium on Research in Computer Security, pages 62–79. Springer, 2017b.
 Gu and Rigazio (2014) S. Gu and L. Rigazio. Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068, 2014.
 Gu et al. (2017) T. Gu, B. Dolan-Gavitt, and S. Garg. BadNets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733, 2017.
 Guo et al. (2017) C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599, 2017.
 Hayes and Danezis (2017) J. Hayes and G. Danezis. Generating steganographic images via adversarial training. In Advances in Neural Information Processing Systems, pages 1954–1963, 2017.
 Huang et al. (2017) S. Huang, N. Papernot, I. Goodfellow, Y. Duan, and P. Abbeel. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284, 2017.
 Ilyas et al. (2017) A. Ilyas, L. Engstrom, A. Athalye, and J. Lin. Query-efficient black-box adversarial examples. arXiv preprint arXiv:1712.07113, 2017.
 Jia and Liang (2017) R. Jia and P. Liang. Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328, 2017.
 Jo and Bengio (2017) J. Jo and Y. Bengio. Measuring the tendency of CNNs to learn surface statistical regularities. arXiv preprint arXiv:1711.11561, 2017.
 Katz et al. (2017) G. Katz, C. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pages 97–117. Springer, 2017.
 Kereliuk et al. (2015) C. Kereliuk, B. L. Sturm, and J. Larsen. Deep learning and music adversaries. IEEE Transactions on Multimedia, 17(11):2059–2071, 2015.
 Koh and Liang (2017) P. W. Koh and P. Liang. Understanding blackbox predictions via influence functions. arXiv preprint arXiv:1703.04730, 2017.
 Kos et al. (2017) J. Kos, I. Fischer, and D. Song. Adversarial examples for generative models. arXiv preprint arXiv:1702.06832, 2017.
 Kurakin et al. (2016a) A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016a.
 Kurakin et al. (2016b) A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016b.
 Lin et al. (2013) M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
 Liu et al. (2016) Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.
 Lowd and Meek (2005) D. Lowd and C. Meek. Adversarial learning. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pages 641–647. ACM, 2005.
 Lu et al. (2017) J. Lu, H. Sibai, E. Fabry, and D. Forsyth. No need to worry about adversarial examples in object detection in autonomous vehicles. arXiv preprint arXiv:1707.03501, 2017.
 Luo et al. (2015) Y. Luo, X. Boix, G. Roig, T. Poggio, and Q. Zhao. Foveation-based mechanisms alleviate adversarial examples. arXiv preprint arXiv:1511.06292, 2015.
 Madry et al. (2017) A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
 Mei and Zhu (2015) S. Mei and X. Zhu. Using machine teaching to identify optimal training-set attacks on machine learners. In AAAI, pages 2871–2877, 2015.
 Metzen et al. (2017) J. H. Metzen, T. Genewein, V. Fischer, and B. Bischoff. On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267, 2017.
 Moosavi-Dezfooli et al. (2016) S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), number EPFL-CONF-218057, 2016.
 Moosavi-Dezfooli et al. (2017a) S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. arXiv preprint, 2017a.
 Moosavi-Dezfooli et al. (2017b) S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, P. Frossard, and S. Soatto. Analysis of universal adversarial perturbations. arXiv preprint arXiv:1705.09554, 2017b.
 Muñoz-González et al. (2017) L. Muñoz-González, B. Biggio, A. Demontis, A. Paudice, V. Wongrassamee, E. C. Lupu, and F. Roli. Towards poisoning of deep learning algorithms with back-gradient optimization. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 27–38. ACM, 2017.
 Nelson et al. (2008) B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. Rubinstein, U. Saini, C. A. Sutton, J. D. Tygar, and K. Xia. Exploiting machine learning to subvert your spam filter. LEET, 8:1–9, 2008.
 Neyshabur et al. (2017) B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pages 5949–5958, 2017.
 Papernot et al. (2016a) N. Papernot, P. McDaniel, A. Sinha, and M. Wellman. Towards the science of security and privacy in machine learning. arXiv preprint arXiv:1611.03814, 2016a.
 Papernot et al. (2016b) N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In Security and Privacy (SP), 2016 IEEE Symposium on, pages 582–597. IEEE, 2016b.
 Papernot et al. (2017) N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical blackbox attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519. ACM, 2017.
 Rozsa et al. (2016) A. Rozsa, M. Gunther, and T. E. Boult. Towards robust deep neural networks with bang. arXiv preprint arXiv:1612.00138, 2016.
 Sabour et al. (2015) S. Sabour, Y. Cao, F. Faghri, and D. J. Fleet. Adversarial manipulation of deep representations. arXiv preprint arXiv:1511.05122, 2015.
 Schmidt et al. (2018) L. Schmidt, S. Santurkar, D. Tsipras, K. Talwar, and A. Madry. Adversarially robust generalization requires more data. arXiv preprint arXiv:1804.11285, 2018.
 Szegedy et al. (2013) C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
 Tanay and Griffin (2016) T. Tanay and L. D. Griffin. A boundary tilting perspective on the phenomenon of adversarial examples. arXiv preprint arXiv:1608.07690, 2016.
 Tramèr et al. (2017) F. Tramèr, A. Kurakin, N. Papernot, D. Boneh, and P. McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
 Tsipras et al. (2018) D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry. There is no free lunch in adversarial robustness (but there are unexpected benefits). arXiv preprint arXiv:1805.12152, 2018.
 Zagoruyko and Komodakis (2016) S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.