Datasets for the paper "Adversarial Examples are not Bugs, They Are Features"
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data.
The pervasive brittleness of deep neural networks [szegedy2014intriguing, engstrom2019rotation, hendrycks2019benchmarking, athalye2018synthesizing] has attracted significant attention in recent years. Particularly worrisome is the phenomenon of adversarial examples [biggio2013evasion, szegedy2014intriguing]: imperceptibly perturbed natural inputs that induce erroneous predictions in state-of-the-art classifiers. Previous work has proposed a variety of explanations for this phenomenon, ranging from theoretical models [schmidt2018adversarially, bubeck2018adversarial] to arguments based on concentration of measure in high dimensions [gilmer2018adversarial, mahloujifar2018curse, shafahi2019are]. These theories, however, are often unable to fully capture behaviors we observe in practice (we discuss this further in Section 5).
More broadly, previous work in the field tends to view adversarial examples as aberrations arising either from the high-dimensional nature of the input space or from statistical fluctuations in the training data [szegedy2014intriguing, goodfellow2015explaining, gilmer2018adversarial]. From this point of view, it is natural to treat adversarial robustness as a goal that can be disentangled and pursued independently from maximizing accuracy [madry2018towards, stutz2019disentangling, suggala2019adversarial], either through improved standard regularization methods [tanay2016boundary] or pre/post-processing of network inputs/outputs [uesato2018adversarial, carlini2017adversarial, he2017adversarial].
In this work, we propose a new perspective on the phenomenon of adversarial examples. In contrast to the previous models, we cast adversarial vulnerability as a fundamental consequence of the dominant supervised learning paradigm. Specifically, we claim that:
Adversarial vulnerability is a direct result of our models’ sensitivity to well-generalizing features in the data.
Recall that we usually train classifiers to solely maximize (distributional) accuracy. Consequently, classifiers tend to use any available signal to do so, even those that look incomprehensible to humans. After all, the presence of “a tail” or “ears” is no more natural to a classifier than any other equally predictive pattern. In fact, we find that standard ML datasets do contain highly predictive yet imperceptible patterns. We posit that our models learn to rely on “non-robust” features arising from such patterns, leading to adversarial perturbations that exploit this dependence.
Our hypothesis also suggests an explanation for adversarial transferability: the phenomenon that adversarial perturbations computed for one model often transfer to other, independently trained models. Since any two models are likely to learn similar non-robust features, perturbations that manipulate such features will apply to both. Finally, this perspective establishes adversarial vulnerability as a purely human-centric phenomenon, since, from the standard supervised learning point of view, non-robust features can be as important as robust ones. It also suggests that approaches aiming to enhance the interpretability of a given model by enforcing "priors" for its explanation [erhan2009visualizing, mahendran2015understanding, olah2017feature] actually hide features that are "meaningful" and predictive to standard models. As such, producing human-meaningful explanations that remain faithful to underlying models cannot be pursued independently from the training of the models themselves.
To corroborate our theory, we show that it is possible to disentangle robust from non-robust features in standard image classification datasets. Specifically, given any training dataset, we are able to construct:
A "robustified" version for robust classification (Figure 0(a)). (The corresponding datasets for CIFAR-10 are publicly available at http://git.io/adv-datasets.) We demonstrate that it is possible to effectively remove non-robust features from a dataset. Concretely, we create a training set (semantically similar to the original) on which standard training yields good robust accuracy on the original, unmodified test set. This finding establishes that adversarial vulnerability is not necessarily tied to the standard training framework, but is rather a property of the dataset.
A "non-robust" version for standard classification (Figure 0(b)), available at the same location. We are also able to construct a training dataset for which the inputs are nearly identical to the originals, but all appear incorrectly labeled. In fact, the inputs in the new training set are associated with their labels only through small adversarial perturbations (and hence utilize only non-robust features). Despite the lack of any predictive human-visible information, training on this dataset yields good accuracy on the original, unmodified test set.
Finally, we present a concrete classification task where the connection between adversarial examples and non-robust features can be studied rigorously. This task consists of separating Gaussian distributions, and is loosely based on the model presented in tsipras2019robustness, while expanding upon it in a few ways. First, adversarial vulnerability in our setting can be precisely quantified as a difference between the intrinsic data geometry and that of the adversary’s perturbation set. Second, robust training yields a classifier which utilizes a geometry corresponding to a combination of these two. Lastly, the gradients of standard models can be significantly more misaligned with the inter-class direction, capturing a phenomenon that has been observed in practice in more complex scenarios tsipras2019robustness.
We begin by developing a framework, loosely based on the setting proposed by tsipras2019robustness, that enables us to rigorously refer to “robust” and “non-robust” features. In particular, we present a set of definitions which allow us to formally describe our setup, theoretical results, and empirical evidence.
We consider binary classification (our framework can be straightforwardly adapted to the multi-class setting), where input-label pairs $(x, y) \in \mathcal{X} \times \{\pm 1\}$ are sampled from a (data) distribution $\mathcal{D}$; the goal is to learn a classifier $C : \mathcal{X} \to \{\pm 1\}$ which predicts a label $y$ corresponding to a given input $x$.
We define a feature to be a function mapping from the input space $\mathcal{X}$ to the real numbers, with the set of all features thus being $\mathcal{F} = \{f : \mathcal{X} \to \mathbb{R}\}$. For convenience, we assume that the features in $\mathcal{F}$ are shifted/scaled to be mean-zero and unit-variance (i.e., so that $\mathbb{E}_{(x,y)\sim\mathcal{D}}[f(x)] = 0$ and $\mathbb{E}_{(x,y)\sim\mathcal{D}}[f(x)^2] = 1$), in order to make the following definitions scale-invariant. (This restriction can be straightforwardly removed by simply shifting/scaling the definitions.) Note that this formal definition also captures what we abstractly think of as features (e.g., we can construct an $f$ that captures how "furry" an image is).
We now define the key concepts required for formulating our framework. To this end, we categorize features in the following manner:
$\rho$-useful features: For a given distribution $\mathcal{D}$, we call a feature $f$ $\rho$-useful ($\rho > 0$) if it is correlated with the true label in expectation, that is if

$$\mathbb{E}_{(x,y)\sim\mathcal{D}}\big[y \cdot f(x)\big] \;\geq\; \rho. \qquad (1)$$

We then define $\rho_{\mathcal{D}}(f)$ as the largest $\rho$ for which feature $f$ is $\rho$-useful under distribution $\mathcal{D}$. (Note that if a feature $f$ is negatively correlated with the label, then $-f$ is useful instead.) Crucially, a linear classifier trained on $\rho$-useful features can attain non-trivial generalization performance.
$\gamma$-robustly useful features: Suppose we have a $\rho$-useful feature $f$ ($\rho_{\mathcal{D}}(f) > 0$). We refer to $f$ as a robust feature (formally, a $\gamma$-robustly useful feature for $\gamma > 0$) if, under adversarial perturbation (for some specified set of valid perturbations $\Delta$), $f$ remains $\gamma$-useful. Formally, if we have that

$$\mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\inf_{\delta \in \Delta(x)} y \cdot f(x + \delta)\Big] \;\geq\; \gamma. \qquad (2)$$
Useful, non-robust features: A useful, non-robust feature is a feature which is $\rho$-useful for some $\rho$ bounded away from zero, but is not a $\gamma$-robustly useful feature for any $\gamma \geq 0$. These features help with classification in the standard setting, but may hinder accuracy in the adversarial setting, as the correlation with the label can be flipped.
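To make definition (1) concrete, here is a minimal numpy sketch (ours, not from the paper) that estimates the usefulness $\rho_{\mathcal{D}}(f)$ of a candidate feature from samples; the "brightness" feature and the synthetic data are hypothetical stand-ins for a real feature and dataset.

```python
import numpy as np

def usefulness(feature_fn, xs, ys):
    """Estimate rho_D(f) = E[y * f(x)] from samples, after standardizing f
    to mean zero and unit variance as required in Section 2."""
    vals = np.array([feature_fn(x) for x in xs], dtype=float)
    vals = (vals - vals.mean()) / (vals.std() + 1e-12)
    return float(np.mean(ys * vals))

# Hypothetical example: a "brightness" feature on random 8x8 grayscale images,
# with labels that are (noisily) correlated with brightness.
rng = np.random.default_rng(0)
xs = rng.random((1000, 8, 8))
ys = np.where(xs.mean(axis=(1, 2)) + 0.1 * rng.standard_normal(1000) > 0.5, 1, -1)

print(usefulness(lambda x: x.mean(), xs, ys))   # clearly positive: a useful feature
print(usefulness(lambda x: x[0, 0], xs, ys))    # near zero: an (almost) useless feature
```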
In our framework, a classifier $C = (F, w, b)$ is comprised of a set of features $F \subseteq \mathcal{F}$, a weight vector $w$, and a scalar bias $b$. For a given input $x$, the classifier predicts the label $y$ as
$$C(x) = \operatorname{sgn}\Big(b + \sum_{f \in F} w_f \cdot f(x)\Big).$$
For convenience, we denote the set of features learned by a classifier $C$ as $F_C$.
Training a classifier is performed by minimizing a loss function (via empirical risk minimization (ERM)) that decreases with the correlation between the weighted combination of the features and the label. The simplest example of such a loss is
$$\mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\mathcal{L}_\theta(x, y)\big] = -\mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[y \cdot \Big(b + \sum_{f \in F} w_f \cdot f(x)\Big)\Big]. \qquad (3)$$
(Just as for the other parts of this model, we use this loss for simplicity only; it is straightforward to generalize to more practical loss functions such as the logistic or hinge loss.)
When minimizing classification loss, no distinction exists between robust and non-robust features: the only distinguishing factor of a feature is its $\rho$-usefulness. Furthermore, the classifier will utilize any $\rho$-useful feature in $F$ to decrease its loss.
In the presence of an adversary, any useful but non-robust features can be made anti-correlated with the true label, leading to adversarial vulnerability. Therefore, ERM is no longer sufficient to train classifiers that are robust, and we need to explicitly account for the effect of the adversary on the classifier. To do so, we use an adversarial loss function that can discern between robust and non-robust features [madry2018towards]:
$$\mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\max_{\delta \in \Delta(x)} \mathcal{L}_\theta(x + \delta, y)\Big], \qquad (4)$$
for an appropriately defined set of perturbations $\Delta$. Since the adversary can exploit non-robust features to degrade classification accuracy, minimizing this adversarial loss (as in adversarial training [goodfellow2015explaining, madry2018towards]) can be viewed as explicitly preventing the classifier from learning a useful but non-robust combination of features.
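To illustrate (4) in code, the following PyTorch-style sketch approximates the inner maximization with a few steps of projected gradient ascent under an $\ell_\infty$ budget and minimizes the resulting adversarial loss. It is a generic adversarial-training loop written for this document, not the exact configuration used in the paper; `model`, `loader`, and the hyperparameters are placeholders, and input-range clipping is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def adversarial_loss(model, x, y, eps=8 / 255, step=2 / 255, iters=7):
    """Approximate max_{||delta||_inf <= eps} L(x + delta, y) with PGD."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend on the loss, then project back onto the ell_inf ball.
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return F.cross_entropy(model(x + delta), y)

def adversarial_train(model, loader, optimizer, epochs=1):
    """Minimize the adversarial loss (4) in place of the standard ERM loss (3)."""
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            adversarial_loss(model, x, y).backward()
            optimizer.step()
```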
The central premise of our proposed framework is that there exist both robust and non-robust features that constitute useful signals for standard classification. We now provide evidence in support of this hypothesis by disentangling these two sets of features.
On one hand, we will construct a “robustified” dataset, consisting of samples that primarily contain robust features. Using such a dataset, we are able to train robust classifiers (with respect to the standard test set) using standard (i.e., non-robust) training. This demonstrates that robustness can arise by removing certain features from the dataset (as, overall, the new dataset contains less information about the original training set). Moreover, it provides evidence that adversarial vulnerability is caused by non-robust features and is not inherently tied to the standard training framework.
On the other hand, we will construct datasets where the input-label association is based purely on non-robust features (and thus the corresponding dataset appears completely mislabeled to humans). We show that this dataset suffices to train a classifier with good performance on the standard test set. This indicates that natural models use non-robust features to make predictions, even in the presence of robust features. These features alone are actually sufficient for non-trivial generalization performance on natural images, which indicates that they are indeed valuable features, rather than artifacts of finite-sample overfitting.
A conceptual description of these experiments can be found in Figure 1.
(Standard training on this dataset yields nontrivial robust accuracy; results for Restricted-ImageNet tsipras2019robustness are in Appendix D.7, Figure 12.)
Recall that the features a classifier learns to rely on are based purely on how useful these features are for (standard) generalization. Thus, under our conceptual framework, if we can ensure that only robust features are useful, standard training should result in a robust classifier. Unfortunately, we cannot directly manipulate the features of very complex, high-dimensional datasets. Instead, we will leverage a robust model and modify our dataset to contain only the features that are relevant to that model.
In terms of our formal framework (Section 2), given a robust (i.e., adversarially trained) model $C$ we aim to construct a distribution $\widehat{\mathcal{D}}_R$ which satisfies:

$$\mathbb{E}_{(x,y)\sim\widehat{\mathcal{D}}_R}\big[f(x) \cdot y\big] =
\begin{cases}
\mathbb{E}_{(x,y)\sim\mathcal{D}}\big[f(x) \cdot y\big] & \text{if } f \in F_C, \\
0 & \text{otherwise,}
\end{cases} \qquad (5)$$

where $F_C$ again represents the set of features utilized by $C$. Conceptually, we want the features used by $C$ to be as useful as they were on the original distribution $\mathcal{D}$, while ensuring that the rest of the features are not useful under $\widehat{\mathcal{D}}_R$.
We will construct a training set for $\widehat{\mathcal{D}}_R$ via a one-to-one mapping $x \mapsto x_r$ from the original training set for $\mathcal{D}$. In the case of a deep neural network, $F_C$ corresponds exactly to the set of activations in the penultimate layer (since these correspond to inputs to a linear classifier). To ensure that features used by the model are equally useful under both training sets, we (approximately) enforce all features in $F_C$ to have similar values for both $x$ and $x_r$ through the following optimization:

$$\min_{x_r} \; \big\| g(x_r) - g(x) \big\|_2, \qquad (6)$$

where $x$ is the original input and $g$ is the mapping from $x$ to the representation layer. We optimize this objective using gradient descent in input space (following [madry2018towards], we normalize gradient steps during this optimization; experimental details are provided in Appendix C).
Since we don't have access to features outside $F_C$, there is no way to ensure that the expectation in (5) is zero for all $f \notin F_C$. To approximate this condition, we choose the starting point of gradient descent for the optimization in (6) to be an input $x_0$ which is drawn from $\mathcal{D}$ independently of the label of $x$ (we also explore sampling $x_0$ from noise in Appendix D.1). This choice ensures that any feature present in that input will not be useful, since such features are not correlated with the label in expectation over $x_0$. The underlying assumption here is that, when performing the optimization in (6), features that are not being directly optimized (i.e., features outside $F_C$) are not affected. We provide pseudocode for the construction in Figure 5 (Appendix C); a simplified sketch is shown below.
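Below is a simplified sketch of the construction in (6), written for this document as a stand-in for the pseudocode in Figure 5 rather than the authors' exact implementation. It assumes a hypothetical `robust_model.features` function that returns penultimate-layer activations (the mapping $g$), and follows the normalized-gradient-descent and clipping details of Appendix C only loosely.

```python
import torch

def robustify_example(robust_model, x, x0, steps=1000, step_size=0.1):
    """Approximately solve min_{x_r} ||g(x_r) - g(x)||_2 (eq. 6) by normalized
    gradient descent in input space, starting from an image x0 that was drawn
    independently of the label of x. The result is paired with x's original label."""
    with torch.no_grad():
        target_rep = robust_model.features(x)          # g(x): penultimate activations
    x_r = x0.clone().requires_grad_(True)
    for _ in range(steps):
        loss = (robust_model.features(x_r) - target_rep).norm()
        grad, = torch.autograd.grad(loss, x_r)
        with torch.no_grad():
            x_r -= step_size * grad / (grad.norm() + 1e-10)   # normalized gradient step
            x_r.clamp_(0, 1)                                  # keep a valid image
    return x_r.detach()
```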
Given the new training set for $\widehat{\mathcal{D}}_R$ (a few random samples are visualized in Figure 1(a)), we train a classifier using standard (non-robust) training. We then test this classifier on the original test set (i.e., $\mathcal{D}$). The results (Figure 1(b)) indicate that the classifier learned using the new dataset attains good accuracy in both standard and adversarial settings. (In an attempt to explain the gap in accuracy between the model trained on $\widehat{\mathcal{D}}_R$ and the original robust classifier $C$, we test for distributional shift by reporting results on the "robustified" test set in Appendix D.3. To gain more confidence in the robustness of the resulting model, we also attempt several diverse attacks in Appendix D.2.)
As a control, we repeat this methodology using a standard (non-robust) model for $C$ in our construction of the dataset. Sample images from the resulting "non-robust dataset" $\widehat{\mathcal{D}}_{NR}$ are shown in Figure 1(a); they tend to resemble the source image $x_0$ of the optimization more than the target image $x$. We find that training on this dataset leads to good standard accuracy, yet yields almost no robustness (Figure 1(b)). We also verify that this procedure is not simply a matter of encoding the weights of the original model: we get the same results for both $\widehat{\mathcal{D}}_R$ and $\widehat{\mathcal{D}}_{NR}$ if we train with architectures different from those of the original models.
Overall, our findings corroborate the hypothesis that adversarial examples arise from (non-robust) features of the data itself. By filtering out non-robust features from the dataset (e.g. by restricting the set of available features to those used by a robust model), one can train a robust model using standard training.
The results of the previous section show that by restricting the dataset to only contain features that are used by a robust model, standard training results in classifiers that are robust. This suggests that when training on the standard dataset, non-robust features take on a large role in the resulting learned classifier. Here we set out to show that this role is not merely incidental or due to finite-sample overfitting. In particular, we demonstrate that non-robust features alone suffice for standard generalization— i.e., a model trained solely on non-robust features can perform well on the standard test set.
To show this, we construct a dataset where the only features that are useful for classification are non-robust features (or, in terms of our formal model from Section 2, all features that are $\rho$-useful are non-robust). To accomplish this, we modify each input-label pair $(x, y)$ as follows. We select a target class $t$ either (a) uniformly at random among classes or (b) deterministically according to the source class (e.g., using a fixed permutation of labels). Then, we add a small adversarial perturbation to $x$ in order to ensure it is classified as $t$ by a standard model. Formally:

$$x_{adv} = \arg\min_{\|x' - x\| \leq \varepsilon} \; \mathcal{L}_C(x', t), \qquad (7)$$

where $\mathcal{L}_C$ is the loss under a standard (non-robust) classifier $C$ and $\varepsilon$ is a small constant. The resulting inputs are nearly indistinguishable from the originals (Appendix D, Figure 9); to a human observer, it thus appears that the label $t$ assigned to the modified input is simply incorrect. The resulting input-label pairs $(x_{adv}, t)$ make up the new training set (pseudocode in Appendix C, Figure 6; a simplified sketch is shown below).
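The sketch below illustrates the relabeling procedure of (7); it is our simplified stand-in for the pseudocode in Figure 6, not the authors' code. It runs a targeted $\ell_2$ PGD attack against a standard model so that each input is classified as the new target label $t$, using placeholder hyperparameters.

```python
import torch
import torch.nn.functional as F

def make_nonrobust_example(std_model, x, t, eps=0.5, step=0.1, iters=100):
    """Targeted ell_2 PGD toward class t (eq. 7). x: batch of images (B, C, H, W);
    t: target labels (B,). Returns inputs that look like x but are labeled t."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(std_model(x + delta), t)   # drive the prediction toward t
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Normalized descent step, then projection onto the ell_2 ball of radius eps.
            gnorm = grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-10
            delta -= step * grad / gnorm
            dnorm = delta.flatten(1).norm(dim=1).clamp(min=eps).view(-1, 1, 1, 1)
            delta *= eps / dnorm
    return (x + delta).detach(), t
```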
Now, since $\varepsilon$ is small, by definition the robust features of $x_{adv}$ are still correlated with class $y$ (and not $t$) in expectation over the dataset. After all, humans still recognize the original class. On the other hand, since every $x_{adv}$ is strongly classified as $t$ by a standard classifier, it must be that some of the non-robust features are now strongly correlated with $t$ (in expectation). Thus, for any choice of $t$ (whether random or deterministic), only the non-robust features of the new dataset agree with the new label assignment.
In the case where $t$ is chosen at random, the robust features are (in expectation) uncorrelated with the label $t$, and are thus not useful for classification. Formally, we aim to construct a dataset $\widehat{\mathcal{D}}_{rand}$ where (note that the optimization procedure we describe merely approximates this condition, where we once again use trained models to simulate access to robust and non-robust features):

$$\mathbb{E}_{(x,y)\sim\widehat{\mathcal{D}}_{rand}}\big[y \cdot f(x)\big]
\begin{cases}
> 0 & \text{if } f \text{ is non-robustly useful under } \mathcal{D}, \\
\simeq 0 & \text{otherwise.}
\end{cases} \qquad (8)$$
When $t$ is chosen deterministically based on $y$, the robust features actually point away from the assigned label $t$. In particular, all of the inputs labeled with class $t$ exhibit non-robust features correlated with $t$, but robust features correlated with the original class $y$. Thus, robust features actually provide significant predictive power on the (relabeled) training set, but will hurt generalization on the standard test set. Viewing this case again using the formal model, our goal is to construct $\widehat{\mathcal{D}}_{det}$ such that

$$\mathbb{E}_{(x,y)\sim\widehat{\mathcal{D}}_{det}}\big[y \cdot f(x)\big]
\begin{cases}
> 0 & \text{if } f \text{ is non-robustly useful under } \mathcal{D}, \\
< 0 & \text{if } f \text{ is robustly useful under } \mathcal{D}, \\
\in \mathbb{R} & \text{otherwise } (f \text{ not useful under } \mathcal{D}).
\end{cases} \qquad (9)$$
We find that standard training on these datasets actually generalizes to the original test set, as shown in Table 1. This indicates that non-robust features are indeed useful for classification in the standard setting. Remarkably, even training on $\widehat{\mathcal{D}}_{det}$ (where all the robust features are correlated with the wrong class) results in a well-generalizing classifier. This indicates that non-robust features can be picked up by models during standard training, even in the presence of robust features that are predictive. (We provide additional results and analysis, e.g., training curves and generating $\widehat{\mathcal{D}}_{rand}$ and $\widehat{\mathcal{D}}_{det}$ with a robust model, in Appendices D.6 and D.5.)
Table 1: Test accuracy on the original test set of classifiers trained on the original dataset and on the constructed $\widehat{\mathcal{D}}_{rand}$ and $\widehat{\mathcal{D}}_{det}$ training sets.

Training dataset | CIFAR-10 | Restricted ImageNet
---|---|---
$\mathcal{D}$ (original) | 95.3% | 96.6%
$\widehat{\mathcal{D}}_{rand}$ | 63.3% | 87.9%
$\widehat{\mathcal{D}}_{det}$ | 43.7% | 64.4%
One of the most intriguing properties of adversarial examples is that they transfer across models with different architectures and independently sampled training sets szegedy2014intriguing,papernot2016transferability,charles2019geometric. Here, we show that this phenomenon can in fact be viewed as a natural consequence of the existence of non-robust features. Recall that, according to our main thesis, adversarial examples are the result of perturbing well-generalizing, yet brittle features. Given that such features are inherent to the data distribution, different classifiers trained on independent samples from that distribution are likely to utilize similar non-robust features. Consequently, an adversarial example constructed by exploiting the non-robust features learned by one classifier will transfer to any other classifier utilizing these features in a similar manner.
In order to illustrate and corroborate this hypothesis, we train five different architectures on the dataset generated in Section 3.2 (adversarial examples with deterministic labels) for a standard ResNet-50 [he2016deep]. Our hypothesis would suggest that architectures which learn better from this training set (in terms of performance on the standard test set) are more likely to learn similar non-robust features to the original classifier. Indeed, we find that the test accuracy of each architecture is predictive of how often adversarial examples transfer from the original model to standard classifiers with that architecture (Figure 3). These findings thus corroborate our hypothesis that adversarial transferability arises when models learn similar brittle features of the underlying dataset.
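As a rough sketch of how such a transfer measurement can be implemented (our illustration, not the authors' evaluation code), one can count how often adversarial examples that fool the source model also fool an independently trained target model; `attack` stands for any attack function, e.g., PGD as sketched earlier.

```python
import torch

@torch.no_grad()
def _errors(model, x, y):
    return model(x).argmax(dim=1) != y

def transfer_rate(src_model, tgt_model, attack, loader):
    """Fraction of adversarial examples that fool src_model and also fool tgt_model."""
    fooled_src, fooled_both = 0, 0
    for x, y in loader:
        x_adv = attack(src_model, x, y)       # adversarial examples against the source
        src_wrong = _errors(src_model, x_adv)
        tgt_wrong = _errors(tgt_model, x_adv)
        fooled_src += src_wrong.sum().item()
        fooled_both += (src_wrong & tgt_wrong).sum().item()
    return fooled_both / max(fooled_src, 1)
```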
The experiments from the previous section demonstrate that the conceptual framework of robust and non-robust features is strongly predictive of the empirical behavior of state-of-the-art models on real-world datasets. In order to further strengthen our understanding of the phenomenon, we instantiate the framework in a concrete setting that allows us to theoretically study various properties of the corresponding model. Our model is similar to that of tsipras2019robustness in the sense that it contains a dichotomy between robust and non-robust features, but extends upon it in a number of ways:
The adversarial vulnerability can be explicitly expressed as a difference between the inherent data metric and the $\ell_2$ metric.
Robust learning corresponds exactly to learning a combination of these two metrics.
The gradients of adversarially trained models align better with the adversary’s metric.
We study a simple problem of maximum likelihood classification between two Gaussian distributions. In particular, given samples $(x, y)$ sampled from $\mathcal{D}$ according to

$$y \stackrel{u.a.r.}{\sim} \{-1, +1\}, \qquad x \sim \mathcal{N}(y \cdot \mu^*, \Sigma^*), \qquad (10)$$

our goal is to learn parameters $\Theta = (\mu, \Sigma)$ such that

$$\Theta = \arg\min_{\mu, \Sigma} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell(x;\, y\cdot\mu,\, \Sigma)\big], \qquad (11)$$

where $\ell(x; \mu, \Sigma)$ represents the Gaussian negative log-likelihood (NLL) function. Intuitively, we find the parameters $\mu, \Sigma$ which maximize the likelihood of the sampled data under the given model. Classification under this model can be accomplished via a likelihood test: given an unlabeled sample $x$, we predict $y$ as
$$y = \arg\min_{y \in \{-1, +1\}} \; \ell(x;\, y\cdot\mu,\, \Sigma) = \operatorname{sign}\big(x^\top \Sigma^{-1} \mu\big).$$
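For completeness, a short and standard derivation (ours, not quoted from the paper) of why the likelihood test reduces to this linear rule in the symmetric two-Gaussian setting:

```latex
\ell(x;\mu,\Sigma) - \ell(x;-\mu,\Sigma)
  = \tfrac{1}{2}(x-\mu)^\top \Sigma^{-1}(x-\mu)
  - \tfrac{1}{2}(x+\mu)^\top \Sigma^{-1}(x+\mu)
  = -2\, x^\top \Sigma^{-1}\mu ,
```

so the class $y = +1$ attains the smaller negative log-likelihood exactly when $x^\top \Sigma^{-1}\mu > 0$, which gives the rule $C(x) = \operatorname{sign}(x^\top \Sigma^{-1}\mu)$.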
In turn, the robust analogue of this problem arises from replacing $\ell(x; y\cdot\mu, \Sigma)$ with the NLL under adversarial perturbation. The resulting robust parameters $\Theta_r$ can be written as

$$\Theta_r = \arg\min_{\mu, \Sigma} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\max_{\|\delta\|_2 \leq \varepsilon} \ell(x + \delta;\, y\cdot\mu,\, \Sigma)\Big]. \qquad (12)$$
A detailed analysis of this setting is in Appendix E—here we present a high-level overview of the results.
Note that in this model, one can rigorously make reference to an inner product (and thus a metric) induced by the features. In particular, one can view the learned parameters of a Gaussian $(\mu, \Sigma)$ as defining an inner product over the input space (given explicitly below). This in turn induces the Mahalanobis distance, which represents how a change in the input affects the features learned by the classifier. This metric is not necessarily aligned with the metric in which the adversary is constrained, the $\ell_2$-norm. In fact, we show that adversarial vulnerability arises exactly as a misalignment of these two metrics.
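Concretely, the feature-induced inner product and distance referred to here are the standard Mahalanobis quantities (stated in our notation):

```latex
\langle u, v \rangle_{\Sigma} \;=\; u^\top \Sigma^{-1} v,
\qquad
d_{\Sigma}(u, v) \;=\; \sqrt{(u - v)^\top \Sigma^{-1} (u - v)}.
```

A perturbation $\delta$ with small $\ell_2$ norm can therefore still have large $d_\Sigma$ whenever it points along a direction in which $\Sigma$ has a small eigenvalue; such low-variance but heavily weighted directions are exactly the non-robust features of this model.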
Consider an adversary whose perturbation is determined by the "Lagrangian penalty" form of (12), i.e., an adversary that trades off maximizing the NLL against a penalty on the perturbation size, with a constant $C$ controlling the trade-off between the NLL objective and the adversarial constraint. Then, the adversarial loss incurred by the non-robustly learned parameters $(\mu^*, \Sigma^*)$ is governed by the misalignment between the metric induced by $\Sigma^*$ and the $\ell_2$ metric; in particular, for a fixed $\operatorname{tr}(\Sigma) = k$, the adversarial loss is minimized by $\Sigma = \frac{k}{d} I$.
In fact, note that such a misalignment corresponds precisely to the existence of non-robust features, as it indicates that “small” changes in the adversary’s metric along certain directions can cause large changes under the data-dependent notion of distance established by the parameters. This is illustrated in Figure 4, where misalignment in the feature-induced metric is responsible for the presence of a non-robust feature in the corresponding classification problem.
The optimal (non-robust) maximum likelihood estimate is $\Sigma = \Sigma^*$, and thus the vulnerability for the standard MLE estimate is governed entirely by the true data distribution. The following theorem characterizes the behaviour of the learned parameters in the robust problem. (As discussed in Appendix E.3.3, we study a slight relaxation of (12) that approaches exactness exponentially fast as $d \to \infty$. In fact, we can prove (Section E.3.4) that performing (sub)gradient descent using the maximizer of the inner problem, i.e., adversarial training [goodfellow2015explaining, madry2018towards], yields exactly the robust parameters $\Theta_r$.) We find that as the perturbation budget $\varepsilon$ is increased, the metric induced by the learned features mixes the $\ell_2$ metric and the metric induced by the true features. Just as in the non-robust case, $\mu_r = \mu^*$, i.e., the true mean is learned. For the robust covariance $\Sigma_r$, there exists an $\varepsilon_0 > 0$ such that for any $\varepsilon \in [0, \varepsilon_0)$, $\Sigma_r$ interpolates between $\Sigma^*$ and a multiple of the identity.
The effect of robust optimization under an $\ell_2$-constrained adversary is visualized in Figure 4. As $\varepsilon$ grows, the learned covariance becomes more aligned with the identity. For instance, we can see that the classifier learns to be less sensitive in certain directions, despite their usefulness for natural classification. The mean $\mu$ remains constant, but the learned covariance "blends" with the identity matrix, effectively adding more and more uncertainty onto the non-robust feature.
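The qualitative behaviour described above can be reproduced with a small simulation (our sketch, using a crude alternating scheme and a single-step approximation of the worst-case perturbation, so it only mirrors the theory qualitatively): as $\varepsilon$ grows, the learned diagonal covariance drifts toward a multiple of the identity while the learned mean stays near the true mean.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true = np.array([2.0, 0.5])
var_true = np.array([1.0, 0.05])          # one high-variance and one low-variance direction
x = mu_true + np.sqrt(var_true) * rng.standard_normal((20000, 2))

def robust_mle(x, eps, iters=200):
    """Alternate a first-order worst-case ell_2 perturbation of each sample with a
    Gaussian refit (an adversarial-training heuristic for the robust MLE problem)."""
    mu, var = x.mean(0), x.var(0)
    for _ in range(iters):
        # The NLL gradient w.r.t. the input is Sigma^{-1}(x - mu); a single normalized
        # step of size eps along it approximates the worst-case ell_2 perturbation.
        g = (x - mu) / var
        x_adv = x + eps * g / (np.linalg.norm(g, axis=1, keepdims=True) + 1e-12)
        mu, var = x_adv.mean(0), x_adv.var(0)
    return mu, var

for eps in [0.0, 0.5, 1.0]:
    mu_r, var_r = robust_mle(x, eps)
    print(f"eps={eps:.1f}  mean={np.round(mu_r, 2)}  variances={np.round(var_r, 3)}")
# As eps grows, the ratio between the two learned variances shrinks toward 1
# (the covariance "blends" with the identity), while the mean stays near mu_true.
```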
tsipras2019robustness observe that gradients of robust models tend to look more semantically meaningful. It turns out that under our model, this behaviour arises as a natural consequence of Theorem 2. In particular, we show that the resulting robustly learned parameters cause the gradient of the linear classifier and the vector connecting the means of the two distributions to better align (in a worst-case sense) under the $\ell_2$ inner product.
Let $f(x)$ and $f_r(x)$ be monotonic classifiers based on the linear separator induced by standard and $\ell_2$-robust maximum likelihood classification, respectively. The maximum angle formed between the gradient of the classifier (with respect to the input) and the vector connecting the classes can be smaller for the robust model.
Figure 4 illustrates this phenomenon in the two-dimensional case. With $\ell_2$-bounded adversarial training, the gradient direction (perpendicular to the decision boundary) becomes increasingly aligned, under the $\ell_2$ inner product, with the vector between the means ($\mu$ and $-\mu$).
Our theoretical analysis suggests that, rather than offering any quantitative classification benefits, a natural way to view the role of robust optimization is as enforcing a prior over the features learned by the classifier. In particular, training with an $\ell_2$-bounded adversary prevents the classifier from relying heavily on features which induce a metric dissimilar to the $\ell_2$ metric. The strength of the adversary then allows for a trade-off between the enforced prior and the data-dependent features.
Note that in the setting described so far, robustness can be at odds with accuracy, since robust training prevents us from learning the most accurate classifier (a similar conclusion is drawn in tsipras2019robustness). However, we note that there are very similar settings where non-robust features manifest themselves in the same way, yet a classifier with perfect robustness and accuracy is still attainable. Concretely, consider the distributions pictured in Figure 13 in Appendix D.8. It is straightforward to show that while there are many perfectly accurate classifiers, any standard loss function will learn an accurate yet non-robust classifier. Only when robust training is employed does the classifier learn a perfectly accurate and perfectly robust decision boundary.

Several models for explaining adversarial examples have been proposed in prior work, utilizing ideas ranging from finite-sample overfitting to high-dimensional statistical phenomena [gilmer2018adversarial, fawzi2018adversarial, ford2019adversarial, tanay2016boundary, shafahi2019are, mahloujifar2018curse, shamir2019simple, goodfellow2015explaining, bubeck2018adversarial]. The key differentiating aspect of our model is that adversarial perturbations arise as well-generalizing, yet brittle, features, rather than statistical anomalies or effects of poor statistical concentration. In particular, adversarial vulnerability does not stem from using a specific model class or a specific training method, since standard training on the "robustified" data distribution of Section 3.1 leads to robust models. At the same time, as shown in Section 3.2, these non-robust features are sufficient to learn a good standard classifier. We discuss the connection between our model and others in detail in Appendix A, and additional related work in Appendix B.

In this work, we cast the phenomenon of adversarial examples as a natural consequence of the presence of highly predictive but non-robust features in standard ML datasets. We provide support for this hypothesis by explicitly disentangling robust and non-robust features in standard datasets, as well as showing that non-robust features alone are sufficient for good generalization. Finally, we study these phenomena in more detail in a theoretical setting where we can rigorously study adversarial vulnerability, robust training, and gradient alignment.
Our findings prompt us to view adversarial examples as a fundamentally human phenomenon. In particular, we should not be surprised that classifiers exploit highly predictive features that happen to be non-robust under a human-selected notion of similarity, given that such features exist in real-world datasets. In the same manner, from the perspective of interpretability, as long as models rely on these non-robust features, we cannot expect to have model explanations that are both human-meaningful and faithful to the models themselves. Overall, attaining models that are robust and interpretable will require explicitly encoding human priors into the training process.
Here, we describe other models for adversarial examples and how they relate to the model presented in this paper.
An orthogonal line of work gilmer2018adversarial,fawzi2018adversarial, mahloujifar2018curse,shafahi2019are, argues that the high dimensionality of the input space can present fundamental barriers on classifier robustness. At a high level, one can show that, for certain data distributions, any decision boundary will be close to a large fraction of inputs and hence no classifier can be robust against small perturbations. While there might exist such fundamental barriers to robustly classifying standard datasets, this model cannot fully explain the situation observed in practice, where one can train (reasonably) robust classifiers on standard datasets madry2018towards,raghunathan2018certified, wong2018provable,xiao2019training,cohen2019certified.
schmidt2018adversarially propose a theoretical model under which a single sample is sufficient to learn a good, yet non-robust classifier, whereas learning a good robust classifier requires $\Omega(\sqrt{d})$ samples. Under this model, adversarial examples arise due to insufficient information about the true data distribution. However, unless the adversary is strong enough (in which case no robust classifier exists), adversarial inputs cannot be utilized as inputs of the opposite class (as done in our experiments in Section 3.2). We note that our model does not explicitly contradict the main thesis of schmidt2018adversarially. In fact, this thesis can be viewed as a natural consequence of our conceptual framework. In particular, since training models robustly reduces the effective amount of information in the training data (as non-robust features are discarded), more samples should be required to generalize robustly.
tanay2016boundary introduce the “boundary tilting” model for adversarial examples, and suggest that adversarial examples are a product of over-fitting. In particular, the model conjectures that “adversarial examples are possible because the class boundary extends beyond the submanifold of sample data and can be—under certain circumstances—lying close to it.” Consequently, the authors suggest that mitigating adversarial examples may be a matter of regularization and preventing finite-sample overfitting. In contrast, our empirical results in Section 3.2 suggest that adversarial inputs consist of features inherent to the data distribution, since they can encode generalizing information about the target class.
Inspired by this hypothesis and concurrently to our work, kim2019bridging present a simple classification task comprised of two Gaussian distributions in two dimensions. They experimentally show that the decision boundary tends to better align with the vector between the two means for robust models. This is a special case of our theoretical results in Section 4. (Note that this exact statement is not true beyond two dimensions, as discussed in Section 4.)
fawzi2016robustness and ford2019adversarial argue that the adversarial robustness of a classifier can be directly connected to its robustness under (appropriately scaled) random noise. While this constitutes a natural explanation of adversarial vulnerability given a classifier's brittleness to random noise, these works do not attempt to justify the source of the latter.
At the same time, recent work [lecuyer2018certified, cohen2019certified, ford2019adversarial] utilizes random noise during training or testing to construct adversarially robust classifiers. In the context of our framework, we can expect the added noise to disproportionately affect non-robust features and thus hinder the model’s reliance on them.
goodfellow2015explaining suggest that the local linearity of DNNs is largely responsible for the existence of small adversarial perturbations. While this conjecture is supported by the effectiveness of adversarial attacks exploiting local linearity (e.g., FGSM goodfellow2015explaining), it is not sufficient to fully characterize the phenomena observed in practice. In particular, there exist adversarial examples that violate the local linearity of the classifier madry2018towards, while classifiers that are less linear do not exhibit greater robustness athalye2018obfuscated.
shamir2019simple prove that the geometric structure of the classifier's decision boundaries can lead to sparse adversarial perturbations. However, this result does not take into account the distance to the decision boundary along these directions or feasibility constraints on the input domain. As a result, it cannot meaningfully distinguish between classifiers that are brittle to small adversarial perturbations and classifiers that are moderately robust.
bubeck2018adversarial and nakkiran2019adversarial propose theoretical models where the barrier to learning robust classifiers is, respectively, due to computational constraints or model complexity. In order to construct distributions that admit accurate yet non-robust classifiers they (implicitly) utilize the concept of non-robust features. Namely, they add a low-magnitude signal to each input that encodes the true label. This allows a classifier to achieve perfect standard accuracy, but cannot be utilized in an adversarial setting as this signal is susceptible to small adversarial perturbations.
We describe previously proposed models for the existence of adversarial examples in the previous section. Here we discuss other work that is methodologically or conceptually similar to ours.
The experiments performed in Section 3.1 can be seen as a form of distillation. There is a line of work, known as model distillation hinton2015distilling,furlanello2018born, bucilua2006model, where the goal is to train a new model to mimic another already trained model. This is typically achieved by adding some regularization terms to the loss in order to encourage the two models to be similar, often replacing training labels with some other target based on the already trained model. While it might be possible to successfully distill a robust model using these methods, our goal was to achieve it by only modifying the training set (leaving the training process unchanged), hence demonstrating that adversarial vulnerability is mainly a property of the dataset. Closer to our work is dataset distillation wang2018dataset which considers the problem of reconstructing a classifier from an alternate dataset much smaller than the original training set. This method aims to produce inputs that directly encode the weights of the already trained model by ensuring that the classifier’s gradient with respect to these inputs approximates the desired weights. (As a result, the inputs constructed do not resemble natural inputs.) This approach is orthogonal to our goal since we are not interested in encoding the particular weights into the dataset but rather in imposing a structure to its features.
In our work, we posit that a potentially natural consequence of the existence of non-robust features is adversarial transferability papernot2017practical,liu2017delving,papernot2016transferability. A recent line of work has considered this phenomenon from a theoretical perspective, confined to simple models or unbounded perturbations [charles2019geometric, zou2017geometric]. tramer2017space study transferability empirically by finding adversarial subspaces (orthogonal vectors whose linear combinations are adversarial perturbations). The authors find that there is a significant overlap in the adversarial subspaces between different models, and identify this as a source of transferability. In our work, we provide a potential reason for this overlap: these directions correspond to non-robust features utilized by models in a similar manner.
moosavi2017universal construct perturbations that can cause misclassification when applied to multiple different inputs. More recently, jetley2018friends discover input patterns that are meaningless to humans and can induce misclassification, while at the same time being essential for standard classification. These findings can be naturally cast into our framework by considering these patterns as non-robust features, providing further evidence about their pervasiveness.
ding2018on perform synthetic transformations on the dataset (e.g., image saturation) and study the performance of models on the transformed dataset under standard and robust training. While this can be seen as a method of restricting the features available to the model during training, it is unclear how well these models would perform on the standard test set. geirhos2018imagenettrained aim to quantify the relative dependence of standard models on shape and texture information of the input. They introduce a version of ImageNet where texture information has been removed and observe an improvement to certain corruptions.
For our experimental analysis, we use the CIFAR-10 [krizhevsky2009learning] and (restricted) ImageNet [russakovsky2015imagenet] datasets. Attaining robust models for the complete ImageNet dataset is known to be a challenging problem, both due to the hardness of the learning problem itself, as well as the computational complexity. We thus restrict our focus to a subset of the dataset which we denote as restricted ImageNet. To this end, we group together semantically similar classes from ImageNet into 9 super-classes shown in Table 2. We train and evaluate only on examples corresponding to these classes.
Class | Corresponding ImageNet Classes |
---|---|
“Dog” | 151 to 268 |
“Cat” | 281 to 285 |
“Frog” | 30 to 32 |
“Turtle” | 33 to 37 |
“Bird” | 80 to 100 |
“Primate” | 365 to 382 |
“Fish” | 389 to 397 |
“Crab” | 118 to 121 |
“Insect” | 300 to 319 |
We use the ResNet-50 architecture for our baseline standard and adversarially trained classifiers on CIFAR-10 and restricted ImageNet. For each model, we grid search over three learning rates (0.1, 0.01, 0.05), two batch sizes (128, 256), including/not including a learning rate drop (a single order of magnitude), and data augmentation. We use the standard training parameters for the remaining parameters. The hyperparameters used for each model are given in Table 3.

Dataset | LR | Batch Size | LR Drop | Data Aug. | Momentum | Weight Decay
---|---|---|---|---|---|---|
(CIFAR) | 0.1 | 128 | Yes | Yes | 0.9 | |
(Restricted ImageNet) | 0.01 | 128 | No | Yes | 0.9 | |
(CIFAR) | 0.1 | 128 | Yes | Yes | 0.9 | |
(CIFAR) | 0.01 | 128 | Yes | Yes | 0.9 | |
(Restricted ImageNet) | 0.01 | 256 | No | No | 0.9 | |
(CIFAR) | 0.1 | 128 | Yes | No | 0.9 | |
(Restricted ImageNet) | 0.05 | 256 | No | No | 0.9 |
To obtain robust classifiers, we employ the adversarial training methodology proposed in [madry2018towards]. Specifically, we train against a projected gradient descent (PGD) adversary constrained in $\ell_2$-norm, starting from the original image. Following madry2018towards, we normalize the gradient at each step of PGD to ensure that we move a fixed distance in $\ell_2$-norm per step. Unless otherwise specified, we use the values of $\varepsilon$ provided in Table 4 to train/evaluate our models, with a fixed number of PGD steps and a correspondingly normalized step size.
Adversary | CIFAR-10 | Restricted ImageNet
---|---|---
$\ell_2$, $\varepsilon$ | 0.5 | 3
In Section 3.1, we describe a procedure to construct a dataset that contains features relevant only to a given (standard/robust) model. To do so, we optimize the training objective in (6). Unless otherwise specified, we initialize $x_r$ as a different randomly chosen sample from the training set. (For the sake of completeness, we also try initializing with Gaussian noise instead, as shown in Table 7.) We then perform normalized gradient descent (the $\ell_2$-norm of the gradient is fixed to be constant at each step). At each step, we clip the input $x_r$ to the range $[0, 1]$ so as to ensure that it is a valid image. Details on the optimization procedure are shown in Table 5. We provide the pseudocode for the construction in Figure 5.
 | CIFAR-10 | Restricted ImageNet
---|---|---|
step size | 0.1 | 1 |
iterations | 1000 | 2000 |
To construct the datasets $\widehat{\mathcal{D}}_{rand}$ and $\widehat{\mathcal{D}}_{det}$ as described in Section 3.2, we use the standard projected gradient descent (PGD) procedure described in [madry2018towards] to construct an adversarial example for a given input from the dataset, per (7). Perturbations are constrained in $\ell_2$-norm, and each PGD step is normalized to a fixed step size. The details for our PGD setup are described in Table 6. We provide pseudocode in Figure 6.
Attack Parameters | CIFAR-10 | Restricted ImageNet
---|---|---
$\varepsilon$ | 0.5 | 3
step size | 0.1 | 0.1
iterations | 100 | 100
In Section 3.1, we generate a "robust" training set by restricting the dataset to only contain features relevant to a robust model (robust dataset) or a standard model (non-robust dataset). This is performed by choosing either a random input from the training set or random noise (we use 10k steps to construct the dataset from noise, instead of the 1k steps used when the input is a different training-set image; cf. Table 5) and then performing the optimization procedure described in (6). The performance of these classifiers along with various baselines is shown in Table 7. We observe that while the robust dataset constructed from noise resembles the original, the corresponding non-robust dataset does not (Figure 7). This also leads to suboptimal performance of classifiers trained on this dataset (only about 46% standard accuracy), potentially due to a distributional shift.
Model | Standard Accuracy | Robust Accuracy (smaller $\varepsilon$) | Robust Accuracy (larger $\varepsilon$)
---|---|---|---
Standard training | 95.25% | 4.49% | 0.0%
Robust training | 90.83% | 82.48% | 70.90%
Trained on non-robust dataset (constructed from images) | 87.68% | 0.82% | 0.0%
Trained on non-robust dataset (constructed from noise) | 45.60% | 1.50% | 0.0%
Trained on robust dataset (constructed from images) | 85.40% | 48.20% | 21.85%
Trained on robust dataset (constructed from noise) | 84.10% | 48.27% | 29.40%
To verify the robustness of our classifiers trained on the "robust" dataset, we evaluate them with strong attacks [carlini2019on]. In particular, we try up to 2500 steps of projected gradient descent (PGD), increasing steps until the accuracy plateaus, and also try the CW-$\ell_2$ loss function [carlini2017towards] with 1000 steps. For each attack we search over step sizes. We find that over all attacks and step sizes, the accuracy of the model does not drop by more than 2%, and plateaus for both PGD and CW-$\ell_2$ at the value given in Figure 2. We show a plot of accuracy as a function of the number of PGD steps used in Figure 8.
In Section 3.1, we observe that an ERM classifier trained on a "robust" training dataset $\widehat{\mathcal{D}}_R$ (obtained by restricting features to those relevant to a robust model) attains non-trivial robustness (cf. Figure 1 and Table 7). In Table 8, we evaluate the adversarial accuracy of the model on the corresponding robust training set (the samples which the classifier was trained on) and test set (unseen samples from $\widehat{\mathcal{D}}_R$, based on the test set). We find that the drop in robustness comes from a combination of a generalization gap (the robustness on the test set is worse than on the robust training set) and distributional shift (the model performs better on the robust test set, consisting of unseen samples from $\widehat{\mathcal{D}}_R$, than on the standard test set, containing unseen samples from $\mathcal{D}$).
Dataset | Robust Accuracy |
---|---|
Robust training set | 77.33% |
Robust test set | 62.49% |
Standard test set | 48.27% |
Figure 9 shows sample images from $\widehat{\mathcal{D}}_{rand}$ and $\widehat{\mathcal{D}}_{det}$ constructed using a standard (non-robust) ERM classifier and an adversarially trained (robust) classifier.
In Table 9, we repeat the experiments of Table 1 with datasets constructed using a robust model. Note that using a robust model to generate the $\widehat{\mathcal{D}}_{rand}$ and $\widehat{\mathcal{D}}_{det}$ datasets will not result in non-robust features that are strongly predictive of the new label $t$ (since the prediction of the robust classifier does not change under the small perturbation). Thus, training a model on these datasets leads to poor accuracy on the standard test set from $\mathcal{D}$.
Observe from Figure 10 that models trained on datasets derived from the robust model show a decline in test accuracy as training progresses. In Table 9, the accuracy numbers reported correspond to the last iteration, and not the best performance. This is because we have no way to cross-validate in a meaningful way: the validation set itself comes from $\widehat{\mathcal{D}}_{rand}$ or $\widehat{\mathcal{D}}_{det}$, and not from the true data distribution $\mathcal{D}$. Thus, validation accuracy will not be predictive of the true test accuracy, and so cannot be used to determine when to stop training early.
In Table 10, we evaluate the performance of classifiers trained on $\widehat{\mathcal{D}}_{det}$ on both the original test set drawn from $\mathcal{D}$ and the test set relabelled using the deterministic permutation $t$. Observe that the classifier trained on a $\widehat{\mathcal{D}}_{det}$ constructed using a robust model actually ends up learning the permuted labels based on robust features (indicated by high test accuracy on the relabelled test set).
Model used to construct training dataset $\widehat{\mathcal{D}}_{det}$ | Test accuracy on $\mathcal{D}$ | Test accuracy on relabelled-$\mathcal{D}$
---|---|---
Standard | 43.7% | 16.2%
Robust | 5.8% | 65.5%
In this section, we develop a framework for studying non-robust features by studying the problem of maximum likelihood classification between two Gaussian distributions. We first recall the setup of the problem and then present the main theorems from Section 4, building up the techniques necessary for their proofs.
We consider the setup where a learner receives labeled samples from two distributions, $\mathcal{N}(\mu^*, \Sigma^*)$ and $\mathcal{N}(-\mu^*, \Sigma^*)$. The learner's goal is to be able to classify new samples as being drawn from $\mathcal{N}(\mu^*, \Sigma^*)$ or $\mathcal{N}(-\mu^*, \Sigma^*)$ according to a maximum likelihood (MLE) rule.
A simple coupling argument demonstrates that this problem can actually be reduced to learning the parameters $(\mu, \Sigma)$ of a single Gaussian $\mathcal{N}(\mu^*, \Sigma^*)$, and then employing a linear classifier with weight $\Sigma^{-1}\mu$. In the standard setting, maximum likelihood estimation learns the true parameters $\mu^*$ and $\Sigma^*$, and thus the learned classification rule is $C(x) = \operatorname{sign}(x^\top {\Sigma^*}^{-1} \mu^*)$.
In this work, we consider the problem of adversarially robust maximum likelihood estimation. In particular, rather than simply being asked to classify samples, the learner will be asked to classify adversarially perturbed samples $x + \delta$, where $\delta$ is chosen to maximize the loss of the learner. Our goal is to derive the parameters $(\mu_r, \Sigma_r)$ corresponding to an adversarially robust maximum likelihood estimate of the parameters of $\mathcal{N}(\mu^*, \Sigma^*)$. Note that since we have access to $\Sigma^*$ (indeed, the learner can just run non-robust MLE to get access), we work in the basis where $\Sigma^*$ is a diagonal matrix, and we restrict the learned covariance $\Sigma$ to the set of diagonal matrices.
We denote the parameters of the sampled Gaussian by $\mu^* \in \mathbb{R}^d$ and $\Sigma^* \in \mathbb{R}^{d \times d}$. We use $\sigma_{\min}(A)$ to represent the smallest eigenvalue of a square matrix $A$, and $\ell(x; \mu, \Sigma)$ to represent the Gaussian negative log-likelihood of a single sample $x$. We also define the $\operatorname{diag}(\cdot)$ operator to represent the vectorization of the diagonal of a matrix: for a matrix $A \in \mathbb{R}^{d \times d}$, $\operatorname{diag}(A) = v$ with $v_i = A_{ii}$. We focus on the case where $\Delta = \{\delta : \|\delta\|_2 \leq \varepsilon\}$ for some $\varepsilon > 0$, i.e., the $\ell_2$ ball, corresponding to the following minimax problem:

$$\min_{\mu, \Sigma} \; \mathbb{E}_{x \sim \mathcal{N}(\mu^*, \Sigma^*)}\Big[\max_{\|\delta\|_2 \leq \varepsilon} \ell(x + \delta;\, \mu,\, \Sigma)\Big]. \qquad (13)$$
We first derive the optimal adversarial perturbation for this setting (Section E.3.1) and prove Theorem 1 (Section E.3.2). We then propose an alternate problem, in which the adversary picks a linear operator to be applied to a fixed vector, rather than picking a specific perturbation vector (Section E.3.3). We argue via Gaussian concentration that the alternate problem is indeed reflective of the original model (and in particular, the two become equivalent as $d \to \infty$). In particular, we propose studying the following in place of (13):

$$\min_{\mu, \Sigma} \; \max_{M \in \mathcal{M}} \; \mathbb{E}_{x \sim \mathcal{N}(\mu^*, \Sigma^*)}\big[\ell\big(x + M(x - \mu);\, \mu,\, \Sigma\big)\big], \qquad \mathcal{M} = \big\{M \text{ diagonal} : \mathbb{E}_x\big[\|M(x - \mu)\|_2^2\big] \leq \varepsilon^2\big\}. \qquad (14)$$
Our goal is to characterize the behavior of the robustly learned covariance $\Sigma_r$ in terms of the true covariance matrix $\Sigma^*$ and the perturbation budget $\varepsilon$. The proof proceeds through Danskin's Theorem, which allows us to use any maximizer of the inner problem in computing the subgradient of the outer minimization. We first show the applicability of Danskin's Theorem (Section E.3.4) and then apply it (Section E.3.5) to prove our main results (Section E.3.7). Our three main results, which we prove in the following sections, are presented below.
First, we consider a simplified version of (13), in which the adversary solves a maximization with a fixed Lagrangian penalty rather than a hard constraint. In this setting, we show that the loss contributed by the adversary corresponds to a misalignment between the data metric (the Mahalanobis distance, induced by $\Sigma^*$) and the $\ell_2$ metric (see Theorem 1).
We then return to studying (14), where we provide upper and lower bounds on the learned robust covariance matrix $\Sigma_r$ (see Theorem 2).
Finally, we show that in the worst case over mean vectors $\mu$, the gradient of the adversarially robust classifier aligns more with the inter-class vector (see Theorem 3).
In the first section, we have shown that the classification task between two Gaussian distributions with identical covariance matrices centered at $\mu^*$ and $-\mu^*$ can in fact be reduced to learning the parameters of a single one of these distributions.
Thus, in the standard setting, our goal is to solve the following problem:
$$\min_{\mu, \Sigma} \; \mathbb{E}_{x \sim \mathcal{N}(\mu^*, \Sigma^*)}\big[\ell(x;\, \mu,\, \Sigma)\big].$$
Note that in this setting, one can simply differentiate with respect to both $\mu$ and $\Sigma$ and obtain closed forms for both (indeed, these closed forms are, unsurprisingly, $\mu = \mu^*$ and $\Sigma = \Sigma^*$). Here, we consider the existence of a malicious adversary who is allowed to perturb each sample point $x$ by some $\delta$. The goal of the adversary is to maximize the same loss that the learner is minimizing.
We first consider, as a motivating example, an $\ell_2$-constrained adversary. That is, the adversary is allowed to perturb each sampled point by $\delta$ with $\|\delta\|_2 \leq \varepsilon$. In this case, the minimax problem being solved is the following:

$$\min_{\mu, \Sigma} \; \mathbb{E}_{x \sim \mathcal{N}(\mu^*, \Sigma^*)}\Big[\max_{\|\delta\|_2 \leq \varepsilon} \ell(x + \delta;\, \mu,\, \Sigma)\Big]. \qquad (15)$$
The following lemma captures the optimal behaviour of the adversary.

Lemma 1. The maximizer of the inner problem of (15) is attained by
$$\delta^* = \big(\lambda I - \Sigma^{-1}\big)^{-1} \Sigma^{-1} (x - \mu), \qquad (16)$$
where $\lambda$ is chosen such that $\|\delta^*\|_2 = \varepsilon$.
In this context, we can solve the inner maximization problem with Lagrange multipliers. In the following we write $v = x - \mu$ for brevity, and discard terms not containing $\delta$ as well as constant factors freely:

$$\max_{\|\delta\|_2 \leq \varepsilon} \; (v + \delta)^\top \Sigma^{-1} (v + \delta). \qquad (17)$$

Now we can solve (17) using the aforementioned Lagrange multipliers. In particular, note that the maximum of (17) is attained at the boundary of the $\ell_2$ ball $\|\delta\|_2 = \varepsilon$. Thus, we can solve the following system of two equations to find $\delta$, rewriting the norm constraint as $\delta^\top \delta = \varepsilon^2$:

$$\Sigma^{-1}(v + \delta) = \lambda\, \delta, \qquad \delta^\top \delta = \varepsilon^2. \qquad (18)$$

Combining the above, we have that

$$\delta^* = \big(\lambda I - \Sigma^{-1}\big)^{-1} \Sigma^{-1} v, \qquad (19)$$

our final result for the maximizer of the inner problem, where $\lambda$ is set according to the norm constraint. ∎
To simplify the analysis of Theorem 1, we consider a version of (15) with a fixed Lagrangian penalty on the perturbation size, rather than a hard norm constraint. Note then that, by reasoning analogous to Lemma 1, the optimal perturbation is again of the form $(\lambda I - \Sigma^{-1})^{-1} \Sigma^{-1} (x - \mu)$, for an appropriate constant $\lambda$ depending on the penalty coefficient $C$.
We begin by expanding the Gaussian negative log-likelihood of the adversarially perturbed point for the relaxed problem, recalling that we are considering the vulnerability at the MLE parameters $\mu^*$ and $\Sigma^*$. This yields a closed-form expression for the adversarial loss in terms of the eigenvalues of the covariance, which shows the first part of the theorem. It remains to show that for a fixed $\operatorname{tr}(\Sigma) = k$, the adversarial risk is minimized by $\Sigma = \frac{k}{d} I$. Writing the adversarial risk as a sum over the eigenvalues $\sigma_i$ of $\Sigma$, the assumption on $C$ ensures each term is well defined, and the first-order optimality conditions require the same stationarity equation to hold for every $\sigma_i$. Solving this equation analytically shows that it admits only one real solution, so all eigenvalues must be equal; scaling to satisfy the trace constraint then yields $\Sigma = \frac{k}{d} I$, which concludes the proof. ∎
Our motivating example (Section E.3.1) demonstrates that the optimal perturbation for the adversary in the $\ell_2$-constrained case is actually a linear function of $x - \mu$, and in particular, that the optimal perturbation can be expressed as $\delta = M(x - \mu)$ for a diagonal matrix $M$. Note, however, that the problem posed in (15) is not actually a minimax problem, due to the presence of the expectation between the outer minimization and the inner maximization. Motivated by this and (19), we define the following robust problem:

$$\min_{\mu, \Sigma} \; \max_{M \in \mathcal{M}} \; \mathbb{E}_{x \sim \mathcal{N}(\mu^*, \Sigma^*)}\big[\ell\big(x + M(x - \mu);\, \mu,\, \Sigma\big)\big], \qquad \mathcal{M} = \big\{M \text{ diagonal} : \mathbb{E}_x\big[\|M(x - \mu)\|_2^2\big] \leq \varepsilon^2\big\}. \qquad (20)$$
First, note that this objective is slightly different from that of (15). In the motivating example, $\delta$ is constrained to always have $\ell_2$-norm exactly $\varepsilon$, and thus is normalized on a per-sample basis inside of the expectation. In contrast, here the classifier is concerned with being robust to perturbations that are linear in $x - \mu$, and of squared norm $\varepsilon^2$ in expectation. Note, however, that via the result of laurent2000adaptive showing strong concentration for the norms of Gaussian random variables, in high dimensions this bound on the expectation has a corresponding high-probability bound on the norm. In particular, this implies that as $d \to \infty$, $\|M(x - \mu)\|_2 \to \varepsilon$ almost surely, and thus the problem becomes identical to that of (15). We now derive the optimal $M$ for a given $(\mu, \Sigma)$.

Consider the minimax problem described by (20), i.e., the inner maximization of $\mathbb{E}_{x \sim \mathcal{N}(\mu^*, \Sigma^*)}\big[\ell(x + M(x - \mu); \mu, \Sigma)\big]$ over diagonal matrices $M$ satisfying $\mathbb{E}_x\big[\|M(x - \mu)\|_2^2\big] \leq \varepsilon^2$.
Then, the optimal action $M^*$ of the inner maximization problem is given by

$$M^* = \big(\lambda I - \Sigma^{-1}\big)^{-1} \Sigma^{-1}, \qquad (21)$$

where again $\lambda$ is set so that $\mathbb{E}_x\big[\|M^*(x - \mu)\|_2^2\big] = \varepsilon^2$.
We accomplish this in a similar fashion to what was done for $\delta^*$, using Lagrange multipliers; the stationarity condition yields (21), where $\lambda$ is a constant depending on $\varepsilon$ and enforcing the expected squared-norm constraint. ∎
Indeed, note that the optimal $M^*$ for the adversary takes a near-identical form to the optimal $\delta^*$ in (19), with the exception that $M^*$ is not sample-dependent but rather varies only with the parameters.
The main tool in proving our key results is Danskin’s Theorem danskin1967theory, a powerful theorem from minimax optimization which contains the following key result:
Suppose $\phi(\theta, \delta)$ is a continuous function of two arguments, where $\delta \in \Delta$ for a compact set $\Delta$. Define $\psi(\theta) = \max_{\delta \in \Delta} \phi(\theta, \delta)$. Then, if for every $\delta \in \Delta$, $\phi(\theta, \delta)$ is convex and differentiable in $\theta$, and $\frac{\partial \phi}{\partial \theta}$ is continuous:
The subdifferential of $\psi(\theta)$ is given by
$$\partial \psi(\theta) = \operatorname{conv}\left\{\frac{\partial \phi(\theta, \delta^*)}{\partial \theta} \;:\; \delta^* \in \Delta^*(\theta)\right\},$$
where $\operatorname{conv}(\cdot)$ represents the convex hull operation, and $\Delta^*(\theta)$ is the set of maximizers defined as
$$\Delta^*(\theta) = \Big\{\delta^* : \phi(\theta, \delta^*) = \max_{\delta \in \Delta} \phi(\theta, \delta)\Big\}.$$
In short, given a minimax problem of the form $\min_\theta \max_{\delta \in \Delta} \phi(\theta, \delta)$ where $\Delta$ is a compact set, if $\phi(\theta, \delta)$ is convex in $\theta$ for all values of $\delta$, then rather than compute the gradient of $\psi(\theta)$, we can simply find a maximizer $\delta^*$ for the current parameter $\theta$; Theorem 4 ensures that $\frac{\partial \phi(\theta, \delta^*)}{\partial \theta} \in \partial \psi(\theta)$. Note that $\mathcal{M}$ is trivially compact (by the Heine-Borel theorem), and differentiability/continuity follow rather straightforwardly from our reparameterization (cf. (22)), and so it remains to show that the outer minimization is convex for any fixed $M$.
Note that even in the standard case (i.e., non-adversarial), the Gaussian negative log-likelihood is not convex with respect to $(\mu, \Sigma)$. Thus, rather than proving convexity of this function directly, we employ the parameterization used by [daskalakis2019efficient]: in particular, we write the problem in terms of the natural parameters $T = \Sigma^{-1}$ and $m = \Sigma^{-1}\mu$. Under this parameterization, we show that the robust problem is convex for any fixed $M$.
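For intuition, here is the standard computation (ours, under the usual natural-parameter convention $T = \Sigma^{-1}$, $m = \Sigma^{-1}\mu$, which we assume matches the reparameterization intended here) showing how the per-sample Gaussian NLL becomes convex:

```latex
\ell(x; m, T)
  \;=\; \tfrac{1}{2}\, x^\top T x \;-\; m^\top x
  \;+\; \tfrac{1}{2}\, m^\top T^{-1} m \;-\; \tfrac{1}{2}\log\det T \;+\; \mathrm{const}.
```

The first two terms are linear in $(m, T)$, the matrix-fractional term $m^\top T^{-1} m$ is jointly convex for $T \succ 0$, and $-\log\det T$ is convex, so the per-sample NLL is convex in the natural parameters; the lemma below extends this style of argument to the robust objective for a fixed $M$.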
Under the aforementioned parameterization of $\mu$ and $\Sigma$, the following "Gaussian robust negative log-likelihood" (the objective of (20) for a fixed $M$) is convex:
To prove this, we show that the likelihood is convex even with respect to a single sample $x$; the result follows, since a convex combination of convex functions remains convex. We begin by looking at the likelihood of a single sample.