TRADES: Theoretically Principled Trade-off between Robustness and Accuracy
We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples. Although the problem has been widely studied empirically, much remains unknown concerning the theory underlying this trade-off. In this work, we quantify the trade-off in terms of the gap between the risk for adversarial examples and the risk for non-adversarial examples. The challenge is to provide tight bounds on this quantity in terms of a surrogate loss. We give an optimal upper bound on this quantity in terms of classification-calibrated loss, which matches the lower bound in the worst case. Inspired by our theoretical analysis, we also design a new defense method, TRADES, to trade adversarial robustness off against accuracy. Our proposed algorithm performs well experimentally on real-world datasets. The methodology is the foundation of our entry to the NeurIPS 2018 Adversarial Vision Challenge in which we won 1st place out of 1,995 submissions in the robust model track, surpassing the runner-up approach by 11.41% in terms of mean ℓ_2 perturbation distance.
In response to the vulnerability of deep neural networks to small perturbations around input data [SZS13], adversarial defenses have been an imperative object of study in machine learning [HPG17, SKN18, XWZ17, MC17, JL17] and many other domains. In machine learning, the study of adversarial defenses has led to significant advances in understanding and defending against adversarial threats [HWC17].
In computer vision and natural language processing, adversarial defenses serve as indispensable building blocks for a range of security-critical systems and applications, such as autonomous cars and speech recognition authorization. The problem of adversarial defenses can be stated as that of learning a classifier with high test accuracy on both natural and adversarial examples. An adversarial example for a given labeled data point $(\mathbf{x}, y)$ is a data point $\mathbf{x}'$ that causes a classifier to output a label on $\mathbf{x}'$ different from $y$, but is "imperceptibly similar" to $\mathbf{x}$. Given the difficulty of providing an operational definition of "imperceptible similarity," adversarial examples typically come in the form of restricted attacks such as $\epsilon$-bounded perturbations [SZS13], or unrestricted attacks such as adversarial rotations, translations, and deformations [BCZ18, ETT17, GAG18, XZL18, AAG19, ZCS19]. The focus of this work is the former setting.
Despite a large literature devoted to improving the robustness of deep-learning models, many fundamental questions remain unresolved. One of the most important is how to trade off adversarial robustness against natural accuracy. Statistically, robustness can be at odds with accuracy when no assumptions are made on the data distribution [TSE19]. This has led to an empirical line of work on adversarial defense that incorporates various kinds of assumptions [SZC18, KGB17]. On the theoretical front, methods such as relaxation based defenses [KW18, RSL18a] provide provable guarantees for adversarial robustness. They, however, ignore the performance of the classifier on non-adversarial examples, and thus leave open the theoretical treatment of the putative robustness/accuracy trade-off.
The problem of adversarial defense becomes more challenging when considering computational issues. This is due to the fact that direct formulations of robust classification involve minimizing the robust 0-1 loss

$\min_f \mathbb{E}\,\mathbf{1}\{\exists\, \mathbf{X}' \in \mathbb{B}(\mathbf{X}, \epsilon) \text{ s.t. } f(\mathbf{X}')\,Y \le 0\}, \qquad (1)$

a loss which is NP-hard to optimize [GR09]. This is why progress on algorithms that focus on accuracy has built on minimum contrast methods that minimize a surrogate of the 0-1 loss function [BJM06], e.g., the hinge loss or the cross-entropy loss. While prior work on adversarial defense replaced the 0-1 loss in Eqn. (1) with a surrogate loss to defend against adversarial threats [MMS18, KGB17, UOKvdO18], this line of research may suffer from a loose surrogate approximation to the 0-1 loss, and may thus result in degraded performance.
We begin with an example that illustrates the trade-off between accuracy and adversarial robustness, a phenomenon which has been demonstrated by [TSE19], but without theoretical guarantees. In the example, the minimal natural risk is achieved by the Bayes optimal classifier. We refer to accuracy on the non-adversarial examples as the natural accuracy, and we similarly refer to the natural error or natural risk. In this same example, the accuracy of that classifier on the adversarial examples, which we refer to as the robust accuracy, is small (see Table 1). This motivates us to quantify the trade-off by the gap between the optimal natural error and the robust error. Note that the latter is an adversarial counterpart of the former which allows a bounded worst-case perturbation before feeding the perturbed sample to the classifier.
We study this gap in the context of a differentiable surrogate loss. We show that surrogate loss minimization suffices to derive a classifier with guaranteed robustness and accuracy. Our theoretical analysis naturally leads to a new formulation of adversarial defense which has several appealing properties; in particular, it inherits scalability to large datasets such as Tiny ImageNet, and the algorithm achieves state-of-the-art performance on a range of benchmarks while providing theoretical guarantees. For example, while the defenses overviewed in [ACW18] achieve robust accuracy no higher than roughly 47% under white-box attacks, our method achieves robust accuracy as high as roughly 57% in the same setting. The methodology is the foundation of our entry to the NeurIPS 2018 Adversarial Vision Challenge where we won first place out of 1,995 submissions, surpassing the runner-up approach by 11.41% in terms of mean $\ell_2$ perturbation distance.
Our work tackles the problem of trading accuracy off against robustness and advances the state-of-the-art in multiple ways.
Theoretically, we characterize the trade-off between accuracy and robustness for classification problems via the gap between robust error and optimal natural error. We provide an upper bound for this gap in terms of surrogate loss. The bound is optimal as it matches the lower bound in the worst-case scenario.
Algorithmically, inspired by our theoretical analysis, we propose a new formulation of adversarial defense, TRADES, as optimizing a regularized surrogate loss. The loss consists of two terms: the term of empirical risk minimization encourages the algorithm to maximize the natural accuracy, while the regularization term encourages the algorithm to push the decision boundary away from the data, so as to improve adversarial robustness (see Figure 1).
Experimentally, we show that our proposed algorithm outperforms state-of-the-art methods under both black-box and white-box threat models. In particular, the methodology won the final round of the NeurIPS 2018 Adversarial Vision Challenge.
Before proceeding, we define some notation and clarify our problem setup.
We will use bold capital letters such as $\mathbf{X}$ to represent random vectors, bold lower-case letters such as $\mathbf{x}$ to represent realizations of random vectors, capital letters such as $Y$ to represent random variables, and lower-case letters such as $y$ to represent realizations of random variables. Specifically, we denote by $\mathbf{x} \in \mathcal{X}$ the sample instance and by $y \in \{-1, +1\}$ the label, where $\mathcal{X}$ indicates the instance space. $\mathrm{sign}(x)$ represents the sign of scalar $x$, with $\mathrm{sign}(0) = +1$. Denote by $f: \mathcal{X} \to \mathbb{R}$ the score function which maps an instance to a confidence value associated with being positive. It can be parametrized, e.g., by deep neural networks. The associated binary classifier is $\mathrm{sign}(f(\cdot))$. We will frequently use $\mathbf{1}\{\text{event}\}$, the 0-1 loss, to represent an indicator function that is $1$ if the event happens and $0$ otherwise. For norms, we denote by $\|\mathbf{x}\|$ a generic norm. Examples of norms include $\|\mathbf{x}\|_\infty$, the infinity norm of vector $\mathbf{x}$, and $\|\mathbf{x}\|_2$, the $\ell_2$ norm of vector $\mathbf{x}$. We use $\mathbb{B}(\mathbf{x}, \epsilon)$ to represent a neighborhood of $\mathbf{x}$: $\{\mathbf{x}' \in \mathcal{X} : \|\mathbf{x}' - \mathbf{x}\| \le \epsilon\}$. For a given score function $f$, we denote by $\mathrm{DB}(f)$ the decision boundary of $f$; that is, the set $\{\mathbf{x} : f(\mathbf{x}) = 0\}$. $\mathbb{B}(\mathrm{DB}(f), \epsilon)$ indicates the neighborhood of the decision boundary of $f$: $\{\mathbf{x} \in \mathcal{X} : \exists\, \mathbf{x}' \in \mathbb{B}(\mathbf{x}, \epsilon) \text{ s.t. } f(\mathbf{x})f(\mathbf{x}') \le 0\}$. For a given function $\psi$, we denote by $\psi^*$ the conjugate function, by $\psi^{**}$ the bi-conjugate, and by $\psi^{-1}$ the inverse function. We will frequently use $\phi(\cdot)$ to indicate a surrogate of the 0-1 loss.
In the setting of adversarial learning, we are given a set of instances $\mathbf{x}_1, \ldots, \mathbf{x}_n \in \mathcal{X}$ and labels $y_1, \ldots, y_n \in \{-1, +1\}$. We assume that the data are sampled from an unknown distribution $\mathcal{D}$. To characterize the robustness of a score function $f$, [SST18, CBM18, BPR18] defined the robust (classification) error under the threat model of bounded $\epsilon$ distortion:

$\mathcal{R}_{\mathrm{rob}}(f) := \mathbb{E}_{(\mathbf{X}, Y) \sim \mathcal{D}}\, \mathbf{1}\big\{\exists\, \mathbf{X}' \in \mathbb{B}(\mathbf{X}, \epsilon) \text{ s.t. } f(\mathbf{X}')\,Y \le 0\big\}.$

This is in sharp contrast to the standard measure of classifier performance, the natural (classification) error $\mathcal{R}_{\mathrm{nat}}(f) := \mathbb{E}_{(\mathbf{X}, Y) \sim \mathcal{D}}\, \mathbf{1}\{f(\mathbf{X})\,Y \le 0\}$. We note that the two errors satisfy $\mathcal{R}_{\mathrm{rob}}(f) \ge \mathcal{R}_{\mathrm{nat}}(f)$ for all $f$; the robust error is equal to the natural error when $\epsilon = 0$.
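As a small self-contained illustration (the sample, threshold, and perturbation radius here are invented for the demo, not from the paper), the two errors can be computed exactly for a one-dimensional threshold classifier:

```python
import numpy as np

# Toy 1-D sample (invented for illustration): labels flip at the threshold 0.5.
x = np.array([0.1, 0.3, 0.45, 0.55, 0.7, 0.9])
y = np.array([-1, -1, -1, 1, 1, 1])
eps = 0.1

def f(t):
    return t - 0.5  # linear score function; the classifier is sign(f)

# Natural 0-1 error: mean of 1{f(X) Y <= 0}.
nat_err = np.mean(f(x) * y <= 0)

# Robust 0-1 error: an adversary may move each point anywhere in [x-eps, x+eps];
# since f is monotone, the worst case is attained at an endpoint of the interval.
worst = np.minimum(f(x - eps) * y, f(x + eps) * y)
rob_err = np.mean(worst <= 0)

assert nat_err == 0.0     # every point is classified correctly...
assert rob_err > nat_err  # ...yet points within eps of the boundary are attackable
```

Only the points within $\epsilon$ of the decision boundary contribute to the robust error here, which mirrors the role the decision-boundary neighborhood $\mathbb{B}(\mathrm{DB}(f), \epsilon)$ plays in the analysis.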
Our study is motivated by the trade-off between natural and robust errors. [TSE19] showed that training robust models may lead to a reduction of standard accuracy. To illustrate the phenomenon, we provide a toy example here.
Example. Consider the case $(\mathbf{X}, Y) \sim \mathcal{D}$, where the marginal distribution over the instance space is a uniform distribution over $[0, 1]$, and the conditional probability $\eta(x) := \Pr(Y = 1 \mid \mathbf{X} = x)$ is a piecewise-constant function of $x$ that alternates between the two classes at a scale comparable to the perturbation radius $\epsilon$.
See Figure 2 for a visualization of $\eta(x)$. We consider two classifiers: a) the Bayes optimal classifier $\mathrm{sign}(2\eta(\mathbf{x}) - 1)$; b) the all-one classifier, which always outputs "positive." Table 1 displays the trade-off between the natural and robust errors: the minimal natural error is achieved by the Bayes optimal classifier, which has large robust error, while the optimal robust error is achieved by the all-one classifier, which has large natural error. Despite a large literature on the analysis of the robust error in terms of generalization [SST18, CBM18, YRB18] and computational complexity [BPR18, BLPR18], the trade-off between the natural error and the robust error has not been a focus of theoretical study.
| | Bayes Optimal Classifier | All-One Classifier |
|---|---|---|
Our goal. To characterize the trade-off, we aim at approximately solving a constrained problem: given a precision parameter $\delta$, find a score function $f$ that minimizes the robust error subject to the guarantee $\mathcal{R}_{\mathrm{nat}}(f) \le \mathcal{R}^*_{\mathrm{nat}} + \delta$, where $\mathcal{R}^*_{\mathrm{nat}}$ represents the risk of the Bayes optimal classifier, the classifier with the minimal natural error. We note that it suffices to bound $\mathcal{R}_{\mathrm{rob}}(f) - \mathcal{R}^*_{\mathrm{nat}}$. This is because a) this quantity decomposes as $(\mathcal{R}_{\mathrm{rob}}(f) - \mathcal{R}_{\mathrm{nat}}(f)) + (\mathcal{R}_{\mathrm{nat}}(f) - \mathcal{R}^*_{\mathrm{nat}})$, and b) both terms are non-negative, where the last inequality holds since $\mathcal{R}_{\mathrm{rob}}(f) \ge \mathcal{R}_{\mathrm{nat}}(f)$ for all $f$'s and $\mathcal{R}_{\mathrm{nat}}(f) \ge \mathcal{R}^*_{\mathrm{nat}}$. In this paper, our principal goal is to provide a tight bound on $\mathcal{R}_{\mathrm{rob}}(f) - \mathcal{R}^*_{\mathrm{nat}}$, using a regularized surrogate loss which can be optimized easily.
Definition. Minimization of the 0-1 loss in the natural and robust errors is computationally intractable, and the demands of computational efficiency have led researchers to focus on minimization of a tractable surrogate loss, $\mathcal{R}_\phi(f) := \mathbb{E}\,\phi(f(\mathbf{X})Y)$. We then need to find quantitative relationships between the excess errors associated with $\phi$ and those associated with the 0-1 loss. We make a weak assumption on $\phi$: it is classification-calibrated [BJM06]. Formally, for $\eta \in [0, 1]$, define the conditional $\phi$-risk by

$C_\eta(\alpha) := \eta\,\phi(\alpha) + (1-\eta)\,\phi(-\alpha),$

and define $H(\eta) := \inf_{\alpha \in \mathbb{R}} C_\eta(\alpha)$ and $H^-(\eta) := \inf_{\alpha(2\eta - 1) \le 0} C_\eta(\alpha)$. The classification-calibrated condition requires that imposing the constraint that $\alpha$ has a sign inconsistent with the Bayes decision rule $\mathrm{sign}(2\eta - 1)$ leads to a strictly larger $\phi$-risk:

Assumption 1. We assume that the surrogate loss $\phi$ is classification-calibrated, meaning that for any $\eta \ne 1/2$, $H^-(\eta) > H(\eta)$.
We argue that Assumption 1 is indispensable for classification problems, since without it the Bayes optimal classifier cannot be the minimizer of the -risk. Examples of classification-calibrated loss include hinge loss, sigmoid loss, exponential loss, logistic loss, and many others (see Table 2).
Properties. Classification-calibrated losses have many structural properties that one can exploit. We begin by introducing a functional transform of a classification-calibrated loss which was proposed by [BJM06]. Define the function $\psi: [0, 1] \to [0, \infty)$ by $\psi = \tilde\psi^{**}$, where $\tilde\psi(\theta) := H^-\!\big(\tfrac{1+\theta}{2}\big) - H\big(\tfrac{1+\theta}{2}\big)$. Indeed, the function $\psi$ is the largest convex lower bound on $\tilde\psi$. The value $\tilde\psi(\theta)$ characterizes how close the surrogate loss $\phi$ is to the class of non-classification-calibrated losses.

Below we state useful properties of the $\psi$-transform. We will frequently use the function $\psi$ and its inverse to bound the excess error.

Under Assumption 1, the function $\psi$ has the following properties: $\psi$ is non-decreasing, continuous, convex on $[0, 1]$, and $\psi(0) = 0$.
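To make the transform concrete, here is a rough numeric sketch (a grid search over scores, written for this presentation rather than taken from the paper's code) that evaluates $\tilde\psi$ for the hinge loss, for which the transform reduces to $\psi(\theta) = \theta$ on $[0, 1]$:

```python
import numpy as np

def hinge(t):
    return np.maximum(0.0, 1.0 - t)

alphas = np.linspace(-5.0, 5.0, 4001)  # grid of candidate scores alpha

def cond_risk(eta, alpha):
    # Conditional phi-risk: C_eta(alpha) = eta*phi(alpha) + (1-eta)*phi(-alpha)
    return eta * hinge(alpha) + (1.0 - eta) * hinge(-alpha)

def H(eta):
    # Unconstrained minimum of the conditional risk
    return cond_risk(eta, alphas).min()

def H_minus(eta):
    # Minimum over alphas whose sign disagrees with the Bayes rule sign(2*eta - 1)
    return cond_risk(eta, alphas[alphas * (2.0 * eta - 1.0) <= 0.0]).min()

def psi_tilde(theta):
    eta = (1.0 + theta) / 2.0
    return H_minus(eta) - H(eta)

# For the hinge loss, psi_tilde is already convex, so psi = psi_tilde = theta.
for theta in [0.0, 0.25, 0.5, 0.9]:
    assert abs(psi_tilde(theta) - theta) < 1e-2
```

The same grid search works for any surrogate by swapping out `hinge`, though for non-convex $\tilde\psi$ one would still need to take the bi-conjugate to obtain $\psi$.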
In this section, we present our main theoretical contributions for binary classification and compare our results with prior literature. Binary classification problems have received significant attention in recent years as many competitions evaluate the performance of robust models on binary classification problems [BCZ18]. We defer the discussions for multi-class problems to Section 4.
Our analysis leads to the following guarantee on the performance of surrogate loss minimization.
Theorem 3.1. Under Assumption 1, for any non-negative loss function $\phi$ such that $\phi(0) \ge 1$, any measurable $f: \mathcal{X} \to \mathbb{R}$, any probability distribution on $\mathcal{X} \times \{-1, +1\}$, and any $\lambda > 0$, we have¹

$\mathcal{R}_{\mathrm{rob}}(f) - \mathcal{R}^*_{\mathrm{nat}} \;\le\; \psi^{-1}\big(\mathcal{R}_\phi(f) - \mathcal{R}^*_\phi\big) + \Pr\big[\mathbf{X} \in \mathbb{B}(\mathrm{DB}(f), \epsilon)\big] \;\le\; \psi^{-1}\big(\mathcal{R}_\phi(f) - \mathcal{R}^*_\phi\big) + \mathbb{E}\max_{\mathbf{X}' \in \mathbb{B}(\mathbf{X}, \epsilon)} \phi\big(f(\mathbf{X}')f(\mathbf{X})/\lambda\big),$

where $\mathcal{R}_\phi(f) := \mathbb{E}\,\phi(f(\mathbf{X})Y)$, $\mathcal{R}^*_\phi := \min_f \mathcal{R}_\phi(f)$, and $\mathrm{sign}(2\eta(\cdot) - 1)$ is the Bayes optimal classifier.

¹We study the population form of the loss function, although we believe that our analysis can be extended to the empirical form by a uniform convergence argument. We leave this analysis as an interesting problem for future research.
Quantity governing model robustness. Our result provides a formal justification for the existence of adversarial examples: learning models are brittle to small adversarial attacks because the probability that data lie around the decision boundary of the model, $\Pr[\mathbf{X} \in \mathbb{B}(\mathrm{DB}(f), \epsilon)]$, is large. As a result, small perturbations may move a data point to the wrong side of the decision boundary, leading to weak robustness of classification models.
We now establish a lower bound showing that the upper bound of Theorem 3.1 is tight. Our lower bound matches our analysis of the upper bound in Section 3.1 up to an arbitrarily small constant.

Theorem 3.2. Suppose that $|\mathcal{X}| \ge 2$. Under Assumption 1, for any non-negative loss function $\phi$ such that $\phi(x) \to 0$ as $x \to +\infty$, any $\xi > 0$, and any $\theta \in [0, 1]$, there exists a probability distribution on $\mathcal{X} \times \{-1, +1\}$, a function $f: \mathcal{X} \to \mathbb{R}$, and a regularization parameter $\lambda > 0$ such that $\mathcal{R}_{\mathrm{rob}}(f) - \mathcal{R}^*_{\mathrm{nat}} = \theta$ and

$\psi(\theta) \le \mathcal{R}_\phi(f) - \mathcal{R}^*_\phi + \mathbb{E}\max_{\mathbf{X}' \in \mathbb{B}(\mathbf{X}, \epsilon)} \phi\big(f(\mathbf{X}')f(\mathbf{X})/\lambda\big) \le \psi(\theta) + \xi.$
Optimization. Theorems 3.1 and 3.2 shed light on algorithmic designs of adversarial defenses. In order to minimize $\mathcal{R}_{\mathrm{rob}}(f) - \mathcal{R}^*_{\mathrm{nat}}$, the theorems suggest solving²

$\min_f \mathbb{E}\Big\{ \phi\big(f(\mathbf{X})Y\big) + \max_{\mathbf{X}' \in \mathbb{B}(\mathbf{X}, \epsilon)} \phi\big(f(\mathbf{X})f(\mathbf{X}')/\lambda\big) \Big\}. \qquad (3)$

²There is a correspondence between the $\lambda$ in problem (3) and the $\lambda$ on the right hand side of Theorem 3.1, because $\psi^{-1}$ is a non-decreasing function. Therefore, in practice we do not need to involve the function $\psi^{-1}$ in the optimization formulation.
We name our method TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization).
Intuition behind the optimization. Problem (3) captures the trade-off between the natural and robust errors: the first term in (3) encourages the natural error to be optimized by minimizing the "difference" between $f(\mathbf{X})$ and $Y$, while the second regularization term encourages the output to be smooth, that is, it pushes the decision boundary of the classifier away from the sample instances by minimizing the "difference" between the prediction on the natural example, $f(\mathbf{X})$, and that on the adversarial example, $f(\mathbf{X}')$. This is conceptually consistent with the argument that smoothness is an indispensable property of robust models [CBG17]. The tuning parameter $\lambda$ plays a critical role in balancing the importance of the natural and robust errors. To see how the hyperparameter $\lambda$ affects the solution in the example of Section 2.3, problem (3) tends to the Bayes optimal classifier as $\lambda \to +\infty$ (vanishing regularization), and tends to the all-one classifier as $\lambda \to 0^+$ (dominant regularization).
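As a minimal sketch (not the released implementation), the empirical form of problem (3) can be written down directly for a linear score function $f(\mathbf{x}) = \mathbf{w} \cdot \mathbf{x}$ with the logistic surrogate; for this linear model the inner maximization over the $\ell_\infty$ ball has a closed form, which we use in place of a PGD inner loop. The data and constants below are made up for illustration:

```python
import numpy as np

def phi(t):
    # Logistic surrogate of the 0-1 loss
    return np.log1p(np.exp(-t))

def trades_objective(w, X, y, eps, lam):
    fx = X @ w
    # For a linear f, maximizing phi(f(X) f(X') / lam) over the l_inf ball
    # B(X, eps) means minimizing f(X) f(X'), attained coordinatewise at
    # X' = X - eps * sign(f(X)) * sign(w).
    x_adv = X - eps * np.sign(fx)[:, None] * np.sign(w)
    fx_adv = x_adv @ w
    natural = phi(fx * y)              # accuracy term of problem (3)
    robust = phi(fx * fx_adv / lam)    # boundary-regularization term
    return natural.mean() + robust.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
w = rng.normal(size=5)
y = np.sign(X @ w + 0.1 * rng.normal(size=64))  # mostly-separable toy labels

loss_eps0 = trades_objective(w, X, y, eps=0.0, lam=1.0)
loss_eps5 = trades_objective(w, X, y, eps=0.5, lam=1.0)
assert loss_eps5 >= loss_eps0  # a larger threat radius can only increase the loss
```

For deep networks the closed-form inner step is unavailable, which is why Algorithm 1 resorts to projected gradient steps for the inner maximization.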
Comparisons with prior work. We compare our approach with several related lines of research in the prior literature. One of the best known algorithms for adversarial defense is based on robust optimization [MMS18, KW18, WSMK18, RSL18a, RSL18b]. Most results in this direction involve algorithms that approximately minimize

$\min_f \mathbb{E} \max_{\mathbf{X}' \in \mathbb{B}(\mathbf{X}, \epsilon)} \phi\big(f(\mathbf{X}')Y\big), \qquad (4)$

where the objective function in problem (4) serves as an upper bound on the robust error $\mathcal{R}_{\mathrm{rob}}(f)$. In complex problem domains, however, this objective function might not be a tight upper bound on the robust error, and it may not capture the trade-off between the natural and robust errors.
A related line of research is adversarial training by regularization [KGB17, RDV17, ZSLG16]. There are several key differences between the results in this paper and those of [KGB17, RDV17, ZSLG16]. Firstly, the optimization formulations are different: in the previous works, the regularization term measures either the "difference" between $f(\mathbf{X})$ and $Y$ [KGB17] or the gradient of the loss [RDV17], whereas our regularization term measures the "difference" between $f(\mathbf{X})$ and $f(\mathbf{X}')$. While [ZSLG16] generated the adversarial example by adding random Gaussian noise to the input, our method simulates the adversarial example by solving the inner maximization problem in Eqn. (3). Secondly, we note that the losses in [KGB17, RDV17, ZSLG16] lack theoretical guarantees. Our loss, with the presence of the second term in problem (3), makes our theoretical analysis significantly more subtle. Moreover, our algorithm requires the same computational resources as adversarial training at scale [KGB17], which makes our method scalable to large-scale datasets. We defer the experimental comparisons of various regularization based methods to Table 5.
Heuristic algorithm. In response to the optimization formulation (3), we use two heuristics to achieve more general defenses: a) extending to multi-class problems by involving a multi-class calibrated loss; b) approximately solving the minimax problem via alternating gradient descent. For multi-class problems, a surrogate loss is calibrated if minimizers of the surrogate risk are also minimizers of the 0-1 risk [PS16]. Examples of multi-class calibrated losses include the cross-entropy loss. Algorithmically, we extend problem (3) to the case of multi-class classification by replacing $\phi$ with a multi-class calibrated loss $\mathcal{L}(\cdot, \cdot)$:

$\min_f \mathbb{E}\Big\{ \mathcal{L}\big(f(\mathbf{X}), Y\big) + \max_{\mathbf{X}' \in \mathbb{B}(\mathbf{X}, \epsilon)} \mathcal{L}\big(f(\mathbf{X}), f(\mathbf{X}')\big)/\lambda \Big\}, \qquad (5)$

where $f(\mathbf{X})$ is the output vector of the learning model (with a softmax operator in the top layer for the cross-entropy loss $\mathcal{L}$), $Y$ is the label-indicator vector, and $\lambda > 0$ is the regularization parameter. The pseudocode of the adversarial training procedure, which aims at minimizing the empirical form of problem (5), is displayed in Algorithm 1.
The key ingredient of the algorithm is to approximately solve the linearization of the inner maximization in problem (5) by projected gradient descent (see Step 7). We note that $\mathbf{x}' = \mathbf{x}$ is a global minimizer, with zero gradient, of the objective function in the inner problem. Therefore, we initialize $\mathbf{x}'$ by adding a small random perturbation around $\mathbf{x}$ in Step 5 to start the inner optimizer. More exhaustive approximations of the inner maximization problem, in terms of either optimization formulations or solvers, would lead to better defense performance.
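The following numpy sketch mimics one Algorithm-1-style inner loop for a made-up linear softmax "network" $f(\mathbf{x}) = W\mathbf{x}$. For this linear model the gradient of $\mathrm{KL}(p \,\|\, \mathrm{softmax}(W\mathbf{x}'))$ with respect to $\mathbf{x}'$ is $W^\top(q - p)$, a simplification we derive ourselves, so this is only an illustration of the alternating scheme, not the released code:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    # KL divergence between two discrete distributions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))    # toy linear "network": 3 classes, 4 features
x = rng.normal(size=4)
y = 2                          # hypothetical label index
eps, alpha, K, lam = 0.25, 0.07, 10, 1.0

p = softmax(W @ x)             # prediction on the natural example (held fixed)

# Inner maximization: start from a small random perturbation (x' = x has zero
# gradient), then run projected sign-gradient ascent on KL(p || q).
x_adv = x + 0.001 * rng.normal(size=4)
for _ in range(K):
    q = softmax(W @ x_adv)
    x_adv = x_adv + alpha * np.sign(W.T @ (q - p))  # ascent step on the KL term
    x_adv = np.clip(x_adv, x - eps, x + eps)        # project back onto B(x, eps)

# Outer objective: empirical form of problem (5) for this single example.
loss = -np.log(p[y] + 1e-12) + kl(p, softmax(W @ x_adv)) / lam
assert np.max(np.abs(x_adv - x)) <= eps + 1e-9
```

In the full procedure, the outer step would then update the model parameters $W$ by descending this loss, alternating with fresh inner maximizations on each minibatch.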
In this section, we verify the effectiveness of TRADES by numerical experiments. We denote by $\mathcal{A}_{\mathrm{rob}}(f)$ the robust accuracy and by $\mathcal{A}_{\mathrm{nat}}(f)$ the natural accuracy on the test dataset. The pixels of input images are normalized to $[0, 1]$. We release our PyTorch code at https://github.com/yaodongyu/TRADES.
We verify the tightness of the established upper bound in Theorem 3.1 for a binary classification problem on the MNIST dataset. The negative examples are '1' and the positive examples are '3'. Here we use a Convolutional Neural Network (CNN) with two convolutional layers, followed by two fully-connected layers. The output size of the last layer is 1. To learn the robust classifier, we minimize the regularized surrogate loss in Eqn. (3), and use the hinge loss in Table 2 as the surrogate loss $\phi$, where the associated $\psi$-transform is $\psi(\theta) = \theta$.
To verify the tightness of our upper bound, we calculate the left hand side of Theorem 3.1, i.e., $\mathcal{R}_{\mathrm{rob}}(f) - \mathcal{R}^*_{\mathrm{nat}}$, and the right hand side, i.e., $\psi^{-1}(\mathcal{R}_\phi(f) - \mathcal{R}^*_\phi) + \mathbb{E}\max_{\mathbf{X}' \in \mathbb{B}(\mathbf{X}, \epsilon)} \phi(f(\mathbf{X}')f(\mathbf{X})/\lambda)$. As we do not have access to the unknown distribution $\mathcal{D}$, we approximate the above expectation terms on the test dataset. We first use the natural training method to train a classifier so as to approximately estimate $\mathcal{R}^*_{\mathrm{nat}}$ and $\mathcal{R}^*_\phi$; the naturally trained classifier achieves small natural error and loss value on this binary classification problem. Next, we optimize problem (3) to train a robust classifier $f$, fixing the perturbation budget, number of inner iterations, and number of training epochs. Finally, to approximate the second term on the right hand side, we use the FGSM$^k$ (white-box) attack (a.k.a. the PGD attack) [KGB17] to approximately compute the worst-case perturbed data $\mathbf{X}'$.
| $1/\lambda$ | $\mathcal{A}_{\mathrm{rob}}(f)$, MNIST (%) | $\mathcal{A}_{\mathrm{nat}}(f)$, MNIST (%) | $\mathcal{A}_{\mathrm{rob}}(f)$, CIFAR10 (%) | $\mathcal{A}_{\mathrm{nat}}(f)$, CIFAR10 (%) |
|---|---|---|---|---|
| 0.1 | 91.09 ± 0.0385 | 99.41 ± 0.0235 | 26.53 ± 1.1698 | 91.31 ± 0.0579 |
| 0.2 | 92.18 ± 0.0450 | 99.38 ± 0.0094 | 37.71 ± 0.6743 | 89.56 ± 0.2154 |
| 0.4 | 93.21 ± 0.0660 | 99.35 ± 0.0082 | 41.50 ± 0.3376 | 87.91 ± 0.2944 |
| 0.6 | 93.87 ± 0.0464 | 99.33 ± 0.0141 | 43.37 ± 0.2706 | 87.50 ± 0.1621 |
| 0.8 | 94.32 ± 0.0492 | 99.31 ± 0.0205 | 44.17 ± 0.2834 | 87.11 ± 0.2123 |
| 1.0 | 94.75 ± 0.0712 | 99.28 ± 0.0125 | 44.68 ± 0.3088 | 87.01 ± 0.2819 |
| 2.0 | 95.45 ± 0.0883 | 99.29 ± 0.0262 | 48.22 ± 0.0740 | 85.22 ± 0.0543 |
| 3.0 | 95.57 ± 0.0262 | 99.24 ± 0.0216 | 49.67 ± 0.3179 | 83.82 ± 0.4050 |
| 4.0 | 95.65 ± 0.0340 | 99.16 ± 0.0205 | 50.25 ± 0.1883 | 82.90 ± 0.2217 |
| 5.0 | 95.65 ± 0.1851 | 99.16 ± 0.0403 | 50.64 ± 0.3336 | 81.72 ± 0.0286 |
The regularization parameter $1/\lambda$ is an important hyperparameter in our proposed method. We show how this parameter affects the performance of our robust classifiers by numerical experiments on two datasets, MNIST and CIFAR10. For both datasets, we minimize the loss in Eqn. (5) to learn robust classifiers for multi-class problems, where we choose $\mathcal{L}$ as the cross-entropy loss.
MNIST setup. We use the CNN with two convolutional layers, followed by two fully-connected layers. The output size of the last layer is 10. We fix the perturbation budget, perturbation step size, number of iterations, learning rate, and batch size, and train for a fixed number of epochs on the training dataset. To evaluate the robust error, we apply the FGSM$^k$ (white-box) attack. The results are in Table 4.

CIFAR10 setup. We apply ResNet-18 [HZRS16] for classification. The output size of the last layer is 10. The training and attack hyperparameters are set analogously to the MNIST setup, with its own perturbation budget and step size. The results are in Table 4.
We observe that as the regularization parameter $1/\lambda$ increases, the natural accuracy decreases while the robust accuracy increases, which verifies our theory on the trade-off between robustness and accuracy. Note that for the MNIST dataset the natural accuracy does not decrease much as the regularization strength increases, in contrast to the results on CIFAR10; this is probably because the classification task on MNIST is easier. Meanwhile, our proposed method is not very sensitive to the choice of $1/\lambda$: when the hyperparameter is set within a moderate range, our method is able to learn classifiers with both high robustness and high accuracy.
| Defense | Defense type | Under which attack | Dataset | Distance | $\mathcal{A}_{\mathrm{nat}}(f)$ | $\mathcal{A}_{\mathrm{rob}}(f)$ |
|---|---|---|---|---|---|---|
| [WSMK18] | robust opt. | FGSM$^k$ (PGD) | CIFAR10 | 0.031 ($\ell_\infty$) | 27.07% | 23.54% |
| [MMS18] | robust opt. | FGSM$^k$ (PGD) | CIFAR10 | 0.031 ($\ell_\infty$) | 87.30% | 47.04% |
| TRADES ($1/\lambda = 1.0$) | regularization | FGSM$^k$ (PGD) | CIFAR10 | 0.031 ($\ell_\infty$) | 88.64% | 49.14% |
| TRADES ($1/\lambda = 6.0$) | regularization | FGSM$^k$ (PGD) | CIFAR10 | 0.031 ($\ell_\infty$) | 84.92% | 56.61% |
| TRADES ($1/\lambda = 1.0$) | regularization | DeepFool ($\ell_\infty$) | CIFAR10 | 0.031 ($\ell_\infty$) | 88.64% | 59.10% |
| TRADES ($1/\lambda = 6.0$) | regularization | DeepFool ($\ell_\infty$) | CIFAR10 | 0.031 ($\ell_\infty$) | 84.92% | 61.38% |
| [MMS18] | robust opt. | FGSM$^k$ (PGD) | MNIST | 0.3 ($\ell_\infty$) | 99.36% | 96.01% |
| TRADES ($1/\lambda = 6.0$) | regularization | FGSM$^k$ (PGD) | MNIST | 0.3 ($\ell_\infty$) | 99.48% | 96.07% |
Previously, [ACW18] showed that 7 defenses published at ICLR 2018 that relied on obfuscated gradients can easily be broken. In this section, we verify the effectiveness of our method with the same experimental setup under both white-box and black-box threat models.
MNIST setup. We use the CNN architecture in [CW17] with four convolutional layers, followed by three fully-connected layers. We fix the perturbation budget, perturbation step size, number of iterations, learning rate, and batch size, and train for a fixed number of epochs on the training dataset.

CIFAR10 setup. We use the same neural network architecture as [MMS18], i.e., the wide residual network WRN-34-10 [ZK16]. The training hyperparameters are set analogously, with the CIFAR10 perturbation budget and step size.
We summarize our results in Table 5 together with the results from [ACW18]. We also implement the methods in [ZSLG16, KGB17, RDV17] on the CIFAR10 dataset, as they are also regularization based methods. For the MNIST dataset, we apply the FGSM$^k$ (white-box) attack; for the CIFAR10 dataset, we apply the FGSM$^k$ (white-box) attack under the same setting in which the defense model in [MMS18] is evaluated. Table 5 shows that our proposed defense method significantly improves the robust accuracy of models, achieving robust accuracy as high as 56.61% on CIFAR10. We also evaluate our robust model on the MNIST dataset under the same threat model as in [SKC18] (C&W white-box attack [CW17]). See the appendix for detailed information on the models in Table 5.
Table 6: Robust accuracy on MNIST under black-box attacks; the source model used to craft the perturbations is noted in parentheses.

| Defense Model | Robust Accuracy | Robust Accuracy |
|---|---|---|
| Madry | 97.43% (Natural) | 97.38% (Ours) |
| TRADES | 97.63% (Natural) | 97.66% (Madry) |

Table 7: Robust accuracy on CIFAR10 under black-box attacks; the source model used to craft the perturbations is noted in parentheses.

| Defense Model | Robust Accuracy | Robust Accuracy |
|---|---|---|
| Madry | 84.39% (Natural) | 66.00% (Ours) |
| TRADES | 87.60% (Natural) | 70.14% (Madry) |
We verify the robustness of our models under black-box attacks. We first train models without adversarial training on the MNIST and CIFAR10 datasets, using the same network architectures specified at the beginning of this section, i.e., the CNN architecture in [CW17] and the WRN-34-10 architecture in [ZK16]. We denote these by naturally trained models (Natural). We also implement the method proposed in [MMS18] on both datasets, and denote the resulting models by Madry's models (Madry).
For both datasets, we use the FGSM$^k$ (black-box) method to attack the various defense models, applying the same perturbation budgets and step sizes as specified in Section 5.3.1. We summarize our results in Table 6 and Table 7. In both tables, we use two source models (noted in the parentheses) to generate adversarial perturbations: we compute the perturbation directions according to the gradients of the source models on the input images. The results show that our models are more robust against black-box attacks transferred from naturally trained models and from [MMS18]'s models. Moreover, our models generate stronger adversarial examples for black-box attacks than naturally trained models and [MMS18]'s models do.
Competition settings. In the NeurIPS 2018 Adversarial Vision Challenge [BRK18], the adversarial attacks and defenses are under the black-box setting. The dataset in this challenge is Tiny ImageNet, which consists of 550,000 images (with our data augmentation) and 200 classes. The robust models only return label predictions instead of explicit gradients and confidence scores. The task for robust models is to defend against adversarial examples generated by the top-5 submissions in the un-targeted attack track. The score for each defense model is the smallest perturbation distance that makes the defense model fail to output the correct label.
Competition results. The methodology in this paper was applied in the competition, where our entry ranked 1st place in the robust model track. We implemented our method to train ResNet models. We report the mean perturbation distance of the top-6 entries in Figure 3; our method outperforms the other approaches by a large margin. In particular, we surpass the runner-up submission by 11.41% in terms of mean $\ell_2$ perturbation distance.
In this paper, we study the problem of adversarial defenses against small perturbations around input data. We focus on the trade-off between robustness and accuracy, and show an upper bound on the gap between the robust error and the optimal natural error. Our result advances the state of the art and matches the lower bound in the worst-case scenario. The bounds motivate us to minimize a new form of regularized surrogate loss, TRADES, for adversarial training. Experiments on real datasets and the NeurIPS 2018 Adversarial Vision Challenge demonstrate the effectiveness of our proposed algorithm. It would be interesting to combine our method with other related lines of research on adversarial defenses, e.g., feature denoising techniques [XWvdM18] and network architecture design [CBG17], to achieve more robust learning systems.
Attack methods. Although deep neural networks have achieved great progress in various areas [ZXJ18, ZSS18], they are brittle to adversarial attacks, which have been extensively studied in recent years. One of the baseline attacks on deep neural networks is the Fast Gradient Sign Method (FGSM) [GSS15]. FGSM computes an adversarial example as

$\mathbf{x}' = \mathbf{x} + \epsilon\, \mathrm{sign}\big(\nabla_{\mathbf{x}}\, \phi(f(\mathbf{x})y)\big),$

where $\mathbf{x}$ is the input instance, $y$ is the label, $f$ is the score function (parametrized by a deep neural network, for example) which maps an instance to its confidence value of being positive, and $\phi$ is a surrogate of the 0-1 loss. A more powerful yet natural extension of FGSM is the multi-step variant FGSM$^k$ (also known as the PGD attack) [KGB17]. FGSM$^k$ applies projected gradient descent $k$ times:

$\mathbf{x}^{(t+1)} = \Pi_{\mathbb{B}(\mathbf{x}, \epsilon)}\Big(\mathbf{x}^{(t)} + \alpha\, \mathrm{sign}\big(\nabla_{\mathbf{x}}\, \phi(f(\mathbf{x}^{(t)})y)\big)\Big),$

where $\mathbf{x}^{(t)}$ is the $t$-th iterate of the algorithm with $\mathbf{x}^{(0)} = \mathbf{x}$, and $\Pi_{\mathbb{B}(\mathbf{x}, \epsilon)}$ is the projection operator onto the ball $\mathbb{B}(\mathbf{x}, \epsilon)$. Both FGSM and FGSM$^k$ approximately solve (the linear approximation of) the maximization problem $\max_{\mathbf{x}' \in \mathbb{B}(\mathbf{x}, \epsilon)} \phi\big(f(\mathbf{x}')y\big)$.
They can be adapted to the purpose of black-box attacks by running the algorithms on another similar network which is white-box to the algorithms [TKP18]. Though defenses that cause obfuscated gradients defeat iterative optimization based attacks, [ACW18] showed that defenses relying on this effect can be circumvented. Other attack methods include MI-FGSM [DLP18] and LBFGS attacks [TV16].
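A minimal numpy sketch of both attacks for a linear model $f(\mathbf{x}) = \mathbf{w} \cdot \mathbf{x}$ with the logistic surrogate (the model, data, and step sizes are invented for illustration; the gradient is analytic here rather than from autodiff):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def logistic_loss(w, x, y):
    # phi(f(x) y) with f(x) = w . x and phi the logistic surrogate
    return float(np.log1p(np.exp(-y * (w @ x))))

def grad_x(w, x, y):
    # d/dx log(1 + exp(-y w.x)) = -y * sigmoid(-y w.x) * w
    return -y * sigmoid(-y * (w @ x)) * w

def fgsm(w, x, y, eps):
    # Single-step attack: x' = x + eps * sign of the loss gradient
    return x + eps * np.sign(grad_x(w, x, y))

def fgsm_k(w, x, y, eps, alpha, k):
    # Multi-step variant (PGD): sign-gradient steps projected onto B(x, eps)
    x_adv = x.copy()
    for _ in range(k):
        x_adv = x_adv + alpha * np.sign(grad_x(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

rng = np.random.default_rng(2)
w, x, y, eps = rng.normal(size=8), rng.normal(size=8), 1.0, 0.1

clean = logistic_loss(w, x, y)
one_step = logistic_loss(w, fgsm(w, x, y, eps), y)
multi = logistic_loss(w, fgsm_k(w, x, y, eps, alpha=0.03, k=10), y)
assert clean < one_step and clean < multi  # the attacks strictly increase the loss
```

For this linear model the gradient direction never changes, so FGSM$^k$ ends at the same corner of the $\ell_\infty$ ball as FGSM; for deep networks the iterates can curve and the multi-step attack is strictly stronger in practice.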
Robust optimization based defenses. Compared with attack methods, adversarial defense methods are relatively fewer. Robust optimization based defenses are inspired by the above-mentioned attacks. Intuitively, these methods train a network by fitting its parameters to adversarial examples:

$\min_f \mathbb{E} \max_{\mathbf{X}' \in \mathbb{B}(\mathbf{X}, \epsilon)} \phi\big(f(\mathbf{X}')Y\big). \qquad (6)$
Following this framework, [HXSS15, SYN15] considered one-step adversaries, while [MMS18] worked with multi-step methods for the inner maximization problem. There are, however, two critical differences between the robust optimization based defenses and the present paper. Firstly, robust optimization based defenses lack theoretical guarantees. Secondly, such methods do not consider the trade-off between accuracy and robustness.
Relaxation based defenses. We mention another related line of research in adversarial defenses: relaxation based defenses. Given that the inner maximization in problem (6) might be hard to solve due to the non-convex nature of deep neural networks, [KW18] and [RSL18a] considered a convex outer approximation of the set of activations reachable through a norm-bounded perturbation for one-hidden-layer neural networks. [WSMK18] later scaled these methods to larger models, and [RSL18b] proposed a tighter convex approximation; related relaxation based approaches include [SND18, VNS18].
Theoretical progress. Despite a large body of empirical work on adversarial defenses, many fundamental questions remain open in theory. There have been a few preliminary explorations in recent years. [FFF18] derived upper bounds on the robustness to perturbations of any classification function, under the assumption that the data are generated by a smooth generative model. From the computational perspective, [BPR18, BLPR18] showed that adversarial examples in machine learning are likely not due to information-theoretic limitations, but rather could be due to computational hardness. From the statistical perspective, [SST18] showed that the sample complexity of robust training can be significantly larger than that of standard training; this gap holds irrespective of the training algorithm or the model family. [CBM18] and [YRB18] studied the uniform convergence of the robust error by extending the classic VC and Rademacher arguments, respectively, to the case of adversarial learning. A recent work demonstrated the existence of a trade-off between accuracy and robustness [TSE19], but did not provide a methodology for tackling the trade-off.
In this section, we provide the proofs of our main results.
We denote by $\mathrm{sign}(2\eta(\cdot) - 1)$ the Bayes decision rule throughout the proofs.
Lemma B.1. For any classifier $f$, we have
Lemma B.2. For any classifier $f$, we have
Now we are ready to prove Theorem 3.1.
where $\mathcal{R}_\phi(f) := \mathbb{E}\,\phi(f(\mathbf{X})Y)$, $\mathcal{R}^*_\phi := \min_f \mathcal{R}_\phi(f)$, and $\mathrm{sign}(2\eta(\cdot) - 1)$ is the Bayes optimal classifier.
By Lemma B.1, we note that
Also, notice that
as desired. ∎
Theorem 3.2 (restated). Suppose that $|\mathcal{X}| \ge 2$. Under Assumption 1, for any non-negative loss function $\phi$ such that $\phi(x) \to 0$ as $x \to +\infty$, any $\xi > 0$, and any $\theta \in [0, 1]$, there exists a probability distribution on $\mathcal{X} \times \{-1, +1\}$, a function $f: \mathcal{X} \to \mathbb{R}$, and a regularization parameter $\lambda > 0$ such that $\mathcal{R}_{\mathrm{rob}}(f) - \mathcal{R}^*_{\mathrm{nat}} = \theta$ and

$\psi(\theta) \le \mathcal{R}_\phi(f) - \mathcal{R}^*_\phi + \mathbb{E}\max_{\mathbf{X}' \in \mathbb{B}(\mathbf{X}, \epsilon)} \phi\big(f(\mathbf{X}')f(\mathbf{X})/\lambda\big) \le \psi(\theta) + \xi.$
The first inequality follows from Theorem 3.1. Thus it suffices to prove the second inequality.
Fix and . By the definition of and its continuity, we can choose such that and . For two distinct points , we set such that , , , and . By the definition of , we choose function such that for all , , and . By the continuity of , there is an such that for all . We also note that there exists an such that for any , we have
Thus, we have