1 Introduction
Deep neural networks (DNNs) have proved their effectiveness in a variety of domains and tasks. However, it has been found that DNNs are highly vulnerable to adversarial examples (Szegedy et al., 2014). To enhance the robustness of DNNs against adversarial examples, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) has become one of the most effective and widely used defenses. Given a predefined perturbation tolerance, denoted as ε, adversarial training aims to minimize the robust loss, defined as the worst-case loss within an ε-ball around each example, leading to a min-max optimization problem. Madry et al. (2018) show that applying a multi-step projected gradient descent (PGD) attack to approximately solve the inner maximization leads to a robust model, and several recent works have proposed various ways to improve adversarial training (Zhang et al., 2019b; Wang, 2019; Wang et al., 2019; Balaji et al., 2019; Ding et al., 2018).
However, the standard adversarial training methods still rest on a questionable assumption: the perturbation tolerance ε is a large, fixed constant throughout the training process, which ignores the fact that every data point may have a different intrinsic robustness. Intuitively, some examples are naturally closer to the decision boundary, and enforcing a large margin on those examples forces the classifier to give up on them, leading to a distorted decision surface. This intuition may explain the well-known and undesirable robustness-accuracy trade-off in adversarial robustness
(Su et al., 2018; Tsipras et al., 2019). Furthermore, with a different perturbation tolerance for each example, it is questionable whether we should still force the model to fit the one-hot label as in the original adversarial training formulation. In the extreme case, if an example is perturbed exactly onto the decision boundary, a good binary classifier should output prediction probabilities of (0.5, 0.5) instead of (1, 0). This point becomes crucial when each example is associated with a different perturbation level. Although some recent papers have started to address the issue of a uniform ε by treating correctly and incorrectly classified examples differently (Ding et al., 2018) or by assigning a non-uniform perturbation level to each example (Balaji et al., 2019), none of them incorporates customized training labels into this process.

Motivated by these ideas, we propose a novel Customized Adversarial Training (CAT) framework that substantially improves the performance of adversarial training. Throughout the adversarial training process, our algorithm dynamically finds a non-uniform, effective perturbation level and a corresponding customized target label for each example. This leads to better generalization performance; moreover, with a carefully designed adaptive tuning scheme, our algorithm has only negligible computational overhead and runs as fast as the original adversarial training algorithm. Furthermore, we theoretically explain why the proposed method leads to improved generalization.
Our method significantly outperforms existing adversarial training methods on the standard CIFAR-10 defense task. With a WideResNet on CIFAR-10 under ℓ∞ perturbation, our method achieves 73% robust accuracy under the PGD attack and 71% under the Carlini and Wagner (C&W) attack (Carlini and Wagner, 2017), while the current best model achieves only 58.6% and 56.8%, respectively. Furthermore, our method degrades the clean accuracy only from 95.93% (standard test accuracy) to 93.48%, while other adversarial training methods have clean accuracy at or below 91.34%.
2 Related Work
Adversarial attack.
Finding adversarial examples, also known as adversarial attacks, can be formulated as an optimization problem: the goal is to find the perturbation that maximizes the (robust) loss, while constraining the perturbation to have a small norm (e.g., ℓ∞ norm). Therefore, gradient-based algorithms have been widely used, such as the fast gradient sign method (FGSM) (Goodfellow et al., 2015), the C&W attack (Carlini and Wagner, 2017), and the PGD attack (Madry et al., 2018). Beyond white-box attacks, adversarial examples can also be generated in the soft-label black-box setting (Chen et al., 2017; Ilyas et al., 2018) and the hard-label black-box setting (Brendel et al., 2017; Cheng et al., 2018, 2020), with quality similar to white-box attacks. Moreover, physical attacks have been proposed to generate adversarial examples in the real world (Eykholt et al., 2018). Given the existence of these powerful adversarial attacks, enhancing the robustness of neural network models has become an important issue in many real-world applications.
Adversarial training.
To enhance the adversarial robustness of a neural network model, a natural idea is to iteratively generate adversarial examples, add them back to the training data, and retrain the model. For example, Goodfellow et al. (2015) use adversarial examples generated by FGSM to augment the data, and Kurakin et al. (2017) propose a multi-step FGSM to further improve performance. Madry et al. (2018) show that adversarial training can be formulated as a min-max optimization problem and propose to use the PGD attack (similar to multi-step FGSM) to find adversarial examples for each batch. The resulting method achieves notable success and survives even under strong attacks (Athalye et al., 2018). Since then, many defense algorithms have been built on a similar min-max framework. Zhang et al. (2019b) propose TRADES, a theoretically driven upper-bound minimization algorithm that achieved the top-1 rank in the NeurIPS 2018 defense competition. Recently, Ding et al. (2018) noticed the importance of misclassified examples and treat correctly classified and misclassified examples differently, and Wang (2020) uses the label prediction probability as a smooth way to combine correctly classified and misclassified samples. Beyond simply adding adversarial examples into the training process, Wang (2019) finds it effective to seek an "adversarial label" along with the "adversarial perturbation". The convergence of adversarial training has also been studied (Gao et al., 2019; Wang et al., 2019), and several works have recently been proposed to reduce the computational overhead of adversarial training (Shafahi et al., 2019; Zhang et al., 2019a; Wong et al., 2020). It is widely recognized that current defensive models are still far from ideal and have considerable room for improvement. Moreover, to make robust models useful in practice, it is crucial that both clean and robust error be further reduced.
Other adversarial defenses.
In addition to adversarial training based methods, a wide range of defense methods have been proposed, such as Gaussian data augmentation (Zantedeschi et al., 2017), randomized smoothing (Liu et al., 2018; Cohen et al., 2019), Mixup (Zhang et al., 2018) and its variants (Thulasidasan et al., 2019; Verma et al., 2018), and label smoothing (Shafahi et al., 2018; Goibert and Dohmatob, 2019). Shafahi et al. (2018) find that combining Gaussian data augmentation with label smoothing can achieve robust accuracy similar to that of adversarial training.
However, some of the aforementioned methods have been shown to cause obfuscated gradients instead of enhanced robustness (Athalye et al., 2018), while adversarial training based methods are still shown to be robust under different kinds of adversarial attacks.
3 Proposed Method
3.1 Preliminaries
Adversarial training can be formulated as a min-max optimization problem. For a K-class classification problem, let D = {(x_i, y_i)}_{i=1}^n denote the set of training samples in the dataset, with x_i ∈ R^d and y_i ∈ {1, …, K}. Let f_θ denote a classification model parameterized by θ, and let f_θ(x)_k denote the prediction output for class k, i.e., the predicted probability that x belongs to class k. We use the standard O(·) notation to hide universal constant factors, and Õ(·) to additionally hide logarithmic factors.
Adversarial training can be formulated as:
min_θ Σ_{i=1}^n max_{x'_i ∈ B(x_i, ε)} ℓ(f_θ(x'_i), y_i),   (1)
where B(x, ε) denotes the ℓ∞-norm ball centered at x with radius ε. The inner maximization problem aims to find an adversarial version of a given data point x that achieves a high loss. In general, one can define B(x, ε) based on the threat model, but the ℓ∞ ball is the most popular choice adopted by recent works (Madry et al., 2018; Zhang et al., 2019b; Wang, 2019; Ding et al., 2018; Wang, 2020) and will also be used in this paper. For a deep neural network model, the inner maximization has no closed-form solution, so adversarial training methods typically use a gradient-based iterative solver to approximately solve the inner problem; the most common choices are the multi-step PGD attack (Madry et al., 2018) and the C&W attack (Carlini and Wagner, 2017).
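As a concrete illustration, the inner maximization can be approximated with a few projected gradient-sign steps. The sketch below is ours (not the paper's code); it works for any differentiable loss via a user-supplied input-gradient function:

```python
import numpy as np

def pgd_attack(grad_fn, x, eps, alpha, steps, rng=None):
    """Sketch of the multi-step PGD solver for the inner maximization:
    gradient-sign ascent steps projected back onto the l_inf ball of
    radius eps around x, with a random start (as in Madry et al., 2018).

    grad_fn(x_adv) must return the gradient of the loss w.r.t. the input;
    alpha is the per-step size."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start in the ball
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # ascent on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)         # project onto B(x, eps)
    return x_adv
```

On a toy logistic model, a handful of steps is enough to reach the worst-case corner of the ball; deep-network usage only differs in how `grad_fn` is computed.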
3.2 Motivation
Intuitively, if adversarial training could always find a model with close-to-zero robust error, one should always train with a large ε, since robustness at a large ε automatically implies robustness at any smaller ε. Unfortunately, in practice a uniformly large ε is often harmful. In the following we illustrate this problem empirically and use it to motivate our proposed algorithm.
We use a simple linear classification case to demonstrate why a uniformly large ε is harmful. In Figure 1(a), we generate a synthetic linearly separable dataset with the margin set to 1.75 for both classes; the correct linear boundary is easily obtained by standard training. In Figure 1(b), we run adversarial training with an ε smaller than the margin, so the algorithm can still obtain near-optimal results. However, when we use a large ε for adversarial training in Figure 1(c), the resulting decision boundary becomes significantly worse: adversarial training cannot correctly fit all the samples with a margin up to 4, so it sacrifices some data samples, leading to a distorted and undesirable decision boundary. This motivates the following two observations:

We shouldn't set the same large ε uniformly for all samples. Some samples are intrinsically closer to the decision boundary, and they should use a smaller ε. Otherwise, adversarial training will give up on those samples, which leads to worse training and generalization error (see the discussion of generalization bounds in Section 3.5).

The adversarial training loss forces the prediction to match the one-hot label (e.g., (1, 0) in the binary classification case) even after large perturbations. However, if a sample is perturbed, the prediction should not remain one-hot. For instance, if a sample is perturbed onto the decision boundary, the prediction of a perfect model should be (0.5, 0.5) instead of (1, 0). This also prevents adversarial training from recovering a good decision hyperplane.
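For a linear model, the effect of the inner maximization can be written in closed form, which makes the "give up on near-boundary samples" effect explicit: an ℓ∞ attack of radius ε shrinks every sample's functional margin by exactly ε‖w‖₁, so any sample whose margin falls below that threshold becomes a guaranteed training error. A minimal numerical check of this standard fact (our illustration, not code from the paper):

```python
import numpy as np
from itertools import product

def worst_case_margin(w, x, y, eps):
    """For a linear scorer f(x) = w.x, the worst-case l_inf attack has a
    closed form: the minimum over the eps-ball of the functional margin
    y*w.x is y*w.x - eps*||w||_1 (attained at delta = -y*eps*sign(w))."""
    return y * np.dot(w, x) - eps * np.linalg.norm(w, 1)

def worst_case_margin_brute(w, x, y, eps):
    # the minimum over the ball is attained at a corner, so enumerate corners
    return min(y * np.dot(w, x + eps * np.array(s))
               for s in product([-1.0, 1.0], repeat=len(x)))
```

So training with ε larger than a sample's margin divided by ‖w‖₁ forces the linear classifier either to misclassify that sample or to distort w, matching the toy experiment above.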
Furthermore, we observe that even when adversarial training can obtain close-to-zero training error with a large ε (e.g., Gao et al. (2019) prove that this happens for overparameterized networks with a large enough margin), a uniformly large ε leads to a larger generalization gap. This can be partially explained by the theoretical results of Yin et al. (2018), which show that the adversarial Rademacher complexity has a lower bound with an explicit dependence on the perturbation tolerance ε. The empirical results in Table 1 also illustrate this problem. When conducting adversarial training with different ε on CIFAR-10 with VGG-16, we found that the model achieves close-to-zero robust training error for every training ε, but training with a larger ε suffers a larger generalization gap than training with a smaller ε. This demonstrates that a uniformly large ε is harmful even when it achieves perfect training error.
Testing ε  Error Type  Training ε = 0.01  Training ε = 0.02  Training ε = 0.03
0.01  Train  99.96%  99.99%  99.16%
0.01  Test  69.79%  69.06%  66.04%
CAT (Customized Adversarial Training)
We propose the Customized Adversarial Training (CAT) framework, which improves adversarial training by addressing the above problems. First, our algorithm uses an auto-tuning method to customize the ε used for each training example. Second, instead of forcing the model to fit the original label, we customize the target label for each example based on its own ε. We describe these two components in more detail below.
3.3 Auto-tuning ε for adversarial training
The first component of our algorithm is an auto-tuning method that adaptively assigns a suitable perturbation level ε_i to each sample during the adversarial training procedure. Let ε_i be the perturbation level assigned to example x_i. Based on the intuition in Section 3.2, we do not want to increase ε_i further once the classifier lacks the capacity to robustly classify the example at that level, which means we should set
ε_i = max { ε : argmax_k f_θ(x')_k = y_i for all x' ∈ B(x_i, ε) },   (2)
and the adversarial training objective becomes
min_θ Σ_{i=1}^n max_{x'_i ∈ B(x_i, ε_i)} ℓ(f_θ(x'_i), y_i).   (3)
Note that ε_i in (2) depends on θ, while θ in (3) in turn depends on the ε_i. We thus propose alternating updates: conduct one SGD update on θ, and then update the ε_i in the current batch. However, computing (2) exactly requires a brute-force search over every possible ε, which adds significant computational overhead to adversarial training. Therefore, we use the following simplified update rule for ε_i. Starting from an initial perturbation level of zero, at each iteration we conduct an adversarial attack (e.g., a PGD attack) on x_i with perturbation tolerance ε_i + η, where η is a constant. If the attack succeeds, we keep the current ε_i; if the attack fails, meaning the attacker cannot find an adversarial example within the ε_i + η ball, we increase ε_i by η. The attack results are also used to update the model parameter θ, so this adaptive scheme requires no additional cost. In practice, we also enforce an upper bound on the final perturbation to make sure no individual ε_i grows too large.
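The per-sample update above can be vectorized in a few lines; `eta` and `eps_max` below are our stand-in names for the increment constant and the upper bound mentioned in the text:

```python
import numpy as np

def update_eps(eps, attack_failed, eta, eps_max):
    """One auto-tuning step from Sec. 3.3 (a sketch): each sample was
    attacked with tolerance eps_i + eta; where the attack failed (the
    model is still robust there), eps_i grows by eta, capped at eps_max;
    where the attack succeeded, eps_i is kept unchanged."""
    return np.where(attack_failed, np.minimum(eps + eta, eps_max), eps)
```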
3.4 Adaptive label uncertainty for adversarial training
As mentioned in Section 3.2, the standard adversarial training loss forces a sample to be classified as its original one-hot label after perturbation. However, this may not be ideal: in the extreme case, if a sample is perturbed onto the decision boundary, the prediction must be far from one-hot. This problem is more severe with non-uniform ε_i, since each different ε_i introduces a different bias into the loss; this may be one reason why purely adaptive ε scheduling does not work well (see our ablation study in Section 4.4 and the results reported in (Balaji et al., 2019)).
In the following, we propose an adaptive label smoothing approach to reflect the different perturbation tolerances of different examples. Szegedy et al. (2016) introduced label smoothing, which converts one-hot label vectors into "one-warm" vectors representing low-confidence classification, in order to prevent the model from making over-confident predictions. Specifically, for a one-hot encoded label y, the smoothed version is

ỹ = (1 − α) y + α ŷ,

where α is the hyperparameter that controls the smoothing level and ŷ is a distribution over the K classes. In our adaptive setting, we set α_i = c ε_i, so that a larger perturbation tolerance receives a higher label uncertainty; here c is a hyperparameter. A common choice of ŷ is the uniform distribution ŷ = (1/K, …, 1/K). To further prevent overfitting and improve generalization, we instead use ŷ ~ Dirichlet(β1), where Dirichlet(β1) refers to the Dirichlet distribution with parameter β1 and 1 is an all-ones vector. With different perturbation tolerances, the adaptive version of label smoothing is

ỹ_i = (1 − c ε_i) y_i + c ε_i ŷ,  ŷ ~ Dirichlet(β1).   (4)
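A sketch of this step, with `c` and `beta` as our stand-in names for the smoothing and Dirichlet hyperparameters above:

```python
import numpy as np

def adaptive_smooth_label(y_onehot, eps_i, c, beta, rng):
    """Adaptive label smoothing of Eq. (4): the smoothing level alpha_i is
    proportional to the sample's perturbation tolerance eps_i, and the
    smoothing mass is drawn from a Dirichlet(beta * 1) distribution."""
    alpha = min(c * eps_i, 1.0)                      # alpha_i = c * eps_i, capped
    noise = rng.dirichlet(beta * np.ones(len(y_onehot)))
    return (1.0 - alpha) * y_onehot + alpha * noise  # still a valid distribution
```

Since both terms are probability vectors, the result always sums to one, and eps_i = 0 recovers the original one-hot label.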
The final objective function: combining the two aforementioned techniques, our Customized Adversarial Training (CAT) method minimizes the following objective:
min_θ Σ_{i=1}^n max_{x'_i ∈ B(x_i, ε_i)} ℓ(f_θ(x'_i), ỹ_i),   (5)
where ỹ_i is defined in (4). As described in Section 3.3, we approximately minimize this objective with an alternating update scheme, which incurs almost no additional cost compared to the original adversarial training algorithm. The detailed procedure is given in Algorithm 1.
Choice of loss function.
In general, our framework can be used with any loss function ℓ. In previous works, the cross entropy loss is the common choice for ℓ. However, models trained with smoothing techniques tend to perform better against the PGD attack than against the C&W attack (see the VGG experiments in Figure 10). So in addition to testing our algorithm with the cross entropy loss, we also propose a mixed loss to enhance the defense against the C&W attack:

ℓ_mix(x'_i, ỹ_i) = ℓ_CE(f_θ(x'_i), ỹ_i) + λ max( max_{k ≠ y_i} z(x'_i)_k − z(x'_i)_{y_i}, −κ ),   (6)

where z(x) is the final (logit) layer output, z(x)_k is the prediction score for the k-th class, and y_i is the original label. The parameter κ encourages the inner maximization to find adversarial examples that are not classified as a wrong class with high confidence.

3.5 Theoretical Analysis
To better understand how our scheme improves generalization, we provide some theoretical analysis. Recall that f_θ(x) denotes the vector of prediction probabilities over the K classes. We define the bilateral margin that our method essentially maximizes as follows.
Definition 3.1 (Bilateral margin)
We define the bilateral perturbed network output under a perturbation δ = (δ₁, δ₂) by

f_θ(x, δ) = f_θ(x + δ₁‖x‖) + δ₂‖f_θ(x + δ₁‖x‖)‖.

The bilateral margin is then defined as the minimum norm of δ required to cause the classifier to make a false prediction:

m_F(x, y) = min_δ ‖δ‖₂  subject to  argmax_k f_θ(x, δ)_k ≠ y.   (7)

This margin captures both the relative perturbation δ₁ on the input layer and the relative perturbation δ₂ on the softmax output f_θ.
Theorem 3.2
Suppose the parameter space we optimize over has covering number scaling as log N(ε) ≤ ⌊C²/ε²⌋ for some complexity C. Then with probability 1 − δ over the draw of the training data, any classifier f_θ which achieves training error 0 satisfies

E[error] ≲ Õ( (C/n) √( Σ_{i=1}^n 1/m_F(x_i, y_i)² ) ) + ζ,

where ζ is a lower-order term.
We defer the proof to the Appendix; it is adapted from Theorem 2.1 of (Wei and Ma, 2019). The population risk is bounded by two key factors: the average of 1/m_F(x_i, y_i)², and the covering number of the parameter space. On one side, the average of 1/m_F² is dominated by the samples with the smallest margin. Therefore, when we do adversarial training, it is important not only to achieve higher overall accuracy, but also to make sure the samples closest to the decision boundary have large enough margin. This cannot be achieved by simply using a constant, large ε, which maintains a large margin for most samples but sacrifices the accuracy of a small portion of the data. On the other side, the covering number of the network's parameter space can be roughly captured by a bound on the product of all layers' weight norms. We hypothesize that with more flexibility in choosing ε_i, our algorithm converges faster than using a larger constant ε and enjoys a stronger implicit regularization effect. To test this hypothesis, we roughly measure model complexity by the product of the weight norms of different models. Compared to our model, training with a constant ε yields a product of weight norms several times larger, which indicates that our model indeed has a stronger implicit regularization effect.
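This complexity proxy is straightforward to compute; a sketch (our helper, with Frobenius norms as an assumed choice of layer norm):

```python
import numpy as np

def weight_norm_product(weights):
    """Rough model-complexity proxy from Sec. 3.5: the product of the
    Frobenius norms of all layer weight matrices."""
    return float(np.prod([np.linalg.norm(W) for W in weights]))
```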
3.6 Connections with other training methods
Many recent papers attempt to improve adversarial training. Although they all follow a similar min-max framework, each uses a slightly different loss function. We summarize the loss functions used by recent adversarial training methods in Table 2.
We see that except for natural training, which directly minimizes the cross entropy loss (denoted as CE), all training techniques involve the min-max framework. TRADES and MMA use the unperturbed data's cross entropy loss as an additional regularization term to achieve a better trade-off between clean and robust error.
Similar to our method, both MMA and IAAT use a sample-wise adaptive ε_i during training, also trying to find the largest feasible ε_i for every sample x_i. However, they do not consider the adaptive label technique of Section 3.4; as a result, they achieve better clean accuracy but only limited improvements in robust accuracy. Our CAT-CE algorithm (CAT with the CE loss) is more general than IAAT and MMA: CAT-CE reduces to IAAT when adaptive label smoothing is turned off (smoothing level set to zero), and MMA can be treated as a special case of CAT-CE when a line search scheme is used to find the ε_i. In Section 4.4, we further show the importance of the adaptive label uncertainty step in CAT.
Methods  Loss Function 

Natural  
Adversarial training (Madry et al., 2018)  
TRADES (Zhang et al., 2019b)  
Bilateral Adv Training (Wang, 2019)  
MMA (Ding et al., 2018)  
MART (Wang, 2020)  
IAAT (Balaji et al., 2019)  
CAT-CE (ours)  
CAT-MIX (ours) 
4 Performance Evaluation
In this section, we conduct extensive experiments to show that CAT achieves a strong result on both clean and robust accuracy. We include the following methods into our comparison:

Customized Adversarial Training (CAT-CE): our proposed method with the cross entropy loss.

Customized Adversarial Training (CAT-MIX): our proposed method with the mixed cross entropy loss (6).

Adversarial training: the adversarial training method proposed in (Madry et al., 2018), which uses a K-step PGD attack as the adversary.

TRADES: TRADES (Zhang et al., 2019b) improves adversarial training with an additional loss on the clean examples and achieves state-of-the-art robust accuracy.

Natural: the natural training which only minimizes the cross entropy loss.
Furthermore, since many recently proposed adversarial training methods have considered CIFAR10 with WideResNet structure as the standard setting and report their numbers, we also compare our performance with 7 previous methods on this specific setting.
Methods  Clean accuracy  PGD accuracy  CW accuracy 

Natural training  95.93%  0%  0% 
Adversarial training (Madry et al., 2018)  87.30%  52.68%  50.73% 
Dynamic adversarial training (Wang et al., 2019)  84.51%  55.03%  51.98% 
TRADES (Zhang et al., 2019b)  84.22%  56.40%  51.98% 
Bilateral Adv Training (Wang, 2019)  91.00%  57.5%  56.2% 
MMA (Ding et al., 2018)  84.36%  47.18%  ✗ 
MART (Wang, 2020)  84.17%  58.56%  54.58% 
IAAT (Balaji et al., 2019)  91.34%  48.53%  56.80% 
CAT-CE (ours)  93.48%  73.38%  61.88% 
CAT-MIX (ours)  89.61%  73.16%  71.67%
4.1 Experimental Setup
Dataset and model structure. We use two popular datasets, CIFAR-10 (Krizhevsky et al.) and Restricted-ImageNet (Deng et al., 2009), for performance evaluation. For CIFAR-10, we use both the standard VGG-16 (Simonyan and Zisserman, 2015) and the Wide ResNet used in both vanilla adversarial training (Madry et al., 2018) and TRADES (Zhang et al., 2019b). For VGG-16, we implement adversarial training with the standard hyperparameters and train TRADES with the official implementation. For Wide ResNet, since the model has become standard for testing adversarial training methods, we use exactly the same model structure provided by (Madry et al., 2018; Zhang et al., 2019b), use the model checkpoints released by the official adversarial training and TRADES repositories, and implement Madry's adversarial training using the standard hyperparameters. For Restricted-ImageNet, we use ResNet-50. All our experiments were implemented in PyTorch 1.4 and conducted using dual Intel E5-2640 v4 CPUs (2.40 GHz) with 512 GB memory and an RTX 2080 Ti GPU.
Implementation details. We set the number of iterations of the adversarial attack to 10 for all methods. Adversarial training and TRADES are trained on PGD attacks with the cross entropy (CE) loss. We implement our CAT method with both the cross entropy loss (Madry et al., 2018) and the C&W loss (Carlini and Wagner, 2017). All models are trained using SGD with momentum 0.9 and weight decay. For the VGG-16/Wide ResNet models, we use an initial learning rate of 0.01/0.1 and decay it by 90% at the 80th, 140th, and 180th epochs. For CAT, we set the ε scheduling parameter η and the weighting parameter c; for CAT-MIX, we additionally set the mixing weight λ.
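The mixed CE + C&W objective of Eq. (6) used by CAT-MIX can be sketched as follows; `lam` and `kappa` are our stand-in names for the weighting and confidence parameters, and the hinge follows the C&W formulation (Carlini and Wagner, 2017):

```python
import numpy as np

def cw_hinge(z, y, kappa=0.0):
    """C&W-style margin term on logits z: how much the best wrong class
    outscores the true class y, floored at -kappa."""
    z_wrong = np.max(np.delete(z, y))
    return max(z_wrong - z[y], -kappa)

def mixed_loss(z, y_smooth, y, lam=1.0, kappa=0.0):
    """Sketch of the mixed objective of Eq. (6): cross entropy on the
    smoothed label plus a weighted C&W hinge on the logits."""
    p = np.exp(z - z.max())          # stable softmax
    p /= p.sum()
    ce = -float(np.sum(y_smooth * np.log(p + 1e-12)))
    return ce + lam * cw_hinge(z, y, kappa)
```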
4.2 Robustness Evaluation and Analysis
White-box attacks. For CIFAR-10, we evaluate all models under different tolerances ε of white-box ℓ∞-norm-bounded untargeted PGD and C&W attacks; specifically, we use 20-step PGD and C&W attacks with random start. Note that when ε = 0, the robust accuracy reduces to the test accuracy on unperturbed (natural) test samples, i.e., the clean accuracy.
The experimental results are shown in Figure 10, where we can see that both CAT-CE and CAT-MIX clearly outperform the other methods across the whole range of ε. Our methods thus achieve better robust error at the standard perturbation thresholds considered in the literature, and also better clean accuracy (ε = 0). The accuracy curves become quite flat as ε increases.
Wide ResNet has become a standard structure for comparing adversarial training methods, and it is standard to train and evaluate with ℓ∞-norm perturbation. For this setting, we collect the reported accuracy of 7 other adversarial training methods, several of them published very recently, for a detailed comparison. As shown in Table 3, our method achieves state-of-the-art robust accuracy while maintaining high clean accuracy. Due to the page limit, we put the Restricted-ImageNet results in the appendix.
Black-box transfer attacks. We follow the criterion for evaluating transfer attacks suggested by Athalye et al. (2018) to inspect whether models trained by CAT suffer from obfuscated gradients and give a false sense of robustness. We generate 10,000 adversarial examples of CIFAR-10 from naturally trained models and evaluate their attack performance on the target model. Table 4 shows that CAT achieves the best accuracy compared with adversarial training and TRADES, suggesting the effectiveness of CAT in defending against both white-box and transfer attacks.
Method  VGG 16  Wide ResNet 

Adv train  79.13%  85.84% 
TRADES  83.53%  83.90% 
CAT  86.58%  88.66 % 
4.3 Loss Landscape Exploration
To further verify the superior robustness obtained by CAT, we visualize the loss landscapes of different training methods in Figure 15. Following the implementation of Engstrom et al. (2018), we perturb the input along a two-dimensional grid defined by the sign of the input gradient and a random Rademacher vector, where the x and y axes represent the magnitude of the perturbation added in each direction and the z axis represents the loss.
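The grid construction can be sketched in a few lines (our sketch of the Engstrom et al. procedure, for any differentiable loss):

```python
import numpy as np

def loss_surface(loss_fn, grad_fn, x, extent, n, rng):
    """Loss-landscape grid: one axis is the sign of the input gradient,
    the other a random Rademacher vector; entry [i, j] is the loss at
    x + t_i * d_grad + t_j * d_rade, with t ranging over [-extent, extent]."""
    d_grad = np.sign(grad_fn(x))
    d_rade = rng.choice([-1.0, 1.0], size=x.shape)
    ts = np.linspace(-extent, extent, n)
    return np.array([[loss_fn(x + u * d_grad + v * d_rade) for v in ts]
                     for u in ts])
```

The center of the returned grid is the unperturbed loss, and a flat, low surface around it is what Figure 15 visualizes for CAT.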
As shown in Figure 15, CAT produces a model with a lower and smoother loss landscape, which can be taken as further strong evidence that CAT training yields a robust model.
4.4 Ablation study
The importance of adaptive label uncertainty. Here we perform an ablation study using VGG-16 on CIFAR-10 to assess the importance of adaptive label uncertainty and the adaptive instance-wise ε. In Figure (b), Adv train denotes the original adversarial training, Adv+LS denotes adversarial training with label smoothing (set by Eq. (4)), Adp-Adv denotes adversarial training with adaptive instance-wise ε, and CAT-CE is the proposed method combining both techniques. We find that applying only the adaptive instance-wise ε or only label smoothing cannot significantly boost robust accuracy over standard adversarial training, but the proposed method, by combining these two ideas, significantly improves performance. This explains why CAT significantly outperforms instance-adaptive methods such as IAAT and MMA.
More iterations of PGD attack. As suggested by Athalye et al. (2018), to verify that the performance gain is not due to an insufficient number of PGD iterations, in Figure (a) we show the robust accuracy as the number of attack iterations varies on CIFAR-10 with VGG-16. The results show that although increasing the number of iterations decreases performance by around 2%, CAT always outperforms the other methods significantly.
5 Conclusions
In this paper, we propose CAT, a customized adversarial training method designed for better generalization in both clean and robust performance. We also provide a theoretical analysis to motivate our algorithm. Experimental results show that CAT achieves state-of-the-art robust accuracy and high clean accuracy while keeping a running time similar to standard adversarial training. The success of CAT indicates that it is crucial to customize both the perturbation level and the label of each data sample in adversarial training.
References
Athalye et al. (2018). Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In International Conference on Machine Learning.
Balaji et al. (2019). Instance adaptive adversarial training: improved accuracy tradeoffs in neural nets. arXiv preprint arXiv:1910.08051.
Brendel et al. (2017). Decision-based adversarial attacks: reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248.
Carlini and Wagner (2017). Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy, pp. 39–57.
Chen et al. (2017). ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15–26.
Cheng et al. (2018). Query-efficient hard-label black-box attack: an optimization-based approach. arXiv preprint arXiv:1807.04457.
Cheng et al. (2020). Sign-OPT: a query-efficient hard-label adversarial attack. In ICLR.
Cohen et al. (2019). Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning.
Deng et al. (2009). ImageNet: a large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255.
Ding et al. (2018). Max-margin adversarial (MMA) training: direct input space margin maximization through adversarial training. arXiv preprint arXiv:1812.02637.
Engstrom et al. (2018). Evaluating and understanding the robustness of adversarial logit pairing. arXiv preprint arXiv:1807.10272.
Eykholt et al. (2018). Robust physical-world attacks on deep learning visual classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625–1634.
Gao et al. (2019). Convergence of adversarial training in overparametrized neural networks. In Advances in Neural Information Processing Systems, pp. 13009–13020.
Goibert and Dohmatob (2019). Adversarial robustness via adversarial label-smoothing. arXiv preprint arXiv:1906.11567.
Goodfellow et al. (2015). Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
Ilyas et al. (2018). Black-box adversarial attacks with limited queries and information. arXiv preprint arXiv:1804.08598.
Krizhevsky et al. CIFAR-10 (Canadian Institute for Advanced Research).
Kurakin et al. (2017). Adversarial machine learning at scale. In International Conference on Learning Representations.
Liu et al. (2018). Towards robust neural networks via random self-ensemble. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 369–385.
Madry et al. (2018). Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations.
Shafahi et al. (2018). Label smoothing and logit squeezing: a replacement for adversarial training?
Shafahi et al. (2019). Adversarial training for free! In Neural Information Processing Systems.
Simonyan and Zisserman (2015). Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations.
Su et al. (2018). Is robustness the cost of accuracy? A comprehensive study on the robustness of 18 deep image classification models. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 631–648.
Szegedy et al. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826.
Szegedy et al. (2014). Intriguing properties of neural networks. In International Conference on Learning Representations.
Thulasidasan et al. (2019). On mixup training: improved calibration and predictive uncertainty for deep neural networks. In Neural Information Processing Systems.
Tsipras et al. (2019). Robustness may be at odds with accuracy. In International Conference on Learning Representations.
Verma et al. (2018). Manifold mixup: better representations by interpolating hidden states. In International Conference on Machine Learning.
Wang (2019). Bilateral adversarial training: towards fast training of more robust models against adversarial attacks. In International Conference on Computer Vision.
Wang et al. (2019). On the convergence and robustness of adversarial training. In International Conference on Machine Learning, pp. 6586–6595.
Wang (2020). Improving adversarial robustness requires revisiting misclassified examples. In International Conference on Learning Representations.
Wei and Ma (2019). Improved sample complexities for deep networks and robust classification via an all-layer margin. arXiv preprint arXiv:1910.04284.
Wong et al. (2020). Fast is better than free: revisiting adversarial training. arXiv preprint arXiv:2001.03994.
Yin et al. (2018). Rademacher complexity for adversarially robust generalization. arXiv preprint arXiv:1810.11914.
Zantedeschi et al. (2017). Efficient defenses against adversarial attacks. In ACM Workshop on Artificial Intelligence and Security, pp. 39–49.
Zhang et al. (2019a). You only propagate once: accelerating adversarial training via maximal principle. In Advances in Neural Information Processing Systems, pp. 227–238.
Zhang et al. (2019b). Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning.
Zhang et al. (2018). Mixup: beyond empirical risk minimization. In International Conference on Learning Representations.
Appendix A Omitted Proofs
In this section we provide the omitted proof of Theorem 3.2, which is adapted from Theorem 2.1 of (Wei and Ma, 2019). For an r-layer network h = h_r ∘ ⋯ ∘ h_1 and perturbation δ = (δ_1, …, δ_r), they define the perturbed network output h(x, δ) recursively by

h_1(x, δ) = h_1(x) + δ_1‖x‖,
h_i(x, δ) = h_i(h_{i−1}(x, δ)) + δ_i‖h_{i−1}(x, δ)‖ for 2 ≤ i ≤ r.

They define the all-layer margin as the minimum norm of δ required to cause the classifier to make a false prediction:

m(x, y) = min_δ √( Σ_{i=1}^r ‖δ_i‖² )  subject to  argmax_k h(x, δ)_k ≠ y.   (8)

They consider the function class to be the class of compositions of functions from function classes H_1, …, H_r and obtain the following generalization bound.

Theorem A.1 (Theorem 2.1 from (Wei and Ma, 2019))
In the above setting, with probability 1 − δ over the draw of the data, all classifiers h which achieve training error 0 satisfy

E[error] ≲ Õ( (Σ_{i=1}^r C_i / n) √( Σ_{j=1}^n 1/m(x_j, y_j)² ) ) + ζ,

where C_i is the complexity of H_i and ζ is a lower-order term.

For our problem, we define h = h_2 ∘ h_1, where h_1 is the identity mapping and h_2 is the original network f_θ. The all-layer margin then reduces to our bilateral margin m_F(x, y) of Definition 3.1. Finally, since h_1 is the identity mapping, composing with it does not affect the overall complexity; applying Theorem A.1 yields our result.