1 Introduction
Deep neural networks have become increasingly effective at a wide range of tasks. When training a network, label smoothing is a commonly used "trick" to improve performance: smoothed label vectors replace one-hot label vectors when computing the cross-entropy loss. Szegedy et al. [1] originally proposed label smoothing as a regularization strategy that improved the performance of the Inception architecture on the ImageNet dataset. Since then, a number of works [2, 3, 4] on image classification have used this form of regularization to improve generalization. They regard label smoothing as a form of output-distribution regularization that prevents overfitting by softening the ground-truth labels to penalize overconfident outputs [5]. The method has been extended to many fields, including speech recognition [6] and machine translation [7]. Recent work has also revealed that label smoothing improves model calibration [8].
Over the last few years, there has been a growing body of work on the brittleness of neural networks. Szegedy et al. [9] first revealed the existence of adversarial examples: imperceptible changes to an image can cause a DNN to mislabel it. Since then, researchers have engaged in an arms race between attack and defense. Among attack methods, a widely used family exploits the gradient information of the model to construct or search for the direction of perturbation. We call these gradient-based methods; they include PGD [10], CW [11], and others.
However, recent studies have shown that evaluating robustness with these attacks is insufficient and can lead to overestimation of robustness [12, 13]. Inspired by these findings, we observe that training a model with label smoothing can invalidate most gradient-based attacks: the robust accuracy of a label-smoothing model exceeds that of most commonly accepted defense models. We investigate the robust and natural accuracy of models on various datasets with and without label smoothing. Table 1 clearly shows that the model trained with label smoothing has a significant defensive effect against adversarial attacks regardless of dataset.
| Dataset | Architecture | Circumstance | Accuracy (with LS) | Accuracy (without LS) |
|---|---|---|---|---|
| MNIST | 4-layer CNN | natural | 99.11 | 99.73 |
| | | FGSM | 99.03 | 0 |
| CIFAR-10 | WideResNet-34-10 | natural | 94.74 | 95.44 |
| | | PGD | 75.10 | 0 |
| CIFAR-100 | WideResNet-34-10 | natural | 80.50 | 81.10 |
| | | PGD | 34.23 | 0.03 |
Note that even for the adversarial training model [10], considerable computational cost is required to achieve adversarial robustness. Previous work shows that training an adversarially robust model is particularly difficult [10]: either the network requires more capacity to be robust [10, 14], or the model tends to require more data [15, 16, 17]. Therefore, a method that changes neither the network capacity nor the amount of data, but only modifies the training labels, and yet gives the model a defensive effect against adversarial attacks, raises the following question:
Does label smoothing bring real robustness to DNNs?
To explore this question, we study the relationship between label smoothing and adversarial robustness from both theoretical and experimental perspectives. Our contributions are as follows:

We show that label smoothing has a very significant effect on most gradient-based attacks.

We analyze theoretically why the label-smoothing model invalidates most gradient-based attacks.

We demonstrate that label smoothing cannot bring stable robustness. We study the properties of the label-smoothing model experimentally and summarize the key points for breaking through its robustness.
Note that attacks used to assess robustness should either be able to systematically find adversarial examples or be able to declare with high confidence that no adversarial example exists. Our work reveals the necessity and urgency of systematic and efficient attack methods for evaluating the robustness of neural networks.
2 Theoretical Analysis
In this section, we analyze how training with label smoothing changes the model. We first give a theoretical analysis, and then provide a toy example of why such changes affect the robustness of the model.
2.1 Preliminaries
An $N$-layer DNN is a mapping $F$ that accepts an input $x \in \mathbb{R}^d$ and produces an output $F(x) \in \mathbb{R}^K$ with parameters $\theta$. It can be expressed as:

$$F(x) = \mathrm{softmax}\big(Z_N(x)\big), \qquad Z_n(x) = \sigma\big(W_n Z_{n-1}(x) + b_n\big), \quad Z_0(x) = x, \tag{1}$$

where $\sigma$ is a nonlinear activation function and $W_n$, $b_n$ are the weights and biases of layer $n$. Let $z = Z_N(x)$ denote the logits and $p = \mathrm{softmax}(z)$ the output of the softmax layer; that is, $z_i$ denotes the logit of the $i$-th class, and $p_i = e^{z_i} / \sum_{j=1}^{K} e^{z_j}$ is the $i$-th element of $p$, which satisfies $p_i \in (0, 1)$ and $\sum_{i=1}^{K} p_i = 1$ due to the property of softmax. The DNN assigns the label $\hat{y} = \arg\max_i z_i$.

For a network trained with hard labels on a classification task with $K$ classes, we use one-hot label vectors $y$, where $y_i$ is "1" for the correct class $c$ and "0" for the rest. For a network trained with label smoothing, we use soft label vectors instead, with each dimension modified according to $y_i^{LS} = (1 - \alpha)\,y_i + \alpha / K$. The parameter $\alpha \in [0, 1]$, which we call the smoothing factor, measures the extent of smoothness. When $\alpha$ is small, the gap between the wrong and correct classes is relatively large and the smoothness level is low. As $\alpha$ increases, the smoothness of the label vector increases and the gap between the correct and wrong classes decreases.
2.2 Impacts of Label Smoothing
Most neural networks are trained with the cross-entropy (CE) loss as the objective: $\mathcal{L} = -\sum_{i=1}^{K} y_i \log p_i$, where $K$ is the number of classes. When $y$ is a one-hot label, the loss can be written as:

$$\mathcal{L}_{hard} = -\log p_c, \tag{2}$$

where $p_c$ is the output probability of the correct class $c$. If we smooth the label according to $y_i^{LS} = (1 - \alpha)\,y_i + \alpha / K$, the CE loss becomes:

$$\mathcal{L}_{LS} = -\Big(1 - \alpha + \frac{\alpha}{K}\Big) \log p_c - \frac{\alpha}{K} \sum_{i \ne c} \log p_i. \tag{3}$$

By substituting $p_i = e^{z_i} / \sum_{j} e^{z_j}$ into Equations (2) and (3), we can further deduce the relationship between the loss function and the logits when using the one-hot label and the smoothed label respectively:

$$\mathcal{L}_{hard} = -z_c + \log \sum_{j=1}^{K} e^{z_j}, \tag{4}$$

$$\mathcal{L}_{LS} = -(1 - \alpha)\,z_c - \frac{\alpha}{K} \sum_{i=1}^{K} z_i + \log \sum_{j=1}^{K} e^{z_j}. \tag{5}$$
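To make the difference concrete, the following sketch (pure Python; the logit values are illustrative) evaluates both losses on a very confident prediction: the hard-label loss is driven toward zero, while the smoothed loss stays bounded away from it.

```python
import math

def softmax(z):
    m = max(z)                       # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def ce_loss(z, y):
    """Cross-entropy between a (possibly soft) label vector y and softmax(z)."""
    p = softmax(z)
    return -sum(yi * math.log(pi) for yi, pi in zip(y, p))

z = [8.0, 0.0, 0.0]                       # very confident logits, class 0 correct
hard = [1.0, 0.0, 0.0]                    # one-hot label
soft = [0.9 + 0.1 / 3, 0.1 / 3, 0.1 / 3]  # smoothed label, alpha = 0.1, K = 3

loss_hard = ce_loss(z, hard)   # near zero: hard labels reward unbounded confidence
loss_soft = ce_loss(z, soft)   # the extra terms keep the loss away from zero
```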
Let $m_i = z_c - z_i$ denote the margin between the logit of the correct class and that of another class $i$. Then:

$$\mathcal{L}_{hard} = \log \Big(1 + \sum_{i \ne c} e^{-m_i}\Big), \tag{6}$$

$$\mathcal{L}_{LS} = \log \Big(1 + \sum_{i \ne c} e^{-m_i}\Big) + \frac{\alpha}{K} \sum_{i \ne c} m_i. \tag{7}$$

Traditional networks trained with hard labels decrease Equation (6) by increasing the margins between the correct logit and the other logits. Note that the right-hand side of Equation (7) has one more term than Equation (6): when label smoothing is applied, a term opposite to the original optimization objective (which reduces $\mathcal{L}$ by growing the margins) is added to the loss. With hard labels, the loss encourages the margins to grow without bound, although the intensity of this encouragement (the gradient) gradually decreases and eventually converges. By contrast, when label smoothing is applied, the second term of Equation (7) suppresses the growth of the margins. The ultimate effect of the two terms restraining each other is a compromise within a range acceptable to both; in other words, the value of $m_i$ is confined to a limited range. Calculating the partial derivative of the loss with respect to a specific $m_i$:

$$\frac{\partial \mathcal{L}_{LS}}{\partial m_i} = -\frac{e^{-m_i}}{1 + \sum_{j \ne c} e^{-m_j}} + \frac{\alpha}{K}. \tag{8}$$

Since the partial derivative vanishes at the extreme point, we can obtain the approximate range of $m_i$. Ideally, setting Equation (8) to zero gives $p_i = \alpha / K$ for every wrong class $i$, so each margin converges to the constant $m_i = \log\big((1 - (K-1)\alpha/K) \,/\, (\alpha/K)\big)$. By the properties of the softmax function, the logits therefore also have a limited range, whose size is determined by $\alpha$. The approximate ranges measured experimentally are listed in Table 2.
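The stationary condition can be checked numerically. The sketch below (pure Python; step size and iteration count are illustrative) minimizes the margin form of the smoothed loss by gradient descent and verifies that each wrong-class probability converges to $\alpha / K$:

```python
import math

K, alpha = 10, 0.9
m = [0.0] * (K - 1)   # margins z_c - z_i for the K - 1 wrong classes

# gradient descent on L(m) = log(1 + sum_i exp(-m_i)) + (alpha / K) * sum_i m_i
for _ in range(5000):
    denom = 1.0 + sum(math.exp(-mi) for mi in m)
    grad = [-math.exp(-mi) / denom + alpha / K for mi in m]
    m = [mi - 0.5 * gi for mi, gi in zip(m, grad)]

# at the optimum every wrong-class softmax probability equals alpha / K,
# so each margin equals log(p_c / (alpha / K))
denom = 1.0 + sum(math.exp(-mi) for mi in m)
p_wrong = math.exp(-m[0]) / denom
p_correct = 1.0 / denom
expected_margin = math.log(p_correct / (alpha / K))
```

For $K = 10$ and $\alpha = 0.9$ the margins settle near $\log(0.19 / 0.09) \approx 0.75$, consistent with the upper end of the measured range in Table 2.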
| $\alpha$ | range | $\alpha$ | range |
|---|---|---|---|
| 0 | [5.00, 15] | 0.5 | [0.30, 2.2] |
| 0.1 | [0.55, 4.5] | 0.6 | [0.25, 2.0] |
| 0.2 | [0.45, 3.5] | 0.7 | [0.20, 1.6] |
| 0.3 | [0.40, 3.0] | 0.8 | [0.14, 1.2] |
| 0.4 | [0.35, 2.5] | 0.9 | [0.08, 0.7] |
2.3 Toy Example
Based on the above theoretical analysis and experimental validation, we use a simplified model to explain why label smoothing affects the effectiveness of adversarial attacks.
Two loss functions are commonly used in adversarial example generation algorithms. One is the cross-entropy loss, the other is the margin loss:

$$\mathcal{L}_{CE}(x, c) = -\log p_c(x), \tag{9}$$

$$\mathcal{L}_{margin}(x, c) = \max_{i \ne c} z_i(x) - z_c(x). \tag{10}$$

Regardless of the loss function, the objective is to follow the gradient of the adversarial loss as the search direction for the adversarial example. Normally, along this direction the logit of the correct class decreases, and the predicted label changes once the search crosses the decision boundary ($z_c(x) = \max_{i \ne c} z_i(x)$). If the perturbation size meets the requirement (within the $\epsilon$-radius ball) at this point, the adversarial example has been found.
Let us consider a toy example of what happens when label smoothing is applied. Consider a one-dimensional classification task; the relationship between the logits of the network output and the input is shown in Figure 1. The solid line represents the correct logit, and the dotted line represents one of the remaining logits. The orange lines represent the model trained without label smoothing, and the blue lines the model trained with it. Empirically, the two models achieve their extremes at similar locations, but the range of the logits is smaller when label smoothing is applied, resulting in different locations of the decision boundary points for the two models. The two decision points are far apart, which leads to fewer decision boundary points within the allowed perturbation interval. As Figure 1 shows, assuming we use the logit of the correct class as the loss function and search for adversarial examples in its direction of descent, the method fails to generate an adversarial example when the nearest decision boundary point lies outside the allowed perturbation interval.
3 Experiment Verification
Over the past years it has been revealed that many attack methods used to evaluate defense models are insufficient or give a wrong impression of robustness. We have demonstrated that label smoothing greatly improves the model's performance against the PGD attack. However, a more comprehensive understanding of the robustness of the label-smoothing model is necessary. In this section, we use more representative attack methods to test its robustness more thoroughly and try to explain the experimental results with the conclusions of the previous section.
3.1 Experimental Setup
We consider the classification task on CIFAR-10, and every model's architecture is WideResNet-34-10. For $\ell_\infty$-bounded attacks we fix a maximum perturbation $\epsilon$; for $\ell_2$-bounded attacks, a corresponding $\ell_2$ budget. We investigate the two most representative defensive models as baselines: standard adversarial training (Madry et al. [10]) and TRADES (Zhang et al. [18]). Both are typical models based on adversarial training, and the training settings for both follow their original papers.
To avoid excessive effort optimizing the hyperparameter of the label-smoothing model, we fix it to a constant. We therefore first test when the label-smoothing model has the best defense performance: we train a series of networks with different smoothing factors and evaluate them under the same attack setting (maximum perturbation $\epsilon$, step size $\eta$, and a fixed total number of perturbation steps). The results are shown in Figure 2.
From Figure 2, we observe that when the smoothing factor exceeds 0.3, the model starts to have a remarkable defense effect against PGD attacks; the effect reaches its maximum at a factor of 0.9 and then decreases. Based on the analysis in the previous section, this pattern is logical. Thus, in all subsequent experiments, the smoothing factor of all label-smoothing models is set to 0.9, where the best performance is reached.
We evaluate the models in four respects: PGD-like attacks, the CW attack, our proposed attack, and a transferability test. The specific experimental settings are described in each part below.
3.2 PGD Attack and Its Variants
Currently, the PGD attack is the most widely used algorithm for testing adversarial robustness. It has the advantage of being computationally cheap and performs well in many cases. However, it has been shown that PGD can lead to a significant overestimation of robustness, and in recent years a line of PGD variants has been proposed. We first give a definition of PGD-like attacks.
Given a classifier $f$, an original example $x$ with true label $y$, and an adversarial loss function $\mathcal{L}$, PGD-like algorithms generate an adversarial example by performing the following procedure for one or more iterations.
Start the perturbation at a point $x^{(0)}$ which satisfies

$$x^{(0)} \in \mathcal{B}_p(x, \epsilon), \tag{11}$$

then search for new points by

$$x^{(t+1)} = \Pi_{\mathcal{B}_p(x, \epsilon)}\Big(x^{(t)} + \eta \cdot \mathrm{sign}\big(\nabla_x \mathcal{L}(f(x^{(t)}), y)\big)\Big), \tag{12}$$

where $\mathcal{B}_p(x, \epsilon)$ represents the $\ell_p$ ball of radius $\epsilon$ centered on $x$, $\eta$ is the step size, and $\Pi$ indicates projection onto the ball.
We can summarize three characteristics of PGD-like attack algorithms from the above definition.

The starting point of the perturbation lies in a certain norm-ball neighborhood of the original point.

The direction of the perturbation is the sign direction of the gradient of the adversarial loss function.

Projection operations are used to control the boundary.
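The iteration defined above can be sketched on a toy one-feature logistic model (pure Python; the model and all parameter values are illustrative, not the paper's experimental setup):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def pgd_attack(x0, w, b, y, eps, step, n_steps):
    """PGD-like attack on p(y=1 | x) = sigmoid(w*x + b): repeatedly step
    in the sign direction of the loss gradient and project back onto
    the interval [x0 - eps, x0 + eps]."""
    x = x0
    for _ in range(n_steps):
        p = sigmoid(w * x + b)
        grad = (p - y) * w                       # d/dx of the cross-entropy loss
        x += step * (1.0 if grad > 0 else -1.0)  # sign step, ascending the loss
        x = min(max(x, x0 - eps), x0 + eps)      # projection onto the eps-ball
    return x

# the clean point x0 = 1.0 is classified as class 1 (p > 0.5);
# within eps = 1.5 the attack crosses the decision boundary at x = 0
x_adv = pgd_attack(x0=1.0, w=2.0, b=0.0, y=1, eps=1.5, step=0.1, n_steps=20)
```

The attack walks to the edge of the $\epsilon$-ball; the projection is what keeps all three characteristics above satisfied at every step.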
| Attack | Restarts | Initialization | Steps & step size | Loss function |
|---|---|---|---|---|
| FGSM | w/o | original | 1, $\epsilon$ | cross-entropy |
| PGD-10 | w/o | random | 10 | cross-entropy |
| PGD-20 | w/o | random | 20 | cross-entropy |
| PGD-40 | w/o | random | 40 | cross-entropy |
| PGD-CW | w/o | random | 20 | margin |
| MT | 18 | random | 20 | different margin |
| ODI | 20 | calculated | 20 | margin |
| AA (APGD-CE) | 1 | random | 100, auto | cross-entropy |
| Defense | clean | FGSM | PGD-10 | PGD-20 | PGD-40 | PGD-CW | MT | ODI | AA |
|---|---|---|---|---|---|---|---|---|---|
| Madry | 86.83 | 56.88 | 52.32 | 51.24 | 51.14 | 50.73 | 50.34 | 49.13 | 49.35 |
| Zhang | 84.92 | 64.87 | 56.12 | 55.09 | 55.89 | 53.69 | 52.55 | 52.71 | 53.15 |
| LS | 94.74 | 67.17 | 64.01 | 74.83 | 73.15 | 74.10 | 25.02 | 0.15 | 6.86 |
We mainly apply two classic methods (FGSM and PGD) and three recently proposed methods (MT, ODI, and AA) to test the adversarial robustness of the label-smoothing model. The three new attack algorithms are considered to produce more accurate robustness evaluations than traditional PGD. MT denotes the Multi-Targeted attack [19], ODI the Output Diversified Initialization attack [20], and AA AutoAttack [21].
We first explain how these attacks are distinguished from each other while following the pattern defined above, and introduce their specific settings in the following experiments. FGSM and PGD both perform the above procedure once, without restarts. FGSM is the simplest PGD-like algorithm: it perturbs only one step from the starting point, but with the largest step size $\epsilon$. For the PGD attack, we test three settings with different numbers of steps and step sizes, denoted PGD-10, PGD-20, and PGD-40. The PGD-CW attack is the way the CW attack is realized in most defense papers: only the cross-entropy loss in the PGD attack is replaced by the margin loss. The starting points of these attacks are sampled uniformly at random from the $\epsilon$-radius ball. The MT attack perturbs with multiple restarts and picks a new target class logit for the margin loss at each restart. ODI provides a more effective initialization strategy that determines starting points with diversified logits at each restart. AutoAttack is a parameter-free ensemble of four attacks: FAB, two proposed AutoPGD attacks with different loss functions, and the black-box Square Attack. Here, we use one of its AutoPGD attacks, APGD-CE, as a PGD variant; APGD-CE improves the attack by searching for good initial points with restarts and by a step-size selection strategy. The components and specific settings of these attacks are listed in Table 3.
The results in Table 4 fall into three parts. Firstly, the label-smoothing model achieves the best natural accuracy, since no noise is added to the training samples. Secondly, under all the classic PGD-like attacks, including FGSM and PGD, the label-smoothing model also achieves the strongest defense in comparison with the two adversarial training models. It should be noted that the accuracy of the two adversarial training models does not fluctuate much as the number of steps increases (and the step size decreases), whereas the label-smoothing model shows a clear rising trend. Thirdly, the three recently proposed attacks reduce the accuracy of the label-smoothing model enormously, while the accuracy of the two adversarial training models does not decrease much.
Synthesizing the performance of the label-smoothing model under the different PGD-like attacks, we can empirically conclude that it does not have stable robustness. Firstly, its robustness is readily affected by the step size and step number; one explanation is that a larger step size helps the search jump out of locally wrong perturbation directions. Secondly, the three improved PGD-like attacks almost destroy the model's defense, though to varying degrees. This is mainly due to their restart mechanism, whose purpose is to try more perturbation directions and choose the best: MT varies the direction through margin losses over different target classes, ODI through different initial points, and AA through a larger step size. Among them, changing the starting point is the most effective, increasing the step size is second, and changing the loss function is the least effective. In addition, the result of the PGD-CW attack shows that it is not critical whether the loss function is cross-entropy or margin.
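The restart mechanism shared by these attacks can be sketched as a generic wrapper (pure Python; `attack_fn` and the toy loss are hypothetical stand-ins, not any of the published attacks):

```python
import random

def attack_with_restarts(attack_fn, x0, eps, n_restarts, rng):
    """Restart mechanism: run the same single-run attack from several
    random starting points inside the eps-ball around x0 and keep the
    result with the highest adversarial loss. `attack_fn` returns a
    (perturbed_input, achieved_loss) pair."""
    best_x, best_loss = x0, float("-inf")
    for _ in range(n_restarts):
        start = x0 + rng.uniform(-eps, eps)   # random point in the ball
        x, loss = attack_fn(start)
        if loss > best_loss:
            best_x, best_loss = x, loss
    return best_x, best_loss

# toy single-run "attack" whose loss peaks at x = 0.5: with enough
# restarts, some starting point lands near the peak
rng = random.Random(0)
toy_attack = lambda s: (s, -(s - 0.5) ** 2)
best_x, best_loss = attack_with_restarts(toy_attack, 0.0, 1.0, 50, rng)
```

MT, ODI, and AA each fill in a different piece of this template: the loss function, the choice of `start`, or the inner step-size schedule.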
3.3 CW Attack
The CW attack was proposed by Carlini et al. [11]. As mentioned above, the PGD-CW attacks are not a strict implementation of the algorithm proposed by Carlini et al. [11]. Finding an adversarial example for an image $x$ within distance $\epsilon$ with the CW attack is equivalent to solving the following optimization problem:

$$\min_{x'} \; \|x' - x\|_2^2 + c \cdot f(x') \quad \text{s.t.} \;\; x' \in [0, 1]^n, \tag{13}$$

where $c$ is a hyperparameter and the best choice of $f$ is the margin loss $f(x') = \max\big(z_c(x') - \max_{i \ne c} z_i(x'),\, 0\big)$ according to the original paper. CW attacks differ from gradient-based PGD-like attacks in the following points.

CW's starting point can be any point, not necessarily within the $\epsilon$-radius ball.

CW uses variable substitution to control the boundary, instead of projection operations.

CW perturbs along the gradient direction rather than the gradient sign direction.

CW needs parameter fine-tuning for a better attack effect, including $c$, the step number, the step size, etc.
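The variable substitution in the second point is the change of variables from the original CW paper, sketched here (pure Python): any unconstrained real $w$ maps into the valid pixel range, so no projection step is needed.

```python
import math

def cw_box(w):
    """Carlini-Wagner change of variables: x = (tanh(w) + 1) / 2 maps
    any real w into the valid pixel range [0, 1], so the optimizer can
    search over w unconstrained instead of projecting after each step."""
    return (math.tanh(w) + 1.0) / 2.0

# arbitrarily large or small w still yields a valid pixel value
samples = [cw_box(w) for w in (-50.0, -1.0, 0.0, 1.0, 50.0)]
```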
To reduce the number of influencing factors, we only show results under the $\ell_2$ norm here and fix the hyperparameter $c$. The first row in Table 5 reports that under the CW attack, the label-smoothing model shows poor robustness, while the two adversarial training models still maintain a certain robustness.
Note that in the first experiment, the starting point was initialized to 0. To investigate the influence of the starting point on the results, we initialize it to the original image and test again. The results show that the accuracy of the label-smoothing model improves after the starting point is changed.
| Starting point | Label smoothing | Madry | Zhang |
|---|---|---|---|
| zero | 8.49 | 42.12 | 51.29 |
| original image | 37.83 | 43.14 | 52.73 |
From the above experiments, we can conclude that the label-smoothing model can be broken by the CW attack when it is executed strictly according to the original paper, and the key point is the selection of the starting point. Given that the optimization objective of the CW attack includes a perturbation-size term and allows arbitrary selection of the starting point, it is not surprising that the defense effect of label smoothing fails.
3.4 Our Attack
We propose a novel attack algorithm that combines the advantages of both PGD and CW: it uses the idea of variable substitution from the CW attack and the optimization objective of the PGD attack. For an $\ell_\infty$-bounded attack, the value of each pixel may change within $[x - \epsilon, x + \epsilon]$, while the pixel value itself must lie in $[0, 1]$. Combining the two, each adversarial pixel value $x'$ satisfies:

$$\max(0,\, x - \epsilon) \;\le\; x' \;\le\; \min(1,\, x + \epsilon). \tag{14}$$

We propose a new variable substitution to map this finite interval to an infinite interval, substituting $x'$ by the variable $w$:

$$x' = \frac{u - l}{2} \tanh(w) + \frac{u + l}{2}, \qquad l = \max(0,\, x - \epsilon), \quad u = \min(1,\, x + \epsilon). \tag{15}$$

Thus, we obtain an optimizable parameter $w$ without range restriction, and the generation of an adversarial example turns into an unconstrained optimization problem: $\max_w \mathcal{L}\big(f(x'(w)), y\big)$. We can solve this problem with gradient descent algorithms and then substitute the solution into Equation (15) to obtain the adversarial example $x'$. A detailed description of our attack can be found in Algorithm 1.
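The substitution can be sketched per pixel as follows (pure Python; a tanh-based map onto the feasible interval is assumed, in the spirit of the CW substitution, and the sample values are illustrative):

```python
import math

def box_substitute(w, x, eps):
    """Map an unconstrained variable w onto the interval
    [max(0, x - eps), min(1, x + eps)], i.e. the intersection of the
    L-infinity ball around pixel value x with the valid range [0, 1]."""
    lo = max(0.0, x - eps)
    hi = min(1.0, x + eps)
    return (hi - lo) / 2.0 * math.tanh(w) + (hi + lo) / 2.0

x, eps = 0.9, 0.3   # hypothetical pixel value and perturbation budget
vals = [box_substitute(w, x, eps) for w in (-10.0, 0.0, 10.0)]
```

Whatever value the optimizer assigns to $w$, the resulting pixel stays inside both the perturbation budget and the valid image range, so no projection or clipping is needed during the search.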
We use Algorithm 1 to perform an ablation study on the step number and step size, since they are the two independent factors in this algorithm.
Step size. We vary the step size from 0.01 to 0.1 at a granularity of 0.01. The number of steps is set to 500 to ensure convergence of the attack. The accuracy of the three models is illustrated in Figure 3a. As can be observed, the robustness of the label-smoothing model is significantly affected by the step size in comparison with the two adversarial training models: a larger step size yields a better attack effect.
Step number. We vary the number of steps from 50 to 500 at a granularity of 50, fixing the step size to 0.01 and 0.1. As can be observed in Figure 3b, under the same setting the two adversarial training models converge faster, while the accuracy of the label-smoothing model keeps declining as the step number increases.
In general, under the same settings, the step size and the step number are key to the attack success rate against the model, which also supports the previous conclusion.
3.5 Transferability
A significant requirement for robustness is invulnerability to transferred attacks, so we test the transferability of attacks against the label-smoothing model. An attack method is considered to have good transferability when adversarial examples generated on a source model also mislead the target model. Using a naturally trained model as the source model and a model with smoothing factor 0.9 as the target model, we test the robustness of the label-smoothing model under transferred attacks. For comparison, the two adversarial training models are also tested as target models. The results are shown in Table 6.
| Attack method | Madry | Zhang | Label smoothing |
|---|---|---|---|
| FGSM | 84.75 | 86.11 | 50.31 |
| PGD | 85.15 | 88.63 | 8.15 |
| CW | 85.19 | 87.90 | 9.23 |
The results in Table 6 show that the label-smoothing model is vulnerable to attacks transferred from the naturally trained model, while the two adversarial training models are almost uninjured. This means that switching from one-hot labels to label smoothing alters the direction of the gradient, making it incorrect for generating adversarial examples in most cases; however, adversarial examples can still be found by other strategies that adjust the direction. Conversely, adversarial training models ensure that for most examples no adversarial counterpart can be found, no matter how the direction of the perturbation is changed.
4 Related Work
Jiang et al. [22] in their recent paper identify "imbalanced gradients", a new situation in which traditional attacks such as PGD can fail and produce overestimated adversarial robustness. They state that imbalanced gradients occur when the gradient of one term of the margin loss dominates and pushes the attack towards a suboptimal direction. They mention that label smoothing causes imbalanced gradients, but do not explain how. We think that attributing the phenomenon solely to imbalanced gradients is too absolute, and our study reveals the robustness of the label-smoothing model in more detail.
5 Discussion
Taking all results together, we can empirically conclude that label smoothing does not bring real or stable robustness. It somehow bypasses most gradient-based attacks by exploiting their weaknesses. Increasing the step size, adding restarts, using an initialization strategy away from the original image, and similar changes can greatly reduce the robustness of the label-smoothing model and even completely overwhelm it. In other words, label smoothing makes gradient-based attack methods converge to local suboptima.
However, these shortcomings do not mean that the label-smoothing model has no practical value. First, label smoothing is an extremely fast training method with little additional training expense; second, among all defense methods it is the most accurate on clean samples, bypassing most of the currently widely used attacks without reducing the original accuracy of the model. Therefore, label smoothing is worth trying in practical situations where a relatively low level of defense needs to be achieved quickly and cheaply.
On the other hand, the research on label smoothing reveals several problems in current adversarial example research. First of all, many current attack methods, such as PGD, are not suitable for assessing the robustness of models, because the optimization objective does not guarantee the generation of an adversarial example, so the suboptimum to which the algorithm converges may not be adversarial. In addition, most papers use the CW attack to assess robustness, but only adopt its margin loss instead of the original algorithm; our experimental results show that the two give completely different results.
So far, our work has explored various properties of the label-smoothing model, but we have not found the underlying reason why label smoothing brings these changes to the model. At the same time, there is still no efficient attack against the label-smoothing model. If a quick attack method can be found, its shortcomings could be exploited to produce a more efficient attack algorithm, and applying a stronger attack to adversarial training could produce a more robust model. We will focus on exploring the essence of the effect of label smoothing in future work, and we hope the research community will commit to finding efficient and effective attack methods for evaluating the robustness of models.
6 Conclusion
In this paper, we studied the effect of label smoothing during training on a model's adversarial robustness. We theoretically analyzed why label smoothing invalidates most gradient-based attacks and evaluated the robustness of the label-smoothing model under various experimental settings. We conclude that the robustness produced by label smoothing is incomplete: increasing the step size, adding restart operations, and using different initialization strategies are the keys to breaking through the label-smoothing model. In general, label smoothing does not bring real or stable robustness. Our research reveals the urgent need for valid and more efficient attack methods in adversarial example research.
References

[1] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
[2] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8697–8710, 2018.
[3] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4780–4789, 2019.
[4] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, et al. GPipe: Efficient training of giant neural networks using pipeline parallelism. In Advances in Neural Information Processing Systems, pages 103–112, 2019.
[5] Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions, 2017.
[6] Jan Chorowski and Navdeep Jaitly. Towards better decoding and language model integration in sequence to sequence models. In Proc. Interspeech 2017, pages 523–527, 2017.
[7] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
[8] Rafael Müller, Simon Kornblith, and Geoffrey E. Hinton. When does label smoothing help? In Advances in Neural Information Processing Systems, pages 4694–4703, 2019.
[9] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.
[10] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[11] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017.
[12] Marius Mosbach, Maksym Andriushchenko, Thomas Trost, Matthias Hein, and Dietrich Klakow. Logit pairing methods can fool gradient-based attacks. In NeurIPS 2018 Workshop on Security in Machine Learning, 2018.
[13] Francesco Croce, Jonas Rauber, and Matthias Hein. Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks. International Journal of Computer Vision, pages 1–19, 2019.
[14] Preetum Nakkiran. Adversarial robustness may be at odds with simplicity. arXiv preprint arXiv:1901.00532, 2019.
[15] Jean-Baptiste Alayrac, Jonathan Uesato, Po-Sen Huang, Alhussein Fawzi, Robert Stanforth, and Pushmeet Kohli. Are labels required for improving adversarial robustness? In Advances in Neural Information Processing Systems, pages 12214–12223, 2019.
[16] Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C. Duchi, and Percy S. Liang. Unlabeled data improves adversarial robustness. In Advances in Neural Information Processing Systems, pages 11192–11203, 2019.
[17] Amir Najafi, Shin-ichi Maeda, Masanori Koyama, and Takeru Miyato. Robustness to adversarial perturbations in learning from incomplete data. In Advances in Neural Information Processing Systems, pages 5541–5551, 2019.
[18] Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pages 7472–7482, 2019.
[19] Sven Gowal, Jonathan Uesato, Chongli Qin, Po-Sen Huang, Timothy Mann, and Pushmeet Kohli. An alternative surrogate loss for PGD-based adversarial testing. arXiv preprint arXiv:1910.09338, 2019.
[20] Yusuke Tashiro, Yang Song, and Stefano Ermon. Diversity can be transferred: Output diversification for white- and black-box attacks. arXiv preprint arXiv:2003.06878, 2020.
[21] Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. arXiv preprint arXiv:2003.01690, 2020.
[22] Linxi Jiang, Xingjun Ma, Zejia Weng, James Bailey, and Yu-Gang Jiang. Imbalanced gradients: A new cause of overestimated adversarial robustness. arXiv preprint arXiv:2006.13726, 2020.