Label Smoothing and Adversarial Robustness

09/17/2020 ∙ by Chaohao Fu, et al. ∙ Shanghai Jiao Tong University

Recent studies indicate that current adversarial attack methods are flawed and prone to failure when encountering some deliberately designed defenses; sometimes even a slight modification of the model details invalidates the attack. We find that a model trained with label smoothing can easily achieve striking accuracy under most gradient-based attacks. For instance, the robust accuracy of a WideResNet model trained with label smoothing on CIFAR-10 reaches 75.10% under the PGD attack. To understand this subtle robustness, we investigate the relationship between label smoothing and adversarial robustness through a theoretical analysis of the characteristics of networks trained with label smoothing and experimental verification of their performance under various attacks. We demonstrate that the robustness produced by label smoothing is incomplete, based on the fact that its defense effect is volatile and that it cannot defend against attacks transferred from a naturally trained model. Our study encourages the research community to rethink how to evaluate a model's robustness appropriately.




1 Introduction

Deep neural networks have become increasingly effective at various tasks. When training a network, label smoothing is a commonly used "trick" to improve deep neural network performance: smoothed, partially uniform label vectors are used in place of one-hot label vectors when computing the cross-entropy loss during training. Szegedy et al. [1] originally proposed label smoothing as a regularization strategy that improved the performance of the Inception architecture on the ImageNet dataset. Since then, a line of works [2, 3, 4] on image classification has used this form of regularization to improve generalization, viewing label smoothing as output-distribution regularization that prevents overfitting of a neural network by softening the ground-truth labels to penalize overconfident outputs [5]. The method has been extended to many fields, including speech recognition [6] and machine translation [7]. Recent work has also revealed that label smoothing improves model calibration [8].

Over the last few years, there has been increasing work on neural networks' brittleness. Szegedy et al. [9] first revealed the existence of adversarial examples: imperceptible changes to an image can cause a DNN to mislabel it. Since then, researchers in the field have engaged in an arms race between attack and defense. Among attack methods, a commonly used family exploits the gradient information of models to construct or search for the direction of perturbation. We call them gradient-based methods; they include PGD [10], CW [11], and others.

However, recent studies have shown that evaluating robustness with these attack methods is insufficient and leads to overestimated robustness [12, 13]. Inspired by them, we find that training a model with label smoothing invalidates most gradient-based attacks: the robust accuracy of the label smoothing model exceeds that of most commonly accepted defense models. We investigate the robust and natural accuracy of models trained with and without label smoothing on various datasets. From Table 1, we can clearly see that the model using label smoothing has a significant defensive effect against adversarial attacks regardless of the dataset.

Dataset    Architecture      Circumstance  Accuracy (with LS)  Accuracy (without LS)
MNIST      4-layer CNN       natural       99.11               99.73
                             FGSM          99.03               0
CIFAR-10   WideResNet-34-10  natural       94.74               95.44
                             PGD           75.10               0
CIFAR-100  WideResNet-34-10  natural       80.50               81.10
                             PGD           34.23               0.03
Table 1: Natural accuracy and robust accuracy of different models trained with and without label smoothing.

Note that even for the adversarial training model [10], considerable computational cost is required to achieve adversarial robustness. Previous work shows that training an adversarially robust model is particularly difficult [10]: either the network requires more capacity to be robust [10, 14], or the model tends to require more data [15, 16, 17]. Therefore, the fact that merely modifying the training labels, without changing the network capacity or using more data, gives the model a defensive effect against adversarial attacks makes us raise the following question:

Does label smoothing bring real robustness to the DNNs?

In order to explore this problem, we study the relationship between label smoothing and adversarial robustness from both theoretical and experimental aspects. Our contributions are as follows:

  • We show that label smoothing has a very significant defensive effect against most gradient-based attacks.

  • We theoretically analyze why the label smoothing model invalidates most gradient-based attacks.

  • We demonstrate that label smoothing cannot bring stable robustness. We study the properties of the label smoothing model experimentally and summarize the key points for breaking through its robustness.

Note that attacks used to assess robustness should either be able to systematically find adversarial examples or be able to declare with high confidence that no adversarial examples exist. Our work reveals the necessity and urgency of proposing systematic and efficient attack methods to evaluate the robustness of neural networks.

2 Theoretical Analysis

In this section, we analyze how label smoothing training changes the model. We first give a theoretical analysis and then a toy example of why such changes affect the robustness of the model.

2.1 Preliminaries

An N-layer DNN is a mapping F that accepts an input x and produces an output F(x) with parameters θ. It can be expressed as:

    F(x) = softmax(f_N(f_{N−1}(⋯ f_1(x)))),    f_k(·) = σ(W_k · + b_k),

for a non-linear activation function σ, model weights W_k, and biases b_k. Let Z(x) = f_N(f_{N−1}(⋯ f_1(x))) denote the outputs of the last layer (the logits), and let p denote the output of the softmax layer; that is, p = softmax(Z(x)) and F(x) = p. Let z_i denote the logit of the i-th class, and let p_i be the i-th element of p, which satisfies 0 ≤ p_i ≤ 1 and Σ_i p_i = 1 due to the property of softmax. The DNN assigns the label ŷ = argmax_i p_i.

For a network trained with hard labels for a classification task with K classes, we use one-hot label vectors y, where y_i is "1" for the correct class and "0" for the rest. For a network trained with label smoothing, we use soft label vectors y^LS instead. The value of each dimension is modified according to y_i^LS = (1 − α) y_i + α/K. The parameter α, which we call the smoothing factor, measures the extent of smoothness. When α is small, the gap between the wrong and correct classes is relatively large, and the smoothness level is low. As α increases, the smoothness of the label vector increases, and the gap between the correct and wrong classes decreases.
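As a concrete illustration of the definition above, the soft-label construction can be written in a few lines of numpy (a minimal sketch; the helper name `smooth_labels` is ours, not the paper's):

```python
import numpy as np

def smooth_labels(y, num_classes, alpha):
    """Soften one-hot labels: y_i^LS = (1 - alpha) * y_i + alpha / K."""
    onehot = np.eye(num_classes)[y]               # (batch, K) one-hot rows
    return (1.0 - alpha) * onehot + alpha / num_classes

soft = smooth_labels(np.array([2]), num_classes=10, alpha=0.9)
```

With alpha = 0.9 and K = 10, the correct class receives 0.19 and every wrong class 0.09, so each row still sums to 1 while the gap between correct and wrong classes is small.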

2.2 Impacts of Label Smoothing

Most neural networks are trained with the cross-entropy (CE) objective loss L = −Σ_{i=1}^{K} y_i log p_i, where K is the number of classes. When y is a one-hot label, the loss can be written as

    L = − log p_c,    (2)

where p_c is the output probability of the correct class. And if we smooth the label according to y_i^LS = (1 − α) y_i + α/K, the CE loss becomes

    L_LS = − (1 − α + α/K) log p_c − (α/K) Σ_{i≠c} log p_i.    (3)

By substituting p_i = e^{z_i} / Σ_j e^{z_j} in Equations (2) and (3), we can further deduce the relationship between the loss function and the logits when using the one-hot label and the smoothed label, respectively:

    L = − z_c + log Σ_j e^{z_j},    (4)
    L_LS = − (1 − α) z_c − (α/K) Σ_i z_i + log Σ_j e^{z_j}.    (5)

Let m_i = z_c − z_i denote the margin between the logit of the correct class and that of another class i. Then,

    L = log(1 + Σ_{i≠c} e^{−m_i}),    (6)
    L_LS = log(1 + Σ_{i≠c} e^{−m_i}) + (α/K) Σ_{i≠c} m_i.    (7)

Traditional networks trained with hard labels decrease Equation (6) so as to increase the margins between the correct logit and the other logits. Note that there is one more term on the right side of Equation (7) than of Equation (6): when label smoothing is applied, a term opposite to the original optimization objective, one that is minimized by reducing m_i, is added to the loss function. Thus, while the original loss encourages the margins to grow without bound, with the intensity of encouragement (the gradient) gradually decreasing until convergence, the second term of Equation (7) suppresses the growth of the margins when label smoothing is applied. The ultimate effect of the two terms restraining each other is a compromise within a range acceptable to both; in other words, the value of m_i is limited to a range acceptable to both terms of the loss. Calculating the partial derivative of the loss with respect to a specific m_i gives

    ∂L_LS / ∂m_i = − e^{−m_i} / (1 + Σ_{j≠c} e^{−m_j}) + α/K = − p_i + α/K.    (8)

Since the partial derivative at an extreme point is 0, we can obtain the approximate range of m_i. In the ideal case, the value of p_i for each wrong class is a constant satisfying p_i = α/K, and by the properties of the softmax function we can conclude that the logits also lie in a bounded range whose size is determined by α. The approximate ranges of the logits observed experimentally are listed in Table 2.
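The extra margin term discussed above can be verified numerically: the smoothed-label loss equals the hard-label loss plus a penalty (α/K) Σ_{i≠c} m_i on the margins. A minimal sketch on random logits (all helper names ours):

```python
import numpy as np

def cross_entropy(z, y_vec):
    """CE between a label distribution y_vec and softmax(z)."""
    logp = z - (np.max(z) + np.log(np.sum(np.exp(z - np.max(z)))))
    return -np.sum(y_vec * logp)

K, c, alpha = 10, 3, 0.9
z = np.random.default_rng(0).normal(size=K)       # random logits
onehot = np.eye(K)[c]
soft = (1.0 - alpha) * onehot + alpha / K
margins = z[c] - np.delete(z, c)                  # m_i = z_c - z_i for i != c
# L_LS - L_hard should equal (alpha / K) * sum of margins, up to float error
gap = cross_entropy(z, soft) - cross_entropy(z, onehot)
```

The identity holds for any logit vector, which is why large margins directly raise the smoothed-label loss.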

α    logit range     α    logit range
0    [-5.00, 15]     0.5  [-0.30, 2.2]
0.1  [-0.55, 4.5]    0.6  [-0.25, 2.0]
0.2  [-0.45, 3.5]    0.7  [-0.20, 1.6]
0.3  [-0.40, 3.0]    0.8  [-0.14, 1.2]
0.4  [-0.35, 2.5]    0.9  [-0.08, 0.7]
Table 2: The approximate ranges of the logits for models with different smoothing factors α.
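The shrinking trend in Table 2 can be reproduced from the stationary condition, assuming it works out to p_i = α/K for every wrong class (and hence p_c = 1 − α + α/K for the correct one): the equilibrium margin is the log of their ratio. A minimal sketch, assuming K = 10 classes as on CIFAR-10:

```python
import numpy as np

def equilibrium_margin(alpha, K=10):
    """m_i = z_c - z_i = log(p_c / p_i) at the stationary point p_i = alpha / K."""
    return np.log((1.0 - alpha + alpha / K) / (alpha / K))

for alpha in (0.1, 0.5, 0.9):
    print(f"alpha={alpha}: margin ~ {equilibrium_margin(alpha):.2f}")
```

For α = 0.9 this gives a margin of about 0.75, of the same order as the tight range in the last row of Table 2, while α = 0.1 allows margins around 4.5, matching the much wider second row.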

2.3 Toy Example

Based on the above theoretical analysis and experimental validation, we use a simplified model to explain why label smoothing affects the effectiveness of adversarial attacks.

Two loss functions are commonly used in adversarial example generation algorithms. One is the cross-entropy loss, the other the margin loss:

    L_CE(x, y) = − log p_y,    (9)
    L_margin(x, y) = z_y − max_{i≠y} z_i.    (10)

Regardless of the loss function, the objective is to use the inverse direction of the gradient as the search direction for the adversarial example. Normally, along this direction the logit of the correct class decreases, and the predicted label changes when the search crosses the decision boundary (the point where z_c = max_{i≠c} z_i). If the perturbation size meets the requirement (within the ϵ-radius ball) at this point, the adversarial example is successfully found.
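The two losses can be sketched in numpy as follows (helper names ours); note that the margin loss turning negative is exactly the boundary-crossing condition described above:

```python
import numpy as np

def ce_loss(z, y):
    """Cross-entropy adversarial loss: -log p_y."""
    logp = z - (np.max(z) + np.log(np.sum(np.exp(z - np.max(z)))))
    return -logp[y]

def margin_loss(z, y):
    """Margin adversarial loss: z_y - max_{i != y} z_i."""
    return z[y] - np.max(np.delete(z, y))

z = np.array([1.0, 3.0, 0.5])
# Class 1 is predicted: its margin is positive. Class 0 is not: its margin
# is negative, i.e. the decision boundary has been crossed for that label.
```

Both losses decrease as the correct logit falls, which is why the inverse gradient direction is a sensible search direction for either choice.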

Figure 1: Illustration of how label smoothing impacts attacks. Because the decision boundary points of the blue model are farther apart, the number of boundary points within the permissible perturbation range is reduced, so a gradient-based search sometimes cannot find an adversarial example.

Let us take a toy example of what happens when label smoothing is applied. Consider a one-dimensional classification task; the relationship between the logits of the network output and the input is shown in Figure 1. The solid line represents the correct logit, and the dotted line represents one of the remaining logits. The orange lines represent the model without label smoothing, and the blue lines the model with it. Empirically, the two models reach their extremes at similar locations, but the range of the logits is smaller when label smoothing is applied, which shifts the locations of the decision boundary points between the two models. The decision boundary points of the smoothed model are farther apart, which leaves fewer boundary points within the allowable perturbation interval. As Figure 1 shows, assuming we use the logit of the correct class as the loss function and search for adversarial examples in its descent direction, the method fails to generate an adversarial example when no boundary point lies within the permissible interval around the input.

3 Experiment Verification

It has been revealed over the past years that most attack methods used to evaluate defense models are insufficient or give a wrong impression of robustness. We have demonstrated that label smoothing greatly improves the model's performance against the PGD attack. However, a more comprehensive understanding of the robustness of the label smoothing model is necessary. In this section, we use more representative attack methods to test the robustness more comprehensively and try to explain the experimental results with the conclusions of the previous section.

3.1 Experimental Setup

We consider the classification task on CIFAR-10, and every model's architecture is WideResNet-34-10. For ℓ∞-bounded attacks we fix a maximum perturbation ϵ, and likewise for ℓ2-bounded attacks. As baselines, we investigate the two most representative defensive models: standard adversarial training (Madry et al. [10]) and TRADES (Zhang et al. [18]). Both are typical models based on adversarial training, and their training settings are the same as in their original papers.

Figure 2: Accuracy of models trained with label smoothing for different smoothing factors.

To avoid concentrating too much on optimizing the hyper-parameter of the label smoothing model, we fix it as a constant. We therefore first test when the label smoothing model has the best defense performance: we train a series of networks with different smoothing factors and evaluate them under the same attack setting, with fixed maximum perturbation ϵ, step size η, and total perturbation steps T. The results are shown in Figure 2.

From Figure 2, we can intuitively observe that when the smoothing factor exceeds 0.3, the model starts to have a remarkable defense effect against PGD attacks; the effect reaches its maximum at a factor of 0.9 and then decreases. Based on the analysis in the previous section, this pattern of change is expected. Thus, in all subsequent experiments, the smoothing factor of all label smoothing models is set to 0.9, where the defense performance peaks.

We evaluate models in four aspects: PGD-like attacks, the CW attack, our proposed attack, and a transferability test. The experimental settings for each part are described in the corresponding subsections.

3.2 PGD Attack and Its Variants

Currently, the PGD attack is the most widely used algorithm for testing adversarial robustness. It has the advantage of being computationally cheap and performs well in many cases. However, it has been shown that PGD can lead to a significant overestimation of robustness, and in recent years a line of PGD variants has been proposed. We first give a definition of PGD-like attacks:

Given a classifier F, an original example x, the original target y, and an adversarial loss function L, PGD-like algorithms generate an adversarial example by performing the following procedure for one or more iterations.

  • Start the disturbance at a point x^0 that satisfies x^0 ∈ B_ϵ(x).

  • Search for a new point by x^{t+1} = Π_{B_ϵ(x)}(x^t + η · sign(∇_x L(x^t, y))),

where B_ϵ(x) represents the ϵ-ball centered on x, η is the step size, and Π_{B_ϵ(x)} indicates the operation of projecting back onto the ϵ-ball.

We can summarize three characteristics of PGD-like attack algorithms from the above definition.

  • The starting point of the perturbation lies within a norm-ball neighborhood of the original point.

  • The direction of the perturbation is the sign direction of the gradient of the adversarial loss function.

  • Projection operations are used to control boundaries.
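These three characteristics can be sketched end to end on a toy differentiable model (a random linear classifier with analytic gradients; everything below is illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)   # toy linear "network"

def ce_grad_x(x, y):
    """Gradient of the cross-entropy loss -log p_y with respect to the input x."""
    z = W @ x + b
    p = np.exp(z - z.max()); p /= p.sum()
    return W.T @ (p - np.eye(3)[y])

def pgd_like(x, y, eps=0.3, eta=0.05, steps=20):
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)        # 1. random start in the eps-ball
    for _ in range(steps):
        x_adv = x_adv + eta * np.sign(ce_grad_x(x_adv, y))  # 2. gradient-sign step (ascent)
        x_adv = np.clip(x_adv, x - eps, x + eps)            # 3. project back onto the ball
    return x_adv

x = rng.normal(size=4)
y = int(np.argmax(W @ x + b))
x_adv = pgd_like(x, y)
```

The `clip` call realizes the projection Π for the ℓ∞ ball; for other norms it would be replaced by the corresponding projection.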

Attack   Restarts  Initialization  Steps  Loss function
FGSM     w/o       original        1      cross-entropy
PGD-10   w/o       random          10     cross-entropy
PGD-20   w/o       random          20     cross-entropy
PGD-40   w/o       random          40     cross-entropy
PGD-CW   w/o       random          20     margin
MT       18        random          20     different margin
ODI      20        calculated      20     margin
AA       1         random          100    auto cross-entropy
Table 3: The components and specific settings of different PGD-like attack algorithms.
Defense  clean  FGSM   PGD-10  PGD-20  PGD-40  PGD-CW  MT     ODI    AA
Madry    86.83  56.88  52.32   51.24   51.14   50.73   50.34  49.13  49.35
Zhang    84.92  64.87  56.12   55.09   55.89   53.69   52.55  52.71  53.15
LS       94.74  67.17  64.01   74.83   73.15   74.10   25.02  0.15   6.86
Table 4: Robustness evaluation of different models under various PGD-like attacks. We report clean test accuracy and robust accuracy under each attack. The attacks fall into two categories: classic PGD-like attacks and recently improved PGD-like attacks.

We mainly apply two classic methods (FGSM and PGD) and three recently proposed methods (MT, ODI, and AA) to test the adversarial robustness of the label smoothing model. The three new attack algorithms are considered to produce more accurate robustness evaluations than traditional PGD. MT denotes the MultiTargeted attack [19], ODI the Output Diversified Initialization attack [20], and AA AutoAttack [21].

We first explain how these attacks differ from each other while following the pattern defined above, and introduce their specific settings in the following experiments. FGSM and PGD both perform the above procedure once, without restarts. FGSM is the simplest PGD-like algorithm: it perturbs only one step from the starting point, but with the largest step size. For the PGD attack, we test three settings with different step counts and step sizes, denoted PGD-10, PGD-20, and PGD-40. The PGD-CW attack is the way the CW attack is realized in most defense papers: in this implementation, only the cross-entropy loss of the PGD attack is replaced by the margin loss. The starting points of these attacks are randomly sampled from the ϵ-radius ball B_ϵ(x). The MT attack perturbs with multiple restarts and picks a new target-class logit for the margin loss at each restart. ODI provides a more effective initialization strategy that determines starting points with diversified logits at each restart. AA is a parameter-free ensemble of four attacks: FAB, two Auto-PGD attacks with different loss functions, and the black-box Square Attack. Here, we use one of its Auto-PGD attacks, APGD-CE, as a PGD variant; APGD-CE improves the attack by searching for a good initial point with restarts and by a step-size selection strategy. The components and specific settings of these attacks are listed in Table 3.

The results in Table 4 fall into three parts. First, the label smoothing model reaches the best natural accuracy, since no noise is added to the training samples. Second, under all the classic PGD-like attacks, including FGSM and PGD, the label smoothing model also achieves the strongest defense effect compared with the two adversarial training models. Notably, the accuracy of the two adversarial training models barely fluctuates as the number of steps increases (and the step size decreases), whereas the label smoothing model shows a clear rising trend. Third, the three recently proposed attacks reduce the accuracy of the label smoothing model enormously, while the accuracy of the two adversarial training models decreases only slightly.

Synthesizing the performance of the label smoothing model under different PGD-like attacks, we can empirically conclude that it does not have stable robustness. First, its robustness is readily affected by the step size and step number; a plausible explanation is that a larger step size helps the search jump out of locally wrong perturbation directions. Second, the three improved PGD-like attacks almost destroy the model's defense, though to varying degrees, mainly owing to their restart mechanisms: restarting tries more perturbation directions and keeps the best one. MT varies the direction through margin losses over different classes, ODI through different initial points, and AA through larger step sizes. Among them, changing the starting point works best, increasing the step size is second, and changing the loss function is least effective. In addition, the result of the PGD-CW attack shows that whether the loss function is cross-entropy or margin is not critical.

3.3 CW Attack

The CW attack was proposed by Carlini et al. [11]. As mentioned above, the PGD-CW attack is not a strict implementation of the CW attack according to the algorithm proposed by Carlini et al. [11]. Finding an adversarial example for an image x within distance ϵ with the CW attack is equivalent to solving the following optimization problem:

    min_δ ||δ||_p + c · f(x + δ),  s.t.  x + δ ∈ [0, 1]^n,

where c is a hyper-parameter and the best choice of f, according to the original paper, is the margin loss. CW attacks differ from gradient-based PGD-like attacks in the following points.

  • CW’s starting point can be any point, not necessarily within the ϵ-radius ball B_ϵ(x).

  • CW uses variable substitution to control boundaries, instead of projection operations.

  • CW’s perturbation follows the gradient direction rather than the gradient-sign direction.

  • CW needs parameter fine-tuning for a better attack effect, including c, the step number, the step size, etc.
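For reference, the variable substitution in the second point is, in the original CW attack, a tanh change of variables; a minimal sketch (helper names ours):

```python
import numpy as np

def to_box(w):
    """CW change of variables: x = (tanh(w) + 1) / 2 always lies in [0, 1]."""
    return (np.tanh(w) + 1.0) / 2.0

def from_box(x, eps=1e-6):
    """Inverse map, so the optimization can start from any chosen image."""
    x = np.clip(x, eps, 1.0 - eps)
    return np.arctanh(2.0 * x - 1.0)

x0 = np.array([0.0, 0.25, 0.9])   # e.g. a black image or the original image
w0 = from_box(x0)                 # optimize w freely; to_box(w) needs no projection
```

Because any real w maps into the valid pixel range, no projection step is needed, and `from_box` lets the optimization start from an arbitrary image, which is exactly the degree of freedom examined below.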

To reduce the number of influencing factors, we only show results under the ℓ2 norm here and fix the hyper-parameters. The first row in Table 5 shows that under the CW attack, label smoothing exhibits poor robustness, while the two adversarial training models still maintain certain robustness.

Note that in the first experiment, the starting point was initialized to 0 (a black image). To investigate the influence of the starting point on the results, we re-initialize it to the original image and test again. The results show that the accuracy of the label smoothing model improves after the starting point is changed.

Starting point            LabelSmoothing  Madry  Zhang
black image (x^0 = 0)     8.49            42.12  51.29
original image (x^0 = x)  37.83           43.14  52.73
Table 5: Accuracy of different models under the CW attack using the original algorithm. The distance metric is the ℓ2 norm, and the confidence bias is -50. The starting point of the first row is the black image (x^0 = 0), and the starting point of the second row is the original image (x^0 = x).

From the above experiments on the CW attack, we can conclude that the label smoothing model can be broken by the CW attack when it is executed strictly according to the original paper, and that the key to breaking it is the selection of the starting point. Given that the optimization objective of the CW attack includes a perturbation-size term and allows an arbitrary starting point, it is not surprising that the defense effect of label smoothing fails.

3.4 Our Attack

We propose a novel attack algorithm that combines the advantages of both PGD and CW. The method uses the variable-substitution idea from the CW attack and the optimization objective from the PGD attack. For an ℓ∞ norm-bounded attack, the value of each pixel may change within [x_i − ϵ, x_i + ϵ], while the pixel value itself has range [0, 1]. Combining them, each adversarial pixel value x'_i satisfies:

    max(0, x_i − ϵ) ≤ x'_i ≤ min(1, x_i + ϵ).    (14)

We propose a new variable substitution to map this finite interval to an infinite interval. Substitute x'_i by the variable w_i:

    x'_i = l_i + (u_i − l_i) · (tanh(w_i) + 1) / 2,    (15)

where l_i = max(0, x_i − ϵ) and u_i = min(1, x_i + ϵ) are the bounds from Equation (14). Thus, we obtain an optimizable parameter w without range restriction, and the generation of adversarial examples turns into an unconstrained optimization problem: min_w L(x'(w), y). We can solve this problem with gradient descent algorithms and then substitute the solution into Equation (15) to obtain the adversarial example x'. A detailed description of our attack can be found in Algorithm 1.
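Under our reading of Equations (14) and (15) (a tanh map onto the per-pixel interval; helper names ours), the substitution can be sketched as:

```python
import numpy as np

def pixel_bounds(x, eps):
    """Per-pixel interval combining the eps-ball with the valid range [0, 1]."""
    return np.maximum(x - eps, 0.0), np.minimum(x + eps, 1.0)

def substitute(w, lo, hi):
    """Map the unconstrained variable w into [lo, hi] via tanh."""
    return lo + (hi - lo) * (np.tanh(w) + 1.0) / 2.0

x = np.array([0.02, 0.5, 0.98])
lo, hi = pixel_bounds(x, eps=0.1)
x_adv = substitute(np.array([-5.0, 0.0, 7.0]), lo, hi)  # any real-valued w is feasible
```

Any gradient step on w therefore yields an x_adv that automatically satisfies both the ϵ-ball and the [0, 1] constraints, with no projection required.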

Input: the initial image x; label y; model F.
Output: adversarial example x'.
Parameters: perturbation bound ϵ, step size η, number of steps T.

1:  Initialize w by inverting Equation (15) at x' = x;
2:  Initialize the bounds l, u from Equation (14);
3:  Calculate x' by Equation (15);
4:  for t = 1 to T do
5:     Update w by a gradient-descent step on L(x'(w), y);
6:     Calculate x' by Equation (15);
7:     if F(x') ≠ y then
8:        return x';
9:     end if
10: end for
11: return x'
Algorithm 1: Our attack algorithm.

We use Algorithm 1 to run an ablation study on the step number and step size, since they are the two independent factors in this algorithm.

Figure 3: Parameter analysis of Algorithm 1: (a) accuracies of the 3 models under our attack with different step sizes; (b) accuracies of the 3 models under our attack with different step numbers.

Step size. We vary the step size from 0.01 to 0.1 at a granularity of 0.01. The number of steps is set to 500 to ensure convergence of the attack. The accuracies of the three models are illustrated in Figure 3(a). As can be observed, the robustness of the label smoothing model is significantly affected by the step size compared with the two adversarial training models; a larger step size yields a better attack effect.

Step number. We vary the number of steps from 50 to 500 at a granularity of 50, fixing the step size to 0.01 and 0.1. As can be observed in Figure 3(b), under the same setting, the two adversarial training models converge faster, while the accuracy of the label smoothing model keeps declining as the step number increases.

In general, under the same settings, the step size and the step number are key to the attack success rate, which also supports the previous conclusion.

3.5 Transferability

A significant requirement for robustness is invulnerability to transferred attacks, so we test the transferability of attacks against the label smoothing model. An attack method is considered to have good transferability when adversarial examples generated on the source model also mislead the target model. Using a naturally trained model as the source model and a model with smoothing factor 0.9 as the target model, we test the robustness of the label smoothing model under transferred attacks. For comparison, the two adversarial training models are also tested as target models. The results are shown in Table 6.

Attack Methods Madry Zhang LabelSmoothing
FGSM 84.75 86.11 50.31
PGD 85.15 88.63 8.15
CW 85.19 87.90 9.23
Table 6: The transferable accuracy of 3 models with different attack methods. Source model is a naturally-trained model.
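The transfer protocol can be sketched on a toy pair of random linear "models" (purely illustrative, not the paper's architectures): the adversarial example is crafted on the source model only, and transfer means the target model's prediction flips:

```python
import numpy as np

rng = np.random.default_rng(1)
W_src = rng.normal(size=(3, 4))   # stands in for the naturally trained source model
W_tgt = rng.normal(size=(3, 4))   # stands in for the target model under test

def predict(W, x):
    return int(np.argmax(W @ x))

def fgsm(W, x, y, eps=0.5):
    """One gradient-sign step on the cross-entropy loss of the source model."""
    z = W @ x
    p = np.exp(z - z.max()); p /= p.sum()
    grad = W.T @ (p - np.eye(3)[y])     # d(-log p_y)/dx for logits z = W x
    return x + eps * np.sign(grad)

x = rng.normal(size=4)
x_adv = fgsm(W_src, x, predict(W_src, x))   # crafted without touching W_tgt
transferred = predict(W_tgt, x_adv) != predict(W_tgt, x)
```

Reported transfer accuracy is then the fraction of test examples for which the target model's prediction does not change.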

The results shown in Table 6 manifest that the label smoothing model is vulnerable to attacks transferred from a naturally trained model, while the other two models are almost uninjured. This means that replacing one-hot labels with smoothed labels alters the direction of the gradient, making it point the wrong way for generating adversarial examples in most cases; however, adversarial examples can still be found by other strategies that adjust the direction. Conversely, for adversarial training models, most examples admit no adversarial counterpart no matter how the direction of the perturbation is changed.

4 Related Work

Jiang et al. [22], in their recent paper, identify "imbalanced gradients", a new situation in which traditional attacks such as PGD can fail and produce overestimated adversarial robustness. They state that imbalanced gradients occur when the gradient of one term of the margin loss dominates and pushes the attack toward a suboptimal direction. They mention that label smoothing causes imbalanced gradients, but they do not explain how. We consider explaining the reason solely through imbalanced gradients too absolute, and our study reveals the robustness of the label smoothing model in more detail.

5 Discussion

Taking all results together, we can empirically conclude that label smoothing does not bring real or stable robustness. It somehow bypasses most gradient-based attacks by exploiting their weaknesses. Increasing the step size, adding restarts, using an initialization strategy far from the initial image, and so on can greatly reduce the robustness of the label smoothing model and even completely overwhelm it. In other words, label smoothing makes gradient-based attack methods converge to local sub-optima.

However, these shortcomings do not mean the label smoothing model has no practical value. First, label smoothing is an extremely fast training method with little additional training expense. Second, it is the most accurate method on clean samples among all defense methods, bypassing most of the currently widely used attacks without reducing the model's original accuracy. Therefore, label smoothing is worth trying in practical situations where a relatively low level of defense needs to be achieved quickly and easily.

On the other hand, research on label smoothing reveals many problems in current adversarial example research. First of all, most current attack methods, such as PGD, are not suitable for assessing the robustness of models, because their optimization target does not guarantee the generation of an adversarial example, so the sub-optimum the algorithm converges to may not be adversarial. In addition, most papers assess robustness with the "CW attack" but only swap in the margin loss rather than running the original algorithm; our experimental results show that the two yield completely different outcomes.

So far, our work has explored various properties of the label smoothing model, but we have not established why label smoothing brings these changes to the model, and there is still no efficient attack against it. If we can find a quick attack method, it will mean that we can exploit these shortcomings to produce a more efficient attack algorithm and, by applying the stronger attack to adversarial training, a more robust model. We will focus on exploring the essence of the effect of label smoothing in future work, and we hope the research community will commit to finding efficient and effective attack methods for evaluating model robustness.

6 Conclusion

In this paper, we studied the effect of label smoothing during training on a model's adversarial robustness. We theoretically analyzed why label smoothing invalidates most gradient-based attacks and evaluated the robustness of the label smoothing model in various experimental settings. We conclude that the robustness produced by label smoothing is incomplete: increasing the step size, adding restarts, and using different initialization strategies are the keys to breaking through it. In general, label smoothing does not bring real or stable robustness. Our research reveals the urgent need for valid and more efficient attack methods in adversarial example research.