Reliable Adversarial Distillation with Unreliable Teachers

06/09/2021 · by Jianing Zhu, et al.

In ordinary distillation, student networks are trained with soft labels (SLs) given by pretrained teacher networks, and students are expected to improve upon teachers since SLs are stronger supervision than the original hard labels. However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers are also good at every adversarial data queried by students. Therefore, in this paper, we propose reliable introspective adversarial distillation (IAD) where students partially instead of fully trust their teachers. Specifically, IAD distinguishes between three cases given a query of a natural data (ND) and the corresponding adversarial data (AD): (a) if a teacher is good at AD, its SL is fully trusted; (b) if a teacher is good at ND but not AD, its SL is partially trusted and the student also takes its own SL into account; (c) otherwise, the student only relies on its own SL. Experiments demonstrate the effectiveness of IAD for improving upon teachers in terms of adversarial robustness.


1 Introduction

Deep Neural Networks (DNNs) have shown excellent performance on a range of tasks in computer vision (He et al., 2016) and natural language processing (Devlin et al., 2019). Nevertheless, Szegedy et al. (2014); Goodfellow et al. (2015) demonstrated that DNNs can be easily fooled by adding small perturbations to natural examples, which raises concerns about the robustness of DNNs in trust-sensitive areas, e.g., finance (Kumar et al., 2020) and autonomous driving (Litman, 2017). To overcome this problem, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) was proposed and has proven effective for obtaining adversarially robust DNNs.

Most existing adversarial training approaches focus on learning from data directly. For example, the popular adversarial training (AT) (Madry et al., 2018) leverages multi-step projected gradient descent (PGD) to generate adversarial examples and feeds them into standard training. Zhang et al. (2019) developed TRADES on the basis of AT to balance standard accuracy and robust performance. Recently, several methods under this paradigm have been developed to improve model robustness (Wang et al., 2019; Alayrac et al., 2019; Zhang et al., 2020; Jiang et al., 2020; Ding et al., 2020; Wang et al., 2020b; Du et al., 2021; Tian et al., 2021; Zhang et al., 2021). However, directly learning from adversarial examples is challenging on complex datasets, since the loss with hard labels is difficult to optimize, which limits the achievable robust accuracy.

To mitigate this issue, one emerging direction is to distill robustness from adversarially pre-trained models, which has shown promise in recent studies. For example, Ilyas et al. (2019) used an adversarially pre-trained model to build a “robustified” dataset for learning a robust DNN. Chen et al. (2020); Salman et al. (2020) explored boosting model robustness through fine-tuning or transfer learning from adversarially pre-trained models. Goldblum et al. (2020) and Chen et al. (2021) investigated distilling robustness from adversarially pre-trained models, termed adversarial distillation for simplicity, where student models are encouraged to mimic the outputs (i.e., soft labels) of the adversarially pre-trained teachers.

(a) Unreliability issue
(b) Overview of Introspective Adversarial Distillation (IAD)
Figure 1: (a) Unreliability issue: Comparison of the teacher model’s accuracy on the natural data and on the student model’s adversarial data from the CIFAR-10 dataset. Different from ordinary distillation, in which the natural data is unchanged and the teacher model has consistent standard performance, the teacher model’s robust accuracy on the student model’s adversarial data decreases during distillation, which means the guidance of the teacher model in adversarial distillation becomes progressively unreliable. (b) Overview of Introspective Adversarial Distillation (IAD): The student partially trusts the teacher’s guidance and partially trusts self-introspection during adversarial distillation. Specifically, the student model generates the adversarial data by itself and partially mimics the outputs of both the teacher model and itself.

However, one critical difference exists: in conventional distillation, the teacher model and the student model share the natural training data, whereas in adversarial distillation, the adversarial training data of the student model and of the teacher model are egocentric (each generated by the model itself) and become increasingly challenging during training. Given this distinction, are the soft labels acquired from the teacher model in adversarial distillation always reliable and informative guidance? To answer this question, we take a closer look at the process of adversarial distillation. As shown in Figure 1(a), we discover that, along with training, the teacher model progressively fails to give correct predictions for the adversarial data queried by the student model. The reason could be that, with the student becoming more adversarially robust and the adversarial data thus becoming harder, it is too demanding to require that the teacher is always good at every adversarial example queried by the student model, as the teacher model has never seen these data in its pre-training. In contrast, in conventional distillation, student models are expected to distill “static” knowledge from the teacher model, since the soft labels of the teacher model for the natural data are always fixed.

The observation in Figure 1(a) raises the challenge: how can we conduct reliable adversarial distillation with unreliable teachers? To solve this problem, we categorize the training data into three cases according to the teacher's predictions on natural and adversarial data. First, if the teacher model can correctly classify both the natural and the adversarial data, it is reliable. Second, if the teacher model can correctly classify the natural but not the adversarial data, it should be partially trusted, and the student model is suggested to also trust itself to enhance robustness, as an adversarial regularization (Zhang et al., 2019). Third, if the teacher model can classify neither the natural nor the adversarial data correctly, the student model is recommended to trust itself entirely. Following this intuition, we propose Introspective Adversarial Distillation (IAD) to effectively utilize the knowledge from an adversarially pre-trained teacher model. The framework of our proposed IAD is shown in Figure 1(b). Briefly, the student model is encouraged to partially instead of fully trust the teacher model, and to gradually trust itself more as it becomes more adversarially robust. We conduct extensive experiments on the benchmark CIFAR-10/CIFAR-100 datasets and the more challenging Tiny-ImageNet dataset to evaluate the effectiveness of our IAD. The main contributions of our work can be summarized as follows.

  1. We take a closer look at adversarial distillation under the teacher-student paradigm. Considering adversarial robustness, we discover that the guidance of the teacher model becomes progressively unreliable as adversarial training proceeds.

  2. We construct the reliable guidance for adversarial distillation by flexibly utilizing the robust knowledge from the teacher model: (a) if a teacher is good at adversarial data, its soft labels can be fully trusted; (b) if a teacher is good at natural data but not adversarial data, its soft labels should be partially trusted and the student also takes its own soft labels into account; (c) otherwise, the student only relies on its own soft labels.

  3. We propose Introspective Adversarial Distillation (IAD) to automatically realize the intuition of the above reliable guidance during adversarial distillation. The experimental results confirm that our approach can improve adversarial robustness across a variety of training settings and evaluations, especially on datasets that are challenging in terms of adversarial robustness (e.g., CIFAR-100 (Krizhevsky, 2009) and Tiny-ImageNet (Le and Yang, 2015)) or when using large models (e.g., WideResNet (Zagoruyko and Komodakis, 2016)).

2 Related Work

2.1 Adversarial Training

Adversarial examples (Goodfellow et al., 2015) have motivated many defensive approaches in the last few years. Among them, adversarial training has been demonstrated to be the most effective method to improve the robustness of DNNs (Cai et al., 2018; Wang et al., 2020a, b; Jiang et al., 2020; Bai et al., 2021; Chen et al., 2021; Wu et al., 2020). The formulation of the popular AT (Madry et al., 2018) and its variants can be summarized as the minimization of the following loss:

$$\min_{\theta} \frac{1}{n}\sum_{i=1}^{n} \ell\big(f_{\theta}(\tilde{x}_i), y_i\big), \qquad (1)$$

where $n$ is the number of training examples, $\tilde{x}_i$ is the adversarial example within the $\epsilon$-ball (bounded by an $\ell_p$-norm) centered at the natural example $x_i$, $y_i$ is the associated label, $f_{\theta}$ is the DNN with parameter $\theta$, and $\ell$ is the standard classification loss, e.g., the cross-entropy loss. Adversarial training leverages adversarial examples to smooth the small neighborhood of each natural example, making the model prediction locally invariant. To generate the adversarial examples, AT employs the PGD method (Madry et al., 2018). Concretely, given a sample $x$ and a step size $s > 0$, PGD recursively searches

$$\tilde{x}^{(t+1)} = \Pi_{\mathcal{B}_{\epsilon}[x]}\Big(\tilde{x}^{(t)} + s \cdot \mathrm{sign}\big(\nabla_{\tilde{x}^{(t)}} \ell\big(f_{\theta}(\tilde{x}^{(t)}), y\big)\big)\Big), \qquad (2)$$

until a certain stopping criterion is satisfied. In Eq. (2), $\tilde{x}^{(0)} = x$, $\ell$ is the loss function, $\tilde{x}^{(t)}$ is the adversarial data at step $t$, $y$ is the corresponding label of the natural data, and $\Pi_{\mathcal{B}_{\epsilon}[x]}$ is the projection function that projects the adversarial data back into the $\epsilon$-ball centered at $x$.
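To make the update in Eq. (2) concrete, the following PyTorch-style sketch implements PGD under an $\ell_\infty$ constraint with a random start. It is a minimal illustration: the helper name `pgd_attack` and the default values of `eps`, `step_size`, and `num_steps` are illustrative assumptions, not necessarily the settings used in this paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step_size=2/255, num_steps=10):
    """PGD under an l_inf ball, following the recursion in Eq. (2)."""
    # random start: add a uniform perturbation inside the eps-ball
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                # ascent step on the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                   # keep a valid pixel range
    return x_adv.detach()
```

The sign of the gradient corresponds to the $\mathrm{sign}(\cdot)$ operator in Eq. (2), and the min/max clipping realizes the projection $\Pi_{\mathcal{B}_{\epsilon}[x]}$.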

2.2 Knowledge Distillation

The idea of distillation from other models dates back to Craven and Shavlik (1996) and was re-introduced by Hinton et al. (2015) as knowledge distillation (KD). It has been widely studied in recent years and works well in numerous applications such as model compression and transfer learning. For adversarial defense, a few studies have explored obtaining adversarially robust models by distillation. Papernot et al. (2016) proposed defensive distillation, which utilizes the soft labels produced by a standard pre-trained teacher model, though this method was shown not to be resistant to the C&W attack (Carlini and Wagner, 2017). Goldblum et al. (2020) combined AT with KD to transfer robustness to student models, and they found that the distilled models can outperform adversarially pre-trained teacher models of identical architecture in terms of adversarial robustness. Chen et al. (2021) utilized distillation as a regularization for adversarial training, employing both robust and standard pre-trained teacher models to address robust overfitting (Rice et al., 2020).

Nonetheless, all these methods fully trust teacher models and do not consider whether the guidance of the teacher model in distillation is reliable. In this paper, different from previous studies, we find that the teacher model in adversarial distillation is not always trustworthy. Based on that, we propose reliable IAD to encourage student models to partially instead of fully trust teacher models, which effectively utilizes the knowledge from adversarially pre-trained models.

3 A Closer Look at Adversarial Distillation

In Section 3.1, we discuss the unreliability issue of adversarial distillation, i.e., the guidance of the teacher model becomes progressively unreliable along with adversarial training. In Section 3.2, we partition the training examples into three parts and analyze them part by part. Specifically, we expect the student model to partially instead of fully trust the teacher model, and to gradually trust itself more along with adversarial training.

3.1 Fully Trust: Progressively Unreliable Guidance

As mentioned in the Introduction, previous methods (Goldblum et al., 2020; Chen et al., 2021) fully trust the teacher model when distilling robustness from adversarially pre-trained models. Taking Adversarial Robust Distillation (ARD) (Goldblum et al., 2020) as an example, we illustrate its procedure in the left part of Figure 1(b): the student model generates its adversarial data and then optimizes its predictions on them to mimic the output of the teacher model. However, although the teacher model is well optimized on the adversarial data queried by itself, we argue that it might not always be good at the increasingly challenging adversarial data queried by the student model.

As shown in Figure 1(a), different from ordinary distillation, in which the teacher model has consistent standard performance on the natural data, its robust accuracy on the student model’s adversarial data decreases during distillation. The teacher model gradually fails to give correct outputs on the adversarial data queried by the student model.

(a) Toy illustration
(b) Number changes on the three kinds of data during distillation
Figure 2: (a) Toy illustration: An illustration of three situations for the prediction of the teacher model. The blue/orange areas represent the decision region of the teacher/student model, and the red dashed box represents the norm ball of AT. 1) $\tilde{x}_1$: the adversarial example $\tilde{x}_1$ generated from the natural example $x_1$ is located in the blue area, where the teacher model predicts correctly; 2) $\tilde{x}_2$: the adversarial example $\tilde{x}_2$ generated from $x_2$ is outside both the orange and the blue areas, where the teacher model gives a wrong prediction; 3) $x_3$: a natural example that neither model can correctly predict. (b) Number changes of the three kinds of data: We trace the number of the three types of data during adversarial distillation on the CIFAR-10 dataset. Specifically, the number of examples like $\tilde{x}_1$ decreases and that of examples like $\tilde{x}_2$ increases during adversarial distillation.

3.2 Partially Trust: Construction of Reliable Guidance

The unreliability issue of the teacher model in adversarial distillation raises the challenge of how to conduct reliable adversarial distillation with unreliable teachers. Intuitively, this requires us to reconsider the guidance of adversarially pre-trained models along with adversarial training. For simplicity, we use $T(x)$ ($T(\tilde{x})$) to represent the predicted label of the teacher model on the natural (adversarial) example, and use $y$ to represent the targeted label. We partition the adversarial samples into three parts as shown in the toy illustration (Figure 2(a)), and analyze them part by part.

1) $T(x)=y$ and $T(\tilde{x})=y$: As can be seen in Figure 2(a), this part of the data, whose adversarial variants are like $\tilde{x}_1$, is the most trustworthy among the three parts, since the teacher model performs well on both the natural and the adversarial data. In this case, we can trust the guidance of the teacher model on this part of the data. However, as shown in Figure 2(b), the number of samples in this part decreases along with adversarial training. That is, what the student can rely on from the teacher model in adversarial distillation is progressively reduced.

2) $T(x)=y$ but $T(\tilde{x})\neq y$: In Figure 2(b), we also check the number change of the data whose adversarial variants are like $\tilde{x}_2$. In contrast to the previous category, the number of this kind of data increases during distillation. Since the teacher model’s outputs on the small neighborhood of the queried natural data are not always correct, its knowledge there may not be robust and its guidance for the student model is not reliable. Recalling the reason for the decrease in the robust accuracy of the teacher model, the student model itself may also be trustworthy, since it gradually becomes adversarially robust during distillation.

3) $T(x)\neq y$: As for the data like $x_3$ in Figure 2(a), the guidance of the teacher model is totally unreliable, since its predicted labels on the natural data are already wrong. The student model should instead trust itself and encourage its outputs on the adversarial data to mimic its own outputs on the corresponding natural data rather than the wrong outputs of the teacher model. First, this removes the potential threat that the teacher’s guidance acts as noisy labels for training. Second, as an adversarial regularization (Zhang et al., 2019), it can improve model robustness by enhancing the stability of the model’s outputs on the natural data and the corresponding adversarial data.

To sum up, we suggest employing reliable guidance from the teacher model and encouraging the student model to trust itself more, as the teacher model’s guidance becomes progressively unreliable and the student model gradually becomes more adversarially robust.
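As a concrete illustration of this three-way partition (and of the counts traced in Figure 2(b)), the sketch below categorizes one batch by the teacher's predictions; it is a toy example, and the names `teacher`, `x`, `x_adv`, and `y` are illustrative.

```python
import torch

@torch.no_grad()
def categorize(teacher, x, x_adv, y):
    """Count the three cases of Section 3.2 for one batch of data."""
    pred_nat = teacher(x).argmax(dim=1)        # T(x)
    pred_adv = teacher(x_adv).argmax(dim=1)    # T(x~)
    case1 = (pred_nat == y) & (pred_adv == y)  # correct on both: fully trust the teacher
    case2 = (pred_nat == y) & (pred_adv != y)  # correct on natural only: partially trust
    case3 = (pred_nat != y)                    # wrong on natural data: rely on the student
    return case1.sum().item(), case2.sum().item(), case3.sum().item()
```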

4 Introspective Adversarial Distillation

Based on the previous analysis of adversarial distillation, we propose Introspective Adversarial Distillation (IAD) to better utilize the guidance of the adversarially pre-trained model. Concretely, we use the following KD-style loss, composed of teacher guidance and student introspection:

$$\ell_{\mathrm{IAD}}(x, \tilde{x}, y) = \alpha \cdot \mathrm{KL}\big(S^{\tau}(\tilde{x}) \,\|\, T^{\tau}(\tilde{x})\big) + (1-\alpha) \cdot \mathrm{KL}\big(S^{\tau}(\tilde{x}) \,\|\, S^{\tau}(x)\big), \qquad (3)$$

where $S^{\tau}(\cdot)$ is the tempered variant of the student output $S(\cdot)$ with the temperature $\tau$, $T^{\tau}(\cdot)$ is the tempered variant of the teacher output $T(\cdot)$, $\tilde{x}$ is the adversarial data generated from the natural data $x$, and $\mathrm{KL}(\cdot\,\|\,\cdot)$ is the KL-divergence loss. As for the annealing parameter $\alpha \in [0, 1]$ that is used to balance the effect of the teacher model in adversarial distillation, we define it as

$$\alpha = \big(P_{T}(\tilde{x}, y)\big)^{\beta}, \qquad (4)$$

where $P_{T}(\tilde{x}, y)$ is the prediction probability of the teacher model for the targeted label $y$ on the adversarial data $\tilde{x}$, and $\beta$ is a hyperparameter to sharpen the prediction. The intuition behind IAD is to automatically calibrate the guidance from the teacher model based on its prediction on the adversarial data. Our $\alpha$ naturally corresponds to the construction in Section 3.2, since the prediction probability of the teacher model for the adversarial data well represents the categorical information.
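Under the reconstruction of Eqs. (3)-(4) above, a per-sample weighting of the two KL terms can be sketched as follows in PyTorch. This is a minimal sketch rather than the authors' implementation: the function name `iad_loss` is illustrative, the natural-data output of the student is treated as a fixed target, and the `warmup` flag anticipates the warming-up period described later, where $\alpha$ is hardcoded to 1.

```python
import torch
import torch.nn.functional as F

def iad_loss(student, teacher, x, x_adv, y, tau=1.0, beta=0.1, warmup=False):
    """Sketch of Eq. (3) with the per-sample annealing parameter alpha of Eq. (4)."""
    s_adv = student(x_adv)                          # student logits on its own adversarial data
    with torch.no_grad():
        t_adv = teacher(x_adv)                      # teacher logits on the student's adversarial data
        s_nat = student(x)                          # student logits on natural data (introspection target)
        # alpha = P_T(x_adv, y)^beta: teacher's probability of the true label, sharpened by beta
        p_true = F.softmax(t_adv, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
        alpha = torch.ones_like(p_true) if warmup else p_true.pow(beta)
    log_s_adv = F.log_softmax(s_adv / tau, dim=1)
    kl_teacher = F.kl_div(log_s_adv, F.softmax(t_adv / tau, dim=1), reduction="none").sum(dim=1)
    kl_self = F.kl_div(log_s_adv, F.softmax(s_nat / tau, dim=1), reduction="none").sum(dim=1)
    # Eq. (3): weight the teacher-guidance term by alpha and the self-introspection term by (1 - alpha)
    return (alpha * kl_teacher + (1.0 - alpha) * kl_self).mean()
```

Whether to back-propagate through the student's natural-data branch, or to rescale the KL terms by $\tau^2$ as in standard KD, are implementation choices that this sketch leaves open.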

Figure 3: Reliability check of the self-introspection of the student model: Left, PGD-10 training accuracy of the teacher/student model; Middle, PGD-10 training accuracy of the teacher model and of the teacher model combined with the self-introspection of the student model; Right, PGD-10 test accuracy of the student model that only trusts the soft labels and of the student model that also considers self-introspection.

Intuitively, the student model can trust the teacher model when $\alpha$ approaches 1, which means that the teacher model is good at both natural and adversarial data. However, when $\alpha$ approaches 0, the teacher model is good at natural but not adversarial data, or even good at neither, and thus the student model should take its self-introspection into account. In Figure 3, we check the reliability of the student model itself. According to the left panel of Figure 3, the student model becomes progressively robust to the adversarial data. If we incorporate the student introspection into adversarial distillation, the results in the middle panel of Figure 3 confirm its potential to improve the accuracy of the guidance. Moreover, as shown in the right panel of Figure 3, adding self-introspection yields a larger improvement in model robustness than only using the guidance of the teacher model. Therefore, $\alpha$ automatically encourages the outputs of the student model to mimic more reliable guidance in adversarial distillation.
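As a worked example of Eq. (4) as reconstructed above, take $\beta = 0.1$ (one of the values studied in the ablation, Table 5). If the teacher assigns probability $P_T(\tilde{x}, y) = 0.9$ to the true label of an adversarial example, then $\alpha = 0.9^{0.1} \approx 0.99$ and the student almost fully trusts the teacher; if $P_T(\tilde{x}, y) = 0.1$, then $\alpha = 0.1^{0.1} \approx 0.79$ and roughly a fifth of the weight shifts to self-introspection. A larger $\beta$ (e.g., $\beta = 1$, so that $\alpha = P_T(\tilde{x}, y)$ itself) shifts weight to the student more aggressively, which matches the trend reported in Section 5.3.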

  Input: student model $S$, teacher model $T$, training dataset $\mathcal{D}=\{(x_i, y_i)\}_{i=1}^{n}$, learning rate $\eta$, number of epochs $N$, batch size $m$, number of batches $M$, temperature parameter $\tau$, teacher model's predicted probability $P_T$, adjustable parameter $\beta$.
  Output: adversarially robust model $S$
  for epoch $= 1, \dots, N$ do
     for mini-batch $= 1, \dots, M$ do
        Sample a mini-batch $\{(x_i, y_i)\}_{i=1}^{m}$ from $\mathcal{D}$
        for $i = 1, \dots, m$ (in parallel) do
           Obtain adversarial data $\tilde{x}_i$ of $x_i$ by PGD based on Eq. (2).
           Compute $\alpha_i$ for each adversarial data point based on Eq. (4).
        end for
        Update the parameters of $S$ by SGD with learning rate $\eta$ on the mini-batch objective $\frac{1}{m}\sum_{i=1}^{m}\ell_{\mathrm{IAD}}(x_i, \tilde{x}_i, y_i)$ in Eq. (3).
     end for
  end for
Algorithm 1 Introspective Adversarial Distillation (IAD)

Algorithm 1 summarizes the implementation of Introspective Adversarial Distillation (IAD). Specifically, IAD first leverages PGD to generate the adversarial data for the student model. Second, IAD computes the outputs of the teacher model and the student model. Then, IAD encourages the outputs of the student model to partially mimic those of the teacher model and of itself, weighted by the teacher model's predicted probability on the adversarial data.

Warming-up period.

During training, we add a warming-up period to activate the student model, where $\alpha$ (in Eq. (3)) is hardcoded to 1. This is because the student itself is not trustworthy in the early stage (refer to the left panel of Figure 3). Through that, we expect the student model to first evolve into a relatively reliable learner and then conduct the procedure of introspective adversarial distillation.
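Putting the pieces together, a condensed training loop in the spirit of Algorithm 1, reusing the `pgd_attack` and `iad_loss` sketches above, might look as follows; the optimizer settings and all default hyperparameter values are illustrative assumptions rather than the paper's exact configuration.

```python
import torch

def train_iad(student, teacher, loader, epochs=100, warmup_epochs=5,
              tau=1.0, beta=0.1, lr=0.1, device="cuda"):
    """Sketch of Algorithm 1 with a warming-up period during which alpha is fixed to 1."""
    optimizer = torch.optim.SGD(student.parameters(), lr=lr,
                                momentum=0.9, weight_decay=5e-4)
    student.to(device)
    teacher.to(device).eval()
    for epoch in range(epochs):
        student.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            # the student queries its own adversarial data (Eq. (2))
            x_adv = pgd_attack(student, x, y)
            # warming-up period: fully trust the teacher until the student is activated
            loss = iad_loss(student, teacher, x, x_adv, y, tau=tau, beta=beta,
                            warmup=(epoch < warmup_epochs))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```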

4.1 Comparison with Related Methods

In this section, we discuss the difference between IAD and other related approaches from the perspective of their loss functions. Table 1 summarizes all of them.

Method Loss Function
AT $\ell_{\mathrm{CE}}\big(S(\tilde{x}), y\big)$
TRADES $\ell_{\mathrm{CE}}\big(S(x), y\big) + \lambda \cdot \mathrm{KL}\big(S(\tilde{x}) \,\|\, S(x)\big)$
ARD $(1-\lambda) \cdot \ell_{\mathrm{CE}}\big(S^{\tau}(x), y\big) + \lambda \cdot \mathrm{KL}\big(S^{\tau}(\tilde{x}) \,\|\, T^{\tau}(x)\big)$
AKD $\lambda_1 \cdot \ell_{\mathrm{CE}}\big(S(\tilde{x}), y\big) + \lambda_2 \cdot \mathrm{KL}\big(S(\tilde{x}) \,\|\, T_{\mathrm{at}}(\tilde{x})\big) + \lambda_3 \cdot \mathrm{KL}\big(S(\tilde{x}) \,\|\, T_{\mathrm{st}}(\tilde{x})\big)$
IAD $\alpha \cdot \mathrm{KL}\big(S^{\tau}(\tilde{x}) \,\|\, T^{\tau}(\tilde{x})\big) + (1-\alpha) \cdot \mathrm{KL}\big(S^{\tau}(\tilde{x}) \,\|\, S^{\tau}(x)\big)$
Table 1: Loss comparison (notation as in Eqs. (1)–(4)). Note that $T_{\mathrm{at}}$ and $T_{\mathrm{st}}$ denote the adversarially and the standard pre-trained teacher models used by AKD, and $\lambda$, $\lambda_1$, $\lambda_2$, $\lambda_3$ denote the loss-specific trade-off weights.

As shown in Table 1, AT (Madry et al., 2018) utilizes the hard labels to supervise adversarial training; TRADES (Zhang et al., 2019) decomposes the loss of AT into two terms, one for standard training and the other for adversarial training with soft supervision; motivated by KD (Hinton et al., 2015), Goldblum et al. (2020) proposed ARD for adversarial distillation, which fully trusts the outputs of the teacher model to train the student model. As indicated by the experiments in Goldblum et al. (2020), putting more weight on the hard-label term resulted in less robust student models, so they generally down-weighted it in their experiments. Chen et al. (2021) utilized distillation as a regularization to avoid the robust overfitting issue, employing both an adversarially pre-trained teacher model and a standard pre-trained model; thus, there are two KL-divergence losses, and for simplicity we term their method AKD. Regarding IAD, it consists of two parts, which respectively encourage the student model to partially instead of fully trust the guidance of the teacher model and to gradually trust itself more. In the loss function, $\alpha$ gradually decreases as the adversarial examples become more challenging, which reduces the dependency on the guidance of the teacher model during training.

5 Experiments

We conduct comprehensive experiments to evaluate the effectiveness of IAD. In Section 5.1, we compare IAD with benchmark adversarial training methods (AT and TRADES) and with related methods that utilize adversarially pre-trained models via KD (ARD and AKD) on the CIFAR-10/CIFAR-100 (Krizhevsky, 2009) datasets. In Section 5.2, we compare the previous methods with IAD on the more challenging Tiny-ImageNet dataset (Le and Yang, 2015). In Section 5.3, ablation studies are conducted to analyze the effects of the hyper-parameter $\beta$ and of different warming-up periods for IAD.

Performance evaluations.

Several measures regarding both natural accuracy and adversarial robustness are applied to evaluate the model performance. We compute the natural accuracy on the natural test data and the robust accuracy on the adversarial test data, following Wang et al. (2019). Specifically, the adversarial test data are generated by FGSM, PGD-20, and CW attacks with the same perturbation bound and step size. All the adversarial generation in these attacks uses a random start, i.e., a uniformly random perturbation added to the natural data before the attack iterations. Moreover, we also estimate the model performance under AutoAttack (termed AA for simplicity).
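For reference, robust accuracy under an attack such as PGD-20 can be measured with a loop like the one below, reusing the `pgd_attack` sketch from Section 2.1; the helper names are illustrative, and attacks such as CW or AutoAttack would be plugged in the same way through the `attack` callable.

```python
import torch

def robust_accuracy(model, loader, attack, device="cuda"):
    """Fraction of test examples still classified correctly after the attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(model, x, y)  # e.g., lambda m, x, y: pgd_attack(m, x, y, num_steps=20)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total
```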

5.1 Evaluation on CIFAR-10/CIFAR-100 Datasets

Datasets CIFAR-10 CIFAR-100
Methods Measure Natural FGSM PGD-20 CW AA Natural FGSM PGD-20 CW AA
AT 83.06% 63.53% 50.21% 49.22% 46.70% 56.21% 33.94% 24.60% 23.35% 21.55%
ARD (AT) 83.13% 64.11% 51.36% 50.37% 48.05% 47.50% 34.24% 28.86% 25.20% 23.14%
AKD (AT) 83.52% 63.91% 51.36% 50.36% 48.08% 56.46% 35.76% 27.18% 25.33% 23.47%
IAD (AT) 82.32% 63.66% 51.70% 50.51% 48.30% 55.88% 35.68% 27.32% 25.60% 23.96%
TRADES 81.26% 63.12% 52.98% 50.29% 49.47% 54.01% 35.23% 28.00% 24.26% 23.46%
ARD (TRADES) 81.50% 63.38% 53.38% 51.27% 49.92% 40.37% 30.56% 26.74% 23.36% 21.86%
AKD (TRADES) 83.49% 64.00% 51.89% 50.25% 48.52% 57.46% 37.32% 28.72% 25.76% 24.36%
IAD (TRADES) 80.01% 63.24% 53.71% 51.54% 50.15% 54.75% 37.19% 30.71% 27.20% 26.10%
Table 2: Test accuracy (%) on CIFAR-10/CIFAR-100 datasets using ResNet-18.

Experiment Setup.

In this part, we follow the setup (learning rate, optimizer, weight decay, momentum) of Goldblum et al. (2020) to implement the adversarial distillation experiments on the CIFAR-10/CIFAR-100 datasets. Specifically, we train ResNet-18 under IAD, AT, TRADES, ARD, and AKD using SGD with momentum; the initial learning rate is decayed at two later epochs, and weight decay is applied. In the settings of adversarial defense, we fix the perturbation bound, the PGD step size, and the number of PGD steps. In the settings of distillation, we use the models pre-trained by AT and TRADES that have the best PGD-10 test accuracy as the teacher models for ARD, AKD, and our IAD. For ARD, we set its hyper-parameter as recommended in Goldblum et al. (2020) for gaining better robustness. For AKD, we set its hyper-parameters as recommended in Chen et al. (2021). For IAD, we set $\beta$ and the warming-up period separately for CIFAR-10 and CIFAR-100.

Results.

We report the results in Table 2, where the results of AT and TRADES are listed in the first and fifth rows, and the other methods use these models as the teacher models in distillation. On the CIFAR-10 dataset, our IAD obtains consistent improvements in adversarial robustness in terms of PGD-20, CW, and AA accuracy compared with the student models distilled by ARD or AKD and with the adversarially pre-trained teacher models. The natural and FGSM accuracy of IAD is lower than that of the others, since we encourage the student model to partially trust itself, which enhances the stability of the model outputs but sacrifices part of the standard performance. On the CIFAR-100 dataset, the improvements of IAD in adversarial robustness are more pronounced than on CIFAR-10, especially when distilling from the teacher model trained by TRADES. Moreover, since the teacher models have poor standard and robust performance on CIFAR-100, the models distilled by ARD and AKD, which fully trust the guidance of the teacher models, also show worse standard or robust performance compared with our IAD.

Datasets CIFAR-10 CIFAR-100
Methods Measure Natural FGSM PGD-20 CW AA Natural FGSM PGD-20 CW AA
AT 85.24% 66.07% 53.36% 52.85% 50.37% 59.75% 39.67% 31.14% 29.99% 27.46%
ARD (AT) 83.39% 64.43% 54.06% 53.27% 51.57% 53.26% 30.49% 20.63% 18.91% 16.36%
AKD (AT) 86.25% 67.65% 55.05% 54.07% 51.20% 61.57% 41.14% 31.90% 30.45% 27.80%
IAD (AT) 84.10% 66.24% 55.37% 54.20% 52.03% 58.19% 40.34% 32.80% 31.03% 29.11%
TRADES 82.90% 65.55% 55.50% 53.04% 52.00% 59.14% 38.81% 31.21% 27.69% 26.53%
ARD (TRADES) 82.03% 64.67% 54.82% 53.23% 51.08% 50.63% 27.91% 18.08% 15.45% 13.92%
AKD (TRADES) 84.59% 67.14% 55.38% 53.38% 52.12% 61.48% 40.52% 32.34% 29.84% 27.84%
IAD (TRADES) 82.89% 65.58% 55.86% 53.63% 52.29% 56.97% 40.56% 33.42% 30.53% 29.05%
Table 3: Test accuracy (%) on CIFAR-10/CIFAR-100 datasets using WideResNet-34-10.

Experiment Setup.

In this part, we evaluate these methods with a model of larger capacity, i.e., WideResNet-34-10. For the adversarially pre-trained models, we follow the settings of Zhang et al. (2021) to train AT and TRADES. To be specific, we train WideResNet-34-10 using SGD with momentum; the initial learning rate is decayed at two later epochs, and weight decay is applied. For the distillation baselines, we keep most settings the same as in the previous part. In particular, we adjust ARD's hyper-parameter on the CIFAR-100 dataset as Goldblum et al. (2020) recommend for dealing with complex tasks. For IAD, we use no warming-up period on CIFAR-10 and a warming-up period on CIFAR-100.

Results.

We report the results in Table 3. According to the results on the CIFAR-10 dataset, our method achieves better model robustness than ARD, AKD, and the original teacher models in terms of PGD-20, CW, and AA accuracy. Moreover, our IAD does not sacrifice much standard performance (see the results of IAD (TRADES) and TRADES in Table 3). Since AKD additionally utilizes a standard pre-trained teacher model, it can achieve better natural accuracy in adversarial distillation. On CIFAR-100, similar to the previous results with ResNet-18, our IAD gains consistent improvements in model robustness. However, ARD gets poor performance across these evaluation metrics since it fully trusts the teacher model. The reason is probably that models with large capacity fit part of the unreliable guidance, which acts like noisy labels.

Methods Natural FGSM PGD-20 CW AA
AT 46.16% 28.20% 22.16% 19.52% 18.04%
ARD (AT) 25.56% 13.92% 8.02% 5.74% 4.17%
AKD (AT) 47.48% 30.10% 22.70% 20.10% 18.10%
IAD (AT) 43.58% 28.30% 23.08% 20.46% 18.74%
TRADES 46.12% 27.74% 21.00% 16.86% 15.86%
ARD (TRADES) 24.96% 11.90% 5.22% 4.02% 3.57%
AKD (TRADES) 46.72% 28.12% 22.10% 18.28% 17.13%
IAD (TRADES) 42.78% 28.02% 23.10% 19.06% 17.90%
Table 4: Test accuracy (%) on Tiny-ImageNet dataset using PreActive-ResNet-18.

5.2 Evaluation on Tiny-ImageNet Dataset

Experiment Setup.

In this part, we evaluate these methods on the more challenging Tiny-ImageNet dataset. For the adversarially pre-trained models, we follow the settings of Chen et al. (2021) to train AT and TRADES. To be specific, we train PreActive-ResNet-18 using SGD with momentum; the initial learning rate is decayed at two later epochs, and weight decay is applied. For the distillation baselines, we keep most settings the same as in Section 5.1. For IAD, we use a warming-up period of several epochs.

Results.

We report the results in Table 4. Overall, our method still achieves better model robustness than the other methods. On Tiny-ImageNet, as the dataset is more challenging, the adversarially pre-trained teacher models have low standard and robust accuracy. In this case, the student model might be threatened by a large amount of unreliable guidance. As a result, ARD gets poor performance across these evaluation metrics, since it fully trusts the teacher model. As for AKD, although it does not sacrifice much natural accuracy, its robust performance is worse than that of IAD.

5.3 Ablation Studies

Figure 4: Analysis of different $\beta$ and warming-up periods in IAD: Left, the values of $\alpha$ (Eq. (4)) under different $\beta$; Middle, natural and AA accuracy of the student model using different $\beta$; Right, natural and AA accuracy of the student model with different warming-up periods.

Experiment Setup.

To understand the effects of different $\beta$ and of different warming-up periods on the CIFAR-10 dataset, we conduct ablation studies in this part. Here, we choose ResNet-18 as the backbone model and keep the experimental settings the same as in Section 5.1. In the first experiment, we use no warming-up period and study different $\beta$. Then, in the second experiment, we fix $\beta$ and use different warming-up periods.

Results.

We report part of the results in Figure 4. The complete results with the other evaluation metrics, i.e., FGSM, PGD-20, and CW accuracy, are provided in Appendix A.1. In Figure 4, we first visualize the values of $\alpha$ under different $\beta$ in the left panel, which shows the proportion of teacher guidance and student introspection in adversarial distillation: a larger $\beta$ corresponds to a larger proportion of student introspection. In the middle panel, we plot the natural and AA accuracy of the student models distilled with different $\beta$. We note that the AA accuracy improves when the student model trusts itself more with a larger $\beta$; however, the natural accuracy decreases as $\beta$ increases. Similarly, we adjust the length of the warming-up period and check the natural and AA accuracy in the right panel of Figure 4. We find that letting the student model partially trust itself at the very beginning of training leads to inadequate robustness improvements and a larger sacrifice of natural accuracy. An appropriate warming-up period in the early stage can improve the student model's performance on adversarial examples.

6 Conclusion

In this paper, we study distillation from adversarially pre-trained models. We take a closer look at adversarial distillation and discover that, when considering robustness, the guidance of the teacher model becomes progressively unreliable. Hence, we explore the construction of reliable guidance in adversarial distillation and propose a method for distillation from unreliable teacher models, i.e., Introspective Adversarial Distillation. Our method encourages the student model to partially instead of fully trust the guidance of the teacher model and to gradually trust its own introspection more to improve robustness.

References

  • J. Alayrac, J. Uesato, P. Huang, A. Fawzi, R. Stanforth, and P. Kohli (2019) Are labels required for improving adversarial robustness?. In NeurIPS, Cited by: §1.
  • Y. Bai, Y. Zeng, Y. Jiang, S. Xia, X. Ma, and Y. Wang (2021) Improving adversarial robustness via channel-wise activation suppressing. In ICLR, Cited by: §2.1.
  • Q. Cai, C. Liu, and D. Song (2018) Curriculum adversarial training. In IJCAI, Cited by: §2.1.
  • N. Carlini and D. A. Wagner (2017) Towards evaluating the robustness of neural networks. In Symposium on Security and Privacy (SP), Cited by: §2.2.
  • T. Chen, S. Liu, S. Chang, Y. Cheng, L. Amini, and Z. Wang (2020) Adversarial robustness: from self-supervised pre-training to fine-tuning. In CVPR, Cited by: §1.
  • T. Chen, Z. Zhang, S. Liu, S. Chang, and Z. Wang (2021) Robust overfitting may be mitigated by properly learned smoothening. In ICLR, Cited by: §1, §2.1, §2.2, §3.1, §4.1, §5.1, §5.2.
  • M. Craven and J. Shavlik (1996) Extracting tree-structured representations of trained networks. NeurIPS. Cited by: §2.2.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL, Cited by: §1.
  • G. W. Ding, Y. Sharma, K. Y. C. Lui, and R. Huang (2020) Mma training: direct input space margin maximization through adversarial training. In ICLR, Cited by: §1.
  • X. Du, J. Zhang, B. Han, T. Liu, Y. Rong, G. Niu, J. Huang, and M. Sugiyama (2021) Learning diverse-structured networks for adversarial robustness. In ICML, Cited by: §1.
  • M. Goldblum, L. Fowl, S. Feizi, and T. Goldstein (2020) Adversarially robust distillation. In AAAI, Cited by: §1, §2.2, §3.1, §4.1, §5.1, §5.1.
  • I. J. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In ICLR, Cited by: §1, §2.1.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §1.
  • G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. In arXiv, Cited by: §2.2, §4.1.
  • A. Ilyas, S. Santurkar, D. Tsipras, L. Engstrom, B. Tran, and A. Madry (2019) Adversarial examples are not bugs, they are features. In NeurIPS, Cited by: §1.
  • Z. Jiang, T. Chen, T. Chen, and Z. Wang (2020) Robust pre-training by adversarial contrastive learning. In NeurIPS, Cited by: §1, §2.1.
  • A. Krizhevsky (2009) Learning multiple layers of features from tiny images. In arXiv, Cited by: item 3, §5.
  • R. S. S. Kumar, M. Nyström, J. Lambert, A. Marshall, M. Goertzel, A. Comissoneru, M. Swann, and S. Xia (2020) Adversarial machine learning-industry perspectives. In 2020 IEEE Security and Privacy Workshops (SPW), Cited by: §1.
  • Y. Le and X. Yang (2015) Tiny imagenet visual recognition challenge. Cited by: item 3, §5.
  • T. Litman (2017) Autonomous vehicle implementation predictions. Victoria Transport Policy Institute Victoria, Canada. Cited by: §1.
  • A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2018) Towards deep learning models resistant to adversarial attacks. In ICLR, Cited by: §1, §1, §2.1, §4.1.
  • N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami (2016) Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE symposium on security and privacy (SP), Cited by: §2.2.
  • L. Rice, E. Wong, and J. Z. Kolter (2020) Overfitting in adversarially robust deep learning. In ICML, Cited by: §2.2.
  • H. Salman, A. Ilyas, L. Engstrom, A. Kapoor, and A. Madry (2020) Do adversarially robust imagenet models transfer better?. In NeurIPS, Cited by: §1.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2014) Intriguing properties of neural networks. In ICLR, Cited by: §1.
  • Q. Tian, K. Kuang, K. Jiang, F. Wu, and Y. Wang (2021) Analysis and applications of class-wise robustness in adversarial training. In KDD, Cited by: §1.
  • H. Wang, T. Chen, S. Gui, T. Hu, J. Liu, and Z. Wang (2020a) Once-for-all adversarial training: in-situ tradeoff between robustness and accuracy for free. In NeurIPS, Cited by: §2.1.
  • Y. Wang, X. Ma, J. Bailey, J. Yi, B. Zhou, and Q. Gu (2019) On the convergence and robustness of adversarial training. In ICML, Cited by: §1, §5.
  • Y. Wang, D. Zou, J. Yi, J. Bailey, X. Ma, and Q. Gu (2020b) Improving adversarial robustness requires revisiting misclassified examples. In ICLR, Cited by: §1, §2.1.
  • D. Wu, S. Xia, and Y. Wang (2020) Adversarial weight perturbation helps robust generalization. NeurIPS 33. Cited by: §2.1.
  • S. Zagoruyko and N. Komodakis (2016) Wide residual networks. arXiv:1605.07146. Cited by: item 3.
  • H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan (2019) Theoretically principled trade-off between robustness and accuracy. In ICML, Cited by: §1, §1, §3.2, §4.1.
  • J. Zhang, X. Xu, B. Han, G. Niu, L. Cui, M. Sugiyama, and M. Kankanhalli (2020) Attacks which do not kill training make adversarial learning stronger. In ICML, Cited by: §1.
  • J. Zhang, J. Zhu, G. Niu, B. Han, M. Sugiyama, and M. Kankanhalli (2021) Geometry-aware instance-reweighted adversarial training. In ICLR, Cited by: §1, §5.1.

Appendix A Experiment

In this section, we provide additional experimental results about the IAD. All of the experiments are conducted on Tesla V100-SXM2 GPUs.

A.1 Complete Results of Ablation Studies

β Natural FGSM PGD-20 CW AA
0.01 83.16% 63.97% 51.28% 50.38% 48.20%
0.05 82.82% 64.12% 51.51% 50.43% 48.19%
0.1 82.32% 63.66% 51.70% 50.51% 48.30%
0.5 80.39% 63.17% 52.62% 50.76% 49.01%
1.0 78.61% 61.96% 52.81% 50.79% 49.21%
Table 5: Test accuracy (%) of IAD using different β.
warming-up period Natural FGSM PGD-20 CW AA
0 epochs 82.32% 63.66% 51.70% 50.51% 48.30%
20 epochs 82.56% 63.68% 51.50% 50.50% 48.41%
40 epochs 82.02% 63.11% 52.15% 50.92% 48.73%
60 epochs 83.33% 63.90% 51.77% 50.63% 48.59%
80 epochs 82.72% 63.70% 51.75% 50.62% 48.49%
Table 6: Test accuracy (%) of IAD using different warming-up periods.

In this part, we report the complete results of our ablation studies in Table 5 (about β) and Table 6 (about warming-up periods). In Table 5, we can see that the natural and FGSM accuracy decrease, while the robust accuracy (PGD-20, CW, AA) increases, as β rises. In Table 6, we adjust the length of the warming-up period. We can see that letting the student network partially trust itself at the very beginning of training results in inadequate robustness improvements.