Preparing Lessons: Improve Knowledge Distillation with Better Supervision

11/18/2019 ∙ by Tiancheng Wen, et al. ∙ Xi'an Jiaotong University 0

Knowledge distillation (KD) is widely used to train a compact model under the supervision of another large model, which can effectively improve performance. Previous methods mainly focus on two aspects: 1) training the student to mimic the representation space of the teacher; 2) training the model progressively or adding extra modules such as discriminators. Knowledge from the teacher is useful, but it is still not exactly right compared with the ground truth. Besides, overly uncertain supervision also degrades the result. We introduce two novel approaches, Knowledge Adjustment (KA) and Dynamic Temperature Distillation (DTD), to penalize bad supervision and improve the student model. Experiments on CIFAR-100, CINIC-10 and Tiny ImageNet show that our methods achieve encouraging performance compared with state-of-the-art methods. When combined with other KD-based methods, performance is further improved.







1 Introduction

Knowledge Distillation (KD) methods have drawn great attention recently; they were proposed to resolve the contradiction between accuracy and deployment cost. Teacher-student techniques use "knowledge" to represent the recognition ability of a deep model, so adopting the teacher's knowledge as supervision guides the student toward stronger discrimination. To improve transfer efficiency, many recent papers focus on designing different kinds of knowledge [1, 5, 14, 15, 17, 23, 24, 32, 33, 39, 41] or extending training strategies [7, 10, 11, 22, 28, 33, 36, 37, 38, 40, 42, 43], and have obtained positive results.

Though the teacher usually has stronger discrimination than the student, there still exist incorrect or uncertain predictions. Using such knowledge as supervision leads to bad results for the student model. Most previous works hold little discussion of this topic. Adding a cross-entropy loss with the ground truth or using a teacher ensemble can alleviate this problem, but neither handles it at the source, and neither can guarantee the validity of the supervision.

In this paper, we revisit the KD framework and analyze the phenomenon of bad supervision. We propose two simple and universal methods, Knowledge Adjustment (KA) and Dynamic Temperature Distillation (DTD), to obtain better supervision. Figure 1 illustrates the idea: a teacher can make incorrect predictions and uncertain predictions on some samples, and the proposed KA and DTD methods respectively target these two problems. KA finds the teacher's wrong predictions and corrects them according to the corresponding ground truth. For DTD, we find that some uncertainty of soft targets comes from excessive temperature, so we design a dynamic, sample-wise temperature to compute soft targets. In doing so, student training receives more discriminative information on confusing samples. From this perspective, DTD can be viewed as a process of (online) hard example mining.

Figure 1:

Illustration of the proposed methods. The numbers denote the teacher's predicted probabilities for the corresponding classes. KA deals with incorrect supervision and DTD handles uncertain supervision.

Our contributions of this work are summarized as follows:

1. We analyze the bad supervision problem of existing KD-based methods, covering both the teacher's incorrect predictions and its overly uncertain predictions.

2. Knowledge Adjustment is proposed to handle wrong predictions from the teacher model. Two distinct implementations are used; both eliminate the incorrect predictions and generate better supervision for the student.

3. We propose Dynamic Temperature Distillation to avoid overly uncertain supervision from the teacher model, again with two implementations. More certain but still soft supervision makes the student model more discriminative.

4. We evaluate the proposed methods on CIFAR-100, CINIC-10 and Tiny ImageNet. The experimental results show that they not only improve performance independently, but also obtain better results when combined with other KD-based methods. Furthermore, adopting KA, DTD and other KD-based methods together achieves state-of-the-art results.

2 Related works

Knowledge Distillation and Transfer.

Strategies for transferring knowledge from one neural network to another have been developed for over a decade. As far as we know, Bucilua et al. [4] first explored training a single neural network to mimic an ensemble of networks. Ba and Caruana [2] further compressed complex networks by adopting logits from the cumbersome model, instead of the ground truth, as supervision. Inspired by [2], Hinton et al. [16] proposed the method of KD. Keeping the basic teacher-student framework, KD modifies the softmax function with an extra temperature parameter $\tau$, so that the logits of both the teacher model and the student model are softened. The student is trained to mimic the distribution of the teacher's soft targets.

Based on the above works, many recent studies focus on designing different knowledge representations for efficient transfer. Romero et al. [24] use "hints" to transfer knowledge, which are feature maps produced by chosen hidden layers of the teacher. Yim et al. [39] transfer the variation of feature maps between the ends of sequential blocks. In [41], activation-based attention and gradient-based attention are investigated to carry out knowledge transfer. Huang and Wang [17] match the neuron activation distribution to carry out distillation. Park et al. [23] explore sample relations as the knowledge carrier. Tung and Mori [32] argue that intra-batch similarities help to train the student better. Ahn et al. [1] propose to maximize mutual information between teacher and student, and Chen et al. [5] realize transfer with feature embedding.

There are also works improving or extending KD with training strategies. Furlanello et al. [10], Mirzadeh et al. [22] and Gao et al. [11] propose multi-stage training schemes. Xu et al. [37], Chen et al. [6] and Shen et al. [28] introduce adversarial learning into the KD framework, adopting a GAN or part of a GAN as an auxiliary. Self-distillation is discussed in [36], [40] and [42], in which knowledge is exploited inside a single network.

Label Regularization. Label regularization improves deep models' performance by using modified labels instead of traditional one-hot labels. Most works aim to solve the problem of overfitting. Szegedy et al. [30] propose Label Smoothing Regularization (LSR), which gives the non-ground-truth labels tiny values. Xie et al. [35] replace some labels with random ones to prevent overfitting. Recently, updating labels iteratively has been widely investigated. Bagherinezhad et al. [3] train a Label Refinery network to modify labels at every epoch of training. Ding et al. [9] use dynamic residual labels to supervise network training. Moreover, Yuan et al. [40] investigate the relation between label regularization and KD.

Hard Example Mining for Deep Models. Hard example mining has drawn discussion recently; it makes training pay more attention to imperfectly learned samples. Shrivastava et al. [29] propose Online Hard Example Mining (OHEM), selecting samples according to their losses and re-feeding those samples to the network. Wang et al. [34] generate hard examples via a GAN instead of filtering the original data. Focal Loss [19] weights samples with a gamma nonlinearity of their classification scores. And Li et al. [18] investigate hard examples' gradients and design the Gradient Harmonizing Mechanism (GHM) to solve the imbalance in difficulty.

Figure 2: PS on a misjudged sample's soft target. The image is from the CIFAR-100 training data; its ground truth label is "leopard" but the ResNet-50 teacher's prediction is "rabbit". The value for "leopard" is still large, which indicates that the teacher is not entirely wrong. The shift operation is carried out on these two classes.

3 The proposed method

In this section, we first investigate two problematic phenomena in KD and KD-based methods, then introduce the corresponding solutions, named Knowledge Adjustment (KA) and Dynamic Temperature Distillation (DTD). For both of them, we propose two distinct implementations.

3.1 Genetic errors and knowledge adjustment

In KD and KD-based methods, the student model is trained under the predictions of the teacher model, regardless of whether those predictions are correct. We use "genetic error" to denote a student's incorrect prediction that is identical to the teacher's. Obviously, when the teacher makes mistakes, the student can hardly correct the erroneous knowledge itself, so genetic errors occur.

First we review the soft targets of KD. For a sample $x$ with ground truth label $y$, KD uses a softmax function with temperature parameter $\tau$ to soften the student's logits $z^s$ and the teacher's logits $z^t$; the $i$-th output is given by Eq. (1)

$$q_i = \frac{\exp(z_i/\tau)}{\sum_j \exp(z_j/\tau)} \tag{1}$$

We denote the KD loss using the cross entropy between the student's softened logits $q^s$ and the teacher's softened logits $q^t$, together with a hard-label term:

$$\mathcal{L}_{KD} = \alpha\,\mathcal{H}(y, \sigma(z^s)) + (1-\alpha)\,\mathcal{H}(q^t, q^s) \tag{2}$$

where $\mathcal{H}(\cdot,\cdot)$ denotes the cross entropy and $\sigma$ is the standard softmax.
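To make the temperature-softened softmax of Eq. (1) and the KD loss of Eq. (2) concrete, here is a minimal pure-Python sketch (the function names and the $\alpha$-weighting convention are illustrative assumptions, not the paper's code):

```python
import math

def softened_softmax(logits, tau=1.0):
    """Eq. (1): softmax with temperature tau; a higher tau flattens the output."""
    exps = [math.exp(z / tau) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(target, pred, eps=1e-12):
    """H(target, pred) = -sum_i target_i * log(pred_i)."""
    return -sum(t * math.log(p + eps) for t, p in zip(target, pred))

def kd_loss(student_logits, teacher_logits, onehot, tau=4.0, alpha=0.7):
    """Eq. (2): hard-label cross entropy plus soft-target cross entropy."""
    hard = cross_entropy(onehot, softened_softmax(student_logits, 1.0))
    soft = cross_entropy(softened_softmax(teacher_logits, tau),
                         softened_softmax(student_logits, tau))
    return alpha * hard + (1 - alpha) * soft
```

Raising `tau` visibly flattens the soft targets, which is exactly the effect discussed later around Figure 3.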

The pretrained teacher model cannot guarantee that its prediction is always right, whether the supervision lives in soft targets or in feature maps. In fact, Eq. (2) and some other methods [14, 17, 32, 33] add an extra cross entropy with the ground truth to alleviate this situation. However, this kind of correction is slight, and a noteworthy proportion of genetic errors remains. For a ResNet-18 student educated (KD) by a ResNet-50 teacher on CIFAR-100, genetic errors make up 42.4% of the 2413 misjudged samples in the test set.
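The genetic-error statistic quoted above can be counted as follows (a hypothetical helper, not from the paper's code):

```python
def genetic_errors(teacher_preds, student_preds, labels):
    """Count student errors that exactly repeat the teacher's wrong prediction
    ('genetic errors'); returns (genetic_count, total_student_errors)."""
    errors = [(t, s) for t, s, y in zip(teacher_preds, student_preds, labels)
              if s != y]
    genetic = sum(1 for t, s in errors if t == s)
    return genetic, len(errors)
```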

We then propose KA, which fixes the teacher's predictions with reference to the ground truth labels. Concretely, an adjusting function $A(\cdot)$ is applied to the teacher's softened logits, and the modified loss is expressed with the KL divergence

$$\mathcal{L}_{KA} = KL\big(A(q^t)\,\|\,q^s\big) \tag{3}$$

$A(q^t)$ is fixed during student training, so the optimization objective is equivalent to cross entropy. In addition, we no longer need the cross entropy computed with $y$, since $A(q^t)$ is already totally correct.

$A(\cdot)$ fixes the incorrect soft targets and leaves correct ones untouched. We adopt Label Smoothing Regularization (LSR) [30] and propose Probability Shift (PS) to implement $A(\cdot)$.

Label Smoothing Regularization. LSR provides a simple technique to soften a one-hot label. Consider a sample of class $k$ with ground truth label distribution $q_i = \delta_{i,k}$, where $\delta$ is the impulse signal; the LSR label is given as

$$q'_i = (1-\epsilon)\,\delta_{i,k} + \frac{\epsilon}{K} \tag{4}$$

where $K$ is the number of classes and $\epsilon$ is the smoothing factor. In KA, we directly replace the incorrect soft targets with Eq. (4). It gives a less confident probability (but still the most confident among all classes) to the ground truth label, and allocates the remainder of the probability equally to the other classes, while correct soft targets stay the same. In addition, the LSR replacement should take place after the distilling softmax, to ensure that logits are softened only once.
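The LSR replacement target can be sketched as follows (assuming the "smoothing factor" denotes the value assigned to the ground-truth class, consistent with the 0.985 setting reported in Section 4.1):

```python
def lsr_target(num_classes, gt_index, gt_value=0.985):
    """Eq. (4) sketch: the ground-truth class keeps gt_value and the
    remaining probability mass is split evenly over the other classes."""
    rest = (1.0 - gt_value) / (num_classes - 1)
    return [gt_value if i == gt_index else rest for i in range(num_classes)]
```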

Probability Shift.

Given an incorrect soft target, PS simply swaps the value of the ground truth class (the theoretical maximum) and the value of the predicted class (the actual maximum), to ensure the maximum confidence is reached at the ground truth label. The process is illustrated in Figure 2: the PS operation swaps the theoretical maximum (leopard) and the predicted maximum (rabbit). It keeps all the probabilities produced by the teacher model instead of discarding the wrong ones, and the fixed tensor keeps the overall numerical distribution of the soft target.
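A minimal sketch of the PS operation (the function name is ours):

```python
def probability_shift(soft_target, gt_index):
    """Swap the predicted-max probability with the ground-truth probability,
    so the maximum lands on the ground-truth class; all other values stay."""
    fixed = list(soft_target)
    pred = max(range(len(fixed)), key=fixed.__getitem__)
    fixed[gt_index], fixed[pred] = fixed[pred], fixed[gt_index]
    return fixed
```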

Compared with directly replacing soft targets with the ground truth, both LSR and PS retain some dark knowledge in the tiny probabilities, which is pointed out to be useful in [16]. Both methods also keep the numerical distribution of the soft targets, which helps stabilize the training process.

In fact, the incorrectly predicted class often shares similar features with the sample; that is, it may contain more information than the other classes. In addition, the teacher's misjudgments come from global optimization over the whole training set, and inordinate local corrections may break the convergence of the student model. Therefore, PS is a more promising solution than LSR.

Figure 3: Soft target distributions under different $\tau$. The image is from the CIFAR-100 training set, with ground truth label "leopard". The inter-class difference gets small as $\tau$ goes up. Specifically, the values of "kangaroo", "leopard" and "rabbit" get extremely close at high temperature, which makes "kangaroo" and "rabbit" disturbance items during training. Thus a relatively lower $\tau$ is proper.

3.2 Dynamic temperature distillation

Although previous works [2, 16, 27] indicate that the student can benefit from uncertainty in the supervision, overly uncertain predictions of the teacher may also hurt the student. We analyze this problem from the perspective of soft targets and the temperature $\tau$. The effect of the distilling softmax is visualized in Figure 3: the distribution of soft logits becomes "flat" when $\tau$ is set greater than 1. This indicates that the supervision may lose some discriminative information at high temperature, so the student may be confused by samples that get significant and similar scores on several classes. As a solution, we propose a method named Dynamic Temperature Distillation (DTD) to customize the supervision for each sample.

The basic idea of DTD is to make $\tau$ vary across training samples. For samples that easily cause confusion, $\tau$ should be smaller to enlarge the inter-class difference. And for easily learned samples, as [16] points out, a bigger $\tau$ helps to utilize the information carried by the inter-class similarities. The objective function of DTD can be expressed as

$$\mathcal{L}_{DTD} = KL\big(q^t_{\tau_x}\,\|\,q^s_{\tau_x}\big) \tag{5}$$

where $\tau_x$ is the sample-wise temperature, and we use Eq. (6) to compute it.


$$\tau_x = \tau_0 - \beta\left(w_x - \frac{1}{n}\right) \tag{6}$$

where $n$ is the batch size, and $\tau_0$ and $\beta$ denote the base temperature and the bias. $w_x$ is the batch-wise normalized weight of sample $x$, describing its extent of confusion: $w_x$ is designed to grow numerically large when $x$ is confusing and the teacher's prediction is uncertain, so that $\tau_x$ is computed to be smaller than $\tau_0$ and the soft target becomes more discriminative. In addition, note that Eq. (5) is written with the KL divergence rather than the cross entropy, since the supervision varies with $\tau_x$ and is no longer a constant.

As analyzed above, confusing samples should get larger weights and thus lower temperatures, which leads to more discriminative soft targets. In this way, DTD selects confusing samples and pays more attention to them, which can also be viewed as hard example mining.
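The temperature schedule can be sketched as follows, assuming the linear form of Eq. (6) around the batch-mean weight $1/n$ and the lower bound of 3 mentioned in Section 4.1:

```python
def dynamic_temperatures(weights, tau0=10.0, beta=40.0, floor=3.0):
    """Eq. (6) sketch: L1-normalize the batch weights, shift each one around
    the batch mean 1/n, and clamp from below at the threshold (assumed form)."""
    n = len(weights)
    total = sum(weights)
    w = [x / total for x in weights]  # batch-wise L1 normalization
    return [max(floor, tau0 - beta * (wi - 1.0 / n)) for wi in w]
```

With uniform weights every sample keeps the base temperature; a relatively confusing sample ends up with a lower temperature than its batch-mates.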

We then propose two methods to obtain $w_x$. Inspired by Focal Loss [19], we propose Focal Loss Style Weights (FLSW) to compute sample-wise weights. Our other method computes $w_x$ from the max output of the student's prediction, which we call Confidence Weighted by Student Max (CWSM).

Focal Loss Style Weights. The original focal loss was proposed for object detection and is defined as Eq. (7)

$$FL(p) = -(1-p)^{\gamma}\,\log(p) \tag{7}$$

where $p$ is the classification score of a sample, denoting the model's confidence towards the ground truth category. The factor $(1-p)^{\gamma}$ differentiates samples by learning difficulty: a hard sample contributes more to the total loss, so the model pays more attention to hard samples during training.

In our method, learning difficulty can be measured by the similarity between the student logits $q^s$ and the teacher logits $q^t$. Then we can denote $w_x$ as

$$w_x = \big(1 - q^s \cdot q^t\big)^{\gamma} \tag{8}$$

$q^s \cdot q^t$ is the inner product of the two distributions, which evaluates the student's learning of the sample. $w_x$ grows relatively large when the student's prediction is far from the teacher's, which meets the monotonicity discussed above.
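FLSW can be sketched as follows (the exponent `gamma` is an assumed hyperparameter):

```python
def flsw_weight(student_probs, teacher_probs, gamma=2.0):
    """Eq. (8) sketch: the weight grows as student and teacher disagree."""
    inner = sum(s * t for s, t in zip(student_probs, teacher_probs))
    return (1.0 - inner) ** gamma
```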

Confidence Weighted by Student Max. We also weight samples by the max prediction of the student, which to some extent reflects how well each sample has been learned. Confusing samples generally have uncertain predictions, and we assume it is the teacher's undemanding supervision that leads to the student's uncertain predictions. In CWSM, $w_x$ is computed using Eq. (9)

$$w_x = \frac{1}{\max_i\, q^s_i} \tag{9}$$

where the student logits $q^s$ should be normalized. $\max_i q^s_i$ can be deemed to represent the student's confidence towards the sample. Obviously, samples with less confidence get larger weights, and gradients from these samples contribute more during distillation.

$w_x$ computed by either FLSW or CWSM should not be directly involved in the computation of the overall loss, due to a numerical issue. Specifically, $w_x$ may become tiny for all the samples in a batch as training progresses, because the deep model comes to predict most training samples correctly, making the loss in such a batch small. However, whether a sample is "easy" or "hard" is relative, so we conduct L1 normalization over the batch to make $w_x$ numerically comparable and controllable.
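CWSM and the batch-wise L1 normalization can be sketched together (helper names are ours):

```python
def cwsm_weight(student_probs):
    """Eq. (9) sketch: inverse of the student's maximum confidence."""
    return 1.0 / max(student_probs)

def l1_normalize(weights):
    """Batch-wise L1 normalization keeping the weights relatively comparable."""
    total = sum(weights)
    return [w / total for w in weights]
```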

3.3 Total loss function and algorithm

In this section, we combine KA and DTD; the overall loss function is designed as Eq. (10).

$$\mathcal{L} = KL\big(A(q^t_{\tau_x})\,\|\,q^s_{\tau_x}\big) \tag{10}$$

Eq. (10) is similar to Eq. (3), except that Eq. (10) uses the sample-wise temperature $\tau_x$ to soften the logits. It is worth pointing out that the supervision tensor varies with the learning situation, so the KL divergence is not equivalent to cross entropy. In addition, the cross entropy with the ground truth is needless here, because $A(q^t_{\tau_x})$ is totally correct and has already utilized the ground truth information.

$\tau_x$ brings gradients when optimizing the student network. Within its limited range, the loss goes low when the temperature increases, because a high temperature closes the KL divergence between the soft logits of teacher and student. Thus minimizing the loss helps to find a relatively higher temperature for each sample, which amounts to making the student regard the sample as an easier one.

To make the combined method intuitive, we describe the process in Algorithm 1. We conduct DTD first and then KA to modify the distillation training process. (The combination is actually flexible: we could first correct the teacher logits and then compute the sample-wise temperature.) Additionally, to avoid softening logits twice, the LSR replacement must go after the distilling softmax, while PS is insensitive to the order; we therefore place KA after softening for consistency of exposition.

0:  Input: teacher logits $z^t$; student logits $z^s$; ground truth $y$;
0:  Output: training loss $\mathcal{L}$;
1:  get weights $w_x$ via FLSW or CWSM according to $z^s$ and $z^t$;
2:  for each sample compute $\tau_x$ with Eq. (6);
3:  calculate softened logits $q^s$ and $q^t$ using the KD softmax function with temperature $\tau_x$;
4:  get the samples whose $\arg\max q^t$ does not match $y$;
5:  replace $q^t$ with Eq. (4) or adjust $q^t$ with PS, getting $A(q^t)$;
6:  compute $\mathcal{L}$ with Eq. (10);
7:  return $\mathcal{L}$;
Algorithm 1 DTD-KA Loss Computation
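Algorithm 1 can be sketched end-to-end as follows, using FLSW weights and PS adjustment; the exact formula shapes are assumptions consistent with the text, not the authors' released code:

```python
import math

def softmax(logits, tau):
    exps = [math.exp(z / tau) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q, eps=1e-12):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def dtd_ka_loss(teacher_logits, student_logits, labels,
                tau0=10.0, beta=40.0, floor=3.0, gamma=2.0):
    """Sketch of Algorithm 1 with FLSW weights and PS adjustment."""
    n = len(labels)
    # 1) confusion weights via FLSW on the unsoftened distributions
    raw = []
    for zt, zs in zip(teacher_logits, student_logits):
        inner = sum(s * t for s, t in zip(softmax(zs, 1.0), softmax(zt, 1.0)))
        raw.append((1.0 - inner) ** gamma)
    total = sum(raw) or 1.0
    w = [x / total for x in raw]
    # 2) sample-wise temperatures (Eq. 6), clamped from below
    taus = [max(floor, tau0 - beta * (wi - 1.0 / n)) for wi in w]
    # 3-6) soften, fix wrong teacher targets with PS, accumulate KL
    loss = 0.0
    for zt, zs, y, tau in zip(teacher_logits, student_logits, labels, taus):
        qt, qs = softmax(zt, tau), softmax(zs, tau)
        pred = max(range(len(qt)), key=qt.__getitem__)
        if pred != y:
            qt[y], qt[pred] = qt[pred], qt[y]   # Probability Shift
        loss += kl_div(qt, qs)                  # summed, not batch-averaged
    return loss
```

Summing rather than averaging over the batch matches the training detail reported in Section 4.1.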
Figure 4: Transfer pairs of AT and NST for the CIFAR-100 experiments. The upper teacher model and lower student model can each be clearly divided into 4 stages; the feature-map resolution is unchanged inside each stage. The AT loss is computed using the first 3 pairs, and the NST loss using the last pair.

4 Experiments

In the following section we first conduct experiments on the CIFAR-100 classification dataset, then validate the methods on CINIC-10 [8] and Tiny ImageNet [31], which are constructed based on ImageNet [25] but are much lighter, since full ImageNet training requires vast resources. (Distillation training actually runs inference twice per iteration, once in the teacher and once in the student, so more time and memory are required.) For each dataset we compare our methods with conventional KD and some state-of-the-art KD-based works, introduced below. Since different papers adopt various teacher-student pairs, we replicate these methods under the same experimental conditions. The code will be published once the paper is accepted.

Standard KD [16]. We adopt Eq. (2) to carry out KD training, where $\alpha$ is set to 0.7. We implement the cross entropy between the two distributions with the KL divergence for convenience.

AT [41]. AT introduces the attention mechanism to KD and achieves an obvious performance improvement. As a comparison, we transfer activation-based attention information combined with KD, whose loss is

$$\mathcal{L}_{AT} = \mathcal{L}_{KD} + \gamma \sum_j \big\| A^s_j - A^t_j \big\|_2 \tag{11}$$

where $A^s_j$ and $A^t_j$ are the $j$-th pair of L2-normalized attention maps, which respectively come from the student model and the teacher model.
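A sketch of the activation-based attention maps and the transfer distance (the sum-of-squared-channels mapping is one of the choices in [41]; helper names are ours):

```python
import math

def attention_map(channels):
    """Activation-based attention: sum of squared channel activations at each
    spatial position, then L2-normalize the flattened map."""
    amap = [sum(ch[i] ** 2 for ch in channels) for i in range(len(channels[0]))]
    norm = math.sqrt(sum(a * a for a in amap)) or 1.0
    return [a / norm for a in amap]

def at_distance(student_channels, teacher_channels):
    """L2 distance between the two normalized attention maps (Eq. 11 term)."""
    a_s = attention_map(student_channels)
    a_t = attention_map(teacher_channels)
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(a_s, a_t)))
```

The L2 normalization makes the distance invariant to a uniform scaling of the feature maps.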

Method Model Validation Accuracy Genetic Errors/Total Errors
Teacher Preact-ResNet-50 77.01
KD [16] Preact-ResNet-18 75.87 1022/2413 = 42.35%
AT+KD [41] Preact-ResNet-18 76.26 1054/2374 = 44.40%
NST+KD [17] Preact-ResNet-18 78.12 1087/2188 = 49.68%
DTD-KA Preact-ResNet-18 77.24 950/2276 = 41.74%
AT+DTD-KA Preact-ResNet-18 78.06 959/2194 = 43.65%
NST+DTD-KA Preact-ResNet-18 78.38 1020/2162 = 47.19%
Table 1: Results of CIFAR-100 experiments

NST [17]. NST is a recent solid work that achieves state-of-the-art performance; its loss is computed using the Maximum Mean Discrepancy (MMD). We adopt the NST+KD combination as a comparison, which performs best among previous works and is easy to further modify with our methods. The loss is written in Eq. (12).

$$\mathcal{L}_{NST} = \mathcal{L}_{KD} + \lambda\, \mathcal{L}_{MMD^2}\big(F^t, F^s\big) \tag{12}$$

$\mathcal{L}_{MMD^2}$ denotes the squared MMD distance between feature maps from certain layers of the teacher model and the student model. [17] proposes several different kernel realizations, among which the polynomial-kernel implementation takes the largest part in the experiments; thus we replicate NST with the polynomial kernel for comparison.
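The polynomial-kernel squared MMD between two sets of flattened feature vectors can be sketched as follows (the kernel degree and bias are assumed defaults):

```python
def poly_kernel(x, y, degree=2, bias=0.0):
    """Polynomial kernel k(x, y) = (x . y + c)^d."""
    return (sum(a * b for a, b in zip(x, y)) + bias) ** degree

def mmd2(feats_t, feats_s, degree=2, bias=0.0):
    """Squared MMD between two sets of flattened feature vectors:
    E[k(t, t')] + E[k(s, s')] - 2 E[k(t, s)]."""
    def avg(A, B):
        return sum(poly_kernel(a, b, degree, bias)
                   for a in A for b in B) / (len(A) * len(B))
    return avg(feats_t, feats_t) + avg(feats_s, feats_s) - 2.0 * avg(feats_t, feats_s)
```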

To avoid performance jitter, we keep the initialization seed constant for students with the same structure, and run each experiment several times.

4.1 CIFAR-100

In the CIFAR-100 experiments, we use the official split of training and test data, which respectively consist of 50,000 and 10,000 images, both at a resolution of 32×32. Random cropping, random horizontal flipping and random rotation are used for data augmentation. We design the teacher and student networks based on ResNet with pre-activation residual units [12], specifically Preact-ResNet-50 for the teacher and Preact-ResNet-18 for the student. The student model is optimized for 200 epochs using SGD with a mini-batch size of 128, momentum 0.9, and weight decay. The initial learning rate is set to 0.01 for KD and the proposed method, and 0.1 for AT, NST and the combinations. For all experiments the learning rate is multiplied by 0.1 at epochs 60, 120 and 160.

For LSR, a smoothing factor of 0.985 proved suitable after experimental attempts. For DTD, $\tau_0$ and $\beta$ of Eq. (6) are respectively set to 10 and 40. Some extra operations are needed: we observe that $\tau_x$ may become very small, even negative in extreme cases, so we bound it from below with a threshold of 3. We also cancel the batch average when computing the loss, because we find that the averaging operation can counteract the sample-wise learning effect introduced by the dynamic loss computation.

To implement AT and NST, we pair the respective building layers of teacher and student in order. The residual blocks can be grouped into 4 pairs according to the variation of feature map size, as Figure 4 shows. The first 3 pairs are chosen to compute the AT loss, and the last pair the NST loss. $\gamma$ and $\lambda$ are set according to the suggestions in [41] and [17]. For the AT+DTD-KA and NST+DTD-KA experiments, we train students using the losses of Eq. (13) and Eq. (14), where $\mathcal{L}_{DTD\text{-}KA}$ is expressed as Eq. (10).

$$\mathcal{L} = \mathcal{L}_{DTD\text{-}KA} + \gamma \sum_j \big\| A^s_j - A^t_j \big\|_2 \tag{13}$$

$$\mathcal{L} = \mathcal{L}_{DTD\text{-}KA} + \lambda\, \mathcal{L}_{MMD^2}\big(F^t, F^s\big) \tag{14}$$


Table 1 shows the CIFAR-100 classification performance. The proposed DTD-KA scheme outperforms KD and AT+KD but is weaker than NST+KD. The AT+DTD-KA and NST+DTD-KA combinations improve on the original AT+KD and NST+KD and reach the state of the art. Genetic errors are counted for all methods. AT and NST to some extent aggravate the phenomenon, because they pay more attention to the network's intermediate features, making the student imitate the teacher's inference better, whether right or wrong. By bringing strong corrections, our method markedly reduces genetic errors.

Having shown that genetic errors are reduced, we next investigate performance on the uncertain supervision problem. In Figure 5, we plot the accumulation curve of samples against the difference between the top-2 student-predicted probabilities. For samples whose top-2 difference is slight, there exists at least one disturbance class, which indicates that the student lacks discrimination or robustness on these samples. It can be observed that methods containing DTD-KA reduce the number of samples with close top probabilities, indicating that our method helps the student recognize more clearly and, indirectly, improves robustness.

We also validate the performance of the distinct KA and DTD implementations; the results are listed in Table 2. The ablation and cross-combination experiments are designed on the KD, AT+KD and NST+KD benchmarks. For each benchmark we first add a single DTD operation (FLSW/CWSM) or a single KA operation (LSR/PS); then all DTD-KA combinations are validated. The results show that all implementations are indeed effective, whether working alone or together.

Figure 5: Statistics of confusing samples. The colored lines denote the number of samples whose distance between the top-2 predicted probabilities is less than the abscissa value.
Benchmark      KA     DTD: —   FLSW    CWSM
KD [16]        —      75.87    76.57   76.63
               LSR    76.24    77.24   76.88
               PS     76.05    76.72   76.79
AT+KD [41]     —      76.26    76.98   76.80
               LSR    77.70    77.97   78.06
               PS     77.75    77.89   77.78
NST+KD [17]    —      78.12    78.33   78.24
               LSR    78.16    78.38   78.30
               PS     78.31    78.36   78.37
Table 2: CIFAR-100 ablation and combination experiments. "—" represents that no extra operation is conducted.

4.2 CINIC-10

CINIC-10 extends the CIFAR-10 dataset with downsampled ImageNet images; it consists of 90,000 training images, 90,000 validation images and 90,000 test images, all at a resolution of 32×32. We adopt the same data preprocessing as in the CIFAR-100 experiments. The teacher and student models here are ResNets [13] of 50 and 18 layers. Training lasts for 200 epochs and the learning rate follows cosine annealing [20]. We again use SGD with batch size 128, momentum 0.9 and weight decay. The loss-computation parameters stay aligned with the CIFAR-100 experiments, except for the loss-term weights, which are adjusted to numerically balance the contributions of the KD, AT and NST terms, since the KL divergence becomes enormous when the number of classes is small (CINIC-10 has 10 classes).

In Table 3 we compare the CINIC-10 performance of KD, AT+KD, NST+KD and the proposed methods. As a simple end-to-end distillation method, DTD-KA shows competitive performance compared with the inner-feature-based methods AT and NST. And obviously, all previous works improve when served by our methods. Furthermore, genetic errors are also reduced.

It can be observed, however, that the improvement methods do not boost KD as much as in the CIFAR-100 experiments. Besides data issues, we conjecture that activated feature maps may be weaker than non-activated ones, which were transferred in Section 4.1. According to [26], feature maps after activation (ReLU) may lose some information, especially in the low-channel case. We therefore assume that feature maps before, rather than after, activation may be more suitable for knowledge transfer, which certainly deserves further discussion.

4.3 Tiny ImageNet

Figure 6: Transfer pairs of AT and NST for Tiny ImageNet experiments. The ResNet teacher’s last three stages are respectively paired with the ShuffleNet student’s three stages. AT loss is computed using the first 2 pairs, and NST loss is computed using the last pair.
Method Model Validation Accuracy Genetic Errors/Total Errors
Teacher ResNet-50 87.48
KD [16] ResNet-18 86.52 5891/12130 = 48.57%
AT+KD [41] ResNet-18 87.03 6094/11669 = 52.22%
NST+KD [17] ResNet-18 86.97 6259/11730 = 53.36%
DTD-KA ResNet-18 87.02 5527/11678 = 47.33%
AT+DTD-KA ResNet-18 87.33 5703/11395 = 50.05%
NST+DTD-KA ResNet-18 87.17 5983/11549 = 51.81%
Table 3: Results of CINIC-10 experiments.
Method Model Validation Accuracy Genetic Errors/Total Errors
Teacher ResNet-50 61.90
KD [16] ShuffleNetV2-1.0 61.31 1348/3910 = 34.48%
AT+KD [41] ShuffleNetV2-1.0 61.27 1375/3873 = 35.50%
NST+KD [17] ShuffleNetV2-1.0 60.83 1399/3917 = 35.72%
DTD-KA ShuffleNetV2-1.0 61.77 1144/3823 = 29.92%
AT+DTD-KA ShuffleNetV2-1.0 61.47 1261/3853 = 32.73%
NST+DTD-KA ShuffleNetV2-1.0 61.19 1283/3881 = 33.06%
Table 4: Results of Tiny ImageNet experiments.

Tiny ImageNet selects 120,000 images from ImageNet and downsamples them to 64×64. These images come from 200 categories, each of which has 500 training images, 50 validation images, and 50 test images.

On Tiny ImageNet we validate the performance of transferring knowledge between a heterogeneous teacher and student, which stumps most KD-based methods. A ResNet-50 teacher and a student implemented in ShuffleNet V2-1.0 [21] are adopted as subjects. We pair the last three ResNet building stages with the three ShuffleNet stages, whose output feature maps match in resolution, so no additional mapping operation is needed. The process is illustrated in Figure 6. The student is trained for 200 epochs with a learning rate starting from 0.01 and decaying with cosine annealing, using SGD with batch size 128, momentum 0.9 and weight decay. The loss weights and temperature hyperparameters are respectively set to 0.08, 1, 0.08 and 10. A relatively small weight on the feature-transfer terms helps the student find the correct convergence direction. And the DTD-KA loss is again computed with the sum rather than the average over a batch.

Table 4 summarizes the comparison. The proposed DTD-KA achieves the highest accuracy and the lowest genetic errors. We can observe regressions of AT+KD and NST+KD: as predicted above, the inner-feature-based methods perform weaker than end-to-end methods. Nevertheless, DTD-KA still improves them and reduces genetic errors.

Actually, there always lies a gap between student and teacher. For AT, NST and other inner-feature-based approaches, the raw intermediate information from a heterogeneous teacher may not be reliable enough for the target student, while end-to-end methods are more robust, since they do not impose strict constraints on the inference process. Our results show that better and stronger soft targets can help in such circumstances.

5 Conclusions

Supervision plays a significant role in knowledge distillation (and transfer) works. We propose Knowledge Adjustment (KA) and Dynamic Temperature Distillation (DTD) to address the challenge of bad supervision in existing KD and KD-based methods. We divide the problem into incorrect supervision and uncertain supervision; for both phenomena we give our analysis and propose corresponding solutions. KA corrects the teacher's wrong predictions according to the ground truth. DTD fixes the teacher's uncertain predictions with a dynamic temperature. Validation experiments on three different datasets show that the proposed methods increase accuracy, and statistical results further indicate that they indeed reduce genetic errors and improve the student's discrimination. In addition, the combination experiments show that our methods can easily attach to other KD-based methods. We believe our methods can be applied in many knowledge distillation frameworks, and that the sample-wise idea of this paper may lead to further progress.


  1. Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D. Lawrence, and Zhenwen Dai. Variational information distillation for knowledge transfer. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

  2. Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? Advances in Neural Information Processing Systems, 3(1): 2654–2662, 2014.

  3. Hessam Bagherinezhad, Maxwell Horton, Mohammad Rastegari, and Ali Farhadi. Label refinery: Improving ImageNet classification through label progression. arXiv preprint arXiv:1805.02641, 2018.

  4. Cristian Bucilǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006.

  5. Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, and Dacheng Tao. Learning Student Networks via Feature Embedding. arXiv preprint arXiv:1812.06597, 2018.

  6. Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, and Qi Tian. DAFL: Data-free learning of student networks. The IEEE International Conference on Computer Vision (ICCV), 2019.

  7. Wei-Chun Chen, Chia-Che Chang, Chien-Yu Lu, and Che-Rung Lee. Knowledge distillation with feature maps for image classification. The Asian Conference on Computer Vision (ACCV), 2018.

  8. Luke N. Darlow, Elliot J. Crowley, Antreas Antoniou, and Amos J. Storkey. CINIC-10 is not ImageNet or CIFAR-10. arXiv preprint arXiv:1810.03505, 2018.

  9. Qianggang Ding, Sifan Wu, Hao Sun, Jiadong Guo, and Shu-Tao Xia. Adaptive regularization of labels. arXiv preprint arXiv:1908.05474, 2019.

  10. Tommaso Furlanello, Zachary C. Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. Born-again neural networks. Proceedings of the 35th International Conference on Machine Learning (ICML), vol. 4, pp. 2615–2624, 2018.

  11. Mengya Gao, Yujun Shen, Quanquan Li, Chen Change Loy, and Xiaoou Tang. An embarrassingly simple approach for knowledge distillation. arXiv preprint arXiv:1812.01819, 2018.

  12. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. The European Conference on Computer Vision (ECCV), 2016.

  13. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

  14. Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, and Jin Young Choi. A comprehensive overhaul of feature distillation. The IEEE International Conference on Computer Vision (ICCV), 2019.

  15. Byeongho Heo, Minsik Lee, Sangdoo Yun, and Jin Young Choi. Knowledge transfer via distillation of activation boundaries formed by hidden neurons. Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 3779–3787, 2019.

  16. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

  17. Zehao Huang and Naiyan Wang. Like what you like: Knowledge distill via neuron selectivity transfer. arXiv preprint arXiv:1707.01219, 2017.

  18. Buyu Li, Yu Liu, and Xiaogang Wang. Gradient harmonized single-stage detector. Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 8577–8584, 2019.

  19. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. Focal loss for dense object detection. The IEEE International Conference on Computer Vision (ICCV), 2017.

  20. Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. International Conference on Learning Representations, 2016.

  21. Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet V2: Practical guidelines for efficient cnn architecture design. The European Conference on Computer Vision (ECCV), 2018.

  22. Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, and Hassan Ghasemzadeh. Improved knowledge distillation via teacher assistant: Bridging the gap between student and teacher. arXiv preprint arXiv:1902.03393, 2019.

  23. Wonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho. Relational knowledge distillation. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

  24. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. International Conference on Learning Representations, 2015.

  25. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

  26. Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

  27. Bharat B. Sau and Vineeth N. Balasubramanian. Deep model compression: Distilling knowledge from noisy teachers. arXiv preprint arXiv:1610.09650, 2016.

  28. Zhiqiang Shen, Zhankui He, and Xiangyang Xue. MEAL: Multi-model ensemble via adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 4886–4893, 2019.

  29. Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick. Training region-based object detectors with online hard example mining. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

  30. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

  31. Tiny ImageNet visual recognition challenge. [Accessed: 2019-11].

  32. Frederick Tung and Greg Mori. Similarity-preserving knowledge distillation. The IEEE International Conference on Computer Vision (ICCV), 2019.

  33. Hui Wang, Hanbin Zhao, Xi Li, and Xu Tan. Progressive blockwise knowledge distillation for neural network acceleration. IJCAI International Joint Conference on Artificial Intelligence, 2018.

  34. Xiaolong Wang, Abhinav Shrivastava, and Abhinav Gupta. A-Fast-RCNN: Hard positive generation via adversary for object detection. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

  35. Lingxi Xie, Jingdong Wang, Zhen Wei, Meng Wang, and Qi Tian. DisturbLabel: Regularizing CNN on the loss layer. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

  36. Ting-Bing Xu and Cheng-Lin Liu. Data-distortion guided self-distillation for deep neural networks. AAAI Conference on Artificial Intelligence, 2019.

  37. Zheng Xu, Yen-Chang Hsu, and Jiawei Huang. Training shallow and thin networks for acceleration via knowledge distillation with conditional adversarial networks. arXiv preprint arXiv:1709.00513, 2017.

  38. Chenglin Yang, Lingxi Xie, Siyuan Qiao, and Alan Yuille. Knowledge distillation in generations: More tolerant teachers educate better students. arXiv preprint arXiv:1805.05551, 2018.

  39. Junho Yim, Donggyo Joo, Jihoon Bae, and Junmo Kim. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

  40. Li Yuan, Francis E. H. Tay, Guilin Li, Tao Wang, and Jiashi Feng. Revisit knowledge distillation: A teacher-free framework. arXiv preprint arXiv:1909.11723, 2019.

  41. Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. International Conference on Learning Representations, 2016.

  42. Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, and Kaisheng Ma. Be your own teacher: Improve the performance of convolutional neural networks via self distillation. arXiv preprint arXiv:1905.08094, 2019.

  43. Haoran Zhao, Xin Sun, Junyu Dong, Changrui Chen, and Zihe Dong. Highlight every step: Knowledge distillation via collaborative teaching. arXiv preprint arXiv:1907.09643, 2019.