Student Network Learning via Evolutionary Knowledge Distillation

03/23/2021, by Kangkai Zhang, et al.

Knowledge distillation provides an effective way to transfer knowledge via teacher-student learning, where most existing distillation approaches apply a fixed pre-trained model as the teacher to supervise the learning of the student network. This manner usually introduces a big capability gap between the teacher and student networks during learning. Recent research has observed that a small teacher-student capability gap can facilitate knowledge transfer. Inspired by that, we propose an evolutionary knowledge distillation approach to improve the transfer effectiveness of teacher knowledge. Instead of a fixed pre-trained teacher, an evolutionary teacher is learned online and consistently transfers intermediate knowledge to supervise student network learning on the fly. To enhance intermediate knowledge representation and mimicking, several simple guided modules are introduced between corresponding teacher-student blocks. In this way, the student can simultaneously obtain rich internal knowledge and capture its growth process, leading to effective student network learning. Extensive experiments clearly demonstrate the effectiveness of our approach as well as its good adaptability in low-resolution and few-sample visual recognition scenarios.

I Introduction

Deep neural networks have proved successful in many visual tasks like image classification [39, 54], object detection [9, 11], semantic segmentation [19] and other fields [71, 24, 67] due to their powerful capability to extract knowledge from massive available data. Beyond these remarkable successes, there is still increasing concern about developing effective learning approaches for real-world scenarios where high-quality data is often unavailable or insufficient. In this case, network learning encounters obstacles. Knowledge distillation [23] provides an economical way to transfer knowledge from a pre-trained teacher network to facilitate the learning of a new student network.

Fig. 1: Unlike classical knowledge distillation (KD) approaches with fixed teacher, an evolutionary teacher can enable more efficient knowledge transfer by minimizing the capability gap between teacher and student. Thus, we propose evolutionary knowledge distillation (EKD) to facilitate student network learning.

Knowledge distillation approaches mainly follow offline or online strategies. Offline strategies try to design more effective knowledge representation methods to learn from a powerful teacher. Park et al. [41] and Liu et al. [37] focused on structured knowledge about the instances, while other approaches [47, 56, 63] paid attention to internal information in the network during the distillation process. These offline approaches usually use a fixed pre-trained model as the teacher, as shown in Fig. 1 (a), and this manner usually brings in a big capability gap between teacher and student during learning (see Fig. 1 (c)), leading to transfer difficulties. The capability gap refers to the performance difference between the teacher and student networks; in the image classification task, it specifically refers to the difference in accuracy. By contrast, online distillation strategies attempt to reduce the capability gap with training schemes that work in the absence of pre-trained teachers to improve the learning of the student. An on-the-fly native ensemble (ONE) scheme [30] was proposed for one-stage online distillation. Instead of borrowing supervision signals from previous generations, snapshot distillation [62] extracted information from earlier iterations in the same generation. These online practices provide good solutions to the shackles of the capability gap brought by a fixed teacher, resulting in relatively better knowledge transfer, but they place high demands on the efficiency of knowledge representation because they rely more on the relatively less reliable predictions of their peers or themselves to provide additional supervision [30, 2, 8]. Hence, there is a question: how can we ensure both a small capability gap and efficient knowledge representation to facilitate student learning?

Inspired by the recent observations [62, 40, 27] that a small teacher-student capability gap is beneficial to knowledge transfer, we propose an evolutionary knowledge distillation (EKD) approach to improve the learning of the student, as shown in Fig. 1 (b). The approach uses an evolutionary teacher whose performance is continuously improved during training and which constantly transfers intermediate knowledge to the student; the evolutionary teacher thus provides richer supervision information for the learning of the student and reduces the capability gap. In addition, to improve the knowledge representation ability, the teacher and student networks are both divided into several blocks, and a simple guided module pair is introduced between each corresponding block. In short, the evolutionary teacher not only solves the problem of the capability gap between a fixed teacher and the student in offline knowledge distillation, but also addresses the insufficient and relatively unreliable supervision information caused by the absence of a qualified teacher in online knowledge distillation. In addition, the guided modules further promote the representation of intermediate knowledge. In this way, the student can continuously and adequately learn the intermediate knowledge from the teacher as well as its growth process.

The main contributions of our work can be summarized in three folds: 1) we propose an evolutionary knowledge distillation approach to facilitate student network learning by narrowing the teacher-student capability gap; 2) we introduce guided module pairs to enhance the knowledge representation and transfer ability; 3) we conduct extensive experiments to verify that our approach is superior to the state of the art and exhibits better adaptability in low-resolution and few-sample scenarios.

Fig. 2: The network details of each batch data process in our evolutionary knowledge distillation approach. We divide the teacher stream and student stream into blocks; between each corresponding block of teacher and student, a pair of guided modules is introduced to help knowledge representation and transfer, so that the student can effectively capture both what knowledge the teacher learns and how it grows.

II Related Work

II-A The Basics of Knowledge Distillation

Knowledge distillation (KD) provides a concise but effective solution for transferring knowledge from a pre-trained large teacher model to a smaller student model [23]. Since its introduction, knowledge distillation has been widely used in image recognition, semantic segmentation, and other fields, especially model compression [23, 6, 3, 46, 36, 34]. In practice, the student model learns the predictions of the pre-trained teacher model to become more powerful than when it is trained alone. Compared to hard ground-truth labels, the fine-grained class information in soft predictions helps the small student model reach flatter local minima, which results in more robust performance and improves generalization ability [45, 28]. Several recent works attempt to further improve knowledge transfer between varying-capacity network models with offline or online knowledge distillation approaches [56, 30, 8, 70, 1, 10].
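For illustration, a minimal PyTorch-style sketch of this classical soft-label distillation loss is given below; the temperature T and weight alpha are generic placeholders rather than the exact values used in our experiments.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classical soft-label distillation: KL divergence on temperature-softened
    predictions plus cross-entropy on the hard ground-truth labels."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    # T^2 rescales gradients so the soft term stays comparable to the hard term.
    kl = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kl + (1.0 - alpha) * ce
```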

II-B Offline Knowledge Distillation

Offline knowledge distillation approaches often adopt a two-stage training mode: they first train the teacher model, and then train the student network with various distillation strategies. The classical FitNet [47] first tried to transfer more supervision information by using the feature maps of the teacher network's middle layers. Crowley et al. [14] proposed structural model distillation for memory reduction using a strategy that produced a student architecture that was a simple transformation of the teacher's: no redesign is needed, and the same hyper-parameters can be used. Some recent approaches [37, 42] attempted to pay more attention to the relational information among instances. Tian et al. [56] proposed contrastive representation distillation, whose main idea is very general: learn a representation that is close in some metric space for "positive" pairs and push apart the representations of "negative" pairs. Furlanello et al. [16] iteratively absorbed the distilled student models into the teacher model group, through which better generalization ability on test data is obtained. Yun et al. [65] proposed a new regularization method that penalizes the predictive distribution between similar samples via self knowledge distillation, mitigating the issue that deep neural networks with millions of parameters may suffer from poor generalization due to overfitting. Other works [64, 48, 50, 51] proposed utilizing multiple teachers to provide more supervision for the learning of the student network.

Generally speaking, for offline knowledge distillation to be effective, the following conditions need to be met: a high-quality pre-trained teacher model, a difference between teacher and student, and adequate supervision information [63, 62, 61]. In particular, offline knowledge distillation methods rely heavily on a fixed pre-trained model; however, the huge capability gap between the fixed teacher and the student model brings great challenges to knowledge transfer. To bridge this gap, Mirzadeh et al. [40] introduced multi-step knowledge distillation, which employs an intermediate-sized network (teacher assistant); Jin et al. proposed a method named RCO [27], which utilizes the route that the teacher network passes through in parameter space as a constraint to bring a better optimization to the student. None of these methods can completely remove the obstacles to knowledge transfer caused by the capability gap, since they still rely to some extent on a pre-trained teacher model. Therefore, we need to rethink how to break the shackles of offline distillation approaches to improve the learning of the student network more effectively.

II-C Online Knowledge Distillation

Different from the conventional two-stage offline distillation approaches, current research increasingly focuses on online distillation strategies, which attempt to reduce the capability gap with training schemes that work in the absence of pre-trained teachers to improve the learning of the student. A group of networks or sub-networks is trained almost synchronously, which aims to improve the performance of the student by using the predictions of its peers or sub-networks as supervision instead of those of a high-quality pre-trained teacher. Deep Mutual Learning (DML) [70] applied distillation losses mutually, treating each network as the teacher of the others, and achieved good results. However, DML lacks an appropriate teacher role and hence provides only limited information to each network. Song et al. [53] introduced collaborative learning, in which multiple classifier heads of the same network are simultaneously trained on the same training data to improve generalization and robustness to label noise with no extra inference cost. A similar learning strategy named On-the-fly Native Ensemble (ONE) for one-stage online distillation was proposed by Lan et al. [30]. Specifically, ONE trains only a single multi-branch network while simultaneously establishing a strong teacher on-the-fly to enhance the learning of the target network. Chung et al. [13] proposed an online knowledge distillation method that transfers the knowledge of the class probabilities and the feature maps using an adversarial training framework. Zhang et al. [68] proposed an online training framework called self-distillation, which forces the student to refine its knowledge inside the network, thereby improving itself. The Snapshot Distillation framework was proposed for teacher-student optimization in one generation [62]; it extracts supervision signals from earlier epochs in the same generation, while making sure that the difference between teacher and student is sufficiently large so as to prevent under-fitting. After these, a novel two-level framework, OKDDip [8], was proposed to perform distillation during training with multiple auxiliary peers and one group leader for effective online distillation. In the OKDDip framework, the first-level distillation works as diversity-maintained group distillation with several auxiliary peers, while the second-level distillation transfers the diversity-enhanced group knowledge to the ultimate student model called the group leader.

These online, timely and efficient training methods are promising, and good progress has been made in narrowing the capability gap between teacher and student models [30, 62, 68]. However, two factors still hold online distillation back. First, although the teachers of online distillation methods are dynamic and have narrowed the capability gap, the gap still exists because detailed intermediate knowledge and the learning process are not well represented. Second, owing to the absence of a qualified teacher role in online knowledge distillation, insufficient and relatively unreliable supervision information restricts the learning of the student to some extent. There is still great room for improvement in knowledge transfer and representation. Therefore, we propose the evolutionary knowledge distillation approach to enhance student network learning by using an evolutionary teacher and dynamically focusing on the intermediate knowledge of the teaching process.

III The Proposed Approach

In our evolutionary knowledge distillation (EKD) approach, the teacher and student networks are trained almost synchronously, and an evolutionary teacher provides supervision information for the learning of the student. The performance of the teacher is continuously improved during training, which provides richer supervision information and reduces the capability gap. As shown in Fig. 2, EKD consists of a teacher stream, a student stream and some extra guided modules. The network of each stream is divided into several blocks, depending on the specific network. Previous experience has shown that using early blocks helps to exploit information inside the network [2, 70, 21]; in order to utilize the intermediate knowledge, we adapt the bottleneck module [21, 12] to form guided module pairs that assist the representation and transfer of knowledge. For each corresponding block between the teacher and student streams, a guided module pair is inserted to make the distillation process effective and efficient within each stream and across streams. Within-stream knowledge distillation aims to enhance knowledge representation, while cross-stream knowledge distillation helps to improve knowledge transfer.

0:  Training set;
0:  Optimized student network weights;
1:  Initialize the teacher and the student;
2:  while the student model has not converged do
3:     Get a batch of data;
4:     Feed the data into the teacher stream;
5:     Get the feature maps and softened probabilities of the teacher stream;
6:     Compute the within-stream distillation loss with Eq. (3) and Eq. (4);
7:     Compute the classification loss with Eq. (9);
8:     Compute the total loss of the teacher with Eq. (11);
9:     Compute gradients of the teacher parameters and update them with the SGD optimizer;
10:     Feed the data into the student stream;
11:     Get the feature maps and softened probabilities of the student stream;
12:     Compute the within-stream distillation loss with Eq. (3) and Eq. (4);
13:     Compute the cross-stream distillation loss with Eq. (6) and Eq. (7);
14:     Compute the classification loss with Eq. (9);
15:     Compute the total loss of the student with Eq. (11);
16:     Compute gradients of the student parameters and update them with the SGD optimizer;
17:  end while
18:  Return the optimized student network weights.
Algorithm 1 Student network learning via EKD
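Algorithm 1 can be read as the following PyTorch-style sketch, where teacher_loss_fn and student_loss_fn are hypothetical wrappers around Eq. (11) and the optimizer settings are only illustrative.

```python
import torch

def train_ekd(teacher, student, loader, teacher_loss_fn, student_loss_fn,
              epochs, lr=0.1, momentum=0.9, weight_decay=5e-4):
    """Schematic of Algorithm 1: teacher and student are updated in turn on
    every batch, so the teacher evolves alongside the student."""
    opt_t = torch.optim.SGD(teacher.parameters(), lr=lr, momentum=momentum,
                            weight_decay=weight_decay)
    opt_s = torch.optim.SGD(student.parameters(), lr=lr, momentum=momentum,
                            weight_decay=weight_decay)
    for _ in range(epochs):
        for images, labels in loader:
            # Teacher step: within-stream distillation + classification loss.
            t_out = teacher(images)            # feature maps and softened probabilities
            loss_t = teacher_loss_fn(t_out, labels)
            opt_t.zero_grad()
            loss_t.backward()
            opt_t.step()

            # Student step: within-stream + cross-stream distillation,
            # supervised by the just-updated (evolutionary) teacher.
            with torch.no_grad():
                t_out = teacher(images)
            s_out = student(images)
            loss_s = student_loss_fn(s_out, t_out, labels)
            opt_s.zero_grad()
            loss_s.backward()
            opt_s.step()
    return student.state_dict()
```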

III-A Problem Formulation

Denote the training set as $\mathcal{D}=\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is the $i$-th sample with label $y_i \in \{1, \dots, C\}$ and $C$ is the number of classes. The objective of distillation is to transfer the knowledge of the teacher $f_t(\cdot;\theta_t)$ into the student $f_s(\cdot;\theta_s)$, where $\theta_t$ and $\theta_s$ are the model parameters of the teacher and student, respectively. In our setting, the network of each stream is divided into $K$ blocks. Between the corresponding teacher and student blocks, a pair of guided modules is introduced to assist the representation and transfer of knowledge, where $F^k$ and $\phi^k$ denote the feature maps and guided-module parameters from the $k$-th block of the teacher ($F_t^k$, $\phi_t^k$) and the student ($F_s^k$, $\phi_s^k$), respectively. Therefore, during student network learning, the problem of EKD is formulated as:

(1)

where $\mathcal{L}$ is the function measuring the distillation loss. Different from traditional knowledge distillation, where $\theta_t$ is fixed, EKD simultaneously learns both $\theta_t$ and $\theta_s$ from $\mathcal{D}$ with the help of several guided module pairs. After learning, the temporary networks within the dashed box (see Fig. 2) are discarded and only the parameters of the main model of the student stream are used for inference.

III-B Evolutionary Distillation

The proposed evolutionary knowledge distillation trains the teacher and student models online and almost synchronously; the parameters of both models are updated on every batch, which reduces the teacher-student capability gap. As shown in Fig. 2, the teacher and student streams are divided into blocks. Each block, followed by a guided module with a fully connected layer, forms a classifier, so each stream contains multiple classifiers. The training process includes two simultaneous stages, within-stream distillation and cross-stream distillation. For within-stream distillation, deeper classifiers provide supervision to help the learning of shallower classifiers, which improves the ability of the stream itself to represent knowledge. Cross-stream distillation improves knowledge transfer from the evolutionary teacher to the student. The guided modules help facilitate knowledge representation and transfer.
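A rough sketch of such a multi-exit stream is given below; the 1x1 bottleneck design, the guide dimension and the pooling choice are illustrative assumptions rather than the exact guided-module configuration.

```python
import torch
import torch.nn as nn

class GuidedExitStream(nn.Module):
    """A backbone split into blocks, each tapped by a guided module
    (bottleneck + pooling) followed by an FC layer, forming a classifier."""
    def __init__(self, blocks, block_channels, num_classes, guide_dim=128):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)          # e.g., the stages of a ResNet
        self.guides = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c, guide_dim, kernel_size=1, bias=False),
                          nn.BatchNorm2d(guide_dim), nn.ReLU(inplace=True),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for c in block_channels])
        self.heads = nn.ModuleList([nn.Linear(guide_dim, num_classes)
                                    for _ in block_channels])

    def forward(self, x):
        feats, logits = [], []
        for block, guide, head in zip(self.blocks, self.guides, self.heads):
            x = block(x)
            f = guide(x)               # guided-module feature for this block
            feats.append(f)
            logits.append(head(f))
        # The deepest entries play the "teacher" role within the stream.
        return feats, logits
```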

Within-stream Distillation. For within-stream distillation, we treat the shallower classifiers as students and the deeper classifiers as teachers. The students are trained with the supervision provided by the teachers, which includes the Kullback-Leibler (KL) divergence [23] between softened predictions and the L2 distance [47] between the feature maps before the final FC layers of teacher and student. To simplify the notation, here we only show the loss between the backbone network and each guided module.

(2)

In practice, we use the classic distillation loss, i.e., the KL divergence, together with an L2 distance loss. We compute the KL divergence between the softened outputs of the deep and shallow classifiers within a stream, and the distillation loss is calculated as follows.

(3)

where $KL(\cdot\|\cdot)$ denotes the KL divergence loss. As multiple different teachers provide different knowledge, we can achieve a more robust and accurate knowledge representation.

We compute the feature loss as the L2 distance between the feature maps of the deep and shallow classifiers.

(4)

where the two terms represent the feature maps of the deep and shallow classifiers before the FC layer, respectively.
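A minimal sketch of these within-stream terms (Eq. (3) and Eq. (4)) is given below, assuming lists of classifier logits and pre-FC features ordered from shallow to deep; for brevity only the deepest classifier acts as the teacher here, whereas in general all deeper classifiers can supervise a shallower one.

```python
import torch
import torch.nn.functional as F

def within_stream_losses(logits, feats, T=4.0):
    """Deeper classifiers supervise shallower ones inside a single stream:
    KL divergence on softened predictions and L2 distance on pre-FC features."""
    kl_loss, feat_loss = 0.0, 0.0
    deep_logits, deep_feat = logits[-1].detach(), feats[-1].detach()
    for s_logits, s_feat in zip(logits[:-1], feats[:-1]):
        p_teacher = F.softmax(deep_logits / T, dim=1)
        log_p_student = F.log_softmax(s_logits / T, dim=1)
        kl_loss += F.kl_div(log_p_student, p_teacher,
                            reduction="batchmean") * (T * T)
        feat_loss += F.mse_loss(s_feat, deep_feat)   # L2 distance (mean squared error)
    return kl_loss, feat_loss
```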

Cross-stream Distillation. For cross-stream knowledge transfer, the student learns under the supervision of the corresponding guided modules of the teacher stream:

(5)

The cross-stream distillation loss contains two types of losses: a distillation loss and a feature loss. The distillation loss between the backbone networks and the one between each pair of guided modules are both calculated with the KL divergence.

(6)

where the two softened distributions are the outputs of the guided modules attached to the teacher and student, respectively, and the remaining terms denote the outputs of the two backbone networks.

The form of feature loss is as follows,

(7)

where the two terms represent the feature maps of the corresponding teacher and student classifiers before the FC layer, respectively. The feature loss between each pair of guided modules measures differences in the feature maps, which promotes intermediate knowledge transfer across streams. However, since the teacher and student streams are divided into several blocks, the output dimensions of corresponding blocks may not match. Through the processing of the guided modules, we ensure the consistency of the dimensions while minimizing the loss of intermediate knowledge during the transfer process.
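The cross-stream terms (Eq. (6) and Eq. (7)) can be sketched as follows, assuming the guided modules have already projected corresponding teacher and student blocks to features of the same dimension; the names and pairing are illustrative.

```python
import torch
import torch.nn.functional as F

def cross_stream_losses(s_logits, s_feats, t_logits, t_feats, T=4.0):
    """Student classifiers mimic the corresponding teacher classifiers:
    KL divergence on softened outputs and L2 distance on guided-module features.
    Each list is ordered by block index; the last entry is the backbone output."""
    kl_loss, feat_loss = 0.0, 0.0
    for s_l, s_f, t_l, t_f in zip(s_logits, s_feats, t_logits, t_feats):
        p_t = F.softmax(t_l.detach() / T, dim=1)
        log_p_s = F.log_softmax(s_l / T, dim=1)
        kl_loss += F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)
        feat_loss += F.mse_loss(s_f, t_f.detach())
    return kl_loss, feat_loss
```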

Then, we combine the two parts of the guided loss,

(8)

For each classifier, we compute the cross-entropy loss between its prediction and the ground-truth label. In this way, the label directs the probability output of each classifier. There are multiple classifiers, and we sum their cross-entropy losses as follows:

(9)

As the parameters of teacher and student are constantly updated in each iteration, the evolutionary knowledge distillation process can be formulated as:

(10)

where the index denotes the number of iterations. Specifically, for the teacher stream and the student stream, we use different total loss functions, defined respectively as:

(11)

where the indicator function equals 1 if the stream is the teacher and 0 otherwise. This means that the training of the teacher and student streams is similar but not identical. The student learns from the evolutionary teacher constantly with the supervision provided by the teacher network and the guided modules. In each iteration, we first optimize the teacher network to provide supervision for the learning of the student; after that, the optimization of the teacher and student is carried out synchronously. In general, the evolutionary teacher reduces the capability gap between teacher and student, while the guided modules not only facilitate the knowledge representation of the teacher and student streams but also promote the transfer of more intermediate and process knowledge from teacher to student.
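Putting the pieces together, the per-stream total loss of Eq. (11) can be sketched as below, reusing the within_stream_losses and cross_stream_losses sketches above; the loss weights are assumptions rather than the exact weighting used in our experiments.

```python
import torch
import torch.nn.functional as F

def stream_total_loss(logits, feats, labels, teacher_out=None,
                      w_within=1.0, w_cross=1.0):
    """Sketch of the per-stream total loss: summed cross-entropy over all
    classifiers plus within-stream losses; the student stream (indicator = 0)
    additionally receives the cross-stream losses from the teacher."""
    ce = sum(F.cross_entropy(l, labels) for l in logits)        # Eq. (9)
    kl_w, ft_w = within_stream_losses(logits, feats)            # Eq. (3), (4)
    loss = ce + w_within * (kl_w + ft_w)
    if teacher_out is not None:                                 # student stream only
        t_logits, t_feats = teacher_out
        kl_c, ft_c = cross_stream_losses(logits, feats, t_logits, t_feats)
        loss = loss + w_cross * (kl_c + ft_c)                   # Eq. (6), (7)
    return loss
```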

Teacher Network VGG19(bn) VGG11(bn) ResNet50 ResNet18 ResNet101 ResNet50 WRN50-2
Student Network VGG11(bn) VGG11(bn) ResNet18 ResNet18 ResNet50 ResNet50 WRN50-2
Teacher Acc. (%) 74.17 70.76 78.71 76.61 80.05 78.71 80.24
Student Acc. (%) 70.76 70.76 76.61 76.61 78.71 78.71 80.24
KD [23] 73.28 72.12 77.57 78.97 79.16 79.54 80.41
Fitnet [47] 71.51 71.00 76.59 77.58 79.76 79.18 80.84
Attention [66] 72.94 70.09 76.82 77.37 80.01 78.81 80.58
Factor [25] 68.12 68.12 75.26 73.81 77.54 73.07 80.12
PKT [43] 73.15 72.43 78.44 78.51 79.51 79.66 80.85
RKD [41] 72.81 71.88 78.21 78.10 80.07 78.73 80.40
Similarity [57] 73.49 72.10 78.71 78.55 80.39 79.44 80.87
Correlation [44] 70.90 71.35 77.48 77.83 78.17 77.18 80.37
VID [1] 72.83 72.32 77.71 78.16 78.17 76.76 80.88
Abound [22] 71.88 71.07 77.70 77.47 79.02 78.46 80.95
CRD [56] 71.73 73.00 78.92 78.18 80.25 79.45 81.14
EKD (Ours) 74.12 74.52 82.05 80.74 81.91 82.48 82.53
TABLE I: Classification accuracy with peer-architecture setting on CIFAR100.
Teacher Network ResNet18 VGG11(bn) ResNet18 WRN50-2 WRN50-2
Student Network VGG8(bn) ShuffleNetV1 ShuffleNetV2 ShuffleNetV1 VGG8(bn)
Teacher Acc. (%) 76.61 70.76 76.61 80.24 80.24
Student Acc. (%) 69.21 66.18 70.48 66.18 69.21
KD [23] 71.17 72.40 75.03 71.78 70.31
Fitnet [47] 70.59 70.50 72.24 70.46 70.04
Attention [66] 71.62 69.64 73.83 70.55 69.78
Factor [25] 68.06 68.16 69.99 70.51 70.12
PKT [43] 72.74 72.06 74.31 69.80 69.76
RKD [41] 71.03 70.92 73.26 70.58 70.41
Similarity [57] 73.07 72.31 74.95 70.70 70.00
Correlation [44] 69.82 70.70 72.21 70.66 69.96
VID [1] 71.75 70.59 72.07 71.61 71.00
Abound [22] 70.42 72.56 74.64 74.28 69.81
CRD [56] 73.17 72.38 74.88 71.08 72.50
EKD (Ours) 73.82 73.18 75.26 73.61 74.05
TABLE II: Classification accuracy with cross-architecture setting on CIFAR100.

III-C Implementation Details

The detailed training procedure of EKD is shown in Algorithm 1. We introduce several identical guided modules on each stream, and each module is followed by a fully connected layer and a softmax layer. During training, the two networks are trained at the same time, and a certain degree of randomization is adopted to ensure that they are not completely synchronized. This kind of incomplete synchronization can provide enough second-hand knowledge to promote the student's learning [61, 62]. Once training is completed, we only keep the student's backbone parameters for inference.

The experimental results of the other distillation methods are based on the open-source code of CRD [56], and we run the experiments with their hyper-parameter settings. The networks are trained with the SGD optimizer [5]. The hyper-parameters are set as follows: the batch size is 64, the number of data-loading threads is 8, and the initial learning rate is 0.1, which is multiplied by 0.1 at epochs 75, 130 and 180, respectively. We fix the random seed to 5 and the distillation temperature [23] to 4. All the experiments are implemented in PyTorch on an NVIDIA TITAN GPU.
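These hyper-parameters map onto a standard PyTorch setup roughly as follows; the momentum and weight decay values are assumptions, as they are not stated above.

```python
import torch
import torch.nn as nn

torch.manual_seed(5)                          # fixed random seed reported above
model = nn.Linear(10, 10)                     # placeholder for a student/teacher stream
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)   # momentum/decay assumed
# Learning rate is multiplied by 0.1 at epochs 75, 130 and 180.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[75, 130, 180],
                                                 gamma=0.1)
TEMPERATURE = 4.0                             # distillation temperature
BATCH_SIZE, NUM_WORKERS = 64, 8               # batch size and data-loading threads
```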

IV Experiments

IV-A Experimental Settings

To verify the effectiveness and adaptability of our EKD approach, we conduct experiments on five benchmarks: CIFAR10 [29], CIFAR100 [29], Tiny-ImageNet [31], UMDFaces [4] and UCCS [20]. We first compare EKD with several state-of-the-art offline distillation approaches, including KD [23], Fitnet [47], Attention [66], Factor [25], PKT [43], RKD [41], Similarity [57], Correlation [44], VID [1], Abound [22], CRD [56], Res-KD [33] and AFD [26]. We also compare with several online distillation approaches that do not use a pre-trained teacher, including DML [70], CL-ILR [53], ONE [30], Snapshot-KD [62], OKDDip [8] and Self-KD [68]. Then, ablation experiments are conducted to study the impact of the evolutionary teacher, the guided modules and the components of the loss function. Finally, we conduct further experiments in low-resolution and few-sample scenarios to verify the adaptability of our approach. In our experiments, we use VGG(bn) [52], ResNet [21], Wide ResNet [49], ShuffleNetV1 [69] and ShuffleNetV2 [38] as the backbone networks.

CIFAR10. The dataset consists of 60,000 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.

CIFAR100. This dataset is just like the CIFAR10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class.

Tiny-ImageNet. This image classification dataset contains 100,000 training images and 10,000 validation images of size 64×64 drawn from 200 classes. It is more difficult than the CIFAR100 dataset. On Tiny-ImageNet, we first apply random cropping and random horizontal flipping, and finally normalize the images. For testing, we only resize the images.
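A plausible torchvision preprocessing pipeline consistent with this description is sketched below; the exact crop size and the normalization statistics are assumptions.

```python
from torchvision import transforms

# Assumed Tiny-ImageNet statistics and 64x64 target size (not specified above).
MEAN, STD = (0.480, 0.448, 0.398), (0.277, 0.269, 0.282)

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(64),          # random crop, assumed target size
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(MEAN, STD),
])

test_transform = transforms.Compose([
    transforms.Resize(64),                     # resize only for testing
    transforms.ToTensor(),
    transforms.Normalize(MEAN, STD),
])
```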

UMDFaces.  This face dataset is collected from the Internet and contains 367,888 face annotations for 8,277 subjects. In our adaptability analysis (Section IV-F), UMDFaces is used as the training dataset.

UCCS.  This dataset contains 16,149 images of 1,732 subjects captured in the wild. It is a very challenging benchmark with various levels of difficulty, including blurry images, occluded appearances and bad illumination. Following the setting of [18], we randomly select a 180-subject subset, separate the images into 3,918 training images and 907 testing images, and report the results with the standard top-1 accuracy.

Fig. 3: t-SNE visualisation of ResNet18 trained with (a) the baseline, (b) KD [23], (c) CRD [56] and (d) EKD on the Tiny-ImageNet dataset. Different numbers indicate different classes.
Network Baseline DML [70] CL-ILR [53] Snapshot-KD [62] ONE [30] Self-KD [68] OKDDip [8] EKD (Ours)
VGG16(bn) 73.81 74.67 74.38 - 74.37 75.38 75.12 76.86
ResNet110 75.88 77.50 78.44 73.51 78.33 77.04 78.91 79.28
TABLE III: Classification accuracy (%) on CIFAR100 with different online distillation approaches.
Method Student KD [23] FitNet [47] Attention [66] RKD [41] CRD [56] Res-KD [33] AFD [26] EKD (Ours)
Accuracy (%) 65.30 68.18 67.79 67.82 67.72 68.19 68.62 68.80 69.46
TABLE IV: Performance comparison on Tiny-ImageNet. The teacher is ResNet34 and the student is ResNet18.

IV-B Comparisons with Offline Distillation Approaches

We conduct experiments on the CIFAR100 dataset and compare against offline distillation in both the peer-architecture setting and the cross-architecture setting. The experimental results of the other offline distillation methods are based on the open-source code RepDistiller of CRD [56], and we run the experiments with their hyper-parameter settings.

Peer-Architecture Setting. In this setting, the teacher and student networks share similar or identical structures. From the results shown in Tab. I, we draw some meaningful observations. First, our approach outperforms all other offline distillation approaches, implying its remarkable effectiveness in improving student network learning. For example, when the teacher and student are VGG19(bn) and VGG11(bn) respectively, we achieve 74.12% accuracy on CIFAR100, which is 0.63% higher than the state-of-the-art method Similarity [57], and we gain a 3.13% improvement over CRD [56] when the teacher and student are ResNet50 and ResNet18. Second, the effect is also obvious when the teacher stream and student stream adopt the same structure. For example, when both the teacher and student are VGG11(bn), we gain a 1.20% improvement, even higher than learning from VGG19(bn), and the same is true when the teacher and student are both ResNet50. We speculate that this is due to the smaller capability gap between two identical backbone networks, which facilitates the knowledge transfer from the evolutionary teacher to the student.

Cross-Architecture Setting. According to the experimental results of the previous section, the proposed approach is superior to conventional offline knowledge distillation methods when the teacher and student share the same or similar network architectures. To verify the generalization ability of our approach, we conduct further experiments on a more challenging knowledge transfer task, the cross-architecture setting, in which the architectures of the teacher and student networks are completely different.

We verify the performance of the cross-architecture setting on the CIFAR100 dataset; the learning rate, update strategy and simple data augmentation methods keep the same settings as before. The results are shown in Tab. II. We achieve the best results in most cases even though the teacher and student networks are completely different. Specifically, when the teacher is ResNet18 and the student is VGG8(bn), our approach is at least 0.65% higher than the other offline distillation strategies, and when the teacher is WRN50-2, the student VGG8(bn) gains a clear advantage, with a performance improvement of more than 1.55%. When the teacher and student networks are WRN50-2 and ShuffleNetV1 respectively, we achieve an improvement of at least 1.83% over all methods except Abound [22]. These gains arise because the evolutionary teacher reduces the shackles caused by the capability gap between teacher and student and provides richer supervision information, while the guided modules ensure the efficiency of knowledge representation and transfer. The results in the cross-architecture setting clearly demonstrate that our method does not rely on architecture-specific cues.

Comparisons on Tiny-ImageNet. To further verify the effectiveness of the proposed method, we conduct experiments on the more challenging Tiny-ImageNet. We mainly compare our approach with several typical methods, including KD [23], FitNet [47], Attention [66], RKD [41], CRD [56], Res-KD [33] and AFD [26]. Here, Res-KD uses the knowledge gap between teacher and student as a guide to train a more lightweight student network, which we call a "res-student", and AFD is an attention-based feature matching distillation method utilizing all the feature levels of the teacher. As illustrated in Tab. IV, the proposed EKD achieves better performance than the other baseline knowledge distillation methods on this larger-scale dataset; for example, our method achieves a 0.66% improvement over the previous best result of AFD.

IV-C Comparisons with Online Distillation Approaches

We also compare the proposed EKD with several recent online knowledge distillation approaches, including Deep Mutual Learning (DML) [70], Collaborative Learning for Deep Neural Networks (CL-ILR) [53], Knowledge Distillation by On-the-Fly Native Ensemble (ONE) [30], Snapshot-KD [62], Self Distillation (Self-KD) [68] and Online Knowledge Distillation with Diverse Peers (OKDDip) [8]. The results in Tab. III are based on the code of OKDDip [8]; the results of Snapshot-KD are taken from [62], and the results of Self-KD are obtained by re-implementing the framework of the original paper [68].

As shown in Tab. III, the "Baseline" approach trains a model with ground-truth labels only. Compared with the other online knowledge distillation methods, our EKD shows clear advantages: when the backbone network is VGG16(bn), our method is 1.48% higher in accuracy than the current best approach, Self-KD [68], and when the backbone network is ResNet110, our method achieves a 0.37% improvement over OKDDip [8], which is an increase of 3.40% over the "Baseline". We suspect that the capability gap still exists for previous online knowledge distillation methods due to the lack of detailed representation and of the learning process, so the insufficient and relatively unreliable supervision information from peers or the network itself restricts the learning of the student network to some extent. Our evolutionary teacher not only reduces the capability gap brought by a fixed pre-trained teacher, but also provides a more flexible teacher role for online knowledge distillation instead of the networks themselves or their peers. The above results illustrate that our method can transfer knowledge more adequately and effectively by combining guided modules and an evolutionary teacher.

IV-D Visualisation

The previous experimental results quantitatively demonstrate the superiority of our proposed evolutionary knowledge distillation. To further demonstrate the advantages of our approach visually, we use t-SNE [58] for visualization. t-SNE is a tool to visualize high-dimensional data: it converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. Fig. 3 gives the visualisation results of ResNet18 trained with KD [23], CRD [56] and EKD on the Tiny-ImageNet dataset. The "Baseline" in Fig. 3 denotes ResNet18 trained without any distillation method; for KD, CRD and EKD, we train the student with the help of a ResNet50 teacher. We randomly select ten classes in this dataset for the visualization experiment, with 50 samples per class; different numbers indicate different classes in Fig. 3. To begin with, it is obvious that our approach achieves more concentrated clusters than KD and CRD. In addition, as demonstrated in Fig. 3, the changes of the inter-class distances in the classifiers of KD and CRD are more severe than those in the classifier of EKD. We speculate that the evolutionary teacher facilitates student network learning by narrowing the teacher-student capability gap, and the guided modules enhance intermediate knowledge representation to further improve student network learning.
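A minimal sketch of this kind of visualisation is given below, assuming a trained model that returns penultimate-layer feature vectors; the perplexity, figure size and plotting details are illustrative.

```python
import torch
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

@torch.no_grad()
def tsne_plot(model, images, labels, out_path="tsne.png"):
    """Project penultimate-layer features to 2-D with t-SNE and plot them,
    annotating each point with its class index."""
    model.eval()
    feats = model(images)                    # assumed to return features of shape (N, D)
    emb = TSNE(n_components=2, perplexity=30, init="pca",
               random_state=0).fit_transform(feats.cpu().numpy())
    plt.figure(figsize=(5, 5))
    for x, y, c in zip(emb[:, 0], emb[:, 1], labels.cpu().numpy()):
        plt.text(x, y, str(c), fontsize=8)   # class index as the marker
    plt.xlim(emb[:, 0].min(), emb[:, 0].max())
    plt.ylim(emb[:, 1].min(), emb[:, 1].max())
    plt.savefig(out_path, dpi=200)
```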

IV-E Further Analysis

Our approach mainly includes two parts, the evolutionary teacher and the guided modules. In this section, we conduct further experiments to explore the specific influence of the evolutionary teacher, the guided modules and the components of the loss function. In addition, we compare the training time of our approach with that of other distillation approaches. All ablation experiments were performed in the same setting as the previous ones.

Approach Accuracy (%)
KD 77.57
KD+ET 79.71 (+2.14)
KD+S_G 78.85
KD+S_G+ET 81.34 (+2.49)
KD+T_G+S_G 80.01
KD+T_G+S_G+ET (EKD) 82.05 (+2.04)
TABLE V: Classification accuracy on CIFAR100. The teacher is ResNet50 and the student is ResNet18. “ET” denotes evolutionary teacher without guided modules; “T_G” denotes teacher stream with guided modules; “S_G” denotes student stream with guided modules.
Fig. 4: The influence of guided modules. We train student models with different numbers of guided module pairs on CIFAR100 dataset.

Influence of Evolutionary Teacher. Here, we use ResNet50 and ResNet18 as teacher and student, respectively. As illustrated in Tab. V, for the classic knowledge distillation method KD [23], the accuracy of the student on the CIFAR100 dataset is 77.57%. When we use an evolutionary teacher to teach the student network and keep the other settings unchanged, the performance of our student network increases by 2.14%, reaching 79.71%. We believe this is because the evolutionary teacher provides more sufficient supervision information to promote student network learning. Moreover, the guided modules play a positive role in within-stream and cross-stream distillation, so a natural question is whether the evolutionary teacher still plays an active role in the learning of the student when the guided modules are present. To further verify the influence of the evolutionary teacher, we added guided modules to the teacher stream and the student stream respectively, and compared the performance of the student model before and after adopting the evolutionary teacher strategy. As shown in Tab. V, when the guided modules are introduced for the student stream only, the evolutionary teacher strategy improves the accuracy by 2.49%, while when the guided modules are introduced for both the teacher and student streams, the evolutionary teacher strategy improves the performance by 2.04%. It is also worth pointing out that the teacher model performs well with the help of its guided modules; even when our evolutionary teacher supervises the learning of the student without its own guided modules, the accuracy still increases by 3.77% compared to KD (81.34% vs. 77.57%). The above results demonstrate that the evolutionary teacher can promote student network learning by reducing the capability gap between teacher and student.

Influence of Guided Modules. To explore the influence of the introduced guided modules, we conduct further experiments. As shown in Tab. V, the effect of the guided modules on the performance of the student network is obvious. When only the student stream uses the guided modules, the performance of the student is improved by 1.28% compared to KD (78.85% vs. 77.57%). When the teacher stream and student stream use the guided modules at the same time, the performance is improved by 2.44% (80.01% vs. 77.57%). These results show that the guided modules promote the effective transfer of knowledge by making full use of the intermediate information of the network, and improve the performance of the student.

In addition, the number of guided modules also has an important impact on knowledge transfer, so we conduct experiments based on VGG11(bn). For this network structure, we use up to four guided modules and as few as zero. When the number of guided modules is zero, our approach essentially degenerates into classic knowledge distillation with an evolutionary teacher. As shown in Fig. 4, the experimental results show that the guided modules play a vital role: as the number of guided modules increases, the accuracy is continuously improved, but the improvement becomes less obvious. It should be pointed out that, due to the extremely simple structure of the guided modules, their consumption of computing resources is negligible.

Influence of Loss Function. The training process of EKD includes two simultaneous stages, within-stream distillation and cross-stream distillation. The loss function mainly includes the within-stream distillation losses, the cross-stream distillation loss, and the classification loss of each classifier. We conduct related experiments to explore the influence of the components of the loss function (Eq. (11)). The results are shown in Fig. 5: each loss term contributes to the improvement of the student network, which shows that the guided modules effectively help the evolutionary teacher transfer knowledge to the student. The within-stream distillation losses act on the feature level and the predicted distribution level, respectively. All of them play a positive role in the learning of the student network, and the distillation terms are particularly effective, which indicates that the softened label distributions and the feature maps, which contain rich information about image intensity and spatial correlation, provide sufficient supervision for the learning of the student in within-stream and cross-stream distillation. In general, for within-stream distillation, deeper classifiers provide supervision to help the learning of shallower ones, thereby improving the performance of the stream itself, while cross-stream distillation improves knowledge transfer from the evolutionary teacher with the help of the guided modules.

Fig. 5: Ablation study about components of loss function on CIFAR10 dataset. The teacher and student are VGG19(bn) and VGG11(bn) respectively.
Fig. 6: Low-resolution (16×16) image classification accuracy on the CIFAR100 dataset.

Training Time Analysis. We compare the training time with other knowledge distillation methods; all the comparative experiments are implemented on an NVIDIA TITAN GPU with identical hyper-parameters. The guided modules do not increase the amount of computation much because of their extremely simple structure. Moreover, whereas traditional distillation methods need to train the teacher and student models separately in two stages, our approach trains them at the same time, so the training time does not increase significantly. Specifically, when the teacher and student are ResNet50 and ResNet18 respectively, the training time of our approach is 277.5 s/epoch, while the training times of the traditional knowledge distillation method KD [23] are 205.29 s/epoch for the teacher and 171.07 s/epoch for the student. The GPU memory occupied during training is slightly higher than that of traditional distillation methods. In general, our training time is not significantly higher than that of traditional methods, but brings considerable performance improvements.

IV-F Adaptability Analysis

Low-resolution Scenario. To verify the performance of EKD in the low-resolution scenario, we conduct experiments on challenging low-resolution image classification and low-resolution face recognition tasks.

For the image classification task, we first train ResNet50 as the teacher and then use ResNet18 as the student. The dataset is down-sampled to reduce the resolution to 16×16, and the classification loss is the classic cross-entropy loss. The experimental results are shown in Fig. 6: our approach shows good adaptability on the more challenging low-resolution image classification task, and it is 1.31% higher than the current best method HORKD [17]. In addition, HORKD consumes more computing resources due to the introduction of an assistant network. These results indicate the effectiveness of the evolutionary teacher supervision and the introduced guided modules in representing and transferring knowledge in the low-resolution scenario.
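One simple way to build such a low-resolution training set is sketched below; whether the 16×16 images are fed directly or resized back up is an implementation choice not specified here, so this is only an assumed pipeline.

```python
from torchvision import transforms
from torchvision.datasets import CIFAR100

# Hypothetical low-resolution pipeline: CIFAR100 images (32x32) are
# down-sampled to 16x16 before training.
low_res_transform = transforms.Compose([
    transforms.Resize(16),     # degrade resolution to 16x16
    transforms.ToTensor(),
])

train_set = CIFAR100(root="./data", train=True, download=True,
                     transform=low_res_transform)
```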

Method Top-1 Accuracy (%) Parameters Year
VLRR [60] 59.03 - 2016
SphereFace [35] 78.73 37M 2017
CosFace [59] 91.83 37M 2018
SKD [18] 67.25 0.79M 2019
AGC-GAN [55] 70.68 - 2019
VGGFace2 [7] 84.56 26M 2019
ArcFace [15] 88.73 37.8M 2019
LRFRW [32] 93.40 4.2M 2019
HORKD [17] 92.11 7.8M 2020
EKD (Ours) 93.85 0.61M -
TABLE VI: The performance of various methods on the UCCS benchmark. Our student model achieves great performance when working at low resolution (16×16) while costing fewer parameters.
Fig. 7: Few-sample image classification results on the CIFAR100 dataset. EKD trains the teacher and student with subsets. KD, FitNet and CRD train the teacher models on the full set and train the students with subsets.

Then we conduct experiments on the low-resolution face recognition task, which is very helpful in many real-world applications, e.g., recognizing low-resolution surveillance faces in the wild. In our experiments, the teacher uses a recent face recognizer, VGGFace2 [7], with a ResNet50 structure, and the student network is based on a streamlined ResNet18 with only 0.61M parameters; they are trained on UMDFaces [4] and tested on the UCCS dataset [20]. To verify the validity of our low-resolution models, we specifically check the accuracy when the input resolution is 16×16. As shown in Tab. VI, our student model achieves better low-resolution face recognition performance with fewer parameters. Specifically, we achieve a top-1 accuracy of 93.85% on the UCCS benchmark, which is 0.45% higher than the state-of-the-art LRFRW [32], while the number of parameters is reduced by nearly ten times. Classical face recognition methods, such as ArcFace and CosFace, do not perform well in the low-resolution scenario, with their highest recognition accuracy only reaching 88.73%. Moreover, their models have more parameters, which leads to a significant increase in the computing cost of inference. It is worth mentioning that VLRR [60], SKD [18], HORKD [17] and LRFRW [32] all require high-resolution images corresponding to the low-resolution faces during distillation to provide more information, but such high-resolution images are not always easy to obtain; our approach only uses low-resolution face images for training, which suits real-world application scenarios.

Few-sample Scenario. In practical scenarios (e.g., in the wild), the number of samples available for training is usually limited. To study the adaptability of our approach in the few-sample scenario, we conduct experiments on different subsets of the CIFAR100 dataset. We randomly select a portion of the images of each class to form new subsets, and use the newly formed training sets to train the student models while keeping the test set unchanged. ResNet50 and ResNet18 are chosen as teacher and student, respectively. We compare the performance of the student models with several typical distillation methods, including KD [23], FitNet [47] and CRD [56]. The percentages of retained samples are 100%, 75%, 50% and 25%. For a fair comparison, we use the same data in the different distillation approaches to train the student models and train their teacher models on the full dataset.

The results in Fig. 7 show that our approach remarkably surpasses the other distillation approaches in the few-sample scenario, even though the teacher stream of EKD learns and distills knowledge from less training data. Specifically, as the amount of training data decreases, the performance of the distillation methods represented by KD, FitNet and CRD in the figure decreases significantly, while the downward trend of our EKD is considerably milder; its advantage becomes even more obvious when there are fewer training samples. When the percentage of training samples is 25%, our approach is nearly 10% higher than CRD.

V Conclusion

In this paper, we propose an evolutionary knowledge distillation approach and show its superiority by comparing it with state-of-the-art offline and online distillation approaches. Our approach uses an evolutionary teacher to supervise the learning of the student from scratch, which reduces the teacher-student capability gap and promotes knowledge transfer. Moreover, through the introduction of simple guided module pairs between corresponding teacher-student blocks, the efficiency of intermediate knowledge representation is improved. We believe this evolutionary knowledge transfer manner and the simple guided mechanism are very promising for the knowledge distillation community. In the future, we will explore the potential of our approach in combination with existing knowledge distillation schemes and on more practical tasks.

Acknowledgement. This work was partially supported by grants from the National Key Research and Development Plan (2020AAA0140001), National Natural Science Foundation of China (61772513), Beijing Natural Science Foundation (19L2040), and the project from Beijing Municipal Science and Technology Commission (Z191100007119002). Shiming Ge is also supported by the Youth Innovation Promotion Association, Chinese Academy of Sciences.

References

  • [1] S. Ahn, S. X. Hu, A. C. Damianou, N. D. Lawrence, and Z. Dai (2019) Variational information distillation for knowledge transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9163–9171. Cited by: §II-A.
  • [2] R. Anil, G. Pereyra, A. Passos, R. Ormándi, G. E. Dahl, and G. E. Hinton (2018) Large scale distributed neural network training through online distillation. In International Conference on Learning Representations, Cited by: §I, §III.
  • [3] J. Ba and R. Caruana (2014) Do deep nets really need to be deep?. In Conference on Neural Information Processing Systems, pp. 2654–2662. Cited by: §II-A.
  • [4] A. Bansal, A. Nanduri, C. D. Castillo, R. Ranjan, and R. Chellappa (2017) Umdfaces: an annotated face dataset for training deep networks. In International Joint Conference on Biometrics, pp. 464–473. Cited by: §IV-A, §IV-F.
  • [5] L. Bottou (2012) Stochastic gradient descent tricks. In Neural networks: Tricks of the trade - Second Edition, Vol. 7700, pp. 421–436. Cited by: §III-C.
  • [6] C. Buciluǎ, R. Caruana, and A. Niculescu-Mizil (2006) Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 535–541. Cited by: §II-A.
  • [7] Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman VGGFace2: A dataset for recognising faces across pose and age. In 13th IEEE International Conference on Automatic Face & Gesture Recognition, FG 2018, Xi’an, China, May 15-19, 2018, pp. 67–74. Cited by: §IV-F, TABLE VI.
  • [8] D. Chen, J. Mei, C. Wang, Y. Feng, and C. Chen (2020) Online knowledge distillation with diverse peers. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 3430–3437. Cited by: §I, §II-A, §II-C, §IV-A, §IV-C, TABLE III.
  • [9] G. Chen, W. Choi, X. Yu, T. Han, and M. Chandraker (2017) Learning efficient object detection models with knowledge distillation. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 742–751. Cited by: §I.
  • [10] S. Chen, C. Zhang, and M. Dong (2018) Coupled end-to-end transfer learning with generalized fisher information. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4329–4338. Cited by: §II-A.
  • [11] Y. Chen, J. Wang, B. Zhu, M. Tang, and H. Lu (2017) Pixelwise deep sequence learning for moving object detection. IEEE Transactions on Circuits and Systems for Video Technology 29 (9), pp. 2567–2579. Cited by: §I.
  • [12] X. Cheng, Z. Rao, Y. Chen, and Q. Zhang (2020) Explaining knowledge distillation by quantifying the knowledge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12925–12935. Cited by: §III.
  • [13] I. Chung, S. Park, J. Kim, and N. Kwak (2020) Feature-map-level online adversarial knowledge distillation. In International Conference on Machine Learning, pp. 2006–2015. Cited by: §II-C.
  • [14] J. Crowley, G. Gavin, and A. Storkey (2018) Moonshine: distilling with cheap convolutions. In Conference on Neural Information Processing Systems, pp. 2893–2903. Cited by: §II-B.
  • [15] J. Deng, J. Guo, N. Xue, and S. Zafeiriou (2019) Arcface: additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4690–4699. Cited by: TABLE VI.
  • [16] T. Furlanello, Z. Lipton, M. Tschannen, L. Itti, and A. Anandkumar (2018) Born again neural networks. In International Conference on Machine Learning, pp. 1607–1616. Cited by: §II-B.
  • [17] S. Ge, K. Zhang, H. Liu, Y. Hua, S. Zhao, X. Jin, and H. Wen (2020) Look one and more: distilling hybrid order relational knowledge for cross-resolution image recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 10845–10852. Cited by: §IV-F, §IV-F, §IV-F, TABLE VI.
  • [18] S. Ge, S. Zhao, C. Li, and J. Li (2018) Low-resolution face recognition in the wild via selective knowledge distillation. TIP, pp. 2051–2062. Cited by: §IV-A, §IV-F, TABLE VI.
  • [19] S. Ghosh, N. Das, I. Das, and U. Maulik (2019) Understanding deep learning techniques for image segmentation. ACM Computing Surveys (CSUR) 52 (4), pp. 1–35. Cited by: §I.
  • [20] M. Günther, P. Hu, C. Herrmann, C. H. Chan, M. Jiang, S. Yang, A. R. Dhamija, D. Ramanan, J. Beyerer, J. Kittler, M. A. Jazaery, M. I. Nouyed, G. Guo, C. Stankiewicz, and T. E. Boult (2017) Unconstrained face detection and open-set face recognition challenge. In International Joint Conference on Biometrics, Cited by: §IV-A, §IV-F.
  • [21] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §III, §IV-A.
  • [22] B. Heo, M. Lee, S. Yun, and J. Y. Choi (2019) Knowledge transfer via distillation of activation boundaries formed by hidden neurons. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 3779–3787. Cited by: TABLE I, TABLE II, §IV-A.
  • [23] G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. In Deep Learning and Representation Learning Workshop on Neural Information Processing Systems, Cited by: §I, §II-A, §III-B, §III-C, TABLE I, TABLE II, Fig. 3, §IV-A, §IV-B, §IV-D, §IV-E, §IV-F, TABLE IV.
  • [24] Y. Hu, J. Li, Y. Huang, and X. Gao (2019) Channel-wise and spatial feature modulation network for single image super-resolution. IEEE Transactions on Circuits and Systems for Video Technology 30 (11), pp. 3911–3927. Cited by: §I.
  • [25] K. Jangho, P. Seonguk, and K. Nojun (2018) Paraphrasing complex network: network compression via factor transfer. In Conference on Neural Information Processing Systems, pp. 2765–2774. Cited by: TABLE I, TABLE II, §IV-A.
  • [26] M. Ji, B. Heo, and S. Park (2021) Show, attend and distill: knowledge distillation via attention-based feature matching. Proceedings of the AAAI Conference on Artificial Intelligence. Cited by: §IV-A, §IV-B, TABLE IV.
  • [27] X. Jin, B. Peng, Y. Wu, Y. Liu, J. Liu, D. Liang, J. Yan, and X. Hu (2019) Knowledge distillation via route constrained optimization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1345–1354. Cited by: §I, §II-B.
  • [28] N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang (2017) On large-batch training for deep learning: generalization gap and sharp minima. In International Conference on Learning Representations, Cited by: §II-A.
  • [29] A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. Cited by: §IV-A.
  • [30] X. Lan, X. Zhu, and S. Gong (2018) Knowledge distillation by on-the-fly native ensemble. In Conference on Neural Information Processing Systems, pp. 7528–7538. Cited by: §I, §II-A, §II-C, §II-C, §IV-A, §IV-C, TABLE III.
  • [31] Y. Le and X. Yang (2015) Tiny imagenet visual recognition challenge. CS 231N 7, pp. 7. Cited by: §IV-A.
  • [32] P. Li, L. Prieto, D. Mery, and P. J. Flynn (2019) On low-resolution face recognition in the wild: comparisons and new techniques. IEEE Transactions on Information Forensics and Security 14 (8), pp. 2000–2012. Cited by: §IV-F, TABLE VI.
  • [33] X. Li, S. Li, B. Omar, and X. Li (2020) ResKD: residual-guided knowledge distillation. arXiv preprint arXiv:2006.04719. Cited by: §IV-A, §IV-B, TABLE IV.
  • [34] T. Liu, K. Lam, R. Zhao, and G. Qiu (2021) Deep cross-modal representation learning and distillation for illumination-invariant pedestrian detection. IEEE Transactions on Circuits and Systems for Video Technology. Cited by: §II-A.
  • [35] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song (2017) Sphereface: deep hypersphere embedding for face recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 212–220. Cited by: TABLE VI.
  • [36] Y. Liu, K. Chen, C. Liu, Z. Qin, Z. Luo, and J. Wang (2019) Structured knowledge distillation for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2604–2613. Cited by: §II-A.
  • [37] Y. Liu, J. Cao, B. Li, C. Yuan, W. Hu, Y. Li, and Y. Duan (2019) Knowledge distillation via instance relationship graph. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7096–7104. Cited by: §I, §II-B.
  • [38] N. Ma, X. Zhang, H. Zheng, and J. Sun (2018) ShuffleNet V2: practical guidelines for efficient CNN architecture design. In Proceedings of the European Conference on Computer Vision, pp. 116–131. Cited by: §IV-A.
  • [39] M. R. Minar and J. Naher (2018) Recent advances in deep learning: an overview. International Journal of Machine Learning and Computing, pp. 747–750. Cited by: §I.
  • [40] S. Mirzadeh, M. Farajtabar, A. Li, and H. Ghasemzadeh (2020) Improved knowledge distillation via teacher assistant: bridging the gap between student and teacher. Proceedings of the AAAI Conference on Artificial Intelligence, pp. 5191–5198. Cited by: §I, §II-B.
  • [41] W. Park, D. Kim, Y. Lu, and M. Cho (2019) Relational knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3967–3976. Cited by: §I, TABLE I, TABLE II, §IV-A, §IV-B, TABLE IV.
  • [42] N. Passalis, M. Tzelepi, and A. Tefas (2020) Heterogeneous knowledge distillation using information flow modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2336–2345. Cited by: §II-B.
  • [43] N. Passalis and A. Tefas (2018) Learning deep representations with probabilistic knowledge transfer. In Proceedings of the European Conference on Computer Vision, pp. 268–284. Cited by: TABLE I, TABLE II, §IV-A.
  • [44] B. Peng, X. Jin, J. Liu, D. Li, Y. Wu, Y. Liu, S. Zhou, and Z. Zhang (2019) Correlation congruence for knowledge distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5007–5016. Cited by: TABLE I, TABLE II, §IV-A.
  • [45] G. Pereyra, G. Tucker, J. Chorowski, L. Kaiser, and G. E. Hinton (2017) Regularizing neural networks by penalizing confident output distributions. In International Conference on Learning Representations, Cited by: §II-A.
  • [46] A. Polino, R. Pascanu, and D. Alistarh (2018) Model compression via distillation and quantization. In International Conference on Learning Representations, Cited by: §II-A.
  • [47] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio (2015) FitNets: hints for thin deep nets. In International Conference on Learning Representations, Cited by: §I, §II-B, §III-B, TABLE I, TABLE II, §IV-A, §IV-B, §IV-F, TABLE IV.
  • [48] S. Ruder, P. Ghaffari, and J. G. Breslin (2017) Knowledge adaptation: teaching to adapt. arXiv preprint arXiv:1702.02052. Cited by: §II-B.
  • [49] S. Zagoruyko and N. Komodakis (2016) Wide residual networks. In The British Machine Vision Conference, Cited by: §IV-A.
  • [50] Z. Shen, Z. He, and X. Xue (2019) MEAL: multi-model ensemble via adversarial learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 4886–4893. Cited by: §II-B.
  • [51] Z. Shen and M. Savvides (2020) MEAL V2: boosting vanilla ResNet-50 to 80%+ top-1 accuracy on ImageNet without tricks. arXiv preprint arXiv:2009.08453. Cited by: §II-B.
  • [52] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, Cited by: §IV-A.
  • [53] G. Song and W. Chai (2018) Collaborative learning for deep neural networks. In Conference on Neural Information Processing Systems, pp. 1837–1846. Cited by: §II-C, §IV-A, §IV-C, TABLE III.
  • [54] R. Takahashi, T. Matsubara, and K. Uehara (2020) Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30 (9), pp. 2917–2931. Cited by: §I.
  • [55] V. Talreja, F. Taherkhani, M. C. Valenti, and N. M. Nasrabadi (2019) Attribute-guided coupled GAN for cross-resolution face recognition. In IEEE 10th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1–10. Cited by: TABLE VI.
  • [56] Y. Tian, D. Krishnan, and P. Isola (2020) Contrastive representation distillation. In International Conference on Learning Representations, Cited by: §I, §II-A, §II-B, §III-C, TABLE I, TABLE II, Fig. 3, §IV-A, §IV-B, §IV-B, §IV-B, §IV-D, TABLE IV.
  • [57] F. Tung and G. Mori (2019) Similarity-preserving knowledge distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1365–1374. Cited by: TABLE I, TABLE II, §IV-A, §IV-B, §IV-B.
  • [58] L. Van der Maaten and G. Hinton (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9 (11). Cited by: §IV-D.
  • [59] H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and W. Liu (2018) CosFace: large margin cosine loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5265–5274. Cited by: TABLE VI.
  • [60] Z. Wang, S. Chang, Y. Yang, D. Liu, and T. S. Huang (2016) Studying very low resolution recognition using deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4792–4800. Cited by: §IV-F, TABLE VI.
  • [61] C. Yang, L. Xie, S. Qiao, and A. Yuille (2018) Knowledge distillation in generations: more tolerant teachers educate better students. arXiv preprint arXiv:1805.05551. Cited by: §II-B, §III-C.
  • [62] C. Yang, L. Xie, C. Su, and A. L. Yuille (2019) Snapshot distillation: teacher-student optimization in one generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2859–2868. Cited by: §I, §I, §II-B, §II-C, §II-C, §III-C, §IV-A, §IV-C, TABLE III.
  • [63] J. Yim, D. Joo, J. Bae, and J. Kim (2017) A gift from knowledge distillation: fast optimization, network minimization and transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4133–4141. Cited by: §I, §II-B.
  • [64] S. You, C. Xu, C. Xu, and D. Tao (2017) Learning from multiple teacher networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1285–1294. Cited by: §II-B.
  • [65] S. Yun, J. Park, K. Lee, and J. Shin (2020) Regularizing class-wise predictions via self-knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13876–13885. Cited by: §II-B.
  • [66] S. Zagoruyko and N. Komodakis (2017) Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer. In International Conference on Learning Representations, Cited by: TABLE I, TABLE II, §IV-A, §IV-B, TABLE IV.
  • [67] H. Zhai, S. Lai, H. Jin, X. Qian, and T. Mei (2021) Deep transfer hashing for image retrieval. IEEE Transactions on Circuits and Systems for Video Technology 31 (2), pp. 742–753. Cited by: §I.
  • [68] L. Zhang, J. Song, A. Gao, et al. (2019) Be your own teacher: improve the performance of convolutional neural networks via self distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3712–3721. Cited by: §II-C, §II-C, §IV-A, §IV-C, §IV-C, TABLE III.
  • [69] X. Zhang, X. Zhou, M. Lin, and J. Sun (2018) ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6848–6856. Cited by: §IV-A.
  • [70] Y. Zhang, T. Xiang, T. M. Hospedales, and H. Lu (2018) Deep mutual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4320–4328. Cited by: §II-A, §II-C, §III, §IV-A, §IV-C, TABLE III.
  • [71] M. Zhao, C. Zhang, J. Zhang, F. Porikli, B. Ni, and W. Zhang (2019) Scale-aware crowd counting via depth-embedded convolutional neural networks. IEEE Transactions on Circuits and Systems for Video Technology 30 (10), pp. 3651–3662. Cited by: §I.