Snapshot Distillation: Teacher-Student Optimization in One Generation

12/01/2018 · by Chenglin Yang, et al.

Optimizing a deep neural network is a fundamental task in computer vision, yet direct training methods often suffer from over-fitting. Teacher-student optimization aims at providing complementary cues from a model trained previously, but these approaches are often considerably slow due to the pipeline of training a few generations in sequence, i.e., the time complexity is increased several times. This paper presents snapshot distillation (SD), the first framework that enables teacher-student optimization in one generation. The idea of SD is very simple: instead of borrowing supervision signals from previous generations, we extract such information from earlier epochs in the same generation, while making sure that the difference between teacher and student is sufficiently large so as to prevent under-fitting. To achieve this goal, we implement SD within a cyclic learning rate policy, in which the last snapshot of each cycle is used as the teacher for all iterations in the next cycle, and the teacher signal is smoothed to provide richer information. On standard image classification benchmarks such as CIFAR100 and ILSVRC2012, SD achieves consistent accuracy gains without heavy computational overheads. We also verify that models pre-trained with SD transfer well to object detection and semantic segmentation on the PascalVOC dataset.


1 Introduction

A large portion of recent advances in computer vision have been built upon deep learning, in particular training very deep neural networks. With the depth increasing from tens [25][37][40] to hundreds [18][22], network optimization becomes a more and more important yet challenging problem, for which researchers have proposed various approaches to deal with under-fitting [30], over-fitting [39], and numerical instability [23].

As an alternative approach to assist training, teacher-student (T-S) optimization was originally designed for training a smaller network to approximate the behavior of a larger one, i.e., model compression [19], but researchers later found it effective in providing complementary cues for training the same network [11][2]. These approaches require a teacher model, which is often obtained from a standalone training process. Then, an extra loss term measuring the similarity between the teacher and the student is added to the existing cross-entropy loss. It was believed that such an optimization process benefits from so-called secondary information [49], i.e., class-level similarity that allows the student not to fit the one-hot class distribution exactly. Despite their success in improving recognition accuracy, these approaches often suffer from much heavier computational overheads, because a sequence of models needs to be optimized one by one. A training process with one teacher and several students requires several times the training time of a single model.

Method | SA | IN | G
Knowledge Distillation (2015) [19]
FitNet (2015) [35]
Net2Net (2016) [5]
A Gift from KD (2017) [50]
Label Refinery (2018) [2]
Born-Again Network (2018) [11]
Tolerant Teacher (2018) [49]
Snapshot Distillation (this work)
Table 1: The attributes of different teacher-student optimization approaches, where SA indicates that teacher and student have the same architecture, IN indicates being evaluated on ImageNet, and G indicates that the entire process is done within one generation. See Section 2 for a detailed survey.

This paper presents an algorithm named snapshot distillation (SD) to perform T-S optimization in one generation which, to the best of our knowledge, was not achieved in prior research. The differences between SD and previous methods are summarized in Table 1. The key idea of SD is straightforward: taking extra supervision (a.k.a. the teacher signal) from prior iterations (in the same generation) instead of prior generations. Based on this framework, we investigate several factors that impact the performance of T-S optimization, and summarize three principles, namely, (i) the teacher model has been well optimized; (ii) the teacher and student models are sufficiently different from each other; and (iii) the teacher provides secondary information [49] for the student to learn. Summarizing these requirements leads to our solution of using a cyclic learning rate policy, in which the last snapshot of each cycle (which arrives at a high accuracy and thus satisfies (i)) serves as the teacher for all iterations in the next cycle (these iterations are pulled away from the teacher after a learning rate boost, which satisfies (ii)). We also introduce a novel method to smooth the teacher signal in order to provide mild and more effective supervision (which satisfies (iii)).

Experiments are performed on two standard benchmarks for image classification, namely, CIFAR100 [24] and ILSVRC2012 [36]. SD consistently outperforms the baseline (direct optimization), especially on deeper networks. In addition, SD requires only a small amount of extra training time beyond the baseline, which is much faster than existing multi-generation approaches. We also fine-tune the models trained by SD for object detection and semantic segmentation on the PascalVOC dataset [10] and observe accuracy gains, implying that the improvement brought by SD is transferable.

The remainder of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 describes snapshot distillation and provides practical guides for T-S optimization in one generation. After experiments are shown in Section 4, we conclude this work in Section 5.

2 Related Work

Recently, the research field of computer vision has been largely boosted by the theory of deep learning [26]. With the availability of large-scale image datasets [7] and powerful computational resources, researchers designed deep networks to replace traditional handcrafted features [32] for visual understanding. The fundamental idea is to build a hierarchical network structure containing multiple layers, each of which contains a number of neurons with the same or similar mathematical functions, e.g., convolution, pooling, normalization, etc. The strong ability of deep networks to fit complicated feature-space distributions has been widely verified in the literature. In the fundamental task of image classification, deep convolutional neural networks [25] have been dominating the large-scale competitions [36]. To further improve classification accuracy, researchers designed deeper and deeper networks [37][40][18][22][20], and also explored the possibility of discovering network architectures automatically [46][57][27].

The rapid progress of deep neural networks has benefited a wide range of visual recognition tasks. Features extracted from pre-trained classification networks can be transferred to small datasets for image classification [8], retrieval [33], or object detection [14]. To transfer knowledge to a wider range of tasks, researchers often adopt a technique named fine-tuning, which replaces the last few layers of a classification network with specially designed modules (e.g., up-sampling for semantic segmentation [28][3] and edge detection [48], or regional proposal extraction for object detection [13][34]), so that the network can exploit the properties of the target problem while borrowing visual features from basic classification.

On the other hand, optimizing a deep neural network is a challenging problem. When the number of layers becomes very large (e.g., hundreds of layers), vanilla gradient descent approaches often encounter stability issues and/or over-fitting. To deal with them, researchers designed various approaches such as ReLU activation [30], Dropout [39] and batch normalization [23]. However, as depth increases, the large number of parameters makes it easy for neural networks to become over-confident [15], especially in scenarios with limited training data. An effective remedy is to introduce extra priors or biases to constrain the training process. A popular example is to assume that some visual categories are more similar than others [6], so that a class-level similarity matrix is added to the loss function [43][45]. However, this method still fails to model per-image class-level similarity (e.g., a cat in one image may look like a dog, while in another image it may be closer to a rabbit), which has been observed in previous research [44][1][52].

Teacher-student optimization is an effective way to formulate per-image class-level similarity. In this flowchart, a teacher network is first trained and then used to guide the student network, in which process the output (e.g., confidence scores) of the teacher network carries class-level similarity for each image. This idea was first proposed to distill knowledge from a larger teacher network and compress it into a smaller student network [19][35], or to initialize a deeper/wider network with pre-trained weights of a shallower/narrower network [5][37]. Later, it was extended in various aspects, including using an adjusted way of teacher supervision [41][31], using multiple teachers towards a better guidance [42], adding supervision to intermediate neural responses [50], and allowing two networks to provide supervision to each other [55]. Recently, researchers noted that this idea can be used to optimize deep networks over many generations [2][11], namely, a few networks with the same architecture are optimized one by one, in which the next one borrows supervision from the previous one. It was argued that the softness of the teacher signal plays an important role in educating a good student [49]. Despite the success of these approaches in boosting recognition accuracy, they suffer from lower training efficiency, as a multi-generation process (one teacher followed by several students) requires several times more training time. An inspiring cue comes from the effort of training a few models for ensemble within the same time budget [21], in which the number of iterations for training each model was largely reduced.

3 Snapshot Distillation

This section presents snapshot distillation (SD), the first approach that achieves teacher-student (T-S) optimization within one generation. We first briefly introduce a general flowchart of T-S optimization and build a notation system. Then, we analyze the main difficulties that limit its efficiency, based on which we formulate SD and discuss principles and techniques to improve its performance.

3.1 Teacher-Student Optimization

Let a deep neural network be y = f(x; θ), where x denotes the input image, y denotes the output data (e.g., a C-dimensional vector for classification, with C being the number of classes), and θ denotes the learnable parameters. These parameters are often initialized as random noise, and then optimized using a training set D = {(x_n, y_n)}_{n=1}^{N} with N data samples.

A conventional optimization algorithm works by sampling mini-batches, or subsets, from the training set. Each of them, denoted as B ⊆ D, is fed into the current model to estimate the difference between predictions and ground-truth labels:

\mathcal{L}(\mathcal{B};\theta) = -\frac{1}{|\mathcal{B}|}\sum_{(x_n,y_n)\in\mathcal{B}} y_n^\top \ln f(x_n;\theta). \qquad (1)

This process searches over the parameter space to find the approximately optimal θ that interprets or fits D. However, the model trained in this way often over-fits the training set, i.e., it cannot be transferred to the testing set to achieve performance as good as on the training set. As observed in prior work [15], this is partly because the supervision is provided in one-hot vectors, which force the network to prefer the true class overwhelmingly over all other classes; this is often not the optimal choice because the rich information of class-level similarity is simply discarded [45][49].

To alleviate this issue, teacher-student (T-S) optimization was proposed, in which a pre-trained teacher network adds an extra term to the loss function measuring the KL-divergence between teacher and student [11]:

\mathcal{L}(\mathcal{B};\theta^{\mathrm{S}}) = \frac{1}{|\mathcal{B}|}\sum_{(x_n,y_n)\in\mathcal{B}}\Big[-y_n^\top \ln f(x_n;\theta^{\mathrm{S}}) + \mathrm{KL}\big(f(x_n;\theta^{\mathrm{T}})\,\|\,f(x_n;\theta^{\mathrm{S}})\big)\Big], \qquad (2)

where θ^T and θ^S denote the parameters of the teacher and student models, respectively. This is to say, the fitting goal of the student is no longer the ground-truth one-hot vector, which is too strict, but leans towards the teacher signal (a softened vector, most often with a correct prediction). This formulation can be applied over multiple generations [2][11][49]. These approaches start with a so-called patriarch model, and in each generation the previously trained model is used to teach the next one. [49] showed the necessity of setting a tolerant teacher so that the students can absorb richer information from class-level similarity and achieve higher accuracy.
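For concreteness, the following is a minimal PyTorch-style sketch of the T-S loss in Eqn (2); the weighting coefficient `lam` is a hypothetical hyper-parameter introduced here for illustration, not a value prescribed by the paper.

```python
import torch.nn.functional as F

def teacher_student_loss(student_logits, teacher_logits, labels, lam=0.5):
    # Hard-label term: cross-entropy against the ground-truth class, as in Eqn (1).
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL-divergence from the (frozen) teacher's output
    # distribution to the student's, as in Eqn (2).
    kl = F.kl_div(F.log_softmax(student_logits, dim=1),
                  F.softmax(teacher_logits.detach(), dim=1),
                  reduction="batchmean")
    # `lam` is an illustrative weight between the two terms.
    return lam * ce + (1.0 - lam) * kl
```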

Despite the ability of T-S optimization to improve recognition accuracy, it often suffers from being computationally expensive. Typically, a T-S process with one teacher and several students costs several times more training time, yet this process is often difficult to parallelize (to make a fair comparison, researchers often train deep networks using a fixed number of GPUs; T-S optimization trains models serially, which is hard to accelerate even with a larger number of GPUs). This motivates us to propose an approach named snapshot distillation (SD), which is able to finish T-S optimization in one generation.

3.2 The Flowchart of Snapshot Distillation

The idea of SD is very simple. To finish T-S optimization in one generation, during the training process, we always extract the teacher signal from an earlier iteration, by which we refer to an intermediate status of the same model, rather than another model that was optimized individually.

Mathematically, let θ_0 be the randomly initialized parameters. The baseline training process contains a total of T iterations, the t-th of which samples a mini-batch B_t, computes the gradient of Eqn (1), and updates the parameters from θ_{t-1} to θ_t. SD works by assigning a number c(t) < t to the t-th iteration, indicating a previous snapshot θ_{c(t)} to serve as the teacher when updating θ_{t-1}. Thus, Eqn (1) becomes:

\mathcal{L}(\mathcal{B}_t;\theta_{t-1}) = \frac{1}{|\mathcal{B}_t|}\sum_{(x_n,y_n)\in\mathcal{B}_t}\Big[-\lambda^{\mathrm{CE}}\, y_n^\top \ln f(x_n;\theta_{t-1}) + \lambda^{\mathrm{KL}}\,\mathrm{KL}\big(f(x_n;\theta_{c(t)})\,\|\,f(x_n;\theta_{t-1})\big)\Big]. \qquad (3)

Here λ^CE and λ^KL are weights for the one-hot and teacher supervisions, respectively. When c(t) = 0, the teacher signal is ignored at the current iteration, and thus Eqn (3) degenerates to Eqn (1). The pseudo code of SD is provided in Algorithm 1. In what follows, we will discuss several principles required to improve the performance of SD.

Input: training set D, number of iterations T, training configurations (teacher assignment c(t), learning rates, loss weights, etc.);
1  Initialize θ_0;
2  for t = 1, 2, ..., T do
3      Sample a mini-batch B_t from D;
4      Compute the loss using Eqn (3), with θ_{c(t)} as the teacher (or Eqn (1) if c(t) = 0);
5      Update θ_{t-1} to θ_t by gradient descent;
6  end for
Return: θ_T.
Algorithm 1 Snapshot Distillation
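Below is a minimal PyTorch-style sketch of the training loop in Algorithm 1, under the cyclic policy described later in Section 3.3.4. All hyper-parameter defaults (base_lr, lam, temperature, momentum, weight decay) are placeholders for illustration, not the paper's settings.

```python
import copy
import math
import torch
import torch.nn.functional as F

def snapshot_distillation(model, loader, mini_gen_lengths, base_lr=0.1,
                          lam=0.5, temperature=2.0, device="cuda"):
    """Sketch of Algorithm 1 with a restarted cosine learning rate;
    hyper-parameter defaults are placeholders, not the paper's values."""
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9,
                                weight_decay=1e-4, nesterov=True)
    teacher = None                      # the first mini-generation has no teacher
    data_iter = iter(loader)

    for length in mini_gen_lengths:     # one pass per mini-generation
        for i in range(length):
            # Cosine annealing restarted at the start of each mini-generation (Eqn (5)).
            lr = 0.5 * base_lr * (1.0 + math.cos(math.pi * i / length))
            for group in optimizer.param_groups:
                group["lr"] = lr

            try:
                images, labels = next(data_iter)
            except StopIteration:       # restart the data loader when exhausted
                data_iter = iter(loader)
                images, labels = next(data_iter)
            images, labels = images.to(device), labels.to(device)

            logits = model(images)
            loss = F.cross_entropy(logits, labels)          # Eqn (1)
            if teacher is not None:                         # Eqn (3)
                with torch.no_grad():
                    teacher_logits = teacher(images)
                # Asymmetric distillation: only the teacher logits are softened.
                kl = F.kl_div(F.log_softmax(logits, dim=1),
                              F.softmax(teacher_logits / temperature, dim=1),
                              reduction="batchmean")
                loss = lam * loss + (1.0 - lam) * kl

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # The last snapshot of this cycle teaches every iteration of the next cycle.
        teacher = copy.deepcopy(model).eval()

    return model
```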

3.3 Principles of Snapshot Distillation

This subsection forms the core contribution of our work, which discusses the principles that should be satisfied to improve the performance of SD. In practice, this involves how to design the hyper-parameters, e.g., the teacher assignment c(t), the learning rates, and the loss weights. We first describe three principles individually, and then summarize them to give our solution in the final part.

3.3.1 Principle #1: The Quality of Teacher

In prior work, the importance of having a high-quality teacher model has been well studied. At the origin of T-S optimization [19][35][50], a more powerful teacher model was used to guide a smaller and thus weaker student model, so that the teacher knowledge is distilled and compressed into the student. This phenomenon persists in a multi-generation T-S optimization in which teacher and student share the same network architecture [2].

Mathematically, the teacher model determines the second term on the right-hand side of Eqn (3), i.e., the KL-divergence between teacher and student. If the teacher is not well optimized and provides noisy supervision, the risk that the two terms conflict with each other becomes high. As we shall see later, this principle is even more important in SD, as the number of iterations allowed for optimizing each student becomes smaller, and the efficiency (or the speed of convergence) has a heavier impact on the final performance.

3.3.2 Principle #2: Teacher-Student Difference

In the context of T-S optimization in one generation, one more challenge emerges. In each iteration, the teacher and student are two snapshots from the same training process, and so the similarity between them is higher than that in multi-generation T-S optimization. This makes the second term on the right-hand side of Eqn (3) degenerate and, consequently, its contribution to the gradient that the student receives for updating itself is considerably changed.

Table 2: Classification error rates (%) on CIFAR100 with different T-S similarities. All models are trained for the same number of epochs, and all numbers are the average of two individual runs. The first row (self) shows the accuracies of standard models (no T-S optimization); in the following rows, when a teacher teaches a student snapshot, they share a number of common initial epochs. Some T-S pairs are probabilistically identical, so only one of them is tested (see Section 3.3.2 for details).

We evaluate the impact of T-S similarity using a DenseNet [22] on the CIFAR100 dataset [24]. All models are trained with the cosine annealing learning rate policy [29] for the same total number of epochs. Detailed settings are elaborated in Section 4.1. To construct T-S pairs with different similarities, we first perform a complete training process starting from scratch, and denote the final model by M. Then, we take snapshots at several intermediate points (including the scratch model), with the number after each model name indicating the number of elapsed epochs. We continue training these snapshots with the same configurations (mini-batch size, learning rates, etc.) but different randomization, which affects the mini-batch sampled in each iteration and the data augmentation performed on each training sample. The resulting models serve as teachers, where the superscript T indicates being used as a teacher model, and the number after each name indicates the number of common epochs shared with M. All these teacher models are trained for exactly the same number of epochs.

Now, we use these models to teach the intermediate snapshots of M. When a teacher teaches a snapshot, their common part, i.e., the first shared epochs, is preserved: the first epochs use Eqn (1) and the remaining epochs use Eqn (2). Results are summarized in Table 2. Note that, from a probabilistic perspective, the teachers branching off at different points are identical to each other in classification accuracy, so we expect them to provide the same teaching ability. We start by observing their behavior when the scratch model is the student; this case degenerates to a two-generation T-S optimization, and since all teachers are probabilistically identical, we only evaluate one of these pairs, which reports an accuracy higher than the baseline. However, when an intermediate snapshot is the student, the teacher that does not share initial epochs with it serves as a better teacher: the larger difference between teacher and student produces better classification performance. When a later snapshot is chosen to be the student, this phenomenon persists, i.e., T-S optimization prefers a larger difference between teacher and student.
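The branching procedure used above (continuing training from an intermediate snapshot under different randomness) might be sketched as follows; the function and argument names are hypothetical and only illustrate the idea.

```python
import copy
import torch

def branch_from_snapshot(snapshot_state_dict, model_fn, seed):
    """Resume training from an intermediate snapshot under a different random
    seed, so that the branched model shares only the initial epochs with the
    original run. `model_fn` builds a fresh network of the same architecture."""
    torch.manual_seed(seed)          # different seed -> different mini-batch order
    model = model_fn()               # and different data augmentation afterwards
    model.load_state_dict(copy.deepcopy(snapshot_state_dict))
    return model                     # continue the usual training loop from here
```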

3.3.3 Principle #3: Secondary Information

The last factor, also the one most studied before, is how knowledge is delivered from teacher to student. There are two arguments, both of which suggest that a smoother teacher signal preserves richer information, but they differ in the way of achieving this goal. The distillation algorithm [19] used a temperature term to smooth both input and output scores, while the tolerant teacher algorithm [49] trained a less confident teacher by adding a regularization term in the first generation (a.k.a. the patriarch), a strategy verified to be advantageous over the non-regularized version [11].

In the context of snapshot distillation, we follow [19] and divide the teacher signal (in logits, i.e., the neural responses before the softmax layer) by a temperature coefficient τ. In the framework of knowledge distillation, the student signals should also be softened before the KL-divergence with the teacher signals is computed. The reason is that a student with a shallow architecture is not capable of perfectly fitting the outputs of a teacher with a deep architecture, and thus matching the softened versions of their outputs is a more rational choice. The aim of knowledge distillation is to match the outputs, forcing the student to predict what the teacher predicts as closely as possible. However, our goal is to generate secondary information in T-S optimization, not to match outputs. As a result, we do not divide the student signal by τ. This strategy also aligns with Eqn (1) being used in the very first iterations (i.e., when no teacher signals are provided). In experiments, we observe faster convergence as well as consistent accuracy gains (see Section 4.1 for detailed numbers). We name this strategy asymmetric distillation.

3.3.4 Summary

Summarizing the above three principles, we present our solution to improve the performance of SD. We partition the entire training process of T iterations into K mini-generations with T_1, T_2, ..., T_K iterations, respectively, where T_1 + T_2 + ... + T_K = T. The last iteration in each mini-generation serves as the teacher for all iterations in the next mini-generation. That is to say, there are K - 1 teachers: the first teacher is the snapshot at T_1 iterations, the second one at T_1 + T_2 iterations, and the last one at T_1 + ... + T_{K-1} iterations. We have:

c(t) = \begin{cases} 0, & 0 < t \leq T_1, \\ T_1 + \cdots + T_k, & T_1 + \cdots + T_k < t \leq T_1 + \cdots + T_{k+1},\ k = 1,\ldots,K-1. \end{cases} \qquad (4)

For the first mini-generation, we define c(t) = 0 for later convenience; in this case the teacher signal is absent, and Eqn (3) degenerates to Eqn (1). Following Principle #2, we shall ensure that the iterations right after each teacher have large learning rates, in order to guarantee a sufficient difference between the teacher and student models. Meanwhile, according to Principle #1, the teacher itself should be good, which implies that the iterations before each teacher should have small learning rates, making the network converge to an acceptable state. To satisfy both conditions, we require the learning rate within each mini-generation to start from a large value and gradually decay. In practice, we use the cosine annealing strategy [29], which was verified to converge better:

\gamma(t) = \frac{\gamma_0^{(k)}}{2}\left[1 + \cos\!\left(\pi\cdot\frac{t - (T_1 + \cdots + T_{k-1})}{T_k}\right)\right], \qquad (5)

Here, k is the index of the mini-generation containing iteration t, and γ_0^{(k)} is the starting learning rate at the beginning of this mini-generation (often set to be large). Finally, we follow Section 3.3.3 and use asymmetric distillation in order to satisfy Principle #3.
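The teacher assignment of Eqn (4) and the restarted cosine schedule of Eqn (5) can be sketched as two small helpers; both follow directly from the definitions above and assume 1-based iteration indices (function names are ours).

```python
import math

def teacher_index(t, mini_gen_lengths):
    """c(t) in Eqn (4): the snapshot index used as the teacher at iteration t,
    where 0 means no teacher (first mini-generation)."""
    prev_end, end = 0, 0
    for length in mini_gen_lengths:
        prev_end, end = end, end + length
        if t <= end:
            return prev_end
    raise ValueError("iteration index beyond the training schedule")

def learning_rate(t, mini_gen_lengths, start_lrs):
    """gamma(t) in Eqn (5): cosine annealing restarted at every mini-generation;
    start_lrs[k] is the starting rate of the (k+1)-th mini-generation."""
    start = 0
    for length, lr0 in zip(mini_gen_lengths, start_lrs):
        if t <= start + length:
            progress = (t - start - 1) / length   # position inside the current cycle
            return 0.5 * lr0 * (1.0 + math.cos(math.pi * progress))
        start += length
    raise ValueError("iteration index beyond the training schedule")
```

For example, with mini_gen_lengths = [100, 100, 100], teacher_index(150, ...) returns 100, i.e., the snapshot taken at the end of the first cycle.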

3.4 Discussions

If we remove the teacher signal (and thus also switch off asymmetric distillation), the above solution degenerates to snapshot ensemble (SE) [21]. In experiments, we compare these two approaches under the same setting, and find that both work well on CIFAR100 (SD reports better results), but on ILSVRC2012, SD achieves higher accuracy over the baseline while SE does not (the SE paper [21] reported a higher accuracy on ResNet50, but it was compared against the baseline with the stepwise learning rate policy, not the cosine annealing policy that should be the direct baseline; the latter baseline is noticeably higher than the former, and also outperforms SE). This is arguably because CIFAR100 is relatively simple, so that the original number of training iterations is over-sufficient for convergence, and thus reducing the number of iterations in each mini-generation does not cause a significant accuracy drop. ILSVRC2012, however, is much more challenging, and thus convergence becomes a major issue for both SD and SE. SD, with the extra benefit brought by T-S optimization, bridges this gap and outperforms the baseline.

Also, note that the above solution is only one choice. Under the generalized framework (Algorithm 1) and following these three principles, other training strategies can be explored, e.g., using super-convergence [38] to alleviate the drawback of weaker convergence. These options will be studied in the future.

4 Experiments

4.1 The CIFAR100 Dataset

Backbone    | Alg.
ResNet20    | BL, SE, SD (two temperature settings)
ResNet32    | BL, SE, SD (two temperature settings)
ResNet56    | BL, SE, SD (two temperature settings)
ResNet110   | BL, SE, SD (two temperature settings)
DenseNet100 | BL, SE, SD (two temperature settings)
DenseNet190 | BL, SE, SD (two temperature settings)
SOTA column: [51] [54] [56] [47] [22] [16] [53] [9] [12] [11] [49]
Table 3: CIFAR100 classification errors (%) obtained by different network backbones. Regarding the algorithm option, BL indicates the baseline model trained with cosine annealing learning rates, and SE indicates snapshot ensemble with the same learning rate policy as SD during the entire training process; τ is the temperature term. We report the accuracy at the end of each mini-generation, at the best epoch, and that obtained from the ensemble of all mini-generation snapshots, respectively. For the ensemble of SD, the logits are rescaled by the temperature term. Among the state-of-the-art (SOTA) methods, an asterisk indicates that model ensemble was used to achieve the corresponding error rate. In addition, [12] used complicated data augmentation to achieve its error rate; we only applied standard data augmentation.

4.1.1 Settings and Baselines

We first evaluate SD on the CIFAR100 dataset [24], a low-resolution (32×32) dataset containing 60,000 RGB images. These images are split into a training set of 50,000 images and a testing set of 10,000 images, and in both of them, images are uniformly distributed over all 100 classes (20 superclasses, each of which contains 5 fine-level classes). We do not perform experiments on the CIFAR10 dataset because it does not contain fine-level visual concepts, and thus the benefit brought by T-S optimization is not significant (this was also observed in [11] and analyzed in [49]).

We investigate two groups of baseline models. The first group contains standard deep ResNets [18] with 20, 32, 56 and 110 layers. Let the total number of layers be 6n+2; then n is the number of residual blocks in each stage. Given a 32×32 input image, a 3×3 convolution is first performed without changing the spatial resolution. Three stages follow, each of which has n residual blocks (two 3×3 convolutions summed up with an identity connection). Batch normalization [23] and ReLU activation [30] are applied after each convolutional layer. The spatial resolution changes across the three stages (32×32, 16×16 and 8×8), as does the number of channels (16, 32 and 64). An average pooling layer is inserted after each of the first two stages. The network ends with global average-pooling followed by a fully-connected layer with 100 outputs. The second group has two DenseNets [22] with 100 and 190 layers, respectively. These networks share a similar architecture with the ResNets, but the building blocks in each stage are densely connected, with the output of each block concatenated to the accumulated feature vector and fed into the next block. The base feature length and growth rate follow [22] (growth rates of 12 and 40 for DenseNet100 and DenseNet190, respectively).
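For reference, a generic CIFAR-style residual block matching the description above might look like the following; this is a sketch under standard ResNet conventions, not the authors' exact implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Two 3x3 convolutions with batch normalization and ReLU, summed with an
    identity (or projected) shortcut. A generic sketch of the block described
    above, with standard conventions assumed."""

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride=stride,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, stride=1,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            # Projection shortcut when the resolution or channel count changes.
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels))

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))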

Following the conventions, we train all these networks from scratch. We use standard Stochastic Gradient Descent (SGD) with weight decay and Nesterov momentum. The ResNets and DenseNets are trained for a fixed number of epochs with their respective mini-batch sizes and base learning rates. The cosine annealing learning rate [29] is used, in order to make a fair comparison between the baseline and SD. In the training process, standard data augmentation is used, i.e., each image is symmetrically padded with a 4-pixel margin on each of the four sides; from the enlarged image, a 32×32 subregion is randomly cropped and flipped with a probability of 0.5. We do not use any data augmentation in the testing stage.

To apply SD, we evenly partition the entire training process into K mini-generations, i.e., T_1 = T_2 = ... = T_K. The number of mini-generations and the number of epochs in each of them are set separately for the ResNets and the DenseNets. The same learning rate is used at the beginning of each mini-generation, and decayed following Eqn (5). We use the asymmetric distillation strategy (Section 3.3.3) with two temperature settings τ, reported separately in Table 3. In Eqn (3), we set λ^CE and λ^KL so as to approximately balance the two sources of gradients in their magnitudes [19].
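To make the setup concrete, a hypothetical configuration for one SD run could be organized as below; every numerical value here is a placeholder chosen for illustration, not one of the paper's settings.

```python
# Hypothetical SD configuration sketch; all numbers are illustrative placeholders.
sd_cifar100_config = {
    "num_mini_generations": 4,   # K: cycles of equal length (placeholder)
    "base_lr": 0.1,              # restarted at the beginning of every cycle
    "nesterov_momentum": 0.9,
    "weight_decay": 1e-4,
    "temperature": 2.0,          # applied to the teacher logits only
    "lambda_ce": 0.5,            # weight of the one-hot term in Eqn (3)
    "lambda_kl": 0.5,            # weight of the teacher KL term in Eqn (3)
}
```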

4.1.2 Quantitative Results and Analysis

Results are summarized in Table 3. For a fair comparison, for different instances of the same backbone, the network weights are initialized in the same way, although the randomness during the training process (e.g., data shuffling and augmentation) is not unified. In addition, the first mini-generation (which involves no T-S optimization) is shared between SE (snapshot ensemble) and SD.

We first observe that SD brings consistent accuracy gains for all models, regardless of the network backbone, surpassing both the baseline and SE. On DenseNet190, the most powerful baseline, SD achieves an error rate at the best epoch that is competitive among the state-of-the-art methods (all of which report the best epoch). Moreover, in terms of the ensemble of all mini-generation snapshots, SD provides numbers comparable to SE, although we emphasize that SD focuses on optimizing a single model while SE, with weaker single models, requires the ensemble to improve classification accuracy. Another explanation comes from the optimization policy of SD: by introducing a teacher signal to optimize each student, different snapshots of SD tend to share a higher similarity than those of SE, and this is the reason why SD reports a smaller accuracy gain from a single model to the model ensemble.

Another important topic to discuss is how asymmetric distillation impacts T-S optimization, for which we present several pieces of evidence. With a larger temperature term τ, the student tends to become smoother, i.e., the entropy of its class distribution is larger. However, as shown in [11] and [49], T-S optimization achieves satisfying performance by finding a balancing point between certainty and uncertainty, so, as the latter gradually increases, we can observe a peak in classification accuracy. On DenseNet190 with the larger temperature, this peak appears during the third mini-generation, which achieves the lowest error rate, but the error rate goes up afterwards. A similar phenomenon also appears on DenseNet100 with the larger temperature, which likewise achieves its lowest error at the third mini-generation, and on the ResNets. This reveals that the optimal temperature term is closely related to the network backbone. For a deeper backbone (e.g., DenseNet190), which itself has a strong ability to fit data, we use a smaller τ to introduce less soft labels, decreasing the ambiguity.

4.2 The ILSVRC2012 Dataset

4.2.1 Settings and Baselines

We now investigate a much more challenging dataset, ILSVRC2012 [36], which is a popular subset of the ImageNet database [7]. It contains about 1.3 million training images and 50,000 testing images, all of which are of high resolution, covering 1,000 object classes in total. The distribution over classes is approximately uniform in the training set and strictly uniform in the testing set.

We use deep ResNets [18] with 101 and 152 layers. They share the same overall design with the ResNets used for CIFAR100, but each residual block contains a so-called bottleneck structure which, in order to accelerate computation, compresses the number of channels and later recovers the original number. Each input image has a size of 224×224. After the first 7×7 convolutional layer with a stride of 2 and a max-pooling layer, four main stages follow with different numbers of blocks (ResNet101: 3, 4, 23 and 3; ResNet152: 3, 8, 36 and 3). The spatial resolutions in these four stages are 56×56, 28×28, 14×14 and 7×7, and the numbers of channels are 256, 512, 1024 and 2048, respectively. Three max-pooling layers are inserted between these four stages. The network ends with global average-pooling followed by a fully-connected layer with 1,000 outputs.

We follow the conventions to configure the training parameters. Standard Stochastic Gradient Descent (SGD) with weight decay and Nesterov momentum is used, and the mini-batch size is fixed throughout a fixed total number of epochs. We still use the cosine annealing learning rate [29] starting from the base learning rate. A series of data augmentation techniques [40] are applied in training to alleviate over-fitting, including rescaling and cropping the image, randomly mirroring and (slightly) rotating it, changing its aspect ratio, and performing pixel jittering. In the testing stage, the standard single-center-crop is used.

To apply SD, we set K = 2, which partitions the training process into two equal sections. The reason for using a smaller K (compared to the CIFAR experiments) is that on ILSVRC2012, with high-resolution images and more complex semantics, it is much more difficult to guarantee convergence with a smaller number of iterations within each mini-generation. Regarding the temperature term, we fix a single value of τ. Other settings are the same as in the CIFAR experiments.

4.2.2 Quantitative Results

Backbone   | Alg. | Top-1 | Top-5
ResNet101  | BL   |       |
ResNet101  | SE   |       |
ResNet101  | SD   |       |
ResNet152  | BL   |       |
ResNet152  | SE   |       |
ResNet152  | SD   |       |
Table 4: ILSVRC2012 classification errors (%) obtained by different network backbones. Regarding the algorithm option, BL indicates the baseline model trained with cosine annealing learning rates, SE indicates snapshot ensemble, and SD indicates snapshot distillation.

Experimental results are summarized in Table 4. To make a fair comparison, the first mini-generation of SE and SD is shared, and thus the only difference lies in whether the teacher signal is provided in the second mini-generation. We can see that the performance of SE is consistently worse than that of both BL and SD. Even if the two mini-generations of SE are fused, the error rates on ResNet101 and ResNet152 remain slightly higher than BL. This reveals that reducing the number of training epochs harms the ability to learn from a challenging dataset such as ILSVRC2012. Our approach, SD, applies a teacher signal as a remedy, so that the training process becomes more efficient, especially under a limited number of iterations.

In addition, SD achieves consistent accuracy gains over the baseline in terms of both top-1 and top-5 error rates, on both ResNet101 and ResNet152. These improvements may seem small, but we emphasize that (i) to the best of our knowledge, this is the first time that a model achieves higher accuracy on ILSVRC2012 with T-S optimization within one generation; and (ii) these accuracy gains transfer well to other visual recognition tasks, as shown in the next subsection.

Figure 1 plots the training and testing curves of both the baseline and SD for ResNet152. We can see that, in the second mini-generation, SD achieves a higher training error but a lower testing error, i.e., the gap between training and testing accuracies becomes smaller, which aligns with our motivation that T-S optimization alleviates over-fitting.

Figure 1: The training and testing curves of ResNet152.

4.3 Transfer Experiments

Last but not least, we fine-tune the models pre-trained on ILSVRC2012 for the object detection and semantic segmentation tasks on the PascalVOC dataset [10], a widely used benchmark in computer vision. The most powerful models, i.e., the baseline and SD versions of ResNet152, are transferred using a standard approach, which preserves the network backbone (all layers before the final pooling layer) and introduces a network head, namely Faster R-CNN [34] for object detection and DeepLab-v3 [4] for semantic segmentation.

Backbone     | mAP @ VOC2007 | mIOU @ VOC2012
ResNet152-BL |               |
ResNet152-SD |               |
Table 5: PascalVOC object detection (2007, mAP, %) and semantic segmentation (2012, mIOU, %) results, both obtained by fine-tuning deep networks pre-trained on ILSVRC2012 with Faster R-CNN [34] and DeepLab-v3 [4].

Each model is fine-tuned in an end-to-end manner. For object detection on PascalVOC 2007, the training images are fed into the network for a fixed number of epochs with a fixed mini-batch size; we start from an initial learning rate and divide it by a constant factor after a number of epochs. For semantic segmentation on PascalVOC 2012, the training images [17] are fed into the network for a fixed number of epochs with a fixed mini-batch size; we use the "poly" learning rate policy with a specified initial learning rate and power. Results in terms of mAP and mIOU are summarized in Table 5. One can see that the model with higher accuracy on ILSVRC2012 also works better in both tasks, i.e., the benefit brought by SD is preserved after fine-tuning. Also, we emphasize that SD, providing the same network architecture but a stronger model, does not incur any additional cost in transfer learning, which suggests its potential application in a wide range of vision problems.

5 Conclusions

In this paper, we present a framework named snapshot distillation (SD), which finishes teacher-student (T-S) optimization within one generation. To the best of our knowledge, this goal had never been achieved before. The key contribution is to take teacher signals from previous iterations of the same training process, together with a discussion of three principles that impact the performance of SD. The final solution is easy to implement and efficient to carry out. With only a small amount of extra training time, SD consistently boosts the classification accuracy of several baseline models on CIFAR100 and ILSVRC2012, and the performance gain persists after the trained models are fine-tuned for other vision tasks, e.g., object detection and semantic segmentation.

Our research reduces the basic unit of T-S optimization from a complete generation to a mini-generation composed of a number of iterations. The essential difficulty that prevents us from further partitioning this unit is the requirement of T-S difference. We believe there exists, though it has not yet been found, a way of eliminating this constraint so that the basic unit can be even smaller, e.g., a single iteration. If this is achieved, we may directly integrate supervision from the previous iteration into the current iteration, obtaining a new loss function in which the teacher signal appears as a term of higher-order gradients. We leave this topic for future research.

Acknowledgements This paper is supported by NSF award CCF-1317376 and ONR award N00014-15-1-2356. We thank Siyuan Qiao, Huiyu Wang and Chenxi Liu who provided insight and expertise to improve the research.

References