A large portion of recent advances in computer vision has been built upon deep learning, in particular the training of very deep neural networks. With depths increasing from tens to hundreds of layers, network optimization becomes an increasingly important yet challenging problem, for which researchers have proposed various approaches to deal with under-fitting, over-fitting and numerical instability.
As an alternative approach to assist training, teacher-student (T-S) optimization was originally designed for training a smaller network to approximate the behavior of a larger one, i.e., model compression, but researchers later found it effective in providing complementary cues for training the same network. These approaches require a teacher model, which is often obtained from a standalone training process. Then, an extra loss term, which measures the similarity between the teacher and the student, is added to the existing cross-entropy loss term. It is believed that such an optimization process benefits from so-called secondary information, i.e., class-level similarity that allows the student not to fit the one-hot class distribution exactly. Despite their success in improving recognition accuracy, these approaches often incur much heavier computational overhead, because a sequence of models needs to be optimized one by one: a training process with one teacher and several students requires correspondingly more training time than training a single model.
|Method| | | |
|---|---|---|---|
|Knowledge Distillation (2015)| | | |
|FitNet (2015)| | | |
|Net2Net (2016)|✓| | |
|A Gift from KD (2017)| | | |
|Label Refinery (2018)|✓|✓| |
|Born-Again Network (2018)|✓| | |
|Tolerant Teacher (2018)|✓|✓| |
|Snapshot Distillation (this work)|✓|✓|✓|
This paper presents an algorithm named snapshot distillation (SD) to perform T-S optimization in one generation which, to the best of our knowledge, was not achieved in prior research. The differences between SD and previous methods are summarized in Table 1. The key idea of SD is straightforward: taking extra supervision (a.k.a. the teacher signal) from prior iterations of the same generation instead of from prior generations. Based on this framework, we investigate several factors that impact the performance of T-S optimization, and summarize three principles, namely, (i) the teacher model has been well optimized; (ii) the teacher and student models are sufficiently different from each other; and (iii) the teacher provides secondary information for the student to learn. Summarizing these requirements leads to our solution of using a cyclic learning rate policy, in which the last snapshot of each cycle (which arrives at a high accuracy and thus satisfies (i)) serves as the teacher for all iterations in the next cycle (these iterations are pulled away from the teacher after a learning rate boost, which satisfies (ii)). We also introduce a novel method to smooth the teacher signal in order to provide mild and more effective supervision (which satisfies (iii)).
Experiments are performed on two standard benchmarks for image classification, namely CIFAR100 and ILSVRC2012. SD consistently outperforms the baseline (direct optimization), especially on deeper networks. In addition, SD requires only a small amount of extra training time beyond the baselines, which is much faster than existing multi-generation approaches. We also fine-tune the models trained by SD for object detection and semantic segmentation on the PascalVOC datasets and observe accuracy gains, implying that the improvement brought by SD is transferable.
2 Related Work
Recently, the research field of computer vision has been largely boosted by the theory of deep learning. With the availability of large-scale image datasets and powerful computational resources, researchers designed deep networks to replace traditional handcrafted features for visual understanding. The fundamental idea is to build a hierarchical network structure containing multiple layers, each of which contains a number of neurons having the same or similar mathematical functions, e.g., convolution, pooling, normalization, etc. The strong ability of deep networks to fit complicated feature-space distributions has been widely verified in the literature. In the fundamental task of image classification, deep convolutional neural networks have dominated the large-scale competitions. To further improve classification accuracy, researchers designed deeper and deeper networks, and also explored the possibility of discovering network architectures automatically.
The rapid progress of deep neural networks has helped a lot of visual recognition tasks. Features extracted from pre-trained classification networks can be transferred to small datasets for image classification, retrieval  or object detection . To transfer knowledge to a wider range of tasks, researchers often adopt a technique named fine-tuning, which replaces the last few layers of a classification network with some specially-designed modules (e.g., up-sampling for semantic segmentation  and edge detection  or regional proposal extraction for object detection ), so that the network can take advantage of the properties of the target problem while borrowing visual features from basic classification.
On the other hand, optimizing a deep neural network is a challenging problem. When the number of layers becomes very large, vanilla gradient descent approaches often encounter stability issues and/or over-fitting. To deal with them, researchers designed various approaches such as ReLU activation and Dropout. However, as depth increases, the large number of parameters makes it easy for the neural network to become over-confident, especially in scenarios with limited training data. An effective remedy is to introduce extra priors or biases to constrain the training process. A popular example is to assume that some visual categories are more similar than others, so that a class-level similarity matrix is added to the loss function. However, this method still suffers from being unable to model per-image class-level similarity (e.g., a cat in one image may look like a dog, but in another image, it may be closer to a rabbit), which has been observed in previous research.
Teacher-student optimization is an effective way to formulate per-image class-level similarity. In this flowchart, a teacher network is first trained and then used to guide the student network, in which process the output (e.g., confidence scores) of the teacher network carries class-level similarity for each image. This idea was first proposed to distill knowledge from a larger teacher network and compress it into a smaller student network, or to initialize a deeper/wider network with pre-trained weights of a shallower/narrower network. Later, it was extended in various aspects, including using an adjusted form of teacher supervision, using multiple teachers towards better guidance, adding supervision to intermediate neural responses, and allowing two networks to provide supervision to each other. Recently, researchers noted that this idea can be used to optimize deep networks over many generations, namely, a few networks with the same architecture are optimized one by one, where each network borrows supervision from the previous one. It was argued that the softness of the teacher signal plays an important role in educating a good student. Despite the success of these approaches in boosting recognition accuracy, they suffer from lower training efficiency, as a multi-generation process (one teacher followed by several students) requires correspondingly more training time. An inspiring cue comes from the effort of training several models for an ensemble within the same time budget, in which the number of iterations for training each model was largely reduced.
3 Snapshot Distillation
This section presents snapshot distillation (SD), the first approach that achieves teacher-student (T-S) optimization within one generation. We first briefly introduce a general flowchart of T-S optimization and build a notation system. Then, we analyze the main difficulties that limit its efficiency, based on which we formulate SD and discuss principles and techniques to improve its performance.
3.1 Teacher-Student Optimization
Let a deep neural network be $\mathbf{y} = \mathbf{f}(\mathbf{x}; \boldsymbol{\theta})$, where $\mathbf{x}$ denotes the input image, $\mathbf{y}$ denotes the output data (e.g., a $C$-dimensional vector for classification, with $C$ being the number of classes), and $\boldsymbol{\theta}$ denotes the learnable parameters. These parameters are often initialized as random noise, and then optimized using a training set with $N$ data samples, $\mathcal{D} = \{(\mathbf{x}_n, \mathbf{y}_n)\}_{n=1}^{N}$.
A conventional optimization algorithm works by sampling mini-batches, or subsets, from the training set. Each of them, denoted as $\mathcal{B} \subseteq \mathcal{D}$, is fed into the current model to estimate the difference between predictions and ground-truth labels:

$$\mathcal{L}(\mathcal{B}; \boldsymbol{\theta}) = -\frac{1}{|\mathcal{B}|} \sum_{(\mathbf{x}_n, \mathbf{y}_n) \in \mathcal{B}} \mathbf{y}_n^{\top} \ln \mathbf{f}(\mathbf{x}_n; \boldsymbol{\theta}). \qquad (1)$$

This process searches over the parameter space to find the approximately optimal $\boldsymbol{\theta}$ that interprets, or fits, $\mathcal{D}$. However, the model trained in this way often over-fits the training set, i.e., it cannot be transferred to the testing set to achieve performance as good as on the training set. As observed in prior work, this is partly because the supervision is provided in one-hot vectors, which force the network to prefer the true class overwhelmingly over all other classes; this is often not the optimal choice because rich information of class-level similarity is simply discarded.
To alleviate this issue, teacher-student (T-S) optimization was proposed, in which a pre-trained teacher network adds an extra term to the loss function, measuring the KL-divergence between teacher and student:

$$\mathcal{L}(\mathcal{B}; \boldsymbol{\theta}^{\mathrm{S}}) = -\frac{1}{|\mathcal{B}|} \sum_{(\mathbf{x}_n, \mathbf{y}_n) \in \mathcal{B}} \left[ \mathbf{y}_n^{\top} \ln \mathbf{f}(\mathbf{x}_n; \boldsymbol{\theta}^{\mathrm{S}}) + \lambda \cdot \mathrm{KL}\!\left( \mathbf{f}(\mathbf{x}_n; \boldsymbol{\theta}^{\mathrm{T}}) \,\big\|\, \mathbf{f}(\mathbf{x}_n; \boldsymbol{\theta}^{\mathrm{S}}) \right) \right], \qquad (3.1)$$

where $\boldsymbol{\theta}^{\mathrm{T}}$ and $\boldsymbol{\theta}^{\mathrm{S}}$ denote the parameters of the teacher and student models, respectively. That is to say, the fitting goal of the student is no longer the ground-truth one-hot vector, which is too strict, but leans towards the teacher signal (a softened vector, most often with a correct prediction). This formulation can be applied in the form of multiple generations. Let $G$ be the total number of generations. These approaches start with a so-called patriarch model $\mathbb{M}^{(0)}$, and in the $g$-th generation, $\mathbb{M}^{(g-1)}$ is used to teach $\mathbb{M}^{(g)}$. Prior work showed the necessity of setting a tolerant teacher so that the students can absorb richer information from class-level similarity and achieve higher accuracy.
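As a concrete sketch of this objective for a single sample (plain Python with illustrative helper names; the weight $\lambda$ appears as `lam`), the combined loss might look like:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ts_loss(student_logits, teacher_logits, label, lam=1.0):
    # Cross-entropy against the one-hot label, plus lambda times the
    # KL-divergence from the teacher's distribution to the student's.
    p_s = softmax(student_logits)
    p_t = softmax(teacher_logits)
    cross_entropy = -math.log(p_s[label])
    kl = sum(t * math.log(t / s) for t, s in zip(p_t, p_s))
    return cross_entropy + lam * kl
```

When the teacher and student produce identical logits, the KL term vanishes and the loss reduces to plain cross-entropy, which is exactly the degenerate case discussed below.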
Despite the ability of T-S optimization to improve recognition accuracy, it often suffers the weakness of being computationally expensive. Typically, a T-S process with one teacher and several students costs multiple times more training time, yet this process is often difficult to parallelize (to make fair comparisons, researchers often train deep networks using a fixed number of GPUs; T-S optimization trains models serially, which is often difficult to accelerate even with a larger number of GPUs). This motivates us to propose an approach named snapshot distillation (SD), which is able to finish T-S optimization in one generation.
3.2 The Flowchart of Snapshot Distillation
The idea of SD is very simple. To finish T-S optimization in one generation, during the training process we always extract the teacher signal from an earlier iteration (by which we refer to an intermediate status of the same model), rather than from another model that was optimized individually.
Mathematically, let $\boldsymbol{\theta}_0$ be the randomly initialized parameters. The baseline training process contains a total of $T$ iterations, the $t$-th of which samples a mini-batch $\mathcal{B}_t$, computes the gradient of Eqn (1), and updates the parameters from $\boldsymbol{\theta}_{t-1}$ to $\boldsymbol{\theta}_t$. SD works by assigning a number $c(t)$ to the $t$-th iteration, indicating the previous snapshot $\boldsymbol{\theta}_{c(t)}$ that serves as the teacher for updating $\boldsymbol{\theta}_{t-1}$. Thus, Eqn (3.1) becomes:

$$\mathcal{L}(\mathcal{B}_t; \boldsymbol{\theta}_{t-1}) = -\frac{1}{|\mathcal{B}_t|} \sum_{(\mathbf{x}_n, \mathbf{y}_n) \in \mathcal{B}_t} \left[ \eta^{(t)} \cdot \mathbf{y}_n^{\top} \ln \mathbf{f}(\mathbf{x}_n; \boldsymbol{\theta}_{t-1}) + \lambda^{(t)} \cdot \mathrm{KL}\!\left( \mathbf{f}(\mathbf{x}_n; \boldsymbol{\theta}_{c(t)}) \,\big\|\, \mathbf{f}(\mathbf{x}_n; \boldsymbol{\theta}_{t-1}) \right) \right]. \qquad (3.2)$$

Here $\eta^{(t)}$ and $\lambda^{(t)}$ are the weights for the one-hot and teacher supervisions, respectively. When $\lambda^{(t)} = 0$, the teacher signal is ignored at the current iteration, and thus Eqn (3.2) degenerates to Eqn (1). The pseudo code of SD is provided in Algorithm 1. In what follows, we discuss several principles required to improve the performance of SD.
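The overall loop can be sketched as follows. This is a minimal skeleton rather than Algorithm 1 verbatim; `teacher_of` and `train_step` are hypothetical placeholder names standing in for the teacher assignment $c(t)$ and one gradient update, and integers stand in for network weights:

```python
def snapshot_distillation(num_iters, teacher_of, lam, train_step):
    # Skeleton of one-generation T-S optimization: the teacher at
    # iteration t is an earlier snapshot of the same training process.
    #   teacher_of(t) -> index c(t) of the teacher snapshot, or None
    #   train_step(params, teacher_params, lam) -> updated params
    snapshots = {0: 0}  # snapshot store; real code would keep network weights
    params = snapshots[0]
    for t in range(1, num_iters + 1):
        c = teacher_of(t)
        teacher = snapshots[c] if c is not None else None
        params = train_step(params, teacher, lam if teacher is not None else 0.0)
        snapshots[t] = params  # in practice only a few snapshots are kept
    return params
```

In a practical implementation, only the snapshots actually used as teachers need to be stored, so the memory overhead stays constant.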
3.3 Principles of Snapshot Distillation
This subsection forms the core contribution of our work, discussing the principles that should be satisfied to improve the performance of SD. In practice, this involves how to design the hyper-parameters of SD, i.e., the teacher assignment and the loss weights at each iteration. We first describe three principles individually, and then summarize them to give our solution in the final part.
3.3.1 Principle #1: The Quality of Teacher
In prior work, the importance of having a high-quality teacher model has been well studied. At the origin of T-S optimization , a more powerful teacher model was used to guide a smaller and thus weaker student model, so that the teacher knowledge is distilled and compressed into the student. This phenomenon persists in a multi-generation T-S optimization in which teacher and student share the same network architecture .
Mathematically, the teacher model determines the second term on the right-hand side of Eqn (3.2), i.e., the KL-divergence between teacher and student. If the teacher is not well optimized and provides noisy supervision, the risk that the two terms conflict with each other becomes high. As we shall see later, this principle is even more important in SD, as the number of iterations allowed for optimizing each student becomes smaller, and the efficiency (or the speed of convergence) impacts the final performance more heavily.
3.3.2 Principle #2: Teacher-Student Difference
In the context of T-S optimization in one generation, one more challenge emerges. In each iteration, the teacher and student are two snapshots from the same training process, so the similarity between them is higher than in multi-generation T-S optimization. This makes the second term on the right-hand side of Eqn (3.2) degenerate and, consequently, its contribution to the gradient that the student receives for updating itself is considerably weakened.
We evaluate the impact of T-S similarity using DenseNet on the CIFAR100 dataset. All models are trained with the cosine annealing learning rate policy for a fixed total number of epochs. Detailed settings are elaborated in Section 4.1. To construct T-S pairs with different similarities, we first perform a complete training process starting from scratch, and denote the final model by $\mathbb{M}$. Then, we take snapshots at several intermediate points of this process, including the very beginning (i.e., from scratch), with a subscript indicating the number of elapsed epochs. We continue training these snapshots with the same configurations (mini-batch size, learning rates, etc.) but different randomization, which affects the sampled mini-batch in each iteration and the data augmentation performed on each training sample. The resulting models are denoted with a superscript $\mathrm{T}$, implying being used as a teacher model, and a number indicating the number of common epochs shared with $\mathbb{M}$. All these teacher models are trained for the same total number of epochs.
Now, we use these teacher models to teach the intermediate snapshots. When a teacher is used to teach a snapshot, their common part is preserved, i.e., the shared epochs use Eqn (1) and the remaining epochs use Eqn (3.1). Results are summarized in Table 2. Note that, from a probabilistic perspective, all the teacher models are identical to each other in classification accuracy, so from the previous part we would expect them to provide the same teaching ability. We start by observing their behavior when the from-scratch snapshot is the student. This case degenerates to a two-generation T-S optimization, and since all teachers are probabilistically identical, we only evaluate one of these pairs, which reports an accuracy higher than the baseline. However, when an intermediate snapshot is the student, the teacher that does not share any epochs with it serves as a better teacher. This offers a larger difference between teacher and student and, consequently, produces better classification performance. When a later snapshot is chosen as the student, this phenomenon persists, i.e., T-S optimization prefers a larger difference between teacher and student.
3.3.3 Principle #3: Secondary Information
The last factor, also the one most studied before, is how knowledge is delivered from teacher to student. There are two arguments, both of which suggest that a smoother teacher signal preserves richer information, but they differ in the way of achieving this goal. The distillation algorithm used a temperature term to smooth both teacher and student scores, while the tolerant teacher algorithm trained a less confident teacher by adding a regularization term in the first generation (a.k.a. the patriarch), a strategy verified to be advantageous over the non-regularized version.
The distillation algorithm divides the teacher signals (i.e., the neural responses before the softmax layer) by a temperature coefficient. In the framework of knowledge distillation, the student signals are also softened before the KL-divergence with the teacher signals is computed. The reason is that a student with a shallow architecture is not capable of perfectly fitting the outputs of a teacher with a deep architecture, and thus matching the soft versions of their outputs is a more rational choice. The aim of knowledge distillation is to match the outputs, forcing the student to predict what the teacher predicts as much as possible. However, our goal is to generate secondary information in T-S optimization, instead of matching. As a result, we do not divide the student signal by the temperature. This strategy also aligns with Eqn (1) used in the very first iterations (i.e., when no teacher signals are provided). In experiments, we observe faster convergence as well as a consistent accuracy gain (see Section 4.1 for detailed numbers). We name this asymmetric distillation.
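A minimal sketch of this asymmetry (plain Python, standard temperature-scaled softmax assumed; function names are illustrative): only the teacher logits are divided by the temperature before the KL-divergence is computed.

```python
import math

def softmax(logits, temperature=1.0):
    # Softmax of logits scaled by 1/temperature (numerically stable).
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def asymmetric_kl(teacher_logits, student_logits, temperature):
    # Asymmetric distillation: only the teacher signal is softened by the
    # temperature; the student distribution is taken at temperature 1.
    p_t = softmax(teacher_logits, temperature)  # softened teacher
    p_s = softmax(student_logits, 1.0)          # un-softened student
    return sum(t * math.log(t / s) for t, s in zip(p_t, p_s))
```

Note that with a temperature above 1, even identical teacher and student logits yield a positive KL term: the student is pulled towards a smoother target rather than simply asked to match the teacher.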
Summarizing the above three principles, we present our solution to improve the performance of SD. We partition the entire training process of $T$ iterations into $K$ mini-generations with $T_1, T_2, \ldots, T_K$ iterations, respectively, where $T_1 + T_2 + \cdots + T_K = T$. The last iteration of each mini-generation serves as the teacher of all iterations in the next mini-generation. That is to say, there are $K - 1$ teachers: the first teacher is the snapshot at $T_1$ iterations, the second one at $T_1 + T_2$ iterations, and the last one at $T_1 + T_2 + \cdots + T_{K-1}$ iterations. We have:

$$c(t) = \sum_{k'=1}^{k-1} T_{k'}, \quad \text{where } k \text{ is the index of the mini-generation containing iteration } t.$$
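The mapping from an iteration index to its teacher snapshot can be sketched as follows (a plain-Python illustration; the function name is hypothetical):

```python
def teacher_index(t, mini_gen_lengths):
    # Return c(t): the iteration index of the snapshot that teaches
    # iteration t (1-based), given mini-generation lengths T_1, ..., T_K.
    # c(t) = 0 means the first mini-generation, i.e., no teacher.
    boundary = 0
    for length in mini_gen_lengths:
        prev_boundary = boundary
        boundary += length
        if t <= boundary:
            return prev_boundary
    raise ValueError("t exceeds the total number of iterations")
```

All iterations within one mini-generation share the same teacher, namely the snapshot taken at the end of the previous mini-generation.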
For iterations in the first mini-generation, we define $c(t) = 0$ for later convenience; in this case $\lambda^{(t)} = 0$, and Eqn (3.2) degenerates to Eqn (1). Following Principle #2, we shall ensure that the iterations right after each teacher have large learning rates, in order to guarantee a sufficient difference between the teacher and student models. Meanwhile, according to Principle #1, the teacher itself should be good, which implies that the iterations before each teacher have small learning rates, making the network converge to an acceptable state. To satisfy both conditions, we require the learning rates within each mini-generation to start from a large value and gradually go down. In practice, we use the cosine annealing strategy, which has been verified to converge better:
$$\alpha^{(t)} = \frac{1}{2}\,\alpha_k^{(0)} \left( 1 + \cos\!\left( \frac{t - c(t)}{T_k}\,\pi \right) \right), \qquad (5)$$

where $k$ is the index of the mini-generation containing iteration $t$, and $\alpha_k^{(0)}$ is the starting learning rate at the beginning of this mini-generation (often set to be large). Finally, we follow Section 3.3.3 and use asymmetric distillation in order to satisfy Principle #3.
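The schedule above can be sketched as a small function (plain Python; the function name and the per-mini-generation base-rate list are illustrative assumptions): within each mini-generation the rate traces half a cosine wave from its base value down towards zero, then resets at the next boundary.

```python
import math

def cosine_annealed_lr(t, mini_gen_lengths, base_lrs):
    # Learning rate at iteration t (1-based): within the k-th
    # mini-generation of length T_k, the rate decays from base_lrs[k]
    # toward zero along half a cosine wave, then resets (cf. Eqn (5)).
    start = 0
    for length, base in zip(mini_gen_lengths, base_lrs):
        if t <= start + length:
            progress = (t - start) / length  # (t - c(t)) / T_k, in (0, 1]
            return 0.5 * base * (1.0 + math.cos(progress * math.pi))
        start += length
    raise ValueError("t exceeds the total number of iterations")
```

The abrupt jump back to the base learning rate at each boundary is exactly what pulls the student away from the teacher snapshot (Principle #2), while the small rates near the end of each mini-generation let the teacher converge well (Principle #1).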
If we set $\lambda^{(t)} \equiv 0$ and switch off asymmetric distillation, the above solution degenerates to snapshot ensemble (SE). In experiments, we compare these two approaches under the same settings, and find that both work well on CIFAR100 (SD reports better results), but on ILSVRC2012, SD achieves higher accuracy over the baseline while SE does not (the SE paper reported a higher accuracy on ResNet50, but it was compared to the baseline with the stepwise learning rate policy, not the cosine annealing policy that should be the direct baseline; the latter baseline is noticeably higher than the former, and also outperforms SE). This is arguably because CIFAR100 is relatively simple, so that the original number of training iterations is over-sufficient for convergence, and thus reducing the number of iterations in each mini-generation does not cause a significant accuracy drop. ILSVRC2012, however, is much more challenging, and thus convergence becomes a major drawback of both SD and SE. SD, with the extra benefit brought by T-S optimization, bridges this gap and outperforms the baseline.
Also, note that the above solution is only one choice. Under the generalized framework (Algorithm 1) and following these three principles, other training strategies can be explored, e.g., using super-convergence  to alleviate the drawback of weaker convergence. These options will be studied in the future.
4.1 The CIFAR100 Dataset
4.1.1 Settings and Baselines
We first evaluate SD on the CIFAR100 dataset, a low-resolution (32×32) dataset containing 60,000 RGB images. These images are split into a training set of 50,000 images and a testing set of 10,000 images, and in both of them, images are uniformly distributed over all 100 classes (20 superclasses, each of which contains 5 fine-level classes). We do not perform experiments on the CIFAR10 dataset because it does not contain fine-level visual concepts, and thus the benefit brought by T-S optimization is not significant (as also observed and analyzed in prior work).
We investigate two groups of baseline models. The first group contains standard deep ResNets for CIFAR. Let the total number of layers be $6n + 2$; then $n$ is the number of residual blocks in each stage. Given a 32×32 input image, a 3×3 convolution is first performed without changing the spatial resolution. Three stages follow, each of which has $n$ residual blocks (two 3×3 convolutions summed with an identity connection). Batch normalization and ReLU activation are applied after each convolutional layer. The spatial resolution changes across the three stages (32×32, 16×16 and 8×8), as does the number of channels (16, 32 and 64). An average pooling layer is inserted after each of the first two stages. The network ends with global average-pooling followed by a fully-connected layer with 100 outputs. The second group has two DenseNets, with 100 and 190 layers, respectively. These networks share a similar overall architecture with the ResNets, but the building blocks in each stage are densely connected, with the output of each block concatenated to the accumulated feature vector and fed into the next block. The growth rates are 12 for DenseNet100 and 40 for DenseNet190.
Following conventions, we train all these networks from scratch. We use standard Stochastic Gradient Descent (SGD) with weight decay and Nesterov momentum; the numbers of epochs, mini-batch sizes and base learning rates are configured separately for ResNets and DenseNets. The cosine annealing learning rate policy is used, in order to make a fair comparison between the baseline and SD. In the training process, standard data augmentation is used, i.e., each image is symmetrically padded with a 4-pixel margin on each of the four sides; in the enlarged image, a 32×32 subregion is randomly cropped and flipped with a probability of 0.5. We do not use any data augmentation in the testing stage.
To apply SD, we evenly partition the entire training process into mini-generations of equal length. The same learning rate is used at the beginning of each mini-generation, and decayed following Eqn (5). We use the asymmetric distillation strategy (Section 3.3.3), with the temperature term configured per backbone. In Eqn (3.2), we set $\eta^{(t)}$ and $\lambda^{(t)}$ to approximately balance the two sources of gradients in their magnitudes.
4.1.2 Quantitative Results and Analysis
Results are summarized in Table 3. Towards a fair comparison, for different instances of the same backbone, network weights are initialized in the same way, although randomness during the training process (e.g., data shuffling and augmentation) is not unified. In addition, the first mini-generation (in which no T-S optimization is involved) is shared between SE (snapshot ensemble) and SD.
We first observe that SD brings consistent accuracy gains for all models, regardless of network backbone, surpassing both the baseline and SE. On DenseNet190, the most powerful baseline, SD achieves an error rate at the best epoch that is competitive with the state-of-the-art (all of which also reported the best epoch). Moreover, in terms of ensembling the models from all mini-generations, SD provides numbers comparable to SE, although we emphasize that SD focuses on optimizing a single model while SE, with weaker single models, requires an ensemble to improve classification accuracy. Another explanation comes from the optimization policy of SD: by introducing a teacher signal to optimize each student, different snapshots in SD tend to share a higher similarity than in SE, and this is the reason that SD reports a smaller accuracy gain from a single model to the model ensemble.
Another important topic to discuss is how asymmetric distillation impacts T-S optimization, for which we present several pieces of evidence. With a larger temperature term, the student tends to become smoother, i.e., the entropy of the class distribution is larger. However, as shown in prior work, T-S optimization achieves satisfying performance by finding a balancing point between certainty and uncertainty, so, as the latter gradually increases, we can observe a peak in classification accuracy. On DenseNet190, this peak appears during the third mini-generation, which achieves the lowest error rate, while the final error rate goes up afterward. A similar phenomenon also appears on DenseNet100, which likewise achieves its lowest error in the third mini-generation, and on the ResNets. This reveals that the optimal temperature term is closely related to the network backbone. For a deeper backbone (e.g., DenseNet190), which itself has a strong ability to fit data, we use a smaller temperature to introduce less soft labels, decreasing the ambiguity.
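The claim that a larger temperature yields a smoother, higher-entropy distribution can be checked numerically with a small sketch (plain Python, standard temperature-scaled softmax assumed; the function name is illustrative):

```python
import math

def entropy_at_temperature(logits, temperature):
    # Entropy of softmax(logits / temperature): a larger temperature
    # flattens the distribution and therefore raises its entropy.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(p * math.log(p) for p in probs)
```

As the temperature grows, the distribution approaches uniform and the entropy approaches $\ln C$ for $C$ classes, which is why an overly large temperature destroys the class-level preference that the teacher is supposed to convey.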
4.2 The ILSVRC2012 Dataset
4.2.1 Settings and Baselines
We now investigate a much more challenging dataset, ILSVRC2012, a popular subset of the ImageNet database. It contains about 1.3 million training images and 50,000 testing images, all of which are high-resolution, covering 1,000 object classes in total. The distribution over classes is approximately uniform in the training set and strictly uniform in the testing set.
We use deep ResNets with 101 and 152 layers. They share the same overall design as the ResNets used for CIFAR100, but each residual block contains a so-called bottleneck structure which, in order to accelerate computation, compresses the number of channels by a factor of 4 and later recovers the original number. Each input image has a size of 224×224. After the first convolutional layer with a stride of 2 and a max-pooling layer, four main stages follow with different numbers of blocks (ResNet101: 3, 4, 23 and 3; ResNet152: 3, 8, 36 and 3). The spatial resolutions in these four stages are 56×56, 28×28, 14×14 and 7×7, and the numbers of channels are 256, 512, 1024 and 2048, respectively. The network ends with global average-pooling followed by a fully-connected layer with 1,000 outputs.
We follow conventions to configure the training parameters. Standard Stochastic Gradient Descent (SGD) with weight decay and Nesterov momentum is used, and the mini-batch size is fixed throughout training. We still use the cosine annealing learning rate policy. A series of data augmentation techniques is applied during training to alleviate over-fitting, including rescaling and cropping the image, randomly mirroring and (slightly) rotating it, changing its aspect ratio and performing pixel jittering. In the testing stage, the standard single-center-crop is used.
To apply SD, we partition the training process into two equal sections, i.e., two mini-generations. The reason for using fewer mini-generations (compared to the CIFAR experiments) is that on ILSVRC2012, with high-resolution images and more complex semantics, it is much more difficult to guarantee convergence with fewer iterations within each mini-generation. The temperature term is fixed. Other settings are the same as in the CIFAR experiments.
4.2.2 Quantitative Results
Experimental results are summarized in Table 4. To make a fair comparison, the first mini-generation of SE and SD is shared, so the only difference lies in whether the teacher signal is provided in the second mini-generation. We can see that the performance of SE is consistently worse than that of both BL and SD. Even if the two mini-generations of SE are fused, the error rates on ResNet101 and ResNet152 remain slightly higher than BL. This reveals that reducing the number of training epochs harms the ability to learn from a challenging dataset such as ILSVRC2012. Our approach, SD, applies a teacher signal as a remedy, so that the training process becomes more efficient, especially under a limited number of iterations.
In addition, SD achieves consistent accuracy gains over the baseline in terms of both top-1 and top-5 error rates on both ResNet101 and ResNet152. These improvements may seem small, but we emphasize that (i) to the best of our knowledge, this is the first time that a model achieves higher accuracy on ILSVRC2012 with T-S optimization within one generation; and (ii) these accuracy gains transfer well to other visual recognition tasks, as shown in the next subsection.
We plot the training curves of both the baseline and SD for ResNet152. We can see that, in the second mini-generation, SD achieves a higher training error but a lower testing error, i.e., the gap between training and testing accuracies becomes smaller, which aligns with our motivation that T-S optimization alleviates over-fitting.
4.3 Transfer Experiments
Last but not least, we fine-tune the models pre-trained on ILSVRC2012 for the object detection and semantic segmentation tasks on the PascalVOC dataset, a widely used benchmark in computer vision. The most powerful models, i.e., the baseline and SD versions of ResNet152, are transferred using a standard approach, which preserves the network backbone (all layers before the final pooling layer) and introduces a network head known as Faster R-CNN for object detection, and DeepLab-v3 for semantic segmentation.
|Backbone|mAP @ 2007|mIoU @ 2012|
These models are fine-tuned in an end-to-end manner. For object detection on PascalVOC 2007, training images are fed into the network with a fixed mini-batch size; we start from an initial learning rate and divide it by a constant factor partway through training. For semantic segmentation on PascalVOC 2012, we use the “poly” learning rate policy. Results in terms of mAP and mIoU are summarized in Table 5. One can see that the model with a higher accuracy on ILSVRC2012 also works better in both tasks, i.e., the benefit brought by SD is preserved after fine-tuning. Also, we emphasize that SD, providing the same network architecture but a stronger model, does not require any additional cost in transfer learning, which suggests its potential applications in a wide range of vision problems.
In this paper, we present a framework named snapshot distillation (SD), which finishes teacher-student (T-S) optimization within one generation; to the best of our knowledge, this goal was never achieved before. The key contribution is to take teacher signals from previous iterations of the same training process, and we discuss three principles that impact the performance of SD. The final solution is easy to implement yet efficient to carry out. With only a small amount of extra training time, SD consistently boosts the classification accuracy of several baseline models on CIFAR100 and ILSVRC2012, and the performance gain persists after the trained model is fine-tuned on other vision tasks, e.g., object detection and semantic segmentation.
Our research reduces the basic unit of T-S optimization from a complete generation to a mini-generation which is composed of a number of iterations. The essential difficulty that prevents us from further partitioning this unit is the requirement of T-S difference. We believe there exists, though not yet found, a way of eliminating this constraint so that the basic unit can be even smaller, e.g., one single iteration. If this is achieved, we may directly integrate supervision from the previous iteration into the current iteration, obtaining a new loss function in which the teacher signal appears as a term of higher-order gradients. We leave this topic for future research.
Acknowledgements This paper is supported by NSF award CCF-1317376 and ONR award N00014-15-1-2356. We thank Siyuan Qiao, Huiyu Wang, and Chenxi Liu for providing insight and expertise that improved this research.
-  Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. Label-embedding for image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(7):1425–1438, 2016.
-  H. Bagherinezhad, M. Horton, M. Rastegari, and A. Farhadi. Label refinery: Improving imagenet classification through label progression. arXiv preprint arXiv:1805.02641, 2018.
-  L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. In International Conference on Learning Representations, 2016.
-  L. C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
-  T. Chen, I. Goodfellow, and J. Shlens. Net2net: Accelerating learning via knowledge transfer. In International Conference on Learning Representations, 2016.
-  J. Deng, A. C. Berg, K. Li, and L. Fei-Fei. What does classifying more than 10,000 image categories tell us? In European Conference on Computer Vision, 2010.
-  J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009.
-  J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning, 2014.
-  X. Dong, G. Kang, K. Zhan, and Y. Yang. Eraserelu: A simple way to ease the training of deep convolution neural networks. arXiv preprint arXiv:1709.07634, 2017.
-  M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
-  T. Furlanello, Z. C. Lipton, L. Itti, and A. Anandkumar. Born again neural networks. In International Conference on Machine Learning, 2018.
-  X. Gastaldi. Shake-shake regularization. arXiv preprint arXiv:1705.07485, 2017.
-  R. Girshick. Fast r-cnn. In Computer Vision and Pattern Recognition, 2015.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition, 2014.
-  C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning, 2017.
-  D. Han, J. Kim, and J. Kim. Deep pyramidal residual networks. In Computer Vision and Pattern Recognition, 2017.
-  B. Hariharan, P. Arbeláez, L. Bourdev, S. Maji, and J. Malik. Semantic contours from inverse detectors. In International Conference on Computer Vision, 2011.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition, 2016.
-  G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
-  J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In Computer Vision and Pattern Recognition, 2018.
-  G. Huang, Y. Li, G. Pleiss, Z. Liu, J. E. Hopcroft, and K. Q. Weinberger. Snapshot ensembles: Train 1, get m for free. In International Conference on Learning Representations, 2017.
-  G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. In Computer Vision and Pattern Recognition, 2017.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 2015.
-  A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
-  Y. LeCun, Y. Bengio, and G. E. Hinton. Deep learning. Nature, 521(7553):436, 2015.
-  C. Liu, B. Zoph, J. Shlens, W. Hua, L. J. Li, L. Fei-Fei, A. L. Yuille, J. Huang, and K. Murphy. Progressive neural architecture search. In European Conference on Computer Vision, 2018.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Computer Vision and Pattern Recognition, 2015.
-  I. Loshchilov and F. Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
-  V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In International Conference on Machine Learning, 2010.
-  G. Pereyra, G. Tucker, J. Chorowski, Ł. Kaiser, and G. Hinton. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017.
-  F. Perronnin, J. Sanchez, and T. Mensink. Improving the fisher kernel for large-scale image classification. In European conference on computer vision, 2010.
-  A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. Cnn features off-the-shelf: an astounding baseline for recognition. In Computer Vision and Pattern Recognition, 2014.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, 2015.
-  A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. In International Conference on Learning Representations, 2015.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
-  L. N. Smith and N. Topin. Super-convergence: Very fast training of residual networks using large learning rates. arXiv preprint arXiv:1708.07120, 2017.
-  N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, et al. Going deeper with convolutions. In Computer Vision and Pattern Recognition, 2015.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Computer Vision and Pattern Recognition, 2016.
-  A. Tarvainen and H. Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems, 2017.
-  N. Verma, D. Mahajan, S. Sellamanickam, and V. Nair. Learning hierarchical similarity metrics. In Computer Vision and Pattern Recognition, 2012.
-  J. Wang, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, Y. Wu, et al. Learning fine-grained image similarity with deep ranking. In Computer Vision and Pattern Recognition, 2014.
-  C. Wu, M. Tygert, and Y. LeCun. Hierarchical loss for classification. arXiv preprint arXiv:1709.01062, 2017.
-  L. Xie and A. Yuille. Genetic cnn. In International Conference on Computer Vision, 2017.
-  S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In Computer Vision and Pattern Recognition, 2017.
-  S. Xie and Z. Tu. Holistically-nested edge detection. In International Conference on Computer Vision, 2015.
-  C. Yang, L. Xie, S. Qiao, and A. L. Yuille. Knowledge distillation in generations: More tolerant teachers educate better students. arXiv preprint arXiv:1805.05551, 2018.
-  J. Yim, D. Joo, J. Bae, and J. Kim. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In Computer Vision and Pattern Recognition, 2017.
-  S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
-  C. Zhang, J. Cheng, and Q. Tian. Image-level classification by hierarchical structure learning with visual and semantic similarities. Information Sciences, 422:271–281, 2018.
-  H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
-  T. Zhang, G. J. Qi, B. Xiao, and J. Wang. Interleaved group convolutions. In Computer Vision and Pattern Recognition, 2017.
-  Y. Zhang, T. Xiang, T. M. Hospedales, and H. Lu. Deep mutual learning. arXiv preprint arXiv:1706.00384, 2017.
-  Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang. Random erasing data augmentation. arXiv preprint arXiv:1708.04896, 2017.
-  B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2017.