Isotonic Data Augmentation for Knowledge Distillation

07/03/2021 · by Wanyun Cui, et al.

Knowledge distillation uses both real hard labels and soft labels predicted by teacher models as supervision. Intuitively, we expect the soft labels and hard labels to be concordant w.r.t. their orders of probabilities. However, we found critical order violations between hard labels and soft labels in augmented samples. For example, for an augmented sample x=0.7*panda+0.3*cat, we expect the order of meaningful soft labels to be P_soft(panda|x)>P_soft(cat|x)>P_soft(other|x). But real soft labels usually violate the order, e.g. P_soft(tiger|x)>P_soft(panda|x)>P_soft(cat|x). We attribute this to the unsatisfactory generalization ability of the teacher, which leads to prediction errors on augmented samples. Empirically, we found that the violations are common and impair knowledge transfer. In this paper, we introduce order restrictions to data augmentation for knowledge distillation, which we denote as isotonic data augmentation (IDA). We use isotonic regression (IR) – a classic technique from statistics – to eliminate the order violations. We show that IDA can be modeled as a tree-structured IR problem. We thereby adapt the classical IRT-BIN algorithm to obtain optimal solutions with O(c log c) time complexity, where c is the number of labels. To further reduce the time complexity, we also propose a GPU-friendly approximation with linear time complexity. We have verified on various datasets and data augmentation techniques that our proposed IDA algorithms effectively increase the accuracy of knowledge distillation by eliminating the rank violations.


1 Introduction

Data augmentation, as a widely used technology, is also beneficial to knowledge distillation [7]. For example, [26] use data augmentation to improve the generalization ability of knowledge distillation. [25] use Mixup [30], a widely applied data augmentation technique, to improve the efficiency of knowledge distillation. In this paper, we focus on mixture-based data augmentation (e.g., Mixup and CutMix [29]), arguably one of the most widely used types of augmentation techniques.

Intuitively, we expect the order concordance between soft labels and hard labels. In Fig. 2, for an augmented sample x = 0.7*panda + 0.3*cat, the hard label distribution is P_hard(panda|x) = 0.7, P_hard(cat|x) = 0.3, and P_hard(other|x) = 0. Then we expect the soft labels to be monotonic w.r.t. the hard labels: P_soft(panda|x) > P_soft(cat|x) > P_soft(other|x).

(a) The Kendall's τ coefficient between the soft label distribution and the hard label distribution. A larger τ means higher ordinal association.
(b) The ratio of augmented samples in which at least one original label is in the top-2 soft labels.
Figure 1: Both 1(a) and 1(b) reveal that the orders of soft labels and hard labels are highly concordant for the original samples, but for augmented samples the order concordance is seriously broken. This motivates us to introduce order restrictions in data augmentation for knowledge distillation.
Figure 2: Using isotonic regression to introduce order restrictions to soft labels.

However, we found critical order violations between hard labels and soft labels in real datasets and teacher models. To verify this, we plot the Kendall's τ coefficient [12] between the soft labels and the hard labels of different teacher models and different data augmentation techniques on CIFAR-100 in Fig. 1(a). We only count label pairs whose orders are known. In other words, we ignore the orders between two "other" labels, since we do not know them. A clear phenomenon is that, although the hard labels and soft labels are almost completely concordant for original samples, they are likely to be discordant for augmented samples. What is more surprising is that, in Fig. 1(b), we find a proportion of augmented samples in which none of the original labels are in the top-2 soft labels. We attribute this to the insufficient generalization ability of the teacher, which leads to prediction errors on the augmented samples. We will show in Sec 5.3 that the order violations impair knowledge distillation. As far as we know, the order violations between hard labels and soft labels have not been studied before.
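To make this measurement concrete, the following NumPy sketch shows one way Kendall's τ could be computed while skipping label pairs whose hard-label order is unknown (both "other"); the function name and toy example are ours, not taken from the paper's code.

```python
import numpy as np

def kendall_tau_known_pairs(hard: np.ndarray, soft: np.ndarray) -> float:
    """Kendall's tau between hard and soft labels, counting only label pairs
    whose hard-label order is known (the two hard probabilities differ)."""
    c = len(hard)
    concordant, discordant, total = 0, 0, 0
    for i in range(c):
        for j in range(i + 1, c):
            if hard[i] == hard[j]:            # both "other" labels: order unknown, skip
                continue
            total += 1
            sign_hard = np.sign(hard[i] - hard[j])
            sign_soft = np.sign(soft[i] - soft[j])
            if sign_hard * sign_soft > 0:
                concordant += 1
            elif sign_hard * sign_soft < 0:
                discordant += 1
    return (concordant - discordant) / max(total, 1)

# Toy example in the spirit of the paper: labels = [panda, cat, tiger, other]
hard = np.array([0.7, 0.3, 0.0, 0.0])
soft = np.array([0.25, 0.20, 0.40, 0.15])     # teacher ranks tiger highest: a violation
print(kendall_tau_known_pairs(hard, soft))    # 0.2, far from perfect concordance (1.0)
```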

A natural direction to tackle the problem is to reduce the order violations in soft labels. To this end, we leverage isotonic regression (IR) – a classic technique from statistics – to introduce order restrictions into the soft labels. IR minimizes the distance from given nodes to a set defined by some order constraints. In Fig. 2, by applying order restrictions to soft labels via IR, we obtain concordant soft labels while keeping the original soft label information as much as possible. IR has numerous important applications in statistics [3], operations research [17], and signal processing [1]. To our knowledge, we are the first to introduce IR into knowledge distillation.

Some other studies also noticed the errors of soft labels in knowledge distillation and worked on mitigating them [27, 9, 24]. However, none of them revealed the order violations of soft labels.

2 Related Work

Knowledge Distillation with Erroneous Soft Labels. In recent years, knowledge distillation [11], as a model compression and knowledge transfer technology, has received extensive research interest. Since the teacher model is non-optimal, how to deal with the errors of soft labels has become an important issue. Traditional methods often solve this problem by optimizing the teacher model or the student model.

For teacher optimization, [6] find that a larger network is not necessarily a better teacher, because student models may not be able to imitate a large network. They proposed that early stopping should be used for the teacher, so that large networks behave more like small networks [15], which are easier to imitate. An important idea for teacher model optimization is "strictness" [28], which refers to tolerating the teacher's probability distribution outside of the hard labels.

Optimization on the student side mainly modifies the loss function of distillation.

[27] proposed to assign different temperatures to different samples based on their deceptiveness to teacher models. [9] proposed that the label correlation represented by the student should be consistent with the teacher model; they use residual labels to add this goal to the loss function.

However, none of these studies reveal the phenomenon of rank violations in data-augmented knowledge distillation.

Data Mixing is a typical data augmentation method. Mixup [30] first proposed randomly combining a pair of samples via a weighted sum of their data and labels. More recent variants include CutMix [29] and RICAP [23]. The main difference among the mixing methods is how they mix the data.

The difference between our isotonic data augmentation and conventional data augmentation is that we focus on relieving the error transfer of soft labels in knowledge distillation by introducing order restrictions. Therefore, we pay attention to the order restrictions of the soft labels, instead of directly using the mixed data as data augmentation. We verify in the experiments that our isotonic data augmentation is more effective for knowledge distillation than directly using the mixed data.

3 Data Augmentation for Knowledge Distillation

3.1 Standard Knowledge Distillation

In this paper, we consider knowledge distillation for multi-class classification. We define the teacher model as $f_t: \mathcal{X} \to \mathbb{R}^c$, where $\mathcal{X}$ is the feature space and $\mathcal{Y} = \{1, \dots, c\}$ is the label space. We denote the student model as $f_s: \mathcal{X} \to \mathbb{R}^c$. The final classification probabilities of the two models are computed by $\sigma(f_t(x))$ and $\sigma(f_s(x))$, respectively, where $\sigma$ denotes the softmax function. We denote the training dataset as $\mathcal{D} = \{(x, y)\}$, where $y$ is one-hot encoded. We denote the score of the $i$-th label for $x$ as $f_t(x)_i$ (resp. $f_s(x)_i$).

The distillation has two objectives: a hard loss and a soft loss. The hard loss encourages the student model to predict the supervised hard label $y$. The soft loss encourages the student model to behave similarly to the teacher model. We use cross entropy (CE) to measure both similarities:

$\mathcal{L}_{hard} = \mathrm{CE}\big(y, \sigma(f_s(x))\big), \qquad \mathcal{L}_{soft} = \mathrm{CE}\big(\sigma(f_t(x)/T),\, \sigma(f_s(x)/T)\big)$    (1)

where $T$ is a hyper-parameter denoting the temperature of the distillation.

The overall loss of the knowledge distillation is the sum of the hard loss and soft loss:

$\mathcal{L}_{KD}(x, y) = \mathcal{L}_{hard} + \alpha\, \mathcal{L}_{soft}$    (2)

where $\alpha$ is a hyper-parameter.
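As a reference, here is a minimal PyTorch sketch of the loss in Eqs. (1)-(2) under the reconstructed notation; the temperature T and weight alpha are placeholders, and some implementations additionally scale the soft term by T^2, which we omit.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, hard_labels, T=4.0, alpha=1.0):
    """Standard KD loss: hard cross-entropy plus temperature-softened
    cross-entropy against the teacher's distribution (Eqs. (1)-(2))."""
    hard = F.cross_entropy(student_logits, hard_labels)          # L_hard
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_q_student = F.log_softmax(student_logits / T, dim=-1)
    soft = -(p_teacher * log_q_student).sum(dim=-1).mean()       # L_soft (cross entropy)
    return hard + alpha * soft                                   # Eq. (2)
```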

3.2 Knowledge Distillation with Augmented Samples

In this subsection, we first formulate data augmentation for knowledge distillation. We train the student model against augmented samples instead of the original samples from $\mathcal{D}$. This method is considered as a baseline without introducing the order restrictions. We then formulate the data augmentation techniques used in this paper.

Data Augmentation-based Knowledge Distillation. In this paper, we consider two classic augmentations (i.e., CutMix [29] and Mixup [30]). Our work can be easily extended to other mixture-based data augmentation operations (e.g., FMix [10], Mosaic [5]). As in Mixup and CutMix, we combine two original samples to form a new augmented sample. For two original samples $(x_a, y_a)$ and $(x_b, y_b)$, data augmentation generates a new sample $(\tilde{x}, \tilde{y})$. The knowledge distillation based on augmented samples has the same loss function as in Eq. (2):

$\mathcal{L} = \mathbb{E}_{(\tilde{x}, \tilde{y})}\big[\mathcal{L}_{KD}(\tilde{x}, \tilde{y})\big]$    (3)

where the augmented sample $(\tilde{x}, \tilde{y})$ is obtained by first randomly selecting a pair of original samples from $\mathcal{D}$ and then mixing them.

We formulate the process of augmenting samples as:

$\tilde{x} = \phi(x_a, x_b), \qquad \tilde{y} = \lambda y_a + (1 - \lambda) y_b$    (4)

where $\phi$ denotes the specific data augmentation technique, and $\tilde{y}$ is distributed over the two original labels (e.g., $\tilde{y} = 0.7 \cdot \text{panda} + 0.3 \cdot \text{cat}$). We formulate different data augmentation techniques below.

CutMix augments samples by cutting and pasting patches between a pair of original images. For $x_a$ and $x_b$, CutMix samples a patch $B$ shared by both of them. Then CutMix removes the region $B$ in $x_a$ and fills it with the patch cropped from region $B$ of $x_b$. We formulate CutMix as:

$\tilde{x} = M \odot x_a + (\mathbf{1} - M) \odot x_b, \qquad \tilde{y} = \lambda y_a + (1 - \lambda) y_b$    (5)

where the binary mask $M \in \{0, 1\}^{W \times H}$ indicates whether each coordinate is inside ($0$) or outside ($1$) the patch $B$. We follow the settings in [29] to uniformly sample the patch center $(r_x, r_y)$ and keep the aspect ratio of $B$ proportional to the original image:

$r_w = W\sqrt{1 - \lambda}, \qquad r_h = H\sqrt{1 - \lambda}, \qquad r_x \sim \mathrm{Unif}(0, W), \qquad r_y \sim \mathrm{Unif}(0, H)$    (6)

Mixup augments a pair of samples by a weighted sum of their input features:

$\tilde{x} = \lambda x_a + (1 - \lambda) x_b, \qquad \tilde{y} = \lambda y_a + (1 - \lambda) y_b$    (7)

where $\lambda \sim \mathrm{Beta}(\beta, \beta)$ for $\beta \in (0, \infty)$, following [30].
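For concreteness, the following sketch implements the two mixing operations of Eqs. (5)-(7) for a single pair of images with one-hot labels; the Beta parameter and the tensor layout (C, H, W) are common defaults rather than settings taken from the paper.

```python
import numpy as np
import torch

def mixup(xa, xb, ya, yb, beta=1.0):
    """Mixup (Eq. (7)): convex combination of inputs and one-hot labels."""
    lam = float(np.random.beta(beta, beta))
    return lam * xa + (1 - lam) * xb, lam * ya + (1 - lam) * yb

def cutmix(xa, xb, ya, yb, beta=1.0):
    """CutMix (Eqs. (5)-(6)): paste a patch of xb into xa; the label weight
    is recomputed from the actual covered area."""
    lam = float(np.random.beta(beta, beta))
    _, H, W = xa.shape
    rw, rh = int(W * np.sqrt(1 - lam)), int(H * np.sqrt(1 - lam))
    rx, ry = np.random.randint(W), np.random.randint(H)
    x1, x2 = np.clip(rx - rw // 2, 0, W), np.clip(rx + rw // 2, 0, W)
    y1, y2 = np.clip(ry - rh // 2, 0, H), np.clip(ry + rh // 2, 0, H)
    x = xa.clone()
    x[:, y1:y2, x1:x2] = xb[:, y1:y2, x1:x2]
    lam_adj = 1 - (x2 - x1) * (y2 - y1) / (W * H)   # weight of xa from uncovered area
    return x, lam_adj * ya + (1 - lam_adj) * yb
```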

4 Isotonic Data Augmentation

In this section, we introduce the order restrictions in data augmentation for knowledge distillation, which we denote as isotonic data augmentation. In Sec 4.1, we analyze the partial order restrictions of the soft labels. We propose the new objective of knowledge distillation subject to the partial order restrictions in Sec 4.2; since the partial order forms a special directed tree, we propose a more efficient Adapted IRT algorithm based on IRT-BIN [19] to calibrate the original soft labels. In Sec 4.3, we directly impose the partial order restrictions on the student model and propose a more efficient approximation objective based on penalty methods.

4.1 Analysis of the Partial Order Restrictions

We hope that the soft label distribution produced by isotonic data augmentation and the hard label distribution have no order violations. We perform isotonic regression on the original soft labels to obtain new soft labels that satisfy the order restrictions. We denote the new soft labels as the order-restricted soft labels $\hat{p}$. For simplicity, we will use $p$ to denote the original soft labels $\sigma(f_t(\tilde{x})/T)$. We use $p_i$ (resp. $\hat{p}_i$) to denote the score of the $i$-th label.

To elaborate how we compute $\hat{p}$, without loss of generality, we assume the indices of the two original labels of the augmented sample are 1 and 2, respectively, with $\tilde{y}_1 \geq \tilde{y}_2$. So $\tilde{y}$ is monotonically decreasing over the label indices, i.e., $\tilde{y}_1 \geq \tilde{y}_2 > \tilde{y}_3 = \dots = \tilde{y}_c = 0$.

We consider two types of order restrictions: (1) the order between the two original labels (i.e., $\hat{p}_1 \geq \hat{p}_2$); (2) the order between an original label and a non-original label (i.e., $\hat{p}_2 \geq \hat{p}_i$ for $i > 2$). For example, in Fig. 2, we expect the probability of panda to be greater than that of cat, and the probability of cat to be greater than those of the other labels. We do not consider the order between two non-original labels.

We use a directed graph $G = (V, E)$ to denote the partial order restrictions, where each vertex $v_i$ represents $\hat{p}_i$, and an edge $(v_i, v_j)$ represents the restriction $\hat{p}_i \geq \hat{p}_j$. $E$ is formulated in Eq. (8). We visualize the partial order restrictions in Fig. 3.

$E = \{(v_1, v_2)\} \cup \{(v_2, v_i) \mid 2 < i \leq c\}$    (8)
Figure 3: The partial order restrictions form a directed tree.
Lemma 1. $G$ is a directed tree.
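A tiny sketch of the edge set in Eq. (8), using 1-based label indices where labels 1 and 2 are the two original labels (our notation):

```python
def partial_order_edges(c):
    """Directed tree G of Eq. (8): an edge (i, j) encodes the restriction
    p_hat_i >= p_hat_j. One edge v1 -> v2, plus a star v2 -> vi for i > 2."""
    return [(1, 2)] + [(2, i) for i in range(3, c + 1)]

print(partial_order_edges(5))  # [(1, 2), (2, 3), (2, 4), (2, 5)]
```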

4.2 Knowledge Distillation via Order Restricted Soft Labels

For an augmented sample $\tilde{x}$, we first use the teacher model to predict its soft labels $p$. Then, we calibrate the soft labels to meet the order restrictions. We use the order-restricted soft label distribution to supervise the knowledge distillation. We formulate this process below.

Objective with Order-Restricted Soft Labels. Given the hard label distribution $\tilde{y}$ and the soft label distribution $p$ of an augmented sample $\tilde{x}$, the objective of knowledge distillation with isotonic data augmentation is:

$\mathcal{L}_i = \mathrm{CE}\big(\tilde{y}, \sigma(f_s(\tilde{x}))\big) + \alpha\, \mathrm{CE}\big(\hat{p}^*, \sigma(f_s(\tilde{x})/T)\big)$    (9)

where $\hat{p}^*$ denotes the optimal calibrated soft label distribution.

To compute $\hat{p}^*$, we calibrate the original soft labels $p$ to meet the order restrictions. There are multiple choices of $\hat{p}$ that meet the restrictions. Besides the order restrictions, we also hope that the distance between the original soft label distribution and the calibrated label distribution is minimized. Intuitively, the original soft labels contain the knowledge of the teacher model, so we want this knowledge to be retained as much as possible. We formulate the calibration below.

We compute $\hat{p}^*$, which satisfies the order restrictions while preserving the most knowledge, by minimizing the mean square error to the original soft labels:

$\hat{p}^* = \operatorname*{arg\,min}_{\hat{p}} \sum_{i=1}^{c} (\hat{p}_i - p_i)^2$    (10a)
s.t. $\hat{p}_i \geq \hat{p}_j$ for every edge $(v_i, v_j) \in E$    (10b)

Eq. (10b) denotes the order restrictions. Eq. (10a) denotes the objective of preserving the most original information. The computation of $\hat{p}^*$ can be reduced to classical isotonic regression in statistics.
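Since Eq. (10) is a small quadratic program, it can also be solved directly with an off-the-shelf constrained optimizer; the sketch below is only a brute-force reference (useful for checking a faster implementation), not the algorithm proposed in this paper.

```python
import numpy as np
from scipy.optimize import minimize

def isotonic_soft_labels_qp(p, edges):
    """Solve Eq. (10): min ||p_hat - p||^2 s.t. p_hat[i] >= p_hat[j] for (i, j) in edges.
    Edges are 0-based here. Reference solver, not the O(c log c) algorithm."""
    cons = [{"type": "ineq", "fun": (lambda q, i=i, j=j: q[i] - q[j])} for i, j in edges]
    res = minimize(lambda q: np.sum((q - p) ** 2), x0=p.copy(), constraints=cons)
    return res.x

p = np.array([0.25, 0.20, 0.40, 0.15])   # teacher's soft labels (tiger ranked too high)
edges = [(0, 1), (1, 2), (1, 3)]         # panda >= cat, cat >= tiger, cat >= other
print(isotonic_soft_labels_qp(p, edges)) # roughly [0.283, 0.283, 0.283, 0.15]
```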

Here we analyze the difference between isotonic data augmentation and probability calibration in machine learning [18]. Isotonic regression is also applied in probability calibration. While both the proposed isotonic data augmentation and probability calibration try to rectify the erroneous probabilities predicted by models, our proposed isotonic data augmentation only happens in the training phase, when the ground-truth distribution (i.e., the hard labels) is known. We use the isotonic soft labels as extra supervision for model training. In contrast, probability calibration needs to learn an isotonic function and uses it to predict the probabilities of unlabeled samples.

Algorithm. To optimize Eq. (9), we need to compute $\hat{p}^*$ first. According to Lemma 1, finding the optimal $\hat{p}^*$ can be reduced to the tree-structured IR problem, which can be solved by IRT-BIN [19] with a binomial heap in $O(c \log c)$ time. We notice that the tree structure in our problem is special: a star (node $v_2$ with leaves $v_3, \dots, v_c$) plus an extra edge $(v_1, v_2)$. We therefore give a more efficient implementation than IRT-BIN, with only one sort, in Algorithm 1.

Data: original soft labels $p_1, \dots, p_c$; the partial order $G$ from Eq. (8)

1:  Initialize $B_i \leftarrow \{v_i\}$ and $\mathrm{val}(B_i) \leftarrow p_i$ for $i = 1, \dots, c$
2:  Sort $p_i$ for $i = 3, \dots, c$ in descending order; denote the sorted leaf indices by $(3), \dots, (c)$
3:  $k \leftarrow 3$
4:  while $k \leq c$ AND $\mathrm{val}(B_{(k)}) > \mathrm{val}(B_2)$ do
5:     $B_2 \leftarrow B_2 \cup B_{(k)}$
6:     $\mathrm{val}(B_2) \leftarrow$ average of $p$ over the members of $B_2$
7:     $k \leftarrow k + 1$
8:  end while
9:  if $\mathrm{val}(B_1) < \mathrm{val}(B_2)$ then
10:     $B_1 \leftarrow B_1 \cup B_2$
11:     $\mathrm{val}(B_1) \leftarrow$ average of $p$ over the members of $B_1$
12:     while $k \leq c$ AND $\mathrm{val}(B_{(k)}) > \mathrm{val}(B_1)$ do
13:         $B_1 \leftarrow B_1 \cup B_{(k)}$
14:         $\mathrm{val}(B_1) \leftarrow$ average of $p$ over the members of $B_1$
15:         $k \leftarrow k + 1$
16:     end while
17:  end if
18:  Recover $\hat{p}^*$ by setting $\hat{p}^*_i \leftarrow \mathrm{val}(B_j)$ for the block $B_j$ containing $v_i$
19:  Return $\hat{p}^*$
Algorithm 1 Adapted IRT.

The core idea of the algorithm is to iteratively reduce the number of violations by merging node blocks until no order violation exists. Specifically, we divide the nodes into several blocks, and use $B_i$ to denote the block containing node $v_i$. At initialization, each $B_i$ contains only node $v_i$ itself. Since all nodes except $v_1$ and $v_2$ are leaf nodes with the common parent $v_2$, we first consider the violations between $v_2$ and the leaves (lines 4-7). Note that the leaf nodes are sorted according to their soft probabilities $p_i$. We enumerate the sorted leaves and iteratively determine whether there is a violation between block $B_2$ and the current leaf $v_{(k)}$. If so, we absorb $v_{(k)}$ into $B_2$. Each absorption sets all nodes in $B_2$ to their average value. In this way, we ensure that there are no violations among nodes $v_2, \dots, v_c$. Then, we consider the order between $v_1$ and $v_2$. If they are discordant (i.e., $\mathrm{val}(B_1) < \mathrm{val}(B_2)$), we similarly absorb $B_2$ into $B_1$ to eliminate this violation (lines 9-11). If this absorption causes further violations between $B_1$ and a leaf node, we similarly absorb the violated node as above (lines 12-15). Finally, we recover $\hat{p}^*$ from the block values according to the final block division.
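Below is a plain-Python sketch of the block-merging procedure just described (Algorithm 1), written with our own variable names and 0-based indices where positions 0 and 1 hold the two original labels; it illustrates the idea and is not the authors' reference implementation.

```python
import numpy as np

def adapted_irt(p):
    """Tree-structured isotonic regression for the order of Eq. (8):
    p_hat[0] >= p_hat[1] >= p_hat[i] for i >= 2 (0-based indices)."""
    p = np.asarray(p, dtype=float)
    c = len(p)
    leaves = sorted(range(2, c), key=lambda i: p[i], reverse=True)  # leaves, descending
    members, total, count = [1], p[1], 1        # the block containing v2
    k = 0
    # lines 4-7: absorb leaves that violate the order against block(v2)
    while k < len(leaves) and p[leaves[k]] > total / count:
        members.append(leaves[k]); total += p[leaves[k]]; count += 1
        k += 1
    # lines 9-11: if v1 violates the order against block(v2), merge the blocks
    if p[0] < total / count:
        members.append(0); total += p[0]; count += 1
        # lines 12-16: the block average dropped, so more leaves may violate it now
        while k < len(leaves) and p[leaves[k]] > total / count:
            members.append(leaves[k]); total += p[leaves[k]]; count += 1
            k += 1
    p_hat = p.copy()
    p_hat[members] = total / count              # line 18: recover from the block value
    return p_hat

print(adapted_irt([0.25, 0.20, 0.40, 0.15]))    # -> [0.2833, 0.2833, 0.2833, 0.15]
```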

Theorem 1.

[19] The Adapted IRT algorithm terminates with the optimal solution to Eq. (10).

The correctness of the algorithm follows because the objective of isotonic data augmentation is strictly convex and the constraints are convex; therefore it has a unique local minimizer, which is also the global minimizer [4]. Its time complexity is $O(c \log c)$, dominated by the sort in line 2.

4.3 Efficient Approximation via Penalty Methods

We found two drawbacks of the order-restricted data augmentation proposed in Sec 4.2: (1) although the time complexity is $O(c \log c)$, the algorithm is hard to parallelize on a GPU; (2) the order restrictions are too harsh, which can overly distort the information of the original soft labels. For example, if the probabilities of the original labels are very low, then almost all nodes will be absorbed and averaged, which loses all valid knowledge from the original soft labels. In this subsection, we loosen the order restrictions and propose a more GPU-friendly algorithm.

Note that the partial order in Eq. (10b) introduces the restrictions to the soft labels, and then uses the isotonic soft labels to constrain the student model. If we directly use the partial order to constrain the student model instead, the restrictions can be rewritten as:

$q_1 \geq q_2 \geq q_i \ \text{ for } i > 2, \quad \text{where } q = \sigma(f_s(\tilde{x})/T)$    (11)

Note that we could equivalently impose the restrictions on the logits $f_s(\tilde{x})$ without changing the actual restriction, since the softmax is monotonic. We use $q = \sigma(f_s(\tilde{x})/T)$ because we want to ensure the loss below is equally sensitive to both the distillation term and the penalty term.

Objective with Order-Restricted Student. We convert the optimization problem subject to Eq. (11) into the unconstrained case in Eq. (12) via penalty methods. The idea is to add the restrictions to the loss function as penalty terms.

$\mathcal{L}_p = \mathcal{L}_{KD}(\tilde{x}, \tilde{y}) + \mu \Big( \max(0,\, q_2 - q_1) + \sum_{i > 2} \max(0,\, q_i - q_2) \Big)$    (12)

where $\mu$ is the penalty coefficient. The penalty-based loss can be computed in $O(c)$ time and is GPU-friendly (via the max function).
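A PyTorch sketch of the penalty term in Eq. (12) for a batch of student predictions; it assumes the class dimension has been permuted so that columns 0 and 1 hold the two original labels of each augmented sample, and the coefficient mu is applied outside this function.

```python
import torch.nn.functional as F

def order_penalty(student_logits, T=4.0):
    """Hinge penalties max(0, q_j - q_i) for every order restriction of Eq. (11),
    computed in O(c) per sample with element-wise operations (GPU-friendly).
    Columns 0 and 1 are assumed to hold the two original labels, weight(0) >= weight(1)."""
    q = F.softmax(student_logits / T, dim=-1)              # (batch, c)
    pen_top = F.relu(q[:, 1] - q[:, 0])                    # restriction q_1 >= q_2
    pen_leaf = F.relu(q[:, 2:] - q[:, 1:2]).sum(dim=-1)    # restrictions q_2 >= q_i, i > 2
    return (pen_top + pen_leaf).mean()

# Total loss, following Eq. (12): kd_loss(...) + mu * order_penalty(student_logits)
```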

5 Experiments

5.1 Setup

Models. We use teacher models and student models of different architectures to test the effect of our proposed isotonic data augmentation algorithms for knowledge distillation. We tested knowledge transfer within the same architecture family (e.g., from ResNet101 to ResNet18) and between different architectures (e.g., from GoogLeNet to ResNet).

Competitors. We compare the isotonic data augmentation-based knowledge distillation with standard knowledge distillation [11]. We also compare with the baseline of directly distilling with augmented samples without introducing the order restrictions. We use this baseline to verify the effectiveness of the order restrictions.

Datasets. We use CIFAR-100 [13], which contains 50k training images (500 images per class) and 10k test images. We also use ImageNet, which contains 1.2 million training images from 1k classes and 50k validation images, to evaluate the scalability of our proposed algorithms.

Implementation Details. For CIFAR-100, we train the teacher model for 200 epochs and select the model with the best accuracy on the validation set. The knowledge distillation is also trained for 200 epochs. We use SGD as the optimizer, initialize the learning rate as 0.1, and decay it by 0.2 at epochs 60, 120, and 160. By default, the loss weight is chosen by grid search and the temperature is set following common practice. For ImageNet, we train the student model for 100 epochs. We use SGD as the optimizer with an initial learning rate of 0.1, decayed by 0.1 at epochs 30, 60, and 90, and we follow [16] for the remaining hyper-parameter settings. Models for ImageNet were trained on 4 Nvidia Tesla V100 GPUs. Models for CIFAR-100 were trained on a single Nvidia Tesla V100 GPU.
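For reference, a minimal PyTorch sketch of the CIFAR-100 optimization schedule described above (SGD, initial learning rate 0.1, decayed by 0.2 at epochs 60/120/160); the momentum and weight-decay values are typical defaults we assume, not numbers reported in the paper.

```python
import torch

def build_cifar_optimizer(student):
    """SGD with the CIFAR-100 schedule from the implementation details."""
    optimizer = torch.optim.SGD(student.parameters(), lr=0.1,
                                momentum=0.9, weight_decay=5e-4)   # momentum/weight decay assumed
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[60, 120, 160], gamma=0.2)           # decay by 0.2 at these epochs
    return optimizer, scheduler
```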

5.2 Main Results

Teacher             ResNet101  ResNet50  ResNext50  GoogLeNet  DenseNet121  SeResNet101  SeResNet101  DenseNet121
Student             ResNet18   ResNet18  ResNet18   ResNet18   ResNet18     ResNet18     SeResNet18   SeResNet18    Avg.
Teacher accuracy    78.28      78.85     78.98      78.31      78.84        78.08        78.08        78.84
Student accuracy    77.55      77.55     77.55      77.55      77.55        77.55        77.21        77.21
KD                  79.78      79.41     79.88      79.33      79.84        79.41        77.45        79.65         79.34
KD + Mixup:
  KD-aug            79.39      79.75     80.14      80.15      79.75        78.35        78.94        79.52         79.50
  KD-i              79.75      80.13     80.35      80.25      80.38        79.73        78.83        80.01         79.93
  KD-p              80.56      80.45     80.67      80.35      80.36        80.11        79.25        80.49         80.28
KD + CutMix:
  KD-aug            79.73      80.02     80.19      79.71      79.77        79.19        78.55        80.23         79.67
  KD-i              79.95      80.02     80.67      79.98      80.27        79.51        79.05        80.45         79.99
  KD-p              79.93      80.51     80.34      79.96      79.98        79.57        79.13        80.83         80.03
CRD                 79.76      79.75     79.59      79.74      79.74        79.22        79.35        79.86         79.63
CRD + Mixup:
  CRD-aug           79.52      79.38     80.03      79.92      80.05        79.69        79.41        80.43         79.81
  CRD-i             79.97      79.84     80.49      80.01      80.15        79.45        79.77        80.47         80.01
  CRD-p             79.91      79.82     80.04      80.16      81.03        79.93        80.19        80.65         80.21
CRD + CutMix:
  CRD-aug           79.77      79.63     79.96      80.13      80.18        79.17        79.49        80.37         79.84
  CRD-i             80.04      80.14     80.62      80.37      80.59        79.56        79.51        80.52         80.17
  CRD-p             79.91      80.19     80.11      80.28      80.59        79.77        80.01        80.48         80.17
Table 1: Results on CIFAR-100. KD means standard knowledge distillation [11] and CRD means contrastive representation distillation [24]. The suffix -aug means knowledge distillation using mixture-based data augmentation without calibrating the soft labels, -i means soft labels calibrated by isotonic regression, and -p means the efficient penalty-based approximation.

Results on CIFAR-100. We show the classification accuracies of standard knowledge distillation and our proposed isotonic data augmentation in Table 1. Our proposed algorithms effectively improve the accuracies compared to standard knowledge distillation. This finding holds for different data augmentation techniques (i.e., CutMix and Mixup) and different network architectures. In particular, the accuracies of our algorithms even outperform the teacher models. This shows that by introducing the order restrictions, our algorithms effectively calibrate the soft labels and reduce the error from the teacher model. As Mixup usually performs better than CutMix, we only use Mixup as the data augmentation in the remaining experiments.

Results on ImageNet. We display the experimental results on ImageNet in Table 2. We use the same settings as [24], namely using ResNet-34 as the teacher and ResNet-18 as the student. The results show that isotonic data augmentation algorithms are more effective than the original data augmentation technology. This validates the scalability of the isotonic data augmentation.

We found that KD-p is better on CIFAR-100, while KD-i is better on ImageNet. We think this is because ImageNet has more categories (i.e., 1000), which makes order violations more likely to appear; therefore, the strict isotonic regression in KD-i is required to eliminate them. On the other hand, since CIFAR-100 has fewer categories, the original soft labels are more accurate, and introducing loose restrictions through KD-p is enough. As a result, we suggest using KD-i if severe order violations occur.

KD-aug KD-i KD-p
top-1/top-5 68.79/88.24 69.71/89.85 69.04/88.93
Table 2: Results of ImageNet.

Ablation. In Table 1, we also compare with the conventional data augmentation without introducing order restrictions (i.e. KD-aug). It can be seen that by introducing the order restriction, our proposed isotonic data augmentation consistently outperforms the conventional data augmentation. This verifies the advantages of our isotonic data augmentation over the original data augmentation.

5.3 Effect of Order Restrictions

Our basic intuition in this paper is that order violations of soft labels impair knowledge distillation. To verify this intuition more directly, we evaluated the performance of knowledge distillation under different levels of order violations. Specifically, we use the Adapted IRT algorithm to eliminate the order violations of soft labels for different ratios of the augmented samples. We show in Fig. 4 the effect of eliminating different proportions of order violations on CIFAR-100. As more violations are calibrated, the accuracy of knowledge distillation continues to increase. This verifies that order violations impair knowledge distillation.

Figure 4: Effect of introducing order restrictions to different ratios of samples. Average over 5 runs. Restricting more samples improves the accuracy.
Figure 5: Effect of different penalty coefficients $\mu$. A single recommended value outperforms the other values in most cases.

5.4 Efficiency of Isotonic Data Augmentation

We mentioned that KD-p, based on penalty methods, is more efficient and GPU-friendly than KD-i. In this subsection, we verify the efficiency of the different algorithms. We selected models from Table 1 and measured their average training time per epoch. In Table 3, taking the time required for standard KD as 1 unit, we show the relative time of the different data augmentation algorithms. It can be seen that KD-p, based on penalty methods, requires almost no additional time. This shows that KD-p is more suitable for large-scale data in terms of efficiency.

KD KD-aug KD-i KD-p
Mixup 1.00x 1.02x 3.33x 1.02x
CutMix 1.00x 1.01x 3.05x 1.01x
Table 3: Time costs for different data augmentation algorithms.

5.5 Effect of the Looseness of Order Restrictions

The coefficient $\mu$ in Eq. (12) is the key hyper-parameter that controls the looseness of KD-p. We find that most models perform best at the same value of $\mu$, which we therefore recommend as the default for real tasks.

5.6 Effect on NLP Tasks

Our proposed algorithm can also be extended to NLP tasks. Table 4 shows the results on several NLP tasks, including SST [21], TREC [14], and DBpedia [2]. We use BERT [8] as the teacher and DistilBERT [20] as the student. We leverage the mixing method in Mixup-Transformer [22]. The results indicate that, compared to KD-aug, KD-i and KD-p improve the student models' accuracy.

SST TREC DBPedia
KD-aug 97.35 99.72 98.54
KD-i 97.85 99.78 98.95
KD-p 98.24 99.95 99.01
Table 4: Results on several NLP tasks.

6 Conclusion

We reveal that conventional data augmentation techniques for knowledge distillation suffer from critical order violations. In this paper, we use isotonic regression (IR) – a classic statistical algorithm – to eliminate the rank violations. We adapt the classical IRT-BIN algorithm into the Adapted IRT algorithm to generate concordant soft labels for augmented samples. We further propose a GPU-friendly penalty-based algorithm. We have conducted a variety of experiments on different datasets with different data augmentation techniques and verified the effectiveness of our proposed isotonic data augmentation algorithms. We also directly verified the effect of introducing rank restrictions on data augmentation-based knowledge distillation.

Acknowledgements

This paper was supported by the National Natural Science Foundation of China (No. 61906116) and by the Shanghai Sailing Program (No. 19YF1414700).

References

  • [1] S. T. Acton and A. C. Bovik (1998) Nonlinear image estimation using piecewise and local image models. TIP 7(7), pp. 979–991.
  • [2] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives (2007) DBpedia: a nucleus for a web of open data. In The Semantic Web, pp. 722–735.
  • [3] R. E. Barlow and H. D. Brunk (1972) The isotonic regression problem and its dual. JASA 67(337), pp. 140–147.
  • [4] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty (2013) Nonlinear programming: theory and algorithms. John Wiley & Sons.
  • [5] A. Bochkovskiy, C. Wang, and H. M. Liao (2020) YOLOv4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
  • [6] J. H. Cho and B. Hariharan (2019) On the efficacy of knowledge distillation. In ICCV, pp. 4794–4802.
  • [7] D. Das, H. Massa, A. Kulkarni, and T. Rekatsinas (2020) An empirical analysis of the impact of data augmentation on knowledge distillation. arXiv preprint arXiv:2006.03810.
  • [8] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding.
  • [9] Q. Ding, S. Wu, H. Sun, J. Guo, and S. Xia (2019) Adaptive regularization of labels. arXiv preprint arXiv:1908.05474.
  • [10] E. Harris, A. Marcu, M. Painter, M. Niranjan, A. Prügel-Bennett, and J. Hare (2020) FMix: enhancing mixed sample data augmentation. arXiv preprint arXiv:2002.12047.
  • [11] G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
  • [12] M. G. Kendall (1938) A new measure of rank correlation. Biometrika 30(1/2), pp. 81–93.
  • [13] A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images.
  • [14] X. Li and D. Roth (2002) Learning question classifiers. In COLING.
  • [15] M. Mahsereci, L. Balles, C. Lassner, and P. Hennig (2017) Early stopping without a validation set. arXiv preprint arXiv:1703.09580.
  • [16] Y. Matsubara (2021) torchdistill: a modular, configuration-driven framework for knowledge distillation. In International Workshop on Reproducible Research in Pattern Recognition, pp. 24–44.
  • [17] W. L. Maxwell and J. A. Muckstadt (1985) Establishing consistent and realistic reorder intervals in production-distribution systems. OR 33(6), pp. 1316–1341.
  • [18] A. Niculescu-Mizil and R. Caruana (2005) Predicting good probabilities with supervised learning. In ICML, pp. 625–632.
  • [19] P. M. Pardalos and G. Xue (1999) Algorithms for a class of isotonic regression problems. Algorithmica 23(3), pp. 211–222.
  • [20] V. Sanh, L. Debut, J. Chaumond, and T. Wolf (2019) DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
  • [21] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pp. 1631–1642.
  • [22] L. Sun, C. Xia, W. Yin, T. Liang, P. S. Yu, and L. He (2020) Mixup-Transformer: dynamic data augmentation for NLP tasks. arXiv preprint arXiv:2010.02394.
  • [23] R. Takahashi, T. Matsubara, and K. Uehara (2019) Data augmentation using random image cropping and patching for deep CNNs. TCSVT.
  • [24] Y. Tian, D. Krishnan, and P. Isola (2019) Contrastive representation distillation. In ICLR.
  • [25] D. Wang, Y. Li, L. Wang, and B. Gong (2020) Neural networks are more productive teachers than human raters: active mixup for data-efficient knowledge distillation from a blackbox model. In CVPR, pp. 1498–1507.
  • [26] H. Wang, S. Lohit, M. Jones, and Y. Fu (2020) Knowledge distillation thrives on data augmentation. arXiv preprint arXiv:2012.02909.
  • [27] T. Wen, S. Lai, and X. Qian (2019) Preparing lessons: improve knowledge distillation with better supervision. arXiv preprint arXiv:1911.07471.
  • [28] C. Yang, L. Xie, S. Qiao, and A. L. Yuille (2019) Training deep neural networks in generations: a more tolerant teacher educates better students. In AAAI, Vol. 33, pp. 5628–5635.
  • [29] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo (2019) CutMix: regularization strategy to train strong classifiers with localizable features. In ICCV, pp. 6023–6032.
  • [30] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz (2018) Mixup: beyond empirical risk minimization. In ICLR.