Why distillation helps: a statistical perspective

by Aditya Krishna Menon et al.

Knowledge distillation is a technique for improving the performance of a simple "student" model by replacing its one-hot training labels with a distribution over labels obtained from a complex "teacher" model. While this simple approach has proven widely effective, a basic question remains unresolved: why does distillation help? In this paper, we present a statistical perspective on distillation which addresses this question, and provides a novel connection to extreme multiclass retrieval techniques. Our core observation is that the teacher seeks to estimate the underlying (Bayes) class-probability function. Building on this, we establish a fundamental bias-variance tradeoff in the student's objective: this quantifies how approximate knowledge of these class-probabilities can significantly aid learning. Finally, we show how distillation complements existing negative mining techniques for extreme multiclass retrieval, and propose a unified objective which combines these ideas.




1 Introduction

Distillation is the process of using a “teacher” model to improve the performance of a “student” model (Craven:1995; Breiman:1996; Bucilua:2006; Xue:2013; Ba:2014; Hinton:2015). In its simplest form, rather than fitting to raw labels, one trains the student to fit the teacher’s distribution over labels. While originally devised with the aim of model compression, distillation has proven successful in iteratively improving a fixed-capacity model  (Rusu:2016; Furlanello:2018; Yang:2019; Xie:2019), and found use in many other settings (Papernot:2016; Tang:2016; Czarnecki:2017; Celik:2017; Yim:2017; Li:2018; Liu:2019; Nayak:2019).

Given its empirical successes, it is natural to ask: why does distillation help? Hinton:2015 argued that distillation provides “dark knowledge” via the teacher’s logits on the “wrong” labels for an example, which effectively weights samples differently (Furlanello:2018). Various theoretical analyses of distillation have subsequently been developed (Lopez-Paz:2016; Phuong:2019; Foster:2019; Dong:2019; Mobahi:2020), with particular focus on its optimisation and regularisation effects.

In this paper, we present a novel statistical perspective on distillation which sheds light on why it aids performance. Our analysis centers on a simple observation: a good teacher accurately models the true (Bayes) class-probabilities. This is a stricter requirement than the teacher merely having high accuracy. We quantify how such probability estimates improve generalisation compared to learning from raw labels. Building on this, we show how distillation is also useful in selecting informative labels for multiclass retrieval, wherein we wish to order labels according to their relevance (Jain:2019). In sum, our contributions are:

  1. We establish the statistical benefit of using the Bayes class-probabilities in place of one-hot labels, and quantify a bias-variance tradeoff when using approximate class-probabilities (§3).

  2. We propose double-distillation, a novel application of distillation for multiclass retrieval wherein teacher probabilities guide a ranking over labels (§4).

  3. We experimentally validate the value both of approximate class-probabilities for generalisation, and of double-distillation for multiclass retrieval (§5).

Contribution (i) gives a statistical perspective on the value of “dark knowledge”: for a given example, the logits on the “wrong” labels encode information about the underlying data distribution. This view elucidates how a teacher’s probability calibration, rather than its accuracy, can influence a student’s generalisation; see Figure 1 for an illustration. Contribution (ii) demonstrates a practical benefit of this statistical view, by showing how multiclass retrieval objectives benefit from approximate class-probabilities.

Figure 1: (a) Top-1 accuracy; (b) log-loss; (c) expected calibration error. Illustration of how better teacher modelling of the underlying (Bayes) class-probabilities influences student generalisation, per our statistical perspective. Here, we train ResNets of varying depths on the CIFAR-100 dataset, and use these as teachers to distill to a student ResNet of fixed depth. Figure 1(a) reveals that the teacher model gets increasingly more accurate as its depth increases; however, from 1(b), the corresponding log-loss starts increasing beyond a certain depth, and from 1(c), the calibration error likewise worsens with increasing depth. This indicates that the teacher’s probability estimates become progressively poorer approximations of the Bayes class-probability distribution beyond a certain depth. Intuitively, the teacher’s approximation of this distribution is governed by balancing the bias and variance in its predictions. The accuracy of the student model also degrades beyond a certain teacher depth, reflecting the bound in Proposition 3. See §5.2 for more details, and §3.2 for an illustration of a general bias-variance tradeoff.

2 Background and notation

We review multiclass classification, multiclass retrieval, and distillation.

2.1 Multiclass classification

In multiclass classification, we are given a training sample S = {(x_n, y_n)}_{n=1}^N drawn from an unknown distribution D over instances x ∈ X and labels y ∈ [L] = {1, …, L}. Our goal is to learn a predictor f : X → R^L so as to minimise the risk of f, i.e., its expected loss for a random instance and label:

R(f) = E_{(x, y) ~ D}[ ℓ(y, f(x)) ].    (1)

Here, ℓ : [L] × R^L → R is a loss function, where for label y ∈ [L] and prediction vector f(x) ∈ R^L, ℓ(y, f(x)) is the loss incurred for predicting f(x) when the true label is y. A canonical example is the softmax cross-entropy loss,

ℓ(y, f(x)) = −f_y(x) + log [ Σ_{y' ∈ [L]} e^{f_{y'}(x)} ].    (2)
We may approximate the risk via the empirical risk

R̂(f; S) = (1/N) Σ_{n=1}^N e_{y_n}^⊤ ℓ(f(x_n)),    (3)

where e_{y_n} ∈ {0, 1}^L denotes the one-hot encoding of y_n, and ℓ(f(x)) = ( ℓ(1, f(x)), …, ℓ(L, f(x)) ) denotes the vector of losses for each possible label.

In a multiclass retrieval setting, our goal is to ensure that the top-ranked labels in f(x) include the true label y (Lapin:2018). Formally, we seek to minimise the top-k loss

ℓ_{top-k}(y, f(x)) = 1[ y ∉ top_k(f(x)) ],    (4)

where top_k(f(x)) denotes the k highest-scoring labels under f(x). When k = 1, this loss reduces to the standard 0-1 error.
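As a concrete sketch, the top-k loss (4) can be computed in plain NumPy as follows (the score values below are hypothetical):

```python
import numpy as np

def top_k_loss(scores, label, k):
    """Top-k retrieval loss (4): 0 if the true label is among the k
    highest-scoring labels, else 1; k = 1 recovers the 0-1 error."""
    top_k = np.argsort(scores)[::-1][:k]
    return int(label not in top_k)

# Example: label 2 is the second-highest scoring label.
scores = np.array([0.1, 2.0, 1.5])
```
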

2.2 Knowledge distillation

Distillation involves using a “teacher” model to improve the performance of a “student” model (Bucilua:2006; Hinton:2015). In the simplest form, one trains the teacher model and obtains a class-probability estimator p^t : X → Δ_{[L]}, where Δ_{[L]} denotes the probability simplex over labels. Each coordinate p^t_y(x) estimates how likely x is to be classified as y. In place of the empirical risk (3), the student now minimises the distilled risk

R̂_d(f; S) = (1/N) Σ_{n=1}^N p^t(x_n)^⊤ ℓ(f(x_n)),    (5)

so that the one-hot encoding of labels is replaced with the teacher’s distribution over labels. Distillation may also be used when the student has access to a large pool of unlabelled samples; in this case, distillation serves as a means of semi-supervised learning.
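The distilled risk (5) differs from the empirical risk (3) only in the target distribution. A minimal NumPy sketch, with hypothetical logits and teacher probabilities, makes this explicit:

```python
import numpy as np

def softmax_ce(logits, target_dist):
    """Cross-entropy between target label distributions (rows) and the
    softmax of the corresponding logits."""
    log_z = np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return (-(target_dist * (logits - log_z))).sum(axis=1)

def one_hot(labels, num_classes):
    return np.eye(num_classes)[labels]

def empirical_risk(logits, labels):
    # Standard objective (3): targets are one-hot labels.
    return softmax_ce(logits, one_hot(labels, logits.shape[1])).mean()

def distilled_risk(logits, teacher_probs):
    # Distilled objective (5): targets are the teacher's distributions.
    return softmax_ce(logits, teacher_probs).mean()

# Hypothetical student logits, labels, and teacher probabilities.
logits = np.array([[2.0, 0.5, -1.0], [0.0, 1.0, 0.5]])
labels = np.array([0, 1])
teacher_probs = np.array([[0.8, 0.15, 0.05], [0.1, 0.7, 0.2]])
```

With a one-hot “teacher”, the distilled risk reduces exactly to the empirical risk, which is a useful sanity check on any implementation.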


While originally conceived in settings where the student has lower capacity than the teacher (Bucilua:2006; Hinton:2015), distillation has proven useful when both models have the same capacity (Breiman:1996; Furlanello:2018; Xie:2019). More precisely, distillation involves training a teacher on labelled samples using some function class, and then training a student using a possibly different function class. Classic distillation assumes the teacher’s function class has greater capacity than the student’s. “Born-again” distillation assumes the two classes coincide, i.e., one iteratively trains versions of the same model, using past predictions to improve performance.

2.3 Existing explanations for distillation

While it is well-accepted that distillation is empirically useful, there is less consensus as to why this is the case. Hinton:2015 attributed the success of distillation (at least in part) to the encoding of “dark knowledge” in the probabilities the teacher assigns to the “wrong” labels for an example. This richer information plausibly aids the student, for example by providing a weighting on the samples (Furlanello:2018; Tang:2020). Further, when ℓ is the softmax cross-entropy, the gradient of the distillation objective with respect to the student logits is the difference between the teacher’s and student’s probability estimates, implying a form of logit matching in high-temperature regimes (Hinton:2015).

Lopez-Paz:2016 related distillation to learning from privileged information under a noise-free setting. Phuong:2019 analysed the dynamics of student learning, assuming a deep linear model for binary classification. Foster:2019 provided a generalisation bound for the student, under the assumption that it learns a model close to the teacher. This does not, however, explicate what constitutes an “ideal” teacher, nor quantify how an approximation to this ideal teacher will affect student generalisation. Gotmare:2019 studied the effect of distillation on the discrimination versus feature-extraction layers of the student network.

Dong:2019 argued that distillation has a similar effect to early stopping, and studied its ability to denoise labels. Our focus, by contrast, is on settings where there is no exogenous label noise, and on the statistical rather than optimisation effects of distillation. Mobahi:2020 analysed the special setting of self-distillation, wherein the student and teacher employ the same model class, and showed that for kernelised models, this is equivalent to increasing the regularisation strength.

3 Distillation through a class-probability lens

We now present a statistical perspective of distillation, which gives insight into why it can aid generalisation. Central to our perspective are two observations:

  1. the risk (1) we seek to minimise inherently smooths labels by the class-probability distribution;

  2. a teacher’s predictions provide an approximation to this class-probability distribution, and can thus yield a better approximation to the risk (1) than one-hot labels.

Building on these, we show how sufficiently accurate teacher approximations can improve student generalisation.

3.1 Bayes knows best: distilling class-probabilities

Our starting point is the following elementary observation: the underlying risk (1) for a predictor f is

R(f) = E_x[ p*(x)^⊤ ℓ(f(x)) ],    (6)

where p*(x) = ( P(y | x) )_{y ∈ [L]} is the Bayes class-probability distribution. Intuitively, p*_y(x) captures the suitability of label y for instance x. Thus, the risk involves drawing an instance x, and then computing the average loss of f(x) over all labels y, weighted by their Bayes probabilities p*_y(x). When p*(x) is not concentrated on a single label, there is an inherent confusion amongst the labels for the instance x.

Given a sample S, the empirical risk (3) approximates the distribution p*(x_n) with the one-hot encoding e_{y_n}, which is supported on only one label. While this is an unbiased estimate, it is a significant reduction in granularity. By contrast, consider the following Bayes-distilled risk on a sample S:

R̂_*(f; S) = (1/N) Σ_{n=1}^N p*(x_n)^⊤ ℓ(f(x_n)).    (7)

This is a distillation objective (cf. (5)) using a Bayes teacher, who provides the student with the true class-probabilities. Rather than fitting to a single label realisation y_n, a student minimising (7) considers all alternate label realisations, weighted by their likelihood. (While the student could trivially memorise the training class-probabilities, this would not generalise to test samples.) Observe that when ℓ is the cross-entropy, (7) is simply the KL divergence between the Bayes class-probabilities and our predictions, up to a constant.

Both the standard empirical risk in (3) and the Bayes-distilled risk in (7) are unbiased estimates of the population risk. But intuitively, we expect that a student minimising (7) ought to generalise better from a finite sample. We can make this intuition precise by establishing that the Bayes-distilled risk has lower variance over fresh draws of the training sample.

Lemma 1.

For any fixed predictor f,

V_{S ~ D^N}[ R̂_*(f; S) ] ≤ V_{S ~ D^N}[ R̂(f; S) ],

where V denotes variance over draws of the training sample, and equality holds iff for almost every instance x, the loss values are constant on the support of p*(x).

Proof of Lemma 1.

By definition,

V_S[ R̂(f; S) ] = (1/N) · ( E_{(x, y)}[ ℓ(y, f(x))^2 ] − R(f)^2 ),
V_S[ R̂_*(f; S) ] = (1/N) · ( E_x[ ( p*(x)^⊤ ℓ(f(x)) )^2 ] − R(f)^2 ).

In both cases, the second term simply equals R(f)^2, since both estimates are unbiased. For fixed x, the result follows by Jensen’s inequality applied to the random variable ℓ(y, f(x)) with y ~ p*(x):

( p*(x)^⊤ ℓ(f(x)) )^2 = E_{y ~ p*(x)}[ ℓ(y, f(x)) ]^2 ≤ E_{y ~ p*(x)}[ ℓ(y, f(x))^2 ].

Equality occurs iff ℓ(y, f(x)) is constant in y, which requires the loss to be constant on the support of p*(x). ∎

The condition under which the Bayes-distilled and empirical risks have the same variance is intuitive: the two risks trivially agree when the predictor is non-discriminative (attaining equal loss on all labels), or when a label is inherently deterministic (the class-probability mass is concentrated on one label). For discriminative predictors and non-deterministic labels, however, the Bayes-distilled risk can have significantly lower variance.
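Lemma 1 can be checked numerically. Below is a minimal sketch with hypothetical Bayes probabilities and per-label losses for a single fixed instance: the one-hot estimate is a random draw of the loss, while the Bayes-distilled estimate is deterministic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Bayes probabilities p*(x) and per-label losses l(f(x))
# for a single fixed instance x and discriminative predictor f.
p_star = np.array([0.6, 0.3, 0.1])
losses = np.array([0.2, 1.5, 2.3])

# One-hot estimator: draw a label y ~ p*(x) and report l(y, f(x)).
y = rng.choice(3, size=100_000, p=p_star)
one_hot_estimates = losses[y]

# Bayes-distilled estimator: the deterministic weighted loss p*(x)^T l(f(x)).
bayes_estimate = p_star @ losses
```

Both estimators target the same risk contribution, but since the losses are non-constant on the support of p*(x), only the one-hot estimator carries label-sampling variance, matching the equality condition in Lemma 1.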

The reward of reducing variance is better generalisation: a student minimising (7) will better minimise the population risk, compared to using one-hot labels. In fact, we may quantify how the empirical variance of the Bayes-distilled loss values influences generalisation as follows.

Proposition 2.

Pick any bounded loss ℓ. Fix a hypothesis class F of predictors, with induced class of Bayes-distilled loss functions x ↦ p*(x)^⊤ ℓ(f(x)) for f ∈ F. Suppose this induced class has uniform covering number M_N. Then, for any δ ∈ (0, 1), with probability at least 1 − δ over S ~ D^N, every f ∈ F satisfies

R(f) ≤ R̂_*(f; S) + O( sqrt( V̂_N(f) · log(M_N / δ) / N ) + log(M_N / δ) / N ),

where V̂_N(f) is the empirical variance of the loss values { p*(x_n)^⊤ ℓ(f(x_n)) }_{n=1}^N.

Proof of Proposition 2.

This is a simple consequence of Maurer:2009, which gives a uniform convergence version of Bennett’s inequality (Bennett:1962). ∎

The above may be contrasted with the bound achievable for the standard empirical risk using one-hot labels: by Maurer:2009,

R(f) ≤ R̂(f; S) + O( sqrt( Ṽ_N(f) · log(M̃_N / δ) / N ) + log(M̃_N / δ) / N ),

where here we consider the induced class of functions (x, y) ↦ e_y^⊤ ℓ(f(x)), with uniform covering number M̃_N, and Ṽ_N(f) is the empirical variance of the one-hot loss values. Combining the above with Lemma 1, we see that the Bayes-distilled empirical risk results in a lower variance penalty.

To summarise the statistical perspective espoused above, a student should ideally have access to the underlying class-probabilities, rather than a single label realisation. As a final comment, the above provides a statistical perspective on the value of “dark knowledge”: the teacher’s “wrong” logits for an example provide approximate information about the Bayes class-probabilities. This results in a lower-variance student objective, aiding generalisation.

3.2 Distilling from an imperfect teacher

The previous section explicates how an idealised “Bayes teacher” can benefit a student. How does this translate to more realistic settings, where one obtains predictions from a teacher which is itself learned from data?

Our first observation is that a teacher’s predictor p^t can typically be seen as an imperfect estimate of the true p*. Indeed, if the teacher is trained with a loss that is proper (Savage:1971; Schervish:1989; Buja:2005), then the teacher’s risk decomposes as

E_x[ D(p*(x), p^t(x)) ] + constant,    (8)

where D is a (loss-dependent) divergence between the true and estimated class-probability functions. For example, the softmax cross-entropy corresponds to D being the KL divergence between the distributions. The teacher’s goal is thus fundamentally to ensure that its predictions align with the true class-probability function.

Of course, a teacher learned from finite samples is unlikely to achieve zero divergence in (8). Indeed, even high-capacity teacher models may not be rich enough to capture the true class-probability function. Further, even if the teacher can represent this function, it may not be able to learn it perfectly given a finite sample, owing to both statistical (e.g., the risk of overfitting) and optimisation (e.g., non-convexity of the objective) issues. We must thus treat the teacher’s predictions as an imperfect estimate of the Bayes probabilities. The natural question is: will such an estimate still improve generalisation?

To answer this, we establish a fundamental bias-variance tradeoff when performing distillation. Specifically, we show that the difference between the distilled (cf. (5)) and population risks depends on how variable the loss under the teacher is, and how well the teacher estimates the Bayes probabilities in a squared-error sense. Intuitively, the latter captures how well the teacher estimates these probabilities on average (bias), and how variable the teacher’s predictions are (variance).

Proposition 3.

Pick any bounded loss ℓ. Suppose we have a teacher model p^t, with corresponding distilled empirical risk R̂_d(f; S) in (5). For a constant C depending only on the loss, and any predictor f,

( E_S[ R̂_d(f; S) ] − R(f) )^2 ≤ C^2 · ( E_x[ ‖ E[p^t(x)] − p*(x) ‖_2 ] )^2,    (9)

V_S[ R̂_d(f; S) ] ≤ O( (1/N) · V_x[ E[p^t(x)]^⊤ ℓ(f(x)) ] + E_x[ V[p^t(x)] ] ),    (10)

where V[p^t(x)] denotes the sum of coordinate-wise variances of the (possibly random) teacher predictions.

Proof of Proposition 3.

Let R̄_d(f) = E_S[ R̂_d(f; S) ]. Then,

E_S[ ( R̂_d(f; S) − R(f) )^2 ] = V_S[ R̂_d(f; S) ] + ( R̄_d(f) − R(f) )^2.

Observe that

| R̄_d(f) − R(f) | = | E_x[ ( E[p^t(x)] − p*(x) )^⊤ ℓ(f(x)) ] |
                  ≤ E_x[ ‖ E[p^t(x)] − p*(x) ‖_2 · ‖ ℓ(f(x)) ‖_2 ]
                  ≤ C · E_x[ ‖ E[p^t(x)] − p*(x) ‖_2 ],

where the second line is by the Cauchy-Schwarz inequality, and the third line by the equivalence of norms together with the boundedness of ℓ. Now, (9) follows since R̄_d(f) is a constant, implying the squared bias is at most the square of this bound. For (10), by Jensen’s inequality and the definition of variance, the variance of R̂_d(f; S) is bounded by the (1/N)-scaled variability of the mean teacher loss across instances, plus the variability of the teacher’s own predictions. ∎

Unpacking the above, the fidelity of the distilled risk’s approximation to the true risk depends on three factors: how variable the expected loss is for a random instance; how well the teacher approximates the true class-probabilities on average; and how variable the teacher’s predictions are. Mirroring the previous section, we may convert Proposition 3 into a generalisation bound for the student:


R(f) ≤ R̂_d(f; S) + O( E_x[ ‖ p^t(x) − p*(x) ‖_2 ] ) + ε_N(f; δ),

where ε_N(f; δ) is the variance-based penalty term from Proposition 2. As is intuitive, using an imperfect teacher incurs an additional penalty depending on how far its predictions are from the Bayes class-probabilities, in a squared-error sense. For completeness, a formal statement is provided in Proposition 5 in Appendix A.

3.3 Discussion and implications

Our statistical perspective gives a simple yet powerful means of understanding distillation. Our formal results follow readily from this perspective, but their implications are subtle, and merit further discussion.

Why accuracy is not enough. Our bias-variance result establishes that if the teacher provides good class-probabilities, in the sense of approximating the Bayes probabilities in a mean-square sense, then the resulting student should generalise well. In deep networks, this is not the same as the teacher having higher accuracy; such models may be accurate while being overly confident (Guo:2017; Rothfuss:2019). Our result thus potentially illuminates why more accurate teachers may lead to poorer students, as has been noted empirically (Muller:2019); see also §5.2.

In practice, the precise bound derived above is expected to be loose. However, its qualitative trend may be observed in practice. Figure 1 (see also §5.2) illustrates how increasing the depth of a ResNet model may increase accuracy, but degrade probabilistic calibration. This is seen to directly relate to the quality of a student distilled from these models.

Temperature scaling (Hinton:2015), a common and empirically successful trick in distillation, can also be analysed through this perspective. Teachers are usually highly complex and optimised to maximise accuracy; hence, they often become overly confident. Increasing the temperature can bring the student’s target closer to the true distribution, rather than merely conveying the most accurate label.
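This effect is easy to see numerically. The sketch below (with hypothetical teacher logits) shows how raising the softmax temperature preserves the teacher’s label ranking while spreading probability mass onto the “wrong” labels:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical over-confident teacher logits for one example.
logits = np.array([8.0, 2.0, 1.0])

p_t1 = softmax(logits)        # temperature T = 1: nearly one-hot
p_t4 = softmax(logits / 4.0)  # temperature T = 4: softer distribution
```

Dividing the logits by a temperature T > 1 is monotone, so the argmax is unchanged, but the resulting distribution exposes the teacher’s relative confidences on the non-maximal labels.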

Teacher variance and model capacity. Proposition 3 allows the teacher’s predictions to be random (e.g., owing to the teacher being learned from some independent sample). Consequently, the variance terms not only reflect how diffuse the teacher’s predictions are, but also how much these predictions vary over fresh draws of the teacher’s training sample. High-capacity teachers may yield vastly different predictions when trained on fresh samples; this variability incurs a penalty in the third term of Proposition 3. At the same time, such teachers can better estimate the Bayes class-probabilities, incurring a lower bias. The delicate tradeoff between these concerns translates into (student) generalisation.

We may understand the label smoothing trick (Szegedy:2016) in light of the above. This corresponds to mixing the one-hot student labels with the uniform distribution, yielding targets (1 − α) · e_{y_n} + (α/L) · 1 for a smoothing parameter α ∈ (0, 1). From the perspective of modelling the class-probabilities, choosing α > 0 introduces a bias. However, the smoothed targets have lower variance than the one-hot labels, owing to the (1 − α) scaling. Provided the bias is not too large, smoothing can thus aid generalisation.
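A minimal sketch of label smoothing viewed as a “uniform teacher” (the mixing weight α below is a hypothetical choice):

```python
import numpy as np

def smooth_labels(labels, num_classes, alpha):
    """Label smoothing (Szegedy:2016): mix one-hot targets with the
    uniform distribution, (1 - alpha) * e_y + alpha / L."""
    one_hot = np.eye(num_classes)[labels]
    return (1.0 - alpha) * one_hot + alpha / num_classes

# Hypothetical example: two labels, four classes, alpha = 0.2.
targets = smooth_labels(np.array([0, 2]), num_classes=4, alpha=0.2)
```
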

We remark also that the first variance term in Proposition 3 vanishes as the number of student samples N → ∞. This is intuitive: in the limit of infinite student samples, the quality of the distilled objective is wholly determined by how well the teacher probabilities model the Bayes probabilities. For small N, this term measures how diffuse the losses are when weighted by the teacher probabilities (similar to Lemma 1).

How much teacher bias is admissible? In the limit of infinite student samples, Proposition 3 reveals that a distilled student’s generalisation gap depends on the bias and variance of the teacher’s predictions. By contrast, when learning from a labelled sample, the student’s generalisation gap depends on the complexity of its model class. Distillation can be expected to help when the first gap is lower than the second.

Concretely, when trained from limited labelled samples, the student will find an empirical risk minimiser which incurs a high statistical error. Now suppose we have a large amount of unlabelled data. A distilled student can then reliably find the minimiser of the distilled risk (following (8)) with essentially no statistical error, but with an approximation error given by the teacher’s bias. In this setting, we thus need the teacher’s bias to be lower than the statistical error.

On teacher versus student samples. A qualifier to our results is that they assume disjoint sets of samples for the teacher and student. For example, it may be that the teacher is trained on a pool of labelled samples, while the student is trained on a larger pool of unlabelled samples. The results thus do not directly hold for the settings of self- or co-distillation (Furlanello:2018; Anil:2018), wherein the same sample is used for both models. Combining our results with recent analyses from the perspective of regularisation (Mobahi:2020) and data-dependent function classes (Foster:2019) would be of interest in future work.

4 Distillation meets multiclass retrieval

Our statistical view has thus far given insight into the potential value of distillation. We now show a distinct practical benefit of this view, by leveraging it for a novel application of distillation to multiclass retrieval. Our basic idea is to construct a double-distillation objective, wherein distillation informs the loss as to the relative confidence of both “positive” and “negative” labels.

4.1 Double-distillation for multiclass retrieval

Recall from (4) that in multiclass retrieval, we wish to ensure that the top-ranked labels in our predictor contain the true label. The softmax cross-entropy in (2) offers a reasonable surrogate loss for this task, since

ℓ(y, f(x)) = log [ 1 + Σ_{y' ≠ y} e^{f_{y'}(x) − f_y(x)} ] ≥ log [ 1 + max_{y' ≠ y} e^{f_{y'}(x) − f_y(x)} ].

The latter is related to the Crammer-Singer loss (Crammer:2002), which bounds the top-1 retrieval loss. Given a sample S, we thus seek to minimise the corresponding empirical risk. From our statistical view, there is value in instead weighting each label’s loss by its Bayes probability: intuitively, each label then acts as a “smooth positive”.

We observe, however, that such smoothing does not affect the innards of the loss itself. In particular, one critique of (2) is that it assigns equal importance to each “negative” label. Intuitively, mistakes on labels that poorly explain an instance, i.e., have low Bayes probability, ought to be strongly penalised. On the other hand, if some label strongly explains the instance under the Bayes probabilities, it ought to be ignored.

To this end, consider a generalised softmax cross-entropy:

ℓ(y, f(x); q) = −f_y(x) + log [ Σ_{y' ∈ [L]} q(y' | x) · e^{f_{y'}(x)} ],    (12)

where q(· | x) is a distribution over labels. When q is uniform, this is exactly the standard cross-entropy loss (cf. (2)) plus a constant. For non-uniform q, however, the loss is encouraged to focus on ensuring f_y(x) scores above f_{y'}(x) for those y' with large q(y' | x).

Continuing our statistical view, we posit that an ideal choice of q(y' | x) is proportional to a decreasing function of the Bayes probability p*_{y'}(x). Intuitively, the resulting loss treats y as a “positive” label for x, and each label with low Bayes probability as a “negative” label. The loss seeks to score the positive label above all labels which poorly explain x. Compared to the standard softmax cross-entropy, we avoid penalisation when another label plausibly explains x as well.

Recall that under a distillation setup, a teacher model provides us estimates p^t(x) of the Bayes probabilities. Consequently, to estimate this risk on a finite sample S, we may construct the following double-distillation objective:

R̂_dd(f; S) = (1/N) Σ_{n=1}^N p^t(x_n)^⊤ ℓ_w(f(x_n)),    (13)

where the y-th coordinate of ℓ_w(f(x)) is the generalised loss (12) with weights w(y' | x) given by a decreasing function of the teacher probabilities; unrolling, this corresponds to the objective

R̂_dd(f; S) = (1/N) Σ_{n=1}^N Σ_{y ∈ [L]} p^t_y(x_n) · [ −f_y(x_n) + log Σ_{y' ∈ [L]} w(y' | x_n) · e^{f_{y'}(x_n)} ].

Observe that (13) uses the teacher in two ways: the first is the standard use of distillation to smooth the “positive” training labels; the second is a novel use of distillation to smooth the “negative” labels. Here, for any candidate “positive” label y, we apply varying weights to the “negative” labels y' when computing the loss for y.
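The double-distillation objective can be sketched in NumPy as follows. This is an illustrative implementation under assumptions: the negative weights use a logistic transform of the teacher probabilities as a stand-in for the decreasing function discussed above, and the `scale` parameter is hypothetical.

```python
import numpy as np

def weighted_softmax_ce(logits, weights):
    """Generalised softmax cross-entropy (12) for one example: for each
    candidate positive y, the loss is -f_y + log sum_{y'} w_{y'} e^{f_{y'}}."""
    m = logits.max()
    log_z = np.log((weights * np.exp(logits - m)).sum()) + m
    return -logits + log_z

def double_distillation_loss(logits, teacher_probs, scale=4.0):
    """Sketch of the double-distillation objective (13) for one example.
    The teacher smooths the positives (outer weights), while a decreasing
    transform of its probabilities downweights plausible negatives; the
    logistic transform and `scale` here are illustrative choices."""
    neg_weights = 1.0 / (1.0 + np.exp(scale * teacher_probs))
    return teacher_probs @ weighted_softmax_ce(logits, neg_weights)

# Hypothetical student logits and teacher probabilities for one example.
logits = np.array([2.0, 0.0, -1.0])
teacher_probs = np.array([0.7, 0.2, 0.1])
loss = double_distillation_loss(logits, teacher_probs)
```

With all weights equal to one, the generalised loss recovers the standard softmax cross-entropy, matching the remark following (12).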

It remains to specify the precise form of the weights in (13). A natural choice is to base them directly on the (complements of the) teacher probabilities; however, since the entries of p^t(x) sum to one, this may not allow the loss to sufficiently ignore other “positive” labels. Concretely, suppose there are several plausible “positive” labels which share most of the probability mass under p^t(x). Then, the total weight assigned to these labels is of the same order as what we would get from uniform weights.

To resolve this, we may instead use weights proportional to σ(−a · s^t_{y'}(x)), where σ is the sigmoid function, s^t_{y'}(x) is the teacher’s logit for label y', and a > 0 is a scaling parameter which may be tuned. This parameterisation allows multiple labels to have high (or low) weights independently. In the above example, each “positive” label can individually receive a weight close to zero; the total weight of the “positives” can thus also be close to zero, and so the loss can learn to ignore them.

4.2 Discussion and implications

The viability of distillation as a means of smoothing negatives has not, to our knowledge, been explored in the literature. The proposal is, however, a natural consequence of our statistical view of distillation developed in §3.

The double-distillation objective in (13) relates to ranking losses. In bipartite ranking settings (Cohen:1999), one assumes an instance space with binary labels denoting that instances are either relevant or irrelevant. Rudin:2009 proposed the following push loss for this task:

E_{x⁻ ~ Q}[ Φ( E_{x⁺ ~ P}[ φ( f(x⁻) − f(x⁺) ) ] ) ],

where P and Q are distributions over positive and negative instances respectively, and Φ, φ are convex non-decreasing functions. Similar losses have also been studied in Yun:2014. The generalised softmax cross-entropy in (12) can be seen as a contextual version of this objective for each instance, with labels playing the roles of positive and negative items. In double-distillation, the contextual distributions are approximated using the teacher’s predictions.

There is a broad literature on using a weight on “negatives” in the softmax (Liu:2016; Liu:2017; Wang:2018; Li:2019; Cao:2019); this is typically motivated by ensuring a varying margin for different classes. The resulting weighting is thus either constant or label-dependent, rather than the label- and example-dependent weights provided by distillation. Closer still to our framework is the recent work of Khan:2019, which employs uncertainty estimates in the predictions for a given example and label to adjust the desired margin in the softmax. While not explicitly couched in terms of distillation, this may be understood as a “self-distillation” setup, wherein the current predictions of a model are used to progressively refine future iterates. Compared to double-distillation, however, the nature of the weighting employed is considerably more complicated.

There is a rich literature on the problem of label ranking, where typically it is assumed that one observes a (partial) ground-truth ranking over labels (Dekel:2004; Furnkranz:2008; Vembu:2011). We remark also that the view of the softmax as a ranking loss has received recent attention (Bruch:2019; Bruch:2019b). Exploiting the statistical view of distillation in these regimes is a promising future direction. Tang:2018 explored distillation in a related learning-to-rank framework. While similar in spirit, this focusses on pointwise losses, wherein the distinction between positive and negative smoothing is absent.

Finally, we note that while our discussion has focussed on the softmax cross-entropy, double-distillation may be useful for a broader class of losses, e.g., order-weighted losses as explored in Usunier:2009; Reddi:2019.

5 Experimental results

We now present experiments illustrating three key points:

  1. we show that distilling with the true (Bayes) class-probabilities improves generalisation over one-hot labels, validating our statistical view of distillation;

  2. we illustrate our bias-variance tradeoff on synthetic and real-world datasets, confirming that teachers with good estimates of the class-probabilities can be usefully distilled;

  3. we show that double-distillation performs well on real-world multiclass retrieval datasets, confirming the broader value of our statistical perspective.

Figure 2: (a) Student AUC versus number of training samples; (b) student AUC versus class separation. Distillation with the Bayes teacher on synthetic data comprising Gaussian class-conditionals. Distillation offers a noticeable gain over the standard one-hot encoding, particularly in the small-sample regime (left), and when the underlying problem is noisier (right).

5.1 Is Bayes a good teacher for distillation?

To illustrate our statistical perspective, we conduct a synthetic experiment where the Bayes class-probabilities are known, and show that distilling these class-probabilities benefits learning.

We generate training samples from a distribution comprising two class-conditionals, each a 10-dimensional Gaussian, with symmetric means. By construction, the Bayes class-probability distribution is a sigmoid of a linear function of the instance.

We compare two training procedures: standard logistic regression on the one-hot labels, and Bayes-distilled logistic regression using the true class-probabilities per (7). Logistic regression is well-specified for this problem, i.e., with infinite samples the standard learner would recover the Bayes class-probabilities. However, we will demonstrate that on finite samples, the Bayes-distilled learner’s knowledge of these probabilities is beneficial. We reiterate that while this learner could trivially memorise the training class-probabilities, this would not generalise.
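This experiment can be sketched as follows; the mean vector, sample size, and optimiser settings below are illustrative stand-ins, not the paper’s exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two 10-d Gaussian class-conditionals N(+mu, I) and N(-mu, I) with equal
# priors; the Bayes class-1 probability is then sigmoid(2 mu^T x).
mu = np.full(10, 0.3)  # illustrative mean vector
n = 200
y = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 10)) + np.where(y[:, None] == 1, mu, -mu)
p_star = sigmoid(2.0 * x @ mu)  # Bayes probability of class 1

def fit_logistic(x, targets, steps=1000, lr=0.2):
    """Gradient descent on the logistic log-loss; `targets` may be hard
    labels (standard learner) or Bayes probabilities (distilled, cf. (7))."""
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        w -= lr * x.T @ (sigmoid(x @ w) - targets) / len(targets)
    return w

w_onehot = fit_logistic(x, y.astype(float))  # standard learner
w_bayes = fit_logistic(x, p_star)            # Bayes-distilled learner
```

Both learners optimise the same model; they differ only in the targets, exactly as in the comparison between (3) and (7).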

Figure 2(a) compares the performance of these two approaches for varying training set sizes, where for each training set size we perform multiple independent trials and measure the AUC-ROC on a held-out test set. We observe two key trends: first, Bayes-distillation generally offers a noticeable gain over the standard one-hot encoding, in line with our theoretical guarantee of lower variance.

Second, both methods see improved performance with more samples, but the gains are greater for the one-hot encoding. This is in line with our intuition that distillation effectively augments each training sample: when the sample size is large to begin with, the marginal gain of such augmentation is minimal.

Figure 2(b) continues the exploration of this setting. We now vary the distance between the means of the two Gaussians. When this separation is small, the two distributions grow closer together, making the classification problem more challenging; both methods thus see worse performance. At the same time, smaller separation makes the one-hot labels higher-variance relative to the Bayes class-probabilities. Consequently, the gains of distillation over the one-hot encoding are greater in this setting, in line with our guarantee that the lower-variance Bayes-distilled risk aids generalisation (Proposition 2).

As a final experiment, we verify the claim that teacher accuracy alone does not suffice for improving student generalisation, since accuracy does not necessarily correlate with the quality of the teacher’s probability estimates. We assess this by artificially distorting the teacher probabilities so as to perfectly preserve teacher accuracy, while degrading their approximation to the Bayes class-probabilities. Appendix B.2 presents plots confirming that such degradation progressively reduces the gains of distillation.

5.2 Illustration of bias-variance tradeoff

We next illustrate our analysis on the bias-variance tradeoff in distillation from §3.2 on synthetic and real-world datasets.

Synthetic. We now train a series of increasingly complex teacher models, and assess their resulting distillation benefit on a synthetic problem. Here, the data is sampled from a marginal distribution which is a zero-mean isotropic Gaussian in 2D. The class-probability function is chosen so that the negatives are concentrated in a rectangular slab.

We consider teachers that are random forests of a fixed depth, with a fixed number of base estimators. Increasing the depth reduces teacher bias (since deeper trees can better approximate the class-probability function), but increases teacher variance (since deeper trees can induce complex decision boundaries). For each fixed depth, we train a teacher model on the given training sample. We then distill the teacher predictions to a student model, which is a shallow decision tree. For each such teacher, we compute its MSE against the true class-probabilities, as well as the test set AUC of the corresponding distilled student. We repeat this over multiple independent trials.
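The sweep can be sketched as follows. This is a minimal reconstruction: the slab-shaped class-probability function, the depths, forest size, and sample sizes are our own illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_data(n):
    # Isotropic 2-D Gaussian marginal; a hypothetical p*(x) concentrating
    # negatives in a slab around the first coordinate.
    x = rng.normal(size=(n, 2))
    p = np.where(np.abs(x[:, 0]) < 0.5, 0.1, 0.9)
    y = (rng.random(n) < p).astype(int)
    return x, y, p

x_tr, y_tr, _ = make_data(1000)
x_te, y_te, p_te = make_data(5000)

results = {}
for depth in [1, 2, 4, 8, 16]:
    # Teacher: random forest of fixed depth; deeper = lower bias, higher variance.
    teacher = RandomForestClassifier(n_estimators=20, max_depth=depth,
                                     random_state=0).fit(x_tr, y_tr)
    # Teacher quality: MSE of its probabilities against the true p*(x).
    mse = np.mean((teacher.predict_proba(x_te)[:, 1] - p_te) ** 2)
    # Distill: fit a shallow student to the teacher's training probabilities.
    student = DecisionTreeRegressor(max_depth=3, random_state=0)
    student.fit(x_tr, teacher.predict_proba(x_tr)[:, 1])
    auc = roc_auc_score(y_te, student.predict(x_te))
    results[depth] = (mse, auc)
    print(f"depth={depth:2d}  teacher MSE={mse:.4f}  student AUC={auc:.3f}")
```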

Figures 3(a) and 3(b) show how the teacher’s depth affects its MSE in modelling the class-probabilities, as well as the AUC of the resulting distilled student. There is an optimal depth at which the teacher achieves the best MSE approximation. In keeping with the theory, this also corresponds to the teacher whose resulting student generalises the best. Figure 4(a) combines these plots to explicitly show the relationship between the teacher’s MSE and the student’s AUC. In line with the theory, more accurate estimates of the class-probabilities result in better students.

Note that at depths beyond the optimum, the teacher model is expected to have lower bias; however, it results in a slightly worse distilled student. This verifies that one may favour a higher-bias teacher if it has lower variance: a teacher may achieve a lower MSE – and thus distill better – by slightly increasing its bias while lowering variance. See Appendix B.1 for additional bias-variance experiments on synthetic data.

Fashion MNIST. It is challenging to assess the bias-variance tradeoff on real-world datasets, where the Bayes class-probability function is unknown. As a proxy, we take the Fashion MNIST dataset, and treat the probabilities of a powerful teacher model as the ground truth. We train an MLP teacher with two hidden layers; this achieves high test accuracy.

We then inject bias and noise per (16), and distill the result to a linear logistic regression model. To amplify the effects of distillation, we constrain the student by only offering it the subset of samples that the original teacher deems most uncertain. Figure 4(b) demonstrates a similar trend to the synthetic dataset, with the best MSE approximator to the original teacher generally yielding the best student.


. We verify that accurate probabilty estimation by the teacher strongly influences student generalisation, and that this can be at odds with accuracy. We revisit the plots introduced in Figure 

1. Here, we train ResNets of varying depths on the CIFAR-100 dataset, and use these as teachers to distill to a student ResNet of fixed depth . Figure 1(a) reveals that the teacher model gets increasingly more accurate as its depth increases; however, the corresponding log-loss starts increasing beyond a depth of . This indicates the teacher’s probability estimates become progressively poorer approximations of the Bayes class-probability distribution . The accuracy of the student model also degrades beyond a teacher depth of , reflecting the bias-variance bound in Proposition 3.

(a) Teacher depth versus MSE.
(b) Teacher depth versus student AUC.
Figure 3: Relationship between the depth of the teacher’s decision tree model (model complexity) and its MSE in modelling the class-probabilities, as well as the AUC of the resulting distilled student, on a synthetic problem. There is an optimal depth at which the teacher achieves the best MSE approximation; in keeping with the theory, this corresponds to the teacher whose resulting student generalises the best.
(a) Synthetic dataset.
(b) Fashion MNIST dataset.
Figure 4: Relationship between the teacher’s MSE against the true class-probabilities and the student’s test set AUC. In keeping with the theory, teachers which better approximate the class-probabilities in an MSE sense yield better students.

5.3 Double-distillation for multiclass retrieval

Our final set of experiments confirms the value of our double-distillation objective in (13). To do so, we use the AmazonCat-13K and Amazon-670K benchmark datasets for multiclass retrieval (McAuley:2013; Bhatia:2015). The data is multilabel; following Reddi:2019, we make it multiclass by creating a single example for each label associated with a given instance.

We construct a “teacher” model using a feedforward network with a single (linear) hidden layer, trained to minimise the softmax cross-entropy loss. We then construct a “student” model using the same architecture, but with a smaller hidden layer whose width is chosen per dataset (Amazon-670K being significantly larger than AmazonCat-13K, with 670K vs 13K labels). This student model is compared to a distilled student, where the teacher logits are used in place of the one-hot training labels. Both methods are then compared to the double-distillation objective, where the teacher logits are used to smooth the negatives in the softmax per (12) and (13).
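Since the objective in (12)/(13) is not reproduced here, the following is only an illustrative stand-in: a softmax cross-entropy whose negatives are re-weighted by teacher probabilities. The function name, the weighting scheme, and the exponent beta are all our own assumptions, not the paper's exact loss.

```python
import numpy as np

def double_distill_loss(logits, y, teacher_probs, beta=1.0):
    """Hypothetical sketch of a teacher-weighted softmax cross-entropy.

    The one-hot softmax loss is -s_y + log(sum_{y'} exp(s_{y'})). Here
    each negative's contribution to the normaliser is re-weighted by the
    teacher's probability raised to a power beta, so negatives the teacher
    considers plausible are penalised more heavily.
    """
    w = np.asarray(teacher_probs, dtype=float) ** beta
    w = w.copy()
    w[y] = 1.0                      # the positive keeps unit weight
    z = logits - logits.max()       # stabilise the log-sum-exp
    return -z[y] + np.log(np.sum(w * np.exp(z)))

logits = np.array([2.0, 1.0, -1.0, 0.0])
teacher = np.array([0.6, 0.3, 0.05, 0.05])
loss_plausible = double_distill_loss(logits, y=0, teacher_probs=teacher)

# Down-weighting the high-scoring negative (label 1) shrinks the loss.
teacher2 = np.array([0.6, 0.01, 0.195, 0.195])
loss_downweighted = double_distill_loss(logits, y=0, teacher_probs=teacher2)
print(loss_plausible, loss_downweighted)
```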

We compare all methods using the precision@k metric for k ∈ {1, 3, 5}, averaging over multiple runs. Table 1 summarises our findings. We see that distillation offers a small but consistent bump in performance over the student baseline. Double-distillation further improves upon this, especially at the head of the prediction (P@1 and P@3), confirming the value of weighting negatives differently. The gains are particularly significant on AmazonCat-13K, where the double-distilled student can improve upon the teacher model itself. Overall, our findings illustrate the broader value of the statistical perspective on distillation.
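For reference, precision@k with a single true label per example reduces to a scaled top-k membership check; the helper below is our own minimal sketch.

```python
import numpy as np

def precision_at_k(scores, true_label, k):
    # Precision@k = (# relevant labels in the top-k) / k. In the multiclass
    # setting each example has exactly one true label, so this is 1/k if
    # the label ranks in the top k, and 0 otherwise.
    topk = np.argsort(-scores)[:k]
    return np.isin(topk, true_label).sum() / k

scores = np.array([0.1, 0.7, 0.2, 0.9])
print(precision_at_k(scores, 3, 1))  # label 3 tops the ranking -> 1.0
print(precision_at_k(scores, 0, 3))  # label 0 is outside the top 3 -> 0.0
```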

AmazonCat-13K:
Method | P@1 | P@3 | P@5
Teacher | 0.8495 | 0.7412 | 0.6109
Student | 0.7913 | 0.6156 | 0.4774
Student + distillation | 0.8131 | 0.6363 | 0.4918
Student + double-distillation | 0.8560 | 0.7148 | 0.5715

Amazon-670K:
Method | P@1 | P@3 | P@5
Teacher | 0.3983 | 0.3598 | 0.3298
Student | 0.3307 | 0.3004 | 0.2753
Student + distillation | 0.3461 | 0.3151 | 0.2892
Student + double-distillation | 0.3480 | 0.3161 | 0.2865
Table 1: Precision@k metrics for the double-distillation objective against standard distillation and the student baseline on AmazonCat-13K (left) and Amazon-670K (right). With double-distillation, the student significantly improves not only over training with one-hot labels, but also over a distilled model which applies a uniform weighting to all negatives.

6 Conclusion

We presented a statistical perspective on distillation, building on a simple observation: distilling the Bayes class-probabilities yields a more reliable estimate of the population risk. Viewing distillation in this light, we formalised a bias-variance tradeoff to quantify the effect of approximate teacher class-probability estimates on student generalisation, and also studied a novel application of distillation to multiclass retrieval. Towards a comprehensive understanding of distillation, it would be of interest to study the optimisation aspects of this viewpoint, as well as the setting of overparametrised teacher models (Zhang:2018).


Appendix A Theory: additional results

Proposition 4.

Suppose we have a teacher model with corresponding distilled empirical risk (5). Furthermore, assume the teacher is unbiased, i.e., its expected prediction equals the Bayes class-probability for every instance. Then, for any predictor,

for some constant.

Proof of Proposition 4.

Let and . Then,

Note that this term vanishes, since the teacher is an unbiased estimator of the Bayes class-probabilities. Using this fact, we obtain the desired result as follows:

Proposition 5.

Pick any bounded loss. Fix a hypothesis class of predictors, with its induced class of loss functions, and suppose this class has a finite uniform covering number. Then, for any δ ∈ (0, 1), with probability at least 1 − δ over the draw of the training sample,

where the bound depends on the covering number and on the empirical variance of the loss values.
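For context, this bound has the shape of the standard empirical Bernstein inequality of Maurer and Pontil (2009); for a single function with values in $[0,1]$, that inequality reads:

```latex
% Empirical Bernstein bound (Maurer & Pontil, 2009): for i.i.d. Z_1, ..., Z_N
% taking values in [0, 1], with probability at least 1 - \delta,
\mathbb{E}[Z] - \frac{1}{N}\sum_{i=1}^{N} Z_i
  \;\le\; \sqrt{\frac{2\,\hat{\mathbb{V}}_N \log(2/\delta)}{N}}
  \;+\; \frac{7\log(2/\delta)}{3(N-1)},
```

where $\hat{\mathbb{V}}_N$ is the empirical variance of the $Z_i$. Uniform versions over a hypothesis class replace $\delta$ by $\delta$ divided by the covering number.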

Proof of Proposition 5.

With the notation of the proposition, we note that, with probability at least 1 − δ,


where, as above, the empirical variance of the loss values appears. Furthermore, the following holds:

Thus, we have


for some constant. The result follows by combining (14) and (15). ∎

Appendix B Experiments: additional results

b.1 Bias-variance tradeoff

Continuing with the same synthetic Gaussian data as in §5.1, we now consider a family of teachers obtained by perturbing the Bayes class-probabilities: the teacher passes the Bayes logits through the sigmoid after shifting them by a fixed offset and adding independent Gaussian noise. Increasing the offset induces a bias in the teacher’s estimate of the class-probabilities, while increasing the noise scale induces a variance in the teacher over fresh draws. Combined, these determine the teacher’s mean squared error (MSE), which by Proposition 3 bounds the gap between the population and distilled empirical risks.
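A minimal sketch of this bias-variance construction; the Bayes logit, offsets, and noise scales below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Perturb assumed Bayes logits by a fixed offset b (inducing bias) plus
# Gaussian noise of scale s (inducing variance), then measure the
# teacher's MSE against p*(x) over fresh noise draws.
x = rng.normal(size=10_000)
logit_star = 2.0 * x              # illustrative Bayes logit, not the paper's
p_star = sigmoid(logit_star)

def teacher_mse(b, s, trials=50):
    errs = [np.mean((sigmoid(logit_star + b
                             + rng.normal(scale=s, size=x.shape))
                     - p_star) ** 2) for _ in range(trials)]
    return float(np.mean(errs))

mses = {(b, s): teacher_mse(b, s)
        for (b, s) in [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]}
for (b, s), m in mses.items():
    print(f"bias={b}, noise={s}: MSE={m:.4f}")
```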

For each such teacher, we compute its MSE, as well as the test set AUC of the corresponding distilled student. Figure 5 shows the relationship between the teacher’s MSE and the student’s AUC. In line with the theory, more accurate estimates of the class-probabilities result in better students. Figure 6 also shows how the teacher’s MSE depends on the choice of bias and noise level, demonstrating that multiple such pairs can achieve a similar MSE. As before, we see that a teacher may trade off bias for variance in order to achieve a low MSE.

Figure 5: Relationship between the teacher’s MSE against the true class-probabilities and the student’s test set AUC. In keeping with the theory, teachers which better approximate the class-probabilities yield better students.
Figure 6: Relationship between the teacher’s bias and variance, and the corresponding MSE against the true class-probabilities. The teacher can achieve a given MSE through multiple possible bias-variance combinations.

b.2 Uncalibrated teachers may distill worse

We now illustrate the importance of the teacher probabilities being meaningful reflections of the true class-probabilities. We continue our exploration of the synthetic Gaussian problem, where the Bayes class-probability takes on a sigmoid form. We distort these probabilities via a tuning parameter: the new class-probability function preserves the classification boundary, but squashes the probabilities themselves towards it as the parameter grows. We now consider using these distorted probabilities as teacher probabilities to distill to a student. This teacher has the same accuracy, but significantly worse calibration than the Bayes teacher.
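A minimal sketch of one such boundary-preserving distortion; the parameterisation below is our own, since the paper's exact form is elided here.

```python
import numpy as np

def squash(p, c):
    # Hypothetical distortion: shrink probabilities towards 1/2 by a factor
    # 1/(1 + c), with c >= 0; larger c = stronger squashing. The sign of
    # p - 1/2, and hence the argmax decision, is preserved exactly.
    return 0.5 + (p - 0.5) / (1.0 + c)

p = np.array([0.05, 0.40, 0.60, 0.95])
out = squash(p, 4.0)
print(out)  # -> [0.41 0.48 0.52 0.59]: same side of 0.5, but squashed
```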

Figure 7 confirms that as the distortion parameter increases, the benefit of distillation to the student diminishes. This validates our claim that teacher accuracy alone is insufficient to judge whether distillation will be useful.

(a) As the tuning parameter is increased, the teacher probabilities increasingly deviate from the Bayes probabilities.
(b) As the tuning parameter is increased — so that the teacher probabilities are increasingly uncalibrated — the student AUC becomes progressively worse.
Figure 7: Effect of distorting the Bayes probabilities so as to preserve the classification decision boundary, but degrade the calibration of the scores.