1 Introduction
Distillation is the process of using a “teacher” model to improve the performance of a “student” model (Craven:1995; Breiman:1996; Bucilua:2006; Xue:2013; Ba:2014; Hinton:2015). In its simplest form, rather than fitting to raw labels, one trains the student to fit the teacher’s distribution over labels. While originally devised with the aim of model compression, distillation has proven successful in iteratively improving a fixed-capacity model (Rusu:2016; Furlanello:2018; Yang:2019; Xie:2019), and has found use in many other settings (Papernot:2016; Tang:2016; Czarnecki:2017; Celik:2017; Yim:2017; Li:2018; Liu:2019; Nayak:2019).
Given its empirical successes, it is natural to ask: why does distillation help? Hinton:2015 argued that distillation provides “dark knowledge” via the teacher logits on the “wrong” labels for an example, which effectively weights samples differently (Furlanello:2018). Various theoretical analyses of distillation have subsequently been developed (LopezPaz:2016; Phuong:2019; Foster:2019; Dong:2019; Mobahi:2020), with particular focus on its optimisation and regularisation effects. In this paper, we present a novel statistical perspective on distillation which sheds light on why it aids performance. Our analysis centers on a simple observation: a good teacher accurately models the true (Bayes) class-probabilities. This is a stricter requirement than the teacher merely having high accuracy. We quantify how such probability estimates improve generalisation compared to learning from raw labels. Building on this, we show how distillation is also useful in selecting informative labels for multiclass retrieval, wherein we wish to order labels according to their relevance (Jain:2019). In sum, our contributions are:


We establish the statistical benefit of using the Bayes class-probabilities in place of one-hot labels, and quantify a bias-variance tradeoff when using approximate class-probabilities (§3).

We propose double-distillation, a novel application of distillation for multiclass retrieval wherein teacher probabilities guide a ranking over labels (§4).

We experimentally validate both the value of approximate class-probabilities for generalisation, and that of double-distillation for multiclass retrieval (§5).
Contribution (i) gives a statistical perspective on the value of “dark knowledge”: for an example, the logits on “wrong” labels encode information about the underlying data distribution. This view elucidates how a teacher’s probability calibration, rather than its accuracy, can influence a student’s generalisation; see Figure 1 for an illustration. Contribution (ii) shows a practical benefit of this statistical view of distillation, by showing how multiclass retrieval objectives benefit from approximate class-probabilities.
[Figure 1: The teacher’s probability estimates become progressively poorer approximations of the Bayes class-probability distribution after a certain depth. Intuitively, the teacher’s approximation is governed by balancing the bias and variance in its predictions. The accuracy of the student model also degrades beyond a certain teacher depth, reflecting the bound in Proposition 3. See §5.2 for more details, and §3.2 for illustration of a general bias-variance tradeoff.]

2 Background and notation
We review multiclass classification, retrieval, and distillation.
2.1 Multiclass classification
In multiclass classification, we are given a training sample , for unknown distribution over instances and labels . Our goal is to learn a predictor so as to minimise the risk of , i.e., its expected loss for a random instance and label:
(1) 
Here,
is a loss function, where for label
and prediction vector
, is the loss incurred for predicting when the true label is . A canonical example is the softmax cross-entropy loss:
(2)
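As a concrete reference, the softmax cross-entropy for a single example can be sketched as follows (a minimal NumPy implementation; the function name is ours):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Softmax cross-entropy for one example: -logits[label] + log-sum-exp(logits)."""
    # Subtract the max before exponentiating for numerical stability.
    shifted = logits - np.max(logits)
    log_partition = np.log(np.sum(np.exp(shifted))) + np.max(logits)
    return log_partition - logits[label]
```

For uniform logits over K classes the loss equals log K, as expected of a maximally uncertain prediction.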
We may approximate the risk via the empirical risk
(3) 
where
denotes the one-hot encoding of
and denotes the vector of losses for each possible label. In a multiclass retrieval setting, our goal is to ensure that the top-ranked labels in include the true label (Lapin:2018). Formally, we seek to minimise the top loss
(4) 
where denotes the top highest scoring labels. When , this loss reduces to the standard 0-1 error.
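The top-k retrieval loss admits a direct sketch (the parameter name k and the function name are our choices):

```python
import numpy as np

def top_k_loss(scores, label, k=1):
    """1 if the true label falls outside the k highest-scoring labels, else 0.

    With k=1 this reduces to the standard 0-1 error."""
    top_k_labels = np.argsort(-scores)[:k]
    return 0.0 if label in top_k_labels else 1.0
```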
2.2 Knowledge distillation
Distillation involves using a “teacher” model to improve the performance of a “student” model (Bucilua:2006; Hinton:2015). In its simplest form, one trains the teacher model and obtains a class-probability estimator , where denotes the simplex. Each estimates how likely
is to be classified as
. In place of the empirical risk (3), the student now minimises the distilled risk:
(5)
so that the one-hot encoding of labels is replaced with the teacher’s distribution over labels. Distillation may be used when the student has access to a large pool of unlabelled samples; in this case, distillation is a means of semi-supervised learning (Radosavovic:2018).

While originally conceived in settings where the student has lower capacity than the teacher (Bucilua:2006; Hinton:2015), distillation has proven useful when both models have the same capacity (Breiman:1996; Furlanello:2018; Xie:2019). More precisely, distillation involves training a teacher on labelled samples , using a function class . Classic distillation assumes that , and that the capacity of is greater than that of . “Born-again” distillation assumes that , and , i.e., one iteratively trains versions of the same model, using past predictions to improve performance.
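The distilled risk (5) replaces one-hot targets with the teacher’s distribution over labels; a minimal sketch, assuming the loss is the softmax cross-entropy (function names are ours):

```python
import numpy as np

def log_softmax(logits):
    """Row-wise log-softmax with a max-shift for numerical stability."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

def distilled_risk(student_logits, teacher_probs):
    """Cross-entropy of the student's predictions against the teacher's full
    label distribution, averaged over the sample (cf. (5))."""
    return -np.mean(np.sum(teacher_probs * log_softmax(student_logits), axis=1))
```

With a one-hot `teacher_probs`, this recovers the standard empirical risk (3).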
2.3 Existing explanations for distillation
While it is well-accepted that distillation is empirically useful, there is less consensus as to why this is the case. Hinton:2015 attributed the success of distillation (at least in part) to the encoding of “dark knowledge” in the probabilities the teacher assigns to the “wrong” labels for an example. This richer information plausibly aids the student, for example by providing a weighting on the samples (Furlanello:2018; Tang:2020). Further, when is the softmax cross-entropy, the gradient of the distillation objective with respect to the student logits is the difference between the teacher’s and student’s probability estimates, implying a form of logit matching in high-temperature regimes (Hinton:2015).
LopezPaz:2016 related distillation to learning from privileged information under a noise-free setting. Phuong:2019 analysed the dynamics of student learning, assuming a deep linear model for binary classification. Foster:2019 provided a generalisation bound for the student, under the assumption that it learns a model close to the teacher. This does not, however, explicate what constitutes an “ideal” teacher, nor quantify how an approximation to this ideal teacher affects student generalisation. Gotmare:2019
studied the effect of distillation on discrimination versus feature extraction layers of the student network.
Dong:2019 argued that distillation has a similar effect to early stopping, and studied its ability to denoise labels. Our focus, by contrast, is on settings where there is no exogenous label noise, and on studying the statistical rather than optimisation effects of distillation. Mobahi:2020 analysed the special setting of self-distillation – wherein the student and teacher employ the same model class – and showed that for kernelised models, this is equivalent to increasing the regularisation strength.

3 Distillation through a class-probability lens
We now present a statistical perspective of distillation, which gives insight into why it can aid generalisation. Central to our perspective are two observations:


the risk (1) we seek to minimise inherently smooths labels by the class-probability distribution ;

a teacher’s predictions provide an approximation to the class-probability distribution , which can thus yield a better approximation to the risk (1) than one-hot labels.
Building on these, we show how sufficiently accurate teacher approximations can improve student generalisation.
3.1 Bayes knows best: distilling class-probabilities
Our starting point is the following elementary observation: the underlying risk for a predictor is
(6) 
where is the Bayes class-probability distribution. Intuitively, inherently captures the suitability of label for instance . Thus, the risk involves drawing an instance , and then computing the average loss of over all labels , weighted by their Bayes probabilities . When is not concentrated on a single label, there is an inherent confusion amongst the labels for the instance .
Given an , the empirical risk (3) approximates the distribution with the one-hot , which is only supported on one label. While this is an unbiased estimate, it is a significant reduction in granularity. By contrast, consider the following Bayes-distilled risk on a sample :
(7)
This is a distillation objective (cf. (5)) using a Bayes teacher, who provides the student with the true class-probabilities. Rather than fitting to a single label realisation , a student minimising (7) considers all alternate label realisations, weighted by their likelihood.[1] Observe that when is the cross-entropy, (7) is simply the KL divergence between the Bayes class-probabilities and our predictions.

[1] While the student could trivially memorise the training class-probabilities, this would not generalise to test samples.
Both the standard empirical risk in (3) and the Bayes-distilled risk in (7) are unbiased estimates of the population risk . But intuitively, we expect that a student minimising (7) ought to generalise better from a finite sample. We can make this intuition precise by establishing that the Bayes-distilled risk has lower variance over fresh draws of the training sample.
Lemma 1.
For any fixed predictor ,
where denotes variance, and equality holds iff the loss values are constant on the support of .
Proof of Lemma 1.
By definition,
In both cases, the second term simply equals since both estimates are unbiased. For fixed
, the result follows by Jensen’s inequality applied to the random variable
. Equality occurs iff is constant, which requires the loss to be constant on the support of . ∎

The condition under which the Bayes-distilled and empirical risks have the same variance is intuitive: the two risks trivially agree when is non-discriminative (attaining equal loss on all labels), or when a label is inherently deterministic (the class-probability is concentrated on one label). For discriminative predictors and non-deterministic labels, however, the Bayes-distilled risk can have significantly lower variance.
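A small Monte Carlo check of Lemma 1 for a single instance: the one-hot and Bayes-distilled estimates agree in expectation, but only the former has positive variance over label draws. The loss values and probabilities below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed per-label losses of a predictor at one instance, and the Bayes distribution.
losses = np.array([0.1, 1.5, 2.3])   # illustrative values of the loss on each label
p_star = np.array([0.7, 0.2, 0.1])   # illustrative Bayes class-probabilities

# One-hot estimate: the loss at a sampled label. Bayes-distilled: the fixed average.
labels = rng.choice(3, size=100_000, p=p_star)
one_hot_estimates = losses[labels]
bayes_estimate = float(p_star @ losses)

print(one_hot_estimates.mean(), bayes_estimate)  # both close to 0.6
print(one_hot_estimates.var())                   # strictly positive
```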
The reward of reducing variance is better generalisation: a student minimising (7) will better minimise the population risk, compared to using one-hot labels. In fact, we may quantify how the empirical variance of the Bayes-distilled loss values influences generalisation as follows.
Proposition 2.
Pick any bounded loss . Fix a hypothesis class of predictors , with induced class of functions . Suppose has uniform covering number . Then, for any , with probability at least over ,
where and is the empirical variance of the loss values .
Proof of Proposition 2.
This is a simple consequence of Maurer:2009, which is a uniform convergence version of Bennett’s inequality (Bennett:1962). ∎
The above may be contrasted with the bound achievable for the standard empirical risk using one-hot labels: by Maurer:2009,
where here we consider a function class comprising functions , with uniform covering number . Combining the above with Lemma 1, we see that the Bayes-distilled empirical risk results in a lower variance penalty.
To summarise the statistical perspective espoused above, a student should ideally have access to the underlying class-probabilities , rather than a single realisation . As a final comment, the above provides a statistical perspective on the value of “dark knowledge”: the teacher’s “wrong” logits for an example provide approximate information about the Bayes class-probabilities. This results in a lower-variance student objective, aiding generalisation.
3.2 Distilling from an imperfect teacher
The previous section explicates how an idealised “Bayes teacher” can benefit a student. How does this translate to more realistic settings, where one obtains predictions from a teacher which is itself learned from data?
Our first observation is that a teacher’s predictor can typically be seen as an imperfect estimate of the true . Indeed, if the teacher is trained with a loss that is proper (Savage:1971; Schervish:1989; Buja:2005), then
(8) 
where is a (loss-dependent) divergence between the true and estimated class-probability functions. For example, the softmax cross-entropy corresponds to being the KL divergence between the distributions. The teacher’s goal is thus fundamentally to ensure that their predictions align with the true class-probability function.
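For concreteness, the proper-loss property underlying (8) can be written as the standard risk decomposition below, where we write $p^*$ for the Bayes class-probability function and $p^t$ for the teacher's estimate (our notation for the symbols elided above):

```latex
\mathbb{E}_{x}\,\mathbb{E}_{y \sim p^*(x)}\!\left[\ell\big(y, p^t(x)\big)\right]
  = \underbrace{\mathbb{E}_{x}\,\mathbb{E}_{y \sim p^*(x)}\!\left[\ell\big(y, p^*(x)\big)\right]}_{\text{Bayes risk}}
  + \mathbb{E}_{x}\!\left[D_{\ell}\big(p^*(x), p^t(x)\big)\right].
```

When $\ell$ is the softmax cross-entropy, $D_{\ell}$ is the KL divergence, so minimising the teacher's risk drives $p^t$ towards $p^*$.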
Of course, a teacher learned from finite samples is unlikely to achieve zero divergence in (8). Indeed, even high-capacity teacher models may not be rich enough to capture the true . Further, even if the teacher can represent , it may not be able to learn this perfectly given a finite sample, owing to both statistical (e.g., the risk of overfitting) and optimisation (e.g., non-convexity of the objective) issues. We must thus treat as an imperfect estimate of . The natural question is: will such an estimate still improve generalisation?
To answer this, we establish a fundamental bias-variance tradeoff when performing distillation. Specifically, we show the difference between the distilled risk (cf. (5)) and the population risk (cf. (3.1)) depends on how variable the loss under the teacher is, and how well the teacher estimates in a squared-error sense. Intuitively, the latter captures how well the teacher estimates on average (bias), and how variable the teacher’s predictions are (variance).
Proposition 3.
Pick any bounded loss . Suppose we have a teacher model with corresponding distilled empirical risk in (5). For constant and any predictor ,
(9)  
(10) 
where denotes the sum of coordinatewise variance.
Proof of Proposition 3.
Unpacking the above, the fidelity of the distilled risk’s approximation to the true one depends on three factors: how variable the expected loss is for a random instance; how well the teacher’s approximates the true on average; and how variable the teacher’s predictions are. Mirroring the previous section, we may convert Proposition 3 into a generalisation bound for the student:
(11) 
where is the penalty term from Proposition 2. As is intuitive, using an imperfect teacher incurs an additional penalty depending on how far the predictions are from the Bayes class-probabilities, in a squared-error sense. For completeness, a formal statement is provided in Proposition 5 in Appendix A.
3.3 Discussion and implications
Our statistical perspective gives a simple yet powerful means of understanding distillation. Our formal results follow readily from this perspective, but their implications are subtle, and merit further discussion.
Why accuracy is not enough. Our bias-variance result establishes that if the teacher provides good class-probabilities, in the sense of approximating in a mean-square sense, then the resulting student should generalise well. In deep networks, this does not simply mean the teacher having higher accuracy; such models may be accurate while being overly confident (Guo:2017; Rothfuss:2019). Our result thus potentially illuminates why more accurate teachers may lead to poorer students, as has been noted empirically (Muller:2019); see also §5.2.
In practice, the precise bound derived above is expected to be loose. However, its qualitative trend may be observed in practice. Figure 1 (see also §5.2) illustrates how increasing the depth of a ResNet model may increase accuracy, but degrade probabilistic calibration. This is seen to directly relate to the quality of a student distilled from these models.
Temperature scaling (Hinton:2015), a common and empirically successful trick in distillation, can also be analysed through this perspective. Teachers are usually highly complex and optimised to maximise accuracy; hence, they often become overly confident. Increasing the temperature can bring the student’s target closer to the true distribution, rather than merely conveying the most accurate label.
Teacher variance and model capacity. Proposition 3 allows for to be random (e.g., owing to the teacher being learned from some independent sample ). Consequently, the variance terms not only reflect how diffused the teacher’s predictions are, but also how much these predictions vary over fresh draws of the teacher sample. Highcapacity teachers may yield vastly different predictions when trained on fresh samples. This variability incurs a penalty in the third term in Proposition 3. At the same time, such teachers can better estimate the Bayes , which incurs a lower bias. The delicate tradeoff between these concerns translates into (student) generalisation.
We may understand the label smoothing trick (Szegedy:2016) in light of the above. This corresponds to mixing the student labels with uniform predictions, yielding for . From the perspective of modelling , choosing introduces a bias. However, has lower variance than the one-hot labels, owing to the scaling. Provided the bias is not too large, smoothing can thus aid generalisation.
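A one-line sketch of the smoothing just described (the mixing-coefficient name is ours):

```python
import numpy as np

def smooth_labels(one_hot, alpha, num_classes):
    """Mix a one-hot label with the uniform distribution:
    (1 - alpha) * e_y + alpha / K. A biased but lower-variance target."""
    return (1.0 - alpha) * one_hot + alpha / num_classes
```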
We remark also that the first variance term in Proposition 3 vanishes as . This is intuitive: in the limit of infinite student samples, the quality of the distilled objective is wholly determined by how well the teacher probabilities model the Bayes probabilities. For small , this term measures how diffused the losses are when weighted by the teacher probabilities (similar to Lemma 1).
How much teacher bias is admissible? When , Proposition 3 reveals that a distilled student’s generalisation gap depends on the bias and variance in . By contrast, when learning from a labelled sample, the student’s generalisation gap depends on the complexity of its model class . Distillation can be expected to help when the first gap is lower than the second.
Concretely, when trained from limited labelled samples, the student will find , which incurs a high statistical error. Now suppose we have a large amount of unlabelled data . A distilled student can then reliably find the minimiser (following (8))
with essentially no statistical error, but an approximation error given by the teacher’s bias. In this setting, we thus need the teacher’s bias to be lower than the statistical error.
On teacher versus student samples. A qualifier to our results is that they assume a disjoint set of samples for the teacher and student. For example, it may be that the teacher is trained on a pool of labelled samples, while the student is trained on a larger pool of unlabelled samples. The results thus do not directly hold for the settings of self- or co-distillation (Furlanello:2018; Anil:2018), wherein the same sample is used for both models. Combining our results with recent analyses from the perspective of regularisation (Mobahi:2020) and data-dependent function classes (Foster:2019) would be of interest in future work.
4 Distillation meets multiclass retrieval
Our statistical view has thus far given insight into the potential value of distillation. We now show a distinct practical benefit of this view, by leveraging it for a novel application of distillation to multiclass retrieval. Our basic idea is to construct a double-distillation objective, wherein distillation informs the loss as to the relative confidence of both “positive” and “negative” labels.
4.1 Double-distillation for multiclass retrieval
Recall from (4) that in multiclass retrieval, we wish to ensure that the top-ranked labels in our predictor contain the true label. The softmax cross-entropy in (2) offers a reasonable surrogate loss for this task, since
The latter is related to the Crammer-Singer loss (Crammer:2002), which bounds the top retrieval loss. Given a sample , we thus seek to minimise . From our statistical view, there is value in instead computing : intuitively, each acts as a “smooth positive”, weighted by the Bayes probability .
We observe, however, that such smoothing does not affect the innards of the loss itself. In particular, one critique of (2) is that it assigns equal importance to each . Intuitively, mistakes on labels that are a poor explanation for — i.e., have low — ought to be strongly penalised. On the other hand, if under the Bayes probabilities some strongly explains , it ought to be ignored.
To this end, consider a generalised softmax cross-entropy:
(12) 
where is a distribution over labels. When is uniform, this is exactly the standard cross-entropy loss (cf. (2)) plus a constant. For non-uniform , however, the loss is encouraged to focus on ensuring for those with large .
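A sketch of the generalised loss above, together with one decreasing weighting of the kind discussed later in this section; the exact weight function is elided in the text, so the form 1 − σ(α·logit) below is an illustrative choice consistent with the discussion, not the paper’s definitive one:

```python
import numpy as np

def weighted_softmax_ce(logits, label, weights):
    """Generalised softmax cross-entropy (12): -logits[label] + log sum_y w_y exp(logits_y).
    With uniform weights this equals the standard loss (2) up to an additive constant."""
    shift = logits.max()  # max-shift for numerical stability
    return np.log(np.sum(weights * np.exp(logits - shift))) + shift - logits[label]

def negative_weights(teacher_logits, alpha=1.0):
    """Illustrative example- and label-dependent weights that decrease with the
    teacher's confidence, so plausible 'positives' are individually down-weighted."""
    return 1.0 - 1.0 / (1.0 + np.exp(-alpha * teacher_logits))
```

Unlike a probability vector, each sigmoid-based weight can independently approach zero, so several plausible “positive” labels can all be ignored at once.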
Continuing our statistical view, we posit that an ideal choice of is , where is some decreasing function. Concretely, our new loss for a given is
Intuitively, the loss treats as a “positive” label for , and each where as a “negative” label. The loss seeks to score the positive label over all labels which poorly explain . Compared to the standard softmax cross-entropy, we avoid penalisation when plausibly explains as well.
Recall that under a distillation setup, a teacher model provides us with estimates of . Consequently, to estimate this risk on a finite sample , we may construct the following double-distillation objective:
(13)  
where ; unrolling, this corresponds to the objective
Observe that (13) uses the teacher in two ways: the first is the standard use of distillation to smooth the “positive” training labels. The second is a novel use of distillation to smooth the “negative” labels. Here, for any candidate “positive” label , we apply varying weights to “negative” labels when computing each .
It remains to specify the precise form of in (13). A natural choice is the teacher distribution itself; however, since its entries sum to one, this may not allow the loss to sufficiently ignore other “positive” labels. Concretely, suppose there are plausible “positive” labels, which have most of the probability mass under . Then, the total weight assigned to these labels will be roughly ; this is of the order of , which is what we would get from uniform weights.
To resolve this, we may instead use weights proportional to , where
is the sigmoid function,
is the teacher logit, and is a scaling parameter which may be tuned. This parameterisation allows for multiple labels to have high (or low) weights. In the above example, each “positive” label can individually get a score close to . The total weight of the “positives” can thus also be close to , and so the loss can learn to ignore them.

4.2 Discussion and implications
The viability of distillation as a means of smoothing negatives has not, to our knowledge, been explored in the literature. The proposal is, however, a natural consequence of our statistical view of distillation developed in §3.
The double-distillation objective in (13) relates to ranking losses. In bipartite ranking settings (Cohen:1999), one assumes an instance space and binary labels denoting that items are either relevant or irrelevant. Rudin:2009 proposed the following push loss for this task:
where and are distributions over positive and negative instances respectively, and are convex non-decreasing functions. Similar losses have also been studied in Yun:2014. The generalised softmax cross-entropy in (12) can be seen as a contextual version of this objective for each , where and . In double-distillation, the contextual distributions are approximated using the teacher’s predictions .
There is a broad literature on using a weight on “negatives” in the softmax (Liu:2016; Liu:2017; Wang:2018; Li:2019; Cao:2019); this is typically motivated by ensuring a varying margin for different classes. The resulting weighting is thus either constant or label-dependent, rather than the label- and example-dependent weights provided by distillation. Closer still to our framework is the recent work of Khan:2019, which employs uncertainty estimates in the predictions for a given example and label to adjust the desired margin in the softmax. While not explicitly couched in terms of distillation, this may be understood as a “self-distillation” setup, wherein the current predictions of a model are used to progressively refine future iterates. Compared to double-distillation, however, the nature of the weighting employed is considerably more complicated.
There is a rich literature on the problem of label ranking, where typically it is assumed that one observes a (partial) groundtruth ranking over labels (Dekel:2004; Furnkranz:2008; Vembu:2011). We remark also that the view of the softmax as a ranking loss has received recent attention (Bruch:2019; Bruch:2019b). Exploiting the statistical view of distillation in these regimes is a promising future direction. Tang:2018 explored distillation in a related learningtorank framework. While similar in spirit, this focusses on pointwise losses, wherein the distinction between positive and negative smoothing is absent.
Finally, we note that while our discussion has focussed on the softmax cross-entropy, double-distillation may be useful for a broader class of losses, e.g., order-weighted losses as explored in Usunier:2009; Reddi:2019.
5 Experimental results
We now present experiments illustrating three key points:


we show that distilling with true (Bayes) class-probabilities improves generalisation over one-hot labels, validating our statistical view of distillation.

we illustrate our bias-variance tradeoff on synthetic and real-world datasets, confirming that teachers with good estimates of can be usefully distilled.

we finally show that double-distillation performs well on real-world multiclass retrieval datasets, confirming the broader value of our statistical perspective.
5.1 Is Bayes a good teacher for distillation?
To illustrate our statistical perspective, we conduct a synthetic experiment where is known, and show that distilling these Bayes class-probabilities benefits learning.
We generate training samples from a distribution comprising class-conditionals which are each 10-dimensional Gaussians, with means respectively. By construction, the Bayes class-probability distribution is where , for .
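A sketch of this synthetic setup, under the assumption (the exact means are elided above) that the two class-conditionals are unit-variance Gaussians with means +μ·1 and −μ·1 and equal priors; the Bayes posterior is then a sigmoid of a linear score:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, mu = 10, 1000, 0.5   # dimension, sample size, and (assumed) mean offset

# Sample labels, then Gaussian features around the class means +/- mu * 1.
y = rng.integers(0, 2, size=n)
x = (2 * y[:, None] - 1) * mu + rng.normal(size=(n, d))

# For identity covariance and equal priors, the log-odds are linear in x:
# log p(1|x)/p(0|x) = 2 * mu * sum_i x_i, so the Bayes probability is a sigmoid.
p_star = 1.0 / (1.0 + np.exp(-2.0 * mu * x.sum(axis=1)))
```

A Bayes-distilled learner fits these `p_star` values in place of the sampled labels `y`, per (7).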
We compare two training procedures: standard logistic regression on , and Bayes-distilled logistic regression using per (7). Logistic regression is well-specified for this problem, i.e., as , the standard learner will learn . However, we will demonstrate that on finite samples, the Bayes-distilled learner’s knowledge of will be beneficial. We reiterate that while this learner could trivially memorise the training , this would not generalise.

Figure 2(a) compares the performance of these two approaches for varying training set sizes, where for each training set size we perform independent trials and measure the AUC-ROC on a test set of samples. We observe two key trends: first, Bayes-distillation generally offers a noticeable gain over the standard one-hot encoding, in line with our theoretical guarantee of lower variance.
Second, both methods see improved performance with more samples, but the gains are greater for the one-hot encoding. This is in line with our intuition that distillation effectively augments each training sample: when is large to begin with, the marginal gain of such augmentation is minimal.
Figure 2(b) continues the exploration of this setting. We now vary the distance between the means of each of the Gaussians. When is small, the two distributions grow closer together, making the classification problem more challenging. We thus observe that both methods see worse performance as shrinks. At the same time, smaller makes the one-hot labels have higher variance compared to the Bayes class-probabilities. Consequently, the gains of distillation over the one-hot encoding are greater in this setting, in line with our guarantee on the lower-variance Bayes-distilled risk aiding generalisation (Proposition 2).
As a final experiment, we verify the claim that teacher accuracy does not suffice for improving student generalisation, since this does not necessarily correlate with the quality of the teacher’s probability estimates. We assess this by artificially distorting the teacher probabilities so as to perfectly preserve teacher accuracy, while degrading their approximation to . Appendix B.2 presents plots confirming that such degradation progressively reduces the gains of distillation.
5.2 Illustration of biasvariance tradeoff
We next illustrate our analysis of the bias-variance tradeoff in distillation from §3.2 on synthetic and real-world datasets.
Synthetic. We now train a series of increasingly complex teacher models , and assess their resulting distillation benefit on a synthetic problem. Here, the data is sampled from a marginal which is a zero-mean isotropic Gaussian in 2D. The class-probability function is given by , so that the negatives are concentrated in a rectangular slab.
We consider teachers that are random forests of a fixed depth
, with base estimators. Increasing has the effect of reducing teacher bias (since the class of depth trees can better approximate ), but increasing teacher variance (since the class of depth trees can induce complex decision boundaries). For fixed , we train a teacher model on the given training sample (with ). We then distill the teacher predictions to a student model, which is a depth tree. For each such teacher, we compute its MSE, as well as the test set AUC of the corresponding distilled student. We repeat this for independent trials.

Figures 3(a) and 3(b) show how the teacher’s depth affects its MSE in modelling , as well as the AUC of the resulting distilled student. There is an optimal depth at which the teacher achieves the best MSE approximation of . In keeping with the theory, this also corresponds to the teacher whose resulting student generalises the best. Figure 4(a) combines these plots to explicitly show the relationship between the teacher’s MSE and the student’s AUC. In line with the theory, more accurate estimates of result in better students.
Note that at depth , the teacher model is expected to have lower bias; however, it results in a slightly worse distilled student. This verifies that one may favour a higher-bias teacher if it has lower variance: a teacher may achieve a lower MSE – and thus distill better – by slightly increasing its bias while lowering variance. See Appendix B.1 for additional bias-variance experiments on synthetic data.
Fashion MNIST. It is challenging to assess the bias-variance tradeoff on real-world datasets, where the Bayes is unknown. As a proxy, we take the fashion MNIST dataset, and treat a powerful teacher model as our . We train an MLP teacher with two hidden layers with and dimensions. This achieves a test accuracy of .
We then inject bias and noise per (16), and distill the result to a linear logistic regression model. To amplify the effects of distillation, we constrain the student by only offering it the top samples that the original teacher deems most uncertain. Figure 4(b) demonstrates a similar trend to the synthetic dataset, with the best MSE approximator to the original teacher generally yielding the best student.
CIFAR-100. We verify that accurate probability estimation by the teacher strongly influences student generalisation, and that this can be at odds with accuracy. We revisit the plots introduced in Figure 1. Here, we train ResNets of varying depths on the CIFAR-100 dataset, and use these as teachers to distill to a student ResNet of fixed depth . Figure 1(a) reveals that the teacher model gets increasingly more accurate as its depth increases; however, the corresponding log-loss starts increasing beyond a depth of . This indicates the teacher’s probability estimates become progressively poorer approximations of the Bayes class-probability distribution . The accuracy of the student model also degrades beyond a teacher depth of , reflecting the bias-variance bound in Proposition 3.

[Figure 3: Relationship between the depth of the teacher’s decision tree model (model complexity) and its MSE in modelling , as well as the AUC of the resulting distilled student, on a synthetic problem. There is an optimal depth at which the teacher achieves the best MSE approximation of ; in keeping with the theory, this corresponds to the teacher whose resulting student generalises the best.]

5.3 Double-distillation for multiclass retrieval
Our final set of experiments confirms the value of our double-distillation objective in (13). To do so, we use the AmazonCat-13K and Amazon-670K benchmark datasets for multiclass retrieval (McAuley:2013; Bhatia:2015). The data is multi-label; following Reddi:2019, we make it multiclass by creating a single example for each label associated with an instance.
We construct a “teacher” model using a feedforward network with a single (linear) hidden layer of width , trained to minimise the softmax cross-entropy loss. We then construct a “student” model using the same architecture, but with a hidden layer of width for AmazonCat-13K and for Amazon-670K, since Amazon-670K is significantly larger (670k vs. 13k labels). This student model is compared to a distilled student, where the teacher logits are used in place of the one-hot training labels. Both methods are then compared to the double-distillation objective, where the teacher logits are used to smooth the negatives in the softmax per (12) and (13).
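The standard distillation objective used for the distilled-student baseline can be sketched as follows. The temperature parameter is our assumption, and the (12)–(13) negative-smoothing of the double-distillation objective is not reproduced here:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """Softmax cross-entropy against the teacher's distribution, used
    in place of the one-hot labels (a sketch; temperature is assumed)."""
    teacher_p = softmax(teacher_logits / temperature)
    log_student = np.log(softmax(student_logits / temperature))
    return float(-np.mean((teacher_p * log_student).sum(axis=-1)))
```

By Gibbs' inequality the loss is minimised, over student distributions, exactly when the student matches the teacher, in which case it equals the entropy of the teacher's distribution.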
We compare all methods using the precision@ metric with , averaging these over multiple runs. Table 1 summarises our findings. We see that distillation offers a small but consistent bump in performance over the student baseline. Double-distillation further improves upon this, especially at the head of the predictions (P@1 and P@3), confirming the value of weighting negatives differently. The gains are particularly significant on AmazonCat-13K, where the double-distilled student improves upon the teacher model itself. Overall, our findings illustrate the broader value of the statistical perspective on distillation.
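The precision@k metric used above can be computed as follows (a straightforward sketch; the helper name and the toy scores are our own):

```python
import numpy as np

def precision_at_k(scores, relevant, k):
    """Fraction of the k highest-scored labels that are relevant.

    scores: array of predicted scores, one per label.
    relevant: set of relevant label ids for this example.
    """
    top_k = np.argsort(-scores)[:k]
    return sum(int(label) in relevant for label in top_k) / k

scores = np.array([0.1, 0.9, 0.4, 0.8])
print(precision_at_k(scores, {1, 2}, 2))  # top-2 is {1, 3}; one hit -> 0.5
```

In practice these per-example values are averaged over the test set (and over runs, as above) to obtain P@1, P@3, and so on.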
6 Conclusion
We presented a statistical perspective on distillation, building on a simple observation: distilling the Bayes class-probabilities yields a more reliable estimate of the population risk. Viewing distillation in this light, we formalised a bias-variance tradeoff to quantify the effect of approximate teacher class-probability estimates on student generalisation, and also studied a novel application of distillation to multiclass retrieval. Towards developing a comprehensive understanding of distillation, studying the optimisation aspects of this viewpoint, and the setting of overparametrised teacher models (Zhang:2018), would be of interest.
References
Appendix A Theory: additional results
Proposition 4.
Suppose we have a teacher model with corresponding distilled empirical risk (5). Furthermore, assume the teacher's probability estimates are unbiased, i.e., for all . Then, for any predictor ,
for some constant .
Proof of Proposition 4.
Let and . Then,
Note that since is an unbiased estimator of . Using this fact, we obtain the desired result as follows:
∎
Proposition 5.
Pick any bounded loss . Fix a hypothesis class of predictors , with induced class of functions . Suppose has uniform covering number . Then, for any , with probability at least over ,
where and is the empirical variance of the loss values.
Appendix B Experiments: additional results
B.1 Bias-variance tradeoff
Continuing with the same synthetic Gaussian data as in §5.1, we now consider a family of teachers of the form
(16) 
where is the sigmoid, , , and comprises independent Gaussian noise. Increasing induces a bias in the teacher’s estimate of , while increasing induces variance in the teacher over fresh draws. Combined, these determine the teacher’s mean squared error (MSE) , which by Proposition 3 bounds the gap between the population and distilled empirical risks.
For each such teacher, we compute its MSE, as well as the test-set AUC of the corresponding distilled student. Figure 5 shows the relationship between the teacher’s MSE and the student’s AUC. In line with the theory, more accurate estimates of result in better students. Figure 6 also shows how the teacher’s MSE depends on the choice of and , demonstrating that multiple such pairs can achieve a similar MSE. As before, we see that a teacher may trade off bias for variance in order to achieve a low MSE.
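Since (16) is elided above, the sketch below assumes one plausible instantiation: the teacher applies the sigmoid to the true logit, shifted by a fixed offset (inducing bias) and perturbed by fresh Gaussian noise (inducing variance). All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def teacher_mse(bias, noise, n_points=2000, n_draws=200):
    """MSE of a noisy, biased teacher against the Bayes probabilities,
    averaged over points and fresh noise draws (assumed form of (16))."""
    a = rng.normal(size=n_points)     # true logits a(x)
    p_star = sigmoid(a)               # Bayes class-probabilities
    eps = rng.normal(0.0, noise, size=(n_draws, n_points))
    p_hat = sigmoid(a + bias + eps)   # teacher estimates
    return float(np.mean((p_hat - p_star) ** 2))

# Distinct (bias, noise) pairs can land at comparable MSE values.
print(teacher_mse(0.5, 0.0), teacher_mse(0.0, 0.5))
```

Sweeping a grid of (bias, noise) pairs with this helper reproduces the qualitative picture of Figure 6: level sets of the MSE trade bias against variance.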
B.2 Uncalibrated teachers may distill worse
We now illustrate the importance of the teacher probabilities being meaningful reflections of the true . We continue our exploration of the synthetic Gaussian problem, where takes on a sigmoid form. We now distort these probabilities as follows: for , we construct
The new class-probability function preserves the classification boundary , but squashes the probabilities themselves as gets larger. We now consider using as teacher probabilities to distill to a student. This teacher has the same accuracy, but significantly worse calibration than the Bayes teacher using .
Figure 7 confirms that as increases, the benefit of distillation to the student diminishes. This validates our claim that teacher accuracy alone is insufficient to judge whether distillation will be useful.
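One transform with the stated properties (an assumption, not necessarily the paper's exact distortion) shrinks the logit of the true probability by a factor gamma, which preserves the decision boundary while squashing probabilities toward 1/2:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def logit(p):
    return np.log(p / (1 - p))

def squash(p, gamma):
    # Assumed distortion: shrink the logit by gamma. Classification at
    # threshold 1/2 is unchanged, but calibration degrades as gamma grows.
    return sigmoid(logit(p) / gamma)

p = np.array([0.05, 0.4, 0.6, 0.95])
for gamma in (1.0, 4.0):
    q = squash(p, gamma)
    # The decision boundary p = 1/2 is preserved...
    assert np.all((q > 0.5) == (p > 0.5))
    print(q)  # ...but probabilities are pulled toward 1/2 as gamma grows
```

With gamma = 1 the probabilities are untouched; as gamma grows the distorted teacher stays equally accurate while its probabilities carry less information about the true confidence, matching the degradation seen in Figure 7.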