RepDistiller
[ICLR 2020] Contrastive Representation Distillation (CRD), and benchmark of recent knowledge distillation methods
Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation. Code: http://github.com/HobbitLong/RepDistiller.
Knowledge distillation (KD) transfers knowledge from one deep learning model (the teacher) to another (the student). The objective originally proposed by Hinton et al. (2015) minimizes the KL divergence between the teacher and student outputs. This formulation makes intuitive sense when the output is a distribution, e.g., a probability mass function over classes. However, often we instead wish to transfer knowledge about a representation. For example, in the problem of "cross-modal distillation", we may wish to transfer the representation of an image processing network to a sound (Aytar et al., 2016) or depth (Gupta et al., 2016) processing network, such that deep features for an image and the associated sound or depth features are highly correlated. In such cases, the KL divergence is undefined.
Representational knowledge is structured – the dimensions exhibit complex interdependencies. The original KD objective introduced in Hinton et al. (2015) treats all dimensions as independent, conditioned on the input. Let $y^T$ be the output of the teacher and $y^S$ be the output of the student. Then the original KD objective function $\psi$ has the fully factored form: $\psi(y^S, y^T)=\sum_i \phi_i(y_i^S, y_i^T)$.* [*In particular, in Hinton et al. (2015), $\phi_i(a,b)=-b\log a$.] Such a factored objective is insufficient for transferring structural knowledge, i.e. dependencies between output dimensions $y_i$ and $y_j$. This is similar to the situation in image generation, where an $L_2$ objective produces blurry results due to independence assumptions between output dimensions.
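To make the factored form concrete, the per-dimension objective $\phi_i(a,b)=-b\log a$ with tempered softmax outputs can be sketched as follows (a minimal NumPy sketch; the function names and the temperature value are illustrative, not from the paper):

```python
import numpy as np

def softmax(z, rho=1.0):
    """Tempered softmax: sigma(z / rho), computed stably."""
    e = np.exp(z / rho - np.max(z / rho, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_objective(z_s, z_t, rho=4.0):
    """Hinton-style KD: cross-entropy between tempered teacher and
    student distributions. The sum over classes i treats each output
    dimension independently given the input (phi_i(a, b) = -b log a)."""
    p_t = softmax(z_t, rho)            # teacher "soft targets"
    p_s = softmax(z_s, rho)            # student distribution
    return -np.sum(p_t * np.log(p_s), axis=-1)  # fully factored over i
```

By Gibbs' inequality the loss is minimized (and equals the teacher's entropy) exactly when the student matches the teacher dimension by dimension; no term couples two output dimensions.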
To overcome this problem, we would like an objective that captures correlations and higher-order output dependencies. To achieve this, in this paper we leverage the family of contrastive objectives (Gutmann & Hyvärinen, 2010; Oord et al., 2018; Arora et al., 2019; Hjelm et al., 2018). These objective functions have been used successfully in recent years for density estimation and representation learning, especially in self-supervised settings. Here we adapt them to the task of knowledge distillation from one deep network to another. We show that it is important to work in representation space, similar to recent works such as Zagoruyko & Komodakis (2016a) and Romero et al. (2014). However, note that the loss functions used in those works do not explicitly try to capture correlations or higher-order dependencies in representational space.
Our objective maximizes a lower bound on the mutual information between the teacher and student representations. We find that this results in better performance on several knowledge transfer tasks. We conjecture that this is because the contrastive objective better transfers all the information in the teacher's representation, rather than only transferring knowledge about conditionally independent output class probabilities. Somewhat surprisingly, the contrastive objective even improves results on the originally proposed task of distilling knowledge about class probabilities, for example, compressing a large CIFAR-100 network into a smaller one that performs almost as well. We believe this is because the correlations between different class probabilities contain useful information that regularizes the learning problem. Our paper forges a connection between two literatures that have evolved mostly independently: knowledge distillation and representation learning. This connection allows us to leverage strong methods from representation learning to significantly improve the state of the art on knowledge distillation. Our contributions are:
A contrastive-based objective for transferring knowledge between deep networks.
Applications to model compression, cross-modal transfer, and ensemble distillation.
Benchmarking 12 recent distillation methods; CRD outperforms all other methods, e.g., 57% average relative improvement over the original KD (Hinton et al., 2015)† [†Average relative improvement $=\frac{1}{M}\sum_{i=1}^{M}\frac{\mathrm{CRD}_i-\mathrm{KD}_i}{\mathrm{KD}_i-\mathrm{Van}_i}$, where $M$ is the number of student models and $\mathrm{CRD}_i$, $\mathrm{KD}_i$, and $\mathrm{Van}_i$ represent the accuracies of CRD, KD, and vanilla training of the $i$-th student model, respectively], which, surprisingly, performs the second best.
The seminal work of Buciluǎ et al. (2006) and Hinton et al. (2015) introduced the idea of distilling knowledge from large, cumbersome models into smaller, faster models without losing too much generalization power. The general motivation was that at training time, the availability of computation allows "slop" in model size, and potentially faster learning, but computation and memory constraints at inference time necessitate the use of smaller models. Buciluǎ et al. (2006) achieve this by matching output logits; Hinton et al. (2015) introduced the idea of temperature in the softmax outputs to better represent smaller probabilities in the output for a single sample. These smaller probabilities provide useful information about the learned representation of the teacher model; an intermediate temperature, trading off between large temperatures (which increase entropy) and small ones, tends to provide the highest transfer of knowledge between student and teacher. The method in Li et al. (2014) was also closely related to Hinton et al. (2015). Attention transfer (Zagoruyko & Komodakis, 2016a) focuses on the feature maps of the network as opposed to the output logits. Here the idea is to elicit similar response patterns in the teacher and student feature maps (called "attention"). However, only feature maps with the same spatial resolution can be combined in this approach, which is a significant limitation since it requires student and teacher networks with very similar architectures. This technique achieves state-of-the-art results for distillation (as measured by the generalization of the student network). FitNets (Romero et al., 2014) also deal with intermediate representations, using regressions to guide the feature activations of the student network. Since Zagoruyko & Komodakis (2016a) use a weighted form of this regression, they tend to perform better. Other papers (Yim et al., 2017; Huang & Wang, 2017; Kim et al., 2018; Ahn et al., 2019) have enforced various criteria based on representations. The contrastive objective we use in this paper is the same as that used in Tian et al. (2019), but we derive it from a different perspective and give a rigorous proof that our objective is a lower bound on the mutual information. Our objective is also related to the InfoNCE and NCE objectives introduced in Oord et al. (2018) and Gutmann & Hyvärinen (2010). Oord et al. (2018) use contrastive learning in the context of self-supervised representation learning. They show that their objective maximizes a lower bound on mutual information. A closely related approach is used in Hjelm et al. (2018). InfoNCE and NCE are closely related to, but distinct from, adversarial learning (Goodfellow et al., 2014). Goodfellow (2014) shows that the NCE objective of Gutmann & Hyvärinen (2010) can lead to maximum likelihood learning, but the adversarial objective cannot.
The key idea of contrastive learning is very general: learn a representation that is close in some metric space for "positive" pairs and push apart the representations of "negative" pairs. Fig. 1 gives a visual explanation of how we structure contrastive learning for the three tasks we consider: model compression, cross-modal transfer, and ensemble distillation.
Given two deep neural networks, a teacher $f^T$ and a student $f^S$, let $x$ be the network input; we denote the representations at the penultimate layer (before logits) as $f^T(x)$ and $f^S(x)$ respectively. Let $x_i$ represent a training sample, and $x_j$ another randomly chosen sample. We would like to push closer the representations $f^S(x_i)$ and $f^T(x_i)$ while pushing apart $f^S(x_i)$ and $f^T(x_j)$. For ease of notation, we define random variables $S$ and $T$ for the student and teacher's representations of the data respectively:

$x \sim p_{\text{data}}(x)$  (1)
$S = f^S(x)$  (2)
$T = f^T(x)$  (3)
Intuitively speaking, we will consider the joint distribution $p(T,S)$ and the product of marginal distributions $p(T)p(S)$, so that, by maximizing the KL divergence between these distributions, we can maximize the mutual information between the student and teacher representations. To set up an appropriate loss that can achieve this aim, let us define a distribution $q$ with latent variable $C$ which decides whether a tuple $(T,S)$ was drawn from the joint ($C=1$) or from the product of marginals ($C=0$):

$q(T,S \mid C=1) = p(T,S), \qquad q(T,S \mid C=0) = p(T)\,p(S)$  (4)
Now, suppose in our data we are given 1 congruent pair (drawn from the joint distribution, i.e. the same input provided to $f^T$ and $f^S$) for every $N$ incongruent pairs (drawn from the product of marginals; independent, randomly drawn inputs provided to $f^T$ and $f^S$). Then the priors on the latent $C$ are:

$q(C=1) = \frac{1}{N+1}, \qquad q(C=0) = \frac{N}{N+1}$  (5)
By simple manipulation and Bayes' rule, the posterior for class $C=1$ is given by:

$q(C=1 \mid T,S) = \frac{q(T,S \mid C=1)\,q(C=1)}{q(T,S \mid C=1)\,q(C=1) + q(T,S \mid C=0)\,q(C=0)}$  (6)
$= \frac{p(T,S)}{p(T,S) + N\,p(T)\,p(S)}$  (7)
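The step from Eq. 6 to Eq. 7 (the $\frac{1}{N+1}$ normalizers cancel) can be checked numerically; the helper name and probability values below are illustrative:

```python
import numpy as np

def posterior_c1(p_joint, p_t, p_s, n):
    """q(C=1 | T, S) via Bayes' rule with priors q(C=1) = 1/(N+1)
    and q(C=0) = N/(N+1), as in Eq. 6."""
    num = p_joint * (1.0 / (n + 1))
    den = num + (p_t * p_s) * (n / (n + 1))
    return num / den
```

For any densities, this equals the simplified form $p(T,S)\,/\,(p(T,S)+N\,p(T)\,p(S))$ of Eq. 7.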
Next, we observe a connection to mutual information as follows:

$\log q(C=1 \mid T,S) = -\log\!\left(1 + N\,\frac{p(T)\,p(S)}{p(T,S)}\right) \le -\log N + \log \frac{p(T,S)}{p(T)\,p(S)}$  (8)
Taking the expectation on both sides w.r.t. $p(T,S)$ (equivalently w.r.t. $q(T,S \mid C=1)$) and rearranging gives:

$I(T;S) \ge \log N + \mathbb{E}_{q(T,S \mid C=1)}\big[\log q(C=1 \mid T,S)\big]$  (9)

where $I(T;S)$ is the mutual information between the distributions of the teacher and student embeddings. Thus maximizing $\mathbb{E}_{q(T,S \mid C=1)}[\log q(C=1 \mid T,S)]$ w.r.t. the parameters of the student network increases a lower bound on mutual information. However, we do not know the true distribution $q(C=1 \mid T,S)$; instead we estimate it by fitting a model $h:\mathcal{T}\times\mathcal{S}\to[0,1]$ to samples from the data distribution, where $\mathcal{T}$ and $\mathcal{S}$ represent the domains of the embeddings. We maximize the log likelihood of the data under this model (a binary classification problem):
$\mathcal{L}_{\text{critic}}(h) = \mathbb{E}_{q(T,S \mid C=1)}\big[\log h(T,S)\big] + N\,\mathbb{E}_{q(T,S \mid C=0)}\big[\log\big(1 - h(T,S)\big)\big]$  (10)
$h^{*} = \operatorname*{argmax}_{h}\,\mathcal{L}_{\text{critic}}(h)$  (11)
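Eq. 10 is a weighted binary cross-entropy; its empirical (Monte Carlo) estimator over a batch of scored pairs can be sketched as follows (names are illustrative, not from the released code):

```python
import numpy as np

def critic_loss(h_pos, h_neg, n):
    """Empirical L_critic: mean log h over congruent (C=1) pairs plus
    N times the mean log(1 - h) over incongruent (C=0) pairs."""
    return np.mean(np.log(h_pos)) + n * np.mean(np.log(1.0 - h_neg))
```

A critic that scores congruent pairs near 1 and incongruent pairs near 0 attains a higher value than an uninformative critic, which is what maximizing Eq. 11 seeks.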
We term $h$ the critic since we will be learning representations that optimize the critic's score. Assuming a sufficiently expressive $h$, $h^{*}(T,S) = q(C=1 \mid T,S)$ (via Gibbs' inequality; see Sec. 6.2.1 for proof), so we can rewrite Eq. 9 in terms of $h^{*}$:

$I(T;S) \ge \log N + \mathbb{E}_{q(T,S \mid C=1)}\big[\log h^{*}(T,S)\big]$  (12)
Therefore, we see that the optimal critic is an estimator whose expectation lower-bounds the mutual information. We wish to learn a student whose representation maximizes the mutual information with the teacher's, suggesting the following optimization problem:

$f^{S*} = \operatorname*{argmax}_{f^{S}}\;\mathbb{E}_{q(T,S \mid C=1)}\big[\log h^{*}(T,S)\big]$  (13)
An apparent difficulty here is that the optimal critic $h^{*}$ depends on the current student. We can circumvent this difficulty by weakening the bound in (12) to:

$\mathbb{E}_{q(T,S \mid C=1)}\big[\log h^{*}(T,S)\big] \ge \mathbb{E}_{q(T,S \mid C=1)}\big[\log h^{*}(T,S)\big] + N\,\mathbb{E}_{q(T,S \mid C=0)}\big[\log\big(1 - h^{*}(T,S)\big)\big]$  (14)
$= \mathcal{L}_{\text{critic}}(h^{*}) = \max_{h}\,\mathcal{L}_{\text{critic}}(h)$  (15)
$\ge \mathcal{L}_{\text{critic}}(h)\quad\text{for any }h$  (16)

The first line comes about by simply adding $N\,\mathbb{E}_{q(T,S \mid C=0)}[\log(1 - h^{*}(T,S))]$ to the bound in (12). This term is strictly negative, so the inequality holds. The last line follows from the fact that $\mathcal{L}_{\text{critic}}(h^{*})$ upper-bounds $\mathcal{L}_{\text{critic}}(h)$. Optimizing (15) w.r.t. the student we have:
$f^{S*} = \operatorname*{argmax}_{f^{S}}\,\max_{h}\,\mathcal{L}_{\text{critic}}(h)$  (17)
$= \operatorname*{argmax}_{f^{S}}\,\max_{h}\;\mathbb{E}_{q(T,S \mid C=1)}\big[\log h(T,S)\big] + N\,\mathbb{E}_{q(T,S \mid C=0)}\big[\log\big(1 - h(T,S)\big)\big]$  (18)

which demonstrates that we may jointly optimize $h$ at the same time as we learn $f^{S}$. We note that due to (16), for any $h$, the resulting $f^{S}$ also optimizes a lower bound (a weaker one) on the mutual information, so our formulation does not rely on $h$ being optimized perfectly.
We may choose to represent $h$ with any family of functions that satisfy $h:\mathcal{T}\times\mathcal{S}\to[0,1]$. In practice, we use the following:

$h(T,S) = \frac{e^{g^{T}(T)'\,g^{S}(S)/\tau}}{e^{g^{T}(T)'\,g^{S}(S)/\tau} + \frac{N}{M}}$  (19)

where $M$ is the cardinality of the dataset and $\tau$ is a temperature that adjusts the concentration level. In practice, since the dimensionality of $S$ and $T$ may differ, $g^{S}$ and $g^{T}$ linearly transform them into the same dimension and further normalize them by the $\ell_2$ norm before the inner product. The form of Eq. (19) is inspired by NCE (Gutmann & Hyvärinen, 2010). Our formulation is similar to the InfoNCE loss (Oord et al., 2018) in that we maximize a lower bound on the mutual information. However, we use a different objective and bound, which in our experiments (Sec. 4.4) we found to be more effective than InfoNCE.
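The critic of Eq. 19 with its projection heads can be sketched as follows (a minimal sketch; the weight matrices, dimensions, and default temperature are illustrative assumptions, and the released code may differ in detail):

```python
import numpy as np

def embed(z, w, eps=1e-8):
    """g(.): linear projection to a shared dimension, then l2-normalize."""
    v = z @ w
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + eps)

def critic_h(t_feat, s_feat, w_t, w_s, n, m, tau=0.07):
    """h(T, S) = e^{g_T(T)' g_S(S)/tau} / (e^{g_T(T)' g_S(S)/tau} + N/M),
    computed per row of a batch of (teacher, student) feature pairs."""
    gt, gs = embed(t_feat, w_t), embed(s_feat, w_s)
    score = np.exp(np.sum(gt * gs, axis=-1) / tau)  # inner product / tau
    return score / (score + n / m)
```

Because both embeddings are unit-normalized, the inner product lies in $[-1, 1]$, so the exponent is bounded and $h$ stays in $(0, 1)$; identical (positive) pairs score close to 1.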
Implementation. Theoretically, a larger $N$ in Eq. (12) leads to a tighter lower bound on the MI. In practice, to avoid using a very large batch size, we implement a memory buffer that stores the latent features of each data sample computed in previous batches. Therefore, during training we can efficiently retrieve a large number of negative samples from the memory buffer without recomputing their features.
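A simplified sketch of such a memory buffer is shown below; the class name, random initialization, and single-buffer design are illustrative assumptions (the released code maintains separate buffers for the teacher and student sides):

```python
import numpy as np

class NegativeBuffer:
    """Stores each sample's latest embedding so negatives can be drawn
    from past batches instead of one enormous current batch."""
    def __init__(self, dataset_size, dim, seed=0):
        self.rng = np.random.default_rng(seed)
        buf = self.rng.standard_normal((dataset_size, dim))
        # Start from random unit vectors; entries are overwritten as
        # real embeddings are computed during training.
        self.buf = buf / np.linalg.norm(buf, axis=1, keepdims=True)

    def update(self, indices, feats):
        self.buf[indices] = feats  # overwrite with freshest embeddings

    def sample_negatives(self, n):
        idx = self.rng.integers(0, len(self.buf), size=n)
        return self.buf[idx]
```

Retrieval is a cheap array lookup, so $N$ can be in the thousands without recomputing any features.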
The knowledge distillation loss was proposed in Hinton et al. (2015). In addition to the regular cross-entropy loss between the student output $y^{S}$ and the one-hot label $y$, it asks the student network output to be as similar as possible to the teacher output by minimizing the cross-entropy between their output probabilities. The complete objective is:

$\mathcal{L}_{KD} = \mathcal{H}\big(y, y^{S}\big) + \beta\,\rho^{2}\,\mathcal{H}\big(\sigma(z^{T}/\rho),\,\sigma(z^{S}/\rho)\big)$  (20)

where $\rho$ is the temperature, $\beta$ is a balancing weight, $\sigma$ is the softmax function, and $z^{T}$, $z^{S}$ are the teacher and student logits. $\mathcal{H}\big(\sigma(z^{T}/\rho),\sigma(z^{S}/\rho)\big)$ is further decomposed into $\mathrm{KL}\big(\sigma(z^{T}/\rho)\,\|\,\sigma(z^{S}/\rho)\big)$ and a constant entropy $\mathcal{H}\big(\sigma(z^{T}/\rho)\big)$.
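The decomposition of cross-entropy into KL divergence plus a constant entropy, $\mathcal{H}(p,q)=\mathcal{H}(p)+\mathrm{KL}(p\|q)$, can be verified directly (generic helpers, not from the released code):

```python
import numpy as np

def entropy(p):
    """H(p) = -sum_i p_i log p_i."""
    return -np.sum(p * np.log(p))

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i log q_i."""
    return -np.sum(p * np.log(q))

def kl(p, q):
    """KL(p || q) = sum_i p_i log(p_i / q_i)."""
    return np.sum(p * np.log(p / q))
```

Since the teacher's tempered distribution is fixed during distillation, its entropy term is a constant, and minimizing the cross-entropy is equivalent to minimizing the KL divergence.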
In the cross-modal transfer task shown in Fig. 1(b), a teacher network is trained on a source modality with a large-scale labeled dataset. We then wish to transfer its knowledge to a student network, adapting it to another dataset or modality; the features of the teacher network are still valuable for the student's learning on the new domain. In this transfer task, we use the contrastive loss of Eq. 10 to match the features of the student and teacher. Additionally, we also consider other distillation objectives, such as KD discussed in the previous section, Attention Transfer (Zagoruyko & Komodakis, 2016a), and FitNet (Romero et al., 2014). Such transfer is conducted on a paired but unlabeled dataset. In this scenario, there are no true labels for the original training task on the source modality, and therefore we ignore the label cross-entropy term in all objectives that we test. Prior cross-modal work (Aytar et al., 2016; Hoffman et al., 2016b;a) uses either regression or KL divergence.
In the case of ensemble distillation shown in Fig. 1(c), we have $K$ teacher networks $f^{T_1},\dots,f^{T_K}$ and one student network $f^{S}$. We adopt the contrastive framework by defining a pairwise contrastive loss between the features of each teacher network $f^{T_k}$ and the student network $f^{S}$. These losses are summed to give the final loss (to be minimized):

$\mathcal{L} = \sum_{k=1}^{K} \mathcal{L}_{\text{contrast}}\big(f^{T_k},\,f^{S}\big)$  (21)

where $\mathcal{L}_{\text{contrast}}$ denotes the contrastive loss (the negative of the objective in Eq. 10) between one teacher and the student.
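The sum in Eq. 21 is a plain accumulation over teachers; a trivial sketch, with the pairwise loss passed in as a callable (the function name is hypothetical):

```python
def ensemble_distill_loss(teacher_feats, student_feat, pairwise_loss):
    """Sum one pairwise contrastive loss per teacher, as in Eq. 21."""
    return sum(pairwise_loss(t, student_feat) for t in teacher_feats)
```

Any differentiable pairwise loss (here a placeholder) can be plugged in; the gradients from all $K$ teachers simply add up at the student.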
We evaluate our contrastive representation distillation (CRD) framework on three knowledge distillation tasks: (a) model compression of a large network into a smaller one; (b) cross-modal knowledge transfer; (c) ensemble distillation from a group of teachers into a single student network.
Datasets. (1) CIFAR-100 (Krizhevsky & Hinton, 2009) contains 50K training images, with 0.5K images per class, and 10K test images. (2) ImageNet (Deng et al., 2009) provides 1.2 million images from 1K classes for training and 50K for validation. (3) STL-10 (Coates et al., 2011) consists of a training set of 5K labeled images from 10 classes and 100K unlabeled images, and a test set of 8K images. (4) TinyImageNet (Deng et al., 2009) has 200 classes, each with 500 training images and 50 validation images. (5) NYU-Depth V2 (Silberman et al., 2012) consists of 1449 indoor images, each labeled with a dense depth map and a semantic segmentation map.
Top-1 test accuracy (%) on CIFAR-100, for seven teacher/student pairs:

Teacher   WRN-40-2  WRN-40-2  resnet56  resnet110  resnet110  resnet32x4  vgg13
Student   WRN-16-2  WRN-40-1  resnet20  resnet20   resnet32   resnet8x4   vgg8

Teacher   75.61     75.61     72.34     74.31      74.31      79.42       74.64
Student   73.26     71.98     69.06     69.06      71.14      72.50       70.36
KD        74.92     73.54     70.66     70.67      73.08      73.33       72.98
FitNet    73.58     72.24     69.21     68.99      71.06      73.50       71.02
AT        74.08     72.77     70.55     70.22      72.31      73.44       71.43
SP        73.83     72.43     69.67     70.04      72.69      72.94       72.68
CC        73.56     72.21     69.63     69.48      71.48      72.97       70.71
VID       74.11     73.30     70.38     70.16      72.61      73.09       71.23
RKD       73.35     72.22     69.61     69.25      71.82      71.90       71.48
PKT       74.54     73.45     70.34     70.25      72.61      73.64       72.88
AB        72.50     72.38     69.47     69.53      70.98      73.17       70.94
FT        73.25     71.59     69.84     70.22      …