Contrastive Representation Distillation

10/23/2019
by Yonglong Tian, et al.

Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation. Code: http://github.com/HobbitLong/RepDistiller.
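
As a rough sketch of the two objectives described above, the snippet below (PyTorch, matching the framework of the linked repository) contrasts the standard knowledge-distillation loss with a simplified contrastive loss between student and teacher representations. The function names, temperature values, and the in-batch-negatives InfoNCE-style critic are illustrative assumptions, not the paper's exact objective; see the linked repository for the authors' implementation.

```python
# Sketch only: simplified KD and contrastive representation losses.
# The in-batch-negatives critic below is an assumption for illustration;
# the paper's CRD objective has its own critic and negative sampling.
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, T=4.0):
    """Standard knowledge distillation: KL divergence between the softened
    teacher and student class distributions."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)


def contrastive_repr_loss(student_feat, teacher_feat, T=0.1):
    """Simplified contrastive objective: for each input, the teacher's
    representation of the same input is the positive and the other samples
    in the batch serve as negatives (an InfoNCE-style surrogate for
    maximizing mutual information between student and teacher)."""
    s = F.normalize(student_feat, dim=1)          # (N, D) student features
    t = F.normalize(teacher_feat, dim=1)          # (N, D) teacher features
    logits = s @ t.t() / T                        # (N, N) similarity matrix
    labels = torch.arange(s.size(0), device=s.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)
```

In practice either loss is typically added, with a scalar weight, to the ordinary cross-entropy on ground-truth labels when training the student.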

Related research

Wasserstein Contrastive Representation Distillation (12/15/2020): The primary goal of knowledge distillation (KD) is to encapsulate the in...
Multi-level Knowledge Distillation (12/01/2020): Knowledge distillation has become an important technique for model compr...
Ensemble Modeling with Contrastive Knowledge Distillation for Sequential Recommendation (04/28/2023): Sequential recommendation aims to capture users' dynamic interest and pr...
Information Theoretic Representation Distillation (12/01/2021): Despite the empirical success of knowledge distillation, there still lac...
Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation (05/19/2021): Knowledge distillation (KD), transferring knowledge from a cumbersome te...
Knowledge distillation via adaptive instance normalization (03/09/2020): This paper addresses the problem of model compression via knowledge dist...
Layerwise Bregman Representation Learning with Applications to Knowledge Distillation (09/15/2022): In this work, we propose a novel approach for layerwise representation l...
