Knowledge Distillation Meets Self-Supervision

06/12/2020
by Guodong Xu, et al.

Knowledge distillation, which involves extracting the "dark knowledge" from a teacher network to guide the learning of a student network, has emerged as an important technique for model compression and transfer learning. Unlike previous works that exploit architecture-specific cues such as activation and attention for distillation, here we explore a more general and model-agnostic approach for extracting "richer dark knowledge" from the pre-trained teacher model. We show that the seemingly different self-supervision task can serve as a simple yet powerful solution. For example, when performing contrastive learning between transformed entities, the noisy predictions of the teacher network reflect its intrinsic composition of semantic and pose information. By exploiting the similarity between those self-supervision signals as an auxiliary task, one can effectively transfer the hidden information from the teacher to the student. In this paper, we discuss practical ways to exploit those noisy self-supervision signals with selective transfer for distillation. We further show that self-supervision signals improve conventional distillation with substantial gains under few-shot and noisy-label scenarios. Given the richer knowledge mined from self-supervision, our knowledge distillation approach achieves state-of-the-art performance on standard benchmarks, i.e., CIFAR100 and ImageNet, under both similar-architecture and cross-architecture settings. The advantage is even more pronounced under the cross-architecture setting, where our method outperforms the state-of-the-art CRD by an average of 2.3% on CIFAR100 across six different teacher-student pairs.
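
To make the idea concrete, below is a minimal PyTorch sketch (not the authors' released code; the function names, temperature, and exact loss form are illustrative assumptions) of transferring a self-supervision signal from teacher to student: both networks embed an original batch and a transformed batch, each yields a softened distribution of contrastive similarities, and the student is trained to match the teacher's distribution through a KL-divergence term used as an auxiliary loss.

# Minimal sketch (not the authors' released code): transferring the teacher's
# self-supervision signal to the student. Names, temperature, and the exact
# loss form are illustrative assumptions.
import torch
import torch.nn.functional as F

def similarity_logits(aug_feats, orig_feats, temperature=0.1):
    # Cosine similarities between embeddings of transformed samples (rows)
    # and embeddings of the original samples (columns), scaled by a temperature.
    a = F.normalize(aug_feats, dim=1)
    b = F.normalize(orig_feats, dim=1)
    return a @ b.t() / temperature

def ss_transfer_loss(student_aug, student_orig, teacher_aug, teacher_orig,
                     temperature=0.1):
    # The student's similarity distribution is pushed toward the teacher's
    # via KL divergence; this is the auxiliary "self-supervision" term.
    s_logits = similarity_logits(student_aug, student_orig, temperature)
    t_logits = similarity_logits(teacher_aug, teacher_orig, temperature)
    return F.kl_div(F.log_softmax(s_logits, dim=1),
                    F.softmax(t_logits, dim=1),
                    reduction='batchmean')

# Toy usage with random embeddings standing in for network features.
student_aug, student_orig = torch.randn(8, 128), torch.randn(8, 128)
teacher_aug, teacher_orig = torch.randn(8, 128), torch.randn(8, 128)
aux_loss = ss_transfer_loss(student_aug, student_orig, teacher_aug, teacher_orig)

In the paper this auxiliary term is combined with the usual classification and distillation losses, and the teacher's noisier self-supervision predictions are transferred selectively; the sketch above omits that selection step.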

Related research

11/23/2021  Semi-Online Knowledge Distillation
Knowledge distillation is an effective and stable method for model compr...

12/17/2020  Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup
Knowledge distillation, which involves extracting the "dark knowledge" f...

09/07/2021  Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution
Knowledge distillation (KD) is an effective framework that aims to trans...

12/01/2018  Snapshot Distillation: Teacher-Student Optimization in One Generation
Optimizing a deep neural network is a fundamental task in computer visio...

03/01/2018  Knowledge Transfer with Jacobian Matching
Classical distillation methods transfer representations from a "teacher"...

03/09/2022  How many Observations are Enough? Knowledge Distillation for Trajectory Forecasting
Accurate prediction of future human positions is an essential task for m...

08/02/2019  Learning Lightweight Lane Detection CNNs by Self Attention Distillation
Training deep models for lane detection is challenging due to the very s...
