Parameter-Efficient and Student-Friendly Knowledge Distillation

05/28/2022
by Jun Rao et al.

Knowledge distillation (KD) has been extensively employed to transfer knowledge from a large teacher model to a smaller student, where the teacher's parameters are kept fixed (or only partially updated) during training. Recent studies show that this mode can hinder knowledge transfer because of the capacity mismatch between teacher and student. To alleviate the mismatch, teacher-student joint training methods such as online distillation have been proposed, but they typically incur a high computational cost. In this paper, we present a parameter-efficient and student-friendly knowledge distillation method, namely PESF-KD, which achieves efficient and sufficient knowledge transfer by updating only a relatively small subset of parameters. Technically, we first formulate the mismatch as the sharpness gap between the teacher's and student's predictive distributions, and show that this gap can be narrowed by soft labels with appropriate smoothness. We then introduce an adapter module for the teacher and update only the adapter to obtain such soft labels. Experiments on a variety of benchmarks show that PESF-KD significantly reduces training cost while achieving results competitive with advanced online distillation methods. Code will be released upon acceptance.
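To make the idea concrete, the following is a minimal PyTorch-style sketch of how an adapter on a frozen teacher's logits could be wired into an ordinary temperature-scaled KD loss. The names (LogitAdapter, distillation_loss, train_step) and the hyperparameters (T=4.0, alpha=0.9) are illustrative assumptions, not the authors' released code, and the actual PESF-KD objective used to train the adapter may differ from this plain KD loss.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class LogitAdapter(nn.Module):
        """Small residual bottleneck applied to the frozen teacher's logits
        (hypothetical stand-in for the paper's adapter module)."""

        def __init__(self, num_classes: int, hidden: int = 64):
            super().__init__()
            self.down = nn.Linear(num_classes, hidden)
            self.up = nn.Linear(hidden, num_classes)

        def forward(self, logits: torch.Tensor) -> torch.Tensor:
            # The residual connection keeps the adapted logits close to the
            # teacher's originals while letting the adapter adjust their smoothness.
            return logits + self.up(F.relu(self.down(logits)))


    def distillation_loss(student_logits, soft_logits, labels, T=4.0, alpha=0.9):
        """Standard KD objective: KL on temperature-scaled logits plus cross-entropy."""
        kd = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(soft_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1.0 - alpha) * ce


    def train_step(student, teacher, adapter, optimizer, x, y):
        # The teacher backbone stays frozen; only its adapter and the student
        # receive gradients, which is what makes the scheme parameter-efficient.
        with torch.no_grad():
            raw_logits = teacher(x)
        soft_logits = adapter(raw_logits)
        loss = distillation_loss(student(x), soft_logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In such a setup, only the student and the adapter would be handed to the optimizer (e.g. torch.optim.SGD(list(student.parameters()) + list(adapter.parameters()), lr=0.05)), so the large teacher contributes no trainable parameters beyond the adapter.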


Related research:

02/12/2021 | Learning Student-Friendly Teacher Networks for Knowledge Distillation
We propose a novel knowledge distillation approach to facilitate the tra...

08/23/2020 | Matching Guided Distillation
Feature distillation is an effective way to improve the performance for ...

01/30/2023 | Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy?
Contrary to its original interpretation as a facilitator of knowledge tr...

09/30/2020 | Efficient Kernel Transfer in Knowledge Distillation
Knowledge distillation is an effective way for model compression in deep...

09/12/2022 | Switchable Online Knowledge Distillation
Online Knowledge Distillation (OKD) improves the involved models by reci...

10/03/2022 | Robust Active Distillation
Distilling knowledge from a large teacher model to a lightweight one is ...

02/16/2023 | Learning From Biased Soft Labels
Knowledge distillation has been widely adopted in a variety of tasks and...
