AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation

11/20/2022
by Hyungmin Kim, et al.

We present a novel adversarially penalized self-knowledge distillation method, named adversarial learning and implicit regularization for self-knowledge distillation (AI-KD), which regularizes the training procedure through adversarial learning and implicit distillation. Our model not only distills deterministic and progressive knowledge from the predictive probabilities of the pre-trained model and of the previous epoch, but also transfers knowledge of the deterministic predictive distributions using adversarial learning. The motivation is that self-knowledge distillation methods regularize the predictive probabilities with soft targets, yet the exact distributions may be hard to predict. Our method deploys a discriminator to distinguish the distributions of the pre-trained and student models, while the student model is trained to fool the discriminator during training. Thus, the student model not only learns the pre-trained model's predictive probabilities but also aligns its output distribution with that of the pre-trained model. We demonstrate the effectiveness of the proposed method with various network architectures on multiple datasets and show that it outperforms state-of-the-art methods.
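To make the training scheme concrete, below is a minimal PyTorch sketch of the adversarial part of this idea under stated assumptions: a frozen pre-trained network provides soft targets, and a small discriminator classifies whether a softened probability vector came from the pre-trained model or the student, while the student is trained with cross-entropy, a soft-target distillation term, and a loss for fooling the discriminator. The module names, discriminator architecture, temperature T, and weights alpha and beta are illustrative assumptions, and the progressive (previous-epoch) distillation term described in the abstract is omitted; this is not the authors' implementation.

```python
# Minimal sketch of adversarial self-distillation (illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Classifies whether a softened probability vector comes from the
    pre-trained model ("real") or the student model ("fake")."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 128),
            nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )

    def forward(self, probs):
        return self.net(probs)

def train_step(student, pretrained, discriminator,
               opt_s, opt_d, x, y, T=4.0, alpha=0.1, beta=0.1):
    """One training step: cross-entropy + soft-target KD + adversarial loss."""
    pretrained.eval()
    with torch.no_grad():
        p_teacher = F.softmax(pretrained(x) / T, dim=1)

    logits = student(x)
    p_student = F.softmax(logits / T, dim=1)

    # Update the discriminator: pre-trained outputs are "real", student outputs "fake".
    opt_d.zero_grad()
    d_real = discriminator(p_teacher)
    d_fake = discriminator(p_student.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()

    # Update the student: task loss + soft-target KD loss + loss for fooling the discriminator.
    opt_s.zero_grad()
    loss_ce = F.cross_entropy(logits, y)
    loss_kd = F.kl_div(F.log_softmax(logits / T, dim=1), p_teacher,
                       reduction="batchmean") * (T * T)
    d_fake = discriminator(p_student)
    loss_adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss = loss_ce + alpha * loss_kd + beta * loss_adv
    loss.backward()
    opt_s.step()
    return loss.item()
```

In this sketch the discriminator update uses detached student probabilities so its gradients do not flow into the student, while the adversarial term in the student update pushes the student's output distribution toward being indistinguishable from the pre-trained model's, which is the alignment effect the abstract describes.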

Related research

05/12/2021 - MATE-KD: Masked Adversarial TExt, a Companion to Knowledge Distillation
The advent of large pre-trained language models has given rise to rapid ...

06/07/2021 - RoSearch: Search for Robust Student Architectures When Distilling Pre-trained Language Models
Pre-trained language models achieve outstanding performance in NLP tasks...

02/05/2020 - Feature-map-level Online Adversarial Knowledge Distillation
Feature maps contain rich information about image intensity and spatial ...

10/09/2020 - Be Your Own Best Competitor! Multi-Branched Adversarial Knowledge Transfer
Deep neural network architectures have attained remarkable improvements ...

02/13/2020 - Self-Distillation Amplifies Regularization in Hilbert Space
Knowledge distillation introduced in the deep learning context is a meth...

12/05/2021 - Safe Distillation Box
Knowledge distillation (KD) has recently emerged as a powerful strategy ...

12/16/2021 - Knowledge Distillation Leveraging Alternative Soft Targets from Non-Parallel Qualified Speech Data
This paper describes a novel knowledge distillation framework that lever...
