Densely Guided Knowledge Distillation using Multiple Teacher Assistants

09/18/2020
by Wonchul Son, et al.

With the success of deep neural networks, knowledge distillation, which guides the learning of a small student network from a large teacher network, is being actively studied for model compression and transfer learning. However, few studies have addressed the poor learning of the student network when the student and teacher model sizes differ significantly. In this paper, we propose densely guided knowledge distillation using multiple teacher assistants whose model sizes gradually decrease, efficiently bridging the gap between the teacher and student networks. To stimulate more efficient learning of the student network, each teacher assistant distills knowledge to every smaller teacher assistant step by step. Specifically, when teaching a smaller teacher assistant at the next step, the larger teacher assistants from previous steps are used together with the teacher network to increase the learning efficiency. Moreover, we design stochastic teaching in which, for each mini-batch during training, a teacher or a teacher assistant is randomly dropped. This acts as a regularizer, similar to dropout, and improves the accuracy of the student network. Thus, the student can always learn rich distilled knowledge from multiple sources ranging from the teacher to multiple teacher assistants. We verified the effectiveness of the proposed method on a classification task using CIFAR-10, CIFAR-100, and Tiny ImageNet. We also achieved significant performance improvements with various backbone architectures such as a simple stacked convolutional neural network, ResNet, and WideResNet.
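The training signal described above can be summarized as a distillation loss that combines the soft predictions of the teacher and of every larger teacher assistant, with some guides randomly dropped for each mini-batch. The following is a minimal PyTorch sketch of that idea, not the authors' released code; the function name, the temperature, the loss weighting, and the rule of keeping at least one guide when all happen to be dropped are illustrative assumptions.

```python
# Minimal sketch of densely guided distillation with stochastic teaching.
# Assumptions (not from the paper text): guide logits are precomputed per batch,
# the soft-label term is temperature-scaled KL divergence, and each guide is
# kept independently with probability `keep_prob` per mini-batch.

import random
import torch.nn.functional as F

def dgkd_loss(student_logits, guide_logits_list, labels,
              T=4.0, alpha=0.7, keep_prob=0.75):
    """Distill a smaller network from the teacher and all larger teacher assistants.

    guide_logits_list: logits from the teacher and every larger teacher assistant.
    """
    # Hard-label cross-entropy term.
    ce = F.cross_entropy(student_logits, labels)

    # Stochastic teaching: randomly drop guides for this mini-batch
    # (acts like dropout over teaching sources); keep at least one guide.
    kept = [g for g in guide_logits_list if random.random() < keep_prob]
    if not kept:
        kept = [random.choice(guide_logits_list)]

    # Soft-label KL term averaged over the surviving guides.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    kd = sum(
        F.kl_div(log_p_student, F.softmax(g / T, dim=1),
                 reduction="batchmean") * (T * T)
        for g in kept
    ) / len(kept)

    return alpha * kd + (1.0 - alpha) * ce
```

In the cascade described in the abstract, the same loss would be applied at each step: the first teacher assistant is guided by the teacher alone, the next one by the teacher plus the first assistant, and so on, with the student finally trained against the teacher and all teacher assistants.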

Related research

Improved Knowledge Distillation via Teacher Assistant: Bridging the Gap Between Student and Teacher (02/09/2019)
Despite the fact that deep neural networks are powerful models and achie...

ORC: Network Group-based Knowledge Distillation using Online Role Change (06/01/2022)
In knowledge distillation, since a single, omnipotent teacher network ca...

Distilling the Undistillable: Learning from a Nasty Teacher (10/21/2022)
The inadvertent stealing of private/sensitive information using Knowledg...

CES-KD: Curriculum-based Expert Selection for Guided Knowledge Distillation (09/15/2022)
Knowledge distillation (KD) is an effective tool for compressing deep cl...

Pacemaker: Intermediate Teacher Knowledge Distillation For On-The-Fly Convolutional Neural Network (03/09/2020)
There is a need for an on-the-fly computational process with very low pe...

Cascaded channel pruning using hierarchical self-distillation (08/16/2020)
In this paper, we propose an approach for filter-level pruning with hier...

PrUE: Distilling Knowledge from Sparse Teacher Networks (07/03/2022)
Although deep neural networks have enjoyed remarkable success across a w...
