Unified and Effective Ensemble Knowledge Distillation

04/01/2022
by Chuhan Wu, et al.

Ensemble knowledge distillation can extract knowledge from multiple teacher models and encode it into a single student model. Many existing methods learn and distill the student model on labeled data only. However, the teacher models are usually trained on the same labeled data, so their predictions are highly correlated with the groundtruth labels and cannot provide sufficient knowledge complementary to the task labels for teaching the student. Distilling on unseen unlabeled data has the potential to enhance knowledge transfer from the teachers to the student. In this paper, we propose a unified and effective ensemble knowledge distillation method that distills a single student model from an ensemble of teacher models on both labeled and unlabeled data. Since different teachers may differ in prediction correctness on the same sample, on labeled data we weight the predictions of different teachers according to their correctness. In addition, we weight the distillation loss based on the overall prediction correctness of the teacher ensemble to distill high-quality knowledge. On unlabeled data there is no groundtruth for evaluating prediction correctness; however, the disagreement among teachers is an indication of sample hardness, so we weight the distillation loss based on the teachers' disagreement to emphasize knowledge distillation on important samples. Extensive experiments on four datasets show the effectiveness of our proposed ensemble distillation method.
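
The abstract describes the weighting scheme only at a high level. Below is a minimal PyTorch sketch of how such correctness- and disagreement-weighted distillation losses could look. The concrete choices here (softmaxed negative cross-entropy as per-teacher weights, exp of the negative mean teacher cross-entropy as the ensemble-correctness weight, prediction variance across teachers as the disagreement measure, and the temperature tau) are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of correctness- and disagreement-weighted ensemble distillation.
# The specific weighting functions below are assumptions for illustration.
import torch
import torch.nn.functional as F

def labeled_distill_loss(student_logits, teacher_logits_list, labels, tau=2.0):
    """Distillation loss on labeled data with correctness-based weighting."""
    # Per-teacher correctness: lower cross-entropy against the labels -> larger weight.
    teacher_ce = torch.stack(
        [F.cross_entropy(t, labels, reduction="none") for t in teacher_logits_list]
    )                                            # [num_teachers, batch]
    teacher_w = F.softmax(-teacher_ce, dim=0)    # per-sample weights over teachers

    # Correctness-weighted ensemble of teacher soft labels.
    teacher_probs = torch.stack(
        [F.softmax(t / tau, dim=-1) for t in teacher_logits_list]
    )                                            # [num_teachers, batch, classes]
    soft_targets = (teacher_w.unsqueeze(-1) * teacher_probs).sum(dim=0)

    # Weight each sample's loss by the ensemble's overall correctness
    # (assumed here to be exp(-mean teacher cross-entropy), bounded in (0, 1]).
    sample_w = torch.exp(-teacher_ce.mean(dim=0))
    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                  soft_targets, reduction="none").sum(-1) * tau ** 2
    return (sample_w * kd).mean()

def unlabeled_distill_loss(student_logits, teacher_logits_list, tau=2.0):
    """Distillation loss on unlabeled data weighted by teacher disagreement."""
    teacher_probs = torch.stack(
        [F.softmax(t / tau, dim=-1) for t in teacher_logits_list]
    )
    soft_targets = teacher_probs.mean(dim=0)     # no labels, so a plain average

    # Disagreement as per-sample variance across teachers (an assumed proxy for
    # sample hardness); harder samples get a larger distillation weight.
    disagreement = teacher_probs.var(dim=0).sum(-1)
    sample_w = disagreement / (disagreement.mean() + 1e-8)

    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                  soft_targets, reduction="none").sum(-1) * tau ** 2
    return (sample_w * kd).mean()
```

In training, the two losses would typically be computed on labeled and unlabeled batches respectively and combined with the standard supervised loss on the labeled batch; the combination weights are not specified in the abstract.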

research 11/13/2019: Learning from a Teacher using Unlabeled Data
Knowledge distillation is a widely used technique for model compression....

research 05/22/2021: Revisiting Knowledge Distillation for Object Detection
The existing solutions for object detection distillation rely on the ava...

research 01/02/2019: Learning Efficient Detector with Semi-supervised Adaptive Distillation
Knowledge Distillation (KD) has been used in image classification for mo...

research 08/20/2022: Effectiveness of Function Matching in Driving Scene Recognition
Knowledge distillation is an effective approach for training compact rec...

research 02/08/2023: SLaM: Student-Label Mixing for Semi-Supervised Knowledge Distillation
Semi-supervised knowledge distillation is a powerful training paradigm f...

research 01/11/2021: Resolution-Based Distillation for Efficient Histology Image Classification
Developing deep learning models to analyze histology images has been com...

research 01/26/2022: TrustAL: Trustworthy Active Learning using Knowledge Distillation
Active learning can be defined as iterations of data labeling, model tra...