Deep Collective Knowledge Distillation

04/18/2023
by Jihyeon Seo, et al.

Many existing studies on knowledge distillation have focused on methods in which a student model mimics a teacher model well. Simply imitating the teacher's knowledge, however, is not sufficient for the student to surpass the teacher. We explore a method that harnesses the knowledge of other students to complement the knowledge of the teacher. We propose deep collective knowledge distillation (DCKD), a model-compression method that trains student models to acquire knowledge not only from their teacher model but also from other student models. The knowledge collected from several student models contains rich information about the correlations between classes, and DCKD is designed to increase this class-correlation knowledge during training, yielding better-performing student models from which knowledge is collected. This simple yet powerful method achieves state-of-the-art performance in many experiments. For example, on ImageNet, ResNet18 trained with DCKD achieves 72.27% top-1 accuracy, outperforming the pretrained ResNet18 by 2.52%. On CIFAR-100, a ShuffleNetV1 student trained with DCKD achieves 6.55% higher top-1 accuracy than the pretrained ShuffleNetV1.
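To make the idea concrete, the sketch below shows a collective-distillation-style loss in PyTorch. This is not the authors' DCKD implementation: the temperature T, the weights alpha and beta, and the aggregation of peer-student knowledge (a simple average of their softened outputs) are illustrative assumptions, meant only to show how teacher and peer soft predictions can both supervise a student.

```python
# A minimal sketch of a collective-distillation-style loss, NOT the authors'
# DCKD method. T, alpha, beta, and the peer-averaging scheme are assumptions.
import torch
import torch.nn.functional as F


def collective_distillation_loss(student_logits, teacher_logits,
                                 peer_logits_list, targets,
                                 T=4.0, alpha=0.5, beta=0.5):
    """Hard-label CE plus KL terms against the teacher and the peers."""
    # Standard cross-entropy with the ground-truth labels.
    ce = F.cross_entropy(student_logits, targets)

    # KL divergence to the teacher's softened class distribution.
    kl_teacher = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    # Collective knowledge from peers: average their softened distributions
    # (an assumption; the paper's aggregation may differ) and match it too.
    peer_soft = torch.stack(
        [F.softmax(p / T, dim=1) for p in peer_logits_list], dim=0
    ).mean(dim=0)
    kl_peers = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        peer_soft,
        reduction="batchmean",
    ) * (T * T)

    return ce + alpha * kl_teacher + beta * kl_peers
```

In this sketch, the peer term exposes the student to class-correlation information that a single teacher's soft labels may not capture, which is the intuition the abstract describes.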
