Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data

08/11/2021
by Kuluhan Binici, et al.

With the increasing popularity of deep learning on edge devices, compressing large neural networks to meet the hardware requirements of resource-constrained devices has become a significant research direction. Numerous compression methodologies are currently used to reduce the memory footprint and energy consumption of neural networks. Knowledge distillation (KD) is one such methodology; it uses data samples to transfer the knowledge captured by a large model (teacher) to a smaller one (student). However, for various reasons, the original training data might not be accessible at the compression stage. Data-free model compression is therefore an ongoing research problem that has been addressed by various works. In this paper, we point out that catastrophic forgetting can occur in existing data-free distillation methods. Moreover, the sample generation strategies in some of these methods can result in a mismatch between the synthetic and real data distributions. To prevent such problems, we propose a data-free KD framework that maintains a dynamic collection of generated samples over time. Additionally, we add the constraint of matching the real data distribution to sample generation strategies that target maximum information gain. Our experiments demonstrate that the resulting student models are more accurate than those obtained with state-of-the-art approaches on the SVHN, Fashion MNIST and CIFAR100 datasets.
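To make the general idea concrete, below is a minimal PyTorch sketch of data-free distillation with a bounded replay memory of generated samples. It is an illustrative sketch, not the paper's implementation: the Generator architecture, the `train_data_free_kd` function, the loss weights, the confidence term used as a stand-in for the real-data distribution-matching constraint, and the simple list-based memory with random eviction are all assumptions. The teacher and student are assumed to be classifiers over 32x32 RGB inputs (SVHN/CIFAR-sized).

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, T=4.0):
    """Soft-label distillation: KL divergence between temperature-softened outputs."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)


class Generator(nn.Module):
    """Maps noise vectors to synthetic 32x32 RGB images (illustrative architecture)."""
    def __init__(self, nz=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(nz, 128 * 8 * 8),
            nn.ReLU(inplace=True),
            nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, 3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(64, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)


def train_data_free_kd(teacher, student, steps=1000, batch=64, nz=100,
                       memory_size=4096, alpha=1.0, device="cpu"):
    """Alternates generator and student updates; past synthetic samples are kept
    in a bounded replay memory so the student keeps revisiting them over time."""
    teacher.eval()
    for p in teacher.parameters():   # teacher stays frozen throughout
        p.requires_grad_(False)

    gen = Generator(nz).to(device)
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
    memory = []   # dynamic collection of generated samples

    for _ in range(steps):
        # Generator step: seek samples on which the student still disagrees with
        # the teacher (high information gain). The teacher-confidence term below
        # is a simple stand-in for the distribution-matching constraint.
        z = torch.randn(batch, nz, device=device)
        x = gen(z)
        t_logits = teacher(x)
        s_logits = student(x)
        disagreement = F.l1_loss(F.softmax(s_logits, dim=1),
                                 F.softmax(t_logits, dim=1))
        confidence = F.cross_entropy(t_logits, t_logits.argmax(dim=1))
        loss_g = -disagreement + alpha * confidence
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

        # Add the new samples to the memory; evict old ones at random when full.
        memory.extend(x.detach().cpu())
        while len(memory) > memory_size:
            memory.pop(random.randrange(len(memory)))

        # Student step: distill on fresh samples plus a replayed batch from memory,
        # which counteracts forgetting of knowledge tied to earlier synthetic data.
        replay = torch.stack(random.sample(memory, min(batch, len(memory)))).to(device)
        x_s = torch.cat([x.detach(), replay])
        with torch.no_grad():
            t_soft = teacher(x_s)
        loss_s = kd_loss(student(x_s), t_soft)
        opt_s.zero_grad()
        loss_s.backward()
        opt_s.step()

    return student
```

The design choice to illustrate is the replay memory: without it, the student is trained only on whatever the generator currently produces, so knowledge distilled from earlier regions of the synthetic distribution can be overwritten as the generator drifts.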

Related research

- Synthetic data generation method for data-free knowledge distillation in regression neural networks (01/11/2023): Knowledge distillation is the technique of compressing a larger neural n...
- Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay (01/09/2022): Data-Free Knowledge Distillation (KD) allows knowledge transfer from a t...
- Data-Free Network Quantization With Adversarial Knowledge Distillation (05/08/2020): Network quantization is an essential procedure in deep learning for deve...
- GKD: Generalized Knowledge Distillation for Auto-regressive Sequence Models (06/23/2023): Knowledge distillation is commonly used for compressing neural networks ...
- Generation and Consolidation of Recollections for Efficient Deep Lifelong Learning (11/17/2017): Deep lifelong learning systems need to efficiently manage resources to s...
- Robustness and Diversity Seeking Data-Free Knowledge Distillation (11/07/2020): Knowledge distillation (KD) has enabled remarkable progress in model com...
- Teacher Agent: A Non-Knowledge Distillation Method for Rehearsal-based Video Incremental Learning (06/01/2023): With the rise in popularity of video-based social media, new categories ...