Knowledge distillation using unlabeled mismatched images

03/21/2017
by Mandar Kulkarni, et al.

Current approaches for Knowledge Distillation (KD) either directly use the training data or sample from the training data distribution. In this paper, we demonstrate the effectiveness of a 'mismatched' unlabeled stimulus for performing KD on image classification networks. For illustration, we consider scenarios where there is a complete absence of training data, or where a mismatched stimulus has to be used to augment a small amount of training data. We demonstrate that stimulus complexity is a key factor in the performance of distillation. Our examples include the use of various datasets as stimuli for MNIST and CIFAR teachers.
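To make the idea concrete, the sketch below shows one way such distillation could be set up, assuming PyTorch; the network architecture, the temperature value, and the use of a KL-divergence matching loss are illustrative assumptions, not the paper's exact recipe. The key point it captures is that the student is trained only on unlabeled, out-of-domain (mismatched) images, using the teacher's softened outputs as the sole supervision signal.

```python
# Minimal sketch of KD with a "mismatched" unlabeled stimulus (assumptions: PyTorch,
# a placeholder CNN, temperature-scaled KL matching). The student never sees the
# teacher's original training data; it only matches the teacher's soft outputs on
# out-of-domain images.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN(nn.Module):
    """Placeholder architecture; the paper's actual teacher/student nets may differ."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def distill_step(teacher, student, optimizer, stimulus, T=4.0):
    """One KD step on a batch of unlabeled, mismatched images (no ground-truth labels)."""
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(stimulus)
    s_logits = student(stimulus)
    # Soften both distributions with temperature T and match them via KL divergence.
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    teacher = SmallCNN()  # in practice: load weights pretrained on, e.g., MNIST
    student = SmallCNN()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    # Stand-in for a batch of mismatched stimulus images (e.g., CIFAR images converted
    # to 1x28x28 grayscale); random tensors keep the sketch self-contained.
    batch = torch.rand(64, 1, 28, 28)
    print("distillation loss:", distill_step(teacher, student, opt, batch))
```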


