DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier

12/27/2019
by Sravanti Addepalli et al.

In this era of digital information explosion, an abundance of data from numerous modalities is generated and archived every day. However, most problems in training Deep Neural Networks still revolve around a lack of data that is rich enough for the given task. Data is required not only to train an initial model, but also for future learning tasks such as Model Compression and Incremental Learning. A diverse dataset may be used to train the initial model, but storing it throughout the product life cycle may not be feasible due to data privacy issues or memory constraints. We propose to bridge the gap between the abundance of available data and the lack of relevant data for the future learning tasks of a given trained network. We use the available data, which may be an imbalanced subset of the original training dataset or a related-domain dataset, to retrieve representative samples from a trained classifier using a novel Data-enriching GAN (DeGAN) framework. We demonstrate that data from a related domain can be leveraged to achieve state-of-the-art performance on the tasks of Data-free Knowledge Distillation and Incremental Learning on benchmark datasets. We further demonstrate that our proposed framework can enrich any data, even from unrelated domains, to make it more useful for the future learning tasks of a given network.
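Conceptually, the DeGAN generator is trained against two signals at once: a discriminator that keeps generated samples close to the available proxy data, and the frozen trained classifier, whose predictions on the generated samples are encouraged to be confident per sample (an entropy term) and spread across all classes per batch (a diversity term). The sketch below illustrates one way such an objective can be written in PyTorch; the network shapes, the loss weights lam_e and lam_d, and the helper name degan_generator_loss are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a DeGAN-style generator objective (all shapes and weights are
# illustrative assumptions, not the paper's exact configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    def __init__(self, z_dim=100, img_ch=3):
        super().__init__()
        self.img_ch = img_ch
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, img_ch * 32 * 32), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, self.img_ch, 32, 32)

class Discriminator(nn.Module):
    def __init__(self, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(img_ch * 32 * 32, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # real/fake logit
        )

    def forward(self, x):
        return self.net(x)

def degan_generator_loss(generator, discriminator, classifier, z,
                         lam_e=1.0, lam_d=1.0):
    """Generator loss: adversarial realism on the proxy domain, plus two
    classifier-driven terms that make the samples representative."""
    fake = generator(z)
    # Adversarial term: generated samples should look like the proxy data.
    adv = F.binary_cross_entropy_with_logits(
        discriminator(fake), torch.ones(z.size(0), 1))
    probs = F.softmax(classifier(fake), dim=1)
    # Entropy term (minimized): each sample should be classified confidently.
    ent = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    # Diversity term (maximized): batch predictions should cover all classes,
    # measured as the entropy of the mean class distribution.
    mean_probs = probs.mean(dim=0)
    div = -(mean_probs * torch.log(mean_probs + 1e-8)).sum()
    return adv + lam_e * ent - lam_d * div

# Usage with a frozen placeholder classifier (illustrative only):
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
for p in classifier.parameters():
    p.requires_grad_(False)
G, D = Generator(), Discriminator()
loss = degan_generator_loss(G, D, classifier, torch.randn(16, 100))
loss.backward()  # gradients flow into G (and D) but not the classifier
```

In the full pipeline, the classifier's weights stay frozen, the discriminator is trained adversarially on the proxy (related-domain) data, and the samples drawn from the trained generator then serve as a transfer set for Data-free Knowledge Distillation or Incremental Learning.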


Related research:

Data-free Knowledge Distillation for Segmentation using Data-Enriching GAN (11/02/2020)
Distilling knowledge from huge pre-trained networks to improve the perfo...

Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation (11/18/2020)
Knowledge Distillation is an effective method to transfer the learning a...

Distillation Techniques for Pseudo-rehearsal Based Incremental Learning (07/08/2018)
The ability to learn from incrementally arriving data is essential for a...

Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis (04/10/2021)
Knowledge distillation (KD) has proved to be an effective approach for d...

Data Impressions: Mining Deep Models to Extract Samples for Data-free Applications (01/15/2021)
Pretrained deep models hold their learnt knowledge in the form of the mo...

CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing (05/24/2022)
Recently, the compression and deployment of powerful deep neural network...

Deep Classifier Mimicry without Data Access (06/03/2023)
Access to pre-trained models has recently emerged as a standard across n...
