Data Impressions: Mining Deep Models to Extract Samples for Data-free Applications

01/15/2021
by Gaurav Kumar Nayak, et al.

Pretrained deep models hold their learnt knowledge in the form of the model parameters. These parameters act as memory for the trained models and help them generalize well on unseen data. However, in the absence of training data, the utility of a trained model is limited to either inference or serving as a better initialization for a target task. In this paper, we go further and extract synthetic data by leveraging the learnt model parameters. We dub these samples "Data Impressions", which act as proxies for the training data and can be used to realize a variety of tasks. They are useful in scenarios where only the pretrained models are available and the training data is not shared (e.g., due to privacy or sensitivity concerns). We show the applicability of data impressions in solving several computer vision tasks such as unsupervised domain adaptation, continual learning, and knowledge distillation. We also study the adversarial robustness of the lightweight models trained via knowledge distillation using these data impressions. Further, we demonstrate the efficacy of data impressions in generating universal adversarial perturbations (UAPs) with better fooling rates. Extensive experiments performed on several benchmark datasets demonstrate the competitive performance achieved using data impressions in the absence of the original training data.
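The following is a minimal sketch of how such a proxy sample might be synthesized from a pretrained classifier, not the paper's exact procedure: a soft target is sampled for a chosen class and a random input is optimized until the model's softmax output matches it. The model choice (resnet18), input shape, Dirichlet concentration, learning rate, and iteration count are illustrative assumptions.

```python
# Hedged sketch (assumed PyTorch implementation): synthesize one
# "data impression" from a pretrained classifier by sampling a soft
# target for a chosen class and optimizing a random input until the
# model's softmax output matches that target.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Stand-in teacher; in practice the actual pretrained model is loaded here.
model = resnet18(weights=None)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

num_classes, target_class = 1000, 3

# Soft target concentrated on the chosen class. (The concentration value
# is hand-set here purely for illustration.)
alpha = torch.ones(num_classes)
alpha[target_class] = 100.0
soft_target = torch.distributions.Dirichlet(alpha).sample().unsqueeze(0)

# Optimize a random image so the model reproduces the sampled softmax.
x = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(500):                      # iteration count is illustrative
    opt.zero_grad()
    log_probs = F.log_softmax(model(x), dim=1)
    loss = F.kl_div(log_probs, soft_target, reduction="batchmean")
    loss.backward()
    opt.step()

data_impression = x.detach()              # proxy sample for data-free tasks
```

In the data-free setting described above, many such impressions (generated per class) would together act as the transfer set for knowledge distillation or the other downstream tasks.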


