Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion

12/18/2019
by   Hongxu Yin, et al.
Princeton University
University of Illinois at Urbana-Champaign
NVIDIA

We introduce DeepInversion, a new method for synthesizing images from the image distribution used to train a deep neural network. We 'invert' a trained network (the teacher) to synthesize class-conditional input images starting from random noise, without using any additional information about the training dataset. Keeping the teacher fixed, our method optimizes the input while regularizing the distribution of intermediate feature maps using information stored in the teacher's batch normalization layers. Further, we improve the diversity of synthesized images with Adaptive DeepInversion, which maximizes the Jensen-Shannon divergence between the teacher and student network logits. The images synthesized from networks trained on the CIFAR-10 and ImageNet datasets demonstrate high fidelity and a high degree of realism, and help enable a new breed of data-free applications: ones that do not require any real images or labeled data. We demonstrate the applicability of our proposed method to three tasks of immense practical importance: (i) data-free network pruning, (ii) data-free knowledge transfer, and (iii) data-free continual learning.
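To make the two losses in the abstract concrete, below is a minimal PyTorch sketch of the idea: optimize a batch of noise images against a frozen teacher so that (a) the teacher classifies them as the target classes and (b) the batch statistics at every BatchNorm layer match the running statistics stored in the teacher, plus standard total-variation and L2 image priors. The names (BNStatHook, deep_invert, adi_competition_loss) and all hyperparameter values are illustrative assumptions, not the paper's settings; the official repository linked below is the reference implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BNStatHook:
        """Feature-distribution regularizer for one BatchNorm2d layer:
        penalizes the L2 distance between the batch statistics of the
        layer's input and the running statistics stored in the layer."""
        def __init__(self, bn):
            self.loss = 0.0
            self.handle = bn.register_forward_hook(self._hook)

        def _hook(self, module, inputs, output):
            x = inputs[0]
            mean = x.mean(dim=[0, 2, 3])
            var = x.var(dim=[0, 2, 3], unbiased=False)
            self.loss = (F.mse_loss(mean, module.running_mean)
                         + F.mse_loss(var, module.running_var))

    def adi_competition_loss(t_logits, s_logits):
        # Adaptive DeepInversion term: minimizing (1 - JSD) maximizes the
        # Jensen-Shannon divergence between teacher and student soft
        # predictions, steering synthesis toward images they disagree on.
        p_t, p_s = F.softmax(t_logits, dim=1), F.softmax(s_logits, dim=1)
        m = (0.5 * (p_t + p_s)).log()
        jsd = 0.5 * (F.kl_div(m, p_t, reduction="batchmean")
                     + F.kl_div(m, p_s, reduction="batchmean"))
        return 1.0 - jsd

    def deep_invert(teacher, targets, steps=2000, lr=0.1, bn_w=10.0,
                    tv_w=2.5e-5, l2_w=3e-8, shape=(3, 224, 224)):
        """Synthesize class-conditional images from a frozen teacher,
        starting from random noise. 'targets' is a LongTensor of class
        indices on the teacher's device; hyperparameters are placeholders."""
        teacher.eval()  # BN layers use (and keep) their running statistics
        hooks = [BNStatHook(m) for m in teacher.modules()
                 if isinstance(m, nn.BatchNorm2d)]
        device = next(teacher.parameters()).device
        x = torch.randn(len(targets), *shape, device=device,
                        requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.cross_entropy(teacher(x), targets)      # class target
            loss = loss + bn_w * sum(h.loss for h in hooks)  # BN feature prior
            tv = ((x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
                  + (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean())
            loss = loss + tv_w * tv + l2_w * x.pow(2).sum()  # image priors
            loss.backward()
            opt.step()
        for h in hooks:
            h.handle.remove()
        return x.detach()

For the Adaptive DeepInversion variant, the competition term would be added to the loss inside the loop once a student network exists, e.g. loss = loss + adi_competition_loss(teacher(x), student(x)), so that newly synthesized images target regions where the student has not yet matched the teacher.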

Related Research

04/10/2021
Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis
Knowledge distillation (KD) has proved to be an effective approach for d...

06/29/2023
NaturalInversion: Data-Free Image Synthesis Improving Real-World Consistency
We introduce NaturalInversion, a novel model inversion-based method to s...

07/03/2022
PrUE: Distilling Knowledge from Sparse Teacher Networks
Although deep neural networks have enjoyed remarkable success across a w...

05/23/2019
Zero-shot Knowledge Transfer via Adversarial Belief Matching
Performing knowledge transfer from a large teacher network to a smaller ...

11/22/2021
Adaptive Transfer Learning: a simple but effective transfer learning
Transfer learning (TL) leverages previously obtained knowledge to learn ...

10/26/2017
Knowledge Projection for Deep Neural Networks
While deeper and wider neural networks are actively pushing the performa...

03/26/2021
Synthesize-It-Classifier: Learning a Generative Classifier through Recurrent Self-analysis
In this work, we show the generative capability of an image classifier n...

Code Repositories

DeepInversion

Official PyTorch implementation of Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion (CVPR 2020)

