Contrastive Model Inversion for Data-Free Knowledge Distillation

05/18/2021
by Gongfan Fang, et al.

Model inversion, whose goal is to recover training data from a pre-trained model, has recently been shown to be feasible. However, existing inversion methods usually suffer from the mode collapse problem, where the synthesized instances are highly similar to each other and thus show limited effectiveness for downstream tasks such as knowledge distillation. In this paper, we propose Contrastive Model Inversion (CMI), in which data diversity is explicitly modeled as an optimizable objective to alleviate the mode collapse issue. Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination. To this end, CMI introduces a contrastive learning objective that encourages the instances being synthesized to be distinguishable from those already synthesized in previous batches. Experiments with pre-trained models on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI not only generates more visually plausible instances than state-of-the-art methods, but also achieves significantly superior performance when the generated data are used for knowledge distillation. Code is available at <https://github.com/zju-vipa/DataFree>.
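To make the idea concrete, below is a minimal PyTorch sketch of an InfoNCE-style contrastive objective that pushes newly synthesized instances away from the features of previously synthesized batches held in a memory bank. The function name, the memory-bank tensor, and the temperature value are illustrative assumptions and are not the official zju-vipa/DataFree implementation.

```python
import torch
import torch.nn.functional as F


def cmi_contrastive_loss(feat_view1, feat_view2, feature_bank, temperature=0.1):
    """InfoNCE-style loss (illustrative sketch, not the paper's exact code):
    two augmented views of each newly synthesized image form a positive pair,
    while features of previously synthesized images stored in `feature_bank`
    act as negatives, rewarding instance discrimination (and hence diversity)
    among the synthesized data."""
    q = F.normalize(feat_view1, dim=1)        # (B, D) query features
    k = F.normalize(feat_view2, dim=1)        # (B, D) positive features (second view)
    bank = F.normalize(feature_bank, dim=1)   # (N, D) negatives from earlier batches

    pos = (q * k).sum(dim=1, keepdim=True)    # (B, 1) positive logits
    neg = q @ bank.t()                        # (B, N) negative logits
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)    # the positive sits at index 0


# Toy usage: 16 new samples, 128-dim features, 512 banked negatives.
loss = cmi_contrastive_loss(torch.randn(16, 128),
                            torch.randn(16, 128),
                            torch.randn(512, 128))
```

In practice such a term would be optimized jointly with the usual inversion objectives, with the bank growing as new batches are synthesized; this sketches the instance-discrimination idea only, not the paper's full formulation.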

Related research

06/04/2023
Revisiting Data-Free Knowledge Distillation with Poisoned Teachers
Data-free knowledge distillation (KD) helps transfer knowledge from a pr...

10/30/2022
Dataset Distillation via Factorization
In this paper, we study dataset distillation (DD), from a novel per...

03/14/2023
Feature-Rich Audio Model Inversion for Data-Free Knowledge Distillation Towards General Sound Classification
Data-Free Knowledge Distillation (DFKD) has recently attracted growing a...

06/03/2023
Deep Classifier Mimicry without Data Access
Access to pre-trained models has recently emerged as a standard across n...

04/15/2021
Contrastive Learning with Stronger Augmentations
Representation learning has significantly been developed with the advanc...

05/16/2022
Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
Data-free knowledge distillation (DFKD) conducts knowledge distillation ...

12/12/2021
Up to 100x Faster Data-free Knowledge Distillation
Data-free knowledge distillation (DFKD) has recently been attracting inc...
