Deep Classifier Mimicry without Data Access

06/03/2023
by Steven Braun, et al.

Access to pre-trained models has recently emerged as a standard across numerous machine learning domains. Unfortunately, access to the original data the models were trained on is often not granted equally. This makes it tremendously challenging to fine-tune, compress, or continually adapt models, or to perform any other type of data-driven update. We posit, however, that original data access may not be required. Specifically, we propose Contrastive Abductive Knowledge Extraction (CAKE), a model-agnostic knowledge distillation procedure that mimics deep classifiers without access to the original data. To this end, CAKE generates pairs of noisy synthetic samples and diffuses them contrastively toward a model's decision boundary. We empirically corroborate CAKE's effectiveness on several benchmark datasets and across various architectural choices, paving the way for broad application.
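The core idea, per the abstract, is to start from pure noise and contrastively diffuse sample pairs toward the teacher's decision boundary, so that boundary-revealing samples can be synthesized without any original data. The following is a minimal illustrative sketch, not the authors' implementation: the deep teacher is replaced by a hypothetical fixed linear classifier so the boundary geometry is explicit, and the "contrastive diffusion" is reduced to gradient steps that pull the two members of each pair onto opposite sides of the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained teacher": a fixed 2-D linear classifier w.x + b.
# (Assumption for illustration; the real method is model-agnostic and
# works with deep classifiers.)
w = np.array([1.0, -2.0])
b = 0.5

def teacher_logit(x):
    """Teacher's decision function; the boundary is logit == 0."""
    return x @ w + b

# Pairs of noisy synthetic samples -- no original data is touched.
pairs = rng.normal(size=(64, 2, 2))  # (num_pairs, 2 members, 2 features)

# Contrastive targets: small logits of opposite sign, i.e. the two
# members of each pair should end up just on opposite sides of the
# decision boundary.
target = np.stack([np.full(64, 0.1), np.full(64, -0.1)], axis=1)

# "Diffuse" the samples via gradient descent on a squared logit loss.
lr = 0.05
for _ in range(200):
    logits = teacher_logit(pairs)                 # (64, 2)
    grad = 2.0 * (logits - target)[..., None] * w  # d loss / d x
    pairs = pairs - lr * grad

final = teacher_logit(pairs)
# final[:, 0] converges to +0.1 and final[:, 1] to -0.1: each pair now
# straddles the boundary, yielding samples a student could be distilled on.
```

In this toy setting the update contracts the logit error geometrically (each step scales it by `1 - 2 * lr * ||w||^2`), so the synthesized pairs end up hugging the boundary from both sides. A subsequent distillation step would then train a student to match the teacher's outputs on such boundary-supporting samples.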


research
06/04/2023

Revisiting Data-Free Knowledge Distillation with Poisoned Teachers

Data-free knowledge distillation (KD) helps transfer knowledge from a pr...
research
05/16/2022

Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt

Data-free knowledge distillation (DFKD) conducts knowledge distillation ...
research
05/18/2021

Contrastive Model Inversion for Data-Free Knowledge Distillation

Model inversion, whose goal is to recover training data from a pre-train...
research
12/31/2021

Data-Free Knowledge Transfer: A Survey

In the last decade, many deep learning models have been well trained and...
research
05/28/2023

Learning to Learn from APIs: Black-Box Data-Free Meta-Learning

Data-free meta-learning (DFML) aims to enable efficient learning of new ...
research
12/27/2019

DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier

In this era of digital information explosion, an abundance of data from ...
research
12/31/2022

Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples

There is a growing interest in developing unlearnable examples (UEs) aga...
