Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model

06/07/2021
by Zi Wang, et al.

Knowledge distillation (KD) is a successful approach to deep neural network acceleration, in which a compact network (the student) is trained by mimicking the softmax output of a pre-trained high-capacity network (the teacher). Traditionally, KD relies on access to the training samples and to the parameters of the white-box teacher in order to acquire the transferred knowledge. However, these prerequisites are not always realistic, due to storage costs or privacy issues in real-world applications. Here we propose the concept of decision-based black-box (DB3) knowledge distillation, in which the student is trained by distilling knowledge from a black-box teacher (whose parameters are not accessible) that returns only class predictions rather than softmax outputs. We start with the scenario in which the training set is accessible. We characterize a sample's robustness against the other classes by computing its distances to the teacher's decision boundaries, and use these distances to construct a soft label for each training sample; the student can then be trained via standard KD. We then extend this approach to a more challenging scenario in which even the training data are not accessible. We propose to generate pseudo samples that are maximally distinguished by the teacher's decision boundaries and to construct soft labels for them, which together serve as the transfer set. We evaluate our approaches on various benchmark networks and datasets, and the experimental results demonstrate their effectiveness. Code is available at: https://github.com/zwang84/zsdb3kd.
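As a rough illustration of the label-construction step in the first scenario, the sketch below estimates a sample's distance to the teacher's decision boundary for each class using only hard-label queries (a search along a random direction), and then converts those distances into a soft label. This is a minimal sketch, not the paper's exact procedure: the helper names (estimate_boundary_distance, soft_label_from_distances), the random-direction binary search, and the temperature-softmax mapping from distances to probabilities are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def estimate_boundary_distance(teacher_label_fn, x, target_class,
                               tol=1e-3, max_steps=50):
    """Estimate how far sample x must move along a random direction before
    the hard-label black-box teacher predicts `target_class`.
    A smaller distance means the sample is less robust against that class."""
    direction = torch.randn_like(x)
    direction = direction / direction.norm()

    # Exponentially grow the step until the teacher's decision flips.
    hi = 1.0
    for _ in range(max_steps):
        if teacher_label_fn(x + hi * direction) == target_class:
            break
        hi *= 2.0
    else:
        return float("inf")  # boundary not reached along this direction

    # Binary search for the boundary crossing between 0 and hi.
    lo = 0.0
    for _ in range(max_steps):
        mid = 0.5 * (lo + hi)
        if teacher_label_fn(x + mid * direction) == target_class:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return hi


def soft_label_from_distances(distances, true_class, temperature=1.0):
    """Map per-class boundary distances to a soft label: classes whose
    boundary is closer receive more probability mass, and the true class
    receives the largest share."""
    d = torch.as_tensor(distances, dtype=torch.float32)
    # Classes whose boundary was never reached are treated as very far away.
    d = torch.where(torch.isfinite(d), d, torch.full_like(d, 1e6))
    d[true_class] = 0.0                 # the sample already lies in its own class
    logits = -d / temperature           # closer boundary -> larger logit
    return F.softmax(logits, dim=0)


# Sketch of usage (teacher_label_fn wraps a single hard-label query):
# distances = [estimate_boundary_distance(teacher_label_fn, x, c)
#              for c in range(num_classes)]
# soft_label = soft_label_from_distances(distances, true_class=y)
```

Once such soft labels are built for the transfer set, the student is trained with an ordinary KD loss against them, as in standard white-box distillation.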


