Undistillable: Making A Nasty Teacher That CANNOT teach students

05/16/2021
by Haoyu Ma, et al.

Knowledge Distillation (KD) is a widely used technique to transfer knowledge from pre-trained teacher models to (usually more lightweight) student models. However, in certain situations, this technique is more of a curse than a blessing. For instance, KD poses a potential risk of exposing intellectual property (IP): even if a trained machine learning model is released only as a 'black box' (e.g., as executable software or an API without open-sourced code), it can still be replicated by KD through imitating its input-output behavior. To prevent this unwanted effect of KD, this paper introduces and investigates a concept called Nasty Teacher: a specially trained teacher network that yields nearly the same performance as a normal one, but significantly degrades the performance of student models that learn by imitating it. We propose a simple yet effective algorithm to build the nasty teacher, called self-undermining knowledge distillation. Specifically, we aim to maximize the difference between the output of the nasty teacher and that of a normal pre-trained network. Extensive experiments on several datasets demonstrate that our method is effective against both standard KD and data-free KD, providing the desirable KD-immunity to model owners for the first time. We hope our preliminary study can draw more awareness and interest to this new practical problem, which is of both social and legal importance.
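The abstract describes self-undermining knowledge distillation only at a high level. A minimal PyTorch-style sketch of one plausible reading follows: keep the nasty teacher accurate via cross-entropy while pushing its softened outputs away from those of a normal pre-trained network. The function name, the temperature, and the weight omega are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def self_undermining_loss(nasty_logits, normal_logits, labels,
                          temperature=4.0, omega=0.01):
    # Assumed reading of the abstract, not the paper's exact formulation.
    # Cross-entropy preserves the nasty teacher's own accuracy.
    ce = F.cross_entropy(nasty_logits, labels)
    # KL divergence between the softened output distributions; subtracting it
    # from the loss maximizes the gap to the normal pre-trained network.
    kl = F.kl_div(
        F.log_softmax(nasty_logits / temperature, dim=1),
        F.softmax(normal_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return ce - omega * kl

Minimizing this loss during training would, under these assumptions, yield a teacher whose predictions remain accurate but whose output distribution is uninformative (or misleading) for a student that tries to distill from it.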


Related research

03/23/2021  Student Network Learning via Evolutionary Knowledge Distillation
Knowledge distillation provides an effective way to transfer knowledge v...

03/26/2022  Knowledge Distillation with the Reused Teacher Classifier
Knowledge distillation aims to compress a powerful yet cumbersome teache...

12/05/2021  Safe Distillation Box
Knowledge distillation (KD) has recently emerged as a powerful strategy ...

05/19/2020  Learning from a Lightweight Teacher for Efficient Knowledge Distillation
Knowledge Distillation (KD) is an effective framework for compressing de...

06/04/2023  Revisiting Data-Free Knowledge Distillation with Poisoned Teachers
Data-free knowledge distillation (KD) helps transfer knowledge from a pr...

12/19/2022  KNIFE: Knowledge Distillation with Free-Text Rationales
Free-text rationales (FTRs) follow how humans communicate by explaining ...

11/23/2020  Generative Adversarial Simulator
Knowledge distillation between machine learning models has opened many n...
