Knowledge Distillation with Adversarial Samples Supporting Decision Boundary

05/15/2018
by Byeongho Heo, et al.

Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network to improve the learning process of a new one, but finding a good technique for knowledge distillation remains an open problem. In this paper, we provide a new perspective based on the decision boundary, one of the most important components of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundary, so a good classifier has a good decision boundary. Transferring information closely related to the decision boundary is therefore a promising approach to knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting the decision boundary. Based on this idea, the proposed algorithm trains a student classifier on adversarial samples supporting the decision boundary, thereby transferring more accurate information about that boundary. Experiments show that the proposed method indeed improves knowledge distillation and achieves state-of-the-art performance.
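To make the idea concrete, below is a minimal PyTorch sketch of boundary-oriented distillation. It is an illustration under assumptions, not the paper's exact algorithm: the iterative margin-based attack, the KL-based matching losses, and all hyperparameters (step_size, num_steps, temperature T, weight alpha) are placeholders chosen for readability.

```python
import torch
import torch.nn.functional as F

def boundary_supporting_samples(teacher, x, labels, target_class,
                                step_size=0.01, num_steps=10):
    """Perturb inputs toward the teacher's decision boundary.

    Illustrative iterative attack (an assumption, not the paper's exact
    scheme): each sample is pushed toward a chosen target class by
    descending the margin between the true and target class logits,
    so the perturbed sample ends up near the boundary between them.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(num_steps):
        logits = teacher(x_adv)
        # Margin between the true class and the attack target class;
        # driving it toward zero moves x_adv onto the boundary.
        margin = (logits.gather(1, labels[:, None])
                  - logits.gather(1, target_class[:, None]))
        grad = torch.autograd.grad(margin.sum(), x_adv)[0]
        x_adv = (x_adv - step_size * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()

def distillation_step(student, teacher, x, labels, target_class,
                      T=4.0, alpha=0.5):
    """One training step: standard knowledge-distillation loss on clean
    samples, plus a matching loss on boundary-supporting adversarial
    samples so the student also learns the teacher's boundary shape."""
    x_bss = boundary_supporting_samples(teacher, x, labels, target_class)
    with torch.no_grad():
        t_clean = teacher(x)
        t_bss = teacher(x_bss)
    s_clean, s_bss = student(x), student(x_bss)

    ce = F.cross_entropy(s_clean, labels)
    kd = F.kl_div(F.log_softmax(s_clean / T, 1), F.softmax(t_clean / T, 1),
                  reduction="batchmean") * T * T
    # Transfer boundary information: match the student to the teacher
    # on samples that lie near the teacher's decision boundary.
    bd = F.kl_div(F.log_softmax(s_bss / T, 1), F.softmax(t_bss / T, 1),
                  reduction="batchmean") * T * T
    return ce + alpha * (kd + bd)
```

One natural (but again assumed) choice for target_class is the teacher's runner-up prediction on the clean input, e.g. torch.topk(teacher(x), 2).indices[:, 1], which aims the attack at the nearest competing class; other targeting schemes are possible.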

Related research

05/15/2018 · Improving Knowledge Distillation with Supporting Adversarial Samples
Many recent works on knowledge distillation have provided ways to transf...

01/12/2023 · Effective Decision Boundary Learning for Class Incremental Learning
Rehearsal approaches in class incremental learning (CIL) suffer from dec...

12/01/2021 · Information Theoretic Representation Distillation
Despite the empirical success of knowledge distillation, there still lac...

10/15/2020 · Spherical Knowledge Distillation
Knowledge distillation aims at obtaining a small but effective deep mode...

11/13/2019 · Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation
Despite the recent works on knowledge distillation (KD) have achieved a ...

05/05/2022 · Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems
Adversarial attack perturbs an image with an imperceptible noise, leadin...

05/27/2021 · Towards Understanding Knowledge Distillation
Knowledge distillation, i.e., one classifier being trained on the output...
