On the benefits of knowledge distillation for adversarial robustness

03/14/2022
by Javier Maroto, et al.

Knowledge distillation is normally used to compress a big network, or teacher, into a smaller one, the student, by training the student to match the teacher's outputs. Recently, some works have shown that robustness against adversarial attacks can also be distilled effectively, achieving good robustness on mobile-friendly models. In this work, however, we take a different point of view and show that knowledge distillation can be used directly to boost the performance of state-of-the-art models in adversarial robustness. In this sense, we present a thorough analysis and provide general guidelines to distill knowledge from a robust teacher and boost the clean and adversarial performance of a student model even further. To that end, we present Adversarial Knowledge Distillation (AKD), a new framework to improve a model's robust performance, which consists in adversarially training a student on a mixture of the original labels and the teacher's outputs. Through carefully controlled ablation studies, we show that early stopping, model ensembles and weak adversarial training are key techniques to maximize the performance of the student, and we show that these insights generalize across different robust distillation techniques. Finally, we provide insights into the effect of robust knowledge distillation on the dynamics of the student network, and show that AKD mostly improves the calibration of the network and modifies its training dynamics on samples that the model finds difficult to learn, or even memorize.
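The abstract describes the core recipe of AKD: craft adversarial examples for the student and train it on a loss that mixes the hard ground-truth labels with the soft outputs of a robust teacher. The PyTorch sketch below illustrates one such training step under stated assumptions; the PGD settings, the choice to craft attacks with the student's own cross-entropy loss, and the mixing weight alpha and temperature T are illustrative placeholders, not values or design choices taken from the paper.

```python
# Minimal sketch of one Adversarial Knowledge Distillation (AKD) step, assuming a
# PGD attack on the student and a label/teacher mixture loss. Hyperparameters are
# illustrative, not the paper's.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    """Standard PGD on cross-entropy; fewer iterations or a smaller eps would
    give the 'weak adversarial training' variant mentioned in the abstract."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def akd_step(student, teacher, x, y, optimizer, alpha=0.5, T=1.0):
    """One update: adversarial examples are crafted for the student, and the loss
    mixes hard-label cross-entropy with KL divergence to the robust teacher."""
    student.train()
    teacher.eval()
    x_adv = pgd_attack(student, x, y)
    logits = student(x_adv)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x_adv) / T, dim=1)
    ce = F.cross_entropy(logits, y)                       # original labels
    kd = F.kl_div(F.log_softmax(logits / T, dim=1),
                  teacher_probs, reduction="batchmean") * T * T   # teacher outputs
    loss = (1 - alpha) * ce + alpha * kd                  # mixture of the two
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the teacher would be a pretrained robust model (or an ensemble of them, averaged before the softmax), and early stopping would be applied to the student's training loop based on robust validation accuracy, in line with the techniques the abstract highlights.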

Related research

Adversarially Robust Distillation (05/23/2019)
Knowledge distillation is effective for producing small high-performance...

ARDIR: Improving Robustness using Knowledge Distillation of Internal Representation (11/01/2022)
Adversarial training is the most promising method for learning robust mo...

Improving Neural ODEs via Knowledge Distillation (03/10/2022)
Neural Ordinary Differential Equations (Neural ODEs) construct the conti...

Mapping Emulation for Knowledge Distillation (05/21/2022)
This paper formalizes the source-blind knowledge distillation problem th...

Model Robustness Meets Data Privacy: Adversarial Robustness Distillation without Original Data (03/21/2023)
Large-scale deep learning models have achieved great performance based o...

Maximum Likelihood Distillation for Robust Modulation Classification (11/01/2022)
Deep Neural Networks are being extensively used in communication systems...

Faithful Knowledge Distillation (06/07/2023)
Knowledge distillation (KD) has received much attention due to its succe...