Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better

08/18/2021
by Bojia Zi et al.

Adversarial training is one effective approach for training robust deep neural networks against adversarial attacks. While it can deliver reliable robustness, adversarial training (AT) methods generally favor high-capacity models, i.e., the larger the model, the better the robustness. This tends to limit their effectiveness on small models, which are preferable in scenarios where storage or computing resources are very limited (e.g., mobile devices). In this paper, we leverage the concept of knowledge distillation to improve the robustness of small models by distilling from adversarially trained large models. We first revisit several state-of-the-art AT methods from a distillation perspective and identify one common technique that leads to improved robustness: the use of robust soft labels, i.e., the predictions of a robust model. Following this observation, we propose a novel adversarial robustness distillation method called Robust Soft Label Adversarial Distillation (RSLAD) to train robust small student models. RSLAD fully exploits the robust soft labels produced by a robust (adversarially trained) large teacher model to guide the student's learning on both natural and adversarial examples in all loss terms. We empirically demonstrate the effectiveness of RSLAD over existing adversarial training and distillation methods in improving the robustness of small models against state-of-the-art attacks, including AutoAttack. We also provide a set of analyses of RSLAD and of the importance of robust soft labels for adversarial robustness distillation.
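To make the distillation objective described in the abstract concrete, below is a minimal PyTorch sketch of one RSLAD-style training step. The abstract only states that the teacher's robust soft labels supervise both the natural and adversarial loss terms; the specific form shown here (a weighted pair of KL terms with a PGD-style inner maximization against the soft labels), the function names, and all hyperparameter values (alpha, eps, step_size, steps) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def kl_to_teacher(student_logits, teacher_probs):
    # KL(teacher || student), averaged over the batch.
    return F.kl_div(F.log_softmax(student_logits, dim=1),
                    teacher_probs, reduction="batchmean")

def rslad_step(student, teacher, x, alpha=5.0 / 6.0,
               eps=8.0 / 255.0, step_size=2.0 / 255.0, steps=10):
    # Robust soft labels: the adversarially trained teacher's predictions
    # on the NATURAL examples; no hard labels appear in any loss term.
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x), dim=1)

    # Inner maximization (PGD-style): craft adversarial examples that push
    # the student's predictions away from the teacher's robust soft labels.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        adv_loss = kl_to_teacher(student(x_adv), teacher_probs)
        grad, = torch.autograd.grad(adv_loss, x_adv)
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)

    # Outer minimization: natural and adversarial terms, both supervised
    # by the same robust soft labels from the teacher.
    natural_term = kl_to_teacher(student(x), teacher_probs)
    robust_term = kl_to_teacher(student(x_adv.detach()), teacher_probs)
    return (1.0 - alpha) * natural_term + alpha * robust_term
```

In a training loop, one would keep the teacher frozen in eval mode and simply call `loss = rslad_step(student, teacher, x)`, then `loss.backward()` and an optimizer step on the student's parameters.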

