LTD: Low Temperature Distillation for Robust Adversarial Training

11/03/2021
by Erh-Chung Chen, et al.

Adversarial training has been widely used to enhance the robustness of neural network models against adversarial attacks. However, there is still a notable gap between natural accuracy and robust accuracy. We found that one of the reasons is that the commonly used labels, one-hot vectors, hinder the learning process for image recognition. In this paper, we propose a method called Low Temperature Distillation (LTD), which builds on the knowledge distillation framework to generate the desired soft labels. Unlike previous work, LTD uses a relatively low temperature in the teacher model and employs different, but fixed, temperatures for the teacher and student models. Moreover, we investigate methods to synergize the use of natural data and adversarial data in LTD. Experimental results show that, without extra unlabeled data, the proposed method combined with previous work achieves 57.72% and 30.36% robust accuracy on the CIFAR-10 and CIFAR-100 datasets respectively, an improvement of about 1.21% on average over state-of-the-art methods.
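Below is a minimal PyTorch-style sketch of the idea described in the abstract: a teacher softened with a relatively low, fixed temperature produces soft labels, and the student is trained against them with its own (possibly different) fixed temperature. The function names, the teacher temperature of 0.5, and the toy training step are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def ltd_style_soft_labels(teacher_logits, temperature=0.5):
    """Soften teacher logits with a (low) fixed temperature.

    A low temperature keeps the soft labels sharper than classic
    high-temperature distillation targets. The value 0.5 is an
    illustrative assumption.
    """
    return F.softmax(teacher_logits / temperature, dim=1)


def distillation_loss(student_logits, soft_labels, student_temperature=1.0):
    """Cross-entropy between teacher soft labels and the student's
    temperature-scaled predictions; teacher and student temperatures
    may differ, as in LTD."""
    log_probs = F.log_softmax(student_logits / student_temperature, dim=1)
    return -(soft_labels * log_probs).sum(dim=1).mean()


if __name__ == "__main__":
    # Toy demo with random logits standing in for teacher outputs on
    # natural images and student outputs on adversarial examples.
    teacher_logits = torch.randn(8, 10)
    student_logits = torch.randn(8, 10, requires_grad=True)

    with torch.no_grad():
        soft = ltd_style_soft_labels(teacher_logits, temperature=0.5)

    loss = distillation_loss(student_logits, soft, student_temperature=1.0)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```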

