Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free

10/22/2020
by Haotao Wang, et al.

Adversarial training and its many variants substantially improve deep network robustness, yet at the cost of compromising standard accuracy. Moreover, the training process is heavy, making it impractical to thoroughly explore the trade-off between accuracy and robustness. This paper asks a new question: how can a trained model be quickly calibrated in situ to examine the achievable trade-offs between its standard and robust accuracies, without (re-)training it many times? Our proposed framework, Once-for-all Adversarial Training (OAT), is built on an innovative model-conditional training approach that takes a controlling hyper-parameter as input. The trained model can then be adjusted to different standard and robust accuracy trade-offs "for free" at testing time. As an important knob, we exploit dual batch normalization to separate standard and adversarial feature statistics, so that both can be learned in one model without degrading performance. We further extend OAT to a Once-for-all Adversarial Training and Slimming (OATS) framework that allows a joint trade-off among accuracy, robustness, and runtime efficiency. Experiments show that, without any re-training or ensembling, OAT/OATS achieve performance similar or even superior to dedicatedly trained models at various configurations. Our code and pretrained models are available at: https://github.com/VITA-Group/Once-for-All-Adversarial-Training.
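To make the two key ideas in the abstract concrete, here is a minimal PyTorch-style sketch of (1) a dual batch normalization block that keeps separate statistics for clean and adversarial inputs, and (2) a training loss that mixes the standard and adversarial terms with a sampled hyper-parameter lambda, which also conditions the model. The module, function, and argument names (e.g., `DualBN2d`, `oat_style_loss`, `is_adv`, `lam`) are illustrative assumptions, not the official repository's interface, and the exact weighting convention may differ from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualBN2d(nn.Module):
    """BatchNorm layer with separate statistics for clean vs. adversarial inputs."""

    def __init__(self, num_features):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)
        self.bn_adv = nn.BatchNorm2d(num_features)

    def forward(self, x, is_adv: bool):
        # Route the batch through the branch matching its input distribution.
        return self.bn_adv(x) if is_adv else self.bn_clean(x)


def oat_style_loss(model, x_clean, x_adv, y, lam):
    """Mix standard and adversarial cross-entropy with weight `lam`.

    `model` is assumed (hypothetically) to accept the conditioning value `lam`
    and a flag selecting which BN branch to use.
    """
    logits_clean = model(x_clean, lam=lam, is_adv=False)
    logits_adv = model(x_adv, lam=lam, is_adv=True)
    loss_std = F.cross_entropy(logits_clean, y)
    loss_rob = F.cross_entropy(logits_adv, y)
    return (1 - lam) * loss_std + lam * loss_rob


# During training, lambda would be sampled per batch (e.g., from a fixed set);
# at test time the user picks lambda to trade standard accuracy for robustness.
```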


Related research

03/27/2023  CAT: Collaborative Adversarial Training
Adversarial training can improve the robustness of neural networks. Prev...

11/25/2020  Advancing diagnostic performance and clinical usability of neural networks via adversarial training and dual batch normalization
Unmasking the decision-making process of machine learning models is esse...

10/27/2022  Efficient and Effective Augmentation Strategy for Adversarial Training
Adversarial training of Deep Neural Networks is known to be significantl...

02/10/2019  Adversarially Trained Model Compression: When Robustness Meets Efficiency
The robustness of deep models to adversarial attacks has gained signific...

10/10/2022  Revisiting adapters with adversarial training
While adversarial training is generally used as a defense mechanism, rec...

05/19/2019  What Do Adversarially Robust Models Look At?
In this paper, we address the open question: "What do adversarially robu...

04/05/2023  Hyper-parameter Tuning for Adversarially Robust Models
This work focuses on the problem of hyper-parameter tuning (HPT) for rob...
