A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning

12/25/2020
by Ahmadreza Jeddi, et al.

Adversarial Training (AT) with Projected Gradient Descent (PGD) is an effective approach for improving the robustness of deep neural networks. However, PGD AT has been shown to suffer from two main limitations: i) high computational cost, and ii) extreme overfitting during training, which reduces model generalization. While the effect of factors such as model capacity and the scale of training data on adversarial robustness has been studied extensively, little attention has been paid to the effect on adversarial robustness of a parameter central to every network optimization: the learning rate. In particular, we hypothesize that effective learning rate scheduling during adversarial training can significantly reduce overfitting, to the degree that one does not even need to adversarially train a model from scratch but can instead simply adversarially fine-tune a pre-trained model. Motivated by this hypothesis, we propose a simple yet very effective adversarial fine-tuning approach based on a slow start, fast decay learning rate scheduling strategy, which not only significantly reduces the required computational cost but also greatly improves the accuracy and robustness of a deep neural network. Experimental results show that the proposed adversarial fine-tuning approach outperforms state-of-the-art methods on the CIFAR-10, CIFAR-100, and ImageNet datasets in both test accuracy and robustness, while reducing the computational cost by 8-10×. Furthermore, an important benefit of the proposed approach is that it enables the robustness of any pre-trained deep neural network to be improved without training the model from scratch, which, to the best of the authors' knowledge, has not been previously demonstrated in the research literature.
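As a rough illustration of the approach described above, the sketch below fine-tunes an already pre-trained model on PGD adversarial examples under a slow-start, fast-decay learning rate schedule. The warmup length, decay shape, PGD settings, and optimizer hyperparameters are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    # L-infinity PGD attack with a random start (assumed attack settings).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def slow_start_fast_decay_lr(epoch, base_lr=0.01, warmup_epochs=5,
                             total_epochs=15, min_lr=1e-4):
    # Ramp the learning rate up linearly ("slow start"), then decay it
    # quadratically toward min_lr ("fast decay"). The exact shape and
    # constants are assumptions for illustration.
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return max(min_lr, base_lr * (1.0 - progress) ** 2)

def adversarial_finetune(model, loader, total_epochs=15, device="cuda"):
    # Fine-tune a pre-trained model on adversarial examples only,
    # rather than adversarially training it from scratch.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=5e-4)
    model.to(device).train()
    for epoch in range(total_epochs):
        lr = slow_start_fast_decay_lr(epoch, total_epochs=total_epochs)
        for group in optimizer.param_groups:
            group["lr"] = lr
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y)
            optimizer.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            optimizer.step()
    return model

The short schedule reflects the abstract's claim that fine-tuning needs far fewer epochs than full adversarial training; the 8-10× cost reduction reported in the paper would come from this shorter budget, not from the specific decay curve assumed here.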


Related research

03/20/2023: TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization
Recent years have seen the ever-increasing importance of pre-trained mod...

06/27/2023: MAT: Mixed-Strategy Game of Adversarial Training in Fine-tuning
Fine-tuning large-scale pre-trained language models has been demonstrate...

10/25/2019: A Simple Dynamic Learning Rate Tuning Algorithm For Automated Training of DNNs
Training neural networks on image datasets generally require extensive e...

11/14/2022: Efficient Adversarial Training with Robust Early-Bird Tickets
Adversarial training is one of the most powerful methods to improve the ...

03/28/2020: Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
Pretrained models from self-supervision are prevalently used in fine-tun...

04/05/2023: Hyper-parameter Tuning for Adversarially Robust Models
This work focuses on the problem of hyper-parameter tuning (HPT) for rob...

06/13/2023: Rethinking Adversarial Training with A Simple Baseline
We report competitive results on RobustBench for CIFAR and SVHN using a ...
