Consistency Regularization for Adversarial Robustness

03/08/2021
by   Jihoon Tack, et al.

Adversarial training (AT) is currently one of the most successful methods for obtaining adversarial robustness in deep neural networks. However, a significant generalization gap in the robustness obtained from AT has been problematic, forcing practitioners to rely on a bag of tricks for successful training, e.g., early stopping. In this paper, we investigate data augmentation (DA) techniques to address this issue. In contrast to previous reports in the literature that DA is not effective for regularizing AT, we discover that DA can mitigate overfitting in AT surprisingly well, provided the augmentations are chosen deliberately. To exploit the effect of DA further, we propose a simple yet effective auxiliary 'consistency' regularization loss, which forces the predictive distributions obtained after attacking two different augmentations of the same input to be similar to each other. Our experimental results demonstrate that this simple regularization scheme is applicable to a wide range of AT methods, showing consistent yet significant improvements in test robust accuracy. More remarkably, we also show that our method can significantly help the model generalize its robustness to unseen adversaries, e.g., other types or larger perturbations than those used during training. Code is available at https://github.com/alinlab/consistency-adversarial.
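The core idea above, pulling the predictive distributions of two attacked augmentations toward each other, can be sketched as a divergence between the two softmax outputs. The following is a minimal numpy illustration assuming a Jensen-Shannon-style consistency term; the exact loss, temperature scaling, and weighting used in the paper may differ, and the function names here are illustrative, not from the authors' code.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q) per example; eps guards against log(0).
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def consistency_loss(logits_a, logits_b):
    """Jensen-Shannon-style consistency between the predictive
    distributions of two (adversarially attacked) augmented views:
    both are pulled toward their mixture distribution."""
    p, q = softmax(logits_a), softmax(logits_b)
    m = 0.5 * (p + q)
    return float(np.mean(0.5 * kl(p, m) + 0.5 * kl(q, m)))
```

In training, `logits_a` and `logits_b` would come from forwarding adversarial examples crafted on two independent augmentations of the same batch; the loss is zero when the two predictive distributions agree and grows as they diverge.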

Related research
01/24/2023

Data Augmentation Alone Can Improve Adversarial Training

Adversarial training suffers from the issue of robust overfitting, which...
08/20/2023

Towards Generalizable Morph Attack Detection with Consistency Regularization

Though recent studies have made significant progress in morph attack det...
06/12/2023

AROID: Improving Adversarial Robustness through Online Instance-wise Data Augmentation

Deep neural networks are vulnerable to adversarial examples. Adversarial...
11/13/2022

Adversarial and Random Transformations for Robust Domain Adaptation and Generalization

Data augmentation has been widely used to improve generalization in trai...
05/08/2019

Does Data Augmentation Lead to Positive Margin?

Data augmentation (DA) is commonly used during model training, as it sig...
12/16/2022

Better May Not Be Fairer: Can Data Augmentation Mitigate Subgroup Degradation?

It is no secret that deep learning models exhibit undesirable behaviors ...
01/16/2020

Increasing the robustness of DNNs against image corruptions by playing the Game of Noise

The human visual system is remarkably robust against a wide range of nat...
