Overfitting in adversarially robust deep learning

02/26/2020
by Leslie Rice, et al.

It is common practice in deep learning to use overparameterized networks and train for as long as possible; there are numerous studies that show, both theoretically and empirically, that such practices surprisingly do not unduly harm the generalization performance of the classifier. In this paper, we empirically study this phenomenon in the setting of adversarially trained deep networks, which are trained to minimize the loss under worst-case adversarial perturbations. We find that overfitting to the training set does in fact harm robust performance to a very large degree in adversarially robust training across multiple datasets (SVHN, CIFAR-10, CIFAR-100, and ImageNet) and perturbation models (ℓ_∞ and ℓ_2). Based upon this observed effect, we show that the performance gains of virtually all recent algorithmic improvements upon adversarial training can be matched by simply using early stopping. We also show that effects such as the double descent curve do still occur in adversarially trained models, yet fail to explain the observed overfitting. Finally, we study several classical and modern deep learning remedies for overfitting, including regularization and data augmentation, and find that no approach in isolation improves significantly upon the gains achieved by early stopping. All code for reproducing the experiments as well as pretrained model weights and training logs can be found at https://github.com/locuslab/robust_overfitting.
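The remedy highlighted in the abstract, early stopping based on robust validation performance during PGD adversarial training, can be summarized in a short sketch. The code below is an illustrative, hypothetical implementation in PyTorch, not the authors' exact setup: the model, data loaders, and hyperparameters (epsilon, step size, number of PGD steps) are placeholders, and the authors' own code is available in the linked repository.

# Hypothetical sketch: L-infinity PGD adversarial training with early stopping
# on robust validation accuracy. Assumes inputs are scaled to [0, 1].
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Projected gradient descent within an L-infinity ball of radius eps.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = (x + delta).clamp(0, 1) - x   # keep perturbed images in [0, 1]
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    # Accuracy under PGD perturbations; used as the early-stopping criterion.
    model.eval()
    correct = total = 0
    for x, y in loader:
        delta = pgd_attack(model, x, y, **attack_kwargs)
        with torch.no_grad():
            correct += (model(x + delta).argmax(1) == y).sum().item()
        total += y.size(0)
    return correct / total

def adversarial_train(model, train_loader, val_loader, epochs=100, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4)
    best_acc, best_state = 0.0, None
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            # Train on the worst-case (adversarially perturbed) examples.
            delta = pgd_attack(model, x, y)
            loss = F.cross_entropy(model(x + delta), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Early stopping: checkpoint the weights with the best robust validation
        # accuracy, since robust test error can degrade long after robust
        # training error continues to fall.
        acc = robust_accuracy(model, val_loader)
        if acc > best_acc:
            best_acc = acc
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    if best_state is not None:
        model.load_state_dict(best_state)
    return model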

Related research

03/03/2021  On the effectiveness of adversarial training against common corruptions
The literature on robustness towards common corruptions shows no consens...

01/12/2020  Fast is better than free: Revisiting adversarial training
Adversarial training, a method for learning robust deep networks, is typ...

08/07/2022  Adversarial Robustness Through the Lens of Convolutional Filters
Deep learning models are intrinsically sensitive to distribution shifts ...

10/03/2022  Stability Analysis and Generalization Bounds of Adversarial Training
In adversarial machine learning, deep neural networks can fit the advers...

11/29/2022  LUMix: Improving Mixup by Better Modelling Label Uncertainty
Modern deep networks can be better generalized when trained with noisy s...

06/23/2023  Predicting Grokking Long Before it Happens: A look into the loss landscape of models which grok
This paper focuses on predicting the occurrence of grokking in neural ne...

09/27/2022  Measuring Overfitting in Convolutional Neural Networks using Adversarial Perturbations and Label Noise
Although numerous methods to reduce the overfitting of convolutional neu...