The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization

02/25/2020
by   Yifei Min, et al.

Despite remarkable success, deep neural networks are sensitive to small, human-imperceptible perturbations of the data and can be adversarially misled into producing incorrect or even dangerous predictions. To mitigate this, practitioners use adversarial training to produce adversarially robust models whose predictions remain stable under such perturbations. It is widely believed that more training data helps adversarially robust models generalize better on test data. In this paper, however, we challenge this conventional belief and show that more training data can hurt the generalization of adversarially robust models for the linear classification problem. We identify three regimes based on the strength of the adversary. In the weak adversary regime, more data improves the generalization of adversarially robust models. In the medium adversary regime, the generalization loss exhibits a double descent curve as the amount of training data grows, so there is an intermediate stage where more training data hurts generalization. In the strong adversary regime, more data almost immediately causes the generalization error to increase.
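To make the setting concrete, the sketch below adversarially trains a linear classifier against an l_inf-bounded adversary and sweeps the training-set size. For a linear model the worst-case perturbation has a closed form, so the robust margin is y*(w.x + b) - eps*||w||_1 and adversarial training reduces to minimizing a robust logistic loss. This is a minimal sketch under stated assumptions, not the paper's exact model: the Gaussian-mixture data, logistic loss, dimension, and step sizes are illustrative choices, and eps only loosely plays the role of the adversary strength that separates the three regimes.

```python
import numpy as np

def adversarial_train(X, y, eps, lr=0.1, steps=2000):
    """Gradient descent on the robust logistic loss of a linear model.

    For an l_inf adversary with budget eps, the worst-case perturbation of a
    linear classifier is known in closed form, giving the robust margin
    y * (w . x + b) - eps * ||w||_1.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        margin = y * (X @ w + b) - eps * np.sum(np.abs(w))
        # derivative of log(1 + exp(-margin)) w.r.t. the margin (clipped for stability)
        s = -1.0 / (1.0 + np.exp(np.clip(margin, -30.0, 30.0)))
        grad_w = (X.T @ (s * y)) / n - eps * np.mean(s) * np.sign(w)
        grad_b = np.mean(s * y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def sample_data(n, d, rng):
    """Toy Gaussian mixture: y in {-1,+1}, x ~ N(y * mu, I). Illustrative only."""
    mu = np.ones(d) / np.sqrt(d)
    y = rng.choice([-1.0, 1.0], size=n)
    X = y[:, None] * mu + rng.standard_normal((n, d))
    return X, y

if __name__ == "__main__":
    d, eps = 50, 0.3                    # eps loosely controls adversary strength
    rng = np.random.default_rng(0)
    X_te, y_te = sample_data(20000, d, rng)
    for n in [20, 50, 100, 500, 2000]:  # sweep the training-set size
        X_tr, y_tr = sample_data(n, d, rng)
        w, b = adversarial_train(X_tr, y_tr, eps)
        clean_err = np.mean(np.sign(X_te @ w + b) != y_te)
        robust_err = np.mean(y_te * (X_te @ w + b) - eps * np.sum(np.abs(w)) <= 0)
        print(f"n={n:5d}  clean_err={clean_err:.3f}  robust_err={robust_err:.3f}")
```

Repeating such a sweep for small, moderate, and large values of eps is one way to probe how the effect of additional training data changes with adversary strength, in the spirit of the three regimes described above.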

