On the Generalization Properties of Adversarial Training

08/15/2020
by Yue Xing, et al.

Modern machine learning and deep learning models are known to be vulnerable when test data are slightly perturbed. Theoretical studies of adversarial training algorithms mostly focus on their adversarial training losses or local convergence properties. In contrast, this paper studies the generalization performance of a generic adversarial training algorithm. Specifically, we consider linear regression models and two-layer neural networks (with lazy training) using squared loss under both low-dimensional and high-dimensional regimes. In the former regime, the adversarial risk of the trained models converges to the minimal adversarial risk. In the latter regime, we discover that data interpolation prevents the adversarially robust estimator from being consistent (i.e., converging in probability). Therefore, inspired by the success of the least absolute shrinkage and selection operator (LASSO), we incorporate the L1 penalty into high-dimensional adversarial learning and show that it leads to consistent adversarially robust estimation in both theory and numerical experiments.
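As a concrete illustration of the setting above (a sketch, not code from the paper), the snippet below performs adversarial training of a linear regression model with squared loss under an l_inf attack of radius eps, with an optional L1 penalty for the high-dimensional regime. For this model the inner maximization has the closed form max_{||delta||_inf <= eps} (y - (x + delta)^T theta)^2 = (|y - x^T theta| + eps ||theta||_1)^2, so the outer minimization can be run with plain (sub)gradient descent. The data, step size, penalty level, and iteration count are illustrative assumptions.

```python
import numpy as np

def adv_train_linear(X, y, eps, lam=0.0, lr=1e-2, n_iter=2000):
    """(Sub)gradient descent on the worst-case squared loss plus an L1 penalty.

    Objective: (1/n) * sum_i (|y_i - x_i^T theta| + eps * ||theta||_1)^2
               + lam * ||theta||_1
    """
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(n_iter):
        resid = y - X @ theta                               # r_i = y_i - x_i^T theta
        adv = np.abs(resid) + eps * np.sum(np.abs(theta))   # worst-case |residual| per sample
        # Subgradient of the objective with respect to theta
        grad = (2.0 / n) * (
            -X.T @ (adv * np.sign(resid)) + eps * np.sum(adv) * np.sign(theta)
        ) + lam * np.sign(theta)
        theta -= lr * grad
    return theta

# Toy usage: a sparse signal in a moderately high-dimensional design.
rng = np.random.default_rng(0)
n, d = 200, 50
theta_star = np.zeros(d)
theta_star[:5] = 1.0
X = rng.standard_normal((n, d))
y = X @ theta_star + 0.1 * rng.standard_normal(n)
theta_hat = adv_train_linear(X, y, eps=0.1, lam=0.05)
```

Setting lam=0 recovers plain adversarial training; the L1 term mirrors the paper's LASSO-inspired remedy for the inconsistency caused by data interpolation in high dimensions.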

Related research

03/03/2022 · Why adversarial training can hurt robust accuracy
Machine learning classifiers with high test accuracy often perform poorl...

12/18/2020 · Adversarially Robust Estimate and Risk Analysis in Linear Regression
Adversarially robust learning aims to design algorithms that are robust ...

05/25/2022 · Surprises in adversarially-trained linear regression
State-of-the-art machine learning models can be vulnerable to very small...

06/21/2023 · Adversarial Training with Generated Data in High-Dimensional Regression: An Asymptotic Study
In recent years, studies such as <cit.> have demonstrated that incorpora...

02/21/2023 · Generalization Bounds for Adversarial Contrastive Learning
Deep networks are well-known to be fragile to adversarial attacks, and a...

02/11/2020 · More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models
Despite remarkable success in practice, modern machine learning models h...

02/25/2020 · The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization
Despite remarkable success, deep neural networks are sensitive to human-...
