Rademacher Complexity for Adversarially Robust Generalization

10/29/2018
by Dong Yin et al.

Many machine learning models are vulnerable to adversarial attacks: adding adversarial perturbations that are imperceptible to humans can make a model produce wrong predictions with high confidence. Although much recent effort has been dedicated to learning adversarially robust models, this remains an open problem. In particular, it has been empirically observed that although adversarial training can effectively reduce the adversarial classification error on the training dataset, the learned model does not generalize well to the test data. Moreover, we lack a theoretical understanding of the generalization properties of machine learning models in the adversarial setting. In this paper, we study the adversarially robust generalization problem through the lens of Rademacher complexity. We focus on ℓ_∞ adversarial attacks and study both linear classifiers and feedforward neural networks. For binary linear classifiers, we prove tight bounds on the adversarial Rademacher complexity and show that in the adversarial setting, the Rademacher complexity is never smaller than in the natural setting, and it has an unavoidable dimension dependence unless the weight vector has bounded ℓ_1 norm. The results extend to multi-class linear classifiers. For (nonlinear) neural networks, we show that this dimension dependence also appears in the Rademacher complexity of the ℓ_∞ adversarial loss function class. We further consider a surrogate adversarial loss and prove margin bounds for this setting. Our results indicate that placing ℓ_1 norm constraints on the weight matrices may be a promising way to improve generalization in the adversarial setting.
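To make the ℓ_∞ threat model concrete: for a binary linear classifier f(x) = ⟨w, x⟩ with label y ∈ {−1, +1}, the worst-case perturbation over the ℓ_∞ ball of radius ε has a closed form, min_{‖δ‖_∞ ≤ ε} y⟨w, x + δ⟩ = y⟨w, x⟩ − ε‖w‖_1, attained at δ = −εy·sign(w); this is exactly where the ℓ_1 norm of the weight vector enters the adversarial analysis. The snippet below is a minimal numerical check of this identity, not code from the paper, and all variable names are illustrative.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's code): verify that for a
# linear classifier f(x) = <w, x>, the worst-case l_inf perturbation of
# the margin y * <w, x + delta> equals y * <w, x> - eps * ||w||_1.
rng = np.random.default_rng(0)
d, eps = 10, 0.1
w = rng.normal(size=d)
x = rng.normal(size=d)
y = 1.0

# Closed-form worst-case margin and the perturbation that attains it.
worst_margin = y * (w @ x) - eps * np.linalg.norm(w, 1)
delta_star = -eps * y * np.sign(w)
assert np.isclose(y * (w @ (x + delta_star)), worst_margin)

# Sanity check: no random perturbation in the l_inf ball does worse.
deltas = rng.uniform(-eps, eps, size=(100_000, d))
assert worst_margin <= (y * ((x + deltas) @ w)).min() + 1e-9
print("worst-case margin:", worst_margin)
```

Because the worst case over the ℓ_∞ ball scales with ‖w‖_1 rather than with the dimension directly, bounding the ℓ_1 norm of the weights caps the adversary's effect uniformly, consistent with the abstract's point that ℓ_1 constraints can avoid the dimension dependence.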

