Adversarial Risk Bounds for Neural Networks through Sparsity based Compression

06/03/2019
by Emilio Rafael Balda, et al.

Neural networks have been shown to be vulnerable to minor adversarial perturbations of their inputs, especially for high-dimensional data under ℓ_∞ attacks. To combat this problem, techniques such as adversarial training have been employed to obtain models that are robust on the training set. However, the robustness of such models against adversarial perturbations may not generalize to unseen data. To study how robustness generalizes, recent works assume that the inputs have bounded ℓ_2-norm in order to bound the adversarial risk for ℓ_∞ attacks with no explicit dimension dependence. In this work we focus on ℓ_∞ attacks on ℓ_∞-bounded inputs and prove margin-based bounds. Specifically, we use a compression-based approach that relies on efficiently compressing the set of tunable parameters without distorting the adversarial risk. To achieve this, we apply the concepts of effective sparsity and effective joint sparsity to the weight matrices of neural networks. This leads to bounds with no explicit dependence on the input dimension, nor on the number of classes. Our results show that neural networks with approximately sparse weight matrices not only enjoy enhanced robustness, but also better generalization.
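To give a concrete feel for the notion of effective sparsity used in the abstract, the following is a minimal sketch of one common proxy for it: the ratio (‖w‖_1/‖w‖_2)², which equals k for a vector with k equal-magnitude nonzero entries and stays close to the true sparsity level for approximately sparse vectors. The exact definition used in the paper may differ; the function name and examples below are illustrative, not taken from the authors' code.

```python
import numpy as np

def effective_sparsity(w):
    """Effective sparsity proxy (||w||_1 / ||w||_2)**2.

    Equals k for a vector with k equal-magnitude nonzeros, and remains
    small for vectors with one dominant entry plus many tiny entries,
    i.e. for approximately sparse weights.
    """
    w = np.asarray(w, dtype=float).ravel()
    l1 = np.abs(w).sum()
    l2 = np.sqrt((w * w).sum())
    return (l1 / l2) ** 2 if l2 > 0 else 0.0

# Exactly 1-sparse vector: effective sparsity is 1.
print(effective_sparsity([0.0, 5.0, 0.0, 0.0]))   # 1.0

# Dense vector of 4 equal entries: effective sparsity is 4.
print(effective_sparsity([1.0, 1.0, 1.0, 1.0]))   # 4.0

# Approximately sparse: one large entry plus small noise stays near 1.
print(effective_sparsity([1.0, 0.01, 0.01, 0.01]))
```

Unlike the ℓ_0 count of nonzeros, this measure is stable under small perturbations of the weights, which is what makes it suitable for compressing approximately sparse weight matrices without distorting the adversarial risk.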


Related research:

- Rademacher Complexity for Adversarially Robust Generalization (10/29/2018)
- Non-Singular Adversarial Robustness of Neural Networks (02/23/2021)
- Adversarial Robustness of Supervised Sparse Coding (10/22/2020)
- Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution (10/29/2018)
- Origins of Low-dimensional Adversarial Perturbations (03/25/2022)
- Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks (04/28/2020)
- Understanding and Enhancing Robustness of Concept-based Models (11/29/2022)
