Adversarial robustness of sparse local Lipschitz predictors

02/26/2022

by Ramchandran Muthukumar, et al.

This work studies the adversarial robustness of parametric functions composed of a linear predictor and a non-linear representation map. Our analysis relies on sparse local Lipschitzness (SLL), an extension of local Lipschitz continuity that better captures the stability and reduced effective dimensionality of predictors under local perturbations. SLL functions preserve a certain degree of structure, given by the sparsity pattern in the representation map, and include several popular hypothesis classes, such as piecewise linear models, Lasso and its variants, and deep feed-forward ReLU networks. We provide a tighter robustness certificate on the minimal energy of an adversarial example, as well as tighter data-dependent non-uniform bounds on the robust generalization error of these predictors. We instantiate these results for deep neural networks and provide numerical evidence supporting our analysis, offering new insights into natural regularization strategies that increase the robustness of these models.
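To make the ideas above concrete, here is a minimal sketch (not the paper's actual construction) of how sparsity-aware local Lipschitz reasoning can tighten a robustness certificate for a one-hidden-layer ReLU network. On any region where the hidden activation pattern is fixed, the network is linear, so its local Lipschitz constant is the spectral norm of the effective linear map restricted to the active units; a margin-over-Lipschitz ratio then lower-bounds the perturbation energy needed to flip the prediction, provided the perturbation stays within that region. All weights, dimensions, and function names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network f(x) = W2 @ relu(W1 @ x).
# Dimensions and weights are arbitrary, for illustration only.
d, h, k = 10, 32, 3
W1 = rng.normal(size=(h, d)) / np.sqrt(d)
W2 = rng.normal(size=(k, h)) / np.sqrt(h)

def predict(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def local_lipschitz(x):
    # On the region where the activation pattern s is fixed, the network
    # equals the linear map W2 @ diag(s) @ W1, so the local Lipschitz
    # constant is the spectral norm of that (sparse) effective map.
    s = (W1 @ x > 0).astype(float)
    return np.linalg.norm((W2 * s) @ W1, 2)

def global_lipschitz():
    # The naive global bound ignores sparsity: product of spectral norms.
    return np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)

def certified_radius(x):
    # Margin-based certificate: the top-two logit gap can shrink by at most
    # sqrt(2) * L per unit of l2 perturbation, so gap / (sqrt(2) * L) is a
    # lower bound on the attack energy -- valid only while the perturbation
    # keeps the activation pattern fixed (the paper handles this carefully).
    logits = predict(x)
    top2 = np.sort(logits)[-2:]
    gap = top2[1] - top2[0]
    return gap / (np.sqrt(2.0) * local_lipschitz(x))

x = rng.normal(size=d)
print("local  L:", local_lipschitz(x))
print("global L:", global_lipschitz())
print("certified l2 radius:", certified_radius(x))
```

Since `diag(s)` has spectral norm at most one, the local constant never exceeds the global product-of-norms bound, and it is typically much smaller when many units are inactive; this is the mechanism by which sparsity tightens the certificate.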


