A Law of Robustness for Weight-bounded Neural Networks

02/16/2021
by Hisham Husain, et al.

Robustness of deep neural networks to adversarial perturbations is a pressing concern, motivated by recent findings showing how pervasive such vulnerabilities are. One way to characterize the robustness of a neural network is through its Lipschitz constant, which serves as a robustness certificate. A natural question to ask is: for a fixed model class (such as neural networks) and a dataset of size n, what is the smallest achievable Lipschitz constant among all models that fit the dataset? Recently, Bubeck et al. (2020) conjectured that when two-layer networks with k neurons are used to fit a generic dataset, the smallest achievable Lipschitz constant is Ω(√(n/k)). This implies that roughly one neuron per data point is required to fit the data robustly. In this work we derive a lower bound on the Lipschitz constant for any model class with bounded Rademacher complexity. Our result matches the bound conjectured by Bubeck et al. (2020) for two-layer networks under the assumption of bounded weights. Moreover, because our result holds for arbitrary model classes, we also derive bounds for multi-layer neural networks, finding that roughly log n constant-width layers are required to fit the data robustly. Our work thus establishes a law of robustness for weight-bounded neural networks and provides formal evidence for the necessity of over-parametrization in deep learning.
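
To make the conjectured scaling concrete, the following is a minimal LaTeX sketch of the two-layer statement; the parameterization f_k, the activation σ, and the constant c are our own notation inferred from the abstract, not taken verbatim from the paper.

\[
f_k(x) = \sum_{j=1}^{k} a_j \,\sigma\big(\langle w_j, x\rangle + b_j\big),
\qquad
\mathrm{Lip}(f_k) = \sup_{x \neq x'} \frac{|f_k(x) - f_k(x')|}{\lVert x - x'\rVert}.
\]

The conjecture of Bubeck et al. (2020) states that any such f_k fitting a generic dataset of n points must satisfy

\[
\mathrm{Lip}(f_k) \;\geq\; c \,\sqrt{\frac{n}{k}},
\]

so achieving an O(1) Lipschitz constant requires k = Ω(n), i.e. roughly one neuron per data point.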

Related research

09/30/2020 · A law of robustness for two-layers neural networks
We initiate the study of the inherent tradeoffs between the size of a ne...

09/04/2018 · Lipschitz Networks and Distributional Robustness
Robust risk minimisation has several advantages: it has been studied wit...

10/25/2021 · Scalable Lipschitz Residual Networks with Convex Potential Flows
The Lipschitz constant of neural networks has been established as a key ...

03/20/2023 · Lipschitz-bounded 1D convolutional neural networks using the Cayley transform and the controllability Gramian
We establish a layer-wise parameterization for 1D convolutional neural n...

04/12/2019 · The coupling effect of Lipschitz regularization in deep neural networks
We investigate robustness of deep feed-forward neural networks when inpu...

02/10/2021 · Towards Certifying ℓ_∞ Robustness using Neural Networks with ℓ_∞-dist Neurons
It is well-known that standard neural networks, even with a high classif...

06/11/2019 · Stable Rank Normalization for Improved Generalization in Neural Networks and GANs
Exciting new work on the generalization bounds for neural networks (NN) ...
