A law of robustness for two-layers neural networks

09/30/2020
by Sébastien Bubeck, et al.

We initiate the study of the inherent tradeoffs between the size of a neural network and its robustness, as measured by its Lipschitz constant. We make a precise conjecture that, for any Lipschitz activation function and for most datasets, any two-layers neural network with k neurons that perfectly fits the data must have a Lipschitz constant larger (up to a constant factor) than √(n/k), where n is the number of datapoints. In particular, this conjecture implies that overparametrization is necessary for robustness, since it means that one needs roughly one neuron per datapoint to ensure an O(1)-Lipschitz network, while mere data fitting of d-dimensional data requires only one neuron per d datapoints. We prove a weaker version of this conjecture when the Lipschitz constant is replaced by an upper bound on it based on the spectral norm of the weight matrix. We also prove the conjecture for the ReLU activation function in the high-dimensional regime n ≈ d, and for a polynomial activation function of degree p when n ≈ d^p. We complement these findings with experimental evidence supporting the conjecture.
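To make the quantities in the abstract concrete, here is a minimal numerical sketch; it is not taken from the paper, and the dimensions d, k, n, the random weights, and the pair-sampling scheme are illustrative placeholders. For a two-layer ReLU network f(x) = Σᵢ aᵢ ReLU(⟨wᵢ, x⟩), it computes an empirical lower bound on the Lipschitz constant from random input pairs, the standard spectral-norm upper bound ‖a‖₂·‖W‖_op (the kind of surrogate used in the weaker result mentioned above), and the conjectured threshold √(n/k); the conjecture concerns networks that actually interpolate n datapoints, which the random weights below do not.

```python
# Minimal sketch (not from the paper) of the quantities in the abstract, for a
# two-layer ReLU network f(x) = sum_i a_i * relu(<w_i, x>) with k neurons:
#   * an empirical lower bound on the Lipschitz constant from sampled input pairs,
#   * the spectral-norm upper bound ||a||_2 * ||W||_op (ReLU is 1-Lipschitz),
#   * the conjectured sqrt(n/k) threshold for a network fitting n datapoints.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 50, 200, 1000                     # input dimension, neurons, datapoints (illustrative)

W = rng.normal(size=(k, d)) / np.sqrt(d)    # hidden-layer weights (placeholder, not fitted to data)
a = rng.normal(size=k) / np.sqrt(k)         # output-layer weights (placeholder)

def f(x):
    """Two-layer ReLU network evaluated at a single input x of shape (d,)."""
    return a @ np.maximum(W @ x, 0.0)

# Empirical lower bound on the Lipschitz constant: largest slope over random pairs.
pairs = rng.normal(size=(5000, 2, d))
slopes = [abs(f(x) - f(y)) / np.linalg.norm(x - y) for x, y in pairs]
lip_lower = max(slopes)

# Standard upper bound: |f(x) - f(y)| <= ||a||_2 * ||W||_op * ||x - y||_2.
lip_upper = np.linalg.norm(a) * np.linalg.norm(W, ord=2)

print(f"empirical Lipschitz lower bound : {lip_lower:.3f}")
print(f"spectral-norm upper bound       : {lip_upper:.3f}")
print(f"conjectured threshold sqrt(n/k) : {np.sqrt(n / k):.3f}")
```

Under the conjecture, any choice of (W, a) that perfectly fits n generic datapoints would have to push the true Lipschitz constant (and hence the spectral-norm upper bound as well) above roughly √(n/k), so driving the threshold down to O(1) requires k on the order of n neurons.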


Related research

11/02/2021 · Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
02/16/2021 · A Law of Robustness for Weight-bounded Neural Networks
07/25/2017 · On The Robustness of a Neural Network
12/18/2021 · The Kolmogorov Superposition Theorem can Break the Curse of Dimensionality When Approximating High Dimensional Functions
02/20/2018 · On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition
02/10/2021 · Towards Certifying ℓ_∞ Robustness using Neural Networks with ℓ_∞-dist Neurons
06/27/2017 · When Neurons Fail
