
A law of robustness for two-layers neural networks
We initiate the study of the inherent tradeoffs between the size of a ne...

Lipschitz Networks and Distributional Robustness
Robust risk minimisation has several advantages: it has been studied wit...

Training robust neural networks using Lipschitz bounds
Due to their susceptibility to adversarial perturbations, neural network...

Stable Rank Normalization for Improved Generalization in Neural Networks and GANs
Exciting new work on the generalization bounds for neural networks (NN) ...

Towards Certifying ℓ_∞ Robustness using Neural Networks with ℓ_∞-dist Neurons
It is well-known that standard neural networks, even with a high classif...

Computing the Information Content of Trained Neural Networks
How much information does a learning algorithm extract from the training...

The coupling effect of Lipschitz regularization in deep neural networks
We investigate robustness of deep feed-forward neural networks when inpu...
A Law of Robustness for Weight-bounded Neural Networks
Robustness of deep neural networks against adversarial perturbations is a pressing concern motivated by recent findings showing the pervasive nature of such vulnerabilities. One method of characterizing the robustness of a neural network model is through its Lipschitz constant, which forms a robustness certificate. A natural question to ask is, for a fixed model class (such as neural networks) and a dataset of size n, what is the smallest achievable Lipschitz constant among all models that fit the dataset? Recently, Bubeck et al. (2020) conjectured that when using two-layer networks with k neurons to fit a generic dataset, the smallest Lipschitz constant is Ω(√(n/k)). This implies that one would require one neuron per data point to robustly fit the data. In this work we derive a lower bound on the Lipschitz constant for any arbitrary model class with bounded Rademacher complexity. Our result coincides with that conjectured in Bubeck et al. (2020) for two-layer networks under the assumption of bounded weights. However, due to our result's generality, we also derive bounds for multi-layer neural networks, discovering that one requires log n constant-sized layers to robustly fit the data. Thus, our work establishes a law of robustness for weight-bounded neural networks and provides formal evidence on the necessity of overparametrization in deep learning.