Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks

03/02/2021
by David Gamarnik, et al.

We consider the problem of finding a two-layer neural network with sigmoid, rectified linear unit (ReLU), or binary step activation functions that "fits" a training data set as accurately as possible, as quantified by the training error; and we study the following question: does a low training error guarantee that the norm of the output layer (the outer norm) is small? We answer this question affirmatively for the case of non-negative output weights. Using a simple covering number argument, we establish that, under quite mild distributional assumptions on the input/label pairs, any such network achieving a small training error on polynomially many data points necessarily has a well-controlled outer norm. Notably, our results (a) have polynomial (in d) sample complexity, (b) are independent of the number of hidden units (which can be very large), (c) are oblivious to the training algorithm, and (d) require quite mild assumptions on the data (in particular, the input vector X∈ℝ^d need not have independent coordinates). We then leverage our bounds to establish generalization guarantees for such networks through the fat-shattering dimension, a scale-sensitive measure of the complexity of the class to which the network architectures we investigate belong. Notably, our generalization bounds also have good sample complexity (polynomial in d with low degree) and are in fact near-linear for some important cases of interest.
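To make the setting concrete, below is a minimal NumPy sketch (not taken from the paper) of a two-layer network with non-negative output weights, its empirical training error, and the outer norm referred to in the abstract. The dimensions, the choice of ReLU activation, the squared-loss training error, and the random data are illustrative placeholders only.

```python
import numpy as np

def relu(z):
    # Rectified linear unit activation; the paper also considers sigmoid and binary step.
    return np.maximum(z, 0.0)

def two_layer_net(X, W, a, activation=relu):
    """f(x) = sum_j a_j * activation(<w_j, x>), evaluated on rows of X.
    X: (n, d) inputs, W: (m, d) hidden weights, a: (m,) non-negative output weights."""
    return activation(X @ W.T) @ a

def training_error(X, y, W, a, activation=relu):
    """Empirical squared error (1/n) * sum_i (f(x_i) - y_i)^2 (illustrative loss choice)."""
    preds = two_layer_net(X, W, a, activation)
    return np.mean((preds - y) ** 2)

def outer_norm(a):
    """Norm of the output layer; with a_j >= 0 this equals sum_j a_j."""
    return np.sum(np.abs(a))

# Illustrative usage with arbitrary synthetic data (d, m, n are placeholders);
# the hidden width m may far exceed the sample size n (overparameterization).
rng = np.random.default_rng(0)
d, m, n = 10, 500, 200
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
W = rng.standard_normal((m, d))
a = np.abs(rng.standard_normal(m)) / m   # non-negative output weights

print("training error:", training_error(X, y, W, a))
print("outer norm:    ", outer_norm(a))
```

The paper's claim, in these terms, is that a small training error on polynomially many samples forces `outer_norm(a)` to be well controlled, regardless of the hidden width m and of how the weights were obtained.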


