Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks

09/26/2019
by Ziwei Ji, et al.

Recent theoretical work has shown that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error. The required width, however, is always polynomial in at least one of the sample size n, the (inverse) target error 1/ϵ, and the (inverse) failure probability 1/δ. This work shows that O(1/ϵ) iterations of gradient descent with Ω(1/ϵ^2) training examples on two-layer ReLU networks of any width exceeding polylog(n, 1/ϵ, 1/δ) suffice to achieve a test error of ϵ. The analysis further relies upon a margin property of the limiting kernel, which is guaranteed positive and can distinguish between true labels and random labels.
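
To make the setting concrete, the following is a minimal sketch (not the paper's code or experiments) of the architecture the abstract describes: a two-layer ReLU network with a wide hidden layer, trained by plain gradient descent on the logistic loss, with the outer weights fixed to random signs at initialization (a common simplification in NTK-style analyses). The synthetic data, width, step size, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 200, 10, 512          # samples, input dim, hidden width (illustrative)
lr, steps = 0.1, 200            # step size and iteration count (assumptions)

# Synthetic binary classification data on the unit sphere (hypothetical).
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X[:, 0] + 1e-12)    # labels in {-1, +1}

# Two-layer ReLU network f(x) = (1/sqrt(m)) * sum_j a_j * relu(w_j^T x);
# only the inner weights W are trained, the signs a_j are fixed.
W = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)

def forward(X, W):
    H = np.maximum(X @ W.T, 0.0)           # (n, m) hidden activations
    return (H * a).sum(axis=1) / np.sqrt(m)

for t in range(steps):
    pre = X @ W.T                          # (n, m) preactivations
    H = np.maximum(pre, 0.0)
    f = (H * a).sum(axis=1) / np.sqrt(m)
    # Logistic loss l(z) = log(1 + exp(-z)) on margins z_i = y_i * f(x_i).
    margins = y * f
    g = -y / (1.0 + np.exp(margins))       # d l(y_i f_i) / d f_i per example
    # Gradient w.r.t. W: (1/(n sqrt(m))) sum_i g_i * a_j * 1[w_j^T x_i > 0] * x_i.
    act = (pre > 0).astype(float)          # (n, m) ReLU gates
    grad_W = ((act * a) * g[:, None]).T @ X / (np.sqrt(m) * n)
    W -= lr * grad_W

print("train error:", np.mean(np.sign(forward(X, W)) != y))
```
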

research
06/28/2023

Beyond NTK with Vanilla Gradient Descent: A Mean-Field Analysis of Neural Networks with Polynomial Width, Samples, and Time

Despite recent theoretical progress on the non-convex optimization of tw...
research
03/01/2021

Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels

We study the relative power of learning with gradient descent on differe...
research
08/04/2022

Feature selection with gradient descent on two-layer networks in low-rotation regimes

This work establishes low test error of gradient flow (GF) and stochasti...
research
10/08/2021

On the Implicit Biases of Architecture Gradient Descent

Do neural networks generalise because of bias in the functions returned ...
research
05/08/2020

A Study of Neural Training with Non-Gradient and Noise Assisted Gradient Methods

In this work we demonstrate provable guarantees on the training of depth...
research
06/22/2020

Superpolynomial Lower Bounds for Learning One-Layer Neural Networks using Gradient Descent

We prove the first superpolynomial lower bounds for learning one-layer n...
research
01/01/2023

Sharper analysis of sparsely activated wide neural networks with trainable biases

This work studies training one-hidden-layer overparameterized ReLU netwo...
