
Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks

by Ziwei Ji, et al.

Recent work has revealed that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error. The required width, however, is always polynomial in at least one of the sample size n, the (inverse) training error 1/ϵ, and the (inverse) failure probability 1/δ. This work shows that O(1/ϵ) iterations of gradient descent on two-layer ReLU networks of any width exceeding polylog(n, 1/ϵ, 1/δ), given Ω(1/ϵ^2) training examples, suffice to achieve a test error of ϵ. The analysis further relies upon a margin property of the limiting kernel, which is guaranteed to be positive and which can distinguish between true labels and random labels.
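To make the setting concrete, below is a minimal, illustrative sketch, not the paper's construction or constants: a two-layer ReLU network whose hidden-layer weights are trained by full-batch gradient descent on the logistic loss, with the output weights frozen at random signs. The width m, step size, iteration count, and synthetic data are placeholders chosen only for readability.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder problem sizes (not the paper's constants).
    n, n_test, d = 200, 200, 10   # training size, test size, input dimension
    m = 64                        # hidden width; the paper needs only polylog(n, 1/eps, 1/delta)
    eta, T = 0.5, 100             # step size and number of gradient descent iterations

    # Synthetic, linearly separable data on the unit sphere (illustrative only).
    def sample(k):
        X = rng.standard_normal((k, d))
        return X / np.linalg.norm(X, axis=1, keepdims=True)

    w_star = rng.standard_normal(d)
    X, X_test = sample(n), sample(n_test)
    y, y_test = np.sign(X @ w_star), np.sign(X_test @ w_star)

    # Two-layer ReLU network f(x) = sum_j a_j * relu(w_j . x).
    # Output signs a_j are fixed at initialization; only the hidden weights W are trained.
    W = rng.standard_normal((m, d))
    a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

    def predict(W, X):
        return np.maximum(X @ W.T, 0.0) @ a

    for _ in range(T):
        f = predict(W, X)
        g = -y / (1.0 + np.exp(y * f))         # d/df of the logistic loss log(1 + exp(-y f))
        gates = (X @ W.T > 0.0).astype(float)  # ReLU activation pattern, shape (n, m)
        grad_W = ((g[:, None] * gates) * a[None, :]).T @ X / n
        W -= eta * grad_W

    test_err = np.mean(np.sign(predict(W, X_test)) != y_test)
    print(f"test error after {T} gradient descent steps: {test_err:.3f}")

Freezing the output layer and training only the hidden weights mirrors the two-layer setting commonly analyzed through the limiting (neural tangent) kernel; under the guarantee stated above, a width that is only polylogarithmic in n, 1/ϵ, and 1/δ already suffices for this kind of training to reach test error ϵ.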


Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels

We study the relative power of learning with gradient descent on differe...

Feature selection with gradient descent on two-layer networks in low-rotation regimes

This work establishes low test error of gradient flow (GF) and stochasti...

On the Implicit Biases of Architecture & Gradient Descent

Do neural networks generalise because of bias in the functions returned ...

A Study of Neural Training with Non-Gradient and Noise Assisted Gradient Methods

In this work we demonstrate provable guarantees on the training of depth...

Superpolynomial Lower Bounds for Learning One-Layer Neural Networks using Gradient Descent

We prove the first superpolynomial lower bounds for learning one-layer n...

Sharper analysis of sparsely activated wide neural networks with trainable biases

This work studies training one-hidden-layer overparameterized ReLU netwo...