The Smoking Gun: Statistical Theory Improves Neural Network Estimates

07/20/2021
by Alina Braun, et al.

In this paper we analyze the L_2 error of neural network regression estimates with one hidden layer. Under the assumption that the Fourier transform of the regression function decays suitably fast, we show that an estimate, where all initial weights are chosen according to proper uniform distributions and where the weights are learned by gradient descent, achieves a rate of convergence of 1/√(n) (up to a logarithmic factor). Our statistical analysis implies that the key aspect behind this result is the proper choice of the initial inner weights and the adjustment of the outer weights via gradient descent. This suggests that the outer weights can instead be chosen by simple linear least squares. We prove a corresponding theoretical result and compare our new linear least squares neural network estimate with standard neural network estimates on simulated data. Our simulations show that our theoretical considerations lead to an estimate with improved performance. Hence the development of statistical theory can indeed improve neural network estimates.
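The estimate described above can be sketched as a random-features regression: draw the inner weights of a one-hidden-layer network from uniform distributions, keep them fixed, and fit only the outer weights by linear least squares. The sketch below is an illustration of this idea, not the authors' exact estimator; the activation function, the uniform ranges, the toy data, and the network width are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-d regression data (assumed for illustration).
n = 200
x = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * x[:, 0]) + 0.1 * rng.normal(size=n)

# One hidden layer with K neurons; inner weights and biases are
# drawn from uniform distributions and then kept fixed.
K = 50
W = rng.uniform(-5, 5, size=(1, K))   # inner weights (assumed range)
b = rng.uniform(-5, 5, size=K)        # inner biases (assumed range)

def hidden(x):
    # Sigmoid activation; the exact squasher is an assumption here.
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

# Outer weights chosen by ordinary linear least squares
# instead of gradient descent.
H = hidden(x)
coef, *_ = np.linalg.lstsq(H, y, rcond=None)

def estimate(x_new):
    return hidden(x_new) @ coef

# In-sample mean squared error of the fitted estimate.
mse = np.mean((estimate(x) - y) ** 2)
```

Because the inner weights are fixed, fitting the outer weights is a convex (in fact linear) problem with a closed-form solution, which avoids the non-convex optimization that full gradient-descent training faces.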

