Quantitative CLTs in Deep Neural Networks

07/12/2023
by Stefano Favaro, et al.

We study the distribution of a fully connected neural network with random Gaussian weights and biases in which the hidden layer widths are proportional to a large constant n. Under mild assumptions on the non-linearity, we obtain quantitative bounds on normal approximations valid at large but finite n and any fixed network depth. Our theorems show, both for the finite-dimensional distributions and for the entire process, that the distance between a random fully connected network (and its derivatives) and the corresponding infinite-width Gaussian process scales like n^-γ for some γ > 0, with the exponent depending on the metric used to measure the discrepancy. Our bounds are strictly stronger, in terms of their dependence on network width, than any previously available in the literature; in the one-dimensional case we also prove that they are optimal, i.e., we establish matching lower bounds.
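The setting can be illustrated with a small simulation (a minimal sketch, not the authors' construction): draw many fully connected networks at random Gaussian initialization with hidden widths equal to n, evaluate them at a fixed input, and measure how far the empirical output distribution is from a Gaussian as n grows. The 1/sqrt(fan-in) weight scaling, the ReLU non-linearity, and the Kolmogorov-Smirnov metric below are illustrative assumptions, not choices made in the paper.

```python
import numpy as np
from scipy import stats

def random_relu_network(x, widths, rng):
    """One draw of a fully connected network at random Gaussian
    initialization (weights scaled by 1/sqrt(fan_in), ReLU hidden
    layers; illustrative assumptions, not the paper's exact setup)."""
    h = x
    for i, (n_in, n_out) in enumerate(zip(widths[:-1], widths[1:])):
        W = rng.normal(0.0, 1.0, size=(n_out, n_in)) / np.sqrt(n_in)
        b = rng.normal(0.0, 1.0, size=n_out)
        h = W @ h + b
        if i < len(widths) - 2:           # no non-linearity on the output layer
            h = np.maximum(h, 0.0)        # ReLU
    return h

rng = np.random.default_rng(0)
x = rng.normal(size=5)                    # a fixed network input
n_draws = 10_000
for n in (8, 32, 128):                    # hidden-layer width
    widths = [x.size, n, n, 1]            # depth-3 network, scalar output
    out = np.array([random_relu_network(x, widths, rng)[0]
                    for _ in range(n_draws)])
    out = (out - out.mean()) / out.std()  # compare distributional shape only
    # Kolmogorov-Smirnov distance to a standard Gaussian; it should shrink
    # as n grows, in the spirit of the paper's n^-gamma bounds (Monte Carlo
    # noise of order 1/sqrt(n_draws) limits what this sketch can resolve).
    ks = stats.kstest(out, "norm").statistic
    print(f"width n = {n:4d}   KS distance to N(0,1) ≈ {ks:.4f}")
```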


