Quantitative Gaussian Approximation of Randomly Initialized Deep Neural Networks

03/14/2022
by   Andrea Basteri, et al.

Given any deep fully connected neural network, initialized with random Gaussian parameters, we bound from above the quadratic Wasserstein distance between its output distribution and a suitable Gaussian process. Our explicit inequalities indicate how the hidden and output layer sizes affect the Gaussian behaviour of the network and quantitatively recover the distributional convergence results in the wide limit, i.e., if all the hidden layer sizes become large.
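
As a quick empirical illustration of the Gaussian behaviour described in the abstract, the sketch below samples the scalar output of a fully connected network over many independent Gaussian initializations and estimates the one-dimensional quadratic Wasserstein (W2) distance to a moment-matched Gaussian. The ReLU activation, the 1/sqrt(fan-in) weight scaling, and the comparison to a moment-matched Gaussian (rather than the limiting Gaussian process of the paper) are assumptions made for this sketch, not details taken from the paper; widening the hidden layers should shrink the estimated distance, in line with the bounds stated above.

```python
import numpy as np

def random_network_output(x, widths, rng):
    """One forward pass through a fully connected network with i.i.d.
    Gaussian weights and biases (ReLU activation and 1/sqrt(fan-in)
    weight scaling are assumptions for this sketch)."""
    h = x
    depth = len(widths) - 1
    for l in range(depth):
        W = rng.normal(0.0, 1.0 / np.sqrt(widths[l]), size=(widths[l + 1], widths[l]))
        b = rng.normal(0.0, 1.0, size=widths[l + 1])
        h = W @ h + b
        if l < depth - 1:          # ReLU on hidden layers, linear output layer
            h = np.maximum(h, 0.0)
    return h

rng = np.random.default_rng(0)
x = rng.normal(size=64)            # one fixed network input
widths = [64, 512, 512, 1]         # input, two wide hidden layers, scalar output

# Scalar output sampled over many independent Gaussian initializations.
samples = np.sort([random_network_output(x, widths, rng)[0] for _ in range(5000)])

# Empirical 1-D quadratic Wasserstein distance to a moment-matched Gaussian:
# in one dimension, W2 is the L2 distance between quantile functions,
# estimated here from sorted samples of equal size.
gauss = np.sort(rng.normal(samples.mean(), samples.std(), size=samples.size))
print("empirical W2 to matched Gaussian:", np.sqrt(np.mean((samples - gauss) ** 2)))
```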


Related research

04/30/2018 · Gaussian Process Behaviour in Wide Deep Neural Networks
Whilst deep neural networks have shown great empirical success, there is...

07/12/2023 · Quantitative CLTs in Deep Neural Networks
We study the distribution of a fully connected neural network with rando...

07/05/2023 · Distance Preserving Machine Learning for Uncertainty Aware Accelerator Capacitance Predictions
Providing accurate uncertainty estimations is essential for producing re...

06/22/2022 · Concentration inequalities and optimal number of layers for stochastic deep neural networks
We state concentration and martingale inequalities for the output of the...

02/14/2021 · Double-descent curves in neural networks: a new perspective using Gaussian processes
Double-descent curves in neural networks describe the phenomenon that th...

06/17/2021 · Wide stochastic networks: Gaussian limit and PAC-Bayesian training
The limit of infinite width allows for substantial simplifications in th...

12/14/2018 · Products of Many Large Random Matrices and Gradients in Deep Neural Networks
We study products of random matrices in the regime where the number of t...
