Random Neural Networks in the Infinite Width Limit as Gaussian Processes

07/04/2021
by   Boris Hanin, et al.

This article gives a new proof that fully connected neural networks with random weights and biases converge to Gaussian processes in the regime where the input dimension, output dimension, and depth are kept fixed while the hidden layer widths tend to infinity. Unlike prior work, convergence is shown assuming only moment conditions on the weight distribution and holds for quite general non-linearities.
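As a rough numerical illustration of this statement (not code from the paper), the sketch below draws many independent random fully connected networks with non-Gaussian (uniform) weights and biases, evaluates each at a fixed input, and checks that the output distribution looks increasingly Gaussian as the hidden widths grow. The choice of widths, depth, uniform weight law, tanh non-linearity, and the excess-kurtosis diagnostic are all illustrative assumptions, not details taken from the article.

```python
# Minimal sketch, assuming NumPy: empirical check that the scalar output of a
# random fully connected network at a fixed input approaches a Gaussian as the
# hidden widths grow. All specific choices below are illustrative.
import numpy as np

def random_network_output(x, widths, rng):
    """One forward pass through a random fully connected net with mean-zero,
    unit-variance (non-Gaussian) weights, scaled by 1/sqrt(fan_in)."""
    h = x
    for fan_out in widths:
        fan_in = h.shape[0]
        # Uniform weights on [-sqrt(3), sqrt(3)] have mean 0 and variance 1.
        W = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(fan_out, fan_in))
        b = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=fan_out)
        h = np.tanh(W @ h / np.sqrt(fan_in) + b)
    # Linear read-out to a single output unit.
    fan_in = h.shape[0]
    w_out = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=fan_in)
    return w_out @ h / np.sqrt(fan_in)

rng = np.random.default_rng(0)
x = rng.standard_normal(10)  # fixed input of dimension 10
for width in (4, 32, 256):
    samples = np.array([
        random_network_output(x, [width, width], rng) for _ in range(5000)
    ])
    # Excess kurtosis is 0 for a Gaussian; it should shrink as width grows.
    z = (samples - samples.mean()) / samples.std()
    print(width, float(np.mean(z**4) - 3.0))
```

One design note on the sketch: the 1/sqrt(fan_in) scaling keeps the variance of each pre-activation of order one as the width grows, which is the normalization under which the Gaussian-process limit is usually stated.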


research · 04/26/2022 · Convergence of neural networks to Gaussian mixture distribution
We give a proof that, under relatively mild conditions, fully-connected ...

research · 05/17/2023 · Deep quantum neural networks form Gaussian processes
It is well known that artificial neural networks initialized from indepe...

research · 02/14/2021 · Double-descent curves in neural networks: a new perspective using Gaussian processes
Double-descent curves in neural networks describe the phenomenon that th...

research · 08/27/2019 · Finite size corrections for neural network Gaussian processes
There has been a recent surge of interest in modeling neural networks (N...

research · 06/12/2019 · Learning Curves for Deep Neural Networks: A Gaussian Field Theory Perspective
A series of recent works suggest that deep neural networks (DNNs), of fi...

research · 04/03/2022 · Correlation Functions in Random Fully Connected Neural Networks at Finite Width
This article considers fully connected neural networks with Gaussian ran...

research · 11/04/2021 · Rate of Convergence of Polynomial Networks to Gaussian Processes
We examine one-hidden-layer neural networks with random weights. It is w...
