Mean Field Analysis of Deep Neural Networks

03/11/2019
by Justin Sirignano, et al.

We analyze multilayer neural networks in the asymptotic regime of simultaneously (A) large network sizes and (B) large numbers of stochastic gradient descent training iterations. We rigorously establish the limiting behavior of the multilayer neural network output. The limit procedure is valid for any number of hidden layers, and it naturally also describes the limiting behavior of the training loss. The ideas we explore are (a) to take the limits of the hidden layers sequentially and (b) to characterize the evolution of the parameters in terms of their initialization. The limit satisfies a system of integro-differential equations.
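For intuition, here is a minimal sketch of the mean-field limit in the single-hidden-layer case, a standard prior setting rather than the multilayer system analyzed in this paper; the activation $\sigma$ and the symbols $c^i_t$, $w^i_t$, $\bar\mu_t$ are illustrative notation, not the paper's:

\[
g^N_t(x) \;=\; \frac{1}{N}\sum_{i=1}^{N} c^i_t\,\sigma\big(w^i_t \cdot x\big)
\;\xrightarrow{\;N\to\infty\;}\;
\int c\,\sigma(w \cdot x)\,\bar\mu_t(dc,dw),
\]

where $c^i_t, w^i_t$ are the parameters after rescaled SGD time $t$ and $\bar\mu_t$ is the deterministic limit of their empirical measure $\tfrac{1}{N}\sum_{i=1}^{N}\delta_{(c^i_t,\,w^i_t)}$, which evolves according to a measure-valued equation driven by the data distribution. The multilayer analysis in the paper extends this picture by taking the limits of the hidden layers sequentially, arriving at a system of integro-differential equations.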

Related research

- Mean Field Analysis of Neural Networks: A Central Limit Theorem (08/28/2018)
  Machine learning has revolutionized fields such as image, text, and spee...
- Scaling Limit of Neural Networks with the Xavier Initialization and Convergence to a Global Minimum (07/09/2019)
  We analyze single-layer neural networks with the Xavier initialization i...
- Mean Field Limit of the Learning Dynamics of Multilayer Neural Networks (02/07/2019)
  Can multilayer neural networks -- typically constructed as highly comple...
- SGD Distributional Dynamics of Three Layer Neural Networks (12/30/2020)
  With the rise of big data analytics, multi-layer neural networks have su...
- Limiting fluctuation and trajectorial stability of multilayer neural networks with mean field training (10/29/2021)
  The mean field (MF) theory of multilayer neural networks centers around ...
- Neural Stochastic Differential Equations (05/27/2019)
  Deep neural networks whose parameters are distributed according to typic...
- Normalization effects on deep neural networks (09/02/2022)
  We study the effect of normalization on the layers of deep neural networ...
