Scaling Limit of Neural Networks with the Xavier Initialization and Convergence to a Global Minimum

07/09/2019
by Justin Sirignano et al.

We analyze single-layer neural networks with the Xavier initialization in the asymptotic regime of a large number of hidden units and a large number of stochastic gradient descent training steps. Using mean-field analysis, we prove that the neural network converges in distribution to a random ODE with a Gaussian distribution. The limit is completely different from that in typical mean-field results for neural networks, owing to the 1/√(N) normalization factor in the Xavier initialization (versus the 1/N factor in the typical mean-field framework). Although the pre-limit problem of optimizing a neural network is non-convex (and therefore the neural network may converge to a local minimum), the limit equation minimizes a quadratic, convex objective function and therefore converges to a global minimum. Furthermore, under reasonable assumptions, the matrix in the limiting quadratic objective function is positive definite, and thus the neural network (in the limit) will converge to a global minimum with zero loss on the training set.
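The 1/√(N) versus 1/N distinction can be seen numerically. The sketch below (not from the paper; the tanh activation and standard-normal parameters are illustrative assumptions) compares the output of a single-layer network f(x) = (1/scale) Σᵢ cᵢ σ(wᵢ x) under the two normalizations: with 1/√(N) the output fluctuations remain O(1) as N grows, consistent with a Gaussian limit, while with 1/N the output concentrates at its mean.

```python
import numpy as np

# Illustrative sketch: compare the 1/sqrt(N) (Xavier-style) normalization
# with the 1/N (mean-field) normalization for a single-layer network
# f(x) = (1/scale) * sum_i c_i * sigma(w_i * x), with c_i, w_i ~ N(0, 1)
# and sigma = tanh (hypothetical choices, not taken from the paper).
rng = np.random.default_rng(0)

def network_output(x, N, scaling):
    c = rng.standard_normal(N)
    w = rng.standard_normal(N)
    hidden = np.tanh(w * x)
    if scaling == "xavier":
        return c @ hidden / np.sqrt(N)  # 1/sqrt(N): fluctuations stay O(1)
    return c @ hidden / N               # 1/N: output concentrates at 0

for N in (100, 10_000):
    std_xavier = np.std([network_output(1.0, N, "xavier") for _ in range(500)])
    std_mf = np.std([network_output(1.0, N, "meanfield") for _ in range(500)])
    print(f"N={N:6d}  std(Xavier)={std_xavier:.3f}  std(mean-field)={std_mf:.4f}")
```

Running this shows the Xavier-normalized output standard deviation staying roughly constant in N, while the mean-field-normalized one shrinks like 1/√(N) — the qualitative reason the two regimes have entirely different limits.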
