Why Learning of Large-Scale Neural Networks Behaves Like Convex Optimization

03/06/2019
by Hui Jiang, et al.

In this paper, we present theoretical work to explain why simple gradient descent methods are so successful in solving the non-convex optimization problems that arise in learning large-scale neural networks (NNs). After introducing a mathematical tool called the canonical space, we prove that the objective functions in learning NNs are convex in the canonical model space. We further show that the gradients in the original NN model space and in the canonical space are related by a pointwise linear transformation, represented by the so-called disparity matrix. Furthermore, we prove that gradient descent methods are guaranteed to converge to a global minimum of zero loss provided that the disparity matrices maintain full rank. When this full-rank condition holds, the learning of NNs behaves in the same way as ordinary convex optimization. Finally, we show that the chance of encountering singular disparity matrices is extremely slim in large NNs. In particular, when over-parameterized NNs are randomly initialized, gradient descent algorithms converge to a global minimum of zero loss in probability.
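A rough sketch of the full-rank argument, in illustrative notation that is not necessarily the paper's: suppose the training loss factors as $L(\theta) = Q(\phi(\theta))$, where $\phi$ maps the original NN parameters $\theta$ into the canonical space and $Q$ is convex there. The chain rule then gives

    \nabla_\theta L(\theta) = J_\phi(\theta)^\top \, \nabla_w Q(w) \big|_{w = \phi(\theta)},

so the Jacobian $J_\phi(\theta)$ plays the role of the disparity matrix relating the two gradients pointwise. If $J_\phi(\theta)^\top$ has a trivial null space (the full-rank condition), then $\nabla_\theta L(\theta) = 0$ forces $\nabla_w Q(\phi(\theta)) = 0$, and by convexity of $Q$ that point is a global minimum; gradient descent on the non-convex $L$ therefore inherits the convergence behaviour of convex optimization on $Q$.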

Related research

01/05/2021
On the global convergence of randomized coordinate gradient descent for non-convex optimization
In this work, we analyze the global convergence property of coordinate g...

10/25/2020
A Dynamical View on Optimization Algorithms of Overparameterized Neural Networks
When equipped with efficient optimization algorithms, the over-parameter...

01/23/2020
Replica Exchange for Non-Convex Optimization
Gradient descent (GD) is known to converge quickly for convex objective ...

11/18/2019
Implicit Regularization of Normalization Methods
Normalization methods such as batch normalization are commonly used in o...

10/15/2021
Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization
Many supervised machine learning methods are naturally cast as optimizat...

06/16/2022
Gradient Descent for Low-Rank Functions
Several recent empirical studies demonstrate that important machine lear...

11/07/2016
Neural Taylor Approximations: Convergence and Exploration in Rectifier Networks
Modern convolutional networks, incorporating rectifiers and max-pooling,...
