Local minima in training of neural networks

11/19/2016
by Grzegorz Świrszcz, et al.

There has been a lot of recent interest in characterizing the error surface of deep models. This stems from a long-standing question: given that deep networks are highly nonlinear systems optimized by local gradient methods, why do they not seem to be affected by bad local minima? It is widely believed that training deep models with gradient methods works so well because the error surface either has no bad local minima, or any local minima that do exist are close in value to the global minimum. Such results are known to hold only under very strong assumptions that real models do not satisfy. In this paper we present examples showing that for such theorems to hold, additional assumptions on the data, the initialization scheme, and/or the model class are necessary. We focus on the particular case of finite-size datasets and demonstrate that in this scenario one can construct counter-examples (datasets or initialization schemes) for which the network does become susceptible to bad local minima over the weight space.
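Although the paper's constructions are specific datasets and initializations, the flavour of the result can be illustrated numerically. The sketch below is a hypothetical toy example, not a construction from the paper: it trains a one-hidden-unit ReLU network on a tiny finite dataset with plain gradient descent. A "dead" initialization makes the ReLU inactive on every training point, so all gradients vanish and the optimizer is stuck at a poor stationary point, while a different initialization on the same data fits it closely. All values and names here are illustrative assumptions.

```python
# Toy sketch (not the paper's construction): one ReLU hidden unit, scalar output,
# trained by full-batch gradient descent on a small finite dataset.
import numpy as np

# Finite dataset: y = x on positive inputs, fit exactly by ReLU(w*x + b)*v with w=1, b=0, v=1.
x = np.array([0.5, 1.0, 1.5, 2.0])
y = x.copy()

def train(w, b, v, lr=0.05, steps=2000):
    for _ in range(steps):
        pre = w * x + b                       # pre-activation
        h = np.maximum(pre, 0.0)              # ReLU
        pred = v * h
        grad_pred = 2.0 * (pred - y) / len(x) # d(MSE)/d(pred)
        dv = np.sum(grad_pred * h)
        dpre = grad_pred * v * (pre > 0)      # ReLU gradient is zero where the unit is inactive
        dw = np.sum(dpre * x)
        db = np.sum(dpre)
        w, b, v = w - lr * dw, b - lr * db, v - lr * dv
    return np.mean((v * np.maximum(w * x + b, 0.0) - y) ** 2)

# Active initialization: the unit fires on the data, gradients flow, and the fit is close.
print("active init, final loss:", train(w=0.5, b=0.1, v=0.5))

# Dead initialization: w*x + b < 0 on every training point, so h is identically zero,
# every gradient is zero, and the parameters (and the loss) never move.
print("dead init, final loss:  ", train(w=-1.0, b=-0.5, v=0.5))
```

In the dead case the pre-activation is negative on every data point, so none of the parameters receive a gradient and the loss stays at its initial value; the same finite dataset is benign or problematic depending purely on the initialization, which is the kind of dependence the paper's counter-examples make precise.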
