Eliminating all bad Local Minima from Loss Landscapes without even adding an Extra Unit

01/12/2019
by Jascha Sohl-Dickstein et al.

Recent work has noted that all bad local minima can be removed from neural network loss landscapes by adding a single unit with a particular parameterization. We show that the core technique from these papers can be used to remove all bad local minima from any loss landscape, so long as the global minimum has a loss of zero. The procedure requires neither the addition of auxiliary units nor that the loss be associated with a neural network. The method works by converting every bad local minimum into a bad (non-local) minimum at infinity with respect to a set of auxiliary parameters.
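
To make the mechanism concrete, below is a minimal numerical sketch in Python. The augmented objective it uses, L(theta) * (1 + a*exp(b))^2 + lam*a^2, is an assumed stand-in chosen to exhibit the property described above, not necessarily the paper's exact parameterization: every finite stationary point of the augmented objective has L(theta) = 0, so a bad local minimum of L survives only as a descent path to infinity in the auxiliary scalars (a, b). The toy loss, step size, and regularization weight lam are all illustrative choices.

import numpy as np

LAM = 0.1  # weight on the auxiliary regularizer (illustrative choice)

def loss(theta):
    # Toy 1-D loss: global minimum of exactly zero at theta = 1,
    # and a bad local minimum (value ~0.39) near theta = -0.95.
    return (theta - 1.0) ** 2 * ((theta + 1.0) ** 2 + 0.1)

def aug_loss(theta, a, b):
    # Assumed augmentation: L(theta) * (1 + a*exp(b))^2 + LAM * a^2.
    # Every finite stationary point of this function has loss(theta) = 0;
    # any point with loss(theta) > 0 admits a descent direction in (a, b).
    return loss(theta) * (1.0 + a * np.exp(b)) ** 2 + LAM * a ** 2

def grad(f, x, eps=1e-6):
    # Central-difference gradient; accurate enough for a demo.
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2.0 * eps)
    return g

# 1) Plain gradient descent stalls at the bad local minimum.
x = np.array([-0.9])
for _ in range(5000):
    x -= 0.01 * grad(lambda v: loss(v[0]), x)
print("plain GD:     theta=%.3f  loss=%.3f" % (x[0], loss(x[0])))

# 2) On the augmented loss the same point is no longer a local minimum:
# gradient descent keeps descending by growing b and pulling a toward a
# small negative value, i.e. the bad minimum has been moved to infinity
# in the auxiliary parameters while theta itself barely changes.
x = np.array([-0.9, 0.0, 0.0])  # (theta, a, b)
for _ in range(5000):
    x -= 0.01 * grad(lambda v: aug_loss(v[0], v[1], v[2]), x)
print("augmented GD: theta=%.3f  a=%.3f  b=%.3f  aug_loss=%.4f"
      % (x[0], x[1], x[2], aug_loss(x[0], x[1], x[2])))

Running the sketch, plain gradient descent stalls at theta near -0.95 with a loss of about 0.39, whereas on the augmented objective the same starting point never settles: b grows steadily, a tends toward zero from below, and the augmented loss keeps shrinking toward zero while theta stays near the old local minimum. This is the sense in which the bad minimum is not destroyed in place but relocated to infinity in the auxiliary parameters.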

Related research:

Blurred Images Lead to Bad Local Minima (01/30/2019)

Adding One Neuron Can Eliminate All Bad Local Minima (05/22/2018)
One of the main difficulties in analyzing neural networks is the non-con...

Elimination of All Bad Local Minima in Deep Learning (01/02/2019)
In this paper, we theoretically prove that we can eliminate all suboptim...

Statistical tools to assess the reliability of self-organizing maps (01/04/2007)
Results of neural network learning are always subject to some variabilit...

Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape (05/31/2023)
We study the loss landscape of two-layer mildly overparameterized ReLU n...

Depth creates no more spurious local minima (01/28/2019)
We show that for any convex differentiable loss function, a deep linear ...

Towards a Better Global Loss Landscape of GANs (11/10/2020)
Understanding of GAN training is still very limited. One major challenge...
