Eliminating all bad Local Minima from Loss Landscapes without even adding an Extra Unit

by Jascha Sohl-Dickstein, et al.

Recent work has noted that all bad local minima can be removed from neural network loss landscapes by adding a single unit with a particular parameterization. We show that the core technique from these papers can be used to remove all bad local minima from any loss landscape, so long as the global minimum has a loss of zero. This procedure requires neither the addition of auxiliary units nor that the loss be associated with a neural network. The mechanism of action is that all bad local minima are converted into bad (non-local) minima at infinity with respect to auxiliary parameters.
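The construction can be sketched on a one-dimensional toy problem. The multiplicative exp(-b) augmentation below is a simplified illustration of the general idea, not the paper's exact parameterization, and all function names and constants are my own:

```python
import math

# Toy 1-D loss with a zero-loss global minimum and one bad local minimum.
# f(t) = (t^2 - 1)^2 + 0.4 (t + 1)^2
#   global minimum: f(-1) = 0
#   bad local minimum: t ~ 0.72 with f ~ 1.4
def f(t):
    return (t * t - 1) ** 2 + 0.4 * (t + 1) ** 2

def df(t):
    return 4 * t * (t * t - 1) + 0.8 * (t + 1)

# Illustrative augmentation with one auxiliary parameter b:
# g(t, b) = exp(-b) * f(t). Wherever f(t) > 0, dg/db = -exp(-b) * f(t) < 0,
# so no point with strictly positive loss can be a critical point of g;
# gradient descent drives b toward infinity there instead.
def g(t, b):
    return math.exp(-b) * f(t)

def grad_g(t, b):
    return math.exp(-b) * df(t), -math.exp(-b) * f(t)

lr, steps = 0.02, 2000

# Plain gradient descent on f, started in the bad basin: it converges
# to the bad local minimum and stays stuck at a strictly positive loss.
t_plain = 1.2
for _ in range(steps):
    t_plain -= lr * df(t_plain)

# Descent on the augmented loss g from the same start: b grows without
# bound and g(t, b) -> 0, i.e. the bad local minimum has been converted
# into a (non-local) minimum at infinity in the auxiliary parameter b.
t_aug, b = 1.2, 0.0
for _ in range(steps):
    dt, db = grad_g(t_aug, b)
    t_aug, b = t_aug - lr * dt, b - lr * db

print(f"plain:     t = {t_plain:.3f}, f = {f(t_plain):.3f}")
print(f"augmented: b = {b:.2f}, g = {g(t_aug, b):.4f}, f = {f(t_aug):.3f}")
```

Note that the augmented descent does not find the global minimum: f(t) at the final iterate is still large, which is why the paper describes the eliminated minima as merely relocated to infinity rather than optimized away.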


Related papers:

- Blurred Images Lead to Bad Local Minima
- Adding One Neuron Can Eliminate All Bad Local Minima
- Elimination of All Bad Local Minima in Deep Learning
- Statistical tools to assess the reliability of self-organizing maps
- Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape
- Depth creates no more spurious local minima
- Towards a Better Global Loss Landscape of GANs
