The Global Landscape of Neural Networks: An Overview

07/02/2020
by Ruoyu Sun, et al.

One of the major concerns in neural network training is that the non-convexity of the associated loss functions may cause a bad landscape. The recent success of neural networks suggests that their loss landscape is not too bad, but what specific results do we know about it? In this article, we review recent findings on the global landscape of neural networks. First, we point out that wide neural nets may still have sub-optimal local minima under certain assumptions. Second, we discuss a few rigorous results on the geometric properties of wide networks, such as "no bad basin", and some modifications that eliminate sub-optimal local minima and/or decreasing paths to infinity. Third, we discuss visualizations and empirical explorations of the landscape of practical neural nets. Finally, we briefly discuss some convergence results and their relation to landscape results.
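To make the first point concrete, here is a toy example (our own construction, not taken from the paper) of how non-convexity can produce a sub-optimal local minimum: a single-neuron ReLU "network" f(w, x) = relu(w·x) fit with squared loss on two hand-picked points. The loss is piecewise quadratic in w, with one local minimum on each side of w = 0 at different loss values.

```python
import numpy as np

# Toy illustration (hypothetical data, not from the paper): a one-neuron
# ReLU model f(w, x) = relu(w * x) trained with squared loss on two
# points. The ReLU kink at w = 0 makes the loss non-convex, yielding
# two local minima with different loss values.

def relu(z):
    return np.maximum(z, 0.0)

# Two training points, chosen so each sign of w fits one point well.
X = np.array([1.0, -1.0])
Y = np.array([1.0, 2.0])

def loss(w):
    return float(np.sum((relu(w * X) - Y) ** 2))

# For w > 0: loss(w) = (w - 1)^2 + 4  -> local minimum at w = 1, value 4
# For w < 0: loss(w) = 1 + (-w - 2)^2 -> global minimum at w = -2, value 1
print(loss(1.0), loss(-2.0))  # 4.0 1.0
```

Gradient descent initialized at any w > 0 converges to the stationary point w = 1 and never reaches the strictly better minimum at w = -2, which is the kind of "bad landscape" behavior the survey examines (and that width or architectural modifications can rule out under suitable assumptions).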


Related research

- Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity (12/31/2019)
  Traditional landscape analysis of deep neural networks aims to show that...
- Landscape of Sparse Linear Network: A Brief Investigation (09/16/2020)
  Network pruning, or sparse network has a long history and practical sign...
- Towards a Better Global Loss Landscape of GANs (11/10/2020)
  Understanding of GAN training is still very limited. One major challenge...
- Optimization for deep learning: theory and algorithms (12/19/2019)
  When and why can a neural network be successfully trained? This article ...
- Sub-Optimal Local Minima Exist for Almost All Over-parameterized Neural Networks (11/04/2019)
  Does over-parameterization eliminate sub-optimal local minima for neural...
- Porcupine Neural Networks: (Almost) All Local Optima are Global (10/05/2017)
  Neural networks have been used prominently in several machine learning a...
- On the loss landscape of a class of deep neural networks with no bad local valleys (09/27/2018)
  We identify a class of over-parameterized deep neural networks with stan...
