On the saddle point problem for non-convex optimization

05/19/2014
by Razvan Pascanu, et al.

A central challenge in many fields of science and engineering is minimizing non-convex error functions over continuous, high-dimensional spaces. Gradient descent and quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods in finding the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, and neural network theory, that a deeper difficulty originates from the proliferation of saddle points, not local minima, especially in high-dimensional problems of practical interest. Such saddle points are surrounded by high-error plateaus that can dramatically slow down learning and give the illusory impression of a local minimum. Motivated by these arguments, we propose a new algorithm, the saddle-free Newton method, which can rapidly escape high-dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep neural network training and provide preliminary numerical evidence for its superior performance.
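
The update the abstract alludes to can be made concrete with a small sketch. The saddle-free Newton step rescales the gradient by the inverse of |H|, the Hessian with its eigenvalues replaced by their absolute values, so that directions of negative curvature are descended rather than ascended. The NumPy sketch below is a minimal illustration, not the paper's implementation: the function name, the explicit Hessian eigendecomposition, and the damping term are our own assumptions, and an exact eigendecomposition is only feasible for small problems, not the deep networks the paper targets.

```python
import numpy as np

def saddle_free_newton_step(grad, hess, damping=1e-3):
    # Eigendecompose the symmetric Hessian: H = V diag(lam) V^T.
    lam, V = np.linalg.eigh(hess)
    # |H| replaces each eigenvalue by its absolute value, so negative-
    # curvature directions are descended instead of ascended. The small
    # damping term is our own addition for numerical stability.
    abs_lam = np.abs(lam) + damping
    # Step: -|H|^{-1} grad, computed in the eigenbasis.
    return -V @ ((V.T @ grad) / abs_lam)

# Toy saddle: f(x, y) = x^2 - y^2 has a saddle point at the origin.
grad_f = lambda p: np.array([2.0 * p[0], -2.0 * p[1]])
hess_f = lambda p: np.array([[2.0, 0.0], [0.0, -2.0]])

p = np.array([1e-3, 1e-3])  # start close to the saddle
for _ in range(10):
    p = p + saddle_free_newton_step(grad_f(p), hess_f(p))
# The x coordinate collapses toward 0 while y grows geometrically: the
# iterate escapes along the negative-curvature direction, whereas the
# plain Newton step -H^{-1} grad jumps straight to the saddle point.
```

On this toy problem the contrast is stark: plain Newton treats the saddle as a fixed point and converges to it, while the |H|-rescaled step flips the sign of the update along the negative-curvature direction and moves away from it.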


Related research

- Low Rank Saddle Free Newton: Algorithm and Analysis (02/07/2020)
- Finding Approximate Local Minima Faster than Gradient Descent (11/03/2016)
- Explorations on high dimensional landscapes (12/20/2014)
- Laplacian Smoothing Gradient Descent (06/17/2018)
- Analysis and Optimisation of Bellman Residual Errors with Neural Function Approximation (06/16/2021)
- Towards a Mathematical Understanding of the Difficulty in Learning with Feedforward Neural Networks (11/17/2016)
- Fast Linear Convergence of Randomized BFGS (02/26/2020)
