Perturbed gradient descent with occupation time

05/09/2020
by Xin Guo, et al.

This paper further develops the idea of perturbed gradient descent by adapting the perturbation to the history of the iterates via the notion of occupation time for saddle points. The proposed algorithm, PGDOT, is shown to converge at least as fast as the perturbed gradient descent (PGD) algorithm and is guaranteed to avoid getting stuck at saddle points. The analysis is corroborated by experimental results.
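A minimal sketch of the idea described in the abstract: a gradient descent loop that, near a candidate saddle (small gradient norm), injects a random perturbation whose size grows with the time already spent in that region. The function name `pgdot_sketch`, the square-root scaling rule, and all parameter values are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def pgdot_sketch(grad, x0, lr=1e-2, grad_tol=1e-3, base_radius=1e-2,
                 max_iter=10_000, rng=None):
    """Hedged sketch of perturbed gradient descent whose perturbation
    is adapted to the occupation time near a candidate saddle point.

    `grad` is a gradient oracle; the scaling rule below is an assumption
    made for illustration only.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    occupation_time = 0  # iterations spent in the small-gradient region

    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < grad_tol:
            # Near a stationary point: accumulate occupation time and add
            # noise whose radius grows with the time spent lingering here.
            occupation_time += 1
            radius = base_radius * np.sqrt(occupation_time)  # assumed rule
            x = x + radius * rng.standard_normal(x.shape)
        else:
            occupation_time = 0  # reset once the iterate leaves the region
            x = x - lr * g
    return x

# Example: f(x, y) = (x^2 - 1)^2 + y^2 has a saddle at the origin and
# minima at (+1, 0) and (-1, 0); starting exactly at the saddle, plain
# gradient descent stalls, while the perturbation lets the iterate escape.
saddle_grad = lambda z: np.array([4 * z[0] * (z[0]**2 - 1), 2 * z[1]])
x_final = pgdot_sketch(saddle_grad, x0=np.zeros(2))
```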


