From the Ravine method to the Nesterov method and vice versa: a dynamical system perspective

01/27/2022
by H. Attouch, et al.

We revisit the Ravine method of Gelfand and Tsetlin from a dynamical system perspective, study its convergence properties, and highlight its similarities to and differences from Nesterov's accelerated gradient method. The two methods are closely related: each can be deduced from the other by reversing the order of the extrapolation and gradient operations in their definitions, and both enjoy similar fast convergence of values and convergence of iterates for general convex objective functions. We also establish the high-resolution ODE of the Ravine and Nesterov methods, which reveals an additional geometric damping term driven by the Hessian for both methods. This allows us to prove, for the first time, fast convergence of the gradients towards zero not only for the Ravine method but also for the Nesterov method. We further highlight connections to other algorithms stemming from more subtle discretization schemes, and finally describe a Ravine version of the proximal-gradient algorithm for general structured smooth + non-smooth convex optimization problems.
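The two iteration templates below are a minimal sketch of the relationship described above, not the paper's exact statement: the names `grad_f` and `s` (step size) and the momentum coefficient (k-1)/(k+2) are illustrative assumptions. Nesterov's method extrapolates first and then takes a gradient step; the Ravine method applies the same two operations in the reverse order.

```python
import numpy as np

def nesterov(grad_f, x0, s, n_iter):
    """Nesterov acceleration: extrapolate, then take a gradient step."""
    x_prev, x = x0.copy(), x0.copy()
    for k in range(1, n_iter + 1):
        # Extrapolation (momentum) step at the current iterate.
        y = x + (k - 1) / (k + 2) * (x - x_prev)
        # Gradient step taken at the extrapolated point.
        x_prev, x = x, y - s * grad_f(y)
    return x

def ravine(grad_f, y0, s, n_iter):
    """Ravine method: gradient step first, then extrapolate."""
    w_prev = y0 - s * grad_f(y0)  # initial gradient step, no extrapolation yet
    y = w_prev.copy()
    for k in range(1, n_iter + 1):
        # Gradient step at the current point.
        w = y - s * grad_f(y)
        # Extrapolation performed on the gradient-stepped points.
        y = w + (k - 1) / (k + 2) * (w - w_prev)
        w_prev = w
    return y

# Toy usage on a quadratic f(x) = 0.5 x^T A x, so grad_f(x) = A x;
# s = 0.05 satisfies s <= 1/L with Lipschitz constant L = 10 here.
A = np.diag([1.0, 10.0])
grad_f = lambda x: A @ x
print(nesterov(grad_f, np.array([5.0, 5.0]), s=0.05, n_iter=300))
print(ravine(grad_f, np.array([5.0, 5.0]), s=0.05, n_iter=300))
```

As a reference point for the high-resolution ODE mentioned above, the convex-case ODE derived for Nesterov's method by Shi, Du, Jordan, and Su is reproduced below; the Hessian-driven term is the geometric damping the abstract refers to. The coefficients shown are quoted from that prior literature, not from this paper, which establishes the analogous ODE for the Ravine method.

```latex
% High-resolution ODE of Nesterov's accelerated gradient method (convex case),
% with step size s > 0; the \sqrt{s}\,\nabla^2 f(X)\dot{X} term is the
% Hessian-driven (geometric) damping.
\[
  \ddot{X}(t) + \frac{3}{t}\,\dot{X}(t)
  + \sqrt{s}\,\nabla^2 f\bigl(X(t)\bigr)\,\dot{X}(t)
  + \Bigl(1 + \frac{3\sqrt{s}}{2t}\Bigr)\,\nabla f\bigl(X(t)\bigr) = 0.
\]
```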


