Hamiltonian Descent Methods

09/13/2018
by Chris J. Maddison, et al.

We propose a family of optimization methods that achieve linear convergence using first-order gradient information and constant step sizes on a class of convex functions much larger than the smooth and strongly convex ones. This larger class includes functions whose second derivatives may be singular or unbounded at their minima. Our methods are discretizations of conformal Hamiltonian dynamics, which generalize the classical momentum method to model the motion of a particle with non-standard kinetic energy exposed to a dissipative force and the gradient field of the function of interest. They are first-order in the sense that they require only gradient computation. Yet, crucially, the kinetic gradient map can be designed to incorporate information about the convex conjugate in a fashion that allows for linear convergence on convex functions that may be non-smooth or non-strongly convex. We study in detail one implicit and two explicit methods. For one explicit method, we provide conditions under which it converges to stationary points of non-convex functions. For all, we provide conditions on the convex function and kinetic energy pair that guarantee linear convergence, and show that these conditions can be satisfied by functions with power growth. In sum, these methods expand the class of convex functions on which linear convergence is possible with first-order computation.
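As a rough illustration of the idea described above, the sketch below integrates a conformal Hamiltonian system of the form dx/dt = ∇k(p), dp/dt = -∇f(x) - γp with a simple semi-implicit Euler step. This is a minimal sketch under stated assumptions, not the paper's exact discretization or parameter choices: the function names, the step size, the friction coefficient, and the relativistic kinetic energy used in the example are all illustrative.

```python
# Minimal sketch of a conformal Hamiltonian descent loop, assuming the ODE
#   dx/dt = grad_k(p),   dp/dt = -grad_f(x) - gamma * p.
# A generic semi-implicit Euler discretization; the paper's own implicit and
# explicit methods may differ in the exact update order and friction handling.
import numpy as np

def hamiltonian_descent(grad_f, grad_k, x0, eps=0.1, gamma=1.0, num_steps=1000):
    """Run a conformal-Hamiltonian-style descent from x0.

    grad_f: gradient of the convex objective f (first-order information only).
    grad_k: gradient of the chosen kinetic energy k (the "kinetic gradient map").
    """
    x = np.asarray(x0, dtype=float)
    p = np.zeros_like(x)
    for _ in range(num_steps):
        # Momentum update with the dissipative (friction) term handled implicitly.
        p = (p - eps * grad_f(x)) / (1.0 + eps * gamma)
        # Position update driven by the kinetic gradient map.
        x = x + eps * grad_k(p)
    return x

# Hypothetical example: f(x) = ||x||^4 / 4 has vanishing curvature at its
# minimum (it is not strongly convex), so plain gradient descent with a constant
# step slows down near the optimum. Pairing it with the relativistic kinetic
# energy k(p) = sqrt(||p||^2 + 1) - 1 keeps ||grad_k(p)|| <= 1, so the position
# step stays bounded while the method still uses only gradients of f.
if __name__ == "__main__":
    grad_f = lambda x: np.linalg.norm(x) ** 2 * x        # gradient of ||x||^4 / 4
    grad_k = lambda p: p / np.sqrt(np.dot(p, p) + 1.0)   # gradient of sqrt(||p||^2 + 1) - 1
    x_final = hamiltonian_descent(grad_f, grad_k, x0=np.array([3.0, -2.0]))
    print(x_final)  # expected to be close to the minimizer at the origin
```

The design point the sketch is meant to convey: the kinetic energy k is a free choice, and tailoring it to the growth of f (here, a bounded kinetic gradient map for a fast-growing objective) is what allows constant step sizes where standard momentum would require tuning to curvature that may be singular or unbounded at the minimum.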
