Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives

02/28/2020
by Michael Muehlebach, et al.

We analyze the convergence rate of various momentum-based optimization algorithms from a dynamical systems point of view. Our analysis exploits fundamental topological properties, such as the continuous dependence of iterates on their initial conditions, to provide a simple characterization of convergence rates. In many cases, closed-form expressions are obtained that relate algorithm parameters to the convergence rate. The analysis encompasses discrete time and continuous time, as well as time-invariant and time-variant formulations, and is not limited to a convex or Euclidean setting. In addition, the article rigorously establishes why symplectic discretization schemes are important for momentum-based optimization algorithms, and provides a characterization of algorithms that exhibit accelerated convergence.
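As an illustration of the symplectic discretization idea the abstract highlights (not the paper's own algorithm), the sketch below applies semi-implicit (symplectic) Euler to the damped momentum flow x' = p, p' = -∇f(x) - γp, which recovers a heavy-ball-type iteration. The function name and the step-size and damping values are assumptions chosen for a toy quadratic.

```python
import numpy as np

def symplectic_euler_momentum(grad, x0, h=0.1, gamma=1.0, steps=200):
    """Semi-implicit (symplectic) Euler discretization of the damped
    momentum flow  x' = p,  p' = -grad f(x) - gamma * p.

    Updating p first and then x with the *new* p is what distinguishes
    the scheme from explicit Euler; the resulting iteration is of
    heavy-ball type.
    """
    x = np.asarray(x0, dtype=float)
    p = np.zeros_like(x)
    for _ in range(steps):
        p = p - h * (grad(x) + gamma * p)  # momentum update (uses old x)
        x = x + h * p                      # position update (uses new p)
    return x

# Usage: minimize the quadratic f(x) = 0.5 * x^T A x, whose minimizer is 0.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
x_star = symplectic_euler_momentum(grad, x0=[3.0, -2.0])
```

Eliminating p from the two updates gives x_{k+1} = x_k + (1 - hγ)(x_k - x_{k-1}) - h²∇f(x_k), i.e. Polyak momentum with momentum coefficient 1 - hγ, which is one way to see why this class of integrators is natural for momentum-based optimization.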


Related research:

- On Dissipative Symplectic Integration with Applications to Gradient-Based Optimization (04/15/2020)
- Breaking the Convergence Barrier: Optimization via Fixed-Time Convergent Flows (12/02/2021)
- Convergence Rate Analysis of Accelerated Forward-Backward Algorithm with Generalized Nesterov Momentum Scheme (12/11/2021)
- Tradeoffs between convergence rate and noise amplification for momentum-based accelerated optimization algorithms (09/24/2022)
- Conformal Symplectic and Relativistic Optimization (03/11/2019)
- Accelerated Continuous-Time Approximate Dynamic Programming via Data-Assisted Hybrid Control (04/27/2022)
- Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points (12/07/2022)
