Breaking the Convergence Barrier: Optimization via Fixed-Time Convergent Flows

12/02/2021
by Param Budhraja, et al.

Accelerated gradient methods are a cornerstone of large-scale, data-driven optimization in machine learning and other data-analysis fields. We introduce a gradient-based optimization framework for achieving acceleration, based on the recently introduced notion of fixed-time stability of dynamical systems. The method is a generalization of simple gradient-based methods, suitably rescaled so that the iterates converge to the optimizer in a fixed time that is independent of the initialization. We achieve this by first leveraging a continuous-time framework for designing fixed-time stable dynamical systems, and then providing a consistent discretization strategy such that the equivalent discrete-time algorithm tracks the optimizer in a practically fixed number of iterations. We provide a theoretical analysis of the convergence behavior of the proposed gradient flows, and of their robustness to additive disturbances, for functions that are strongly convex, strictly convex, or possibly nonconvex but satisfying the Polyak-Łojasiewicz inequality. We also show that the regret bound on the convergence rate is constant by virtue of the fixed-time convergence. The hyperparameters have intuitive interpretations and can be tuned to meet the requirements on the desired convergence rates. We validate the accelerated convergence properties of the proposed schemes against state-of-the-art optimization algorithms on a range of numerical examples. Our work provides insights into developing novel optimization algorithms via discretization of continuous-time flows.
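As a rough illustration of the idea described in the abstract (not the paper's exact scheme), the minimal sketch below applies plain forward-Euler steps to a rescaled gradient flow of an assumed form x_dot = -c1*grad/||grad||^((p1-2)/(p1-1)) - c2*grad/||grad||^((p2-2)/(p2-1)) with p1 > 2 and 1 < p2 < 2. The gains c1 and c2, the exponents, the step size, and the quadratic test function are all illustrative assumptions; the consistent discretization strategy developed in the paper, rather than plain Euler, is what yields convergence in a practically fixed number of iterations.

# Minimal sketch under the assumptions stated above (not the paper's exact scheme):
# forward-Euler steps on a rescaled gradient flow whose two terms dominate,
# respectively, near and far from the optimizer.
import numpy as np

def fixed_time_gradient_step(x, grad_f, c1=1.0, c2=1.0, p1=3.0, p2=1.5, eta=0.02):
    """One Euler step of the assumed fixed-time gradient flow."""
    g = grad_f(x)
    norm_g = np.linalg.norm(g)
    if norm_g < 1e-12:  # numerically at a stationary point; stop moving
        return x
    term_near = c1 * g / norm_g ** ((p1 - 2.0) / (p1 - 1.0))  # dominates as ||g|| -> 0
    term_far = c2 * g / norm_g ** ((p2 - 2.0) / (p2 - 1.0))   # dominates for large ||g||
    return x - eta * (term_near + term_far)

# Example: a strongly convex quadratic f(x) = 0.5 * x^T A x, minimizer at the origin.
A = np.diag([1.0, 2.0, 5.0])
grad_f = lambda x: A @ x

x = np.array([2.0, -1.0, 1.0])  # the fixed-time bound itself does not depend on this choice
for _ in range(300):
    x = fixed_time_gradient_step(x, grad_f)
print("final iterate:", x)  # approaches the origin; plain Euler chatters near it at a scale set by eta

The intuition behind the two exponents: with p1 > 2 the first term scales like ||grad f||^(1/(p1-1)) near the optimizer, which closes the remaining gap in finite time, while 1 < p2 < 2 makes the second term grow superlinearly far from the optimizer, capping the settling time independently of the initialization.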

Related research

On Dissipative Symplectic Integration with Applications to Gradient-Based Optimization (04/15/2020)
Continuous-time dynamical systems have proved useful in providing concep...

Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points (12/07/2022)
Gradient-based first-order convex optimization algorithms find widesprea...

Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives (02/28/2020)
We analyze the convergence rate of various momentum-based optimization a...

Optimization algorithms inspired by the geometry of dissipative systems (12/06/2019)
Accelerated gradient methods are a powerful optimization tool in machine...

Finite-Time Convergence of Continuous-Time Optimization Algorithms via Differential Inclusions (12/18/2019)
In this paper, we propose two discontinuous dynamical systems in continu...

On Symplectic Optimization (02/10/2018)
Accelerated gradient methods have had significant impact in machine lear...

Optimizing Deep Neural Networks via Discretization of Finite-Time Convergent Flows (10/06/2020)
In this paper, we investigate in the context of deep neural networks, th...
