Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points

12/07/2022
by Mayank Baranwal, et al.

Gradient-based first-order convex optimization algorithms find widespread applicability in a variety of domains, including machine learning tasks. Motivated by recent advances in the fixed-time stability theory of continuous-time dynamical systems, we introduce a generalized framework for designing accelerated optimization algorithms with strong convergence guarantees that further extend to a subclass of non-convex functions. In particular, we introduce the GenFlow algorithm and its momentum variant, which provably converge in fixed time to the optimal solution of objective functions satisfying the Polyak-Łojasiewicz (PL) inequality. Moreover, for functions that admit non-degenerate saddle points, we show that for the proposed GenFlow algorithm, the time required to evade these saddle points is bounded uniformly over all initial conditions. Finally, for strongly convex-strongly concave minimax problems whose optimal solution is a saddle point, a similar scheme is shown to arrive at the optimal solution, again in fixed time. The superior convergence properties of our algorithm are validated experimentally on a variety of benchmark datasets.
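The abstract does not spell out the GenFlow dynamics, but fixed-time gradient flows in this literature typically combine two rescaled gradient terms, one dominating near the optimum and one far from it. Below is a minimal, hedged sketch of that general template, not the paper's exact algorithm: a forward-Euler discretization of a flow of the form dx/dt = -(c1·||g||^α + c2·||g||^β)·g/||g|| with 0 < α < 1 < β, applied to a quadratic objective (which satisfies the PL inequality). All names, gains, and exponents here are illustrative assumptions.

```python
import numpy as np

def grad_f(x):
    # Gradient of f(x) = 0.5 * ||x||^2, a simple function
    # satisfying the PL inequality.
    return x

def fixed_time_flow_step(x, dt=0.01, c1=1.0, c2=1.0, alpha=0.5, beta=1.5):
    """One forward-Euler step of a rescaled gradient flow of the form
        dx/dt = -(c1 * ||g||**alpha + c2 * ||g||**beta) * g / ||g||,
    a common template in fixed-time stability analyses (illustrative,
    not the exact GenFlow dynamics). The alpha < 1 term drives
    finite-time convergence near the optimum; the beta > 1 term bounds
    the time to reach a neighborhood from any initial condition."""
    g = grad_f(x)
    n = np.linalg.norm(g)
    if n < 1e-12:
        return x  # already (numerically) at a critical point
    return x - dt * (c1 * n**alpha + c2 * n**beta) * (g / n)

# Usage: iterate the flow from a fixed starting point.
x = np.array([3.0, -4.0])
for _ in range(2000):
    x = fixed_time_flow_step(x)
print(np.linalg.norm(x))  # close to 0
```

In continuous time the two-exponent structure is what yields a convergence-time bound independent of the initial condition; a discretization like this only approximates that behavior, with the achievable accuracy limited by the step size dt.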


