Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints

08/23/2019
by Sepideh Hassan-Moghaddam et al.

Many large-scale and distributed optimization problems can be brought into a composite form in which the objective function is given by the sum of a smooth term and a nonsmooth regularizer. Such problems can be solved via a proximal gradient method and its variants, thereby generalizing gradient descent to a nonsmooth setup. In this paper, we view proximal algorithms as dynamical systems and leverage techniques from control theory to study their global properties. In particular, for problems with strongly convex objective functions, we utilize the theory of integral quadratic constraints to prove global exponential stability of the differential equations that govern the evolution of proximal gradient and Douglas-Rachford splitting flows. In our analysis, we use the fact that these algorithms can be interpreted as variable-metric gradient methods on the forward-backward and the Douglas-Rachford envelopes and exploit structural properties of the nonlinear terms that arise from the gradient of the smooth part of the objective function and the proximal operator associated with the nonsmooth regularizer. We also demonstrate that these envelopes can be obtained from the augmented Lagrangian associated with the original nonsmooth problem and establish conditions for global exponential convergence even in the absence of strong convexity.
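
To make the flows concrete, here is a minimal numerical sketch, not taken from the paper: a forward Euler simulation of the proximal gradient flow and of a standard Douglas-Rachford splitting flow on a toy lasso-type problem with f(x) = 0.5*||Ax - b||^2 and g(x) = gamma*||x||_1, so that the proximal operator of g is soft-thresholding. The problem data (A, b) and the parameters gamma, mu, and step are illustrative choices; with step = 1, the Euler discretization of the proximal gradient flow reduces to the familiar ISTA iteration.

    # A minimal numerical sketch (not from the paper's experiments):
    # forward Euler simulation of (i) the proximal gradient flow
    #     dx/dt = -x + prox_{mu g}(x - mu grad f(x))
    # and (ii) a standard Douglas-Rachford splitting flow
    #     dz/dt = prox_{mu g}(2 prox_{mu f}(z) - z) - prox_{mu f}(z)
    # for the composite problem  minimize 0.5*||A x - b||^2 + gamma*||x||_1.
    # The data (A, b) and the parameters gamma, mu, step are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 30, 10
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)
    gamma = 0.5                            # l1 regularization weight
    mu = 1.0 / np.linalg.norm(A.T @ A, 2)  # mu <= 1/L, L = Lipschitz constant of grad f
    step = 0.5                             # forward Euler step size

    def grad_f(x):
        # gradient of the smooth term f(x) = 0.5*||Ax - b||^2
        return A.T @ (A @ x - b)

    def prox_l1(v, t):
        # proximal operator of t*||.||_1 (soft-thresholding)
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_f(v):
        # proximal operator of mu*f for quadratic f:
        # argmin_x 0.5*||Ax - b||^2 + (1/(2*mu))*||x - v||^2
        return np.linalg.solve(np.eye(n) + mu * (A.T @ A), v + mu * (A.T @ b))

    def objective(x):
        return 0.5 * np.linalg.norm(A @ x - b) ** 2 + gamma * np.linalg.norm(x, 1)

    # (i) proximal gradient flow; step = 1 would recover the ISTA iteration
    x = np.zeros(n)
    for _ in range(2000):
        x = x + step * (prox_l1(x - mu * grad_f(x), mu * gamma) - x)

    # (ii) Douglas-Rachford splitting flow; the primal variable is prox_f(z)
    z = np.zeros(n)
    for _ in range(2000):
        z = z + step * (prox_l1(2.0 * prox_f(z) - z, mu * gamma) - prox_f(z))

    print("PG flow objective:", objective(x))
    print("DR flow objective:", objective(prox_f(z)))

In this toy instance A almost surely has full column rank, so the smooth term is strongly convex and the setting matches the strong-convexity assumption under which the paper proves global exponential stability of both flows.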


Related research

- Global exponential stability of primal-dual gradient flow dynamics based on the proximal augmented Lagrangian: A Lyapunov-based approach (10/02/2019). For a class of nonsmooth composite optimization problems with linear equ...
- A note on Douglas-Rachford, subgradients, and phase retrieval (11/29/2019). The properties of gradient techniques for the phase retrieval problem ha...
- A Low-Power Hardware-Friendly Optimisation Algorithm With Absolute Numerical Stability and Convergence Guarantees (06/29/2023). We propose Dual-Feedback Generalized Proximal Gradient Descent (DFGPGD) ...
- Limited memory gradient methods for unconstrained optimization (08/29/2023). The limited memory steepest descent method (Fletcher, 2012) for unconstr...
- UNLocBoX: A MATLAB convex optimization toolbox for proximal-splitting methods (02/04/2014). Convex optimization is an essential tool for machine learning, as many o...
- Gradient Flows and Accelerated Proximal Splitting Methods (08/02/2019). Proximal based methods are well-suited to nonsmooth optimization problem...
- Proximal algorithms for large-scale statistical modeling and optimal sensor/actuator selection (07/04/2018). Several problems in modeling and control of stochastically-driven dynami...
