Augmented Lagrangian based first-order methods for convex and nonconvex programs: nonergodic convergence and iteration complexity

03/19/2020
by Zichong Li, et al.

First-order methods (FOMs) have been widely used for large-scale problems. In this paper, we first establish a nonergodic convergence rate result for an augmented Lagrangian (AL) based FOM on convex problems with functional constraints. It is a straightforward generalization of the result by [Rockafellar'73, MathProg], which covers problems with only inequality constraints. Using this result, we derive a complexity bound for the AL-based FOM on strongly convex problems with a composite-structured objective and smooth constraints: to reach an ϵ-KKT point, the method needs O(ϵ^{-1/2}|log ϵ|) proximal gradient steps. This bound matches an existing lower bound up to a factor of |log ϵ| and is thus nearly optimal. We then extend the analysis to general convex problems and establish an O(ϵ^{-1}|log ϵ|) complexity result for the AL-based FOM. We further design a novel AL-based FOM for problems with a nonconvex objective and convex constraint functions. The new method follows the framework of the proximal point (PP) method: to approximately solve the PP subproblems, it mixes the inexact AL method (iALM) with the quadratic penalty method, where the penalty method is always fed with multiplier estimates produced by the iALM. We show a complexity result of O(ϵ^{-5/2}|log ϵ|) for the proposed method to reach an ϵ-KKT point; this is the best-known complexity for this problem class. Theoretically, the hybrid method has a lower iteration-complexity requirement than its counterpart that uses only the iALM to solve the PP subproblems, and numerically it can perform significantly better than a pure-penalty-based method. Numerical experiments on convex quadratically constrained quadratic programs and nonconvex linearly constrained quadratic programs demonstrate the efficiency of the proposed methods over existing ones.
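To illustrate the building block underlying the methods discussed in the abstract, the sketch below implements a generic inexact augmented Lagrangian loop for a scalar inequality-constrained problem, with plain gradient steps on each AL subproblem standing in for the proximal gradient steps. This is a minimal illustration, not the paper's algorithm: the toy problem, penalty parameter `beta`, step size, and iteration counts are all illustrative assumptions.

```python
def ialm(f_grad, c, c_grad, x0, beta=10.0, outer_iters=50,
         inner_iters=200, step=1e-2):
    """Inexact ALM sketch for: minimize f(x) subject to c(x) <= 0 (scalar x).

    Uses the classical AL gradient for inequality constraints,
        grad_x L(x, lam) = f'(x) + c'(x) * max(0, lam + beta * c(x)),
    and the multiplier update lam <- max(0, lam + beta * c(x)).
    All parameters here are illustrative, not tuned values from the paper.
    """
    x, lam = float(x0), 0.0
    for _ in range(outer_iters):
        # Inner loop: approximate subproblem solve by plain gradient steps
        # (a stand-in for the proximal gradient steps used in the paper).
        for _ in range(inner_iters):
            mult = max(0.0, lam + beta * c(x))
            x -= step * (f_grad(x) + c_grad(x) * mult)
        # Classical multiplier update.
        lam = max(0.0, lam + beta * c(x))
    return x, lam

# Toy instance: minimize (x - 2)^2 subject to x <= 1.
# The KKT point is x* = 1 with multiplier lam* = 2.
x, lam = ialm(f_grad=lambda x: 2.0 * (x - 2.0),
              c=lambda x: x - 1.0,
              c_grad=lambda x: 1.0,
              x0=0.0)
# x ≈ 1, lam ≈ 2 (the KKT point)
```

The hybrid scheme in the paper differs from this pure iALM loop in that some subproblems are handled by a quadratic penalty method whose multipliers are frozen at the latest iALM estimates, which is what lowers the overall iteration-complexity requirement.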


