A second order primal-dual method for nonsmooth convex composite optimization

09/05/2017
by Neil K. Dhingra, et al.

We develop a second order primal-dual method for optimization problems in which the objective function is given by the sum of a strongly convex twice differentiable term and a possibly nondifferentiable convex regularizer. After introducing an auxiliary variable, we utilize the proximal operator of the nonsmooth regularizer to transform the associated augmented Lagrangian into a function that is once, but not twice, continuously differentiable. The saddle point of this function corresponds to the solution of the original optimization problem. We employ a generalization of the Hessian to define second order updates on this function and prove global exponential stability of the corresponding differential inclusion. Furthermore, we develop a globally convergent customized algorithm that utilizes the primal-dual augmented Lagrangian as a merit function. We show that the search direction can be computed efficiently and prove quadratic/superlinear asymptotic convergence. We use the ℓ_1-regularized least squares problem and the problem of designing a distributed controller for a spatially-invariant system to demonstrate the merits and the effectiveness of our method.
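To make the construction described above concrete for the ℓ_1-regularized least squares example, minimize 0.5‖Ax − b‖² + γ‖x‖_1, the sketch below evaluates the once-differentiable augmented-Lagrangian-type function obtained by minimizing the augmented Lagrangian over the auxiliary variable, which brings in the Moreau envelope of the regularizer via soft-thresholding. This is a minimal illustration, not the authors' implementation: it assumes the regularizer acts directly on x (no additional linear map), and the names prox_l1, moreau_env_l1, and proximal_aug_lagrangian are illustrative rather than taken from the paper.

```python
import numpy as np

# l1-regularized least squares: minimize 0.5*||A x - b||^2 + gamma*||x||_1
# (A is assumed to have full column rank so the smooth term is strongly convex).

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def moreau_env_l1(v, gamma, mu):
    """Moreau envelope of g = gamma*||.||_1 with parameter mu:
    min_z gamma*||z||_1 + ||z - v||^2 / (2*mu), attained at the prox."""
    z = prox_l1(v, mu * gamma)
    return gamma * np.abs(z).sum() + np.sum((v - z) ** 2) / (2.0 * mu)

def proximal_aug_lagrangian(x, y, A, b, gamma, mu):
    """Augmented Lagrangian with the auxiliary variable minimized out:
    f(x) + M_{mu g}(x + mu*y) - (mu/2)*||y||^2.
    The Moreau envelope is C^1 but not C^2, matching the abstract's
    'once, but not twice, continuously differentiable' function."""
    f = 0.5 * np.sum((A @ x - b) ** 2)
    return f + moreau_env_l1(x + mu * y, gamma, mu) - 0.5 * mu * np.sum(y ** 2)

# Illustrative evaluation on random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
x = np.zeros(10)
y = np.zeros(10)
print(proximal_aug_lagrangian(x, y, A, b, gamma=0.1, mu=1.0))
```

The gradients of this function with respect to x and y can then drive primal-dual updates; second-order updates of the kind studied in the paper additionally require a generalized Hessian, since the Moreau envelope is not twice differentiable.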


Related research

05/08/2020 · Augmented Lagrangian Method for Second-Order Cone Programs under Second-Order Sufficiency
This paper addresses problems of second-order cone programming important...

10/02/2019 · Global exponential stability of primal-dual gradient flow dynamics based on the proximal augmented Lagrangian: A Lyapunov-based approach
For a class of nonsmooth composite optimization problems with linear equ...

01/12/2020 · The Proximal Method of Multipliers for a Class of Nonsmooth Convex Optimization
This paper develops the proximal method of multipliers for a class of no...

07/31/2023 · Moreau-Yoshida Variational Transport: A General Framework For Solving Regularized Distributional Optimization Problems
We consider a general optimization problem of minimizing a composite obj...

05/03/2022 · Proximal stabilized Interior Point Methods for quadratic programming and low-frequency-updates preconditioning techniques
In this work, in the context of Linear and Quadratic Programming, we int...

11/22/2020 · Primal-dual Learning for the Model-free Risk-constrained Linear Quadratic Regulator
Risk-aware control, though with promise to tackle unexpected events, req...

10/05/2021 · Bilevel Imaging Learning Problems as Mathematical Programs with Complementarity Constraints
We investigate a family of bilevel imaging learning problems where the l...
