
A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization

by   Chuan He, et al.

In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, which minimizes a twice differentiable function subject to nonlinear equality constraints and a convex conic constraint. In particular, we propose a Newton-conjugate gradient (Newton-CG) based barrier-augmented Lagrangian method for finding an approximate SOSP of this problem. Under some mild assumptions, we show that our method enjoys a total inner iteration complexity of O(ϵ^-11/2) and an operation complexity of O(ϵ^-11/2 min{n, ϵ^-5/4}) for finding an (ϵ, √ϵ)-SOSP of general nonconvex conic optimization with high probability. Moreover, under a constraint qualification, these complexity bounds improve to O(ϵ^-7/2) and O(ϵ^-7/2 min{n, ϵ^-3/4}), respectively. To the best of our knowledge, this is the first study on the complexity of finding an approximate SOSP of general nonconvex conic optimization. Preliminary numerical results are presented to demonstrate the superiority of the proposed method over first-order methods in terms of solution quality.
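To illustrate the overall structure of such a barrier-augmented Lagrangian scheme, here is a minimal sketch on a hypothetical toy instance (not from the paper): a smooth quadratic objective over the nonnegative orthant (the conic constraint K = R^n_+) with one linear equality constraint. The equality constraint is handled by an augmented Lagrangian term, the cone by a log-barrier, and each inner subproblem is minimized with damped Newton steps whose directions come from a conjugate gradient solve. All function names, parameter values, and the update rules shown are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.sparse.linalg import cg

# Hypothetical toy instance: minimize f(x) = (x0-1)^2 + (x1-2)^2
# over x >= 0 (cone K = R^2_+) subject to c(x) = x0 + x1 - 2 = 0.
def f(x):      return (x[0] - 1.0)**2 + (x[1] - 2.0)**2
def grad_f(x): return 2.0 * (x - np.array([1.0, 2.0]))
def hess_f(x): return 2.0 * np.eye(2)

def c(x):      return x[0] + x[1] - 2.0      # equality constraint
grad_c = np.array([1.0, 1.0])                # constant Jacobian row

def barrier_auglag(x, lam, rho, mu):
    """Value, gradient, Hessian of the barrier-augmented Lagrangian:
    f(x) + lam*c(x) + (rho/2)*c(x)^2 - mu*sum(log(x))."""
    cv = c(x)
    val  = f(x) + lam * cv + 0.5 * rho * cv**2 - mu * np.sum(np.log(x))
    grad = grad_f(x) + (lam + rho * cv) * grad_c - mu / x
    hess = (hess_f(x) + rho * np.outer(grad_c, grad_c)
            + mu * np.diag(1.0 / x**2))
    return val, grad, hess

def newton_cg_inner(x, lam, rho, mu, tol=1e-8, max_iter=50):
    """Inner loop: Newton directions from a CG solve, with a
    backtracking line search that keeps x strictly inside the cone."""
    for _ in range(max_iter):
        val, g, H = barrier_auglag(x, lam, rho, mu)
        if np.linalg.norm(g) < tol:
            break
        d, _ = cg(H, -g, maxiter=100)        # Newton system via CG
        t = 1.0
        while np.any(x + t * d <= 0) or \
              barrier_auglag(x + t * d, lam, rho, mu)[0] > val + 1e-4 * t * g @ d:
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * d
    return x

# Outer loop: first-order multiplier update, shrink the barrier parameter.
x, lam, rho, mu = np.array([1.0, 1.0]), 0.0, 10.0, 1.0
for _ in range(40):
    x = newton_cg_inner(x, lam, rho, mu)
    lam += rho * c(x)                        # augmented Lagrangian update
    mu  *= 0.5                               # drive the barrier to zero

print(np.round(x, 4))                        # approaches the solution (0.5, 1.5)
```

For this instance the KKT point is x* = (0.5, 1.5) with multiplier λ* = 1, and the iterates approach it as μ → 0 and the multiplier estimate converges. The paper's method additionally certifies approximate second-order stationarity (e.g., via a minimum-eigenvalue test on the Hessian), which this sketch omits.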


