
Global convergence of splitting methods for nonconvex composite optimization

07/03/2014
by   Guoyin Li, et al.
UNSW
The Hong Kong Polytechnic University
We consider the problem of minimizing the sum of a smooth function h with a bounded Hessian and a nonsmooth function. We assume that the latter is the composition of a proper closed function P and a surjective linear map M, with the proximal mappings of τP, τ > 0, simple to compute. This problem is nonconvex in general and encompasses many important applications in engineering and machine learning. In this paper, we examine two splitting methods for solving this nonconvex optimization problem: the alternating direction method of multipliers (ADMM) and the proximal gradient algorithm. For the direct adaptation of ADMM, we show that if the penalty parameter is chosen sufficiently large and the generated sequence has a cluster point, then that cluster point is a stationary point of the nonconvex problem. We also establish convergence of the whole sequence under the additional assumption that the functions h and P are semi-algebraic. Furthermore, we give simple sufficient conditions that guarantee boundedness of the generated sequence; these conditions are satisfied in a wide range of applications, including the least squares problem with ℓ_1/2 regularization. Finally, when M is the identity, so that the proximal gradient algorithm can be applied efficiently, we show that any cluster point is stationary under a constant step-size rule slightly more flexible than what is known in the literature for nonconvex h.
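To make the M = identity setting concrete, here is a minimal proximal gradient sketch. It uses the ℓ_0 penalty as an example of a nonconvex P whose proximal mapping (hard thresholding) is simple to compute; the random matrix A, the ℓ_0 choice, the step size 1/L, and the iteration count are illustrative assumptions, not the paper's algorithm or experiments.

```python
import numpy as np

def prox_l0(y, tau):
    # Proximal mapping of tau * ||.||_0 (hard thresholding):
    # zero out every entry whose magnitude is at most sqrt(2 * tau).
    out = y.copy()
    out[np.abs(y) <= np.sqrt(2 * tau)] = 0.0
    return out

def proximal_gradient(grad_h, prox_P, x0, step, n_iter=500):
    # Iterate x_{k+1} = prox_{step * P}(x_k - step * grad_h(x_k)).
    x = x0
    for _ in range(n_iter):
        x = prox_P(x - step * grad_h(x), step)
    return x

# Illustrative instance: h(x) = 0.5 * ||A x - b||^2, P = ||.||_0.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [2.0, -3.0]
b = A @ x_true

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad h
grad_h = lambda x: A.T @ (A @ x - b)
x = proximal_gradient(grad_h, prox_l0, np.zeros(10), step=1.0 / L)
```

In this noiseless toy instance the iterates settle on the sparse vector x_true; with a nonconvex P, the theory discussed in the paper guarantees only that cluster points are stationary, not global minimizers.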

