Minimization of composite functions with linear constraints finds various applications in signal and image processing, statistics, and machine learning, to name a few. Mathematically, such a problem can be presented as
where , , and is usually the regularization function, and
is usually the loss function.
The well-known alternating direction method of multipliers (ADMM) [1, 2] is a powerful tool for the problem mentioned above. The ADMM actually focuses on the augmented Lagrangian problem of (1.1), which reads as
where is a parameter. In each iteration, the ADMM minimizes over one variable while fixing the others; the dual variable is updated by a feedback strategy. Mathematically, the standard ADMM can be presented as
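As a concrete illustration of the scheme above, the following is a minimal sketch of the standard ADMM on a toy instance. The instance f(x) = 0.5*||x - a||^2, g(z) = lam*||z||_1 with constraint x - z = 0, and all parameter values, are our own illustrative choices, not from the paper:

```python
# Minimal sketch of the standard ADMM for
#     min_x f(x) + g(z)  s.t.  x - z = 0,
# with f(x) = 0.5*||x - a||^2 and g(z) = lam*||z||_1 as a toy instance
# (the matrices of the general linear constraint are identities here).

def soft_threshold(v, t):
    """Proximal map of t*|.|, applied componentwise."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def admm(a, lam, beta=1.0, iters=200):
    n = len(a)
    x = [0.0] * n
    z = [0.0] * n
    u = [0.0] * n          # scaled dual variable
    for _ in range(iters):
        # x-step: argmin 0.5*||x - a||^2 + (beta/2)*||x - z + u||^2
        x = [(a[i] + beta * (z[i] - u[i])) / (1.0 + beta) for i in range(n)]
        # z-step: argmin lam*||z||_1 + (beta/2)*||x - z + u||^2
        z = soft_threshold([x[i] + u[i] for i in range(n)], lam / beta)
        # dual update: feedback on the constraint residual x - z
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z

print(admm([3.0, -0.2, 1.0], 0.5))   # converges to soft_threshold(a, lam)
```

For this separable instance the iterates converge to the soft-thresholding of a, which is the known closed-form minimizer of the combined objective.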
The ADMM algorithm has attracted increasing attention for its efficiency in dealing with sparsity-related problems [3, 4, 5, 6, 7]. Obviously, the ADMM carries a self-explanatory assumption: all the subproblems must be solved efficiently. In fact, if the proximal maps of and are easy to calculate, the linearized ADMM  employs a linearization technique to solve the subproblems efficiently; each subproblem then requires computing the proximal map of or only once. The core of the linearized ADMM lies in linearizing the quadratic terms and in each iteration. The linearized ADMM is also called the preconditioned ADMM in ; in fact, it is also a special case (when ) of the Chambolle-Pock primal-dual algorithm . In the latter paper , the linearized ADMM is further generalized to the Bregman ADMM.
The convergence of the ADMM in the convex case is also well studied; numerous excellent works have contributed to this field [12, 13, 14, 15]. Recently, the ADMM algorithm has even been developed for infeasible problems [16, 17]. The earlier analyses focus on the convex case, i.e., both and are convex. But as nonconvex penalty functions perform efficiently in applications, nonconvex ADMM has been developed and studied: in paper , Chartrand and Brendt directly applied the ADMM to group sparsity problems, replacing the nonconvex subproblems with a class of proximal maps. Later, Ames and Hong applied the ADMM to certain nonconvex quadratic problems  and also presented its convergence. A class of nonconvex problems is solved by Hong et al. via a provably convergent ADMM ; they also allow the subproblems to be solved inexactly by taking gradient steps, which can be regarded as a linearization. Recently, under weaker assumptions,  presented new analyses for nonconvex ADMM using novel mathematical techniques. With the Kurdyka-Łojasiewicz property, [22, 23] consider the convergence of the generated iterates.  considered a structured constrained problem and proposed the ADMM-DC algorithm. In the nonconvex ADMM literature, either the proximal maps of and or the subproblems themselves are assumed to be easily solved.
1.1 Motivating example and problem formulation
This subsection contains two parts: the first presents an example and discusses the problems with directly using the ADMM; the second describes the problem considered in this paper.
1.1.1 A motivating example: the problems in directly using ADMM
The methods mentioned above are applicable provided the subproblems are relatively easy to solve, i.e., either the proximal maps of and or the subproblems themselves are easily solved. However, nonconvex cases may not always satisfy this convention. We recall the problem , which arises in imaging science,
where is the total variation operator and . By denoting , the problem then becomes
The direct ADMM for this problem can be presented as
The first subproblem in the algorithm requires minimizing a nonconvex and nonsmooth problem. If , the point can be explicitly calculated because the proximal map of can be easily obtained. But for other , the proximal map cannot be easily derived. Thus, we must employ iterative algorithms to compute . That indicates three drawbacks which cannot be ignored:
The stopping criterion is hard to set due to the nonconvexity. (Convex methods usually enjoy a convergence rate, which provides a natural stopping rule.)
Errors may accumulate over the iterations due to the inexact numerical solution of the subproblem.
Even if the subproblem can be numerically solved without any error, the numerical solution is always a critical point rather than the “real” argmin due to the nonconvexity.
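To make these drawbacks concrete, the following sketch solves the scalar proximal subproblem min_t 0.5*(t - v)^2 + lam*|t|^p for illustrative values lam = 0.5, p = 0.7, v = 2.0 (our own choices, not the paper's) by a fixed-point iteration on the stationarity condition; the output is a critical point of the nonconvex subproblem, not a certified argmin:

```python
# For general p in (0, 1) the proximal map
#     prox(v) = argmin_t 0.5*(t - v)^2 + lam*|t|^p
# has no closed form, so an inner iterative solver is required.  Here a
# simple fixed-point iteration on the first-order condition
#     t - v + lam*p*sign(t)*|t|^(p-1) = 0
# is run for the illustrative values lam = 0.5, p = 0.7, v = 2.0.

def prox_lp_fixed_point(v, lam, p, t0, iters=100):
    t = t0
    for _ in range(iters):
        t = v - lam * p * abs(t) ** (p - 1) * (1.0 if t >= 0 else -1.0)
    return t

t = prox_lp_fixed_point(2.0, 0.5, 0.7, t0=2.0)
# t satisfies the first-order condition, i.e. it is a critical point of the
# subproblem -- but for a nonconvex objective this need not be the global
# argmin, and any finite stopping tolerance of the inner loop adds error.
residual = t - 2.0 + 0.5 * 0.7 * t ** (0.7 - 1)
print(t, residual)
```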
1.1.2 Optimization problem and basic assumptions
In this paper, we consider the following problem
where , and , and satisfy the following assumptions:
A.1 is a differentiable convex function with a Lipschitz continuous gradient, i.e.,
Moreover, the function is strongly convex with constant .
A.2 is convex and proximable.
A.3 is a differentiable concave function with a Lipschitz continuous gradient whose Lipschitz continuity modulus is bounded by ; that is
and when .
1.2 Linearized ADMM meets the iteratively reweighted strategy: convexifying the subproblems
In this part, we present the algorithm for solving problem (1.7). The term has a deep relationship with several iteratively reweighted algorithms [30, 31, 32, 33, 34]. Although the function may itself be nondifferentiable, the reweighted methods still offer an elegant way forward: linearization of the outside function . Precisely, in the -th iteration of an iteratively reweighted algorithm, the term is usually replaced by , where is obtained in the -th iteration. Extensions of reweighted methods to matrix cases are considered and analyzed in [35, 36, 37, 38, 39]. In fact, the iteratively reweighted technique is a special majorization-minimization technique, which has also been adopted in the ADMM . Compared with , the main difference in our paper is the exploitation of the specific structure of the problem in the nonconvex setting. Motivated by the iteratively reweighted strategy, we propose the following scheme for solving (1.7)
We combine both the linearized ADMM and the reweighted algorithm in the new scheme: for the nonconvex part , we linearize the outside function and keep , which yields the convexity of the subproblem; for the quadratic part , linearization allows the use of the proximal map of . We call this new algorithm the Iteratively Linearized Reweighted Alternating Direction Method of Multipliers (ILR-ADMM). It is easy to see that each subproblem in this scheme requires solving only a convex problem. With the expression of proximal maps, updating can be equivalently presented in the following form
where , and denotes the -th column of the matrix . In many applications, is a quadratic function, and then solving is also very easy. In this form, the algorithm can be programmed without any inner loop.
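A hedged sketch of the resulting update follows. The concrete outer function and data were lost in extraction, so we assume the common reweighting choice g(s) = log(1 + s/eps), whose linearization at the previous iterate yields componentwise weighted soft-thresholding; all names and values here are our own illustrations:

```python
# Sketch of the reweighted update: the concave outer function (assumed here
# to be g(s) = log(1 + s/eps), a standard reweighting choice) is linearized
# at the previous iterate, turning the nonconvex subproblem into a weighted
# l1 proximal map, i.e. componentwise weighted soft-thresholding.

def weighted_soft_threshold(v, weights, step):
    """Proximal map of step * sum_i w_i*|x_i|, evaluated at the point v."""
    out = []
    for vi, wi in zip(v, weights):
        t = step * wi
        out.append(max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0))
    return out

def reweighted_x_update(x_prev, v, lam, step, eps=1e-2):
    # weights: derivative of the concave function at |x_prev|,
    # here lam * g'(s) with g'(s) = 1/(eps + s)
    w = [lam / (eps + abs(xi)) for xi in x_prev]
    return weighted_soft_threshold(v, w, step)

x = reweighted_x_update([1.0, 0.05, 0.0], [0.9, 0.1, 0.2], lam=0.1, step=0.5)
```

Note how small entries of the previous iterate receive large weights and are thresholded aggressively, while large entries are barely penalized; this is the usual effect of the reweighting strategy.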
1.3 Contribution and Organization
In this paper, we consider a class of nonconvex and nonsmooth problems which are ubiquitous in applications. Direct use of ADMM algorithms leads to trouble in both computation and mathematics due to the nonconvexity of the subproblems. In view of this, we propose the iteratively linearized reweighted alternating direction method of multipliers for these problems. The new algorithm combines the iteratively reweighted strategy with the linearized ADMM. All subproblems in the proposed algorithm are convex and easy to solve provided the proximal map of is easy to compute and is quadratic. Compared with the direct application of the ADMM to problem (1.7), we now list the advantages of the new algorithm:
Computational perspective: each iteration requires computing the proximal map of only once and minimizing one quadratic problem, so the computational cost per iteration is low.
Practical perspective: without any inner loop, the algorithm is very easy to program.
Mathematical perspective: all the subproblems are convex and solved exactly. Thus, we obtain the “real” argmin everywhere, which makes the convergence analysis solid and meaningful.
With the help of the Kurdyka-Łojasiewicz property, we provide convergence results for the algorithm under proper selections of the parameters. Applications of the new algorithm to signal and image processing are presented, and the numerical results demonstrate its efficiency.
The rest of this paper is organized as follows. Section 2 introduces preliminaries, including the definitions of the subdifferential and the Kurdyka-Łojasiewicz property. Section 3 provides the convergence analysis; the core is the use of an auxiliary Lyapunov function and the boundedness of the generated sequence. Section 4 applies the proposed algorithm to image deblurring and reports several comparisons. Finally, Section 5 concludes the paper.
We introduce the basic tools of the analysis: the subdifferential and the Kurdyka-Łojasiewicz property. These two notions play important roles in variational analysis.
Given a lower semicontinuous function , its domain is defined by
The graph of a real extended valued function is defined by
Now we are prepared to present the definition of the subdifferential. More details can be found in .
Let be a proper and lower semicontinuous function.
For a given , the Fréchet subdifferential of at , written as , is the set of all vectors satisfying
When , we set .
The (limiting) subdifferential, or simply the subdifferential, of at , written as , is defined through the following closure process
Note that if , . When is convex, the definition agrees with the subgradient in convex analysis , which is defined as
It is easy to verify that the Fréchet subdifferential is convex and closed, while the limiting subdifferential is closed. Denote
thus, is a closed set. Let be a sequence in such that . If converges to as and converges to as , then . This indicates the following simple proposition.
If , , , , and . (If is continuous, the last condition certainly holds when .) Then, we have
A necessary condition for to be a minimizer of is
When is convex, (2.2) is also sufficient.
A point satisfying (2.2) is called a (limiting) critical point. The set of critical points of is denoted by .
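As a small illustration of the difference between the Fréchet and limiting subdifferentials (an example of ours, not part of the problem setting above), consider the function f(x) = -|x| on the real line:

```latex
% Frechet vs. limiting subdifferential of f(x) = -|x| at x = 0.
% No v satisfies \liminf_{x \to 0} (f(x) - f(0) - vx)/|x| \ge 0, since the
% quotient equals -1 - v\,\operatorname{sign}(x); hence the Frechet set is empty:
\hat{\partial} f(0) = \varnothing, \qquad \partial f(0) = \{-1, +1\},
% because the gradients f'(x_k) = -\operatorname{sign}(x_k) along x_k \to 0^{\pm}
% survive the closure process. Since 0 \notin \partial f(0), the origin is not
% a (limiting) critical point, even though it is a local maximizer.
```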
If is a critical point of with any , it must hold that
where is defined in (1.10) and .
By [Proposition 10.5, ], we have
Noting that is differentiable and is convex, with direct computation, we have
Moreover, by definition, we obtain
This proves the first equation; the second and third are straightforward. ∎
2.2 Kurdyka-Łojasiewicz function
The domain of a subdifferential is given as
(a) The function is said to have the Kurdyka-Łojasiewicz property at if there exist , a neighborhood of and a continuous concave function such that
is on .
for all , .
for all in , it holds
(b) Proper closed functions which satisfy the Kurdyka-Łojasiewicz property at each point of are called KL functions.
More details can be found in [43, 44, 45]. In the rest of the paper, we abbreviate Kurdyka-Łojasiewicz as KL. Directly checking whether a function is KL is hard, but proper closed semi-algebraic functions  help greatly.
(a) A subset of is a real semi-algebraic set if there exists a finite number of real polynomial functions such that
(b) A function is called semi-algebraic if its graph
is a semi-algebraic subset of .
Better yet, semi-algebraic functions enjoy many nice properties, and various kinds of functions are semi-algebraic and hence KL . We list a few of them here:
Real closed polynomial functions.
Indicator functions of closed semi-algebraic sets.
Finite sums and products of closed semi-algebraic functions.
The composition of closed semi-algebraic functions.
Sup/inf-type functions; e.g., is semi-algebraic when is a closed semi-algebraic function and is a closed semi-algebraic set.
The closed cone of PSD matrices, closed Stiefel manifolds, and closed sets of constant-rank matrices.
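Combining these closure rules gives a quick way to certify the KL property for composite objectives of the type studied here. For instance (our own illustrative objective, not the specific one of this paper):

```latex
% Example: the composite objective
f(x) \;=\; \|Ax - b\|_2^2 \;+\; \lambda \sum_{i=1}^{n} |x_i|^{1/2}
% is semi-algebraic: the first term is a real polynomial, and the graph of
% t \mapsto |t|^{1/2} is the semi-algebraic set \{(t,s) : s \ge 0,\ s^4 = t^2\};
% finite sums and compositions of closed semi-algebraic functions remain
% semi-algebraic, so f is a KL function by Lemma 1.
```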
Lemma 1 ().
Let be a proper and closed function. If is semi-algebraic then it satisfies the KL property at any point of .
The previous definition and property of KL concern a certain point in . In fact, the property has been extended to a closed set , and this extension makes convergence proofs relying on the KL property much easier.
Let be a proper lower semicontinuous function and a compact set. If is constant on and satisfies the KL property at each point of , then there exists a concave function satisfying the four properties given in Definition 3 such that for any and any satisfying and , it holds that
3 Convergence analysis
In this part, the function is the one defined in (1.10). We provide the convergence guarantee and analysis of ILR-ADMM (Algorithm 1). We first present a sketch of the proofs, which also gives a big picture of the purpose of each lemma and theorem:
In the first step, we bound the dual variables by the primal points (Lemma 3).
In the second step, the sufficient descent condition is derived for a new Lyapunov function (Lemma 4).
In the third step, we provide several conditions to bound the points (Lemma 5).
In the fourth step, the relative error condition is proved (Lemma 6).
In the last step, we prove the convergence under semi-algebraic assumption (Theorem 1).
The proofs in our paper are closely related to the seminal papers [20, 21, 22] in several proof treatments. In fact, some proofs follow their techniques. For example, in Lemma 3, we employ the method used in [Lemma 3, ] to bound . In Lemma 5, boundedness of the sequence is proved in a way similar to [Theorem 3, ]. Beyond these detailed issues, in the large picture the keystones are also similar to [20, 21, 22]: we prove the sufficient descent and subdifferential bound for a Lyapunov function, as well as the boundedness of the generated points.
However, the proofs in our paper still differ from [20, 21, 22] in various aspects. The novelties mainly lie in deriving the sufficient descent and subdifferential bound based on the specific structure of our problem. Note that in each iteration, we minimize and rather than and . Thus, the previous methods cannot be directly used in our paper. By exploiting the structural properties of the problem, we establish these two conditions.
Then, we have
where , and is the smallest strictly positive eigenvalue of .
The second step in each iteration actually gives
With the expression of ,
Replacing with , we can obtain
Under condition (3.1), ; subtracting the two equations above gives
The condition (3.1) is satisfied if is surjective. However, in many applications, the matrix may fail to be surjective. For example, for a matrix , we consider the operator
where is the forward difference operator. Noting that when , cannot be surjective in this case. However, the existing convergence results for nonconvex ADMM are all based on the surjectivity assumption on or condition (3.1), which is also used in our analysis. How to remove condition (3.1) in the nonconvex ADMM deserves further research.
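The remark about the forward difference operator can be checked numerically. The small sketch below (our own construction, with a zero boundary row) verifies that constants lie in the null space of the operator, so it is rank deficient and cannot be surjective:

```python
# Check: the (zero-boundary) forward difference operator D on R^n
# annihilates the constant vector, so D is rank deficient and cannot be
# surjective; condition (3.1) must then be stated through the smallest
# strictly positive eigenvalue instead.

def forward_difference(n):
    """n x n forward difference matrix: (Dx)_i = x_{i+1} - x_i, last row 0."""
    D = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        D[i][i] = -1.0
        D[i][i + 1] = 1.0
    return D

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

n = 5
D = forward_difference(n)
print(matvec(D, [1.0] * n))   # the zero vector: constants are in the null space
```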
Now, we introduce some notation for the following lemma. Denote the variable and the sequence as
The following auxiliary function is used throughout the proof:
Lemma 4 (Descent).
Let the sequence be generated by ILR-ADMM. If condition (3.1) and the following condition
hold, then there exists such that
Direct calculation shows that the first step actually minimizes the function with respect to . Thus, we have
Similarly, actually minimizes . Noting , by assumption A.1 the strong convexity constant of is larger than ,
Direct calculation yields
Combining the equations above, we can have
Noting is concave, we have
Then, we can derive
With Lemma 3, we then have
Letting , we then prove the result. ∎
In fact, condition (3.11) can always be satisfied in applications because the parameters and are both selected by the user. Different from the ADMM in the convex setting, the parameter cannot be chosen arbitrarily; the here should be sufficiently large.
Lemma 5 (Boundedness).
The sequence is bounded, if one of the following conditions holds:
B1. is coercive, and is coercive.
B2. is coercive, and is invertible.
B3. , is coercive, and is invertible.
Noting that is decreasing by Lemma 4, . We can then see that , , and are all bounded. It is easy to see that if one of the three conditions holds, is bounded. ∎
Both assumptions B2 and B3 actually imply condition (3.7).