# Iteratively reweighted penalty alternating minimization methods with continuation for image deblurring

In this paper, we consider a class of nonconvex problems with linear constraints that appear frequently in image processing. We solve this class of problems by the penalty method and propose an iteratively reweighted alternating minimization algorithm. To speed up the algorithm, we also apply a continuation strategy to the penalty parameter. A convergence result is proved for the algorithm. Compared with the nonconvex ADMM, the proposed algorithm enjoys both theoretical and computational advantages, such as weaker convergence requirements and faster speed. Numerical results demonstrate the efficiency of the proposed algorithm.


## 1 Introduction

Linearly constrained problems are widely discussed across various disciplines such as imaging science, signal processing, and machine learning, to name a few. The classical algorithm for linearly constrained problems is the Alternating Direction Method of Multipliers (ADMM); earlier literature focused on the convex case [1, 2, 3, 4]. In recent years, nonconvex ADMM has been developed for nonconvex problems [5, 6, 7, 8].

### 1.1 Motivations

Although ADMM can be applied to the nonconvex Total Variation (TV) deblurring problem, several drawbacks still exist. We point out three of them as follows.

1. The convergence guarantees of nonconvex ADMMs require a very large Lagrange dual multiplier. Worse still, the large multiplier makes the nonconvex ADMM run slowly.

2. When applying nonconvex ADMMs to the nonconvex TV deblurring model, direct checks show that the convergence theory requires the TV operator to have full row rank; however, the TV operator does not satisfy this assumption. This point was raised in [9].

3. Previous analyses show that the sequence converges to a critical point of an auxiliary function under several assumptions, but the relationship between the auxiliary function and the original one is unclear in the nonconvex setting.

Considering these drawbacks, from both computational and theoretical perspectives, it is necessary to consider novel and efficient solvers. The main reason why nonconvex ADMMs suffer these drawbacks is the dual variable: in the convergence proofs for the nonconvex case, the dual variables are handled simply via Cauchy inequalities, so the resulting estimates are somewhat loose. Therefore, we employ the penalty method to avoid using dual information.

### 1.2 Contributions and organization

In this paper, we consider using the penalty method for a class of nonconvex linearly constrained minimizations. Different from the nonconvex ADMM, determining the penalty parameter in the proposed algorithm is inexpensive. Although the penalty parameter is also large, we can use a continuation method, i.e., increase the penalty parameter during the iterations. The alternating minimization (AM) methods [10, 11] are well suited to the penalty problem. However, directly applying alternating minimization to the penalty problem encounters an issue: the subproblems may have no closed-form solutions. To overcome this, exploiting the structure of the problem, we linearize the regularized part in the algorithm. In this way, all the subproblems are convex and can be globally minimized numerically even when no closed-form solution exists. We prove the square-summability of the successive differences of the generated iterates. We apply our algorithm to the nonconvex image deblurring problem and compare it with the nonconvex ADMM. The numerics demonstrate the efficiency and speed of the proposed algorithm.

In Section 2, we present the problem, the algorithm, and its convergence results. Section 3 contains the applications and numerics. Section 4 concludes the paper.

## 2 Problem formulation and algorithm

In this paper, we consider a broad class of nonconvex and nonsmooth problems with the following form:

$$\min_{x,y}\Big\{\Psi(x,y):=f(x)+\sum_{i=1}^{N}h(g(y_i))\ \ \text{s.t.}\ Ax+By=c\Big\},\tag{1}$$

where $x\in\mathbb{R}^n$, $y=(y_1,\dots,y_N)^{\top}\in\mathbb{R}^N$, $A$, $B$, $c$ are a matrix/vector of compatible dimensions, and the functions $f$, $g$, $h$ satisfy the following assumptions:

• A.1 $f$ is a closed proper convex function and $\underline{f}:=\inf f>-\infty$.

• A.2 $g$ is a convex function, and the proximal map of $g$ is easy to calculate. (We say the proximal map of $g$ is easy to calculate if the minimization problem $\min_{y}\{g(y)+\frac{1}{2\lambda}\|y-z\|_2^2\}$ can be solved very easily for any $\lambda>0$ and any $z$.)

• A.3 $h$ is a differentiable concave function with $h'\ge 0$, and $\underline{hg}:=\inf_{y}\sum_{i=1}^{N}h(g(y_i))>-\infty$.

A very classical problem which can be formulated as (1) is Total Variation (TV-$q$) deblurring [12]

$$\min_{u}\Big\{\frac{1}{2}\|H(u)-B\|_F^2+\lambda\|T(u)\|_q^q\Big\},\tag{2}$$

where $H$ is the blurring operator, $T$ is the well-known total variation operator and $0<q<1$. By defining $v=T(u)$, the problem turns into

$$\min_{u,v}\Big\{\frac{1}{2}\|H(u)-B\|_F^2+\lambda\|v\|_q^q\ \ \text{s.t.}\ v=T(u)\Big\}.\tag{3}$$

### 2.1 Algorithm

We consider the penalized problem

$$\min_{x,y}\Big\{\Phi_{\gamma}(x,y):=f(x)+\sum_{i=1}^{N}h(g(y_i))+\frac{\gamma}{2}\|Ax+By-c\|_2^2\Big\}.\tag{4}$$

The difference between problems (1) and (4) is determined by the parameter $\gamma$; they coincide as $\gamma\to+\infty$. Assume that $(x^*,y^*)$ is a solution to problem (4), $(x^{\dagger},y^{\dagger})$ is a solution to problem (1), and $(\hat{x},\hat{y})$ is any point satisfying $A\hat{x}+B\hat{y}=c$; we then have the following claims.

$$\Psi(x^*,y^*)\le\Psi(x^{\dagger},y^{\dagger})\tag{5}$$

and

$$\|Ax^*+By^*-c\|_2^2\le\frac{2}{\gamma}\big[\Psi(\hat{x},\hat{y})-\underline{f}-\underline{hg}\big],\tag{6}$$

where $\underline{f}:=\inf f$ and $\underline{hg}:=\inf_{y}\sum_{i=1}^{N}h(g(y_i))$. These two claims quantify the error between (1) and (4). We present brief proofs of claims (5) and (6). First, by the definition of $(x^*,y^*)$ as a minimizer of $\Phi_{\gamma}$, we have

$$\Phi_{\gamma}(x^*,y^*)\le\Phi_{\gamma}(x^{\dagger},y^{\dagger}).\tag{7}$$

Noting $Ax^{\dagger}+By^{\dagger}=c$, we then have

$$\Psi(x^*,y^*)+\frac{\gamma}{2}\|Ax^*+By^*-c\|_2^2\le\Psi(x^{\dagger},y^{\dagger}).\tag{8}$$

Thus, we are led to

$$\Psi(x^*,y^*)\le\Psi(x^{\dagger},y^{\dagger}).\tag{9}$$

Similarly, we derive

$$\Psi(x^*,y^*)+\frac{\gamma}{2}\|Ax^*+By^*-c\|_2^2\le\Psi(\hat{x},\hat{y}).\tag{10}$$

With the fact $\Psi(x^*,y^*)\ge\underline{f}+\underline{hg}$, we then get (6).

Applying the claims to the TV deblurring problem (3), we can see that $\underline{f}=\underline{hg}=0$, and we can choose $\hat{u}=0$ and $\hat{v}=T(\hat{u})=0$, so that $\Psi(\hat{u},\hat{v})=\frac{1}{2}\|B\|_F^2$. Then it holds that

$$\|T(u^*)-v^*\|_2^2\le\frac{\|B\|_F^2}{\gamma},$$

where $(u^*,v^*)$ is the minimizer of the penalty problem. Thus, to achieve an $\varepsilon$-error approximation, we just need to set $\gamma\ge\|B\|_F^2/\varepsilon$.
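As a sanity check on the bound (6), the following sketch solves a tiny quadratic instance of (4) in closed form and verifies that the squared constraint violation is bounded by $2\Psi(\hat{x},\hat{y})/\gamma$. The instance ($f(x)=x^2/2$, $h(t)=t$, $g(y)=y^2$, constraint $x+y=1$, feasible point $\hat{x}=1$, $\hat{y}=0$) is our toy choice, not from the paper:

```python
import numpy as np

def penalty_solution(gamma):
    # minimize x^2/2 + y^2 + (gamma/2)(x + y - 1)^2, a tiny instance of (4)
    # stationarity: x + gamma(x+y-1) = 0 and 2y + gamma(x+y-1) = 0
    M = np.array([[1.0 + gamma, gamma], [gamma, 2.0 + gamma]])
    rhs = np.array([gamma, gamma])
    return np.linalg.solve(M, rhs)

for gamma in [1.0, 10.0, 100.0, 1000.0]:
    x, y = penalty_solution(gamma)
    violation = (x + y - 1.0) ** 2
    # bound (6): here f_low = hg_low = 0 and Psi(1, 0) = 1/2, so the bound is 1/gamma
    assert violation <= 1.0 / gamma
    print(gamma, violation, 1.0 / gamma)
```

The printed violations decay at the predicted $O(1/\gamma)$ rate as the penalty parameter grows.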

The classical algorithm for solving this problem is the Alternating Minimization (AM) method, i.e., minimizing over one variable while fixing the other. However, if AM is applied directly to model (4), the subproblem may still be nonconvex, and its minimizer is hard to obtain in most cases. Considering the structure of the problem, we use a linearized technique for the nonsmooth part. This method is inspired by the reweighted algorithms [13, 14, 15, 16]. To derive sufficient descent, we also add a proximal term. We call the result the iteratively reweighted penalty alternating minimization (IRPAM) method, which can be described as

$$\begin{cases}x^{k+1}\in\operatorname*{argmin}_{x}\Big\{f(x)+\frac{\gamma}{2}\|Ax+By^{k}-c\|_2^2\Big\},\\[4pt] y^{k+1}\in\operatorname*{argmin}_{y}\Big\{\sum_{i=1}^{N}w_i^{k}g(y_i)+\frac{\gamma}{2}\|Ax^{k+1}+By-c\|_2^2+\frac{\delta\gamma}{2}\|y-y^{k}\|_2^2\Big\},\end{cases}\tag{11}$$

where $w_i^{k}=h'(g(y_i^{k}))$ and $\delta>0$. If $h(t)=t$, then $w_i^{k}\equiv 1$ and the algorithm is actually the AM. In the algorithm, all subproblems are convex. If the proximal maps of $f$ and $g$ are easy to calculate, both subproblems are easy to solve. If $B=I$, the minimizer of the second subproblem reduces to the following form

$$y_i^{k+1}=\operatorname{Prox}_{w_i^{k}g/((1+\delta)\gamma)}\Big(\frac{\delta y_i^{k}}{1+\delta}+\frac{c_i}{1+\delta}-\frac{A_i x^{k+1}}{1+\delta}\Big),\tag{12}$$

where $A_i$ is the $i$-th row of $A$ and $\operatorname{Prox}_{\tau g}(z):=\operatorname*{argmin}_{y}\big\{g(y)+\frac{1}{2\tau}\|y-z\|_2^2\big\}$. Actually, in the TV deblurring model, $B$ is (up to sign) the identity map. When implementing the algorithms, we increase $\gamma$ in each iteration and set an upper bound $\bar{\gamma}$. This continuation technique was used in [17, 18, 19]. In the continuation version, we use $\gamma_k$ rather than a constant $\gamma$ in the $k$-th iteration. The scheme of IRPAM with continuation (IRPAMC) can be presented as follows.

In Algorithm 1, if we set $\gamma_k\equiv\gamma$, the algorithm is indeed the IRPAM; if $\gamma_k$ increases to $\bar{\gamma}$, the algorithm is the IRPAMC. When IRPAMC is applied to the TV-$q$ deblurring problem, the subproblems involve only FFTs and soft shrinkages, which can be computed fast. More details can be found in [20].
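As an illustration (not the authors' code), the following Python sketch runs IRPAMC on a 1-D toy version of (3). The circular operators `H` and `T` are 1-D stand-ins for the 2-D blurring and TV operators, the small constant `eps` in the weights is our addition (since $h'(t)=\lambda q t^{q-1}$ blows up at zero), and the geometric schedule $\gamma_{k+1}=\min(\rho\gamma_k,\bar{\gamma})$ is assumed, as the text only says $\gamma$ is increased up to an upper bound:

```python
import numpy as np

np.random.seed(0)
n = 64

# Toy 1-D circular "blur" H (kernel sums to one) and difference operator T,
# stand-ins for the 2-D blurring and TV operators of model (3).
k = np.exp(-0.5 * np.arange(-3, 4) ** 2)
k /= k.sum()
H = np.zeros((n, n))
for i in range(n):
    for j, kv in zip(range(i - 3, i + 4), k):
        H[i, j % n] += kv
T = np.eye(n) - np.roll(np.eye(n), 1, axis=1)

u_true = np.zeros(n)
u_true[20:40] = 1.0                       # piecewise-constant signal
b = H @ u_true + 0.01 * np.random.randn(n)

q, lam, delta, eps = 0.5, 0.01, 1.0, 1e-8
gamma, gamma_bar, rho = 1.0, 100.0, 1.5   # assumed continuation schedule

u, v = b.copy(), T @ b
for _ in range(200):
    # x-step of (11): least squares, (H^T H + gamma T^T T) u = H^T b + gamma T^T v
    u = np.linalg.solve(H.T @ H + gamma * T.T @ T, H.T @ b + gamma * T.T @ v)
    # y-step of (11): reweighted soft-thresholding with weights
    # w_i = lam * q * (|v_i| + eps)^(q-1); eps is our smoothing, see the lead-in
    w = lam * q * (np.abs(v) + eps) ** (q - 1.0)
    z = (T @ u + delta * v) / (1.0 + delta)
    v = np.sign(z) * np.maximum(np.abs(z) - w / (gamma * (1.0 + delta)), 0.0)
    gamma = min(rho * gamma, gamma_bar)   # continuation: raise gamma up to gamma_bar

print(np.linalg.norm(T @ u - v))          # constraint violation shrinks as gamma grows
```

Both subproblems are the convex pieces described above: the $x$-step is a linear solve (an FFT in the circulant case) and the $y$-step is a weighted soft-shrinkage, matching (12).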

Variants of IRPAMC can be developed by linearizing the quadratic term, by adding a proximal term to the $x$-subproblem in the $k$-th iteration, or even in a hybrid way. In [11], the authors introduced various AM schemes which can be adapted to produce variants of IRPAMC.

We have shown that for any given accuracy $\varepsilon$, the penalty parameter $\gamma$ can be set explicitly. Moreover, the convergence of IRPAMC is free of any full row-rank requirement on the linear operators. Hence, compared with the nonconvex ADMM, IRPAMC overcomes the three drawbacks pointed out in the previous section.

### 2.2 Convergence

In this part, we present the convergence of IRPAMC. Specifically, we prove that the squared differences of successive iterates are summable. For technical reasons, we need an extra assumption.

• A.4 $f(x)+\frac{\gamma}{2}\|Ax\|_2^2$ is strongly convex with modulus $\nu_{\gamma}>0$.

Now, we discuss the validity of Assumption A.4. For the deblurring model (3), A.4 actually requires $\frac{1}{2}\|H(u)-B\|_F^2+\frac{\gamma}{2}\|T(u)\|_2^2$ to be strongly convex. With basic linear algebra, we just need to verify $\ker(H)\cap\ker(T)=\{0\}$. Direct computation gives $\ker(T)=\operatorname{span}\{\mathbf{1}\}$, the constant images. For the blurring operator $H$, $H(\mathbf{1})\neq 0$. That means A.4 holds for the deblurring model.
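The kernel condition above can be checked numerically. The sketch below builds a small 1-D circular blur `H` whose kernel sums to one (so `H @ ones = ones ≠ 0`) and a difference operator `T` whose kernel consists of the constant vectors, and verifies that $H^{\top}H+\gamma T^{\top}T$ is positive definite; the 1-D operators are our stand-ins for the 2-D blurring and TV operators:

```python
import numpy as np

n = 16
# circulant blur whose rows sum to one: H @ ones = ones != 0
k = np.array([0.25, 0.5, 0.25])
H = np.zeros((n, n))
for i in range(n):
    for j, kv in zip(range(i - 1, i + 2), k):
        H[i, j % n] += kv
# circular difference operator: T @ ones = 0, so ker(T) = span{ones}
T = np.eye(n) - np.roll(np.eye(n), 1, axis=1)

gamma = 1.0
nu = np.linalg.eigvalsh(H.T @ H + gamma * T.T @ T).min()
assert nu > 0                                           # A.4 holds: x-subproblem strongly convex
assert np.linalg.norm(T @ np.ones(n)) < 1e-12           # constants lie in ker(T)
assert np.linalg.norm(H @ np.ones(n) - np.ones(n)) < 1e-12  # but H does not kill them
```

Because the two kernels intersect only at zero, the smallest eigenvalue `nu` is strictly positive, which is exactly the modulus $\nu_{\gamma}$ required by A.4.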

###### Theorem 1

Assume that $(x^k,y^k)_{k\ge 0}$ is generated by IRPAMC, that Assumptions A.1, A.2, A.3 and A.4 hold, and that $\gamma_k$ increases to $\bar{\gamma}<+\infty$ with $\gamma_k=\bar{\gamma}$ for all $k\ge K$. Then we have the following results.

(1) It holds that

$$\Phi_{\bar{\gamma}}(x^{k},y^{k})-\Phi_{\bar{\gamma}}(x^{k+1},y^{k+1})\ge\min\{\bar{\gamma},\nu_{\bar{\gamma}}\}\cdot\|x^{k+1}-x^{k}\|_2^2+\frac{\delta\bar{\gamma}}{2}\|y^{k+1}-y^{k}\|_2^2\tag{13}$$

for any $k\ge K$.

(2) $\sum_{k}\big(\|x^{k+1}-x^{k}\|_2^2+\|y^{k+1}-y^{k}\|_2^2\big)<+\infty$, which implies that

$$\lim_{k}\|x^{k+1}-x^{k}\|_2=0,\qquad\lim_{k}\|y^{k+1}-y^{k}\|_2=0.\tag{14}$$
###### Proof 1

(1) The convexity of $-h$ and the fact $w_i^{k}=h'(g(y_i^{k}))$ yield

$$\big[-h(g(y_i^{k+1}))\big]-\big[-h(g(y_i^{k}))\big]\ge\big\langle -w_i^{k},\,g(y_i^{k+1})-g(y_i^{k})\big\rangle.\tag{15}$$

That is,

$$h(g(y_i^{k}))-h(g(y_i^{k+1}))\ge\big\langle w_i^{k},\,g(y_i^{k})-g(y_i^{k+1})\big\rangle.\tag{16}$$

It is easy to see that $w_i^{k}\ge 0$, since $h'\ge 0$. In the update of $y^{k+1}$, we have

$$\sum_{i=1}^{N}w_i^{k}g(y_i^{k})+\frac{\bar{\gamma}}{2}\|Ax^{k+1}+By^{k}-c\|_2^2\ge\sum_{i=1}^{N}w_i^{k}g(y_i^{k+1})+\frac{\bar{\gamma}}{2}\|Ax^{k+1}+By^{k+1}-c\|_2^2+\frac{\delta\bar{\gamma}}{2}\|y^{k+1}-y^{k}\|_2^2.\tag{17}$$

Combining (16) and (17), we then derive

$$\sum_{i=1}^{N}h(g(y_i^{k}))+\frac{\bar{\gamma}}{2}\|Ax^{k+1}+By^{k}-c\|_2^2\ge\sum_{i=1}^{N}h(g(y_i^{k+1}))+\frac{\bar{\gamma}}{2}\|Ax^{k+1}+By^{k+1}-c\|_2^2+\frac{\delta\bar{\gamma}}{2}\|y^{k+1}-y^{k}\|_2^2.\tag{18}$$

That is,

$$\Phi_{\bar{\gamma}}(x^{k+1},y^{k})-\Phi_{\bar{\gamma}}(x^{k+1},y^{k+1})\ge\frac{\delta\bar{\gamma}}{2}\|y^{k+1}-y^{k}\|_2^2.\tag{19}$$

With Assumption A.4, $f(x)+\frac{\bar{\gamma}}{2}\|Ax+By^{k}-c\|_2^2$ is strongly convex. Since $x^{k+1}$ is its minimizer, the strong convexity then yields

$$\frac{\bar{\gamma}}{2}\|Ax^{k}+By^{k}-c\|_2^2+f(x^{k})-\Big(\frac{\bar{\gamma}}{2}\|Ax^{k+1}+By^{k}-c\|_2^2+f(x^{k+1})\Big)\ge\min\{\bar{\gamma},\nu_{\bar{\gamma}}\}\cdot\|x^{k+1}-x^{k}\|_2^2.\tag{20}$$

The relation (20) also means

$$\Phi_{\bar{\gamma}}(x^{k},y^{k})-\Phi_{\bar{\gamma}}(x^{k+1},y^{k})\ge\min\{\bar{\gamma},\nu_{\bar{\gamma}}\}\cdot\|x^{k}-x^{k+1}\|_2^2.\tag{21}$$

Summing (19) and (21), we then get (13).

(2) From (13), $\big(\Phi_{\bar{\gamma}}(x^{k},y^{k})\big)_{k}$ is non-increasing for $k\ge K$. Noting that $\Phi_{\bar{\gamma}}$ is bounded below, we can see that $\big(\Phi_{\bar{\gamma}}(x^{k},y^{k})\big)_{k}$ is convergent. Hence, we easily have

$$\sum_{j=K}^{k}\big(\|x^{j+1}-x^{j}\|_2^2+\|y^{j+1}-y^{j}\|_2^2\big)\le\frac{\Phi_{\bar{\gamma}}(x^{K},y^{K})-\Phi_{\bar{\gamma}}(x^{k+1},y^{k+1})}{\min\{\bar{\gamma},\nu_{\bar{\gamma}},\delta\bar{\gamma}/2\}}<+\infty.\tag{22}$$
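The descent property (13) can be observed numerically. The sketch below runs IRPAM with a fixed $\bar{\gamma}$ on a tiny quadratic instance ($f(x)=x^2/2$, $h(t)=t$ so $w^k\equiv 1$, $g(y)=y^2$, constraint $x+y=1$; our toy choices, not from the paper) and asserts that $\Phi_{\bar{\gamma}}$ never increases between iterations:

```python
import numpy as np

gamma, delta = 10.0, 1.0

def Phi(x, y):
    # penalty objective (4) for this instance
    return 0.5 * x**2 + y**2 + 0.5 * gamma * (x + y - 1.0) ** 2

x, y = 0.0, 0.0
vals = [Phi(x, y)]
for _ in range(50):
    # x-step: minimize x^2/2 + (gamma/2)(x + y - 1)^2 exactly
    x = gamma * (1.0 - y) / (1.0 + gamma)
    # y-step: minimize y^2 + (gamma/2)(x + y - 1)^2 + (delta*gamma/2)(y - y_old)^2
    y = (gamma * (1.0 - x) + delta * gamma * y) / (2.0 + gamma + delta * gamma)
    vals.append(Phi(x, y))

# monotone decrease of Phi, as guaranteed by (13)
assert all(v1 <= v0 + 1e-12 for v0, v1 in zip(vals, vals[1:]))
```

Each step is an exact minimization of a strongly convex subproblem, so the objective chain $\Phi(x^{k},y^{k})\ge\Phi(x^{k+1},y^{k})\ge\Phi(x^{k+1},y^{k+1})$ holds, mirroring (19) and (21).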

## 3 Application to image deblurring

In this part, we apply the proposed algorithm to image deblurring and compare its performance with the nonconvex ADMM. All algorithms are implemented entirely in MATLAB, and all experiments are run under Windows with MATLAB R2016a on a laptop with an Intel Core i5 CPU (2.8 GHz) and 8 GB of memory. The Lena image is used in the numerical experiments.

We solve (3) with $0<q<1$ and use the nonconvex ADMM proposed in [9] for comparison. The performance of the deblurring algorithms is routinely measured by the signal-to-noise ratio (SNR)

$$\mathrm{SNR}(u,u^*):=10\log_{10}\Big\{\frac{\|u-\bar{u}\|_2^2}{\|u^*-u\|_2^2}\Big\},\tag{23}$$

where $u$ and $u^*$ denote the original image and the deblurred image, respectively, and $\bar{u}$ stands for the mean of the original image. In the experiments, the blurring operator $H$ is generated by the MATLAB command fspecial('gaussian',·,·). The blurred image is generated by

$$B=H(u)+e,\tag{24}$$

where $e$ is Gaussian noise. In the experiments, the noise power, the regularization parameter $\lambda$, the exponent $q$, and the algorithm parameters $\delta$ and $\bar{\gamma}$ are fixed, and the proposed algorithms are terminated after 200 iterations. We compare IRPAMC with the nonconvex ADMM, in which the Lagrange dual multiplier is set to the same value. For both algorithms, the initialization is the blurred image. The numerical results are shown in Fig. 1.
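For reference, the SNR of (23) takes only a few lines to compute; the helper name `snr` is ours:

```python
import numpy as np

def snr(u, u_star):
    """SNR (in dB) of a restored image u_star against the original u, per (23)."""
    ubar = u.mean()
    return 10.0 * np.log10(np.sum((u - ubar) ** 2) / np.sum((u_star - u) ** 2))

u = np.arange(4.0)                       # toy "original": mean 1.5, ||u - mean||^2 = 5
u_star = u + np.array([1.0, 0, 0, 0])    # toy "restoration": ||u* - u||^2 = 1
print(round(snr(u, u_star), 4))          # 10*log10(5) ≈ 6.9897
```

A higher SNR means the restoration error is small relative to the image's own variation about its mean.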

## 4 Conclusion

In this paper, we propose an iteratively reweighted alternating minimization algorithm for a class of linearly constrained problems. The algorithm is developed from the perspective of a penalty strategy. To speed up the iteration, we also employ a continuation trick for the penalty parameter. We prove the convergence of the algorithm under weaker assumptions than the nonconvex ADMM. Numerical results on the nonconvex TV deblurring problem are also presented to demonstrate the efficiency of the proposed algorithm.

## References

• [1] Jonathan Eckstein and Wang Yao, “Understanding the convergence of the alternating direction method of multipliers: Theoretical and computational perspectives,” Pacific Journal of Optimization, vol. 11, no. 4, pp. 619–644, 2015.
• [2] Michel Fortin and Roland Glowinski, Augmented Lagrangian methods: applications to the numerical solution of boundary-value problems, Elsevier, 2000.
• [3] Daniel Gabay and Bertrand Mercier, “A dual algorithm for the solution of nonlinear variational problems via finite element approximation,” Computers & Mathematics with Applications, vol. 2, no. 1, pp. 17–40, 1976.
• [4] Roland Glowinski and JT Oden, “Numerical methods for nonlinear variational problems,” Journal of Applied Mechanics, vol. 52, pp. 739, 1985.
• [5] Yangyang Xu and Wotao Yin, “A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion,” SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1758–1789, 2013.
• [6] Yu Wang, Wotao Yin, and Jinshan Zeng, “Global convergence of ADMM in nonconvex nonsmooth optimization,” Journal of Scientific Computing, vol. 78, no. 1, pp. 29–63, 2019.
• [7] Mingyi Hong, Zhi-Quan Luo, and Meisam Razaviyayn, “Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems,” SIAM Journal on Optimization, vol. 26, no. 1, pp. 337–364, 2016.
• [8] Fenghui Wang, Wenfei Cao, and Zongben Xu, “Convergence of multi-block Bregman ADMM for nonconvex composite problems,” Science China Information Sciences, vol. 61, no. 12, 122102, 2018.
• [9] Tao Sun, Hao Jiang, Lizhi Cheng, and Wei Zhu, “Iteratively linearized reweighted alternating direction method of multipliers for a class of nonconvex problems,” IEEE Transactions on Signal Processing, vol. 66, no. 20, pp. 5380–5391, 2018.
• [10] Amir Beck, “On the convergence of alternating minimization for convex programming with applications to iteratively reweighted least squares and decomposition schemes,” SIAM Journal on Optimization, vol. 25, no. 1, pp. 185–209, 2015.
• [11] Tao Sun and Lizhi Cheng, “Little-o convergence rates for several alternating minimization methods,” Communications in Mathematical Sciences, vol. 15, no. 1, pp. 197–211, 2017.
• [12] Michael Hintermüller and Tao Wu, “Nonconvex TV^q-models in image restoration: Analysis and a trust-region regularization–based superlinearly convergent solver,” SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1385–1415, 2013.
• [13] Rick Chartrand and Wotao Yin, “Iteratively reweighted algorithms for compressive sensing,” in IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872, 2008.
• [14] Emmanuel J Candes, Michael B Wakin, and Stephen P Boyd, “Enhancing sparsity by reweighted ℓ1 minimization,” Journal of Fourier Analysis and Applications, vol. 14, no. 5, pp. 877–905, 2008.
• [15] Xiaojun Chen and Weijun Zhou, “Convergence of the reweighted ℓ1 minimization algorithm for ℓ2–ℓp minimization,” Computational Optimization and Applications, vol. 59, no. 1-2, pp. 47–61, 2014.
• [16] Tao Sun, Hao Jiang, and Lizhi Cheng, “Global convergence of proximal iteratively reweighted algorithm,” Journal of Global Optimization, vol. 68, no. 4, pp. 815–826, 2017.
• [17] Elaine T Hale, Wotao Yin, and Yin Zhang, “Fixed-point continuation for ℓ1-minimization: Methodology and convergence,” SIAM Journal on Optimization, vol. 19, no. 3, pp. 1107–1130, 2008.
• [18] Tao Sun and Lizhi Cheng, “Convergence of iterative hard-thresholding algorithm with continuation,” Optimization Letters, vol. 11, no. 4, pp. 801–815, 2016.
• [19] Tao Sun, Hao Jiang, and Lizhi Cheng, “Hard thresholding pursuit with continuation for ℓ0-regularized minimizations,” to appear in Mathematical Methods in the Applied Sciences.
• [20] Yilun Wang, Junfeng Yang, Wotao Yin, and Yin Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.