# Inertial nonconvex alternating minimizations for the image deblurring

In image processing, Total Variation (TV) regularization models are commonly used to recover blurred images. One of the most efficient and popular methods to solve the convex TV problem is the Alternating Direction Method of Multipliers (ADMM) algorithm, recently extended using the inertial proximal point method. Although classical studies focus only on the convex formulation, recent articles are paying increasing attention to the nonconvex methodology due to its good numerical performance and properties. In this paper, we propose to extend the classical formulation with a novel nonconvex Alternating Direction Method of Multipliers with the Inertial technique (IADMM). Under certain assumptions on the parameters, we prove the convergence of the algorithm with the help of the Kurdyka-Łojasiewicz property. We also present numerical simulations on classical TV image reconstruction problems to illustrate the efficiency of the new algorithm and its behavior compared with the well-established ADMM method.


## I Introduction

Denoising and deblurring have numerous applications in communications, control, machine learning, and many other fields of engineering and science. Restoration of distorted images is, from the theoretical as well as the practical point of view, one of the most interesting and important problems of image processing. One special case is blurring due, for instance, to incorrect focus or to movement, possibly combined with additive Gaussian noise (a Gaussian blur).

A mathematical model for the process of blurring images can be expressed as follows. Let $\Omega$ be a two-dimensional index set representing the image domain, $\tilde u$ the original image, $\tilde f$ the observed image, and $\tilde K$ a linear blurring operator. Then, the blurred image can be written [1] as

$$\tilde f = \tilde K(\tilde u) + e, \tag{1}$$

where $e$ is an unknown additive noise vector. In this paper, the blurring operator $\tilde K$ is assumed to be known; otherwise, one deals with the blind image deblurring problem [2], in which $\tilde K$ also needs to be recovered.

In image processing one typically aims at recovering an image from noisy data while still keeping edges in the image, and this goal is the main reason for the tremendous success of the Total Variation (TV) regularization [3] for solving the deblurring problem (although other methods are also used). The TV method can be presented as

$$\min_{\tilde u}\Big(\frac12\|\tilde K\tilde u-\tilde f\|^2_{L^2(\Omega)}+\sigma\|\nabla\tilde u\|_{L^1(\Omega^*)}\Big), \tag{2}$$

$\sigma>0$ being a regularization parameter, $\nabla$ the (discrete) gradient operator, and $\|\cdot\|_{L^2(\Omega)}$ and $\|\cdot\|_{L^1(\Omega^*)}$ the corresponding $L^2$ and $L^1$ norms.

In most situations, rather than directly minimizing the support of the image, one is interested in minimizing the support of the gradient of the recovered image. In most references the convex methodology is considered [4, 5, 6], but in recent years some nonconvex methods have been developed [7, 8, 9]. The use of a suitable nonconvex and nondifferentiable function allows recovery from possibly fewer measurements than the convex one in compressed sensing [7]. In [10] the authors showed that nonconvex regularization terms in total variation-based image restoration yield even better edge preservation when compared to convex-type regularization. Moreover, they showed that nonconvex regularization also seems to be more robust with respect to noise. Nonconvex regularization in image restoration poses significant challenges regarding the existence of solutions of the associated minimization problems and the development of efficient solution algorithms.

The main difference between the convex and nonconvex methods is replacing the $L^1$-norm of the variational term by a nonconvex and nondifferentiable functional built from a nonconvex regularization function $\varphi$, which we refer to as the $\varphi$ semi-norm ($\|\cdot\|_{\varphi(\Omega^*)}$). Therefore, the general nonconvex deblurring model is presented as

$$\min_{\tilde u}\Big(\frac12\|\tilde K\tilde u-\tilde f\|^2_{L^2(\Omega)}+\sigma\|\nabla\tilde u\|_{\varphi(\Omega^*)}\Big). \tag{3}$$

Many efficient numerical algorithms have been developed for solving the TV regularization problem. One of the most efficient methods for the convex problem (2) is the Alternating Direction Method of Multipliers (ADMM) algorithm [11, 12, 13]. In the general case (the convex and nonconvex cases depend on the function $\varphi$), the method is constructed by introducing an auxiliary variable $\tilde v$, which actually represents $\nabla\tilde u$, to reformulate (3) into a composite optimization problem with linear constraints. The augmented Lagrange dual function is then

$$\tilde{\mathcal L}^{\varphi}_{\delta}(\tilde u,\tilde v,\tilde p):=\frac12\|\tilde K\tilde u-\tilde f\|^2_{L^2(\Omega)}+\sigma\|\tilde v\|_{\varphi(\Omega^*)}-\langle \tilde p,\nabla\tilde u-\tilde v\rangle+\frac{\delta}{2}\|\nabla\tilde u-\tilde v\|^2_{L^2(\Omega^*)}, \tag{4}$$

where $\delta>0$ is a penalty parameter and $\|\cdot\|_{L^2(\Omega^*)}$ is the corresponding norm. If $\varphi(x)=|x|$, we use the notation $\tilde{\mathcal L}^{1}_{\delta}$ for representing $\tilde{\mathcal L}^{\varphi}_{\delta}$. Now, the standard convex ADMM method ($\varphi(x)=|x|$) for the deblurring problem can be presented as

$$\begin{cases}
\tilde v^{k+1}=\operatorname*{arg\,min}_{\tilde v}\tilde{\mathcal L}^{1}_{\delta}(\tilde u^{k},\tilde v,\tilde p^{k}),\\
\tilde u^{k+1}=\operatorname*{arg\,min}_{\tilde u}\tilde{\mathcal L}^{1}_{\delta}(\tilde u,\tilde v^{k+1},\tilde p^{k}),\\
\tilde p^{k+1}=\tilde p^{k}-\delta(\nabla\tilde u^{k+1}-\tilde v^{k+1}).
\end{cases} \tag{5}$$

The earlier analyses of convergence and performance of ADMM algorithms directly depended on existing results for the ADMM framework [14, 15, 16, 17, 18]. More recently, motivated by acceleration techniques proposed in [19], inertial algorithms have been proposed for many areas such as (distributed) optimization and imaging sciences [20, 21, 22, 23, 24]. The ideas of the inertial strategy have also been applied to ADMM in [4, 25], and under several assumptions in the convex case, some convergence results are proved in those articles. As nonconvex penalty functions perform more efficiently in some applications, as commented above, nonconvex ADMM has also been developed and studied [26, 27, 28, 29, 30, 31, 32, 33, 34]. The main goal of this paper is to propose a new algorithm that organically combines nonconvex methods with the inertial strategy.

In this paper, when $\varphi$ is nonconvex, we consider a new inertial scheme for the image deblurring model (3). One of the main differences (and new difficulties) with respect to the convex ADMM is that, in order to properly define the nonconvex ADMM, some extra assumptions are needed to prove the convergence. First, at least one of the objective functions has to be smooth. Moreover, the adjoint of the matrix corresponding to the smooth function is required to be injective. Thus, a direct application of the ADMM scheme to the image deblurring model cannot guarantee convergence, because the operator $\nabla^{*}$ fails to be injective (although the numerical performance may be good in some cases). Considering this, we first modify the model (3), and then we develop the new nonconvex inertial ADMM. By using the Kurdyka-Łojasiewicz property, we prove the convergence of the new algorithm under several requirements on the parameters. In contrast to the convex case, selecting a suitable parameter $\delta$ is crucial to obtain the convergence of the new algorithm. In order to make the method more useful, we provide a probabilistic strategy for selecting a suitable $\delta$.

The rest of the paper is organized as follows. In Section II we collect some mathematical preliminaries needed for the convergence analysis. Section III presents the details of the new algorithm (the inertial alternating direction method of multipliers, IADMM), including the schemes and parameters. In Section IV, we prove the convergence of the new algorithm. Section V reports the numerical results and compares the algorithm with convex and nonconvex ADMM. Section VI gives some conclusions. Finally, we provide in the Appendices all the detailed proofs of the proposed results.

## II Mathematical tools

In this section we present the definitions and basic properties of the subdifferentials and the Kurdyka-Łojasiewicz functions used later in the convergence analysis. The basic notations used in this paper are detailed in Table I.

### II-A Subdifferentials

We collect several definitions as well as some useful properties in variational and convex analysis (see the monographs [35, 36, 37, 38]). For any matrix $M$, we denote by $M^{*}$ its adjoint.

###### Definition 1

Let $J:\mathbb R^{N}\to(-\infty,+\infty]$ be a proper and lower semicontinuous function. The (limiting) subdifferential, or simply the subdifferential, of $J$ at $x$, written as $\partial J(x)$, is defined as

$$\partial J(x):=\Big\{u\in\mathbb R^{N}:\ \exists\, x^{k}\to x,\ u^{k}\to u,\ \text{such that}\ \liminf_{\substack{y\to x^{k}\\ y\neq x^{k}}}\frac{J(y)-J(x^{k})-\langle u^{k},y-x^{k}\rangle}{\|y-x^{k}\|_2}\ge 0\Big\}.$$

It is easy to verify that the Fréchet subdifferential is convex and closed, while the limiting subdifferential is closed. When $J$ is convex, the definition agrees with the subgradient of convex analysis [38]. We call $J$ strongly convex with constant $\nu$ if, for any $x,y$ and any $u\in\partial J(x)$, it holds that $J(y)\ge J(x)+\langle u,y-x\rangle+\frac{\nu}{2}\|y-x\|^2$. And $J$ is called gradient Lipschitz continuous with constant $L$ if $\|\nabla J(x)-\nabla J(y)\|\le L\|x-y\|$ for all $x,y$. Noting the closedness of the subdifferential, we have the following simple proposition.

###### Proposition 1

If $x^{k}\to x$, $u^{k}\in\partial J(x^{k})$, $u^{k}\to u$, and $J(x^{k})\to J(x)$, then we have $u\in\partial J(x)$.

###### Definition 2

A necessary condition for $x$ to be a minimizer of $J$ is

$$0\in\partial J(x), \tag{6}$$

which is also sufficient when $J$ is convex. A point that satisfies (6) is called a (limiting) critical point. The set of critical points of $J$ is denoted by $\operatorname{crit}(J)$.

With these basics, we can easily obtain the following proposition.

###### Proposition 2

If $(\bar u,\bar v,\bar p)$ is a critical point of $\mathcal L^{\varphi}_{\delta}$, whose definition is given in Section III (Eq. (16)), it must hold that

$$\begin{cases}
-\bar p\in\sigma\,\partial\|\bar v\|_{\varphi},\\
T^{*}\bar p=K^{*}(K\bar u-f),\\
T\bar u=\bar v.
\end{cases} \tag{7}$$

Finally, the proximal map of $J$ is defined as

$$S_{J}(x)\in\operatorname*{arg\,min}_{y}\Big\{J(y)+\frac12\|x-y\|_2^2\Big\}. \tag{8}$$

Note that $J$ can be nonconvex. If $J$ is convex, $S_{J}$ is a point-to-point operator; otherwise, it may be point-to-set.
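For the convex choice $J=\lambda|\cdot|$, for instance, the proximal map has the familiar closed form of soft thresholding. A minimal sketch (the weight `lam` is an illustrative parameter, not a value from the paper):

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal map of J = lam*|.|: argmin_y { lam*|y| + 0.5*(x - y)^2 },
    # applied componentwise; single-valued because J is convex.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

For a nonconvex $\varphi$ such as $|\cdot|^{1/2}$, in line with the remark above, the corresponding map can be point-to-set.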

### II-B Kurdyka-Łojasiewicz property

In this paper the convergence analysis is based on the Kurdyka-Łojasiewicz functions, originated in the seminal works of Łojasiewicz [39] and Kurdyka [40]. This class of functions has played a key role in several recent convergence results on nonconvex minimization problems, and such functions are ubiquitous in applications.

###### Definition 3 ([41, 42])

(a) The function $J:\mathbb R^{N}\to(-\infty,+\infty]$ is said to have the Kurdyka-Łojasiewicz property at $\hat x\in\operatorname{dom}\partial J$ if there exist $\eta\in(0,+\infty]$, a neighborhood $U$ of $\hat x$, and a continuous concave function $\rho:[0,\eta)\to\mathbb R_{+}$ such that

1. $\rho(0)=0$.

2. $\rho$ is $C^{1}$ on $(0,\eta)$.

3. For all $s\in(0,\eta)$, $\rho'(s)>0$.

4. For all $x$ in $U\cap\{x:\,J(\hat x)<J(x)<J(\hat x)+\eta\}$, the Kurdyka-Łojasiewicz inequality holds

$$\rho'\big(J(x)-J(\hat x)\big)\cdot\operatorname{dist}\big(0,\partial J(x)\big)\ge 1. \tag{9}$$

(b) Proper lower semicontinuous functions which satisfy the Kurdyka-Łojasiewicz inequality at each point of $\operatorname{dom}\partial J$ are called KŁ functions.

###### Remark 1

Large classes of functions, for instance semi-algebraic functions, are KŁ functions [41].

###### Lemma 1 ([42])

Let $J$ be a proper lower semicontinuous function and $\Gamma$ a compact set. If $J$ is constant on $\Gamma$ and satisfies the KŁ property at each point of $\Gamma$, then there exist a concave function $\rho$ satisfying the four properties given in Definition 3 and constants $\epsilon,\eta>0$ such that, for any $\hat x\in\Gamma$ and any $x$ satisfying $\operatorname{dist}(x,\Gamma)<\epsilon$ and $J(\hat x)<J(x)<J(\hat x)+\eta$, it holds that

$$\rho'\big(J(x)-J(\hat x)\big)\cdot\operatorname{dist}\big(0,\partial J(x)\big)\ge 1. \tag{10}$$

## III Nonconvex IADMM algorithm

In this section we introduce the new extended Inertial Alternating Direction Method of Multipliers (IADMM) algorithm for nonconvex functions.

In this paper, we consider $\Omega=\{1,\dots,N\}\times\{1,\dots,N\}$ (equivalent to the space $\mathbb R^{N\times N}$) as the two-dimensional index set representing the image domain. In this case, the image variable constrained on $\Omega$ is actually an $N\times N$ matrix. We use the symbol $\tilde u\in\mathbb R^{N^2}$ to denote its vectorization (the stacking of all the columns of the image variable). The discrete total variation operator then takes the form

$$\nabla(\tilde u)=\begin{pmatrix}I_N\otimes D\\ D\otimes I_N\end{pmatrix}\tilde u, \tag{11}$$

where $I_N$ is the $N\times N$ identity matrix and $D$ is the banded first-difference matrix

$$D=\begin{pmatrix}1&-1&&&\\ &1&-1&&\\ &&\ddots&\ddots&\\ &&&1&-1\end{pmatrix}\in\mathbb R^{(N-1)\times N}.$$

If we directly apply the inertial ADMM, the convergence is hard to prove, as $\nabla^{*}$ fails to be injective. Therefore, we need to modify the image deblurring model (3). To that goal, we define

$$T:=\begin{pmatrix}I_N\otimes D&0\\ 0&D\otimes I_N\end{pmatrix},\qquad u:=\begin{pmatrix}u_1\\ u_2\end{pmatrix},\quad u_1,u_2\in\mathbb R^{N^2}.$$

Obviously, we have $Tu=\nabla(\tilde u)$ whenever $u_1=u_2=\tilde u$. Noting that

$$\operatorname{rank}(T)=\operatorname{rank}(I_N\otimes D)+\operatorname{rank}(D\otimes I_N)=\operatorname{rank}(I_N)\cdot\operatorname{rank}(D)+\operatorname{rank}(D)\cdot\operatorname{rank}(I_N)=2N(N-1),$$

and thus $T^{*}$ is injective. The following technical lemma gives a lower bound for the operator $T^{*}$.

###### Lemma 2

For any $x\in\mathbb R^{2N(N-1)}$, it holds that

$$\|T^{*}x\|\ge\frac{\|x\|}{\theta}, \tag{12}$$

where $\theta>0$ is a constant depending only on $T$.
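The construction of $T$ and the injectivity of $T^{*}$ can be checked numerically for small $N$. The sketch below follows the block-diagonal reading of $T$ forced by the rank computation above ($N=4$ is an arbitrary illustrative size):

```python
import numpy as np

def diff_matrix(N):
    # First-difference banded matrix D of size (N-1) x N
    D = np.zeros((N - 1, N))
    for i in range(N - 1):
        D[i, i], D[i, i + 1] = 1.0, -1.0
    return D

def build_T(N):
    # Block-diagonal operator acting on u = (u1; u2) in R^{2N^2}
    D, I = diff_matrix(N), np.eye(N)
    A, B = np.kron(I, D), np.kron(D, I)  # I_N (x) D and D (x) I_N
    T = np.zeros((2 * N * (N - 1), 2 * N * N))
    T[:A.shape[0], :A.shape[1]] = A
    T[A.shape[0]:, A.shape[1]:] = B
    return T
```

Full row rank $2N(N-1)$ means the smallest singular value of $T$ is positive, which is what makes a bound of the form (12) possible.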

Then, the image deblurring model (3) is equivalent to

$$\min_{u_1=u_2}\Big(\frac12\|\tilde Ku_1-\tilde f\|^2+\sigma\|Tu\|_{\varphi}\Big). \tag{13}$$

Instead, we consider its extended penalty form

$$\min_{u}\Big(\frac12\|\tilde Ku_1-\tilde f\|^2+\sigma\|Tu\|_{\varphi}+\frac{\beta^2}{2}\|u_1-u_2\|^2\Big), \tag{14}$$

where $\beta>0$ is a large weight parameter. Therefore, we apply the nonconvex inertial ADMM to

$$\min_{u\in\mathbb R^{2N^2}}\Big(\frac12\|Ku-f\|^2+\sigma\|Tu\|_{\varphi}\Big), \tag{15}$$

where $K:=\begin{pmatrix}\tilde K&0\\ \beta I_{N^2}&-\beta I_{N^2}\end{pmatrix}$ and $f:=\begin{pmatrix}\tilde f\\ 0\end{pmatrix}$, so that $\frac12\|Ku-f\|^2=\frac12\|\tilde Ku_1-\tilde f\|^2+\frac{\beta^2}{2}\|u_1-u_2\|^2$. This leads us to define the function

$$\mathcal L^{\varphi}_{\delta}(u,v,p):=\frac12\|Ku-f\|^2+\sigma\|v\|_{\varphi}-\langle p,Tu-v\rangle+\frac{\delta}{2}\|Tu-v\|^2. \tag{16}$$

Inertial methods have witnessed great success in convex ADMM and nonconvex first-order algorithms. In the nonconvex optimization community, however, an inertial-style ADMM has not previously been proposed and analyzed. The convex inertial ADMM has been proposed in [4], in which one first uses the "inertial method" to refresh the current sequence with the last iteration, and then performs the ADMM scheme with the updated variables. However, a direct extension of the convex inertial ADMM is not possible in the nonconvex setting. This is because, without convexity, several descent properties depend heavily on continuity properties that the functions may fail to satisfy, and the difference of function values at two different points is hard to estimate, which leads to trouble in the convergence proof. Thus, in the update of $v^{k+1}$ we use $u^{k}$ rather than the extrapolated point $\hat u^{k}$. The nonconvex IADMM scheme proposed in this paper is defined as follows:

$$\begin{cases}
(\hat u^{k},\hat v^{k},\hat p^{k})=(u^{k},v^{k},p^{k})+\alpha\,(u^{k}-u^{k-1},\,v^{k}-v^{k-1},\,p^{k}-p^{k-1}),\\
v^{k+1}=\operatorname*{arg\,min}_{v}\mathcal L^{\varphi}_{\delta}(u^{k},v,\hat p^{k}),\\
u^{k+1}=\operatorname*{arg\,min}_{u}\mathcal L^{\varphi}_{\delta}(u,v^{k+1},\hat p^{k}),\\
p^{k+1}=\hat p^{k}-\delta(Tu^{k+1}-v^{k+1}),
\end{cases} \tag{17}$$

where $\alpha\ge 0$ is a free parameter chosen by the user. If $\alpha=0$, the algorithm reduces to the basic ADMM.
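For the convex instance $\varphi=|\cdot|$ (so that the $v$-update is a componentwise soft threshold), the scheme (17) can be sketched compactly as follows. All parameter values and problem sizes below are illustrative, not the paper's settings; note that, as discussed above, the $v$-update uses $u^{k}$ and only the extrapolated multiplier $\hat p^{k}$ enters the subproblems:

```python
import numpy as np

def soft(x, lam):
    # proximal map of lam*|.|, componentwise
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def iadmm(K, f, T, sigma, delta, alpha, iters=2000):
    # Inertial ADMM (17) for min_u 0.5*||K u - f||^2 + sigma*||T u||_1
    n, m = K.shape[1], T.shape[0]
    u, v = np.zeros(n), np.zeros(m)
    p = p_prev = np.zeros(m)
    A = K.T @ K + delta * T.T @ T            # system matrix of the u-update (21)
    for _ in range(iters):
        p_hat = p + alpha * (p - p_prev)     # inertial extrapolation of the multiplier
        v = soft(T @ u - p_hat / delta, sigma / delta)                    # v-update (18)
        u = np.linalg.solve(A, K.T @ f + delta * T.T @ v + T.T @ p_hat)  # u-update (21)
        p_prev, p = p, p_hat - delta * (T @ u - v)                       # multiplier update
    return u, v, p
```

With `alpha=0` this reduces to the basic ADMM (5). The dense `np.linalg.solve` is a stand-in for the FFT-based solves used in the paper's implementation.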

Now we rewrite the inertial scheme (17) explicitly for the image deblurring model. First, we rearrange the minimization for $v$,

$$v^{k+1}=S_{\frac{\sigma}{\delta}\|\cdot\|_{\varphi(\Omega^*)}}\Big(Tu^{k}-\frac{\hat p^{k}}{\delta}\Big), \tag{18}$$

$S$ being the proximal map defined in (8). For a matrix $v$ and indices $i,j$,

$$\big[S_{\frac{\sigma}{\delta}\|\cdot\|_{\varphi}}(v)\big]_{i,j}=S_{\frac{\sigma}{\delta}\varphi}(v_{i,j}). \tag{19}$$

The scheme for updating $u$ can be rewritten as

$$K^{*}(Ku^{k+1}-f)-T^{*}\hat p^{k}+\delta T^{*}(Tu^{k+1}-v^{k+1})=0, \tag{20}$$

that is,

$$u^{k+1}=(K^{*}K+\delta T^{*}T)^{-1}\big(K^{*}f+\delta T^{*}v^{k+1}+T^{*}\hat p^{k}\big). \tag{21}$$

Taking into account (17), (18) and (21), we obtain the nonconvex inertial version of ADMM proposed in this paper (Algorithm 1).

Assumption 1: We assume that $\lambda_{\min}(K^{*}K+\delta T^{*}T)\ge\nu>0$, where $\lambda_{\min}(\cdot)$ denotes the minimum eigenvalue (equivalently, the minimum singular value of this symmetric positive semidefinite matrix).

This hypothesis also implies that the matrix $K^{*}K+\delta T^{*}T$ is invertible. Note that the rank of $T$ is $2N(N-1)$, so the hypothesis is easy to satisfy.

We remark that $u^{k+1}$ is the minimizer of $\mathcal L^{\varphi}_{\delta}(\cdot,v^{k+1},\hat p^{k})$, which is strongly convex with constant $\lambda_{\min}(K^{*}K+\delta T^{*}T)$. If we set $\nu\le\lambda_{\min}(K^{*}K+\delta T^{*}T)$, then we have

$$\mathcal L^{\varphi}_{\delta}(u^{k},v^{k+1},\hat p^{k})-\mathcal L^{\varphi}_{\delta}(u^{k+1},v^{k+1},\hat p^{k})\ \ge\ \frac{\lambda_{\min}(K^{*}K+\delta T^{*}T)}{2}\|u^{k+1}-u^{k}\|^2\ \ge\ \frac{\nu}{2}\|u^{k+1}-u^{k}\|^2, \tag{22}$$

where we used the fact that $J(y)-J(\bar y)\ge\frac{\nu}{2}\|y-\bar y\|^2$ when $\bar y$ minimizes a $\nu$-strongly convex function $J$.

The remaining question is how to choose $\nu$. In a real situation the dimensions of $K^{*}K+T^{*}T$ are large, so a direct computation of the minimum eigenvalue leads to a large computational cost. Therefore, we provide a probabilistic method to estimate a suitable value of $\nu$. If $K^{*}K+T^{*}T$ is invertible, it is easy to see that $\lambda_{\min}(K^{*}K+T^{*}T)=1/\|(K^{*}K+T^{*}T)^{-1}\|_2$, and since $\delta>1$ we have $\lambda_{\min}(K^{*}K+\delta T^{*}T)\ge\lambda_{\min}(K^{*}K+T^{*}T)$. Then, if we obtain a bound $\|(K^{*}K+T^{*}T)^{-1}\|_2\le c$, we have $\nu:=1/c\le\lambda_{\min}(K^{*}K+\delta T^{*}T)$. To that goal we employ a lemma proposed in [43]:

###### Lemma 3 (Lemma 4.1, [43])

For a fixed positive integer $M$ and a real number $b>1$, and given an independent family $\{w_i\}_{i=1}^{M}$ of standard Gaussian vectors, we have that

$$\|(K^{*}K+T^{*}T)^{-1}\|_2\ \le\ b\sqrt{\tfrac{2}{\pi}}\,\max_{i=1,2,\dots,M}\|(K^{*}K+T^{*}T)^{-1}w_i\|_2$$

with probability at least $1-b^{-M}$.

Note that for computing $(K^{*}K+T^{*}T)^{-1}w_i$ we just need several FFTs and inverse FFTs. Therefore, its computational cost is low, and the estimation of $\nu$ is very fast.
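A sketch of this estimation strategy follows; the `apply_inv` callback is a generic stand-in for the FFT-based application of $(K^{*}K+T^{*}T)^{-1}$, and the constants follow our reading of Lemma 3:

```python
import numpy as np

def inv_norm_bound(apply_inv, n, M=6, b=2.0, seed=None):
    # Probabilistic upper bound on ||A^{-1}||_2: draw M standard Gaussian
    # probes w_i and return b*sqrt(2/pi)*max_i ||A^{-1} w_i||_2, which
    # bounds the norm with probability at least 1 - b**(-M) (Lemma 3).
    rng = np.random.default_rng(seed)
    worst = max(np.linalg.norm(apply_inv(rng.standard_normal(n))) for _ in range(M))
    return b * np.sqrt(2.0 / np.pi) * worst
```

From a bound $c$ obtained this way, one takes $\nu=1/c$ as a high-probability lower bound on the minimum eigenvalue.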

## IV Convergence analysis

This section consists of two parts and provides a complete analysis of the convergence of the nonconvex IADMM algorithm. The first subsection contains the main convergence results, a sketch of the proof, the difficulties involved, and the theoretical contributions, while the second subsection introduces the necessary technical lemmas. Assumption 1 is supposed to hold throughout this section.

### IV-A Main results

###### Theorem 1 (Stationary point convergence)

Assume that the free parameter $\delta$ satisfies the condition

$$\delta>\max\Big\{1,\ \frac{6\theta^2\|K\|_2^4}{\nu}+\frac{7\alpha^2\theta^2\|K\|_2^4}{\nu}\Big\}, \tag{23}$$

where $\nu$ is given in Assumption 1 and $\theta$ in Lemma 2. Then any cluster point of the sequence generated by Algorithm 1 is also a critical point of $\mathcal L^{\varphi}_{\delta}$.

Theorem 1 describes the stationary point convergence result for the IADMM method, which does not require the KŁ property of the functions. If the KŁ property is further assumed, convergence of the whole sequence can be proved, yielding Theorem 2.

###### Theorem 2 (Sequence convergence)

Let condition (33) hold, and let the auxiliary function $F$ defined in (27) be a KŁ function. Then the sequence $(u^{k},v^{k},p^{k})$ generated by Algorithm 1 converges to a critical point of $\mathcal L^{\varphi}_{\delta}$.

The proof can be divided into two parts; in order to help the reader, we first give a brief sketch:

I. In the first part we introduce an auxiliary sequence $\{w^{k}\}$, where the $w^{k}$ are composite points built from $(u^{k},v^{k},p^{k})$. An auxiliary function $F$ is also proposed. In Lemma 5, we prove a "sufficient descent condition" for the values of $F$ at $w^{k}$, i.e.,

$$F(w^{k})-F(w^{k+1})\ge\hat h\,\|u^{k+1}-u^{k}\|^2, \tag{24}$$

where $\hat h$ is a positive constant.

II. We prove a "relative error condition" for $F$, i.e., there exists $s^{k+1}\in\partial F(w^{k+1})$ such that

$$\|s^{k+1}\|\le\gamma\big(\|u^{k+1}-u^{k}\|+\|u^{k}-u^{k-1}\|\big), \tag{25}$$

where $\gamma$ is a positive constant. Note that this condition is different from the "real" relative error condition proposed in [41].

The major difficulty in deriving these two conditions is the presence of the inertial terms, with which the descent of $F$ is bounded below by terms involving both $\|u^{k+1}-u^{k}\|^2$ and $\|u^{k}-u^{k-1}\|^2$ rather than $\|u^{k+1}-u^{k}\|^2$ alone. Similarly, the relative error is bounded by both $\|u^{k+1}-u^{k}\|$ and $\|u^{k}-u^{k-1}\|$. The relative error can be expanded by triangle inequalities, which makes it relatively easy to prove. However, for the sufficient descent, obtaining the lower bound via triangle inequalities is much more difficult and technical.

The theoretical contributions of this paper are two-fold. The first is, of course, dealing with the differences caused by the inertial terms; this part also includes the design of the scheme of the algorithm, whose details have been presented in the previous section. The second is the determination of the parameters for IADMM applied to image deblurring.

The main results can be proved with the following lemmas.

### IV-B Technical lemmas

###### Lemma 4

Let the sequence $(u^{k},v^{k},p^{k})$ be generated by Algorithm 1 to solve problem (3); then

$$\|p^{k}-p^{k+1}\|\le\theta\|K\|_2^2\,\|u^{k+1}-u^{k}\|, \tag{26}$$

where $\theta$ is given in Lemma 2 and $\|K\|_2$ denotes the spectral norm (largest singular value) of $K$.

Now we provide the main technical lemma, which states the descent condition for a suitable function of the sequences generated by Algorithm 1.

###### Lemma 5

Let the sequence $(u^{k},v^{k},p^{k})$ be generated by Algorithm 1 and let the conditions of Theorem 1 hold. By defining the auxiliary function

$$F(u,v,p,x):=\mathcal L^{\varphi}_{\delta}(u,v,p)+\frac{7\alpha^2\theta^2\|K\|_2^4}{2\delta}\|u-x\|^2, \tag{27}$$

and

$$w^{k}:=(u^{k},v^{k},p^{k},u^{k-1}), \tag{28}$$

we have that

$$F(w^{k})-F(w^{k+1})\ge\hat h\,\|u^{k+1}-u^{k}\|^2, \tag{29}$$

where

$$\hat h:=\frac{\nu}{2}-\Big(\frac{3\theta^2\|K\|_2^4}{\delta}+\frac{7\alpha^2\theta^2\|K\|_2^4}{2\delta}\Big)>0. \tag{30}$$

If the sequence $\{w^{k}\}$ is bounded, then it holds that

$$\lim_{k}\|u^{k+1}-u^{k}\|=\lim_{k}\|v^{k+1}-v^{k}\|=\lim_{k}\|p^{k+1}-p^{k}\|=0. \tag{31}$$
###### Remark 2

Based on Lemma 5, it is important to guarantee that condition (23) can be satisfied. This can be achieved if $\delta$ is large enough. Fortunately, in Algorithm 1 the parameter $\delta$ can be fixed by the user. Thus, $\delta$ shall be chosen large enough to guarantee convergence, taking condition (23) into account.

###### Lemma 6

If the nonconvex regularization function $\varphi$ is coercive and

$$\delta>\theta^2\|K\|_2^4, \tag{32}$$

then the sequence $\{w^{k}\}$ is bounded.

###### Remark 3

By combining the conditions (23) and (32), we just need that

$$\delta>\max\Big\{1,\ \frac{6\theta^2\|K\|_2^4}{\nu}+\frac{7\alpha^2\theta^2\|K\|_2^4}{\nu},\ \theta^2\|K\|_2^4\Big\}. \tag{33}$$
###### Lemma 7

Let the sequence $(u^{k},v^{k},p^{k})$ be generated by Algorithm 1. Then, for any $k$, there exist $s^{k+1}\in\partial F(w^{k+1})$ and $\gamma>0$ such that

$$\|s^{k+1}\|\le\gamma\big(\|u^{k+1}-u^{k}\|+\|u^{k}-u^{k-1}\|\big). \tag{34}$$

Now, we recall a definition of the limit point set $\mathcal M$ introduced in [42], which denotes the set of all the limit points of the sequence generated by the nonconvex IADMM. The specific mathematical definition of $\mathcal M$ is given as follows.

###### Definition 4

Let $\{x^{k}\}$ be generated by the nonconvex IADMM. We define the set $\mathcal M$ by

$$\mathcal M:=\big\{x\in\mathbb R^{N}:\ \exists\ \text{an increasing sequence of integers}\ \{k_j\}_{j\in\mathbb N}\ \text{such that}\ x^{k_j}\to x\ \text{as}\ j\to\infty\big\}. \tag{35}$$
###### Lemma 8

Let the sequence $\{w^{k}\}$ be generated by Algorithm 1, let the auxiliary function $F$ be defined in (27), and suppose that condition (33) holds. Then, we have the following results.

(1) $\mathcal M$ is nonempty and $\mathcal M\subseteq\operatorname{crit}(F)$.

(2) $\lim_{k\to\infty}\operatorname{dist}(w^{k},\mathcal M)=0$.

(3) The objective function $F$ is finite and constant on $\mathcal M$.

## V Numerics

In this section, we illustrate the effectiveness of the proposed algorithm on several numerical test images degraded with Gaussian blur.

All the programs have been written entirely in C++, and all the experiments are run under Linux on a desktop computer with an Intel Core i5-2400S CPU (2.5 GHz) and 4 GB of memory. The FFT subroutines used in the algorithms are taken from the fftw-3 library. As test problems we have selected twelve images (see Figure 1), which include seven images from the USC-SIPI image database, two classical test images (Lena and cameraman), one text image from the book "El Quijote", and two medical images. In order to obtain the blurred images, we use, as is common in the literature, a blurring operator generated by convolution with a Gaussian kernel (KernelSize, KernelMu, KernelSigma) and circular mapping on the edges of the image.
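The generation of the blurred test data can be sketched as a circular (periodic) convolution with a normalized Gaussian kernel; the kernel size and width below are illustrative placeholders, not the paper's exact KernelSize/KernelSigma settings:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Normalized 2-D Gaussian kernel (illustrative parameters)
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def circular_blur(img, kernel):
    # Convolution with circular boundary conditions via the FFT, i.e. a
    # blurring operator with circular mapping on the edges of the image
    pad = np.zeros(img.shape)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    # center the kernel so that the blur does not shift the image
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))
```

Because the kernel sums to one, the blur preserves the mean intensity of the image, a quick sanity check for the operator.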

The proposed algorithm IADMM (Algorithm 1) is compared with the widely used augmented Lagrangian method (ADMM [22]) for image deblurring. We mainly consider two models, i.e., $\varphi(x)=|x|$ and $\varphi(x)=|x|^{1/2}$, which we call the TV1 and TV(1/2) methods, respectively. Note that TV1 is a convex method, while TV(1/2) is nonconvex. In the tests we have considered (unless otherwise indicated), for all the methods, two values of the penalty parameter $\delta$ (a small one and a large one), just to observe the behaviour of the IADMM algorithm.

The performance of the deblurring algorithms is quantitatively measured by means of the objective function (Equation 2 or 3), the Real Error (the norm of the difference between the original and deblurred images), the signal-to-noise ratio (SNR) [22]

$$\mathrm{SNR}(u,u^{*})=10\times\log_{10}\Big(\frac{\|u-\bar u\|^2}{\|u-u^{*}\|^2}\Big), \tag{36}$$

where $u$ and $u^{*}$ denote the original image and the restored image, respectively, and $\bar u$ represents the mean of the original image $u$, and the residual ($\mathrm{res}$ and $\mathrm{resi}$ in the standard and inertial versions, respectively) as described in [22]:

$$\mathrm{res}(k,k+1):=\frac{\|(u^{k+1},p^{k+1})-(u^{k},p^{k})\|}{1+\|(u^{k},p^{k})\|},\qquad
\mathrm{resi}(k,k+1):=\frac{\|(u^{k+1},p^{k+1})-(\hat u^{k},\hat p^{k})\|}{1+\|(\hat u^{k},\hat p^{k})\|}. \tag{37}$$
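Both quality measures are straightforward to compute; a minimal sketch following (36) and (37):

```python
import numpy as np

def snr(u, u_star):
    # SNR of a restoration u_star against the original image u, Eq. (36)
    return 10.0 * np.log10(np.sum((u - u.mean()) ** 2) / np.sum((u - u_star) ** 2))

def residual(u_new, p_new, u_old, p_old):
    # Relative change of the (u, p) pair between iterations, Eq. (37);
    # for the inertial variant, pass the extrapolated (u_hat, p_hat) as "old"
    z_new = np.concatenate([np.ravel(u_new), np.ravel(p_new)])
    z_old = np.concatenate([np.ravel(u_old), np.ravel(p_old)])
    return np.linalg.norm(z_new - z_old) / (1.0 + np.linalg.norm(z_old))
```

Restoring an image to the constant mean gives 0 dB by construction, which is a convenient reference point for the SNR scale.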

In the tests we do not report CPU times, as all the algorithms have very similar computational cost per iteration (dominated by the FFT routines). Therefore, there is almost no difference between plots showing iterations or CPU cost, and the CPU time consumed is basically proportional to the number of iterations.

In our first test we use the brain M0 image with a low value of $\delta$, and we show in Figure 2 that the performances of ADMM TV1 and TV(1/2) are quite similar in error and SNR. Therefore, in the rest of the comparisons we will only consider the TV1 method. For the IADMM method, the inertial parameter $\alpha$ in Algorithm 1 is investigated first by using two different values. The main difference observed in these tests is that the nonconvex method is more unstable once the maximum precision is reached (at this point the convex ADMM TV1 method seems to be the most stable, with a quite smooth behaviour). The fastest convergence is observed using IADMM with the largest value of $\alpha$, but once the maximum precision is obtained, unstable behaviour appears. Therefore, using mainly the information provided by the residual (Eq. (37)), we introduce a stopping criterion (in the same spirit as [22]) that stops the iterative process at the black points of Figure 2(d). That is, we stop when the following condition holds:

$$\mathrm{res}(k,k+1)<\varepsilon\qquad\text{or}\qquad \mathrm{res}(k-1,k)<\mathrm{res}(k,k+1). \tag{38}$$

In the first case, convergence up to the desired tolerance has been obtained, while in the second case the algorithm has reached its stability limit: the residual grows and behaves in an unstable way afterwards. Note that the residual is used in the stopping criterion because it only involves known data from the iterations (and does not depend on the original image, which is unknown). We remark that the use of the stopping criterion avoids unnecessary iterations and also stops at the accuracy limit of the method. From the pictures we also observe that the IADMM algorithm provides enough precision in a lower number of iterations. A larger $\alpha$ means a faster method, but at the price of more instability, as can be seen in Fig. 2(d). In that picture we observe that when the residual begins to behave chaotically, with sudden increases, it is advisable to stop the iterative process, as done by the stopping criterion (38) (black dots in Fig. 2(d)). With that criterion, the IADMM method seems to be an interesting option for fast deblurring problems.

To observe more clearly the influence of the parameter $\alpha$ in Algorithm 1, we perform several tests on the brain M0 image in Figure 3 for several values of $\alpha$. Note that this parameter plays a role similar to a stepsize (as also occurs with the parameter $\delta$), as it controls the perturbation at each step. A large value provides, when the method works, a quite fast method, but on the other hand it makes the method more unstable. In fact, from plot 3(b) we observe that in this case it would be optimal to use a large value of $\alpha$ in combination with the stopping criterion, giving the maximum precision in just 28 iterations. Besides, it is shown that after the iterations selected by the stopping criterion, the residual begins to oscillate among values that provide a similar error but generate unstable behaviour, giving rise to an increase of the error in subsequent iterations (this instability is delayed when the parameter $\alpha$ decreases, which is expected because the increment is smaller, as the vertical lines connecting the error and residual plots show).

The influence of the penalty parameter $\delta$ is also quite relevant, but a detailed analysis is beyond the scope of this paper. In Figure 4 we show the evolution of the residual using two values of $\delta$ and several values of the parameter $\alpha$. We observe that low values of the penalty parameter give a lower residual, but the error is lower for large $\delta$, which provides faster convergence. The penalty parameter has a big effect on the empirical performance of the methods, as shown in [33]. It remains to study optimal combinations of the parameters $\delta$ and $\alpha$, and suitable criteria for their automatic selection (this will be part of the next steps in our study of these methods).

In Figure 5, we present the original medium-resolution brain M0 image, the blurred one (obtained, as indicated, by convolution with a Gaussian kernel), and the IADMM deblurred images using two error tolerances in the stopping criterion (38). We can see that in both cases the quality of the recovered image is visually good.