Inertial nonconvex alternating minimizations for the image deblurring

07/27/2019 ∙ by Tao Sun, et al. ∙ University of Zaragoza ∙ NetEase, Inc

In image processing, Total Variation (TV) regularization models are commonly used to recover blurred images. One of the most efficient and popular methods to solve the convex TV problem is the Alternating Direction Method of Multipliers (ADMM) algorithm, recently extended by means of the inertial proximal point method. Although the classical studies focus only on the convex formulation, recent articles are paying increasing attention to the nonconvex methodology due to its good numerical performance and properties. In this paper, we propose to extend the classical formulation with a novel nonconvex Alternating Direction Method of Multipliers with the Inertial technique (IADMM). Under certain assumptions on the parameters, we prove the convergence of the algorithm with the help of the Kurdyka-Łojasiewicz property. We also present numerical simulations on classical TV image reconstruction problems to illustrate the efficiency of the new algorithm and its behavior compared with the well-established ADMM method.


I. Introduction

Denoising and deblurring have numerous applications in communications, control, machine learning, and many other fields of engineering and science. The restoration of distorted images is, from the theoretical as well as from the practical point of view, one of the most interesting and important problems of image processing. One special case is blurring due, for instance, to incorrect focus, to movement, or to added Gaussian noise (a Gaussian blur).

A mathematical model for the process of blurring images can be expressed as follows. Let Ω be a two-dimensional index set representing the image domain, x̃ the original image, b the observed image, and K a linear blurring operator. Then, the blurred image can be written [1] as

b = K x̃ + ε,   (1)

where ε is an unknown additive noise vector. In this paper, the blurring operator K is assumed to be known; otherwise, one deals with the blind image deblurring problem [2], in which K also needs to be recovered.
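As a quick illustration (not from the paper; the kernel width and noise level below are hypothetical choices), the degradation model (1) with a circular Gaussian blur can be simulated in a few lines:

```python
import numpy as np

def gaussian_psf(shape, sigma=2.0):
    """Gaussian point-spread function centered at (0,0) for circular convolution."""
    n, m = shape
    yy = np.minimum(np.arange(n), n - np.arange(n))[:, None]  # wrap-around distances
    xx = np.minimum(np.arange(m), m - np.arange(m))[None, :]
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(x, sigma=2.0, noise_level=1e-3, rng=np.random.default_rng(0)):
    """b = K x + eps: circular Gaussian blur plus additive Gaussian noise."""
    k = gaussian_psf(x.shape, sigma)
    b = np.real(np.fft.ifft2(np.fft.fft2(k) * np.fft.fft2(x)))  # K x via FFT
    return b + noise_level * rng.standard_normal(x.shape)       # + eps
```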

In image processing one typically aims at recovering an image from noisy data while still keeping the edges in the image, and this goal is the main reason for the tremendous success of Total Variation (TV) regularization [3] for solving the deblurring problem (although other methods are also used). The TV method can be presented as

min_x λ‖∇x‖_1 + (1/2)‖Kx − b‖_2^2,   (2)

λ > 0 being a regularization parameter and where ∇ is the gradient operator, ∇x = (∇_h x, ∇_v x), and ‖·‖_1, ‖·‖_2 denote the ℓ1 and ℓ2 norms.

In most situations, rather than directly minimizing the support of the image, one is interested in minimizing the support of the gradient of the recovered image. In most references the convex methodology is considered [4, 5, 6], but in recent years some nonconvex methods have been developed [7, 8, 9]. The use of a suitable nonconvex and nondifferentiable function possibly allows a smaller number of measurements than the convex one in compressed sensing [7]. In [10] the authors showed that nonconvex regularization terms in total variation-based image restoration yield even better edge preservation when compared to convex-type regularization. Moreover, they showed that nonconvex regularization seems to be more robust with respect to noise. On the other hand, nonconvex regularization in image restoration poses significant challenges regarding the existence of solutions of the associated minimization problems and the development of efficient solution algorithms.

The main difference between the convex and nonconvex methods is replacing the ℓ1-norm of the variational term by the nonconvex and nondifferentiable function ‖y‖_φ := Σ_i φ(|y_i|), built from a nonconvex regularization function φ (for instance φ(t) = t^{1/2}), and that we refer to as the φ semi-norm (‖·‖_φ). Therefore, the general nonconvex deblurring model is presented as

min_x λ‖∇x‖_φ + (1/2)‖Kx − b‖_2^2.   (3)
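For concreteness, here is a small sketch of the φ semi-norm (our illustration, assuming, as in the TV(1/2) method used later, φ(t) = t^{1/2} and an anisotropic sum over the gradient components):

```python
import numpy as np

def phi_seminorm(gh, gv, phi=np.sqrt):
    """||grad x||_phi = sum_i phi(|(grad x)_i|); gh, gv are the finite differences."""
    return np.sum(phi(np.abs(gh))) + np.sum(phi(np.abs(gv)))
```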

Many efficient numerical algorithms have been developed for solving the TV regularization problem. One of the most efficient methods for the convex problem (2) is the Alternating Direction Method of Multipliers (ADMM) algorithm [11, 12, 13]. In the general case (the convex and nonconvex cases depend on the function φ), the method is constructed by introducing an auxiliary variable y, which actually represents ∇x, to reformulate (3) into a composite optimization problem with linear constraints. The augmented Lagrangian function is then

L_β(x, y, p) = λ‖y‖_φ + (1/2)‖Kx − b‖_2^2 + ⟨p, ∇x − y⟩ + (β/2)‖∇x − y‖_2^2,   (4)

where β > 0 is a penalty parameter, and ‖·‖ denotes the ℓ2 norm. If φ(t) = t, we use the notation ‖·‖_1 for ‖·‖_φ. Now, the standard convex ADMM method (φ(t) = t) for the deblurring problem can be presented as

y^{k+1} = argmin_y L_β(x^k, y, p^k),
x^{k+1} = argmin_x L_β(x, y^{k+1}, p^k),
p^{k+1} = p^k + β(∇x^{k+1} − y^{k+1}).   (5)
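For reference, a minimal numpy sketch of the convex scheme (5) under periodic boundary conditions (parameter values and function names are our illustrative choices, not the paper's C++ implementation):

```python
import numpy as np

def admm_tv1_deblur(b, k, lam=1e-3, beta=1.0, iters=100):
    """min_x lam*||grad x||_1 + 0.5*||Kx - b||^2 by the ADMM scheme (5)."""
    Fk = np.fft.fft2(k)                                       # eigenvalues of the circular blur K
    dh = np.zeros_like(b); dh[0, 0] = -1.0; dh[0, -1] = 1.0   # horizontal difference filter
    dv = np.zeros_like(b); dv[0, 0] = -1.0; dv[-1, 0] = 1.0   # vertical difference filter
    Fdh, Fdv = np.fft.fft2(dh), np.fft.fft2(dv)
    denom = np.abs(Fk)**2 + beta * (np.abs(Fdh)**2 + np.abs(Fdv)**2)

    grad = lambda z: (np.real(np.fft.ifft2(Fdh * np.fft.fft2(z))),
                      np.real(np.fft.ifft2(Fdv * np.fft.fft2(z))))
    soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    x = b.copy()
    ph = np.zeros_like(b); pv = np.zeros_like(b)
    for _ in range(iters):
        gh, gv = grad(x)
        yh = soft(gh + ph / beta, lam / beta)                 # y-update: prox of the l1 norm
        yv = soft(gv + pv / beta, lam / beta)
        rhs = (np.conj(Fk) * np.fft.fft2(b)                   # x-update: diagonal system in Fourier
               + np.conj(Fdh) * np.fft.fft2(beta * yh - ph)
               + np.conj(Fdv) * np.fft.fft2(beta * yv - pv))
        x = np.real(np.fft.ifft2(rhs / denom))
        gh, gv = grad(x)
        ph += beta * (gh - yh)                                # multiplier update
        pv += beta * (gv - yv)
    return x
```

Note that with periodic boundaries both K and ∇ are diagonalized by the 2-D DFT, so each iteration costs only a few FFTs.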

The earlier analyses of convergence and performance of the ADMM algorithms directly depended on the existing results of the ADMM framework [14, 15, 16, 17, 18]. More recently, motivated by the acceleration techniques proposed in [19], inertial algorithms have been proposed in many areas such as (distributed) optimization and imaging sciences [20, 21, 22, 23, 24]. The ideas of the inertial strategy have also been applied to ADMM in [4, 25], and, under several assumptions in the convex case, some convergence results are proved in those articles. As the nonconvex penalty functions perform more efficiently in some applications, as commented above, nonconvex ADMM has also been developed and studied [26, 27, 28, 29, 30, 31, 32, 33, 34]. The main goal of this paper is to propose a new algorithm that organically combines the nonconvex methods and the inertial strategy.

In this paper, when φ is nonconvex, we consider a new inertial scheme for the image deblurring model (3). One of the main differences (and new difficulties) with respect to the convex ADMM is that, in order to properly define the nonconvex ADMM, some extra assumptions are needed to prove the convergence. First, at least one of the objective functions has to be smooth. Moreover, the matrix acting on the variable of the smooth function in the linear constraint is required to be injective. Thus, a direct application of the ADMM scheme to the image deblurring model cannot guarantee convergence, because the operator ∇ fails to be injective (although the numerical performance may be good in some cases). Considering this, we first modify the model (3), and then we develop the new nonconvex inertial ADMM. By using the Kurdyka-Łojasiewicz property, we prove the convergence of the new algorithm under several requirements on the parameters. In contrast to the convex case, selecting a suitable parameter is crucial to obtain the convergence of the new algorithm. In order to make the method more practical, we provide a probabilistic strategy for selecting a suitable value.

The rest of the paper is organized as follows. In Section II we collect some mathematical preliminaries needed for the convergence analysis. Section III presents the details for the new algorithm (inertial alternating minimization algorithm, IADMM) including the schemes and parameters. In Section IV, we prove the convergence of the new algorithm. Section V reports the numerical results and compares the algorithm with convex and nonconvex ADMM. Section VI gives some conclusions. Finally, we provide in the Appendixes all the detailed proofs of the proposed results.

II. Mathematical tools

In this section we present the definitions and basic properties of the subdifferentials and the Kurdyka-Łojasiewicz functions used later in the convergence analysis. The basic notations used in this paper are detailed in Table I.

‖·‖ stands for ‖·‖_2 (the ℓ2 norm)
dist(x, S) := min_{y ∈ S} ‖x − y‖
⊗ stands for the Kronecker product
C^1 stands for the function class whose derivatives are continuous
for a matrix A, rank(A) stands for its rank
λ_min(A) stands for the minimum eigenvalue of A

TABLE I: Basic notations, where x, y stand for points, S stands for a set, and A stands for a matrix.

II-A. Subdifferentials

We collect several definitions as well as some useful properties in variational and convex analysis (see the monographs [35, 36, 37, 38]). For any matrix A, we define A^T to be the adjoint (transpose) of A.

Definition 1

Let f: ℝ^n → (−∞, +∞] be a proper and lower semicontinuous function. The Fréchet subdifferential of f at x, written ∂̂f(x), is the set of vectors v such that

liminf_{y → x, y ≠ x} [ f(y) − f(x) − ⟨v, y − x⟩ ] / ‖y − x‖ ≥ 0.

The (limiting) subdifferential, or simply the subdifferential, of f at x, written ∂f(x), is defined as

∂f(x) := { v : there exist x^k → x with f(x^k) → f(x) and v^k ∈ ∂̂f(x^k) with v^k → v }.

It is easy to verify that the Fréchet subdifferential is convex and closed, while the limiting subdifferential is closed. When f is convex, the definition agrees with the subgradient in convex analysis [38]. We say that f is strongly convex with constant ν > 0 if for any x, y and any v ∈ ∂f(x) it holds that f(y) ≥ f(x) + ⟨v, y − x⟩ + (ν/2)‖y − x‖^2. And f is called gradient Lipschitz continuous with constant L > 0 if ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖ for all x, y. Noting the closedness of the subdifferential, we have the following simple proposition.

Proposition 1

If x^k → x, f(x^k) → f(x), v^k ∈ ∂f(x^k) and v^k → v, then we have v ∈ ∂f(x).

Definition 2

A necessary condition for x ∈ ℝ^n to be a minimizer of f is

0 ∈ ∂f(x),   (6)

which is also sufficient when f is convex. A point that satisfies (6) is called a (limiting) critical point. The set of critical points of f is denoted by crit(f).

With these basics, we can easily obtain the following proposition.

Proposition 2

If (x*, y*, p*) is a critical point of the augmented Lagrangian L_β, whose definition is given in (16) in Section III, it must hold that

0 ∈ ∂_y L_β(x*, y*, p*),   0 = ∇_x L_β(x*, y*, p*),   D x* = y*.   (7)

Finally, the proximal map of a function h is defined as

prox_h(z) := argmin_u { h(u) + (1/2)‖u − z‖^2 }.   (8)

Note that h can be nonconvex. If h is convex, prox_h is a point-to-point operator; otherwise, it may be point-to-set.
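As a small numerical illustration (ours, not the paper's), the prox of a nonconvex scalar φ can be evaluated by direct search, which makes the point-to-set behaviour visible near the jump threshold:

```python
import numpy as np

def prox_scalar(z, lam, phi, lo=-10.0, hi=10.0, num=200001):
    """Numerically evaluate prox_{lam*phi}(z) = argmin_u lam*phi(|u|) + 0.5*(u - z)^2."""
    u = np.linspace(lo, hi, num)
    return u[np.argmin(lam * phi(np.abs(u)) + 0.5 * (u - z)**2)]

phi = np.sqrt                          # the TV(1/2)-type regularization function
print(prox_scalar(2.0, 1.0, phi))      # a nonzero minimizer
print(prox_scalar(0.5, 1.0, phi))      # collapses to 0: the prox "jumps" at a threshold
```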

II-B. Kurdyka-Łojasiewicz property

In this paper the convergence analysis is based on the Kurdyka-Łojasiewicz (KŁ) functions, which originated in the seminal works of Łojasiewicz [39] and Kurdyka [40]. These functions have played a key role in several recent convergence results on nonconvex minimization problems, and they are ubiquitous in applications.

Definition 3 ([41, 42])

(a) The function f: ℝ^n → (−∞, +∞] is said to have the Kurdyka-Łojasiewicz property at x̄ ∈ dom(∂f) if there exist η ∈ (0, +∞], a neighborhood U of x̄ and a continuous concave function ϕ: [0, η) → ℝ_+ such that

  1. ϕ(0) = 0.

  2. ϕ is C^1 on (0, η).

  3. For all s ∈ (0, η), ϕ′(s) > 0.

  4. For all x in U ∩ {x : f(x̄) < f(x) < f(x̄) + η}, the Kurdyka-Łojasiewicz inequality holds:

    ϕ′( f(x) − f(x̄) ) · dist(0, ∂f(x)) ≥ 1.   (9)

(b) Proper lower semicontinuous functions which satisfy the Kurdyka-Łojasiewicz inequality at each point of dom(∂f) are called KŁ functions.

Remark 1

There are large classes of functions that are KŁ functions, e.g., semialgebraic functions [41]. For instance, f(x) = |x| satisfies the KŁ inequality at x̄ = 0 with ϕ(s) = s, since dist(0, ∂f(x)) = 1 for any x ≠ 0.

Lemma 1 ([42])

Let f be a proper lower semi-continuous function and let Ω be a compact set. If f is constant on Ω and satisfies the KŁ property at each point of Ω, then there exist a concave function ϕ satisfying the four properties given in Definition 3 and constants ε, η > 0 such that, for any x̄ ∈ Ω and any x satisfying dist(x, Ω) < ε and f(x̄) < f(x) < f(x̄) + η, it holds that

ϕ′( f(x) − f(x̄) ) · dist(0, ∂f(x)) ≥ 1.   (10)

III. Nonconvex IADMM algorithm

In this section we introduce the new extended Inertial Alternating Direction Method of Multipliers (IADMM) algorithm for nonconvex functions.

In this paper, we consider Ω = {1, …, N} × {1, …, N} (equivalent to the space ℝ^{N×N}) as the two-dimensional index set representing the image domain. In this case, the image variable constrained on Ω is actually a matrix. We use the symbol x to denote its vectorization (the vector obtained by stacking all the columns of the image variable). The original total variation operator then enjoys the following form

∇ = [ I ⊗ B ; B ⊗ I ],   (11)

where I is the N × N identity matrix and B is the N × N banded first-order difference matrix, with entries B_{i,i} = −1 and B_{i,i+1} = 1 (and the boundary row fixed by the adopted boundary convention).

If we directly apply the inertial ADMM, the convergence is hard to prove because ∇ fails to be injective (constant images belong to its kernel). Therefore, we need to modify the image deblurring model (3). To that goal, we define

D := [ ∇ ; I ].

Obviously, we have Dx = (∇x; x). Noting that

D^T D = ∇^T ∇ + I ⪰ I,

we see that D is injective. The following technical lemma gives a lower bound for the operator D.

Lemma 2

For any x, it holds that

‖Dx‖ ≥ σ ‖x‖,   (12)

where σ := √(λ_min(D^T D)) ≥ 1.
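A quick numerical sanity check of Lemma 2 (our sketch; it assumes D stacks the identity under ∇ and one common boundary convention for B):

```python
import numpy as np

N = 8
B = np.zeros((N, N))
i = np.arange(N - 1)
B[i, i] = -1.0; B[i, i + 1] = 1.0                  # banded first-order difference matrix
I = np.eye(N)
grad = np.vstack([np.kron(I, B), np.kron(B, I)])   # the TV operator (11)
D = np.vstack([grad, np.eye(N * N)])               # append the identity block
lam_min = np.linalg.eigvalsh(D.T @ D).min()        # D^T D = grad^T grad + I >= I
print(lam_min)                                     # >= 1, so sigma >= 1 in (12)
```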

Then, the image deblurring model (3) is equivalent to the linearly constrained problem

(13)

Instead, we consider its extended penalty form

(14)

where α > 0 is a large weight parameter. Therefore, we apply the nonconvex inertial ADMM to

min_{x,y} f(y) + g(x)   s.t.   Dx = y,   (15)

where f collects the nonconvex regularization λ‖·‖_φ acting on the gradient block of y (together with the α-weighted penalty term) and g(x) := (1/2)‖Kx − b‖^2. This leads us to define the augmented Lagrangian

L_β(x, y, p) := f(y) + g(x) + ⟨p, Dx − y⟩ + (β/2)‖Dx − y‖^2.   (16)

Inertial methods have witnessed great success in convex ADMM and in nonconvex first-order algorithms. In the nonconvex optimization community, however, inertial-style ADMM had not been proposed and analyzed before. The convex inertial ADMM was proposed in [4], in which one first uses the "inertial method" to refresh the current iterate with the last one, and then performs the ADMM scheme with the updated variables. However, a direct extension of the convex scheme is not possible in the nonconvex setting. This is because, without convexity, several descent estimates depend heavily on continuity properties that the functions may fail to obey, and the difference of function values at two different points is hard to estimate, which leads to trouble in the convergence proof. Thus, in the x- and p-updates we use the plain iterates rather than the inertially updated ones. The nonconvex IADMM scheme proposed in this paper is then defined as follows:

x̂^k = x^k + γ(x^k − x^{k−1}),
y^{k+1} ∈ argmin_y L_β(x̂^k, y, p^k),
x^{k+1} = argmin_x L_β(x, y^{k+1}, p^k),
p^{k+1} = p^k + β(D x^{k+1} − y^{k+1}),   (17)

where γ ≥ 0 is a free parameter chosen by the user. Actually, if γ = 0, the algorithm reduces to the basic ADMM.

Now we can focus on rewriting the inertial scheme (17) for the image deblurring model. First, we rearrange the minimization for y,

y^{k+1} ∈ prox_{f/β}( D x̂^k + p^k/β ),   (18)

prox being the proximal map of (8). The proximal map acts componentwise on the gradient blocks: for a matrix Z and indices i, j,

[ prox_{(λ/β)φ}(Z) ]_{i,j} = prox_{(λ/β)φ}( Z_{i,j} ).   (19)

The scheme for updating x can be rewritten as

x^{k+1} = argmin_x { (1/2)‖Kx − b‖^2 + ⟨p^k, Dx⟩ + (β/2)‖Dx − y^{k+1}‖^2 },   (20)

that is,

(K^T K + β D^T D) x^{k+1} = K^T b + D^T (β y^{k+1} − p^k).   (21)

Taking into account (17), (18) and (21), we propose the nonconvex inertial version of ADMM given in Algorithm 1 (IADMM).

Require: parameters γ, β, λ (and the weight α)
Initialization: x^0 = x^{−1}, y^0, p^0
for k = 0, 1, 2, …
    x̂^k = x^k + γ(x^k − x^{k−1})
    y^{k+1} ∈ prox_{f/β}(D x̂^k + p^k/β)    (cf. (18))
    x^{k+1} = (K^T K + β D^T D)^{−1} [ K^T b + D^T(β y^{k+1} − p^k) ]    (cf. (21))
    p^{k+1} = p^k + β(D x^{k+1} − y^{k+1})
end for
Algorithm 1 Nonconvex Inertial Alternating Minimization (IADMM) for Image Deblurring
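The following numpy sketch shows one plausible instantiation of Algorithm 1, reusing the FFT machinery of the ADMM sketch in the Introduction. It encodes our reading of the scheme (assumptions, not the paper's exact implementation): the inertial point x̂ enters only the y-subproblem, D appends an identity block (adding "+ beta" to the Fourier denominator), and prox_phi is a user-supplied componentwise proximal map of φ; parameter values are illustrative.

```python
import numpy as np

def iadmm_deblur(b, k, prox_phi, lam=1e-3, beta=1.0, gamma=0.6, iters=100):
    """b: blurred float image; k: full-size PSF; prox_phi(z, t): componentwise prox."""
    Fk = np.fft.fft2(k)
    dh = np.zeros_like(b); dh[0, 0] = -1.0; dh[0, -1] = 1.0
    dv = np.zeros_like(b); dv[0, 0] = -1.0; dv[-1, 0] = 1.0
    Fdh, Fdv = np.fft.fft2(dh), np.fft.fft2(dv)
    # "+ beta" comes from the identity block of D in (21).
    denom = np.abs(Fk)**2 + beta * (np.abs(Fdh)**2 + np.abs(Fdv)**2) + beta
    grad = lambda z: (np.real(np.fft.ifft2(Fdh * np.fft.fft2(z))),
                      np.real(np.fft.ifft2(Fdv * np.fft.fft2(z))))
    x = b.copy(); x_prev = b.copy()
    ph = np.zeros_like(b); pv = np.zeros_like(b); pi = np.zeros_like(b)
    for _ in range(iters):
        xb = x + gamma * (x - x_prev)                 # inertial step (gamma = 0: plain ADMM)
        gh, gv = grad(xb)
        yh = prox_phi(gh + ph / beta, lam / beta)     # nonconvex prox, componentwise, (18)-(19)
        yv = prox_phi(gv + pv / beta, lam / beta)
        yi = xb + pi / beta                           # identity block: unregularized quadratic step
        rhs = (np.conj(Fk) * np.fft.fft2(b)           # x-update: the linear system (21) via FFT
               + np.conj(Fdh) * np.fft.fft2(beta * yh - ph)
               + np.conj(Fdv) * np.fft.fft2(beta * yv - pv)
               + np.fft.fft2(beta * yi - pi))
        x_prev, x = x, np.real(np.fft.ifft2(rhs / denom))
        gh, gv = grad(x)
        ph += beta * (gh - yh); pv += beta * (gv - yv); pi += beta * (x - yi)
    return x
```

For instance, prox_phi can be soft-thresholding (recovering the convex TV1 case) or a numerically evaluated prox of φ(t) = t^{1/2} as sketched in Section II.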

Assumption 1: We assume that K has full rank, and its minimum singular value is denoted by σ_min(K) > 0.

This hypothesis indicates that the matrix K is invertible. Note that for the Gaussian blur with circular boundary conditions, K is diagonalized by the 2-D DFT and its eigenvalues (the DFT of the Gaussian kernel) are nonzero. Then, the assumed hypothesis is easy to satisfy.

We remark that x^{k+1} is the minimizer of the x-subproblem in (17), which is strongly convex with modulus λ_min(K^T K + β D^T D). If we set the parameters accordingly, then we have

(22)

where we used the fact that λ_min(D^T D) ≥ 1, established in Lemma 2.

The remaining question is what σ_min(K) exactly is. In a real situation, the dimensions of K are large, and so a direct calculation leads to a large computational cost. Therefore, we provide a probabilistic method to estimate a suitable value of σ_min(K). If K is invertible, it is easy to see that σ_min(K) = 1/‖K^{−1}‖. Then, if we obtain a bound ‖K^{−1}‖ ≤ c, we have σ_min(K) ≥ 1/c. To that goal we employ a lemma proposed in [43]:

Lemma 3 (Lemma 4.1, [43])

Fix a positive integer r and a real number α > 1. Given an independent family {ω^{(i)} : i = 1, …, r} of standard Gaussian vectors, for any matrix M we have that

‖M‖ ≤ α √(2/π) max_{i=1,…,r} ‖M ω^{(i)}‖

with probability at least 1 − α^{−r}.

Note that for computing K^{−1}ω we just need several FFTs and inverse FFTs. Therefore, its computational cost is low (O(N^2 log N) per application), and the estimation of σ_min(K) is very fast.
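A sketch of this probabilistic estimate (ours; it assumes the circular setting in which K is diagonalized by the 2-D DFT, so σ_min(K) is in fact directly readable from the kernel's DFT and the code merely illustrates the randomized route of Lemma 3):

```python
import numpy as np

def sigma_min_lower_bound(Fk, r=10, alpha=2.0, rng=np.random.default_rng(0)):
    """High-probability lower bound on sigma_min(K) = 1/||K^{-1}|| via Lemma 3.

    Fk: 2-D DFT of the blur kernel (the eigenvalues of K, nonzero for a Gaussian kernel).
    """
    upper = 0.0
    for _ in range(r):
        w = rng.standard_normal(Fk.shape)
        z = np.real(np.fft.ifft2(np.fft.fft2(w) / Fk))   # K^{-1} w: one FFT/iFFT pair
        upper = max(upper, np.linalg.norm(z))
    upper *= alpha * np.sqrt(2.0 / np.pi)                # >= ||K^{-1}|| w.p. >= 1 - alpha^{-r}
    return 1.0 / upper                                   # hence sigma_min(K) >= this value
```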

IV. Convergence analysis

This section consists of two parts and provides a complete analysis of the convergence of the nonconvex IADMM algorithm. The first subsection contains the main convergence results, the proof sketch, the difficulties in the proof and the theoretical contributions, while the second subsection introduces the necessary technical lemmas. Assumption 1 holds throughout this section.

IV-A. Main results

Theorem 1 (Stationary point convergence)

Assume that the free parameter γ satisfies the condition

(23)

Then any cluster point of the sequence generated by Algorithm 1 is also a critical point of L_β.

Theorem 1 describes the stationary-point convergence result for the IADMM method, which does not require the KŁ property of the functions. If the KŁ property is further assumed, convergence of the whole sequence can be proved, giving Theorem 2.

Theorem 2 (Sequence convergence)

Let condition (33) hold, and let the auxiliary function defined in (27) be a KŁ function. Then the sequence generated by Algorithm 1 converges to a critical point of L_β.

The proof can be divided into two parts, and in order to help the reader we first give a brief sketch of the proof:

I. In the first part we introduce an auxiliary sequence {z^k}, where the z^k are composite points built from consecutive iterates (x^k, y^k, p^k) and (x^{k−1}, y^{k−1}, p^{k−1}). An auxiliary function ξ is also proposed. In Lemma 5, we prove a "sufficient descent condition" for the values of ξ at z^k, i.e.,

ξ(z^k) − ξ(z^{k+1}) ≥ c₁ ( ‖x^{k+1} − x^k‖^2 + ‖x^k − x^{k−1}‖^2 ),   (24)

where c₁ is a positive constant.

II. We prove a "relative error condition" for ξ, i.e., there exists ω^{k+1} ∈ ∂ξ(z^{k+1}) such that

‖ω^{k+1}‖ ≤ c₂ ( ‖x^{k+1} − x^k‖ + ‖x^k − x^{k−1}‖ ),   (25)

where c₂ is a positive constant. Note that this condition is different from the "real" relative error condition proposed in [41].

The major difficulty in deriving these two conditions is the use of the inertial terms, with which the descent is lower bounded by quantities involving both ‖x^{k+1} − x^k‖ and ‖x^k − x^{k−1}‖, rather than by ‖x^{k+1} − x^k‖ alone; similarly, the relative error is bounded by both differences. The relative error can be expanded by triangle inequalities, so this part is relatively easy to prove. However, for the sufficient descent, the use of the triangle inequalities to obtain the lower bound is much more difficult and technical.

The theoretical contributions of this paper are two-fold. The first one is, of course, dealing with the differences caused by the inertial terms; this part also includes how to design the scheme of the algorithm, whose details have been presented in the previous section. The second one is the determination of the parameters for IADMM applied to image deblurring.

The main results can be proved with the following lemmas.

IV-B. Technical lemmas

Lemma 4

Let the sequence (x^k, y^k, p^k) be generated by Algorithm 1 to solve problem (3). Then

(26)

where σ is given in Lemma 2 and ρ(K^T K) denotes the spectral radius of K^T K.

Now we provide the main technical lemma, which states the descent condition for a suitable function of the sequence generated by Algorithm 1.

Lemma 5

Let the sequence be generated by Algorithm 1 and let the conditions of Theorem 1 hold. By defining the auxiliary function ξ by

(27)

and

(28)

we have that

(29)

where

(30)

If the sequence is bounded, then it holds that

(31)
Remark 2

Based on Lemma 5, it is important to guarantee that condition (23) can be satisfied. This can be achieved if β is large enough. Fortunately, in Algorithm 1 the parameter β can be fixed by the user. Thus, the parameter β shall be chosen large enough to guarantee the convergence, according to condition (23).

Lemma 6

If the nonconvex regularization function φ is coercive and

(32)

the sequence is bounded.

Remark 3

By combining the conditions (23) and (32), we just need that

(33)
Lemma 7

Let the sequence be generated by Algorithm 1. Then, for any k ∈ ℕ, there exist ω^{k+1} ∈ ∂ξ(z^{k+1}) and a constant c > 0 such that

(34)

Now, we recall a definition of the limit point set introduced in [42], which denotes the set of all the stationary points generated by the nonconvex IADMM. The specific mathematical definition of M(z^0) is given as follows.

Definition 4

Let {z^k} be generated by the nonconvex IADMM. We define the set M(z^0) by

M(z^0) := { z̄ : there exists an increasing sequence of integers {k_j} such that z^{k_j} → z̄ as j → ∞ }.   (35)
Lemma 8

Let the sequence {z^k} be generated by Algorithm 1, let the auxiliary function ξ be defined in (27), and suppose that condition (33) holds. Then, we have the following results.

(1) M(z^0) is nonempty and M(z^0) ⊆ crit(ξ).

(2) lim_{k→∞} dist(z^k, M(z^0)) = 0.

(3) The objective function ξ is finite and constant on M(z^0).

V. Numerics

In this section, we illustrate the effectiveness of the proposed algorithm on different numerical blurred images with Gaussian blur.

Fig. 1: Original images (some of them from the USC-SIPI image database) in low (L), medium (M) and high-resolution (H): (L0) Cameraman; (L1) Lena; (L2) "El Quijote"; (L3) 5.1.09; (M0) Brain; (M1) Heart; (M2) ruler.512; (M3) texmos1.p512; (H0) 5.3.01; (H1) 5.3.02; (H2) 3.2.25; (H3) 1.3.11.
Fig. 2: Evolution curves of (a) the objective function, (b) the Real Error, (c) the signal-to-noise ratio (SNR) and (d) the Residual, for the brain M0 image versus the iteration number. All the simulations have been done using the ADMM TV1 and TV(1/2) algorithms and the IADMM algorithm with two values of the inertial parameter γ.

All the programs have been written entirely in C++, and all the experiments are run under Linux on a desktop computer with an Intel Core i5-2400S CPU (2.5 GHz) and 4 GB of memory. The FFT subroutines used in the algorithms are taken from the fftw-3 library (http://www.fftw.org/). As test problems we have selected twelve images (see Figure 1), which include seven images from the USC-SIPI image database (http://sipi.usc.edu/database/), two classical test images (Lena and Cameraman), one text image from the book "El Quijote" and two medical images. In order to obtain the blurred images we use, as is common in the literature, a blurring operator generated by convolution with a Gaussian kernel (parameters KernelSize, KernelMu, KernelSigma) and circular mapping on the edges of the image.

Deblurring results for the small β and the first tolerance value:

IADMM () IADMM () ADMM(TV1)
IMG I1 ERROR SNR RES I2 ERROR SNR RES I2/I1 I3 ERROR SNR RES I3/I1
L0 4 17.4 11.1 4.59e-03 6 17.4 11.1 4.62e-03 1.50 7 17.6 11.0 4.80e-03 1.75
L1 5 16.7 10.0 4.67e-03 8 16.5 10.1 4.60e-03 1.60 9 16.8 9.9 4.95e-03 1.80
L2 2 19.4 8.7 4.36e-03 2 19.9 8.5 4.69e-03 1.00 2 20.2 8.3 4.88e-03 1.00
L3 2 16.6 13.2 3.01e-03 2 17.1 13.0 3.25e-03 1.00 2 17.9 12.6 3.67e-03 1.00
M0 8 30.6 13.8 4.50e-03 13 30.4 13.8 4.67e-03 1.62 16 30.6 13.8 4.88e-03 2.00
M1 9 28.5 13.7 4.33e-03 14 28.7 13.6 4.86e-03 1.56 18 28.5 13.7 4.78e-03 2.00
M2 6 36.6 12.8 3.48e-03 7 40.1 12.0 4.82e-03 1.17 9 39.9 12.1 4.86e-03 1.50
M3 7 35.5 12.5 4.34e-03 11 35.6 12.4 4.50e-03 1.57 13 36.3 12.3 4.97e-03 1.86
H0 8 71.4 10.3 4.50e-03 13 71.0 10.3 4.65e-03 1.62 16 71.4 10.3 4.84e-03 2.00
H1 9 69.2 6.3 4.47e-03 14 69.7 6.2 4.96e-03 1.56 18 69.2 6.3 4.89e-03 2.00
H2 7 74.2 3.3 4.16e-03 10 76.2 3.0 4.87e-03 1.43 13 75.5 3.1 4.77e-03 1.86
H3 6 77.4 1.5 4.08e-03 8 80.8 1.1 4.87e-03 1.33 11 78.9 1.3 4.48e-03 1.83

Deblurring results for the small β and the second tolerance value:

IADMM () IADMM () ADMM(TV1)
IMG I1 ERROR SNR RES I2 ERROR SNR RES I2/I1 I3 ERROR SNR RES I3/I1
L0 19 12.3 14.1 9.96e-04 32 12.2 14.2 9.75e-04 1.68 40 12.2 14.2 9.82e-04 2.11
L1 22 12.0 12.9 9.92e-04 37 11.8 13.0 9.69e-04 1.68 46 11.8 13.0 9.82e-04 2.09
L2 14 13.2 12.0 9.86e-04 23 13.1 12.1 1.00e-03 1.64 29 13.1 12.1 9.99e-04 2.07
L3 11 12.2 15.9 9.10e-04 17 12.2 15.9 9.76e-04 1.55 21 12.3 15.8 9.96e-04 1.91
M0 26 21.7 16.8 9.84e-04 43 21.5 16.8 9.78e-04 1.65 54 21.4 16.9 9.79e-04 2.08
M1 29 20.3 16.6 9.91e-04 48 20.1 16.7 9.80e-04 1.66 60 20.1 16.7 9.86e-04 2.07
M2 16 27.2 15.4 9.52e-04 26 27.1 15.4 9.84e-04 1.62 33 27.0 15.5 9.79e-04 2.06
M3 24 25.1 15.5 9.66e-04 39 25.0 15.5 9.80e-04 1.62 48 25.1 15.5 1.00e-03 2.00
H0 28 50.7 13.2 9.92e-04 46 50.3 13.3 9.89e-04 1.64 58 50.2 13.3 9.85e-04 2.07
H1 33 48.5 9.4 9.82e-04 53 48.5 9.4 9.99e-04 1.61 67 48.3 9.4 9.91e-04 2.03
H2 22 54.3 6.0 9.88e-04 36 53.9 6.0 9.98e-04 1.64 46 53.6 6.1 9.80e-04 2.09
H3 18 57.3 4.1 9.43e-04 29 57.2 4.1 9.81e-04 1.61 36 57.4 4.1 9.99e-04 2.00

Deblurring results for the small β and the third tolerance value:

IADMM () IADMM () ADMM(TV1)
IMG I1 ERROR SNR RES I2 ERROR SNR RES I2/I1 I3 ERROR SNR RES I3/I1
L0 36 10.8 15.2 4.94e-04 59 10.8 15.3 4.92e-04 1.64 73 10.8 15.3 4.99e-04 2.03
L1 42 10.5 14.0 4.96e-04 68 10.5 14.1 4.99e-04 1.62 86 10.5 14.1 4.95e-04 2.05
L2 27 11.5 13.2 4.91e-04 44 11.4 13.3 4.95e-04 1.63 55 11.4 13.3 4.97e-04 2.04
L3 21 10.6 17.2 4.80e-04 34 10.5 17.2 4.88e-04 1.62 42 10.6 17.1 4.97e-04 2.00
M0 49 18.2 18.3 4.99e-04 80 18.1 18.3 4.96e-04 1.63 100 18.1 18.3 4.97e-04 2.04
M1 50 17.4 17.9 5.50e-04 79 17.5 17.9 5.64e-04 1.58 113 16.8 18.2 4.96e-04 2.26
M2 34 22.1 17.2 5.83e-04 54 22.1 17.2 5.90e-04 1.59 158 17.8 19.1 5.00e-04 4.65
M3 41 22.0 16.6 6.65e-04 65 22.0 16.6 6.74e-04 1.59 401 16.3 19.2 5.00e-04 9.78
H0 40 45.8 14.1 6.73e-04 64 45.8 14.1 6.85e-04 1.60 110 41.8 14.9 4.96e-04 2.75
H1 68 39.5 11.1 4.99e-04 108 39.6 11.1 5.05e-04 1.59 138 39.3 11.2 4.97e-04 2.03
H2 42 45.3 7.6 4.90e-04 67 45.3 7.6 4.99e-04 1.60 84 45.3 7.6 4.99e-04 2.00
H3 32 48.8 5.5 4.91e-04 52 48.6 5.5 4.94e-04 1.62 65 48.7 5.5 4.96e-04 2.03
TABLE II: Deblurring results for a small value of β using different values of the tolerance. Iteration numbers (I1, I2, I3), Real Error (ERROR), SNR, Residual (RES) and efficiency rates I2/I1 and I3/I1, using the IADMM method (with two values of γ) and the ADMM (TV1) method applied to all the 12 test images (IMG).

The proposed IADMM algorithm (Algorithm 1) is compared with the widely used augmented Lagrangian method (ADMM [22]) for image deblurring. We mainly consider two models, i.e., φ(t) = t and φ(t) = t^{1/2}; we call them the TV1 and TV(1/2) methods, respectively. Note that TV1 is a convex method, while TV(1/2) is nonconvex. In the tests we have considered (unless otherwise indicated), for all the methods, a small and/or a large value of the penalty parameter β, just to see the behaviour of the IADMM algorithm.

The performance of the deblurring algorithms is quantitatively measured by means of the objective function (Equation (2) or (3)), the Real Error (the norm of the difference between the original and deblurred images), the signal-to-noise ratio (SNR) [22]

SNR := 10 log_{10} ( ‖x̃ − E(x̃)‖^2 / ‖x̃ − x‖^2 ),   (36)

where x̃ and x denote the original image and the restored image, respectively, and E(x̃) represents the mean of the original image x̃, and the Residual (computed from the corresponding iterates in the standard and inertial versions, respectively) as described in [22]:

(37)
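For completeness, the SNR (36) in code form (a straightforward helper of ours):

```python
import numpy as np

def snr_db(x_orig, x_rest):
    """SNR (36) in dB between the original image x_orig and the restored image x_rest."""
    num = np.sum((x_orig - x_orig.mean())**2)
    return 10.0 * np.log10(num / np.sum((x_orig - x_rest)**2))
```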

In the tests we do not provide CPU times, as all the algorithms have a very similar computational cost per iteration (mainly from the FFT routines). Therefore, there is almost no difference between plots showing iterations or CPU cost, and the consumed CPU time is basically proportional to the respective number of iterations.

In our first test we use the brain M0 image with a low value of β, and we show in Figure 2 that the performances of the ADMM TV1 and TV(1/2) methods are quite similar in error and SNR. Therefore, in the rest of the comparisons we will just consider the TV1 method. For the IADMM method, the inertial parameter γ in Algorithm 1 is investigated first by using two different values. The main difference observed in these tests is that the nonconvex method is more unstable once the maximum precision is reached (at this point the convex ADMM TV1 method seems to be the most stable, with a quite smooth behaviour). The fastest convergence is observed using the IADMM method with the largest value of γ, but once the maximum precision is attained an unstable behaviour appears. Therefore, using mainly the information provided by the residual (Eq. (37)), we provide a stop control criterion (in the same spirit as [22]) that stops the iterative process at the black points of Figure 2(d). That is, we stop when the following condition holds:

(38)

In the first case the convergence up to the desired tolerance is obtained, while in the second case the algorithm has reached its stability limit and the residual grows, behaving afterwards in an unstable way. Note that the residual is used in the stop criterion because it only uses known data from the iterations (it does not depend on the original image, which is unknown). We remark that the use of stop control techniques avoids unnecessary iterations and allows stopping at the accuracy limit of the method. Also, from the pictures we observe that the IADMM algorithm provides enough precision in a lower number of iterations. A larger γ means a faster method, but at the price of a more unstable one, as can be seen in Fig. 2(d). In that picture we observe that when the residual begins to behave chaotically, with sudden increases, it is advisable to stop the iterative process, as done in the stop criterion (38) (black dots in Fig. 2(d)). With that criterion, the IADMM method seems to be an interesting option for fast deblurring problems.
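In code, the stop control (38) as described above amounts to the following logic (a sketch of the two branches, tolerance reached or residual growth; the exact thresholds are the paper's):

```python
def should_stop(res_history, tol):
    """Stop when the residual reaches the tolerance or starts to grow (stability limit)."""
    if res_history[-1] <= tol:
        return True                                   # desired accuracy reached
    if len(res_history) > 1 and res_history[-1] > res_history[-2]:
        return True                                   # residual grows: stability limit
    return False
```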

Fig. 3: Evolution curves of (a) the Real Error and (b) the Residual, for the brain M0 image versus the iteration number. All the simulations have been done using the IADMM algorithm with several values of the parameter γ.
Fig. 4: Evolution curves of (a) the Real Error and (b) the Residual, for the brain M0 image versus the iteration number. All the simulations have been done using the IADMM algorithm with several values of the parameters β and γ.

To observe more clearly the influence of the parameter γ in Algorithm 1, we perform several tests on the brain M0 image in Figure 3 for several values of γ. Note that this parameter plays a role similar to a stepsize (as also occurs with the parameter β), as it controls the perturbation at each step. A large value provides, when the method works, a quite fast method, but on the other hand it makes the method more unstable. In fact, from plot 3(b) we observe that in this case it would be optimal to use the largest parameter value in combination with the stop criterion, giving the maximum precision in just 28 iterations. Besides, it is shown that after the iterations selected by the stop criterion the residual begins to oscillate among values that provide a similar error but generate an unstable behaviour, giving rise to an increment of the error in subsequent iterations (this instability is delayed when the parameter γ decreases, which is expected because the increment is smaller, as the vertical lines connecting the error and residual plots show).

The influence of the penalty parameter β is also quite relevant, but a detailed analysis is out of the scope of this paper. In Figure 4 we show the evolution of the residual using two values of β and several values of the parameter γ. We observe that low values of the penalty parameter give a lower residual, but the error is lower for large β, providing a faster convergence. The parameter β has a big effect on the empirical performance of the methods, as shown in [33], but it remains to study optimal combinations of the parameters β and γ and suitable criteria for their automatic selection (this will be part of the next steps in our study of these methods).

Fig. 5: Deblurred images at different stages of the IADMM method for the brain M0 image. (a) Original image; (b) Blurred image; (c) Recovery by IADMM using the first error tolerance; (d) Recovery by IADMM using the second error tolerance.

In Figure 5 we present the original medium-resolution brain M0 image, the blurred one (obtained, as indicated, by convolution with a Gaussian kernel), and the IADMM deblurred images obtained with two error tolerances in the stop criterion (38). We can see that in both cases the quality of the recovered image is visually good.

Deblurring results for the large β:

IADMM () IADMM () ADMM(TV1)
IMG I1 ERROR SNR RES I2 ERROR SNR RES I2/I1 I3 ERROR SNR RES I3/I1
L0 5 3.0 26.5 4.72e-01 5 2.8 27.1 4.31e-01 1.00 10 6.2 20.0 9.56e-02 2.00
L1 3 3.3 24.1 5.38e-01 5 3.2 24.3 4.62e-01 1.67 10 5.8 19.2 9.48e-02 3.33
L2 1 3.8 22.9 8.22e-01 3 3.7 23.0 5.13e-01 3.00 10 9.3 15.0 9.68e-02 10.00
L3 6 1.4 34.6 2.41e-01 7 1.3 35.5 2.00e-01 1.17 10 7.0 20.7 9.86e-02 1.67
M0 3 6.2 27.6 5.79e-01 5 6.0 28.0 4.98e-01 1.67 10 7.0 26.5 9.74e-02 3.33
M1 7 4.3 30.2 4.40e-01 7 3.9 31.0 4.12e-01 1.00 10 5.4 28.1 9.78e-02 1.43
M2 4 3.0 34.5 1.50e-01 4 3.0 34.7 1.00e-01 1.00 10 20.6 17.8 9.90e-02 2.50
M3 1 14.6 20.2 8.94e-01 1 14.5 20.2 8.68e-01 1.00 10 31.7 13.4 9.64e-02 10.00
H0 5 13.8 24.5 4.96e-01 5 12.6 25.3 4.44e-01 1.00 10 15.2 23.7 9.52e-02 2.00
H1 3 16.6 18.7 5.73e-01 5 16.2 18.9 4.98e-01 1.67 10 20.4 16.9 9.47e-02 3.33
H2 5 15.7 16.8 5.11e-01 5 14.6 17.4 4.57e-01 1.00 10 20.7 14.4 9.29e-02 2.00
TABLE III: Deblurring results for a large value of β. Iteration numbers (I1, I2, I3), Real Error (ERROR), SNR, Residual (RES) and efficiency rates I2/I1 and I3/I1, using the IADMM method (with two values of γ) and the ADMM (TV1) method applied to the test images (IMG).

Deblurring results for the M0 image:

IADMM ()