# Image Restoration by Iterative Denoising and Backward Projections

Inverse problems appear in many applications such as image deblurring and inpainting. The common approach to address them is to design a specific algorithm for each problem. The Plug-and-Play (P&P) framework, which has been recently introduced, allows solving general inverse problems by leveraging the impressive capabilities of existing denoising algorithms. While this fresh strategy has found many applications, a burdensome parameter tuning is often required in order to obtain high-quality results. In this work, we propose an alternative method for solving inverse problems using denoising algorithms, that requires less parameter tuning. We provide theoretical analysis of the method, and empirically demonstrate that it is competitive with task-specific techniques and the P&P approach for image inpainting and deblurring.


## I Introduction

We consider the reconstruction of an image from its degraded version, which may be noisy, blurred, downsampled, or all of these together. This general problem has many important applications, such as medical imaging, surveillance, entertainment, and more. Traditionally, the design of task-specific algorithms has been the ruling approach. Many works specifically considered image denoising [1, 2, 3], deblurring [4, 5, 6], inpainting [7, 8, 9, 10, 11], etc.

Recently, a new approach has attracted much interest. This approach suggests leveraging the impressive capabilities of existing denoising algorithms for solving other tasks that can be formulated as an inverse problem. The pioneering algorithm that introduced this concept is the Plug-and-Play (P&P) method [12], which presents an elegant way to decouple the measurement model and the image prior, such that the latter is handled solely by a denoising operation. Thus, it is not required to explicitly specify the prior, since it is implicitly defined through the choice of the denoiser.

The P&P method has already found many applications, e.g. bright field electron tomography [13], Poisson denoising [14], and postprocessing of compressed images [15]. It also inspired new related techniques [16, 17, 18]. However, it has been noticed that the P&P often requires a burdensome parameter tuning in order to obtain high quality results [17, 19]. Moreover, since it is an iterative method, sometimes a large number of iterations is required.

In this work, we propose a simple iterative method for solving linear inverse problems using denoising algorithms, which provides an alternative to P&P. Our strategy has fewer parameters that require tuning (e.g. no tuning is required for the noisy inpainting problem), often requires fewer iterations, and its recovery performance is competitive with task-specific algorithms and with the P&P approach. We demonstrate the advantages of the new technique on inpainting and deblurring problems.

The paper is organized as follows. In Section II we present the problem formulation and the P&P approach. The proposed algorithm is presented in Section III. Section IV includes mathematical analysis of the algorithm and provides a practical way to tune its parameter. In Section V the usage of the method is demonstrated and examined for inpainting and deblurring problems. Section VI concludes the paper.

## II Background

### II-A Problem formulation

The problem of image restoration can be generally formulated by

$$y = Hx + e, \tag{1}$$

where $x \in \mathbb{R}^n$ represents the unknown original image, $y \in \mathbb{R}^m$ represents the observations, $H$ is an $m \times n$ degradation matrix, and $e \in \mathbb{R}^m$ is a vector of independent and identically distributed Gaussian random variables with zero mean and standard deviation $\sigma_n$. The model in (1) can represent different image restoration problems; for example: image denoising when $H$ is the identity matrix $I_n$, image inpainting when $H$ is a selection of $m$ rows of $I_n$, and image deblurring when $H$ is a blurring operator.
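To make the roles of the quantities in (1) concrete, the following sketch (a toy 1-D example; all sizes, names, and values are our own choices, not the paper's) builds an inpainting operator as a row selection of the identity and simulates the degradation model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma_n = 16, 10, 0.1        # toy signal length, observation count, noise level

x = rng.random(n)                   # unknown original signal

# Inpainting: H is a selection of m rows of the n x n identity matrix.
keep = np.sort(rng.choice(n, size=m, replace=False))
H = np.eye(n)[keep]                 # m x n degradation matrix

e = sigma_n * rng.standard_normal(m)
y = H @ x + e                       # degraded observations, model (1)
```

For deblurring, `H` would instead be a (circulant) convolution matrix, and for denoising simply the identity.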

In all of these cases, a prior image model $s(x)$ is required in order to successfully estimate $x$ from the observations $y$. Specifically, note that $H$ is ill-conditioned in the case of image deblurring; thus, in practice it can be approximated by a rank-deficient matrix, or alternatively by a full-rank matrix with $m = n$. Therefore, for a unified formulation of inpainting and deblurring problems, which are the test cases of this paper, we assume that $H$ has full row rank ($m \leq n$).

Almost any approach for recovering $x$ involves formulating a cost function, composed of fidelity and penalty terms, which is minimized by the desired solution. The fidelity term ensures that the solution agrees with the measurements, and is often derived from the negative log-likelihood function. The penalty term regularizes the optimization problem through the prior image model $s(\tilde{x})$. Hence, the typical cost function is

$$f(\tilde{x}) = \frac{1}{2\sigma_n^2}\|y - H\tilde{x}\|_2^2 + s(\tilde{x}), \tag{2}$$

where $\|\cdot\|_2$ stands for the Euclidean norm.

### II-B Plug-and-Play approach

Instead of devising a separate algorithm to solve (2) for each type of matrix $H$, a general recovery strategy has been proposed in [12], denoted the Plug-and-Play (P&P) method. For completeness, we briefly describe this technique.

Using variable splitting, the P&P method restates the minimization problem as

$$\min_{\tilde{x},\tilde{v}} \; \ell(\tilde{x}) + \beta s(\tilde{v}) \quad \text{s.t.} \quad \tilde{x} = \tilde{v}, \tag{3}$$

where $\ell(\tilde{x}) \triangleq \frac{1}{2\sigma_n^2}\|y - H\tilde{x}\|_2^2$ is the fidelity term in (2), and $\beta$ is a positive parameter that adds flexibility to the cost function. This problem can be solved using ADMM [20] by constructing an augmented Lagrangian, which is given by

$$\begin{aligned} L_\lambda &= \ell(\tilde{x}) + \beta s(\tilde{v}) + u^T(\tilde{x} - \tilde{v}) + \frac{\lambda}{2}\|\tilde{x} - \tilde{v}\|_2^2 \\ &= \ell(\tilde{x}) + \beta s(\tilde{v}) + \frac{\lambda}{2}\|\tilde{x} - \tilde{v} + \tilde{u}\|_2^2 - \frac{\lambda}{2}\|\tilde{u}\|_2^2, \end{aligned} \tag{4}$$

where $u$ is the dual variable, $\tilde{u} \triangleq u/\lambda$ is the scaled dual variable, and $\lambda$ is the ADMM penalty parameter. The ADMM algorithm consists of iterating until convergence over the following three steps:

$$\begin{aligned} \hat{x}_k &= \operatorname*{argmin}_{\tilde{x}} L_\lambda(\tilde{x}, \hat{v}_{k-1}, \hat{u}_{k-1}), \\ \hat{v}_k &= \operatorname*{argmin}_{\tilde{v}} L_\lambda(\hat{x}_k, \tilde{v}, \hat{u}_{k-1}), \\ \hat{u}_k &= \hat{u}_{k-1} + (\hat{x}_k - \hat{v}_k). \end{aligned} \tag{5}$$

By plugging (4) into (5) we have

$$\begin{aligned} \hat{x}_k &= \operatorname*{argmin}_{\tilde{x}} \ell(\tilde{x}) + \frac{\lambda}{2}\|\tilde{x} - (\hat{v}_{k-1} - \hat{u}_{k-1})\|_2^2, \\ \hat{v}_k &= \operatorname*{argmin}_{\tilde{v}} \frac{\lambda}{2\beta}\|(\hat{x}_k + \hat{u}_{k-1}) - \tilde{v}\|_2^2 + s(\tilde{v}), \\ \hat{u}_k &= \hat{u}_{k-1} + (\hat{x}_k - \hat{v}_k). \end{aligned} \tag{6}$$

Note that the first step in (6) is just the solution of a least squares (LS) problem, and the third step is a simple update. The second step is more interesting. It describes obtaining $\hat{v}_k$ using a white Gaussian denoiser with noise variance of $\beta/\lambda$, applied on the image $\hat{x}_k + \hat{u}_{k-1}$. This can be written compactly as $\hat{v}_k = \mathcal{D}(\hat{x}_k + \hat{u}_{k-1}; \sqrt{\beta/\lambda})$, where $\mathcal{D}(\cdot\,;\sigma)$ is a denoising operator. Since general denoising algorithms can be used to implement the operator $\mathcal{D}$, the P&P method does not require knowing or explicitly specifying the prior function $s(\tilde{x})$. Instead, $s(\tilde{x})$ is implicitly defined through the choice of $\mathcal{D}$. The obtained P&P algorithm is presented in Algorithm 1.
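The three steps of (6) can be sketched in a few lines of Python for a small dense $H$ and any plugged-in denoiser. This is a pedagogical sketch under our own choices (function name, crude initialization, dense linear solve), not the authors' implementation:

```python
import numpy as np

def pnp_admm(y, H, sigma_n, denoise, lam, beta, iters=50):
    """Plug-and-Play ADMM sketch for y = Hx + e (small dense H only).

    denoise(z, sigma) is any white-Gaussian denoiser; lam and beta are
    the ADMM penalty and prior-weight parameters of (3)-(6).
    """
    n = H.shape[1]
    v = H.T @ y                       # crude initialization (our choice)
    u = np.zeros(n)
    # First step of (6) is the LS problem
    #   argmin_x (1/2sigma_n^2)||y - Hx||^2 + (lam/2)||x - (v - u)||^2,
    # whose normal-equations matrix can be precomputed:
    A = H.T @ H / sigma_n**2 + lam * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(A, H.T @ y / sigma_n**2 + lam * (v - u))
        v = denoise(x + u, np.sqrt(beta / lam))   # denoising step of (6)
        u = u + (x - v)                           # dual update
    return v
```

In practice the denoiser would be BM3D or similar; any callable with the same signature can be plugged in.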

From ADMM theory, global convergence (i.e. the iterations approach feasibility and the objective reaches its optimal value) is ensured if $\ell(\tilde{x})$ and $s(\tilde{x})$ are convex, closed, and proper, and the unaugmented Lagrangian has a saddle point [20]. Global convergence of P&P is proved for a denoiser that has a symmetric gradient and is non-expansive [13]. However, the latter property is difficult to prove, and well-known denoisers such as BM3D [1], K-SVD [2], and standard NLM [3] lead to good results despite violating these conditions. Another type of convergence is fixed-point convergence, which guarantees that an iterative algorithm asymptotically enters a steady state. A modified version of P&P, where the ADMM parameter $\lambda$ increases between iterations, is guaranteed to have such a convergence under some mild conditions on the denoiser [19].

The P&P method is not free of drawbacks. Its main difficulties are the large number of iterations often required to converge to a good solution, and the setting of the design parameters $\lambda$ and $\beta$, which is not always clear and strongly affects the performance.

## III The Proposed Algorithm

In this work we take another strategy for solving inverse problems using denoising algorithms. We start by formulating the cost function (2) in a somewhat strange but equivalent way:

$$f(\tilde{x}) = \frac{1}{2\sigma_n^2}\|H(H^\dagger y - \tilde{x})\|_2^2 + s(\tilde{x}) = \frac{1}{2\sigma_n^2}\|H^\dagger y - \tilde{x}\|_{H^T H}^2 + s(\tilde{x}), \tag{7}$$

where

$$H^\dagger \triangleq H^T(HH^T)^{-1}, \tag{8}$$
$$\|u\|_{H^T H}^2 \triangleq u^T H^T H u. \tag{9}$$

Note that $H^\dagger$ is the pseudoinverse of the full row rank matrix $H$, and $\|\cdot\|_{H^T H}$ is not a real norm, since $H^T H$ is not a positive definite matrix in our case. Moreover, as mentioned above, since the null space of $H$ is not empty, the prior $s(\tilde{x})$ is essential in order to obtain a meaningful solution.

The optimization problem can be equivalently written as

$$\min_{\tilde{x},\tilde{y}} \; \frac{1}{2\sigma_n^2}\|\tilde{y} - \tilde{x}\|_{H^T H}^2 + s(\tilde{x}) \quad \text{s.t.} \quad \tilde{y} = H^\dagger y. \tag{10}$$

Note that due to the degenerate constraint, the solution for $\tilde{y}$ is trivial: $\tilde{y} = H^\dagger y$.

Now, we make two major modifications to the above optimization problem. The basic idea is to loosen the variable $\tilde{y}$ in a restricted manner that can facilitate the estimation of $\tilde{x}$. First, we give some degrees of freedom to $\tilde{y}$ by using the constraint $H\tilde{y} = y$ instead of $\tilde{y} = H^\dagger y$. Next, we turn to prevent large components of $\tilde{y}$ in the null space of $H$ that may strongly disagree with the prior $s(\tilde{x})$. We do so by replacing the weighting by $H^T H$ in the fidelity term, which implies a projection onto a subspace, with a weighting by $\frac{\sigma_n^2}{(\sigma_n+\delta)^2} I_n$, which implies a full-dimensional space, where $\delta$ is a design parameter.

This leads to the following optimization problem

$$\min_{\tilde{x},\tilde{y}} \; \frac{1}{2(\sigma_n+\delta)^2}\|\tilde{y} - \tilde{x}\|_2^2 + s(\tilde{x}) \quad \text{s.t.} \quad H\tilde{y} = y. \tag{11}$$

Note that $\delta$ introduces a tradeoff. On the one hand, an exaggerated value of $\delta$ should be avoided, as it may over-reduce the effect of the fidelity term. On the other hand, a too small value of $\delta$ may over-penalize $\tilde{y}$ unless it is very close to the affine subspace $\{\tilde{y} : H\tilde{y} = y\}$. This limits the effective feasible set of $\tilde{y}$ in problem (11), such that it may not include potential solutions of the original problem (10). Therefore, we suggest setting the value of $\delta$ as

$$\delta = \operatorname*{argmin}_{\tilde{\delta}} \; (\sigma_n + \tilde{\delta})^2 \quad \text{s.t.} \quad \frac{1}{\sigma_n^2}\|H^\dagger y - \tilde{x}\|_{H^T H}^2 \geq \frac{1}{(\sigma_n + \tilde{\delta})^2}\|\tilde{y} - \tilde{x}\|_2^2 \quad \forall\, \tilde{x}, \tilde{y} \in \mathcal{S}, \tag{12}$$

where $\mathcal{S}$ denotes the feasible set of problem (11). Note that the feasibility of $\tilde{x}$ is dictated by $s(\tilde{x})$, and the feasibility of $\tilde{y}$ is dictated by the constraint in (11). The problem of obtaining such a value of $\delta$ (or an approximation) is discussed in Section IV-A, where a relaxed version of the condition in (12) is presented.

Assuming that $\delta$ solves (12), the property that $\frac{1}{\sigma_n^2}\|H^\dagger y - \tilde{x}\|_{H^T H}^2 \geq \frac{1}{(\sigma_n+\delta)^2}\|\tilde{y} - \tilde{x}\|_2^2$ for feasible $\tilde{x}$ and $\tilde{y}$, together with the fact that $H^\dagger y$ is one of the solutions of the underdetermined system $H\tilde{y} = y$, prevents increasing the penalty on potential solutions of the original optimization problem (10). Therefore, roughly speaking, we do not lose solutions when we solve (11) instead of (10). As a sanity check, observe that if $H = I_n$ then the constraint in (11) degenerates to $\tilde{y} = y$, and the solution to (12) is $\delta = 0$. Therefore, (11) reduces to the original image denoising problem.

We solve (11) using alternating minimization. Iteratively, $\tilde{x}$ is estimated by solving

$$\tilde{x}_k = \operatorname*{argmin}_{\tilde{x}} \; \frac{1}{2(\sigma_n+\delta)^2}\|\tilde{y}_{k-1} - \tilde{x}\|_2^2 + s(\tilde{x}), \tag{13}$$

and $\tilde{y}$ is estimated by solving

$$\tilde{y}_k = \operatorname*{argmin}_{\tilde{y}} \; \|\tilde{y} - \tilde{x}_k\|_2^2 \quad \text{s.t.} \quad H\tilde{y} = y, \tag{14}$$

which describes a projection of $\tilde{x}_k$ onto the affine subspace $\{\tilde{y} : H\tilde{y} = y\}$, and has the closed-form solution

$$\tilde{y}_k = H^\dagger y + (I_n - H^\dagger H)\tilde{x}_k. \tag{15}$$

Similarly to the P&P technique, (13) describes obtaining $\tilde{x}_k$ using a white Gaussian denoiser with noise variance of $(\sigma_n+\delta)^2$, applied on the image $\tilde{y}_{k-1}$, and can be written compactly as $\tilde{x}_k = \mathcal{D}(\tilde{y}_{k-1}; \sigma_n+\delta)$, where $\mathcal{D}(\cdot\,;\sigma)$ is a denoising operator. Moreover, as in the case of the P&P, the proposed method does not require knowing or explicitly specifying the prior function $s(\tilde{x})$. Instead, $s(\tilde{x})$ is implicitly defined through the choice of $\mathcal{D}$.

The variable $\tilde{y}_k$ is expected to be closer to the true signal $x$ than the raw observations $y$. Thus, our algorithm alternates between estimating the signal and using this estimate in order to obtain improved measurements (that also comply with the original observations $y$). The proposed algorithm, which we call Iterative Denoising and Backward Projections (IDBP), is presented in Algorithm 2.
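The alternation of (13) and (15) can be sketched as follows for a small dense $H$; the denoiser is any plugged-in white-Gaussian denoiser, and the function name and initialization are our own choices, not the authors' implementation:

```python
import numpy as np

def idbp(y, H, sigma_n, delta, denoise, iters=150):
    """Iterative Denoising and Backward Projections sketch (dense H).

    denoise(z, sigma) is any white-Gaussian denoiser. Alternates the
    denoising step (13), with effective noise level sigma_n + delta,
    and the backward projection (15) onto {y~ : H y~ = y}.
    """
    H_pinv = np.linalg.pinv(H)                 # H^dagger for full row rank H
    Q = np.eye(H.shape[1]) - H_pinv @ H        # projector onto the null space of H
    y_k = H_pinv @ y                           # initialization (our choice)
    for _ in range(iters):
        x_k = denoise(y_k, sigma_n + delta)    # (13): denoising step
        y_k = H_pinv @ y + Q @ x_k             # (15): backward projection
    return x_k
```

For large images the dense pseudoinverse is of course replaced by a structure-aware implementation (masking for inpainting, FFT filtering for deblurring), as described in Section V.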

## IV Mathematical Analysis of the Algorithm

### IV-A Setting the value of the parameter δ

Setting the value of $\delta$ that solves (12) is required for a simple theoretical justification of our method. However, it is not clear how to obtain such a $\delta$ in general. Therefore, in order to relax the condition in (12), which should be satisfied by all $\tilde{x}$ and $\tilde{y}$ in $\mathcal{S}$, we can focus only on the sequences $\{\tilde{x}_k\}$ and $\{\tilde{y}_k\}$ generated by the proposed alternating minimization process. Then, we can use the following proposition.

###### Proposition 1.

Set $\delta = \tilde{\delta}$. If there exists an iteration $k$ of IDBP that violates the following condition

$$\frac{1}{\sigma_n^2}\|y - H\tilde{x}_k\|_2^2 \geq \frac{1}{(\sigma_n+\tilde{\delta})^2}\|H^\dagger(y - H\tilde{x}_k)\|_2^2, \tag{16}$$

then $\tilde{\delta}$ also violates the condition in (12).

###### Proof.

Assume that $\tilde{x}_k$ and $\tilde{y}_k$ generated by IDBP at some iteration $k$ violate (16); then they also violate the equivalent condition

$$\frac{1}{\sigma_n^2}\|H^\dagger y - \tilde{x}_k\|_{H^T H}^2 \geq \frac{1}{(\sigma_n+\tilde{\delta})^2}\|H^\dagger y - H^\dagger H\tilde{x}_k\|_2^2. \tag{17}$$

Note that (17) is obtained simply by plugging (15) into the inequality in (12). Therefore, $\tilde{x}_k$ and $\tilde{y}_k$ also violate the inequality in (12). Finally, it is easy to see that $\tilde{x}_k$ and $\tilde{y}_k$ are feasible points of (11), since $\tilde{x}_k$ is unconstrained and $\tilde{y}_k$ satisfies $H\tilde{y}_k = y$. Therefore, the condition in (12) does not hold for all feasible $\tilde{x}$ and $\tilde{y}$, which means that $\tilde{\delta}$ violates it. ∎

Note that (16) can be easily evaluated at each iteration. Thus, a violation of (12) can be spotted (through a violation of (16)) and used for stopping the process, increasing $\tilde{\delta}$, and running the algorithm again. Of course, the opposite direction does not hold: even when (16) is satisfied for all iterations, satisfying (12) is not guaranteed. However, the relaxed condition (16) provides an easy way to set $\tilde{\delta}$ as an approximation to the solution of (12), which gives very good results in our experiments.
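For a small dense $H$, the per-iteration check of (16) amounts to a few lines (a sketch; the helper name and LHS/RHS packaging are our own):

```python
import numpy as np

def condition16_ratio(y, H, x_k, sigma_n, delta):
    """LHS/RHS of condition (16) at the iterate x_k.

    A value >= 1 means the condition holds at this iterate; keeping a
    margin above 1 is advisable, since (16) only relaxes (12).
    """
    r = y - H @ x_k                                  # residual of the iterate
    lhs = np.sum(r**2) / sigma_n**2
    rhs = np.sum((np.linalg.pinv(H) @ r)**2) / (sigma_n + delta)**2
    return lhs / rhs
```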

In the special case of the inpainting problem, (16) becomes extremely simple. Since $H$ is a selection of $m$ rows of $I_n$, it follows that $H^\dagger = H^T$, which is an $n \times m$ matrix that merely pads with $n-m$ zeros the vector on which it is applied. Therefore, $\|H^\dagger(y - H\tilde{x}_k)\|_2 = \|y - H\tilde{x}_k\|_2$, implying that $\tilde{\delta} = 0$ satisfies (16) in this case. Obviously, if $\sigma_n = 0$, a small positive $\tilde{\delta}$ is required in order to prevent the algorithm from getting stuck (because in this case the effective noise level $\sigma_n + \tilde{\delta}$ is zero).
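In fact, no matrices are needed for inpainting at all: with a boolean mask of observed pixels, the backward projection (15) is a simple paste (a sketch with our own naming conventions):

```python
import numpy as np

def inpaint_projection(y, mask, x_k):
    """Step (15) for inpainting.

    Since H^dagger = H^T, the backward projection simply restores the
    observed pixels from y and keeps the estimate x_k elsewhere.
    mask is a boolean array marking observed pixels; y holds their values.
    """
    y_k = x_k.copy()
    y_k[mask] = y          # observed pixels from y, missing pixels from x_k
    return y_k
```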

Condition (16) is more complex when considering the deblurring problem. In this case $H$ is an ill-conditioned matrix. Therefore, $H^\dagger$ must be approximated, either by approximating $H$ by a full-rank matrix before computing (8), or by regularized inversion techniques for $H$, e.g. standard Tikhonov regularization. Research on how to compute $\tilde{\delta}$ in this case is ongoing. We empirically observed that using a fixed value of $\delta$ (for all noise levels, blur kernels, and images) exhibits good performance. However, we had to add another parameter $\epsilon$, which controls the amount of regularization in the approximation of $H^\dagger$ and slightly changes between scenarios. This issue is discussed in Section V-B. An interesting observation is that the pairs of $(\delta, \epsilon)$ which give the best results indeed satisfy condition (16). On the other hand, pairs of $(\delta, \epsilon)$ that give bad results often violate this condition (recall that the condition should be met during all iterations). An example of this behavior is given in Section V-B, where we also introduce an automatic tuning mechanism based on Proposition 1.

### IV-B Analysis of the sequence {ỹ_k}

The IDBP algorithm creates the sequence $\{\tilde{y}_k\}$, which can be interpreted as a sequence of updated measurements. It is desired that $\tilde{y}_k$ improves with each iteration, i.e. that $\tilde{x}_{k+1}$, obtained from $\tilde{y}_k$, estimates $x$ better than $\tilde{x}_k$, which is obtained from $\tilde{y}_{k-1}$.

Assuming that the result of the denoiser, denoted by $\bar{x}$, is perfect, i.e. $\bar{x} = x$, we get from (15)

$$\bar{y} = H^\dagger y + (I_n - H^\dagger H)\bar{x} = H^\dagger(Hx + e) + (I_n - H^\dagger H)x = x + H^\dagger e. \tag{18}$$

The last equality describes a model that contains only noise (possibly colored), which is much easier to deal with than the original model (1). Therefore, $\bar{y}$ can be considered as the optimal improved measurements that our algorithm can achieve. As we wish to make no specific assumptions on the denoising scheme $\mathcal{D}$, the improvement of $\tilde{y}_k$ will be measured by its Euclidean distance to $\bar{y}$.

Denote by $P_H \triangleq H^\dagger H$ the orthogonal projection onto the row space of $H$, and by $Q_H \triangleq I_n - H^\dagger H$ its orthogonal complement. The updated measurements $\tilde{y}_k$ are always consistent with $\bar{y}$ on the row space of $H$, and differ from it only in the null space of $H$, as can be seen from

$$\tilde{y}_k = H^\dagger(Hx + e) + Q_H\tilde{x}_k = P_H x + H^\dagger e + Q_H\tilde{x}_k. \tag{19}$$
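The identities (18) and (19) are easy to verify numerically on a random full-row-rank $H$ (toy sizes of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 8
H = rng.standard_normal((m, n))       # full row rank with probability 1
x = rng.random(n)
e = 0.05 * rng.standard_normal(m)
y = H @ x + e

H_pinv = np.linalg.pinv(H)
P = H_pinv @ H                        # P_H, projector onto the row space of H
Q = np.eye(n) - P                     # Q_H, its orthogonal complement

y_bar = H_pinv @ y + Q @ x            # (15) with a perfect denoiser result x
assert np.allclose(y_bar, x + H_pinv @ e)   # (18): noise-only model

x_k = rng.random(n)                   # an arbitrary iterate
y_k = H_pinv @ y + Q @ x_k            # (15)
assert np.allclose(P @ y_k, P @ y_bar)      # (19): agreement on the row space
```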

Thus, the following theorem ensures that iteration $k$ improves the results, provided that $\tilde{x}_k$ is closer to $x$ than $\tilde{y}_{k-1}$ on the null space of $H$, i.e.,

$$\|Q_H(\tilde{x}_k - x)\|_2 < \|Q_H(\tilde{y}_{k-1} - x)\|_2. \tag{20}$$
###### Theorem 2.

Assume that (20) holds at the $k$-th iteration of IDBP. Then we have

$$\|\tilde{y}_k - \bar{y}\|_2 < \|\tilde{y}_{k-1} - \bar{y}\|_2. \tag{21}$$
###### Proof.

Note that

$$Q_H\tilde{y}_{k-1} = Q_H(H^\dagger y + Q_H\tilde{x}_{k-1}) = Q_H\tilde{x}_{k-1}. \tag{22}$$

Equation (21) is obtained by

$$\begin{aligned} \|\tilde{y}_k - \bar{y}\|_2 &= \|(P_H x + H^\dagger e + Q_H\tilde{x}_k) - (x + H^\dagger e)\|_2 \\ &= \|Q_H(\tilde{x}_k - x)\|_2 \\ &< \|Q_H(\tilde{x}_{k-1} - x)\|_2 \\ &= \|(P_H x + H^\dagger e + Q_H\tilde{x}_{k-1}) - (x + H^\dagger e)\|_2 \\ &= \|\tilde{y}_{k-1} - \bar{y}\|_2, \end{aligned} \tag{23}$$

where the inequality follows from (20) and (22). ∎

A denoiser that makes use of a good prior (and a suitable $\sigma$) is expected to satisfy (20), at least in early iterations. For example, in the inpainting problem $Q_H$ is associated with the missing pixels, and in the deblurring problem $Q_H$ is associated with the data that suffer the greatest loss from the blur kernel. Therefore, in both cases $\tilde{x}_k$ is expected to be closer to $x$ than $\tilde{y}_{k-1}$. Note that if (20) holds for all iterations, then Theorem 2 ensures monotonic improvement and convergence of $\{\tilde{y}_k\}$, and thus a fixed-point convergence of IDBP. However, note that it does not guarantee that $\bar{y}$ is the limit of the sequence $\{\tilde{y}_k\}$.

### IV-C Recovery guarantees

Similarly to P&P, in order to prove more than a fixed-point convergence of IDBP, strict assumptions on the denoising scheme $\mathcal{D}$ are required. For global convergence of P&P, it is enough to assume that the denoiser is non-expansive and has a symmetric gradient [13], which allows using the proximal mapping theorem of Moreau [21]. However, the non-expansiveness property of a denoiser is very demanding, as it requires that for a given noise level $\sigma$ we have

$$\|\mathcal{D}(z_1;\sigma) - \mathcal{D}(z_2;\sigma)\|_2 \leq K_\sigma\|z_1 - z_2\|_2, \tag{24}$$

for any $z_1$ and $z_2$ in $\mathbb{R}^n$, with $K_\sigma \leq 1$.

In this work we take a different route that exploits the structure of the IDBP algorithm, where the denoiser's output is always projected onto the null space of $H$. Instead of assuming (20), we use the following assumptions:

###### Condition 1.

The denoiser $\mathcal{D}(\cdot\,;\sigma)$ is bounded, in the sense of

$$\|\mathcal{D}(z;\sigma) - z\|_2 \leq \sigma B, \tag{25}$$

for any $z \in \mathbb{R}^n$, where $B$ is a universal constant independent of $\sigma$.

###### Condition 2.

For a given noise level $\sigma$, the projection of the denoiser onto the null space of $H$ is a contraction, i.e., it satisfies

$$\|Q_H\mathcal{D}(z_1;\sigma) - Q_H\mathcal{D}(z_2;\sigma)\|_2 \leq K_\sigma\|z_1 - z_2\|_2, \tag{26}$$

for any $z_1, z_2$ in $\mathbb{R}^n$, where $K_\sigma < 1$ and $Q_H = I_n - H^\dagger H$.

Condition 1 implies that $\mathcal{D}(z;\sigma) \to z$ as $\sigma \to 0$, as can be expected from a denoiser. Thus, it prevents considering a trivial mapping, e.g. $\mathcal{D}(z;\sigma) = z_0$ for all $z$, which trivially satisfies Condition 2. Regarding the second condition, even though it describes a contraction, it considers the operator $Q_H\mathcal{D}(\cdot\,;\sigma)$. Therefore, for some cases of $H$, it might be weaker than non-expansiveness of $\mathcal{D}$. Our main recovery guarantee is given in the following theorem.

###### Theorem 3.

Let $y = Hx + e$, apply IDBP with some $\delta$ for the denoising operation, and assume that Condition 1 holds for $\sigma = \sigma_n + \delta$. Assume also that Condition 2 holds for this choice of $\sigma$. Then, with the notation of IDBP, we have

$$\|\tilde{x}_{k+1} - x\|_2 \leq K_\sigma^k\|\tilde{y}_0 - \bar{y}\|_2 + \frac{1}{1-K_\sigma}\|H^\dagger e\|_2 + C\sigma, \tag{27}$$

where $\sigma \triangleq \sigma_n + \delta$ and $C$ is a constant that depends only on $B$ and $K_\sigma$.

The proof of Theorem 3 appears in the appendix.

Theorem 3 provides an upper bound on the error of IDBP w.r.t. the true signal $x$. It can be seen that for a fixed number of iterations, a bounded denoiser with a smaller $B$ is expected to perform better. If $K_\sigma$ is close to 1, more iterations will reduce the first term in the bound, but not the second term, which may be an artifact of our proof. The third term may suggest using IDBP with the smallest possible $\sigma$. However, Condition 2 implies that a smaller $\sigma$ yields a larger $K_\sigma$, since the denoiser has a smaller effect on its input and (26) needs to be satisfied for any two signals $z_1$ and $z_2$. Still, assuming that the effect on $K_\sigma$ is small and can be compensated by using more iterations, a smaller $\sigma$ is beneficial. The last observation agrees with our suggestion to choose $\delta$ according to (12), where $(\sigma_n + \tilde{\delta})^2$ is minimized under a constraint that aims to prevent losing solutions when (11) is solved instead of (10).

To the best of our knowledge, there is no equivalent result to Theorem 3 for P&P, as its existing convergence guarantees refer to approaching a minimizer of the original cost function (2), which is not necessarily identical to $x$. Therefore, even though we propose an alternative method to minimize (2), we choose to consider the IDBP error w.r.t. the true $x$. Note, though, that the proof technique shown here can also be used to bound the Euclidean distance between the IDBP estimate and a (pre-computed) solution of (2) with only minor technical changes.

## V Experiments

We demonstrate the usage of IDBP for two test scenarios: the inpainting and the deblurring problems. We compare the IDBP performance to P&P and another algorithm that has been specially tailored for each problem [6], [22]. In all experiments we use BM3D [1] as the denoising algorithm for IDBP and P&P. We use the following eight test images in all experiments: cameraman, house, peppers, Lena, Barbara, boat, hill and couple. Their intensity range is 0-255.

### V-A Image inpainting

In the image inpainting problem, $H$ is a selection of $m$ rows of $I_n$ and $H^\dagger = H^T$, which simplifies both P&P and IDBP. In P&P, the first step of (6) can be solved for each pixel individually. In IDBP, $\tilde{y}_k$ is obtained merely by taking the observed pixels from $y$ and the missing pixels from $\tilde{x}_k$. For both methods we use the result of a simple median scheme as the initialization ($\hat{v}_0$ in P&P and $\tilde{y}_0$ in IDBP). It is also possible to use $y$ for initialization instead, but then many more iterations are required. Note that the computational cost of each iteration of P&P and IDBP is of the same scale, dominated by the complexity of the denoising operation.

The first experiment demonstrates the performance of IDBP, P&P, and inpainting based on the Image Processing using Patch Ordering (IPPO) approach [22], for the noiseless case ($\sigma_n = 0$) with 80% of the pixels missing, selected at random. The parameters of IPPO are set exactly as in [22], where the same scenario is examined. The parameters $\lambda$ and $\beta$ of P&P are optimized for best reconstruction quality, and we use 150 iterations. Also, for P&P we assume that the noise standard deviation is 0.001, i.e. nonzero, in order to evaluate the fidelity term.

Considering IDBP, Section IV-A suggests $\tilde{\delta} = 0$. However, since in this case $\sigma_n = 0$, a small positive $\delta$ is required. Indeed, this setting gives good performance, but also requires ten times more iterations than P&P. Therefore, we use an alternative approach. We set a larger value of $\delta$, which allows us to use only 150 iterations (same as P&P), but take the last $\tilde{y}_k$ as the final estimate, which is equivalent to performing the last denoising with the recommended small $\delta$ (recall that with $\sigma_n = 0$ and $\delta \to 0$ the denoising step degenerates to the identity). Figure 1 shows the results of both IDBP implementations for the house image. It confirms that the alternative implementation performs well and requires significantly fewer iterations (note that the x-axis has a logarithmic scale). Therefore, for the comparison of the different inpainting methods in this experiment, we use the alternative implementation of IDBP with the larger $\delta$. The empirical behavior observed here agrees with the theoretical observation at the end of Section IV-C: a larger $\delta$ requires fewer iterations (due to a smaller $K_\sigma$) but results in a higher error. Note also that it is possible to decrease $\delta$ as the iterations increase. However, in this work we aim at demonstrating IDBP performance with as little parameter tuning as possible.

The results of the three algorithms are given in Table I. IDBP is usually better than IPPO, but slightly inferior to P&P. This is the cost of accelerating IDBP by setting $\delta$ to a value significantly larger than zero. However, this observation also hints that IDBP may shine for noisy measurements, where $\delta = 0$ can be used without increasing the number of iterations. We also remark that IPPO gives the best results for peppers and Barbara because in these images P&P and IDBP require more than the fixed 150 iterations.

The second experiment demonstrates the performance of IDBP and P&P with 80% missing pixels, as before, but this time with a nonzero noise level $\sigma_n$. Noisy inpainting has not been implemented by IPPO [22]. The parameters $\lambda$ and $\beta$ of P&P that give the best results are re-tuned, again with 150 iterations; using the same parameter values as before deteriorates the performance significantly. Contrary to P&P, in this experiment tuning the parameters of IDBP can be avoided: we follow Section IV-A and set $\delta = 0$. Moreover, IDBP now requires only 75 iterations, half the number required by P&P. The results are given in Table II. P&P is slightly inferior to IDBP, despite having twice the number of iterations and a burdensome parameter tuning. The results for house are also presented in Figure 2, where it can be seen that the P&P reconstruction suffers from more artifacts (e.g. ringing artifacts near the right window).

We repeat the last experiment with a slightly increased noise level, but still use the same parameter tuning for P&P, which is optimized for the lower noise level (i.e. the same $\lambda$, $\beta$, and number of iterations). This situation is often encountered in practice, when calibrating a system for all possible scenarios is impossible. The results are given in Table III. IDBP clearly outperforms P&P in this case. This experiment demonstrates the main advantage of our algorithm over P&P: it is much less sensitive to parameter tuning.

### V-B Image deblurring

In the image deblurring problem, for a circular shift-invariant blur operator whose kernel is $h$, both P&P and IDBP can be efficiently implemented using the Fast Fourier Transform (FFT). In P&P, $\hat{x}_k$ can be computed by

$$\hat{x}_k = \mathcal{F}^{-1}\left\{\frac{\mathcal{F}^*\{h\}\mathcal{F}\{y\} + \lambda\sigma_n^2\,\mathcal{F}\{\hat{v}_{k-1} - \hat{u}_{k-1}\}}{|\mathcal{F}\{h\}|^2 + \lambda\sigma_n^2}\right\}, \tag{28}$$

where $\mathcal{F}\{\cdot\}$ denotes the FFT operator and $\mathcal{F}^{-1}\{\cdot\}$ denotes the inverse FFT operator.

Recall that $H$ is an ill-conditioned matrix. Therefore, in IDBP we replace $H^\dagger$ with a regularized inversion of $H$, using standard Tikhonov regularization, which is given in the Fourier domain by

$$\tilde{g} \triangleq \frac{\mathcal{F}^*\{h\}}{|\mathcal{F}\{h\}|^2 + \epsilon\,\sigma_n^2}, \tag{29}$$

where $\epsilon$ is a parameter that controls the amount of regularization in the approximation of $H^\dagger$. Then, $\tilde{y}_k$ in IDBP can be computed by

$$\tilde{y}_k = \mathcal{F}^{-1}\{\tilde{g}\mathcal{F}\{y\}\} + \tilde{x}_k - \mathcal{F}^{-1}\{\tilde{g}\mathcal{F}\{h\}\mathcal{F}\{\tilde{x}_k\}\}. \tag{30}$$
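Steps (29)-(30) map directly to FFT calls. A sketch of one backward-projection step (the function name and argument conventions are ours; the kernel is assumed zero-padded to the image size with its center at the origin, per the circular convolution model):

```python
import numpy as np

def idbp_deblur_step(y, h, x_k, sigma_n, eps):
    """One backward-projection step (30) for deblurring, using the
    Tikhonov-regularized inverse filter g~ of (29).

    y, x_k : 2-D arrays (blurred image and current estimate)
    h      : blur kernel, zero-padded to the image size
    """
    Fh = np.fft.fft2(h)
    g = np.conj(Fh) / (np.abs(Fh)**2 + eps * sigma_n**2)        # (29)
    y_k = (np.fft.ifft2(g * np.fft.fft2(y)).real                # g~ * y
           + x_k
           - np.fft.ifft2(g * Fh * np.fft.fft2(x_k)).real)      # g~ * h * x_k
    return y_k                                                   # (30)
```

Note that (30) is exactly (15) with $H^\dagger$ replaced by circular convolution with $\tilde{g}$: for a delta kernel it reduces to $\tilde{y}_k = y$.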

We use a trivial initialization in both methods, i.e. $\hat{v}_0 = y$ in P&P and $\tilde{y}_0 = y$ in IDBP. Similarly to inpainting, the computational cost of each iteration of P&P and IDBP is on the same scale, dominated by the complexity of the denoising operation.

We consider four deblurring scenarios used as benchmarks in many publications (e.g. [5, 6]). The blur kernel and noise level of each scenario are summarized in Table IV. The kernels are normalized so that they sum to 1.

We compare the performance of IDBP and P&P with IDD-BM3D [6], which is a state-of-the-art deblurring algorithm. We use IDD-BM3D exactly as in [6], where the same scenarios are examined: it is initialized using BM3D-DEB [23], performs 200 iterations, and its parameters are manually tuned per scenario. The parameters of P&P are also optimized for each scenario. It uses 50 iterations, with its two parameters set to 0.85, 0.85, 0.9, 0.8 and 2, 1, 3, 1, for scenarios 1-4, respectively.

For the tuning of IDBP, as mentioned in Section IV-A, we observed that pairs of $(\delta, \epsilon)$ that give the best results indeed satisfy condition (16), while pairs that lead to bad results often violate this condition. This behavior is demonstrated for the house image in Scenario 1 (see Table IV). Figure 3(a) shows the PSNR as a function of the iteration number for several pairs of $(\delta, \epsilon)$. The left-hand side (LHS) of (16) divided by its right-hand side (RHS) is presented in Figure 3(b) as a function of the iteration number. If this ratio is less than 1, even for a single iteration, it means that the original condition in (12) is violated by the associated $(\delta, \epsilon)$. Recall that even when the ratio is higher than 1 for all iterations, satisfying (12) is not guaranteed. Therefore, a small margin should be kept. For example, the pair ($\delta$=5, $\epsilon$=7e-3), which reaches the highest PSNR in Figure 3(a), has its smallest LHS/RHS ratio slightly below 3. When the margin increases further, a graceful degradation in PSNR occurs, as observed for ($\delta$=7, $\epsilon$=7e-3) and ($\delta$=5, $\epsilon$=10e-3).

Equipped with the above observation, we suggest fixing $\epsilon$ (or $\delta$) and automatically tuning $\delta$ (or $\epsilon$) using condition (16) with some confidence margin. A scheme for IDBP with automatic tuning of $\delta$ is presented in Algorithm 3. Starting with a small value of $\delta$, the ratio LHS/RHS of (16) is evaluated at the end of each IDBP iteration. If the ratio is smaller than a threshold $\tau$, then $\delta$ is slightly increased and IDBP is restarted. We do not check the ratio at the first iteration, as it strongly depends on the initialization $\tilde{y}_0$. An alternative scheme that uses a fixed $\delta$ and gradually increases $\epsilon$ can be obtained in a similar way. We noticed that the restarts in Algorithm 3 happen in early iterations (e.g., restarts occur at the second iteration for the bad initializations in Figure 3(b)). Therefore, the proposed scheme is not computationally demanding.
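The restart logic described above can be sketched as follows, here for a small dense $H$ (all defaults, names, and the small stabilizing constant are our own choices, not the paper's; a practical version would also cap $\delta$, since a denoiser that fits the observations exactly makes the ratio uninformative):

```python
import numpy as np

def idbp_auto_delta(y, H, sigma_n, denoise, delta0=1.0, step=1.0,
                    tau=3.0, iters=30):
    """IDBP with automatic tuning of delta, in the spirit of Algorithm 3.

    Runs IDBP; after each iteration except the first, evaluates the
    LHS/RHS ratio of (16) and restarts with a larger delta whenever the
    ratio drops below the threshold tau.
    """
    H_pinv = np.linalg.pinv(H)
    Q = np.eye(H.shape[1]) - H_pinv @ H
    delta = delta0
    while True:
        y_k = H_pinv @ y
        restarted = False
        for k in range(iters):
            x_k = denoise(y_k, sigma_n + delta)      # (13)
            y_k = H_pinv @ y + Q @ x_k               # (15)
            r = y - H @ x_k
            lhs = np.sum(r**2) / sigma_n**2
            rhs = np.sum((H_pinv @ r)**2) / (sigma_n + delta)**2 + 1e-12
            if k > 0 and lhs / rhs < tau:            # skip the first iteration
                delta += step                        # (16) margin violated
                restarted = True
                break
        if not restarted:
            return x_k, delta
```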

The efficiency of the auto-tuned IDBP is demonstrated by improving the performance of the worst two initializations in Figure 3(a), i.e. ($\delta$=2, $\epsilon$=7e-3) and ($\delta$=5, $\epsilon$=3e-3). For each of them, one parameter is kept as is and the second is auto-tuned using a threshold $\tau$. The results are shown in Figures 3(c) and 3(d).

We examine two different tuning strategies for IDBP. The first one is manual tuning per scenario, which is simpler than the tuning of the competing methods: we fix $\delta$ for all scenarios and only change $\epsilon$ to 7e-3, 4e-3, 8e-3, 2e-3 for scenarios 1-4, respectively. The second strategy applies Algorithm 3 with the suggested default settings. In the latter, $\delta$ can be set differently for different images in the same scenario, while all of them use the same method with the same default parameters. We use a stopping criterion of only 30 iterations for both IDBP versions.

Table V shows the results of IDBP, auto-tuned IDBP, P&P, and the dedicated algorithm IDD-BM3D. For each scenario it shows the input PSNR (i.e. the PSNR of $y$) and the BSNR (blurred signal-to-noise ratio) for each image, as well as the ISNR (improvement signal-to-noise ratio) for each method and image, which is the difference between the PSNR of the reconstruction and the input PSNR. Note that in Scenario 3, $\sigma_n$ is set slightly differently for each image, ensuring that the BSNR is 40 dB.

From Table V it is clear that IDBP's plain and auto-tuned implementations have similar performance on average. Both of them perform better than P&P, and have only a small performance gap below IDD-BM3D, which is especially tailored for the deblurring problem and requires more iterations and parameter tuning. Figure 4 displays the results for Barbara in Scenario 4. It can be seen that the IDBP reconstruction, especially with auto-tuning in this case, restores the texture better (e.g. look near the right palm).

## VI Conclusion

In this work we introduced the Iterative Denoising and Backward Projections (IDBP) method for solving linear inverse problems using denoising algorithms. This method, in its general form, has only a single parameter that should be set according to a given condition. We presented a mathematical analysis of this strategy and provided a practical way to tune its parameter. Therefore, it can be argued that our approach has fewer parameters that require tuning than the P&P method. Specifically, for the noisy inpainting problem, the single parameter of IDBP can simply be set to zero, and for the deblurring problem our suggested automatic parameter tuning can be employed. Experiments demonstrated that IDBP is competitive with state-of-the-art task-specific algorithms and with the P&P approach for the inpainting and deblurring problems.

## Appendix A Proof of Theorem 3

###### Lemma 4.

Assuming that Condition 1 holds, i.e. $\|\mathcal{D}(z;\sigma) - z\|_2 \leq \sigma B$ for any $z \in \mathbb{R}^n$, we have

$$\|\mathcal{D}(z_1;\sigma) - \mathcal{D}(z_2;\sigma)\|_2 \leq \|z_1 - z_2\|_2 + 2\sigma B \tag{31}$$

for any $z_1$ and $z_2$ in $\mathbb{R}^n$.

###### Proof.

Using the triangle inequality followed by Condition 1, we get the desired result

$$\begin{aligned} \|\mathcal{D}(z_1;\sigma) - \mathcal{D}(z_2;\sigma)\|_2 &\leq \|\mathcal{D}(z_1;\sigma) - z_1\|_2 + \|\mathcal{D}(z_2;\sigma) - z_2\|_2 + \|z_1 - z_2\|_2 \\ &\leq \|z_1 - z_2\|_2 + 2\sigma B. \end{aligned} \tag{32}$$

∎

We now turn to the proof of the theorem.

###### Proof.

By $\tilde{x}_{k+1} = \mathcal{D}(\tilde{y}_k; \sigma)$, and using the triangle inequality, we have

 ∥~xk+1−x∥2 =∥D(~yk;σ)−x∥2 ≤∥D(~y<