Image Restoration by Iterative Denoising and Backward Projections

10/18/2017 ∙ by Tom Tirer, et al. ∙ Tel Aviv University

Inverse problems appear in many applications such as image deblurring and inpainting. The common approach to address them is to design a specific algorithm for each problem. The Plug-and-Play (P&P) framework, which has been recently introduced, allows solving general inverse problems by leveraging the impressive capabilities of existing denoising algorithms. While this fresh strategy has found many applications, a burdensome parameter tuning is often required in order to obtain high-quality results. In this work, we propose an alternative method for solving inverse problems using denoising algorithms, that requires less parameter tuning. We provide theoretical analysis of the method, and empirically demonstrate that it is competitive with task-specific techniques and the P&P approach for image inpainting and deblurring.



I Introduction

We consider the reconstruction of an image from its degraded version, which may be noisy, blurred, downsampled, or all together. This general problem has many important applications, such as medical imaging, surveillance, entertainment, and more. Traditionally, the design of task-specific algorithms has been the ruling approach. Many works specifically considered image denoising [1, 2, 3], deblurring [4, 5, 6], inpainting [7, 8, 9], super-resolution [10, 11], etc.

Recently, a new approach has attracted much interest. This approach suggests leveraging the impressive capabilities of existing denoising algorithms for solving other tasks that can be formulated as an inverse problem. The pioneering algorithm that introduced this concept is the Plug-and-Play (P&P) method [12], which presents an elegant way to decouple the measurement model and the image prior, such that the latter is handled solely by a denoising operation. Thus, it is not required to explicitly specify the prior, since it is implicitly defined through the choice of the denoiser.

The P&P method has already found many applications, e.g. bright field electron tomography [13], Poisson denoising [14], and postprocessing of compressed images [15]. It has also inspired new related techniques [16, 17, 18]. However, it has been noticed that P&P often requires burdensome parameter tuning in order to obtain high-quality results [17, 19]. Moreover, since it is an iterative method, a large number of iterations is sometimes required.

In this work, we propose a simple iterative method for solving linear inverse problems using denoising algorithms, which provides an alternative to P&P. Our strategy has fewer parameters that require tuning (e.g. no tuning is required for the noisy inpainting problem), often requires fewer iterations, and its recovery performance is competitive with task-specific algorithms and with the P&P approach. We demonstrate the advantages of the new technique on inpainting and deblurring problems.

The paper is organized as follows. In Section II we present the problem formulation and the P&P approach. The proposed algorithm is presented in Section III. Section IV includes mathematical analysis of the algorithm and provides a practical way to tune its parameter. In Section V the usage of the method is demonstrated and examined for inpainting and deblurring problems. Section VI concludes the paper.

II Background

II-A Problem formulation

The problem of image restoration can be generally formulated by

(1)   y = Hx + e,

where x ∈ ℝⁿ represents the unknown original image, y ∈ ℝᵐ represents the observations, H is an m × n degradation matrix, and e ∈ ℝᵐ is a vector of independent and identically distributed Gaussian random variables with zero mean and standard deviation σ. The model in (1) can represent different image restoration problems; for example: image denoising when H is the identity matrix I_n, image inpainting when H is a selection of m rows of I_n, and image deblurring when H is a blurring operator.
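As a concrete illustration (our own synthetic sketch in NumPy, not from the paper), the inpainting instance of model (1) can be generated as follows; the signal length, mask, and noise level below are arbitrary example values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                               # length of the (flattened) unknown image x
x = rng.uniform(0.0, 255.0, n)       # intensities in the range 0-255
sigma = 10.0                         # noise standard deviation

# Inpainting: H is a selection of m rows of the identity matrix I_n.
m = 48
keep = np.sort(rng.choice(n, size=m, replace=False))
H = np.eye(n)[keep]                  # m x n row-selection matrix

e = rng.normal(0.0, sigma, m)        # i.i.d. zero-mean Gaussian noise
y = H @ x + e                        # the observation model y = Hx + e
```

For denoising one would instead take `H = np.eye(n)`, and for deblurring a circulant matrix built from the blur kernel.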

In all of these cases, a prior image model s(x) is required in order to successfully estimate x from the observations y. Specifically, note that H is ill-conditioned in the case of image deblurring; thus, in practice it can be approximated by a rank-deficient matrix, or alternatively by a full rank matrix with m < n. Therefore, for a unified formulation of inpainting and deblurring problems, which are the test cases of this paper, we assume that H has full row rank, i.e. rank(H) = m.

Almost any approach for recovering x involves formulating a cost function, composed of fidelity and penalty terms, which is minimized by the desired solution. The fidelity term ensures that the solution agrees with the measurements, and is often derived from the negative log-likelihood function. The penalty term regularizes the optimization problem through the prior image model s(x). Hence, the typical cost function is

(2)   f(x̃) = (1/(2σ²)) ||y − Hx̃||₂² + s(x̃),

where ||·||₂ stands for the Euclidean norm.

II-B Plug-and-Play approach

Instead of devising a separate algorithm to minimize (2) for each type of matrix H, a general recovery strategy has been proposed in [12], referred to as Plug-and-Play (P&P). For completeness, we briefly describe this technique.

Using variable splitting, the P&P method restates the minimization of (2) as

(3)   min_{x̃,ṽ} ℓ(x̃) + β s(ṽ)   s.t. x̃ = ṽ,

where ℓ(x̃) = (1/(2σ²))||y − Hx̃||₂² is the fidelity term in (2), and β is a positive parameter that adds flexibility to the cost function. This problem can be solved using ADMM [20] by constructing an augmented Lagrangian, which is given (in its scaled form) by

(4)   L_ρ(x̃, ṽ, u) = ℓ(x̃) + β s(ṽ) + (ρ/2)||x̃ − ṽ + u||₂² − (ρ/2)||u||₂²,

where λ = ρu is the dual variable, u is the scaled dual variable, and ρ is the ADMM penalty parameter. The ADMM algorithm consists of iterating until convergence over the following three steps

(5)   x̃_{k+1} = argmin_{x̃} L_ρ(x̃, ṽ_k, u_k),
      ṽ_{k+1} = argmin_{ṽ} L_ρ(x̃_{k+1}, ṽ, u_k),
      u_{k+1} = u_k + (x̃_{k+1} − ṽ_{k+1}).

By plugging (4) into (5) we have

(6)   x̃_{k+1} = argmin_{x̃} ℓ(x̃) + (ρ/2)||x̃ − ṽ_k + u_k||₂²,
      ṽ_{k+1} = argmin_{ṽ} (ρ/2)||x̃_{k+1} + u_k − ṽ||₂² + β s(ṽ),
      u_{k+1} = u_k + (x̃_{k+1} − ṽ_{k+1}).

Note that the first step in (6) is just solving a least squares (LS) problem and the third step is a simple update. The second step is more interesting. It describes obtaining ṽ_{k+1} using a white Gaussian denoiser with noise variance of β/ρ, applied on the image x̃_{k+1} + u_k. This can be written compactly as ṽ_{k+1} = D(x̃_{k+1} + u_k; √(β/ρ)), where D(·; σ_d) is a denoising operator with noise level σ_d. Since general denoising algorithms can be used to implement the operator D, the P&P method does not require knowing or explicitly specifying the prior function s(·). Instead, s(·) is implicitly defined through the choice of D. The obtained P&P algorithm is presented in Algorithm 1.

Input: y, H, σ, denoising operator D, stopping criterion. y and H are such that y = Hx + e, where e ~ N(0, σ²I_m) and x is an unknown signal whose prior model is specified by s(x).
Output: x̂, an estimate for x.
Initialize: ṽ_0 = some initialization, u_0 = 0, k = 0, some initialization for β and ρ.
while stopping criterion not met do
       x̃_{k+1} = argmin_{x̃} (1/(2σ²))||y − Hx̃||₂² + (ρ/2)||x̃ − ṽ_k + u_k||₂²;
       ṽ_{k+1} = D(x̃_{k+1} + u_k; √(β/ρ));
       u_{k+1} = u_k + (x̃_{k+1} − ṽ_{k+1});
       k = k + 1;
end while
x̂ = ṽ_k;
Algorithm 1 Plug and Play (P&P)
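The loop above can be sketched in a few lines of NumPy. This is our own minimal illustration, not the authors' code: the denoiser is passed in as a callable (the paper plugs in BM3D), and the LS step is solved directly, which is practical only for small n:

```python
import numpy as np

def pnp_admm(y, H, sigma, denoise, beta=0.5, rho=1.0, iters=50):
    """Plug-and-Play ADMM sketch (Algorithm 1).

    `denoise(z, noise_std)` stands in for any Gaussian denoiser;
    beta and rho are the design parameters discussed in the text."""
    n = H.shape[1]
    v = H.T @ y                      # crude initialization of v_0
    u = np.zeros(n)                  # scaled dual variable u_0 = 0
    # System matrix of the LS step:
    # x = argmin (1/2sigma^2)||y - Hx||^2 + (rho/2)||x - v + u||^2
    A = H.T @ H / sigma**2 + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(A, H.T @ y / sigma**2 + rho * (v - u))
        v = denoise(x + u, np.sqrt(beta / rho))   # denoising step
        u = u + x - v                             # dual update
    return v
```

For deblurring, the LS step would instead be computed with the FFT, as discussed in Section V-B.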

From ADMM theory, global convergence (i.e. the iterations approach feasibility and the objective reaches its optimal value) is ensured if ℓ(x̃) and s(ṽ) are convex, closed, and proper, and the unaugmented Lagrangian has a saddle point [20]. Global convergence of P&P is proved for a denoiser that has a symmetric gradient and is non-expansive [13]. However, the latter property is difficult to prove, and well-known denoisers such as BM3D [1], K-SVD [2], and standard NLM [3] lead to good results despite violating these conditions. Another type of convergence is fixed point convergence, which guarantees that an iterative algorithm asymptotically enters a steady state. A modified version of P&P, in which the ADMM parameter ρ increases between iterations, is guaranteed to have such a convergence under some mild conditions on the denoiser [19].

The P&P method is not free of drawbacks. Its main difficulties are the large number of iterations often required to converge to a good solution, and the setting of the design parameters β and ρ, which is not always clear and strongly affects the performance.

III The Proposed Algorithm

In this work we take another strategy for solving inverse problems using denoising algorithms. We start by writing the cost function (2) in a somewhat strange but equivalent way

(7)   f(x̃) = (1/(2σ²)) ||H†y − x̃||²_{HᵀH} + s(x̃),

where

(8)   H† ≜ Hᵀ(HHᵀ)⁻¹,
(9)   ||z||²_A ≜ zᵀAz.

The equivalence follows from ||H†y − x̃||²_{HᵀH} = ||HH†y − Hx̃||₂² = ||y − Hx̃||₂². Note that H† is the pseudoinverse of the full row rank matrix H, and that ||·||_{HᵀH} is not a real norm, since HᵀH is not a positive definite matrix in our case. Moreover, as mentioned above, since the null space of H is not empty, the prior s(x̃) is essential in order to obtain a meaningful solution.

The optimization problem min_{x̃} f(x̃) can be equivalently written as

(10)   min_{x̃,ỹ} (1/(2σ²)) ||ỹ − x̃||²_{HᵀH} + s(x̃)   s.t. ỹ = H†y.

Note that due to the degenerate constraint, the solution for ỹ is trivial: ỹ = H†y.

Now, we make two major modifications to the above optimization problem. The basic idea is to let the variable ỹ loose in a restricted manner that can facilitate the estimation of x̃. First, we give some degrees of freedom to ỹ by using the constraint Hỹ = y instead of ỹ = H†y. Next, we turn to prevent large components of x̃ in the null space of H that may strongly disagree with the prior s(x̃). We do it by replacing the weighting matrix HᵀH in the fidelity term, which confines the penalty to a subspace (note that ||z||²_{HᵀH} = ||Hz||₂²), with the identity I_n, which penalizes the full n-dimensional space, and by replacing the noise level σ with σ + δ, where δ ≥ 0 is a design parameter.

This leads to the following optimization problem

(11)   min_{x̃,ỹ} (1/(2(σ+δ)²)) ||ỹ − x̃||₂² + s(x̃)   s.t. Hỹ = y.

Note that δ introduces a tradeoff. On the one hand, an exaggerated value of δ should be avoided, as it may over-reduce the effect of the fidelity term. On the other hand, a too small value of δ may over-penalize x̃ unless it is very close to the affine subspace {ỹ : Hỹ = y}. This limits the effective feasible set of x̃ in problem (11), such that it may not include potential solutions of the original problem (10). Therefore, we suggest setting the value of δ as

(12)   δ* ≜ min{ δ ≥ 0 : (1/σ²)||y − Hx̃||₂² ≥ (1/(σ+δ)²)||ỹ − x̃||₂²  for all (x̃, ỹ) ∈ Ω },

where Ω denotes the feasible set of problem (11). Note that the feasibility of x̃ is dictated by the prior s(x̃), and the feasibility of ỹ is dictated by the constraint in (11). The problem of obtaining such a value of δ (or an approximation) is discussed in Section IV-A, where a relaxed version of the condition in (12) is presented.

Assuming that δ solves (12), the property that (1/(σ+δ)²)||ỹ − x̃||₂² ≤ (1/σ²)||y − Hx̃||₂² for feasible x̃ and ỹ, together with the fact that H†y is one of the solutions of the underdetermined system Hỹ = y, prevents increasing the penalty on potential solutions of the original optimization problem (10). Therefore, roughly speaking, we do not lose solutions when we solve (11) instead of (10). As a sanity check, observe that if H = I_n then the constraint in (11) degenerates to ỹ = y, and the solution to (12) is δ* = 0. Therefore, (11) reduces to the original image denoising problem.

We solve (11) using alternating minimization. Iteratively, x̃ is estimated by solving

(13)   x̃_{k+1} = argmin_{x̃} (1/(2(σ+δ)²)) ||ỹ_k − x̃||₂² + s(x̃),

and ỹ is estimated by solving

(14)   ỹ_{k+1} = argmin_{ỹ} ||ỹ − x̃_{k+1}||₂²   s.t. Hỹ = y,

which describes a projection of x̃_{k+1} onto the affine subspace {ỹ : Hỹ = y}, and has the closed-form solution

(15)   ỹ_{k+1} = H†y + (I_n − H†H) x̃_{k+1}.

Similarly to the P&P technique, (13) describes obtaining x̃_{k+1} using a white Gaussian denoiser with noise variance of (σ+δ)², applied on the image ỹ_k, and can be written compactly as x̃_{k+1} = D(ỹ_k; σ+δ), where D is a denoising operator. Moreover, as in the case of P&P, the proposed method does not require knowing or explicitly specifying the prior function s(·). Instead, s(·) is implicitly defined through the choice of D.

The variable ỹ_k is expected to be closer to the true signal x than the raw observations y. Thus, our algorithm alternates between estimating the signal and using this estimation in order to obtain improved measurements (that also comply with the original observations y). The proposed algorithm, which we call Iterative Denoising and Backward Projections (IDBP), is presented in Algorithm 2.

Input: y, H, σ, denoising operator D, stopping criterion. y and H are such that y = Hx + e, where e ~ N(0, σ²I_m) and x is an unknown signal whose prior model is specified by s(x).
Output: x̂, an estimate for x.
Initialize: ỹ_0 = some initialization, k = 0, δ approximately satisfying (12).
while stopping criterion not met do
       x̃_{k+1} = D(ỹ_k; σ+δ);
       ỹ_{k+1} = H†y + (I_n − H†H) x̃_{k+1};
       k = k + 1;
end while
x̂ = x̃_k;
Algorithm 2 Iterative Denoising and Backward Projections (IDBP)
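A minimal NumPy sketch of Algorithm 2 (ours, not the authors' code; the paper uses BM3D for the denoiser) makes the two-step structure explicit:

```python
import numpy as np

def idbp(y, H, sigma, delta, denoise, iters=30):
    """Iterative Denoising and Backward Projections, sketched.

    `denoise(z, noise_std)` is any Gaussian denoiser. H is assumed to
    have full row rank, so the pseudoinverse H^dagger = H^T (H H^T)^{-1}
    exists and H^dagger H is the projection onto the row space of H."""
    H_pinv = np.linalg.pinv(H)              # H^dagger
    Q = np.eye(H.shape[1]) - H_pinv @ H     # projection onto the null space
    y_tilde = H_pinv @ y                    # simple initialization
    x_tilde = y_tilde
    for _ in range(iters):
        x_tilde = denoise(y_tilde, sigma + delta)   # denoising step (13)
        y_tilde = H_pinv @ y + Q @ x_tilde          # back-projection step (15)
    return x_tilde
```

Note how each iteration keeps ỹ consistent with the observations (Hỹ = y) while the null-space component is taken from the denoised estimate.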

IV Mathematical Analysis of the Algorithm

IV-A Setting the value of the parameter δ

Setting δ to the value that solves (12) is required for a simple theoretical justification of our method. However, it is not clear how to obtain such a δ in general. Therefore, in order to relax the condition in (12), which should be satisfied by all x̃ and ỹ in Ω, we can focus only on the sequences {x̃_k} and {ỹ_k} generated by the proposed alternating minimization process. Then, we can use the following proposition.

Proposition 1.

Fix δ ≥ 0. If there exists an iteration k of IDBP that violates the following condition

(16)   (1/σ²) ||y − Hx̃_{k+1}||₂² ≥ (1/(σ+δ)²) ||H†(y − Hx̃_{k+1})||₂²,

then δ also violates the condition in (12).

Proof.

Assume that x̃_{k+1} and ỹ_{k+1}, generated by IDBP at some iteration k, violate (16); then they also violate the equivalent condition

(17)   (1/σ²) ||y − Hx̃_{k+1}||₂² ≥ (1/(σ+δ)²) ||ỹ_{k+1} − x̃_{k+1}||₂².

Note that (16) is obtained simply by plugging (15) into ỹ_{k+1} − x̃_{k+1} in (17). Therefore, x̃_{k+1} and ỹ_{k+1} also violate the inequality in (12). Finally, it is easy to see that x̃_{k+1} and ỹ_{k+1} are feasible points of (11), since ỹ_{k+1} satisfies the constraint Hỹ_{k+1} = y. Therefore, the condition in (12) does not hold for all feasible x̃ and ỹ, which means that δ violates it. ∎

Note that (16) can be easily evaluated at each iteration. Thus, a violation of (12) can be spotted (through a violation of (16)) and used for stopping the process, increasing δ, and running the algorithm again. Of course, the opposite direction does not hold: even when (16) is satisfied for all iterations, satisfying (12) is not guaranteed. However, the relaxed condition (16) provides an easy way to set δ to an approximation of the solution of (12), which gives very good results in our experiments.

In the special case of the inpainting problem, (16) becomes ridiculously simple. Since H is a selection of m rows of I_n, it follows that H† = Hᵀ, which is an n × m matrix that merely pads with n − m zeros the vector on which it is applied. Therefore, ||H†(y − Hx̃_{k+1})||₂ = ||y − Hx̃_{k+1}||₂, implying that δ = 0 satisfies (16) in this case. Obviously, if σ = 0, a small positive δ is required in order to prevent the algorithm from getting stuck (because in this case the denoising step (13) degenerates to x̃_{k+1} = ỹ_k).
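The zero-padding view of H† is easy to verify numerically; the mask below is a hypothetical example of our own:

```python
import numpy as np

n = 6
keep = np.array([0, 2, 5])            # hypothetical observed pixel indices
H = np.eye(n)[keep]                   # selection of m = 3 rows of I_n

# H H^T = I_m, hence H^dagger = H^T (H H^T)^{-1} = H^T.
assert np.allclose(H @ H.T, np.eye(len(keep)))
assert np.allclose(np.linalg.pinv(H), H.T)

# H^T merely pads its input with zeros at the missing-pixel positions:
y = np.array([7.0, 8.0, 9.0])
padded = H.T @ y
assert np.allclose(padded, [7.0, 0.0, 8.0, 0.0, 0.0, 9.0])

# Consequently the Euclidean norm is preserved, so delta = 0 satisfies (16).
assert np.isclose(np.linalg.norm(H.T @ y), np.linalg.norm(y))
```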

Condition (16) is more complex when considering the deblurring problem. In this case H is an ill-conditioned matrix. Therefore, H† must be approximated, either by approximating H by a full rank matrix before computing (8), or by regularized inversion techniques for H, e.g. standard Tikhonov regularization. Research on how to compute δ* in this case is ongoing. We empirically observed that using a fixed value of δ (for all noise levels, blur kernels and images) exhibits good performance. However, we had to add another parameter, ε, which controls the amount of regularization in the approximation of H† and which slightly changes between scenarios. This issue is discussed in Section V-B. An interesting observation is that the pairs of (δ, ε) that give the best results indeed satisfy condition (16). On the other hand, pairs of (δ, ε) that give bad results often violate this condition (recall that the condition should be met during all iterations). An example of this behavior is given in Section V-B, where we also introduce an automatic tuning mechanism based on Proposition 1.

IV-B Analysis of the sequence {ỹ_k}

The IDBP algorithm creates the sequence {ỹ_k}, which can be interpreted as a sequence of updated measurements. It is desired that ỹ_k improves with each iteration, i.e. that x̃_{k+1}, obtained from ỹ_k, estimates x better than x̃_k, which is obtained from ỹ_{k−1}.

Assuming that the result of the denoiser at iteration k is perfect, i.e. x̃_{k+1} = x, we get from (15)

(18)   ỹ_{k+1} = H†y + (I_n − H†H)x = H†(Hx + e) + (I_n − H†H)x = x + H†e ≜ ỹ*.

The last equality describes a model that has only noise (possibly colored), which is much easier to deal with than the original model (1). Therefore, ỹ* can be considered as the optimal improved measurements that our algorithm can achieve. As we wish to make no specific assumptions on the denoising scheme D, the improvement of ỹ_k will be measured by its Euclidean distance to ỹ*.

Denote by P_H ≜ H†H the orthogonal projection onto the row space of H, and by Q_H ≜ I_n − P_H its orthogonal complement. The updated measurements ỹ_{k+1} always agree with ỹ* on the row space of H, and do not depend on x̃_{k+1} there, as can be seen from

(19)   P_H ỹ_{k+1} = P_H(H†y + Q_H x̃_{k+1}) = H†y = P_H x + H†e = P_H ỹ*.

Thus, the following theorem ensures that iteration k+1 improves the results, provided that x̃_{k+1} is closer to ỹ* than ỹ_k on the null space of H, i.e.,

(20)   ||Q_H(x̃_{k+1} − ỹ*)||₂ ≤ ||Q_H(ỹ_k − ỹ*)||₂.
Theorem 2.

Assuming that (20) holds at the k-th iteration of IDBP, we have

(21)   ||ỹ_{k+1} − ỹ*||₂ ≤ ||ỹ_k − ỹ*||₂.
Proof.

Note that

(22)   ||ỹ_{k+1} − ỹ*||₂² = ||P_H(ỹ_{k+1} − ỹ*)||₂² + ||Q_H(ỹ_{k+1} − ỹ*)||₂² = ||Q_H(x̃_{k+1} − ỹ*)||₂²,

where the last equality uses (19) and (15). Equation (21) is obtained by

(23)   ||ỹ_{k+1} − ỹ*||₂ = ||Q_H(x̃_{k+1} − ỹ*)||₂ ≤ ||Q_H(ỹ_k − ỹ*)||₂ ≤ ||ỹ_k − ỹ*||₂,

where the first inequality follows from (20) and (22). ∎

A denoiser that makes use of a good prior (and a suitable δ) is expected to satisfy (20), at least in early iterations. For example, in the inpainting problem Q_H is associated with the missing pixels, and in the deblurring problem Q_H is associated with the data that suffer the greatest loss by the blur kernel. Therefore, in both cases x̃_{k+1} is expected to be closer to ỹ* than ỹ_k. Note that if (20) holds for all iterations, then Theorem 2 ensures monotonic improvement and convergence of ||ỹ_k − ỹ*||₂, and thus, a fixed point convergence of IDBP. However, note that it does not guarantee that ỹ* is the limit of the sequence {ỹ_k}.
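The decomposition behind Theorem 2 can be checked numerically: the error ỹ_{k+1} − ỹ* lies entirely in the null space of H, whatever the denoiser outputs. The following is our own synthetic sketch with a random full row rank H:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 6
H = rng.normal(size=(m, n))           # full row rank with probability one
H_pinv = np.linalg.pinv(H)            # H^dagger
Q = np.eye(n) - H_pinv @ H            # Q_H: projection onto the null space

x = rng.normal(size=n)                # "true" signal
e = rng.normal(size=m)                # noise
y = H @ x + e
y_star = x + H_pinv @ e               # the optimal measurements of Eq. (18)

x_tilde = rng.normal(size=n)          # an arbitrary denoiser output
y_next = H_pinv @ y + Q @ x_tilde     # the update of Eq. (15)

# P_H(y_next - y_star) = 0, so the whole error is the null-space term:
lhs = np.linalg.norm(y_next - y_star)
rhs = np.linalg.norm(Q @ (x_tilde - y_star))
assert np.isclose(lhs, rhs)
```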

IV-C Recovery guarantees

Similarly to P&P, in order to prove more than a fixed point convergence of IDBP, strict assumptions on the denoising scheme are required. For global convergence of P&P, it is enough to assume that the denoiser is non-expansive and has a symmetric gradient [13], which allows using Moreau's proximal mapping theorem [21]. However, non-expansiveness of a denoiser is very demanding, as it requires that for a given noise level σ_d we have

(24)   ||D(z₁; σ_d) − D(z₂; σ_d)||₂ ≤ ||z₁ − z₂||₂

for any z₁ and z₂ in ℝⁿ.

In this work we take a different route that exploits the structure of the IDBP algorithm, where the denoiser's output is always projected onto the null space of H. Instead of assuming (20), we use the following assumptions:

Condition 1.

The denoiser D is bounded, in the sense of

(25)   ||D(z; σ_d) − z||₂ ≤ C √n σ_d

for any z ∈ ℝⁿ, where C is a universal constant independent of z.

Condition 2.

For a given noise level σ_d, the projection of the denoiser onto the null space of H is a contraction, i.e., it satisfies

(26)   ||Q_H D(z₁; σ_d) − Q_H D(z₂; σ_d)||₂ ≤ c ||z₁ − z₂||₂

for any z₁ and z₂ in ℝⁿ, where Q_H = I_n − H†H, and 0 ≤ c < 1.

Condition 1 implies that D(z; 0) = z, as can be expected from a denoiser. Thus, it prevents considering a trivial mapping, e.g. a constant output for all z, which would trivially satisfy Condition 2. Regarding the second condition, even though it describes a contraction, it considers the operator Q_H D rather than D itself. Therefore, for some cases of H, it might be weaker than non-expansiveness of D. Our main recovery guarantee is given in the following theorem.

Theorem 3.

Let δ ≥ 0, apply IDBP with D(·; σ_d) for the denoising operation, where σ_d = σ + δ, and assume that Condition 1 holds. Assume also that Condition 2 holds for this choice of σ_d. Then, with the notation of IDBP, we have

(27)   ||x̃_{k+1} − x||₂ ≤ c^k ||Q_H(x̃_1 − x)||₂ + ||H†e||₂ + (1 + (1 − c^k)/(1 − c)) C √n (σ+δ),

where C and c are the constants in Conditions 1 and 2, respectively.

The proof of Theorem 3 appears in the appendix.

Theorem 3 provides an upper bound on the error of IDBP w.r.t. the true signal x. It can be seen that for a fixed number of iterations, a bounded denoiser with a smaller C is expected to perform better. If c is close to 1, more iterations will reduce the first term in the bound, but not the second term, which may be an artifact of our proof. The third term may suggest using IDBP with the smallest possible δ. However, Condition 2 implies that a smaller δ yields a larger c, since the denoiser has a smaller effect on its input, and (26) needs to be satisfied for any two signals z₁ and z₂. Still, assuming that the effect on c is small and can be compensated for by using more iterations, a smaller δ is beneficial. The last observation agrees with our suggestion to choose δ according to (12), where δ is minimized under a constraint that aims to prevent losing solutions when (11) is solved instead of (10).

To the best of our knowledge, there is no equivalent result to Theorem 3 for P&P, as its existing convergence guarantees refer to approaching a minimizer of the original cost function (2), which is not necessarily identical to x. Therefore, even though we propose an alternative method to minimize (2), we choose to consider the IDBP error w.r.t. the true x. Note, though, that the proof technique shown here can also be used to bound the Euclidean distance between the IDBP estimate and a (pre-computed) solution of (2), with only minor technical changes.

V Experiments

We demonstrate the usage of IDBP for two test scenarios: the inpainting and the deblurring problems. We compare the IDBP performance to P&P and another algorithm that has been specially tailored for each problem [6], [22]. In all experiments we use BM3D [1] as the denoising algorithm for IDBP and P&P. We use the following eight test images in all experiments: cameraman, house, peppers, Lena, Barbara, boat, hill and couple. Their intensity range is 0-255.

V-A Image inpainting

In the image inpainting problem, H is a selection of rows of I_n and H† = Hᵀ, which simplifies both P&P and IDBP. In P&P, the LS problem in the first step of (6) can be solved for each pixel individually. In IDBP, ỹ_{k+1} is obtained merely by taking the observed pixels from y and the missing pixels from x̃_{k+1}. For both methods we use the result of a simple median scheme as the initialization (for ṽ_0 in P&P and for ỹ_0 in IDBP). It is also possible to use Hᵀy for initialization instead, but then many more iterations are required. Note that the computational cost of each iteration of P&P and IDBP is of the same scale, dominated by the complexity of the denoising operation.

The first experiment demonstrates the performance of IDBP, P&P, and inpainting based on the Image Processing using Patch Ordering (IPPO) approach [22], for the noiseless case (σ = 0) with 80% missing pixels, selected at random. The parameters of IPPO are set exactly as in [22], where the same scenario is examined. The parameters of P&P are optimized for best reconstruction quality, and we use 150 iterations. Also, for P&P we assume that the noise standard deviation is 0.001, i.e. nonzero, so that the fidelity term remains well defined.

Considering IDBP, Section IV-A suggests setting δ = 0. However, since in this case σ = 0, a small positive δ is required. Indeed, this setting gives good performance, but it also requires ten times more iterations than P&P. Therefore, we use an alternative approach. We set a significantly larger δ, which allows us to use only 150 iterations (same as P&P), but take the last ỹ_k as the final estimate, which is equivalent to performing a last denoising step with the recommended near-zero δ. Figure 1 shows the results of both IDBP implementations for the house image. It confirms that the alternative implementation performs well and requires significantly fewer iterations (note that the x-axis has a logarithmic scale). Therefore, for the comparison of the different inpainting methods in this experiment, we use the alternative implementation of IDBP. The empirical behavior observed here agrees with the theoretical observation at the end of Section IV-C: a larger δ requires fewer iterations (due to a smaller c) but results in a higher error. Note also that it is possible to decrease δ as the iterations increase. However, in this work we aim at demonstrating the IDBP performance with as little parameter tuning as possible.

The results of the three algorithms are given in Table I. IDBP is usually better than IPPO, but slightly inferior to P&P. This is the cost of enhancing IDBP by setting δ to a value significantly larger than zero. However, this observation also hints that IDBP may shine for noisy measurements, where δ = 0 can be used without increasing the number of iterations. We also remark that IPPO gives the best results for peppers and Barbara because for these images P&P and IDBP require more than the fixed 150 iterations.

Fig. 1: IDBP recovery (PSNR vs. iteration) of house test image with 80% missing pixels and no noise.
camera. house peppers Lena Barbara boat hill couple
IPPO 24.78 32.64 27.98 31.84 29.89 28.17 29.47 28.22
P&P 24.83 34.72 26.88 32.41 25.68 28.83 29.95 29.01
IDBP 24.86 33.78 26.86 32.13 25.55 28.51 29.74 28.80
TABLE I: Inpainting results (PSNR in dB) for 80% missing pixels and σ = 0.

The second experiment demonstrates the performance of IDBP and P&P with 80% missing pixels, as before, but this time with added noise (σ > 0). Noisy inpainting has not been implemented by IPPO [22]. The parameters of P&P are re-tuned for best results, again with 150 iterations; using the same parameter values as before deteriorates its performance significantly. In contrast to P&P, in this experiment tuning the parameters of IDBP can be avoided: we follow Section IV-A and set δ = 0. Moreover, IDBP now requires only 75 iterations, half the number of P&P. The results are given in Table II. P&P is slightly inferior to IDBP, despite having twice the number of iterations and burdensome parameter tuning. The results for house are also presented in Figure 2, where it can be seen that the P&P reconstruction suffers from more artifacts (e.g. ringing artifacts near the right window).

Fig. 2: Recovery of the house image with 80% missing pixels and added noise. From left to right and from top to bottom: original image, subsampled and noisy image, reconstruction of P&P, and reconstruction of the proposed IDBP.
camera. house peppers Lena Barbara boat hill couple
P&P 24.55 31.53 26.16 30.10 24.45 27.01 27.94 27.23
IDBP 24.68 31.62 26.13 30.14 25.03 27.02 28.00 27.22
TABLE II: Inpainting results (PSNR in dB) for 80% missing pixels and added noise.

We repeat the last experiment with a slightly increased noise level, but still use the same parameter tuning for P&P, which was optimized for the previous noise level (i.e. the same β, ρ, and the fixed 150 iterations). This situation is often encountered in practice, when calibrating a system for all possible scenarios is impossible. The results are given in Table III. IDBP clearly outperforms P&P in this case. This experiment highlights the main advantage of our algorithm over P&P: it is less sensitive to parameter tuning.

camera. house peppers Lena Barbara boat hill couple
P&P 24.43 30.78 25.80 29.47 24.12 26.53 27.44 26.71
IDBP 24.51 31.14 25.92 29.69 25.06 26.64 27.61 26.77
TABLE III: Inpainting results (PSNR in dB) for 80% missing pixels and a slightly increased noise level, with the same parameters as in Table II.

V-B Image deblurring

In the image deblurring problem, for a circular shift-invariant blur operator with kernel h, both P&P and IDBP can be efficiently implemented using the Fast Fourier Transform (FFT). In P&P, the LS step of (6) can be computed by

(28)   x̃_{k+1} = F⁻¹{ ( F{h}* ∘ F{y} / σ² + ρ F{ṽ_k − u_k} ) ./ ( |F{h}|² / σ² + ρ ) },

where F denotes the FFT operator, F⁻¹ denotes the inverse FFT operator, * denotes complex conjugation, and the product ∘ and the division ./ are elementwise.

Recall that H is an ill-conditioned matrix. Therefore, in IDBP we replace H† with a regularized inversion of H, using standard Tikhonov regularization, which is given in the Fourier domain by the inverse filter

(29)   g = F{h}* ./ ( |F{h}|² + ε ),

where ε is a parameter that controls the amount of regularization in the approximation of H†. Then, ỹ_{k+1} in IDBP can be computed by

(30)   ỹ_{k+1} = F⁻¹{ g ∘ F{y} + (1 − g ∘ F{h}) ∘ F{x̃_{k+1}} },

which implements (15) with the regularized inverse filter g in place of H†.
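As reconstructed here, (29)-(30) amount to elementwise operations in the Fourier domain. The following NumPy sketch (ours; the exact placement of ε relative to the noise level may differ slightly from the paper) implements one back-projection step:

```python
import numpy as np

def idbp_deblur_step(y, x_tilde, h_fft, eps):
    """One back-projection step of IDBP for circular deblurring.

    y, x_tilde : 2-D arrays (observed image and current denoised estimate).
    h_fft      : FFT of the (zero-padded) blur kernel, same shape as y.
    eps        : Tikhonov regularization parameter of Eq. (29)."""
    # Regularized inverse filter g = conj(F{h}) / (|F{h}|^2 + eps)
    g = np.conj(h_fft) / (np.abs(h_fft) ** 2 + eps)
    y_fft = np.fft.fft2(y)
    x_fft = np.fft.fft2(x_tilde)
    # y_next = (regularized H^dagger) y + (I - (reg. H^dagger) H) x_tilde
    y_next_fft = g * y_fft + (1.0 - g * h_fft) * x_fft
    return np.real(np.fft.ifft2(y_next_fft))
```

As a sanity check, with a delta kernel (no blur) and eps = 0 the step returns y unchanged, matching the inpainting intuition that observed data pass through untouched.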

We use a trivial initialization in both methods, i.e. ṽ_0 = y in P&P and ỹ_0 = y in IDBP. Similarly to inpainting, the computational cost of each iteration of P&P and IDBP is on the same scale, dominated by the complexity of the denoising operation.

We consider four deblurring scenarios used as benchmarks in many publications (e.g. [5, 6]). The blur kernel and noise level of each scenario are summarized in Table IV. The kernels are normalized so that their entries sum to one.

We compare the performance of IDBP and P&P with IDD-BM3D [6], which is a state-of-the-art deblurring algorithm. We use IDD-BM3D exactly as in [6], where the same scenarios are examined: it is initialized using BM3D-DEB [23], performs 200 iterations, and its parameters are manually tuned per scenario. The parameters of P&P are also optimized for each scenario. It uses 50 iterations, with β set to 0.85, 0.85, 0.9, 0.8 and ρ set to 2, 1, 3, 1, for scenarios 1-4, respectively.

For the tuning of IDBP, as mentioned in Section IV-A, we observed that pairs of (δ, ε) that give the best results indeed satisfy condition (16), while pairs of (δ, ε) that lead to bad results often violate this condition. This behavior is demonstrated for the house image in Scenario 1 (see Table IV). Figure 3(a) shows the PSNR as a function of the iteration number for several pairs of (δ, ε). The left-hand side (LHS) of (16) divided by its right-hand side (RHS) is presented in Figure 3(b) as a function of the iteration number. If this ratio is less than 1, even for a single iteration, it means that the original condition in (12) is violated by the associated (δ, ε). Recall that even when the ratio is higher than 1 for all iterations, satisfying (12) is not guaranteed. Therefore, a small margin should be kept. For example, the pair (δ=5, ε=7e-3), which reaches the highest PSNR in Figure 3(a), has its smallest LHS/RHS ratio slightly below 3. When the margin further increases, a graceful degradation in PSNR occurs, as observed for (δ=7, ε=7e-3) and (δ=5, ε=10e-3).

Equipped with the above observation, we suggest fixing δ (or ε) and automatically tuning ε (or δ) using condition (16) with some confidence margin. A scheme for IDBP with automatic tuning of ε is presented in Algorithm 3. Starting with a small value of ε, the LHS/RHS ratio of (16) is evaluated at the end of each IDBP iteration. If the ratio is smaller than a threshold τ, then ε is slightly increased and IDBP is restarted. We do not check the ratio at the first iteration, as it strongly depends on the initialization ỹ_0. An alternative scheme that uses a fixed ε and gradually increases δ can be obtained in a similar way. We noticed that the restarts in Algorithm 3 happen in early iterations (e.g., restarts occur at the second iteration for the bad initializations in Figure 3(b)). Therefore, the proposed tuning scheme is not computationally demanding.

The efficiency of the auto-tuned IDBP is demonstrated by improving the performance of the worst two initializations in Figure 3(a), i.e. (δ=2, ε=7e-3) and (δ=5, ε=3e-3). For each of them, one parameter is kept as is and the second is auto-tuned using the threshold τ. The results are shown in Figures 3(c) and 3(d).

Scenario   Blur kernel                                σ²
1          h(i,j) = 1/(1+i²+j²), i,j = −7,…,7         2
2          h(i,j) = 1/(1+i²+j²), i,j = −7,…,7         8
3          9×9 uniform                                ≈0.3
4          [1 4 6 4 1]ᵀ[1 4 6 4 1] / 256              49
TABLE IV: Blur kernel and noise variance of the different scenarios.
Input: y, H, σ, denoising operator D, stopping criterion. y and H are such that y = Hx + e, where e ~ N(0, σ²I_m) and x is an unknown signal whose prior model is specified by s(x).
Output: x̂, an estimate for x.
Params.: ỹ_0 = some initialization, k = 0, δ = moderate fixed value, ε = small initial value, Δε = small increment, τ = confidence margin greater than 1.
Default init.: ỹ_0 = y, k = 0, ε = 1e-3, Δε = 1e-4.
while stopping criterion not met do
       x̃_{k+1} = D(ỹ_k; σ+δ);
       Compute ỹ_{k+1} using (30) (note that the regularized H† depends on ε);
       if k ≥ 1 and LHS/RHS of (16) is smaller than τ then
             ε = ε + Δε;
             Restart process: k = 0;
       else
             k = k + 1;
end while
x̂ = x̃_k;
Algorithm 3 Auto-tuned IDBP for deblurring
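The restart logic of Algorithm 3 can be sketched generically; all names below are ours, and the two callables stand in for the problem-specific IDBP iteration and the evaluation of condition (16):

```python
def auto_tuned_idbp(run_idbp_iter, condition_ratio, eps0=1e-3, d_eps=1e-4,
                    tau=3.0, max_iters=30):
    """Restart scheme of Algorithm 3 (a sketch, not the authors' code).

    run_idbp_iter(eps, state) : performs one IDBP iteration for the given
                                regularization eps, returns the new state.
    condition_ratio(state)    : returns LHS/RHS of condition (16).
    """
    eps = eps0
    while True:
        state = None
        for k in range(max_iters):
            state = run_idbp_iter(eps, state)
            # The ratio is not checked at the first iteration, since it
            # strongly depends on the initialization.
            if k > 0 and condition_ratio(state) < tau:
                eps += d_eps          # slightly increase eps ...
                break                 # ... and restart IDBP
        else:
            return eps, state         # all iterations kept the margin
```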
Fig. 3: (a) IDBP deblurring results (PSNR vs. iteration number) for house in Scenario 1 for several pairs of (δ, ε); (b) LHS of (16) divided by its RHS vs. iteration number — if any iteration's value is less than 1, then the condition in (12) is violated; since the opposite direction does not hold, it is preferable to keep a margin above 1; (c) the results of the auto-tuned IDBP initialized with the values of (δ, ε) that give the two worst results in (a); (d) LHS of (16) divided by its RHS after auto-tuning.

We examine two different tuning strategies for IDBP. The first is manual tuning per scenario, which is still simpler than the tuning of the competing methods: we fix δ for all scenarios and only change ε, to 7e-3, 4e-3, 8e-3, 2e-3 for scenarios 1-4, respectively. The second strategy applies Algorithm 3 with the suggested default settings. In the latter, ε may end up set differently for different images in the same scenario, while all of them use the same method with the same default parameters. We use a stopping criterion of only 30 iterations for both IDBP versions.

Table V shows the results of IDBP, auto-tuned IDBP, P&P, and the dedicated algorithm IDD-BM3D. For each scenario it shows the input PSNR (i.e. the PSNR of the blurred and noisy image) and the BSNR (blurred signal-to-noise ratio, i.e. the ratio between the variance of the blurred image and the noise variance, in dB) for each image, as well as the ISNR (improvement in SNR) for each method and image, which is the difference between the PSNR of the reconstruction and the input PSNR. Note that in Scenario 3, σ² is set slightly differently for each image, ensuring that the BSNR is 40 dB.

From Table V it is clear that the plain and auto-tuned IDBP implementations have similar performance on average. Both perform better than P&P, and show only a small performance gap below IDD-BM3D, which is especially tailored for the deblurring problem and requires more iterations and parameter tuning. Figure 4 displays the results for Barbara in Scenario 4. It can be seen that the IDBP reconstruction, especially with auto-tuning in this case, restores the texture better (e.g., look near the right palm).

Scenario 1 camera. house peppers Lena Barbara boat hill couple Average
BSNR 31.87 29.16 29.99 29.89 30.81 29.37 30.19 28.81
input PSNR 22.23 25.61 22.60 27.25 23.34 25.00 26.51 24.87
IDD-BM3D 8.86 9.95 10.46 7.97 7.64 7.68 6.03 7.61 8.28
P&P 8.03 9.74 10.02 8.02 6.84 7.48 5.78 7.34 7.91
IDBP 8.51 9.82 10.07 7.92 7.90 7.54 5.90 7.34 8.13
Auto-tuned IDBP 8.40 9.83 10.06 8.02 7.59 7.61 5.90 7.46 8.11
Scenario 2 camera. house peppers Lena Barbara boat hill couple Average
BSNR 25.85 23.14 23.97 23.87 24.79 23.35 24.17 22.79
input PSNR 22.16 25.46 22.53 27.04 23.25 24.88 26.33 24.75
IDD-BM3D 7.12 8.55 8.65 6.61 3.96 5.96 4.69 5.88 6.43
P&P 6.06 8.20 8.15 6.49 2.72 5.65 4.46 5.56 5.91
IDBP 6.61 8.15 7.97 6.58 3.94 5.87 4.61 5.71 6.18
Auto-tuned IDBP 6.56 8.15 8.00 6.54 3.94 5.91 4.61 5.77 6.19
Scenario 3 camera. house peppers Lena Barbara boat hill couple Average
BSNR 40.00 40.00 40.00 40.00 40.00 40.00 40.00 40.00
input PSNR 20.77 24.11 21.33 25.84 22.49 23.36 25.04 23.24
IDD-BM3D 10.45 12.89 12.06 8.91 6.05 9.77 7.78 10.06 9.75
P&P 9.49 13.17 11.70 9.04 5.36 9.71 7.63 9.98 9.51
IDBP 9.78 12.96 11.92 9.03 6.22 9.64 7.66 9.85 9.63
Auto-tuned IDBP 9.67 12.96 11.90 9.07 6.01 9.74 7.67 9.98 9.63
Scenario 4 camera. house peppers Lena Barbara boat hill couple Average
BSNR 18.53 15.99 17.01 16.47 17.35 16.06 16.68 15.55
input PSNR 24.62 28.06 24.77 28.81 24.22 27.10 27.74 26.94
IDD-BM3D 3.98 5.79 4.45 4.97 1.88 3.60 3.29 3.61 3.95
P&P 3.31 5.43 4.95 4.84 1.50 3.42 3.13 3.39 3.75
IDBP 3.61 5.69 4.44 5.07 1.97 3.54 3.12 3.50 3.87
Auto-tuned IDBP 3.65 5.42 4.36 4.94 2.72 3.52 3.15 3.41 3.90
TABLE V: Deblurring inputs (BSNR and input PSNR in dB) and reconstruction results (Improvement SNR in dB for each method) for scenarios 1-4.
Fig. 4: Deblurring of Barbara image, Scenario 4. From top to bottom, fragments of: original image, blurred and noisy image, reconstruction of IDD-BM3D, reconstruction of P&P, reconstruction of the proposed IDBP, and reconstruction of the proposed auto-tuned IDBP.

VI Conclusion

In this work we introduced the Iterative Denoising and Backward Projections (IDBP) method for solving linear inverse problems using denoising algorithms. This method, in its general form, has only a single parameter that should be set according to a given condition. We presented a mathematical analysis of this strategy and provided a practical way to tune its parameter. Therefore, it can be argued that our approach has fewer parameters that require tuning than the P&P method. Specifically, for the noisy inpainting problem, the single parameter of the IDBP can simply be set to zero, and for the deblurring problem our suggested automatic parameter tuning can be employed. Experiments demonstrated that IDBP is competitive with state-of-the-art task-specific algorithms and with the P&P approach for the inpainting and deblurring problems.

Appendix A Proof of Theorem 3

We start with proving an auxiliary lemma.

Lemma 4.

Assuming that Condition 1 holds, i.e. ||D(z; σ_d) − z||₂ ≤ C√n σ_d for any z, we have

(31)   ||D(z₁; σ_d) − D(z₂; σ_d)||₂ ≤ ||z₁ − z₂||₂ + 2C√n σ_d

for any z₁ and z₂ in ℝⁿ.

Proof.

Using the triangle inequality followed by Condition 1, we get the desired result

(32)   ||D(z₁; σ_d) − D(z₂; σ_d)||₂ ≤ ||D(z₁; σ_d) − z₁||₂ + ||z₁ − z₂||₂ + ||z₂ − D(z₂; σ_d)||₂ ≤ ||z₁ − z₂||₂ + 2C√n σ_d. ∎

We now turn to the proof of the theorem.

Proof.

Using the triangle inequality, we have