1 Introduction
Image denoising is a fundamental restoration problem. Consider a given measurement image $y$, obtained from the clean signal $x$ by a contamination of the form

(1) $y = x + v,$

where $v$ is a zero-mean additive noise that is independent of $x$. Note that $y$, $x$ and $v$ are held in the above equation as column vectors after lexicographic ordering. A solution to this inverse problem is an approximation $\hat{x}$ of the unknown clean image $x$. Plenty of sophisticated algorithms have been developed in order to estimate the original image content, e.g. the NLM [6], K-SVD [15], BM3D [13], EPLL [53], and others [51, 9, 35, 26, 46, 36]. These algorithms rely on powerful image models/priors, where sparse representations [5, 14] and processing of local patches [27] have become two prominent ingredients. Despite the effectiveness of the above denoising algorithms, improved results can be obtained by applying a boosting technique (see [46, 10, 31] for more details). Several such techniques have been proposed over the years, e.g. "twicing" [49], Bregman iterations [32], boosting [8], SAIF [46] and more (e.g. [36]). These algorithms are closely related and share the use of the residual (also known as the "method-noise" [6]) in order to improve the estimates. The residual is defined as the difference between the noisy image and its denoised version. Naturally, the residual contains signal leftovers due to imperfect denoising (together with noise).
For example, motivated by this observation, the idea behind the twicing technique [49] is to extract these leftovers by denoising the residual, and then add them back to the estimated image. This can be expressed as [10]

(2) $\hat{x}^{k+1} = \hat{x}^{k} + f\left(y - \hat{x}^{k}\right),$

where the operator $f(\cdot)$ represents the denoising algorithm and $\hat{x}^{k}$ is the $k$-th iteration denoised image. The initialization is done by setting $\hat{x}^{0} = 0$.
Using the concept of Bregman distance [4] in the context of total-variation denoising [39], Osher et al. [32] suggest exploiting the residual by

(3) $\hat{x}^{k+1} = f\!\left(y + \sum_{j=1}^{k}\left(y - \hat{x}^{j}\right)\right),$

where the recursive function is initialized by setting $\hat{x}^{1} = f(y)$. Note that if the denoising algorithm can be represented as a linear (data-independent) matrix, Equations (2) and (3) coincide [10]. Furthermore, for these two boosting techniques, it has been shown [46] that as $k$ increases, the estimate $\hat{x}^{k}$ returns to the noisy image $y$.
Motivated by the above-mentioned algorithms, our earlier work [36] improves the K-SVD [15], NLM [6] and the first stage of the BM3D [13] by applying an iterative boosting algorithm that extracts the "stolen" image content from the method-noise image. The improvement is achieved by adding the extracted content back to the initial denoised result. The work in [36] suggests representing the signal leftovers of the method-noise patches using the same basis/support that was chosen for the representation of the corresponding clean patch in the initial denoising stage. As an example, in the context of the K-SVD, the supports are the sets of atoms that participate in the representation of the noisy patches.
However, in addition to the signal leftovers that reside in the residual image, there are noise leftovers that are found in the denoised image. Driven by this observation, SAIF [46] offers a general framework for improving spatial-domain denoising algorithms. Their algorithm controls the denoising strength locally by iteratively filtering the image patches. Per each patch, it automatically chooses the improvement mechanism, twicing or diffusion, and the number of iterations to apply. Diffusion [31] is a boosting technique that suggests repeated applications of the same denoising filter, thereby removing the noise leftovers that reside in the previous estimate (sometimes also sacrificing some of the high frequencies of the signal).
In this paper we propose a generic recursive function that treats the denoising method as a "black-box" and has the ability to push it forward to improve its performance. Differently from the above methods, instead of adding the residual (which mostly contains noise) back to the noisy image, or filtering the previous estimate over and over again (which could lead to over-smoothing), we suggest strengthening the signal by leveraging the availability of the denoised image. More specifically, given an initial estimate of the clean image, improved results can be achieved by iteratively repeating the following SOS procedure:
Strengthen the signal by adding the previous denoised image to the noisy input image.
Operate the denoising method on the strengthened image.
Subtract the previous denoised image from the restored signal-strengthened outcome.
The core equation that describes this procedure can be written in the following form:

(4) $\hat{x}^{k+1} = f\left(y + \hat{x}^{k}\right) - \hat{x}^{k},$

where $\hat{x}^{0} = f(y)$. As we show hereafter, a performance improvement is achieved since the signal-strengthened image $y + \hat{x}^{k}$ can be denoised more effectively compared to the noisy input image, due to its improved Signal-to-Noise Ratio (SNR).
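To make the three steps concrete, the following minimal Python/NumPy sketch runs the SOS recursion with a simple linear smoothing filter standing in for the black-box denoiser $f(\cdot)$; the filter, the test signal and the parameter values are illustrative assumptions, not the setup used in the experiments reported later:

```python
import numpy as np

def sos_boosting(y, denoise, n_steps=20):
    """SOS: x_{k+1} = f(y + x_k) - x_k, initialized with x_0 = f(y)."""
    x = denoise(y)                        # initial estimate
    for _ in range(n_steps):
        strengthened = y + x              # (S) strengthen the signal
        denoised = denoise(strengthened)  # (O) operate the denoiser
        x = denoised - x                  # (S) subtract the previous estimate
    return x

# Illustrative black-box denoiser: a mild 3-tap smoothing filter (not K-SVD).
def smooth(z):
    return 0.125 * np.roll(z, -1) + 0.75 * z + 0.125 * np.roll(z, 1)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)
estimate = sos_boosting(noisy, smooth, n_steps=20)
```

Any denoiser with the same signature can be plugged in unchanged, which is the sense in which the method treats $f(\cdot)$ as a "black-box".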
The convergence of the proposed algorithm is studied in this paper by formulating the linear part of the denoising method and assessing the properties of the iterative system's matrix. In this work we put special emphasis on the K-SVD, describing the resulting denoising matrix and the corresponding convergence properties related to it. The work by Milanfar [31] shows that most existing denoising algorithms (e.g. NLM [6], Bilateral filter [47], LARK [11]) can be represented as row-stochastic positive-definite matrices. In this context, our analysis suggests that for most denoising algorithms, the proposed SOS boosting method is guaranteed to converge. Therefore, we get a straightforward stopping criterion.
In addition, we introduce an interesting interpretation of the SOS boosting algorithm, related to a major shortcoming of patch-based methods: the gap between the local patch-processing and the global need for a whole restored image. In general, patch-based methods (i) break the image into overlapping patches, (ii) restore each patch (local processing), and (iii) reconstruct the image by aggregating the overlapping patches (the global need). The aggregation is usually done by averaging the overlapping patches. The proposed SOS boosting is related to a different algorithm that aims to narrow the local-global gap mentioned above [37]. Per each patch, this algorithm defines the difference between the local (intermediate) result and the corresponding patch from the global outcome as a "disagreement". Since each patch is processed independently, such a disagreement naturally exists.
Interestingly, in the context of the K-SVD image denoising, the SOS algorithm is equivalent to repeating the following steps (see [37] and Section 6.2 for more details): (i) compute the disagreement per patch, (ii) subtract the disagreement from the degraded input patches, (iii) apply the restoration algorithm to these patches, and (iv) reconstruct the image. Therefore, the proposed algorithm encourages the overlapping patches to share their local information, thus reducing the gap between the local patch-processing and the global restoration task.
The above should remind the reader of the EPLL framework [53], which also addresses the local-global gap. EPLL encourages the patches of the final image (i.e. after patch-averaging) to comply with the local prior. In EPLL, given a local patch model, the algorithm alternates between denoising the previous result according to the local prior, followed by an image reconstruction step (patch-averaging). Several local priors can use this paradigm; a Gaussian Mixture Model (GMM) is suggested in the original paper [53]. Similarly, EPLL with sparse and redundant representation modeling has been recently proposed in [43]. EPLL bears some resemblance to diffusion methods [31], as it amounts to iterated denoising with a diminishing-variance setup, in order to avoid an over-smoothed outcome. In practice, at each diffusion step, the authors of
[53, 43] empirically estimate the noise that resides in $\hat{x}^{k}$ (which is neither Gaussian nor independent of the signal). In contrast, in our scheme, setting this parameter is trivial: the noise level of $y + \hat{x}^{k}$ is nearly that of $y$, regardless of the iteration number. In the context of image denoising, several works (e.g. [16, 2, 23, 24, 21]) suggest representing an image as a weighted graph, whose weights measure the similarity between the pixels/patches. Since the graph Laplacian describes the structure of the underlying signal, it can be used as an adaptive regularizer, as done in the above-mentioned methods. Put differently, the graph Laplacian preserves the geometry of the image by promoting similar pixels to remain similar, thus achieving an effective denoising performance. It turns out that the steady-state outcome of the SOS minimizes a cost function that involves the graph Laplacian as a regularizer, providing another justification for the success of our method. Furthermore, influenced by the SOS mechanism, we offer novel iterative algorithms that minimize the graph Laplacian cost functions defined in [16, 2, 23]. Similarly to the SOS, the proposed iterative algorithms treat the denoiser as a "black-box" and operate on the strengthened image, without an explicit construction of the weighted graph.
This paper is organized as follows: In Section 2 we provide brief background material on sparse representation and dictionary learning, with special attention to the K-SVD denoising and its matrix form. In Section 3 we introduce our novel SOS boosting algorithm, study its convergence, and generalize it by introducing two parameters that govern the steady-state outcome, the requirements for convergence, and the rate of convergence. In Section 4 we discuss the relation between the SOS boosting and the local-global gap. In Section 5 we provide a graph-based analysis of the steady-state outcome of the SOS, and offer novel recursive algorithms for related graph Laplacian methods. Experiments are presented in Section 6, showing a meaningful improvement of the K-SVD image denoising, and similar boosting for other methods: the NLM, BM3D, and EPLL. Conclusions and future research directions are drawn in Section 7.
2 K-SVD Image Denoising Revisited
We bring the following discussion on sparse representations, and specifically the K-SVD image denoising algorithm, because its matrix interpretation will serve hereafter as a benchmark in the convergence analysis.
2.1 Sparse Representation & K-SVD Denoising
The sparseland modeling [5, 14] assumes that a given signal $x \in \mathbb{R}^{n}$ (in this context, the signal is not necessarily an image) can be well represented as $x = D\alpha$, where $D \in \mathbb{R}^{n \times m}$ is a dictionary composed of $m$ atoms as its columns, and $\alpha \in \mathbb{R}^{m}$ is a sparse vector, i.e., it has only a few non-zero coefficients. For a noisy signal $y = x + v$, we seek a representation $\hat{\alpha}$ that approximates $x$ up to an error bound, which is proportional to the amount of noise in $v$. This is an NP-hard problem that can be expressed as

(5) $\hat{\alpha} = \operatorname*{argmin}_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|y - D\alpha\|_2^2 \le \epsilon^2,$

where $\|\alpha\|_0$ counts the non-zero coefficients in $\alpha$, and the constant $\epsilon$ is an error bound. There are many efficient sparse-coding algorithms that approximate the solution of Equation (5), such as OMP [33], BP [12], and others [14, 48].
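As an illustration, here is a minimal error-constrained OMP sketch in NumPy, approximating Equation (5). It is a didactic implementation, and the dictionary and sparsity pattern in the usage example are synthetic assumptions, not the optimized OMP of [33]:

```python
import numpy as np

def omp(D, y, eps):
    """Error-constrained Orthogonal Matching Pursuit (a sketch of Eq. (5)):
    greedily pick the atom most correlated with the residual, re-fit the
    coefficients on the chosen support by least squares, and stop once the
    residual norm drops below eps."""
    support = []
    coeffs = np.zeros(0)
    residual = y.astype(float).copy()
    while np.linalg.norm(residual) > eps and len(support) < D.shape[0]:
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        if j in support:                            # numerical stagnation guard
            break
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    alpha = np.zeros(D.shape[1])
    alpha[support] = coeffs
    return alpha

# Toy usage: a random dictionary with unit-norm atoms, exactly 3-sparse signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
alpha0 = np.zeros(50)
alpha0[[3, 17, 41]] = [1.5, -2.0, 1.0]
y = D @ alpha0
alpha = omp(D, y, eps=1e-6)
```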
The above discussion assumes that $D$ is known and fixed. A line of work (e.g. [17, 42, 1]) shows that adapting the dictionary to the input signal results in a sparser representation. In the case of denoising under an error constraint, since the dictionary is adapted to the image content, the subspace that the noisy signal is projected onto is of smaller dimension, compared to the case of a fixed dictionary. This leads to a stronger noise reduction, i.e., better restoration. Given a set of measurements $\{y_i\}_{i=1}^{N}$, a typical dictionary learning process [1, 17] is formulated as

(6) $\hat{D}, \{\hat{\alpha}_i\} = \operatorname*{argmin}_{D, \{\alpha_i\}} \sum_{i=1}^{N} \mu_i \|\alpha_i\|_0 + \|D\alpha_i - y_i\|_2^2,$

where $\hat{D}$ and $\{\hat{\alpha}_i\}$ are the resulting dictionary and representations, respectively. The scalars $\mu_i$ are signal-dependent, so as to comply with a set of constraints of the form $\|D\alpha_i - y_i\|_2^2 \le \epsilon^2$.
Due to computational demands, adapting a dictionary to large signals (images in our case) is impractical. Therefore, an image is treated by breaking it into overlapping patches (e.g. of size $8 \times 8$). Then, each patch is restored according to the sparsity-inspired prior. More specifically, the K-SVD image denoising algorithm [15] divides the noisy image into fully overlapping patches, then processes them locally by performing several iterations of sparse-coding (using OMP) and dictionary learning, as described in Equation (6). Finally, the globally denoised image is obtained by returning the cleaned patches to their original locations, followed by an averaging with the input noisy image. The above procedure approximates the solution of

(7) $\hat{x}, \{\hat{\alpha}_i\} = \operatorname*{argmin}_{x, \{\alpha_i\}} \lambda\|x - y\|_2^2 + \sum_{i=1}^{N} \mu_i\|\alpha_i\|_0 + \sum_{i=1}^{N} \|D\alpha_i - R_i x\|_2^2,$

where $\hat{x}$ is the resulting denoised image, $N$ is the number of patches, and $R_i$ is a matrix that extracts the $i$-th patch from the image. The first term in Equation (7) demands a proximity between the noisy and denoised images. The second term demands that each patch is represented sparsely up to an error bound, with respect to a dictionary $D$. As to the coefficients $\mu_i$, those are spatially dependent and set as explained in Equation (6).
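The patch-extraction operators and the final averaging step can be sketched as follows, where `lam` plays the role of the weighting $\lambda$ in the closed-form averaging; this is an illustrative sketch of the extraction/aggregation bookkeeping only, not the full K-SVD pipeline:

```python
import numpy as np

def extract_patches(img, p):
    """All fully-overlapping p-x-p patches (the action of the R_i operators)."""
    H, W = img.shape
    return np.array([img[r:r + p, c:c + p]
                     for r in range(H - p + 1)
                     for c in range(W - p + 1)])

def average_patches(patches, shape, p, y, lam):
    """Return patches to place (R_i^T) and average, blended with the noisy
    image y: (lam * y + sum_i R_i^T q_i) / (lam + per-pixel counts)."""
    H, W = shape
    acc = lam * y.astype(float)
    cnt = lam * np.ones(shape)
    idx = 0
    for r in range(H - p + 1):
        for c in range(W - p + 1):
            acc[r:r + p, c:c + p] += patches[idx]
            cnt[r:r + p, c:c + p] += 1.0
            idx += 1
    return acc / cnt

img = np.arange(36, dtype=float).reshape(6, 6)
P = extract_patches(img, 3)                     # 16 patches of size 3x3
rec = average_patches(P, img.shape, 3, img, lam=0.5)
```

Aggregating unmodified patches reproduces the image exactly, since the blend with $y$ and the per-pixel counts cancel.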
2.2 K-SVD Image Denoising: A Matrix Formulation
The K-SVD image denoising can be divided into non-linear and linear parts. The former is composed of preparation steps that include the support determination within the sparse-coding and the dictionary update, while the outcome of the latter is the actual image-adaptive filter that cleans the noisy image. The matrix formulation of the K-SVD denoising represents its linear part, assuming the availability of the non-linear computations. At this stage we should note that the following formulation is given as background to the theoretical analysis that follows; it is not necessary when using the proposed SOS boosting in practice.
Sparse-coding determines, per each noisy patch $y_i = R_i y$, a small set of atoms that participate in its representation. Following the last step of the OMP [33], given the support $S$, the representation $\alpha_S$ of the clean patch (with a slight abuse of notation, $\alpha_S$ refers hereafter only to the non-zero part of the representation, being a vector of length $|S|$) is obtained by solving

(8) $\hat{\alpha}_S = \operatorname*{argmin}_{\alpha_S} \left\|y_i - D_S \alpha_S\right\|_2^2,$

which has a closed-form solution

(9) $\hat{\alpha}_S = \left(D_S^T D_S\right)^{-1} D_S^T y_i.$

Given $\hat{\alpha}_S$, the clean patch is obtained by applying the inverse transform from the representation to the signal/patch space, i.e.,

(10) $\hat{x}_i = D_S \hat{\alpha}_S = D_S\left(D_S^T D_S\right)^{-1} D_S^T y_i = Q_i y_i.$

Notice that although the computation of the support $S$ is non-linear, the clean patch is obtained by filtering its noisy version $y_i$ with $Q_i$, a linear, image-adaptive, symmetric and normalized filter.
Following Equation (7) and given all $\{\hat{\alpha}_i\}$, the globally denoised image is obtained by minimizing

(11) $\hat{x} = \operatorname*{argmin}_{x} \lambda\|x - y\|_2^2 + \sum_{i=1}^{N} \left\|D_{S_i}\hat{\alpha}_i - R_i x\right\|_2^2,$

where $S_i$ denotes the support chosen for the $i$-th patch. This is a quadratic expression that has a closed-form solution of the form

(12) $\hat{x} = \left(\lambda I + \sum_i R_i^T R_i\right)^{-1}\left(\lambda y + \sum_i R_i^T D_{S_i}\hat{\alpha}_i\right) = \left(\lambda I + \sum_i R_i^T R_i\right)^{-1}\left(\lambda I + \sum_i R_i^T Q_i R_i\right) y = \mathbf{D}^{-1}\mathbf{W} y = K y,$

where $I$ is the identity matrix and $Q_i$ is the per-patch filter of Equation (10). The term $\mathbf{D} = \lambda I + \sum_i R_i^T R_i$ is a diagonal matrix that counts the appearances of each pixel (e.g. 64 for patches of size $8 \times 8$), where $\lambda$ originates from the averaging with the noisy image $y$. The matrix $R_i^T$ returns a clean patch to its original location in the global image. The matrix $K = \mathbf{D}^{-1}\mathbf{W}$, with $\mathbf{W} = \lambda I + \sum_i R_i^T Q_i R_i$, is the resulting filter, i.e., the matrix formulation of the linear part of the K-SVD image denoising. In the context of graph theory, $\mathbf{D}$ and $\mathbf{W}$ are called the degree and similarity matrices, respectively (see Section 5 for more information). A series of works [46, 31, 45] studies the algebraic properties of such formulations for several image denoising algorithms (NLM [6], Bilateral filter [47], Kernel Regression [11]), for which the filter-matrix is a non-symmetric, row-stochastic matrix. This matrix has real and positive eigenvalues in the range $[0, 1]$, and the largest eigenvalue is unique and equals 1, with a corresponding eigenvector $\mathbf{1}$ [40, 22]. In the K-SVD case, and under the assumption of a periodic boundary condition (see Appendix A for an explanation of this requirement), the properties of the resulting matrix $K$ are somewhat different, and are given in the following theorem.

Theorem 2.2. The resulting matrix $K$ has the following properties:

Symmetric: $K = K^T$, and thus all its eigenvalues are real.

Positive definite: $K \succ 0$, and thus all its eigenvalues are strictly positive.

The minimal eigenvalue of $K$ satisfies $\lambda_{\min} \ge \lambda/(\lambda + n)$, where $n$ is the patch size.

Doubly stochastic, in the sense of $K\mathbf{1} = \mathbf{1}$ and $\mathbf{1}^T K = \mathbf{1}^T$. Note that $K$ may have negative entries, which violates the classic definition of row or column stochasticity.

The above implies that $1$ is an eigenvalue corresponding to the eigenvector $\mathbf{1}$.

The spectral radius of $K$ equals 1, i.e., $\rho(K) = \|K\|_2 = 1$.

The above implies that the maximal eigenvalue satisfies $\lambda_{\max} = 1$.

The spectral radius $\rho(I - K) < 1$.

Appendix B provides a proof for these claims.
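Under the stated cyclic-boundary assumption, these claims can be sanity-checked numerically on a toy 1D construction, in which each per-patch filter $Q_i$ is a projection whose range includes the DC atom. This construction is an illustrative stand-in for the OMP-selected subspaces, not the K-SVD itself:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, lam = 32, 4, 0.5               # signal length, patch size, lambda

def projection(n, rng):
    """Projection onto span{DC atom, one random atom} (a toy Q_i)."""
    A = np.column_stack([np.ones(n) / np.sqrt(n), rng.standard_normal(n)])
    Qf, _ = np.linalg.qr(A)
    return Qf @ Qf.T

# Build W = lam*I + sum_i R_i^T Q_i R_i with cyclic patch extraction R_i.
W = lam * np.eye(N)
for i in range(N):
    R = np.zeros((n, N))
    R[np.arange(n), (i + np.arange(n)) % N] = 1.0
    W += R.T @ projection(n, rng) @ R

K = W / (lam + n)     # the degree matrix is (lam + n)*I under cyclic patches

eigs = np.linalg.eigvalsh(K)
assert np.allclose(K, K.T)                      # symmetric
assert eigs.min() > 0                           # positive definite
assert eigs.min() >= lam / (lam + n) - 1e-10    # eigenvalue lower bound
assert np.allclose(K @ np.ones(N), np.ones(N))  # K 1 = 1 (doubly stochastic)
assert abs(eigs.max() - 1.0) < 1e-10            # maximal eigenvalue equals 1
```

With cyclic patches every pixel appears in exactly $n$ patches, so the degree matrix is the constant diagonal $(\lambda + n)I$ and the normalization preserves symmetry, which is exactly the role of the periodic boundary condition in the theorem.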
For the denoising algorithms studied in [46, 31, 45], the filter-matrix is neither symmetric nor positive definite; however, it can be approximated as such using the Sinkhorn procedure [31]. In the context of the K-SVD, as described in Appendix A, $K$ can become symmetric by a proper treatment of the boundaries (essentially performing cyclic processing of the patches).
To conclude, the discussion above shows that the K-SVD is a member of a large family of denoising algorithms that can be represented as matrices [31]. We will use this formulation in order to study the convergence of the proposed SOS boosting and to demonstrate the local-global interpretation.
3 SOS Boosting
In this section we describe the proposed algorithm, study its convergence, and generalize it by introducing two parameters that govern its steady-state outcome, the requirements for convergence, and its rate.
3.1 SOS Boosting: The Core Idea
Leading image/patch priors are able to effectively distinguish the signal content from the noise. However, an emphasis of the signal over the noise could help the prior to better identify the image content, thereby leading to better denoising performance. As an example, the sparsity-based K-SVD could choose atoms that better fit the underlying signal. Similarly, the NLM, which cleans a noisy patch by applying a weighted average with its spatial neighbors, could determine better weights. This is the key idea behind the proposed SOS boosting algorithm, which exploits the previous estimate in order to enhance the underlying signal. In addition, the proposed algorithm treats the denoiser as a "black-box"; thus it is easy to use and applicable to a wide range of denoising methods.
As mentioned in Section 1, the first class of boosting algorithms (twicing [49] and its variants [32, 8, 36]) suggests extracting the "stolen" content from the method-noise image, with the risk of returning noise back to the denoised image, together with the extracted information. On the other hand, the second class of boosting methods (diffusion [31] or EPLL [53, 43]) aims at removing the noise that resides in the estimated image, with the risk of obtaining an over-smoothed result (this depends on the number of iterations or the denoiser parameters at each iteration). As a consequence, these two classes of boosting algorithms are somewhat lacking, as each addresses only one kind of leftovers [46]: either those that reside in the method-noise or those that are found in the denoised image. Also, these methods may result in an under- or over-smoothed version of the noisy image.
Adopting a different perspective, we suggest strengthening the signal by adding the previous clean estimate $\hat{x}^{k}$ to the noisy input $y$, and then operating the denoising algorithm on the strengthened result. Differently from diffusion filtering, since the estimated part of the signal is emphasized, there is no loss of signal content that has not been estimated correctly (due to the availability of $y$). Differently from twicing, we hardly increase the noise level (under the assumption that the energy of the noise which resides in the clean estimate is small). Finally, a subtraction of $\hat{x}^{k}$ from the outcome should be done in order to obtain a faithful denoised result. This procedure is formulated in Equation (4):

$\hat{x}^{k+1} = f\left(y + \hat{x}^{k}\right) - \hat{x}^{k},$

where $\hat{x}^{0} = f(y)$.
The SOS boosting obtains improved denoising performance due to the higher SNR of the signal-strengthened image, compared to the noisy input. In order to demonstrate this, let us denote

(13) $\hat{x}^{k} = x + e^{k},$

where $e^{k}$ is the error that resides in the outcome $\hat{x}^{k}$, containing both noise residuals and signal errors. Assuming that the denoising algorithm is effective, $\hat{x}^{k}$ has an improved SNR compared to $y = x + v$; this means that

(14) $\frac{\|x\|_2^2}{\|e^{k}\|_2^2} \ge \frac{\|x\|_2^2}{\|v\|_2^2},$

implying

(15) $\|e^{k}\|_2 \le \|v\|_2.$

Thus, referring now to the addition $y + \hat{x}^{k} = 2x + v + e^{k}$, its SNR satisfies

(16) $\mathrm{SNR}\left(y + \hat{x}^{k}\right) = \frac{\|2x\|_2^2}{\|v + e^{k}\|_2^2} \ge \frac{4\|x\|_2^2}{\left(\|v\|_2 + \|e^{k}\|_2\right)^2}.$

In the above we used the Cauchy-Schwarz inequality. Using (15) we get

(17) $\mathrm{SNR}\left(y + \hat{x}^{k}\right) \ge \frac{4\|x\|_2^2}{\left(2\|v\|_2\right)^2} = \frac{\|x\|_2^2}{\|v\|_2^2}.$

Since $\mathrm{SNR}(y) = \|x\|_2^2/\|v\|_2^2$, we have that

(18) $\mathrm{SNR}\left(y + \hat{x}^{k}\right) \ge \mathrm{SNR}(y),$

where in the ideal case ($e^{k} = 0$), the relation becomes

(19) $\mathrm{SNR}\left(y + \hat{x}^{k}\right) = 4\,\mathrm{SNR}(y).$
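The following toy computation illustrates the inequalities (18) and (19); the signal, noise and error vectors are synthetic assumptions chosen so that $\|e^{k}\|_2 \ll \|v\|_2$:

```python
import numpy as np

def snr(signal_part, noise_part):
    """Energy ratio between the signal and noise components."""
    return np.sum(signal_part ** 2) / np.sum(noise_part ** 2)

rng = np.random.default_rng(2)
x = np.sin(np.linspace(0, 8 * np.pi, 1024))   # clean signal
v = 0.5 * rng.standard_normal(1024)           # additive noise in y
e = 0.1 * rng.standard_normal(1024)           # error left in x_hat, ||e|| << ||v||
x_hat = x + e

snr_y = snr(x, v)                             # SNR of y = x + v
snr_sum = snr(2 * x, v + e)                   # SNR of y + x_hat = 2x + (v + e)
```

With these values, `snr_sum` exceeds `snr_y` as Equation (18) predicts, and setting the error to zero reproduces the factor-of-4 relation of Equation (19).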
3.2 Convergence Analysis
Studying the convergence of the SOS boosting is done by leveraging the linear matrix formulation of the denoising algorithm. The error of the SOS recursive function,

(20) $e^{k} = \hat{x}^{k} - \hat{x}^{\ast},$

is defined as the difference between the estimate,

(21) $\hat{x}^{k+1} = K^{k}\left(y + \hat{x}^{k}\right) - \hat{x}^{k},$

and the outcome that is obtained after a large number of iterations,

(22) $\hat{x}^{\ast} = K^{\ast}\left(y + \hat{x}^{\ast}\right) - \hat{x}^{\ast},$

where $K^{k}$ is a filter matrix, equivalent to applying $f(\cdot)$ on the signal-strengthened image. Substituting Equations (21) and (22) into Equation (20) leads to

(23) $e^{k+1} = \hat{x}^{k+1} - \hat{x}^{\ast} = \left(K^{k} - I\right)e^{k} + \left(K^{k} - K^{\ast}\right)\left(y + \hat{x}^{\ast}\right),$

where we use the recursive connection $e^{k} = \hat{x}^{k} - \hat{x}^{\ast}$. We should note that the non-linearity of $f(\cdot)$ is neglected in the above derivation by allowing an operation of the form $f\left(y + \hat{x}^{k}\right) = K^{k}\left(y + \hat{x}^{k}\right)$.
In the following convergence analysis we shall assume a fixed filter-matrix that operates on the signal-strengthened image along the whole SOS-steps, i.e., $K^{k} = K^{\ast} = K$. This comes up in practice after applying the SOS boosting for a large number of iterations (as explained in the context of Figure 1). In this case, the above-mentioned abuse of the non-linearity becomes correct, and thus the convergence analysis is valid.

Theorem 3.2. Assume that $K^{k} = K^{\ast} = K$, and that the spectral radius of the transition matrix satisfies $\gamma = \rho(K - I) < 1$. Then the error converges exponentially, i.e., $\|e^{k}\|_2 \rightarrow 0$ for $k \rightarrow \infty$. Thus, the SOS recursive function is guaranteed to converge.

By assigning $K^{k} = K^{\ast} = K$, the second term in Equation (23) vanishes, thus

(24) $e^{k+1} = (K - I)e^{k} = (K - I)^{k+1} e^{0},$

where $e^{0}$ is a constant vector. Using matrix-norm inequalities we get

(25) $\left\|e^{k+1}\right\|_2 \le \left\|(K - I)^{k+1}\right\|_2 \left\|e^{0}\right\|_2 \le \gamma^{k+1}\left\|e^{0}\right\|_2,$

where we use $\gamma = \rho(K - I) = \|K - I\|_2$, valid since $K$ is symmetric. As a result, $\|e^{k+1}\|_2$ is bounded by $\gamma^{k+1}\|e^{0}\|_2$ and approaches zero for $k \rightarrow \infty$ when $\gamma < 1$.
As such, the SOS boosting is guaranteed to converge for a wide range of denoising algorithms, namely those that can be formulated/approximated such that the transition matrix $K - I$ is convergent, e.g., the K-SVD [15], NLM [6], Bilateral filter [47] and LARK [11]. In the next subsection we relax the convergence requirements and improve the convergence properties, along with a practical demonstration.
3.3 Parametrization
We generalize the SOS boosting algorithm by introducing two parameters that modify the steady-state outcome, the requirements for convergence (the eigenvalues range) and its rate. Starting with the first parameter, $\tau$, which controls the signal emphasis, the formulation proposed is:

(26) $\hat{x}^{k+1} = f\left(y + \tau\hat{x}^{k}\right) - \tau\hat{x}^{k},$

where a large value of $\tau$ implies a strong emphasis of the underlying signal. Assigning $k \rightarrow \infty$ and replacing $f(\cdot)$ with a fixed filter-matrix $K$ lead to

(27) $\hat{x}^{\ast} = K\left(y + \tau\hat{x}^{\ast}\right) - \tau\hat{x}^{\ast},$

which implies a steady-state result

(28) $\hat{x}^{\ast} = \left((1+\tau)I - \tau K\right)^{-1} K y.$

This is the new steady-state outcome, obtained only if the SOS boosting converges. We should note that this outcome also minimizes a cost function that involves the graph Laplacian as a regularizer (see Section 5 for further details). The conditions for convergence are studied hereafter.
The second parameter, $\rho$, modifies the eigenvalues of the error's transition matrix, thereby leading to a faster convergence and relaxing the requirement that only a filter-matrix with eigenvalues between 0 and 1 is guaranteed to converge. We introduce this parameter in such a way that it will not affect the steady-state outcome (at least as far as the linear approximation is concerned). We start with the steady-state relation

(29) $\left((1+\tau)I - \tau K\right)\hat{x}^{\ast} = K y.$

We multiply both sides by $\rho$ and add the term $\hat{x}^{\ast} - \rho\left((1+\tau)I - \tau K\right)\hat{x}^{\ast}$ to both sides, obtaining

(30) $\hat{x}^{\ast} = \rho K y + \left(I - \rho\left((1+\tau)I - \tau K\right)\right)\hat{x}^{\ast}.$

Thus, the same $\hat{x}^{\ast}$ solving (29) will also solve (30), and thus the steady-state is not affected. Rearranging this equality leads to

(31) $\hat{x}^{\ast} = \rho\left(K\left(y + \tau\hat{x}^{\ast}\right) - \tau\hat{x}^{\ast}\right) - (\rho - 1)\hat{x}^{\ast}.$

As a result, the proposed generalized SOS boosting is given by

(32) $\hat{x}^{k+1} = \rho\left(f\left(y + \tau\hat{x}^{k}\right) - \tau\hat{x}^{k}\right) - (\rho - 1)\hat{x}^{k}.$

It is important to note that although $\rho$ does not affect the steady-state explicitly, it may modify the estimates over the iterations. Due to the adaptivity of $f(\cdot)$ to its input, such modifications may eventually affect the steady-state outcome.
Studying the convergence of Equation (32) is done in the same way as in Section 3.2, starting with the error computation, expressed by

(33) $e^{k+1} = \left(\rho\tau(K - I) + (1-\rho)I\right)e^{k}.$

Next, following Theorem 3.2, and assuming $K^{k} = K^{\ast} = K$, we get that the condition for convergence is:

(34) $\left|1 - \rho\left(1 + \tau(1 - \lambda_i)\right)\right| < 1, \quad \forall i,$

where $\lambda_i$ and $1 - \rho\left(1 + \tau(1 - \lambda_i)\right)$ are the eigenvalues of $K$ and of the error's transition matrix, respectively. In order to achieve the fastest convergence, we seek the parameter $\rho^{\ast}$ that minimizes

(35) $\rho^{\ast} = \operatorname*{argmin}_{\rho}\ \max_i \left|1 - \rho\left(1 + \tau(1 - \lambda_i)\right)\right|.$

Given $\rho$, the rate of convergence is governed by

(36) $\gamma = \max_i \left|1 - \rho\left(1 + \tau(1 - \lambda_i)\right)\right|.$

Appendix D provides the following closed-form solution for Equation (35),

(37) $\rho^{\ast} = \frac{2}{2 + \tau\left(2 - \lambda_{\max} - \lambda_{\min}\right)},$

along with the optimal convergence rate,

(38) $\gamma^{\ast} = \frac{\tau\left(\lambda_{\max} - \lambda_{\min}\right)}{2 + \tau\left(2 - \lambda_{\max} - \lambda_{\min}\right)},$

where $\lambda_{\min}$ and $\lambda_{\max}$ are the minimal and maximal eigenvalues of $K$.
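Equations (37) and (38) can be computed and verified directly; the sketch below compares the closed-form $\rho^{\ast}$ against a brute-force grid search over $\rho$, for an assumed (illustrative) eigenvalue range of $K$:

```python
import numpy as np

def optimal_rho(lam_min, lam_max, tau):
    """Closed-form minimizer of max_i |1 - rho*(1 + tau*(1 - lambda_i))|,
    i.e. Eq. (37), together with the optimal rate of Eq. (38)."""
    denom = 2.0 + tau * (2.0 - lam_max - lam_min)
    rho_star = 2.0 / denom
    gamma_star = tau * (lam_max - lam_min) / denom
    return rho_star, gamma_star

def rate(rho, lams, tau):
    """Convergence rate of Eq. (36) for a given rho and eigenvalue set."""
    return np.max(np.abs(1.0 - rho * (1.0 + tau * (1.0 - lams))))

lam_min, lam_max, tau = 0.1, 1.0, 1.0           # assumed spectrum of K
lams = np.linspace(lam_min, lam_max, 201)
rho_star, gamma_star = optimal_rho(lam_min, lam_max, tau)

# The closed-form rho is (numerically) no worse than any rho on a fine grid.
grid = np.linspace(0.1, 1.5, 1401)
best_grid = min(rate(r, lams, tau) for r in grid)
```

Since the rate in Equation (36) is attained at the extreme eigenvalues, the closed-form $\rho^{\ast}$ matches the grid-search optimum up to the grid resolution.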
In the context of the K-SVD image denoising [15], Figure 1 demonstrates the properties of the generalized SOS recursive function for the image House, corrupted by zero-mean Gaussian noise. Each K-SVD operation includes 5 iterations of sparse-coding and dictionary update. We repeat these operations for 100 SOS-steps with a fixed $\tau$. In the following experiment, the denoised images are the outcome of the linear recursion of Equation (32), where the filter-matrix is held fixed as the one obtained at the final SOS-step, initializing with $\hat{x}^{0} = Ky$.
According to Theorem 2.2 and based on the original K-SVD parameters (patch size, dictionary size, and $\lambda$), we obtain bounds on $\lambda_{\min}$ and $\lambda_{\max}$, which lead to the optimal $\rho^{\ast}$ of Equation (37). Figure 1(a) plots the logarithm of the error norm $\|e^{k}\|_2$ for several values of $\rho$. As can be seen, the error norm decreases linearly, bounded by $\gamma^{k}\|e^{0}\|_2$. The fastest convergence is obtained for $\rho = \rho^{\ast}$, while the slowest one is obtained for the extreme values of $\rho$ in the tested range.
Figure 1(b) demonstrates the PSNR improvement (the higher the better) as a function of the SOS-step. As can be seen, faster convergence of the error translates well into faster improvement of the final image. The SOS boosting offers an impressive PSNR improvement over the original K-SVD algorithm.
4 Local-Global Interpretation
As described in Section 1, there is a stubborn gap between the local processing of image patches and the global need (creating a final image by aggregating the patches). Consider a denoising scenario based on overlapping patches (e.g. [15, 51]): At the local processing stage, each patch is denoised independently^3, without any influence from neighboring patches. Then, a global stage merges these outcomes by plainly averaging the local denoising results.

^3 Note that in our terminology, even methods like BM3D [13] are considered local, even though they share information between groups of patches. Indeed, our discussion covers this and related methods as well. In a way, the approach taken in [35] offers some sort of remedy to the BM3D method.
Inspired by game-theory ideas, in particular the "consensus and sharing" optimization problem
[3], we introduce an interesting local-global interpretation of the above-proposed SOS boosting algorithm. In game-theoretical terminology, a patch-based processing can be viewed as follows: There are several agents, each of which adjusts its local variable to minimize its individual cost (in our case, representing the noisy patch sparsely). In addition, there is a shared objective term (the global image) that describes the overall goal. Imitating this concept, we name the following SOS interpretation "sharing the disagreement". This approach, reported in [37], reduces the local-global gap by encouraging the overlapping patches to reach an agreement before they merge their forces by averaging. The proposed boosting algorithm reduces the local-global gap in the following way. Per each patch, we define the difference between the local (intermediate) result and the corresponding patch from the global outcome as a "disagreement". Since each patch is denoised independently, such a disagreement is almost always non-zero and even substantial. Sharing the information between the overlapping patches is done by subtracting the disagreement from the noisy image patches, i.e., seeking an agreement between them. These modified patches are the new inputs to the denoising algorithm. In this way we push the overlapping patches to share their local results, influence each other, and reduce the local-global gap.
More specifically, given an initial denoised version of $y$ and its intermediate patch results, we suggest repeating the following procedure: (i) compute the disagreement per each patch, (ii) subtract the result from the noisy input patches, (iii) apply the denoising algorithm to these modified patches, and (iv) reconstruct the image by averaging over the overlaps. Focusing on the K-SVD image denoising, this procedure is detailed in Algorithm 1.
(39) $\tilde{y}_i^{k+1} = R_i y - \left(D\hat{\alpha}_i^{k} - R_i\hat{x}^{k}\right),$

(40) $\hat{\alpha}_i^{k+1} = \operatorname*{argmin}_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \left\|\tilde{y}_i^{k+1} - D\alpha\right\|_2^2 \le \epsilon^2,$

(41) $\hat{x}^{k+1} = \left(\lambda I + \sum_i R_i^T R_i\right)^{-1}\left(\lambda y + \sum_i R_i^T D\hat{\alpha}_i^{k+1}\right),$

where $D\hat{\alpha}_i^{k}$ is the intermediate (independently denoised) $i$-th patch, $D\hat{\alpha}_i^{k} - R_i\hat{x}^{k}$ is its disagreement with the global outcome, and $\tilde{y}_i^{k+1}$ is the modified input patch.
The modified input patches contain their neighbors' information, thus encouraging the locally denoised patches to agree on the global result. Substituting the disagreement expression in Equation (39) leads to

(42) $\tilde{y}_i^{k+1} = R_i\hat{x}^{k} + \left(R_i y - D\hat{\alpha}_i^{k}\right).$

Now, by denoting the local residual (method-noise) as $r_i^{k} = R_i y - D\hat{\alpha}_i^{k}$, we get

(43) $\tilde{y}_i^{k+1} = R_i\hat{x}^{k} + r_i^{k},$

where $D\hat{\alpha}_i^{k}$ is the denoised version of the patch $R_i y$. In this formulation, the input to the K-SVD is a patch from the global (previous iteration) cleaned image $\hat{x}^{k}$, contaminated by its own local method-noise $r_i^{k}$. Notice the major differences between Equation (2), which denoises the method-noise, Equation (3), which adds the method-noise to the noisy image and then denoises the result, and our local approach, which aims at recovering the previous global estimation, thereby leading to an agreement between the patches. Our algorithm is also different from the EPLL [53], which denoises the previous cleaned image without considering its method-noise.
Still in the context of the K-SVD, Appendix C shows, under some assumptions, an equivalence between the SOS recursive function (Equation (4)) and the above "sharing the disagreement" algorithm. It is important to emphasize that the former treats the K-SVD as a "black-box", thereby being blind to the K-SVD intermediate results (the independently denoised patches, before the patch-averaging step). On the contrary, in the case of the disagreement approach, these intermediate results are crucial; they are central to the algorithm. Therefore, the connection between the SOS and the disagreement algorithms is far from trivial.
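A minimal sketch of the "sharing the disagreement" iteration for a 1D signal with cyclic patches is given below. The per-patch denoiser (projection onto the patch mean) and the plain patch-averaging are illustrative simplifications of the K-SVD machinery; in particular, the blend with $y$ of Equation (41) is omitted:

```python
import numpy as np

def aggregate(patches, idx, N):
    """Patch averaging: put every patch back (R_i^T) and divide by the counts."""
    acc, cnt = np.zeros(N), np.zeros(N)
    np.add.at(acc, idx, patches)
    np.add.at(cnt, idx, 1.0)
    return acc / cnt

def share_disagreement(y, patch_denoise, p=4, n_iters=5):
    """'Sharing the disagreement' sketch for a 1D signal with cyclic patches."""
    N = len(y)
    idx = (np.arange(N)[:, None] + np.arange(p)[None, :]) % N  # cyclic R_i
    noisy_patches = y[idx]
    local = patch_denoise(noisy_patches)            # initial independent results
    x_global = aggregate(local, idx, N)
    for _ in range(n_iters):
        disagreement = local - x_global[idx]        # (i) local vs. global patch
        modified = noisy_patches - disagreement     # (ii) subtract from noisy patches
        local = patch_denoise(modified)             # (iii) denoise modified patches
        x_global = aggregate(local, idx, N)         # (iv) patch averaging
    return x_global

# Illustrative per-patch "denoiser": project each patch onto its mean (DC) value.
def dc_denoise(P):
    return np.repeat(P.mean(axis=1, keepdims=True), P.shape[1], axis=1)

rng = np.random.default_rng(3)
clean = np.sin(np.linspace(0, 4 * np.pi, 512))
noisy = clean + 0.3 * rng.standard_normal(512)
out = share_disagreement(noisy, dc_denoise, p=4, n_iters=5)
```

Note that only the patch inputs are modified between iterations; the per-patch denoiser itself is untouched, mirroring the structure of Algorithm 1.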
5 Graph Laplacian Interpretation
In this section we present a graph-based analysis of the SOS boosting. We start by providing brief background on the graph representation of an image in the context of denoising. Second, we explore the graph Laplacian regularization in general, and in the context of Equation (28), the steady-state outcome of the SOS boosting. Finally, we suggest novel recursive algorithms (that treat the denoiser as a "black-box") for the graph Laplacian regularizers described in [16, 2, 23, 24].
Recent works [19, 20, 16, 2, 41, 29, 21, 23, 24] suggest representing an image as a weighted graph $G = (V, E)$, where the vertices $V$ represent the image pixels, and the edges $E$ represent the connections/similarities between pairs of pixels, with a corresponding weight $w_{ij}$.
A constructive approach for composing a graph Laplacian for an image is via image denoising algorithms. Given a denoising process for an image, which can be represented as a matrix multiplication, $\hat{x} = Ky$, one can refer to the entry $[K]_{ij}$ as revealing information about the proximity between the $i$-th and $j$-th pixels. We note that the existence of the matrix $K$ does not imply that the denoising process is linear. Rather, the non-linearity is hidden within the construction of the entries of $K$. For example, in the case of the NLM [6], Bilateral [47] and LARK [11] filters, the entries of the similarity matrix $\mathbf{W}$ can be expressed by

(44) $[\mathbf{W}]_{ij} = \exp\left\{-\frac{d(y_i, y_j)}{h^2}\right\},$

where $d(y_i, y_j)$ measures the distance between the pixels (or patches), and $h$ is a smoothing parameter. Notice that in the case of the sparsity-based K-SVD denoising [15], the weights, as defined in Equation (12), measure the similarity between the pixels through the dictionary $D$. Dealing with an undirected graph ($w_{ij} = w_{ji}$), the degree $d_i$ of the $i$-th vertex can be defined by

(45) $d_i = \sum_j w_{ij},$

i.e., $d_i$ is a sum over the weights of the edges that are connected to the $i$-th vertex, and $\mathbf{D} = \mathrm{diag}\{d_i\}$ is a diagonal matrix (called the degree matrix), containing the values $d_i$ on its diagonal^4.

^4 The K-SVD degree matrix, as defined in Equation (12), also holds the relation described in Equation (45). According to Theorem 2.2, $\mathbf{1}$ is an eigenvector of $K = \mathbf{D}^{-1}\mathbf{W}$, corresponding to eigenvalue $1$, leading to $\mathbf{D}^{-1}\mathbf{W}\mathbf{1} = \mathbf{1}$. Multiplying both sides by $\mathbf{D}$ results in the desired relation $\mathbf{W}\mathbf{1} = \mathbf{D}\mathbf{1}$, i.e., $d_i = \sum_j w_{ij}$.
The graph Laplacian has a major importance in describing functions on a graph [50], and in the case of image denoising – representing the structure of the underlying signal [16, 2, 29, 21, 24]. There are several definitions of the graph Laplacian. In the context of the proposed SOS boosting, we shall use a normalized Laplacian, defined as
(46) $\mathcal{L} = I - W,$
where $W$ is a filter matrix, representing the denoiser (see Equation (12)). Note that $W$ is a normalized version of the similarity matrix $K$, and thus has eigenvalues in the range of 0 to 1. There are several ways to obtain $W$ from $K$; e.g., $W = D^{-1}K$ is used in [44] and in this work (leading to a random-walk Laplacian), while $W = D^{-1/2}KD^{-1/2}$ is used in [30]. Recently, Kheradmand and Milanfar [24] suggested $W = C^{-1/2}KC^{-1/2}$, where $C$ is the outcome of the Sinkhorn algorithm [25]. Notice that different versions of $W$ result in different properties of $\mathcal{L}$ (refer to [24] for more information).
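To make the construction concrete, the following is a minimal numpy sketch of the pipeline $K \rightarrow D \rightarrow W = D^{-1}K$ on a toy 1-D signal. The intensity-difference distance $d(i,j)$ and all parameter values here are illustrative assumptions, standing in for the NLM/Bilateral/LARK patch distances, not the actual filters used in the paper.

```python
import numpy as np

# Toy 1-D "image": a step edge (a stand-in for real image content).
n = 32
x = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Similarity matrix K (Equation (44)): Gaussian kernel with smoothing
# parameter h; the distance d(i,j) is simply the intensity difference here.
h = 0.5
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / h**2)

# Degree matrix D (Equation (45)) and the random-walk normalization
# W = D^{-1} K used in this work.
D = np.diag(K.sum(axis=1))
W = np.linalg.solve(D, K)

# W is row-stochastic, and its eigenvalues lie in [0, 1]: K is a PSD
# Gaussian kernel, and W is similar to the symmetric D^{-1/2} K D^{-1/2}.
eigvals = np.sort(np.linalg.eigvals(W).real)
print(W.sum(axis=1))          # each row sums to 1
print(eigvals[0], eigvals[-1])  # within [0, 1]
```

The same code applies to any similarity-based filter; only the construction of $K$ changes.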
In general, the spectrum of a graph is defined by the eigenvectors and eigenvalues of $\mathcal{L}$. In the context of image denoising, as argued in [18, 30, 21, 24], the eigenvectors that correspond to the small eigenvalues of $\mathcal{L}$ encapsulate the underlying structure of the image. On the other hand, the eigenvectors that correspond to the large eigenvalues mostly represent the noise. Meyer et al. [30] showed that the small eigenvalues are stable even in high-noise scenarios. As a result, the graph Laplacian can be used as a regularizer, preserving the geometry of the image by encouraging similar pixels to remain similar in the final estimate [16, 2].
What can we do with $\mathcal{L}$? The obvious usage is as an image-adaptive regularizer in inverse problems. There are several ways to integrate $\mathcal{L}$ into a cost function; for example, [16, 2] suggest solving the following minimization problem^{5}The work in [21] is closely related, but its regularization term is $x^T(D - K)x$, and thus it leads to $\hat{x} = \left(I + \rho(D - K)\right)^{-1}y$ in the steady-state formula, where $D - K$ is an unnormalized graph Laplacian. Thus, we omit it from the next discussion.
(47) $\hat{x} = \arg\min_{x} \; \|x - y\|_2^2 + \rho\, x^T (I - W)\, x,$
leading to a closed-form expression for $\hat{x}$,
(48) $\hat{x} = \left(I + \rho(I - W)\right)^{-1} y.$
The authors of [24] suggest an iterative graph-based framework for image restoration. Specifically, in the case of image denoising [23], they suggest a variant of Equation (47),
(49) $\hat{x} = \arg\min_{x} \; (x - y)^T W (x - y) + \rho\, x^T (I - W)\, x.$
Differently from Equation (47), the above expression offers a weighted data-fidelity term, resulting in the following closed-form expression for the final estimate:
(50) $\hat{x} = \left(W + \rho(I - W)\right)^{-1} W y.$
It turns out that Equation (28), the steady-state result of the SOS boosting, i.e.,
(51) $\hat{x}^{*} = \left(I + \rho(I - W)\right)^{-1} W y,$
can also be treated as emerging from a graph Laplacian regularizer, being the outcome of the following cost function:
(52) $\hat{x}^{*} = \arg\min_{x} \; \|x - Wy\|_2^2 + \rho\, x^T (I - W)\, x.$
Notice the differences between Equations (47), (49), and (52). The last expression suggests that SOS aims to find an image $x$ that is close to the estimated image $Wy$, rather than to the noisy $y$ itself. In the spirit of the SOS boosting,
we can suggest expressing the above-mentioned graph Laplacian regularization methods, i.e., Equations (48) and (50), in a recursive manner, providing novel ”black-box” iterative algorithms that minimize their corresponding penalty functions without explicitly building the matrix $W$. Starting with Equation (48), the steady-state outcome $\hat{x}$ should satisfy
(53) $\left(I + \rho(I - W)\right)\hat{x} = y.$
There are many ways to rearrange this expression using the fixed-point strategy in order to obtain a recursive update formula. We shall adopt a path that leads to an iterative process operating on the strengthened image, $y + \rho\hat{x}$, in order to expose the similarities and differences with respect to our scheme. Therefore, we suggest adding $W(y + \rho\hat{x})$ to both sides, i.e.,
(54) $\left(I + \rho(I - W)\right)\hat{x} + W\left(y + \rho\hat{x}\right) = y + W\left(y + \rho\hat{x}\right).$
Rearranging the above expression results in
(55) $\hat{x} = \frac{1}{1+\rho}\left[W\left(y + \rho\hat{x}\right) + \left(y - Wy\right)\right].$
As a consequence, the obtained iterative ”black-box” formulation of the conventional graph Laplacian regularization [16, 2] is given by
(56) $\hat{x}^{k+1} = \frac{1}{1+\rho}\left[f\left(y + \rho\hat{x}^{k}\right) + \left(y - f(y)\right)\right].$
As can be seen, we obtained an iterative algorithm that, similarly to SOS, operates on the strengthened image. However, rather than simply subtracting $\rho\hat{x}^{k}$ from the outcome, we add the method noise $y - f(y)$, and then normalize by $1+\rho$.
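As a sanity check, the recursion of Equation (56) can be verified numerically against the closed form of Equation (48) when the denoiser is a fixed linear filter. The row-stochastic $W$ below is a hypothetical Gaussian-kernel filter, used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "black-box" denoiser f(v) = W v, with W a
# row-stochastic Gaussian-kernel filter on a 1-D grid.
n = 16
t = np.linspace(0.0, 1.0, n)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.02)
W = K / K.sum(axis=1, keepdims=True)
f = lambda v: W @ v

y = rng.normal(size=n)   # toy noisy measurement
rho = 0.8
I = np.eye(n)

# Closed-form graph Laplacian regularization, Equation (48).
x_closed = np.linalg.solve(I + rho * (I - W), y)

# "Black-box" recursion, Equation (56): denoise the strengthened image,
# add the method noise y - f(y), and normalize by 1 + rho.
xk = np.zeros(n)
for _ in range(300):
    xk = (f(y + rho * xk) + (y - f(y))) / (1.0 + rho)

print(np.max(np.abs(xk - x_closed)))  # -> essentially zero
```

The iteration matrix here is $\frac{\rho}{1+\rho}W$, whose spectral radius is below 1 for any $\rho > 0$, which is why the recursion converges.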
In a similar way, Equation (50), which is formulated as
(57) $\left(W + \rho(I - W)\right)\hat{x} = W y,$
can be expressed by
(58) $\hat{x} = \frac{1}{\rho}\, W\left(y + (\rho - 1)\hat{x}\right),$
and, in the general case, the ”black-box” version of [23] is formulated by
(59) $\hat{x}^{k+1} = \frac{1}{1+\rho^{*}}\, f\left(y + \rho^{*}\hat{x}^{k}\right),$
where $\rho^{*} = \rho - 1$. Again, we see a close resemblance to our SOS method. However, instead of subtracting $\rho^{*}\hat{x}^{k}$ from the denoised strengthened image, we simply normalize it by $1+\rho^{*}$.
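The same numerical check applies to Equation (59): with a linear filter standing in for the denoiser (a hypothetical Gaussian-kernel filter, as before), the normalized recursion reaches the closed form of Equation (50).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear "black-box" denoiser f(v) = W v.
n = 16
t = np.linspace(0.0, 1.0, n)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.02)
W = K / K.sum(axis=1, keepdims=True)
f = lambda v: W @ v

y = rng.normal(size=n)
rho = 1.8                  # regularization weight of Equation (49)
rho_star = rho - 1.0       # the shifted parameter of Equation (59)
I = np.eye(n)

# Closed form of the weighted-fidelity cost, Equation (50).
x_closed = np.linalg.solve(W + rho * (I - W), W @ y)

# "Black-box" recursion, Equation (59): denoise the strengthened image
# and normalize by 1 + rho*; no subtraction step is needed.
xk = np.zeros(n)
for _ in range(300):
    xk = f(y + rho_star * xk) / (1.0 + rho_star)

print(np.max(np.abs(xk - x_closed)))  # -> essentially zero
```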
Equations (56) and (59) offer two iterative algorithms that essentially minimize the penalty functions (47) and (49), respectively. However, these algorithms offer far more – both can be applied with the denoiser as a ”black-box”, implying that no explicit matrix construction of $W$ (nor $\mathcal{L}$) is required. Furthermore, these schemes, given in the form of denoising the strengthened image, imply that parameter setting is trivial – the noise level is nearly $\sigma$, regardless of the iteration number. Lastly, an update of $W$ within the iterations of these recursive formulas seems most natural.
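For completeness, the SOS recursion itself admits the same kind of check: with a linear filter as a stand-in denoiser, iterating "strengthen, denoise, subtract" reaches the steady state of Equation (51). The filter below is again a hypothetical Gaussian kernel, used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear denoiser f(v) = W v.
n = 16
t = np.linspace(0.0, 1.0, n)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.02)
W = K / K.sum(axis=1, keepdims=True)
f = lambda v: W @ v

y = rng.normal(size=n)
rho = 0.8
I = np.eye(n)

# Steady-state SOS estimate, Equation (51): x* = (I + rho(I - W))^{-1} W y.
x_star = np.linalg.solve(I + rho * (I - W), W @ y)

# SOS recursion: strengthen the image, denoise it, subtract the strengthening.
xk = np.zeros(n)
for _ in range(500):
    xk = f(y + rho * xk) - rho * xk

print(np.max(np.abs(xk - x_star)))  # -> essentially zero
```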
6 Experimental Results
In this section, we provide detailed results of the SOS boosting and its local-global variant – ”sharing the disagreement”. The results are presented for the images Foreman, Lena, House, Fingerprint and Peppers (see Figure 2). These images are extensively tested in earlier work, thus enabling a convenient and fair demonstration of the potential of the proposed boosting. The images are corrupted by an additive zero-mean Gaussian noise with a standard deviation $\sigma = 10, 20, 25, 50, 75$, and $100$. The denoising performance is evaluated using the Peak Signal to Noise Ratio (PSNR), defined as $10\log_{10}\left(255^2/\mathrm{MSE}\right)$, where MSE is the Mean Squared Error between the original image and its denoised version.
6.1 SOS Boosting with state-of-the-art algorithms
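The PSNR computation used throughout the experiments can be sketched as follows, assuming 8-bit images with a peak value of 255:

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak Signal to Noise Ratio: 10 * log10(peak^2 / MSE)."""
    err = np.asarray(clean, dtype=np.float64) - np.asarray(denoised, dtype=np.float64)
    mse = np.mean(err ** 2)
    return 10.0 * np.log10(peak**2 / mse)

# A uniform error of one gray level gives MSE = 1, so PSNR = 20*log10(255).
a = np.zeros((8, 8))
b = np.ones((8, 8))
print(psnr(a, b))  # ~48.13 dB
```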
The proposed SOS boosting is applicable to a wide range of denoising algorithms. We demonstrate its abilities by improving several state-of-the-art methods: (i) K-SVD [15], (ii) NLM [6, 7], (iii) BM3D [13], and (iv) EPLL [53]. The K-SVD [15], which was discussed in detail in this paper, is based on an adaptive sparsity model. The NLM [6] leverages the ”self-similarity” property of natural images, i.e., the assumption that each patch may have similar patches within the image. The BM3D [13] combines the ”self-similarity” property with a sparsity model, achieving the best restoration results, even touching some recently developed image denoising bounds [28]. The EPLL [53], which was described in Section 1, represents the image patches using a Gaussian Mixture Model (GMM), and encourages the global result to comply with the local patch prior. As can be inferred, these algorithms are diverse and build upon different models. Furthermore, the EPLL can be considered a boosting method by itself, designed to improve a GMM denoising algorithm. The diversity of the above algorithms emphasizes the potential of the SOS boosting.
The improved denoising performance is gained simply by applying the authors’ original software as a ”black-box”, without any internal algorithmic modifications or parameter settings^{6}The original K-SVD uses patches of a certain size, but our experiments show that a modified patch size yields nearly the same results for the core algorithm, while enabling a larger improvement with the SOS boosting. As a consequence, in the following experiments we demonstrate the results of the modified version.. Such modifications may lead to better results, and we leave these for future study. In order to apply the SOS boosting we need to set the parameters $\rho$, $\tau$, and a modified noise level $\hat{\sigma}$ (although $\sigma$ is known). The parameter $\hat{\sigma}$, which might be a little higher than $\sigma$, represents the noise level of the strengthened image $y + \rho\hat{x}^{k}$. We can estimate $\hat{\sigma}$ automatically (e.g., using [52]) or tune a fixed value manually. In the following experiments we choose the second option. We set $\tau = 1$ (the effect of $\tau$ is demonstrated later on) and run several tests to tune $\rho$ and $\hat{\sigma}$ per each noise level and denoising algorithm, as detailed in Table 1 under the ’SOS params’ column.
In the case of the EPLL and BM3D, the authors’ software is designed to denoise an input image in the range of 0 to 1. As such, we apply the SOS boosting (Equation (26)) in the following formulation:
(60) $\hat{x}^{k+1} = \tau\left[(1+\rho)\, f\!\left(\frac{1}{1+\rho}\left(y + \rho\hat{x}^{k}\right)\right) - \rho\hat{x}^{k}\right] + (1-\tau)\,\hat{x}^{k},$
with a corresponding noise level $\hat{\sigma}/(1+\rho)$. In order to remain consistent with the SOS parameters of the K-SVD and NLM, which apply Equation (26) directly, we provide hereafter the parameters $\rho$ and $\hat{\sigma}$ for the EPLL and BM3D.
Table 1: K-SVD [15] denoising results (PSNR in dB), original vs. SOS boosting.
$\sigma$ | SOS params ($\rho$) | Foreman Orig/SOS | Lena Orig/SOS | House Orig/SOS | Fingerprint Orig/SOS | Peppers Orig/SOS | Average Orig/SOS | Imprv.
10  | 0.30 | 36.92/37.13 | 35.47/35.58 | 36.25/36.49 | 32.27/32.35 | 34.68/34.71 | 35.12/35.25 | 0.13
20  | 0.60 | 33.81/34.11 | 32.43/32.67 | 33.34/33.62 | 28.31/28.54 | 32.29/32.35 | 32.04/32.26 | 0.22
25  | 1.00 | 32.83/33.12 | 31.32/31.62 | 32.39/32.72 | 27.13/27.44 | 31.43/31.49 | 31.02/31.28 | 0.26
50  | 1.00 | 28.88/29.85 | 27.75/28.37 | 28.01/28.98 | 23.20/23.98 | 28.16/28.66 | 27.20/27.97 | 0.77
75  | 1.00 | 26.24/27.32 | 25.74/26.40 | 25.23/26.85 | 19.93/21.88 | 25.73/26.72 | 24.57/25.83 | 1.26
100 | 1.00 | 25.21/25.39 | 24.50/24.99 | 23.69/24.59 | 17.98/19.61 | 24.17/25.03 | |