In recent years we have seen an interesting trend in which many image restoration algorithms operate on local image patches rather than processing the image as a whole. These techniques impose statistical prior knowledge on the patches of the processed image, and in many cases they lead to state-of-the-art results. For example, in the Gaussian denoising case, the algorithm presented in  performs denoising using a statistical model based on sparse representation of the image patches, training a dictionary with the K-SVD algorithm. The BM3D algorithm reported in  exploits interrelations between patches by grouping them into 3D groups and applying collaborative filtering that is likewise based on sparse representation. The work reported in [5, 6, 7] extends PCA and dictionary learning, both in the context of patches, for handling severe Poisson image denoising. The scheme reported in  adopts the sparse representation model to handle the single-image super-resolution problem. The papers [9, 10] both propose a GMM modeling of image patches and demonstrate its effectiveness on a variety of inverse problems. The NCSR method by Dong et al.  uses sparse representation for solving both the super-resolution and the deblurring problems. The IDD-BM3D method in  employs BM3D frames for image deblurring. All these papers, and many others, rely on operating on patches in order to complete the restoration task at hand. Many works use sophisticated priors when operating locally, yet most often they resort to simple averaging when combining the restored patches.
The work reported in  takes another step toward exploiting interrelations between image patches. It proposes to construct a 1D smoothed signal by applying a permutation to the pixels of the corrupted image. The permutation is obtained by ordering image patches to form "the shortest possible path", approximating the solution of the traveling salesman problem (TSP). Given the sorted image, the clean image is recovered by applying a 1D filter to the ordered signal. This method is simple and yet it leads to high-quality results. On the down side, it is limited to the Gaussian denoising and inpainting problems. The work reported in  takes an extra step in exploring the patch-ordering concept. It constructs a sophisticated and very powerful regularizer by combining the patch-permutation idea with a wavelet pyramid [13, 14]. The obtained regularizer is used for solving general inverse problems in imaging. This method leads to high-quality results in a series of tests; however, it is quite involved.
In this paper we propose to compose the whole image from local patches using a prior that exploits the interrelations between them. We harness the patch-permutation idea, merging it with the classical maximum a posteriori (MAP) estimator, by proposing a new, simple, yet powerful regularization for inverse imaging problems. We formulate the inverse problem as a weighted sum of two penalty terms: (i) a regular negative log-likelihood, and (ii) a novel regularization expression that forces smoothness in a robust way, by basing it on a reordered list of the image pixels, obtained according to the similarity of image patches.
For constructing the permutation-based regularization, we follow the core idea presented in . We rely on the assumption that in an ideal image, close similarity between patches indicates proximity between their center pixels. We therefore build a 1D (piece-wise) smoothed signal by applying a patch-based permutation to the restored (unknown) image pixels. The permutation is obtained by extracting all possible patches with overlaps from the currently recovered image, and ordering them to form the (approximated) shortest possible path. The resulting ordering induces the permutation.
The proposed regularization forces smoothness on the obtained 1D signal via a Laplacian, penalized by a robust norm. Since the ordering is associated with all the pixels within the patches, the smoothness term is accumulated over all these pixels. We also deploy weights that take into account the actual distances between the consecutive patches in the ordering.
The proposed scheme is demonstrated on several different problems: (i) additive white Gaussian image denoising, (ii) severe Poisson image denoising, (iii) image deblurring, and (iv) single-image super-resolution. We initialize our algorithm with the output of a recent method that handles each of these problems. The reconstructed image is then obtained by minimizing the penalty function described above using the L-BFGS method [15, 16]. Our extensive experiments indicate that applying the proposed scheme leads to state-of-the-art results.
We should note that the proposed scheme bears some similarity to recent work offering regularization of inverse problems by utilizing the similarity between patches formed as a graph [17, 18, 19, 20, 21, 22, 23, 24, 25]. These works propose various formats of using this graph's Laplacian as a sparsifying operator. Our approach could be considered a special case of such a Laplacian regularization, which forces the graph to be a simple and continuous chained ordering of the image pixels. As such, the regularization we obtain is simpler and easier to manage (since we keep only one forward and one backward neighbor per pixel). In addition, our approach provides a stronger stabilizing effect for highly ill-posed problems, since it ties all the pixels to each other.
The paper is organized as follows. In Section 2 we discuss the principles behind the permutation construction, and how it becomes useful as a regularizer. Section 3 describes the proposed algorithm, along with the numerical scheme used. Section 4 presents experimental results and compares the new method with other leading schemes. Section 5 provides further discussion and analysis. Section 6 concludes this paper and raises directions for future work.
2 Constructing the permutation
A common assumption in image processing is that clean images are usually (piece-wise) smooth, i.e., the difference between any two neighboring pixels tends to be small. Most image processing algorithms, be it for restoration, segmentation, compression, or other tasks, rely on this model to some extent. The problem with this assumption, however, is that deviations from this behavior are caused by image edges and texture; both are central in forming and defining the visual content of an image, and as such they cannot be discarded as simple outliers.
Adopting a totally different perspective on the handling of an image, a convenient and often-used technique for developing an image processing algorithm is to convert the 2D image into a 1D array, treat the image as a vector, and then convert the resulting 1D array back to a 2D array. There are several popular scan methods that convert an image to a vector, the prominent of which are the raster scan, the zigzag scan, and the Hilbert-Peano and other space-filling curves.
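As a concrete illustration of such a conversion, the sketch below implements a simple horizontal zigzag (snake) scan and its inverse. The function names are ours, and this is only the simplest member of the scan family mentioned above:

```python
import numpy as np

def zigzag_scan(img):
    """Horizontal zigzag (snake) scan: traverse rows left-to-right and
    right-to-left alternately, so consecutive samples stay spatially adjacent."""
    rows = [row if i % 2 == 0 else row[::-1] for i, row in enumerate(img)]
    return np.concatenate(rows)

def zigzag_unscan(vec, shape):
    """Invert the scan back to a 2D array."""
    img = vec.reshape(shape).copy()
    img[1::2] = img[1::2, ::-1]   # un-flip the odd rows
    return img

img = np.arange(12).reshape(3, 4)
v = zigzag_scan(img)              # [0 1 2 3 7 6 5 4 8 9 10 11]
assert np.array_equal(zigzag_unscan(v, img.shape), img)
```

Note that such fixed scans ignore image content entirely, which is exactly the limitation the patch-ordering idea addresses.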
The question we pose now is this: how can we combine the two approaches mentioned above? Namely, given a corrupted image, how should we construct a 2D-to-1D conversion that produces the smoothest possible vector when applied to the corresponding clean image? Such a conversion would be extremely helpful for image restoration tasks, because in the 1D domain the processing is expected to be very simple and effective. We should emphasize that the sought conversion must be robust, i.e., able to produce a meaningful ordering even when operated on corrupted data.
A word on notation: the 2D-to-1D conversion we are seeking is represented by a permutation matrix P. Applying P to the column-stacked image produces a vector with the reordered pixels.
In this work we build on the ideas presented in , while giving them a new and novel interpretation. In order to propose such a smoothing conversion, we shall assume that each pixel of the clean image can be represented by the corresponding surrounding patch taken from the corrupted image. In other words, the assumption is that if two patches of the corrupted (or clean) image are close in terms of some distance function, then their corresponding central pixels in the clean image are expected to be similar as well. We will refer to this hereafter as our core assumption. Throughout this work we use the Euclidean distance for assessing proximity between patches. We extract all possible patches with overlaps from the image, and our method treats these patches as points in a high-dimensional space, defining a graph where the patches are its vertices and the distances between them are the edge weights.
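The construction above can be made concrete with a short sketch: each overlapping patch is flattened into a vector, and Euclidean distances between these vectors serve as the graph's edge weights. This is an illustrative simplification (no boundary padding, hypothetical function names):

```python
import numpy as np

def extract_patches(img, p):
    """All overlapping p-by-p patches of img, flattened into the rows of an
    (N, p*p) matrix; each row is one vertex of the patch graph."""
    H, W = img.shape
    patches = [img[i:i + p, j:j + p].ravel()
               for i in range(H - p + 1) for j in range(W - p + 1)]
    return np.stack(patches)

img = np.random.rand(8, 8)
X = extract_patches(img, 3)           # 36 patches, each a point in R^9
d01 = np.linalg.norm(X[0] - X[1])     # Euclidean edge weight between two vertices
```

The full method associates (roughly) one patch per pixel via boundary padding; that detail is omitted here.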
Recall that our aim is to find an ordering (or , its permutation matrix equivalent) such that the resulting permuted clean image is the smoothest. This can be defined as the following minimization task:
where  is a 1D difference operator (a simple derivative). Interestingly, it is tempting to propose an ordering of the image patches based on a simple sort of their center pixels, as it would lead to the ideal (smoothest) permutation. However, since the clean image is not available and we only have its corrupted version, this option is impossible. Thus, the above objective is replaced by an alternative that relies on the assumption made above:
This means that the graph vertices are ordered to form the shortest possible path that visits each vertex exactly once. Namely, we formulate the search for the ordering as a classic Traveling Salesman Problem (TSP) .
A natural way to proceed is to use a known approximation algorithm that performs well for the TSP. The problem with this approach is that the permutation induced by a too-good TSP solver in fact creates and magnifies artifacts. An example of this behavior and the artifacts induced is shown in Figure 1, where we use the Lin-Kernighan algorithm for approximating the TSP solution [27, 28], and then apply our recovery algorithm. We should note that the two orderings (our approach, as described in Algorithm LABEL:alg:reordering, and Lin-Kernighan) lead to different average total variation measures when assessed on the original (true) image: the better (Lin-Kernighan) ordering improves the smoothness obtained, but this does not translate to a better outcome in our algorithm.
The problem with a too-good TSP solver is that it adjusts the ordering to artifacts in the initial image, thereby creating correlation between the regularizer and the artifacts, which magnifies these artifacts.
The output of our scheme with the Lin-Kernighan heuristics or Algorithm LABEL:alg:reordering for solving the TSP. The scheme is applied to the Gaussian denoising task, and initialized with the BM3D result.
In order to overcome this artifact-magnification problem, we use a randomized approach for the TSP solution. More specifically, we use the randomized version of the Nearest Neighbor (NN) heuristics presented in  as a TSP solver. This algorithm starts from an arbitrary patch and continues by finding the two closest neighbors of the currently held vertex, with the restriction that they have not been assigned yet, choosing one of them at random. At the k-th stage of the algorithm, k vertices have already been accumulated; given the last of them, we choose one of its two closest unvisited neighbors at random, with fixed probabilities summing to one. The nearest-neighbor search is performed within the set of unvisited patches, and it is limited to a window around the current patch. If there is only one unvisited patch in this region, the heuristics chooses it. When no unvisited patches remain in the window, the first and second nearest-neighbor search is performed among all unvisited image patches. The window restriction is designed to reduce computational complexity; however, it is also important for assuring that relevant patches are matched. The randomized NN heuristics is summarized in Algorithm LABEL:alg:reordering. An example of a reordered image is presented in Figure 2.
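A minimal sketch of such a randomized nearest-neighbor ordering is given below. For brevity it searches over all unvisited patches rather than within a spatial window, and the paper's exact selection probabilities are replaced by a single illustrative parameter:

```python
import numpy as np

def randomized_nn_ordering(X, p_first=0.7, rng=None):
    """Greedy randomized nearest-neighbor path over the rows of X.
    At each step the two closest unvisited patches are found, and the closer
    one is picked with probability p_first (the other with 1 - p_first).
    The spatial search-window restriction of the full algorithm is omitted."""
    rng = np.random.default_rng(rng)
    N = len(X)
    unvisited = set(range(1, N))
    order = [0]                                  # start from an arbitrary patch
    while unvisited:
        cur = X[order[-1]]
        cand = np.fromiter(unvisited, dtype=int)
        d = np.linalg.norm(X[cand] - cur, axis=1)
        nearest = cand[np.argsort(d)[:2]]        # the two closest unvisited
        if len(nearest) == 1 or rng.random() < p_first:
            nxt = int(nearest[0])
        else:
            nxt = int(nearest[1])
        order.append(nxt)
        unvisited.remove(nxt)
    return order
```

The randomness is exactly what prevents the ordering from locking onto artifacts of the initial image.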
A drawback of the described reordering is its greedy nature. Indeed, as can be seen in Figure 2, the last part of the reordered clean image is not smooth. This is due to the fact that in the last stages of Algorithm LABEL:alg:reordering very few unvisited patches remain for the nearest-neighbor search. While this may seem troubling, in Section 5 we explain why this phenomenon has little effect on the final restored results.
3 The reconstruction algorithm
3.1 Constructing the Regularization Term
Our regularizer is constructed as a sum of several smoothness terms. A smoothness term is obtained by applying the matrices ,  and  to the column-stacked image, and penalizing the result with a robust norm:
In this expression,  is the permutation matrix that represents the ordering obtained by the TSP solver,  is a simple 1D Laplacian, and  is a diagonal weighting matrix of the form , where
The weights take into account the distances between consecutive patches in the ordering. Intuitively, we expect the centers of closer patches to be more similar. Therefore, the weights are chosen to be inversely proportional to the norm of the 1D Laplacian of the corresponding patches. In other words, the coefficients are calculated using the following formula:
where ,  and  are the ordered patches. The boundary cases (the first and last positions) are handled by setting their weights separately. We note that if the weights were defined as in Equation (5) but with patches of size 1x1 pixels (i.e., only the center pixel constitutes the patch), then our regularization would simplify to an l0-norm. The reason is that in this case the weights simply hold the (inverted) absolute values of the 1D Laplacian of the ordered pixels; the penalty term computes the very same Laplacian and divides by these values, so all the ratios are ones for non-zero values and zeros elsewhere, yielding an l0-norm. Thus, the term in Equation (3) seeks to sparsify the second derivative of the permuted image in a robust way.
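The weighted smoothness term can be sketched as follows; the exact weight formula, boundary handling, and the edge-magnification factors of Equation (4) are simplified here:

```python
import numpy as np

def smoothness_term(x_ordered, patches_ordered, eps=1e-8):
    """Robust (l1) penalty on the second difference of the reordered pixel
    signal, weighted inversely by the norm of the second difference of the
    corresponding ordered patches (rows of patches_ordered)."""
    lap_x = x_ordered[:-2] - 2 * x_ordered[1:-1] + x_ordered[2:]
    lap_p = patches_ordered[:-2] - 2 * patches_ordered[1:-1] + patches_ordered[2:]
    w = 1.0 / (np.linalg.norm(lap_p, axis=1) + eps)
    return np.sum(w * np.abs(lap_x))

# With 1-pixel "patches" each ratio is 1 wherever the Laplacian is non-zero,
# so the term roughly counts non-zeros (an l0-like measure), as noted above:
x = np.array([0., 0., 1., 1., 1.])
val = smoothness_term(x, x[:, None])   # ~2: two non-zero second differences
```

The weighting makes the penalty tolerant exactly where the ordering itself indicates the patches are dissimilar.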
The problem with these coefficients is that the distances between patches that contain edges or texture are usually relatively high, and therefore their weights are relatively low. The coefficients appearing in Equation (4) are designed to overcome this problem by magnifying the weights of edge/texture patches,
where  is a parameter to be set. In order to identify patches that contain edges or texture, our algorithm calculates the magnitude of the image gradient using the central difference method. Namely, the magnitude of the gradient at location (i, j) is calculated using:
where  and  are the horizontal and vertical gradients at location (i, j). If we denote by  the value of the pixel at location (i, j), then the horizontal and vertical gradients are given by:
A patch is identified as active if the sum of its gradient magnitudes is above a threshold , i.e.:
Finally, the values of are clipped by in order to reduce the condition number of the operator MLP, thereby increasing the rate of convergence of the optimization problem.
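The active-patch detection described above can be sketched as follows; boundary gradients are simply zeroed here, an illustrative choice:

```python
import numpy as np

def active_patch_mask(img, p, T):
    """Mark p-by-p patches whose accumulated gradient magnitude exceeds T.
    Gradients use central differences; boundary pixels get zero gradient."""
    gh = np.zeros_like(img)
    gv = np.zeros_like(img)
    gh[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # horizontal central diff
    gv[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # vertical central diff
    mag = np.sqrt(gh ** 2 + gv ** 2)
    H, W = img.shape
    return np.array([[mag[i:i + p, j:j + p].sum() > T
                      for j in range(W - p + 1)]
                     for i in range(H - p + 1)])
```

Patches flagged by this mask are the ones whose weights get magnified in Equation (4).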
3.2 Subimage Accumulation
In the smoothness term in Equation (3), the ordering is associated only with the central pixels of the patches. However, since the permutation originates from a full patch ordering, it makes sense to associate the ordering with all the pixels within the patches, as shown in Figure 3. Therefore, we propose to construct the regularizer as a sum of the smoothness terms over all pixels within the patches:
The operator  associates an (i,j)-shifted sub-image of the recovered image with the ordering. The notion of sub-images and their role here is depicted in Figure 4. First, the image is padded using mirror reflection of itself with  pixels on all sides. Then the operator extracts a sub-image of the original size, starting from the (i,j)-th location in the padded image. For example, in Figure 2(a) we refer to the location , while in Figure 2(b) it is .
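A possible sketch of this sub-image extraction, assuming the pad width equals the patch radius:

```python
import numpy as np

def subimage(img, i, j, p):
    """(i, j)-shifted sub-image of the same size as img, taken from a
    mirror-padded copy (pad width p//2 on all sides, the patch radius)."""
    r = p // 2
    padded = np.pad(img, r, mode='reflect')
    H, W = img.shape
    return padded[i:i + H, j:j + W]

img = np.arange(16.).reshape(4, 4)
subs = [subimage(img, i, j, 3) for i in range(3) for j in range(3)]
# subimage(img, r, r, 3) is the unshifted image itself
```

Each of these p*p shifted copies is permuted by the same ordering, giving the accumulated regularizer of Equation (8).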
Accumulating the smoothness terms over all patch pixels increases the number of orderings from one to the number of pixels in a patch, and introduces an implicit spatial prior: this way, similarity is forced between whole patches rather than only between their central pixels. An example of the output of our reconstruction scheme without subimage accumulation is shown in Figure 5; the resulting image appears very rugged due to the lack of a spatial smoothing prior.
The work reported in  suggests further increasing the number of orderings by applying the TSP solver 10 times (each time starting from a random patch), thereby creating 10 different permutation matrices. We explored this option by accumulating the regularization term in Equation (8) over such a group of permutations. For the regularization employed here, our findings suggest that this approach has a negligible effect on the results.
3.3 Building the Objective Function
In order to build the objective function, we denote by  the column-stacked version of the corrupted image, by  the desired image, and by  the regularization term in Equation (8). The inverse problem is then formulated as a weighted sum of the negative log-likelihood term and the regularizer. In order to keep values within the pixel range, we add to the objective function two penalties:  and . The reconstruction algorithm is thus formulated as the following minimization problem:
where is defined as:
Each component of this penalty is greater than zero only when the corresponding pixel value exceeds the allowed range, and grows with the deviation. The parameter  controls the strength with which this penalty is enforced.
In order to use a derivative-based optimization method for solving the obtained minimization problem, the energy function in Equation (9) is smoothed by replacing the absolute values with a smooth approximation. In other words, each absolute-value term is replaced with its smooth version, where:
The function  is smooth and convex (in its argument), since its second derivative is strictly positive.
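As an illustration of this smoothing, one common smooth surrogate for the absolute value is sqrt(t^2 + eps); the exact form and constant used in the paper may differ:

```python
import numpy as np

def smooth_abs(t, eps=1e-4):
    """Smooth, convex surrogate for |t|: sqrt(t^2 + eps)."""
    return np.sqrt(t * t + eps)

def smooth_abs_grad(t, eps=1e-4):
    """Derivative t / sqrt(t^2 + eps); well defined at t = 0, which is what
    a quasi-Newton solver such as L-BFGS requires."""
    return t / np.sqrt(t * t + eps)
```

For large |t| the surrogate and its gradient approach |t| and sign(t), so the robust behavior of the penalty is preserved.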
In all our simulations we solve the unconstrained smoothed optimization problem using the L-BFGS method  as implemented in minFunc . The minimization runs for approximately 200-300 iterations; for smaller images it takes around 1-2 minutes, and for larger ones, 5-10 minutes. Our simulations ran on an Intel Core i7 machine with 16GB RAM. The code of Algorithm 1 and parts of the minFunc code are implemented in C; the rest of the code is implemented in Matlab.
4 Experimental Results
4.1 Gaussian Denoising
For the Gaussian denoising task, the smooth unconstrained optimization problem is formulated as:
We run denoising experiments for several noise levels. Our scheme is initialized with the output of the BM3D algorithm , i.e., the ordering is calculated using the BM3D output. The simulation parameters are summarized in Tables 1 and 2. Table 3 presents quantitative results of these experiments. For each test we compare the PSNR achieved by our scheme with that of the initial image, and show the improvement. Note that we do not report the SSIM quality measure in this table, simply because the conclusions it leads to are the same as those drawn from the PSNR. Examples of qualitative results are shown in Figure 6. For high noise levels we obtain an improvement in almost all experiments. For medium noise levels, our scheme fails to improve the PSNR in most of the experiments, because the global patch ordering forces self-similarity on the image patches, and therefore tends to eliminate distinctive details. In addition, our algorithm does not perform well on image areas that contain fine texture or a high density of edges, such as the striped pants in the Barbara image and the friction ridges in the fingerprint image.
Along with the experiments described above, we also performed the Gaussian denoising experiment while replacing the Euclidean distance by the SSIM index. We calculated the SSIM measure of small patches using the algorithm presented in , with two differences: (i) we used a Gaussian weighting function of a different size than the one used in  for computing the local statistics (mean, variance and covariance); (ii) the local statistics of any pixel in the patch were calculated using only the pixels that belong to the patch. For example, for computing the local statistics of the top-left pixel of the patch, only the corresponding quadrant of the Gaussian window was used; in fact, the full Gaussian window was used only for calculating the statistics of the central pixel of the patch. Unfortunately, this scheme led to performance deterioration when compared to the Euclidean option. We believe the reason for this behavior is the following: the SSIM ordering minimizes the distance between patches in terms of their mean, variance and covariance, while our regularizer penalizes the distance between pixels. Thus, in this case, the ordering and the regularizer are not consistent with each other. Clearly, this opens up an opportunity to redefine the regularizer to work in terms of the SSIM measure; we leave this as a future extension of our work.
4.2 Image Deblurring

For the deblurring task, the smooth unconstrained optimization problem is the following:
where is a blur matrix.
We repeat the experiments reported in . The blur parameters for each experiment, the PSF (point spread function) and the noise variance, are summarized in Table 4. The PSFs are normalized to unit sum. Our scheme is initialized with the output of the IDD-BM3D algorithm . The simulation parameters are summarized in Tables 1 and 5. Table 6 presents quantitative results of the deblurring experiments. For each test we compare the PSNR achieved by our scheme with that of the initial image, and show the improvement. Examples of qualitative results are shown in Figure 7. Our algorithm improved the image quality in most experiments.
4.3 Single Image Super-Resolution

For the single image Super-Resolution (SR) task, our problem is:
where the operator  blurs the high-resolution image with a Gaussian kernel of standard deviation 1.6, and then downsamples the image by a factor of 3 in each axis. In addition, in the noisy case, Gaussian noise is added to the low-resolution image.
We repeat the experiments reported in , including both the noiseless and noisy cases. Our scheme is initialized with the output of the NCSR algorithm . The simulation parameters are summarized in Tables 1 and 7. Table 8 shows the quantitative results of the SR experiments, and qualitative results are shown in Figure 8. Our algorithm improves the image quality in almost all experiments.
4.4 Poisson Denoising
The Poisson negative log-likelihood is given by the following formula:
Since the logarithm is not defined for non-positive values, we extrapolate the likelihood terms for such values using a second-order Taylor series:
We then construct the extrapolated likelihood term as a sum of these functions:
Since these functions penalize the negative values, the corresponding components of the function can be omitted for non-negative arguments. Therefore, the function reduces to:
The difference between the two is that the latter sums only over the components with positive arguments. Thus, the unconstrained optimization problem for Poisson denoising is formulated as:
where is calculated using the peak value: , and is the maximum pixel value of the clean image.
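The extrapolated Poisson negative log-likelihood can be sketched as follows; the expansion point and its handling are illustrative choices, not the paper's exact construction:

```python
import numpy as np

def poisson_nll(x, y, x0=1e-2):
    """Poisson negative log-likelihood sum(x - y*log(x)), with the terms for
    x <= x0 replaced by their second-order Taylor expansion about x0, so the
    function stays defined (and smooth) for non-positive estimates."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # exact term where it is safe to evaluate
    f = np.where(x > x0, x - y * np.log(np.maximum(x, x0)), 0.0)
    # Taylor expansion of t - y*log(t) about t = x0:
    # f(x0) + f'(x0)(x - x0) + f''(x0)(x - x0)^2 / 2,
    # with f'(t) = 1 - y/t and f''(t) = y/t^2
    d = x - x0
    taylor = (x0 - y * np.log(x0)) + (1 - y / x0) * d + (y / x0 ** 2) * d ** 2 / 2
    return float(np.sum(np.where(x > x0, f, taylor)))
```

The quadratic branch keeps the objective finite and differentiable everywhere, which is what the L-BFGS solver requires.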
We run experiments for several noise levels: peak = 4, 2, 1, 0.5, 0.2 and 0.1. Our scheme is initialized with the output of the SPDA algorithm . Since SPDA achieves better results for low peaks using binning, we also use binning in the simulations with peaks 0.5, 0.2 and 0.1. The simulation parameters are summarized in Tables 9 and 10. Table 11 shows quantitative results of the experiments, and examples of qualitative results are shown in Figure 9. For the higher peak values (peaks 4 and 2 without binning, and peak 0.5 with binning), our algorithm improves the image quality in almost all experiments.
5 Discussion

5.1 Relation to Previous Patch-Ordering Methods

There is an interesting connection between the approach presented in this paper and the work reported in [1, 2], which introduced the idea of patch-ordering. This connection is easiest to explain in the context of Gaussian denoising. If in our objective function in Equation (9) we replace the robust-statistics regularization with a Euclidean norm, remove the two range penalties, remove the weighting matrix, and do not employ subimage accumulation, we get the following minimization problem:
which can be rewritten as
since P is unitary. The solution of this problem is
In other words,
where  is a circulant matrix that represents a convolution filter. Therefore, the image denoising task posed by the objective function in (21) reduces to applying the permutation to the noisy image, smoothing the permuted signal with the convolution filter, and obtaining the result by applying the inverse permutation. This solution is similar to the basic idea of the scheme in , as formulated in Equation (2) of Part II.
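The permute-filter-unpermute interpretation can be sketched directly; here the filter matrix is formed from a plain (non-circular) second difference, so it is only approximately the circulant filter discussed above:

```python
import numpy as np

def permute_filter_denoise(y, perm, lam):
    """Closed-form minimizer of ||x - y||^2 + lam * ||D P x||^2:
    permute the noisy signal, apply the linear smoothing filter
    (I + lam * D^T D)^{-1}, and permute back.  D is a 1D second
    difference (Laplacian)."""
    n = len(y)
    D = (np.eye(n) * -2 + np.eye(n, k=1) + np.eye(n, k=-1))[1:-1]
    A = np.eye(n) + lam * D.T @ D
    z = np.linalg.solve(A, y[perm])   # smooth the permuted signal
    x = np.empty(n)
    x[perm] = z                       # apply the inverse permutation
    return x
```

With lam = 0 the filter is the identity and the input is returned unchanged, which makes the role of the regularization weight explicit.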
If we pad the image by circular repetition, the matrices  are unitary and circulant. In this case, the objective function in Equation (25) can be rewritten as the sum of the contributions of all subimages,
where the contribution of the subimage is
The approach taken in this work is to optimize this penalty directly (had we chosen to use the Euclidean norm as a regularizer). In contrast, one could apply a sub-optimal minimization strategy that minimizes the contribution of each subimage independently and averages the obtained estimates. In fact, this is closely related to the approach taken in . The result of this sub-optimal approach is
where each term minimizes the contribution of the corresponding subimage,
Equation (28) can be rewritten as
where  is the convolution filter we have seen before. Therefore, the formula in (30) reduces to the following operations: (i) extract all possible subimages using the operators  and apply the permutation to each subimage; (ii) smooth the permuted signals with the convolution filter; (iii) apply the inverse permutation to the results; (iv) return each subimage to its original place and average the results. To conclude, the formula in Equation (30) is reminiscent of the denoising strategy presented in Equation (7) of Part II in . We should stress, however, that the work in  is not a special case of the method presented in this paper, due to various additional enhancements used there which are not exploited in this work; these include learning the linear filter, cycle-spinning over the choice of the permutation, and more.
The work reported in , like the present work, constructs an objective function with a permutation-based regularization. However, the scheme in  uses a multiscale decomposition within the regularization, which makes the algorithm quite involved and computationally expensive. Our method is consequently simpler than the one proposed in , while remaining very effective. Table 12 compares the performance of our scheme with the algorithms presented in [1, 2] for the Gaussian denoising problem, and Table 13 presents a comparison with  for image deblurring. Tables 12 and 13 show that our method outperforms the scheme in , and achieves results comparable to those reported in .
5.2 Choosing the TSP solver
In Section 2 we have shown the artifact-magnification effect that arises when using a deterministic TSP solver. In this section we revisit this phenomenon in order to better explain and demonstrate it. Recall that our algorithm starts with some reconstruction result, which is naturally not perfect and thus may contain artifacts. Our regularization relies on a permutation of patches from this image. Thus, a permutation obtained by a more exact TSP solver in fact creates and magnifies artifacts, since it assigns patches with the same artifacts as neighbors in the ordering. If, on the other hand, one uses the cruder (and somewhat random) ordering that we propose, such similar patches are separated in the ordering, leading to a smoothing-out of some of these artifacts. Figure 1 in Section 2 shows an example of artifact magnification in the Gaussian denoising test. To complete this presentation, we bring here examples of this effect in other applications. In Figure 10 we compare the outputs of the proposed method with the Lin-Kernighan heuristics and with Algorithm LABEL:alg:reordering for the deblurring task. In Figures 11 and 12 we present the same comparison for the super-resolution and Poisson denoising tasks, respectively. Our conclusion from these experiments is that a more exact TSP solver magnifies artifacts in all applications.
5.3 Patch Reordering for Better Sparsification
In order to better understand the role of the permutation in the proposed algorithm, we compare the tail distribution functions of the Laplacian result under our ordering versus a zig-zag scan. The Laplacian result is the vector obtained by applying the weighted 1D Laplacian to the permuted image, which is the essence of the regularization term we propose. Using all images listed in Table 3 (referring to the Gaussian denoising test), we compute the cumulative distribution function by applying a cumulative sum to the histogram of the components of this vector. For comparison, we repeat this calculation while replacing our permutation with one obtained by a horizontal zig-zag scan ordering. A comparison of the tail distribution functions for the zig-zag scan and our permutation is shown in Figure 13. As can be seen, our ordering leads to better sparsification of the Laplacian result when applied to clean images; more specifically, the distribution for the zig-zag scan exhibits more non-zeros and a heavier tail, i.e., larger values.
5.4 Poorly Ordered Pixels
As already mentioned in Section 2, due to the greedy nature of the NN heuristics in Algorithm LABEL:alg:reordering, the last part of the reordered image is not as smooth as the rest of the ordering. Nevertheless, the reconstruction results of our algorithm, as shown in the previous section, are of good quality, which may seem puzzling. We therefore performed several experiments to better understand this behavior, and here we propose an explanation for this phenomenon.
In order to avoid side effects related to the initialization, we start our experiment by computing an oracle permutation, i.e., applying Algorithm LABEL:alg:reordering to the original clean image (Lena); we repeated this line of experiments for several images, and the conclusions remain the same. First we plot a graph of the absolute value of the 1D gradient of the reordered image. As can be seen in Figure 13(a), approximately 15% of the ordering towards the end is not smooth, just as expected. The location of these pixels in the image is shown in Figure 14(b); as can be seen, these pixels are mostly in edge and texture areas.
We now perform a Gaussian denoising experiment, and our interest is in seeing how the MSE of the output pixels depends on their location in the ordering. We divide the permuted image into 50 groups of pixels with 50% overlap between them, and plot a graph of the average MSE versus location in the ordering, where the x-axis corresponds to the center of each group and the y-axis is the MSE of the group. This graph is shown in Figure 13(b). Indeed, it is clear that the MSE grows as we approach the end of the ordering, implying that the last pixels are ill-treated. However, even for these seemingly poorly-served pixels, the obtained MSE is significantly lower than the noise level. This suggests that even though the last ordered pixels seem to have a poor choice of neighbors, their treatment is still rather effective. So how does this happen?
The answer resides in the sub-image accumulation proposed in our regularization scheme. If a pixel falls in the last part of one ordering, this does not imply that it lacks sufficiently close neighbors altogether, because the poor neighbor assignment holds only for the ordering applied to the central pixels of the patches. The very same poorly-positioned pixels are highly likely to be assigned effective neighbors in the other orderings, as each pixel participates in all of these shifted permutations. In order to demonstrate this, we check how many pixels fall in the last 15% of all orderings. For the image Lena this count drops to 3.6% of the pixels, and the location of these pixels is shown in Figure 14(c). These pixels are characterized by the fact that they have no relevant neighbors in the image, regardless of the permutation strategy adopted. As such, they are expected to be ill-treated in the restoration procedure.
5.5 Initialization Strategy
The prime drawback of the proposed method is that it relies on a good initialization. In order to demonstrate this, we run a Gaussian denoising experiment on Lena without the BM3D initialization. In this experiment we apply seven rounds of minimization of the penalty function, each followed by a construction of the permutation. The first iteration uses the noisy image itself for building the permutation, while the next iterations rely on the temporary output. We use different parameter values for the first, second and remaining iterations; as for the rest of the parameters, we use the values listed in Table 1. Qualitative and quantitative results of this experiment are shown in Figure 16. Clearly, the result falls short of BM3D and other state-of-the-art denoising methods. Thus, our algorithm relies heavily on a good-quality initialization, and further work is required to seek alternatives to this initialization strategy.