1 Introduction
Image denoising algorithms aim at restoring image information from a recorded version contaminated by noise. The noise is generally assumed to be signal-independent, zero-mean, additive white Gaussian noise. Over the past few decades, this problem has been studied extensively, and the field has witnessed continuous growth, resulting in a number of highly effective image denoising algorithms.
The majority of these algorithms exploit a basic, yet effective, patch-based approach. These algorithms can be broadly classified into two categories: spatial-domain and transform-domain methods. Algorithms in the spatial-domain category operate directly on the image pixels. A number of spatial-domain algorithms have been proposed, showing performance improvements for a variety of denoising scenarios (see [1, 2, 3]). The NL-means algorithm [4] is one of the most popular spatial-domain algorithms. It replaces each pixel by a weighted average of all other pixels, where the weights are calculated based on the similarity between the neighborhood of each pixel and that of the reference pixel. On the other hand, transform-domain algorithms perform denoising by thresholding the weak coefficients in some transformed domain [5, 6, 7, 8, 9, 10]. In particular, a sparse representation of the image over a given dictionary is utilized to identify the noise components. Typically, compressed sensing (CS) algorithms are applied to recover the sparse coefficients (see [11, 12, 13, 14, 15]). One important denoising algorithm belonging to the transform-domain category is K-SVD [16], which adapts a highly overcomplete dictionary computed via a prior training process. Unfortunately, this process impairs the computational flexibility of the algorithm.
Other denoising approaches combine more than one denoising methodology (see, e.g., [17, 18, 19, 20]), imposing an additional computational burden. For example, BM3D [21] takes advantage of both spatial- and transform-domain techniques by grouping similar image blocks into 3-D arrays and applying collaborative filtering in the transform domain. In general, state-of-the-art algorithms are capable of denoising images with high precision. However, in some instances, these algorithms tend to produce over-smoothed images in which important information, such as edges and textural details, is lost.
In this paper, we propose a novel image denoising algorithm based on collaborative CS in a sparse transform domain. The proposed algorithm is named collaborative support-agnostic recovery (CSAR). In the proposed algorithm, the sparse coefficients of an image patch are computed and refined via collaboration with similarly structured patches. This collaboration in computing the supports of the patches results in a more accurate sparse representation of these patches, hence producing enhanced image denoising performance. The proposed algorithm also lends itself well to a computationally simple implementation, as demonstrated in the following sections.
2 The proposed collaborative support-agnostic recovery (CSAR)
Let $\mathbf{Y}$ be an $N_1 \times N_2$ image matrix. We aim to estimate the latent image matrix $\mathbf{X}$ from its noisy observations

$\mathbf{Y} = \mathbf{X} + \mathbf{W}$, (1)

where $\mathbf{W}$ represents the noise matrix whose entries are i.i.d. random variables drawn from a Gaussian distribution with zero mean and variance $\sigma^2$. To find the estimated and denoised image $\hat{\mathbf{X}}$, we use the following three main steps.

2.1 Formation and grouping of image patches
We form $n \times n$ square patches around each pixel in the image, where $n$ is selected to be an odd number.¹

¹Our algorithm applies to the general case where patches could be rectangular or even linear. However, for simplicity and convenience, we focus on the special case of square patches in this paper.

Further, to accommodate the border pixels, we pad the image borders with $(n-1)/2$ pixels. This results in a total number of patches

$K = N_1 N_2$, (2)

where $N_1 \times N_2$ is the size of the image. Note that, for computational convenience, we represent the patches in (2) in vectorized form and use the resulting notation in the rest of the paper. So we have
$\mathbf{y}_k = \mathbf{x}_k + \mathbf{w}_k$, $k = 1, \dots, K$, (3)

where $\mathbf{y}_k$ and $\mathbf{x}_k$ are vectors of length $n^2$.
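As a concrete illustration, the patch formation and vectorization step can be sketched as follows; this is a minimal sketch, not the authors' implementation, and the function name `extract_patches` as well as the choice of reflective border padding are our own assumptions (the padding mode is not specified in the text).

```python
import numpy as np

def extract_patches(image, n):
    """Form an n-by-n patch around every pixel (n odd), padding the
    borders with (n - 1) // 2 pixels, and return each patch as a
    vector of length n*n -- one patch per pixel."""
    assert n % 2 == 1, "patch size must be odd"
    pad = (n - 1) // 2
    # Reflective padding is an assumption; any border extension works here.
    padded = np.pad(image, pad, mode="reflect")
    N1, N2 = image.shape
    patches = np.empty((N1 * N2, n * n))
    for i in range(N1):
        for j in range(N2):
            patches[i * N2 + j] = padded[i:i + n, j:j + n].ravel()
    return patches

# One patch per pixel: a 16x16 image with 3x3 patches yields 256 vectors of length 9.
patches = extract_patches(np.arange(256.0).reshape(16, 16), 3)
```

Note that the loop form above favors clarity; a vectorized sliding-window view would be used in practice.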
The next step is to group each patch with similar patches, as shown in stage 1 of Fig. 1. The aim is to group all patches with a similar underlying image structure, irrespective of their intensity levels. Importantly, this intensity-invariant grouping requires normalization of the image patches as follows:
$\bar{\mathbf{y}}_k = \mathcal{N}(\mathbf{y}_k)$, (4)

where $\mathcal{N}(\cdot)$ represents the normalization operator and $\bar{\mathbf{y}}_k$ is the normalized version of $\mathbf{y}_k$. As a result, we have
(5) 
Thus, patch $\bar{\mathbf{y}}_k$ and those among all other patches that lie within a distance of, say, $\epsilon$ from $\bar{\mathbf{y}}_k$ are grouped together. We call these the neighbors of the $k$-th patch. Thus,

$\mathcal{S}_k = \{ j : d(\bar{\mathbf{y}}_k, \bar{\mathbf{y}}_j) \leq \epsilon \}$ (6)

denotes the set of indices of all neighbors of patch $k$, including the index $k$ itself. Here, $d(\cdot, \cdot)$ could be any feasible distance measure, such as the Euclidean distance. Note that, by virtue of the definition above, the neighbors need not be spatial neighbors, and the sets of neighbors are not disjoint. The upshot of such grouping is that it yields a higher number of neighbors for each patch, which is beneficial for our collaborative approach, as described in the following section.
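Stage 1 can be sketched as below. This is an illustrative sketch only: the paper leaves the normalization operator and the search strategy unspecified, so mean removal followed by unit-norm scaling and a brute-force pairwise Euclidean search are our assumptions.

```python
import numpy as np

def normalize(patch):
    """Illustrative normalization operator: remove the mean and scale
    to unit norm so that grouping is intensity-invariant. (The paper's
    exact operator is not specified; this choice is an assumption.)"""
    centered = patch - patch.mean()
    norm = np.linalg.norm(centered)
    return centered / norm if norm > 0 else centered

def group_patches(patches, eps):
    """For each patch k, return the index set of all patches whose
    normalized versions lie within Euclidean distance eps of patch k
    (k itself is always included)."""
    normed = np.array([normalize(p) for p in patches])
    groups = []
    for k in range(len(normed)):
        d = np.linalg.norm(normed - normed[k], axis=1)
        groups.append(np.flatnonzero(d <= eps))
    return groups

# Patches 0 and 2 share the same structure at different intensity
# levels, so intensity-invariant grouping puts them together.
patches = np.array([[1.0, 2.0, 3.0],
                    [9.0, 1.0, 4.0],
                    [10.0, 20.0, 30.0]])
groups = group_patches(patches, eps=0.1)
```

The brute-force search is quadratic in the number of patches; practical implementations typically restrict the search to a local window.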
2.2 Collaborative Denoising
It is a well-known fact that images are sparse in the wavelet domain. We use this property to find a sparse representation of each patch as follows:
$\mathbf{y}_k = \mathbf{\Psi} \boldsymbol{\alpha}_k + \mathbf{w}_k$, (7)
where $\mathbf{\Psi}$ is an overcomplete wavelet dictionary. Moreover, $\boldsymbol{\alpha}_k$ is the sparse representation of $\mathbf{x}_k$, i.e., $\mathbf{x}_k = \mathbf{\Psi} \boldsymbol{\alpha}_k$. Let $\hat{\boldsymbol{\alpha}}_k$ represent an estimate of the sparse vector $\boldsymbol{\alpha}_k$ obtained through a sparse recovery algorithm, and let $\hat{\mathcal{I}}_k$ be its support set. Note that, in an ideal scenario, $\hat{\mathcal{I}}_j = \hat{\mathcal{I}}_k$ should hold true for all $j \in \mathcal{S}_k$. This observation motivates us to use the sparse representation of patches to devise a collaborative denoising method. However, note that in reality the supports may not match exactly, as $\hat{\boldsymbol{\alpha}}_k$ is a function of a nonzero $\mathbf{w}_k$ as well as $\epsilon$. The threshold $\epsilon$ could be selected such that it guarantees high similarity among the group members. However, the perturbations due to noise would remain and result in a disagreement among the supports of similar patches. Here, we would like to stress that this disagreement is a blessing in disguise. Given a sufficiently small $\epsilon$, most of the outliers in the support are there, with high probability, due to noise. This helps us identify and take care of the noise-causing components in the estimate $\hat{\boldsymbol{\alpha}}_k$. One naive approach could be to eliminate the nonzero components of $\hat{\boldsymbol{\alpha}}_k$ located at the outlier positions and use the resulting sparse vector to form an estimate of $\mathbf{x}_k$. However, this could destroy useful information, especially in high-noise cases, as some legitimate nonzero locations could be mistaken for noise-causing components. In view of this, we resort to a more moderate approach. In this approach, we utilize the active probabilities of the nonzero locations of $\hat{\boldsymbol{\alpha}}_k$. The idea is that similar patches will have similar supports, and the legitimate nonzero locations among these will have high active probabilities. Thus, we propose that collaboration among patches take place in the sparse domain, as shown in stage 2 of Fig. 1. Specifically, for the $k$-th patch, let $\mathbf{p}_k$ represent the vector of active probabilities for the estimate $\hat{\boldsymbol{\alpha}}_k$. We compute the weighted average
$\hat{\mathbf{p}}_k = \sum_{j \in \mathcal{S}_k} \omega_j \mathbf{p}_j$ (8)
as an estimate of the active probability vector of the clean $\boldsymbol{\alpha}_k$. The weighting factor $\omega_j$ is given by
(9) 
This simple process allows us to gracefully downgrade the contribution of solitary active taps while preserving the values for locations that are common to most of the patches in $\mathcal{S}_k$. Moreover, by virtue of the law of large numbers, we expect that (8) will result in a good estimate, especially because $|\mathcal{S}_k|$ is large due to the intensity-invariant grouping approach. The derived $\hat{\mathbf{p}}_k$ is a valuable piece of information, as it approximates the a priori information about the active locations of the true (clean) sparse representation $\boldsymbol{\alpha}_k$ of the $k$-th patch. This a priori information could be provided to a sparse recovery algorithm, as shown in stage 2 of Fig. 1, to find an estimate of the true $\boldsymbol{\alpha}_k$ (let us call it $\tilde{\boldsymbol{\alpha}}_k$) and thus an estimate of the true (and denoised) $k$-th patch, which we denote as

$\hat{\mathbf{x}}_k = \mathbf{\Psi} \tilde{\boldsymbol{\alpha}}_k$. (10)
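The collaborative averaging of active probabilities can be sketched as follows. Since the exact weighting factor of (9) is not reproduced here, uniform weights over the group are used as a stand-in assumption, and the function name `collaborative_prior` is hypothetical.

```python
import numpy as np

def collaborative_prior(prob_vectors, group, weights=None):
    """Estimate the active-probability vector of the clean patch by a
    weighted average over the neighbor group, as in Eq. (8). Uniform
    weights are an illustrative assumption standing in for Eq. (9)."""
    probs = np.asarray([prob_vectors[j] for j in group])
    if weights is None:
        weights = np.full(len(group), 1.0 / len(group))
    return weights @ probs

# Three similar patches: tap 0 is active in all of them, while tap 3 is
# active in only one (a likely noise-induced outlier). Averaging keeps
# the common tap strong and downgrades the solitary one.
p = {0: np.array([0.90, 0.0, 0.0, 0.0]),
     1: np.array([0.80, 0.0, 0.0, 0.6]),
     2: np.array([0.95, 0.0, 0.0, 0.0])}
prior = collaborative_prior(p, group=[0, 1, 2])
```

The resulting vector can then be passed as prior support information to a sparse recovery algorithm that accepts per-tap activity priors.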
2.3 Formation of final denoised image
As described in Sec. 2.1, we form overlapping patches. As a result, each image pixel is present in $n^2$ patches and therefore has as many estimated values. In order to reconstruct the denoised image $\hat{\mathbf{X}}$, we simply average the estimates of each pixel. In this way, the final image formation adds another level of averaging out impurities. Lastly, we average the denoising results obtained using different odd patch sizes (stage 3 of Fig. 1), which significantly improves the denoising performance.
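The pixel-wise averaging of overlapping patch estimates can be sketched as below; the function and argument names are illustrative, and a small even patch size is used purely to keep the example short.

```python
import numpy as np

def aggregate(patch_estimates, positions, n, shape):
    """Rebuild the denoised image by averaging, at every pixel, the
    estimates from all n-by-n patches that cover it."""
    acc = np.zeros(shape)  # sum of patch estimates per pixel
    cnt = np.zeros(shape)  # number of patches covering each pixel
    for est, (i, j) in zip(patch_estimates, positions):
        acc[i:i + n, j:j + n] += est.reshape(n, n)
        cnt[i:i + n, j:j + n] += 1.0
    return acc / cnt

# Two overlapping 2x2 patch estimates on a 2x3 image: the shared middle
# column is averaged, the remaining pixels are copied through.
ests = [np.array([1.0, 2.0, 3.0, 4.0]), np.array([4.0, 5.0, 6.0, 7.0])]
img = aggregate(ests, positions=[(0, 0), (0, 1)], n=2, shape=(2, 3))
```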
3 Sparse recovery algorithm selection
Our denoising algorithm requires the estimation of the sparse vectors $\hat{\boldsymbol{\alpha}}_k$ and $\tilde{\boldsymbol{\alpha}}_k$. Several sparse recovery algorithms exist. However, we need to be careful in our selection. Specifically, the nature of our problem dictates that such an algorithm should:

- not pose strict conditions on the dictionary matrix $\mathbf{\Psi}$,
- be able to estimate parameters such as the sparsity and variance of the unknown vectors if these are not provided,
- be invariant to the distribution of the unknown vector, and
- be capable of utilizing any available a priori information.
Many algorithms offer some of these features; however, very few have all of the mentioned attributes. Among these algorithms, we are especially interested in SABMP [22], as it is capable of MMSE estimation even when the distribution of the unknown vector is not available. Moreover, the algorithm provides active probabilities along with the estimated sparse vector, which is precisely what our denoising algorithm benefits from.
4 Computational complexity of our image denoising algorithm
The computational complexity of the proposed denoising approach is dominated by that of the sparse recovery algorithm, which fortunately has a low computational complexity compared to many similar algorithms. Given the dimensions of our problem, the computational complexity of estimating one sparse vector through SABMP grows with the expected number of nonzeros (usually a very small number). Finally, for all $K$ patches and the iterations over the different patch sizes, the overall complexity scales proportionally.
Table 1: Denoising performance (PSNR [dB] / SSIM) of CSAR and BM3D over standard test images at various noise levels (SNR [dB] / noise standard deviation).

| SNR [dB] / σ | Method | Cameraman | Lena | Barbara | House | Peppers | Living Room | Boat |
|---|---|---|---|---|---|---|---|---|
| -5/103 | CSAR | 14.88/0.12 | 16.61/0.26 | 15.20/0.22 | 16.79/0.17 | 16.07/0.24 | 17.70/0.25 | 17.34/0.22 |
| | BM3D | 14.73/0.10 | 16.37/0.19 | 14.87/0.11 | 16.07/0.11 | 15.80/0.14 | 16.64/0.14 | 16.95/0.09 |
| 0/58 | CSAR | 16.57/0.24 | 19.32/0.50 | 16.99/0.43 | 19.29/0.32 | 18.06/0.44 | 19.81/0.45 | 19.49/0.41 |
| | BM3D | 16.57/0.22 | 18.94/0.43 | 16.86/0.34 | 19.26/0.28 | 17.60/0.35 | 18.60/0.29 | 17.97/0.24 |
| 5/33 | CSAR | 18.02/0.42 | 24.95/0.77 | 21.13/0.75 | 24.39/0.50 | 21.86/0.72 | 23.24/0.74 | 21.97/0.67 |
| | BM3D | 17.59/0.40 | 23.91/0.71 | 20.55/0.67 | 22.98/0.48 | 20.68/0.61 | 21.21/0.58 | 19.68/0.50 |
| 10/18 | CSAR | 22.60/0.58 | 26.90/0.87 | 27.34/0.92 | 30.34/0.60 | 26.78/0.87 | 32.46/0.91 | 24.06/0.86 |
| | BM3D | 22.32/0.57 | 25.04/0.86 | 26.40/0.88 | 28.37/0.60 | 24.53/0.81 | 29.24/0.86 | 22.47/0.76 |
| 15/10 | CSAR | 27.45/0.71 | 28.97/0.91 | 35.22/0.97 | 37.78/0.70 | 32.47/0.94 | 36.92/0.96 | 25.53/0.94 |
| | BM3D | 27.07/0.71 | 27.45/0.90 | 31.84/0.95 | 36.12/0.67 | 29.93/0.91 | 32.86/0.93 | 24.50/0.89 |
| 20/6 | CSAR | 33.55/0.83 | 33.03/0.94 | 39.84/0.98 | 42.25/0.78 | 38.68/0.97 | 41.55/0.98 | 26.80/0.97 |
| | BM3D | 31.84/0.79 | 32.33/0.92 | 35.49/0.97 | 39.34/0.73 | 32.87/0.96 | 36.79/0.97 | 25.45/0.94 |
| 25/3 | CSAR | 39.78/0.91 | 33.91/0.96 | 44.55/0.99 | 46.70/0.85 | 43.09/0.99 | 46.27/0.99 | 28.12/0.98 |
| | BM3D | 37.08/0.87 | 33.01/0.94 | 39.49/0.98 | 42.82/0.79 | 36.97/0.98 | 41.01/0.98 | 27.30/0.97 |
5 Simulation Results and Discussions
In this section, we compare the proposed algorithm with two state-of-the-art algorithms, namely NL-means [4] and BM3D [21]. Comparisons with NL-means and BM3D validate the superior performance of CSAR and show that our algorithm remains robust in situations where these algorithms do not perform well.
For the experiments, we used various standard grayscale test images. For a more challenging comparison, an SNR range including very high noise levels was used, providing higher chances of confusing signal components with noise. The entries of the dictionary $\mathbf{\Psi}$ were derived from wavelet as well as DCT bases. Square patch sizes of 3, 5, 7, and 9, i.e., $n \in \{3, 5, 7, 9\}$, were used, and the denoising results were averaged.
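The multi-size averaging used in the experiments can be sketched as follows. The function name `multiscale_average` is hypothetical, and the stand-in denoiser exists only to exercise the averaging step; it is not the CSAR pipeline itself.

```python
import numpy as np

def multiscale_average(denoise, noisy, sizes=(3, 5, 7, 9)):
    """Run a single-patch-size denoiser for each odd patch size and
    average the resulting images, as done in the experiments."""
    results = [denoise(noisy, n) for n in sizes]
    return np.mean(results, axis=0)

# Stand-in denoiser (adds the patch size) used only to show the averaging.
fake_denoise = lambda img, n: img + n
out = multiscale_average(fake_denoise, np.zeros((2, 2)))
```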
Fig. 2 compares the performance of the proposed CSAR with the BM3D and NL-means algorithms in denoising the Peppers image. The Peppers image is specifically selected for its detail-rich nature, making the comparison more interesting. It is evident that the proposed algorithm outperforms the other two algorithms across the considered SNR range. Apart from its higher PSNR, the SSIM performance of CSAR is also much better than that of the competing algorithms.
The comparison for the Cameraman image is provided in Fig. 3. These experimental results, taken at SNR = -5 dB, show that our algorithm outperforms the state-of-the-art algorithms. Another comparison, for the Mandrill image at SNR = 0 dB and -5 dB, is illustrated in Fig. 4. These figures emphasize the importance of preserving feature-rich portions, as done by CSAR, which are more likely to be destroyed in the presence of noise.
Specifically, Fig. 3 shows that our results are not blurred even at the high noise level of SNR = -5 dB, while Fig. 4 shows that CSAR is good at preserving details. For instance, in Fig. 4, note that the face details are blurred out at both SNR = 0 dB and -5 dB in the BM3D result but are preserved in the CSAR-denoised image. This degradation due to blurring or removal of feature-rich components can have critical consequences, e.g., in detecting tumors in biomedical applications, where wrong detections can be life-threatening. Detailed results are provided in Table 1 for a number of test images widely used in the denoising literature. These extensive results demonstrate the superiority and efficacy of our approach over images of different types.
Further, the results of the proposed CSAR algorithm and the competing BM3D algorithm were compared over a large number of standard test images using a wide range of noise levels. For this purpose, all original standard test images used in these extensive simulations are shown in Fig. 5. The noisy images and the corresponding denoised results, obtained using BM3D and the proposed CSAR algorithm, are shown in Figs. 6 to 15. These figures show that our proposed algorithm is capable of preserving both the smooth regions of an image and its details, which is in fact one of the most challenging tasks in denoising, since many denoising algorithms tend to blur out the details. Moreover, since the comparison covers a wide range of noise levels, it validates that our algorithm is superior to the state-of-the-art BM3D algorithm in terms of both objective and subjective measures.
6 Conclusion
In this paper, we have proposed a novel sparse-recovery-based denoising algorithm. We deploy a patch-based collaborative scheme via an enhanced similar-patch search. The likelihood that a tap is active is computed and refined through collaboration, yielding an enhanced sparse estimate and hence improving the isolation of the noise-dominated taps. Results obtained under various experimental setups demonstrate the superiority of the proposed algorithm when benchmarked against selected state-of-the-art algorithms. An interesting future direction is to identify smooth and non-smooth regions in order to tailor our collaborative framework for further improvements.
References

[1] J.-H. Chang and Y.-C. Wang, "Propagated image filtering," in IEEE Conference on Computer Vision and Pattern Recognition, pp. 10-18, June 2015.
[2] C. Kervrann and J. Boulanger, "Optimal spatial adaptation for patch-based image denoising," IEEE Transactions on Image Processing, vol. 15, pp. 2866-2878, Oct. 2006.
[3] A. Buades, B. Coll, and J.-M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 490-530, 2005.
[4] A. Buades, B. Coll, and J.-M. Morel, "A non-local algorithm for image denoising," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 60-65, June 2005.
[5] J.-L. Starck, E. Candes, and D. Donoho, "The curvelet transform for image denoising," IEEE Transactions on Image Processing, vol. 11, pp. 670-684, June 2002.
[6] H.-q. Li, S.-Q. Wang, and C.-z. Deng, "New image denoising method based wavelet and curvelet transform," in WASE International Conference on Information Engineering, vol. 1, pp. 136-139, July 2009.
[7] D. Gnanadurai and V. Sadasivam, "Image denoising using double density wavelet transform based adaptive thresholding technique," International Journal of Wavelets, Multiresolution and Information Processing, vol. 3, pp. 141-152, 2005.
[8] A. Rajwade, A. Rangarajan, and A. Banerjee, "Image denoising using the higher order singular value decomposition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, pp. 849-862, Apr. 2013.
[9] N. Pierazzo, M. Rais, J.-M. Morel, and G. Facciolo, "DA3D: Fast and data adaptive dual domain denoising," in IEEE International Conference on Image Processing, pp. 432-436, Sept. 2015.
[10] H. Liu, R. Xiong, J. Zhang, and W. Gao, "Image denoising via adaptive soft-thresholding based on non-local samples," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 484-492, June 2015.
[11] G. Yu, G. Sapiro, and S. Mallat, "Image modeling and enhancement via structured sparse model selection," in 17th IEEE International Conference on Image Processing, pp. 1641-1644, Sept. 2010.
[12] W. Dong, G. Shi, and X. Li, "Nonlocal image restoration with bilateral variance estimation: A low-rank approach," IEEE Transactions on Image Processing, vol. 22, pp. 700-711, Feb. 2013.
[13] M. Lebrun, A. Buades, and J.-M. Morel, "Implementation of the Non-Local Bayes (NL-Bayes) image denoising algorithm," Image Processing On Line, vol. 3, pp. 1-42, 2013.
[14] G. Shikkenawis and S. Mitra, "2D orthogonal locality preserving projection for image denoising," IEEE Transactions on Image Processing, vol. 25, pp. 262-273, Jan. 2016.
[15] C. Metzler, A. Maleki, and R. Baraniuk, "BM3D-AMP: A new image recovery algorithm based on BM3D denoising," in IEEE International Conference on Image Processing, pp. 3116-3120, Sept. 2015.
[16] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736-3745, 2006.
[17] N. Pierazzo, M. Lebrun, M. Rais, J.-M. Morel, and G. Facciolo, "Non-local dual image denoising," in IEEE International Conference on Image Processing, pp. 813-817, Oct. 2014.
[18] C. Knaus and M. Zwicker, "Progressive image denoising," IEEE Transactions on Image Processing, vol. 23, pp. 3114-3125, July 2014.
[19] H. Talebi and P. Milanfar, "Global image denoising," IEEE Transactions on Image Processing, vol. 23, pp. 755-768, Feb. 2014.
[20] T. Dai, C.-B. Song, J.-P. Zhang, and S.-T. Xia, "PMPA: A patch-based multiscale products algorithm for image denoising," in IEEE International Conference on Image Processing, pp. 4406-4410, Sept. 2015.
[21] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Transactions on Image Processing, vol. 16, pp. 2080-2095, Aug. 2007.
[22] M. Masood and T. Al-Naffouri, "Sparse reconstruction using distribution agnostic Bayesian matching pursuit," IEEE Transactions on Signal Processing, vol. 61, pp. 5298-5309, Nov. 2013.