
Patch-Based Low-Rank Minimization for Image Denoising

06/28/2015
by Haijuan Hu, et al.

Patch-based sparse representation and low-rank approximation for image processing have attracted much attention in recent years. The minimization of the matrix rank coupled with the Frobenius norm data fidelity can be solved by a hard thresholding filter with principal component analysis (PCA) or the singular value decomposition (SVD). Based on this idea, we propose a patch-based low-rank minimization method for image denoising, which learns compact dictionaries from similar patches with PCA or SVD, and applies a simple hard thresholding filter to shrink the representation coefficients. Compared to recent patch-based sparse representation methods, experiments demonstrate that the proposed method is not only fast but also effective for a variety of natural images, especially for the texture parts of images.


I Introduction

Image denoising is a classical image processing problem, but it remains very active nowadays with the massive and easy production of digital images. We mention below some important works from the vast literature on image denoising.

One category of denoising methods consists of transform-based methods, for example [1, 2]. The main idea is to compute the wavelet coefficients of an image, shrink the coefficients, and finally reconstruct the image by the inverse transform. These methods apply fixed transform dictionaries to whole images. However, fixed dictionaries do not generally represent whole images very well due to the complexity of natural images, so many image details are lost during denoising.

Another category is related to patch-based methods, first proposed in [3], which exploit the non-local self-similarity of natural images. Inspired by this “patch-based” idea, the authors of K-SVD [4] and BM3D [5] proposed using dictionaries to represent small local patches instead of whole images so that the sparsity of the coefficients can be increased, where the dictionaries are fixed or adaptive, and compact or overcomplete. These methods greatly improve on the traditional methods [1, 2], leading to very good performance. Since these works, many related methods have been proposed to improve the denoising process, such as LPG-PCA [6], ASVD [7], PLOW [8], SAIST [9], NCSR [10], GLIDE [11], and WNNM [12]. However, many of them are computationally complex. For example, K-SVD uses overcomplete dictionaries for sparse representation, which is time-consuming; BM3D and LPG-PCA iterate the denoising process twice; SAIST and WNNM iterate about 10 times. The computational cost is directly proportional to the number of iterations.

At the same time, low-rank matrix approximation has been widely studied and applied to image processing [13, 14, 15]. Many low-rank models have no explicit solution. However, the paper [13] proves that nuclear norm minimization with the Frobenius norm data fidelity can be solved by a soft thresholding filter. (See also the paper [12], where an alternative proof is given.) Furthermore, with the help of the Eckart–Young theorem [16], the paper [17] demonstrates that the solution of the exact low-rank matrix minimization problem (where the penalty is the rank itself, i.e., the number of nonzero singular values) can be obtained by a hard thresholding filter.

Inspired by the above theories, in this paper a patch-based low-rank minimization (PLR) method is proposed for image denoising. First, similar patches are stacked together to construct similarity matrices. Then each similarity matrix is denoised by minimizing the matrix rank coupled with the Frobenius norm data fidelity. The minimizer can be obtained by a hard thresholding filter with principal component analysis (PCA) or the singular value decomposition (SVD). The proposed method is rather rapid, since we use compact dictionaries, which are more computationally efficient than overcomplete ones, and we do not iterate. Moreover, experiments show that the proposed method is competitive with state-of-the-art methods such as K-SVD [4], BM3D [5], LPG-PCA [6], ASVD [7], PLOW [8], SAIST [9], and WNNM [12].

The rest of the paper is organized as follows. In Section II, we introduce our method. The experimental results are shown in Section III. Finally, this paper is concluded in Section IV.

II Patch-Based Low-Rank Minimization

The noise model is

v = u + η,

where u is the original image, v is the noisy one, and η is Gaussian white noise with mean 0 and standard deviation σ. The images are assumed, without loss of generality, to be square, of size N × N.

II-A Proposed Algorithm

Divide the noisy image v into overlapping patches of size κ × κ (κ = 7 in our experiments). Denote the set of all these patches as 𝒫.

For each patch y in 𝒫, called the reference patch, consider all the overlapping patches contained in its ν × ν neighborhood (the reference patch is located at the center of the neighborhood if the parities of κ and ν are the same; otherwise, it is located as near as possible to the center); the total number of such patches is (ν − κ + 1)². Then choose the m patches most similar to the reference patch (including the reference patch itself) among these (ν − κ + 1)² patches. The similarity is determined by the ℓ2-norm distance.
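
As a concrete illustration, the similar-patch search described above can be sketched in NumPy as follows. The default values for the patch size, search-window size, and number of similar patches are illustrative assumptions, not necessarily the settings of our experiments, and the function name is ours.

```python
import numpy as np

def select_similar_patches(image, ref_top, ref_left, patch=7, neigh=21, m=40):
    """Collect the m patches closest (in squared l2 distance) to the reference
    patch, searching all overlapping patches inside a neigh x neigh window."""
    ref = image[ref_top:ref_top + patch, ref_left:ref_left + patch]
    H, W = image.shape
    # Clip the search window to the image domain.
    top0 = max(0, ref_top - (neigh - patch) // 2)
    left0 = max(0, ref_left - (neigh - patch) // 2)
    top1 = min(H - patch, top0 + neigh - patch)
    left1 = min(W - patch, left0 + neigh - patch)
    candidates, dists = [], []
    for i in range(top0, top1 + 1):
        for j in range(left0, left1 + 1):
            p = image[i:i + patch, j:j + patch]
            candidates.append(p.reshape(-1))
            dists.append(np.sum((p - ref) ** 2))
    # Keep the m smallest distances; columns are the vectorized similar patches.
    order = np.argsort(dists)[:m]
    return np.stack([candidates[k] for k in order], axis=1)
```

The function returns the similarity matrix whose columns are the vectorized similar patches; the reference patch, having distance 0 to itself, is normally its first column.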

Next, for each reference patch, its m similar patches are reshaped as vectors and stacked together to form a matrix of size κ² × m, called the similarity matrix. The similarity matrix is denoted M = (y_1, …, y_m), where the columns y_j of M are the vectorized similar patches. Then all the patches in the matrix M are denoised together using the hard thresholding method with the principal component (PC) basis or, equivalently, with the singular value decomposition (SVD) basis derived from the matrix M; the detailed process is given below. For convenience, we assume that the mean of the patches in M, denoted ȳ, is 0. In practice, we subtract ȳ from each column of M to form the centered matrix, and add ȳ back to the final estimate ŷ_j of each patch.

Since the patches overlap, every pixel is finally estimated as the average of its repeated estimates.
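
The averaging of the overlapping estimates can be sketched as follows (a hypothetical helper, not the paper's code): accumulate each denoised patch into the image together with a per-pixel counter, then divide.

```python
import numpy as np

def aggregate(estimates, positions, shape, patch=7):
    """Average overlapping patch estimates into a single image.
    estimates[k] is a denoised patch whose top-left corner is positions[k]."""
    acc = np.zeros(shape)
    weight = np.zeros(shape)
    for est, (i, j) in zip(estimates, positions):
        acc[i:i + patch, j:j + patch] += est
        weight[i:i + patch, j:j + patch] += 1.0
    # Avoid division by zero at pixels covered by no patch.
    weight[weight == 0] = 1.0
    return acc / weight
```

Each pixel thus receives the average of every estimate that covers it, exactly as described above.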

The process of denoising the matrix M is as follows. First, we derive an adaptive basis using PCA. The PC basis is the set of eigenvectors of MM^T. Write the eigenvalue decomposition (we assume that the matrix MM^T has full rank and has no identical eigenvalues, which is generally true in practice)

MM^T = P D P^T,   (1)

with

D = diag(λ_1, …, λ_{κ²}),   λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_{κ²} > 0,

where P_k denotes the k-th column of the orthogonal matrix P and diag(λ_1, …, λ_{κ²}) denotes the diagonal matrix with λ_1, …, λ_{κ²} on the diagonal. The PC basis is the set of the columns of P, that is, {P_1, …, P_{κ²}}.

The original patches in the similarity matrix are estimated as follows:

ŷ_j = Σ_{k=1}^{κ²} â_{kj} P_k,   j = 1, …, m,   (2)

where, with a_{kj} = P_k^T y_j the PC coefficients,

â_{kj} = a_{kj} if sqrt(λ_k / m) ≥ τ, and â_{kj} = 0 otherwise,   (3)

τ being the threshold. Equivalently, the matrix composed of the estimated patches (2) can be written as

M̂ = P Â,   (4)

with

Â = (â_{kj}), where A = (a_{kj}) = P^T M.   (5)

Note that

Σ_{j=1}^{m} a_{kj}² = λ_k   (6)

after a simple calculation (indeed, A A^T = P^T M M^T P = D). Thus sqrt(λ_k / m) can be interpreted as the standard deviation of the coefficients on the k-th basis vector.
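
Steps (1)–(5) can be sketched in NumPy as follows; the function name is ours, and the default threshold sqrt(2)·σ anticipates the choice derived in Section II-C.

```python
import numpy as np

def denoise_similarity_matrix(Y, sigma, tau=None):
    """Hard-threshold PCA denoising of a kappa^2 x m similarity matrix:
    keep a principal component only when the empirical standard deviation
    of its coefficients, sqrt(lambda_k / m), reaches the threshold tau."""
    if tau is None:
        tau = np.sqrt(2.0) * sigma           # threshold of Section II-C
    mean = Y.mean(axis=1, keepdims=True)     # subtract the patch mean ...
    M = Y - mean
    m = M.shape[1]
    lam, P = np.linalg.eigh(M @ M.T)         # eigen-decomposition (1)
    lam = np.maximum(lam, 0.0)               # guard against round-off negatives
    A = P.T @ M                              # PC coefficients, M = P A
    keep = np.sqrt(lam / m) >= tau           # hard thresholding rule (3)
    return P @ (A * keep[:, None]) + mean    # estimate (2)/(4), mean added back
```

With σ = 0 the threshold vanishes, every component is kept, and the input matrix is returned unchanged.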

We can also consider the singular value decomposition (SVD) of M:

M = P Δ Q^T,   (7)

where P is chosen as the same orthogonal matrix as in (1), Δ = diag(δ_1, …, δ_{κ²}) is a diagonal matrix of singular values δ_k = sqrt(λ_k), and Q (of size m × κ²) has orthonormal columns, so that Q^T Q = I with I the identity matrix. Then the denoised matrix (4) is equal to

M̂ = P Δ̂ Q^T,   (8)

where Δ̂ is the diagonal matrix whose diagonal is obtained from that of Δ by the hard thresholding operator

H_t(δ) = δ if δ ≥ t, and H_t(δ) = 0 otherwise, with t = sqrt(m) τ.   (9)

In fact, the equality of (4) and (8) can be demonstrated as follows. By equations (1) and (7), we have MM^T = P Δ Q^T Q Δ P^T = P Δ² P^T, so that D = Δ² and λ_k = δ_k². Furthermore, by equations (5) and (9), we get Â = Δ̂ Q^T, since A = P^T M = Δ Q^T and the condition sqrt(λ_k/m) ≥ τ in (3) is the same as δ_k ≥ t = sqrt(m) τ. Thus it follows that M̂ = P Â = P Δ̂ Q^T.
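The equivalence of the PCA route (4) and the SVD route (8) is easy to check numerically. The sketch below (our naming) thresholds once via the eigen-decomposition of MM^T and once via the singular values, using the singular-value threshold t = sqrt(m)·τ directly in both routes.

```python
import numpy as np

def denoise_pca(M, t):
    """Route (4)-(5): hard-threshold the rows of the PC coefficient matrix,
    keeping row k when sqrt(lambda_k) = delta_k reaches t."""
    lam, P = np.linalg.eigh(M @ M.T)
    lam = np.maximum(lam, 0.0)               # round-off guard
    A = P.T @ M                              # coefficients, M = P A
    keep = np.sqrt(lam) >= t
    return P @ (A * keep[:, None])

def denoise_svd(M, t):
    """Route (8)-(9): hard-threshold the singular values of M at t."""
    P, delta, Qt = np.linalg.svd(M, full_matrices=False)
    return (P * (delta * (delta >= t))) @ Qt
```

Both routes produce the same denoised matrix, up to floating-point error, whenever t is not exactly equal to a singular value.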

II-B Low-Rank Minimization

Theorem II.1 stated below is an unconstrained version of the Eckart–Young theorem [16], and comes from Theorem 2(ii) in [17]. According to Theorem II.1, it easily follows that

M̂ = argmin_X { ‖M − X‖_F² + t² rank(X) },   (10)

where the minimum is taken over all matrices X having the same size as M, and ‖·‖_F is the Frobenius norm. Hence the denoised matrix M̂ is the solution of the exact low-rank minimization problem.

Theorem II.1

The following low-rank minimization problem

min_X { ‖M − X‖_F² + t² rank(X) }   (11)

has the solution (strictly speaking, the solution is unique if none of the singular values of M equals t, which is generally true in practice)

X̂ = P H_t(Δ) Q^T,   (12)

where M = P Δ Q^T is the SVD of M, and H_t is the hard thresholding operator (9) applied to each singular value.
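
Theorem II.1 can also be checked numerically: at each fixed rank r, the best Frobenius approximation of M is the rank-r SVD truncation (classical Eckart–Young), so it suffices to compare the objective (11) at the hard-thresholded candidate (12) with its value at every truncation. A sketch under that reasoning (function names are ours):

```python
import numpy as np

def objective(M, X, t):
    """Rank-penalized objective of (11): ||M - X||_F^2 + t^2 rank(X)."""
    return np.sum((M - X) ** 2) + t ** 2 * np.linalg.matrix_rank(X)

def best_rank_r(M, r):
    """Best rank-r approximation of M (classical Eckart-Young truncation)."""
    P, d, Qt = np.linalg.svd(M, full_matrices=False)
    trunc = d.copy()
    trunc[r:] = 0.0
    return (P * trunc) @ Qt

def hard_threshold_solution(M, t):
    """Candidate minimizer (12): zero out singular values below t."""
    P, d, Qt = np.linalg.svd(M, full_matrices=False)
    return (P * (d * (d >= t))) @ Qt
```

The hard-thresholded matrix attains the minimum of the objective over all ranks, matching the theorem.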

II-C Choice of the Threshold

The choice of the threshold τ in (3) is crucial for the proposed algorithm. We study it by minimizing the mean squared error of the estimates of the vectorized patches in a similarity matrix M. Denote

y_j = x_j + ε_j,

where x_j and ε_j are the vectorized patches of the true image and of the noise corresponding to y_j, respectively.

Write b_{kj} = P_k^T x_j and n_{kj} = P_k^T ε_j for the coefficients of the true patches and of the noise. By (2) or (4), it can easily be obtained that

(1/m) Σ_j ‖ŷ_j − x_j‖² = (1/m) Σ_k [ 1{sqrt(λ_k/m) ≥ τ} Σ_j n_{kj}² + 1{sqrt(λ_k/m) < τ} Σ_j b_{kj}² ].   (13)

Assume that the PC basis only depends on the true value vectors x_j, and hence is independent of the noise. Then

E[ Σ_j n_{kj}² ] = m σ².   (14)

Let

λ̄_k = (1/m) Σ_j b_{kj}².   (15)

Then by (6), we obtain

E[ λ_k / m ] = λ̄_k + σ².   (16)

Thus, from (13), (14), and (15), it follows that

E[ (1/m) Σ_j ‖ŷ_j − x_j‖² ] ≈ Σ_k [ 1{sqrt(λ_k/m) ≥ τ} σ² + 1{sqrt(λ_k/m) < τ} λ̄_k ].   (17)

After a simple calculation, the optimal rule is to keep the k-th component if and only if λ̄_k ≥ σ². Since λ_k/m ≈ λ̄_k + σ² by (16), the optimal value of the threshold τ in (3) is sqrt(2) σ. In practice, we find that this choice performs well.
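
Relation (16) is easy to verify by simulation for a single component: if the true coefficients have variance λ̄_k and the noise coefficients have variance σ², the mean square of the noisy coefficients concentrates around λ̄_k + σ². A small Monte Carlo sketch (parameters are illustrative, naming is ours):

```python
import numpy as np

def coefficient_energy(signal_std, sigma, m=200000, seed=0):
    """Empirical mean square of noisy coefficients a = b + n for one
    component; (16) predicts lambda_bar + sigma^2 = signal_std^2 + sigma^2."""
    rng = np.random.default_rng(seed)
    b = rng.normal(0.0, signal_std, size=m)  # true coefficients, lambda_bar = signal_std^2
    n = rng.normal(0.0, sigma, size=m)       # independent noise coefficients
    return np.mean((b + n) ** 2)
```

For instance, with signal standard deviation 2 and σ = 1 the empirical energy is close to 5, above the keep threshold 2σ² = 2 on λ_k/m; with no signal it is close to σ² = 1, below it, so the component is correctly discarded.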

III Experimental Results

In this section, we compare the performance of our PLR method with that of state-of-the-art methods, including the highly competitive, recently proposed WNNM [12]. Standard gray-level images are used to test the performance of the methods. For the simulations, the noise level σ is supposed to be known; otherwise, there are methods to estimate it, see e.g. [18]. For each image and each noise level, all the methods are applied to the same noisy images.

For our algorithm, the patch size is set to 7 × 7; the size of the neighborhoods for selecting similar patches and the number of similar patches in a similarity matrix are fixed for all experiments. Image boundaries are handled by assuming symmetric boundary conditions. For the sake of computational efficiency, the moving step from one reference patch to the next, both horizontally and vertically, is chosen as the size of the patches, that is, 7. For the other algorithms, we use the original codes released by their authors.

In Table I, we compare the PSNR (Peak Signal-to-Noise Ratio) values of our PLR method with those of other methods. The PSNR value is defined by

PSNR = 10 log_10( 255² N² / ‖û − u‖² ),

where u is the original image (with N² pixels) and û the restored one. As can be seen in Table I, our method is generally better than K-SVD [4], LPG-PCA [6], and PLOW [8], and sometimes even better than BM3D [5]. Furthermore, the visual comparisons are also favorable to our method. For example, as can be seen in Fig. 1, our method preserves the texture parts of Lena and Barbara best among all the methods.
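
For reference, the PSNR computation above can be written as the following small helper (our naming; peak value 255 for 8-bit images):

```python
import numpy as np

def psnr(u, u_hat, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for images with gray values in [0, peak]."""
    mse = np.mean((np.asarray(u, dtype=float) - np.asarray(u_hat, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Dividing by the number of pixels inside the mean makes the formula identical to the definition with 255² N² in the numerator.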

To compare the complexities of the different methods, we measure the average CPU time to denoise the test images Peppers, House, and Cameraman. All the codes are written as M-files and run under MATLAB R2011a on a 3.40 GHz Intel Core i7 processor. We do not include BM3D in this comparison since its original code contains MEX-files. The running times are displayed in seconds in Table II. The comparison clearly shows that the proposed method is much faster than the others.


Fig. 1: Comparison of the denoised Lena and Barbara images for our method and the other methods. From left to right: original images, noisy images, and images denoised by BM3D, WNNM, and our PLR method. To make the differences clearer, the second and bottom rows display parts of the Lena and Barbara images extracted from the first and third rows, respectively.
Image Lena Barbara Peppers Boats Bridge House Cam
K-SVD[4] 35.50 34.82 34.23 33.62 30.91 35.96 33.74
LPGPCA[6] 35.72 35.46 34.05 33.61 30.86 36.16 33.69
ASVD[7] 35.58 35.58 33.55 33.26 27.76 36.46 31.62
PLOW[8] 35.29 34.52 33.56 32.94 29.88 36.22 33.15
PLR 35.50 34.28 33.76 30.78 36.57 33.73
BM3D[5] 35.90
SAIST[9] 35.87 35.69 34.76 33.87 31.03 36.52 34.28

WNNM[12]
36.02 35.92 34.94 34.05 31.16 36.94 34.44
K-SVD[4] 32.38 31.12 30.78 30.37 27.03 33.07 30.01
LPGPCA[6] 32.61 31.69 30.50 30.26 26.84 33.10 29.77
ASVD[7] 33.21 32.96 30.56 31.79 25.51 33.53 29.33
PLOW[8] 32.70 31.48 30.52 30.36 26.56 33.56 29.59
PLR 33.03 32.12 30.90 30.64 27.20 33.36 30.12
BM3D[5] 33.03 32.07 27.14
SAIST[9] 33.07 32.43 31.28 30.78 27.20 33.80 30.40
WNNM[12] 33.10 32.49 31.53 30.98 27.29 34.01 30.75

TABLE I: PSNR values for our PLR and other methods at two noise levels (upper and lower row blocks). Cam is the Cameraman image.
K-SVD LPG-PCA ASVD PLOW PLR SAIST WNNM
210 138 337 43 2 25 134
TABLE II: Running time in seconds for our PLR and other methods.

IV Conclusion

In this paper, a patch-based low-rank minimization method for image denoising is proposed, which stacks similar patches into similarity matrices and denoises each similarity matrix by seeking the minimizer of the matrix rank coupled with the Frobenius norm data fidelity. The minimizer can be obtained by a hard thresholding filter with the principal component basis or the left singular vectors. The proposed method is not only rapid but also effective compared to recently reported methods.

References

  • [1] D. Donoho and J. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika, vol. 81, no. 3, pp. 425–455, 1994.
  • [2] R. Coifman and D. Donoho, “Translation-invariant denoising,” Wavelets and statistics, vol. 103, pp. 125–150, 1995.
  • [3] A. Buades, B. Coll, and J. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul., vol. 4, no. 2, pp. 490–530, 2005.
  • [4] M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process., vol. 15, no. 12, pp. 3736–3745, 2006.
  • [5] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process., vol. 16, no. 8, pp. 2080–2095, 2007.
  • [6] L. Zhang, W. Dong, D. Zhang, and G. Shi, “Two-stage image denoising by principal component analysis with local pixel grouping,” Pattern Recognition, vol. 43, no. 4, pp. 1531–1549, 2010.
  • [7] Y. He, T. Gan, W. Chen, and H. Wang, “Adaptive denoising by singular value decomposition,” IEEE Signal Process. Lett., vol. 18, no. 4, pp. 215–218, 2011.
  • [8] P. Chatterjee and P. Milanfar, “Patch-based near-optimal image denoising,” IEEE Trans. Image Process., vol. 21, no. 4, pp. 1635–1649, 2012.
  • [9] W. Dong, G. Shi, and X. Li, “Nonlocal image restoration with bilateral variance estimation: a low-rank approach,” IEEE Trans. Image Process., vol. 22, no. 2, pp. 700–711, 2013.
  • [10] W. Dong, L. Zhang, G. Shi, and X. Li, “Nonlocally centralized sparse representation for image restoration,” IEEE Trans. Image Process., vol. 22, no. 4, pp. 1620–1630, 2013.
  • [11] H. Talebi and P. Milanfar, “Global image denoising,” IEEE Trans. Image Process., vol. 23, no. 2, pp. 755–768, 2014.
  • [12] S. Gu, L. Zhang, W. Zuo, and X. Feng, “Weighted nuclear norm minimization with application to image denoising,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2014.
  • [13] J.-F. Cai, E. J. Candès, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM J. Optimiz., vol. 20, no. 4, pp. 1956–1982, 2010.
  • [14] H. Schaeffer and S. Osher, “A low patch-rank interpretation of texture,” SIAM J. Imaging Sci., vol. 6, no. 1, pp. 226–262, 2013.
  • [15] Y. Peng, J. Suo, Q. Dai, and W. Xu, “Reweighted low-rank matrix recovery and its application in image restoration,” IEEE Trans. Cybern., vol. 14, no. 12, pp. 2418–2430, 2014.
  • [16] C. Eckart and G. Young, “The approximation of one matrix by another of lower rank,” Psychometrika, vol. 1, no. 3, pp. 211–218, 1936.
  • [17] J.-B. Hiriart-Urruty and H. Y. Le, “From Eckart and Young approximation to Moreau envelopes and vice versa,” RAIRO-Operations Research, vol. 47, no. 3, pp. 299–310, 2013.
  • [18] I. M. Johnstone and B. W. Silverman, “Wavelet threshold estimators for data with correlated noise,” J. Roy. Stat. Soc. B, vol. 59, no. 2, pp. 319–351, 1997.