From Rank Estimation to Rank Approximation: Rank Residual Constraint for Image Denoising

07/06/2018 ∙ by Zhiyuan Zha, et al. ∙ Nanjing University ∙ Bell Labs ∙ University of Macau

Inspired by the recent advances of Generative Adversarial Networks (GANs) in deep learning, we propose a novel rank minimization approach, termed rank residual constraint (RRC), for image denoising in the optimization framework. Different from GANs, where a discriminative model is trained jointly with a generative model, in image denoising, since labels are not available, we build an unsupervised mechanism in which two generative models are employed and jointly optimized. Specifically, by integrating the image nonlocal self-similarity prior with the proposed RRC model, we develop an iterative algorithm for image denoising. We first present a recursion-based nonlocal means approach to obtain a good reference of the original image patch groups, and then minimize the rank residual of the image patch groups between this reference and the noisy image to achieve a better estimate of the desired image. In this manner, both the reference and the estimated image are improved gradually and jointly in each iteration; meanwhile, we progressively approximate the underlying low-rank matrix (constructed from image patch groups) by minimizing the rank residual, which differs from existing low-rank based approaches that estimate the underlying low-rank matrix directly from the corrupted observation. We further provide a theoretical analysis of the feasibility of the proposed RRC model from the perspective of group-based sparse representation. Experimental results demonstrate that the proposed RRC model outperforms many state-of-the-art denoising methods.


1 Introduction

Low-rank matrix estimation aims to recover the underlying low-rank matrix from its degraded observation and has a variety of applications in computer vision and machine learning [1, 9, 6, 5, 16, 4, 3, 2, 10, 8, 17, 53]. For instance, the Netflix customer data matrix is regarded as low rank because the customers' choices are mostly affected by a few common factors [53]. A video clip captured by a static camera satisfies the "low rank + sparse" structure, so that background modeling can be conducted [16, 17]. Since the matrix formed by nonlocal similar patches in a natural image is of low rank, a flurry of low-rank models has been proposed for image recovery problems, such as image alignment [2], video denoising [4], shadow removal [1] and reconstruction of occluded/corrupted face images [3].

One typical low-rank matrix estimation method is low-rank matrix factorization [6, 3, 11], which factorizes the observed matrix $Y$ into the product of two matrices that can be used to reconstruct the desired matrix with a certain fidelity. Another parallel line of research is rank minimization [15, 9, 5, 16, 7, 22, 20, 23, 21, 19, 17], with nuclear norm minimization (NNM) [15, 5] being the most representative approach. The nuclear norm of a matrix $X$, denoted by $\|X\|_*$, is the summation of its singular values, i.e., $\|X\|_* = \sum_i \sigma_i(X)$, with $\sigma_i(X)$ representing the $i$-th singular value of $X$. NNM aims to recover the underlying low-rank matrix $X$ from its degraded observation matrix $Y$ while minimizing $\|X\|_*$. However, NNM usually tends to over-shrink the rank components, which limits its capability and flexibility.
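
To make this concrete, the sketch below applies the standard singular value thresholding (SVT) operator, which gives the closed-form solution of the nuclear-norm-regularized least-squares problem in this setting; the function name and the threshold value are illustrative rather than taken from the paper.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: argmin_X 0.5*||Y - X||_F^2 + tau*||X||_*.

    Every singular value of Y is soft-thresholded by the same amount tau,
    which is why NNM also shrinks the dominant (informative) singular values.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # uniform soft-thresholding of the spectrum
    return (U * s_shrunk) @ Vt            # equals U @ diag(s_shrunk) @ Vt
```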

To improve the flexibility of NNM, Gu et al. [7] recently proposed the weighted nuclear norm minimization (WNNM) model, which heuristically sets the weights to be inversely proportional to the singular values. Compared with NNM, WNNM assigns different weights to different singular values, so that the matrix rank estimation becomes more accurate. Similar ideas also appear in the truncated nuclear norm [22] and the partial sum minimization [52].
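
For comparison, a common heuristic in WNNM-style methods is to shrink each singular value by a threshold that is inversely proportional to its magnitude, so that large (informative) singular values are penalized less. The one-shot weighting below, including the constants `c` and `eps`, is an illustrative sketch rather than the exact scheme of [7].

```python
import numpy as np

def weighted_svt(Y, c=1.0, eps=1e-8):
    """One-shot weighted singular value shrinkage in the spirit of WNNM:
    the per-value thresholds are inversely proportional to the singular
    values, so large singular values are shrunk less than small ones."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = c / (s + eps)                     # heuristic weights, inverse to singular values
    s_shrunk = np.maximum(s - w, 0.0)     # weighted soft-thresholding
    return (U * s_shrunk) @ Vt
```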

One common property of the aforementioned low-rank models is that they estimate the low-rank matrix only from the corrupted observation, which may lead to inaccurate results in real applications such as image inverse problems. By contrast, in this paper we propose a novel method, called rank residual constraint (RRC), for the rank minimization problem. Different from existing low-rank based methods, such as the well-known WNNM and NNM, we progressively approximate the underlying low-rank matrix by minimizing the rank residual. By integrating the image nonlocal self-similarity (NSS) prior with the proposed RRC model, we develop an iterative algorithm for image denoising. In a nutshell, given the corrupted image $y$, in each iteration we construct a reference low-rank matrix (for each image patch group) using a recursion-based nonlocal means algorithm [25], and make the recovered matrix approach this reference matrix via the proposed RRC model. Note that the reference matrix and the recovered matrix are improved gradually and jointly in each iteration. Fig. 1 shows that the image reconstructed by the proposed algorithm progressively approaches the ground truth, taking the widely used House image as an example, which is corrupted by zero-mean Gaussian noise with standard deviation $\sigma_n = 100$. It can be observed that the singular values of the recovered matrix approach the singular values of the ground truth progressively, and so does the recovered image (Fig. 1 (f-h)).

Note that the significant difference between the proposed RRC model and the existing low-rank based methods (e.g., WNNM and NNM) is that we analyze the rank minimization problem from a different perspective; the proposed RRC is therefore not a replacement of existing low-rank based methods such as WNNM and NNM. In our RRC model, we treat rank minimization from the viewpoint of mathematical approximation theory: by minimizing the rank residual, the singular values of the recovered matrix progressively approach the singular values of the reference matrix, rather than estimating the low-rank matrix directly from the corrupted observation as the traditional low-rank based methods do.

The flowchart of the proposed RRC model for image denoising is illustrated in Fig. 2. Moreover, we provide a theoretical analysis on the feasibility of the proposed RRC model from the perspective of group-based sparse representation [26, 30, 27, 29, 31], which is detailed in Section 4.

Figure 1: Illustration of the proposed image denoising method via rank residual constraint (RRC). The House image is corrupted by zero-mean Gaussian noise with standard deviation $\sigma_n = 100$. (b-d) The singular values of an image patch group (its reconstructed image is shown in the cyan box at the bottom row) from the ground truth image (red), the noisy image (blue), the image recovered by NNM (green), and the recovered matrix (cyan) at three different iterations of our algorithm. (f-h) Reconstructed images at the same three iterations using the proposed RRC model. It can be observed that the singular values of the recovered matrix progressively approach those of the ground truth, and the reconstructed image gets closer to the original image.
Figure 2: The flowchart of the proposed RRC model for image denoising.

The rest of this paper is organized as follows. Section 2 develops the RRC model under the rank minimization framework. Section 3 derives the algorithm that solves the RRC model for image denoising by integrating the image NSS prior. Section 4 provides a theoretical analysis of the proposed RRC model in terms of group-based sparse representation. Section 5 presents the experimental results for image denoising, and Section 6 concludes the paper.

2 Rank Minimization via Rank Residual Constraint

In this section, we first analyze the weakness of the traditional NNM model and then propose the rank residual constraint model to improve the rank estimation performance.

Nuclear Norm Minimization

According to [5], the nuclear norm is the tightest convex relaxation of the original rank minimization problem. Given a data matrix $Y$, the goal of NNM is to find a low-rank matrix $X$ by solving

$\hat{X} = \arg\min_{X} \frac{1}{2}\|Y - X\|_F^2 + \lambda\|X\|_*$,   (1)

where $\|\cdot\|_F$ denotes the Frobenius norm and $\lambda$ is the regularization parameter.

Candès and Recht [9] proved that the low-rank matrix can be perfectly recovered from the degraded/corrupted data matrix with high probability by solving an NNM problem. Despite the theoretical guarantee of the singular value thresholding (SVT) algorithm [15], it has been observed that the recovery performance of such a convex relaxation degrades in the presence of noise, and the solution can seriously deviate from the original solution of the rank minimization problem [17]. More specifically, NNM tends to over-shrink the rank of the matrix. Taking the image Lena in Fig. 3(a) as an example, we add Gaussian noise with standard deviation $\sigma_n = 100$ to the clean image and perform NNM to recover a denoised image, shown in Fig. 3(c). We randomly extract a patch from the noisy image in Fig. 3(b) and search for 60 similar patches to generate a group. The patches (after vectorization) in this group are then stacked into a data matrix (please refer to Section 3 for details of constructing the group). Since all the patches in this group have similar structures, the constructed data matrix is of low rank. Based on this, we plot the singular values of the patch group in the noisy image, the NNM recovered image, and the original image in Fig. 3(d). As can be seen, the solution of NNM (green line) is severely deviated (over-shrunk) from the ground truth (red line).

Figure 3: Analyzing the matrix rank by image denoising.

Rank Residual Constraint

As demonstrated in Fig. 3, due to the influence of noise, it is difficult to estimate the matrix rank precisely using NNM. More specifically, in Fig. 3(d), the singular values of the observed matrix deviate seriously from the singular values of the original matrix. In low-rank matrix estimation, however, we wish the singular values of the recovered matrix $X$ and the singular values of the original matrix $X^*$ to be as close as possible. Explicitly, we define the rank residual as

$\gamma = \sigma_{X} - \sigma_{X^*}$,   (2)

where $\sigma_{X}$ and $\sigma_{X^*}$ are the vectors of singular values of $X$ and $X^*$, respectively. It can be seen that the accuracy of the rank estimation of $X$ largely depends on the level of this rank residual.
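
In code, the rank residual of Eq. (2) is simply the difference between the two ordered singular value spectra; a minimal sketch (assuming both matrices have the same shape) is given below.

```python
import numpy as np

def rank_residual(X, X_orig):
    """Rank residual of Eq. (2): difference between the ordered singular
    values of the recovered matrix X and those of the original matrix."""
    sigma_X = np.linalg.svd(X, compute_uv=False)        # sorted in descending order
    sigma_orig = np.linalg.svd(X_orig, compute_uv=False)
    return sigma_X - sigma_orig
```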

However, in real applications the original matrix $X^*$ is not available, and thus we seek a good estimate of it, denoted by $\tilde{X}$. Introducing $\tilde{X}$ and defining $\tilde{\gamma} = \sigma_{X} - \sigma_{\tilde{X}}$, with $\sigma_{\tilde{X}}$ being the singular values of $\tilde{X}$, we propose the rank residual constraint (RRC) model below,

$\hat{X} = \arg\min_{X} \frac{1}{2}\|Y - X\|_F^2 + \lambda\|\sigma_{X} - \sigma_{\tilde{X}}\|_\ell$,   (3)

where $\|\cdot\|_\ell$ denotes some norm used for regularization, which is analyzed in Section 3. We will describe how to estimate $\tilde{X}$ and how to solve Eq. (3) below; specifically, we apply the proposed RRC model to image denoising in the following section.

3 Image Denoising via Rank Residual Constraint

Image denoising [25, 26, 30, 27, 55, 56, 29, 54, 28] is not only an important problem in image processing, but also an ideal test bench for different statistical image models. Mathematically, image denoising aims to recover the latent clean image $x$ from its noisy observation $y = x + n$, where $n$ is usually assumed to be zero-mean Gaussian noise with standard deviation $\sigma_n$. Owing to the ill-posed nature of image denoising, it is critical to exploit prior knowledge that characterizes the statistical features of the image.

The well-known nonlocal self-similarity (NSS) prior [25, 26, 30, 27, 29, 31], which exploits the repetitiveness of textures and structures of natural images within nonlocal regions, implies that many patches similar to a given reference patch can be found. To be concrete, the noisy (vectorized) image $y$ is divided into overlapping patches of size $d \times d$, and each patch is denoted by a vector $y_i$. For each patch $y_i$, its $m$ most similar patches are selected from a surrounding (searching) window of $W \times W$ pixels to form a set $S_i$. The patches in $S_i$ are then stacked column by column into a matrix $Y_i \in \mathbb{R}^{d^2 \times m}$, i.e., $Y_i = [y_{i,1}, y_{i,2}, \dots, y_{i,m}]$. This matrix, consisting of patches with similar structures, is called a group, where $y_{i,j}$ denotes the $j$-th patch of the $i$-th group (a block-matching sketch of this construction is given after Eq. (4)). We then have $Y_i = X_i + N_i$, where $X_i$ and $N_i$ are the corresponding group matrices of the original image and the noise, respectively. Since all patches in each data matrix have similar structures, the constructed data matrix $X_i$ is of low rank. By adopting the proposed RRC model in Eq. (3), the low-rank matrix $X_i$ can be estimated by solving the following optimization problem,

$\hat{X}_i = \arg\min_{X_i} \frac{1}{2}\|Y_i - X_i\|_F^2 + \lambda_i\|\sigma_{X_i} - \sigma_{\tilde{X}_i}\|_\ell$,   (4)

where $\gamma_i = \sigma_{X_i} - \sigma_{\tilde{X}_i}$, with $\sigma_{X_i}$ and $\sigma_{\tilde{X}_i}$ representing the singular values of $X_i$ and $\tilde{X}_i$, respectively, and $\tilde{X}_i$ is a good estimate of the original image patch group $X_i$. To achieve high denoising performance, we hope that the rank residual of each group is as small as possible.
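
As referenced above, the following sketch shows one straightforward way to build such a patch group by exhaustive block matching inside a local search window; the patch size, window radius and number of similar patches `m` are illustrative parameters, not the paper's settings.

```python
import numpy as np

def build_group(img, top, left, patch=8, window=30, m=60):
    """Stack the m patches most similar to the reference patch at (top, left)
    into a d^2 x m group matrix whose columns are vectorized patches."""
    h, w = img.shape
    ref = img[top:top + patch, left:left + patch]
    # search region around the reference patch, clipped to the image borders
    r0, r1 = max(0, top - window), min(h - patch, top + window)
    c0, c1 = max(0, left - window), min(w - patch, left + window)
    candidates = []
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            cand = img[r:r + patch, c:c + patch]
            dist = np.sum((cand - ref) ** 2)       # squared Euclidean patch distance
            candidates.append((dist, cand.reshape(-1)))
    candidates.sort(key=lambda t: t[0])            # most similar patches first
    cols = [p for _, p in candidates[:m]]
    return np.stack(cols, axis=1)                  # shape (patch*patch, m)
```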

Determining the Norm $\|\cdot\|_\ell$

Let us come back to Eq. (4). Obviously, one important issue of our RRC based image denoising is the choice of the norm $\|\cdot\|_\ell$ imposed on the rank residual. Hereby, we perform some experiments to investigate the statistical property of the rank residual $\gamma = \{\gamma_i\}$, where we use the original image $x$ to construct $\tilde{X}_i$. In these experiments, two typical images, Fence and Parrot, are corrupted by Gaussian noise with standard deviations $\sigma_n = 20$ and $\sigma_n = 50$, respectively, to generate the noisy image $y$. Fig. 4 shows the fitted empirical distributions of the rank residual on these two images. It can be observed that both empirical distributions are reasonably well approximated by a Laplacian distribution, which is usually modeled by an $\ell_1$-norm. Therefore, Eq. (4) can now be written as

$\hat{X}_i = \arg\min_{X_i} \frac{1}{2}\|Y_i - X_i\|_F^2 + \lambda_i\|\sigma_{X_i} - \sigma_{\tilde{X}_i}\|_1$.   (5)
Figure 4: The distributions of the rank residual for image Fence with $\sigma_n = 20$ (a) and image Parrot with $\sigma_n = 50$ (b).

Estimating the Reference Matrix $\tilde{X}_i$

In Eq. (4), after determining the norm, we also need to estimate $\tilde{X}_i$, as the original image is not available in real applications. A variety of algorithms exist for this purpose. For example, if we had many example images similar to the original image $x$, we could search for similar patches in the example image set to construct $\tilde{X}_i$ [32, 33]. However, in many practical situations such an example image set is simply unavailable. In this paper, inspired by the fact that natural images often contain repetitive structures [34], we search for patches that are nonlocally similar to the given patch directly in the noisy image and use a method similar to nonlocal means [25] to obtain each patch (column) of the reference matrix $\tilde{X}_i$ by

$\tilde{x}_i = \sum_{j=1}^{m} w_{i,j}\, y_{i,j}$,   (6)

where $m$ is the total number of similar patches and $w_{i,j}$ is the weight, which is inversely proportional to the distance between the patches $y_i$ and $y_{i,j}$, i.e., $w_{i,j} = \frac{1}{c_i}\exp(-\|y_i - y_{i,j}\|_2^2 / h)$, where $h$ is a predefined constant and $c_i$ is a normalization factor. It is worth noting that Eq. (6) is a recursion-like algorithm based on nonlocal means [25]; please refer to Fig. 2 for a demonstration of how the reference matrix $\tilde{X}_i$ is constructed.
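
A minimal sketch of the reference construction in Eq. (6): each column of the reference group is a nonlocal-means-style weighted average of the patches in the group, with weights decaying exponentially in the patch distance; the smoothing constant `h` is an illustrative placeholder.

```python
import numpy as np

def estimate_reference_group(Y_group, h=40.0):
    """Estimate the reference group X_tilde from a noisy group Y (d^2 x m).

    Each reference patch is a weighted average of all patches in the group,
    with weights inversely related to the distance ||y_j - y_k||^2, in the
    spirit of nonlocal means (Eq. (6))."""
    d2, m = Y_group.shape
    X_tilde = np.zeros_like(Y_group, dtype=float)
    for j in range(m):
        diff = Y_group - Y_group[:, [j]]              # differences to patch j
        w = np.exp(-np.sum(diff ** 2, axis=0) / h)    # unnormalized weights
        w /= w.sum()                                  # normalization factor
        X_tilde[:, j] = Y_group @ w                   # weighted average patch
    return X_tilde
```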

Iterative Shrinkage Algorithm to Solve the Proposed RRC Model

We now develop an efficient algorithm to solve Eq. (5). In order to do so, we first introduce the following lemma and theorem.

Lemma 1

[35] The minimization problem

$\hat{x} = \arg\min_{x} \frac{1}{2}\|x - a\|_2^2 + \tau\|x - b\|_1$   (7)

has a closed-form solution

$\hat{x} = b + \mathrm{sign}(a - b) \odot \max(|a - b| - \tau, 0)$,   (8)

where $\odot$ denotes the element-wise (Hadamard) product, the sign, absolute value and max operations are applied element-wise, and $a$, $b$ and $x$ are vectors of the same dimension.
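
Lemma 1 is an element-wise soft-thresholding rule centered at $b$ rather than at zero; a small sketch with illustrative names follows.

```python
import numpy as np

def shifted_soft_threshold(a, b, tau):
    """Closed-form solution of  min_x 0.5*||x - a||_2^2 + tau*||x - b||_1:
    soft-threshold the difference (a - b) by tau, then shift back by b."""
    diff = a - b
    return b + np.sign(diff) * np.maximum(np.abs(diff) - tau, 0.0)
```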

Theorem 1

[36] (von Neumann) For any two matrices $A, B \in \mathbb{R}^{m \times n}$, $\mathrm{tr}(A^T B) \le \mathrm{tr}(\Sigma_A^T \Sigma_B)$, where $\mathrm{tr}(\cdot)$ calculates the trace of the enclosed matrix, and $\Sigma_A$ and $\Sigma_B$ are the ordered singular value matrices of $A$ and $B$ with the same order, respectively.

We now provide the solution of Eq. (5) by the following theorem.

Theorem 2

Let $Y_i = U_i \Delta_i V_i^T$ be the SVD (singular value decomposition) of $Y_i$, with $\Delta_i = \mathrm{diag}(\sigma_{Y_i})$, and let $\tilde{X}_i = \tilde{U}_i \tilde{\Delta}_i \tilde{V}_i^T$ be the SVD of $\tilde{X}_i$, with $\tilde{\Delta}_i = \mathrm{diag}(\sigma_{\tilde{X}_i})$. The optimal solution to the problem in Eq. (5) is $\hat{X}_i = U_i \Sigma_i V_i^T$, where $\Sigma_i = \mathrm{diag}(\hat{\sigma}_{X_i})$ and the diagonal elements $\hat{\sigma}_{X_i}$ are obtained by solving

$\min_{\sigma_{X_i}} \frac{1}{2}\|\sigma_{Y_i} - \sigma_{X_i}\|_2^2 + \lambda_i\|\sigma_{X_i} - \sigma_{\tilde{X}_i}\|_1$.   (9)
Proof 1

Suppose that the SVDs of $Y_i$, $X_i$ and $\tilde{X}_i$ are $U_i \Delta_i V_i^T$, $U_X \Sigma_i V_X^T$ and $\tilde{U}_i \tilde{\Delta}_i \tilde{V}_i^T$, respectively, where $\Delta_i$, $\Sigma_i$ and $\tilde{\Delta}_i$ are the ordered singular value matrices with the same order. Recalling Eq. (5) and invoking Theorem 1, we have

$\|Y_i - X_i\|_F^2 = \mathrm{tr}(Y_i^T Y_i) - 2\,\mathrm{tr}(Y_i^T X_i) + \mathrm{tr}(X_i^T X_i) \ge \|\sigma_{Y_i} - \sigma_{X_i}\|_2^2$,   (10)

where the equality holds only when $U_X = U_i$ and $V_X = V_i$. Therefore, Eq. (5) is minimized when $U_X = U_i$ and $V_X = V_i$, and the optimal $\sigma_{X_i}$ is obtained by solving

$\min_{\sigma_{X_i}} \frac{1}{2}\|\sigma_{Y_i} - \sigma_{X_i}\|_2^2 + \lambda_i\|\sigma_{X_i} - \sigma_{\tilde{X}_i}\|_1$,   (11)

where $\sigma_{Y_i}$, $\sigma_{X_i}$ and $\sigma_{\tilde{X}_i}$ are the singular values of $Y_i$, $X_i$ and $\tilde{X}_i$, respectively.

Thereby, the minimization problem in Eq. (5) can be simplified to the problem in Eq. (11).

For fixed $\sigma_{Y_i}$ and $\sigma_{\tilde{X}_i}$, based on Lemma 1, the closed-form solution of Eq. (11) is

$\hat{\sigma}_{X_i} = \sigma_{\tilde{X}_i} + \mathrm{sign}(\sigma_{Y_i} - \sigma_{\tilde{X}_i}) \odot \max(|\sigma_{Y_i} - \sigma_{\tilde{X}_i}| - \lambda_i, 0)$.   (12)

Given the solution $\hat{\sigma}_{X_i}$ in Eq. (12), the estimated group matrix is reconstructed as $\hat{X}_i = U_i\, \mathrm{diag}(\hat{\sigma}_{X_i})\, V_i^T$. The denoised image $\hat{x}$ is then reconstructed by aggregating all the group matrices $\{\hat{X}_i\}$.
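
Putting Theorem 2 and Eq. (12) together, the per-group RRC step amounts to one SVD of the noisy group, a soft-thresholding of its singular values towards the reference spectrum, and a reconstruction with the noisy group's singular vectors; the sketch below assumes both groups have the same shape.

```python
import numpy as np

def rrc_denoise_group(Y_group, X_tilde, lam):
    """Solve Eq. (5) for one group via Theorem 2 / Eq. (12).

    Y_group : noisy group matrix (d^2 x m)
    X_tilde : reference group matrix of the same shape
    lam     : regularization parameter lambda_i
    """
    U, s_y, Vt = np.linalg.svd(Y_group, full_matrices=False)
    s_ref = np.linalg.svd(X_tilde, compute_uv=False)      # reference spectrum
    diff = s_y - s_ref
    s_hat = s_ref + np.sign(diff) * np.maximum(np.abs(diff) - lam, 0.0)
    s_hat = np.maximum(s_hat, 0.0)                         # clip to keep singular values nonnegative
    return (U * s_hat) @ Vt                                # U @ diag(s_hat) @ Vt
```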

In practical applications, we perform the above denoising procedure for several iterations to achieve better results. In the $t$-th iteration, the iterative regularization strategy [37] is used to update $y$ by

$y^{(t)} = \hat{x}^{(t-1)} + \delta\,(y - \hat{x}^{(t-1)})$,   (13)

where $\delta$ represents the step-size. The standard deviation of the noise in the $t$-th iteration, $\sigma_n^{(t)}$, is re-estimated accordingly using a constant scaling factor $\eta$. The parameter $\lambda_i$, which balances the fidelity term and the regularization term, should also be adaptively determined in each iteration; inspired by [38], $\lambda_i$ of each group matrix is set to

(14)

where $\theta_i$ denotes the estimated variance of the rank residual $\gamma_i$, and $c$ and $\varepsilon$ are small constants.
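
The iterative regularization step of Eq. (13) simply feeds a fraction of the method noise back into the working image before the next round of group denoising; a one-line sketch with an illustrative step-size `delta` is shown below.

```python
def iterative_regularization(y, x_hat, delta=0.1):
    """Eq. (13): add back a fraction of the residual (y - x_hat) so that
    details removed in earlier iterations can be recovered later."""
    return x_hat + delta * (y - x_hat)
```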

The complete description of the proposed RRC based image denoising approach to solve the problem in Eq. (5) is exhibited in Algorithm 1, corresponding to the flowchart shown in Fig. 2.

0:  Input: Noisy image $y$.
1:  Initialize $\hat{x}^{(0)} = y$, $y^{(0)} = y$, and set the parameters (patch size, number of similar patches $m$, search window size, $\delta$, $\eta$, $c$, $\varepsilon$, etc.).
2:  for $t = 1$ to Max-Iter  do
3:     Iterative regularization: $y^{(t)} = \hat{x}^{(t-1)} + \delta\,(y - \hat{x}^{(t-1)})$.
4:     for each patch $y_i$ in $y^{(t)}$  do
5:        Find similar patches to construct the group matrix $Y_i$.
6:        Perform the SVD of $Y_i$: $Y_i = U_i \Delta_i V_i^T$.
7:        Estimate the reference matrix $\tilde{X}_i$ by Eq. (6).
8:        Perform the SVD of $\tilde{X}_i$ to obtain $\sigma_{\tilde{X}_i}$.
9:        Update $\lambda_i$ by Eq. (14).
10:        Estimate $\hat{\sigma}_{X_i}$ by Eq. (12).
11:        Get the estimate $\hat{X}_i = U_i\, \mathrm{diag}(\hat{\sigma}_{X_i})\, V_i^T$.
12:     end for
13:     Aggregate $\{\hat{X}_i\}$ to form the denoised image $\hat{x}^{(t)}$.
14:  end for
15:  Output: The final denoised image $\hat{x}$.
Algorithm 1 The Proposed RRC for Image Denoising.
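
A compact end-to-end sketch of Algorithm 1, built from the helper functions above (`build_group`, `estimate_reference_group`, `rrc_denoise_group`, `iterative_regularization`); the patch stride, the aggregation by plain averaging, the fixed per-group regularization weight and the fixed iteration count are simplifications of the paper's adaptive choices.

```python
import numpy as np

def rrc_denoise(y, lam=10.0, n_iter=8, patch=8, stride=4, m=60, delta=0.1):
    """Simplified RRC denoiser in the spirit of Algorithm 1: iterative
    regularization (Eq. 13), group construction, reference estimation (Eq. 6),
    per-group RRC shrinkage (Eq. 12) and aggregation by averaging."""
    y = np.asarray(y, dtype=float)
    h, w = y.shape
    x_hat = y.copy()
    # patch positions; include the last row/column so every pixel is covered
    rows = list(range(0, h - patch + 1, stride))
    cols = list(range(0, w - patch + 1, stride))
    if rows[-1] != h - patch:
        rows.append(h - patch)
    if cols[-1] != w - patch:
        cols.append(w - patch)
    for _ in range(n_iter):
        y_t = iterative_regularization(y, x_hat, delta)       # Eq. (13)
        acc = np.zeros_like(y)                                # pixel accumulator
        cnt = np.zeros_like(y)                                # overlap counter
        for top in rows:
            for left in cols:
                Y_i = build_group(y_t, top, left, patch=patch, m=m)
                X_tilde = estimate_reference_group(Y_i)       # Eq. (6)
                # fixed lam here; the paper adapts lambda_i per group via Eq. (14)
                X_hat_i = rrc_denoise_group(Y_i, X_tilde, lam)
                # put the denoised reference patch (first column) back, averaging overlaps
                acc[top:top + patch, left:left + patch] += X_hat_i[:, 0].reshape(patch, patch)
                cnt[top:top + patch, left:left + patch] += 1.0
        x_hat = acc / cnt
    return x_hat
```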

4 Analyzing the RRC model Using Group Sparse Representation

In this section, we provide a mathematical explanation of the proposed RRC model from the perspective of group-based sparse representation (GSR) [26, 30, 27, 29, 31]. To this end, an adaptive dictionary for each group is introduced. Based on this designed dictionary, we bridge the gap between the proposed RRC model and the GSR model. More specifically, we prove that the proposed RRC model is equivalent to a GSR model, namely the group sparsity residual constraint (GSRC) model [40, 57, 49, 58].

4.1 Group-based Sparse Representation

We first give a brief introduction to the GSR model [31]. We extract the group matrices $\{X_i\}$ from a clean image $x$. Similar to patch-based sparse representation, e.g., K-SVD [51], given a dictionary $D_i$, each group $X_i$ can be sparsely represented by solving

$\alpha_i^* = \arg\min_{\alpha_i} \frac{1}{2}\|X_i - D_i\,\alpha_i\|_F^2 + \lambda_i\|\alpha_i\|_1$,   (15)

where $\alpha_i^*$ is the group sparse coefficient of each group $X_i$, and the $\ell_1$-norm is imposed on each column when the coefficient is a matrix (the same convention holds for the following derivations with the $\ell_1$-norm on a matrix).

In image denoising, the goal is to use the GSR model to recover the group matrix $X_i$ from the noisy observation $Y_i$ by solving

$\alpha_i = \arg\min_{\alpha_i} \frac{1}{2}\|Y_i - D_i\,\alpha_i\|_F^2 + \lambda_i\|\alpha_i\|_1$.   (16)

Once $\alpha_i$ is obtained, the clean group is estimated as $\hat{X}_i = D_i\,\alpha_i$ and the clean image can be reconstructed.

However, in a noisy environment it is challenging to estimate the true group sparse coefficient from $Y_i$ directly. In other words, the group sparse coefficient $\alpha_i$ obtained from Eq. (16) is expected to be close to the true group sparse coefficient $\alpha_i^*$ in Eq. (15). Therefore, the quality of image denoising largely depends on the group sparsity residual, which is defined as the difference between $\alpha_i$ and $\alpha_i^*$,

$R_i = \alpha_i - \alpha_i^*$.   (17)

Similar to the RRC model, in real applications $\alpha_i^*$ is not available and we thus employ an estimate of it, denoted by $\beta_i$. Given $\tilde{X}_i$ and the dictionary $D_i$, the group sparse coefficient $\beta_i$ of each group is solved by

$\beta_i = \arg\min_{\beta_i} \frac{1}{2}\|\tilde{X}_i - D_i\,\beta_i\|_F^2 + \lambda_i\|\beta_i\|_1$.   (18)

Following this, in order to reduce the group sparsity residual and enhance the accuracy of $\alpha_i$, we define the group sparsity residual constraint (GSRC) model below,

$\hat{\alpha}_i = \arg\min_{\alpha_i} \frac{1}{2}\|Y_i - D_i\,\alpha_i\|_F^2 + \lambda_i\|\alpha_i - \beta_i\|_1$.   (19)

We will prove that this GSRC model is equivalent to the proposed RRC model under the adaptive dictionary defined below.

4.2 Adaptive Dictionary Learning

Hereby, an adaptive dictionary learning method is designed; that is, for each group $X_i$, its adaptive dictionary $D_i$ can be learned directly from its noisy observation $Y_i$.

Specifically, we apply the SVD to $Y_i$,

$Y_i = U_i \Delta_i V_i^T = \sum_{j} \delta_{i,j}\, u_{i,j}\, v_{i,j}^T$,   (20)

where $\Delta_i$ is a diagonal matrix whose non-zero elements are the singular values $\delta_{i,j}$, and $u_{i,j}$, $v_{i,j}$ are the columns of $U_i$ and $V_i$, respectively.

We then define each dictionary atom $d_{i,j}$ of the adaptive dictionary $D_i$ for each group $Y_i$ as

$d_{i,j} = u_{i,j}\, v_{i,j}^T$.   (21)

Till now, an adaptive dictionary $D_i = [d_{i,1}, d_{i,2}, \dots]$ has been learned for each group $Y_i$.
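
The adaptive dictionary of Eq. (21) consists of the rank-one outer products of the noisy group's singular vectors; the snippet below builds it in vectorized form and numerically checks the identity of Lemma 2 for a random coefficient vector (all names and sizes are illustrative).

```python
import numpy as np

def adaptive_dictionary(Y_group):
    """Eqs. (20)-(21): atoms d_j = u_j v_j^T from the SVD of the noisy group,
    returned here in vectorized form as the columns of a matrix D."""
    U, s, Vt = np.linalg.svd(Y_group, full_matrices=False)
    atoms = [np.outer(U[:, j], Vt[j, :]).reshape(-1) for j in range(len(s))]
    return np.stack(atoms, axis=1), s          # D and the singular values of Y_group

# numerical check of Lemma 2: ||Y - D*alpha||_F^2 == ||sigma_Y - alpha||_2^2
rng = np.random.default_rng(0)
Y = rng.standard_normal((64, 16))
D, sigma_Y = adaptive_dictionary(Y)
alpha = rng.standard_normal(sigma_Y.shape)
lhs = np.linalg.norm(Y.reshape(-1) - D @ alpha) ** 2
rhs = np.linalg.norm(sigma_Y - alpha) ** 2
assert np.isclose(lhs, rhs)                    # holds because the atoms are orthonormal
```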

4.3 Proving the Equivalence of RRC and GSRC

Now, let us recall the $\ell_1$-norm GSR problem in Eq. (19) and the adaptive dictionary defined in Eq. (21). In order to prove that RRC is equivalent to GSRC, we first introduce the following lemma.

Lemma 2

Let $X_i = D_i\,\alpha_i$, $Y_i \in \mathbb{R}^{d^2 \times m}$, and let $D_i$ be constructed by Eq. (21), where $D_i\,\alpha_i := \sum_j \alpha_{i,j}\, d_{i,j}$. We have

$\|Y_i - D_i\,\alpha_i\|_F^2 = \|\sigma_{Y_i} - \alpha_i\|_2^2$.   (22)
Proof 2

From $D_i$ in Eq. (21) and the unitary property of $U_i$ and $V_i$,

$\|Y_i - D_i\,\alpha_i\|_F^2 = \big\|\sum_j (\delta_{i,j} - \alpha_{i,j})\, u_{i,j}\, v_{i,j}^T\big\|_F^2 = \sum_j (\delta_{i,j} - \alpha_{i,j})^2 = \|\sigma_{Y_i} - \alpha_i\|_2^2$.   (23)

Based on Lemma 1 and Theorem 2, we have the following theorem.

Theorem 3

The RRC model in Eq. (5) and the GSRC model in Eq. (19) are equivalent under the adaptive dictionary in Eq. (21).

Proof 3

On the basis of Lemma 2, we have

$\frac{1}{2}\|Y_i - D_i\,\alpha_i\|_F^2 + \lambda_i\|\alpha_i - \beta_i\|_1 = \frac{1}{2}\|\sigma_{Y_i} - \alpha_i\|_2^2 + \lambda_i\|\alpha_i - \beta_i\|_1$,   (24)

where $\alpha_i$ and $\beta_i$ denote the group sparse coefficients of $Y_i$ and $\tilde{X}_i$ under $D_i$, and the group matrices and dictionary atoms are understood in vectorized form when multiplied by the coefficient vectors.

Following this, based on Lemma 1, we have

$\hat{\alpha}_i = \beta_i + \mathrm{sign}(\sigma_{Y_i} - \beta_i) \odot \max(|\sigma_{Y_i} - \beta_i| - \lambda_i, 0)$.   (25)

Obviously, according to Eqs. (20) and (21),

$\alpha_i = \sigma_{X_i}, \qquad \beta_i = \sigma_{\tilde{X}_i}$,   (26)

where $\alpha_{i,j}$ and $\beta_{i,j}$ represent the $j$-th elements of the group sparse coefficients $\alpha_i$ and $\beta_i$, respectively.

Therefore, based on the adaptive dictionary in Eq. (21) and Theorem 2, we have proved that Eq. (25) is equivalent to Eq. (12). We thus have that RRC is equivalent to GSRC, i.e.,

$\hat{X}_i = \arg\min_{X_i} \frac{1}{2}\|Y_i - X_i\|_F^2 + \lambda_i\|\sigma_{X_i} - \sigma_{\tilde{X}_i}\|_1 = D_i\,\hat{\alpha}_i$.   (27)

According to the above analysis, we thus bridge the gap between the proposed RRC model and the GSR model. It is worth noting that the dictionary can be learned in various manners and the proposed adaptive dictionary learning approach is just one example. Although the designed adaptive dictionary learning seems to translate the sparse representation into a rank minimization problem, the main difference between sparse representation and rank minimization models is that sparse representation involves a dictionary learning process while rank minimization does not, to the best of our knowledge. This is also the key difference between our RRC model and the NCSR method [40]. There is extensive research on the sparsity residual model for image processing, and these models have achieved great success [40, 57, 49, 58]. Encouraged by this, and since we have proved the equivalence between the proposed RRC model and the GSRC model based on the designed dictionary, we are confident in the feasibility of the RRC model for image processing, which will be further validated by extensive experiments on image denoising in the following section.

Figure 5: The 12 test images: Lena, Leaves, Monarch, Airplane, House, Parrot, Starfish, Fence, Foreman, J.Bean, Barbara, Plants.

5 Experimental Results

In this section, we conduct experiments to validate the performance of the proposed RRC model and compare it with leading denoising methods, including BM3D [26], EPLL [28], Plow [39], NCSR [40], PID [41], PGPD [29], LINC [42], aGMM [43] and NNM. The parameter settings of the proposed RRC model are as follows. The patch size and the number of selected similar patches are chosen according to the noise level $\sigma_n$, and the searching window for similar patches is fixed. The remaining parameters are set to (0.1, 0.9, 0.9, 60, 0.001), (0.1, 0.8, 0.9, 60, 0.001), (0.1, 0.8, 0.9, 70, 0.0006), (0.1, 0.8, 1, 80, 0.0006), (0.1, 0.8, 1, 90, 0.0005) and (0.1, 0.8, 1, 100, 0.002) for six increasing noise-level ranges, respectively. Throughout the numerical experiments, the proposed RRC denoising algorithm is terminated once the change between successive estimates falls below a small constant. The source code of the proposed RRC for image denoising can be downloaded at: https://drive.google.com/open?id=1XfW6_lsv0p7LzU7Wjzve9YNLuG3uZvei.

Figure 6: Denoising results of Monarch with $\sigma_n = 100$. (a) Original image; (b) Noisy image; (c) NNM (PSNR = 21.03dB); (d) BM3D (PSNR = 22.52dB); (e) EPLL (PSNR = 22.24dB); (f) Plow (PSNR = 21.83dB); (g) NCSR (PSNR = 22.10dB); (h) PID (PSNR = 22.59dB); (i) PGPD (PSNR = 22.56dB); (j) LINC (PSNR = 22.13dB); (k) aGMM (PSNR = 22.42dB); (l) RRC (PSNR = 22.76dB).
Images | $\sigma_n$ = 30: NNM BM3D EPLL Plow NCSR PID PGPD LINC aGMM WNNM RRC | $\sigma_n$ = 50: NNM BM3D EPLL Plow NCSR PID PGPD LINC aGMM WNNM RRC
Airplane 27.62 28.49 28.54 28.03 28.34 28.69 28.63 28.53 28.42 28.82 28.63 25.16 25.76 25.96 25.64 25.63 26.09 25.98 26.04 25.83 26.32 26.13
0.7441 0.8631 0.8628 0.8532 0.8660 0.8734 0.8646 0.8632 0.8647 0.8717 0.8716 0.6839 0.8044 0.7922 0.7698 0.8066 0.8163 0.8059 0.8021 0.7990 0.8121 0.8172
Barbara 28.08 29.08 27.58 28.99 28.68 29.07 28.93 29.53 27.88 29.67 29.51 25.66 26.42 24.86 26.42 26.13 26.58 26.27 26.27 25.37 26.83 26.78
0.7924 0.8618 0.8209 0.8597 0.8524 0.8670 0.8565 0.8780 0.8129 0.8790 0.8736 0.7004 0.7698 0.6943 0.7663 0.7572 0.7802 0.7613 0.7612 0.7021 0.7925 0.7872
Fence 27.43 28.19 27.22 27.59 28.13 28.20 28.13 28.23 27.31 28.61 28.25 25.22 25.92 24.57 25.49 25.77 25.94 25.94 25.89 24.57 26.42 25.97
0.7785 0.8326 0.8150 0.8182 0.8298 0.8318 0.8255 0.8286 0.8021 0.8382 0.8246 0.6988 0.7621 0.7162 0.7496 0.7476 0.7557 0.7573 0.7535 0.7010 0.7777 0.7561
Foreman 30.24 32.75 31.70 32.45 32.61 33.09 32.83 32.93 32.31 32.99 33.26 28.69 30.36 29.20 29.60 30.41 30.63 30.45 30.33 29.80 30.75 30.87
0.7216 0.8823 0.8617 0.8698 0.8846 0.8923 0.8818 0.8894 0.8766 0.8853 0.8952 0.6983 0.8445 0.8051 0.7976 0.8559 0.8585 0.8410 0.8534 0.8270 0.8508 0.8611
House 29.85 32.09 31.24 31.67 32.01 32.10 32.24 32.26 31.79 32.58 32.30 28.00 29.69 28.79 28.99 29.61 29.58 29.93 29.87 29.28 30.23 29.92
0.7118 0.8480 0.8338 0.8383 0.8479 0.8503 0.8471 0.8485 0.8435 0.8495 0.8527 0.6780 0.8122 0.7845 0.7699 0.8160 0.8140 0.8125 0.8180 0.8002 0.8226 0.8247
J.Bean 29.77 31.97 31.55 31.61 31.99 31.96 31.99 31.82 32.50 32.38 32.33 27.77 29.26 28.73 28.66 29.24 29.29 29.20 29.01 29.46 29.24 29.38
0.7572 0.9357 0.9240 0.9204 0.9435 0.9462 0.9317 0.9449 0.9413 0.9408 0.9482 0.7293 0.9006 0.8677 0.8430 0.9134 0.9131 0.8934 0.9085 0.8911 0.9046 0.9125
Leaves 27.17 27.81 27.19 27.00 28.04 27.87 27.99 27.99 27.53 28.61 28.35 24.22 24.68 24.39 24.28 24.94 25.01 25.03 25.11 24.42 25.58 25.30
0.8780 0.9278 0.9197 0.9057 0.9311 0.9315 0.9300 0.9339 0.9273 0.9414 0.9366 0.8250 0.8680 0.8638 0.8354 0.8787 0.8817 0.8794 0.8925 0.8673 0.9015 0.8910
Lena 28.29 29.46 29.18 29.16 29.32 29.59 29.60 29.82 29.38 29.44 29.67 26.15 26.90 26.68 26.70 26.94 27.09 27.15 26.94 26.85 27.25 27.17
0.7543 0.8584 0.8477 0.8493 0.8580 0.8650 0.8622 0.8668 0.8548 0.8595 0.8672 0.6966 0.7920 0.7732 0.7691 0.8009 0.7988 0.7990 0.7976 0.7820 0.8020 0.8073
Monarch 27.63 28.36 28.36 27.77 28.38 28.63 28.49 28.74 28.27 29.13 28.79 25.30 25.82 25.78 25.41 25.73 26.21 26.00 25.88 25.82 26.27 26.22
0.7980 0.8822 0.8789 0.8714 0.8829 0.8909 0.8853 0.8970 0.8831 0.8999 0.8954 0.7428 0.8200 0.8124 0.7910 0.8252 0.8338 0.8269 0.8314 0.8164 0.8369 0.8361
Parrot 28.97 30.33 30.00 29.88 30.20 30.67 30.30 30.64 30.26 30.78 30.50 26.77 27.88 27.53 27.26 27.67 28.26 27.91 28.23 27.80 28.16 28.03
0.7337 0.8705 0.8569 0.8617 0.8705 0.8780 0.8681 0.8744 0.8671 0.8740 0.8765 0.6952 0.8273 0.7998 0.7872 0.8310 0.8365 0.8246 0.8386 0.8174 0.8321 0.8371
Plants 29.09 30.70 30.43 30.41 30.19 30.86 30.73 30.67 30.50 30.94 30.90 27.05 28.11 27.83 27.75 27.65 28.31 28.25 27.96 28.00 28.25 28.32
0.7141 0.8373 0.8278 0.8270 0.8273 0.8395 0.8370 0.8393 0.8314 0.8450 0.8459 0.6545 0.7669 0.7479 0.7327 0.7589 0.7679 0.7669 0.7636 0.7561 0.7745 0.7789
Starfish 27.10 27.65 27.52 27.02 27.69 27.35 27.67 27.52 27.61 28.02 27.95 24.58 25.04 25.05 24.71 25.06 24.80 25.11 24.81 25.09 25.32 25.34
0.7725 0.8289 0.8248 0.8075 0.8283 0.8180 0.8277 0.8195 0.8263 0.8378 0.8304 0.6887 0.7433 0.7392 0.7175 0.7440 0.7293 0.7457 0.7326 0.7419 0.7529 0.7589
Average 28.44 29.74 29.21 29.30 29.63 29.84 29.79 29.89 29.48 30.17 30.04 26.21 27.15 26.61 26.74 27.06 27.31 27.27 27.20 26.86 27.55 27.45
0.7630 0.8690 0.8562 0.8569 0.8685 0.8737 0.8681 0.8736 0.8609 0.8768 0.8765 0.7076 0.8093 0.7830 0.7774 0.8113 0.8155 0.8095 0.8127 0.7918 0.8217 0.8223
Images | $\sigma_n$ = 75: NNM BM3D EPLL Plow NCSR PID PGPD LINC aGMM WNNM RRC | $\sigma_n$ = 100: NNM BM3D EPLL Plow NCSR PID PGPD LINC aGMM WNNM RRC
Airplane 23.15 23.99 24.03 23.67 23.76 24.08 24.15 23.81 23.95 24.20 24.10 21.75 22.89 22.78 22.30 22.60 22.82 23.02 22.42 22.67 22.93 22.93
0.5493 0.7488 0.7168 0.6589 0.7547 0.7556 0.7492 0.7475 0.7248 0.7570 0.7637 0.4897 0.7036 0.6523 0.5698 0.7107 0.7083 0.6947 0.6931 0.6571 0.7075 0.7209
Barbara 23.58 24.53 23.00 24.30 24.06 24.67 24.39 24.03 23.09 24.79 24.62 22.01 23.20 21.89 22.86 22.70 23.37 23.11 22.39 21.92 23.27 23.37
0.5691 0.6798 0.5848 0.6548 0.6616 0.6879 0.6729 0.6613 0.5882 0.6964 0.6825 0.5026 0.6092 0.5135 0.5647 0.5960 0.6179 0.6039 0.5773 0.5163 0.6172 0.6243
Fence 23.22 24.22 22.46 23.57 23.75 24.20 24.18 23.81 22.70 24.53 24.32 21.62 22.92 21.10 22.17 22.23 23.00 22.87 22.34 21.50 23.69 23.08
0.5890 0.6962 0.6076 0.6586 0.6742 0.6857 0.6872 0.6750 0.6098 0.7108 0.6924 0.5044 0.6362 0.5252 0.5727 0.6009 0.6313 0.6226 0.6184 0.5386 0.6753 0.6407
Foreman 26.18 28.07 27.24 27.15 28.18 28.40 28.39 28.11 27.67 28.49 28.83 24.79 26.51 25.91 25.55 26.55 26.96 26.81 26.55 26.20 27.41 27.27
0.5524 0.7933 0.7467 0.7067 0.8171 0.8186 0.7965 0.8162 0.7676 0.8099 0.8259 0.5160 0.7489 0.6949 0.6329 0.7833 0.7888 0.7452 0.7826 0.7129 0.7817 0.7969
House 25.56 27.51 26.70 26.52 27.16 27.35 27.81 27.56 27.11 28.46 27.98 23.66 25.87 25.21 24.72 25.49 25.75 26.17 26.11 25.55 26.68 26.38
0.5439 0.7645 0.7251 0.6733 0.7749 0.7723 0.7709 0.7850 0.7419 0.7924 0.7950 0.4918 0.7203 0.6695 0.5874 0.7397 0.7349 0.7195 0.7550 0.6854 0.7540 0.7655
J.Bean 25.23 27.22 26.57 26.23 27.15 27.06 27.07 26.62 27.09 27.20 27.17 23.73 25.80 25.16 24.55 25.61 25.55 25.66 24.88 25.58 25.64 25.71
0.5796 0.8573 0.8019 0.7422 0.8792 0.8730 0.8503 0.8669 0.8243 0.8637 0.8749 0.5341 0.8181 0.7429 0.6574 0.8472 0.8386 0.7999 0.8339 0.7628 0.8188 0.8443
Leaves 21.79 22.49 22.03 22.02 22.60 22.61 22.61 22.45 21.96 23.13 22.92 19.57 20.90 20.26 20.43 20.84 20.77 20.95 20.49 20.29 21.56 21.22
0.7265 0.8072 0.7921 0.7512 0.8234 0.8145 0.8121 0.8247 0.7867 0.8439 0.8377 0.6345 0.7482 0.7163 0.6814 0.7622 0.7405 0.7469 0.7499 0.7106 0.7946 0.7811
Lena 24.08 25.17 24.75 24.64 25.02 25.16 25.30 25.12 25.02 25.38 25.33 22.30 23.87 23.46 23.19 23.63 23.91 24.02 23.67 23.73 24.07 24.14
0.5647 0.7288 0.6968 0.6723 0.7415 0.7350 0.7356 0.7358 0.7101 0.7413 0.7498 0.5093 0.6739 0.6345 0.5895 0.6906 0.6874 0.6780 0.6845 0.6487 0.6912 0.7100
Monarch 23.06 23.91 23.73 23.34 23.67 24.22 24.00 23.91 23.85 24.16 24.24 21.03 22.52 22.24 21.83 22.10 22.59 22.56 22.13 22.42 22.87 22.76
0.6206 0.7557 0.7395 0.6917 0.7648 0.7736 0.7642 0.7714 0.7454 0.7755 0.7782 0.5596 0.7021 0.6771 0.6102 0.7109 0.7160 0.7029 0.7076 0.6823 0.7280 0.7312
Parrot 24.54 25.94 25.56 25.15 25.45 26.28 25.98 26.20 25.72 26.33 26.22 22.84 24.60 24.08 23.65 23.94 24.85 24.52 24.48 24.26 24.86 24.83
0.5567 0.7771 0.7399 0.6859 0.7892 0.7979 0.7775 0.7988 0.7555 0.7930 0.8028 0.5197 0.7345 0.6844 0.6096 0.7518 0.7671 0.7251 0.7721 0.6979 0.7529 0.7729
Plants 24.80 26.25 25.90 25.57 25.75 26.30 26.33 25.90 26.05 26.26 26.40 22.27 24.98 24.65 24.14 24.46 24.99 25.06 24.36 24.75 24.88 24.91
0.5107 0.7006 0.6720 0.6255 0.7007 0.7011 0.7009 0.6998 0.6805 0.7103 0.7172 0.4789 0.6525 0.6129 0.5531 0.6587 0.6566 0.6472 0.6495 0.6210 0.6557 0.6680
Starfish 22.52 23.27 23.17 22.82 23.18 22.89 23.23 22.74 23.22 23.25 23.32 20.97 22.10 21.92 21.48 21.91 21.63 22.08 21.10 21.95 22.05 21.98
0.5617 0.6670 0.6502 0.6192 0.6685 0.6422 0.6638 0.6416 0.6525 0.6659 0.6741 0.4979 0.6053 0.5799 0.5403 0.6062 0.5760 0.6018 0.5635 0.5813 0.6176 0.6081
Average 23.98 25.21 24.60 24.58 24.98 25.27 25.29 25.02 24.78 25.52 25.45 22.21 23.85 23.22 23.07 23.50 23.85 23.90 23.41 23.40 24.19 24.05
0.5770 0.7480 0.7061 0.6784 0.7541 0.7548 0.7484 0.7520 0.7156 0.7633 0.7662 0.5199 0.6961 0.6420 0.5974 0.7049 0.7053 0.6906 0.6989 0.6512 0.7161 0.7220
Table 1: PSNR in dB (top entry in each cell) and SSIM (bottom entry) results of different denoising methods, at noise levels $\sigma_n$ = 30 and 50 (upper block) and $\sigma_n$ = 75 and 100 (lower block).

We evaluate the competing methods on 12 widely used test images shown in Fig. 5, i.e., Lena, Leaves, Monarch, Airplane, House, Parrot, Starfish, Fence, Foreman, J.Bean, Barbara and Plants. Here, we present the denoising results at four noise levels, i.e., $\sigma_n \in$ {30, 50, 75, 100}. The PSNR and SSIM results under these noise levels for all methods are shown in Table 1. It can be seen that the proposed RRC algorithm outperforms the other competing methods in most cases in terms of PSNR. The average gains of the proposed RRC over the BM3D, EPLL, Plow, NCSR, PID, PGPD, LINC, aGMM and NNM methods are as much as 0.25dB, 0.84dB, 0.82dB, 0.45dB, 0.18dB, 0.18dB, 0.37dB, 0.62dB and 1.54dB, respectively. It is clear that the proposed RRC significantly outperforms the representative rank minimization method, namely NNM. One can also observe that the proposed RRC achieves higher SSIM results than the other competing methods. In particular, under the high noise level $\sigma_n = 100$, the proposed RRC consistently outperforms the other competing methods for all test images; the only exception is the image J.Bean, for which NCSR is slightly (0.0029) better than the proposed RRC method in SSIM. The visual comparisons of different denoising methods are shown in Figs. 6-7. Obviously, NNM generates the worst perceptual result. One can observe that EPLL, Plow, NCSR, PGPD and aGMM still suffer from some undesirable artifacts, while BM3D, PID and LINC tend to over-smooth the image. By contrast, the proposed RRC not only removes most of the visual artifacts, but also preserves large-scale sharp edges and small-scale image details.

Figure 7: Denoising results of Leaves with $\sigma_n = 100$. (a) Original image; (b) Noisy image; (c) NNM (PSNR = 19.57dB); (d) BM3D (PSNR = 20.90dB); (e) EPLL (PSNR = 20.26dB); (f) Plow (PSNR = 20.43dB); (g) NCSR (PSNR = 20.84dB); (h) PID (PSNR = 20.77dB); (i) PGPD (PSNR = 20.95dB); (j) LINC (PSNR = 20.49dB); (k) aGMM (PSNR = 20.29dB); (l) RRC (PSNR = 21.22dB).

We also compare the proposed RRC with the WNNM [16] method, a well-known rank minimization method that delivers state-of-the-art denoising results. The PSNR/SSIM results are shown in the last two columns of each block of Table 1. It can be seen that, although the PSNR results of RRC are slightly (about 0.2dB) lower than those of WNNM, the SSIM results of the proposed RRC are higher (about 0.01) than those of WNNM when the noise level is high. SSIM takes the human visual system into account and often correlates better with perceived quality. The visual comparison of RRC and WNNM on one exemplar image is shown in Fig. 8, where we can observe that more details are recovered by RRC than by WNNM. These experimental findings demonstrate that the proposed RRC model is a strong prior for the class of photographic images containing large variations in edges/textures.

The proposed RRC model is a traditional (non-deep-learning) algorithm. The running time of RRC is shorter than that of NCSR, about twice that of NNM, and very close to that of WNNM.

Figure 8: Denoising results of WNNM and RRC at $\sigma_n = 100$. (a) Original House image; (b) Noisy image; (c) WNNM (PSNR = 26.68dB, SSIM = 0.7540); (d) RRC (PSNR = 26.38dB, SSIM = 0.7655).

6 Conclusion

We have proposed a new method, called rank residual constraint (RRC), which reinterprets the rank minimization problem from the perspective of matrix approximation. By minimizing the rank residual, we have developed a high-performance low-rank matrix estimation algorithm. Based on the group-based sparse representation model, a mathematical explanation of the feasibility of the RRC model has been derived. We have applied the proposed RRC model to image denoising by exploiting the image nonlocal self-similarity (NSS) prior. Experimental results have demonstrated that the proposed RRC model not only achieves noticeable quantitative improvements over many state-of-the-art methods, but also preserves the local structures of the image and suppresses undesirable artifacts.

References

  • (1) Candès E J, Li X, Ma Y, et al. Robust principal component analysis?[J]. Journal of the ACM (JACM), 2011, 58(3): 11.

  • (2) Peng Y, Ganesh A, Wright J, et al. RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images[J]. IEEE transactions on pattern analysis and machine intelligence, 2012, 34(11): 2233-2246.
  • (3) Liu G, Lin Z, Yu Y. Robust Subspace Segmentation by Low-Rank Representation[C]// International Conference on Machine Learning. DBLP, 2010:663-670.
  • (4) Ji H, Liu C, Shen Z, et al. Robust video denoising using low rank matrix completion[C]// Computer Vision and Pattern Recognition. IEEE, 2010: 1791-1798.

  • (5) Fazel M. Matrix rank minimization with applications[D]. PhD thesis, Stanford University, 2002.
  • (6) Eriksson A, Hengel A V D. Efficient computation of robust low-rank matrix approximations in the presence of missing data using the L1 norm[C]// Computer Vision and Pattern Recognition. IEEE, 2010:771-778.
  • (7) Gu S, Zhang L, Zuo W, et al. Weighted Nuclear Norm Minimization with Application to Image Denoising[C]// Computer Vision and Pattern Recognition. IEEE, 2014:2862-2869.
  • (8) Wright J, Ganesh A, Rao S, et al. Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization[C]// Advances in Neural Information Processing Systems. 2009.

  • (9) Candès E J, Recht B. Exact Matrix Completion via Convex Optimization[J]. Foundations of Computational Mathematics, 2009, 9(6):717.
  • (10) Salakhutdinov R, Srebro N. Collaborative Filtering in a Non-Uniform World: Learning with the Weighted Trace Norm[J]. Advances in Neural Information Processing Systems, 2010:2056-2064.
  • (11) Zhao Q, Meng D, Xu Z, et al. L1 -norm low-rank matrix factorization by variational Bayesian method.[J]. IEEE Transactions on Neural Networks & Learning Systems, 2017, 26(4):825-839.
  • (12) Cai J F, Candès E J, Shen Z. A Singular Value Thresholding Algorithm for Matrix Completion[J]. SIAM Journal on Optimization, 2010, 20(4): 1956-1982.
  • (13) Gu S, Xie Q, Meng D, et al. Weighted Nuclear Norm Minimization and Its Applications to Low Level Vision[J]. International Journal of Computer Vision, 2017, 121(2):183-208.
  • (14) Xie Y, Gu S, Liu Y, et al. Weighted Schatten p-Norm Minimization for Image Denoising and Background Subtraction[J]. IEEE Transactions on Image Processing, 2016, 25(10): 4842-4857.
  • (15) Nie F, Huang H, Ding C. Low-rank matrix recovery via efficient Schatten p-norm minimization[C]// Twenty-Sixth AAAI Conference on Artificial Intelligence. AAAI Press, 2012: 655-661.

  • (16) Liu L, Huang W, Chen D R. Exact minimum rank approximation via Schatten p-norm minimization[J]. Journal of Computational & Applied Mathematics, 2014, 267(6): 218-227.
  • (17) Lu C, Zhu C, Xu C, et al. Generalized Singular Value Thresholding[C]//AAAI. 2015: 1805-1811.
  • (18) Hu Y, Zhang D, Ye J, et al. Fast and Accurate Matrix Completion via Truncated Nuclear Norm Regularization[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(9):2117-2130.
  • (19) Lu C, Tang J, Yan S, et al. Generalized Nonconvex Nonsmooth Low-Rank Minimization[C]// IEEE Conference on Computer Vision and Pattern Recognition. 2014: 4130-4137.
  • (20) Buades A, Coll B, Morel J M. A Non-Local Algorithm for Image Denoising[C]// Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. IEEE, 2005:60-65 vol. 2.
  • (21) Dabov K , Foi A , Katkovnik V , et al. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering[J]. IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society, 2007, 16(8):2080.
  • (22) Mairal J, Bach F, Ponce J, et al. Non-local sparse models for image restoration[C]// IEEE, International Conference on Computer Vision. IEEE, 2010:2272-2279.
  • (23) Zoran D, Weiss Y. From learning models of natural image patches to whole image restoration[C]//Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011: 479-486.
  • (24) Xu J, Zhang L, Zuo W, et al. Patch group based nonlocal self-similarity prior learning for image denoising[C]//Proceedings of the IEEE international conference on computer vision. 2015: 244-252.
  • (25) Dong W, Shi G, Ma Y, et al. Image Restoration via Simultaneous Sparse Coding: Where Structured Sparsity Meets Gaussian Scale Mixture[J]. International Journal of Computer Vision, 2015, 114(2-3):217-232.
  • (26) Zhang J, Zhao D, Gao W. Group-based sparse representation for image restoration[J]. IEEE Transactions on Image Processing, 2014, 23(8): 3336-3351.
  • (27) Li Y, Dong W, Shi G, et al. Learning parametric distributions for image super-resolution: Where patch matching meets sparse coding[C]// Proceedings of the IEEE International Conference on Computer Vision. 2015: 450-458.

  • (28) Yue H, Sun X, Yang J, et al. Image denoising by exploring external and internal correlations[J]. IEEE Transactions on Image Processing, 2015, 24(6): 1967-1982.
  • (29) Buades A, Coll B, Morel J M. A review of image denoising algorithms, with a new one[J]. Multiscale Modeling & Simulation, 2005, 4(2): 490-530.
  • (30) Daubechies I, Defrise M, De Mol C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint[J]. Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences, 2004, 57(11): 1413-1457.
  • (31) Mirsky L. A trace inequality of John von Neumann[J]. Monatshefte für mathematik, 1975, 79(4): 303-306.
  • (32) Osher S, Burger M, Goldfarb D, et al. An iterative regularization method for total variation-based image restoration[J]. Multiscale Modeling & Simulation, 2005, 4(2): 460-489.
  • (33) Chang S G, Yu B, Vetterli M. Adaptive wavelet thresholding for image denoising and compression[J]. IEEE transactions on image processing, 2000, 9(9): 1532-1546.
  • (34) Chatterjee P, Milanfar P. Patch-based near-optimal image denoising[J]. IEEE Transactions on Image Processing, 2012, 21(4): 1635.
  • (35) Dong W, Zhang L, Shi G, et al. Nonlocally centralized sparse representation for image restoration[J]. IEEE Transactions on Image Processing, 2013, 22(4): 1620-1630.
  • (36) Knaus C, Zwicker M. Progressive image denoising[J]. IEEE transactions on image processing, 2014, 23(7): 3114-3125.
  • (37) Niknejad M, Rabbani H, Babaie-Zadeh M. Image restoration using Gaussian mixture models with spatially constrained patch clustering[J]. IEEE Transactions on Image Processing, 2015, 24(11): 3624-3636.

  • (38) Luo E, Chan S H, Nguyen T Q. Adaptive image denoising by mixture adaptation[J]. IEEE transactions on image processing, 2016, 25(10): 4489-4503.
  • (39) Zha Z, Liu X, Zhou Z, et al. Image denoising via group sparsity residual constraint[C]//Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017: 1787-1791.
  • (40) Aharon M, Elad M, Bruckstein A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation[J]. IEEE Transactions on signal processing, 2006, 54(11): 4311.
  • (41) Oh T H, Tai Y W, Bazin J C, et al. Partial sum minimization of singular values in robust PCA: Algorithm and applications[J]. IEEE transactions on pattern analysis and machine intelligence, 2016, 38(4): 744-758.
  • (42) Zheng Y, Liu G, Sugimoto S, et al. Practical low-rank matrix approximation under robust l 1-norm[C]//Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012: 1410-1417.
  • (43) Zhang K, Zuo W, Chen Y, et al. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising[J]. IEEE Transactions on Image Processing, 2017, 26(7): 3142-3155.
  • (44) Mao X, Shen C, Yang Y B. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections[C]//Advances in neural information processing systems. 2016: 2802-2810.
  • (45) Tai Y, Yang J, Liu X, et al. Memnet: A persistent memory network for image restoration[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4539-4547.
  • (46) Liu H, Xiong R, Zhang J, et al. Image denoising via adaptive soft-thresholding based on non-local samples[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 484-492.
  • (47) Zhao C, Ma S, Zhang J, et al. Video compressive sensing reconstruction via reweighted residual sparsity[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2017, 27(6): 1182-1195.