Multi-band Weighted l_p Norm Minimization for Color and Multispectral Image Denoising

01/14/2019 ∙ by Yanchi Su, et al. ∙ Jilin University

Low rank matrix approximation (LRMA) has drawn increasing attention in recent years, due to its wide range of applications in computer vision and machine learning. However, LRMA achieved by nuclear norm minimization (NNM) tends to over-shrink the rank components with the same threshold and ignores the differences between rank components. To address this problem, we propose a flexible and precise model named multi-band weighted l_p norm minimization (MBWPNM). The proposed MBWPNM not only gives a more accurate approximation with a Schatten p-norm, but also considers the prior knowledge that different rank components have different importance. We analyze the solution of MBWPNM and prove that, under a certain weight condition, MBWPNM is equivalent to non-convex l_p norm subproblems whose global optima can be solved by a generalized soft-thresholding algorithm. We then adopt the MBWPNM algorithm for color and multispectral image denoising. Extensive experiments on additive white Gaussian noise removal and realistic noise removal demonstrate that the proposed MBWPNM achieves better performance than several state-of-the-art algorithms.




1 Introduction

Image noise is an undesirable by-product of image capture and severely damages the quality of acquired images. Removing noise is a vital step in various computer vision tasks, e.g., recognition (He et al., 2016) and segmentation (Shi & Malik, 2000). Image denoising aims to recover the clean image x from its noisy observation y = x + n, where n is generally assumed to be additive white Gaussian noise (AWGN). Numerous effective methods have been proposed in the past decades, and they can be categorized into filter based methods (Dabov et al., 2008; Buades et al., 2005), sparse coding based methods (Elad & Aharon, 2006; Dong et al., 2013), low-rankness based methods (Gu et al., 2014, 2017), convolutional neural network (CNN) based methods (Zhang et al., 2017; Chen & Pock, 2017; Mao et al., 2016; Zhang et al., 2018), etc. In this paper, we focus on the method of LRMA with regularization.

The LRMA problem can be solved by low rank matrix factorization or by rank minimization. The latter restores the data matrix by attaching a rank constraint to the estimated matrix, formulated by the following objective function:

X̂ = argmin_X ||Y − X||_F^2  s.t.  rank(X) ≤ r,
where ||·||_F denotes the Frobenius norm and r is the given rank. However, rank minimization is NP-hard and tough to solve directly. As the tightest convex relaxation of the rank function (Fazel, 2002), nuclear norm minimization (NNM) has generally served as an ideal approximation for the regularization term. The nuclear norm is defined as the sum of the singular values, i.e., ||X||_* = Σ_i σ_i(X), where σ_i(X) represents the i-th singular value of a given matrix X. Therefore, the NNM model aims to find a low rank matrix X from the degraded observation Y by minimizing the following energy function:

X̂ = argmin_X ||Y − X||_F^2 + λ||X||_*,
where λ is a parameter that balances the data fidelity term and the regularization. A soft-thresholding operation (Cai et al., 2010) solves NNM with theoretical guarantees; Candès and Recht (Candès & Recht, 2009) proved that a low rank matrix can be efficiently recovered by solving NNM problems. However, most NNM based models treat all singular values equally, which ignores the prior knowledge available in many image inverse problems, such as the major edge and texture information expressed by the larger singular values. To address this problem, Zhang et al. (2012) introduced a Truncated Nuclear Norm (TNN) regularization to improve the ability to preserve textures, while Sun et al. (2013) proposed a Capped Nuclear Norm (CNN) regularizer. A similar regularizer, the Partial Sum Nuclear Norm (PSNN), was developed in (Oh et al., 2016); it minimizes only the smallest singular values beyond the rank of the matrix. Meanwhile, to improve the flexibility of the nuclear norm, Gu et al. (2014) generalized NNM to the weighted NNM (WNNM), which attaches different weights to the singular values, defined as:



||X||_{w,*} = Σ_i w_i σ_i(X),

where w = [w_1, …, w_n] is the weight vector of the singular values. Indeed, TNN, CNN and PSNN can be considered as special cases of WNNM with the weight vector fixed to 0s and 1s. Moreover, works built on WNNM show outstanding experimental performance in various image processing applications such as image denoising (Gu et al., 2014), background subtraction (Gu et al., 2017), image deblurring (Ren et al., 2016) and inpainting (Yair & Michaeli, 2018). In particular, a multi-channel optimization model for real color image denoising under the WNNM framework (MCWNNM) (Xu et al., 2017) was proposed, which shows that by weighting the R, G and B channels according to their noise levels, both noise characteristics and channel correlation can be effectively exploited.
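The uniform singular value shrinkage at the heart of NNM (the soft-thresholding operation of Cai et al., 2010) can be sketched in a few lines; the following is a minimal numpy illustration of our own, not code from any cited work:

```python
import numpy as np

def svt(Y, lam):
    """Singular value soft-thresholding: the closed-form solution of
    min_X 0.5*||Y - X||_F^2 + lam*||X||_* (Cai et al., 2010).
    Every singular value is shrunk by the same threshold lam."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# A rank-1 matrix plus small noise: the noisy tail of the spectrum
# falls below the threshold and is zeroed out.
rng = np.random.default_rng(0)
Y = np.outer(rng.standard_normal(8), rng.standard_normal(6))
Y += 0.01 * rng.standard_normal((8, 6))
X = svt(Y, lam=0.5)
print(np.linalg.matrix_rank(X))  # the denoised estimate collapses to rank 1
```

Note how every singular value is shrunk by the same amount, which is precisely the over-shrinkage the weighted models below try to avoid.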

Additionally, the standard NNM tends to over-shrink the singular values with the same threshold, resulting in a solution that deviates far from that of the original rank minimization problem. WNNM can likewise return a suboptimal solution under certain conditions, as discovered in (Lu et al., 2015), where Lu et al. provided a simple proof and a counterexample. Therefore, the Schatten p-norm, defined as the l_p norm of the singular values with 0 < p ≤ 1, has been proposed (Nie et al., 2012; Xie et al., 2016b) to impose low rank regularization, achieving a more accurate recovery of the data matrix while requiring only a weaker restricted isometry property (Liu et al., 2014). As shown in the experimental section, our results also demonstrate that the Schatten p-norm based model performs significantly better than WNNM.

Inspired by the Schatten p-norm and MCWNNM (Xu et al., 2017), in this paper we propose a new model for color and multispectral image denoising via non-convex multi-band weighted l_p norm minimization (MBWPNM). Introducing the Schatten p-norm into the weighted nuclear norm makes the problem much harder, since, like MCWNNM, it does not have an analytical solution. However, we find that the weighted l_p norm regularization term of the low rank model is equivalent to a set of non-convex l_p norm subproblems. We further apply a generalized soft-thresholding (GST) algorithm (Zuo et al., 2013) to solve the low rank model with the weighted l_p norm. Rigorous mathematical proofs of the equivalence and a complexity analysis are presented in later sections. The contributions of our work are summarized as follows:

  • We propose a new model, MBWPNM, and present an efficient optimization algorithm to solve it.

  • We adopt the MBWPNM model for color and multispectral image denoising, and validate the robustness of our model on synthetic and real images. The experimental results indicate that MBWPNM not only outperforms many state-of-the-art denoising algorithms both quantitatively and qualitatively, but also runs at a competitive speed compared with MCWNNM.

2 Related Work

As a generalization of the weighted nuclear norm minimization (WNNM) model, the multi-channel weighted nuclear norm minimization (MCWNNM) model is defined as:

X̂ = argmin_X ||W(Y − X)||_F^2 + ||X||_{w,*}

with W = diag(σ_r^{−1}I, σ_g^{−1}I, σ_b^{−1}I), where σ_c is the noise standard deviation in each channel c ∈ {r, g, b} and I is the identity matrix. By introducing an augmented variable Z, the MCWNNM model can be reformulated as a linear equality-constrained problem with two variables, and its augmented Lagrangian function is:


L(X, Z, A, ρ) = ||W(Y − X)||_F^2 + ||Z||_{w,*} + ⟨A, X − Z⟩ + (ρ/2)||X − Z||_F^2, where A is the augmented Lagrangian multiplier and ρ is the penalty parameter. According to (Xu et al., 2017), problem (5) can be solved within the alternating direction method of multipliers (ADMM) (Boyd et al., 2011) framework.

The ADMM is an algorithm that solves convex optimization problems by breaking them into smaller pieces, each of which is easier to handle. There is widespread interest in applying ADMM to norm-regularized problems across statistics, machine learning and signal processing. However, on the one hand, because of the multiple iterations, it is often time-consuming to reach a relatively accurate solution with ADMM; on the other hand, parameters such as the penalty ρ play an important role in convergence. Theoretically, MBWPNM can also be solved by ADMM, but its cost would be higher than that of MCWNNM due to the added complexity. In this paper, we present a more effective algorithm, which achieves better results in less time.
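As a generic illustration of this splitting idea (a toy l_1 problem with names of our own choosing, not the MCWNNM solver), ADMM alternates a quadratic x-update, a proximal z-update and a dual ascent step:

```python
import numpy as np

def admm_l1(y, lam, rho=1.0, iters=100):
    """ADMM sketch for min 0.5*||x - y||^2 + lam*||z||_1  s.t.  x = z.
    x-update: exact quadratic solve; z-update: soft-thresholding (the
    proximal operator of the l1 norm); u: scaled dual variable."""
    x = z = u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z
    return z

y = np.array([3.0, 0.2, -1.5])
print(admm_l1(y, lam=1.0))  # converges to soft(y, 1) = [2.0, 0.0, -0.5]
```

The pattern mirrors the MCWNNM splitting above: one subproblem per variable plus a multiplier update, with the penalty rho governing the convergence speed.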

3 Multi-band Weighted l_p Norm Minimization

3.1 Problem Formulation

Given a matrix Y, the proposed optimization model under the MCWNNM framework, which aims to find a matrix X as close to Y as possible, is defined as:

X̂ = argmin_X ||W(Y − X)||_F^2 + ||X||_{w,S_p}^p,

involving two terms: an F-norm data fidelity term and a weighted Schatten p-norm regularization. The weighted Schatten p-norm of a matrix X with power p is

||X||_{w,S_p}^p = Σ_i w_i σ_i^p(X),

where w is a non-negative weight vector, σ_i(X) is the i-th singular value of X, and 0 < p ≤ 1. Note that MCWNNM is a special case of MBWPNM when the power p is set to 1. In the next subsection, we give a detailed discussion of the optimization of MBWPNM.
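For concreteness, the weighted Schatten p-norm (raised to the power p) can be evaluated in a few lines of numpy; the helper name below is an illustrative choice of ours:

```python
import numpy as np

def weighted_schatten_p(X, w, p):
    """sum_i w_i * sigma_i(X)^p, with the singular values sigma_i
    in non-increasing order (as returned by numpy's SVD)."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(w * s ** p))

X = np.diag([3.0, 2.0, 1.0])          # singular values: 3, 2, 1
w = np.array([0.0, 1.0, 2.0])         # penalize small components more
print(weighted_schatten_p(X, w, p=1.0))           # 0*3 + 1*2 + 2*1 = 4.0
print(weighted_schatten_p(X, np.ones(3), p=1.0))  # nuclear norm: 6.0
```

With p = 1 and all weights equal, this reduces to the nuclear norm, consistent with MCWNNM being the p = 1 special case noted above.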

3.2 Optimization of MBWPNM

We first give the following theorem before analyzing and discussing the optimization of MBWPNM.

Theorem 1.

(Horn et al., 1990) Let the matrices A and B be given. Then the following inequalities hold for their decreasingly ordered singular values:


Introducing the Schatten p-norm makes problem (6) more complicated to solve directly than the original MCWNNM model. Instead, we consider the following objective function, which is similar to problem (6):

Theorem 2.

Problem (6) is equivalent to problem (9), in the sense that they share the same solution.


The proof can be found in the supplementary material. ∎

Thus, the original problem (6) has been converted into problem (9), which can be solved more easily. We then have Theorems 3 and 4.

Theorem 3.

(Horn & Johnson, 1991) Let the matrices A and B be given, and let σ_i(A) and σ_i(B) denote the non-increasingly ordered singular values of A and B, respectively. Then

Theorem 4.

Let the SVD of Y be Y = UΣV^T with Σ = diag(σ_1, …, σ_n). Then an optimal solution to (9) is X̂ = UΔV^T with Δ = diag(δ_1, …, δ_n), where δ = [δ_1, …, δ_n] is given by solving the problem below:

min_{δ_1 ≥ … ≥ δ_n ≥ 0}  Σ_i [(σ_i − δ_i)^2 + w_i δ_i^p].   (11)
The proof can be found in the supplementary material. ∎

Obviously, problem (11) becomes easier to handle if the additional order constraint (i.e., δ_1 ≥ δ_2 ≥ … ≥ δ_n) can be dropped. Therefore, we consider the more general setting in which problem (11) is decomposed into n independent subproblems:

min_{δ_i ≥ 0}  (σ_i − δ_i)^2 + w_i δ_i^p,   i = 1, …, n,   (12)
which has been explored in (Lu et al., 2015; Zuo et al., 2013). To obtain the solution of each subproblem (12), the generalized soft-thresholding (GST) algorithm (Zuo et al., 2013) is adopted. Specifically, given w_i and p, there exists a specific threshold

τ_p(w_i) = (w_i(1 − p))^{1/(2−p)} + (w_i p/2)(w_i(1 − p))^{(p−1)/(2−p)}.

If σ_i ≤ τ_p(w_i), then δ_i = 0 is the global minimum of (12); otherwise, for any σ_i > τ_p(w_i), subproblem (12) has one unique minimum S_p(σ_i; w_i), which can be obtained by solving the following equation:

S_p(σ_i; w_i) − σ_i + (w_i p/2) S_p(σ_i; w_i)^{p−1} = 0.
The complete description of the GST algorithm is shown in Algorithm 1; please refer to (Zuo et al., 2013) for more details about the GST algorithm.

1:  Input: σ, w, p, iteration number J
2:  Output: S_p(σ; w)
3:  τ_p(w) = (w(1 − p))^{1/(2−p)} + (wp/2)(w(1 − p))^{(p−1)/(2−p)}
4:  if |σ| ≤ τ_p(w) then
5:     S_p(σ; w) = 0
6:  else
7:     k = 0, δ^(0) = |σ|
8:     for k = 0 to J do
9:        δ^(k+1) = |σ| − (wp/2)(δ^(k))^{p−1}
10:       k = k + 1
11:    end for
12:    S_p(σ; w) = sgn(σ)δ^(k)
13: end if
14: Return S_p(σ; w)
Algorithm 1 Generalized Soft-Thresholding (GST)
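The GST iteration translates directly into a few lines of Python. The sketch below is our own minimal re-implementation following (Zuo et al., 2013), not the released code; it solves the scalar subproblem min_x (σ − x)^2 + w|x|^p:

```python
import numpy as np

def gst(sigma, w, p, J=10):
    """Generalized soft-thresholding for min_x (sigma - x)^2 + w*|x|^p,
    0 < p <= 1: threshold test, then a fixed-point iteration."""
    tau = (w * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + (w * p / 2.0) * (w * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if abs(sigma) <= tau:
        return 0.0                 # below the threshold: 0 is the global minimum
    x = abs(sigma)                 # initialize the fixed-point iteration at |sigma|
    for _ in range(J):
        x = abs(sigma) - (w * p / 2.0) * x ** (p - 1.0)
    return float(np.sign(sigma)) * x

# p = 1 reduces to ordinary soft-thresholding with threshold w/2:
print(gst(3.0, w=1.0, p=1.0))   # 2.5
print(gst(0.4, w=1.0, p=1.0))   # 0.0
```

A handful of fixed-point iterations suffices in practice, since the iteration map contracts quickly away from the threshold.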

3.3 Optimal Solution under Non-Descending Weights

Generally, the larger singular values contain the major edge and texture information, which implies that the small singular values should be penalized more than the large ones. Returning to problem (11), it is meaningful to assign non-descending weights to the non-ascending singular values for most practical applications in low level vision. Based on this consideration, we have Theorem 5.

Theorem 5.

The optimal solutions of all the independent subproblems in (12) satisfy the following inequality:

δ_1 ≥ δ_2 ≥ … ≥ δ_n.

Proof. For any pair i < j, we have σ_i ≥ σ_j and w_i ≤ w_j, and the optimality of δ_i and δ_j gives the following inequalities:

(σ_i − δ_i)^2 + w_i δ_i^p ≤ (σ_i − δ_j)^2 + w_i δ_j^p,
(σ_j − δ_j)^2 + w_j δ_j^p ≤ (σ_j − δ_i)^2 + w_j δ_i^p,

i.e., the objective of each subproblem cannot decrease after substituting the other solution. Summing them together and simplifying, this reduces to

2(σ_i − σ_j)(δ_i − δ_j) + (w_j − w_i)(δ_i^p − δ_j^p) ≥ 0,

which forces δ_i ≥ δ_j whenever σ_i > σ_j or w_i < w_j; when σ_i = σ_j and w_i = w_j the two subproblems coincide and their solutions can be taken equal. ∎

According to Theorem 5, solving problem (11) is equivalent to solving all the independent subproblems in (12) when the weights are non-descending. So far, the original problem (6) has undergone a series of transformations, (6) → (9) → (11) → (12), and can now be solved effectively. The proposed algorithm for MBWPNM is summarized in Algorithm 2.

1:  Input: matrix Y, weight vector w in non-descending order, power p
2:  Output: Matrix X̂
3:  Compute the SVD Y = UΣV^T, Σ = diag(σ_1, …, σ_n)
4:  for i = 1 to n do
5:     δ_i = GST(σ_i, w_i, p) by Algorithm 1
6:  end for
7:  X̂ = U diag(δ_1, …, δ_n) V^T
8:  Return X̂
Algorithm 2 MBWPNM via GST
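Algorithm 2 amounts to one SVD followed by a per-singular-value GST call. A self-contained numpy sketch follows (the scalar GST solver from Algorithm 1 is repeated so the snippet runs on its own; it is an illustration, not the authors' code):

```python
import numpy as np

def gst(sigma, w, p, J=10):
    """Scalar GST solver for min_x (sigma - x)^2 + w*|x|^p (Zuo et al., 2013)."""
    tau = (w * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + (w * p / 2.0) * (w * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if abs(sigma) <= tau:
        return 0.0
    x = abs(sigma)
    for _ in range(J):
        x = abs(sigma) - (w * p / 2.0) * x ** (p - 1.0)
    return float(np.sign(sigma)) * x

def mbwpnm_shrink(Y, w, p):
    """Algorithm 2 sketch: SVD of Y, then shrink each singular value with
    its own weight via GST (w is assumed to be in non-descending order)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    delta = np.array([gst(si, wi, p) for si, wi in zip(s, w)])
    return U @ np.diag(delta) @ Vt

Y = np.diag([5.0, 3.0, 0.5])
X = mbwpnm_shrink(Y, w=np.array([0.5, 1.0, 2.0]), p=1.0)
# With p = 1 the spectrum 5, 3, 0.5 shrinks to 4.75, 2.5, 0.0:
print(np.linalg.svd(X, compute_uv=False))
```

Note that the shrunken spectrum stays non-increasing, as Theorem 5 guarantees for non-descending weights.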

4 Color and Multispectral Image Denoising

4.1 The Denoising Algorithm

Similar to the MCWNNM model, MBWPNM is applied for noise removal to a matrix of nonlocal similar color image patches, concatenated from the R, G and B channels. Specifically, given a degraded color image, each local patch is stretched to a patch vector that concatenates the patches from the R, G and B channels. For each local patch, we search for the most similar patches across the image (in practice, within a large enough local window) by the block matching method proposed in (Dabov et al., 2008). Then, by stacking those nonlocal similar patches into a matrix column by column, we obtain a noisy patch matrix that is the sum of the corresponding clean and noise patch matrices. Applying the MBWPNM algorithm to each such matrix, the whole image can be estimated by aggregating all the denoised patches.
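The grouping step can be sketched as follows; this is a simplified illustration with hypothetical parameter names (patch size `p`, group size `M`, search `window`), not the authors' implementation, which uses the block matching of (Dabov et al., 2008):

```python
import numpy as np

def group_similar_patches(img, top, p=4, M=8, window=10):
    """Stack the M patches most similar to the reference patch at `top`
    (Euclidean distance between vectorized p x p x C patches) into a
    matrix whose columns are patch vectors -- the input to MBWPNM."""
    H, W, C = img.shape
    r0, c0 = top
    ref = img[r0:r0 + p, c0:c0 + p, :].ravel()
    candidates = []
    for r in range(max(0, r0 - window), min(H - p, r0 + window) + 1):
        for c in range(max(0, c0 - window), min(W - p, c0 + window) + 1):
            v = img[r:r + p, c:c + p, :].ravel()
            candidates.append((np.sum((v - ref) ** 2), v))
    candidates.sort(key=lambda t: t[0])        # closest patches first
    return np.stack([v for _, v in candidates[:M]], axis=1)

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))                  # toy 3-channel image
Yj = group_similar_patches(img, top=(10, 10))
print(Yj.shape)  # (48, 8): 4*4*3 pixels per patch vector, 8 similar patches
```

Each column of the resulting matrix is one (vectorized) similar patch; the reference patch itself appears first since its distance is zero.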

Compared with an RGB color image, a multispectral image (MSI) is simply an image with more channels, so it is natural to apply the MBWPNM model to MSI denoising. For a given 3-D patch cube, stacked from the patches at the same position of the MSI over all bands, there are also many similar patch cubes. The low rank property of the image matrix is exploited by rearranging each of those nonlocal similar patch cubes into a 1-D vector and stacking those vectors into a matrix column by column; Figure 1 shows the detailed process. (Chang et al., 2017) pointed out that the spectral and non-local similarity information, two mutually complementary priors, can be jointly utilized in this way to effectively improve denoising performance. It is also reasonable to use a weight matrix W = diag(σ_1^{−1}I, …, σ_B^{−1}I) to adjust the contributions of the multiple bands (channels) based on their different noise levels in real scenes, where σ_b is the noise level of band b, I is the identity matrix and B is the number of bands.

In the application of color image and MSI denoising, theoretically, the larger a singular value is, the less it should be shrunk. Therefore, the weight assigned to a singular value is inversely proportional to it, and we let

w_i = c√M / (σ_i(X) + ε),

where c is a constant, M is the number of similar patches and ε is a small constant that avoids division by zero. We adopt the iterative regularization scheme to restore the clean image, which is then reconstructed by aggregating all the denoised patches together. The MBWPNM based color image and MSI denoising algorithm is summarized in Algorithm 3.
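The reweighting and feedback steps can be sketched as follows (the constant `c`, the guard `eps` and the feedback factor `alpha` are illustrative values of our own, not the paper's tuned settings):

```python
import numpy as np

def mbwpnm_weights(sigma_est, M, c=2.8, eps=1e-6):
    """Weights inversely proportional to the (estimated) singular values:
    w_i = c*sqrt(M)/(sigma_i + eps).  Large singular values, which carry
    edges and textures, therefore receive small weights and little shrinkage."""
    return c * np.sqrt(M) / (np.asarray(sigma_est, dtype=float) + eps)

def iterative_regularization(y, x_hat, alpha=0.1):
    """Feed a fraction of the method noise back before the next pass."""
    return x_hat + alpha * (y - x_hat)

w = mbwpnm_weights([10.0, 1.0, 0.1], M=70)
print(w)  # non-descending weights for non-ascending singular values
```

Because the weights come out non-descending for a non-ascending spectrum, the global-optimality condition of Theorem 5 is satisfied automatically.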

Figure 1: The flowchart of the low rank property representation.
1:  Input: Noisy image y, per-band noise levels σ_1, …, σ_B
2:  Output: Denoised image x̂
3:  Initialization: x̂^(0) = y, y^(0) = y
4:  for k = 1 to K do
5:     Iterative regularization: y^(k) = x̂^(k−1) + α(y − x̂^(k−1))
6:     for each local patch in y^(k) do
7:        Find nonlocal similar patches to form a matrix Y_j
8:        Apply the MBWPNM model to Y_j by Algorithm 2
9:        Get the estimated X_j
10:     end for
11:     Aggregate the X_j to form the image x̂^(k)
12:  end for
13:  Return Denoised image x̂^(K)
Algorithm 3 Image Denoising by MBWPNM

4.2 Complexity Analysis

For a matrix of size bp^2 × M, where b is the number of bands (channels), p denotes the width and height of a local patch in Algorithm 3, and M is the number of similar patches, the cost of the SVD in each iteration is O(bp^2 · M · min(bp^2, M)). The GST step in Algorithm 2 costs O(J · min(bp^2, M)), where J is the number of iterations in the GST algorithm. Therefore, the overall cost is O(K · N · bp^2 · M · min(bp^2, M)), where K is the number of iterations in Algorithm 3 and N denotes the total number of patches. In particular, setting b = 3 gives the flop count for color image denoising.

5 Experimental Results

5.1 Experimental Setting

First, we compare the MBWPNM method with several state-of-the-art image denoising methods under both simulated and real noisy conditions, including CBM3D (Dabov et al., 2007a), NCSR (Dong et al., 2013), EPLL (Zoran & Weiss, 2011), Guided (Xu et al., 2018b), DnCNN (Zhang et al., 2017), FFDNet (Zhang et al., 2018) and MCWNNM (Xu et al., 2017). We then also compare it with 9 MSI denoising algorithms: 1-D sparse representation-based methods (SDS (Lam et al., 2012), ANLM (Manjón et al., 2010)), 2-D low-rank matrix recovery methods (LRMR (Zhang et al., 2014), NMF (Ye et al., 2015)), and state-of-the-art tensor methods (BM3D (Dabov et al., 2007b), LRTA (Renard et al., 2008), BM4D (Maggioni et al., 2013), ISTreg (Xie et al., 2016a), LLRT (Chang et al., 2017)).

In the simulated experiments, the noise level of each band (channel) is known. For methods that require a single input parameter, such as CBM3D and LLRT, we set the noise level as the root mean square (RMS) of the per-band levels:

σ = √((1/B) Σ_{b=1}^{B} σ_b^2).

In the real cases, we use the method of (Liu et al., 2008) to estimate the noise level of each channel.
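Assuming the usual definition, the RMS reduction of the per-band noise levels to a single value is a one-liner:

```python
import numpy as np

def rms_noise_level(sigmas):
    """Collapse per-band noise levels into the single parameter expected
    by methods such as CBM3D and LLRT: sqrt of the mean squared level."""
    sigmas = np.asarray(sigmas, dtype=float)
    return float(np.sqrt(np.mean(sigmas ** 2)))

print(rms_noise_level([5, 30, 15]))  # one of the simulated settings in Table 1
```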

5.2 Color Image Denoising

For a fair comparison, we keep the parameter settings of both MCWNNM and MBWPNM to achieve their best performance. More specifically, we set the patch size as , the number of non-local similar patches as , the window size for searching similar patches as , and the number of iterations is set as in simulated experiments.

Figure 2: The PSNR results of varying the power p under different noise levels on 18 color images from the McMaster dataset (Zhang et al., 2011)

5.2.1 Simulated Color Images Denoising

Intuitively, different choices of the power p have different influences on the shrinkage of the singular values, and choosing a suitable power for each noise level is somewhat complicated. We therefore add AWGN to each of the R, G, B channels of the 18 color sub-images from the McMaster dataset (Zhang et al., 2011), and test different powers under different noise levels. The PSNR results are shown in Figure 2, with the best PSNR result for each group highlighted in red. In this test, we vary the power p from 0.05 to 1 with an interval of 0.05, and select six noise levels: (5,15,25), (10,20,30), (15,25,35), (20,30,40), (25,35,45), (30,40,50). In the first two subfigures of Figure 2, we observe that the best value of p is 1 when the noise level is low, and it drops to 0.95 in the subsequent two subfigures. As the noise level increases, the singular values are polluted much more severely; consequently, as shown in the last two subfigures of Figure 2, smaller values of p (0.8 and 0.55 for the last two settings, respectively) are more suitable for higher noise levels. In summary, the optimal value of p is, by and large, inversely proportional to the noise level. We follow this principle in the experiments below.

(5,30,15) 29.46 30.77 31.68 33.35 33.29 32.99 33.34
(25,5,30) 29.38 30.16 30.82 32.56 32.48 32.15 33.02
(30,10,50) 29.06 27.95 29.05 30.65 30.44 29.81 30.71
(40,20,30) 29.14 28.29 28.95 31.06 30.98 29.27 29.43
Table 1: PSNR results of different methods under different simulated noise levels (σ_r, σ_g, σ_b).

38.25 38.02 37.00 40.82 37.05 41.20 41.21
Canon 5D III ISO=3200 35.85 34.76 33.88 37.19 33.91 37.25 37.10
34.12 34.91 33.83 36.92 33.86 36.48 37.14
33.10 33.51 33.28 35.32 33.31 35.54 35.49
Nikon D600 ISO=3200 35.57 34.13 33.77 36.62 33.81 37.03 37.06
40.77 35.44 34.93 38.68 34.98 39.56 39.56
36.83 35.98 35.47 38.88 35.50 39.26 39.19
Nikon D800 ISO=1600 40.19 36.39 35.71 40.66 35.75 41.45 41.44
37.64 35.34 34.81 39.20 34.83 39.54 39.52
39.72 33.63 33.26 37.92 33.30 38.94 39.10
Nikon D800 ISO=3200 36.74 33.13 32.89 36.62 32.94 37.40 37.10
40.96 33.43 32.91 37.64 32.94 39.42 39.47
34.63 30.09 29.63 33.01 29.65 34.85 34.92
Nikon D800 ISO=6400 32.95 30.35 29.97 32.93 30.00 33.97 33.96
33.61 30.12 29.87 32.96 29.88 33.97 34.09
Average 36.73 33.95 33.41 37.02 33.45 37.72 37.76
Time 6 1434 424 48 4 192 41
Table 2: PSNR (dB) and ACT (s) of different methods on 15 cropped real noisy images(Nam et al., 2016).
Figure 3: Denoising results on image kodim08 from the Kodak PhotoCD dataset by different methods. (a) Ground Truth. (b) Noisy image: 17.46dB. (c) CBM3D, PSNR=26.83dB. (d) NCSR, PSNR=25.07dB. (e) EPLL, PSNR=26.02dB. (f) DnCNN, PSNR=28.32dB. (g) FFDNet, PSNR=28.18dB. (h) MCWNNM, PSNR=26.98dB. (i) MBWPNM, PSNR=28.83dB. The images are best viewed by zooming in on screen.

We evaluate the competing methods on 24 color images from the Kodak PhotoCD dataset. Zero-mean additive white Gaussian noise at four noise levels is added to the test images to generate the degraded observations. The averaged PSNR results are shown in Table 1, with the best result highlighted in bold. Following the discussion of the power p, we choose a suitable p for each test in Table 1. On average, we can see that MBWPNM achieves the highest PSNR values on 2 of the 4 noise levels, and it improves over MCWNNM at all four noise levels.

In Figure 3, we compare the visual quality of the images denoised by the competing algorithms. One can see that CBM3D, DnCNN and FFDNet over-smooth the roof area of image kodim08, while NCSR, EPLL and MCWNNM leave residual noise or generate more color artifacts. MBWPNM recovers the image with better visual quality than the other methods. More visual comparisons can be found in the supplementary material.

5.2.2 Real Color Images Denoising

Different from AWGN, real-world noise is signal dependent and cannot be modeled by an explicit distribution. To demonstrate the robustness of our method, we compare MBWPNM with the competing methods (Dong et al., 2013; Zhang et al., 2018; Xu et al., 2017; Dabov et al., 2007a; Zoran & Weiss, 2011; Xu et al., 2018b) on three representative datasets.

The first dataset is provided in (Nam et al., 2016) and includes 11 indoor scenes with 500 JPEG images per scene. The ground truth noise-free images are generated by computing the mean image of each scene; because the original images are of very high resolution, the authors of (Nam et al., 2016) cropped 15 smaller images for the experiments. These images are shown in the supplementary material. Quantitative comparisons, including PSNR results and averaged computational time (ACT), are listed in Table 2, with the best results highlighted in bold. MBWPNM achieves the highest averaged PSNR among all competing methods, improving over MCWNNM and outperforming the benchmark CBM3D on average. One can see that FFDNet, based on convolutional neural nets (CNNs), no longer performs as well as in the simulated experiments; this point is confirmed in the following tests. More visual comparisons can be found in the supplementary material.

The other two datasets are provided in (Xu et al., 2018a) and (Abdelhamed et al., 2018), both of which are much more comprehensive than the first one. A detailed description of these datasets is given in the supplementary file. The PSNR and ACT results of the competing algorithms are reported in Table 3. We can see again that MBWPNM achieves much better performance than the other competing methods. Because of limited space, more visual comparisons are provided in the supplementary file.

Comparison on speed. We compare the average computational time (in seconds) of the different methods, as shown in Tables 2 and 3. All experiments are implemented in Matlab on a PC with a 3.2GHz CPU and 8GB RAM, and the fastest result is highlighted in bold. FFDNet is the fastest on all three datasets, needing about 4 seconds, while MBWPNM generally costs about one fifth of the time of MCWNNM. Note that CBM3D and FFDNet are implemented with compiled C++ mex-functions, while NCSR, EPLL, Guided, MCWNNM and MBWPNM are implemented purely in Matlab.

Dataset 2 PSNR 38.11 36.49 35.97 38.37 35.98 38.57 38.59
Time 6 1174 427 52 4 171 38
Dataset 3 PSNR 37.20 30.21 27.48 32.55 27.69 37.82 37.83
Time 8 1284 426 48 4 527 99
Table 3: PSNR (dB) and ACT (s) of different methods on the other two real noisy datasets.

5.3 MSI Denoising

Different from color image denoising, in order to give an overall evaluation of both the spatial and spectral quality, four quantitative picture quality indices (PQIs) are employed: PSNR, SSIM (Wang et al., 2004), ERGAS (Wald, 2002) and SAM (Yuhas et al., 1993). The larger the PSNR and SSIM, and the smaller the ERGAS and SAM, the better the recovered MSI.

The Columbia Multispectral Database (CAVE) (Yasuma et al., 2010) is utilized in our simulated experiments. The noisy MSIs are generated by adding AWGN with different variances to the individual bands. Specifically, the noise variances across the bands are set as arithmetic sequences starting from one with common differences of 1, 2, 3 and 4, and we choose a suitable power p for each test. For each noise setting, all four PQIs are reported in the supplementary material. On averaged PSNR, MBWPNM achieves consistent improvements over LLRT and the highest PSNR values among all competing methods. Overall, in the four comparative tests, MBWPNM achieves the best performance on 11 of the 16 quantitative assessments.

6 Conclusion

In this paper, we proposed a multi-band weighted l_p norm minimization (MBWPNM) model for color image and MSI denoising, which preserves the power of MCWNNM while flexibly giving different rank components different treatments in practical applications. To solve the MBWPNM model, we deduced an equivalent form that can be efficiently solved via a generalized soft-thresholding (GST) algorithm, and we proved that, when the weights are in non-descending order, the solution obtained by the GST algorithm remains globally optimal. We then applied the proposed MBWPNM algorithm to color image and MSI denoising. The experimental results on synthetic and real datasets demonstrate that the MBWPNM model achieves significant performance gains over several state-of-the-art methods, such as CBM3D and BM4D. In the future, we will extend MBWPNM to other applications in computer vision.


  • Abdelhamed et al. (2018) Abdelhamed, A., Lin, S., and Brown, M. S. A high-quality denoising dataset for smartphone cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1692–1700, 2018.
  • Boyd et al. (2011) Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J., et al. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine learning, 3(1):1–122, 2011.
  • Buades et al. (2005) Buades, A., Coll, B., and Morel, J.-M. A non-local algorithm for image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pp. 60–65. IEEE, 2005.
  • Cai et al. (2010) Cai, J.-F., Candès, E. J., and Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
  • Candès & Recht (2009) Candès, E. J. and Recht, B. Exact matrix completion via convex optimization. Foundations of Computational mathematics, 9(6):717, 2009.
  • Chang et al. (2017) Chang, Y., Yan, L., and Zhong, S. Hyper-laplacian regularized unidirectional low-rank tensor recovery for multispectral image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4260–4268, 2017.
  • Chen & Pock (2017) Chen, Y. and Pock, T. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1256–1272, 2017.
  • Dabov et al. (2007a) Dabov, K., Foi, A., Katkovnik, V., and Egiazarian, K. Color image denoising via sparse 3d collaborative filtering with grouping constraint in luminance-chrominance space. In Proceedings of the IEEE International Conference on Image Processing., volume 1, pp. I–313. IEEE, 2007a.
  • Dabov et al. (2007b) Dabov, K., Foi, A., Katkovnik, V., and Egiazarian, K. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080–2095, 2007b.
  • Dabov et al. (2008) Dabov, K., Foi, A., Katkovnik, V., and Egiazarian, K. Image restoration by sparse 3d transform-domain collaborative filtering. In Image Processing: Algorithms and Systems VI, volume 6812, pp. 681207. International Society for Optics and Photonics, 2008.
  • Dong et al. (2013) Dong, W., Zhang, L., Shi, G., and Li, X. Nonlocally centralized sparse representation for image restoration. IEEE Transactions on Image Processing, 22(4):1620–1630, 2013.
  • Elad & Aharon (2006) Elad, M. and Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736–3745, 2006.
  • Fazel (2002) Fazel, M. Matrix rank minimization with applications. PhD thesis, PhD thesis, Stanford University, 2002.
  • Gu et al. (2014) Gu, S., Zhang, L., Zuo, W., and Feng, X. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2862–2869, 2014.
  • Gu et al. (2017) Gu, S., Xie, Q., Meng, D., Zuo, W., Feng, X., and Zhang, L. Weighted nuclear norm minimization and its applications to low level vision. International Journal of Computer Vision, 121(2):183–208, 2017.
  • He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
  • Horn & Johnson (1991) Horn, R. A. and Johnson, C. R. Topics in matrix analysis. Cambridge University Press, New York, 1991.
  • Horn et al. (1990) Horn, R. A., Horn, R. A., and Johnson, C. R. Matrix analysis. Cambridge university press, 1990.
  • Lam et al. (2012) Lam, A., Sato, I., and Sato, Y. Denoising hyperspectral images using spectral domain statistics. In Proceedings of the 21st International Conference on Pattern Recognition, pp. 477–480. IEEE, 2012.
  • Liu et al. (2008) Liu, C., Szeliski, R., Kang, S. B., Zitnick, C. L., and Freeman, W. T. Automatic estimation and removal of noise from a single image. IEEE Transactions on Pattern Analysis & Machine Intelligence, 30(2):299–314, 2008.
  • Liu et al. (2014) Liu, L., Huang, W., and Chen, D.-R. Exact minimum rank approximation via schatten p-norm minimization. Journal of Computational and Applied Mathematics, 267:218–227, 2014.
  • Lu et al. (2015) Lu, C., Zhu, C., Xu, C., Yan, S., and Lin, Z. Generalized singular value thresholding. In AAAI Conference on Artificial Intelligence, pp. 1805–1811, 2015.
  • Maggioni et al. (2013) Maggioni, M., Katkovnik, V., Egiazarian, K., and Foi, A. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Transactions on Image Processing, 22(1):119–133, 2013.
  • Manjón et al. (2010) Manjón, J. V., Coupé, P., Martí-Bonmatí, L., Collins, D. L., and Robles, M. Adaptive non-local means denoising of mr images with spatially varying noise levels. Journal of Magnetic Resonance Imaging, 31(1):192–203, 2010.
  • Mao et al. (2016) Mao, X., Shen, C., and Yang, Y.-B. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In Advances in Neural Information Processing Systems, pp. 2802–2810, 2016.
  • Nam et al. (2016) Nam, S., Hwang, Y., Matsushita, Y., and Joo Kim, S. A holistic approach to cross-channel image noise modeling and its application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1683–1691, 2016.
  • Nie et al. (2012) Nie, F., Huang, H., and Ding, C. H. Low-rank matrix recovery via efficient schatten p-norm minimization. In AAAI Conference on Artificial Intelligence, 2012.
  • Oh et al. (2016) Oh, T.-H., Tai, Y.-W., Bazin, J.-C., Kim, H., and Kweon, I. S. Partial sum minimization of singular values in robust pca: Algorithm and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(4):744–758, 2016.
  • Ren et al. (2016) Ren, W., Cao, X., Pan, J., Guo, X., Zuo, W., and Yang, M.-H. Image deblurring via enhanced low-rank prior. IEEE Transactions on Image Processing, 25(7):3426–3437, 2016.
  • Renard et al. (2008) Renard, N., Bourennane, S., and Blanc-Talon, J. Denoising and dimensionality reduction using multilinear tools for hyperspectral images. IEEE Geoscience and Remote Sensing Letters, 5(2):138–142, 2008.
  • Shi & Malik (2000) Shi, J. and Malik, J. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
  • Sun et al. (2013) Sun, Q., Xiang, S., and Ye, J. Robust principal component analysis via capped norms. In Proceedings of the 19th International Conference on Knowledge Discovery and Data Mining, pp. 311–319. ACM, 2013.
  • Wald (2002) Wald, L. Data fusion: definitions and architectures: fusion of images of different spatial resolutions. Presses des MINES, 2002.
  • Wang et al. (2004) Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
  • Xie et al. (2016a) Xie, Q., Zhao, Q., Meng, D., Xu, Z., Gu, S., Zuo, W., and Zhang, L. Multispectral images denoising by intrinsic tensor sparsity regularization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1692–1700, 2016a.
  • Xie et al. (2016b) Xie, Y., Gu, S., Liu, Y., Zuo, W., Zhang, W., and Zhang, L. Weighted schatten p-norm minimization for image denoising and background subtraction. IEEE Transactions on Image Processing, 25(10):4842–4857, 2016b.
  • Xu et al. (2017) Xu, J., Zhang, L., Zhang, D., and Feng, X. Multi-channel weighted nuclear norm minimization for real color image denoising. In Proceedings of the IEEE International Conference on Computer Vision, volume 2, 2017.
  • Xu et al. (2018a) Xu, J., Li, H., Liang, Z., Zhang, D., and Zhang, L. Real-world noisy image denoising: A new benchmark. arXiv preprint arXiv:1804.02603, 2018a.
  • Xu et al. (2018b) Xu, J., Zhang, L., and Zhang, D. External prior guided internal prior learning for real-world noisy image denoising. IEEE Transactions on Image Processing, 27(6):2996–3010, 2018b.
  • Yair & Michaeli (2018) Yair, N. and Michaeli, T. Multi-scale weighted nuclear norm image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3165–3174, 2018.
  • Yasuma et al. (2010) Yasuma, F., Mitsunaga, T., Iso, D., and Nayar, S. K. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum. IEEE Transactions on Image Processing, 19(9):2241–2253, 2010.
  • Ye et al. (2015) Ye, M., Qian, Y., and Zhou, J. Multitask sparse nonnegative matrix factorization for joint spectral–spatial hyperspectral imagery denoising. IEEE Transactions on Geoscience and Remote Sensing, 53(5):2621–2639, 2015.
  • Yuhas et al. (1993) Yuhas, R. H., Boardman, J. W., and Goetz, A. F. Determination of semi-arid landscape endmembers and seasonal trends using convex geometry spectral unmixing techniques. In 4th Annual JPL Airborne Geoscience Workshop, volume 1, pp. 205–208, 1993.
  • Zhang et al. (2012) Zhang, D., Hu, Y., Ye, J., Li, X., and He, X. Matrix completion by truncated nuclear norm regularization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2192–2199. IEEE, 2012.
  • Zhang et al. (2014) Zhang, H., He, W., Zhang, L., Shen, H., and Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Transactions on Geoscience and Remote Sensing, 52(8):4729–4743, 2014.
  • Zhang et al. (2017) Zhang, K., Zuo, W., Chen, Y., Meng, D., and Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017.
  • Zhang et al. (2018) Zhang, K., Zuo, W., and Zhang, L. Ffdnet: Toward a fast and flexible solution for cnn based image denoising. IEEE Transactions on Image Processing, 2018.
  • Zhang et al. (2011) Zhang, L., Wu, X., Buades, A., and Li, X. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. Journal of Electronic Imaging, 20(2):023016, 2011.
  • Zoran & Weiss (2011) Zoran, D. and Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the IEEE International Conference on Computer Vision, pp. 479–486. IEEE, 2011.
  • Zuo et al. (2013) Zuo, W., Meng, D., Zhang, L., Feng, X., and Zhang, D. A generalized iterated shrinkage algorithm for non-convex sparse coding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 217–224, 2013.