1 Introduction
Image noise is an undesirable byproduct of image capture and severely degrades the quality of acquired images. Removing noise is a vital step in various computer vision tasks, e.g., recognition (He et al., 2016) and segmentation (Shi & Malik, 2000). Image denoising aims to recover the clean image $x$ from its noisy observation $y = x + n$, where $n$ is generally assumed to be additive white Gaussian noise (AWGN). Numerous effective methods have been proposed in the past decades, and they can be categorized into filter-based methods (Dabov et al., 2008; Buades et al., 2005), sparse-coding-based methods (Elad & Aharon, 2006; Dong et al., 2013), low-rankness-based methods (Gu et al., 2014, 2017), convolutional neural network (CNN) based methods (Zhang et al., 2017; Chen & Pock, 2017; Mao et al., 2016; Zhang et al., 2018), etc. In this paper, we focus on low-rank matrix approximation (LRMA) with regularization.
The LRMA problem can be solved via low-rank matrix factorization or rank minimization. The latter restores the data matrix by attaching a rank constraint to the estimated matrix, formulated by the following objective function:
$$\min_{X} \|Y - X\|_F^2 \quad \text{s.t.} \quad \operatorname{rank}(X) \le r, \tag{1}$$
where $\|\cdot\|_F$ denotes the Frobenius norm and $r$ is the given rank. However, rank minimization is NP-hard and tough to solve directly. As the tightest convex relaxation of the rank function (Fazel, 2002), nuclear norm minimization (NNM) has generally served as an ideal approximation for the regularization term. The nuclear norm is defined as the sum of the singular values, i.e., $\|X\|_* = \sum_i \sigma_i(X)$, where $\sigma_i(X)$ denotes the $i$th singular value of a given matrix $X$. Therefore, the NNM model aims to find a low-rank matrix $X$ from the degraded observation $Y$ by minimizing the following energy function:
$$\min_{X} \|Y - X\|_F^2 + \lambda \|X\|_*, \tag{2}$$
where $\lambda$ is a parameter balancing the data fidelity term and the regularization. A soft-thresholding operation (Cai et al., 2010) solves NNM with theoretical guarantees, and Candès and Recht (Candès & Recht, 2009) proved that a low-rank matrix can be efficiently recovered by solving NNM problems. However, most NNM-based models treat all singular values equally, which ignores the prior knowledge available in many image inverse problems, such as the major edge and texture information expressed by the larger singular values. To address this problem, Zhang et al. (2012) introduced a Truncated Nuclear Norm (TNN) regularization to improve the ability to preserve textures, while Sun et al. (2013) proposed a Capped Nuclear Norm (CNN) regularizer. A similar regularizer, the Partial Sum Nuclear Norm (PSNN), was developed in (Oh et al., 2016); it only minimizes the smallest singular values beyond the target rank of the matrix. Meanwhile, to improve the flexibility of the nuclear norm, Gu et al. (2014) generalized NNM to the weighted NNM (WNNM), attaching different weights to different singular values:
$$\min_{X} \|Y - X\|_F^2 + \|X\|_{w,*}, \quad \|X\|_{w,*} = \sum_i w_i \sigma_i(X), \tag{3}$$
where $w = [w_1, \dots, w_r]$ is the weight vector assigned to the singular values of $X$. Indeed, TNN, CNN and PSNN can be considered special cases of WNNM with the weight vector restricted to entries of 0 and 1. Moreover, several WNNM-based methods show outstanding experimental performance in various image processing applications such as image denoising (Gu et al., 2014), background subtraction (Gu et al., 2017), image deblurring (Ren et al., 2016), and inpainting (Yair & Michaeli, 2018). In particular, a multi-channel optimization model for real color image denoising under the WNNM framework (MCWNNM) (Xu et al., 2017) was proposed, which shows that by considering the different contributions of the R, G and B channels based on their noise levels, both noise characteristics and channel correlation can be effectively exploited. Additionally, standard NNM tends to over-shrink the singular values with the same threshold, yielding a solution that deviates far from that of the original rank minimization problem. Lu et al. (2015) further showed, via a simple proof and a counterexample, that WNNM may return a suboptimal solution under certain conditions. Therefore, the Schatten $p$-norm, defined as the $\ell_p$ norm of the singular values with $0 < p \le 1$, was proposed (Nie et al., 2012; Xie et al., 2016b) to impose low-rank regularization, achieving a more accurate recovery of the data matrix while requiring only a weaker restricted isometry property (Liu et al., 2014). As shown in the experimental section, our results also demonstrate that the Schatten $p$-norm based model performs significantly better than WNNM.
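As a concrete illustration of the difference between NNM and WNNM, the closed-form shrinkage step can be sketched as follows (a minimal sketch, assuming non-descending weights, under which weighted soft-thresholding is known to give the WNNM solution; a constant weight vector recovers the NNM soft-thresholding of Cai et al., 2010):

```python
import numpy as np

def weighted_svt(Y, w):
    """Weighted singular value soft-thresholding: shrink the i-th singular
    value of Y by the weight w_i. With all w_i equal this is the classical
    NNM soft-thresholding (Cai et al., 2010); with non-descending weights
    it solves the WNNM proximal problem (Gu et al., 2014)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - np.asarray(w, dtype=float), 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

Putting larger weights on smaller singular values penalizes them more, which is exactly how WNNM preserves the edge and texture information carried by the leading singular values.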
Inspired by the Schatten $p$-norm and MCWNNM (Xu et al., 2017), in this paper we propose a new model for color and multispectral image denoising via non-convex multi-band weighted Schatten $p$-norm minimization (MBWPNM). Introducing the Schatten $p$-norm into the weighted nuclear norm makes the problem considerably harder: like MCWNNM, it has no analytical solution. For the weighted Schatten $p$-norm regularization term of the low-rank model, we show that it is equivalent to a set of independent non-convex $\ell_p$-norm subproblems. We then apply the generalized soft-thresholding (GST) algorithm (Zuo et al., 2013) to solve the low-rank model with the weighted Schatten $p$-norm. Rigorous mathematical proofs of the equivalence and a complexity analysis are presented in later sections. The contributions of our work are summarized as follows:

We propose a new model, MBWPNM, and present an efficient optimization algorithm to solve it.

We apply the MBWPNM model to color and multispectral image denoising and validate the robustness of our model on synthetic and real images. The experimental results indicate that MBWPNM not only outperforms many state-of-the-art denoising algorithms both quantitatively and qualitatively, but also runs considerably faster than MCWNNM.
2 Related Work
As a generalization of the weighted nuclear norm minimization (WNNM) model, the multi-channel weighted nuclear norm minimization (MCWNNM) model is defined as:
$$\min_{X} \|W(Y - X)\|_F^2 + \|X\|_{w,*}, \tag{4}$$
with $W = \operatorname{diag}\!\left(\sigma_r^{-1} I, \sigma_g^{-1} I, \sigma_b^{-1} I\right)$, where $\sigma_c$ is the noise standard deviation in channel $c \in \{r, g, b\}$ and $I$ is the identity matrix. By introducing an augmented variable $Z$ with the constraint $X = Z$, the MCWNNM model can be reformulated as a linear equality-constrained problem with two variables, and its augmented Lagrangian function is:
$$\mathcal{L}(X, Z, A, \rho) = \|W(Y - X)\|_F^2 + \|Z\|_{w,*} + \langle A, X - Z \rangle + \frac{\rho}{2}\|X - Z\|_F^2, \tag{5}$$
where $A$ is the augmented Lagrangian multiplier and $\rho$ is the penalty parameter. According to (Xu et al., 2017), problem (5) can be solved within the alternating direction method of multipliers (ADMM) (Boyd et al., 2011) framework.
ADMM solves convex optimization problems by breaking them into smaller pieces, each of which is easier to handle. There is widespread interest in applying ADMM to problems involving such norms across statistics, machine learning and signal processing. However, on the one hand, because of the multiple iterations, obtaining a relatively accurate solution with ADMM is often time-consuming; on the other hand, parameters such as the penalty $\rho$ play an important role in convergence. Theoretically, MBWPNM could also be solved by ADMM, but its cost would be even higher than for MCWNNM due to the added complexity. In this paper, we present a more effective algorithm, which achieves better results in less time.
3 Multi-band Weighted Schatten p-norm Minimization
3.1 Problem Formulation
Given a matrix $Y$, the proposed optimization model under the MCWNNM framework, which aims to find a matrix $X$ as close to $Y$ as possible, is defined as:
$$\min_{X} \|W(Y - X)\|_F^2 + \|X\|_{w,S_p}^{p}, \tag{6}$$
involving two terms: an F-norm data fidelity term and a weighted Schatten $p$-norm regularization term. The weighted Schatten $p$-norm of a matrix $X$ with power $p$ is
$$\|X\|_{w,S_p}^{p} = \sum_i w_i \sigma_i(X)^{p}, \tag{7}$$
where $w$ is a non-negative weight vector, $\sigma_i(X)$ is the $i$th singular value of $X$, and $0 < p \le 1$. Note that MCWNNM is a special case of MBWPNM with the power $p$ set to 1. In the next subsection, we discuss the optimization of MBWPNM in detail.
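The regularizer of (7) is simple to evaluate numerically; the following sketch computes the weighted Schatten $p$-norm (raised to the power $p$) from the singular values:

```python
import numpy as np

def weighted_schatten_p(X, w, p):
    """Weighted Schatten p-norm raised to the power p, as in (7):
    sum_i w_i * sigma_i(X)^p, with sigma_i the singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)  # returned in non-increasing order
    return float(np.sum(np.asarray(w, dtype=float) * s ** p))
```

Setting $p = 1$ recovers the weighted nuclear norm used by MCWNNM, consistent with the special-case remark above.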
3.2 Optimization of MBWPNM
We first give the following theorem before analyzing and discussing the optimization of MBWPNM.
Theorem 1.
(Horn et al., 1990) Let $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times n}$ be given. The following inequalities hold for the decreasingly ordered singular values of $AB$, $A$ and $B$:
$$\sigma_i(AB) \le \sigma_i(A)\,\sigma_1(B), \quad i = 1, \dots, \min(m, n). \tag{8}$$
Introducing the Schatten $p$-norm makes problem (6) more complicated to solve directly than the original MCWNNM model. Instead, we consider the following objective function, similar to problem (6):
(9) 
Proof.
The proof can be found in the supplementary material. ∎
Thus, the original problem (6) has been converted into problem (9), which can be solved more easily. We then have Theorems 3 and 4.
Theorem 3.
(Horn & Johnson, 1991) Let $A, B \in \mathbb{R}^{m \times n}$, and let $\sigma_i(A)$, $\sigma_i(B)$ denote the non-increasingly ordered singular values of $A$ and $B$, respectively. Then
$$\left|\operatorname{tr}(A^{T} B)\right| \le \sum_i \sigma_i(A)\,\sigma_i(B). \tag{10}$$
Theorem 4.
Let the SVD of $Y$ be $Y = U \Sigma V^{T}$ with $\Sigma = \operatorname{diag}(\sigma_1, \dots, \sigma_r)$, $\sigma_1 \ge \dots \ge \sigma_r \ge 0$. Then an optimal solution to (9) is $X = U \Delta V^{T}$ with $\Delta = \operatorname{diag}(\delta_1, \dots, \delta_r)$, where $\delta = (\delta_1, \dots, \delta_r)$ is given by solving the problem below:
$$\min_{\delta} \; \sum_{i} \left[ \tfrac{1}{2}(\delta_i - \sigma_i)^2 + w_i \delta_i^{p} \right] \quad \text{s.t.} \quad \delta_1 \ge \delta_2 \ge \dots \ge \delta_r \ge 0. \tag{11}$$
Proof.
The proof can be found in the supplementary material. ∎
Obviously, problem (11) becomes much easier to handle if the additional order constraint (i.e., $\delta_1 \ge \delta_2 \ge \dots \ge \delta_r$) can be dropped. Therefore, we consider the more general setting in which problem (11) is decomposed into $r$ independent subproblems:
$$\min_{\delta_i \ge 0} \; \tfrac{1}{2}(\delta_i - \sigma_i)^2 + w_i \delta_i^{p}, \tag{12}$$
which has been explored in (Lu et al., 2015; Zuo et al., 2013). To obtain the solution of each subproblem (12), the generalized soft-thresholding (GST) algorithm (Zuo et al., 2013) is adopted. Specifically, given $p$ and $w_i$, there exists a specific threshold:
$$\tau_{p}^{GST}(w_i) = \left(2 w_i (1 - p)\right)^{\frac{1}{2-p}} + w_i\, p \left(2 w_i (1 - p)\right)^{\frac{p-1}{2-p}}. \tag{13}$$
If $\sigma_i \le \tau_{p}^{GST}(w_i)$, then $\delta_i = 0$ is the global minimum; otherwise, for any $\sigma_i > \tau_{p}^{GST}(w_i)$, problem (12) has one unique minimum $\delta_i^{GST}$, which can be obtained by solving the following equation:
$$\delta_i^{GST} - \sigma_i + w_i\, p \left(\delta_i^{GST}\right)^{p-1} = 0. \tag{14}$$
The complete description of the GST algorithm is given in Algorithm 1; please refer to (Zuo et al., 2013) for more details.
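The threshold-then-iterate structure of GST can be sketched compactly. The snippet below is a minimal rendering of the procedure described by (13) and (14) for the scalar problem $\min_{x \ge 0} \tfrac{1}{2}(x-\sigma)^2 + \lambda x^p$; the fixed iteration count is an illustrative choice, not the paper's setting:

```python
import numpy as np

def gst(sigma, lam, p, iters=10):
    """Generalized soft-thresholding (Zuo et al., 2013): below the
    threshold tau of (13) the minimizer is 0; above it, a fixed-point
    iteration solves the stationarity equation (14):
    x - sigma + lam*p*x**(p-1) = 0."""
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if sigma <= tau:
        return 0.0
    x = sigma  # initialize at sigma and iterate toward the fixed point
    for _ in range(iters):
        x = sigma - lam * p * x ** (p - 1.0)
    return x
```

For $p = 1$ the threshold reduces to $\lambda$ and the iteration returns $\sigma - \lambda$, i.e., classical soft-thresholding, matching the remark that MCWNNM is the $p = 1$ special case.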
3.3 Optimal Solution under NonDescending Weights
Generally, the larger singular values contain the major edge and texture information, which implies that the small singular values should be penalized more than the large ones. Returning to the optimization of problem (11), it is therefore meaningful to assign non-descending weights to the non-ascending singular values for most practical applications in low-level vision. Based on these considerations, we have Theorem 5.
Theorem 5.
Suppose the weights are non-descending, $w_1 \le w_2 \le \dots \le w_r$. The optimal solutions of all the independent subproblems in (12) satisfy the following inequality:
$$\delta_1^{*} \ge \delta_2^{*} \ge \dots \ge \delta_r^{*}. \tag{15}$$
Proof.
For any pair $i < j$, we have $\sigma_i \ge \sigma_j$ and $w_i \le w_j$. The optimality of $\delta_i^{*}$ and $\delta_j^{*}$ in their respective subproblems yields the inequality (16); exchanging the two solutions in the objectives gives (17). Summing these together and simplifying reduces them to (18). Thus $\delta_i^{*} \ge \delta_j^{*}$. ∎
According to Theorem 5, solving problem (11) is equivalent to solving all the independent subproblems in (12) when the weights are non-descending. In summary, the original problem (6) undergoes a series of transformations, (6) → (9) → (11) → (12), before it can be solved effectively. Finally, the proposed algorithm for MBWPNM is summarized in Algorithm 2.
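The chain of transformations can be sketched end to end: take the SVD, solve one scalar GST subproblem per singular value with its weight, and rebuild the matrix. This is a minimal sketch of the per-patch shrinkage step, not the full Algorithm 2 (which also handles the weight matrix $W$ and iterates over patches):

```python
import numpy as np

def gst(sigma, lam, p, iters=10):
    # scalar GST (Zuo et al., 2013) for min_{x>=0} 0.5*(x-sigma)^2 + lam*x^p
    tau = (2 * lam * (1 - p)) ** (1 / (2 - p)) \
        + lam * p * (2 * lam * (1 - p)) ** ((p - 1) / (2 - p))
    if sigma <= tau:
        return 0.0
    x = sigma
    for _ in range(iters):
        x = sigma - lam * p * x ** (p - 1)
    return x

def mbwpnm_shrink(Y, w, p):
    """Sketch of (6) -> (9) -> (11) -> (12): SVD of Y, one GST subproblem
    per singular value with its weight (non-descending w keeps the
    solutions ordered, per Theorem 5), then reassemble."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = np.array([gst(si, wi, p) for si, wi in zip(s, w)])
    return U @ np.diag(s_new) @ Vt
```

Because the weights are non-descending while the singular values are non-ascending, the shrunk values come out ordered, so no explicit sorting or projection onto the order constraint of (11) is needed.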
4 Color and Multispectral Image Denoising
4.1 The Denoising Algorithm
Similar to the MCWNNM model, MBWPNM is applied, for noise removal, to the matrix of nonlocal similar patches of a color image, concatenated from the R, G and B channels. Specifically, given a degraded color image, each local patch of size $q \times q \times 3$ is stretched to a patch vector $y = [y_r; y_g; y_b]$, where $y_r$, $y_g$, $y_b$ are the patches in the R, G, B channels, respectively. For each local patch, we search for its most similar patches across the image (in practice, in a large enough local window) by the block matching method proposed in (Dabov et al., 2008). Then, by stacking those nonlocal similar patches into a matrix column by column, we have $Y = X + N$, where $X$ and $N$ are the corresponding clean and noise patch matrices. Applying the MBWPNM algorithm to each $Y$, the whole image can be estimated by aggregating all the denoised patches.
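The patch-matrix construction above can be sketched as follows; `coords` stands in for the block-matching output (the positions of the most similar patches), which is an assumption of this sketch rather than an interface defined in the paper:

```python
import numpy as np

def patch_matrix(img, coords, q):
    """Build the low-rank data matrix Y: each q x q RGB patch at a
    position in `coords` is stretched into a vector [r; g; b] and the
    vectors are stacked column by column."""
    cols = []
    for (i, j) in coords:
        patch = img[i:i + q, j:j + q, :]  # q x q x 3 patch
        cols.append(np.concatenate([patch[:, :, c].ravel() for c in range(3)]))
    return np.stack(cols, axis=1)         # shape (3*q*q, len(coords))
```

Denoising then reverses the process: columns of the shrunk matrix are reshaped back into patches and aggregated (e.g., averaged at overlapping pixels) to form the estimated image.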
Compared with an RGB color image, a multispectral image (MSI) is simply an image with more channels, so it is natural to apply the MBWPNM model to MSI denoising. For a given 3D patch cube, stacked from the patches at the same position of the MSI over all bands, there are also many similar patch cubes. The low-rank property of the image matrix is exploited by rearranging each of those nonlocal similar patch cubes as a 1D vector and stacking the vectors into a matrix column by column. Figure 1 shows the detailed process. Chang et al. (2017) pointed out that the spectral and nonlocal-similarity information, as mutually complementary priors, can be jointly utilized in this way to effectively improve denoising performance. It is also reasonable to use a weight matrix $W = \operatorname{diag}(\sigma_1^{-1} I, \dots, \sigma_b^{-1} I)$ to adjust the contributions of the multiple bands (channels) based on their different noise levels in real scenes, where $I$ is the identity matrix and $b$ is the number of bands.
In color image and MSI denoising, theoretically, the larger the singular values, the less they should be shrunk. Therefore, the weight assigned to a singular value is inversely proportional to it, and we let
$$w_i = \frac{c \sqrt{M}}{\sigma_i(X) + \varepsilon}, \tag{19}$$
where $c$ is a constant, $M$ is the number of similar patches and $\varepsilon$ is a small constant to avoid division by zero. We adopt the iterative regularization scheme to restore the clean image, which is reconstructed by aggregating all the denoised patches together. The MBWPNM-based color image and MSI denoising algorithm is summarized in Algorithm 3.
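The weight rule of (19) is one line of code; the default values of `c` and `eps` below are illustrative assumptions, not the paper's tuned constants:

```python
import numpy as np

def patch_weights(sigma, M, c=1.0, eps=1e-6):
    """Weight rule (19): w_i = c*sqrt(M)/(sigma_i + eps), i.e. inversely
    proportional to the singular value; c and eps are placeholder values."""
    return c * np.sqrt(M) / (np.asarray(sigma, dtype=float) + eps)
```

Since the singular values are non-ascending, these weights come out non-descending, which is exactly the condition under which Theorem 5 guarantees the independent GST solutions remain globally optimal for (11).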
4.2 Complexity Analysis
For a matrix of size $b q^2 \times M$, where $b$ is the number of bands (channels), $q$ denotes the width and height of a local patch in Algorithm 3, and $M$ is the number of similar patches, the cost of the SVD in each iteration (step 6) is $O\!\left(b q^2 M \min(b q^2, M)\right)$. The GST algorithm (step 3 in Algorithm 2) costs $O\!\left(J \min(b q^2, M)\right)$, where $J$ is the number of iterations in the GST algorithm. Therefore, the overall cost is $O\!\left(K N \left(b q^2 M \min(b q^2, M) + J \min(b q^2, M)\right)\right)$, where $K$ is the number of iterations in Algorithm 3 and $N$ denotes the total number of patches. In particular, color image denoising corresponds to the case $b = 3$.
5 Experimental Results
5.1 Experimental Setting
First, we compare the MBWPNM method with several state-of-the-art image denoising methods under both simulated and real noise conditions, including CBM3D (Dabov et al., 2007a), NCSR (Dong et al., 2013), EPLL (Zoran & Weiss, 2011), Guided (Xu et al., 2018b), DnCNN (Zhang et al., 2017), FFDNet (Zhang et al., 2018), and MCWNNM (Xu et al., 2017). We then also compare it with 9 MSI denoising algorithms: 1D sparse-representation-based methods (SDS (Lam et al., 2012), ANLM (Manjón et al., 2010)), 2D low-rank matrix recovery methods (LRMR (Zhang et al., 2014), NMF (Ye et al., 2015)), and state-of-the-art tensor methods (BM3D (Dabov et al., 2007b), LRTA (Renard et al., 2008), BM4D (Maggioni et al., 2013), ISTreg (Xie et al., 2016a), LLRT (Chang et al., 2017)). In the simulated experiments, the noise level of each band (channel) is known. For methods that require a single input parameter, such as CBM3D and LLRT, we set the noise level to the root mean square (RMS) over the $b$ bands:
$$\sigma = \sqrt{\frac{1}{b} \sum_{c=1}^{b} \sigma_c^2}. \tag{20}$$
In the real cases, we use the method of (Liu et al., 2008) to estimate the noise level of each channel.
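The RMS aggregation of (20) is trivial to compute; a minimal sketch:

```python
import numpy as np

def rms_noise_level(sigmas):
    """RMS of the per-channel noise levels, as in (20); used as the single
    input noise level for methods such as CBM3D and LLRT."""
    s = np.asarray(sigmas, dtype=float)
    return float(np.sqrt(np.mean(s ** 2)))
```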
5.2 Color Image Denoising
For a fair comparison, we keep the parameter settings of both MCWNNM and MBWPNM at the values that achieve their best performance; in the simulated experiments, the patch size, the number of nonlocal similar patches, the window size for searching similar patches, and the number of iterations are fixed across methods.
5.2.1 Simulated Color Images Denoising
Intuitively, different choices of the power $p$ influence the shrinkage of the singular values differently, and choosing a suitable $p$ for each noise level is nontrivial. We therefore add AWGN to each of the R, G, B channels of the 18 color sub-images from the McMaster dataset (Zhang et al., 2011) and test different powers under different noise levels. The PSNR results are shown in Figure 2, with the best result for each group highlighted in red. In this test, we vary $p$ from 0.05 to 1 with a step of 0.05, and select 6 noise levels $(\sigma_r, \sigma_g, \sigma_b)$: (5,15,25), (10,20,30), (15,25,35), (20,30,40), (25,35,45), (30,40,50). In the first two subfigures of Figure 2, we observe that the best value of $p$ is 1 when the noise level is low; it drops to 0.95 in the next two subfigures. As the noise level increases, the singular values are polluted more severely; consequently, as shown in the last two subfigures of Figure 2, smaller values of $p$ (0.8 and 0.55 for the last two settings, respectively) are more suitable for higher noise levels. In summary, the optimal value of $p$ is, by and large, inversely proportional to the noise level. We follow this principle in the experiments below.
$(\sigma_r, \sigma_g, \sigma_b)$  CBM3D  NCSR  EPLL  DnCNN  FFDNet  MCWNNM  MBWPNM
(5,30,15)  29.46  30.77  31.68  33.35  33.29  32.99  33.34
(25,5,30)  29.38  30.16  30.82  32.56  32.48  32.15  33.02
(30,10,50)  29.06  27.95  29.05  30.65  30.44  29.81  30.71
(40,20,30)  29.14  28.29  28.95  31.06  30.98  29.27  29.43
Camera Settings  CBM3D  NCSR  EPLL  Guided  FFDNet  MCWNNM  MBWPNM
Canon 5D III ISO=3200  38.25  38.02  37.00  40.82  37.05  41.20  41.21
Canon 5D III ISO=3200  35.85  34.76  33.88  37.19  33.91  37.25  37.10
Canon 5D III ISO=3200  34.12  34.91  33.83  36.92  33.86  36.48  37.14
Nikon D600 ISO=3200  33.10  33.51  33.28  35.32  33.31  35.54  35.49
Nikon D600 ISO=3200  35.57  34.13  33.77  36.62  33.81  37.03  37.06
Nikon D600 ISO=3200  40.77  35.44  34.93  38.68  34.98  39.56  39.56
Nikon D800 ISO=1600  36.83  35.98  35.47  38.88  35.50  39.26  39.19
Nikon D800 ISO=1600  40.19  36.39  35.71  40.66  35.75  41.45  41.44
Nikon D800 ISO=1600  37.64  35.34  34.81  39.20  34.83  39.54  39.52
Nikon D800 ISO=3200  39.72  33.63  33.26  37.92  33.30  38.94  39.10
Nikon D800 ISO=3200  36.74  33.13  32.89  36.62  32.94  37.40  37.10
Nikon D800 ISO=3200  40.96  33.43  32.91  37.64  32.94  39.42  39.47
Nikon D800 ISO=6400  34.63  30.09  29.63  33.01  29.65  34.85  34.92
Nikon D800 ISO=6400  32.95  30.35  29.97  32.93  30.00  33.97  33.96
Nikon D800 ISO=6400  33.61  30.12  29.87  32.96  29.88  33.97  34.09
Average  36.73  33.95  33.41  37.02  33.45  37.72  37.76
Time (s)  6  1434  424  48  4  192  41
We evaluate the competing methods on 24 color images from the Kodak PhotoCD dataset (http://r0k.us/graphics/kodak/). Zero-mean additive white Gaussian noise at four noise levels is added to the test images to generate the degraded observations. The averaged PSNR results are shown in Table 1, with the best result highlighted in bold. Following the discussion of the power $p$, we choose a suitable $p$ for each test in Table 1. On average, MBWPNM achieves the highest PSNR values on 2 of the 4 noise levels. Compared with MCWNNM, our method achieves 0.35 dB, 0.87 dB, 0.90 dB and 0.16 dB improvements, respectively.
In Figure 3, we compare the visual quality of the images denoised by the competing algorithms. One can see that CBM3D, DnCNN and FFDNet over-smooth the roof area of image kodim08, while NCSR, EPLL and MCWNNM leave residual noise or generate more color artifacts. MBWPNM recovers the image with better visual quality than the other methods. More visual comparisons can be found in the supplementary material.
5.2.2 Real Color Images Denoising
Different from AWGN, real-world noise is signal dependent and cannot be modeled by an explicit distribution. To demonstrate the robustness of our method, we compare MBWPNM with the competing methods (Dong et al., 2013; Zhang et al., 2018; Xu et al., 2017; Dabov et al., 2007a; Zoran & Weiss, 2011; Xu et al., 2018b) on three representative datasets.
The first dataset is provided in (Nam et al., 2016) and includes 11 indoor scenes with 500 JPEG images per scene. The ground-truth noise-free images are generated by computing the mean image of each scene, and the authors of (Nam et al., 2016) cropped 15 smaller images for the experiments because the original images are of very high resolution. These images are shown in the supplementary material. Quantitative comparisons, including PSNR results and averaged computational time (ACT), are listed in Table 2, with the best results highlighted in bold. MBWPNM achieves the highest averaged PSNR among all competing methods: it improves on MCWNNM by 0.04 dB on average and outperforms the benchmark CBM3D method by 1.03 dB on average. One can also see that FFDNet, based on convolutional neural networks (CNNs), no longer performs as well as in the simulated experiments; this is confirmed in the following tests. More visual comparisons can be found in the supplementary material.
The other two datasets are provided in (Xu et al., 2018a) and (Abdelhamed et al., 2018), both of which are much more comprehensive than the first one. A detailed description of these datasets is given in the supplementary file. The PSNR and ACT results of the competing algorithms are reported in Table 3. We can see again that MBWPNM achieves much better performance than the other competing methods. Due to limited space, more visual comparisons are included in the supplementary file.
Comparison on speed. We compare the average computational time (in seconds) of the different methods, shown in Tables 2 and 4. All experiments are implemented in Matlab on a PC with a 3.2 GHz CPU and 8 GB RAM, and the fastest result is highlighted in bold. FFDNet is the fastest on all three datasets, needing about 4 seconds, while MBWPNM generally costs about one fifth of the time of MCWNNM. Note that CBM3D and FFDNet are implemented with compiled C++ MEX functions, while NCSR, EPLL, Guided, MCWNNM and MBWPNM are implemented purely in Matlab.
Dataset  Index  CBM3D  NCSR  EPLL  Guided  FFDNet  MCWNNM  MBWPNM
Dataset 2  PSNR  38.11  36.49  35.97  38.37  35.98  38.57  38.59
Dataset 2  Time  6  1174  427  52  4  171  38
Dataset 3  PSNR  37.20  30.21  27.48  32.55  27.69  37.82  37.83
Dataset 3  Time  8  1284  426  48  4  527  99
5.3 MSI Denoising
Different from color image denoising, in order to give an overall evaluation of the spatial and spectral quality, four quantitative picture quality indices (PQIs) are employed: PSNR, SSIM (Wang et al., 2004), ERGAS (Wald, 2002) and SAM (Yuhas et al., 1993). The larger the PSNR and SSIM, and the smaller the ERGAS and SAM, the better the recovered MSI.
The Columbia Multispectral Database (CAVE) (Yasuma et al., 2010) is utilized in our simulated experiments. The noisy MSIs are generated by adding AWGN with a different variance to each of the bands; specifically, the per-band noise variances form arithmetic sequences starting from one with common differences of 1, 2, 3 and 4. We choose a suitable power $p$ for each test. For each noise setting, all four PQIs are reported in the supplementary material. On averaged PSNR, MBWPNM achieves consistent improvements over LLRT and the highest values among all competing methods. Overall, across the four comparative tests, MBWPNM achieves the best performance on 11 of the 16 quantitative assessments.
6 Conclusion
In this paper, we propose a multi-band weighted Schatten $p$-norm minimization (MBWPNM) model for color image and MSI denoising, which preserves the power of MCWNNM while flexibly giving different rank components different treatments in practical applications. To solve the MBWPNM model, we derive an equivalent form that can be efficiently solved via the generalized soft-thresholding (GST) algorithm. We also prove that, when the weights are in non-descending order, the solution obtained by the GST algorithm remains a global optimum. We then apply the proposed MBWPNM algorithm to color image and MSI denoising. The experimental results on synthetic and real datasets demonstrate that the MBWPNM model achieves significant performance gains over several state-of-the-art methods, such as CBM3D and BM4D. In the future, we will extend MBWPNM to other computer vision applications.
References

Abdelhamed et al. (2018) Abdelhamed, A., Lin, S., and Brown, M. S. A high-quality denoising dataset for smartphone cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1692–1700, 2018.
 Boyd et al. (2011) Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J., et al. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
 Buades et al. (2005) Buades, A., Coll, B., and Morel, J.M. A nonlocal algorithm for image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pp. 60–65. IEEE, 2005.
 Cai et al. (2010) Cai, J.F., Candès, E. J., and Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
 Candès & Recht (2009) Candès, E. J. and Recht, B. Exact matrix completion via convex optimization. Foundations of Computational mathematics, 9(6):717, 2009.
 Chang et al. (2017) Chang, Y., Yan, L., and Zhong, S. Hyperlaplacian regularized unidirectional lowrank tensor recovery for multispectral image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4260–4268, 2017.
 Chen & Pock (2017) Chen, Y. and Pock, T. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1256–1272, 2017.
 Dabov et al. (2007a) Dabov, K., Foi, A., Katkovnik, V., and Egiazarian, K. Color image denoising via sparse 3d collaborative filtering with grouping constraint in luminancechrominance space. In Proceedings of the IEEE International Conference on Image Processing., volume 1, pp. I–313. IEEE, 2007a.
 Dabov et al. (2007b) Dabov, K., Foi, A., Katkovnik, V., and Egiazarian, K. Image denoising by sparse 3d transformdomain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080–2095, 2007b.
 Dabov et al. (2008) Dabov, K., Foi, A., Katkovnik, V., and Egiazarian, K. Image restoration by sparse 3d transformdomain collaborative filtering. In Image Processing: Algorithms and Systems VI, volume 6812, pp. 681207. International Society for Optics and Photonics, 2008.
 Dong et al. (2013) Dong, W., Zhang, L., Shi, G., and Li, X. Nonlocally centralized sparse representation for image restoration. IEEE Transactions on Image Processing, 22(4):1620–1630, 2013.
 Elad & Aharon (2006) Elad, M. and Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736–3745, 2006.
 Fazel (2002) Fazel, M. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
 Gu et al. (2014) Gu, S., Zhang, L., Zuo, W., and Feng, X. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2862–2869, 2014.
 Gu et al. (2017) Gu, S., Xie, Q., Meng, D., Zuo, W., Feng, X., and Zhang, L. Weighted nuclear norm minimization and its applications to low level vision. International Journal of Computer Vision, 121(2):183–208, 2017.
 He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
 Horn & Johnson (1991) Horn, R. A. and Johnson, C. R. Topics in Matrix Analysis. Cambridge University Press, New York, 1991.
 Horn et al. (1990) Horn, R. A. and Johnson, C. R. Matrix Analysis. Cambridge University Press, 1990.
 Lam et al. (2012) Lam, A., Sato, I., and Sato, Y. Denoising hyperspectral images using spectral domain statistics. In Proceedings of the 21st International Conference on Pattern Recognition, pp. 477–480. IEEE, 2012.
 Liu et al. (2008) Liu, C., Szeliski, R., Kang, S. B., Zitnick, C. L., and Freeman, W. T. Automatic estimation and removal of noise from a single image. IEEE Transactions on Pattern Analysis & Machine Intelligence, 30(2):299–314, 2008.
 Liu et al. (2014) Liu, L., Huang, W., and Chen, D.R. Exact minimum rank approximation via schatten pnorm minimization. Journal of Computational and Applied Mathematics, 267:218–227, 2014.

Lu et al. (2015) Lu, C., Zhu, C., Xu, C., Yan, S., and Lin, Z. Generalized singular value thresholding. In AAAI Conference on Artificial Intelligence, pp. 1805–1811, 2015.
 Maggioni et al. (2013) Maggioni, M., Katkovnik, V., Egiazarian, K., and Foi, A. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Transactions on Image Processing, 22(1):119–133, 2013.
 Manjón et al. (2010) Manjón, J. V., Coupé, P., MartíBonmatí, L., Collins, D. L., and Robles, M. Adaptive nonlocal means denoising of mr images with spatially varying noise levels. Journal of Magnetic Resonance Imaging, 31(1):192–203, 2010.
 Mao et al. (2016) Mao, X., Shen, C., and Yang, Y.B. Image restoration using very deep convolutional encoderdecoder networks with symmetric skip connections. In Advances in Neural Information Processing Systems, pp. 2802–2810, 2016.
 Nam et al. (2016) Nam, S., Hwang, Y., Matsushita, Y., and Joo Kim, S. A holistic approach to crosschannel image noise modeling and its application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1683–1691, 2016.
 Nie et al. (2012) Nie, F., Huang, H., and Ding, C. H. Lowrank matrix recovery via efficient schatten pnorm minimization. In AAAI Conference on Artificial Intelligence, 2012.
 Oh et al. (2016) Oh, T.H., Tai, Y.W., Bazin, J.C., Kim, H., and Kweon, I. S. Partial sum minimization of singular values in robust pca: Algorithm and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(4):744–758, 2016.
 Ren et al. (2016) Ren, W., Cao, X., Pan, J., Guo, X., Zuo, W., and Yang, M.H. Image deblurring via enhanced lowrank prior. IEEE Transactions on Image Processing, 25(7):3426–3437, 2016.
 Renard et al. (2008) Renard, N., Bourennane, S., and BlancTalon, J. Denoising and dimensionality reduction using multilinear tools for hyperspectral images. IEEE Geoscience and Remote Sensing Letters, 5(2):138–142, 2008.
 Shi & Malik (2000) Shi, J. and Malik, J. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.

Sun et al. (2013) Sun, Q., Xiang, S., and Ye, J. Robust principal component analysis via capped norms. In Proceedings of the 19th International Conference on Knowledge Discovery and Data Mining, pp. 311–319. ACM, 2013.
 Wald (2002) Wald, L. Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions. Presses des MINES, 2002.
 Wang et al. (2004) Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
 Xie et al. (2016a) Xie, Q., Zhao, Q., Meng, D., Xu, Z., Gu, S., Zuo, W., and Zhang, L. Multispectral images denoising by intrinsic tensor sparsity regularization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1692–1700, 2016a.
 Xie et al. (2016b) Xie, Y., Gu, S., Liu, Y., Zuo, W., Zhang, W., and Zhang, L. Weighted schatten norm minimization for image denoising and background subtraction. IEEE Transactions on Image Processing, 25(10):4842–4857, 2016b.
 Xu et al. (2017) Xu, J., Zhang, L., Zhang, D., and Feng, X. Multichannel weighted nuclear norm minimization for real color image denoising. In Proceedings of the IEEE International Conference on Computer Vision, volume 2, 2017.
 Xu et al. (2018a) Xu, J., Li, H., Liang, Z., Zhang, D., and Zhang, L. Realworld noisy image denoising: A new benchmark. arXiv preprint arXiv:1804.02603, 2018a.
 Xu et al. (2018b) Xu, J., Zhang, L., and Zhang, D. External prior guided internal prior learning for realworld noisy image denoising. IEEE Transactions on Image Processing, 27(6):2996–3010, 2018b.
 Yair & Michaeli (2018) Yair, N. and Michaeli, T. Multiscale weighted nuclear norm image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3165–3174, 2018.
 Yasuma et al. (2010) Yasuma, F., Mitsunaga, T., Iso, D., and Nayar, S. K. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum. IEEE Transactions on Image Processing, 19(9):2241–2253, 2010.
 Ye et al. (2015) Ye, M., Qian, Y., and Zhou, J. Multitask sparse nonnegative matrix factorization for joint spectral–spatial hyperspectral imagery denoising. IEEE Transactions on Geoscience and Remote Sensing, 53(5):2621–2639, 2015.
 Yuhas et al. (1993) Yuhas, R. H., Boardman, J. W., and Goetz, A. F. Determination of semiarid landscape endmembers and seasonal trends using convex geometry spectral unmixing techniques. In 4th Annual JPL Airborne Geoscience Workshop., Volume 1:205–208, 1993.
 Zhang et al. (2012) Zhang, D., Hu, Y., Ye, J., Li, X., and He, X. Matrix completion by truncated nuclear norm regularization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2192–2199. IEEE, 2012.
 Zhang et al. (2014) Zhang, H., He, W., Zhang, L., Shen, H., and Yuan, Q. Hyperspectral image restoration using lowrank matrix recovery. IEEE Transactions on Geoscience and Remote Sensing, 52(8):4729–4743, 2014.
 Zhang et al. (2017) Zhang, K., Zuo, W., Chen, Y., Meng, D., and Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017.
 Zhang et al. (2018) Zhang, K., Zuo, W., and Zhang, L. Ffdnet: Toward a fast and flexible solution for cnn based image denoising. IEEE Transactions on Image Processing, 2018.

Zhang et al. (2011) Zhang, L., Wu, X., Buades, A., and Li, X. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. Journal of Electronic Imaging, 20(2):023016, 2011.
 Zoran & Weiss (2011) Zoran, D. and Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the IEEE International Conference on Computer Vision, pp. 479–486. IEEE, 2011.
 Zuo et al. (2013) Zuo, W., Meng, D., Zhang, L., Feng, X., and Zhang, D. A generalized iterated shrinkage algorithm for nonconvex sparse coding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 217–224, 2013.