NGMeet
Matlab code for: "Non-local Meets Global: An Integrated Paradigm for Hyperspectral Denoising." arXiv, Dec. 2018.
Non-local low-rank tensor approximation has been developed as a state-of-the-art method for hyperspectral image (HSI) denoising. Unfortunately, as the number of spectral bands grows, the running time of these methods increases significantly while their denoising performance benefits little. In this paper, we claim that an HSI underlies a global spectral low-rank subspace, and that the spectral subspace of each group of full-band patches should underlie this global low-rank subspace. This motivates us to propose a unified spatial-spectral paradigm for HSI denoising. As the new model is hard to optimize, we further propose an efficient optimization algorithm motivated by alternating minimization. This is done by first learning a low-dimensional projection and the related reduced image from the noisy HSI. Then, non-local low-rank denoising and iterative regularization are developed to refine the reduced image and the projection, respectively. Finally, experiments on both synthetic and real datasets demonstrate the superiority of our method over other state-of-the-art HSI denoising methods.
Recent decades have witnessed the development of hyperspectral imaging techniques [4, 38, 18]. A hyperspectral imaging system can cover the wavelength region from 0.4 to 2.5 μm at a nominal spectral resolution of 10 nm. With this wealth of spectral information, hyperspectral images (HSIs) have a high spectral diagnosis ability that can distinguish precise details even between similar materials [2, 31], offering potential advantages in applications such as remote sensing [32, 33], medical diagnosis [20]
[27, 33], quality control [17], and so on. Due to instrumental noise, HSIs are often corrupted by Gaussian noise, which significantly affects subsequent applications. As a preprocessing step, HSI denoising is therefore fundamental prior to HSI exploitation [6, 41, 43]. For HSI denoising, the two most important properties are the spatial non-local similarity and the global spectral correlation. The spatial non-local similarity suggests that similar patches inside an HSI can be grouped and denoised together. The related methods [9, 12, 13, 28, 36, 45] denoise HSIs via group matching of full-band patches (FBPs, stacked from patches at the same spatial location of the HSI over all bands) and low-rank denoising of each non-local FBP group (NLFBPG). These methods have achieved state-of-the-art performance. However, they still face a crucial problem. For HSIs, a higher spectral dimension means a higher discriminative ability [2], so more spectral bands are desired. As the number of bands increases, the size of each NLFBPG also grows, leading to significantly more computation for the subsequent low-rank matrix/tensor approximations.
HSIs also have strong spectral correlation, which is modeled as a low-rank property [1, 5, 41] and has been widely adopted for HSI denoising. However, spectral low-rank regularization alone cannot remove the noise effectively. One promising improvement is to project the original noisy HSI onto a low-dimensional spectral subspace and denoise the projected HSI via spatial-based methods [10, 29, 46, 47]. Unfortunately, these two-stage methods are significantly influenced by the quality of the projection and the effectiveness of the spatial denoising. All of them fail to extract a clean projection matrix, which leaves the restored HSI noisy.
To alleviate the aforementioned problems, this paper introduces a unified HSI denoising paradigm that integrates the spatial non-local similarity and the global spectral low-rank property simultaneously. We start from the observation that an HSI should underlie a low-dimensional spectral subspace, a fact widely accepted in hyperspectral imaging [16, 23], compressive sensing [3, 44], unmixing [2], and dimension reduction [1]. It follows that all the NLFBPGs should also underlie a common low-dimensional spectral subspace. Thus, we first learn a global spectral low-rank projection, and subsequently exploit the spatial non-local similarity of the projected HSI. The computational cost of the non-local processing in our paradigm remains almost unchanged as the number of spectral bands grows, while the global spectral low-rank property is also enhanced. The contributions are summarized as follows:
We introduce a unified paradigm that exploits the spatial non-local and global spectral low-rank properties simultaneously. We transfer the non-local denoising to the reduced image, improving computational efficiency as the number of spectral bands increases;
The resulting denoising model is hard to optimize, as it involves both a complex constraint and a complex regularization. We further propose an efficient optimization algorithm inspired by alternating minimization;
Finally, the proposed method not only performs best among state-of-the-art methods in simulated experiments, where Gaussian noise is added manually, but also produces the most appealing recovered images on real datasets.
Notations. We follow the tensor notation in [24]: tensors are denoted by Euler script letters, e.g. $\mathcal{X}$, and matrices by boldface capital letters, e.g. $\mathbf{X}$. For an $N$-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, the mode-$k$ unfolding operator is denoted $\mathbf{X}_{(k)} = \mathrm{unfold}_k(\mathcal{X})$, and $\mathcal{X} = \mathrm{fold}_k(\mathbf{X}_{(k)})$, where $\mathrm{fold}_k$ is the inverse of the unfolding operator. The Frobenius norm of $\mathcal{X}$ is defined by $\|\mathcal{X}\|_F = \big(\sum_{i_1,\dots,i_N} x_{i_1 \cdots i_N}^2\big)^{1/2}$. The mode-$k$ product of a tensor $\mathcal{X}$ and a matrix $\mathbf{A} \in \mathbb{R}^{J \times I_k}$ is defined as $\mathcal{Y} = \mathcal{X} \times_k \mathbf{A}$, where $\mathbf{Y}_{(k)} = \mathbf{A}\mathbf{X}_{(k)}$ and $\mathcal{Y} \in \mathbb{R}^{I_1 \times \cdots \times I_{k-1} \times J \times I_{k+1} \times \cdots \times I_N}$.
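As a concrete illustration of these tensor operations (a minimal NumPy sketch of our own, not part of the paper; the function names are ours):

```python
import numpy as np

def unfold(X, k):
    """Mode-k unfolding: move axis k to the front and flatten the rest."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def fold(M, k, shape):
    """Inverse of unfold: reshape, then move the front axis back to k."""
    full = [shape[k]] + [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape(full), 0, k)

def mode_product(X, A, k):
    """Mode-k product X x_k A, i.e. fold_k(A @ unfold_k(X))."""
    shape = list(X.shape)
    shape[k] = A.shape[0]
    return fold(A @ unfold(X, k), k, shape)

X = np.random.rand(4, 5, 6)
A = np.random.rand(3, 6)
Y = mode_product(X, A, 2)   # shape (4, 5, 3)
```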
Since denoising is an ill-posed problem, proper regularizations based on HSI prior knowledge are necessary [15, 35]. Mainstream HSI denoising methods can be grouped into two categories: spatial non-local based methods and spectral low-rank based methods.
HSIs exhibit strong spatial non-local similarity. After non-local low-rank modeling was first introduced to HSI denoising in [28], the pipeline of non-local based methods became fixed: FBP grouping followed by low-rank tensor approximation. Almost all subsequent work focused on the low-rank tensor modeling of NLFBPGs, such as Tucker decomposition [28], sparsity-regularized Tucker decomposition [36], Laplacian scale mixture low-rank modeling [13], and weighted low-rank tensor recovery [8], to exploit the spatial non-local similarity and the spectral low-rank property simultaneously. However, as the number of spectral bands increases, the computational burden also grows significantly, blocking the application of these methods to real high-spectral-dimension HSIs.
Chang et al. [9] claimed that the spectral low-rank property of NLFBPGs is weak and proposed a unidirectional low-rank tensor recovery to explore the non-local similarity. It saves much computation and achieves state-of-the-art performance in HSI denoising. This reflects the fact that previous non-local low-rank based methods have not yet utilized the spectral low-rank property efficiently. How to balance the spectral low-rank property against the spatial non-local similarity remains an open problem.
The global spectral low-rank property of HSIs has been widely accepted and applied in downstream applications [1, 5]. As pointed out in [1], the intrinsic dimension of the spectral subspace is far smaller than the spectral dimension of the original image. By vectorizing each band of the HSI and reshaping the 3-D HSI into a 2-D matrix, various low-rank approximation methods, such as principal component analysis (PCA) [5], robust PCA [11, 37, 41], and low-rank matrix factorization [3, 39], have been directly adopted to denoise the HSI. However, these methods only explore the spectral prior of the HSI and ignore the spatial prior. Consequently, many conventional spatial regularizers, such as total variation [22] and low-rank tensor regularization [25, 30], have been combined with the spectral low-rank property to exploit the spatial prior of the HSI. A further remedy is a two-stage method that combines a spatial regularizer with the spectral low-rank property: first map the original HSI into a low-dimensional spectral subspace, then denoise the mapped image via existing spatial denoising methods, e.g., wavelets [10, 29], BM3D [47], or HOSVD [46]. These two-stage methods provide a new way to denoise the HSI in the transformed spectral space and are very fast. However, they fail to combine the best of both worlds, and the extracted subspace remains corrupted by noise.
In this section, we propose a unified HSI denoising paradigm that integrates the spatial non-local similarity and the global spectral low-rank property. We first learn a low-dimensional projection and the related reduced image from the noisy HSI. Then the reduced image and the projection are updated by spatial non-local denoising and iterative regularization, respectively. An overview of the proposed paradigm is given in Figure 1.
Assume the clean HSI $\mathcal{X} \in \mathbb{R}^{M \times N \times B}$ is corrupted by additive Gaussian noise $\mathcal{E}$ (with zero mean and variance $\sigma^2$); then the noisy HSI $\mathcal{Y}$ is generated by

$$\mathcal{Y} = \mathcal{X} + \mathcal{E}. \tag{1}$$
First, to capture the spectral low-rank property in Section 2.2, we are motivated to use a low-rank representation of the clean HSI $\mathcal{X}$, i.e. $\mathcal{X} = \mathcal{M} \times_3 \mathbf{A}$, where $\mathbf{A} \in \mathbb{R}^{B \times k}$ ($k \ll B$) is a projection matrix capturing the common subspace of the different spectral bands, and $\mathcal{M} \in \mathbb{R}^{M \times N \times k}$ is the reduced image. Second, to utilize the spatial low-rank property, we add a non-local low-rank regularizer $\|\cdot\|_{\mathrm{NL}}$ on the reduced image $\mathcal{M}$. As a result, the proposed non-local meets global (NGmeet) denoising paradigm is formulated as
$$\min_{\mathbf{A},\,\mathcal{M}} \ \frac{1}{2}\,\|\mathcal{Y} - \mathcal{M} \times_3 \mathbf{A}\|_F^2 + \lambda\,\|\mathcal{M}\|_{\mathrm{NL}}, \quad \text{s.t.} \ \mathbf{A}^\top \mathbf{A} = \mathbf{I}, \tag{2}$$
where $\lambda$ controls the contribution of the spatial non-local regularization, the projection matrix $\mathbf{A}$ is required to be orthogonal, and the clean HSI is recovered as $\hat{\mathcal{X}} = \mathcal{M} \times_3 \mathbf{A}$.
The objective (2) is very hard to optimize, due to both the orthogonal constraint on $\mathbf{A}$ and the complex regularization on $\mathcal{M}$. An algorithm based on alternating minimization that approximately solves the objective is proposed in Section 3.2.

The orthogonal constraint is very important here. First, it encourages the representations held in $\mathcal{M}$ to be more distinguishable from one another. This helps keep noise out of $\mathcal{M}$ and further allows a closed-form solution for computing $\mathbf{A}$ (Section 3.2.1). Besides, it preserves the distribution of the noise, which allows us to estimate the remaining noise level in the reduced image and to reuse state-of-the-art Gaussian-based non-local methods for spatial denoising (Section 3.2.2). However, before going into the optimization details, we first look into (2) to see why the proposed method can beat previous spectral low-rank methods [10, 47].
Recall that, in (2), the first term exploits the spectral low-rank property and decomposes the noisy $\mathcal{Y}$ into the coarse spectral low-rank projection $\mathbf{A}$ and the reduced image $\mathcal{M}$. Here, both $\mathbf{A}$ and $\mathcal{M}$ have physical meaning in the field of remote sensing [2]. Specifically, the $i$-th column of $\mathbf{A}$, denoted $\mathbf{a}_i$, is regarded as the $i$-th signature (known as an endmember) of the HSI, and the corresponding coefficient image $\mathcal{M}(:,:,i)$ is regarded as its abundance map.
Previous methods are mostly two-stage: they do not iteratively refine the projection matrix they find, e.g. FastHyDe [45]. In contrast, we model the spatial and spectral low-rank properties simultaneously, which enables iterative refinement of the projection matrix $\mathbf{A}$. To demonstrate the necessity of this refinement, we calculated the projection and reduced image from the noisy WDC image with noise variance 50; the references are computed from the original clean WDC image. Figure 2 compares signatures and the corresponding coefficient images before and after our refinement. It can be observed that the projection atoms and reduced image obtained by the spectral denoising method still suffer from noise, while the proposed method produces much cleaner signatures and coefficient images.
As discussed in Section 3.1, the objective (2) is very hard to optimize. In this section, we use alternating minimization (Algorithm 1). $\mathcal{Y}^{(t)}$ and $\hat{\mathcal{X}}^{(t)}$ stand for the input noisy image and output denoised image of the $t$-th iteration, respectively. As will be shown in the sequel, Algorithm 1 finds a closed-form solution for $\mathbf{A}$ (step 3) and reuses a state-of-the-art spatial denoising method for computing $\mathcal{M}$ (steps 4-6), which together make the algorithm very efficient. Besides, as $\mathbf{A}$ is refined during the iterations, iterative regularization [14] is adopted to boost the denoising performance (step 7).
In this stage, we identify the projection matrix $\mathbf{A}$ given $\mathcal{M}$ and $\mathcal{Y}^{(t)}$ from (2), which leads to
$$\min_{\mathbf{A}^\top \mathbf{A} = \mathbf{I}} \ \frac{1}{2}\,\|\mathcal{Y}^{(t)} - \mathcal{M} \times_3 \mathbf{A}\|_F^2 + \lambda\,\|\mathcal{M}\|_{\mathrm{NL}}. \tag{3}$$
However, this problem does not admit a simple closed-form solution. Instead, since $\mathcal{Y}^{(t)}$ is obtained from iterative regularization, its noise level has been decreased. Thus, we propose to relax (3) as
$$\min_{\mathbf{A}^\top \mathbf{A} = \mathbf{I},\ \mathcal{M}} \ \|\mathcal{Y}^{(t)} - \mathcal{M} \times_3 \mathbf{A}\|_F^2, \tag{4}$$
which has a closed-form solution (Proposition 3.1). Thus, only an SVD of the mode-3 unfolding of $\mathcal{Y}^{(t)}$ is required, which can be computed efficiently.
Proposition 3.1. Let $\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$ be the rank-$k$ SVD of $\mathbf{Y}^{(t)}_{(3)}$. The solution to (4) is given in closed form by $\mathbf{A} = \mathbf{U}$ and $\mathbf{M}_{(3)} = \boldsymbol{\Sigma}\mathbf{V}^\top$.
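As a numerical sanity check of this closed form (a NumPy sketch under our own reshaping convention; not the authors' code):

```python
import numpy as np

def learn_projection(Y, k):
    """Stage A: rank-k truncated SVD of the mode-3 unfolding (bands x pixels).
    Returns the orthogonal projection A (B x k) and the reduced image M."""
    m, n, B = Y.shape
    Y3 = Y.reshape(m * n, B).T                  # mode-3 unfolding, B x (m*n)
    U, s, Vt = np.linalg.svd(Y3, full_matrices=False)
    A = U[:, :k]                                # A = U (orthonormal columns)
    M3 = s[:k, None] * Vt[:k]                   # M_(3) = Sigma V^T
    return A, M3.T.reshape(m, n, k)

Y = np.random.rand(8, 8, 10)
A, M = learn_projection(Y, 3)
X_hat = (M.reshape(-1, 3) @ A.T).reshape(8, 8, 10)   # M x_3 A
```

By the Eckart-Young theorem, the reconstruction $\mathcal{M} \times_3 \mathbf{A}$ is the best rank-$k$ approximation of the unfolded image.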
Note that $\mathbf{A}$ is now fixed from Section 3.2.1. Using $\mathcal{X} = \mathcal{M} \times_3 \mathbf{A}$ in (2), the objective in this stage becomes:
$$\min_{\mathcal{M}} \ \frac{1}{2}\,\|\mathcal{Y}^{(t)} \times_3 \mathbf{A}^\top - \mathcal{M}\|_F^2 + \lambda\,\|\mathcal{M}\|_{\mathrm{NL}}, \tag{5}$$
where $\|\cdot\|_{\mathrm{NL}}$ is a non-local denoising regularizer. Formulation (5) appears in many denoising models, e.g. WNNM [19]. Specifically, to solve it we first group similar patches, then denoise each patch-group tensor, and finally assemble the estimated $\mathcal{M}$.
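For intuition, one WNNM-style shrinkage step on a single patch-group matrix can be sketched as follows (a simplified version of the reweighting in [19]; the constant `C` and the one-pass weight estimate are illustrative assumptions, not the exact algorithm):

```python
import numpy as np

def wnnm_step(G, sigma, C=2.0 * np.sqrt(2), eps=1e-16):
    """One simplified WNNM shrinkage step on a patch-group matrix G.
    Weights are inversely proportional to the estimated signal singular
    values, so large (signal) components are shrunk less than noise."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    n = G.shape[1]                                   # patches in the group
    s_sig = np.sqrt(np.maximum(s**2 - n * sigma**2, 0.0))  # signal estimate
    w = C * np.sqrt(n) * sigma**2 / (s_sig + eps)    # inverse-signal weights
    return U @ (np.maximum(s - w, 0.0)[:, None] * Vt)
```

Small (noise-dominated) singular values receive huge weights and are zeroed, while large (signal) singular values are barely touched.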
However, all these models assume the noise on $\mathcal{M}$ follows a univariate Gaussian distribution; if this assumption fails, the resulting performance can deteriorate significantly. Here, we have the following proposition.

Proposition 3.2. Assume the noisy HSI is generated by (1). Then the noise on the reduced image, $\mathcal{N} = \mathcal{E} \times_3 \mathbf{A}^\top$, still follows a Gaussian distribution with zero mean and variance $\sigma^2$.

Therefore, the noise distribution is preserved from $\mathcal{Y}$ to the reduced image, which enables us to use existing spatial denoising methods. In this paper, we use WNNM [19] to denoise each patch-group tensor, as it is widely used and gives state-of-the-art denoising performance.
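This noise-preservation property is easy to verify numerically: projecting i.i.d. Gaussian noise through a matrix with orthonormal columns leaves it zero-mean with the same variance (a NumPy check of our own):

```python
import numpy as np

rng = np.random.default_rng(0)
B, k, npix = 100, 10, 200_000
sigma = 2.0

# random B x k matrix with orthonormal columns (A^T A = I)
A, _ = np.linalg.qr(rng.standard_normal((B, k)))
E = rng.normal(0.0, sigma, size=(B, npix))   # noise on the full HSI
N = A.T @ E                                   # noise on the reduced image

print(N.mean())   # approximately 0
print(N.std())    # approximately sigma
```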
Finally, to use WNNM, we need to estimate the noise level of $\mathcal{Y}^{(t)}$, which changes during the iterations. From Proposition 3.2, the noise level of the reduced image is the same as that of $\mathcal{Y}^{(t)}$, so we propose to estimate it via
$$\hat{\sigma}^{(t)} = \gamma\,\sqrt{\Big|\sigma^2 - \mathrm{mean}\big((\mathcal{Y} - \hat{\mathcal{X}}^{(t-1)})^2\big)\Big|}, \tag{6}$$
where $\gamma$ is a scaling factor controlling the re-estimation of the noise variance, and $\mathrm{mean}(\cdot)$ averages over the tensor elements. The denoised group tensors can be directly used to reconstruct the denoised reduced image $\hat{\mathcal{M}}$. The output denoised image of the $t$-th iteration is $\hat{\mathcal{X}}^{(t)} = \hat{\mathcal{M}} \times_3 \mathbf{A}$.
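In code, such a re-estimation rule can be sketched as follows (our reading of the iterative-regularization rule from [14]; the exact expression is an assumption, with γ as the scaling factor):

```python
import numpy as np

def estimate_noise_level(Y, X_hat, sigma, gamma=0.5):
    """Re-estimate the noise std for the next iteration (a sketch of a
    rule in the spirit of (6) / [14]; the exact form is our assumption)."""
    residual = np.mean((Y - X_hat) ** 2)      # mean over all tensor elements
    return gamma * np.sqrt(abs(sigma ** 2 - residual))
```

If the denoised estimate equals the input (no residual), the estimate reduces to `gamma * sigma`.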
Iterative regularization has been widely used to boost denoising performance [9, 14, 19, 36]. Here we also introduce it into our model (Algorithm 1) to refine the noisy projection $\mathbf{A}$. As shown in (4), the projection is significantly influenced by the noise intensity of the input noisy image. Hence we update the next input noisy image as

$$\mathcal{Y}^{(t+1)} = \mu\,\mathcal{Y} + (1 - \mu)\,\hat{\mathcal{X}}^{(t)},$$

where $\mu$ trades off the denoised image $\hat{\mathcal{X}}^{(t)}$ and the original noisy image $\mathcal{Y}$. The estimation of $\mathbf{A}$ benefits from the lower noise variance of this input.
Besides, the spectral rank $k$ is also updated across iterations. We initialize $k$ by HySime [1]. When the input image is corrupted by heavy noise, the estimated $k$ will be small. Fortunately, the larger singular values obtained from the noisy image are less contaminated by the noise, which helps keep noise out of the reduced image. Across the iterations, we increase $k$ by

$$k^{(t+1)} = k^{(t)} + \delta, \tag{7}$$

where $\delta$ is a constant. Therefore, $k$ can capture more useful information as the iterations proceed.
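Putting the pieces together, the alternating scheme can be outlined as follows (an illustrative Python sketch, not the authors' Matlab implementation; `spatial_denoise` is a placeholder for the WNNM-based non-local step, and the update rules follow our reading of the text):

```python
import numpy as np

def spatial_denoise(M, sigma):
    """Placeholder for the WNNM-based non-local denoising of M."""
    return M

def ngmeet(Y, sigma, k0, n_iter=3, mu=0.1, gamma=0.5, delta=1):
    """Sketch of the NGmeet alternating scheme on a noisy HSI Y (m, n, B)."""
    m, n, B = Y.shape
    Y_t, k, sigma_t = Y.copy(), k0, sigma
    for t in range(n_iter):
        # Stage A: projection A and reduced image M via rank-k SVD of Y_(3)
        U, s, Vt = np.linalg.svd(Y_t.reshape(m * n, B).T, full_matrices=False)
        A = U[:, :k]
        M = (s[:k, None] * Vt[:k]).T.reshape(m, n, k)
        # Stage B: non-local denoising of the reduced image
        M = spatial_denoise(M, sigma_t)
        X_hat = (M.reshape(m * n, k) @ A.T).reshape(m, n, B)
        # Iterative regularization and parameter updates
        Y_t = mu * Y + (1 - mu) * X_hat
        sigma_t = gamma * np.sqrt(abs(sigma**2 - np.mean((Y - X_hat)**2)))
        k = min(k + delta, B)              # rank update, cf. (7)
    return X_hat

X_hat = ngmeet(np.random.rand(8, 8, 10), sigma=0.1, k0=3)
```

The default values of `mu`, `gamma`, and `delta` are illustrative; the paper sets them following [9] and [14].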
| | stage A | stage B |
|---|---|---|
| NGmeet | | |
| LLRT | — | |
| KBR | — | |
Following Algorithm 1, the main per-iteration time complexity consists of the SVD in stage A and the non-local low-rank denoising of each patch group in stage B. Table 1 compares the time complexity of NGmeet with other non-local HSI denoising methods; LLRT and KBR need only stage B to complete the denoising. As can be seen, the proposed NGmeet pays an additional cost in stage A, but is substantially faster in stage B, since the non-local step operates on only $k \ll B$ bands.
(LRTA, LRTV, MTSNMF, NAILRMA, PARAFAC, and FastHyDe are spectral low-rank methods; TDL, KBR, and LLRT are spatial low-rank methods.)

| Image | σ | Index | LRTA | LRTV | MTSNMF | NAILRMA | PARAFAC | FastHyDe | TDL | KBR | LLRT | NGmeet |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CAVE | 10 | PSNR | 44.12 | 41.47 | 44.27 | 28.51 | 38.01 | 46.72 | 45.58 | 46.20 | 47.14 | 47.87 |
| | | SSIM | 0.969 | 0.949 | 0.972 | 0.941 | 0.921 | 0.985 | 0.983 | 0.980 | 0.989 | 0.990 |
| | | SAM | 7.90 | 16.54 | 8.49 | 14.52 | 13.86 | 6.62 | 6.07 | 8.94 | 4.65 | 4.72 |
| | 30 | PSNR | 38.68 | 35.32 | 37.18 | 35.11 | 37.58 | 41.21 | 39.67 | 41.52 | 42.53 | 43.11 |
| | | SSIM | 0.913 | 0.818 | 0.855 | 0.775 | 0.888 | 0.945 | 0.942 | 0.942 | 0.974 | 0.972 |
| | | SAM | 12.86 | 33.32 | 14.97 | 32.43 | 17.37 | 14.06 | 12.54 | 19.43 | 8.23 | 7.46 |
| | 50 | PSNR | 35.49 | 32.27 | 33.40 | 32.11 | 30.06 | 38.05 | 36.51 | 39.41 | 40.09 | 40.45 |
| | | SSIM | 0.858 | 0.719 | 0.730 | 0.638 | 0.571 | 0.889 | 0.888 | 0.922 | 0.950 | 0.951 |
| | | SAM | 16.53 | 43.65 | 19.06 | 22.85 | 38.35 | 20.08 | 18.23 | 21.31 | 11.48 | 9.80 |
| | 100 | PSNR | 31.21 | 27.97 | 27.96 | 27.90 | 24.29 | 33.41 | 31.90 | 33.78 | 36.25 | 37.21 |
| | | SSIM | 0.735 | 0.529 | 0.493 | 0.453 | 0.256 | 0.746 | 0.734 | 0.851 | 0.910 | 0.927 |
| | | SAM | 22.67 | 54.85 | 26.33 | 55.66 | 51.83 | 30.72 | 28.51 | 26.41 | 18.17 | 16.23 |
| PaC | 10 | PSNR | 38.49 | 38.71 | 40.64 | 41.46 | 33.39 | 42.22 | 41.46 | 40.09 | 41.95 | 43.17 |
| | | SSIM | 0.975 | 0.979 | 0.988 | 0.987 | 0.866 | 0.990 | 0.988 | 0.984 | 0.989 | 0.992 |
| | | SAM | 4.90 | 3.29 | 2.76 | 3.46 | 9.05 | 2.99 | 3.06 | 2.86 | 2.75 | 2.61 |
| | 30 | PSNR | 32.07 | 32.76 | 35.45 | 34.17 | 30.92 | 35.98 | 34.43 | 34.39 | 35.04 | 36.97 |
| | | SSIM | 0.908 | 0.920 | 0.958 | 0.941 | 0.845 | 0.962 | 0.949 | 0.947 | 0.957 | 0.971 |
| | | SAM | 7.88 | 5.76 | 4.17 | 6.54 | 9.28 | 5.09 | 5.11 | 4.28 | 4.86 | 4.30 |
| | 50 | PSNR | 29.11 | 29.45 | 32.51 | 30.71 | 29.24 | 33.32 | 31.31 | 31.05 | 32.00 | 34.29 |
| | | SSIM | 0.836 | 0.850 | 0.921 | 0.886 | 0.846 | 0.936 | 0.904 | 0.892 | 0.918 | 0.948 |
| | | SAM | 9.20 | 8.60 | 5.50 | 8.83 | 11.40 | 6.55 | 6.14 | 5.40 | 6.55 | 5.18 |
| | 100 | PSNR | 25.13 | 26.22 | 28.17 | 25.76 | 23.68 | 29.90 | 27.49 | 27.80 | 28.63 | 30.61 |
| | | SSIM | 0.655 | 0.729 | 0.808 | 0.728 | 0.598 | 0.873 | 0.789 | 0.793 | 0.833 | 0.890 |
| | | SAM | 10.17 | 12.76 | 8.40 | 12.93 | 20.22 | 8.68 | 7.67 | 6.95 | 7.68 | 6.86 |
| WDC | 10 | PSNR | 38.94 | 36.64 | 37.26 | 42.57 | 32.38 | 43.06 | 41.83 | 40.58 | 41.89 | 43.72 |
| | | SSIM | 0.974 | 0.968 | 0.975 | 0.989 | 0.914 | 0.991 | 0.989 | 0.986 | 0.990 | 0.993 |
| | | SAM | 5.602 | 4.653 | 4.429 | 3.637 | 8.087 | 3.070 | 3.680 | 3.090 | 3.700 | 2.830 |
| | 30 | PSNR | 32.91 | 32.42 | 34.65 | 35.87 | 31.56 | 37.39 | 34.84 | 34.75 | 36.30 | 37.90 |
| | | SSIM | 0.917 | 0.909 | 0.953 | 0.958 | 0.898 | 0.971 | 0.953 | 0.951 | 0.967 | 0.975 |
| | | SAM | 8.331 | 5.991 | 5.557 | 7.011 | 9.009 | 5.140 | 6.400 | 5.240 | 5.460 | 4.640 |
| | 50 | PSNR | 30.35 | 30.12 | 32.49 | 32.56 | 29.49 | 34.61 | 31.89 | 31.61 | 33.48 | 35.14 |
| | | SSIM | 0.864 | 0.849 | 0.922 | 0.919 | 0.837 | 0.948 | 0.910 | 0.900 | 0.938 | 0.955 |
| | | SAM | 9.43 | 7.09 | 6.71 | 9.22 | 13.64 | 6.57 | 7.94 | 6.63 | 6.43 | 5.83 |
| | 100 | PSNR | 26.84 | 27.23 | 28.94 | 27.85 | 23.01 | 31.05 | 27.66 | 28.23 | 29.88 | 31.45 |
| | | SSIM | 0.734 | 0.740 | 0.830 | 0.805 | 0.550 | 0.894 | 0.781 | 0.789 | 0.861 | 0.903 |
| | | SAM | 11.33 | 9.47 | 9.44 | 13.27 | 25.46 | 8.91 | 10.15 | 9.12 | 7.99 | 7.86 |
In this section, we present simulated and real-data experimental results for the different methods, together with the computational efficiency and parameter analysis of the proposed NGmeet. The experiments are programmed in Matlab on a machine with an Intel Core i7-7820HK CPU and 64 GB of memory.
Setup. One multispectral image (MSI), CAVE (http://www1.cs.columbia.edu/CAVE/databases/), and two HSIs, PaC (http://www.ehu.eus/ccwintco/index.php/) and WDC (https://engineering.purdue.edu/~biehl/MultiSpec/), are used (Table 3). These images have been widely used for simulated studies [9, 21, 28, 36, 47]. Following the settings in [9, 28], additive Gaussian noise with noise level $\sigma$ varying from 10 to 100 is added to the MSIs/HSIs. Before denoising, the images are normalized to [0, 255].
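The degradation step of this setup can be reproduced with a few lines (our own sketch; σ is on the [0, 255] scale):

```python
import numpy as np

def simulate_noisy_hsi(X, sigma, seed=0):
    """Normalize a clean HSI to [0, 255], then add zero-mean Gaussian noise."""
    rng = np.random.default_rng(seed)
    X = 255.0 * (X - X.min()) / (X.max() - X.min())
    return X, X + rng.normal(0.0, sigma, size=X.shape)

clean, noisy = simulate_noisy_hsi(np.random.rand(32, 32, 8), sigma=30)
```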
| | CAVE | PaC | WDC |
|---|---|---|---|
| image size | 512×512 | 256×256 | 256×256 |
| number of bands | 31 | 89 | 192 |
The following methods are used for comparison: spectral low-rank methods, i.e. LRTA [30] (https://www.sandia.gov/tgkolda/TensorToolbox/), LRTV [22] (https://sites.google.com/site/rshewei/home), MTSNMF [39] (http://www.cs.zju.edu.cn/people/qianyt/), NAILRMA [21], PARAFAC [26], and FastHyDe [47] (http://www.lx.it.pt/~bioucas/); spatial low-rank methods, i.e. TDL [28], KBR [36] (http://gr.xjtu.edu.cn/web/dymeng/), and LLRT [9] (http://www.escience.cn/people/changyi/); and finally NGmeet (Algorithm 1), which combines the best of both fields. Hyper-parameters of all compared methods are set based on the authors' codes or the suggestions in their papers. The spectral dimension $k$ is the most important parameter of our method; it is initialized by HySime [1] and updated via (7). The parameter $\lambda$ controls the contribution of the non-local regularization, and $\gamma$ is a scaling factor controlling the re-estimation of the noise variance [14]. We set these parameters empirically, following [9], and keep them fixed throughout the experiments.
To thoroughly evaluate the different methods, the peak signal-to-noise ratio (PSNR), structural similarity (SSIM) [34], and spectral angle mapper (SAM) [9, 22] indices are adopted for quantitative assessment. The SAM index measures the mean spectral angle between the original HSI and the restored HSI; a lower SAM means higher similarity between the original and denoised images.
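For reference, the SAM index can be computed as the per-pixel angle between clean and restored spectra, averaged over all pixels and reported in degrees (a NumPy sketch of the standard definition):

```python
import numpy as np

def sam_index(X, X_hat, eps=1e-12):
    """Mean spectral angle (degrees) between two (m, n, B) images."""
    x = X.reshape(-1, X.shape[-1])
    y = X_hat.reshape(-1, X_hat.shape[-1])
    cos = np.sum(x * y, axis=1) / (
        np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Note that SAM is invariant to a per-pixel scaling of the spectrum, so it isolates spectral distortion from intensity error.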
Quantitative comparison. For each noise level, we report the evaluation values for the images from each dataset in Table 2. It can easily be observed that the proposed NGmeet achieves the best results in almost all cases. Another interesting observation is that on the MSI (CAVE), the non-local based method LLRT achieves better results than FastHyDe, the best of the spectral low-rank methods, whereas the opposite holds on the hyperspectral images. This phenomenon confirms the advantage of the non-local low-rank property for MSI processing and of the spectral low-rank property for HSI processing.
Visual comparison. To further demonstrate the effectiveness of the proposed method, Figure 3 shows color images of CAVE-toy (composed of bands 31, 11, and 6 [21]) before and after denoising; results for PaC and WDC can be found in the supplementary material. The PSNR value and computational time of each method are given under its denoised image. It can be observed that FastHyDe, LLRT, and NGmeet have a clear advantage over the remaining methods. In the enlarged area, the results of FastHyDe and LLRT show some artifacts, so our NGmeet produces the best visual quality.
| Time (seconds) | KBR stage B | LLRT stage B | NGmeet stage A | NGmeet stage B | NGmeet total |
|---|---|---|---|---|---|
| CAVE | 4330 | 1212 | 3 | 201 | 204 |
| PaC | 828 | 488 | 2 | 37 | 39 |
| WDC | 3570 | 1573 | 3 | 45 | 48 |
Computational efficiency. Here we illustrate that, in our denoising paradigm, the cost of the non-local denoising procedure is decoupled from the large spectral dimension. Compared to previous non-local denoising methods, i.e. KBR [36] and LLRT [9], the proposed NGmeet includes the additional stage A. Table 4 reports the computational time of the different stages of the three methods. From Tables 1 and 4, we conclude that NGmeet spends little time projecting the original HSI onto a reduced image (stage A), but gains a large advantage in stage B, which includes the group matching and non-local denoising steps.
Figure 4 displays the computational time and SSIM values of the proposed NGmeet, KBR [36], and LLRT [9] as the number of spectral bands increases. Although the performance of KBR and LLRT improves with more bands, their computational time grows linearly. Our method achieves the best performance while its computational time remains nearly unchanged as the number of bands increases.
Convergence. To show the convergence of the proposed NGmeet, Figure 5 presents the PSNR values over the iterations on the WDC dataset. It can be observed that our method converges quickly to a stable PSNR value at each noise level.
Setup. The AVIRIS Indian Pines HSI (https://engineering.purdue.edu/~biehl/MultiSpec/) and the HYDICE Urban image (http://www.tec.army.mil/hypercube) are adopted for the real experiments (Table 5). The noisy HSIs are also scaled to the range [0, 255], and the parameters of the proposed method are set the same as in the simulated experiments. In addition, the multiple-regression-theory-based approach of [1] is adopted to estimate the initial noise variance of each HSI band.
| | Urban | Indian Pines |
|---|---|---|
| image size | 200×200 | 145×145 |
| number of bands | 210 | 220 |
Visual comparison. Since clean reference images are not available, we simply present the real Indian Pines and Urban images before and after denoising in Figures 6 and 7. It can be observed that the results produced by the proposed NGmeet remove the noise while preserving the spectral details. LRTV produces the smoothest results, but the color of its denoised result changes considerably, indicating a loss of spectral information. The denoised results of FastHyDe and LLRT still contain stripes, as shown in Figure 6. In summary, although the proposed NGmeet is designed under a Gaussian noise assumption, it also achieves the best results on real datasets.
The spectral dimension $k$ is the key parameter for integrating the spatial and spectral information. Figure 8 presents the PSNR values achieved by NGmeet with different initializations of $k$. The PaC image was chosen as the test image, with the noise level varying. For each noise level, $k$ is initialized by HySime [1]. The results confirm that this initialization of $k$ is reliable.
Table 6 presents the influence of different values of the increment constant in (7), with $k$ initialized by HySime [1]. It can be observed that the updating strategy of $k$ improves the performance and that the selection of the increment is robust.
| PSNR (dB) | | | | |
|---|---|---|---|---|
| | 43.09 | 36.49 | 33.54 | 29.91 |
| | 43.52 | 36.96 | 34.23 | 30.56 |
| | 43.43 | 37.02 | 34.21 | 30.83 |
| | 43.42 | 37.11 | 34.42 | 30.45 |
In this paper, we provide a new perspective that integrates the spatial non-local similarity and the global spectral low-rank property, explored via low-dimensional projection and reduced-image denoising, respectively. We have also proposed an alternating minimization method with an iterative strategy to solve the optimization of the proposed NGmeet method. The superiority of our method is confirmed by the simulated and real-data experiments. In our unified spatial-spectral paradigm, the use of WNNM [19]
is not a must. In the future, we plan to adopt convolutional neural networks [7, 42] to explore the non-local similarity.

Proof of Proposition 3.1. Note that the objective can be expressed as

$$\|\mathcal{Y}^{(t)} - \mathcal{M} \times_3 \mathbf{A}\|_F^2 = \|\mathbf{Y}^{(t)}_{(3)} - \mathbf{A}\mathbf{M}_{(3)}\|_F^2,$$
which is equivalent to finding the best rank-$k$ approximation of $\mathbf{Y}^{(t)}_{(3)}$. Thus, letting the rank-$k$ SVD of $\mathbf{Y}^{(t)}_{(3)}$ be $\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$, the closed-form solution of (4) is given by $\mathbf{A} = \mathbf{U}$ and $\mathbf{M}_{(3)} = \boldsymbol{\Sigma}\mathbf{V}^\top$. ∎
Proof of Proposition 3.2. Since $\mathcal{Y} = \mathcal{X} + \mathcal{E}$, we have

$$\mathcal{Y} \times_3 \mathbf{A}^\top = \mathcal{X} \times_3 \mathbf{A}^\top + \mathcal{E} \times_3 \mathbf{A}^\top, \tag{8}$$

where the noise on the reduced image is given by $\mathcal{N} = \mathcal{E} \times_3 \mathbf{A}^\top$. Note that

$$\mathbb{E}\big[\mathbf{N}_{(3)}\big] = \mathbf{A}^\top\,\mathbb{E}\big[\mathbf{E}_{(3)}\big] = \mathbf{0}. \tag{9}$$

Thus, the mean of the noise is zero. Let $\mathbf{e}$ be a column of $\mathbf{E}_{(3)}$; then the corresponding column of $\mathbf{N}_{(3)}$ can be expressed as

$$\mathbf{n} = \mathbf{A}^\top \mathbf{e}. \tag{10}$$

Following the definition of variance, we have

$$\mathrm{Cov}(\mathbf{n}) = \mathbf{A}^\top\,\mathrm{Cov}(\mathbf{e})\,\mathbf{A} = \sigma^2\,\mathbf{A}^\top\mathbf{A} = \sigma^2\,\mathbf{I}.$$

Thus, we obtain the proposition. ∎
Very recently, deep learning based HSI denoising methods, i.e. HSID-CNN [40] and HSI-DeNet [7], have been proposed. We did not compare the proposed NGmeet with these two methods, since the related codes are not publicly available. However, from a cross-comparison of reported results between our method and the two deep learning based methods, we found that our method performs much better than HSID-CNN [40] on the WDC data with Gaussian noise. HSI-DeNet [7] did not report Gaussian noise removal on the CAVE dataset; however, taking LLRT as a baseline, the improvement of our NGmeet over LLRT is somewhat larger than that of HSI-DeNet [7]. These observations support the validity of our method relative to state-of-the-art deep learning based methods.