Non-local Meets Global: An Integrated Paradigm for Hyperspectral Denoising

12/11/2018 ∙ by Wei He, et al. ∙ The Hong Kong University of Science and Technology

Non-local low-rank tensor approximation has been developed as a state-of-the-art method for hyperspectral image (HSI) denoising. Unfortunately, as the number of spectral bands grows, the running time of these methods increases significantly while their denoising performance benefits little. In this paper, we claim that an HSI underlies a global spectral low-rank subspace, and that the spectral subspace of each group of full-band patches should lie within this global subspace. This motivates us to propose a unified spatial-spectral paradigm for HSI denoising. As the new model is hard to optimize, we further propose an efficient algorithm motivated by alternating minimization. This is done by first learning a low-dimensional projection and the related reduced image from the noisy HSI. Then, non-local low-rank denoising and iterative regularization are developed to refine the reduced image and the projection, respectively. Finally, experiments on both synthetic and real datasets demonstrate its superiority against other state-of-the-art HSI denoising methods.



Code Repositories

NGMeet

MATLAB code for: "Non-local Meets Global: An Integrated Paradigm for Hyperspectral Denoising." arXiv, Dec. 2018.

1 Introduction

Recent decades have witnessed the development of hyperspectral imaging techniques [4, 38, 18]. A hyperspectral imaging system is able to cover the wavelength region from 0.4 to 2.5 μm at a nominal spectral resolution of 10 nm. With this wealth of spectral information, hyperspectral images (HSIs) have a high diagnostic ability to distinguish precise details even between similar materials [2, 31], offering advantages in applications such as remote sensing [32, 33], medical diagnosis [20], face recognition [27, 33], quality control [17], and so on. Due to instrumental noise, HSIs are often corrupted by Gaussian noise, which significantly influences subsequent applications. As a preprocessing step, HSI denoising is therefore fundamental prior to HSI exploitation [6, 41, 43].

For HSI denoising, spatial non-local similarity and global spectral correlation are the two most important properties. Spatial non-local similarity suggests that similar patches inside an HSI can be grouped and denoised together. The related methods [9, 12, 13, 28, 36, 45] denoise HSIs via group matching of full-band patches (FBPs, stacked from patches at the same spatial location of the HSI over all bands) and low-rank denoising of each non-local FBP group (NLFBPG). These methods have achieved state-of-the-art performance, but they still face a crucial problem. For HSIs, a higher spectral dimension means a higher discriminative ability [2], so more spectral bands are desired. As the number of bands increases, the size of each NLFBPG also grows, leading to significantly more computation in the subsequent low-rank matrix/tensor approximations.

HSIs also have strong spectral correlation, which is modeled as a low-rank property [1, 5, 41] and has been widely adopted for HSI denoising. However, spectral low-rank regularization alone cannot remove the noise efficiently. One promising improvement is to project the original noisy HSI onto a low-dimensional spectral subspace and denoise the projected HSI via spatial methods [10, 29, 46, 47]. Unfortunately, these two-stage methods are significantly influenced by the quality of the projection and the efficiency of the spatial denoising. All of them fail to capture a clean projection matrix, which leaves the restored HSI still noisy.

Figure 1: Flowchart of the proposed method. It includes three stages: A. spectral low-rank denoising, B. non-local low-rank denoising, and C. iterative regularization. Stage B consists of two steps: group matching and non-local low-rank approximation.

To alleviate the aforementioned problems, this paper introduces a unified HSI denoising paradigm that integrates spatial non-local similarity and the global spectral low-rank property simultaneously. We start from the observation that an HSI should underlie a low-dimensional spectral subspace, which has been widely accepted in hyperspectral imaging [16, 23], compressive sensing [3, 44], unmixing [2], and dimension reduction [1]. It follows that all NLFBPGs should also underlie a common low-dimensional spectral subspace. Thus, we first learn a global spectral low-rank projection, and subsequently exploit the spatial non-local similarity of the projected HSI. The computational cost of non-local processing in our paradigm remains almost the same as the number of spectral bands grows, and the global spectral low-rank property is also enhanced. The contributions are summarized as follows:

  • We introduce a unified paradigm to exploit the spatial non-local and global spectral low-rank properties simultaneously. We transfer the non-local denoising to the reduced image, improving computational efficiency as the number of spectral bands increases;

  • The resulting denoising model is hard to optimize, as it involves both a complex constraint and regularization. We further propose an efficient algorithm for optimization, inspired by alternating minimization;

  • Finally, the proposed method not only outperforms other state-of-the-art methods in simulated experiments, where Gaussian noise is added manually, but also achieves the most appealing recovered images on real datasets.

Notations. We follow the tensor notation in [24]: tensors are denoted by Euler script letters, e.g. $\mathcal{X}$, and matrices by boldface capital letters, e.g. $\mathbf{X}$. For an $N$-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, the mode-$k$ unfolding is denoted $\mathbf{X}_{(k)} \in \mathbb{R}^{I_k \times \prod_{j \neq k} I_j}$. We have $\mathcal{X} = \mathrm{fold}_k(\mathbf{X}_{(k)})$, in which $\mathrm{fold}_k$ is the inverse of the unfolding operator. The Frobenius norm of $\mathcal{X}$ is defined by $\|\mathcal{X}\|_F = (\sum_{i_1,\ldots,i_N} x_{i_1 \cdots i_N}^2)^{1/2}$. The mode-$k$ product of a tensor $\mathcal{X}$ and a matrix $\mathbf{A} \in \mathbb{R}^{J \times I_k}$ is defined as $\mathcal{Y} = \mathcal{X} \times_k \mathbf{A}$, where $\mathbf{Y}_{(k)} = \mathbf{A}\mathbf{X}_{(k)}$.
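These tensor operations can be sketched with NumPy as follows; the zero-based axis convention (mode $k$ maps to axis $k-1$) and the tensor sizes are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def unfold(X, k):
    """Mode-k unfolding: move axis k to the front, then flatten the rest."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def fold(M, k, shape):
    """Inverse of unfold: reshape and move the first axis back to position k."""
    full = [shape[k]] + [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape(full), 0, k)

def mode_product(X, A, k):
    """Mode-k product X x_k A, defined by (X x_k A)_(k) = A X_(k)."""
    shape = list(X.shape)
    shape[k] = A.shape[0]
    return fold(A @ unfold(X, k), k, shape)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))   # a 3-order tensor
A = rng.standard_normal((3, 6))      # maps mode 3 (size 6) down to size 3

assert np.allclose(fold(unfold(X, 2), 2, X.shape), X)  # fold inverts unfold
Y = mode_product(X, A, 2)            # mode-3 product (axis index 2)
assert Y.shape == (4, 5, 3)
assert np.allclose(unfold(Y, 2), A @ unfold(X, 2))     # definition check
```

The last assertion checks the defining identity $\mathbf{Y}_{(k)} = \mathbf{A}\mathbf{X}_{(k)}$ directly.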

2 Related work

Since denoising is an ill-posed problem, proper regularization based on HSI prior knowledge is necessary [15, 35]. Mainstream HSI denoising methods can be grouped into two categories: spatial non-local based methods and spectral low-rank based methods.

2.1 Spatial: Non-local similarity

HSIs exhibit strong spatial non-local similarity. After non-local low-rank modeling was first introduced to HSI denoising in [28], the pipeline of non-local methods became fixed: FBP grouping followed by low-rank tensor approximation. Almost all subsequent research focused on the low-rank tensor modeling of NLFBPGs, such as Tucker decomposition [28], sparsity-regularized Tucker decomposition [36], Laplacian scale mixture low-rank modeling [13], and weighted low-rank tensor recovery [8], to exploit the spatial non-local similarity and spectral low-rank property simultaneously. However, as the number of spectral bands increases, the computational burden also increases significantly, blocking the application of these methods to real HSIs with many bands.
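A minimal sketch of the FBP grouping step these methods share, assuming Euclidean distance and exhaustive search; the patch size, stride, and group size below are illustrative, not the settings of any cited method.

```python
import numpy as np

def group_full_band_patches(hsi, patch=6, stride=4, n_similar=8):
    """Group similar full-band patches (FBPs) of an HSI of shape (h, w, B).

    Each FBP stacks the patch at one spatial location across all B bands;
    similarity is squared Euclidean distance between flattened FBPs (an
    illustrative choice). Returns one (n_similar, patch*patch*B) group
    per reference patch.
    """
    h, w, B = hsi.shape
    coords = [(i, j) for i in range(0, h - patch + 1, stride)
                     for j in range(0, w - patch + 1, stride)]
    fbps = np.stack([hsi[i:i+patch, j:j+patch, :].ravel() for i, j in coords])
    groups = []
    for ref in fbps:
        d = np.sum((fbps - ref) ** 2, axis=1)
        idx = np.argsort(d)[:n_similar]   # K nearest FBPs (including itself)
        groups.append(fbps[idx])
    return groups

noisy = np.random.default_rng(1).standard_normal((32, 32, 10))
groups = group_full_band_patches(noisy)
assert groups[0].shape == (8, 6 * 6 * 10)
```

Each group is an $n_{\text{similar}} \times (\text{patch}^2 \cdot B)$ matrix, so the cost of the subsequent low-rank approximation grows directly with the band number $B$ — exactly the bottleneck discussed above.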

Chang et al. [9] claimed that the spectral low-rank property of NLFBPGs is weak and proposed a unidirectional low-rank tensor recovery to explore the non-local similarity. It saves much of the computational burden and achieves state-of-the-art performance in HSI denoising. This reflects the fact that previous non-local low-rank methods have not yet efficiently utilized the spectral low-rank property. How to balance the importance of spectral low-rankness and spatial non-local similarity remains an open problem.

2.2 Spectral: Global low-rank property

The global spectral low-rank property of HSIs has been widely accepted and applied [1, 5]. As pointed out in [1], the intrinsic dimension of the spectral subspace is far less than the spectral dimension of the original image. By vectorizing each band of the HSI and reshaping the original 3-D HSI into a 2-D matrix, various low-rank approximation methods, such as principal component analysis (PCA) [5], robust PCA [11, 37, 41], and low-rank matrix factorization [3, 39], have been directly adopted to denoise HSIs. However, these methods only explore the spectral prior of the HSI, ignoring the spatial prior. Consequently, many conventional spatial regularizers, such as total variation [22] and low-rank tensor regularization [25, 30], have been combined with the spectral low-rank property to explore the spatial prior of HSIs.

A remedy is a two-stage method combining a spatial regularizer and the spectral low-rank property. This is done by first mapping the original HSI into the low-dimensional spectral subspace, and then denoising the mapped image via existing spatial denoising methods, e.g., wavelets [10, 29], BM3D [47], and HOSVD [46]. These two-stage methods provide a new perspective of denoising the HSI in the transferred spectral space, which is very fast. However, they fail to combine the best of both worlds, and the extracted subspace is still corrupted by noise.

3 Approaches

In this section, we propose a unified HSI denoising paradigm that integrates spatial non-local similarity and the global spectral low-rank property. We first learn a low-dimensional projection and the related reduced image from the noisy HSI. Then the reduced image and the projection are updated by spatial non-local denoising and iterative regularization, respectively. An overview of the proposed paradigm is shown in Figure 1.

3.1 Unified spatial-spectral paradigm

Assume the clean HSI $\mathcal{X} \in \mathbb{R}^{h \times w \times B}$ is corrupted by additive Gaussian noise $\mathcal{N}$ (with zero mean and variance $\sigma^2$); then the noisy HSI $\mathcal{Y}$ is generated by

$$\mathcal{Y} = \mathcal{X} + \mathcal{N}. \qquad (1)$$

First, to capture the spectral low-rank property of Section 2.2, we are motivated to use a low-rank representation of the clean HSI $\mathcal{X}$, i.e. $\mathcal{X} = \mathcal{M} \times_3 \mathbf{A}$, where $\mathbf{A} \in \mathbb{R}^{B \times k}$ ($k \ll B$) is a projection matrix capturing the common subspace of the different spectral bands, and $\mathcal{M} \in \mathbb{R}^{h \times w \times k}$ is the reduced image. Second, to utilize the spatial low-rank property, we add a non-local low-rank regularizer $\Phi_{\mathrm{NL}}(\cdot)$ on the reduced image $\mathcal{M}$. As a result, the proposed non-local meets global (NGmeet) denoising paradigm is presented as

$$\min_{\mathbf{A}, \mathcal{M}} \ \tfrac{1}{2}\|\mathcal{Y} - \mathcal{M} \times_3 \mathbf{A}\|_F^2 + \lambda\, \Phi_{\mathrm{NL}}(\mathcal{M}), \quad \text{s.t.}\ \mathbf{A}^\top \mathbf{A} = \mathbf{I}, \qquad (2)$$

where $\lambda$ controls the contribution of the spatial non-local regularization, the projection matrix $\mathbf{A}$ is required to be orthogonal, and the clean HSI is recovered by $\hat{\mathcal{X}} = \hat{\mathcal{M}} \times_3 \hat{\mathbf{A}}$.

The objective in (2) is very hard to optimize, due to both the orthogonal constraint on the projection matrix and the complex regularization on the reduced image. An algorithm based on alternating minimization to approximately solve it is proposed in Section 3.2.

Remark 3.1.

The orthogonal constraint is very important here. First, it encourages the columns of the projection matrix to be more distinguishable from each other. This helps to keep noise out of the reduced image and further allows a closed-form solution for computing the projection (Section 3.2.1). Besides, it preserves the distribution of the noise, which allows us to estimate the remaining noise level in the reduced image and to reuse state-of-the-art Gaussian-based non-local methods for spatial denoising (Section 3.2.2).

However, before going into the optimization details, we first look into (2) to see why the proposed method can beat previous spectral low-rank methods [10, 47].

Figure 2: The first row displays a coefficient image and the absolute difference between the corresponding signature and the reference. The second row displays the refined coefficient image and the absolute difference between the refined signature and the reference. The test dataset is WDC with noise variance 50.

3.1.1 Necessity of iterative refinement

Recall that, in (2), the first term exploits the spectral low-rank property and decomposes the noisy HSI into the coarse spectral low-rank projection and the reduced image. Both have physical meaning in the field of remote sensing [2]. Specifically, the $i$-th column of the projection matrix is regarded as the $i$-th signature (known as an endmember) of the HSI, and the corresponding $i$-th band of the reduced image is regarded as its abundance map.

Previous methods are mostly two-stage; they do not iteratively refine the projection matrix they find, e.g. FastHyDe [45]. In contrast, we model the spatial and spectral low-rank properties simultaneously, which enables iterative refinement of the projection matrix. To demonstrate the necessity of iterative refinement, we calculated the projection and reduced image from noisy WDC with noise variance 50. The references are computed from the original clean WDC. Figure 2 compares the signatures and the corresponding coefficient images before and after our refinement. It can be observed that the projection atoms and reduced image obtained by the spectral denoising method still suffer from noise, while the proposed method produces much cleaner signatures and coefficient images.

3.2 Efficient optimization

As discussed in Section 3.1, the objective in (2) is very hard to optimize. In this section, we are motivated to use alternating minimization (Algorithm 1). $\mathcal{Y}^t$ and $\mathcal{X}^t$ stand for the input noisy image and output denoised image of the $t$-th iteration, respectively. As will be shown in the sequel, Algorithm 1 finds a closed-form solution for the projection (step 3) and reuses a state-of-the-art spatial denoising method for the reduced image (steps 4-6), which together make the algorithm very efficient. Besides, as the projection is refined during the iterations, iterative regularization [14] is adopted to boost the denoising performance (step 7).

0:  Input: noisy image $\mathcal{Y}$, noise variance $\sigma^2$
1:  $\mathcal{Y}^1 = \mathcal{Y}$; estimate $k$ using HySime [1];
2:  for $t = 1, \ldots, T$ do
3:     A). Spectral low-rank denoising: estimate the projection matrix $\mathbf{A}^t$ and reduced image $\mathcal{M}^t$ via rank-$k$ SVD on $\mathbf{Y}^t_{(3)}$;
4:     B). Non-local reduced-image denoising: -B.I) obtain the set of patch-group tensors of $\mathcal{M}^t$ via $K$-NN search for each reference patch;
5:     -B.II) denoise each patch-group tensor via low-rank approximation;
6:     -B.III) reconstruct the denoised groups into the reduced image $\hat{\mathcal{M}}^t$, and obtain the denoised HSI $\mathcal{X}^t = \hat{\mathcal{M}}^t \times_3 \mathbf{A}^t$;
7:     C). Iterative regularization: update $\mathcal{Y}^{t+1}$ via the rule in Section 3.2.3, and update $k$ via (7);
8:  end for
9:  return denoised image $\mathcal{X}^T$;
Algorithm 1 Non-local Meets Global (NGmeet)
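The alternating scheme of Algorithm 1 can be sketched in NumPy as below. This is a toy stand-in, not the paper's implementation: a simple per-band singular-value soft thresholding replaces the WNNM-based non-local step of stage B, and `k`, `delta`, `tau`, and the iteration count are illustrative choices.

```python
import numpy as np

def spectral_step(Y, k):
    """Stage A: rank-k SVD of the mode-3 unfolding, Y_(3) ~ A @ M_(3)."""
    h, w, B = Y.shape
    Y3 = Y.reshape(h * w, B).T                    # B x (h*w) unfolding
    U, s, Vt = np.linalg.svd(Y3, full_matrices=False)
    A = U[:, :k]                                  # orthogonal projection (B x k)
    M = (np.diag(s[:k]) @ Vt[:k]).T.reshape(h, w, k)  # reduced image
    return A, M

def simple_spatial_denoise(M, tau):
    """Stand-in for stage B: per-band soft thresholding of singular values
    (illustrative only; the paper uses non-local WNNM here)."""
    out = np.empty_like(M)
    for b in range(M.shape[2]):
        U, s, Vt = np.linalg.svd(M[:, :, b], full_matrices=False)
        out[:, :, b] = U @ np.diag(np.maximum(s - tau, 0)) @ Vt
    return out

def ngmeet_sketch(Y, k=3, delta=0.1, tau=0.2, iters=2):
    Yt = Y.copy()
    for _ in range(iters):
        A, M = spectral_step(Yt, k)               # stage A
        M = simple_spatial_denoise(M, tau)        # stage B (stand-in)
        X = (M.reshape(-1, k) @ A.T).reshape(Y.shape)  # X = M x_3 A
        Yt = X + delta * (Y - X)                  # stage C: iterative regularization
    return X

rng = np.random.default_rng(0)
clean = np.tile(rng.standard_normal((16, 16, 1)), (1, 1, 8))  # spectrally rank-1
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = ngmeet_sketch(noisy, k=1)
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

Even this crude spatial step improves on the noisy input here, because the rank-$k$ spectral projection alone already discards the noise outside the $k$-dimensional subspace.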

3.2.1 Spectral denoising via SVD

In this stage, we estimate the projection matrix and reduced image from the current input $\mathcal{Y}^t$ based on (2), which leads to

$$\min_{\mathbf{A}, \mathcal{M}} \ \tfrac{1}{2}\|\mathcal{Y}^t - \mathcal{M} \times_3 \mathbf{A}\|_F^2 + \lambda\, \Phi_{\mathrm{NL}}(\mathcal{M}), \quad \text{s.t.}\ \mathbf{A}^\top \mathbf{A} = \mathbf{I}. \qquad (3)$$

However, this problem has no simple closed-form solution. Instead, since $\mathcal{Y}^t$ is obtained from iterative regularization, its noise level has already decreased. Thus, we propose to relax (3) as

$$\min_{\mathbf{A}, \mathcal{M}} \ \|\mathcal{Y}^t - \mathcal{M} \times_3 \mathbf{A}\|_F^2, \quad \text{s.t.}\ \mathbf{A}^\top \mathbf{A} = \mathbf{I}, \qquad (4)$$

which has a closed-form solution (Proposition 3.1). Only an SVD on the mode-3 unfolding of $\mathcal{Y}^t$ is required, which can be computed efficiently.

Proposition 3.1.

Let $\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$ be the rank-$k$ SVD of $\mathbf{Y}^t_{(3)}$. The solution to (4) is given in closed form by $\mathbf{A}^t = \mathbf{U}$ and $\mathbf{M}^t_{(3)} = \boldsymbol{\Sigma}\mathbf{V}^\top$.
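Proposition 3.1 can be checked numerically with a truncated SVD; the sketch below assumes the mode-3 unfolding is a $B \times (hw)$ matrix of spectra, and optimality follows from the Eckart–Young theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, B, k = 8, 8, 16, 3
Y3 = rng.standard_normal((B, h * w))      # mode-3 unfolding of a noisy HSI

U, s, Vt = np.linalg.svd(Y3, full_matrices=False)
A  = U[:, :k]                             # closed-form projection, A^T A = I
M3 = np.diag(s[:k]) @ Vt[:k]              # unfolding of the reduced image

assert np.allclose(A.T @ A, np.eye(k))    # orthogonality constraint holds
# A @ M3 is the best rank-k Frobenius approximation of Y3 (Eckart-Young),
# hence a minimizer of ||Y3 - A M3||_F over orthogonal A and free M3.
err = np.linalg.norm(Y3 - A @ M3)
best = np.sqrt(np.sum(s[k:] ** 2))
assert np.isclose(err, best)
```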

3.2.2 Spatial denoising via WNNM

Note that we have the projection $\mathbf{A}^t$ from Section 3.2.1. Substituting it into (2), the objective in this stage becomes

$$\min_{\mathcal{M}} \ \tfrac{1}{2}\|\mathcal{M}^t - \mathcal{M}\|_F^2 + \lambda\, \Phi_{\mathrm{NL}}(\mathcal{M}), \qquad (5)$$

where $\Phi_{\mathrm{NL}}(\cdot)$ is a non-local denoising regularizer. Formulation (5) appears in many denoising models, e.g. WNNM [19]. Specifically, to solve it, we first group similar patches, then denoise each patch-group tensor, and finally assemble the estimated $\hat{\mathcal{M}}^t$.

However, all these models assume that the noise on the reduced image follows a univariate Gaussian distribution. If this assumption fails, the resulting performance can deteriorate significantly. Here, we have Proposition 3.2: the noise distribution is preserved from the noisy HSI to the reduced image, which enables us to use existing spatial denoising methods. In this paper, we use WNNM [19] to denoise each patch-group tensor, as it is widely used and gives state-of-the-art denoising performance.

Proposition 3.2.

Assume the noisy HSI $\mathcal{Y}$ follows (1). Then the noise on the reduced image $\mathcal{M}^t = \mathcal{Y} \times_3 (\mathbf{A}^t)^\top$, where $(\mathbf{A}^t)^\top \mathbf{A}^t = \mathbf{I}$, still follows a Gaussian distribution with zero mean and variance $\sigma^2$.
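Proposition 3.2 can be checked empirically: multiplying i.i.d. Gaussian spectra by a matrix with orthonormal columns leaves the noise zero-mean with the same variance. The sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
B, k, n = 64, 5, 200_000
A = np.linalg.qr(rng.standard_normal((B, k)))[0]   # orthonormal columns: A^T A = I
sigma = 0.7
N = sigma * rng.standard_normal((n, B))            # Gaussian noise spectra

reduced = N @ A                                    # noise carried into the reduced image
assert abs(reduced.mean()) < 1e-2                  # still zero mean
assert abs(reduced.std() - sigma) < 1e-2           # variance sigma^2 preserved
```

Each reduced entry is a fixed unit-norm linear combination of i.i.d. $\mathcal{N}(0, \sigma^2)$ variables, hence again $\mathcal{N}(0, \sigma^2)$.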

Finally, to use WNNM, we need to estimate the noise level of $\mathcal{M}^t$, which changes during the iterations. From Proposition 3.2, we know the noise level of $\mathcal{M}^t$ is the same as that of $\mathcal{Y}^t$; thus we propose to estimate it via

$$\hat{\sigma}_t = \gamma \sqrt{\left|\sigma^2 - \mathrm{mean}\big((\mathcal{Y} - \mathcal{Y}^t)^2\big)\right|}, \qquad (6)$$

where $\gamma$ is a scaling factor controlling the re-estimation of the noise variance, and $\mathrm{mean}(\cdot)$ stands for averaging over the tensor elements. The denoised group tensors can be directly used to reconstruct the denoised reduced image $\hat{\mathcal{M}}^t$. The output denoised image of the $t$-th iteration is $\mathcal{X}^t = \hat{\mathcal{M}}^t \times_3 \mathbf{A}^t$.
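A sketch of this re-estimation rule, assuming the variance-gap form used in [14]: the noise variance already removed from the current input is subtracted from the original level. The value of `gamma` here is an arbitrary placeholder.

```python
import numpy as np

def reestimate_sigma(Y, Y_t, sigma, gamma=0.5):
    """Re-estimate the remaining noise level in the current input Y_t.

    The energy already removed, mean((Y - Y_t)^2), is subtracted (in
    variance) from the original noise level sigma^2; gamma rescales the
    result. gamma=0.5 is an illustrative default, not the paper's value.
    """
    removed = np.mean((Y - Y_t) ** 2)     # averaged over all tensor elements
    return gamma * np.sqrt(abs(sigma ** 2 - removed))

rng = np.random.default_rng(0)
clean = rng.standard_normal((16, 16, 8))
Y = clean + 0.5 * rng.standard_normal(clean.shape)

# Nothing removed yet: the estimate is just gamma * sigma.
assert reestimate_sigma(Y, Y, 0.5) == 0.5 * 0.5
# After (hypothetically) perfect denoising, little noise remains.
assert reestimate_sigma(Y, clean, 0.5) < 0.1
```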

3.2.3 Iterative refinement

Iterative regularization has been widely used to boost denoising performance [9, 14, 19, 36]. Here we also introduce it into our model (Algorithm 1) to refine the noisy projection. As shown in (4), the projection is significantly influenced by the noise intensity of the input noisy image. Hence we update the next input as

$$\mathcal{Y}^{t+1} = \mathcal{X}^t + \delta\,(\mathcal{Y} - \mathcal{X}^t),$$

where $\delta$ trades off the denoised image $\mathcal{X}^t$ and the original noisy image $\mathcal{Y}$. The estimation of the projection then benefits from the lower noise variance of the input $\mathcal{Y}^{t+1}$.

Besides, $k$ is also updated across the iterations. We initialize $k$ by HySime [1]. When the noisy image is corrupted by heavy noise, the estimated $k$ will be small. Fortunately, the larger singular values obtained from the noisy image are less contaminated by the noise and help to keep noise out of the reduced image. Across the iterations, we increase $k$ by

$$k^{t+1} = k^t + s, \qquad (7)$$

where $s$ is a constant. Therefore, the reduced image is able to capture more useful information as the iterations proceed.

3.3 Complexity analysis

stage A stage B
NGmeet
LLRT
KBR
Table 1: Per-iteration complexity comparison between the proposed NGmeet and state-of-the-art non-local based methods. The group size is determined by the size of each patch and the number of similar patches; KBR additionally has an inner iteration count.

Following Algorithm 1, the main per-iteration time complexity comprises the SVD of stage A and the non-local low-rank denoising of each patch group in stage B. Table 1 presents the time-complexity comparison between NGmeet and the other non-local HSI denoising methods. LLRT and KBR only need stage B to complete the denoising. As can be seen, the proposed NGmeet pays an additional cost in stage A, but is substantially faster in stage B, since its non-local step operates on the $k$-band reduced image rather than on all $B$ bands.

spectral low-rank methods spatial low-rank methods
Image Noise Index LRTA LRTV MTSNMF NAILRMA PARAFAC FastHyDe TDL KBR LLRT NGmeet
PSNR 44.12 41.47 44.27 28.51 38.01 46.72 45.58 46.20 47.14 47.87
CAVE 10 SSIM 0.969 0.949 0.972 0.941 0.921 0.985 0.983 0.980 0.989 0.990
SAM 7.90 16.54 8.49 14.52 13.86 6.62 6.07 8.94 4.65 4.72
PSNR 38.68 35.32 37.18 35.11 37.58 41.21 39.67 41.52 42.53 43.11
30 SSIM 0.913 0.818 0.855 0.775 0.888 0.945 0.942 0.942 0.974 0.972
SAM 12.86 33.32 14.97 32.43 17.37 14.06 12.54 19.43 8.23 7.46
PSNR 35.49 32.27 33.40 32.11 30.06 38.05 36.51 39.41 40.09 40.45
50 SSIM 0.858 0.719 0.730 0.638 0.571 0.889 0.888 0.922 0.950 0.951
SAM 16.53 43.65 19.06 22.85 38.35 20.08 18.23 21.31 11.48 9.80
PSNR 31.21 27.97 27.96 27.90 24.29 33.41 31.90 33.78 36.25 37.21
100 SSIM 0.735 0.529 0.493 0.453 0.256 0.746 0.734 0.851 0.910 0.927
SAM 22.67 54.85 26.33 55.66 51.83 30.72 28.51 26.41 18.17 16.23
PSNR 38.49 38.71 40.64 41.46 33.39 42.22 41.46 40.09 41.95 43.17
PaC 10 SSIM 0.975 0.979 0.988 0.987 0.866 0.990 0.988 0.984 0.989 0.992
SAM 4.90 3.29 2.76 3.46 9.05 2.99 3.06 2.86 2.75 2.61
PSNR 32.07 32.76 35.45 34.17 30.92 35.98 34.43 34.39 35.04 36.97
30 SSIM 0.908 0.920 0.958 0.941 0.845 0.962 0.949 0.947 0.957 0.971
SAM 7.88 5.76 4.17 6.54 9.28 5.09 5.11 4.28 4.86 4.30
PSNR 29.11 29.45 32.51 30.71 29.24 33.32 31.31 31.05 32.00 34.29
50 SSIM 0.836 0.850 0.921 0.886 0.846 0.936 0.904 0.892 0.918 0.948
SAM 9.20 8.60 5.50 8.83 11.40 6.55 6.14 5.40 6.55 5.18
PSNR 25.13 26.22 28.17 25.76 23.68 29.90 27.49 27.80 28.63 30.61
100 SSIM 0.655 0.729 0.808 0.728 0.598 0.873 0.789 0.793 0.833 0.890
SAM 10.17 12.76 8.40 12.93 20.22 8.68 7.67 6.95 7.68 6.86
PSNR 38.94 36.64 37.26 42.57 32.38 43.06 41.83 40.58 41.89 43.72
WDC 10 SSIM 0.974 0.968 0.975 0.989 0.914 0.991 0.989 0.986 0.990 0.993
SAM 5.602 4.653 4.429 3.637 8.087 3.070 3.680 3.090 3.700 2.830
PSNR 32.91 32.42 34.65 35.87 31.56 37.39 34.84 34.75 36.30 37.90
30 SSIM 0.917 0.909 0.953 0.958 0.898 0.971 0.953 0.951 0.967 0.975
SAM 8.331 5.991 5.557 7.011 9.009 5.140 6.400 5.240 5.460 4.640
PSNR 30.35 30.12 32.49 32.56 29.49 34.61 31.89 31.61 33.48 35.14
50 SSIM 0.864 0.849 0.922 0.919 0.837 0.948 0.910 0.900 0.938 0.955
SAM 9.43 7.09 6.71 9.22 13.64 6.57 7.94 6.63 6.43 5.83
PSNR 26.84 27.23 28.94 27.85 23.01 31.05 27.66 28.23 29.88 31.45
100 SSIM 0.734 0.740 0.830 0.805 0.550 0.894 0.781 0.789 0.861 0.903
SAM 11.33 9.47 9.44 13.27 25.46 8.91 10.15 9.12 7.99 7.86
Table 2: Quantitative comparison of different algorithms under various noise levels. PSNR is in dB, and the best results are in bold.

4 Experiments

In this section, we present the simulated and real-data experimental results of the different methods, accompanied by a computational-efficiency and parameter analysis of the proposed NGmeet. The experiments are programmed in MATLAB on a machine with an Intel Core i7-7820HK CPU and 64 GB memory.

4.1 Simulated experiments

Setup. One multispectral image (MSI) dataset, CAVE (http://www1.cs.columbia.edu/CAVE/databases/), and two HSI datasets, i.e. PaC (http://www.ehu.eus/ccwintco/index.php/) and WDC (https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral), are used (Table 3). These images have been widely used in simulated studies [9, 21, 28, 36, 47]. Following the settings in [9, 28], additive Gaussian noise is added to the MSIs/HSIs with the noise variance varying from 10 to 100. Before denoising, the HSIs are normalized to [0, 255].

CAVE PaC WDC
image size 512×512 256×256 256×256
number of bands 31 89 192
Table 3: Hyperspectral images used for the simulated experiments.
Figure 3: Denoising results on the CAVE-toy image with the noise variance 100. The color image is composed of bands 31, 11, and 6 for the red, green, and blue channels, respectively.

The following methods are used for comparison: spectral low-rank methods, i.e. LRTA [30] (https://www.sandia.gov/tgkolda/TensorToolbox/), LRTV [22] (https://sites.google.com/site/rshewei/home), MTSNMF [39] (http://www.cs.zju.edu.cn/people/qianyt/), NAILRMA [21], PARAFAC [26], and FastHyDe [47] (http://www.lx.it.pt/~bioucas/); spatial low-rank methods, i.e. TDL [28], KBR [36] (http://gr.xjtu.edu.cn/web/dymeng/), and LLRT [9] (http://www.escience.cn/people/changyi/); and finally NGmeet (Algorithm 1), which combines the best of the two fields. Hyper-parameters of all compared methods are set based on the authors' codes or the suggestions in their papers. The spectral dimension $k$ is the most important parameter; it is initialized by HySime [1] and updated via (7). Parameter $\lambda$ controls the contribution of the non-local regularization, and $\gamma$ is a scaling factor controlling the re-estimation of the noise variance [14]. We set them empirically, following [9] and [14], and keep them fixed throughout the experiments.

To thoroughly evaluate the performance of the different methods, the peak signal-to-noise ratio (PSNR), structural similarity (SSIM) [34], and spectral angle mean (SAM) [9, 22] indices are adopted for quantitative assessment. The SAM index measures the mean spectral angle between the original HSI and the restored HSI; a lower SAM value means higher spectral similarity between the original and denoised images.
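The PSNR and SAM indices can be computed as sketched below; the peak value and reshape convention are generic choices, not necessarily those of the evaluation code used in the paper.

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB, for images in [0, peak]."""
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def sam_degrees(ref, est, eps=1e-12):
    """Mean spectral angle (in degrees) between the per-pixel spectra of
    two HSIs of shape (h, w, B); lower means more similar spectra."""
    r = ref.reshape(-1, ref.shape[2]).astype(float)
    e = est.reshape(-1, est.shape[2]).astype(float)
    cos = np.sum(r * e, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

rng = np.random.default_rng(0)
x = rng.uniform(0, 255, (8, 8, 4))
assert sam_degrees(x, x) < 1e-3                     # identical spectra -> angle ~ 0
assert abs(psnr(x, x + 1.0) - 10 * np.log10(255.0 ** 2)) < 1e-6
```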

Quantitative comparison. For each noise-level setting, we calculate the evaluation values over all the images from each dataset, as presented in Table 2. The proposed NGmeet achieves the best results in almost all cases. Another interesting observation is that the non-local based method LLRT achieves better results than FastHyDe (the best spectral low-rank method) on the MSI dataset, but the opposite holds in the hyperspectral image cases. This phenomenon confirms the advantage of the non-local low-rank property in MSI processing and of the spectral low-rank property in HSI processing.

Visual comparison. To further demonstrate the efficiency of the proposed method, Figure 3 shows color images of CAVE-toy (composed of bands 31, 11, and 6 [21]) before and after denoising. The results for PaC and WDC can be found in the supplementary material. The PSNR value and computational time of each method are marked under the denoised images. It can be observed that FastHyDe, LLRT, and NGmeet have a huge advantage over the remaining methods. From the enlarged area, the results of FastHyDe and LLRT contain some artifacts; thus our NGmeet produces the best visual quality.

(a) Time v.s. number of bands
(b) SSIM v.s. number of bands
Figure 4: The computational time and SSIM values of different numbers of bands. WDC is used and noise variance is 100.
Time KBR LLRT NGmeet
(seconds) stage B stage B stage A stage B total
CAVE 4330 1212 3 201 204
PaC 828 488 2 37 39
WDC 3570 1573 3 45 48
Table 4: Average running time (in seconds) of each stage for the non-local low-rank based methods. stage A: spectral projection; stage B: spatial non-local low-rank denoising.

Computational efficiency. Here we illustrate that, in our denoising paradigm, the computational cost of the non-local denoising procedure is decoupled from the large spectral dimension. Compared to the previous non-local denoising methods, i.e. KBR [36] and LLRT [9], the proposed NGmeet includes the additional stage A. Table 4 presents the computational time of the different stages of the three methods. From Tables 1 and 4, we conclude that NGmeet spends little time projecting the original HSI onto the reduced image (stage A), while earning a huge advantage in stage B, which includes the group-matching step and the non-local denoising.

Figure 4 displays the computational time and SSIM values of the proposed NGmeet, KBR [36], and LLRT [9] as the number of bands increases. Even though the performance of KBR and LLRT improves with more bands, their computational time also increases linearly. Our method achieves the best performance while its computational time remains nearly unchanged as the number of bands grows.

Convergence. To show the convergence of the proposed NGmeet, Figure 5 presents the PSNR values over the iterations on the WDC dataset. Our method converges to a stable PSNR value very quickly at the different noise levels.

Figure 5: PSNR v.s. iteration of NGmeet. WDC is used.

4.2 Real Data Experiments

Figure 6: Real data experimental results on the Indian Pines dataset. The color image is composed of bands 219, 109 and 1.
Figure 7: Real data experimental results on the Urban dataset of band 207.

Setup. The AVIRIS Indian Pines HSI (https://engineering.purdue.edu/~biehl/MultiSpec/) and the HYDICE Urban image (http://www.tec.army.mil/hypercube) are adopted for the real-data experiments (Table 5). The noisy HSIs are also scaled to the range [0, 255], and the parameters of the proposed method are set as in the simulated experiments. In addition, the multiple regression theory-based approach [1] is adopted to estimate the initial noise variance of each band.

Urban Indian Pines
image size 200×200 145×145
number of bands 210 220
Table 5: Hyperspectral images used for the real data experiments.

Visual comparison. Since clean reference images are not available, we present the real Indian Pines and Urban images before and after denoising in Figures 6 and 7. It can be observed that the results produced by the proposed NGmeet remove the noise and keep the spectral details simultaneously. LRTV produces the smoothest results; however, the color of its denoised result changes considerably, indicating a loss of spectral information. The denoised results of FastHyDe and LLRT still contain stripes, as presented in Figure 6. To sum up, although the proposed NGmeet is designed under a Gaussian noise assumption, it also achieves the best results on the real datasets.

4.3 Parameter analysis

The spectral dimension $k$ is the key parameter to integrate the spatial and spectral information. Figure 8 presents the PSNR values achieved by NGmeet under different initializations of $k$. The PaC image was chosen as the test image, and the noise variance varies from 10 to 100. For the different noise-variance cases, $k$ is initialized by HySime [1]; the figure confirms that this initialization is reliable.

Figure 8: PSNR values achieved by the proposed method under different initializations of the parameter $k$ on the PaC dataset.

Table 6 presents the influence of different values of the increment $s$ in (7), with $k$ initialized by HySime [1]. It can be observed that the updating strategy of $k$ improves the performance, and that the selection of $s$ is robust.

PSNR(dB)
43.09 36.49 33.54 29.91
43.52 36.96 34.23 30.56
43.43 37.02 34.21 30.83
43.42 37.11 34.42 30.45
Table 6: The influence of different values of $s$ on NGmeet (PSNR, dB).

5 Conclusion

In this paper, we provide a new perspective to integrate the spatial non-local similarity and the global spectral low-rank property, which are exploited by low-dimensional projection and reduced-image denoising, respectively. We have also proposed an alternating minimization method with an iterative strategy to solve the optimization of the proposed NGmeet method. The superiority of our method is confirmed by the simulated and real-data experiments. In our unified spatial-spectral paradigm, the use of WNNM [19] is not a must. In the future, we plan to adopt convolutional neural networks [7, 42] to explore the non-local similarity.

References

  • [1] J. M. Bioucas-Dias and J. M. Nascimento. Hyperspectral subspace identification. IEEE Trans. Geosci. Remote Sens., 46(8):2435–2445, Aug. 2008.
  • [2] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and J. Chanussot. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel.Topics Appl. Earth Observ. Remote Sens., 5(2):354–379, Apr. 2012.
  • [3] X. Cao, Q. Zhao, D. Meng, Y. Chen, and Z. Xu. Robust low-rank matrix factorization under general mixture noise distributions. IEEE Trans. on Image Process., 25(10):4677–4690, Oct 2016.
  • [4] A. Chakrabarti and T. Zickler. Statistics of real-world hyperspectral images. In CVPR, pages 193–200, June 2011.
  • [5] C.-I. Chang and Q. Du. Interference and noise-adjusted principal components analysis. IEEE Trans. Geosci. Remote Sens., 37(5):2387–2396, Sep. 1999.
  • [6] Y. Chang, L. Yan, H. Fang, and C. Luo. Anisotropic spectral-spatial total variation model for multispectral remote sensing image destriping. IEEE Trans. on Image Process., 24(6):1852–1866, Jun. 2015.
  • [7] Y. Chang, L. Yan, H. Fang, S. Zhong, and W. Liao. HSI-DeNet: Hyperspectral image restoration via convolutional neural network. IEEE Trans. Geosci. Remote Sens., pages 1–16, 2018.
  • [8] Y. Chang, L. Yan, H. Fang, S. Zhong, and Z. Zhang. Weighted low-rank tensor recovery for hyperspectral image restoration. arXiv preprint arXiv:1709.00192, 2017.
  • [9] Y. Chang, L. Yan, and S. Zhong. Hyper-laplacian regularized unidirectional low-rank tensor recovery for multispectral image denoising. In CVPR, pages 4260–4268, 2017.
  • [10] G. Chen and S.-E. Qian. Denoising of hyperspectral imagery using principal component analysis and wavelet shrinkage. IEEE Trans. Geosci. Remote Sens., 49(3):973–980, Mar. 2011.
  • [11] Y. Chen, Y. Guo, Y. Wang, D. Wang, C. Peng, and G. He. Denoising of hyperspectral images using nonconvex low rank matrix approximation. IEEE Trans. Geosci. Remote Sens., 55(9):5366–5380, Jun. 2017.
  • [12] R. Dian, L. Fang, and S. Li. Hyperspectral image super-resolution via non-local sparse tensor factorization. In CVPR, pages 3862–3871, July 2017.
  • [13] W. Dong, G. Li, G. Shi, X. Li, and Y. Ma. Low-rank tensor approximation with laplacian scale mixture modeling for multiframe image denoising. In ICCV, pages 442–449, 2015.
  • [14] W. Dong, G. Shi, and X. Li. Nonlocal image restoration with bilateral variance estimation: a low-rank approach. IEEE Trans. on Image Process., 22(2):700–711, 2013.
  • [15] Y. Fu, A. Lam, I. Sato, and Y. Sato. Adaptive spatial-spectral dictionary learning for hyperspectral image restoration. International Journal of Computer Vision, 122(2):228–245, 2017.
  • [16] Y. Fu, Y. Zheng, I. Sato, and Y. Sato. Exploiting spectral-spatial correlation for coded hyperspectral image restoration. In CVPR, pages 3727–3736, June 2016.
  • [17] C. Gendrin, Y. Roggo, and C. Collet. Pharmaceutical applications of vibrational chemical imaging and chemometrics: A review. J. Pharm. Biomed. Anal., 48(3):533 – 553, Nov. 2008.
  • [18] R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams. Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (aviris). Remote Sens. Environ., 65(3):227–248, Sep. 1998.
  • [19] S. Gu, L. Zhang, W. Zuo, and X. Feng. Weighted nuclear norm minimization with application to image denoising. In CVPR, pages 2862–2869, 2014.
  • [20] G. Lu and B. Fei. Medical hyperspectral imaging: a review. Journal of Biomedical Optics, 19(1):010901, 2014.
  • [21] W. He, H. Zhang, L. Zhang, and H. Shen. Hyperspectral image denoising via noise-adjusted iterative low-rank matrix approximation. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., 8(6):3050–3061, 2015.
  • [22] W. He, H. Zhang, L. Zhang, and H. Shen. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens., 54(1):178–188, Jan. 2016.
  • [23] R. Kawakami, Y. Matsushita, J. Wright, M. Ben-Ezra, Y. W. Tai, and K. Ikeuchi. High-resolution hyperspectral imaging via matrix factorization. In CVPR 2011, pages 2329–2336, June 2011.
  • [24] T. Kolda and B. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
  • [25] C. Li, Y. Ma, J. Huang, X. Mei, and J. Ma. Hyperspectral image denoising using the robust low-rank tensor recovery. JOSA A, 32(9):1604–1612, 2015.
  • [26] X. Liu, S. Bourennane, and C. Fossati. Denoising of hyperspectral images using the parafac model and statistical performance analysis. IEEE Trans. Geosci. Remote Sens., 50(10):3717–3724, 2012.
  • [27] Z. Pan, G. Healey, M. Prasad, and B. Tromberg. Face recognition in hyperspectral images. IEEE Trans. Pattern Anal. Mach. Intell., 25(12):1552–1560, Dec 2003.
  • [28] Y. Peng, D. Meng, Z. Xu, C. Gao, Y. Yang, and B. Zhang. Decomposable nonlocal tensor dictionary learning for multispectral image denoising. In CVPR, pages 2949–2956, 2014.
  • [29] B. Rasti, J. R. Sveinsson, M. O. Ulfarsson, and J. A. Benediktsson. Hyperspectral image denoising using first order spectral roughness penalty in wavelet domain. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens, 7(6):2458–2467, Jun. 2014.
  • [30] N. Renard, S. Bourennane, and J. Blanc-Talon. Denoising and dimensionality reduction using multilinear tools for hyperspectral images. IEEE Geosci. Remote Sens. Lett., 5(2):138–142, Apr. 2008.
  • [31] G. A. Shaw and H.-h. K. Burke. Spectral imaging for remote sensing. Lincoln Laboratory Journal, 14(1):3–28, 2003.
  • [32] D. W. Stein, S. G. Beaven, L. E. Hoff, E. M. Winter, A. P. Schaum, and A. D. Stocker. Anomaly detection from hyperspectral imagery. IEEE Signal Process Mag., 19(1):58–69, 2002.
  • [33] M. Uzair, A. Mahmood, and A. Mian. Hyperspectral face recognition with spatiospectral information fusion and pls regression. IEEE Trans. on Image Process., 24(3):1127–1137, March 2015.
  • [34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. on Image Process., 13(4):600–612, 2004.
  • [35] W. Wei, L. Zhang, C. Tian, A. Plaza, and Y. Zhang. Structured sparse coding-based hyperspectral imagery denoising with intracluster filtering. IEEE Trans. Geosci. Remote Sens., 55(12):6860–6876, 2017.
  • [36] Q. Xie, Q. Zhao, D. Meng, and Z. Xu. Kronecker-basis-representation based tensor sparsity and its applications to tensor recovery. IEEE Trans. Pattern Anal. Mach. Intell., 40(8):1888–1902, 2018.
  • [37] Y. Xie, Y. Qu, D. Tao, W. Wu, Q. Yuan, W. Zhang, et al. Hyperspectral image restoration via iteratively regularized weighted schatten p-norm minimization. IEEE Trans. Geosci. Remote Sens., 54(8):4642–4659, Aug. 2016.
  • [38] F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar. Generalized assorted pixel camera: Postcapture control of resolution, dynamic range, and spectrum. IEEE Trans. on Image Process., 19(9):2241–2253, Sept 2010.
  • [39] M. Ye, Y. Qian, and J. Zhou. Multitask sparse nonnegative matrix factorization for joint spectral-spatial hyperspectral imagery denoising. IEEE Trans. Geosci. Remote Sens., 53(5):2621–2639, May 2015.
  • [40] Q. Yuan, Q. Zhang, J. Li, H. Shen, and L. Zhang. Hyperspectral image denoising employing a spatial-spectral deep residual convolutional neural network. IEEE Trans. Geosci. Remote Sens., pages 1–14, 2018.
  • [41] H. Zhang, W. He, L. Zhang, H. Shen, and Q. Yuan. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens., 52(8):4729–4743, Aug. 2014.
  • [42] K. Zhang, W. Zuo, S. Gu, and L. Zhang. Learning deep cnn denoiser prior for image restoration. In CVPR, volume 2, 2017.
  • [43] L. Zhang, W. Wei, Y. Zhang, C. Shen, A. van den Hengel, and Q. Shi. Cluster sparsity field for hyperspectral imagery denoising. In ECCV, pages 631–647, 2016.
  • [44] L. Zhang, W. Wei, Y. Zhang, C. Tian, and F. Li. Reweighted laplace prior based hyperspectral compressive sensing for unknown sparsity. In CVPR, June 2015.
  • [45] X. Zhang, X. Yuan, and L. Carin. Nonlocal low-rank tensor factor analysis for image restoration. In CVPR, pages 8232–8241, 2018.
  • [46] L. Zhuang and J. M. Bioucas-Dias. Hyperspectral image denoising based on global and non-local low-rank factorizations. In ICIP, pages 1900–1904. IEEE, 2017.
  • [47] L. Zhuang and J. M. Bioucas-Dias. Fast hyperspectral image denoising and inpainting based on low-rank and sparse representations. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., 11(3):730–742, Mar. 2018.

Appendix A Proofs

A.1 Proposition 3.1

Proof.

Note that the objective can be expressed as

$$\min_{\mathbf{M},\mathbf{E}} \|\mathbf{Y} - \mathbf{M}\mathbf{E}^T\|_F^2, \quad \text{s.t. } \mathbf{E}^T\mathbf{E} = \mathbf{I}_k,$$

which is equivalent to finding the best rank-$k$ approximation of $\mathbf{Y}$. Thus, letting the rank-$k$ SVD of $\mathbf{Y}$ be $\mathbf{U}_k\boldsymbol{\Sigma}_k\mathbf{V}_k^T$, the closed-form solution of (4) is given by $\hat{\mathbf{E}} = \mathbf{V}_k$ and $\hat{\mathbf{M}} = \mathbf{U}_k\boldsymbol{\Sigma}_k$. ∎
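The closed-form solution can also be checked numerically. The sketch below (sizes, variable names, and the random data are illustrative, not from the paper) solves the subproblem min ||Y − M Eᵀ||²_F s.t. EᵀE = I via the rank-k SVD and confirms that no other feasible basis does better:

```python
import numpy as np

# Numerical sanity check of Proposition 3.1 (illustrative sizes/names):
# the subproblem  min_{M,E} ||Y - M E^T||_F^2  s.t.  E^T E = I
# is solved in closed form by the rank-k SVD of Y (Eckart-Young).
rng = np.random.default_rng(0)
MN, B, k = 200, 30, 5                  # pixels, spectral bands, subspace rank
Y = rng.standard_normal((MN, B))       # noisy image, unfolded to MN x B

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
E = Vt[:k].T                           # orthogonal spectral basis, B x k
M = U[:, :k] * s[:k]                   # reduced image, MN x k
best = np.linalg.norm(Y - M @ E.T)     # residual of the closed-form solution

# Any other feasible basis Q (orthonormal columns) cannot do better,
# even when paired with its own optimal reduced image Y @ Q:
Q, _ = np.linalg.qr(rng.standard_normal((B, k)))
assert np.allclose(E.T @ E, np.eye(k))
assert best <= np.linalg.norm(Y - (Y @ Q) @ Q.T) + 1e-9
```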

A.2 Proposition 3.2

Proof.

Since $\mathbf{X} = \mathbf{M}\mathbf{E}^T$ and $\mathbf{E}^T\mathbf{E} = \mathbf{I}_k$, then

$$\mathbf{Y}\mathbf{E} = \mathbf{X}\mathbf{E} + \mathbf{N}\mathbf{E} = \mathbf{M} + \tilde{\mathbf{N}}, \qquad (8)$$

where the noise is given by $\tilde{\mathbf{N}} = \mathbf{N}\mathbf{E}$. Note that

$$\mathbb{E}[\tilde{\mathbf{N}}] = \mathbb{E}[\mathbf{N}]\,\mathbf{E} = \mathbf{0}. \qquad (9)$$

Thus, the mean of the noise is zero. Let $\mathbf{n}_i$ be the $i$-th column in $\mathbf{N}$; then the $j$-th column in $\tilde{\mathbf{N}}$ can be expressed as

$$\tilde{\mathbf{n}}_j = \sum_{i} e_{ij}\,\mathbf{n}_i. \qquad (10)$$

Following the definition of variance, we have

$$\mathrm{Var}(\tilde{\mathbf{n}}_j) = \sum_{i} e_{ij}^2\,\sigma^2 = \sigma^2,$$

since the columns of $\mathbf{E}$ are orthonormal, i.e., $\sum_i e_{ij}^2 = 1$. Thus, we obtain the proposition. ∎
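The statement that projecting i.i.d. zero-mean Gaussian noise onto an orthonormal basis preserves its mean and variance can be verified empirically; the sketch below uses illustrative sizes and a random orthonormal basis:

```python
import numpy as np

# Empirical check of Proposition 3.2 (illustrative sizes): for i.i.d.
# zero-mean Gaussian noise N with variance sigma^2 and an orthonormal
# basis E (E^T E = I), the reduced noise N @ E keeps zero mean and
# the same variance sigma^2.
rng = np.random.default_rng(1)
MN, B, k, sigma = 100_000, 30, 5, 0.1
N = sigma * rng.standard_normal((MN, B))           # noise, MN x B
E, _ = np.linalg.qr(rng.standard_normal((B, k)))   # orthonormal, B x k

Nt = N @ E                                         # noise in reduced image
assert abs(Nt.mean()) < 1e-3                       # mean close to 0
assert abs(Nt.var() - sigma**2) < 1e-4             # variance close to sigma^2
```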

Appendix B Extra Experiments Results

Figures 9 and 10 show the color images of the PaU [21] (composed of bands 80, 34, and 9) and WDC (composed of bands 190, 60, and 27) datasets before and after denoising.

Figure 9: Denoising results on the PaU image with the noise variance 50. The color image is composed of bands 80, 34, and 9 for the red, green, and blue channels, respectively.
Figure 10: Denoising results on the WDC image with the noise variance 100. The color image is composed of bands 190, 60, and 27 for the red, green, and blue channels, respectively.
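Such false-color previews can be produced by stacking three selected bands of the HSI cube into the red, green, and blue channels. The sketch below is a minimal illustration (the `false_color` helper, the band-index convention, and the random demo cube are assumptions, not part of the paper's code):

```python
import numpy as np

# Minimal sketch: compose a false-color RGB preview from three HSI bands,
# e.g. the (80, 34, 9) combination used for the PaU figures. Band indices
# are assumed 0-based here; the cube layout is assumed (H, W, bands).
def false_color(hsi, bands=(80, 34, 9)):
    """Stack three bands of an (H, W, B) cube into an RGB image in [0, 1]."""
    rgb = np.stack([hsi[:, :, b] for b in bands], axis=-1).astype(np.float64)
    lo, hi = rgb.min(), rgb.max()
    return (rgb - lo) / (hi - lo + 1e-12)          # normalize for display

demo = false_color(np.random.rand(4, 4, 200))      # tiny random stand-in cube
```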

Appendix C Deep Networks

Very recently, deep learning based HSI denoising methods, i.e., HSID-CNN [40] and HSI-DeNet [7], have been proposed. We did not compare the proposed NGmeet with these two methods directly, since the related codes are not publicly available. However, from a cross-over comparison of the reported results, our method performs much better than HSID-CNN [40] on the WDC data with Gaussian noise. HSI-DeNet [7] did not report Gaussian noise removal on the CAVE dataset; however, if we regard LLRT as a baseline, the improvement of our NGmeet over LLRT is slightly higher than that of HSI-DeNet [7]. This evidence supports the validity of our method compared with the state-of-the-art deep learning based methods.