Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery

05/18/2020 ∙ by Junjun Jiang, et al. ∙ Harbin Institute of Technology

Recently, single gray/RGB image super-resolution reconstruction has been extensively studied and has made significant progress by leveraging advanced machine learning techniques based on deep convolutional neural networks (DCNNs). However, there has been limited technical development focusing on single hyperspectral image super-resolution due to the high-dimensional and complex spectral patterns in hyperspectral images. In this paper, we make a step forward by investigating how to adapt state-of-the-art residual learning based single gray/RGB image super-resolution approaches for computationally efficient single hyperspectral image super-resolution, referred to as SSPSR. Specifically, we introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data. Considering that hyperspectral training samples are scarce and the spectral dimension of hyperspectral image data is very high, it is nontrivial to train a stable and effective deep network. Therefore, a group convolution (with shared network parameters) and progressive upsampling framework is proposed. This not only alleviates the difficulty in feature extraction due to the high dimensionality of the hyperspectral data, but also makes the training process more stable. To exploit the spatial and spectral prior, we design a spatial-spectral block (SSB), which consists of a spatial residual module and a spectral attention residual module. Experimental results on several hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images and outperforms state-of-the-art methods. The source code is available at <https://github.com/junjun-jiang/SSPSR>


I Introduction

Unlike human eyes, which are sensitive only to visible light, hyperspectral imaging is a technique for collecting and processing information across the electromagnetic spectrum [38]. The most important feature of hyperspectral imaging is the combination of imaging technology and spectral detection technology. While imaging the spatial features of the target, each spatial pixel in a hyperspectral image is dispersed into dozens or even hundreds of narrow spectral bands for continuous spectral coverage. Therefore, hyperspectral images have a strong spectral diagnostic capability to distinguish materials that look similar to humans.

However, the hyperspectral imaging system is often compromised by the limited amount of incident energy. There is always a tradeoff between the spatial and the spectral resolution in the real imaging process. As the number of spectral bands increases, if all other factors are kept constant to ensure a high signal-to-noise ratio (SNR), the spatial resolution inevitably suffers. Therefore, how to obtain a reliable hyperspectral image with high resolution remains a very challenging problem.

Super-resolution reconstruction can infer a high-resolution image from one or a sequence of observed low-resolution images [36]. It is a post-processing technique that does not require hardware modifications, and thus can break through the limitations of the imaging system. According to whether auxiliary information (such as a panchromatic, RGB, or multispectral image) is utilized, hyperspectral image super-resolution techniques can be divided into two categories: fusion based hyperspectral image super-resolution (sometimes called hyperspectral image pansharpening) and single hyperspectral image super-resolution [57]. The former merges the observed low-resolution hyperspectral image with a higher spatial resolution auxiliary image to improve the spatial resolution of the observed hyperspectral image. Fusion approaches based on Bayesian inference, matrix factorization, sparse representation, or recently advanced deep learning techniques have flourished in recent years and achieved considerable performance [48, 58, 4]. However, most of these methods assume that the input low-resolution hyperspectral image and the high-resolution auxiliary image are well co-registered. In practical applications, obtaining such well co-registered auxiliary images would be difficult, if not impossible [8, 35, 64].

Compared with fusion based hyperspectral image super-resolution, single hyperspectral image super-resolution has received less attention and there has been limited advancement, due to the complex spectral patterns in hyperspectral images and the absence of additional auxiliary information. To exploit the abundant spectral correlations among successive spectral bands, several single hyperspectral image super-resolution approaches based on sparse and dictionary learning or low-rank approximation have been developed [20, 17, 46, 21]. However, these hand-crafted priors can only reflect one aspect of the characteristics of the hyperspectral data.

Recently, deep convolutional neural networks (DCNNs) have shown extraordinary capability in modelling the relationship between low-resolution images and high-resolution ones, e.g., in the single gray/RGB image super-resolution task [13, 30, 62]. The underlying rationale of these schemes can be summarized as follows: given a very large number of example pairs of original images and their corrupted versions, a deep network can be learned to restore the degraded image to its source.

Specifically, compared with deep learning based single gray/RGB image super-resolution, in the single hyperspectral image super-resolution task it is nontrivial to train a computationally efficient and effective deep network. This is mainly due to the following reasons. On the one hand, hyperspectral images are not as common as natural images, so the number of training samples in available hyperspectral image datasets is extremely small. Even if we can collect many images, they may be acquired by different hyperspectral cameras; the differences in the number of spectral bands and in the imaging conditions make it more difficult to establish a unified deep network. On the other hand, the spectral dimensionality of hyperspectral image data itself is very high. Unlike traditional gray/RGB images, hyperspectral images often have hundreds of contiguous spectral bands, which calls for larger datasets to support the training process; otherwise, over-fitting occurs easily.

In order to deal with the above problems caused by the lack of data and the inability to fully exploit the spatial information and the spectral correlation characteristics of hyperspectral data, a group convolution (with shared network parameters) and progressive upsampling framework is proposed in this paper, which can greatly reduce the size of the model and make it feasible to obtain stable training results under small data conditions. To exploit the spatial and spectral correlation characteristics of hyperspectral data, we carefully design the spatial-spectral prior network (SSPN), which cascades multiple spatial-spectral blocks (SSBs). Each SSB contains a spatial residual module and a spectral attention residual module. The former consists of a standard residual block used to exploit the spatial information of the hyperspectral data, while the latter employs a spectral attention mechanism to extract spectral correlations. Through short and long skip connections, a residual-in-residual architecture is formed, which makes the spatial-spectral feature extraction more efficient.

Figure 1 shows the network architecture of our spatial-spectral prior network based super-resolution network (SSPSR). The input low-resolution hyperspectral image is firstly divided into several overlapping groups. For each group, a branch network is applied to extract the spatial-spectral features of the grouped hyperspectral bands (a subset of the entire hyperspectral image) and to upscale them with a smaller upsampling factor (compared with the final target). Then the output features of all branches are concatenated and fed to the following global spatial-spectral feature extraction and upsampling networks. Note that in order to let the SSPN in the branch networks and in the global network share the same structure, we insert a "reconstruction" layer after each branch upsampling module. Similar to many previous super-resolution networks, we also adopt a global residual structure to facilitate the prediction of the target. Therefore, in the proposed SSPSR network, the transmission of the information flow is very flexible thanks to the short (within the spatial/spectral residual blocks), long (across the spatial-spectral prior network), and global skip connections. During the training phase, we share the network parameters of each branch across all groups, which avoids heavy computational cost and simplifies the complex optimization process. Comprehensive ablation studies demonstrate the effectiveness of each component and of the fusion strategy used in the proposed method. Comparison results with state-of-the-art single hyperspectral image super-resolution methods on two public datasets demonstrate the effectiveness of the proposed SSPSR network.

We summarize the main contributions of this paper as follows. Considering the limited hyperspectral training samples and the high dimensionality of the spectral bands, it is difficult to learn the mapping from the low-resolution space to the high-resolution space in one-step upsampling. Inspired by the idea of some general image super-resolution methods, which conduct super-resolution progressively, we apply the progressive upsampling scheme to the single hyperspectral image super-resolution task and verify its effectiveness. In addition, we propose a spectral grouping and parameter sharing strategy to greatly reduce the parameters of the model and alleviate the difficulty in feature extraction. Inspired by efficient residual learning and the attention mechanism, we develop a spatial-spectral feature extraction network to fully exploit the spatial-spectral prior of hyperspectral images.

The rest of this paper is organized as follows: Section II presents the related work of hyperspectral image super-resolution. In Section III, we give the details of our SSPSR network architecture and the SSB. Then, the network configuration and experimental results including ablation analysis are reported in Section IV. Finally, some conclusions are drawn in Section V.


Fig. 1: The overall network architecture of the proposed SSPSR network.

II Related Work

In this section, we briefly review some methods that are most relevant to our work, which include fusion based hyperspectral image super-resolution, single hyperspectral image super-resolution, and single gray/RGB image super-resolution. A list of hyperspectral image super-resolution resources collected by Jiang can be found at [22].

II-A Fusion based Hyperspectral Image Super-Resolution

Remote sensing image fusion is a very challenging problem with a long history. Generally speaking, it can be classified into two categories, pansharpening and super-resolution. In order to improve the spatial resolution of multispectral images, some previous works cast the fusion problem into a variational reconstruction task by blending in a panchromatic image with higher resolution. This is often referred to as pansharpening. A taxonomy of pansharpening based fusion methods can be found in the literature [2, 18, 28, 33].

Recently, spatial resolution improvement by fusing a low-resolution hyperspectral image with a high-resolution multispectral image, often referred to as hyperspectral image super-resolution, has received extensive attention. For example, Yokoya et al. [58] proposed a coupled nonnegative matrix factorization (CNMF) based approach to infer the high-resolution hyperspectral image from a pair of high-resolution multispectral and low-resolution hyperspectral images. To exploit the redundancy and correlation in the spectral domain, approaches have been proposed that exploit sparsity [5], non-local similarity [14, 53], superpixel-guided self-similarity [15], clustering manifold structure [61], and tensor and low-rank constraints [44, 12]. Most recently, deep learning based methods have gradually become popular due to their superior performance and fewer assumptions regarding the image prior [54, 11, 37, 7]. Inspired by iterative optimization based on the observation model, deep unfolding networks for fusion based hyperspectral image super-resolution have also become popular in recent years [51, 49, 10]. The common idea of the above fusion based hyperspectral image super-resolution methods is to borrow high-frequency spatial information from the high-resolution auxiliary image and fuse this information into the target high-resolution hyperspectral image. Though these approaches have achieved very good performance, their major drawback is that a well co-registered auxiliary image with a higher resolution is needed. However, obtaining such a well co-registered auxiliary image would be arduous, if not impossible, in practical applications [8, 35, 64].

II-B Single Hyperspectral Image Super-Resolution

Without a co-registered auxiliary image, single hyperspectral image super-resolution methods have still attracted considerable attention in practice. The pioneering work was proposed by Akgun et al. [3], in which a hyperspectral image acquisition model and the projection onto convex sets (POCS) algorithm [6] are applied to reconstruct the high-resolution hyperspectral image. By incorporating low-rank and group-sparse constraints, Huang et al. [20] developed a novel method to tackle the unknown blurring problem. Recently, variants of sparse representation and dictionary learning based approaches have been widely studied [46, 27]. However, these methods have some drawbacks. First, they usually need to solve complex and time-consuming optimization problems in the testing phase. Second, the image priors are often hand-crafted and based on internal examples, without considering any information from external samples. Due to their superior performance in many computer vision problems, deep learning techniques have also been introduced into the single hyperspectral image super-resolution task very recently. For example, Yuan et al. [59] and Xie et al. [52] first super-resolved the hyperspectral image with DCNNs, and then applied nonnegative matrix factorization (NMF) to preserve the spectral characteristics of the intermediate results. Essentially, they utilized DCNNs and matrix factorization to exploit the spatial and spectral features separately, in a non-end-to-end manner. In [34], Mei et al. introduced a 3D fully convolutional neural network to extract the features of hyperspectral images. Although 3D convolution can well exploit the spectral correlation, its computational complexity is very high. Li et al. [29] proposed a grouped deep recursive residual network (GDRRN) by designing a group recursive module and embedding it into a global residual structure. This group-wise convolution and recursive structure enables it to yield very good performance. In our previous work [41], a feature pyramid block was designed to extract multi-scale features of hyperspectral images. Most recently, inspired by the work of [43], which states that the image prior can be found within a CNN itself, Sidorov et al. [40] developed an effective single hyperspectral-image restoration algorithm. In general, these deep methods achieve better results than traditional methods. However, due to the limited hyperspectral training samples and the high dimensionality of the spectral bands, it is difficult to fully exploit the spatial information and the correlation among the spectra of the hyperspectral data.

II-C Single Gray/RGB Image Super-Resolution

Recently, DCNN based approaches have achieved excellent performance on the single gray/RGB image super-resolution problem. The seminal work by Dong et al. [13] proposed a three-layer convolutional neural network for end-to-end image super-resolution (SRCNN) and achieved much better performance than conventional non-deep-learning based methods. Benefiting from residual learning, Kim et al. introduced very deep networks for image super-resolution in VDSR [23] and DRCN [24] and achieved better results than the three-layer SRCNN. The residual structure was then adopted in LapSRN [26], DRRN [42], and EDSR [30]. By simply stacking residual blocks, introducing feedback, or incorporating non-local operations into a recurrent neural network, RDN [63], DBPN [16], and NLRN [31] were proposed. Inspired by the SE block [19], Zhang et al. developed a very deep network named RCAN by incorporating a channel attention module [62]. Most recently, Dai et al. introduced the non-local block and presented a second-order attention network (SAN) to capture long-range dependencies [9]. Although fascinating results have been achieved, these methods are designed for gray/RGB images, which have only one or three channels. When directly applied to hyperspectral images, they neglect the spectral correlations among the spectra of the hyperspectral data, hindering the representation capacity of the network. In addition, for single gray/RGB image super-resolution, when one- or three-channel pictures are used as the network input, feature maps of 64 (or more) channels are usually used for feature extraction. If we applied this 20-fold (or more) channel-growth design scheme to hyperspectral images, which have hundreds of channels, it would lead to a sharp increase in parameters. However, there is not enough hyperspectral data to support model training the way there is for gray/RGB images.

III The Proposed SSPSR Method

III-A Network Architecture

In Fig. 1, we show the network architecture of the proposed SSPSR method. It mainly consists of two parts: the branch networks and the global network. Each branch network and the global network include shallow feature extraction, spatial-spectral deep feature extraction, an upsampling module, and a reconstruction part. We denote by $X$ the input low-resolution hyperspectral image, by $\hat{Y}$ the corresponding output high-resolution hyperspectral image, and by $Y$ the ground truth (original high-resolution hyperspectral image). Our goal is to predict the high-resolution hyperspectral image $\hat{Y}$ from the input low-resolution hyperspectral image $X$ with the proposed end-to-end super-resolution reconstruction network,

$\hat{Y} = f_{\mathrm{SSPSR}}(X)$,   (1)

where $f_{\mathrm{SSPSR}}(\cdot)$ denotes the function of the proposed SSPSR method.

Different from previous methods, which treat the hyperspectral image as multiple single-channel images (reconstructing them separately) or as a whole, we divide the whole hyperspectral image into several groups. In this way, we can not only exploit the correlations among neighboring spectral bands of the hyperspectral image, but also reduce the dimensionality of the features of each group. Inspired by the success of the recently proposed residual network structures, which have achieved very good performance in the field of image restoration, we specifically design an SSB based on the residual network structure. As shown in Fig. 1, the proposed SSPSR network contains several branch networks and a global network. Each branch network and the global network first extract shallow features and feed them to the SSPN, and then upscale the outputs of the SSPN with an intermediate upsampling factor. By cascading the parallel branch networks with the global network, we can super-resolve the input low-resolution hyperspectral image in a coarse-to-fine manner. In the following, we give details of the branch network and the global network, respectively.


Fig. 2: The network architecture of the spatial-spectral block (SSB), which consists of a spatial residual module and a spectral attention residual module. "+" and "$\odot$" denote element-wise addition and element-wise multiplication, respectively.

III-A1 The Branch Network

Specifically, the input low-resolution hyperspectral image $X$ is firstly divided into $G$ groups, $\{X_1, X_2, \ldots, X_G\}$. It should be noted that, in our settings, neighboring groups may have overlaps. More details about the settings can be found in the experiment section. For each group $X_g$, we directly apply one convolutional layer to obtain its shallow features, as investigated in previous work [30, 62],

$F_g^0 = f_{\mathrm{SF}}(X_g)$,   (2)

where $f_{\mathrm{SF}}(\cdot)$ denotes a convolution operation, i.e., the feature extraction layer. $F_g^0$ is then used for deep feature extraction with the proposed SSPN. Consequently, we further have

$F_g^D = f_{\mathrm{SSPN}}(F_g^0)$,   (3)

where $f_{\mathrm{SSPN}}(\cdot)$ denotes the function of the proposed SSPN, which contains $R$ SSBs; we present its details in the following.

The output of the SSPN can be treated as the deep features of one group of hyperspectral bands. To alleviate the burden of the final super-resolution reconstruction, we adopt a progressive super-resolution reconstruction strategy. In particular, we add an upsampling module in the middle of the network (before feeding the output of the branch SSPN to the global SSPN), which has proven to be a very effective technique, especially when the magnification is large. Thus, through the upsampling module we obtain the upscaled feature maps,

$F_g^{\uparrow} = f_{\mathrm{UP}}(F_g^D)$,   (4)

where $f_{\mathrm{UP}}(\cdot)$ and $F_g^{\uparrow}$ denote the upsampling module and the upscaled features, respectively. In this paper, we leverage the PixelShuffle [39] operator to conduct the upsampling procedure.
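As a concrete illustration, the following is a minimal PyTorch sketch (not the authors' released code) of such a PixelShuffle-based upsampling module; the channel count `n_feats` and the power-of-two scale decomposition are illustrative assumptions.

```python
import math

import torch
import torch.nn as nn

class Upsampler(nn.Sequential):
    """Upscale feature maps by `scale` using sub-pixel (PixelShuffle) convolution."""
    def __init__(self, scale, n_feats):
        assert scale & (scale - 1) == 0, "this sketch assumes a power-of-two scale"
        layers = []
        for _ in range(int(math.log2(scale))):
            # Expand channels 4x, then rearrange them into a 2x larger spatial grid.
            layers.append(nn.Conv2d(n_feats, 4 * n_feats, 3, padding=1))
            layers.append(nn.PixelShuffle(2))
        super().__init__(*layers)

feats = torch.randn(1, 64, 16, 16)
print(Upsampler(2, 64)(feats).shape)  # torch.Size([1, 64, 32, 32])
```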

Before feeding the upscaled features to the following global SSPN, we add one Conv layer after each branch upsampling module to reduce the number of feature channels to the spectral band number of the corresponding input group. Therefore, the output of the branch network has the same number of channels as the input grouped hyperspectral image, and we call this layer a "reconstruction" layer,

$X_g^{\uparrow} = f_{\mathrm{rec}}(F_g^{\uparrow})$,   (5)

where $f_{\mathrm{rec}}(\cdot)$ denotes the "reconstruction" layer (here we use the lowercase term "rec" to represent a pseudo-reconstruction operation). Through this Conv layer, each branch can be seen as a super-resolution reconstruction subnetwork. Another purpose of this layer is to make the branch SSPN and the global SSPN have the same network structure.

III-A2 The Global Network

After extracting features from the different groups with the branch networks, we concatenate the outputs of all branches (as shown by the "concatenation operator" in Fig. 1), i.e., $X^{\uparrow} = [X_1^{\uparrow}, X_2^{\uparrow}, \ldots, X_G^{\uparrow}]$. It should be noted that if the neighboring groups have overlaps, the integrated feature maps are generated according to the original spectral band positions, averaging the feature values in the overlapping bands. Similar to the local branches, before feeding the concatenated features into the global SSPN, we apply one Conv layer to extract the "shallow features",

$H^0 = f_{\mathrm{SF}}^{\mathrm{glo}}(X^{\uparrow})$,   (6)

where $f_{\mathrm{SF}}^{\mathrm{glo}}(\cdot)$ is similar to $f_{\mathrm{SF}}(\cdot)$ and is used to extract the corresponding "shallow features" from the concatenated features of all branch networks.

We then feed $H^0$ into the global SSPN, whose structure is the same as the local one,

$H^D = f_{\mathrm{SSPN}}^{\mathrm{glo}}(H^0)$,   (7)

where $f_{\mathrm{SSPN}}^{\mathrm{glo}}(\cdot)$ refers to the global version of $f_{\mathrm{SSPN}}(\cdot)$. In this way, we extract the spatial-spectral features of the input hyperspectral image.

To upscale the obtained features to the target size, we apply the upsampling module once more (progressive reconstruction) to generate the upscaled spatial-spectral feature maps,

$H^{\uparrow} = f_{\mathrm{UP}}^{\mathrm{glo}}(H^D)$,   (8)

where $f_{\mathrm{UP}}^{\mathrm{glo}}(\cdot)$ refers to the global version of $f_{\mathrm{UP}}(\cdot)$.

The final super-resolved hyperspectral image can then be obtained via one reconstruction layer, fed with the upscaled spatial-spectral features and the upscaled input hyperspectral image,

$\hat{Y} = f_{\mathrm{REC}}\big(H^{\uparrow} + f_{\mathrm{SF}}'(X^{\mathrm{Bic}})\big)$,   (9)

where $X^{\mathrm{Bic}}$ refers to the Bicubic upsampled version of the input low-resolution hyperspectral image, $f_{\mathrm{SF}}'(\cdot)$ is similar to $f_{\mathrm{SF}}(\cdot)$ and is used to extract shallow features from the Bicubic upscaled hyperspectral image for residual learning, and $f_{\mathrm{REC}}(\cdot)$ is the reconstruction operation, which has one Conv layer. Here, the "+" is referred to as the residual learning.
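To make the data flow of Eqs. (1)-(9) concrete, below is a minimal, self-contained sketch of the overall pipeline. It assumes simple two-layer convolutional stand-ins for the SSPN modules (the real SSPN is detailed in Section III-B) and a fixed intermediate ×2 upsampling in both the branch and global stages; all layer sizes are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSPSRSketch(nn.Module):
    def __init__(self, n_bands=128, s=8, o=2, n_feats=64):
        super().__init__()
        # Band indices of each (possibly overlapping) group, with the
        # "fallback" rule of Section III-D for the last group.
        self.groups, start = [], 0
        while start < n_bands:
            end = min(start + s, n_bands)
            self.groups.append(list(range(max(end - s, 0), end)))
            if end == n_bands:
                break
            start += s - o
        # One branch network shared by ALL groups (parameter sharing).
        self.branch_sf = nn.Conv2d(s, n_feats, 3, padding=1)            # Eq. (2)
        self.branch_sspn = nn.Sequential(                               # stand-in for Eq. (3)
            nn.Conv2d(n_feats, n_feats, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(n_feats, n_feats, 3, padding=1))
        self.branch_up = nn.Sequential(                                 # Eq. (4), x2
            nn.Conv2d(n_feats, 4 * n_feats, 3, padding=1), nn.PixelShuffle(2))
        self.branch_rec = nn.Conv2d(n_feats, s, 3, padding=1)           # Eq. (5)
        # Global network over the merged branch outputs.
        self.glob_sf = nn.Conv2d(n_bands, n_feats, 3, padding=1)        # Eq. (6)
        self.glob_sspn = nn.Sequential(                                 # stand-in for Eq. (7)
            nn.Conv2d(n_feats, n_feats, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(n_feats, n_feats, 3, padding=1))
        self.glob_up = nn.Sequential(                                   # Eq. (8), x2
            nn.Conv2d(n_feats, 4 * n_feats, 3, padding=1), nn.PixelShuffle(2))
        self.skip_sf = nn.Conv2d(n_bands, n_feats, 3, padding=1)        # f'_SF in Eq. (9)
        self.rec = nn.Conv2d(n_feats, n_bands, 3, padding=1)            # f_REC in Eq. (9)

    def forward(self, x):                     # x: (batch, n_bands, h, w)
        b, c, h, w = x.shape
        # Branch pass: accumulate per-band outputs and counts so that bands
        # shared by overlapping groups are averaged when the groups are merged.
        out = x.new_zeros(b, c, 2 * h, 2 * w)
        cnt = x.new_zeros(1, c, 1, 1)
        for idx in self.groups:
            f = self.branch_sf(x[:, idx])
            f = f + self.branch_sspn(f)                                 # local residual
            out[:, idx] = out[:, idx] + self.branch_rec(self.branch_up(f))
            cnt[:, idx] += 1
        out = out / cnt
        # Global pass with the Bicubic residual path of Eq. (9); total scale x4.
        feat = self.glob_sf(out)
        feat = self.glob_up(feat + self.glob_sspn(feat))
        bic = F.interpolate(x, scale_factor=4, mode="bicubic", align_corners=False)
        return self.rec(feat + self.skip_sf(bic))

sr = SSPSRSketch()(torch.randn(1, 128, 16, 16))
print(sr.shape)  # torch.Size([1, 128, 64, 64])
```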

III-B Spatial-Spectral Prior Network (SSPN)

Image super-resolution is a severely ill-posed problem, which calls for additional priors (regularization) to constrain the reconstruction procedure. Traditional approaches all try to design sophisticated regularization terms, such as total variation (TV), sparsity, and low-rank constraints, by hand [5, 14, 15, 61, 44]. Therefore, the performance of these algorithms highly depends on whether the designed prior can well characterize the observed data. For the hyperspectral image super-resolution problem, it is crucial to effectively exploit the intrinsic properties of hyperspectral images, i.e., the non-local self-similarity in the spatial domain and the high correlation across spectra. Previous manually designed constraints are insufficient for accurate hyperspectral image restoration.

In this paper, we advocate a spatial-spectral feature extraction network (SSPN) to exploit the spatial and spectral prior. In particular, the SSPN cascades $R$ spatial-spectral blocks (SSBs) and can be formulated as,

$F^i = h_{\mathrm{SSB}}^i(F^{i-1}), \quad i = 1, 2, \ldots, R$,   (10)

where $h_{\mathrm{SSB}}^i(\cdot)$ refers to the function of the $i$-th SSB, $F^{i-1}$ is the input of the $i$-th SSB, $F^i$ is the extracted features, and $F^0 = F_g^0$. Note that we use the notations of the local branch network to describe the detailed design of the local SSPN; the global SSPN is identical to the local one.

To facilitate the prediction of the target, a long skip connection is further introduced in the SSPN. It passes the low-frequency features of the input directly to the end, and lets the residual body pay more attention to the high-frequency information. Therefore, the output of the SSPN is obtained by

$F_g^D = F^0 + F^R$.   (11)

Here, the "+" is referred to as the residual learning (same as below). This residual-in-residual structure enables fast as well as stable training.

In this paper, we specifically design the SSB to exploit the spatial-spectral information of hyperspectral images. In particular, each SSB has two parts, i.e., a spatial residual module and a spectral attention residual module. The architecture of the SSB is illustrated in Fig. 2. For the spatial residual module, we leverage a standard residual block with 3×3 convolutions to extract the spatial features,

$F_{\mathrm{spa}}^i = h_{\mathrm{spa}}^i(F^{i-1})$,   (12)

where $h_{\mathrm{spa}}^i(\cdot)$ refers to the function of the spatial residual module of the $i$-th SSB, and $F_{\mathrm{spa}}^i$ is the spatial feature of the $i$-th SSB. The standard residual block can well extract the spatial information of a hyperspectral image.

However, due to the strong correlation between the spectra of a hyperspectral image, standard residual convolutional networks cannot effectively extract the spectral dependencies. The spectral correlation, i.e., the strong correlation among neighboring spectral bands of a hyperspectral image, has been widely used for hyperspectral image reconstruction and analysis [58, 50]. To exploit this correlation, we can use all the spectral bands $b_1, b_2, \ldots, b_B$ to obtain a newly reconstructed spectral band $\hat{b}_k$, i.e., $\hat{b}_k = \sum_{j=1}^{B} w_{kj} b_j$, where the $w_{kj}$ are the linear combination (reconstruction) weights. If similar spectral bands share similar weights, the correlation information will be embedded in the reconstructed spectral band, thus exploiting the correlation among neighboring spectral bands of the hyperspectral image. If we relax the weights to arbitrary learnable parameters, this is equivalent to learning a set of weight vectors $w_k = [w_{k1}, w_{k2}, \ldots, w_{kB}]$, and thus obtaining a new representation of the hyperspectral image. Mathematically, this can be achieved by 1×1 filters (a bottleneck layer) whose weights are the $w_k$. By designing a spectral network with 1×1 filters, we can expect to fully exploit the correlations between different spectral bands. It is worth noting that we further apply a ReLU layer to enhance its representation ability. Therefore, the structure of the SSB is designed as the combination of a spatial residual module and a spectral attention residual module, as shown in Fig. 2. Thus, we have

$F_{\mathrm{spe}}^i = h_{\mathrm{spe}}^i(F_{\mathrm{spa}}^i)$,   (13)

where $h_{\mathrm{spe}}^i(\cdot)$ denotes the spectral network of the $i$-th SSB.

To further improve the representation ability of the spectral information as well as the entire network, we take inspiration from Zhang et al. [62] and introduce the channel attention mechanism to adaptively rescale each channel-wise feature by modeling the interdependencies across feature spectra. Specifically, a global average pooling layer is applied to the feature maps extracted by the preceding spectral network to obtain a global context embedding vector. Then, two thin fully connected layers with a simple gating mechanism (a sigmoid function) are applied to learn the nonlinear interactions between spectra. We thus obtain the final channel scaling coefficient vector $a^i$, which is used to reweight the extracted feature maps. The output of the spectral attention residual module is then computed by

$F^i = F_{\mathrm{spa}}^i + a^i \odot F_{\mathrm{spe}}^i$,   (14)

where $\odot$ denotes element-wise (channel-wise) multiplication.
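As a minimal PyTorch sketch (an illustration consistent with Fig. 2 and Eqs. (12)-(14), not the authors' released code), one SSB could look as follows; the reduction ratio of the attention bottleneck is an assumed value.

```python
import torch
import torch.nn as nn

class SSB(nn.Module):
    """One spatial-spectral block, sketching Eqs. (12)-(14)."""
    def __init__(self, n_feats, reduction=16):  # reduction ratio is an assumption
        super().__init__()
        # Spatial residual module, Eq. (12): standard 3x3 residual block.
        self.spatial = nn.Sequential(
            nn.Conv2d(n_feats, n_feats, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(n_feats, n_feats, 3, padding=1))
        # Spectral module, Eq. (13): 1x1 convolutions that mix channels
        # (spectral features) without touching the spatial dimensions.
        self.spectral = nn.Sequential(
            nn.Conv2d(n_feats, n_feats, 1), nn.ReLU(True),
            nn.Conv2d(n_feats, n_feats, 1))
        # Channel attention: global pooling, two thin layers, sigmoid gate.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(n_feats, n_feats // reduction, 1), nn.ReLU(True),
            nn.Conv2d(n_feats // reduction, n_feats, 1), nn.Sigmoid())

    def forward(self, x):
        f_spa = x + self.spatial(x)       # Eq. (12) with short skip connection
        f_spe = self.spectral(f_spa)      # Eq. (13)
        a = self.attention(f_spe)         # channel scaling vector a^i
        return f_spa + a * f_spe          # Eq. (14)
```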
Losses   $\ell_2$   $\ell_1$   $\ell_1$+SSTV
CC       0.9535     0.9560     0.9565
SAM      2.5152     2.3581     2.3527
RMSE     0.0117     0.0115     0.0114
ERGAS    5.1304     4.9903     4.9313
PSNR     40.0703    40.3515    40.3612
SSIM     0.9401     0.9437     0.9441
TABLE I: Average quantitative performance of different loss functions over four testing images of the Chikusei dataset with respect to six PQIs when the upsampling factor is ×4.

III-C Loss Function

In order to measure the super-resolution performance, several cost functions have been investigated to make the super-resolution results approximate the ground truth high-resolution images. In the current literature, $\ell_1$, $\ell_2$, perceptual, and adversarial losses are the most commonly used loss functions. Compared with perceptual and adversarial losses, which may restore details that do not exist in the original images (undesirable in the remote sensing field), the $\ell_1$ and $\ell_2$ losses are more credible. As for the $\ell_2$ loss, it encourages finding pixel-wise averages of plausible solutions, which are typically overly smooth. Because the $\ell_1$ loss can effectively penalize small errors and maintain better convergence throughout the training phase, we adopt the $\ell_1$ loss to measure the reconstruction accuracy of the network. Specifically, the $\ell_1$ loss is defined by the mean absolute error (MAE) between all the reconstructed images and the ground truth:

$L_1(\Theta) = \frac{1}{N} \sum_{n=1}^{N} \lVert \hat{Y}_n - Y_n \rVert_1$,   (15)

where $\hat{Y}_n$ and $Y_n$ are the $n$-th reconstructed high-resolution hyperspectral image and ground truth hyperspectral image, respectively, $N$ denotes the number of images in one training batch, and $\Theta$ refers to the parameter set of our network.

However, the above-mentioned $\ell_1$ loss is primarily designed for general image restoration tasks. Although it can well preserve the spatial information of the super-resolution results, the reconstructed spectral information may be distorted because the correlations among spectral bands are ignored. In order to simultaneously ensure the spatial and spectral credibility of the reconstruction results, we introduce the spatial-spectral total variation (SSTV) [1]. It extends the conventional total variation model and accounts for both the spatial and the spectral correlation. In this paper, we add the SSTV term to the $\ell_1$ loss to impose spatial and spectral smoothness simultaneously,

$L_{\mathrm{SSTV}}(\Theta) = \frac{1}{N} \sum_{n=1}^{N} \left( \lVert \nabla_h \hat{Y}_n \rVert_1 + \lVert \nabla_v \hat{Y}_n \rVert_1 + \lVert \nabla_s \hat{Y}_n \rVert_1 \right)$,   (16)

where $\nabla_h$, $\nabla_v$, and $\nabla_s$ are functions that compute the horizontal, vertical, and spectral gradients of $\hat{Y}_n$.

In summary, the final objective loss for the proposed model is a weighted sum of the two losses:

$L(\Theta) = L_1(\Theta) + \alpha L_{\mathrm{SSTV}}(\Theta)$,   (17)

where $\alpha$ is used to balance the contributions of the two losses. In our experiments, we set it to a fixed constant.
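For illustration, here is a minimal sketch of this combined objective; the gradient operators are implemented as simple forward differences, and the value of `alpha` is a placeholder assumption, since the constant is not restated here.

```python
import torch

def sstv_loss(y):
    """Eq. (16): L1 norms of horizontal, vertical, and spectral forward
    differences; `y` has shape (batch, bands, height, width)."""
    grad_h = (y[:, :, :, 1:] - y[:, :, :, :-1]).abs().mean()
    grad_v = (y[:, :, 1:, :] - y[:, :, :-1, :]).abs().mean()
    grad_s = (y[:, 1:, :, :] - y[:, :-1, :, :]).abs().mean()
    return grad_h + grad_v + grad_s

def sspsr_loss(y_hat, y, alpha=1e-3):
    """Eq. (17): L1 (MAE) reconstruction loss plus alpha * SSTV.
    The alpha value here is a placeholder assumption."""
    return (y_hat - y).abs().mean() + alpha * sstv_loss(y_hat)
```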

In Table I, we report the reconstruction results (in terms of objective measurements) when using different losses (more details regarding the experimental settings can be found in the experiment section). Clearly, the $\ell_1$ loss is much more suitable for our task, because it can effectively penalize small errors and maintain better convergence throughout the training phase. By introducing the SSTV constraint, slightly better results are achieved.

III-D Implementation Details

We use the PyTorch library (https://pytorch.org) to implement and train the proposed SSPSR network. We train different models to super-resolve the hyperspectral images for scale factors 4 and 8, with random initialization. We use the ADAM optimizer [25] with an initial learning rate of 1e-4, which decays by a factor of 10 after 30 epochs. In our experiments, we find that it takes about 40 epochs to achieve stable performance. The models are trained with a batch size of 32. As in much previous work, we apply Bicubic interpolation to downsample the high-resolution hyperspectral images to obtain the corresponding low-resolution hyperspectral images.
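The stated optimization recipe can be sketched as follows; the model and data below are tiny stand-ins so the snippet runs, not the authors' training script.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins: in practice `model` is the SSPSR network and the dataset holds
# the extracted LR/HR hyperspectral patch pairs.
model = nn.Conv2d(31, 31, 3, padding=1)
pairs = TensorDataset(torch.rand(64, 31, 16, 16), torch.rand(64, 31, 16, 16))
train_loader = DataLoader(pairs, batch_size=32)  # batch size 32, as stated

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # initial lr 1e-4
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30], gamma=0.1)

for epoch in range(40):  # ~40 epochs reported for stable performance
    for lr_hsi, hr_hsi in train_loader:
        optimizer.zero_grad()
        loss = (model(lr_hsi) - hr_hsi).abs().mean()  # L1 part of Eq. (17)
        loss.backward()
        optimizer.step()
    scheduler.step()  # divide the learning rate by 10 at epoch 30
```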

Unless otherwise specified, in the following experiments we set the spectral band number ($s$) of each group to 8 and the overlap ($o$) between neighboring groups to 2. To properly handle the "edge" spectral bands, we adopt a so-called "fallback" dividing strategy: when the last group has fewer than $s$ spectral bands, we select the last $s$ bands as the last group. Therefore, the number of groups can be obtained by the following equation,

$G = \lceil (B - o) / (s - o) \rceil$,   (18)

where $\lceil \cdot \rceil$ is the function that rounds its argument up to the nearest integer and $B$ is the total number of spectral bands. In the SSPN, the number of spatial-spectral blocks ($R$) is set to 3. We set the kernel size of all Conv layers to 3×3, except for those in the spectral residual modules, where the kernel size is set to 1×1. To ensure that the size of the feature maps is not changed, the zero-padding strategy is applied for the Conv layers with kernel size 3×3. The Conv layers in the shallow feature extraction and the SSPN have $C$ filters, except for the channel-downscaling layer, i.e., the reconstruction layer after the upscaled features in the branch networks (please refer to Eq. (5)).
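As a quick illustration of Eq. (18) and the "fallback" rule, the following sketch enumerates the group index ranges and checks the group count; the function and variable names are ours, not from the paper's code.

```python
import math

def band_groups(B, s=8, o=2):
    """Band-index ranges of each group; the last group "falls back" to the
    final s bands when fewer than s bands remain."""
    groups, start = [], 0
    while start < B:
        end = min(start + s, B)
        groups.append(range(max(end - s, 0), end))
        if end == B:
            break
        start += s - o
    return groups

# For the default Chikusei setting (B=128, s=8, o=2) this yields G=21 groups,
# matching Eq. (18): G = ceil((B - o) / (s - o)).
B, s, o = 128, 8, 2
assert len(band_groups(B, s, o)) == math.ceil((B - o) / (s - o)) == 21
```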

Models   Scale   CC   SAM   RMSE   ERGAS   PSNR   SSIM
Our 4 0.9565 2.3527 0.0114 4.9313 40.3612 0.9441
Our - w/o GS 4 0.9548 2.4048 0.0116 5.0399 40.1901 0.9424
Our - w/o PU 4 0.9520 2.5239 0.0119 5.2329 39.9185 0.9388
Our - w/o PS 4 0.9537 2.4152 0.0118 5.0991 40.0712 0.9410
Our - w/o SA 4 0.9563 2.3597 0.0115 4.9443 40.3408 0.9438
Our 8 0.8766 4.0127 0.0191 8.3355 35.8368 0.8538
Our - w/o GS 8 0.8622 4.5121 0.0199 8.8459 35.3857 0.8427
Our - w/o PU 8 0.8585 4.5542 0.0202 9.0285 35.2489 0.8358
Our - w/o PS 8 0.8732 4.0587 0.0194 8.4621 35.7074 0.8522
Our - w/o SA 8 0.8760 4.0198 0.0192 8.3650 35.8144 0.8538

GS: Grouping Strategy, PU: Progressive Upsampling, PS: Parameter Sharing, SA: Spectral Attention

TABLE II: Ablation study. Quantitative comparisons among variants of the proposed SSPSR method over four testing images of the Chikusei dataset with respect to six PQIs.

bands ($s$)   overlaps ($o$)   groups ($G$)   params   FLOPs   CC   SAM   RMSE   ERGAS   PSNR   SSIM
128 0 1 14.12 11.16 0.9548 2.4048 0.0116 5.0399 40.1901 0.9424
1 0 128 13.53 215.87 0.9558 2.3456 0.0116 4.9609 40.2757 0.9432
8 0 16 13.56 35.34 0.9562 2.3670 0.0115 4.9540 40.3286 0.9437
8 2 21 13.56 43.51 0.9565 2.3527 0.0114 4.9313 40.3612 0.9441
8 4 31 13.56 59.87 0.9567 2.3520 0.0114 4.9251 40.3759 0.9443
8 6 61 13.56 108.94 0.9568 2.3512 0.0113 4.9205 40.3801 0.9445
TABLE III: Performance of some typical settings of the spectral band number ($s$) of each group and the overlap ($o$) between neighboring groups when using the grouping strategy of the proposed SSPSR method.

IV Experiments and Results

In this section, we present a detailed analysis and evaluation of our approach on three public hyperspectral image datasets: two remote sensing hyperspectral image datasets, i.e., the Chikusei dataset [56] (https://www.sal.t.u-tokyo.ac.jp/hyperdata/) and the Pavia Centre dataset (http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes), and one natural scene hyperspectral image dataset, i.e., the CAVE dataset [55] (https://www.cs.columbia.edu/CAVE/databases/multispectral/). We compare the proposed method with eight methods, including four state-of-the-art deep single gray/RGB image super-resolution methods, VDSR [23], EDSR [30], RCAN [62], and SAN [9], and four representative and most relevant deep single hyperspectral image super-resolution methods, TLCNN [59], 3DCNN [34], GDRRN [29], and DeepPrior [40]. We carefully tune the hyperparameters of these methods to achieve their best performance. Bicubic interpolation is included as the baseline.

Evaluation measures. Six widely used quantitative picture quality indices (PQIs) are employed to evaluate the performance of our method: cross correlation (CC) [32], spectral angle mapper (SAM) [60], root mean squared error (RMSE), erreur relative globale adimensionnelle de synthese (ERGAS) [45], peak signal-to-noise ratio (PSNR), and structure similarity (SSIM) [47]. For the PSNR and SSIM of the reconstructed hyperspectral images, we report their mean values over all spectral bands. CC, SAM, and ERGAS are three quality indices widely adopted in the hyperspectral fusion task, while the remaining three are commonly used quantitative image restoration quality indices. The best values for these indices are 1, 0, 0, 0, $+\infty$, and 1, respectively.
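For reference, a minimal sketch of two of these PQIs, the band-averaged PSNR and SAM, is given below; it follows their standard definitions (assuming reflectance values scaled to [0, 1]) rather than any specific evaluation toolbox.

```python
import numpy as np

def mean_psnr(ref, rec):
    """Average per-band PSNR (dB); inputs are (bands, height, width) in [0, 1]."""
    psnrs = []
    for band_ref, band_rec in zip(ref, rec):
        mse = np.mean((band_ref - band_rec) ** 2)
        psnrs.append(10 * np.log10(1.0 / mse))
    return float(np.mean(psnrs))

def sam_degrees(ref, rec, eps=1e-8):
    """Mean spectral angle (degrees) between per-pixel spectra."""
    ref2d = ref.reshape(ref.shape[0], -1)   # (bands, pixels)
    rec2d = rec.reshape(rec.shape[0], -1)
    cos = (ref2d * rec2d).sum(axis=0) / (
        np.linalg.norm(ref2d, axis=0) * np.linalg.norm(rec2d, axis=0) + eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())

ref = np.random.rand(31, 64, 64)
rec = np.clip(ref + 0.01 * np.random.randn(*ref.shape), 0.0, 1.0)
print(mean_psnr(ref, rec), sam_degrees(ref, rec))
```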


Methods   Scale   CC   SAM   RMSE   ERGAS   PSNR   SSIM
Bicubic 4 0.9212 3.4040 0.0156 6.7564 37.6377 0.8949
VDSR [23] 4 0.9227 3.6642 0.0148 6.8708 37.7755 0.9065
EDSR [30] 4 0.9510 2.5580 0.0121 5.3708 39.8289 0.9354
RCAN [62] 4 0.9518 2.5397 0.0120 5.3205 39.9041 0.9359
SAN [9] 4 0.9514 2.5547 0.0120 5.3349 39.8671 0.9357
TLCNN [59] 4 0.9196 3.8573 0.0150 6.7522 37.7251 0.9008
3DCNN [34] 4 0.9355 3.1174 0.0140 6.0026 38.6091 0.9127
GDRRN [29] 4 0.9369 2.500 0.0137 5.9540 38.7198 0.9193
DeepPrior [40] 4 0.9293 3.5590 0.0147 6.2096 38.1923 0.9010
SSPSR 4 0.9565 2.3527 0.0114 4.9894 40.3612 0.9413
Bicubic 8 0.8314 5.0436 0.0224 4.8488 34.5049 0.8228
VDSR [23] 8 0.8344 5.1778 0.0216 4.9052 34.5661 0.8305
EDSR [30] 8 0.8636 4.4205 0.0201 4.5091 35.4217 0.8501
RCAN [62] 8 0.8665 4.3757 0.0198 4.5229 35.5044 0.8531
SAN [9] 8 0.8664 4.3922 0.0198 4.5170 35.5018 0.8527
TLCNN [59] 8 0.8249 5.3041 0.0224 4.8843 34.3488 0.8215
3DCNN [34] 8 0.8428 4.8432 0.0215 4.5964 34.8375 0.8313
GDRRN [29] 8 0.8421 4.3160 0.0214 4.5879 34.8153 0.8357
DeepPrior [40] 8 0.8366 5.3386 0.0219 4.6789 34.6692 0.8126
SSPSR 8 0.8766 4.0127 0.0191 4.3120 35.8368 0.8624
TABLE IV: Average quantitative comparisons of ten different approaches over four testing images from the Chikusei dataset with respect to six PQIs.

Methods   Scale   CC   SAM   RMSE   ERGAS   PSNR   SSIM
Bicubic 4 0.8594 6.1399 0.0437 6.8814 27.5874 0.6961
VDSR [23] 4 0.8659 6.7004 0.0419 6.6991 27.8821 0.7242
EDSR [30] 4 0.8922 5.8657 0.0379 6.0199 28.7981 0.7722
RCAN [62] 4 0.8917 5.9785 0.0376 6.0485 28.8165 0.7719
SAN [9] 4 0.8927 5.9590 0.0374 5.9903 28.8554 0.7740
TLCNN [59] 4 0.8563 6.9013 0.0431 6.9139 27.6682 0.7141
3DCNN [34] 4 0.8813 5.8669 0.0396 6.2665 28.4114 0.7501
GDRRN [29] 4 0.8829 5.4750 0.0393 6.2264 28.4726 0.7530
DeepPrior [40] 4 0.8723 6.2665 0.0410 6.4845 28.1061 0.7365
SSPSR 4 0.9003 5.4612 0.0362 5.8014 29.1581 0.7903
Bicubic 8 0.6969 7.8478 0.0630 4.8280 24.5972 0.4725
VDSR [23] 8 0.7116 8.0769 0.0611 4.6851 24.8483 0.5017
EDSR [30] 8 0.7215 7.8594 0.05983 4.6359 25.0041 0.5130
RCAN [62] 8 0.7152 7.9992 0.0604 4.6930 24.9183 0.5086
SAN [9] 8 0.7104 8.0371 0.0609 4.7646 24.8485 0.5054
TLCNN [59] 8 0.6880 8.3843 0.0633 4.9143 24.5215 0.4790
3DCNN [34] 8 0.7163 7.6878 0.0605 4.6469 24.9336 0.5038
GDRRN [29] 8 0.7111 7.3531 0.0607 4.6220 24.8648 0.5014
DeepPrior [40] 8 0.7007 7.9281 0.0618 4.7366 24.7252 0.4963
SSPSR 8 0.7359 7.3312 0.0586 4.5266 25.1985 0.5365
TABLE V: Average quantitative comparisons of ten different approaches over four testing images from the Pavia Centre dataset with respect to six PQIs.

IV-A Ablation Studies

The proposed SSPSR method contains four main components: Grouping Strategy (GS), Progressive Upsampling (PU), Parameter Sharing (PS), and Spectral Attention (SA). To validate the effectiveness of these components, we modify our model and compare the resulting variants. We use the training images from the Chikusei dataset as a training set and evaluate the super-resolution performance (in terms of average objective results) on the four testing images from the Chikusei dataset (more details regarding the experimental settings on the Chikusei dataset can be found in the following subsection). Table II tabulates the four variants of the proposed method, in which the second column denotes the upsampling scale. In the following, we give a detailed analysis of them.

Grouping Strategy (GS). To effectively exploit the correlation among neighboring spectral bands of a hyperspectral image and to reduce the parameters of the model, we design a grouping strategy that divides the input hyperspectral image into overlapping groups. To verify the effectiveness of this strategy, we remove the grouping strategy and treat all bands as one group. As shown in Table II, "Our - w/o GS", where the grouping strategy is discarded, performs worse. The grouping strategy leads to a considerable performance improvement, e.g., +0.17 dB for ×4 and +0.45 dB for ×8. The gains on the other objective indicators are also considerable.

In addition to the above with/without GS comparisons, we also report the number of parameters and FLOPs as well as the six PQIs of our method under some typical settings of the spectral band number ($s$) of each group and the overlap ($o$) between neighboring groups. The group number ($G$) is calculated by Eq. (18). As shown in Table III, when $s = 128$ and $o = 0$, our method considers all the spectral bands as a whole group ($G = 1$) and there is no grouping strategy, i.e., the case of "Our - w/o GS". When $s = 1$ and $o = 0$, $G = 128$, and our method treats each spectral band as a separate group, which can be seen as a special case, i.e., band-wise grouping. From the results, we can see that regardless of whether we treat all spectra as a whole or treat them separately, the performance cannot match that of our proposed grouping strategy. Comparing the two schemes, the band-wise one obtains better performance due to the combination of grouping and parameter sharing. However, it also greatly increases the computational overhead (please refer to the FLOPs), because the more branches the model has, the more computation is required.


Fig. 3: Reconstructed composite images of one test hyperspectral image in the Chikusei dataset with spectral bands 70-100-36 as R-G-B when the upsampling factor is ×4. From left to right, top to bottom: the ground truth, results of VDSR [23], EDSR [30], RCAN [62], SAN [9], TLCNN [59], 3DCNN [34], GDRRN [29], DeepPrior [40], and the proposed SSPSR method. The bottom table shows the PSNR (dB) and SSIM results of the reconstructed RGB composite images of the different methods.

We also report the performance of the proposed method with different settings of the overlap between neighboring groups, i.e., $o = 0, 2, 4$, and $6$. As the overlap increases from 0 to 6, the performance of our method gradually improves, but the computational cost of the model also keeps growing. It is worth noting that, because we adopt a parameter sharing strategy, when we fix the spectral band number $s$ and change the overlap $o$, the number of parameters of the model stays the same. To achieve a balance between the number of parameters and FLOPs and the objective results, in this paper we set $s$ and $o$ to 8 and 2, respectively.

Progressive Upsampling (PU). To learn the end-to-end relationship between the low-resolution input and the high-resolution output, there are two commonly used upsampling frameworks: pre-upsampling super-resolution and post-upsampling super-resolution. They either increase the parameters of the network or increase the difficulty of training. Inspired by the Laplacian pyramid super-resolution network [26], we leverage a progressive upsampling super-resolution framework. In this way, a difficult task is decomposed into easier subtasks, which not only greatly reduces the learning difficulty but also yields better performance. In Table II, we report the performance of the proposed SSPSR method without the PU strategy, i.e., "Our - w/o PU", obtained by removing the upsampling module in the branch networks. We can see that our method with PU achieves better performance on all six indices, including the spatial reconstruction fidelity (e.g., RMSE, PSNR, and SSIM) and the spectral consistency (CC, SAM, and ERGAS). Especially when the upsampling factor is large, this strategy appears to be paramount. For example, the improvements of CC and PSNR for ×8 are greater than those for ×4, e.g., +0.0045 and +0.44 dB for ×4, and +0.0181 and +0.59 dB for ×8.


Fig. 4: Reconstructed composite images of one test hyperspectral image in the Chikusei dataset with spectral bands 70-100-36 as R-G-B when the upsampling factor is ×8. From left to right, top to bottom: the ground truth, results of VDSR [23], EDSR [30], RCAN [62], SAN [9], TLCNN [59], 3DCNN [34], GDRRN [29], DeepPrior [40], and the proposed SSPSR method. The bottom table shows the PSNR (dB) and SSIM results of the reconstructed RGB composite images of the different methods.

Parameter Sharing (PS). In the proposed SSPSR method, in order to make the training process more efficient, we share the network parameters of each branch across all groups. In Table II, we tabulate the comparison results of the proposed SSPSR method with and without the parameter sharing strategy. Obviously, by parameter sharing we greatly reduce the computational complexity of the model. Although the parameter sharing strategy reduces the parameters of the model, it does not weaken the representation ability of the model. Through the parameter sharing strategy (since the network parameters are mainly dominated by the SSPN modules, the model without sharing keeps $G$ separate branch SSPNs while the shared model keeps only one), we can make full use of the training samples provided by the different branches (training on "more" data with only one set of branch network parameters), so that we obtain a more stable model. From the results, we can see that the overall performance of the parameter sharing strategy is even better than that of the parameter-unsharing variant on all six PQIs under both ×4 and ×8.

Spectral Attention (SA). To exploit the spatial-spectral prior, we apply the bottleneck network (with 1×1 filters) to extract the correlations among neighboring spectral bands of the hyperspectral image. In addition, the attention module is introduced to model the interdependencies between the spectra of the hyperspectral data. To verify the effectiveness of the SA module, we compare the performance with and without the SA module. As shown in Table II, with the SA mechanism our method achieves a slight performance gain compared to "Our - w/o SA", the variant without the SA mechanism. Although the improvement of each objective index brought by the SA module is relatively small, the relative improvement of spectral fidelity (i.e., SAM) is more obvious than that of spatial reconstruction fidelity (i.e., PSNR): 0.30% vs. 0.05% for ×4 and 0.18% vs. 0.06% for ×8. This indicates that introducing SA is more conducive to the representation of spectral features.

IV-B Results on Chikusei Dataset

The Chikusei dataset was acquired by the Headwall Hyperspec-VNIR-C imaging sensor over an urban area in Chikusei, Ibaraki, Japan, on 29 July 2014. It has 128 spectral bands in the spectral range from 363 nm to 1018 nm and 2517×2335 pixels in total.

Due to missing information at the edges, we first crop the center region of the image to obtain a subimage of 2304×2048×128 pixels, which is further divided into training and test data. Specifically, the top region of this image is extracted to form the testing data, which consists of four non-overlapping hyperspectral images of 512×512×128 pixels. From the remaining region of the subimage, we extract overlapping patches as reference high-resolution hyperspectral images for training (10% of the training data is included as a validation set). When the upsampling factor is 4, the extracted patches are 64×64 pixels (with 32-pixel overlap); when the upsampling factor is 8, the extracted patches are 128×128 pixels (with 64-pixel overlap). We use different patch sizes for different factors mainly for the following reason: if the factor is large and the patch size is small, the input information is very limited, which hinders the training of the network. Therefore, we use a large patch size for the large factor. Note that the low-resolution hyperspectral images are generated by Bicubic downsampling (the Matlab function imresize) of the ground truth with a factor of 4 or 8.
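The patch preparation for the ×4 setting can be sketched as follows, using per-band OpenCV bicubic resizing as a stand-in for Matlab's imresize (the two are close but not bit-identical):

```python
import cv2
import numpy as np

def extract_pairs(hr_image, patch=64, stride=32, scale=4):
    """Yield (LR, HR) patch pairs; `hr_image` is (height, width, bands)."""
    h, w, bands = hr_image.shape
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            hr = hr_image[top:top + patch, left:left + patch]
            # Bicubic downsampling, band by band (stand-in for imresize).
            lr = np.stack(
                [cv2.resize(hr[:, :, b], (patch // scale, patch // scale),
                            interpolation=cv2.INTER_CUBIC)
                 for b in range(bands)], axis=2)
            yield lr, hr

hr_region = np.random.rand(256, 256, 128).astype(np.float32)
lr0, hr0 = next(extract_pairs(hr_region))
print(lr0.shape, hr0.shape)  # (16, 16, 128) (64, 64, 128)
```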

Table IV reports the average objective performance over the four testing images for all comparison algorithms, where bold represents the best result and underline denotes the second best. We can easily observe that the proposed SSPSR method significantly outperforms the other algorithms with respect to all objective evaluation indices. The average PSNR value of our method is more than 0.30 dB higher than that of the second best method. As a two-step method (it first super-resolves the hyperspectral images and then conducts decomposition), TLCNN [59] cannot reconstruct the target hyperspectral images well. Similar to our method, GDRRN [29] also adopts a group strategy and thus can well exploit the spectral information (it achieves the second best results in terms of SAM). DeepPrior [40] is a very novel method; however, it takes much time to optimize the results and there is no good strategy to determine when to stop the iteration. RCAN [62] and SAN [9] achieve similar results and are slightly better than EDSR [30]. This may be because the former two consider channel attention and thus can better capture the spectral features of the hyperspectral data.

Fig. 3 and Fig. 4 show the reconstructed composite images of one test hyperspectral image in the Chikusei dataset for the different comparison methods with upsampling factors ×4 and ×8, respectively. We can easily observe that the proposed SSPSR method performs better than the other algorithms, with better recovery of both finer-grained textures and coarser-grained structures (please refer to the regions marked with red boxes). At the bottom of these visual comparison results, we also report the PSNR and SSIM values of the reconstructed composite images. Our SSPSR approach still shows considerable advantages.


Fig. 5: Reconstructed composite images (first row) and error maps (second row) of one test hyperspectral image in the Pavia Centre dataset with spectral bands 32-21-11 as R-G-B with upsampling factor ×4. From left to right: the ground truth, results of EDSR [30], RCAN [62], SAN [9], 3DCNN [34], GDRRN [29], and the proposed SSPSR method. The bottom images are the reconstruction error maps of the corresponding methods.

Fig. 6: Reconstructed composite images (first row) and error maps (second row) of one test hyperspectral image in the Pavia Centre dataset with spectral bands 32-21-11 as R-G-B with upsampling factor ×8. From left to right: the ground truth, results of EDSR [30], RCAN [62], SAN [9], 3DCNN [34], GDRRN [29], and the proposed SSPSR method. The bottom images are the reconstruction error maps of the corresponding methods.

IV-C Results on Pavia Centre Dataset

The Pavia Centre dataset was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor during a flight campaign over the center area of Pavia, northern Italy, in 2001. It has 102 spectral bands (the water vapor absorption and noisy spectral bands have been removed from the initial 115 spectral bands) and 1096×1096 pixels in total. It should be noted that in the Pavia Centre scene, regions that contain no information were removed, leaving a meaningful region of 1096×715 pixels.

To evaluate the proposed SSPSR method, we crop the center region of the image to obtain a subimage of 1096×715×102 pixels, which is further divided into training and testing data. Specifically, the left part of this image is extracted to form the testing data, which consists of four non-overlapping hyperspectral images of 223×223 pixels. From the remaining region of the subimage, we extract overlapping patches as reference high-resolution hyperspectral images for training (10% of the training data is included as a validation set). Similar to the previous settings, the patch sizes and low-resolution hyperspectral images are generated accordingly.

Table V tabulates the average performance in terms of the six PQIs over the four testing images for all competing approaches. We can easily observe that the proposed SSPSR method significantly outperforms the other algorithms with respect to almost all objective evaluation indices. The average PSNR value of our method is 0.3 dB (×4) and 0.2 dB (×8) higher than that of the second best method. As the most competitive general gray/RGB image super-resolution methods, EDSR, RCAN, and SAN achieve quite pleasing results. However, their SAM indices are relatively poor when compared with the single hyperspectral image super-resolution methods, i.e., 3DCNN [34] and GDRRN [29].

Fig. 5 and Fig. 6 show the reconstructed composite images and error maps of one test hyperspectral image in the Pavia Centre dataset for the six most competitive approaches with upsampling factors ×4 and ×8, respectively. The results of EDSR [30], 3DCNN [34], and GDRRN [29] are very blurry, while RCAN [62] and SAN [9] seem to introduce some noise. The proposed SSPSR method maintains the main structural information. From the error maps, we notice that the error map of the proposed method contains little obvious contour information of the image, which indicates that our method recovers this information well. It should be noted that, compared with the ×4 case, the visual results with upsampling factor ×8 are worse. In addition, when we compare the visual results of Fig. 4 and Fig. 6, we also notice that the reconstructed results on the Pavia Centre dataset are worse than those on the Chikusei dataset. We think this is mainly due to the limited number of training samples of the Pavia Centre database. This is also a major drawback of deep learning based methods: they require a large number of training samples, otherwise it is difficult to train a model with promising generalization ability.


Fig. 7: Reconstructed images of stuffed_toys at 480nm, 580nm, and 680nm with upsampling factor ×4. The first 3 rows are the reconstructed results for the 480nm, 580nm, and 680nm spectral bands, respectively; the last 3 rows show the error maps of the comparison methods. In (g), we report the RMSE, PSNR (dB), and SSIM results of the competing methods.

Fig. 8: Reconstructed images of real_and_fake_apples at 480nm, 580nm, and 680nm with upsampling factor ×8. The first 3 rows are the reconstructed results for the 480nm, 580nm, and 680nm spectral bands, respectively; the last 3 rows show the error maps of the comparison methods. In (g), we report the RMSE, PSNR (dB), and SSIM results of the competing methods.

IV-D Results on CAVE Dataset

The previous experiments were conducted on the Chikusei and Pavia Centre datasets, which are both remotely sensed hyperspectral images. To further verify the effectiveness of the proposed SSPSR method, we also conduct comparison experiments on hyperspectral images of natural scenes. Specifically, we use the CAVE multispectral image database, because it is widely used in many multispectral image recovery tasks. The database consists of 32 scenes of everyday objects with a spatial size of 512×512 and 31 spectral bands ranging from 400nm to 700nm in 10nm steps. To prepare samples for training, we randomly select 20 hyperspectral images from the database (10% of the samples are randomly selected for validation). When the upsampling factor is 4, we extract patches of 64×64 pixels (32-pixel overlap) for training; when the upsampling factor is 8, the extracted patches are 128×128 pixels (with 64-pixel overlap). The corresponding low-resolution hyperspectral images are generated by Bicubic downsampling with a factor of 4 or 8. The remaining 12 hyperspectral images of the database are used for testing, where the original images are treated as the ground truth high-resolution hyperspectral images and the low-resolution hyperspectral inputs are generated in the same way as for the training samples. For this dataset, we set the spectral band number ($s$) of each group to 4 and the overlap ($o$) between neighboring groups to 1. Since the CAVE dataset provides more training samples, we use a larger number of filters ($C$) to design our network.

We compare the proposed SSPSR method with several very competitive approaches: EDSR [30], RCAN [62], 3DCNN [34], and GDRRN [29]. The average CC, SAM, RMSE, ERGAS, PSNR, and SSIM results of the competing methods for different upsampling factors on the CAVE dataset are reported in Table VI. From these results, we notice that the 3DCNN method performs worse than the other methods. Clearly, the proposed SSPSR method outperforms all the competing methods. In particular, it performs much better than EDSR [30] and RCAN [62], which focus on exploiting the spatial prior. On average, the PSNR and SSIM values of the proposed SSPSR method for upsampling factors ×4/×8 are 0.3/0.4 dB and 0.002/0.012 higher than those of the second best method, respectively.

Fig. 7 and Fig. 8 show the reconstructed HR hyperspectral images and the corresponding error maps at 480nm, 580nm, and 680nm produced by the competing methods for the test images stuffed_toys and real_and_fake_apples with upsampling factors ×4 and ×8, respectively. From the visual reconstruction results, we can see that all the comparison methods can reconstruct the high-resolution spatial structures of the hyperspectral images reasonably well. From the error maps, we observe that the proposed method and the RCAN method achieve the best reconstruction fidelity in recovering the details of the original hyperspectral images, for example, the edges of the checkerboard and the contours of the dog's ears and the apples (please refer to the regions marked with red boxes). In subfigure (g), we also report the RMSE, PSNR, and SSIM results of each spectral band for the competing methods. Obviously, the proposed SSPSR method performs best in most cases. 3DCNN [34] and GDRRN [29], which are designed for hyperspectral images, achieve favorable results in some cases, but their performance seems to be unstable across different spectral bands.
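The per-band curves in subfigure (g) can be reproduced along the following lines; this is a sketch using scikit-image, assuming the cubes are (bands, H, W) float arrays in [0, 1], not the authors' exact plotting code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def per_band_metrics(hr, sr, data_range=1.0):
    """RMSE, PSNR (dB), and SSIM for every spectral band of two cubes."""
    out = []
    for b in range(hr.shape[0]):
        rmse = float(np.sqrt(np.mean((hr[b] - sr[b]) ** 2)))
        out.append((rmse,
                    peak_signal_noise_ratio(hr[b], sr[b], data_range=data_range),
                    structural_similarity(hr[b], sr[b], data_range=data_range)))
    return out  # list of (rmse, psnr, ssim) tuples, one per band
```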


Method       Scale   CC       SAM      RMSE     ERGAS    PSNR      SSIM
Bicubic      ×4      0.9868   4.1759   0.0212   5.2719   34.7214   0.9277
EDSR [30]    ×4      0.9931   3.5499   0.0149   3.5921   38.1575   0.9522
RCAN [62]    ×4      0.9935   3.6050   0.0142   3.4178   38.7585   0.9530
SAN [9]      ×4      0.9935   3.5951   0.0143   3.4200   38.7188   0.9531
3DCNN [34]   ×4      0.9928   3.3463   0.0154   3.7042   37.9759   0.9522
GDRRN [29]   ×4      0.9934   3.4143   0.0145   3.5086   38.4507   0.9538
SSPSR        ×4      0.9939   3.1846   0.0138   3.3384   39.0892   0.9553
Bicubic      ×8      0.9666   5.8962   0.0346   4.2175   30.2056   0.8526
EDSR [30]    ×8      0.9778   5.6865   0.0279   3.3903   32.4072   0.8842
RCAN [62]    ×8      0.9791   5.9771   0.0268   3.1781   32.9544   0.8884
SAN [9]      ×8      0.9795   5.8683   0.0267   3.1437   33.0012   0.8888
3DCNN [34]   ×8      0.9755   5.0948   0.0292   3.5536   31.9691   0.8863
GDRRN [29]   ×8      0.9769   5.3597   0.0280   3.3460   32.5763   0.8890
SSPSR        ×8      0.9805   4.4874   0.0257   3.0419   33.4340   0.9010
TABLE VI: Quantitative comparisons of different approaches over 12 testing images from the CAVE dataset with respect to six PQIs.

V Conclusions

In this paper, a novel deep neural network is introduced to address the single hyperspectral image super-resolution problem. In particular, in order to discover the spatial and spectral correlation characteristics of hyperspectral data, we carefully designed a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation among the different spectral bands. In addition, to cope with the facts that the training samples of hyperspectral images are limited and their dimensionality is high, a group convolution (with shared network parameters) and progressive upsampling framework is proposed. In this way, we greatly reduce the number of model parameters and make it possible to obtain stable training results under conditions of small data and a large number of spectral bands. In the introduced network, information flows flexibly through short, long, and global skip connections via residual learning. To regularize the network outputs, we adopt a spatial-spectral total variation (SSTV) based constraint to preserve the edge sharpness and spectral correlations of the super-resolved high-resolution hyperspectral image. Evaluations on three public hyperspectral datasets demonstrate that our model not only achieves the best performance in terms of several commonly used objective indicators, but also generates clear high-resolution images that are perceptually closer to the ground truth than those of state-of-the-art methods.
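As a concrete illustration of the SSTV constraint mentioned above, a minimal PyTorch sketch is given below; it penalizes the L1 norms of first-order differences along the two spatial axes and the spectral axis, in the spirit of [1]. The exact formulation, and the weight alpha balancing it against the reconstruction loss, are our assumptions and may differ from our implementation details.

```python
import torch
import torch.nn.functional as F

def sstv_loss(y):
    """Spatial-spectral total variation of a (N, bands, H, W) batch."""
    d_h = (y[:, :, 1:, :] - y[:, :, :-1, :]).abs().mean()  # vertical differences
    d_w = (y[:, :, :, 1:] - y[:, :, :, :-1]).abs().mean()  # horizontal differences
    d_c = (y[:, 1:, :, :] - y[:, :-1, :, :]).abs().mean()  # spectral differences
    return d_h + d_w + d_c

# sketch of a full training objective (alpha is a hypothetical weight):
# loss = F.l1_loss(sr, hr) + alpha * sstv_loss(sr)
```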

References

  • [1] H. K. Aggarwal and A. Majumdar (2016) Hyperspectral image denoising using spatio-spectral total variation. IEEE Geoscience and Remote Sensing Letters 13 (3), pp. 442–446. Cited by: §III-C.
  • [2] B. Aiazzi, L. Alparone, S. Baronti, A. Garzelli, and M. Selva (2012) Twenty-five years of pansharpening: a critical review and new developments. In Signal and Image Processing for Remote Sensing, pp. 552–599. Cited by: §II-A.
  • [3] T. Akgun, Y. Altunbasak, and R. M. Mersereau (2005) Super-resolution reconstruction of hyperspectral images. IEEE Transactions on Image Processing 14 (11), pp. 1860–1875. Cited by: §II-B.
  • [4] N. Akhtar, F. Shafait, and A. Mian (2014) Sparse spatio-spectral representation for hyperspectral image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 63–78. Cited by: §I.
  • [5] N. Akhtar, F. Shafait, and A. Mian (2015) Bayesian sparse representation for hyperspectral image super resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3631–3640. Cited by: §II-A, §III-B.
  • [6] H. H. Bauschke and J. M. Borwein (1996) On projection algorithms for solving convex feasibility problems. SIAM review 38 (3), pp. 367–426. Cited by: §II-B.
  • [7] R. A. Borsoi, T. Imbiriba, and J. C. M. Bermudez (2020) Super-resolution for hyperspectral and multispectral image fusion accounting for seasonal spectral variability. IEEE Transactions on Image Processing 29, pp. 116–127. Cited by: §II-A.
  • [8] C. Chen, Y. Li, W. Liu, and J. Huang (2015) SIRF: simultaneous satellite image registration and fusion in a unified framework. IEEE Transactions on Image Processing 24 (11), pp. 4213–4224. Cited by: §I, §II-A.
  • [9] T. Dai, J. Cai, Y. Zhang, S. Xia, and L. Zhang (2019) Second-order attention network for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11065–11074. Cited by: §II-C, Fig. 3, Fig. 4, Fig. 5, Fig. 6, §IV-B, §IV-C, TABLE IV, TABLE V, TABLE VI, §IV.
  • [10] X. Deng and P. L. Dragotti (2020) Deep coupled ista network for multi-modal image super-resolution. IEEE Transactions on Image Processing 29, pp. 1683–1698. Cited by: §II-A.
  • [11] R. Dian, S. Li, A. Guo, and L. Fang (2018) Deep hyperspectral image sharpening. IEEE Transactions on Neural Networks and Learning Systems 29 (11), pp. 5345–5355. Cited by: §II-A.
  • [12] R. Dian and S. Li (2019) Hyperspectral image super-resolution via subspace-based low tensor multi-rank regularization. IEEE Transactions on Image Processing 28 (10), pp. 5135–5146. Cited by: §II-A.
  • [13] C. Dong, C. C. Loy, K. He, and X. Tang (2015) Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (2), pp. 295–307. Cited by: §I, §II-C.
  • [14] W. Dong, F. Fu, G. Shi, X. Cao, J. Wu, G. Li, and X. Li (2016) Hyperspectral image super-resolution via non-negative structured sparse representation. IEEE Transactions on Image Processing 25 (5), pp. 2337–2352. Cited by: §II-A, §III-B.
  • [15] X. Han, B. Shi, and Y. Zheng (2018) Self-similarity constrained sparse representation for hyperspectral image super-resolution. IEEE Transactions on Image Processing 27 (11), pp. 5625–5637. Cited by: §II-A, §III-B.
  • [16] M. Haris, G. Shakhnarovich, and N. Ukita (2018) Deep back-projection networks for super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1664–1673. Cited by: §II-C.
  • [17] S. He, H. Zhou, Y. Wang, W. Cao, and Z. Han (2016) Super-resolution reconstruction of hyperspectral images via low rank tensor modeling and total variation regularization. In 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 6962–6965. Cited by: §I.
  • [18] X. He, L. Condat, J. M. Bioucas-Dias, J. Chanussot, and J. Xia (2014) A new pansharpening method based on spatial and spectral sparsity priors. IEEE Transactions on Image Processing 23 (9), pp. 4160–4174. Cited by: §II-A.
  • [19] J. Hu, L. Shen, and G. Sun (2017) Squeeze-and-excitation networks. CoRR abs/1709.01507. Cited by: §II-C.
  • [20] H. Huang, A. G. Christodoulou, and W. Sun (2014) Super-resolution hyperspectral imaging with unknown blurring by low-rank and group-sparse modeling. In 2014 IEEE International Conference on Image Processing (ICIP), pp. 2155–2159. Cited by: §I, §II-B.
  • [21] H. Irmak, G. B. Akar, and S. E. Yuksel (2018) A map-based approach for hyperspectral imagery super-resolution. IEEE Transactions on Image Processing 27 (6), pp. 2942–2951. Cited by: §I.
  • [22] J. Jiang Hyperspectral image super-resolution benchmark. Note: https://github.com/junjun-jiang/Hyperspectral-Image-Super-Resolution-Benchmark Cited by: §II.
  • [23] J. Kim, J. Kwon Lee, and K. Mu Lee (2016) Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1646–1654. Cited by: §II-C, Fig. 3, Fig. 4, TABLE IV, TABLE V, §IV.
  • [24] J. Kim, J. Kwon Lee, and K. Mu Lee (2016) Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1637–1645. Cited by: §II-C.
  • [25] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §III-D.
  • [26] W. Lai, J. Huang, N. Ahuja, and M. Yang (2017) Deep laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 624–632. Cited by: §II-C, §IV-A.
  • [27] J. Li, Q. Yuan, H. Shen, X. Meng, and L. Zhang (2016) Hyperspectral image super-resolution by spectral mixture analysis and spatial–spectral group sparsity. IEEE Geoscience and Remote Sensing Letters 13 (9), pp. 1250–1254. Cited by: §II-B.
  • [28] K. Li, W. Xie, Q. Du, and Y. Li (2019) DDLPS: detail-based deep laplacian pansharpening for hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing 57 (10), pp. 8011–8025. Cited by: §II-A.
  • [29] Y. Li, L. Zhang, C. Ding, W. Wei, and Y. Zhang (2018) Single hyperspectral image super-resolution with grouped deep recursive residual network. In Proceedings of the IEEE International Conference on Multimedia Big Data (BigMM), pp. 1–4. Cited by: §II-B, Fig. 3, Fig. 4, Fig. 5, Fig. 6, §IV-B, §IV-C, §IV-C, §IV-D, §IV-D, TABLE IV, TABLE V, TABLE VI, §IV.
  • [30] B. Lim, S. Son, H. Kim, S. Nah, and K. Mu Lee (2017) Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition workshops (CVPRW), pp. 136–144. Cited by: §I, §II-C, §III-A1, Fig. 3, Fig. 4, Fig. 5, Fig. 6, §IV-B, §IV-C, §IV-D, TABLE IV, TABLE V, TABLE VI, §IV.
  • [31] D. Liu, B. Wen, Y. Fan, C. C. Loy, and T. S. Huang (2018) Non-local recurrent network for image restoration. In Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), pp. 1673–1682. Cited by: §II-C.
  • [32] L. Loncan, L. B. De Almeida, J. M. Bioucas-Dias, X. Briottet, J. Chanussot, N. Dobigeon, S. Fabre, W. Liao, G. A. Licciardi, M. Simoes, et al. (2015) Hyperspectral pansharpening: a review. IEEE Geoscience and Remote Sensing Magazine 3 (3), pp. 27–46. Cited by: §IV.
  • [33] J. Ma, W. Yu, C. Chen, P. Liang, X. Guo, and J. Jiang (2020) Pan-GAN: an unsupervised learning method for pan-sharpening in remote sensing image fusion using a generative adversarial network. Information Fusion. Cited by: §II-A.
  • [34] S. Mei, X. Yuan, J. Ji, Y. Zhang, S. Wan, and Q. Du (2017) Hyperspectral image spatial super-resolution via 3d full convolutional neural network. Remote Sensing 9 (11), pp. 1139. Cited by: §II-B, Fig. 3, Fig. 4, Fig. 5, Fig. 6, §IV-C, §IV-C, §IV-D, §IV-D, TABLE IV, TABLE V, TABLE VI, §IV.
  • [35] Z. Pan and H. Shen (2018) Multispectral image super-resolution via rgb image fusion and radiometric calibration. IEEE Transactions on Image Processing 28 (4), pp. 1783–1797. Cited by: §I, §II-A.
  • [36] S. C. Park, M. K. Park, and M. G. Kang (2003) Super-resolution image reconstruction: a technical overview. IEEE signal processing magazine 20 (3), pp. 21–36. Cited by: §I.
  • [37] Y. Qu, H. Qi, and C. Kwan (2018) Unsupervised sparse dirichlet-net for hyperspectral image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2511–2520. Cited by: §II-A.
  • [38] L. J. Rickard, R. W. Basedow, E. F. Zalewski, P. R. Silverglate, and M. Landers (1993) HYDICE: an airborne system for hyperspectral imaging. In Optical Engineering and Photonics in Aerospace Sensing, pp. 173–179. Cited by: §I.
  • [39] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1874–1883. Cited by: §III-A1.
  • [40] O. Sidorov and J. Y. Hardeberg (2019) Deep hyperspectral prior: single-image denoising, inpainting, super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 3844–3851. Cited by: §II-B, Fig. 3, Fig. 4, §IV-B, TABLE IV, TABLE V, §IV.
  • [41] H. Sun, Z. Zhong, D. Zhai, X. Liu, and J. Jiang (2019) Hyperspectral image super-resolution using multi-scale feature pyramid network. In International Forum on Digital TV and Wireless Multimedia Communications, pp. 49–61. Cited by: §II-B.
  • [42] Y. Tai, J. Yang, and X. Liu (2017) Image super-resolution via deep recursive residual network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3147–3155. Cited by: §II-C.
  • [43] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2018) Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9446–9454. Cited by: §II-B.
  • [44] M. A. Veganzones, M. Simoes, G. Licciardi, N. Yokoya, J. M. Bioucas-Dias, and J. Chanussot (2015) Hyperspectral super-resolution of locally low rank images from complementary multisource data. IEEE Transactions on Image Processing 25 (1), pp. 274–288. Cited by: §II-A, §III-B.
  • [45] L. Wald (2002) Data fusion: definitions and architectures: fusion of images of different spatial resolutions. Presses des MINES. Cited by: §IV.
  • [46] Y. Wang, X. Chen, Z. Han, S. He, et al. (2017) Hyperspectral image super-resolution via nonlocal low-rank tensor approximation and total variation regularization. Remote Sensing 9 (12), pp. 1286. Cited by: §I, §II-B.
  • [47] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, et al. (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612. Cited by: §IV.
  • [48] Q. Wei, N. Dobigeon, and J. Tourneret (2015) Bayesian fusion of multi-band images. IEEE Journal of Selected Topics in Signal Processing 9 (6), pp. 1117–1127. Cited by: §I.
  • [49] B. Wen, U. S. Kamilov, D. Liu, H. Mansour, and P. T. Boufounos (2018) DEEPCASD: an end-to-end approach for multi-spectral image super-resolution. In ICASSP, Cited by: §II-A.
  • [50] E. Wycoff, T. Chan, K. Jia, W. Ma, and Y. Ma (2013) A non-negative sparse promoting algorithm for high resolution hyperspectral imaging. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1409–1413. Cited by: §III-B.
  • [51] Q. Xie, M. Zhou, Q. Zhao, D. Meng, W. Zuo, and Z. Xu (2019) Multispectral and hyperspectral image fusion by ms/hs fusion net. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1585–1594. Cited by: §II-A.
  • [52] W. Xie, X. Jia, Y. Li, and J. Lei (2019) Hyperspectral image super-resolution using deep feature matrix factorization. IEEE Transactions on Geoscience and Remote Sensing 57 (8), pp. 6055–6067. Cited by: §II-B.
  • [53] Y. Xu, Z. Wu, J. Chanussot, and Z. Wei (2019) Nonlocal patch tensor sparse representation for hyperspectral image super-resolution. IEEE Transactions on Image Processing 28 (6), pp. 3034–3047. Cited by: §II-A.
  • [54] J. Yang, X. Fu, Y. Hu, Y. Huang, X. Ding, and J. Paisley (2017) PanNet: a deep network architecture for pan-sharpening. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 5449–5457. Cited by: §II-A.
  • [55] F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar (2010) Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum. IEEE Transactions on Image Processing 19 (9), pp. 2241–2253. Cited by: §IV.
  • [56] N. Yokoya and A. Iwasaki (2016-05) Airborne hyperspectral data over chikusei. Technical Report SAL-2016-05-27, Space Application Laboratory, University of Tokyo, Japan. Cited by: §IV.
  • [57] N. Yokoya, C. Grohnfeldt, and J. Chanussot (2017) Hyperspectral and multispectral data fusion: a comparative review of the recent literature. IEEE Geoscience and Remote Sensing Magazine 5 (2), pp. 29–56. Cited by: §I.
  • [58] N. Yokoya, T. Yairi, and A. Iwasaki (2011) Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. IEEE Transactions on Geoscience and Remote Sensing 50 (2), pp. 528–537. Cited by: §I, §II-A, §III-B.
  • [59] Y. Yuan, X. Zheng, and X. Lu (2017) Hyperspectral image superresolution by transfer learning. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 10 (5), pp. 1963–1974. Cited by: §II-B, Fig. 3, Fig. 4, §IV-B, TABLE IV, TABLE V, §IV.
  • [60] R. H. Yuhas, A. F. Goetz, and J. W. Boardman (1992) Discrimination among semi-arid landscape endmembers using the spectral angle mapper (sam) algorithm. In JPL, Summaries of the Third Annual JPL Airborne Geoscience Workshop. Volume 1: AVIRIS Workshop, pp. 147–149. Cited by: §IV.
  • [61] L. Zhang, W. Wei, C. Bai, Y. Gao, and Y. Zhang (2018) Exploiting clustering manifold structure for hyperspectral imagery super-resolution. IEEE Transactions on Image Processing 27 (12), pp. 5969–5982. Cited by: §II-A, §III-B.
  • [62] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu (2018) Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 286–301. Cited by: §I, §II-C, §III-A1, §III-B, Fig. 3, Fig. 4, Fig. 5, Fig. 6, §IV-B, §IV-C, §IV-D, TABLE IV, TABLE V, TABLE VI, §IV.
  • [63] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu (2018) Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2472–2481. Cited by: §II-C.
  • [64] Y. Zhou, A. Rangarajan, and P. D. Gader (2020) An integrated approach to registration and fusion of hyperspectral and multispectral images. IEEE Transactions on Geoscience and Remote Sensing 58 (5), pp. 3020–3033. External Links: Document, ISSN 1558-0644 Cited by: §I, §II-A.