Hyperspectral Pansharpening Based on Improved Deep Image Prior and Residual Reconstruction

07/06/2021 · Wele Gedara Chaminda Bandara et al. · Johns Hopkins University

Hyperspectral pansharpening aims to fuse a low-resolution hyperspectral image (LR-HSI) with a registered panchromatic (PAN) image to generate an enhanced HSI with high spectral and spatial resolution. Recently proposed HS pansharpening methods have obtained remarkable results using deep convolutional networks (ConvNets), which typically consist of three steps: (1) up-sampling the LR-HSI, (2) predicting the residual image via a ConvNet, and (3) obtaining the final fused HSI by adding the outputs from the first and second steps. Recent methods have leveraged Deep Image Prior (DIP) to up-sample the LR-HSI due to its excellent ability to preserve both spatial and spectral information without learning from large data sets. However, we observed that the quality of up-sampled HSIs can be further improved by introducing an additional spatial-domain constraint to the conventional spectral-domain energy function. We define our spatial-domain constraint as the L1 distance between the predicted PAN image and the actual PAN image. To estimate the PAN image of the up-sampled HSI, we also propose a learnable spectral response function (SRF). Moreover, we noticed that the residual image between the up-sampled HSI and the reference HSI mainly consists of edge information and very fine structures. In order to accurately estimate this fine information, we propose a novel over-complete network, called HyperKite, which focuses on learning high-level features by constraining the receptive field from increasing in the deep layers. We perform experiments on three HSI datasets to demonstrate the superiority of our DIP-HyperKite over the state-of-the-art pansharpening methods. The deployment codes, pre-trained models, and final fusion outputs of our DIP-HyperKite and the methods used for comparison will be made publicly available at https://github.com/wgcban/DIP-HyperKite.git.


I Introduction

Hyperspectral images (HSIs) with a large number of spectral bands have gained immense attention in the field of remote sensing due to their applications in broad research areas such as classification [1], unmixing [2], anomaly detection [3], change detection [4], etc. However, due to the limited incident energy available when capturing an image, hyperspectral imaging systems face trade-offs between spectral resolution, spatial resolution, and signal-to-noise ratio (SNR) [5]. For this reason, hyperspectral imaging systems can provide images with high spectral resolution but low spatial resolution. In contrast, multispectral imaging systems can provide data with high spatial resolution but fewer spectral bands (e.g., panchromatic images, or multispectral images (MSIs) with three or four spectral bands). Low spatial resolution in HSIs leads to relatively poor performance in some practical remote sensing applications, such as road topology extraction [6] and spectral unmixing [7]. Therefore, full-resolution HSIs with high spatial and spectral resolution are desired. One way to obtain such ideal HSIs is to fuse high spectral resolution HSIs with high spatial resolution PAN/MSIs. This fusion process is called HS pansharpening in the remote sensing literature and is indeed a form of super-resolution.

Traditional pansharpening methods can be mainly divided into four classes [5]: (1) component substitution (CS), (2) multi-resolution analysis (MRA), (3) Bayesian, and (4) matrix factorization. Component substitution methods rely on substituting the spatial component of the HSI with the MSI/PAN image. The CS family contains algorithms such as Gram-Schmidt adaptive (GSA) [8, 9], principal component analysis (PCA) [10, 11, 12], and intensity-hue-saturation (IHS) [13]. Even though the CS methods usually generate pansharpened HSIs with accurate spatial information, they sometimes suffer from critical spectral distortions. The MRA approaches are based on injecting the spatial details obtained through the multi-scale decomposition of the MSI/PAN image into the HSI. In order to extract the spatial details from the PAN image, several algorithms have been proposed in the literature, such as the decimated wavelet transform (DWT) [14], the undecimated wavelet transform (UDWT) [15], smoothing filter-based intensity modulation (SFIM) [16], the modulation transfer function with generalized Laplacian pyramid (MTF-GLP) [17], and MTF-GLP with high-pass modulation (MTF-GLP-HPM) [18]. In contrast to the CS methods, the MRA family performs better in spectral preservation, but it is more sensitive to registration errors, which may cause critical distortions in the spatial domain. Due to these inherent advantages and disadvantages of the CS and MRA approaches, there have been works which attempt to combine both. One representative hybrid CS-MRA algorithm is guided filter PCA (GFPCA) [19]. The Bayesian-based methods provide a convenient way to regularize the fusion problem by modeling the posterior distribution of the target HSI given the LR-HSI and the MSI/PAN image. Examples of algorithms based on the Bayesian inference framework include convex regularization under a Bayesian framework (abbreviated as HySure) [20], the naive Bayesian Gaussian prior (abbreviated as BF) [21], and the sparsity-promoting Gaussian prior (abbreviated as BFS) [22]. Finally, coupled non-negative matrix factorization (abbreviated as CNMF) is an example of the matrix factorization-based methods, which regularizes the fusion problem by using priors from spectral unmixing [23]. However, the fusion performance of traditional pansharpening approaches is generally limited due to their inadequate representation ability. In addition, the algorithms mentioned above may result in severe quality degradation when their assumptions do not align with a particular dataset. Furthermore, most traditional pansharpening approaches typically reach the optimal solution through an iterative process, which is time-consuming and inefficient.

Recently, deep learning (DL) models based on convolutional neural networks (ConvNets) have also been introduced for the HS pansharpening problem due to ConvNets' excellent ability to learn high-level features automatically. ConvNet-based HS pansharpening methods generally consist of three steps:

  1. Up-sampling step: Up-sampling the LR-HSI to the spatial resolution of the PAN image,

  2. Residual reconstruction step: Concatenating the up-sampled HSI and PAN image along the spectral dimension and passing it through a residual learning network to learn the residual image,

  3. Final fusion step: Obtaining the final fused HSI by adding the up-sampled HSI and the residual image.

There have been many methods proposed to up-sample the LR-HSI to the spatial resolution of the PAN image. In the earliest studies, nearest-neighbor and bicubic interpolation were the most common methods used to perform up-sampling. However, these methods conduct up-sampling on each band of the LR-HSI successively, thus ignoring the high spectral correlation of HSIs, which may lead to spectral distortions [24, 25]. In order to minimize the spectral distortion, data-driven up-sampling techniques (i.e., deep super-resolution networks) have also been utilized in HS pansharpening. The LapSRN [26] network is an example of such a data-driven super-resolution method, which progressively super-resolves an LR image in a coarse-to-fine manner within a Laplacian pyramid framework. However, LapSRN requires a large number of images for training, which is impractical in the HS domain due to the limited number of publicly available datasets. Ulyanov et al. [25] addressed this problem with a deep-learning-based super-resolution framework called deep image prior (DIP). It uses a randomly initialized ConvNet to up-sample an image, using the network structure itself as an image prior, much like bicubic up-sampling relies on a handcrafted prior. This method does not require any training data, yet it produces much cleaner results with sharper edges. Motivated by the super-resolution performance of DIP in the RGB domain, researchers have applied DIP to the HS pansharpening problem [27, 24] and achieved impressive results. However, we observed that the energy function used in HS DIP up-sampling directly adopts the energy function formulated for the RGB DIP process, which only imposes a spectral-domain constraint by computing the distance between the down-sampled version of the target up-sampled HSI and the LR-HSI. The existing HS DIP methods do not impose any spatial-domain constraint by utilizing the available PAN image. We address this issue by introducing an additional spatial-domain constraint to the HS DIP process as our first contribution.

For residual reconstruction, various ConvNet architectures have been proposed in the literature to accurately predict the residual component between the up-sampled HSI and the reference HSI with little spectral and spatial distortion. Among these, Giuseppe et al. [28] were the first to introduce a simple three-layer ConvNet architecture for residual learning. Lin et al. [29] improved the spatial and spectral prediction capability of this work (abbreviated as HyperPNN) by introducing spectral and spatial prediction modules. To further enhance the representational power of ConvNets, attention mechanisms [30] have also been introduced. Zheng et al. [24] proposed a spatial and spectral attention mechanism (abbreviated as DHP-DARN) for residual learning, in which several channel-spatial-attention residual blocks are cascaded to adaptively learn more informative channel-wise and spatial-domain features simultaneously. More recently, Xu et al. [31] proposed a design (abbreviated as SDPNet) based on two encoder-decoder networks that extract deep-level features from the two types of source images, with densely connected blocks to strengthen feature propagation. However, we experimentally observed that most of the existing residual learning methods fail when predicting high-frequency information, such as edges and delicate structures in the residual image. The main reason for this is the increasing receptive field of the network in the deep layers. Motivated by this observation, we introduce an over-complete network, called HyperKite, for the residual reconstruction task as our second contribution; it constrains the receptive field from increasing in deep layers, thus extracting more high-frequency information.

The main contributions of this paper are summarized as follows:

  1. A novel spatial constraint is introduced for the DIP up-sampling process. To the best of our knowledge, this is the first study that integrates both spatial and spectral constraints into the DIP up-sampling. The proposed spatial constraint significantly improves the spatial and spectral performance measures of the up-sampled HSIs.

  2. An over-complete network, called HyperKite, is proposed for the residual reconstruction, which is highly capable of extracting high-frequency information of the residual image by appropriately constraining the receptive field of the network.

  3. We conduct extensive experiments to clearly demonstrate the improvements brought by our contributions to HS pansharpening. We compared the fusion performance of DIP-HyperKite with both conventional and deep-learning-based approaches. The deployment codes, pre-trained models, and final fusion results of our DIP-HyperKite, as well as those of the comparison methods discussed in the results and discussion, will be made publicly available at https://github.com/wgcban/DIP-HyperKite.git.

The rest of this paper is organized as follows. Section II provides some basics of DIP and over-complete representations. In Section III, the proposed DIP-HyperKite is described in detail. Section IV describes the datasets and performance metrics used in our experiments. In Section V, the experimental results and discussions for the different datasets are presented. Finally, conclusions are drawn in Section VI.

II Related Work

II-A DIP for HSI up-sampling

Generally, ConvNets have an excellent ability to learn realistic image priors from large amounts of visual data, placing them in leading positions on the benchmarks of various image processing tasks [32, 33]. Contrary to the general opinion that deep networks require large data to capture image priors, DIP [25] has shown that a randomly initialized network can capture low-level image statistics before any training. Concretely, in HS pansharpening, DIP can generate the up-sampled HSI $\hat{\mathbf{X}}$ of the LR-HSI $\mathbf{Y}$ with spatial up-sampling factor $r$ by taking a fixed, randomly initialized tensor $\mathbf{z}$ as the input and utilizing the deep network as a parametric function $f_\theta(\cdot)$. Next, the network is optimized over its parameters to obtain the up-sampled HSI as follows:

$$\hat{\mathbf{X}} = \arg\min_{\mathbf{X}} \; E(\mathbf{X}; \mathbf{Y}) + R(\mathbf{X}), \qquad (1)$$

where $E(\mathbf{X}; \mathbf{Y})$ is an energy function that controls the fidelity toward the LR-HSI $\mathbf{Y}$, and $R(\mathbf{X})$ is a regularization function based on prior knowledge. In [25], it has been shown that the regularization term can be implicitly substituted by the deep network itself. Therefore, the minimization problem in (1) simplifies to optimizing the network over its parameters as follows:

$$\theta^* = \arg\min_{\theta} \; E\!\left(f_\theta(\mathbf{z}); \mathbf{Y}\right), \qquad \hat{\mathbf{X}} = f_{\theta^*}(\mathbf{z}), \qquad (2)$$

where $\theta^*$ denotes the optimal set of parameters of the network. Furthermore, the most straightforward and commonly utilized energy function in HS pansharpening is the distance [29] between the down-sampled version of the up-sampled HSI and the LR-HSI:

$$E\!\left(f_\theta(\mathbf{z}); \mathbf{Y}\right) = \left\| \mathcal{D}_r\!\left(f_\theta(\mathbf{z})\right) - \mathbf{Y} \right\|_1, \qquad (3)$$

where $\mathcal{D}_r(\cdot)$ denotes the down-sampling operator by a factor of $r$.
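To make the optimization in (2)-(3) concrete, the following is a minimal PyTorch sketch of spectral-only DIP up-sampling. The network `dip_net`, the tensor shapes, and the use of bilinear interpolation as a stand-in for the decimation operator $\mathcal{D}_r(\cdot)$ are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def dip_upsample(dip_net, z, lr_hsi, r, num_iter=1300, lr=1e-3):
    """Spectral-only DIP up-sampling, cf. (2)-(3).

    dip_net : a randomly initialized U-Net-like network (never pre-trained)
    z       : fixed random input tensor of shape (1, C_z, r*h, r*w)
    lr_hsi  : LR-HSI Y of shape (1, C, h, w)
    r       : spatial up-sampling factor
    """
    optimizer = torch.optim.Adam(dip_net.parameters(), lr=lr)
    for _ in range(num_iter):
        optimizer.zero_grad()
        x_up = dip_net(z)                        # candidate up-sampled HSI
        # decimate back to the LR grid; stands in for D_r(.) in (3)
        x_down = F.interpolate(x_up, scale_factor=1.0 / r,
                               mode='bilinear', align_corners=False)
        loss = F.l1_loss(x_down, lr_hsi)         # spectral-domain energy (3)
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return dip_net(z)                        # final up-sampled HSI
```

Note that no training pairs are involved: the only supervision is the LR-HSI itself, and the network structure supplies the prior.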

II-B Over-complete ConvNets

Fig. 1: (a) Effect of an under-complete ConvNet on the receptive field: the deeper layers focus on a larger region of the input, thus extracting high-level/low-frequency information. (b) Effect of an over-complete ConvNet on the receptive field: the deeper layers focus on a much smaller region of the input, thus extracting low-level/high-frequency information.

Most current architectures in deep learning are "encoder-decoder" based [34, 35, 36]. Here, the encoder translates the high-dimensional input to a low-dimensional latent space, while the decoder learns to take the latent low-dimensional representation back to a high-dimensional output. These types of architectures learn low-level features in their initial layers and high-level features in their deeper layers. They are termed under-complete networks, as the input is taken to a lower spatial dimension in the latent space.

In signal processing, over-complete dictionaries are widely used for their highly robust characteristics [37]. The number of basis functions is greater than the number of input signal samples, which enables higher flexibility for capturing structure in data. In [38], over-complete auto-encoders were found to be better feature extractors for denoising than under-complete auto-encoders. In an over-complete network [39], the encoder takes the input data to a higher spatial dimension, unlike a traditional encoder. This is achieved by using an up-sampling layer after every convolutional layer in the encoder. Using up-sampling layers in the encoder causes the receptive field to be constrained in the deep layers, so the deep layers of the network learn more fine-context, high-frequency information than those of under-complete networks. The increase in receptive field for an over-complete network at the $n$-th layer can be generalized as follows:

$$\mathrm{RF}_n = 3 + \sum_{i=1}^{n-1} 2\left(\frac{1}{2}\right)^{i}, \qquad (4)$$

where the initial receptive field of the $3 \times 3$ conv filter is assumed to be $3 \times 3$ on the image $\mathbf{I}$. This phenomenon is visualized in Fig. 1. As shown in Fig. 1(b), by employing an up-sampling layer after every convolutional layer in the encoder, the over-complete network restricts the receptive field to a smaller region, which forces the network to learn very fine edges as it focuses heavily on smaller regions. This is completely different from conventional under-complete architectures, which perform down-sampling after each convolution block and thereby make the network focus on a much larger region of the input, as shown in Fig. 1(a).
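As a quick illustration of (4), the sketch below tallies the receptive field (in input pixels) of a stack of 3x3 convolutions when each convolution is followed by 2x up-sampling (over-complete) versus 2x down-sampling (under-complete). The idealized counting, which ignores the footprint of the interpolation kernels, is an assumption made for clarity.

```python
def receptive_field(num_layers, mode='over'):
    """Receptive field (in input pixels) after each 3x3 conv layer, cf. (4).

    'over' : 2x upsampling after every conv, so each new conv adds
             2 * (1/2)^i pixels and the RF saturates just below 5.
    'under': 2x downsampling after every conv, so each new conv adds
             2 * 2^i pixels and the RF grows rapidly.
    """
    rf, scale = 3.0, 1.0                 # the first 3x3 conv sees 3 input pixels
    rfs = [rf]
    for _ in range(num_layers - 1):
        scale = scale / 2 if mode == 'over' else scale * 2
        rf += 2 * scale                  # a 3x3 conv extends the RF by 2 pixels
        rfs.append(rf)                   # measured at the current resolution
    return rfs

print(receptive_field(5, 'over'))   # [3.0, 4.0, 4.5, 4.75, 4.875]
print(receptive_field(5, 'under'))  # [3.0, 7.0, 15.0, 31.0, 63.0]
```

The bounded growth in the over-complete case is exactly what keeps the deep layers focused on fine, local structure.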

Over-complete networks are a relatively new topic in deep learning and were initially proposed for medical image segmentation of small anatomy [39]. They have since been successfully extended to tasks requiring fine context, such as fine edge segmentation of 3D volumes [40], deep subspace clustering [41], MRI reconstruction [42], adversarial defense for videos [43], and image restoration problems like single-image de-raining [44].

III Methodology

Fig. 2: The overall flowchart of our proposed DIP-HyperKite for HS pansharpening. In the first step, we up-sample the LR-HSI $\mathbf{Y}$ via the DIP process to obtain the up-sampled HSI $\hat{\mathbf{X}}$. The DIP process takes a fixed noise tensor $\mathbf{z}$ as input for a given LR-HSI $\mathbf{Y}$, and produces the up-sampled HSI by optimizing the proposed spatial+spectral energy function over the DIP network parameters $\theta$. In the second step, we take the up-sampled HSI $\hat{\mathbf{X}}$ and the PAN image $\mathbf{P}$ as inputs to predict the residual component $\hat{\mathbf{R}}$ using our proposed over-complete network, HyperKite. Finally, the predicted residual image is added to the up-sampled HSI to obtain the pansharpened HSI $\hat{\mathbf{X}}_F$.

The overall flowchart of the proposed DIP-HyperKite for HS pansharpening is shown in Figure 2. As can be seen from Figure 2, the proposed method consists of two main steps. In the first step, the LR-HSI $\mathbf{Y} \in \mathbb{R}^{h \times w \times C}$ with $h \times w$ pixels and $C$ spectral bands is up-sampled to the spatial resolution of the PAN image $\mathbf{P} \in \mathbb{R}^{rh \times rw}$, where $r$ denotes the ratio between the spatial resolutions of $\mathbf{P}$ and $\mathbf{Y}$. We denote the output from the DIP process as $\hat{\mathbf{X}}$. In the second step, we train an over-complete deep network which takes the up-sampled HSI $\hat{\mathbf{X}}$ and the corresponding PAN image $\mathbf{P}$ as inputs to predict the residual component between the up-sampled HSI and the reference HSI $\mathbf{X}$.

III-A Up-sampling via DIP

As shown in Figure 2, the low-resolution HSI $\mathbf{Y}$ is up-sampled to the spatial resolution of the PAN image using DIP. This recently introduced method differs from other existing up-sampling techniques such as bicubic interpolation and LapSRN [45]. The main advantage of DIP over these conventional methods is that it does not require a large dataset for training. In other words, for each LR image $\mathbf{Y}$, the DIP network takes a fixed random tensor $\mathbf{z}$ as input and optimizes the network parameters $\theta$ by minimizing the loss function defined in terms of the output up-sampled image and the available LR-HSI, as given in (3). In contrast, the LapSRN network utilized in [46] relies heavily on RGB image datasets and knowledge adaptation techniques. Furthermore, the bicubic and LapSRN methods up-sample each band of the HSI separately, thus ignoring the high correlation between the spectral bands, which results in the loss of spatial details. Although DIP is capable of producing high-quality up-sampled images compared to the other existing methods, it only utilizes the information from the LR-HSI $\mathbf{Y}$, thus imposing a constraint only in the spectral domain. However, we observed that the quality of the up-sampled HSIs can be further improved by incorporating an additional spatial constraint in the loss function using the available PAN image $\mathbf{P}$. In the next section, we explain our novel spatial+spectral loss function.

III-A1 Proposed spatial+spectral energy function for HS DIP

Fig. 3: The proposed learnable spectral response function and the computational procedure for evaluating the spatial loss term. We take the up-sampled HSI $\hat{\mathbf{X}}$ as the input and feed it into a Global Average Pooling (GAP) layer, which yields a vector with a single entry for each spectral band. Then we pass it through a gating mechanism formed by a bottleneck of two fully-connected (FC) layers ($1 \times 1$ convolutions) around the non-linearity to learn the spectral response of each band. Next, we apply a Softmax activation function to obtain the normalized spectral response $\boldsymbol{\rho}$, and then take the channel-wise multiplication followed by channel averaging to obtain the estimated PAN image $\hat{\mathbf{P}}$. Finally, we compute the $\ell_1$ distance between the estimated PAN image $\hat{\mathbf{P}}$ and the reference PAN image $\mathbf{P}$ to obtain the spatial loss.

As we discussed in Section II-A, the energy function given in (3) enforces a constraint only in the spectral domain, via the distance between the down-sampled up-sampled HSI and the LR-HSI $\mathbf{Y}$. Instead, we propose a loss function $\mathcal{L}_{DIP}$ for HS DIP which enforces constraints in both the spatial and spectral domains as follows:

$$\mathcal{L}_{DIP} = \left\| \mathcal{D}_r\!\left(f_\theta(\mathbf{z})\right) - \mathbf{Y} \right\|_1 + \lambda \left\| \sum_{c=1}^{C} \rho_c \odot \hat{\mathbf{X}}_c - \mathbf{P} \right\|_1, \qquad (5)$$

where $\boldsymbol{\rho} = [\rho_1, \dots, \rho_C]$ denotes the spectral response function, $\rho_c$ (scalar) is the spectral response of the $c$-th band, $\hat{\mathbf{X}}_c$ is the $c$-th band image of the up-sampled HSI $\hat{\mathbf{X}} = f_\theta(\mathbf{z})$, $\odot$ is the element-wise multiplication, and $\lambda$ is a regularization constant. The first term in (5) enforces the spectral constraint on $\hat{\mathbf{X}}$ as in (3), and the additional second term enforces the spatial-domain constraint on $\hat{\mathbf{X}}$ by utilizing the available PAN image $\mathbf{P}$.

In the simplest case, the spectral response function can be approximated as the average across all spectral bands (i.e., $\rho_c = 1/C$ for all $c$) [47, 48]. In this scenario, the spatial loss term in (5) enforces the average across all spectral bands of the up-sampled HSI to be as close as possible to the PAN image $\mathbf{P}$, thus assuming a flat (i.e., uniform) spectral response. In general, however, this assumption is not valid, as the spectral response varies with wavelength coverage, and different spectral bands describe the same semantic information across a wide spectral range with varying quality (i.e., PSNR) [49].

A recent attempt [49] estimates the spectral response function by utilizing the larger eigenvalue of the structure tensor (ST) matrix (originally proposed in the Harris corner detection algorithm [50]). However, this method cannot be directly utilized in an end-to-end deep learning network due to the difficulties encountered while performing back-propagation. In addition, it is computationally complex, as constructing the structure tensor matrix requires computing the derivatives of each band image along both the x- and y-directions at every iteration of learning. Instead, we propose a computationally lightweight and learnable spectral response function which can be easily integrated into the spatial loss term in (5) and learned simultaneously with DIP.

Here we describe our novel way of estimating the spectral response function, which is computationally lightweight, differentiable, and easily integrated into the existing DIP learning process. The overall computational procedure of estimating the spectral response function, and thereby evaluating the spatial energy introduced into the DIP process in (5), is graphically depicted in Figure 3. First, we assume that the spectral response is proportional to the amount of information in each spectral band. The next question arising from this assumption is how to quantify the information embedded in each spectral band. Motivated by the recently proposed Squeeze-and-Excitation networks, we utilize global average pooling to quantify the global information present in each band. Formally, a statistic $\mathbf{g} \in \mathbb{R}^{C}$ which quantifies the informative features in each spectral band is generated by shrinking the up-sampled HSI through its spatial dimensions, such that the $c$-th element of $\mathbf{g}$ is calculated as:

$$g_c = \frac{1}{rh \cdot rw} \sum_{i=1}^{rh} \sum_{j=1}^{rw} \hat{\mathbf{X}}_c(i, j). \qquad (6)$$

Next, we use a simple gating mechanism to capture the dependencies among spectral bands using the band-wise descriptor $\mathbf{g}$ obtained in the previous step. We parameterize the spectral response function by forming a bottleneck with two fully-connected (FC) layers around the ReLU non-linearity as follows:

$$\boldsymbol{\rho} = \mathrm{Softmax}\!\left(\mathbf{W}_2 \, \delta\!\left(\mathbf{W}_1 \mathbf{g}\right)\right), \qquad (7)$$

where $\mathrm{Softmax}(\cdot)$ is the Softmax activation function, $\delta(\cdot)$ is the ReLU non-linearity, and $\mathbf{W}_1$ and $\mathbf{W}_2$ are the learnable weight matrices. Here, the Softmax activation guarantees that the spectral responses of all the bands sum up to one.
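A minimal PyTorch sketch of the learnable SRF of (6)-(7) and of the full spatial+spectral energy (5) follows. The bottleneck reduction ratio, the bilinear stand-in for $\mathcal{D}_r(\cdot)$, and the module and function names are assumptions; note that with Softmax weights summing to one, the weighted band sum directly realizes the channel-wise multiplication and averaging of Fig. 3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableSRF(nn.Module):
    """Learnable spectral response function, cf. Fig. 3 and (6)-(7)."""
    def __init__(self, num_bands, reduction=4):   # reduction ratio is an assumption
        super().__init__()
        hidden = max(num_bands // reduction, 1)
        self.fc1 = nn.Conv2d(num_bands, hidden, kernel_size=1)   # W1 as 1x1 conv
        self.fc2 = nn.Conv2d(hidden, num_bands, kernel_size=1)   # W2 as 1x1 conv

    def forward(self, x_up):
        g = F.adaptive_avg_pool2d(x_up, 1)                        # (6): GAP statistic
        rho = torch.softmax(self.fc2(F.relu(self.fc1(g))), dim=1) # (7): responses sum to 1
        pan_hat = (rho * x_up).sum(dim=1, keepdim=True)           # estimated PAN image
        return pan_hat, rho

def dip_loss(x_up, lr_hsi, pan, srf, r, lam):
    """Spatial+spectral energy (5); lam is the regularization constant."""
    x_down = F.interpolate(x_up, scale_factor=1.0 / r,
                           mode='bilinear', align_corners=False)
    pan_hat, _ = srf(x_up)
    return F.l1_loss(x_down, lr_hsi) + lam * F.l1_loss(pan_hat, pan)
```

During the DIP iterations, the SRF parameters can simply be added to the same optimizer as the DIP network, so that the spectral response is learned jointly with the up-sampling.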

III-A2 DIP network

Figure 4 illustrates the U-Net-like deep network that we use for the DIP method. The DIP network includes five down-sampling blocks, five up-sampling blocks, and five skip-connection blocks. We use strided convolutions as the down-sampling operator, bi-linear up-sampling as the up-sampling operator, and LeakyReLU as the non-linearity; a Lanczos2 kernel is used for the decimation in the energy function. We initialize the input noise tensor $\mathbf{z}$ with uniform noise. Table I tabulates the values of all the hyperparameters of the DIP network.

Fig. 4: The DIP network utilized for the up-sampling process. The DIP network is a U-Net-like network which consists of five down-sampling blocks, five up-sampling blocks, and five skip-connection blocks. The values of all the hyperparameters of the DIP network are summarized in Table I.
Hyperparameter Value
Optimizer Adam
Number of iterations 1300
Learning rate 0.001
Weight decay 0.0001
Momentum 0.9
Batch size 4
LeakyReLU slope 0.2
TABLE I: Hyperparameter values of the DIP network.
Fig. 5: We observed that the residual component (third column) between the up-sampled HSI (first column) and the reference HSI (second column) mainly consists of boundary information and very fine structures. To support this observation, we show the residual component for three different wavelength bands (band 10, band 52, and band 100) of the Pavia Center dataset, which is introduced in Section IV-A. This observation motivated us to use an over-complete network for the residual learning task, which is highly capable of learning low-level features such as fine edges and structures by transforming the input image into a higher dimension. We recommend that readers zoom in on this image for a close-up view.

III-B Residual learning via over-complete HyperKite

Fig. 6: The proposed HyperKite architecture for the residual prediction task. The kernel size and the number of filters associated with each convolution block (shown as a red box) are indicated in the figure. The values of all hyperparameters for HyperKite are summarized in Table II.
Hyperparameter Value
Optimizer Adam
Number of epochs 2500
Learning rate 0.001
Weight decay 0.0001
Momentum 0.9
Batch size 4
LeakyReLU slope 0.2
TABLE II: Hyperparameter values of HyperKite

Our motivation to design an over-complete network for the residual learning task emerged after observing the residual images between the DIP up-sampled image and the reference HSI, as visualized in Figure 5. As we can see from Figure 5, the residual images corresponding to different wavelength bands mainly consist of boundary information such as edges and other high-frequency components. In order to accurately capture this fine information, we design HyperKite, an over-complete network for residual learning, as shown in Figure 6.

The proposed HyperKite consists of an Initial Feature Extraction Network (IFEN), a High-dimensional Feature Mapping Network (HDFMN), and a Final Residual Reconstruction Network (FRRN). The input to HyperKite, denoted as $\mathbf{X}_{in}$, is obtained by concatenating the up-sampled HSI $\hat{\mathbf{X}}$ and the PAN image $\mathbf{P}$ along the spectral dimension. HyperKite starts with the IFEN, where one convolutional layer is applied, followed by Batch Normalization (BN) and a LeakyReLU non-linearity, to extract the initial feature representation:

$$\mathbf{F}_0 = \Phi_0(\mathbf{X}_{in}), \qquad (8)$$

where $\Phi_0(\cdot)$ denotes the convolution followed by BN and LeakyReLU, and $\mathbf{F}_0$ denotes the features extracted from $\mathbf{X}_{in}$ in the original-dimensional pixel space, with as many feature maps as there are filters in the convolutional layer. Figure 7(a) shows six example feature maps of $\mathbf{F}_0$ for the 20-th patch of the Pavia Center dataset, which we introduce in Section IV. As we can see from the figure, the initial feature extraction network extracts low-level features of the input $\mathbf{X}_{in}$. In order to capture the high-level features required for residual learning, we successively transform the output of the IFEN into three higher-dimensional pixel spaces by utilizing "bilinear" up-sampling. We then perform convolution followed by BN and LeakyReLU to extract meaningful high-level features at each higher-dimensional space:

$$\mathbf{F}_1 = \Phi_1\!\left(\mathcal{U}_2(\mathbf{F}_0)\right), \qquad (9)$$
$$\mathbf{F}_2 = \Phi_2\!\left(\mathcal{U}_2(\mathbf{F}_1)\right), \qquad (10)$$
$$\mathbf{F}_3 = \Phi_3\!\left(\mathcal{U}_2(\mathbf{F}_2)\right), \qquad (11)$$

where $\mathcal{U}_2(\cdot)$ denotes "bilinear" interpolation by a factor of 2, and $\Phi_d(\cdot)$ denotes the convolution layer followed by BN and LeakyReLU at the $d$-th higher-dimensional feature space. Next, we successively transform the extracted high-level features back to the original-dimensional space by employing "bilinear" down-sampling and skip connections. Formally, we can define the operations of the HDFMN as:

$$\mathbf{G}_2 = \Psi_2\!\left(\left[\mathcal{D}_2(\mathbf{F}_3), \mathbf{F}_2\right]\right), \qquad (12)$$
$$\mathbf{G}_1 = \Psi_1\!\left(\left[\mathcal{D}_2(\mathbf{G}_2), \mathbf{F}_1\right]\right), \qquad (13)$$
$$\mathbf{G}_0 = \Psi_0\!\left(\left[\mathcal{D}_2(\mathbf{G}_1), \mathbf{F}_0\right]\right), \qquad (14)$$

where $\mathcal{D}_2(\cdot)$ denotes "bilinear" down-sampling by a factor of 2, $[\cdot, \cdot]$ denotes the feature concatenation operator, $\Psi_d(\cdot)$ denotes the convolution followed by BN and LeakyReLU at the $d$-th dimensional feature space, and $\mathbf{G}_0$ contains the most relevant high-level features obtained at the original-dimensional space. After flowing through all the down-sampling (decoder) blocks, a convolutional layer is employed to recover the spectral dimension and reconstruct the residual image:

$$\hat{\mathbf{R}} = \Phi_{FRRN}(\mathbf{G}_0), \qquad (15)$$

where $\Phi_{FRRN}(\cdot)$ denotes the convolutional layer followed by BN and LeakyReLU employed in the FRRN.
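The following is a compact PyTorch sketch of HyperKite's forward pass in (8)-(15). The channel width, the exact skip wiring, and the kernel sizes are assumptions based on Fig. 6; the defining over-complete pattern (up-sample before every encoder convolution, down-sample with skip concatenation in the decoder) follows the equations above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    """Conv -> BN -> LeakyReLU, the basic unit Phi/Psi of HyperKite."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

def up2(x):    # bilinear U_2(.)
    return F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)

def down2(x):  # bilinear D_2(.)
    return F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)

class HyperKite(nn.Module):
    def __init__(self, num_bands, feat=64):   # feature width is an assumption
        super().__init__()
        self.ifen = conv_block(num_bands + 1, feat)                           # (8)
        self.enc = nn.ModuleList(conv_block(feat, feat) for _ in range(3))    # (9)-(11)
        self.dec = nn.ModuleList(conv_block(2 * feat, feat) for _ in range(3))# (12)-(14)
        self.frrn = nn.Conv2d(feat, num_bands, kernel_size=3, padding=1)      # (15)

    def forward(self, x_up, pan):
        f0 = self.ifen(torch.cat([x_up, pan], dim=1))        # HSI+PAN, scale 1x
        e1 = self.enc[0](up2(f0))                            # scale 2x
        e2 = self.enc[1](up2(e1))                            # scale 4x
        e3 = self.enc[2](up2(e2))                            # scale 8x (over-complete)
        g2 = self.dec[0](torch.cat([down2(e3), e2], dim=1))  # back to 4x
        g1 = self.dec[1](torch.cat([down2(g2), e1], dim=1))  # back to 2x
        g0 = self.dec[2](torch.cat([down2(g1), f0], dim=1))  # back to 1x
        return self.frrn(g0)                                 # predicted residual
```

The fused HSI is then obtained by adding the predicted residual to the up-sampled HSI, as described next.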

After carrying out the DIP up-sampling and residual prediction of our DIP-HyperKite, the up-sampled HSI $\hat{\mathbf{X}}$ and the predicted residual $\hat{\mathbf{R}}$ are available. Finally, we obtain the fused HSI $\hat{\mathbf{X}}_F$ as:

$$\hat{\mathbf{X}}_F = \hat{\mathbf{X}} + \hat{\mathbf{R}}. \qquad (16)$$

To this end, we utilize the $\ell_1$ loss to optimize HyperKite, which has been demonstrated to be a superior choice for remote sensing image SR [29, 5] and has also been experimentally verified to be effective for improving the fusion accuracy. Consider the training set $\{(\hat{\mathbf{X}}^{(n)}, \mathbf{P}^{(n)}, \mathbf{X}^{(n)})\}_{n=1}^{N}$, where $(\hat{\mathbf{X}}^{(n)}, \mathbf{P}^{(n)})$ is the $n$-th input, $\mathbf{X}^{(n)}$ is the corresponding reference HSI, and $N$ is the total number of HSIs in the training set. The loss function utilized for HyperKite training is then defined as:

$$\mathcal{L} = \frac{1}{N} \sum_{n=1}^{N} \left\| \hat{\mathbf{X}}_F^{(n)} - \mathbf{X}^{(n)} \right\|_1. \qquad (17)$$

Moreover, all the parameter details of our proposed HyperKite are summarized in Table II. We train our network in the PyTorch framework using an NVIDIA Quadro 8000 GPU. We use the Adam optimizer with a learning rate of 0.001, weight decay of 0.0001, and momentum of 0.9 to train HyperKite. We use a batch size of 4 and train the network for 2500 epochs.
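For concreteness, a training loop consistent with Table II might look as follows. The `train_loader` yielding (up-sampled HSI, PAN, reference) triples is a hypothetical helper, the number of bands is an assumption, and mapping the Table II momentum to Adam's first-moment coefficient is also an assumption.

```python
import torch
import torch.nn.functional as F

net = HyperKite(num_bands=102)              # band count is an assumption
opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4,
                       betas=(0.9, 0.999))  # Table II: lr, weight decay, momentum
for epoch in range(2500):                   # Table II: number of epochs
    for x_up, pan, ref in train_loader:     # hypothetical loader, batch size 4
        opt.zero_grad()
        fused = x_up + net(x_up, pan)       # (16): add the predicted residual
        loss = F.l1_loss(fused, ref)        # (17): l1 loss against the reference
        loss.backward()
        opt.step()
```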

Fig. 7: Visualization of filter responses of HyperKite. (a) Feature maps from the first layer of the encoder. (b) Feature maps from the second layer of the encoder. (c) Feature maps from the third layer of the encoder. (d) Feature maps from the first layer of the decoder. By restricting the receptive field, HyperKite is able to focus on edges and smaller regions. Zoom in recommended.

IV Experimental Settings

IV-A Datasets

To evaluate the performance of our proposed DIP-HyperKite for HS pansharpening, we conduct a series of experiments on three HS data sets, which are described in detail below.

IV-A1 Pavia Center dataset

The Pavia Center scene was captured by the ROSIS camera [51]. The original HSI consists of 115 spectral bands spanning from 430 to 860 nm. The spatial size of the original image is 1096 x 1096 pixels, where a single pixel is equivalent to a geometric resolution of 1.3 m. The thirteen noisy spectral bands in the original HSI were discarded, resulting in an HSI with 102 spectral bands. In addition, a rectangular area with no information at the center of the original HSI was also discarded, and the resulting "two-part" image was used for the experiments. Following the same experimental procedure outlined in [24], we used only the top-left corner of the HSI and partitioned it into non-overlapping cubic patches, which constitute the reference images of the Pavia Center dataset. In order to generate the PAN images and LR-HSIs corresponding to each HR reference HSI, we utilize Wald's protocol [52]: we generate each PAN image by averaging the first spectral bands of the HR reference HSI, and we generate each LR-HSI by spatially blurring the HR reference HSI with a Gaussian filter and then down-sampling the result by the scaling factor chosen for the Pavia Center dataset following [24]. We randomly select a set of cubic patches for training, and the remaining seven patches form the testing set of the Pavia Center dataset.
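A sketch of this simulation step under Wald's protocol is shown below; the function name, the NumPy/SciPy implementation, and the simple strided decimation are assumptions, with the filter standard deviation computed following [54].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_inputs(ref_hsi, r, num_pan_bands, sigma):
    """Wald's protocol sketch. ref_hsi: reference HR-HSI of shape (H, W, C)."""
    # PAN image: average of the first `num_pan_bands` spectral bands
    pan = ref_hsi[:, :, :num_pan_bands].mean(axis=2)
    # LR-HSI: Gaussian blur in the spatial dimensions only, then r-fold decimation
    blurred = gaussian_filter(ref_hsi, sigma=(sigma, sigma, 0))
    lr_hsi = blurred[::r, ::r, :]
    return pan, lr_hsi
```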

IV-A2 Botswana dataset

The Botswana scene was acquired by the Hyperion sensor on NASA's Earth Observing-1 (EO-1) satellite. The original Botswana HSI consists of 242 spectral bands spanning from 400 to 2500 nm with a spectral resolution of 10 nm. The spatial size of the original Botswana image is 1476 x 256 pixels. We remove the uncalibrated and noisy spectral bands of the original image, resulting in an HSI with 145 spectral bands. Following the same experimental procedure outlined in [24], we use only the top-left corner of the HSI and partition it into 20 non-overlapping cubic patches, which constitute the reference images of the Botswana dataset. In order to generate the PAN images and the LR-HSIs corresponding to each HR reference image, we follow Wald's protocol: we generate each PAN image by averaging the first spectral bands of the HR-HSI, and we generate each LR-HSI by spatially blurring the HR-HSI with a Gaussian window and then down-sampling. For the Botswana dataset, we set the down-sampling factor to 3. We randomly select a set of cubic patches for training, and the rest of the patches are utilized for testing.

IV-A3 Chikusei dataset [53]

The Chikusei scene was captured by the Headwall Hyperspec-VNIR-C imaging sensor over agricultural and urban areas in Chikusei, Japan. The original Chikusei HSI consists of 128 spectral bands spanning from 363 to 1018 nm. The spatial size of the Chikusei HSI is 2517 x 2335 pixels, where a single pixel is equivalent to a geometric resolution of 2.5 m. We used the top-left corner of the HSI and partitioned it into non-overlapping cubic patches, which constitute the reference images of the Chikusei dataset. Following Wald's protocol, we generate each PAN image by averaging the first spectral bands of the high-resolution HSI. To generate the LR-HSIs, we spatially blur the HR-HSI with a Gaussian window and perform down-sampling by the chosen factor. We randomly select 61 cubic patches for training, and the rest of the patches are utilized for testing.

Note: The standard deviation $\sigma$ of the Gaussian filter that we use to generate the LR-HSIs is calculated following [54].

IV-B Performance measures

In order to evaluate the quality of the proposed pansharpening method, we use several image quality measures. Following [24], we use Cross-Correlation (CC), Spectral Angle Mapper (SAM), Root Mean Square Error (RMSE), Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), and Peak Signal-to-Noise Ratio (PSNR). These measures are widely used in the HSI processing community and are appropriate for evaluating fusion quality in both the spectral and spatial domains.

IV-B1 Cross-Correlation (CC)

The CC metric characterizes the geometric distortion and is defined as:

$$\mathrm{CC} = \frac{1}{C} \sum_{c=1}^{C} \mathrm{CCS}\!\left(\mathbf{X}_c, \hat{\mathbf{X}}_{F,c}\right), \qquad (18)$$

where CCS denotes the cross-correlation for a single-band image:

$$\mathrm{CCS}(\mathbf{A}, \mathbf{B}) = \frac{\sum_{i,j} \left(\mathbf{A}(i,j) - \mu_{\mathbf{A}}\right)\left(\mathbf{B}(i,j) - \mu_{\mathbf{B}}\right)}{\sqrt{\sum_{i,j} \left(\mathbf{A}(i,j) - \mu_{\mathbf{A}}\right)^2 \sum_{i,j} \left(\mathbf{B}(i,j) - \mu_{\mathbf{B}}\right)^2}}, \qquad (19)$$

where $\mu_{\mathbf{A}}$ is the sample mean of $\mathbf{A}$. The ideal value of CC is 1, which indicates that the two HSIs are highly correlated.

IV-B2 SAM

SAM is a spectral measure which is defined as:

$$\mathrm{SAM}(\mathbf{x}, \hat{\mathbf{x}}) = \arccos\!\left(\frac{\langle \mathbf{x}, \hat{\mathbf{x}} \rangle}{\|\mathbf{x}\|_2 \, \|\hat{\mathbf{x}}\|_2}\right), \qquad (20)$$

where, given the pixel spectra $\mathbf{x}, \hat{\mathbf{x}} \in \mathbb{R}^{C}$,

$$\langle \mathbf{x}, \hat{\mathbf{x}} \rangle = \sum_{c=1}^{C} x_c \, \hat{x}_c, \qquad (21)$$

where $\langle \cdot, \cdot \rangle$ denotes the inner product between $\mathbf{x}$ and $\hat{\mathbf{x}}$, and $\|\cdot\|_2$ is the $\ell_2$ norm. SAM is a measure of spectral shape preservation. The SAM values reported in our experiments are in degrees and thus belong to $[0°, 180°]$. The optimal value of SAM is 0. The SAM values reported in our experiments are obtained by averaging the values over all image pixels.
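For reference, minimal NumPy implementations of CC in (18)-(19) and SAM in (20)-(21) are sketched below, assuming (H, W, C) arrays; the small epsilon guard is an implementation assumption.

```python
import numpy as np

def cc(ref, fused):
    """Mean band-wise cross-correlation, cf. (18)-(19); ideal value is 1."""
    vals = [np.corrcoef(ref[..., c].ravel(), fused[..., c].ravel())[0, 1]
            for c in range(ref.shape[-1])]
    return float(np.mean(vals))

def sam(ref, fused, eps=1e-12):
    """Mean per-pixel spectral angle in degrees, cf. (20)-(21); ideal value is 0."""
    dot = (ref * fused).sum(axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)   # guard against rounding
    return float(np.degrees(np.arccos(cos)).mean())
```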

IV-B3 RSNR/RMSE

The reconstruction SNR (RSNR) and the root mean square error (RMSE) quantify the difference between the reference and fused images, and are defined as follows:

$$\mathrm{RSNR} = 10 \log_{10}\!\left(\frac{\|\mathbf{X}\|_F^2}{\|\mathbf{X} - \hat{\mathbf{X}}_F\|_F^2}\right), \qquad (22)$$

$$\mathrm{RMSE} = \frac{\|\mathbf{X} - \hat{\mathbf{X}}_F\|_F}{\sqrt{HWC}}, \qquad (23)$$

where $\|\cdot\|_F$ denotes the Frobenius norm and $H$, $W$, and $C$ are the height, width, and number of spectral bands of the reference HSI $\mathbf{X}$.

IV-B4 ERGAS

Relative dimensionless global error in synthesis (ERGAS) calculates the amount of spectral distortion in the image. The ERGAS measure is defined as:

$$\mathrm{ERGAS} = \frac{100}{r} \sqrt{\frac{1}{C} \sum_{c=1}^{C} \left(\frac{\mathrm{RMSE}_c}{\mu_c}\right)^2}, \qquad (24)$$

where $r$ is the ratio between the linear resolutions of the PAN image and the HSI, and $\mathrm{RMSE}_c$ is defined as:

$$\mathrm{RMSE}_c = \sqrt{\frac{1}{HW} \sum_{i,j} \left(\mathbf{X}_c(i,j) - \hat{\mathbf{X}}_{F,c}(i,j)\right)^2}, \qquad (25)$$

and $\mu_c$ is the sample mean of the $c$-th band of $\mathbf{X}$. The ideal value of ERGAS is 0.

IV-B5 Peak Signal-to-Noise Ratio (PSNR)

PSNR assesses the fusion quality of each band, and the average PSNR is calculated as:

$$\mathrm{PSNR} = \frac{1}{C} \sum_{c=1}^{C} 10 \log_{10}\!\left(\frac{\max\!\left(\mathbf{X}_c\right)^2}{\mathrm{MSE}_c}\right), \qquad (26)$$

where $\max(\mathbf{X}_c)$ is the maximum pixel value in the $c$-th band of $\mathbf{X}$ and $\mathrm{MSE}_c$ is the mean squared error of the $c$-th band. A larger PSNR value indicates higher reconstruction quality of the spatial information in the fusion result.
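Minimal NumPy sketches of ERGAS (24)-(25) and the band-averaged PSNR (26) follow, under the same (H, W, C) array convention; the 100/r scaling in ERGAS follows the common convention for the resolution ratio r.

```python
import numpy as np

def ergas(ref, fused, r):
    """ERGAS, cf. (24)-(25); r is the PAN/HSI resolution ratio, ideal value 0."""
    band_rmse = np.sqrt(((ref - fused) ** 2).mean(axis=(0, 1)))
    band_mean = ref.mean(axis=(0, 1))
    return float(100.0 / r * np.sqrt(((band_rmse / band_mean) ** 2).mean()))

def psnr(ref, fused):
    """Band-averaged PSNR in dB, cf. (26); larger values are better."""
    mse = ((ref - fused) ** 2).mean(axis=(0, 1))
    peak = ref.max(axis=(0, 1))
    return float((10.0 * np.log10(peak ** 2 / mse)).mean())
```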

V Results and Discussion

This section presents the results of our proposed DIP-HyperKite for HS pansharpening and compares it with state-of-the-art methods on the Pavia Center, Botswana, and Chikusei datasets. For clarity, we divide this section into two parts. In the first part (Section V-A), we highlight the contribution of our proposed spatial+spectral energy function to the DIP up-sampling process and compare it with available state-of-the-art up-sampling techniques such as nearest-neighbor, bicubic, LapSRN, and DIP with only the spectral loss. In the second part (Section V-B), we present the final fusion results obtained from our proposed HyperKite network and compare them with classical and deep-learning-based pansharpening approaches.

Fig. 8: The variation of CC, SAM, RMSE, ERGAS, and PSNR with the regularization constant λ in our spatial+spectral energy function for the Pavia Center dataset. We select the optimal value of the regularization constant for the Pavia Center dataset by considering all the performance metrics.
Fig. 9: The variation of CC, SAM, RMSE, ERGAS, and PSNR with the regularization constant λ in our spatial+spectral energy function for the Botswana dataset. We select the optimal value of the regularization constant for the Botswana dataset by considering all the performance metrics.
Fig. 10: The variation of CC, SAM, RMSE, ERGAS, and PSNR with the regularization constant λ in our spatial+spectral energy function for the Chikusei dataset. We select the optimal value of the regularization constant for the Chikusei dataset by considering all the performance metrics.

V-A Effect of the proposed spatial+spectral energy function on the DIP up-sampling process

As we discussed in Section III-A, recently proposed pansharpening methods such as DHP-DARN [24] and DHP [27] utilize the DIP process to up-sample the LR-HSI instead of using nearest-neighbor, bicubic, or LapSRN techniques, due to its excellent performance. However, we have observed that the quality of the up-sampled HSI can be further improved by carefully redesigning the loss function used in the DIP optimization. Instead of utilizing only a spectral constraint in the DIP loss function, we derived a novel loss function with both spectral and spatial constraints. This section demonstrates the performance improvement brought by our proposed spatial+spectral loss function to the DIP up-sampling process. We compare DIP with the proposed spatial+spectral loss against DIP with the spectral loss only. Furthermore, to make the analysis more comprehensive, we also add conventional up-sampling techniques used in the HS pansharpening domain, such as nearest-neighbor and bicubic interpolation. Finally, motivated by the experimental discussion in [24], we also add results from LapSRN [26], which is trained on a large number of RGB images.

Method | CC (↑) | SAM (↓) | RMSE (↓) | RSNR (↑) | ERGAS (↓) | PSNR (↑)
Nearest-neighbor | 0.809 | 7.70 | 1.22 | 9.63 | 19.97 | 19.65
Bicubic | 0.840 | 7.45 | 1.13 | 11.26 | 18.48 | 20.36
LapSRN [26] | 0.843 | 7.37 | 1.12 | 11.56 | 18.16 | 20.49
DIP+spectral [24] | 0.844 | 8.04 | 0.94 | 14.91 | 15.42 | 21.89
DIP+spatial+spectral (ours) | 0.900 | 8.18 | 0.58 | 24.36 | 9.66 | 26.15
 | (+6.6%) | - | (-37.8%) | (+63.4%) | (-37.3%) | (+19.5%)
TABLE III: Average quantitative results for different up-sampling techniques on the Pavia Center dataset.
Method | CC (↑) | SAM (↓) | RMSE (↓) | RSNR (↑) | ERGAS (↓) | PSNR (↑)
Nearest-neighbor | 0.854 | 2.52 | 7.87 | 29.03 | 9.08 | 28.87
Bicubic | 0.852 | 2.42 | 7.67 | 29.60 | 8.77 | 29.17
LapSRN [26] | 0.858 | 2.47 | 6.27 | 34.01 | 8.27 | 29.01
DIP+spectral [24] | 0.833 | 2.40 | 6.66 | 32.91 | 8.75 | 29.75
DIP+spatial+spectral (ours) | 0.861 | 2.30 | 5.39 | 37.80 | 8.13 | 31.28
 | (+0.4%) | (-4.2%) | (-14.0%) | (+11.2%) | (-1.7%) | (+5.2%)
TABLE IV: Average quantitative results for different up-sampling techniques on the Botswana dataset.
Method | CC (↑) | SAM (↓) | RMSE (↓) | RSNR (↑) | ERGAS (↓) | PSNR (↑)
Nearest-neighbor | 0.861 | 4.05 | 9.99 | 18.26 | 17.03 | 23.73
Bicubic | 0.884 | 3.86 | 0.093 | 20.07 | 15.75 | 24.52
LapSRN [26] | 0.885 | 3.75 | 8.53 | 21.37 | 14.33 | 25.06
DIP+spectral [24] | 0.869 | 4.64 | 7.54 | 24.16 | 13.80 | 25.75
DIP+spatial+spectral (ours) | 0.885 | 5.05 | 5.56 | 29.69 | 10.18 | 28.06
 | (+0.1%) | - | (-26.3%) | (+22.9%) | (-26.2%) | (+8.9%)
TABLE V: Average quantitative results for different up-sampling techniques on the Chikusei dataset.

V-A1 Tuning the hyperparameter λ in our spatial+spectral energy function

We start our discussion with the effect of the regularization constant λ in our proposed spatial+spectral loss function defined in (5). The variation of the CC, SAM, RMSE, ERGAS, and PSNR values when varying the regularization parameter λ for the Pavia Center, Botswana, and Chikusei datasets is shown in Figure 8, Figure 9, and Figure 10, respectively. As can be seen from these figures, as the value of the regularization constant increases, the performance metrics begin to improve, then hit a saturation point, and then degrade, for all three datasets. Therefore, we carefully select the regularization constant for each dataset by considering all the performance metrics. For example, consider the variation of the performance metrics with the regularization constant for the Pavia Center dataset shown in Figure 8. As the value of the regularization constant increases from zero, CC, RMSE, ERGAS, and PSNR start to improve, and when λ increases beyond the optimal point the performance metrics start to degrade. Therefore, we set the corresponding λ as the optimal value of the regularization constant of our proposed spatial+spectral energy term for the Pavia Center dataset. The variation of the performance metrics with the regularization parameter for the Botswana and Chikusei datasets is shown in Figure 9 and Figure 10, respectively. Following the same analysis described for the Pavia Center dataset, we select the optimal value of the regularization constant for the Botswana and Chikusei datasets. Note the performance improvement brought by our proposed spatial+spectral loss function to the DIP up-sampling process. Under the optimal regularization constant, our spatial+spectral energy function improves the quality of the up-sampled HSIs over the spectral-only loss (equivalent to the λ = 0 point in Figures 8, 9, and 10) in terms of the CC, RMSE, RSNR, ERGAS, and PSNR metrics by 6.64%, 37.8%, 63.4%, 37.3%, and 19.5%, respectively, for the Pavia Center dataset. For the Botswana dataset, our proposed loss function improves the CC, SAM, RMSE, ERGAS, and PSNR metrics over DIP with the spectral loss by 3.3%, 4.2%, 19.1%, 14.7%, and 7.0%, respectively. Similarly, for the Chikusei dataset, our method improves the CC, RMSE, RSNR, ERGAS, and PSNR metrics compared to DIP with the spectral loss by 1.8%, 26.3%, 22.9%, 26.2%, and 8.9%, respectively.

Fig. 11: Up-sampled images of the 1-st patch (first row) and 11-th patch (second row) of the Pavia Center dataset. (a) LR-HSI. (b) Nearest-neighbor. (c) Bicubic. (d) LapSRN [26]. (e) DIP with only the spectral energy [24]. (f) DIP with our spatial+spectral energy. (g) Reference. The RGB image is generated by utilizing the 10-th, 30-th, and 60-th bands of the HSI for the blue, green, and red channels, respectively.
Fig. 12: Up-sampled images of the 12-th (first row) and 14-th (second row) patches of the Botswana dataset. (a) LR-HSI. (b) Nearest-neighbor. (c) Bicubic. (d) LapSRN [26]. (e) DIP with only the spectral energy [24]. (f) DIP with our spatial+spectral energy. (g) Reference. The RGB image is generated by utilizing the 10-th, 35-th, and 61-st bands of the HSI for the blue, green, and red channels, respectively.
Fig. 13: Up-sampled images of the 37-th (first row) and 50-th (second row) patches of the Chikusei dataset. (a) LR-HSI. (b) Nearest-neighbor. (c) Bicubic. (d) LapSRN [26]. (e) DIP with only the spectral energy [24]. (f) DIP with our spatial+spectral energy. (g) Reference. The RGB image is generated by utilizing the 12-th, 20-th, and 29-th bands for the blue, green, and red channels, respectively.
Discussion on the regularization constant

Let us first consider the case where the regularization constant λ is set to zero. This is equivalent to the case where we only have the spectral constraint. In this case, the DIP network minimizes the distance between the down-sampled version of the up-sampled HSI and the LR-HSI. Since the down-sampling operator acts as a low-pass filter in the frequency domain, what the DIP network actually minimizes is the distance between the low-pass version of the up-sampled HSI and the LR-HSI. For this reason, the up-sampled HSI from a DIP network trained only with the spectral constraint lacks high-frequency components such as edge information and fine structures. Now let us consider the case where we have both spatial and spectral constraints in the DIP loss function. As described in Section III-A, we combine the spatial and spectral constraints via the regularization parameter λ. The value of λ controls the fidelity of the predicted PAN image toward the actual PAN image. Since the predicted PAN image and the up-sampled HSI are coupled via the spectral response function, to make the predicted PAN image as close as possible to the actual PAN image, the DIP network tries to predict some of the high-frequency components such as edges and fine structures in the PAN image, while keeping the low-pass version of the up-sampled HSI close to the LR-HSI. Therefore, what the regularization constant actually controls is the amount of high-frequency content fused from the PAN image into the up-sampled HSI. This explains the observation made in Figure 8, Figure 9, and Figure 10: when the value of the regularization parameter increases, the DIP network embeds some of the high-frequency information into the up-sampled HSI, which ultimately helps to improve the quality of the up-sampled image. However, when the value of the regularization constant is too large, the spatial loss term starts to dominate the loss function, resulting in a drop in the spectral-domain performance metrics such as SAM and ERGAS. Therefore, we can achieve high-quality up-sampled HSIs by appropriately controlling the regularization parameter λ in the spatial+spectral energy function.

V-A2 Comparison of DIP with the proposed spatial+spectral loss against state-of-the-art up-sampling techniques

In the previous section, we determined the optimal value of the regularization constant in our proposed spatial+spectral loss function for the three datasets. In this section, we compare DIP with our spatial+spectral loss against DIP with only the spectral loss, and against other commonly used up-sampling techniques such as nearest-neighbor, bicubic, and LapSRN, both qualitatively and quantitatively.

Table III summarizes the quantitative results of the nearest-neighbor, bicubic, LapSRN, and DIP up-sampling methods for the Pavia Center dataset. For this dataset, our proposed DIP method improves the quality of the up-sampled images in terms of the CC, RMSE, RSNR, ERGAS, and PSNR performance measures by 6.6%, 37.8%, 63.4%, 37.3%, and 19.5%, respectively. We also notice that this improvement is accompanied by a drop in the SAM index of around 1.8% compared to DIP with the spectral loss. This fall in the SAM index is not significant compared to the improvements achieved in all the other performance measures. Further, we can cross-verify these quantitative results with the qualitative results shown in Figure 11 for the Pavia Center dataset. We can see that the DIP up-sampled images with our proposed spatial+spectral constraint look much closer to the reference image and recover very fine structures and edges better than the other up-sampling methods.

We also summarize the quantitative results of the different up-sampling methods for the Botswana dataset in Table IV. As we can see, DIP with the proposed spatial+spectral loss improves the quality of the up-sampled images in terms of all the performance metrics by a significant margin: the CC value increased by 0.4%, the SAM value reduced by 4.2%, the RMSE value reduced by 14.0%, the RSNR value improved by 11.2%, the ERGAS value reduced by 1.7%, and the PSNR value increased by 5.2%. Again, we can verify these quantitative results with the qualitative results shown in Figure 12 for the Botswana dataset. Similar to the qualitative results observed for the Pavia Center dataset, we can see that the up-sampled images using DIP with our proposed spatial+spectral loss are much closer to the reference HSI.

Finally, we summarize the quantitative results of the different up-sampling methods for the Chikusei dataset in Table V. In this case as well, DIP with our proposed spatial+spectral loss outperforms the other methods on five out of the six performance measures considered in the analysis. As we can see from Table V, our DIP method increases the CC value by 0.1%, decreases the RMSE by 26.3%, increases the RSNR by 22.9%, decreases the ERGAS by 26.2%, and increases the PSNR by 8.9% over the state-of-the-art results. Similar to the Pavia Center dataset, we observed a drop in the SAM index for this dataset; however, it is negligible compared to the performance gained in terms of the other quantitative measures. Furthermore, following the same trend as the other datasets, we include the qualitative results for the Chikusei dataset in Figure 13. From the qualitative results, we can again see that DIP with our spatial+spectral constraint predicts very fine structures and edges more accurately than the other methods.

In summary, we have shown that the DIP method with our proposed spatial+spectral constraints outperforms the state-of-the-art up-sampling methods by a significant margin on all the datasets considered in this study. In the next section, we present the final fusion results and compare them with state-of-the-art pansharpening algorithms, both qualitatively and quantitatively.

Method | CC (↑) | SAM (↓) | RMSE (↓) | RSNR (↑) | ERGAS (↓) | PSNR (↑)
PCA [11] | 0.845 | 8.92 | 3.45 | 34.32 | 6.64 | 31.26
GFPCA [19] | 0.902 | 8.31 | 3.98 | 29.34 | 7.44 | 29.09
BF [21] | 0.918 | 9.60 | 3.44 | 31.99 | 6.63 | 30.22
BFS [22] | 0.925 | 8.10 | 3.05 | 34.37 | 6.00 | 31.09
SFIM [16] | 0.946 | 6.76 | 2.55 | 37.47 | 5.43 | 32.61
GS [9] | 0.961 | 6.62 | 2.55 | 38.08 | 4.95 | 32.93
GSA [9] | 0.950 | 7.15 | 2.34 | 39.60 | 4.70 | 33.52
MTF-GLP-HPM [18] | 0.955 | 6.81 | 2.25 | 40.70 | 4.77 | 33.97
CNMF [23] | 0.960 | 6.64 | 2.20 | 40.79 | 4.39 | 34.14
MTF-GLP [17] | 0.956 | 6.55 | 2.20 | 40.70 | 4.45 | 34.12
HySure [20] | 0.966 | 6.13 | 1.80 | 44.60 | 3.77 | 35.91
HyperPNN [29] | 0.967 | 6.09 | 1.67 | 48.62 | 3.82 | 36.70
DHP-DARN [24] | 0.969 | 6.43 | 1.56 | 49.17 | 3.95 | 37.30
DIP-HyperKite (ours) | 0.980 | 5.61 | 1.29 | 51.72 | 2.85 | 38.65
TABLE VI: The average quantitative results on the Pavia Center dataset.
Method | CC (↑) | SAM (↓) | RMSE (↓) | RSNR (↑) | ERGAS (↓) | PSNR (↑)
PCA [11] | 0.946 | 2.22 | 1.74 | 57.10 | 2.89 | 28.17
GFPCA [19] | 0.925 | 2.48 | 1.97 | 53.81 | 3.18 | 26.75
BF [21] | 0.919 | 2.41 | 1.86 | 55.43 | 3.37 | 26.88
BFS [22] | 0.918 | 2.39 | 1.85 | 55.52 | 3.38 | 26.91
SFIM [16] | 0.890 | 3.31 | 2.56 | 48.30 | 2.98 | 27.27
GS [9] | 0.949 | 2.17 | 1.68 | 57.55 | 2.74 | 28.32
GSA [9] | 0.964 | 1.86 | 1.28 | 63.02 | 2.16 | 30.78
MTF-GLP-HPM [18] | 0.962 | 1.90 | 1.33 | 62.23 | 2.15 | 30.47
CNMF [23] | 0.951 | 2.28 | 1.38 | 60.90 | 2.48 | 29.63
MTF-GLP [17] | 0.963 | 1.88 | 1.32 | 62.23 | 2.16 | 30.45
HySure [20] | 0.963 | 1.93 | 1.19 | 63.80 | 2.12 | 30.97
HyperPNN [29] | 0.957 | 1.92 | 1.06 | 66.22 | 2.40 | 29.00
DHP-DARN [24] | 0.954 | 1.91 | 1.05 | 66.22 | 2.35 | 29.98
DIP-HyperKite (ours) | 0.974 | 1.68 | 0.96 | 67.98 | 1.89 | 32.12
TABLE VII: The average quantitative results on the Botswana dataset.
Method | CC (↑) | SAM (↓) | RMSE (↓) | RSNR (↑) | ERGAS (↓) | PSNR (↑)
PCA [11] | 0.297 | 9.99 | 4.47 | 17.86 | 16.70 | 30.96
GFPCA [19] | 0.883 | 4.76 | 1.98 | 34.22 | 7.00 | 37.05
BF [21] | 0.903 | 5.15 | 1.94 | 34.40 | 6.62 | 37.89
BFS [22] | 0.917 | 4.69 | 1.72 | 36.84 | 6.39 | 37.99
SFIM [16] | 0.928 | 3.79 | 1.43 | 40.51 | 6.43 | 39.55
GS [9] | 0.733 | 5.64 | 2.96 | 26.37 | 8.17 | 35.13
GSA [9] | 0.943 | 3.52 | 1.42 | 40.73 | 4.30 | 41.38
MTF-GLP-HPM [18] | 0.929 | 3.82 | 1.45 | 39.38 | 6.40 | 39.85
CNMF [23] | 0.900 | 4.72 | 1.91 | 36.73 | 5.75 | 39.65
MTF-GLP [17] | 0.938 | 3.81 | 1.52 | 39.38 | 4.41 | 41.05
HySure [20] | 0.960 | 2.98 | 1.13 | 45.24 | 3.69 | 43.14
HyperPNN [29] | 0.946 | 3.97 | 1.11 | 46.55 | 4.77 | 41.57
DHP-DARN [24] | 0.953 | 3.60 | 1.05 | 46.66 | 4.44 | 42.24
DIP-HyperKite (ours) | 0.974 | 2.85 | 1.03 | 46.97 | 3.62 | 43.53
TABLE VIII: The average quantitative results on the Chikusei dataset.
Fig. 14: Visual results generated by different pansharpening algorithms for the first (first and second columns), third (third and fourth columns), 12-th (fifth and sixth columns), 20-th (seventh and eighth columns), and 21-st (ninth and tenth columns) patches of the Pavia Center dataset. (a) SFIM [16]. (b) GS [9]. (c) GSA [9]. (d) MTF-GLP-HPM [18]. (e) CNMF [23]. (f) MTF-GLP [17]. (g) HySure [20]. (h) HyperPNN [29]. (i) DHP-DARN [24]. (j) DIP-HyperKite (ours). (k) Reference.
Fig. 15: Visual results generated by different pansharpening algorithms for the first (first and second columns), fourth (third and fourth columns), 12-th (fifth and sixth columns), 16-th (seventh and eighth columns), and 19-th (ninth and tenth columns) patches of the Botswana dataset. (a) SFIM [16]. (b) GS [9]. (c) GSA [9]. (d) MTF-GLP-HPM [18]. (e) CNMF [23]. (f) MTF-GLP [17]. (g) HySure [20]. (h) HyperPNN [29]. (i) DHP-DARN [24]. (j) DIP-HyperKite (ours). (k) Reference.
Fig. 16: Visual results generated by different pansharpening algorithms for the fifth (first and second columns), 13-th (third and fourth columns), 16-th (fifth and sixth columns), 27-th (seventh and eighth columns), and 32-nd (ninth and tenth columns) patches of the Chikusei dataset. (a) SFIM [16]. (b) BF [21]. (c) GSA [9]. (d) MTF-GLP-HPM [18]. (e) BFS [22]. (f) MTF-GLP [17]. (g) HySure [20]. (h) HyperPNN [29]. (i) DHP-DARN [24]. (j) DIP-HyperKite (ours). (k) Reference.

V-B Final fusion results

In this section, we compare the final fusion results of our DIP-HyperKite with state-of-the-art pansharpening approaches, namely PCA [11], GFPCA [19], BF [21], BFS [22], SFIM [16], GS [9], GSA [9], MTF-GLP-HPM [18], CNMF [23], MTF-GLP [17], HySure [20], HyperPNN [29], and DHP-DARN [24], on the Pavia Center, Botswana, and Chikusei datasets.

Final Fusion results on the Pavia Center dataset

The average quantitative results of the different pansharpening approaches on the testing set of the Pavia Center dataset are shown in Table VI. As can be seen from Table VI, our proposed DIP-HyperKite achieves the highest CC value among all the pansharpening approaches considered in this study. A higher CC value indicates that the fused HSI is closer to the actual HSI, with less geometric distortion. Furthermore, our proposed DIP-HyperKite achieves the smallest values for the SAM, RMSE, and ERGAS performance measures, indicating the best fusion performance among the compared pansharpening approaches. In particular, the smallest SAM and ERGAS values indicate that our DIP-HyperKite can fuse HSIs with less spectral distortion than the state-of-the-art methods. In addition, our DIP-HyperKite improves the PSNR metric by 3.6% over the state-of-the-art value. To further verify the fusion quality of our proposed DIP-HyperKite, we present qualitative results for the Pavia Center dataset in Figure 14. To better highlight the differences in fusion quality between the pansharpening approaches, we show error maps along with the RGB composite image for each fused HSI. According to the figure, the error maps corresponding to our DIP-HyperKite are much more purple than those of the other pansharpening approaches, indicating smaller fusion errors. This is mainly due to the ability of our HyperKite network to predict very fine structures and edges by constraining the receptive field of the deep network.

Final Fusion results on the Botswana dataset

Table VII summarizes the average quantitative results of the different fusion methods on the Botswana dataset. Similar to the Pavia Center dataset, we can see that our DIP-HyperKite outperforms all the other HS pansharpening approaches by a considerable margin. Concretely, our DIP-HyperKite improves the CC by 1.03% and the PSNR by 3.71%. In addition, our method reduces the SAM by 9.68%, the RMSE by 8.57%, and the ERGAS by 10.85%. Furthermore, we show qualitative results for the different pansharpening approaches on the Botswana dataset in Figure 15. By observing the RGB images and error maps in Figure 15, we can see that the fusion results of our method are much closer to the reference image than those of the other pansharpening approaches.

Final Fusion results on the Chikusei dataset

In this section, we compare the qualitative and quantitative results on the Chikusei dataset. The average quantitative results of different pansharpening approaches on the Chikusei dataset are listed in Table VIII. As with the other two datasets, our proposed DIP-HyperKite outperforms all the pansharpening approaches considered in the analysis. Our method improves the CC, SAM, RMSE, RSNR, ERGAS, and PSNR performance measures over the state-of-the-art results by 1.45%, 19.0%, 6.67%, 0.67%, 18.5%, and 0.90%, respectively. To further highlight the fusion quality of our method, we present the qualitative results of selected pansharpening approaches for the Chikusei dataset in Figure 16. The RGB composite images and error maps clearly show that the fusion quality of the proposed DIP-HyperKite is higher than that of the other pansharpening approaches.
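As an illustration of how error maps like those in Figures 14-16 can be produced, the following matplotlib sketch renders an RGB composite of a fused HSI next to its per-pixel mean absolute error against the reference. The band indices chosen for the composite and the viridis colormap (under which near-zero errors appear purple) are assumptions, not the paper's exact plotting configuration.

import numpy as np
import matplotlib.pyplot as plt

def show_error_map(fused, ref, rgb_bands=(30, 20, 10)):
    """Display an RGB composite of `fused` (bands, H, W) beside its
    per-pixel mean absolute error against `ref`. The band indices are
    illustrative and depend on the sensor."""
    rgb = np.stack([fused[b] for b in rgb_bands], axis=-1)
    rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min() + 1e-12)  # normalize for display
    err = np.mean(np.abs(fused - ref), axis=0)                 # (H, W) error map

    fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(8, 4))
    ax0.imshow(rgb)
    ax0.set_title("RGB composite")
    ax0.axis("off")
    im = ax1.imshow(err, cmap="viridis")                       # low error -> purple
    ax1.set_title("Error map")
    ax1.axis("off")
    fig.colorbar(im, ax=ax1, fraction=0.046)
    plt.show()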

VI Conclusion

In this paper, we have presented a novel approach for HS pansharpening, which consists of three main steps: (1) up-sampling the LR-HSI via DIP, (2) predicting the residual image via the over-complete HyperKite network, and (3) obtaining the final fused HSI by summing the two. Previously proposed DIP methods for HS up-sampling impose a constraint only in the spectral domain, utilizing the LR-HSI. To better preserve both spatial and spectral information, we added a spatial constraint to DIP utilizing the available PAN image, thereby introducing both spatial and spectral constraints to the DIP optimization. Comprehensive experiments conducted on three HS datasets showed that our proposed spatial+spectral loss function significantly improves the quality of the up-sampled HSIs in terms of the CC, RMSE, RSNR, SAM, ERGAS, and PSNR performance measures. Next, for the residual prediction task, we showed that the residual component between the up-sampled HSI and the reference HSI primarily consists of edge information and very fine structures. Motivated by this observation, we proposed a novel over-complete deep network for residual prediction. In contrast to conventional under-complete representations, our over-complete network is able to focus on high-level features such as edges and fine structures by constraining the receptive field of the network. Finally, the fused HSI is obtained by adding the residual HSI to the up-sampled HSI. Comprehensive experiments on three HS datasets demonstrated the superiority of our DIP-HyperKite over the state-of-the-art methods in terms of both qualitative and quantitative evaluations.
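To make the three-step pipeline concrete, the following PyTorch sketch summarizes it under stated assumptions: dip_net, srf (a learnable spectral response module, e.g., a 1x1 convolution mapping the HSI bands to a single PAN band), and hyperkite stand in for the actual networks; average pooling is assumed as the spatial degradation model; the input interface of hyperkite, the loss weight, and the iteration count are illustrative. The released code should be consulted for the exact architectures and training details.

import torch
import torch.nn.functional as F

def dip_upsample(dip_net, srf, z, lr_hsi, pan, ratio=4, lam=1.0, iters=2000):
    """Step 1: optimize DIP with the combined spectral + spatial loss."""
    params = list(dip_net.parameters()) + list(srf.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(iters):
        hsi = dip_net(z)  # candidate up-sampled HSI from the fixed noise input z
        # Spectral-domain term: the down-sampled output should match the LR-HSI.
        loss = F.l1_loss(F.avg_pool2d(hsi, ratio), lr_hsi)
        # Spatial-domain term: the PAN image predicted via the learnable
        # SRF should match the observed PAN image (L1 distance).
        loss = loss + lam * F.l1_loss(srf(hsi), pan)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return dip_net(z).detach()

def pansharpen(dip_net, srf, hyperkite, z, lr_hsi, pan):
    up = dip_upsample(dip_net, srf, z, lr_hsi, pan)    # step 1: DIP up-sampling
    residual = hyperkite(torch.cat([up, pan], dim=1))  # step 2: edges / fine detail
    return up + residual                               # step 3: final fused HSI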

References

  • [1] P. Ghamisi, J. Plaza, Y. Chen, J. Li, and A. J. Plaza, “Advanced spectral classifiers for hyperspectral images: A review,” IEEE Geoscience and Remote Sensing Magazine, vol. 5, no. 1, pp. 8–32, 2017.
  • [2] R. Heylen, M. Parente, and P. Gader, “A review of nonlinear hyperspectral unmixing methods,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 6, pp. 1844–1868, 2014.
  • [3] S. Matteoli, M. Diani, and G. Corsini, “A tutorial overview of anomaly detection in hyperspectral images,” IEEE Aerospace and Electronic Systems Magazine, vol. 25, no. 7, pp. 5–28, 2010.
  • [4] S. Liu, D. Marinelli, L. Bruzzone, and F. Bovolo, “A review of change detection in multitemporal hyperspectral images: Current techniques, applications, and challenges,” IEEE Geoscience and Remote Sensing Magazine, vol. 7, no. 2, pp. 140–158, 2019.
  • [5] L. Loncan, L. B. De Almeida, J. M. Bioucas-Dias, X. Briottet, J. Chanussot, N. Dobigeon, S. Fabre, W. Liao, G. A. Licciardi, M. Simoes et al., “Hyperspectral pansharpening: A review,” IEEE Geoscience and remote sensing magazine, vol. 3, no. 3, pp. 27–46, 2015.
  • [6] A. Mohammadzadeh, A. Tavakoli, and M. J. Valadan Zoej, “Road extraction based on fuzzy logic and mathematical morphology from pan-sharpened ikonos images,” The photogrammetric record, vol. 21, no. 113, pp. 44–60, 2006.
  • [7] G. Licciardi, A. Villa, M. M. Khan, and J. Chanussot, “Image fusion and spectral unmixing of hyperspectral images for spatial improvement of classification maps,” in 2012 IEEE International Geoscience and Remote Sensing Symposium.   IEEE, 2012, pp. 7290–7293.
  • [8] C. A. Laben and B. V. Brower, “Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening,” U.S. Patent 6,011,875, Jan. 4, 2000.
  • [9] B. Aiazzi, S. Baronti, and M. Selva, “Improving component substitution pansharpening through multivariate regression of ms pan data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3230–3239, 2007.
  • [10] V. P. Shah, N. H. Younan, and R. L. King, “An efficient pan-sharpening method via a combined adaptive pca approach and contourlets,” IEEE transactions on geoscience and remote sensing, vol. 46, no. 5, pp. 1323–1335, 2008.
  • [11] P. Kwarteng and A. Chavez, “Extracting spectral contrast in landsat thematic mapper image data using selective principal component analysis,” Photogramm. Eng. Remote Sens, vol. 55, no. 1, pp. 339–348, 1989.
  • [12] G. Vivone, L. Alparone, J. Chanussot, M. Dalla Mura, A. Garzelli, G. A. Licciardi, R. Restaino, and L. Wald, “A critical comparison among pansharpening algorithms,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 5, pp. 2565–2586, 2014.
  • [13] T.-M. Tu, S.-C. Su, H.-C. Shyu, and P. S. Huang, “A new look at ihs-like image fusion methods,” Information fusion, vol. 2, no. 3, pp. 177–186, 2001.
  • [14] S. G. Mallat, “A theory for multiresolution signal decomposition: the wavelet representation,” in Fundamental Papers in Wavelet Theory.   Princeton University Press, 2009, pp. 494–513.
  • [15] G. P. Nason and B. W. Silverman, “The stationary wavelet transform and some statistical applications,” in Wavelets and statistics.   Springer, 1995, pp. 281–299.
  • [16] J. Liu, “Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details,” International Journal of Remote Sensing, vol. 21, no. 18, pp. 3461–3472, 2000.
  • [17] B. Aiazzi, L. Alparone, S. Baronti, and A. Garzelli, “Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis,” IEEE Transactions on geoscience and remote sensing, vol. 40, no. 10, pp. 2300–2312, 2002.
  • [18] B. Aiazzi, L. Alparone, S. Baronti, A. Garzelli, and M. Selva, “Mtf-tailored multiscale fusion of high-resolution ms and pan imagery,” Photogrammetric Engineering & Remote Sensing, vol. 72, no. 5, pp. 591–596, 2006.
  • [19] W. Liao, F. Van Coillie, S. Gautama, A. Pizurica, and W. Philips, “Fusion of thermal infrared hyperspectral and vis rgb data using guided filter and supervised fusion graph.”
  • [20] M. Simoes, J. Bioucas-Dias, L. B. Almeida, and J. Chanussot, “A convex formulation for hyperspectral image superresolution via subspace-based regularization,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 6, pp. 3373–3388, 2014.
  • [21] Q. Wei, N. Dobigeon, and J.-Y. Tourneret, “Bayesian fusion of multi-band images,” IEEE Journal of Selected Topics in Signal Processing, vol. 9, no. 6, pp. 1117–1127, 2015.
  • [22] Q. Wei, J. Bioucas-Dias, N. Dobigeon, and J.-Y. Tourneret, “Hyperspectral and multispectral image fusion based on a sparse representation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 7, pp. 3658–3668, 2015.
  • [23] N. Yokoya, T. Yairi, and A. Iwasaki, “Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 2, pp. 528–537, 2011.
  • [24] Y. Zheng, J. Li, Y. Li, J. Guo, X. Wu, and J. Chanussot, “Hyperspectral pansharpening using deep prior and dual attention residual network,” IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 11, pp. 8059–8076, 2020.
  • [25] V. Lempitsky, A. Vedaldi, and D. Ulyanov, “Deep image prior,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 9446–9454.
  • [26] W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, “Deep laplacian pyramid networks for fast and accurate super-resolution,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 624–632.
  • [27] O. Sidorov and J. Y. Hardeberg, “Deep hyperspectral prior: Single-image denoising, inpainting, super-resolution,” in 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019, pp. 3844–3851.
  • [28] G. Masi, D. Cozzolino, L. Verdoliva, and G. Scarpa, “Pansharpening by convolutional neural networks,” Remote Sensing, vol. 8, no. 7, p. 594, 2016.
  • [29] L. He, J. Zhu, J. Li, A. Plaza, J. Chanussot, and B. Li, “Hyperpnn: Hyperspectral pansharpening via spectrally predictive convolutional neural networks,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 8, pp. 3092–3100, 2019.
  • [30] J. Li, R. Cui, B. Li, R. Song, Y. Li, Y. Dai, and Q. Du, “Hyperspectral image super-resolution by band attention through adversarial learning,” IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 6, pp. 4304–4318, 2020.
  • [31] H. Xu, J. Ma, Z. Shao, H. Zhang, J. Jiang, and X. Guo, “Sdpnet: A deep network for pan-sharpening with enhanced information representation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 5, pp. 4120–4134, 2020.
  • [32] H. C. Burger, C. J. Schuler, and S. Harmeling, “Image denoising: Can plain neural networks compete with bm3d?” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 2392–2399.
  • [33] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4681–4690.
  • [34] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention.   Springer, 2015, pp. 234–241.
  • [35] V. Badrinarayanan, A. Kendall, and R. Cipolla, “Segnet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE transactions on pattern analysis and machine intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
  • [36] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
  • [37] M. S. Lewicki and T. J. Sejnowski, “Learning overcomplete representations,” Neural computation, vol. 12, no. 2, pp. 337–365, 2000.
  • [38] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, “Extracting and composing robust features with denoising autoencoders,” in Proceedings of the 25th international conference on Machine learning, 2008, pp. 1096–1103.
  • [39] J. M. J. Valanarasu, V. A. Sindagi, I. Hacihaliloglu, and V. M. Patel, “Kiu-net: Towards accurate segmentation of biomedical images using over-complete representations,” in International Conference on Medical Image Computing and Computer-Assisted Intervention.   Springer, 2020, pp. 363–373.
  • [40] ——, “Kiu-net: Overcomplete convolutional architectures for biomedical image and volumetric segmentation,” arXiv preprint arXiv:2010.01663, 2020.
  • [41] J. M. J. Valanarasu and V. M. Patel, “Overcomplete deep subspace clustering networks,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp. 746–755.
  • [42] P. Guo, J. M. J. Valanarasu, P. Wang, J. Zhou, S. Jiang, and V. M. Patel, “Over-and-under complete convolutional rnn for mri reconstruction,” 2021.
  • [43] S.-Y. Lo, J. M. J. Valanarasu, and V. M. Patel, “Overcomplete representations against adversarial videos,” arXiv preprint arXiv:2012.04262, 2020.
  • [44] R. Yasarla, J. M. J. Valanarasu, and V. M. Patel, “Exploring overcomplete representations for single image deraining using cnns,” IEEE Journal of Selected Topics in Signal Processing, 2020.
  • [45] W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, “Deep laplacian pyramid networks for fast and accurate super-resolution,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 624–632.
  • [46] K. Li, W. Xie, Q. Du, and Y. Li, “Ddlps: Detail-based deep laplacian pansharpening for hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 10, pp. 8011–8025, 2019.
  • [47] M. Simões, J. Bioucas-Dias, L. B. Almeida, and J. Chanussot, “A convex formulation for hyperspectral image superresolution via subspace-based regularization,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 6, pp. 3373–3388, 2015.
  • [48] Q. Wei, N. Dobigeon, and J.-Y. Tourneret, “Fast fusion of multi-band images based on solving a sylvester equation,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 4109–4121, 2015.
  • [49] W. Xie, J. Lei, Y. Cui, Y. Li, and Q. Du, “Hyperspectral pansharpening with deep priors,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 5, pp. 1529–1543, 2020.
  • [50] C. G. Harris, M. Stephens et al., “A combined corner and edge detector,” in Alvey vision conference, vol. 15, no. 50.   Citeseer, 1988, pp. 10–5244.
  • [51] S. Holzwarth, A. Muller, M. Habermeyer, R. Richter, A. Hausold, S. Thiemann, and P. Strobl, “Hysens-dais 7915/rosis imaging spectrometers at dlr,” in Proceedings of the 3rd EARSeL workshop on imaging spectroscopy, 2003, pp. 3–14.
  • [52] Y. Zeng, W. Huang, M. Liu, H. Zhang, and B. Zou, “Fusion of satellite images in urban area: Assessing the quality of resulting images,” in 2010 18th International Conference on Geoinformatics.   IEEE, 2010, pp. 1–4.
  • [53] N. Yokoya and A. Iwasaki, “Airborne hyperspectral data over chikusei,” Space Appl. Lab., Univ. Tokyo, Tokyo, Japan, Tech. Rep. SAL-2016-05-27, 2016.
  • [54] Q. Wei, “Bayesian fusion of multi-band images: A powerful tool for super-resolution,” Ph.D. dissertation, Institut national polytechnique de Toulouse (INPT), 2015.