Multispectral Pan-sharpening via Dual-Channel Convolutional Network with Convolutional LSTM-Based Hierarchical Spatial-Spectral Feature Fusion
Multispectral pan-sharpening aims to produce a multispectral (MS) image with high resolution (HR) in both the spatial and spectral domains by fusing a panchromatic (PAN) image with a corresponding MS image. In this paper, we propose a novel dual-channel network (DCNet) framework for MS pan-sharpening. The dual-channel backbone of DCNet comprises a spatial channel that captures spatial information with a 2D CNN and a spectral channel that extracts spectral information with a 3D CNN. This heterogeneous 2D/3D CNN architecture mitigates the spectral distortion that typically arises in conventional 2D CNN models. To fully integrate the spatial and spectral features captured at different levels, we introduce a multi-level fusion strategy. Specifically, a spatial-spectral CLSTM (S^2-CLSTM) module is proposed to fuse the hierarchical spatial and spectral features, effectively capturing correlations among multi-level features. The S^2-CLSTM module performs fusion in two ways: intra-level fusion via bi-directional lateral connections and inter-level fusion via the cell state of the S^2-CLSTM. Finally, the desired HR-MS image is recovered by a reconstruction module. Extensive experiments have been conducted on real-world datasets at both a simulated reduced scale and the original scale. Compared with state-of-the-art methods, the proposed DCNet achieves superior or competitive performance.
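The abstract outlines the architecture without giving implementation details. The following PyTorch sketch only illustrates the general idea under stated assumptions: a 2D CNN branch on the PAN input, a 3D CNN branch on the upsampled MS input, and a ConvLSTM-style cell whose cell state carries fused information across feature levels. The layer widths, the number of levels, the residual reconstruction, and the simplification of the bi-directional lateral connections to a single concatenation are all illustrative assumptions, not the authors' DCNet or S^2-CLSTM implementation.

```python
# Minimal, hypothetical sketch of a dual-channel (2D/3D CNN) backbone whose
# hierarchical features are fused by a ConvLSTM-style cell; all sizes and the
# fusion wiring are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """Standard ConvLSTM cell; its cell state provides inter-level fusion."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g          # cell state accumulates information across levels
        h = o * torch.tanh(c)
        return h, c


class DualChannelSketch(nn.Module):
    def __init__(self, ms_bands=4, feat=32, levels=3):
        super().__init__()
        self.levels = levels
        # Spatial channel: 2D CNN applied to the PAN image.
        self.spatial = nn.ModuleList(
            [nn.Conv2d(1 if i == 0 else feat, feat, 3, padding=1) for i in range(levels)]
        )
        # Spectral channel: 3D CNN over the MS bands (band dimension as depth).
        self.spectral = nn.ModuleList(
            [nn.Conv3d(1 if i == 0 else feat, feat, 3, padding=1) for i in range(levels)]
        )
        # Intra-level fusion (simplified here to concatenation of lateral features).
        self.fuse = ConvLSTMCell(in_ch=feat + feat * ms_bands, hid_ch=feat)
        self.recon = nn.Conv2d(feat, ms_bands, 3, padding=1)

    def forward(self, pan, ms_up):
        # pan: (B, 1, H, W); ms_up: (B, C, H, W) MS image upsampled to PAN size.
        b, c, hgt, wid = ms_up.shape
        s2d = pan
        s3d = ms_up.unsqueeze(1)                        # (B, 1, C, H, W) for 3D conv
        h = pan.new_zeros(b, self.fuse.hid_ch, hgt, wid)
        cell = torch.zeros_like(h)
        for i in range(self.levels):
            s2d = F.relu(self.spatial[i](s2d))          # spatial features at level i
            s3d = F.relu(self.spectral[i](s3d))         # spectral features at level i
            lateral = torch.cat([s2d, s3d.flatten(1, 2)], dim=1)  # intra-level fusion
            h, cell = self.fuse(lateral, (h, cell))               # inter-level fusion
        return ms_up + self.recon(h)                    # residual reconstruction (assumed)


if __name__ == "__main__":
    net = DualChannelSketch(ms_bands=4)
    pan = torch.randn(2, 1, 64, 64)
    ms_up = torch.randn(2, 4, 64, 64)
    print(net(pan, ms_up).shape)                        # torch.Size([2, 4, 64, 64])
```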