Deep learning based cloud detection for remote sensing images by the fusion of multi-scale convolutional features
Cloud detection is an important preprocessing step for the precise application of optical satellite imagery. In this paper, we propose a deep convolutional neural network based cloud detection method, named multi-scale convolutional feature fusion (MSCFF), for remote sensing images. In the MSCFF network architecture, encoder and corresponding decoder modules, which provide both local and global context by densifying feature maps with trainable filter banks, extract multi-scale and high-level spatial features. The feature maps at multiple scales are then up-sampled and concatenated, and a novel MSCFF module fuses the features of different scales for the output. The output feature maps of the network are regarded as probability maps and fed to a binary classifier for the final pixel-wise segmentation of cloud and cloud shadow. The MSCFF method was validated on hundreds of globally distributed optical satellite images with spatial resolutions ranging from 0.5 m to 50 m, including Landsat-5/7/8, Gaofen-1/2/4, Sentinel-2, Ziyuan-3, CBERS-04, and Huanjing-1 images, as well as high-resolution images exported from Google Earth. The experimental results indicate that MSCFF outperforms both traditional rule-based cloud detection methods and state-of-the-art deep learning models in terms of accuracy, especially in areas covered by bright surfaces. The effectiveness of MSCFF makes it well suited to practical cloud detection for multiple types of satellite imagery. Our global high-resolution cloud detection validation dataset has been made available online.
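To make the fusion idea concrete, the following PyTorch sketch illustrates the general pattern the abstract describes: a small encoder-decoder produces feature maps at several scales, the coarser maps are up-sampled to the input resolution and concatenated, and a fusion layer maps them to per-pixel probability maps that are thresholded for the final cloud/shadow masks. This is a minimal illustration under assumed settings, not the authors' exact architecture; the class name `MSCFFSketch`, the layer widths, the three-level depth, and the 4-band input are all illustrative assumptions.

```python
# Minimal sketch of multi-scale convolutional feature fusion for cloud and
# cloud shadow segmentation. Depths, widths, and names are assumptions,
# not the paper's exact MSCFF configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MSCFFSketch(nn.Module):
    def __init__(self, in_ch=4, base=32):
        super().__init__()
        # Encoder: progressively downsample to enlarge the receptive field
        # (global context), while early layers keep local detail.
        self.enc1 = conv_block(in_ch, base)          # full resolution
        self.enc2 = conv_block(base, base * 2)       # 1/2 resolution
        self.enc3 = conv_block(base * 2, base * 4)   # 1/4 resolution
        self.pool = nn.MaxPool2d(2)
        # Decoder: recover spatial detail scale by scale.
        self.dec2 = conv_block(base * 4 + base * 2, base * 2)
        self.dec1 = conv_block(base * 2 + base, base)
        # Fusion: 1x1 conv over the concatenated multi-scale features,
        # producing two probability maps (cloud, cloud shadow).
        self.fuse = nn.Conv2d(base * 4 + base * 2 + base, 2, 1)

    def forward(self, x):
        up = lambda t, s: F.interpolate(t, size=s, mode="bilinear", align_corners=False)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([up(e3, e2.shape[2:]), e2], dim=1))
        d1 = self.dec1(torch.cat([up(d2, e1.shape[2:]), e1], dim=1))
        # Up-sample all scales to the input size and concatenate before fusion.
        multi = torch.cat([up(e3, x.shape[2:]), up(d2, x.shape[2:]), d1], dim=1)
        # Treat the fused outputs as per-pixel probability maps.
        return torch.sigmoid(self.fuse(multi))

# Usage: a 4-band 256x256 tile -> two probability maps, then binary masks.
probs = MSCFFSketch()(torch.randn(1, 4, 256, 256))
cloud_mask, shadow_mask = (probs > 0.5).unbind(dim=1)  # pixel-wise binary classifier
print(probs.shape)  # torch.Size([1, 2, 256, 256])
```

The key design choice sketched here is that the segmentation head sees features from every scale at once, rather than only the finest decoder output, which is what allows large, bright, low-texture regions (a common failure mode for cloud detectors) to be classified with global context.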