Exploiting multi-temporal information for improved speckle reduction of Sentinel-1 SAR images by deep learning

02/01/2021
by Emanuele Dalsasso, et al.

Deep learning approaches show unprecedented results for speckle reduction in SAR amplitude images. The wide availability of multi-temporal stacks of SAR images can further improve the quality of denoising. In this paper, we propose a flexible yet efficient way to integrate temporal information into a deep neural network for speckle suppression. Archives provide access to long time series of SAR images, from which multi-temporal averages can be computed with virtually no remaining speckle fluctuations. The proposed method combines this multi-temporal average and the image at a given date in the form of a ratio image, and uses a state-of-the-art neural network to remove the speckle in this ratio image. This simple strategy is shown to offer a noticeable improvement over filtering the original image without knowledge of the multi-temporal average.

1 Introduction

Remote sensing technologies are widely used for Earth observation, with a stream of images continuously provided by satellites carrying imaging instruments. Among them, Synthetic Aperture Radar (SAR) is an active imaging technology that is particularly attractive because it can operate by day or night, in (almost) any weather condition, and provides information complementary to optical imaging. Interpreting SAR images is, however, not trivial: they are affected by strong fluctuations inherent to coherent imaging systems, the so-called speckle phenomenon.

The speckle has been modeled by Goodman as a multiplicative noise [1] whose variance is signal-dependent. Speckle reduction techniques have been intensively studied. The most recent approaches rely on cutting-edge machine learning techniques based on deep neural networks. These networks have the ability to learn implicit patterns from the set of images used during training and then to generalize to similar data. Supervised techniques require a large number of pairs of noise-free and noisy images. However, for SAR images, providing ground-truth images is difficult. To circumvent this problem, MuLoG [2] employs Gaussian denoisers pre-trained on natural images inside a framework that alternates non-linear steps, accounting for the Fisher-Tippett distribution of log-transformed data, with Gaussian denoising steps. Alternatively, noisy images resembling SAR images can be generated by simulating speckle noise on optical images, which then serve as ground truth [3, 4, 5]. As the discrepancy between the optical domain and the SAR domain is a source of artifacts, multi-temporal SAR series can instead be averaged to produce an image whose content better matches that of an actual SAR acquisition (e.g., textures, edges, bright points). This image can then serve as a reference with respect to another SAR image acquired over the same area [6, 7]. Synthetic speckle noise can also be added to the average image [8] or to a denoised version of the average image [9]. The disadvantages stemming from the lack of an actual ground truth (and, above all, the difficulty of accurately modeling speckle correlations) motivate the interest of the community in self-supervised approaches [10, 11, 12, 13].

Since the launch of the Sentinel-1 satellite mission in the context of the Copernicus program of the European Space Agency, long time series of SAR images are nowadays freely available online. Incorporating multi-temporal information with deep learning has the potential to significantly improve denoising. Notably, stable structures in the observed areas, such as buildings or roads, are clearly visible in multi-temporal averages, even when the contrast with the surrounding area is weak. RABASAR [14] is a framework designed to extend single-image speckle reduction techniques by including a high-quality multi-temporal average image. This multi-temporal mean is referred to as a "super-image", owing to its high signal-to-noise ratio, and represents a summary of the multi-temporal SAR stack. RABASAR combines a speckled image with the super-image by forming the ratio between the two images. This ratio image has a reduced informational content (only speckle fluctuations and changes with respect to the super-image are present) and is thus easier to denoise. Most single-image despeckling methods are designed for spatially uncorrelated speckle and require a spatial resampling (typically, a down-sampling by a factor of 2) when applied to actual SAR data. SAR2SAR [13], a self-supervised deep learning algorithm, avoids this sub-sampling altogether and thus preserves the spatial resolution of the restored images. Indeed, SAR2SAR is trained directly on Sentinel-1 SAR images, integrating knowledge of the spatial correlation of speckle. The main idea of this paper is to combine multi-temporal information with the SAR2SAR deep learning algorithm to improve speckle reduction in SAR imaging.

2 The SAR2SAR approach

Under the hypothesis of a sufficiently large number of independent and identically distributed scatterers, Goodman's fully-developed model [1] relates the measured intensity $I$, the reflectivity $R$, and the speckle $S$ in a multiplicative model: $I = R \times S$. The speckle $S$ is statistically described as a random variable following a gamma distribution:

$$p(S) = \frac{L^L}{\Gamma(L)}\, S^{L-1} e^{-L S}, \qquad S \ge 0, \tag{1}$$

depending on the number of looks $L$. It follows that the intensity has a signal-dependent expectation $\mathbb{E}[I] = R$ and variance $\mathrm{Var}[I] = R^2 / L$. When training a neural network for speckle reduction, it is often beneficial to apply a logarithmic transform to the input SAR intensities ($y = \log I$). Indeed, not only does it compress the high dynamic range of the values, but it also stabilizes the variance. The log-speckle $s = \log S$ then has an additive behaviour ($y = x + s$, with $x = \log R$) and is distributed according to a Fisher-Tippett distribution:

$$p(s) = \frac{L^L}{\Gamma(L)}\, e^{L s} \exp\!\left(-L e^{s}\right). \tag{2}$$
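To make the multiplicative model and the effect of the log transform concrete, the short NumPy sketch below simulates $L$-look gamma speckle on constant-reflectivity patches and checks that the standard deviation of the intensity grows with the reflectivity while that of the log-intensity stays approximately constant. Function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_intensity(reflectivity, looks=1):
    """Goodman's fully-developed model: I = R * S with S ~ Gamma(shape=L, scale=1/L)."""
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=reflectivity.shape)
    return reflectivity * speckle

# On constant-reflectivity patches the std of I grows with R (multiplicative noise),
# while the std of log(I) stays roughly constant: the log transform stabilizes the variance.
for R in (1.0, 10.0, 100.0):
    I = simulate_intensity(np.full((512, 512), R), looks=1)
    print(f"R={R:6.1f}  std(I)={I.std():8.2f}  std(log I)={np.log(I).std():.3f}")
```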

Once a large dataset composed of pairs of ground-truth ($x_k$) and noisy ($y_k$) images is built, a Convolutional Neural Network (CNN) can be trained for speckle reduction by learning a parametric mapping $f_{\theta}$ that minimizes the empirical risk according to a loss function $\mathcal{L}$:

$$\hat{\theta} = \arg\min_{\theta} \sum_k \mathcal{L}\big(f_{\theta}(y_k),\, x_k\big). \tag{3}$$

It is shown in [13] that the negative log-likelihood defines an efficient loss, given, for the Fisher-Tippett distribution (in the single-look case and up to additive constants), by:

$$\mathcal{L}(\hat{x}, y) = \hat{x} + e^{\,y - \hat{x}}. \tag{4}$$
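A minimal PyTorch sketch of this loss, assuming single-look data and dropping additive constants; the function name and the choice of framework are mine, not prescribed by [13].

```python
import torch

def fisher_tippett_nll(x_hat, y):
    """Negative log-likelihood of a log-domain speckled observation y given the
    estimated log-reflectivity x_hat (single-look case), up to constant terms.
    Its minimum with respect to x_hat is reached at x_hat = y."""
    return torch.mean(x_hat + torch.exp(y - x_hat))
```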

In practice, since the ground-truth reflectivity is not accessible, multi-temporal stacks of co-registered SAR images can be used instead. Given a set of co-registered SAR images $\{y_1, \ldots, y_T\}$ of the same area, the network parameters can be learned from pairs of dates $(t_1, t_2)$ by:

$$\hat{\theta} = \arg\min_{\theta} \sum_k \mathcal{L}\big(f_{\theta}(y_{t_1,k}),\, y_{t_2,k}\big), \tag{5}$$

provided that temporal changes between images $y_{t_1}$ and $y_{t_2}$ are adequately compensated. This self-supervised training is at the core of the SAR2SAR algorithm [13], where multi-temporal stacks of Sentinel-1 SAR images are used to train a model for speckle suppression from single-look images. The advantage of learning directly from real SAR images is that the network can model the spatial correlations of the speckle. Thus, the algorithm can be applied directly to SAR images without any pre-processing step, which preserves the spatial resolution (see [15]).
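The training principle of Eq. (5) can be sketched as follows: two dates are drawn from a co-registered stack of log-intensity images, one is denoised and the other serves as the noisy target. This is a simplified sketch; the actual SAR2SAR training additionally compensates temporal changes between the two dates, which is omitted here, and the helper name is assumed.

```python
import torch

def self_supervised_step(model, optimizer, log_stack):
    """One training step in the spirit of SAR2SAR.
    log_stack: tensor of co-registered log-intensity images, shape (T, H, W),
    assumed already compensated for temporal changes."""
    t1, t2 = torch.randperm(log_stack.shape[0])[:2]   # two distinct dates
    y1 = log_stack[t1][None, None]                    # network input, shape (1, 1, H, W)
    y2 = log_stack[t2][None, None]                    # noisy target (same scene, independent speckle)
    x_hat = model(y1)
    loss = torch.mean(x_hat + torch.exp(y2 - x_hat))  # Fisher-Tippett loss of Eq. (4)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```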

3 A multi-temporal extension of SAR2SAR

3.1 Ratio-based filtering

The proposed extension is based on RABASAR, a simple yet efficient multi-temporal despeckling method (see [14] for a complete description). The key idea is the use of the ratio between a speckled image $I_t$ and the temporal mean $\bar{I}$ of the SAR image stack (referred to in the following as "the super-image").

The first step of this method is to compute the super-image $\bar{I}$. Averaging the multi-temporal series strongly reduces the speckle. If some noticeable speckle fluctuations remain, a slight spatial filtering may be necessary to obtain a very high-quality super-image.
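A minimal sketch of the super-image computation, assuming a stack of co-registered intensity images; the optional Gaussian smoothing stands in for whichever light spatial filter is used and is not the paper's specific choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compute_super_image(stack, smooth_sigma=None):
    """Temporal mean of a co-registered intensity stack of shape (T, H, W)."""
    super_image = stack.mean(axis=0)
    if smooth_sigma is not None:
        # optional light spatial filtering if noticeable speckle fluctuations remain
        super_image = gaussian_filter(super_image, sigma=smooth_sigma)
    return super_image
```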

For any image $I_t$ in the stack, the ratio $\rho_t = I_t / \bar{I}$ is then computed and denoised by a state-of-the-art speckle reduction method. In the absence of speckle fluctuations in the super-image, the noise distribution is similar to that of a single-look SAR image and a traditional despeckling algorithm can be applied. If speckle fluctuations remain in the super-image, an adaptation of the despeckling algorithm is necessary (a generic method, called RuLoG, is proposed in [14]). For all despeckling methods that are sensitive to speckle correlations, a sub-sampling of the ratio image is necessary, which alters the resolution of the restored ratio image $\hat{\rho}_t$. Finally, the denoised estimate of the reflectivity is retrieved by multiplying $\hat{\rho}_t$ with the super-image: $\hat{R}_t = \hat{\rho}_t \times \bar{I}$.

In this paper, we propose to replace the denoising step of the ratio image by the SAR2SAR approach, which avoids the sub-sampling procedure required by standard despeckling filters.
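Schematically, the ratio-based restoration of one date then reads as follows, where `despeckle_ratio` stands for the plugged-in denoiser (here SAR2SAR, applied after the rescaling described in Sec. 3.2); the function names are assumptions for illustration.

```python
def restore_from_ratio(intensity, super_image, despeckle_ratio):
    """Ratio-based restoration of one date of the stack."""
    ratio = intensity / super_image       # mostly speckle (and changes) remain
    ratio_hat = despeckle_ratio(ratio)    # e.g. SAR2SAR applied to the ratio image
    return ratio_hat * super_image        # re-inject the structures of the super-image
```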

Figure 1: Comparison of several despeckling strategies for single-image and multi-temporal processing.

3.2 Adaptation of SAR2SAR to ratio images

Owing to their non-linear nature, neural networks are very sensitive to the dynamic range of their inputs. Significantly shifting the dynamic range of the input images between training and testing most often leads to catastrophic results. Ratio images have very different ranges compared to SAR intensity images: in the absence of significant changes between the image $I_t$ and the super-image $\bar{I}$, the expected value of the ratio is 1. It is therefore necessary to rescale them appropriately in order to use the SAR2SAR network trained on SAR images (i.e., not specifically trained on ratio images). We experimented with several normalization strategies and describe the one that worked best. SAR2SAR processes log-transformed intensities that are approximately rescaled to the range $[0, 1]$ by a fixed affine transform $y \mapsto (y - m)/(M - m)$, where $m$ and $M$ respectively correspond to the minimum and maximum log-transformed intensity computed over the whole training set. In order to preserve the original range, in the log-domain, of the image $I_t$ when processing the ratio, we normalize the super-image: $\bar{I}_{\text{norm}} = \bar{I} / \kappa$. The normalization factor $\kappa$ is chosen such that the average log-transformed intensity of the normalized super-image is equal to 0: $\langle \log \bar{I}_{\text{norm}} \rangle = 0$, i.e., $\kappa = \exp\!\big(\langle \log \bar{I} \rangle\big)$, with $\langle \cdot \rangle$ the average value computed over all pixels of an image. The modified ratio image $I_t / \bar{I}_{\text{norm}}$ is processed by SAR2SAR. The obtained despeckled ratio $\hat{\rho}_t$ is then multiplied by the normalized super-image to produce the final estimate of the restored image: $\hat{R}_t = \hat{\rho}_t \times \bar{I}_{\text{norm}}$.
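Under this reading, the normalization amounts to dividing the super-image by the exponential of its mean log-intensity (its geometric mean); a hedged sketch with assumed names follows.

```python
import numpy as np

def normalize_super_image(super_image):
    """Divide by exp(mean(log)) so that the average log-intensity of the result is 0."""
    kappa = np.exp(np.mean(np.log(super_image)))
    return super_image / kappa

def proposed_restore(intensity, super_image, sar2sar):
    """Proposed multi-temporal extension: despeckle the modified ratio with SAR2SAR,
    then multiply back by the normalized super-image."""
    si_norm = normalize_super_image(super_image)
    ratio_hat = sar2sar(intensity / si_norm)
    return ratio_hat * si_norm
```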

4 Experimental results

We illustrate our method on single-look Sentinel-1 images of an area near Lelystad, Netherlands. A stack of 25 images was spatially co-registered and temporally averaged. The remaining speckle fluctuations were suppressed with MuLoG+BM3D [2], using an equivalent number of looks estimated in a homogeneous area. A single-look amplitude image and the super-image are shown in Fig. 1, left column.
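For reference, the equivalent number of looks over a homogeneous area is classically estimated as the squared mean of the intensity divided by its variance; a minimal sketch:

```python
import numpy as np

def equivalent_number_of_looks(homogeneous_patch):
    """ENL = mean(I)^2 / var(I), computed over a visually homogeneous intensity patch."""
    patch = np.asarray(homogeneous_patch, dtype=float)
    return patch.mean() ** 2 / patch.var()
```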

Fig. 1 compares the restoration results obtained by several strategies: the top block gives single-image restoration results and the bottom block shows how the use of a super-image improves the despeckling. Images (a) and (d) suffer from artifacts due to the application, directly on a Sentinel-1 image, of a despeckling method that is sensitive to spatial correlations of the speckle. Downsampling the images reduces speckle correlation and suppresses these artifacts (images (b) and (e)). This comes at the cost of a noticeable resolution loss, somewhat mitigated by the use of a super-image. SAR2SAR is robust to speckle correlations: it gives superior results in the single-image scenario (image (c)) and, with the proposed multi-temporal approach, offers a restoration with an improved preservation of details such as thin roads or field edges (image (f)).

5 Conclusion

The combination of a deep neural network trained to remove speckle from actual SAR images with multi-temporal information in the form of a super-image (i.e., a high-quality temporal average of co-registered SAR images) improves the quality of the restored images. Future work will consider more complex ways to include multi-temporal information in deep neural networks. Other methods to compute the super-image can also be considered, see [16].

References