SAR2SAR: a self-supervised despeckling algorithm for SAR images

06/26/2020
by   Emanuele Dalsasso, et al.
Jean Monnet University

Speckle reduction is a key step in many remote sensing applications. Speckle strongly affects synthetic aperture radar (SAR) images, making them difficult to analyse. Because the spatial correlation of speckle is difficult to model, a self-supervised deep learning algorithm is proposed in this paper: SAR2SAR. Multi-temporal time series are leveraged and the neural network learns to restore SAR images by looking only at noisy acquisitions. To this end, the recently proposed noise2noise framework is employed. The strategy to adapt it to SAR despeckling is presented, based on a compensation of temporal changes and a loss function adapted to the statistics of speckle. A study with synthetic speckle noise compares the performance of the proposed method with other state-of-the-art filters. Results on real images are then discussed, to show the potential of the proposed algorithm. The code is made available to allow testing and reproducible research in this field.



I Introduction

Synthetic Aperture Radar (SAR) is an active imaging technology widely used for Earth observation, thanks to its capability of acquiring images by day or night and in (almost) all-weather conditions. Agriculture, forestry and oceanography are among the fields that benefit from the exploitation of SAR images, which are employed in a wide range of practical applications such as urban monitoring, land-use mapping, biomass estimation, damage assessment, oil spill detection and ice monitoring, among others [2]. Many of these applications build on advanced techniques such as interferometry and polarimetry.

However, interpreting SAR images is a challenging task, both for human observers and for automatic tools aiming at extracting useful information, because they are corrupted by speckle. Although commonly referred to as noise (a convention we also adopt in this paper), speckle is a physical phenomenon caused by the coherent sum of the contributions of the different elementary scatterers within the same resolution cell, which the radar cannot resolve. The phase differences induce fluctuations in the complex summation, and hence in the observed amplitude, giving the image its characteristic granular appearance.

In this context, being capable of effectively removing speckle from SAR images is of crucial importance for the community. Great efforts have been devoted to this topic. Among the most sophisticated techniques recently developed, one can mention non-local (NL) algorithms [3, 4, 5, 6], generalized in NL-SAR [7], a fully automatic algorithm that handles any SAR modality, single- or multi-look images, by performing several non-local estimations to best restore speckle-free data.

In [8], a general framework called MuLoG is proposed to apply any image denoiser originally designed for additive Gaussian noise within an iterative speckle removal procedure. Recent advances of deep learning for additive white Gaussian noise (AWGN) removal can thus be exploited [9]. Training an end-to-end model for speckle reduction has also been studied recently [10, 11, 12, 13, 14, 15]. However, taking into account the peculiarities of SAR data remains challenging: neither the noise distribution, nor the content, the texture, or the physical meaning of a pixel value are the same as in natural images. On top of that, there is an inherent scarcity of speckle-free references with which to train supervised deep learning algorithms to map a noisy SAR image to a speckle-free one. Thus, borrowing algorithms proposed in the computer vision field and extending them to the speckle removal task is not straightforward.

In this paper, we address the lack of noise-free references by extending the noise2noise approach proposed by Lehtinen et al. [1] to take into account the peculiarities of SAR data. Thanks to a grounded loss function formulation, which depends on the speckle model, large stacks of multi-temporal images acquired over the same area are exploited in two ways. In the first instance, as described in [our SARCNN], a dataset of noiseless images is created and our model is trained with synthetic speckle, following Goodman's fully developed speckle model [16]. At a later stage, we feed the network with real acquisitions, allowing it to learn the spatial correlation introduced by the SAR processing steps, namely spectral windowing and oversampling [17, 18]. The problem of temporal changes is addressed by a change compensation strategy. This ensures robustness and generalization of the algorithm, since any temporal series of SAR images can be used at this transfer learning step.

II Related works

SAR images lack noise-free references, which makes it non-trivial to adapt deep learning image denoising methods to speckle reduction. This section analyses the solutions proposed in recent work.

The first use of a convolutional neural network (CNN) for SAR image despeckling was proposed by Chierchia et al. [10]. Inspired by Zhang et al. [19], their SAR-CNN is an adaptation of the denoising CNN (DnCNN) to SAR images. The ground truth is created by exploiting temporal series of images: assuming that no change has occurred between acquisitions, the images are temporally multilooked to produce a reference image. While achieving high-quality results both on images with synthetic noise and on real SAR images, the method is difficult to reproduce. Not only is temporal stability rarely observed, but the definition of absence of change is ambiguous. Only images with a short temporal baseline generally offer sufficient temporal stability, but this often comes with a strong temporal correlation of speckle, which undermines the ability of the network to efficiently remove it.

An alternative approach, considered by Wang et al. [11, 12] and by Zhang et al. [13], consists of using natural images and producing SAR-like data by generating synthetic speckle, following Goodman's model [16]. Only the case of multilooked images has been considered in the experiments, making the speckle fluctuations less prominent than in the most interesting case of single-look SAR acquisitions.

The use of natural images, combined with synthetic speckle noise, has the advantage that a huge dataset can be effortlessly generated, allowing the training of models with numerous parameters (i.e., deep architectures). However, peculiar characteristics of SAR images are neglected: content, geometry, resolution, scattering phenomena, etc. A compromise is proposed by Lattari et al. [14]: while the network is initially trained on a synthetic dataset built from natural images, a fine-tuning on SAR images is subsequently carried out. To this purpose, stacks of images are temporally averaged to produce a target image, which is then corrupted with synthetic speckle. In this way, the model can better handle real SAR images. A U-Net architecture [20] is employed in a residual fashion, along with the homomorphic approach.

At present, there does not exist a clear strategy on how to train deep learning models for SAR image despeckling. Some insight is given in [our SARCNN], where a high-quality dataset of noise-free SAR images is built to train an end-to-end deep learning model.

Algorithms developed using speckle generated under Goodman's fully developed speckle model generally assume an absence of spatial correlation [16], which is not the case in actual SAR images synthesized by space agencies [21, 17]. Thus, a careful pre-processing step must be performed before handling real images to prevent the appearance of strong artifacts [18]. Whitening the spectrum [17, 22, 23] or down-sampling the image are possible strategies [18]. Yet, they are either not easy to apply in a systematic fashion or they result in a loss of spatial resolution.

To overcome these issues, an end-to-end self-supervised deep learning model trained on real SAR images is proposed by Boulch et al. [24]. Their work is based on the intuition that, if no change occurs, randomly picking two images from a temporal stack and training a neural network to reproduce one image starting from the other eventually leads the network to output the underlying speckle-free reflectivity, as only the speckle realization is changing.

As temporal stability is rarely observed in practice, Molini et al. [15] present an algorithm enabling direct training on real images, learning to denoise from a single image at a time. Their method, however, relies on the assumption that speckle is spatially uncorrelated, i.e. a whitening pre-processing stage is shown to be crucial to achieve good performance. The authors suggest further studies on real images, while showing promising preliminary results.

III Speckle model

The fluctuations affecting SAR images arise from the 3-D spatial configuration and the nature of the scatterers inside a resolution cell. Echoes generated by each scatterer interfere either in a constructive or in a destructive way.

These perturbations are generally modeled as a multiplicative noise, i.e. the speckle. Assuming a large number of elementary scatterers producing echoes with independent and identically distributed (i.i.d.) complex amplitudes, the fully-developed speckle model proposed by Goodman [16] relates the measured intensity I, the underlying reflectivity R, and the speckle S as follows:

I = R × S    (1)

where the speckle component S is modeled by a random variable distributed according to a gamma distribution:

p(S) = (L^L / Γ(L)) S^(L−1) e^(−L S)    (2)

with L the number of looks and Γ the gamma function. It follows that E[S] = 1 and Var[S] = 1/L. Thus, averaging independent samples in intensity leads to an unbiased estimator of the underlying reflectivity, reducing the fluctuations by a factor proportional to the number of samples available.
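As a sanity check, the multiplicative model and the statistics of the gamma-distributed speckle can be simulated in a few lines (a NumPy sketch; the reflectivity value and the tolerances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 1                                    # number of looks (single-look case)
reflectivity = 5.0                       # true underlying reflectivity R

# Fully developed speckle: S ~ Gamma(shape=L, scale=1/L), so E[S] = 1, Var[S] = 1/L
speckle = rng.gamma(shape=L, scale=1.0 / L, size=100_000)
intensity = reflectivity * speckle       # multiplicative model: I = R * S

# The sample mean of the intensity is an unbiased estimator of R,
# and averaging N independent looks divides the variance by N.
assert abs(speckle.mean() - 1.0) < 0.02
assert abs(speckle.var() - 1.0 / L) < 0.03
assert abs(intensity.mean() - reflectivity) < 0.1
```

With `L > 1` (multilooked data), the same code shows the speckle variance shrinking as 1/L while its mean stays at 1.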

In order to stabilize the variance, i.e. to make it independent of the reflectivity, a logarithmic transformation is often applied (homomorphic transform, see [8]). The log-speckle has an additive behaviour:

y = x + s    (3)

where y = log I is the log-intensity, x = log R the log-reflectivity, and the log-speckle s = log S follows a Fisher-Tippett distribution described by:

p(s) = (L^L / Γ(L)) e^(L s) e^(−L e^s)    (4)

The log-speckle has a variance that does not depend on the log-reflectivity, i.e., it is stationary throughout the image: Var[s] = ψ₁(L), where ψ₁ is the polygamma function of order 1 [25]. The mean of the log-speckle is not zero: E[s] = ψ(L) − log L, with ψ the digamma function. Averaging log-transformed intensities therefore requires a compensation for this bias ψ(L) − log L to obtain an unbiased estimator of the log-reflectivity.
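The bias and variance of the log-speckle can be verified numerically against the digamma and polygamma functions (a sketch assuming SciPy is available; for L = 1 the theoretical values are −γ ≈ −0.5772 and π²/6):

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(1)
L = 1
# Log-speckle samples: s = log S with S ~ Gamma(L, 1/L)
s = np.log(rng.gamma(shape=L, scale=1.0 / L, size=200_000))

mean_theory = digamma(L) - np.log(L)   # E[s] = psi(L) - log L
var_theory = polygamma(1, L)           # Var[s] = psi_1(L)

assert abs(s.mean() - mean_theory) < 0.02   # biased away from 0
assert abs(s.var() - var_theory) < 0.05     # stationary variance
```

Subtracting `mean_theory` from averaged log-intensities is exactly the debiasing step mentioned in the text.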

The focusing of a SAR image involves a series of processing steps that introduce a spatial correlation between neighboring pixels. Goodman's speckle model does not take these correlations into account, requiring an adaptation of the algorithms that rely on the fully-developed i.i.d. assumption [23, 18].

IV Self-supervised image restoration: from Gaussian denoising to speckle reduction

IV-A Self-supervised learning

In the supervised learning setting, pairs (x_k, y_k) of noiseless and noisy images are available for training. A common approach to estimate the parameters θ of an estimator f_θ is to minimize the ℓ2 loss function:

θ̂ = arg min_θ Σ_k ‖f_θ(y_k) − x_k‖²    (5)

where x_k is a random realization of the random vector x and y_k is a random realization under the conditional distribution p(y | x = x_k).

The self-supervised approach noise2noise introduced by Lehtinen et al. [1] considers only noisy pairs (y_k, y′_k), where y_k and y′_k are two independent realizations drawn under the same conditional distribution p(y | x = x_k). The authors suggest replacing the unknown realization x_k with the noisy observation y′_k, given that it is much easier to obtain additional noisy measurements of a static scene than very high quality measurements (i.e., virtually noise-free images):

θ̂ = arg min_θ Σ_k ‖f_θ(y_k) − y′_k‖²    (6)

The rationale behind this substitution comes from the conditional expectation of the expansions of the squared norms in equations (5) and (6):

E[‖f_θ(y) − x‖² | x] = E[‖f_θ(y)‖² | x] − 2 E[f_θ(y) | x]ᵀ x + ‖x‖²    (7)
E[‖f_θ(y) − y′‖² | x] = E[‖f_θ(y)‖² | x] − 2 E[f_θ(y) | x]ᵀ E[y′ | x] + E[‖y′‖² | x]    (8)

where the cross term in (8) factorizes because y and y′ are independent conditionally on x. Provided that the noise is centered, i.e. E[y′ | x] = x, the two expansions differ only by a term that is constant with respect to the parameters θ. Therefore, if the training set is large enough, parameters estimated with the self-supervised procedure (6) are equivalent to parameters estimated with the supervised procedure (5).
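The argument can be illustrated with a toy one-parameter problem: when estimating a constant under centered noise with the ℓ2 loss, the minimizer computed from noisy targets converges to the clean value, exactly as if clean targets had been used (a NumPy sketch; the values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
x = 3.0                                    # clean (unknown) value
y2 = x + rng.normal(0.0, 1.0, 50_000)      # noisy targets, centered noise: E[y'] = x

# For the l2 loss, the minimizer over a constant estimate theta of
# sum_k (theta - y2_k)^2 is the sample mean of the noisy targets.
theta_selfsup = y2.mean()

# With centered noise, this converges to the supervised minimizer theta = x:
assert abs(theta_selfsup - x) < 0.02
```

The log-speckle of section III is not centered, which is precisely why a bias compensation (or a likelihood-based loss) is needed in the SAR case.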

In practice, training sets are limited and it is therefore necessary to consider how fast the self-supervised estimator converges to the supervised estimator. Under non-Gaussian noise, other loss functions may be more efficient. This is in particular the case of the co-log-likelihood:

θ̂ = arg min_θ Σ_k − log p(y′_k | x = f_θ(y_k))    (9)

Among the M-estimators, i.e. methods that estimate parameters by minimizing a loss function over the training set, the maximum likelihood estimators are known to be efficient [26]. This is illustrated in the case of speckle in figure 1, where the root mean square estimation error of the log-intensity is reported for the ℓ2 and the log-likelihood loss functions. Since the log-speckle is not centered, a compensation is added to prevent a bias with the ℓ2 loss: ‖f_θ(y_k) − y′_k‖² is replaced by ‖f_θ(y_k) − y′_k + ψ(L) − log L‖². The minimizer of the log-likelihood loss converges more quickly to the true log-reflectivity, which indicates that it should be preferred as a loss function for self-supervised training of a despeckling network; this is confirmed in our experiments described in section V.

Figure 1: Evolution of the root mean square estimation error as a function of the number of samples.

IV-B Application to SAR image despeckling

When y and y′ are noisy log-intensity images, section III recalled that the conditional distribution p(y′ | x) is a Fisher-Tippett distribution. Under a simplifying assumption of statistical independence between pixels, the loss function in (9) takes the form:

L(f_θ(y), y′) = Σ_k f_θ(y)_k + e^(y′_k − f_θ(y)_k)    (10)

where the constant offset and the multiplicative factor L are dropped, since they are irrelevant in the minimization problem (9), and the sum involves all the pixel values f_θ(y)_k and y′_k of the image pair.
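A minimal sketch of this per-pixel loss (the function name `sar2sar_loss` is ours), together with a numerical check that its minimizer over a constant estimate lands near the true log-reflectivity, with no debiasing required:

```python
import numpy as np

def sar2sar_loss(x_hat, y2):
    """Negative Fisher-Tippett log-likelihood per pixel, with the constant
    offset and the factor L dropped: x_hat + exp(y2 - x_hat), averaged."""
    return np.mean(x_hat + np.exp(y2 - x_hat))

# Check with simulated single-look speckle: grid-search the constant estimate
# minimizing the loss and compare it with log(R).
rng = np.random.default_rng(3)
R = 4.0
y2 = np.log(R * rng.gamma(1.0, 1.0, 100_000))            # noisy log-intensities
candidates = np.linspace(0.0, 3.0, 301)
losses = [sar2sar_loss(np.full_like(y2, c), y2) for c in candidates]
best = candidates[int(np.argmin(losses))]
assert abs(best - np.log(R)) < 0.05                       # minimizer ~ log R
```

Unlike the debiased ℓ2 loss, no ψ(L) − log L compensation is needed here: the minimizer of the expected loss is directly the log-reflectivity.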

Beyond the adaptation of the loss function to the statistics of speckle, it is necessary to account for changes that occur between two SAR images of a scene. If an estimator is available to produce pre-estimations x̂ and x̂′ of the log-reflectivity images corresponding to the two speckle-corrupted log-intensities y and y′, changes in the second image can be partially compensated by forming the image y′ − x̂′ + x̂, which more closely resembles image y.
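The compensation step above can be sketched in the log domain as follows (the helper name is ours, and the toy check assumes ideal scalar pre-estimates, which real pre-estimations only approximate):

```python
import numpy as np

def compensate_changes(y2, x1_hat, x2_hat):
    """Remove the estimated reflectivity change (x2_hat - x1_hat) from the
    second noisy log-intensity, keeping its speckle realization intact."""
    return y2 - x2_hat + x1_hat

# Toy check with scalar reflectivities and perfect pre-estimates:
rng = np.random.default_rng(5)
R1, R2 = 2.0, 8.0                                  # a change occurred between dates
y2 = np.log(R2 * rng.gamma(1.0, 1.0, 100_000))     # date-2 noisy log-intensity
y2_comp = compensate_changes(y2, np.log(R1), np.log(R2))

# The compensated image behaves like a noisy acquisition of the date-1 scene:
assert abs(np.exp(y2_comp).mean() - R1) < 0.05
```

The key property is that only the deterministic scene content is shifted; the independent speckle realization of date 2 is preserved, as required by the noise2noise training scheme.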

The proposed SAR restoration method, named SAR2SAR since it extends the original ideas of noise2noise, considers several SAR time series, each accurately co-registered, and trains a despeckling network using both the self-supervised loss and change compensation. Figure 2 summarizes the principle of the method: the restoration is performed in the log-domain by a deep network. Since the change compensation requires pre-estimated reflectivities, the training of the network is performed in 3 steps: (A) first on images with synthetically generated speckle, in the self-supervised fashion of equation (10); (B) then on pairs of images extracted randomly from a time series, the second image being compensated for changes based on reflectivities estimated with the network trained in (A); (C) finally, a refinement step in which the network weights from (B) are used to obtain a better compensation for changes.

Figure 2: The proposed despeckling algorithm SAR2SAR is based on a U-Net trained in a residual learning fashion to restore log-transformed SAR images. The training is done in 3 steps: (A) on images with simulated speckle, (B) on pairs of actual SAR images, with changes compensated for based on the network trained in (A), (C) a final refinement of the network is performed with a better compensation for changes.

V Experiments

In our set of experiments, our model is the U-Net [20] described in [1], trained in a residual fashion [27]. Images are fed to the network after a log transform. The network thus predicts the noise, which is subtracted from the input image to obtain the despeckled image.
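The residual wiring described above can be sketched as follows (`noise_net` stands in for the trained U-Net; any callable on log-images works for the purpose of the sketch):

```python
import numpy as np

def despeckle(intensity, noise_net):
    """Residual despeckling in the log domain: the network predicts the
    log-speckle, which is subtracted from the log-transformed input."""
    y = np.log(intensity)        # homomorphic (log) transform
    x_hat = y - noise_net(y)     # residual learning: output = input - predicted noise
    return np.exp(x_hat)         # back to the intensity domain

# With a network predicting zero noise, the input is returned unchanged:
img = np.array([[1.0, 2.0], [4.0, 8.0]])
out = despeckle(img, lambda y: np.zeros_like(y))
assert np.allclose(out, img)
```

Working in the log domain turns the multiplicative speckle into an additive residual, which is what makes the DnCNN-style residual formulation applicable.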

V-A Synthetic speckle noise

One of the main issues when using deep learning algorithms on SAR images is the scarcity of training data. To achieve the desired level of generalization, and given that the application of the presented algorithm to time series needs an accurate adaptation, training is initially carried out on images corrupted with synthetic speckle noise. At each iteration, two independent speckle realizations (following the model described in section III) are used to create two noisy images, one being the input image and the other being used to compute the loss. The images are divided into patches extracted with a stride of 32; 3014 batches of 4 images compose our training set. The network has been trained for 30 epochs using the Adam optimizer, with a learning rate set to 0.001 and decreased by a factor of 10 after the first 10 epochs. The loss function of our SAR2SAR method is adapted to the distribution of SAR images using equation (10).
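The learning-rate schedule described above amounts to a simple step function (a sketch; the function name is ours, and only the values stated in the text are encoded):

```python
def learning_rate(epoch, base_lr=1e-3):
    """Schedule for the synthetic-speckle training stage: 0.001 for the
    first 10 epochs, then divided by 10 for the remaining 20 epochs."""
    return base_lr if epoch < 10 else base_lr / 10.0

assert learning_rate(0) == 1e-3
assert abs(learning_rate(10) - 1e-4) < 1e-12
assert abs(learning_rate(29) - 1e-4) < 1e-12
```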

Creating synthetic images from noise-free references, moreover, allows a more reliable evaluation. Results of several despeckling filters are presented in table I. The PSNR values obtained by SAR2SAR are not only comparable to those obtained with SAR-CNN [our paper], but are superior to those obtained when the ℓ2 loss is adopted (with the proper debiasing step, as discussed in section IV-A), justifying the adaptation of the loss. Results on image Lely are displayed in figure 3. While the use of SAR2SAR is motivated by its direct application to real SAR images, it achieves state-of-the-art results even on images corrupted with synthetic speckle noise.

Images      | Noisy          | SAR-BM3D       | NL-SAR         | MuLoG+BM3D     | SAR-CNN        | noise2noise    | SAR2SARA
Marais 1    | 10.05 ± 0.0141 | 23.56 ± 0.1335 | 21.71 ± 0.1258 | 23.46 ± 0.0794 | 24.65 ± 0.0860 | 25.31 ± 0.1077 | 25.73 ± 0.1251
Limagne     | 10.87 ± 0.0469 | 21.47 ± 0.3087 | 20.25 ± 0.1958 | 21.47 ± 0.2177 | 22.65 ± 0.2914 | 24.08 ± 0.1099 | 24.45 ± 0.1190
lely        | 15.57 ± 0.1342 | 21.49 ± 0.3679 | 20.40 ± 0.2696 | 21.67 ± 0.2445 | 23.47 ± 0.2276 | 23.50 ± 0.3852 | 23.60 ± 0.4368
Lely        | 11.45 ± 0.0048 | 21.66 ± 0.4452 | 20.54 ± 0.3303 | 22.25 ± 0.4365 | 23.79 ± 0.4908 | 23.25 ± 0.3671 | 23.67 ± 0.5415
Rambouillet |  8.81 ± 0.0693 | 23.78 ± 0.1977 | 22.28 ± 0.1132 | 23.88 ± 0.1694 | 24.73 ± 0.0798 | 23.73 ± 0.3882 | 24.16 ± 0.3846
Risoul      | 17.59 ± 0.0361 | 29.98 ± 0.2638 | 28.69 ± 0.2011 | 30.99 ± 0.3760 | 31.69 ± 0.2830 | 29.93 ± 0.2164 | 30.68 ± 0.2302
Marais 2    |  9.70 ± 0.0927 | 20.31 ± 0.7833 | 20.07 ± 0.7553 | 21.59 ± 0.7573 | 23.36 ± 0.8068 | 26.15 ± 0.2114 | 26.63 ± 0.2154
Average     | 12.00          | 23.17          | 21.99          | 23.62          | 24.91          | 25.13          | 25.56
Table I: Comparison of denoising quality in terms of PSNR on amplitude images. For each ground-truth image, 20 noisy instances are generated; confidence intervals are reported after each PSNR value (±). Per-method averages are given at the bottom.
Figure 3: Results of state-of-the-art despeckling filters on an image corrupted with synthetic 1-look uncorrelated speckle. Panels: (a) Ground truth, (b) Noisy, (c) SAR-BM3D, (d) MuLoG+BM3D, (e) SAR-CNN, (f) noise2noise, (g) SAR2SARA.

V-B Real SAR images

To fine-tune the network on real images, the SAR time series composing the training set need to be denoised, to generate the images used to compensate for changes. An estimation can be obtained by using the network trained on synthetic speckle, subsampling the images to reduce the effect of the correlation [18]. Training proceeds for 20 more epochs with a learning rate decreased by a factor of 100 w.r.t. the initial value. Five time series of 53 (Limagne), 45 (Marais 1), 45 (Marais 2), 69 (Rambouillet) and 25 (Lely) dates compose the training set; 2896 image patches are organized into 724 batches of 4 patches each. Learning is thereby transferred to correlated speckle.
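The pre-estimation step can be sketched as follows (a simplification: `despeckle_net` stands in for the network trained on synthetic speckle, and one-out-of-two subsampling is used as the decorrelation strategy):

```python
import numpy as np

def pre_estimate(log_intensity, despeckle_net, step=2):
    """Pre-estimate log-reflectivities for change compensation by applying
    a network trained on uncorrelated speckle to a subsampled image.
    Subsampling reduces the spatial correlation of real speckle, at the
    cost of spatial resolution."""
    sub = log_intensity[::step, ::step]   # keep 1 pixel out of `step` per axis
    return despeckle_net(sub)

# Shape check with an identity "network":
y = np.zeros((100, 64))
x_hat = pre_estimate(y, lambda im: im)
assert x_hat.shape == (50, 32)
```

The resolution loss introduced here is exactly what the iterative refinement of the next paragraph tries to recover.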

As the compensation images used at this step required a subsampling operation, they have a poor resolution that impacts the results produced by SAR2SAR. To overcome this issue, an iterative process has been studied: every 10 epochs, the compensation images given to the network are updated with the results of SAR2SAR itself. Given that, asymptotically, the function learned by the network tends to the identity, we found experimentally that one iteration is a good compromise between speckle reduction and improvement of the resolution (see table II). Results are shown in figure 4.

restoration method   | SAR2SARA | SAR2SARB | SAR2SARC, iter 1 | iter 2 | iter 3 | iter 4 | iter 5
Wasserstein distance | 0.147    | 0.063    | 0.086            | 0.109  | 0.128  | 0.144  | 0.158
Table II: At each step of the iterative process, the Wasserstein distance between the theoretical and the empirical distribution is computed.
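The criterion of Table II can be reproduced in spirit with `scipy.stats.wasserstein_distance`: a sketch comparing samples of the theoretical Fisher-Tippett log-speckle with a stand-in for the noise removed by the network (here simply a fresh synthetic draw, so the distance should be close to zero):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(4)
L = 1
# Theoretical log-speckle (Fisher-Tippett) samples: log of Gamma(L, 1/L)
theory = np.log(rng.gamma(L, 1.0 / L, 50_000))
# Stand-in for the empirical removed-noise samples of the real experiment
removed = np.log(rng.gamma(L, 1.0 / L, 50_000))

d = wasserstein_distance(theory, removed)
assert d < 0.05   # matching distributions -> distance close to 0
```

In the actual evaluation, `removed` would be the residual y − x̂ extracted from a restored real image; a small distance indicates that the network removed noise with the expected speckle statistics.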
Figure 4: Results of state-of-the-art despeckling filters on a real Sentinel-1 SAR image. Panels: (a) Real SAR, (b) SAR-BM3D, (c) MuLoG+BM3D, (d) SAR-CNN, (e) SAR2SARB, (f) SAR2SARC. SAR2SARB refers to the first fine-tuning step, when images with a poorer resolution are used to compensate for changes.

VI Discussion

Single-look SAR images are difficult to denoise, in particular because of the strong spatial correlation of speckle. The proposed SAR2SAR algorithm learns the statistics of real speckle noise directly from the data, through a self-supervised scheme leveraging deep learning and multi-temporal stacks of images.

While providing state-of-the-art results on images with synthetic speckle noise, it is on real single-look images that SAR2SAR shows a clear improvement over existing despeckling algorithms. The methods developed under fully developed speckle model assumptions indeed need a careful adaptation to properly deal with correlated data [18]. If a subsampling step is applied, images with a poorer resolution are produced; a more careful pre-processing, on the other hand, requires knowledge of the sensor's parameters and adds a computational burden. SAR2SAR learns the speckle model directly from the data, making it readily applicable to real images. Moreover, once trained, deep learning algorithms are computationally fast.

The use of change-compensation information is the key step that allows the exploitation of SAR stacks. In our experiments, training in a scenario where temporal stability was simply assumed did not lead to results as good: given an image at a certain date as input, the closest image in time was chosen as target. Even in this condition, the no-change hypothesis did not always hold, giving mixed results.

The general formulation of SAR2SAR suggests that it can be extended to Ground Range Detected (GRD) images and to any sensor (e.g. TerraSAR-X), once the training data are collected. The weights of the trained model are released along with this article (https://github.com/emanueledalsasso/SAR2SAR), to allow testing of our method and to foster research on SAR image denoising.

References

  • [1] J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila, “Noise2noise: Learning image restoration without clean data,” arXiv preprint arXiv:1803.04189, 2018.
  • [2] A. Moreira, P. Prats-Iraola, M. Younis, G. Krieger, I. Hajnsek, and K. P. Papathanassiou, “A tutorial on synthetic aperture radar,” IEEE Geoscience and Remote Sensing Magazine, vol. 1, no. 1, pp. 6–43, 2013.
  • [3] C.-A. Deledalle, L. Denis, and F. Tupin, “Iterative weighted maximum likelihood denoising with probabilistic patch-based weights,” IEEE Transactions on Image Processing, vol. 18, no. 12, p. 2661, 2009.
  • [4] ——, “NL-InSAR: Nonlocal interferogram estimation,” IEEE TGRS, vol. 49, no. 4, pp. 1441–1452, 2010.
  • [5] C.-A. Deledalle, F. Tupin, and L. Denis, “Polarimetric sar estimation based on non-local means,” in 2010 IEEE IGARSS.   IEEE, 2010, pp. 2515–2518.
  • [6] S. Parrilli, M. Poderico, C. V. Angelino, and L. Verdoliva, “A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage,” IEEE TGRS, vol. 50, no. 2, pp. 606–616, 2012.
  • [7] C.-A. Deledalle, L. Denis, F. Tupin, A. Reigber, and M. Jäger, “NL-SAR: A unified nonlocal framework for resolution-preserving (Pol)(In)SAR denoising,” IEEE TGRS, vol. 53, no. 4, pp. 2021–2038, 2014.
  • [8] C.-A. Deledalle, L. Denis, S. Tabti, and F. Tupin, “MuLoG, or how to apply Gaussian denoisers to multi-channel SAR speckle reduction?” IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4389–4403, 2017.
  • [9] X. Yang, L. Denis, F. Tupin, and W. Yang, “SAR image despeckling using pre-trained convolutional neural network models,” in 2019 JURSE.   IEEE, 2019, pp. 1–4.
  • [10] G. Chierchia, D. Cozzolino, G. Poggi, and L. Verdoliva, “SAR image despeckling through convolutional neural networks,” in IGARSS, 2017.   IEEE, 2017, pp. 5438–5441.
  • [11] P. Wang, H. Zhang, and V. M. Patel, “SAR image despeckling using a convolutional neural network,” IEEE Signal Processing Letters, vol. 24, no. 12, pp. 1763–1767, 2017.
  • [12] ——, “Generative adversarial network-based restoration of speckled sar images,” in 2017 IEEE 7th International Workshop on CAMSAP.   IEEE, 2017, pp. 1–5.
  • [13] Q. Zhang, Q. Yuan, J. Li, Z. Yang, and X. Ma, “Learning a dilated residual network for sar image despeckling,” Remote Sensing, vol. 10, no. 2, p. 196, 2018.
  • [14] F. Lattari, B. Gonzalez Leon, F. Asaro, A. Rucci, C. Prati, and M. Matteucci, “Deep learning for sar image despeckling,” Remote Sensing, vol. 11, no. 13, p. 1532, 2019.
  • [15] A. B. Molini, D. Valsesia, G. Fracastoro, and E. Magli, “Towards deep unsupervised sar despeckling with blind-spot convolutional neural networks,” arXiv preprint arXiv:2001.05264, 2020.
  • [16] J. W. Goodman, “Some fundamental properties of speckle,” JOSA, vol. 66, no. 11, pp. 1145–1150, 1976.
  • [17] R. Abergel, S. Ladjal, F. Tupin, and J.-M. Nicolas, “A complex spectrum based SAR image resampling method with restricted target sidelobes and statistics preservation,” in 2017 IGARSS.   IEEE, 2017, pp. 5358–5361.
  • [18] E. Dalsasso, L. Denis, and F. Tupin, “How to handle spatial correlations in SAR despeckling? Resampling strategies and deep learning approaches,” hal-02538046, 2020.
  • [19] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, 2017.
  • [20] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention.   Springer, 2015, pp. 234–241.
  • [21] F. Argenti, A. Lapini, T. Bianchi, and L. Alparone, “A tutorial on speckle reduction in Synthetic Aperture Radar images,” IEEE Geoscience and remote sensing magazine, vol. 1, no. 3, pp. 6–35, 2013.
  • [22] R. Abergel, L. Denis, F. Tupin, S. Ladjal, C.-A. Deledalle, and A. Almansa, “Resolution-Preserving Speckle Reduction of SAR Images: the Benefits of Speckle Decorrelation and Targets Extraction,” in IGARSS 2019.   IEEE, 2019, pp. 608–611.
  • [23] A. Lapini, T. Bianchi, F. Argenti, and L. Alparone, “Blind speckle decorrelation for SAR image despeckling,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 2, pp. 1044–1058, 2013.
  • [24] A. Boulch, P. Trouvé, E. Koeniguer, F. Janez, and B. Le Saux, “Learning speckle suppression in sar images without ground truth: application to sentinel-1 time-series,” in IGARSS 2018.   IEEE, 2018, pp. 2366–2369.
  • [25] M. Abramowitz and I. A. Stegun, Handbook of mathematical functions: with formulas, graphs, and mathematical tables.   Courier Corporation, 1965, vol. 55.
  • [26] H. L. Van Trees, Detection, estimation, and modulation theory, part I: detection, estimation, and linear modulation theory.   John Wiley & Sons, 2004.
  • [27] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.