Synthetic Aperture Radar Image Formation with Uncertainty Quantification

07/13/2020 ∙ by Victor Churchill, et al. ∙ Dartmouth College

Synthetic aperture radar (SAR) is an all-weather, day-or-night imaging modality that is an important tool in remote sensing. Most existing SAR image formation methods result in a maximum a posteriori image which approximates the reflectivity of an unknown ground scene. This single image provides no quantification of the certainty with which the features in the estimate should be trusted. In addition, finding the mode is generally not the best way to interrogate a posterior. This paper addresses these issues by introducing a sampling framework to SAR image formation. A hierarchical Bayesian model is constructed using conjugate priors that directly incorporate coherent imaging and the problematic speckle phenomenon which is known to degrade image quality. Samples of the resulting posterior as well as parameters governing speckle and noise are obtained using a Gibbs sampler. These samples may then be used to compute estimates, and also to derive other statistics like variance which aid in uncertainty quantification. The latter information is particularly important in SAR, where ground truth images even for synthetically-created examples are typically unknown. An example result using real-world data shows that the sampling-based approach to SAR image formation introduced here provides parameter-free estimates with improved contrast and significantly reduced speckle, as well as unprecedented uncertainty quantification information.


I Introduction

Synthetic aperture radar (SAR) is a widely-used imaging technology for surveillance and mapping. Because SAR is capable of all-weather day-or-night imaging, it overcomes several challenges faced by optical imaging technologies and is an important tool in modern remote sensing, [jakowatz2012spotlight]. Applications for SAR include areal mapping and analysis of ground scenes in environmental monitoring, remote mapping, and military surveillance, [andersson2012fast]. To collect SAR data, an airborne sensor traverses a flight path, periodically transmitting an interrogating waveform toward an illuminated region of interest. The emitted energy pulses impinge on targets in the illuminated region that scatter electromagnetic energy back to the sensor. The sensor measures and processes the reflected signal. The demodulated data, called a phase history, is passed on to an image formation processor. This paper concerns image formation, which produces a reconstruction of the two-dimensional electromagnetic reflectivity function of the illuminated ground scene from SAR phase history data. For an overview of SAR and basic image formation techniques including back projection, see e.g. [jakowatz2012spotlight, gorham2010sar].

Several issues pose challenges to forming artifact- and noise-free SAR images. First, large images and data prohibit use of traditional matrix-based methods for linear inverse problems, as even storing dense matrices of this size is problematic. In addition, SAR is a coherent imaging system, and is therefore affected by speckle, a multiplicative-noise-like phenomenon which causes grainy-looking images, [argenti2013tutorial]. The result of most existing methods is a single image, typically a maximum a posteriori (MAP) point estimate, that approximates the unknown ground truth. There are several issues with MAP estimates. First, the maximum is not categorically representative of the posterior – in general sampling is a better way to interrogate a density. Second, MAP estimates rely on user-defined parameters, but estimating these quantities would impose less bias. Finally, the MAP prediction is not probabilistic, and therefore provides no estimate of the statistical confidence with which we can trust the features in the image. In a Bayesian formulation, an entire joint density estimates the unknown quantities. We show that this more complete information is particularly important in SAR, where even in synthetically-created examples the ground truth reflectivity is unknown.

In this paper we therefore develop a parameter-free sampling-based SAR image formation framework with uncertainty quantification. A hierarchical Bayesian network structure, [bardsley2012mcmc], allows our model to directly incorporate coherent imaging and the problematic speckle phenomenon into the prior. A joint posterior density estimate is computed for the image, the noise parameter, and the speckle parameter. By encouraging sparsity in the magnitude of the image we are able to reduce speckle and increase contrast. It is important to note that all parameters in the model are prescribed, requiring no user input. Conjugate priors are used so that the resulting posterior can be efficiently sampled using a Gibbs sampler. The samples obtained from the posterior density can be used in a variety of ways, including to compute estimates of the unknown reflectivity image as well as the noise parameter and, crucially, the speckle parameter, which typically requires post-processing to quantify, [argenti2013tutorial]. In addition, samples provide variance and other statistics to aid in uncertainty quantification.

The rest of this paper is organized as follows. Section II describes the hierarchical Bayesian model which directly incorporates coherent imaging, speckle, and sparsity using conjugate priors. Section III outlines an efficient sampling method for the resulting posterior. Section IV shows an example. The results suggest that the proposed method provides estimates with significantly better contrast and drastically reduced speckle compared with other methods, in addition to the unprecedented uncertainty quantification. Finally, Section V gives a summary of the paper and directions for future research.

II Hierarchical Bayesian Model for SAR

We now formulate a hierarchical Bayesian model and joint posterior.

II-A Discrete model

The measured SAR phase history data can be modeled as a continuous non-uniform Fourier transform of the reflectivity function, [sanders2017composite]. Here we use the discretized model

y = F x + e,   (1)

where y ∈ C^m is the vertically-concatenated phase history data, F ∈ C^{m×n} is a two-dimensional discrete non-uniform Fourier transform matrix, and the vector x ∈ C^n is the vertically-concatenated unknown reflectivity image. Finally, e represents model and measurement error. Note that by using the discrete Fourier transform in (1) we introduce both aliasing error and the Gibbs phenomenon.

The objective is to infer x from y. Observe that while SAR images are complex-valued, usually only the magnitude is viewed while the phase is omitted. However, recovering the phase is also important, [moore2018characterization], and should not be neglected. It is also common to modify (1) to include autofocusing for the purpose of phase error reduction, [scarnati2018joint]. While such modifications are not a primary concern in this investigation, they can be incorporated into the proposed method in a straightforward manner.

Throughout this paper we assume e is complex circularly-symmetric white Gaussian noise. That is, e_i ~ CN(0, β^{-1}) i.i.d. for all i = 1, …, m, where β^{-1} is the noise variance. This yields the Gaussian likelihood function

p(y | x, β) = (β/π)^m exp(−β ||Fx − y||²),   (2)

which measures the goodness of fit of the model (1), where ||z||² = z^H z with z^H the conjugate transpose of z.
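To make the discrete model and likelihood concrete, the sketch below builds a small explicit non-uniform DFT matrix and evaluates log p(y | x, β) up to a constant. It is only an illustration: the grid size, sample frequencies, and noise level are arbitrary choices of ours, and a full-scale implementation would apply F with an NUFFT rather than a dense matrix.

import numpy as np

rng = np.random.default_rng(0)

# Small illustrative sizes; a real SAR problem is far larger.
n_side = 16                      # image is n_side x n_side
n = n_side * n_side              # number of unknown reflectivities
m = 2 * n                        # number of phase history samples

# Non-uniform 2D spatial frequencies (stand-ins for the SAR collection geometry).
freqs = rng.uniform(-0.5, 0.5, size=(m, 2))

# Pixel coordinates of the vectorized image.
ii, jj = np.meshgrid(np.arange(n_side), np.arange(n_side), indexing="ij")
coords = np.stack([ii.ravel(), jj.ravel()], axis=1)

# Dense two-dimensional non-uniform DFT matrix F (for illustration only).
F = np.exp(-2j * np.pi * freqs @ coords.T) / np.sqrt(n)

# Simulate data y = F x + e with noise precision beta.
x_true = rng.normal(size=n) + 1j * rng.normal(size=n)
beta = 50.0
e = (rng.normal(size=m) + 1j * rng.normal(size=m)) / np.sqrt(2 * beta)
y = F @ x_true + e

def log_likelihood(x, beta):
    """Complex Gaussian log-likelihood log p(y | x, beta), up to an additive constant."""
    r = F @ x - y
    return m * np.log(beta) - beta * np.vdot(r, r).real

print(log_likelihood(x_true, beta))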

II-B Hierarchical prior

With the likelihood given by (2), the next step in computing the posterior is to specify a prior for the latent variable x. We use the fact that SAR images are affected by the speckle phenomenon to construct this prior. Speckle, which occurs in all coherent imaging and is often misidentified and mischaracterized as noise, causes a complicated granular pattern of bright and dark spots throughout an image, [jakowatz2012spotlight]. Although speckle is in fact signal and not noise, it nonetheless degrades the image quality by lowering the contrast, and hence it is desirable to remove it. While speckle reduction is often tackled using denoising techniques, here we instead directly incorporate the speckle into the image formation model, so that it is properly characterized as part of the signal. Specifically, we employ the fully-developed speckle model, which is based on the assumption that the spatial resolution dimension is considerably larger than the wavelength and that the illuminated surface is sufficiently rough, [dong2014sar]. Details of this popular model are described below.

Assume Re(x_j) and Im(x_j) are each i.i.d. zero-mean Gaussian with variance 1/(2α_j), that is, Re(x_j), Im(x_j) ~ N(0, 1/(2α_j)). By independence, x_j ~ CN(0, α_j^{-1}). This is conveniently encoded by x ~ CN(0, D_α^{-1}) with D_α = diag(α), which means that x is circularly-symmetric complex Gaussian with density

p(x | α) = (1/π^n) (∏_{j=1}^n α_j) exp(−||√α ∘ x||²),   (3)

where ∘ is elementwise multiplication. Thus we see that the prior on the magnitude |x_j| is a Rayleigh probability distribution with mean proportional to α_j^{-1/2}. This is the standard specification for fully-developed speckle. Because the change in the magnitude of each pixel is proportional to α_j^{-1/2}, the speckle phenomenon has also been modeled as a multiplicative noise, [dong2014sar]. While efforts to reduce speckle abound, [argenti2013tutorial], here we address the speckle directly by including it in our model with the prior (3), and later estimating the associated speckle parameter through sampling rather than post-image-formation techniques. Note that by parametrizing the prior with a separate α_j for each pixel we are actually doubling the number of parameters in this model, which clearly provides a computational challenge (but not a methodological one). We will return to this issue later.
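As a quick numerical check of this specification, the following sketch draws pixels from the circularly-symmetric complex Gaussian prior and verifies that their magnitudes are Rayleigh distributed with mean proportional to α^{-1/2} (equal to √(π/(4α)) under the unit convention used in the sketch); the particular values of α are arbitrary.

import numpy as np

rng = np.random.default_rng(1)

# Two illustrative speckle parameters; larger alpha means a dimmer pixel on average.
for alpha in (0.5, 8.0):
    # x_j ~ CN(0, 1/alpha): real and imaginary parts are each N(0, 1/(2*alpha)).
    x = (rng.normal(size=100_000) + 1j * rng.normal(size=100_000)) / np.sqrt(2 * alpha)
    mag = np.abs(x)
    # The Rayleigh mean with scale 1/sqrt(2*alpha) is sqrt(pi / (4*alpha)).
    print(alpha, mag.mean(), np.sqrt(np.pi / (4 * alpha)))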

As mentioned previously, using the MAP estimate as the solution is limiting – first because it may not be representative of the posterior and second because it provides no information about the statistical confidence of the estimate of each recovered pixel value, or in any other recovered features of the image, [nagy2002image]. Finally, the regularization parameters for both the cost function and prior in the MAP estimate approach (analogous to α and β here) are user-specified. However, they are truly unknown and therefore should be inferred. For these reasons we seek the joint posterior p(x, α, β | y). More importantly, we will be estimating an entire density for the speckle parameter α, which will lend clarity when determining whether or not the speckle reduction techniques are actually working (without a reference ground truth image, speckle statistics are typically only estimated from small regions of already formed images, [argenti2013tutorial]). In order to calculate this posterior, we must define priors on α and β. To this end, we first invoke a conjugate Gamma prior for β. That is, β ~ Γ(c, d) with probability density function

p(β) = (d^c / Γ(c)) β^{c−1} exp(−d β).   (4)

Similarly, a conjugate Gamma prior is invoked on each element of α, i.e. α_j ~ Γ(a, b). By independence, p(α) = ∏_{j=1}^n p(α_j) with

p(α_j) = (b^a / Γ(a)) α_j^{a−1} exp(−b α_j).   (5)

Note the dependence of (4) and (5) on the parameters a, b, c, and d, which as in [bardsley2012mcmc, tipping2001sparse] are chosen rather than inferred. In [bardsley2012mcmc], analogous parameters in a real-valued model are chosen to reflect the uncertainty in the latent variable, making the prior uninformative. On the other hand, in [tipping2001sparse] a = b = c = d = 0, resulting in an improper prior p(α_j) ∝ α_j^{-1}, whose induced marginal prior on the image is sharply peaked at zero and hence encourages sparsity (to ensure numerical robustness in our implementation, we choose these parameters to be machine precision rather than 0 in Algorithm 1). Sparsity-promoting image formation methods for SAR are popular, see e.g. [scarnati2018joint, dong2014sar, sanders2017composite, archibald2016image, ccetin2001feature].
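The sparsity-encouraging effect of near-zero hyperparameters can be visualized by drawing reflectivities from the hierarchical prior, i.e. α_j ~ Γ(a, b) followed by x_j ~ CN(0, α_j^{-1}). The sketch below is only illustrative: the shape and rate values are ours (chosen small but not at machine precision, to keep the demonstration numerically tame), and it is not part of the image formation algorithm.

import numpy as np

rng = np.random.default_rng(2)
N = 200_000

def marginal_magnitudes(a, b):
    """Draw |x_j| from the hierarchical prior: alpha_j ~ Gamma(a, rate=b), then x_j ~ CN(0, 1/alpha_j)."""
    alpha = rng.gamma(shape=a, scale=1.0 / b, size=N)
    alpha = np.maximum(alpha, 1e-300)          # guard against underflow to exactly zero
    x = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2 * alpha)
    return np.abs(x)

# Small shape/rate (approaching the improper a = b = 0 limit) versus a broad reference prior.
near_improper = marginal_magnitudes(a=1e-2, b=1e-2)
reference = marginal_magnitudes(a=2.0, b=2.0)

# A small median together with a huge upper quantile indicates a prior that is
# peaked at zero with heavy tails, i.e. sparsity-encouraging.
for name, mag in (("near-improper", near_improper), ("reference", reference)):
    print(name, np.median(mag), np.quantile(mag, 0.99))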

For purposes of comparison, then, in the figures that follow we also show images formed using two popular sparsity-encouraging image formation methods: (1) ℓ1 regularization, where sparsity is promoted in the image itself; and (2) total variation (TV) regularization, where sparsity is promoted in the gradients of the image. Both methods have been extensively applied in SAR (see e.g. [scarnati2018joint, dong2014sar, sanders2017composite, archibald2016image, ccetin2001feature]). Importantly, choosing a, b, c, and d as above removes any need for user-defined parameters in this model.

II-C Posterior computation

The form of the posterior is achieved through the hierarchical Bayesian model described above, where the model parameters x and β are given priors with prior parameters α, c, and d, referred to as hyperparameters. Moving up to the final level of hierarchy in this model, the hyperparameter α is given a prior (called a hyperprior) with hyperhyperparameters a and b. By Bayes’ theorem, the joint posterior for x, α, and β is

p(x, α, β | y) ∝ p(y | x, β) p(x | α) p(β) p(α).   (6)

The image formation algorithm in Section III will require the individual posteriors for each latent variable. Because of the conjugate priors used, these can be found analytically. We have

x | y, α, β ~ CN(μ, Σ),  with  Σ = (β F^H F + D_α)^{-1}  and  μ = β Σ F^H y,   (7a)
β | y, x ~ Γ(c + m, d + ||Fx − y||²),   (7b)
α_j | x_j ~ Γ(a + 1, b + |x_j|²),  j = 1, …, n.   (7c)

Therefore each latent variable can be sampled from

x ~ CN(μ, Σ),   (8a)
β ~ Γ(c + m, d + ||Fx − y||²),   (8b)
α_j ~ Γ(a + 1, b + |x_j|²),   (8c)

with D_α = diag(α).
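For the scalar conditionals, the conjugate updates (8b) and (8c) amount to Gamma draws whose parameters are updated by the current residual and pixel magnitudes. A minimal sketch under the notation above (NumPy's gamma sampler takes a scale, i.e. the reciprocal of the rate; the function names are ours):

import numpy as np

rng = np.random.default_rng(3)

def draw_beta(x, F, y, c, d):
    """Sample beta | y, x ~ Gamma(c + m, rate = d + ||Fx - y||^2), as in (8b)."""
    r = F @ x - y
    return rng.gamma(c + y.size, 1.0 / (d + np.vdot(r, r).real))

def draw_alpha(x, a, b):
    """Sample alpha_j | x_j ~ Gamma(a + 1, rate = b + |x_j|^2) elementwise, as in (8c)."""
    return rng.gamma(a + 1.0, 1.0 / (b + np.abs(x) ** 2))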

III Sampling-based SAR Image Formation

Following the approach in [bardsley2012mcmc], we now set up a sampling-based image formation procedure to obtain approximate samples from (6). Since the density is not described by a known family of probability distributions, it cannot be directly sampled. However, because of the conjugate prior structure, we can easily apply a Gibbs sampler, [geman1984stochastic], which obtains approximate samples from the joint posterior (6) by sequentially sampling the individual posteriors for each variable given in (8). As with other Markov chain Monte Carlo (MCMC) methods, Gibbs sampling creates a Markov chain of samples, each of which is correlated with the other samples.

In terms of efficiency, an issue occurs in sampling the individual posterior for x given by (8a), where we must solve the following linear system determined by (7a) for x:

(β F^H F + D_α) x = β F^H y + √β F^H η + D_α^{1/2} ν,   (9)

with η ~ CN(0, I_m) and ν ~ CN(0, I_n), [bardsley2012mcmc]. Even storing the dense matrices F and F^H F in real-world problems is not practical. However, because F is a non-uniform discrete Fourier transform matrix, we can utilize existing libraries to quickly apply a non-uniform fast Fourier transform (NUFFT), [fessler2003nonuniform]. The NUFFT is performed by interpolating non-uniform Fourier mode quantities to a uniform grid so that the uniform FFT can be used, [fessler2003nonuniform]. This is not without error of course, which mainly comes from the error accumulated when “gridding” non-uniform to uniform Fourier modes. Further work will be needed to meaningfully quantify this error for this application. For the current investigation, in order to apply F^H F efficiently, we treat F as a unitary operation, that is, F^H F ≈ I. This means that the left-hand-side operation of (9) can be approximately diagonalized as

β F^H F + D_α ≈ β I + D_α,   (10)

which can now be efficiently inverted using simple elementwise division. Clearly using the right hand side in (10) introduces additional error, along with that from modifying the non-uniform modes in order to make them conform with a uniform grid, oscillations due to the Gibbs phenomenon, and model and measurement error. A potentially more accurate method would be to use elementwise division by β I + D_α as a preconditioner in a conjugate gradient scheme; however, this would be far less efficient. By combining (7), (8), and (10) we arrive at Algorithm 1, which produces samples for x, α, and β, each of which is approximately drawn from the joint posterior. Notice that each sample requires two NUFFT applications.

  Initialize x^(0), α^(0), β^(0);
  for k = 1 to K do
     Draw η ~ CN(0, I_m) and ν ~ CN(0, I_n);
     Compute x^(k) = (β^(k−1) I + diag(α^(k−1)))^{-1} (β^(k−1) F^H y + √(β^(k−1)) F^H η + diag(α^(k−1))^{1/2} ν);
     Compute β^(k) ~ Γ(c + m, d + ||F x^(k) − y||²);
     Compute α_j^(k) ~ Γ(a + 1, b + |x_j^(k)|²), j = 1, …, n;
  end for
Algorithm 1 Sampling from p(x, α, β | y)
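Below is a minimal sketch of Algorithm 1 under the diagonal approximation (10). For simplicity it uses a plain uniform FFT in place of the NUFFT (so that F^H F = I holds exactly), sets the hyperparameters to machine precision as in the text, and uses function and variable names of our own choosing; a practical implementation would substitute NUFFT forward and adjoint operators for the actual phase history geometry.

import numpy as np

def gibbs_sar(y, K=2000, eps=np.finfo(float).eps, seed=0):
    """Approximate Gibbs sampler for p(x, alpha, beta | y), with F taken as a unitary FFT.

    y : complex data on a uniform n1 x n2 frequency grid (a stand-in for SAR phase history).
    Returns lists of samples of the image x, the speckle parameter alpha, and the noise parameter beta.
    """
    rng = np.random.default_rng(seed)
    n1, n2 = y.shape
    m = n = n1 * n2
    a = b = c = d = eps                              # machine-precision hyperparameters

    F = lambda x: np.fft.fft2(x, norm="ortho")       # forward operator (unitary FFT stand-in for the NUFFT)
    FH = lambda z: np.fft.ifft2(z, norm="ortho")     # adjoint operator

    def cnormal(shape):                              # standard circularly-symmetric complex normal
        return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

    FHy = FH(y)                                      # precompute F^H y
    x = FHy.copy()                                   # initialize with the inverse FFT image
    alpha = np.ones_like(y, dtype=float)
    beta = 1.0
    xs, alphas, betas = [], [], []

    for _ in range(K):
        # (8a) with the diagonal approximation (10):
        # (beta*I + D_alpha) x = beta*F^H y + sqrt(beta)*F^H eta + sqrt(alpha)*nu
        eta, nu = cnormal(y.shape), cnormal(y.shape)
        rhs = beta * FHy + np.sqrt(beta) * FH(eta) + np.sqrt(alpha) * nu
        x = rhs / (beta + alpha)

        # (8b): beta | y, x ~ Gamma(c + m, rate = d + ||Fx - y||^2)
        r = F(x) - y
        beta = rng.gamma(c + m, 1.0 / (d + np.vdot(r, r).real))

        # (8c): alpha_j | x_j ~ Gamma(a + 1, rate = b + |x_j|^2), elementwise
        alpha = rng.gamma(a + 1.0, 1.0 / (b + np.abs(x) ** 2))

        xs.append(x)
        alphas.append(alpha)
        betas.append(beta)

    return xs, alphas, betas

The retained chains of x, α, and β can then be summarized or monitored with the convergence statistic described in Section III-A below.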

III-A Chain convergence

It is generally unknown how quickly the chain formed in Algorithm 1 will converge. While there are several heuristic approaches available, here we follow [gelman2013bayesian] to determine chain convergence. Multiple chains are computed using randomly chosen starting points, based on the observation that the variance within a single chain will converge faster than the variance between chains. A statistic R̂ is computed for each element of each latent variable, the value of which is a measure of convergence of that individual parameter. The derivation of this statistic closely follows [bardsley2012mcmc]. Compute K chains (in our implementation this is done in parallel), each of length 2n, keeping only the latter n samples. Let ψ_{j,k} denote the jth retained sample from the kth chain for a single parameter, and define the between-chain variance

B = (n / (K − 1)) Σ_{k=1}^K (ψ̄_k − ψ̄)²,

where ψ̄_k is the mean of the samples in chain k, ψ̄ is the mean of the samples in every chain, and the within-chain variance

W = (1 / K) Σ_{k=1}^K s_k²,  with  s_k² = (1 / (n − 1)) Σ_{j=1}^n (ψ_{j,k} − ψ̄_k)².

Hence B is a measure of the variance between the chains while W is a measure of the variance within each individual chain. The marginal posterior variance is then estimated by

V̂ = ((n − 1) / n) W + (1 / n) B,   (11)

which is an unbiased estimate under stationarity, [gelman2013bayesian]. From this variance estimate, we compute the desired statistic

R̂ = √(V̂ / W),   (12)

which tends to 1 from above as n → ∞. Once R̂ dips below 1.1 for all sampled parameters, the samples can together be considered samples from the posterior (6), [gelman2013bayesian]. Note that other values can also be chosen as a tolerance for R̂, [bardsley2012mcmc], but 1.1 seems reasonable when accounting for additional numerical errors. From the resulting samples of x, α, and β, a variety of sample statistics can be computed which describe the joint posterior density and help to quantify the uncertainty in the data.
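A direct transcription of (11) and (12) for a single scalar parameter, given K chains of n retained samples each (the array layout and function name are our own):

import numpy as np

def gelman_rubin_rhat(chains):
    """Compute the R-hat statistic of (12) for one parameter.

    chains : array of shape (K, n), the retained samples from K independent chains.
    """
    K, n = chains.shape
    chain_means = chains.mean(axis=1)                           # per-chain means
    grand_mean = chain_means.mean()                             # mean over all chains
    B = n / (K - 1) * np.sum((chain_means - grand_mean) ** 2)   # between-chain variance
    W = np.mean(chains.var(axis=1, ddof=1))                     # within-chain variance
    var_hat = (n - 1) / n * W + B / n                           # marginal posterior variance, (11)
    return np.sqrt(var_hat / W)                                 # (12)

# Example: chains drawn from the same distribution should give R-hat close to 1.
rng = np.random.default_rng(4)
print(gelman_rubin_rhat(rng.normal(size=(4, 5000))))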

Fig. 1: Optical images of the parking lot imaged in the GOTCHA dataset, [casteel2007challenge]. Note the scene contains a variety of targets, including calibration targets and primitive reflectors such as the tophat shown, as well as a Toyota Camry, a forklift, and a tractor.

IV Results

We now provide a real-world example that demonstrates the robustness of Algorithm 1 for SAR image formation. Since the ground truth reflectivity image is unknown, we are unable to compute standard error statistics such as the relative error. This is the case even in synthetically-created SAR examples, where the true reflectivity is still unknown. Therefore, the unprecedented uncertainty quantification information the proposed method provides is all the more valuable, as it is able to quantify how much we should trust pixel values and structures in the image even in the absence of ground truth. Throughout, reflectivity images are displayed in decibels (dB) and clipped to a fixed display range; lesser or greater values are assigned the minimum or maximum.
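For reference, a typical amplitude-to-decibel display conversion looks like the sketch below; the 20·log10 convention and the clipping limits shown are common choices of ours, not necessarily the exact values used for the figures in this paper.

import numpy as np

def to_db(x, db_min=-60.0, db_max=0.0):
    """Convert a complex reflectivity image to decibels for display.

    Uses the common amplitude convention 20*log10(|x| / max|x|). The clipping
    limits db_min and db_max are illustrative placeholders, not the paper's values.
    """
    mag = np.abs(x)
    db = 20.0 * np.log10(mag / mag.max() + np.finfo(float).tiny)
    return np.clip(db, db_min, db_max)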

IV-A Data

The GOTCHA Volumetric SAR Data Set consists of SAR phase history data of a parking lot scene collected at X-band with a 640 MHz bandwidth, full azimuth coverage, 8 different elevation angles, and full polarization, [casteel2007challenge]. This is a real-world SAR dataset captured by the Air Force Research Laboratory. The parking lot contains various vehicle targets including civilian vehicles, construction vehicles, calibration targets, primitive reflectors, and military vehicles. Figure 1 shows optical images of the targets. The center frequency is 9.6 GHz and the bandwidth is 640 MHz. This public release data has been used extensively for testing new SAR image formation methods.

Fig. 2: Full images formed with (clockwise from top left) NUFFT; TV regularization; sparse sample mean; ℓ1 regularization.

IV-B Image estimate

Figure 2 compares images of the parking lot scene formed using an inverse NUFFT, ℓ1 regularization, total variation (TV) regularization, and the proposed sparsity-based sampling method. The inverse NUFFT image corresponds to a maximum likelihood estimate, minimizing a least squares cost function. Clearly this does little to reduce speckle and noise and results in a grainy image. The ℓ1 regularization scheme encourages sparsity in the estimate, yet it is evident that much of the speckle remains. The TV regularization perhaps does the best at removing speckle; however, it leaves block-like artifacts in its place, making it difficult to distinguish between signal and background. Recall that TV regularization is essentially an image denoising model – it aims to recover a piecewise constant image and also does not distinguish speckle from noise – which may explain the results. The images have pixels. Code from [sanders-imaging] is used to wrangle the GOTCHA data and perform image formation for the comparison methods. The sampling-based method, which also uses sparsity-encouraging parameters, appears to retrieve a significantly better estimate than the other methods in terms of noise and speckle reduction, as well as contrast improvement. There also appear to be no new artifacts, such as the block-like artifacts in the TV reconstruction.

The runtimes for each algorithm were s for the NUFFT, s for the regularization, and s for the sampling method. Each image was formed on Polaris, a shared memory computer operated by Dartmouth Research Computing with 40 cores, 64-bit Intel processors, and 1 TB of memory. Using such a large machine was necessary in order to store the samples (here for each of parameters).

Fig. 3: Sample variance of |x| at each pixel, displayed in dB.

IV-C Uncertainty quantification

With the samples having been drawn and image estimates computed, we now seek to visualize confidence information in order to provide important information about the certainty of these estimates. One way to do this is to look at the variance (or standard deviation) of the samples at each pixel. This can be helpful in forming a confidence estimate by acknowledging that roughly 95% of samples in a Gaussian distribution lie within 2 standard deviations of the mean. Figure 3 shows the sample variance of |x| at each pixel, displayed in dB. We see that high-magnitude pixels tend to vary more than low-magnitude pixels, which is indicative of tighter confidence intervals in the background, implying greater certainty that there are no targets in these regions. We can also display the x samples in a short movie. Using Twinkle, [nagy2002image], as inspiration, we are less confident in pixels or structures in the image that “twinkle” than those that are persistent throughout the movie.
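The confidence visualizations described above reduce to per-pixel statistics across the retained samples of x; a short sketch (the sample array layout and the Gaussian-interval rule of thumb are our assumptions):

import numpy as np

def uq_images(x_samples):
    """Per-pixel mean and standard deviation of the reflectivity magnitude.

    x_samples : complex array of shape (num_samples, n1, n2) of retained Gibbs samples.
    Returns (mean_image, std_image). If the per-pixel samples are roughly Gaussian,
    mean +/- 2*std gives an approximate 95% credible interval for each pixel.
    """
    mag = np.abs(x_samples)
    return mag.mean(axis=0), mag.std(axis=0, ddof=1)

Stepping through the same sample array frame by frame produces the sample “movie” mentioned above.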

Fig. 4: (Left) Sample mean of α; (right) histogram of β samples.

IV-D Speckle and noise estimates

Similar to the image estimates, the sampling-based image formation method produces samples of α, the parameter governing speckle, and β, the inverse noise variance. Figure 4 shows the sample mean of α as well as a histogram of the scalar β samples. The reciprocal values of the α mean image provide an estimate of the mean speckle parameter at each pixel. There are several observations that can be made from this estimate. Many features of the reflectivity image are visible in it. Recall that the magnitude of each pixel is Rayleigh distributed with mean proportional to α_j^{-1/2}, hence changes in the magnitude of each pixel are proportional to α_j^{-1/2}. There is practically no speckle except at the various high-magnitude target reflectivities, where the fully-developed speckle model is no longer appropriate, matching the speckle reduction we saw in the image estimate in Figure 2. This confirms that the sparsity-encouraging measures taken effectively reduced speckle. In addition to these estimates, we can also perform similar uncertainty quantification analysis for α and β by looking at the variance image or sample movie as well.
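The speckle and noise summaries are equally simple to extract from the chains; the sketch below assumes sample arrays shaped like those returned by the Gibbs sampler sketched after Algorithm 1.

import numpy as np

def speckle_noise_estimates(alpha_samples, beta_samples):
    """Summaries of the speckle and noise chains.

    alpha_samples : array of shape (num_samples, n1, n2); beta_samples : array of shape (num_samples,).
    Returns the per-pixel mean speckle estimate (the reciprocal of the alpha sample
    mean, as described in the text) and the posterior mean of the inverse noise variance.
    """
    alpha_mean = np.asarray(alpha_samples).mean(axis=0)
    return 1.0 / alpha_mean, float(np.mean(beta_samples))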

V Conclusion

In this paper, we developed a new framework for coherent SAR image formation. This task is challenging due to the problem size and the speckle phenomenon. Current methods also lack uncertainty quantification. Our framework uses a hierarchical Bayesian model with conjugate priors to directly incorporate fully-developed speckle. A parameter-free sparsity-encouraging sampling method is introduced to provide estimates of the image, the speckle, and the noise. The GOTCHA data set example demonstrates that our method reduces speckle and noise significantly more than other commonly used methods on a real-world problem. Uncertainty quantification information unprecedented in SAR is also provided in the form of variance images and sample movies, indicating when the pixel values and features shown in an estimate can be trusted. Uncertainty quantification is also provided for the speckle and noise. Such information is of particular importance in SAR, where ground truth images even for synthetically-created examples are unknown.

Future work will focus on further accelerating the sampling method, as well as decreasing storage and memory requirements. This will enable image formation with more pixels, as well as multi-pass and three-dimensional imaging. Finally, comparisons with deep learning based SAR image formation (still in its nascent stages, [mason2017deep]) will be necessary.

References