
Generative Adversarial Networks for Recovering Missing Spectral Information

Ultra-wideband (UWB) radar systems typically operate in the low-frequency spectrum to achieve penetration capability. However, this spectrum is also shared by many other communication systems, which causes missing information in several frequency bands. To recover this missing spectral information, we propose a generative adversarial network, called SARGAN, that learns the relationship between original and missing-band signals by observing pairs of them during training. Initial results show that this approach is promising for tackling this challenging missing-band problem.



I Introduction

Over the past few decades, ultra-wideband (UWB) radar systems have been widely employed in various practical applications due to their penetration capability. For example, the U.S. Army has been developing UWB radar systems for the detection of difficult targets in applications such as foliage penetration [2], ground penetration [3], and sensing-through-the-wall [4]. To achieve penetration capability, these systems must operate in the low-frequency spectrum, from the MHz range up to several GHz. In addition to the low-frequency requirement for penetration, they must employ wide-bandwidth signals to achieve the desired resolution. However, such a signal occupies a wide spectrum that is also shared by radio, TV, cellular phones, and other systems. The frequency allocation and use problem thus becomes a major challenge, and it only worsens over time as additional radar and communication systems that need the penetration feature must operate in this low-frequency spectral region.

There are two key challenges for any UWB system: 1) the system must operate in the presence of other systems, and 2) the system must avoid transmitting energy in certain frequency bands that are specified by frequency management agencies. As a result, the received data have spectral content with multiple bands that are either corrupted (due to the presence of interference sources) or nonexistent (because of no transmission in the prohibited frequency bands). In this paper, we tackle the latter problem, in which a large portion of the spectrum is notched due to the frequency allocation issue.

Conventional techniques usually detect the corrupted frequency bands by searching for spikes in the spectral domain. The fast Fourier transform (FFT) bins that correspond to the contaminated frequency bands are then zeroed out. This technique results in severe sidelobes in the time or spatial domains of the output data and imagery because of the sharp transitions (frequency samples with no information) in the frequency domain. To overcome these limitations, Do et al. [5] proposed a technique to recover missing spectral information using sparse representation. It is based on the assumption that the full-spectrum data and their corrupted versions are similarly sparsely represented by a full-spectrum dictionary and a missing-band dictionary, respectively. Its limitation is its inability to distinguish nearby targets at fine resolution. Furthermore, the missing frequency bands must be known a priori.
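The sidelobe problem caused by zeroing FFT bins can be sketched numerically. The pulse shape, notch location, and widths below are hypothetical stand-ins for radar raw data, not the ARL data:

```python
import numpy as np

n = 1024
t = np.arange(n)
# Gaussian-windowed tone standing in for a wideband radar pulse.
x = np.exp(-((t - n // 2) ** 2) / (2 * 40.0**2)) * np.cos(2 * np.pi * 100 * t / n)

# Conventional approach: zero out the FFT bins of a "contaminated" band
# (and its mirror band, so the time-domain signal stays real).
mask = np.ones(n)
mask[100:111] = 0.0
mask[-110:-99] = 0.0
x_notched = np.fft.ifft(mask * np.fft.fft(x)).real

# The sharp spectral transition spreads energy far from the pulse:
# the sidelobe level away from the pulse center rises by orders of magnitude.
tail = slice(0, n // 4)
sidelobe_clean = np.abs(x[tail]).max()
sidelobe_notched = np.abs(x_notched[tail]).max()
print(sidelobe_notched > 10 * sidelobe_clean)  # True
```

Removing even a narrow band this way leaves a hard spectral edge, and the resulting ringing is what the down-range profiles later in the paper exhibit.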

Recently, a class of generative models in the neural network literature, namely, the generative adversarial network (GAN) [6], has produced remarkable results in various applications in computer vision, speech processing, and other fields. A standard GAN takes a random noise vector as input and generates samples that resemble real data. Many works also condition the GAN, such that the generated image samples are not only realistic but also match constraints imposed by the conditions. Some works conditioned GANs on discrete class labels [7, 8], while many others synthesized images by conditioning GANs on images for tasks such as domain transfer [9, 10], image super-resolution [11, 12], image synthesis from surface normal maps [14], and style transfer [13].

In this paper, we propose a GAN framework to recover missing spectral information in multiple frequency bands of UWB synthetic aperture radar (SAR) data that are either corrupted or nonexistent. Specifically, we propose a generator loss function that encourages the network to seek solutions on the SAR image manifold that are consistent with data in the frequency domain. Our proposed method can be seen as a variant of a conditional GAN framework, but conditioned on the spectral domain.

The network is trained by observing various spectral missing patterns. The advantage of this technique is twofold. First, all of the computational complexity is in the training phase; the testing phase consists only of a few matrix multiplications. Second, to recover a SAR image from its frequency-corrupted version, the trained network requires no information about the missing band locations. This is an advantage of our proposed method over traditional spectral recovery techniques, in which the missing frequencies are required a priori. To our knowledge, this is the first GAN-based framework for recovering missing spectral information in UWB radar systems.

II Method

We aim to reconstruct a SAR image $x$ from its missing-band version $y$. In our framework, we adopt a GAN structure. We train the network by minimizing a standard discriminator loss and a generator loss specifically designed for this missing spectral problem. The training data consist of a set of image pairs, each comprising an uncorrupted image and its frequency-corrupted counterpart. Each corrupted image is obtained by notching out certain frequency bands of the original image. Original images are not available in the testing phase.

Our goal is to train a generator $G_\theta$, parameterized by $\theta$, that reconstructs a SAR image from its frequency-corrupted version. Given a set of training data $\{(x_i, y_i)\}_{i=1}^N$, we train the generator by solving

$\hat{\theta} = \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}_G\big(G_\theta(y_i), x_i\big).$

Then a SAR image can be recovered from its missing-band counterpart $y$ as

$\hat{x} = G_{\hat{\theta}}(y).$
We describe our generator loss in detail in Section II-B. It conditions on the frequency domain of the generated sample and forces the generator to favor solutions on the SAR image manifold.

II-A Generative Adversarial Networks

GANs are neural networks for training generative models in an adversarial manner. A GAN consists of two networks, a generator $G$ and a discriminator $D$. The generative network $G$ learns a mapping from a low-dimensional representation space to a high-dimensional sample space. The purpose of $G$ is to generate samples that resemble the training data. The discriminator $D$ maps an input to a likelihood of being real. Its role is to distinguish between samples generated by $G$ and samples drawn from the data distribution.

Directly applying standard GANs to the missing spectral recovery problem fails to reconstruct the original images, because they produce samples that are inconsistent with the input data in the frequency domain. We therefore formulate our generator loss to favor solutions that preserve the available frequencies of the corrupted images. This guarantees consistency between the generated sample and the original image. Moreover, the input to our generator is a corrupted image instead of a low-dimensional encoding as in traditional GANs. This allows our network to learn a mapping from a corrupted input to the desired solution.

Fig. 1: SARGAN architecture. The generator produces an estimate of a full-spectrum image from its corrupted version in order to fool the discriminator. The discriminator tries to distinguish this estimate from the original image. In a successfully trained SARGAN, the generator produces estimates that are close to the full-spectrum image and thus successfully fool the discriminator.

II-B Generator Loss

We encourage the generator to seek solutions on the SAR image manifold that are consistent with the input. To do so, we formulate the generator loss as a weighted sum of a content loss component and an adversarial loss component.

A SAR image $x$ and its missing-band counterpart $y$ are related by:

$y = F^{-1} M F x.$

Here, $F$ is the Fourier matrix, and $M$ is a binary diagonal masking matrix defined as:

$M_{kk} = \begin{cases} 0, & \text{if frequency bin } k \text{ is notched}, \\ 1, & \text{otherwise}. \end{cases}$

In other words, the masking matrix notches out the missing frequency bands and preserves the available frequencies of the original image. Because missing local information in the frequency domain results in a global deviation in the time domain, imposing data consistency in the time domain fails to recover the notched spectral information. We therefore define a content loss that requires generated samples to preserve the available frequencies of the input images:

$\mathcal{L}_{\text{content}}\big(G(y), y\big) = \big\| M F G(y) - M F y \big\|_{1},$

where the $\ell_1$ loss is defined as $\|A\|_1 = \sum_{i,j} |a_{ij}|$ for a given matrix $A = (a_{ij})$. Note that the $\ell_1$ loss can be replaced by other losses such as $\ell_2$. In our experiments, we find that the $\ell_1$ loss results in a faster convergence rate and a more robust reconstruction than the $\ell_2$ loss.
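The masking relation and the content loss can be sketched in a few lines of numpy. For simplicity this uses 1-D signals rather than SAR images, and the notched bands are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x = rng.standard_normal(n)        # stand-in for a full-spectrum signal

# Binary frequency mask M: 0 on notched bins, 1 elsewhere. The mirror band
# keeps the masked spectrum conjugate-symmetric, so y stays real.
m = np.ones(n)
m[40:80] = 0.0
m[-79:-39] = 0.0

# y = F^{-1} M F x: notch the bands in the frequency domain.
y = np.fft.ifft(m * np.fft.fft(x)).real

def content_loss(g, y, m):
    """l1 distance between generated and input data on the available bins."""
    return np.abs(m * np.fft.fft(g) - m * np.fft.fft(y)).sum()

# Any signal that agrees with y on the available bins has (near-)zero loss,
# even if it differs arbitrarily inside the notched bands -- including the
# original x itself.
g = y + np.fft.ifft((1 - m) * np.fft.fft(rng.standard_normal(n))).real
print(content_loss(x, y, m) < 1e-6, content_loss(g, y, m) < 1e-6)  # True True
```

This also illustrates why the content loss alone cannot pin down the notched bands: many frequency-consistent candidates exist, and the adversarial term is what steers the generator toward the SAR image manifold among them.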

To further improve the reconstruction quality, we impose an adversarial loss on the generator. This encourages the generator to fool the discriminator by seeking solutions on the SAR image manifold:

$\mathcal{L}_{\text{adv}} = -\log D\big(G(y)\big).$

The generator loss is defined as a weighted sum of these two losses:

$\mathcal{L}_G = \mathcal{L}_{\text{content}} + \lambda\, \mathcal{L}_{\text{adv}},$

where $\lambda$ is a positive constant controlling the tradeoff between the two terms.

II-C Discriminator Loss

We adopt a standard discriminator network $D$, which we train to solve the following optimization problem:

$\max_{D} \; \mathbb{E}_{x}\big[\log D(x)\big] + \mathbb{E}_{y}\big[\log\big(1 - D(G(y))\big)\big].$

This allows us to train a generator that produces realistic SAR images from corrupted inputs to fool a discriminator, which in turn is trained to differentiate reconstructed SAR images from original ones. Our generator is thus encouraged to favor solutions on the SAR image manifold.
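The two objectives can be evaluated numerically as a sketch, using random untrained stand-in weights (all sizes here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, d = 8, 64
W_g = rng.standard_normal((d, d)) * 0.1   # stand-in generator weights
w_d = rng.standard_normal(d) * 0.1        # stand-in discriminator weights

x = rng.standard_normal((n, d))           # full-spectrum samples
y = rng.standard_normal((n, d))           # corrupted inputs

G_y = y @ W_g                             # generated samples G(y)
D_real = sigmoid(x @ w_d)                 # D(x): probability of "real"
D_fake = sigmoid(G_y @ w_d)               # D(G(y))

# Discriminator objective: maximize E[log D(x)] + E[log(1 - D(G(y)))].
d_objective = np.mean(np.log(D_real)) + np.mean(np.log(1.0 - D_fake))
# Adversarial generator term: minimize -log D(G(y)), i.e. push D(G(y)) -> 1.
g_adv = -np.mean(np.log(D_fake))
print(d_objective < 0.0, g_adv > 0.0)  # True True
```

In training, the two objectives are optimized alternately: a gradient step on the discriminator, then a gradient step on the generator's combined loss.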

III Results

In this section, we demonstrate SARGAN for the spectral recovery problem using SAR data from the U.S. Army Research Laboratory (ARL) UWB SAR system.

This SAR database consists of targets (metal and plastic mines, 155-mm unexploded ordnance [UXO], etc.) and clutter objects (a soda can, rocks, etc.) buried under rough ground surfaces. The electromagnetic (EM) radar data are simulated with full-wave computational EM software based on the finite-difference time-domain (FDTD) method [26], which was developed by ARL. The software has been validated for a wide variety of radar signature calculation scenarios [27], [28]. Our volumetric rough-ground-surface grid with the embedded buried targets was generated using the surface root-mean-square (rms) height and correlation length parameters. The targets are flush buried at a 2-3 cm depth. Fig. 2 (left) shows original SAR raw data (using VV polarization) of some targets buried under a perfectly smooth ground surface. Each target is imaged at a random viewing aspect angle and a fixed integration angle.

In our experiment, the SAR radar is configured in side-looking mode. It travels in the horizontal direction, transmits impulses toward the imaging area, and receives backscattered radar signals from the targets. In such a scene, there may be many point targets with different amplitudes located randomly throughout the scene. For demonstration purposes, we use the raw data in a case where there is a single randomly placed point target in the scene. The left image in Fig. 2 shows the full-spectrum raw data for this simulation scenario. The data bandwidth extends from the MHz range into the GHz range and contains most of the signal energy. It serves as the baseline image for performance comparison purposes.

Fig. 2: Raw data in time domain of target versus aspect angle.

Next, we consider the spectral notching case that arises from frequency allocation restrictions. In our experiments, we randomly zero out frequency sub-bands of the spectrum, each an integer multiple of the frequency resolution wide. These random frequency bands may overlap and together cover 90% of the data spectrum. Fig. 3 illustrates this random notching procedure in the frequency domain of the data.
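The random notching procedure can be sketched as follows. The bin count and sub-band width are hypothetical; only the 90% coverage target comes from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 1000      # bins across the occupied bandwidth (assumed)
band_width = 10    # sub-band width in bins, a multiple of the resolution (assumed)

# Repeatedly zero a random fixed-width sub-band; bands may overlap.
# Stop once 90% of the spectrum has been notched.
mask = np.ones(n_bins)
while mask.mean() > 0.10:
    start = rng.integers(0, n_bins - band_width + 1)
    mask[start:start + band_width] = 0.0

print(f"fraction of spectrum notched: {1 - mask.mean():.2f}")
```

Because the bands are drawn independently and may overlap, the loop runs until the union of the notched bands reaches the coverage target rather than for a fixed number of draws.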

Fig. 3: Spectrum of the raw data with 90% missing in the bandwidth.

The middle image in Fig. 2 shows the raw data with 90% of the spectrum notched. Fig. 4 presents the down-range profiles of the data. The large number of missing frequencies results in severe sidelobes in the data. Recovering the original data is therefore challenging in this situation.

Fig. 4: Normalized down-range profiles in dB scale of the raw data. The spectrally notched data show severe sidelobes, whereas the data reconstructed using SARGAN follow the ground truth very well. The reconstructed test result was obtained after training.

We use SARGAN to recover the missing spectral information under this setup. We use a four-layer fully connected neural network for the generator and a three-layer fully connected neural network for the discriminator. The first and last layers of the generator have the same dimensions as the input data, with two hidden layers of equal length in between. The dimension of the first layer of the discriminator is equal to that of the input data; its hidden layer feeds a single output node, which guesses whether the input is a true image or one produced by the generator. In our experiments, we use a stable alternative to the standard GAN, the Wasserstein GAN (WGAN).
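A forward-pass sketch of these layer dimensions, in numpy with random untrained weights; the hidden width and flattened input size are assumptions, since the exact values are omitted above:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in = 256   # flattened input size (assumed stand-in for the SAR data dims)
h = 512      # hidden-layer width (assumed)

def layer(shape):
    # He-style random initialization for a ReLU network (illustrative only).
    return rng.standard_normal(shape) * np.sqrt(2.0 / shape[0])

# Generator: input -> hidden -> hidden -> output; first/last match the input.
G = [layer((d_in, h)), layer((h, h)), layer((h, d_in))]
# Discriminator/critic: input -> hidden -> scalar score.
D = [layer((d_in, h)), layer((h, 1))]

def forward(ws, v):
    for w in ws[:-1]:
        v = np.maximum(v @ w, 0.0)   # ReLU on hidden layers
    return v @ ws[-1]

y = rng.standard_normal((4, d_in))   # batch of corrupted inputs
x_hat = forward(G, y)                # reconstructed full-spectrum data
score = forward(D, x_hat)            # critic score, unbounded as in WGAN
print(x_hat.shape, score.shape)      # (4, 256) (4, 1)
```

In the WGAN variant, the discriminator acts as a critic whose objective is E[f(x)] - E[f(G(y))] over (approximately) 1-Lipschitz functions f, so its output is an unbounded score rather than a sigmoid probability.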

The training data are obtained as follows. From the full-spectrum raw data, we produce several randomly spectrally notched versions of those data. Each notched data matrix, together with the full-spectrum data, constitutes a training pair. We then train our network on these training pairs. In the testing phase, the full-spectrum raw data are unavailable, and our goal is to recover them from a corrupted version that is not included in the training data. The locations of the notched bands are unknown to the network. This is significantly different from traditional spectral recovery techniques, in which the missing frequencies are required a priori.

Fig. 5 shows the normalized down-range profiles in dB of the recovered data, produced by the generator of SARGAN, at regular intervals during training. It can be seen that, early in training, the generated samples already approximate the original full-spectrum data well. The normalized down-range profile of the reconstructed test data after training is shown in Fig. 4. The recovered data closely follow the original data, whereas the corrupted version shows severe sidelobes.

Fig. 5: Normalized down-range profiles in dB of the reconstructed data during the early training epochs. Each panel shows the testing result at a successive training checkpoint.
TABLE I: Recovery Performance of SARGAN on Spectrally Notched Data (SNR)
Corrupted Data SNR (dB) | Recovery SNR (dB) | Recovery Gain (dB)
8.15 | 23.99 | 15.84

Fig. 6 visualizes the generator loss during the training phase. It can be seen that the network converges early in training, which matches the down-range profile visualization above.

Fig. 6: Generator loss values during the training phase. The network converges after relatively few iterations.

To further quantitatively evaluate SARGAN, we compute the signal-to-noise ratio (SNR) in dB scale between the original data $x$ and the data $\hat{x}$ recovered using SARGAN, using the formula:

$\text{SNR} = 10 \log_{10} \frac{\|x\|_2^2}{\|x - \hat{x}\|_2^2}.$

We also compute the SNR between the original data and the corrupted data. Table I shows the SNR in these two cases, along with the performance gain in dB obtained by SARGAN. It can be seen that our proposed method reduces the sidelobe level by more than 15 dB, which matches the normalized down-range profiles shown in Fig. 4. Remarkably, SARGAN obtains this performance gain without any information about the missing band locations. Popular methods such as FFT-based notching and sparse recovery fail in this case.
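The evaluation can be sketched as follows, assuming the common squared-norm SNR definition in dB; the corruption here is additive noise, a stand-in rather than actual spectral notching:

```python
import numpy as np

def snr_db(x, x_hat):
    # SNR = 10 log10(||x||^2 / ||x - x_hat||^2), a standard definition.
    return 10.0 * np.log10(np.sum(x**2) / np.sum((x - x_hat) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)                      # original data (stand-in)
x_corrupted = x + 0.4 * rng.standard_normal(1000)  # heavily degraded version
x_recovered = x + 0.05 * rng.standard_normal(1000) # well-recovered version

gain = snr_db(x, x_recovered) - snr_db(x, x_corrupted)
print(f"recovery gain: {gain:.1f} dB")
```

The "recovery gain" reported in Table I is exactly this difference: the recovered data's SNR minus the corrupted data's SNR, both measured against the same original.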

IV Conclusion

We proposed a generative adversarial network framework, called SARGAN, to tackle the missing spectral information recovery problem. A well-trained SARGAN is expected to produce a good estimate of the full-spectrum SAR data from a spectrally notched counterpart without any knowledge of the missing band locations. In the training phase, the network learns the relationship between pairs of full-spectrum and corrupted-spectrum data. This relationship is captured by our proposed generator loss function, which forces SARGAN to favor solutions on the SAR data manifold that are consistent with the input data in the frequency domain.

Using the UWB SAR database from ARL, we showed that the proposed framework can successfully recover the information in the missing frequency bands. Remarkably, it obtains a gain of more than 15 dB without knowing the missing frequency locations. To our knowledge, it is the first method to obtain such a performance gain in this situation.


  • [1] H. Kopka and P. W. Daly, A Guide to LaTeX, 3rd ed.   Harlow, England: Addison-Wesley, 1999.
  • [2] L. H. Nguyen, R. Kapoor, and J. Sichina, "Detection algorithms for ultrawideband foliage-penetration radar," Proceedings of SPIE, Vol. 3066, pp. 165-176, 1997.
  • [3] L. Nguyen, K. Kappra, D. Wong, R. Kapoor, and J. Sichina, "Mine field detection algorithm utilizing data from an ultrawideband wide-area surveillance radar," Proc. SPIE Int. Soc. Opt. Eng., Vol. 3392, p. 627, 1998.
  • [4] L. Nguyen, M. Ressler, and J. Sichina, "Sensing through the wall imaging using the Army Research Lab ultra-wideband synchronous impulse reconstruction (UWB SIRE) radar," Proceedings of SPIE, Vol. 6947, 69470B, 2008.
  • [5] L. H. Nguyen and T. Do, "Recovery of missing spectral information in ultra-wideband synthetic aperture radar (SAR) data," in Proc. IEEE Radar Conference (RADAR), pp. 253-256, May 2012.
  • [6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems (NIPS), pp. 2672-2680, 2014.
  • [7] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets," in NIPS, 2016.
  • [8] M. Mirza and S. Osindero, "Conditional Generative Adversarial Nets," arXiv preprint arXiv:1411.1784, 2014.
  • [9] D. Yoo, N. Kim, S. Park, A. S. Paek, and I. S. Kweon, "Pixel-Level Domain Transfer," in ECCV, 2016.
  • [10] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-Image Translation with Conditional Adversarial Networks," in CVPR, 2017.

  • [11] J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual Losses for Real-Time Style Transfer and Super-Resolution," in ECCV, 2016.
  • [12] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network," in CVPR, 2017.
  • [13] C. Li and M. Wand, "Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks," in ECCV, 2016.
  • [14] X. Wang and A. Gupta, "Generative Image Modeling Using Style and Structure Adversarial Networks," in ECCV, 2016.