High-Contrast Limited-Angle Reflection Tomography

by Ajinkya Kadu, et al.

Inverse scattering is the process of estimating the spatial distribution of the scattering potential of an object by measuring the scattered wavefields around it. In this paper, we consider limited-angle reflection tomography of high-contrast objects, which commonly occurs in ground-penetrating radar, exploration geophysics, terahertz imaging, ultrasound, and electron microscopy. Unlike conventional transmission tomography, the reflection regime is severely ill-posed since the measured wavefields contain far less spatial-frequency information about the target object. We propose a constrained incremental frequency inversion framework that requires no side information from a background model of the object. Our framework solves a sequence of regularized least-squares subproblems that ensure consistency with the measured scattered wavefield while imposing total-variation and non-negativity constraints. We propose a proximal Quasi-Newton method to solve the resulting subproblems and devise an automatic parameter selection routine to determine the constraint parameter of each subproblem. We validate the performance of our approach on synthetic low-resolution phantoms and with a mismatched forward-model test on a high-resolution phantom.








I Introduction

Inverse scattering addresses the problem of reconstructing an image of the scattering potential of an object by probing it with electromagnetic or acoustic waves of finite bandwidth. An incident wavefield propagating inside the object induces multiple scattering of the waves that are generally measured on the boundary of the material. The scattered waves carry information about the spatial distribution of the scattering potential of the material, which has led to applications in numerous fields, such as, non-destructive testing [laurens2005non], optical tomography [arridge1999optical], geophysical imaging [sirgue2010thematic, virieux2009overview], ground penetrating radar [witten1994ground], medical imaging [yuan2007three, haynes2010large], and electron microscopy [humphry2012ptychographic, zewail2010four, barton1988photoelectron].

Fig. 1: Three acquisition scenarios in inverse scattering: (a) full-view, (b) transmission, and (c) reflection. The figure marks the domain of interest, the transmission domain, and the receiver domain. A single experiment consists of a set of transmitters sending a wavefield into the domain of interest, with the scattered wavefield measured by a set of receivers.

A scattering experiment consists of a transmission domain, an object domain, and a receiver domain. A set of transmitters located in the transmission domain sends incident waves into the scene, where they interact with an object in the object domain. This interaction leads to scattering of the incident waves. The scattered waves are then measured at the set of receivers located in the receiver domain. Based on the location of the transmission and receiver domains with respect to the object, we can classify the acquisition scheme into three types:

(i) full-view, where the transmitters and receivers surround the object domain; (ii) transmission mode, where the transmitters and receivers are located on opposite sides of the object; and (iii) reflection mode, where the transmitters and receivers are co-located. Figure 1 illustrates these acquisition schemes. The full-view mode provides the most information about the spatial distribution of the object. The transmission mode offers less information than the full-view mode, but it reduces the cost of the experiment because it requires fewer transmitters and receivers. Tomographic imaging in this acquisition mode, known as transmission tomography, has found applications in many areas, for example, X-ray tomography in medicine and non-destructive testing.

The reflection mode generally arises due to a limitation in the ability to access different sides of the material, as in the case of underground imaging. We focus our presentation on the reflection tomography scenario where the problem is severely ill-posed. The ill-posedness arises due to restricted measurements and the limited availability of low spatial-frequency content in the measured wavefields. We discuss this further in Section II-C. The underground imaging setup often appears in ground penetrating radar, seismic imaging, and ultrasound imaging.

The spatial scattering potential of a material can be described by its contrast level. The contrast indicates the strength of interaction between an object's material and a probing wave. A low-contrast material is semi-transparent, meaning that the interaction of the waves with it induces weak scattering. A high-contrast material interacts strongly with waves, inducing multiple scattering events. In this paper, we classify objects according to their contrast level: a contrast below 1 is low, a contrast from 1 to 10 is medium, and a contrast above 10 is high. In general, the contrast varies with the frequency of the wave, but here we assume it to be independent of frequency.

I-A Related Work

Numerous techniques have been proposed for solving the inverse scattering problem in the reflection regime. Earlier approaches iteratively linearize the scattering model using straight-ray theory, the Born approximation, or the Rytov approximation [spencer1962general, born2013principles, devaney1981inverse, baysal1983reverse]. However, such linear models fail to account for the complex interaction between the wavefield and the material properties that results in multiple scattering. As a result, these methods require an accurate initial target model to enable the inversion and generally suffer from poor reconstruction quality, especially when the material is inhomogeneous or contains highly scattering objects. Recently, the nonlinear interaction between the wavefield and the object has been incorporated into the inversion process using the wave equation (for example, [colton2012inverse, bao2005inverse, bao2015inverse, borges2017high]). The inverse problem based on a wave-equation scattering model is known as full-waveform inversion (FWI) [tarantola1984inversion, pratt1999seismic, virieux2009overview]. FWI has been applied in multiple domains and across all modes of acquisition. A considerable amount of research has focused on full-view and transmission tomography with FWI. Since we are mainly interested in reflection tomography, we do not address the literature for the other modes.

Reflection tomography with FWI has been heavily investigated in the geophysical community. Since the problem is nonlinear and nonconvex, the convergence of the inversion depends heavily on the initial model [mulder2008exploring, symes2008migration]. Various approaches have been proposed in the past decade to mitigate the effect of an initial model on the reconstructed solution [warner2016adaptive, van2013mitigating, biondi2012tomographic, van2015penalty, bharadwaj2016full]. While these methods work well in the low contrast regime, they require regularization and additional constraints in the high contrast regime to deliver good reconstruction [esser2016constrained, esser2018total, asnaashari2013regularized]. Our work is also inspired by the TV-regularization strategies proposed in [esser2018total]. However, since the total variation parameter may be unknown [esser2018total], we develop a framework to estimate this parameter from the noise level in the data.

The sequential workflow has a long history in the geophysical imaging literature. It was introduced in [Bunks1995] under the name multiscale full-waveform inversion. We work with a regularized version of this multiscale approach. Since we add one frequency at a time, as opposed to a frequency batch as in [Bunks1995], our approach is more robust against local minima (see Section III-A for further discussion).

I-B Contributions and Outline

We develop an inversion framework for high-contrast limited-angle reflection tomography. Our contributions are three-fold:

  • Formulation: We adopt a regularized sequential approach based on incremental frequency inclusion. We keep the low frequencies in the cost function to avoid potential local minima. For a total of frequencies in the data, we solve constrained nonlinear least-squares problems sequentially.

  • Regularization and Optimization: We introduce a combination of non-negative and total-variation regularization for the contrast function. Note that both the regularizers are non-differentiable. To solve the regularized nonlinear least-squares problem, we propose a proximal Quasi-Newton (prox-QN) method that is computed using a primal-dual method.

  • Parameter Estimation: We develop a strategy for estimating the total-variation constraint parameter from the noise-level in the data.

We introduce the forward and the inverse scattering problems in Section II, where we also discuss the challenges of reflection tomography. In Section III, we present the details of our sequential approach and describe the optimization framework as well as the regularization strategies. We validate the proposed method on numerical phantoms and compare it with other methods in Section V, and conclude the paper in Section VI.

II Inverse Scattering Problem

We begin by presenting the scattering model that describes the relationship between the wavefield and the contrast function. Next, we formulate the discrete inverse problem to reconstruct the contrast function from the set of measured scattered wavefields. Finally, we discuss some challenges in estimating the contrast of an object in the reflection regime.

II-A Forward problem

The forward scattering problem constructs a mapping from a contrast function (determined by the materials in the object) to the scattered waves measured at the receivers. A wave equation governs this mapping in the time domain. For simplicity, we restrict our discussion to scalar waves, but the map can naturally be constructed for other types of scattering problems with some modifications (see, for example, [born2013principles]).

Consider the setup shown in Figure 1, where an object is located in a bounded domain. The object has a spatial distribution of permittivity that varies with the spatial coordinate, and it lies in a homogeneous background (free space) of constant permittivity. We define the contrast function (or relative permittivity) of the object as the difference between the permittivity of the object and that of the background. We illuminate this object using waves generated from a source function, and the scattered wavefield is subsequently measured inside the receiver domain.

The total wavefield in the object domain is related to the contrast function by the Lippmann-Schwinger integral equation


where is the Green function, is an input wavefield, represents the wavenumber in vacuum, is the frequency, and denotes the speed of light in vacuum. We assume that the contrast is real or, in other words, that the object is lossless. The input wavefield in (1) depends on the source function as


Finally, the scattered wavefield measured in the receiver domain, is given by


We provide a detailed derivation of equations (1), (2), and (3) in Appendix A. The forward problem finds the measurements from the known source function, the contrast function, and the Green function. In essence, it consists of solving equation (2), the Lippmann-Schwinger equation (1), and finally the data equation (3). Generally, we pre-compute the input wavefield for each wavenumber, since it is independent of the contrast function.

In the discrete setting, the scattering equation (1) and data equation (3) reduce to the following system of linear equations for each transmitter illumination and the wavenumber:


where and are the total and input wavefields, respectively, denotes the number of grid points used to discretize the domain , denotes the discretized contrast function, while and are the Green functions of the domain and receivers, respectively. Let be the number of receivers that discretizes the receiver domain , then is the noise-free scattered wavefield measured at the receivers. The critical step in the forward problem involves estimating the wavefield by inverting the matrix , where denotes the identity operator. As the discretization dimension increases, explicitly forming the matrix and computing its inverse become prohibitively expensive. Therefore, a functional form of along with the conjugate-gradient method (CG) are often used to perform the inversion. We note here that the convergence of CG depends on the conditioning of the operator , which becomes ill-conditioned for large wavenumber and high-contrast media, i.e., for large values of .
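As a sketch of this step, the wavefield solve can be written matrix-free. Everything below (the grid size, the random stand-ins for the Green matrices, and the toy contrast) is an illustrative assumption; a Krylov solver inverts the operator using only matrix-vector products, without ever forming it explicitly (GMRES is used here because the toy operator is non-symmetric; the paper uses CG on the functional form):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 64                                   # grid points (toy discretization)
chi = np.zeros(n)
chi[20:40] = 0.5                         # toy contrast function
G = 0.05 * rng.standard_normal((n, n))   # stand-in for the domain Green matrix
u_in = np.ones(n)                        # input wavefield

# Matrix-free action of A = I - G diag(chi); only matvecs are needed.
A = LinearOperator((n, n), matvec=lambda u: u - G @ (chi * u), dtype=float)
u, info = gmres(A, u_in)                 # iterative solve for the total wavefield
assert info == 0                         # 0 means the solver converged

G_r = rng.standard_normal((5, n))        # stand-in receiver Green matrix
d = G_r @ (chi * u)                      # simulated noise-free scattered data
```

For large wavenumbers or contrasts the operator becomes ill-conditioned and the iteration count grows, matching the remark above.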

II-B Inverse problem

The inverse scattering problem is the estimation of the contrast function given the measurements of the scattered wavefield at the receivers for each input wavefield generated by the transmitters. Assuming that the measurements are contaminated by white Gaussian noise, we can formulate the discrete inverse problem as a constrained least-squares problem:


where and denote the index sets for frequencies and transmitters, respectively, represents the number of frequencies, and represents the number of transmitters. We assume that the data are ordered according to the frequencies (in increasing order). For the rest of this paper, a norm without a subscript denotes the Euclidean norm. In general, problem (5) is ill-posed and admits multiple solutions. Therefore, spatial regularization in the form of a penalty function is often added to shrink the solution space.

Let us introduce, for each frequency, a data matrix, a wavefield matrix, and an input wavefield matrix. The cost function and the constraint for each frequency then take the form

where denotes the Frobenius norm. It is possible to eliminate the wavefields by enforcing the constraints. The reduced cost function at each frequency is then given by


With this reduced form, the regularized version of (5) now reads


Since both the cost function and the constraints are nonlinear, we resort to iterative methods to find a feasible solution of the regularized least-squares optimization problem shown above.

II-C Transmission vs. Reflection

A critical distinction between the transmission and reflection modes in inverse scattering manifests itself in the spatial-frequency content that can be captured by the measured wavefields. In the transmission regime, the received measurements generally capture a larger amount of the lower spatial frequencies of the target distribution than in the reflection regime. This is because a probing pulse in the transmission mode is modulated by the complete object before reaching the receivers. In contrast, the measured wavefields in the reflection mode are modulated only by the discontinuities in the object permittivity, which reflect the wavefields back to the receivers.

To illustrate this phenomenon, we simulate two sets of measurements of the scattered wavefield from the same object, observed in the transmission and reflection modes through their respective measurement operators. The object is illuminated from its left side by a flat-spectrum pulse containing 2, 3, and 5 GHz frequency components. We want to identify the amount of spatial-frequency content encoded in each set of measurements without being affected by the nonconvexity of the inverse problem (5). Therefore, we provide the true scattered wavefields for each frequency, which reduces (5) to a convex linear inverse problem in the contrast. Consequently, we solve the convex form of (5) in each of the transmission and reflection modes and plot in Figure 2 the spatial-frequency content (2D Fourier coefficients) of the reconstructed contrast in each mode. Notice how the recovered contrast in the reflection mode exhibits very little energy in the low spatial-frequency subbands of the Fourier plane. This is in stark contrast to the transmission mode, where a significant portion of the spectral energy of the recovered contrast corresponds to the low spatial frequencies. This illustration motivates the argument that the received measurements of the scattered wavefields in the reflection mode encode very little spatial-frequency information about the target object. Since the goal of tomographic imaging is to estimate the spatial distribution of the scattering potential of an object, the weak acquisition of spatial-frequency information renders the problem severely ill-posed compared to transmission tomography.

Fig. 2: Comparison of the spatial frequency content of the received wavefields between the transmission mode and the reflection mode from a transmitted pulse containing 2GHz, 3GHz, and 5GHz frequency components.

III Regularized Multiscale Approach

In this section, we present an incremental frequency inversion method that does not require a smooth initial model of the target image for successful recovery. We also discuss the regularization and the optimization strategy to solve the resulting problem.

III-A Sequential Workflow

The least-squares cost function in (7) provides a natural separation across frequencies. Moreover, the topology of the nonconvex cost function varies drastically between frequencies, which can be leveraged to find a good local minimum. We illustrate this behavior using a simple cylindrical model for the target with constant reflectivity, as shown in Figure 3(a). The true target has a fixed reflectivity and is illuminated with transmitters located at a y-position of -0.6 m. We plot in Figure 3(b) the value of the data-fidelity cost function for various reflectivity values. Notice that for higher-frequency wavefields, the cost function exhibits many local minima that are farther away from the global minimizer than for the low-frequency wavefields.

A popular approach in the exploration geophysics community is to solve a sequence of inverse problems starting with a low-frequency batch and then sliding linearly towards the high frequencies while keeping the batch size fixed. In Figure 3(c), we plot such a cost function for various frequency batches. We observe that the higher-frequency batches have many local minima. The sliding approach works only if we get close to the global minimizer during the low-frequency batch inversions. A more robust approach is to keep the low frequencies as a regularizer when inverting the high-frequency data. We plot the corresponding cost function in Figure 3(d). Notice that the cost functions are almost convex even when dealing with high-frequency data.

Fig. 3: (a) Illustration of a cylindrical object with true reflectivity measured by five co-located transmitters and receivers. Topology of the cost function per frequency (b), per frequency batch of size 10 (c), and with incremental frequency batches (d), relative to the estimated reflectivity.

The observations above led us to use an incremental frequency inversion framework where the model of the object’s permittivity is sequentially updated as higher frequencies are included in the inversion. Given a measured wavefield containing frequency components indexed in increasing order from 1 to , our framework iteratively estimates the model from low to high-frequency while keeping the low-frequency cost function as a regularizer for high-frequency inversions.


Therefore, instead of solving a single nonconvex minimization problem in (7), we solve subproblems sequentially according to (8), where the sequence of solutions moves us closer to the global minimizer of (7).
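The sequential update above can be sketched as a short loop, with `solve_subproblem` standing in (as an assumed interface) for the constrained prox-QN solver described later:

```python
def incremental_inversion(freqs, data, solve_subproblem, chi0):
    """Sequentially solve the subproblems in (8): stage k keeps ALL
    frequencies 1..k in the cost (not a sliding batch) and warm-starts
    from the previous stage's estimate."""
    chi = chi0
    for k in range(1, len(freqs) + 1):
        chi = solve_subproblem(freqs[:k], data[:k], chi_init=chi)
    return chi
```

Keeping the full low-frequency prefix in every stage is what distinguishes this from the sliding-batch scheme of Figure 3(c).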

III-B Regularization

In this section we provide details on the total variation norm and the non-negativity constraints we use to regularize the problem, as well as their implementation through a proximal operator.

III-B1 Total-variation

The Total-Variation (TV) norm of a compactly supported function is formally defined as

where denotes the set of continuously differentiable functions of compact support contained in . This norm measures the total change in the function over a finite domain [rudin1992nonlinear]. If is differentiable, then we can express the total-variation using an integral

where denotes the norm or Manhattan norm. As a result, regularization with a TV norm promotes piecewise constant approximation of the true model [unser2017splines]. In a discrete two dimensional setting, the TV-norm is represented as


Here, the finite-difference operators act along the x and y directions and are combined with identity operators to form the two-dimensional difference operator. We adopt the TV regularization in its constrained form [esser2018total], such that,


where the first term is an indicator function for the constraint set defined by the constraint parameter. The second line in (10) expresses the discretized version of the TV regularization function using a constrained ℓ1-ball. We note here that the proximal operator of the indicator function of a set corresponds to the projection of a vector onto that set. Efficient algorithms exist for the projection onto the ℓ1-norm ball (see, for example, [condat2016fast]).
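A sketch of the discrete anisotropic TV and of a sort-based ℓ1-ball projection (the method of Duchi et al.); the forward-difference discretization below is one common choice and may differ in boundary handling from the operator used in (9):

```python
import numpy as np

def tv_norm(x):
    """Anisotropic discrete TV: sum of absolute forward differences in x and y."""
    return np.abs(np.diff(x, axis=1)).sum() + np.abs(np.diff(x, axis=0)).sum()

def project_l1_ball(v, tau):
    """Euclidean projection of a vector v onto {z : ||z||_1 <= tau}."""
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]             # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - tau) / k > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1)     # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

For a piecewise-constant image, the TV value counts only the jumps across region boundaries, which is why the constraint favors blocky reconstructions.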

III-B2 Non-negative Constraints

Since the contrast function is defined as the relative permittivity of an object (with respect to vacuum), it is always non-negative. Hence, we impose this prior information using the regularization

where the regularizer is the indicator function of the non-negative orthant. The proximal operator of this function corresponds to the projection of a vector onto the non-negative orthant. Specifically, the projection operator is

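This projection is simply elementwise clipping; a one-line sketch:

```python
import numpy as np

def project_nonneg(v):
    """Projection onto the non-negative orthant: set negative entries to zero."""
    return np.maximum(v, 0.0)
```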
III-B3 Implementation

To impose the combined non-negativity and TV constraints, we define the proximal operator:


The proximal operator becomes a projection onto the intersection of two sets: the TV-norm ball and the non-negative orthant. Although there is no simple analytical expression for this proximal operator, it can be evaluated efficiently using various splitting methods, e.g., the alternating direction method of multipliers (ADMM) [boyd2011distributed] or a primal-dual method [chambolle2011first]. Here, we use the primal-dual method, which we derive for the sum of three convex functions in Appendix C. Algorithm 1 describes the primal-dual method for solving (11).


Algorithm 1 Proximal for Non-negative + Total-Variation (primal-dual iterations)

III-C Proximal Quasi-Newton Method

To solve each subproblem in (8), i.e.,


we propose a proximal Quasi-Newton (prox-QN) method. For simplicity of illustration, we enumerate the steps in Algorithm 2, but provide a complete derivation in Appendix D.

Algorithm 2 Prox-QN method for solving (12)
3: for each outer iteration do
4:     compute the gradient (via the adjoint-state method).
5:     compute the approximate Hessian (via L-BFGS).
6:     solve for the search direction from equation (D.5)(a).
7:     define the candidate iterate from the search direction.
10:    check the optimality conditions.
11: end for

The algorithm consists of two loops. The inner loop, implicit in step 6 and described in Appendix D, finds the search direction, while the outer loop computes the next iterate from the computed search direction and step length. At every iteration of the outer loop, we compute the gradient using an adjoint-state method and form the approximate Hessian with the L-BFGS procedure. The procedure for computing the gradient is explained in Appendix B. Once we have the gradient and the approximate Hessian at the current iterate, we compute the search direction using the primal-dual method (see (D.6)). Next, we find a feasible step length using a backtracking line search. Finally, we compute the next iterate from the optimal search direction and the feasible step length.
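A simplified sketch of this outer loop, with the L-BFGS metric replaced by the identity (i.e., a proximal-gradient step with backtracking, not the paper's full prox-QN method); `prox` stands in for the non-negative + TV projection of Section III-B, and all names are illustrative:

```python
import numpy as np

def prox_quasi_newton_sketch(f_grad, prox, x0, step=1.0, iters=100, tol=1e-8):
    """Outer loop of Algorithm 2 with an identity Hessian approximation."""
    x = x0
    for _ in range(iters):
        f, g = f_grad(x)                 # cost and gradient (adjoint state in the paper)
        t = step
        while True:                      # backtracking line search
            x_new = prox(x - t * g)      # proximal (projected) trial step
            f_new, _ = f_grad(x_new)
            dx = x_new - x
            if f_new <= f + g @ dx + np.sum(dx ** 2) / (2 * t):
                break
            t *= 0.5
        if np.linalg.norm(x_new - x) < tol:   # optimality check (step stagnation)
            return x_new
        x = x_new
    return x
```

Swapping the identity for an L-BFGS metric turns the prox step into the scaled-proximal subproblem that the paper solves with the primal-dual method.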

IV Estimating the constraint parameter

Recall that for each subproblem (12) in our proposed framework, we solve a TV-constrained nonlinear least-squares problem where the constraint parameter bounds the total variation of the solution. Naturally, the choice of the constraint parameter significantly affects the reconstruction performance.

In order to estimate the constraint parameter for each new subproblem, we develop a parameter estimation routine inspired by the approach in [BergFriedlander:2011] for sparse optimization with linear least-squares constraints. Suppose that we have an initial estimate obtained at the previous subproblem, for which the TV-norm constraint holds; specifically,


where the misfit is as defined in (6) and the constraints are satisfied for all previous frequencies. At the next subproblem, the cost of the newly added frequency enters the objective function, resulting in the potentially unsatisfied constraint


To overcome the nonconvexity of the objective function due to (14), we linearize the objective function around the previous estimate, thus reducing it to a convex least-squares cost function, i.e.,

where is the data mismatch cost function defined in (6). Consequently, we may now define a value function for the subproblem as


where is the data residual at the frequency, and is the vector formed by concatenating all the vectors , such that, . The function is defined as , with being the transposed pseudo-inverse of the finite difference operator defined in (9). Note that (15) shows the primal and dual problems for computing the value function .

The dual problem in (15) conveniently shows that the maximum is achieved when the residual is at its minimum. Moreover, the gradient of the value function with respect to the constraint parameter is easily computed. Therefore, we can compute the update for the constraint parameter using a Newton root-finding step, such that,


where the bound is the upper bound on the norm of the noise up to the current frequency bin. Finally, we note that at the zeroth iteration, the parameter can be set to zero, resulting in a homogeneous solution.
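The update in (16) reduces to a single scalar Newton step toward the root of v(τ) = σ; a hedged sketch, where `v` and `v_grad` are assumed callables for the value function and its derivative:

```python
def update_tau(tau, v, v_grad, sigma):
    """One Newton step toward the root of v(tau) = sigma.
    v_grad(tau) is the derivative dv/dtau and must be nonzero."""
    return tau - (v(tau) - sigma) / v_grad(tau)
```

For an affine value function the step lands exactly on the root; in general it is repeated once per subproblem as new frequencies are added.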

V Numerical Experiments

In this section, we describe the experimental setup for reflection tomography, evaluate our method on two numerical phantoms, and compare it with two other approaches. We also experiment with a partially non-inverse-crime dataset in Section V-E.

(a) Setup (b) Phantom 1 (c) Phantom 2 (d) Phantom 3
Fig. 4: (a) Reflection tomography setup for all the numerical experiments. The dotted region denotes the object domain . The transmitters and receivers are collocated at  m. (b), (c), (d) are the three numerical phantoms used for the experimentation.

V-a Experimental details

We consider the experimental setup illustrated in Figure 4(a). The domain is 1 m × 1 m and extends in the x-direction from  m to  m and in the y-direction from  m to  m. There are a total of five transmitters and receivers located on the line  m. Each transmitter emits a flat-spectrum pulse occupying the frequency band  MHz. All 5 receivers are activated for each transmitter. We consider three frequency bands: (i) a low-frequency band of  MHz, (ii) a medium-frequency band of  MHz, and (iii) a high-frequency band of  MHz. In total, we consider 47 frequencies between 10 MHz and 2000 MHz.

We work with the three phantoms shown in Figure 4(b)-(d). All phantoms have a length of 1 m in both the x and y directions. Phantom 1 is a Shepp-Logan phantom, which resembles the brain and is well known in the image processing and tomography communities. We discretize it on a uniform grid, and it has a total of four contrast values. Phantom 2 resembles an underground scene. It has a layered structure in the background whose contrast ranges from 0.1 to 0.5. A square-type hole (of contrast 0) is embedded in a rhombus-type structure with a contrast of 1. This phantom has the same resolution as Phantom 1. We use these low-resolution phantoms to compare our method with existing methods and to check the robustness against noise.

Phantom 3 is a high-resolution phantom depicting another underground scene. It contains three horizontal layers. The phantom also contains two circular pipes of outer diameter 0.4 m and 0.24 m, with a thickness of 0.6 m and 0.5 m, respectively. The large pipe has an inner region filled with a high-contrast material of permittivity 1, while the small pipe has a vacuum inside. We use this phantom to perform a partially non-inverse-crime test, as described in Section V-E.

V-B Comparison with other methods

We restrict ourselves to two classical methods. For a fair comparison, we modify these methods to add the prescribed regularization. We do not compare with linearized methods such as the Born and Rytov approximations, as these have been shown to fail for high-contrast imaging [ma2018accelerated].


CISOR: The CISOR algorithm aims to solve (5) by taking all frequencies at once [ma2018accelerated]. As opposed to TV-norm penalization, we use the proposed regularization, i.e., non-negativity and a total-variation constraint with a known constraint value. The TV constraint parameter is set to the total variation of the true model. The problem is solved using the prox-QN method with a maximum of 5000 iterations, or until convergence (norm of the gradient below a set tolerance).


RL: The recursive linearization (RL) method was introduced in [chen1997inverse] and has been a standard approach for multi-frequency data. The method enjoys the computational benefit of handling a single constraint (i.e., solving a single linear system of equations) at a time, but may suffer in the high-contrast regime. It solves the sequence of problems

with the initial guess for each subproblem being the solution of the previous one. We modify the cost function to include the proposed regularization. Similarly to CISOR, we use non-negativity and a TV constraint with a known constraint value. Each subproblem is solved using the prox-QN method with a maximum of 500 iterations.


This method corresponds to the proposed sequential framework with a known constraint value. It solves the problem described in (8). We use the prox-QN method to solve each subproblem, with a maximum of 500 iterations or until convergence.


This method corresponds to the proposed sequential framework with estimation of the constraint parameter at each stage. It solves the problem described in (8) with the estimate from (16). Here, we assume that the noise level is known. We use the prox-QN method to solve each subproblem, with a maximum of 500 iterations or until convergence.

For all methods, the initial model corresponds to a contrast of 0 everywhere.

V-C Performance Measures

We use the following measures to evaluate the performance of the proposed methods and to compare with other methods.


The data residual (DR) measures the distance, in the Euclidean sense, between the data modeled from the reconstructed model and the actual data. For multi-frequency data, the DR takes the following form

where is the reconstructed solution and denotes the Frobenius norm. The DR must be close to the noise level for a method to be considered good.


The signal-to-noise ratio (SNR) of the reconstructed model with respect to the ground truth is

A reconstruction is considered good if it has high SNR. This measure is only available if we know the ground truth.
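The two measures can be sketched as follows, assuming the standard decibel definition of SNR and a Frobenius-norm residual; `data_residual` and `snr_db` are illustrative names, not the paper's notation:

```python
import numpy as np

def data_residual(d_pred, d_obs):
    """Euclidean (Frobenius) distance between modeled and measured data."""
    return np.linalg.norm(d_pred - d_obs)

def snr_db(x_rec, x_true):
    """SNR of a reconstruction in dB, relative to the ground truth."""
    return 20.0 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x_rec - x_true))
```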

V-D Exact-model experiments

To evaluate our approach, we perform both noise-free and noisy experiments in which the exact model is known. In the noise-free experiments, we compare our methods with the two existing methods (CISOR and RL). In the noisy ones, we examine only the robustness of our methods against various levels of noise.

V-D1 Noise-free experiment

We consider Phantoms 1 and 2 for this experiment. We produce three versions of each phantom by scaling them to a maximum contrast () of , which we refer to as the low-, medium-, and high-contrast phantoms, respectively. For the simulations, we use the reflection tomography setup illustrated in Figure 4(a) with noiseless data. We examine the performance of the methods SF- and SF-, and compare them with the CISOR and RL methods. Figures 5 and 6 show the reconstructions for various contrast levels for Phantoms 1 and 2, respectively. We see that SF- consistently performs well except in the case of = 100 for Phantom 1, where all the methods fail. The reason for this failure is that Phantom 1 is suited to transmission or full-view tomography rather than reflection tomography. For an underground scene (depicted by Phantom 2), the proposed methods perform well with reflection tomography. We tabulate the values of the performance measures in Table I and conclude that SF- and SF- outperform the existing methods (CISOR and RL).


Fig. 5: Comparison of proposed methods with other methods on Phantom 1.

Fig. 6: Comparison of proposed methods with other methods on Phantom 2.
              Phantom 1                       Phantom 2
Contrast 1    DR     0.87  45.89  0.74  2.36     0.32  29.16   0.05  0.06
              SNR   14.73   3.87 15.12  9.19    27.63   8.84  42.79 18.22
Contrast 10   DR    28.08  75.27  8.78 11.13   945.16 260.43   3.77 24.75
              SNR    2.17   1.94  3.83  4.47     0.16  11.08  47.07 18.00
Contrast 100  DR   295.41  97.12  2.72  4.95   344.59  69.02   5.76  1.52
              SNR    0.27   1.15  2.60  3.08     0.11  10.72  17.18 14.42
TABLE I: Comparison of methods on Phantom 1 and Phantom 2.

V-D2 Noisy experiment

We consider Phantoms 1 and 2 with the scaling . We add Gaussian noise of relative energy , and , and examine the performance of SF- and SF- at these noise levels. Figure 7 shows the reconstructions using these methods for relative noise energy and various contrast values. The performance measures are tabulated in Table II. We observe that SF- and SF- are robust against high noise for the low-contrast phantoms. SF- is also stable for moderate levels of noise in the high-contrast regime.
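One common reading of noise with a given relative energy is Gaussian noise rescaled so that its energy is a fixed fraction of the data energy; the sketch below makes that assumption explicit (`add_relative_noise` is an illustrative name):

```python
import numpy as np

def add_relative_noise(d, rel_energy, rng=None):
    """Add Gaussian noise n scaled so that ||n||^2 / ||d||^2 = rel_energy.
    This is one reading of 'relative energy'; an assumption here."""
    rng = np.random.default_rng(rng)
    n = rng.standard_normal(d.shape)
    n *= np.sqrt(rel_energy) * np.linalg.norm(d) / np.linalg.norm(n)
    return d + n
```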


Fig. 7: Noise robustness of proposed methods on Phantom 1 and Phantom 2 with noise
              Phantom 1                    Phantom 2
Contrast 1    DR    9.02 18.45 15.46 25.54    9.61 19.34  9.62 20.20
              SNR  10.61  8.29  6.54  5.71   27.05 22.44 15.63 12.44
Contrast 10   DR   13.39 24.15 16.53 21.80   20.48 16.99 19.77 25.37
              SNR   3.46  2.25  2.89  3.19   26.51 24.28 17.26 11.03
Contrast 100  DR   11.44 20.60 10.73 20.70   11.51 19.86  9.95 20.09
              SNR   2.10  1.83  2.88  2.67   15.29 11.53 14.57 10.94
TABLE II: Noise robustness of SF- and SF-.

V-E Inexact-model experiment

To verify the robustness of our approach against model mismatch, we consider Phantom 3, which has a resolution of , and use an inexact model for the reconstruction. In particular, we first discretize the model on a high-resolution grid of size , using the nearest-neighbor algorithm for the rescaling, and generate the data on this high-resolution grid. As a sanity check, we compare the data for the high-resolution and low-resolution models and find that the relative difference is less than . We test the SF- and SF- methods with this high-resolution dataset. We assume a noise level of for SF-, while we set to the TV value of the ground truth (low-resolution model) for SF-. The reconstruction results for SF- and SF- are presented in Figure 8. SF- achieves a DR of 0.35 and an SNR of 20.84, while SF- achieves a DR of 11.52 and an SNR of 14.61. We observe that SF- reconstructs the ground scene accurately: the top and bottom regions of the pipes are retrieved with high precision. However, the method struggles to reconstruct the left and right sides of these pipes, as shown in Figure 8(a). SF- locates the high-contrast and low-contrast objects in the pipes but fails to recover the pipe boundaries accurately. Figures 8(b) and (d) plot the TV values of the models generated during the iterative process. Starting from a model with a low TV value, SF- generates a sequence of models in which each model has a TV value greater than or equal to that of the previous one. This does not hold for the SF- method: it generates models with low TV values for the low-frequency batches, while it overshoots the true TV value during the high-frequency batches (corresponding to high iteration numbers).

(a) SF- (b) TV (c) SF- (d) TV
Fig. 8: (a) and (c) show the reconstruction results for SF- and SF-, respectively. (b) and (d) show the evolution of the TV value over the generated sequence of iterates for SF- and SF-. The red dotted line indicates the TV value of the ground truth.

VI Conclusions

We consider limited-angle reflection tomography of high-contrast objects and show that the tomography problem is severely ill-posed due to the absence of low-frequency content and the multiple scattering of waves. To find a feasible solution to this ill-posed problem, we develop a regularized multiscale approach. We pose the imaging problem as a constrained nonlinear least-squares problem. The cost function includes wave-based modeling that accounts for multiple scattering, together with non-negativity and total-variation constraints. The total cost function is decomposed according to frequency, and we observe that the low frequencies promote smoothness while the higher frequencies add details to the reconstruction. Hence, we solve a sequence of subproblems, where each subproblem has a constrained cost function measured over the first frequencies. We propose a proximal Quasi-Newton method to solve the resulting constrained problem, with the underlying proximal operations performed using a primal-dual approach. We also propose an automatic strategy to update the TV-constraint parameter based on the noise level in the data. Through numerical experiments, we demonstrate that our methods outperform the existing methods and are robust against moderate noise. The proposed techniques can retrieve high-contrast objects (contrast up to 100) for underground-like scenes.

Appendix A Scattering Formalism

Consider the scattering setup illustrated in Figure 1. The scene (free space with permittivity ) has dimension . The transmitter domain sends a pulse with source function , which generates an incident wavefield everywhere. This incident wavefield interacts with an object in the domain and generates a total wavefield . The scattered wavefield is then measured in the receiver domain .

The total wavefield is a superposition of an incident field and a scattered field ,


The incident wavefield is the field in the absence of the scatterer, while the scattered field accounts for the presence of the object. The incident wavefield satisfies the Helmholtz equation

where denotes the wavenumber. It is convenient to consider the above equation separately inside and outside the object domain :


Similarly, the total wavefield satisfies the Helmholtz equation, which we can express inside and outside the domain as follows,


where is the permittivity of the object. Now, from the equations (A.1), (A.2) and (A.3), the governing equation for the scattered wavefield reads

These equations can be compactly written as


where is a contrast function equal to the difference between the permittivity, , inside the object domain and that outside. We supplement the scattered-wavefield equation (A.4) with the Sommerfeld radiation condition

where . Equation (A.4) can be converted to an equivalent integral equation by introducing the free space Green function. The free space Green function satisfies

together with the Sommerfeld radiation condition. Here, denotes the Dirac delta function. The explicit representation of the Green function reads

where , and is the zeroth-order Hankel function of the second kind. Hence, the integral representation of the incident wavefield is

and similarly, that of the scattered wavefield is

Noting that the scattered wavefield is the difference between the total wavefield and the incident wavefield (see (A.1)), and restricting our observations to the object domain , we arrive at the well-known Lippmann-Schwinger equation

The equation above describes the relation between the total wavefield and the contrast function inside the object domain . The scattered wavefield is then measured in the receiver domain , resulting in the following data equation:

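As a rough illustration of the Lippmann-Schwinger equation, the sketch below assembles a crude collocation discretization with the 2-D Green function G(r) = -(j/4) H0^(2)(k r). The sign and scaling of the Green function depend on the time convention and on how the delta source is defined, and the singularity handling is deliberately naive, so treat this as an assumption-laden sketch rather than the paper's discretization:

```python
import numpy as np
from scipy.special import hankel2

def lippmann_schwinger_solve(chi, u_inc, xs, k, area):
    """Solve a crude collocation discretization of the
    Lippmann-Schwinger equation  u = u_inc + k^2 * G[chi * u]
    on points xs (shape (n, 2)) with quadrature cell area `area`."""
    n = xs.shape[0]
    r = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=-1)
    G = -0.25j * hankel2(0, k * np.where(r > 0, r, 1.0))
    np.fill_diagonal(G, 0.0)            # crude handling of the singularity
    A = (k ** 2) * area * G             # discrete Green operator
    M = np.eye(n, dtype=complex) - A @ np.diag(chi)
    return np.linalg.solve(M, u_inc)
```

For a zero contrast the total field reduces to the incident field, which serves as a basic consistency check of the discretization.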
Appendix B Gradient Computation

In this section, we derive a gradient for an equality constrained cost function


where is a real-valued function and is a vector-valued function. We assume that both and are smooth and hence differentiable. For the constrained problem (B.1), the Lagrangian reads


where is the Lagrange multiplier corresponding to the constraints, and denotes the conjugate transpose of a vector with complex entries. The stationary point of the Lagrangian , denoted by , satisfies

The first condition gives rise to an adjoint equation


while the second condition is the state equation


The state equation generates a wavefield for a given parameter value . The adjoint equation computes the Lagrange multiplier (also called the adjoint wavefield) corresponding to the wavefield for a given . The gradient of is now retrieved from the partial derivative of the Lagrangian with respect to ,


This method is known as the adjoint-state method [plessix2006review].

Inverse Scattering Example

For an inverse scattering problem, represents the misfit between the simulated and measured wavefields, and is the Lippmann-Schwinger equation,

At a given value of , the adjoint system for the Lippmann-Schwinger equation is


Here, is the adjoint wavefield, and is obtained by satisfying the constraint at the given value of :


Once the forward wavefield and the adjoint wavefield are computed, the gradient is


Computing the gradient requires solving the forward system (B.7) and the adjoint system (B.6) only once each.
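The one-forward-solve, one-adjoint-solve structure can be illustrated on a simplified stand-in for the Lippmann-Schwinger system: a state equation (D + diag(m)) u = q with a least-squares misfit. The operator, the source q, and the names below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def adjoint_state_gradient(m, D, P, d):
    """Adjoint-state gradient for J(m) = 0.5*||P u - d||^2 with the
    state equation (D + diag(m)) u = q: one forward and one adjoint solve."""
    q = np.ones(D.shape[0], dtype=complex)
    A = D + np.diag(m)
    u = np.linalg.solve(A, q)                           # forward (state) solve
    r = P @ u - d                                       # data residual
    v = np.linalg.solve(A.conj().T, P.conj().T @ r)     # adjoint solve
    # dA/dm_i = e_i e_i^T, so the gradient is -Re(conj(v_i) * u_i)
    return -np.real(np.conj(v) * u)
```

A finite-difference check on the misfit confirms the gradient formula on this toy system.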

Appendix C Primal-Dual Method

We consider a class of optimization problems


where is a differentiable closed convex function, and and are closed, possibly non-differentiable, convex functions. We assume that the proximal operators of and are inexpensive to evaluate. denotes a structured matrix; for example, in TV regularization, represents a discrete gradient operator. The matrix may be non-invertible, as is the case in TV regularization. In this section, we derive a primal-dual algorithm to find an optimal solution to problem (C.1).

C-a Preliminaries

A set-valued operator , which maps points to sets, is monotone if

The operator , with , is called the resolvent of the operator , where is the identity operator. The value of the resolvent is the unique solution of the monotone inclusion . The resolvent of a monotone operator is non-expansive. An operator is non-expansive if

The proximal operator (also known as the prox-operator) of a closed convex function is the resolvent with , the subdifferential of . The prox-operator reads

and it maps to the unique solution of the optimization problem

A (convex) conjugate of a general function is

The conjugate of a function is always convex. The prox-operator of a function and that of its conjugate are related by the Moreau identity,

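For a concrete instance, the prox-operator of the l1-norm is soft-thresholding, and the Moreau identity then yields the prox of its conjugate (a projection onto the l-infinity ball) with no extra computation; the function names below are illustrative:

```python
import numpy as np

def prox_l1(x, t):
    """Prox of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_conj_via_moreau(x, t):
    """Prox of the conjugate via the Moreau identity:
    prox_{t g*}(x) = x - t * prox_{g/t}(x / t)."""
    return x - t * prox_l1(x / t, 1.0 / t)
```

Since the conjugate of the l1-norm is the indicator of the unit l-infinity ball, the second function reduces to clipping each component to [-1, 1], for any step t.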

C-B Fixed point method

A fixed point of an operator is a point such that . A fixed-point method finds such a point by generating a sequence of iterates of the form

for a given initial point . The iterates converge to one of the fixed points if is a non-expansive operator.
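A minimal sketch of such an iteration, using the cosine map (which is non-expansive on the reals, since its derivative is bounded by one in absolute value) as a toy operator:

```python
import math

def fixed_point(T, x0, iters=100):
    """Generic fixed-point iteration x_{k+1} = T(x_k)."""
    x = x0
    for _ in range(iters):
        x = T(x)
    return x

# cos is non-expansive on the reals, so the iterates converge
# to its unique fixed point (approximately 0.739).
x_star = fixed_point(math.cos, 1.0)
```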

Now recall that the resolvent of a monotone operator is non-expansive, i.e., . Moreover, the zeros of a monotone operator are exactly the fixed points of its resolvent. Hence, to find a zero of a monotone operator , the fixed-point iteration takes the form

A more efficient scheme for finding a zero of is a preconditioned fixed-point method, which generates the sequence

with as a symmetric positive-definite linear operator. This sequence can be simplified to


C-C Primal-Dual algorithm

To obtain a monotone operator for (C.1), we examine its first-order optimality condition. It states that the zero vector must lie in the subdifferential of the cost function, i.e.,


where and are the subdifferentials of and , respectively. Consider variables in the subdifferential of and in the subdifferential of ,


The equations in (C.5) can be restated as follows.


where and are the convex conjugates of and , respectively. From equations (C.4) and (C.6), we can write the optimality conditions as the following system



Here, denotes the identity matrix. It is easy to show that the operator

in (C.7) is a monotone operator. Consider a preconditioner operator

with , the preconditioned fixed-point iteration scheme (C.3) yields the following primal-dual algorithm:

If the proximal operators of and are simple, then each iteration can be computed efficiently.
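As a concrete instance of such a primal-dual scheme, the sketch below applies Chambolle-Pock-style iterations to a 1-D model problem with a quadratic data term and a TV-type regularizer. The forward-difference matrix D stands in for the structured matrix in the text, and the step sizes are chosen to satisfy the standard condition tau * sigma * ||D||^2 <= 1; all names are illustrative:

```python
import numpy as np

def pdhg_tv_denoise(b, lam, iters=500, tau=0.4, sigma=0.4):
    """Primal-dual (Chambolle-Pock) iterations for the model problem
        min_x 0.5*||x - b||^2 + lam*||D x||_1,
    where D is the forward-difference operator."""
    n = b.size
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference matrix
    x = np.zeros(n); x_bar = x.copy()
    y = np.zeros(n - 1)
    for _ in range(iters):
        # dual step: prox of the conjugate of lam*||.||_1 is a clip
        y = np.clip(y + sigma * D @ x_bar, -lam, lam)
        # primal step: prox of the quadratic data term in closed form
        x_new = (x - tau * D.T @ y + tau * b) / (1 + tau)
        x_bar = 2 * x_new - x                # over-relaxation
        x = x_new
    return x
```

On a constant signal the TV term is inactive, so the iterates should recover the data exactly, which makes a simple sanity check.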

Appendix D Proximal Quasi-Newton method

In this section, we discuss the Quasi-Newton (QN) method and its proximal version (prox-QN). Assuming that the cost function is twice differentiable, QN aims to solve


by generating a sequence based on a quadratic approximation to the function at each iterate. The procedure is as follows:


Here, is (an approximation of) the Hessian of at . This method differs from the Newton method in that the former relies on an approximation while the latter computes the exact Hessian. If is convex, the QN method converges to a global minimum; if is non-convex, it can only guarantee convergence to a local optimum.
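A minimal BFGS-style sketch of the QN iteration, with a fixed step size in place of a line search (an illustration of the Hessian-approximation idea, not the paper's prox-QN algorithm):

```python
import numpy as np

def quasi_newton(grad, x0, iters=100, step=0.5):
    """QN iteration x_{k+1} = x_k - step * B_k grad(x_k), where B_k is an
    inverse-Hessian approximation built from gradient differences (BFGS)."""
    x = x0.astype(float)
    B = np.eye(x.size)                 # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        x_new = x - step * B @ g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature condition; skip update otherwise
            I = np.eye(x.size)
            V = I - np.outer(s, y) / sy
            B = V @ B @ V.T + np.outer(s, s) / sy
        x, g = x_new, g_new
    return x
```

On a convex quadratic, the gradient differences recover the true curvature and the iterates converge to the exact minimizer.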

We are interested in adapting the QN method to solve problems of the form