# Bayesian approach for inverse obstacle scattering with Poisson data

We consider an acoustic obstacle reconstruction problem with Poisson data. Due to the stochastic nature of the data, we tackle the problem in the framework of Bayesian inversion. The unknown obstacle is parameterized in its angular form. The prior for the parameterized unknown plays a key role in the Bayesian reconstruction algorithm. The most popular choice is the Gaussian prior. On top of a Gaussian prior, we further assume that the unknown satisfies a total variation prior. With this hybrid prior, the well-posedness of the posterior distribution is discussed. Numerical examples verify the effectiveness of the proposed algorithm.


## 1 Introduction

Inverse obstacle scattering problems occur in many real-world applications such as radar, sonar and non-destructive testing [2]. One main objective of this kind of problem is to find the shape and location of the unknown obstacle, which usually cannot be observed directly. Due to their wide applications, these problems have attracted many researchers' attention. In particular, a large number of papers emphasize designing effective numerical algorithms to recover the unknown information from related observed data [1, 2]. The observed data are inevitably contaminated by noise. The Gaussian process has many theoretical and practical advantages since it can mimic the effect of many random processes, and therefore a lot of noise models can be characterized by Gaussians. However, in many applications the response variable of interest is a count, i.e., it takes on nonnegative integer values. Count data are well modeled by a Poisson process. The Poisson process is usually used to describe the occurrences of events and the time points at which they occur in a given time interval, such as the occurrence of natural disasters or the arrival times of customers at a service center. In the obstacle scattering problem, the counts of photons at a remote closed surface are detected. As seen in [3, 9, 10], the counts of photons that have interacted with the unknown object can be well modeled by a Poisson process. In view of this, we consider using Poisson data to recover the unknown shape.

The stochastic nature of the data undoubtedly results in uncertainty in the reconstruction. Therefore, besides the reconstruction of the unknown object itself, its uncertainty requires extra attention. Probabilistic thinking provides a natural way to quantify this uncertainty, and the Bayesian inference method has recently become a popular tool for this purpose [7]. The main idea of the Bayesian approach is to translate the prior knowledge on the noise and on the unknowns into prior probability laws, and then use the forward model to transmit the prior information to the posterior distribution, from which we can deduce any information about the unknowns. A rigorous Bayesian framework has been developed for inverse problems with infinite-dimensional unknowns [8]. From the existing works, we can see that the prior for the unknowns is crucial in the Bayesian approach and even determines whether or not it works. A difficult subject is how to encode the empirical prior information in some distribution formula. It is well known that, subject to certain mild conditions, the sum of a set of random variables has a distribution that becomes increasingly Gaussian as the number of terms in the sum increases. This allows us to use Gaussians to model the prior for many real physical parameters. In this paper, we characterize the unknown by two kinds of prior knowledge: the Gaussian and the hybrid priors. With these priors, we discuss the well-posedness of the posterior distribution with Poisson data.

The remainder of this paper is organized as follows: In Section 2, we describe the exterior acoustic scattering problem. In Section 3, the Bayesian approach and the well-posedness are discussed. Finally, some numerical examples are given.

## 2 Acoustic obstacle scattering with Poisson data

For a perfect cylindrical conductor with smooth cross section $D$, the scattering of polarized, transverse magnetic time harmonic electromagnetic waves is described by the Helmholtz equation

$$\Delta u + k^2 u = 0 \quad \text{in } \mathbb{R}^2 \setminus \overline{D}, \tag{2.1}$$

where $k > 0$ is the wave number, $u = u^i + u^s$ the total field, $u^s$ the scattered field and $u^i$ the incident wave. We let $u^i(x) = e^{ikx \cdot d}$, where $d$ is the incident direction. The following boundary conditions are imposed:

$$u\big|_{\partial D} = 0, \tag{2.2}$$

$$\lim_{r \to \infty} \sqrt{r} \left( \frac{\partial u^s}{\partial r} - i k u^s \right) = 0, \quad r = |x|. \tag{2.3}$$

The condition (2.2) corresponds to the sound-soft boundary condition and (2.3) is the so-called Sommerfeld radiation condition. The Sommerfeld radiation condition characterizes the outgoing wave and yields the following asymptotic behaviour for the scattered wave $u^s$:

$$u^s(x) = \frac{e^{ik|x|}}{|x|^{1/2}} \left\{ u_\infty(\hat{x}) + O\!\left( \frac{1}{|x|} \right) \right\} \quad \text{as } |x| \to \infty, \tag{2.4}$$

where $\hat{x} = x/|x|$ and $u_\infty$ is the far field pattern. The direct scattering problem is to find the scattered field $u^s$ for a given incident field $u^i$ and obstacle $D$. It is known that there exists a unique solution if $\partial D$ is Lipschitz continuous [2].

We focus on the reconstruction of the unknown domain from observed data: the photon counts of the scattered electromagnetic field far away from the obstacle. Since at large distances the photon density is approximately proportional to $|u_\infty|^2$ [9], the obstacle reconstruction problem can be described by the operator equation

$$F(D) = \tau |u_\infty|^2, \tag{2.5}$$

where $\tau > 0$ is a constant corresponding to the noise level. We suppose that the domain $D$ is star-like, i.e., the boundary $\partial D$ is parameterized by

$$\partial D = \left\{ q(t) \, [\cos t, \sin t]^T : t \in [0, 2\pi) \right\}. \tag{2.6}$$

In this setting, the radial function $q$ becomes the unknown and therefore equation (2.5) is equivalent to

$$G(q) = \tau |u_\infty|^2. \tag{2.7}$$

The operator equation can be derived from single- and double-layer potential theory [2]. Let $\partial D$ be of class $C^2$. Define the single layer potential operator

$$(S\varphi)(x) = 2 \int_{\partial D} \Phi(x, \tilde{x}) \, \varphi(\tilde{x}) \, ds(\tilde{x}), \quad x \in \partial D, \tag{2.8}$$

and the double layer potential operator

$$(K\varphi)(x) = 2 \int_{\partial D} \frac{\partial \Phi(x, \tilde{x})}{\partial \nu(\tilde{x})} \, \varphi(\tilde{x}) \, ds(\tilde{x}), \quad x \in \partial D, \tag{2.9}$$

where $\Phi(x, \tilde{x}) = \frac{i}{4} H_0^{(1)}(k |x - \tilde{x}|)$ is the fundamental solution, $H_0^{(1)}$ is the Hankel function of the first kind of order zero and $\nu$ is the unit normal direction. It can be seen in [2] that $S$ and $K$ are bounded. With the single- and double-layer potentials, the scattered field can be written as

$$u^s(x) = \int_{\partial D} \left\{ \frac{\partial \Phi(x, \tilde{x})}{\partial \nu(\tilde{x})} - i\eta \, \Phi(x, \tilde{x}) \right\} \varphi(\tilde{x}) \, ds(\tilde{x}), \quad x \in \mathbb{R}^2 \setminus \overline{D}, \tag{2.10}$$

where $\eta \neq 0$ is a real coupling parameter and $\varphi$ the unknown density function. The main goal for the direct scattering problem is then to determine the unknown density function from the sound-soft boundary condition via

$$(I + K - i\eta S)\varphi = -2 u^i \quad \text{on } \partial D. \tag{2.11}$$

By (2.11), the far field pattern can further be written as

$$u_\infty(\hat{x}, d) = \frac{e^{-i\pi/4}}{\sqrt{8\pi k}} \int_{\partial D} \left( k \, \nu(\tilde{x}) \cdot \hat{x} + \eta \right) e^{-ik \hat{x} \cdot \tilde{x}} \, \varphi(\tilde{x}) \, ds(\tilde{x}). \tag{2.12}$$

Equations (2.11) and (2.12), together with (2.6), give the forward operator $G$.

## 3 Bayesian inversion

Denote $\lambda = G(q) = (\lambda_1, \dots, \lambda_m)^T$ and $y = (y_1, \dots, y_m)^T$. We consider the observation $y$ to be a realization of a Poisson point process [9] with parameters $\lambda$. The goal of statistical inversion is to explore the posterior distribution $\mu^y$, i.e., the distribution of $q$ conditional on the value of $y$. Bayes' formula shows that

$$\frac{d\mu^y}{d\mu_{pr}} \propto \pi(y|q), \tag{3.1}$$

where $\mu_{pr}$ is the prior distribution for the unknown and $\pi(y|q)$ the likelihood function.

The Poisson noise yields the likelihood function

$$\pi(y|q) = \prod_{j=1}^m \frac{\lambda_j^{y_j} e^{-\lambda_j}}{y_j!}. \tag{3.2}$$

Following [7, 8], the likelihood function can be written in the form

$$\pi(y|q) \propto \exp(-\Lambda(q, y)), \tag{3.3}$$

where

$$\Lambda(q, y) = \sum_{j=1}^m \left\{ (G(q))_j - y_j \log (G(q))_j \right\}. \tag{3.4}$$
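For a finite vector of intensities $\lambda = G(q)$, the misfit (3.4) can be evaluated directly; a minimal sketch (it drops the $q$-independent term $\log(y_j!)$ of the full negative log-likelihood, exactly as (3.4) does):

```python
import numpy as np

def poisson_misfit(lam, y):
    """Lambda(q, y) = sum_j lambda_j - y_j * log(lambda_j), cf. (3.4).

    lam : positive Poisson intensities (G(q))_j
    y   : observed counts y_j
    """
    lam = np.asarray(lam, dtype=float)
    y = np.asarray(y, dtype=float)
    if np.any(lam <= 0):
        raise ValueError("Poisson intensities must be strictly positive")
    return float(np.sum(lam - y * np.log(lam)))
```

For example, `poisson_misfit([2.0, 3.0], [1, 4])` equals $5 - \log 2 - 4\log 3$.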

The prior is given before acquiring the data. Since $q$ denotes a radial length, an additional restriction must be imposed: the unknown $q$ must be positive. For this purpose, we consider the following two parameterizations [1, 10]:

$$q(t) = \exp(r(t)), \tag{3.5}$$

$$q(t) = \frac{a}{2} \left( \mathrm{erf}(r(t)) + b \right), \tag{3.6}$$

where $a > 0$ and $b > 1$ are constants and $\mathrm{erf}$ is the error function

$$\mathrm{erf}(r(t)) = \frac{2}{\sqrt{\pi}} \int_0^{r(t)} \exp(-s^2) \, ds.$$

These two maps guarantee that $q$ is strictly positive. With these parameterizations, we now infer the new unknown parameter $r$. We still use $\mu_{pr}$ to denote the prior distribution for $r$. Without any confusion, the forward operator is still denoted by $G$. The prior needs to be chosen carefully and it is not unique. We give different priors for (3.5) and (3.6).
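Both positivity maps (3.5) and (3.6) are one-liners; a sketch using the standard library's error function (the values $a = 1$, $b = 2$ are illustrative assumptions; any $a > 0$, $b > 1$ works):

```python
import math

def q_exp(r):
    """Map (3.5): q = exp(r), strictly positive for every real r."""
    return math.exp(r)

def q_erf(r, a=1.0, b=2.0):
    """Map (3.6): q = (a/2)(erf(r) + b).

    Since erf takes values in (-1, 1), q stays in the positive band
    (a(b-1)/2, a(b+1)/2) whenever b > 1.
    """
    return 0.5 * a * (math.erf(r) + b)
```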

For (3.5), the prior is given by the Gaussian

$$\mu_{pr} = \mu_1 = N(0, C_1), \tag{3.7}$$

where $C_1$ is the covariance operator and should be chosen carefully. In [1], a convenient prior to use is

$$r(t) \sim N(0, A^{-s}), \quad s > 0, \tag{3.8}$$

where $A = -\dfrac{d^2}{dt^2}$ with the domain of definition

$$D(A) := \left\{ v \in H^2[0, 2\pi] : \int_0^{2\pi} v(t) \, dt = 0 \right\}. \tag{3.9}$$

Under this prior assumption, the samples can be verified to be almost surely continuous, i.e., the prior is supported on a space of continuous functions [1, 7]. With this prior, the well-posedness of the posterior distribution for the Gaussian noise case has been discussed in [1].
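A draw from the prior (3.7)–(3.8) can be sketched with a truncated Karhunen-Loève expansion over the eigensystem $\{n^2, \cos(nt), \sin(nt)\}$ of $A = -d^2/dt^2$; the $1/\sqrt{\pi}$ normalization below is an assumption, as conventions vary:

```python
import numpy as np

def sample_kl_prior(t, s=2.0, n_terms=50, rng=None):
    """Truncated Karhunen-Loeve draw of r ~ N(0, A^{-s}).

    A^{-s} has eigenvalues n^{-2s}, so the coefficient of each
    cos(nt)/sin(nt) mode has standard deviation n^{-s}.
    """
    rng = np.random.default_rng(rng)
    r = np.zeros_like(t)
    for n in range(1, n_terms + 1):
        xi, eta = rng.standard_normal(2)
        r += n ** (-s) * (xi * np.cos(n * t) + eta * np.sin(n * t)) / np.sqrt(np.pi)
    return r
```

Samples automatically satisfy the zero-mean constraint in (3.9), since every mode integrates to zero over $[0, 2\pi)$.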

For (3.6), the prior distribution is chosen as a Gaussian measure, denoted by $\mu_2 = N(0, C_2)$. To guarantee periodicity, we use the covariance function [5, 6]

$$C_2(s, t) = \exp\left( -\frac{2 \sin^2\!\left( \frac{t - s}{2} \right)}{l^2} \right), \tag{3.10}$$

where $l$ defines the characteristic length-scale. This covariance can be viewed as mapping the one-dimensional input variable $t$ to the two-dimensional point $u(t) = (\cos t, \sin t)$. The distance between $u(t)$ and $u(s)$ is given by

$$|u(t) - u(s)|^2 = (\cos t - \cos s)^2 + (\sin t - \sin s)^2 = 4 \sin^2 \frac{t - s}{2}.$$

The covariance kernel (3.10) can be obtained by taking the squared exponential kernel in this space:

$$C_2(u(t), u(s)) = \exp\left( -\frac{|u(t) - u(s)|^2}{2 l^2} \right).$$
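Sampling $r \sim N(0, C_2)$ with the periodic kernel (3.10) on a uniform grid can be sketched via a Cholesky factorization (the length-scale $l = 0.5$ and the diagonal jitter are illustrative assumptions, the jitter being added only for numerical stability):

```python
import numpy as np

def periodic_cov(s, t, ell=0.5):
    """Periodic covariance (3.10): C2(s, t) = exp(-2 sin^2((t-s)/2) / ell^2)."""
    return np.exp(-2.0 * np.sin(0.5 * (t - s)) ** 2 / ell ** 2)

def sample_periodic_gp(t, ell=0.5, jitter=1e-8, rng=None):
    """Draw r ~ N(0, C2) on the grid t; jitter keeps the Cholesky stable."""
    rng = np.random.default_rng(rng)
    K = periodic_cov(t[:, None], t[None, :], ell)
    L = np.linalg.cholesky(K + jitter * np.eye(len(t)))
    return L @ rng.standard_normal(len(t))
```

By construction $C_2(0, 2\pi) = C_2(0, 0) = 1$, so the draws are periodic.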

We apply a hybrid prior proposed in [10]. On the basis of the Gaussian prior $\mu_2$, we further assume that the unknown satisfies the total variation prior as in [10]:

$$\frac{d\mu_{TV}}{d\mu_2}(r) \propto \exp(-\|r\|_{TV}) := \exp(-R(r)), \tag{3.11}$$

where $\|\cdot\|_{TV}$ is the TV seminorm,

$$\|r\|_{TV} = \zeta \int_0^{2\pi} \left| \frac{dr}{dt} \right| dt \tag{3.12}$$

with a positive constant $\zeta$. It follows immediately that the posterior distribution has the form

$$\frac{d\mu^y}{d\mu_2} \propto \exp(-\Lambda(r, y) - R(r)) := \exp(-\Psi(r, y)). \tag{3.13}$$
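On a uniform periodic grid, the TV term (3.12) and the potential $\Psi$ of (3.13) can be approximated with finite differences; a sketch ($\zeta = 1$ and the discretization are illustrative choices):

```python
import numpy as np

def tv_seminorm(r, zeta=1.0):
    """Discrete version of (3.12): the |dr/dt| dt factors telescope to the
    absolute increments |r_{j+1} - r_j| on a periodic grid."""
    dr = np.diff(np.append(r, r[0]))  # periodic forward differences
    return float(zeta * np.sum(np.abs(dr)))

def potential(lam, y, r, zeta=1.0):
    """Psi(r, y) = Lambda(r, y) + R(r) as in (3.13), with lam = G(r)."""
    lam = np.asarray(lam, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sum(lam - y * np.log(lam))) + tv_seminorm(r, zeta)
```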

Next, we discuss the well-posedness of the posterior distribution with the hybrid prior. The well-posedness is characterized in the sense of the Hellinger metric, defined by

$$d_H(\mu^y, \mu^{\tilde{y}}) = \left( \frac{1}{2} \int \left( \sqrt{\frac{d\mu^y}{d\mu_2}} - \sqrt{\frac{d\mu^{\tilde{y}}}{d\mu_2}} \right)^2 d\mu_2 \right)^{1/2}.$$
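For intuition, the finite-dimensional analogue of this metric for two discrete probability vectors can be computed as:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between discrete probability vectors:
    d_H(p, q) = sqrt(0.5 * sum_i (sqrt(p_i) - sqrt(q_i))^2), in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))
```

It vanishes for identical distributions and equals 1 for distributions with disjoint supports.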

Here, we need the following Fernique theorem, which guarantees that certain functionals of Gaussian measures are bounded.

###### Theorem 3.1 (Fernique).

If $\mu$ is a Gaussian measure on a Banach space $X$ such that $\mu(X) = 1$, then there exists $\alpha > 0$ such that

$$\int_X \exp\left( \alpha \|r\|_X^2 \right) d\mu < \infty.$$
###### Lemma 3.2.

For a fixed incident wave, there exists $M > 0$ such that

$$|G(r)| \leq M \left( 1 + \|r\|_X \right) \tag{3.14}$$

for all $r \in X$.

###### Proof.

According to (2.6) and (2.12), we have

$$u_\infty(\hat{x}) = \frac{e^{-i\pi/4}}{\sqrt{8\pi k}} \int_0^{2\pi} \left( k \, \nu(t) \cdot \hat{x} + \eta \right) e^{-ik \hat{x} \cdot (q\cos t, \, q\sin t)^T} \, \tilde{\varphi}(t) \sqrt{q^2 + q'^2} \, dt,$$

where $\tilde{\varphi}(t) = \varphi(q(t)(\cos t, \sin t)^T)$. Hence it follows that

$$|u_\infty(\hat{x})|^2 \leq M_1 \left\| q^2 + q'^2 \right\|_\infty \leq M_1 \left( \|q\|_\infty^2 + \|q'\|_\infty^2 \right).$$

By (3.6), we get

$$\|q\|_\infty \leq \frac{a}{2} \left( \|\mathrm{erf}(r)\|_\infty + b \right) \leq \frac{a}{2} (1 + b),$$

$$\|q'\|_\infty \leq \frac{a}{\sqrt{\pi}} \left\| r' \exp(-r^2) \right\|_\infty \leq \frac{a}{\sqrt{\pi}} \|r'\|_\infty \leq M_2 \|r\|_X.$$

This completes the proof.

###### Lemma 3.3.

For every $s > 0$, there exist constants $I_1$ and $I_2$ such that, for all $r \in X$ and all $y$ with $|y| \leq s$,

$$I_2 \leq \Lambda(r, y) \leq I_1. \tag{3.15}$$
###### Proof.

For convenience, denote

$$\Lambda(r, y) = \langle \lambda, 1 \rangle - \langle y, \log \lambda \rangle,$$

where $\langle \cdot, \cdot \rangle$ is the inner product in $\mathbb{R}^m$. It is obvious from (3.6) that

$$\frac{a}{2}(b - 1) \leq q(t) \leq \frac{a}{2}(b + 1).$$

Therefore, the boundedness of the forward operator implies that

$$0 < \underline{\lambda} \leq \lambda \leq \overline{\lambda} \tag{3.16}$$

for two constant vectors $\underline{\lambda}$, $\overline{\lambda}$. It follows immediately that $|\log \lambda| \leq l_{\max}$ for a positive constant $l_{\max}$. Furthermore, for $|y| \leq s$ we obtain, by the Cauchy-Schwarz inequality,

$$\Lambda(r, y) \geq \langle \underline{\lambda}, 1 \rangle - |\log \lambda| \, |y| \geq \langle \underline{\lambda}, 1 \rangle - l_{\max} s := I_2,$$

$$\Lambda(r, y) \leq \langle \overline{\lambda}, 1 \rangle + |\log \lambda| \, |y| \leq \langle \overline{\lambda}, 1 \rangle + l_{\max} s := I_1.$$
###### Remark 1.

It should be noted that the lower bound $\underline{\lambda}$ in (3.16) may have zero components. In this case, we can shift the forward operator by

$$\tilde{G}(r) = G(r) + e,$$

where $e$ is a positive constant vector. Then we discuss the new problem of the form

$$\tilde{G}(r) = \tilde{\lambda} := \tau |u_\infty|^2 + e.$$

To avoid additional statements, we still use $G$ as the forward operator.

###### Theorem 3.4.

Assume that $\mu_2$ is the Gaussian measure with covariance function (3.10). For $y$, $\tilde{y}$ with $\max(|y|, |\tilde{y}|) \leq s$, there exists $M > 0$ such that

$$d_H(\mu^y, \mu^{\tilde{y}}) \leq M |y - \tilde{y}|. \tag{3.17}$$
###### Proof.

First, it should be pointed out that the posterior measure $\mu^y$ is absolutely continuous with respect to the prior measure $\mu_2$. This guarantees the existence of the Radon-Nikodym derivative $\frac{d\mu^y}{d\mu_2}$. In fact, for any measurable subset $X_1 \subset X$, it holds that

$$\mu^y(X_1) = \int_{X_1} \exp(-\Psi(r, y)) \, d\mu_2 \leq \int_{X_1} d\mu_2.$$

Therefore $\mu_2(X_1) = 0$ implies $\mu^y(X_1) = 0$. Define

$$A(y) = \int_X \exp(-\Psi(r, y)) \, d\mu_2.$$

It is obvious that $A(y) \leq 1$.

By Lemma 3.3, we have

$$\begin{aligned}
A(y) &\geq \int_{\|r\|_X \leq 1} \exp(-\Psi(r, y)) \, d\mu_2 \\
&= \int_{\|r\|_X \leq 1} \exp(-\Lambda(r, y) - R(r)) \, d\mu_2 \\
&= \int_{\|r\|_X \leq 1} \exp\left( -\sum_{j=1}^m \left\{ (G(r))_j - y_j \log (G(r))_j \right\} - \|r\|_{TV} \right) d\mu_2 \\
&\geq \int_{\|r\|_X \leq 1} \exp(-I_1) \exp(-\|r\|_{TV}) \, d\mu_2 \\
&\geq \exp(-I_1) \int_{\|r\|_X \leq 1} \exp\left( -\zeta \int_0^{2\pi} |r'(t)| \, dt \right) d\mu_2 \\
&\geq \exp(-I_1) \int_{\|r\|_X \leq 1} \exp(-2\pi \zeta \|r\|_X) \, d\mu_2 \\
&\geq \exp(-I_1) \exp(-2\pi \zeta) \int_{\|r\|_X \leq 1} d\mu_2.
\end{aligned}$$

Since $\mu_2$ is a Gaussian measure, the unit ball has positive measure. This shows that $A(y)$ is strictly positive. Together with the absolute continuity, this shows that the posterior distribution is well-defined. For the Hellinger metric it holds that

$$\begin{aligned}
d_H^2(\mu^y, \mu^{\tilde{y}}) &= \frac{1}{2} \int \left( \sqrt{\frac{d\mu^y}{d\mu_2}} - \sqrt{\frac{d\mu^{\tilde{y}}}{d\mu_2}} \right)^2 d\mu_2 \\
&\leq \int \left\{ \left( \frac{\exp(-\Psi(r, y))}{A(y)} \right)^{\frac12} - \left( \frac{\exp(-\Psi(r, \tilde{y}))}{A(y)} \right)^{\frac12} \right\}^2 d\mu_2 \\
&\quad + \int \left\{ \left( \frac{\exp(-\Psi(r, \tilde{y}))}{A(y)} \right)^{\frac12} - \left( \frac{\exp(-\Psi(r, \tilde{y}))}{A(\tilde{y})} \right)^{\frac12} \right\}^2 d\mu_2 \\
&= \frac{1}{A(y)} \int \left\{ \exp\left( -\frac{\Psi(r, y)}{2} \right) - \exp\left( -\frac{\Psi(r, \tilde{y})}{2} \right) \right\}^2 d\mu_2 \\
&\quad + \left| A(y)^{-\frac12} - A(\tilde{y})^{-\frac12} \right|^2 \int \exp(-\Psi(r, \tilde{y})) \, d\mu_2 \\
&:= \Sigma_1 + \Sigma_2.
\end{aligned} \tag{3.18}$$

For $\Sigma_1$, since $A(y)^{-1}$ is bounded, it suffices to estimate the integral part. According to Lemma 3.2 and the Fernique theorem, we have

$$\begin{aligned}
\int \left\{ \exp\left( -\frac{\Psi(r, y)}{2} \right) - \exp\left( -\frac{\Psi(r, \tilde{y})}{2} \right) \right\}^2 d\mu_2 &\leq \frac{1}{4} \int |\Psi(r, y) - \Psi(r, \tilde{y})|^2 \, d\mu_2 \\
&= \frac{1}{4} \int \left| \sum_{j=1}^m (\tilde{y}_j - y_j) (\log G(r))_j \right|^2 d\mu_2 \\
&\leq \frac{1}{4} |y - \tilde{y}|^2 \int |\log G(r)|^2 \, d\mu_2 \\
&\leq \frac{M^2}{4} |y - \tilde{y}|^2 \int \log^2(1 + \|r\|_X) \, d\mu_2 \leq \tilde{M} |y - \tilde{y}|^2.
\end{aligned} \tag{3.19}$$

For $\Sigma_2$, since $A(y)$ and $A(\tilde{y})$ are bounded from below, it follows that

$$\left| A(y)^{-\frac12} - A(\tilde{y})^{-\frac12} \right|^2 \leq M \max\left( A(y)^{-3}, A(\tilde{y})^{-3} \right) |A(y) - A(\tilde{y})|^2 \leq M |y - \tilde{y}|^2. \tag{3.20}$$

The estimation (3.17) can be obtained from (3.18), (3.19) and (3.20).

###### Remark 2.

The requirement $\mu_2(X) = 1$ in the Fernique theorem is guaranteed naturally since the exponential covariance function (3.10) is used. The case of $q$ with the form (3.5) is similar to (3.6): we just need to take the corresponding Gaussian prior $\mu_1$ and verify the analogous conditions.

## 4 Numerical test

In this section, we present some numerical examples to show the effectiveness of the Bayesian method for the inverse obstacle scattering problem with Poisson data. We use the MCMC method to generate samples to explore the posterior distribution in (3.1).

It is clear that the operator $A$ in (3.9) has the eigensystem $\{n^2, \cos(nt), \sin(nt)\}_{n \geq 1}$. With this, the prior for (3.5) can be generated by the Karhunen-Loève expansion; a truncated expansion is used to generate the prior samples, and the corresponding derivatives can be computed accurately. For (3.6), the samples for $r$ with the covariance (3.10) are periodic. We use the fast Fourier transform (FFT) to implement the numerical computation of the derivatives of the samples. The posterior samples for the two parameterizations are both generated by the Metropolis-Hastings algorithm.
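The FFT differentiation of the periodic samples mentioned above can be sketched as follows (assumes a uniform grid of $n$ points on $[0, 2\pi)$):

```python
import numpy as np

def fft_derivative(r):
    """Spectral derivative of a periodic sample on a uniform grid over
    [0, 2*pi): multiply each Fourier coefficient by i*k and invert."""
    n = len(r)
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers 0, 1, ..., -1
    return np.real(np.fft.ifft(1j * k * np.fft.fft(r)))
```

For band-limited samples such as $\sin(3t)$, the spectral derivative is exact up to round-off.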

We list only the hybrid prior case in the following. For (3.5) and its corresponding prior, the process is similar.

• Initialize $r_0$ and generate $q_0$ by (3.6).

• For iteration $i = 1, \dots, N$ do

  • Propose $\hat{r}$ and generate $\hat{q}$ by (3.6).

  • Compute the acceptance probability

$$\alpha(\hat{r}; r_{i-1}) = \min\left\{ 1, \, \frac{\exp(-\Psi(\hat{r}, y))}{\exp(-\Psi(r_{i-1}, y))} \right\}$$

  and draw $\upsilon \sim \mathrm{Uniform}(0, 1)$.

  • Accept or reject the proposal:

$$r_i = \begin{cases} \hat{r}, & \text{if } \upsilon \leq \alpha(\hat{r}; r_{i-1}), \\ r_{i-1}, & \text{otherwise}. \end{cases}$$

• endfor
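The listed steps amount to a random-walk Metropolis-Hastings loop; a generic sketch (the Gaussian random-walk proposal and its step size are illustrative assumptions, not the paper's exact proposal):

```python
import numpy as np

def metropolis_hastings(log_target, r0, n_iter=1000, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings: log_target(r) returns -Psi(r, y)
    up to an additive constant; returns the chain of samples."""
    rng = np.random.default_rng(rng)
    r = np.array(r0, dtype=float)
    logp = log_target(r)
    chain = []
    for _ in range(n_iter):
        r_hat = r + step * rng.standard_normal(r.shape)  # propose r_hat
        logp_hat = log_target(r_hat)
        # accept with probability alpha = min(1, exp(logp_hat - logp))
        if rng.uniform() <= np.exp(min(0.0, logp_hat - logp)):
            r, logp = r_hat, logp_hat
        chain.append(r.copy())
    return np.array(chain)
```

For a standard normal target, `log_target = lambda r: -0.5 * float(np.sum(r**2))`, the chain's sample mean and standard deviation approach 0 and 1.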

Consider the following obstacles:

$$\begin{aligned}
&1.\ \text{Peanut:} && q(t) = 3(\cos^2 t + 0.25 \sin^2 t), \quad x_1(t) = q(t) \cos t, \quad x_2(t) = q(t) \sin t; \\
&2.\ \text{Kite:} && (x_1, x_2) = (\cos t + 0.65 \cos 2t - 0.65, \; 1.5 \sin t); \\
&3.\ \text{Drop:} && x_1 = -1 + 2 \sin(t/2), \quad x_2 = -\sin t; \\
&4.\ \text{Cloverleaf:} && q(t) = 1 + 0.3 \cos(4t), \quad x_1(t) = q(t) \cos t, \quad x_2(t) = q(t) \sin t,
\end{aligned}$$

where $t \in [0, 2\pi)$.
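The four test boundaries can be sampled directly from their parameterizations:

```python
import numpy as np

def peanut(t):
    """Peanut: q(t) = 3(cos^2 t + 0.25 sin^2 t), polar form."""
    q = 3.0 * (np.cos(t) ** 2 + 0.25 * np.sin(t) ** 2)
    return q * np.cos(t), q * np.sin(t)

def kite(t):
    """Kite: (x1, x2) = (cos t + 0.65 cos 2t - 0.65, 1.5 sin t)."""
    return np.cos(t) + 0.65 * np.cos(2.0 * t) - 0.65, 1.5 * np.sin(t)

def drop(t):
    """Drop: x1 = -1 + 2 sin(t/2), x2 = -sin t."""
    return -1.0 + 2.0 * np.sin(0.5 * t), -np.sin(t)

def cloverleaf(t):
    """Cloverleaf: q(t) = 1 + 0.3 cos(4t), polar form."""
    q = 1.0 + 0.3 * np.cos(4.0 * t)
    return q * np.cos(t), q * np.sin(t)
```

Evaluating each on a uniform grid over $[0, 2\pi)$ yields the boundary curves used in the experiments.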

In Fig. 1, we plot the Poisson data. Using these data, we reconstruct the obstacles displayed in Figs. 2 and 3. The posterior sample means are used as the approximations. In Fig. 2, we also plot the confidence interval for the peanut shape.

## References

• [1] T. Bui-Thanh and O. Ghattas, An Analysis of Infinite Dimensional Bayesian Inverse Shape Acoustic Scattering and Its Numerical Approximation. SIAM/ASA Journal on Uncertainty Quantification 2, no. 1, 203-222, 2014.
• [2] D. Colton and R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory (Third Edition), Springer, New York, 2013.
• [3] T. Hohage and F. Werner, Inverse problems with Poisson data: statistical regularization theory, applications and algorithms, Inverse Problems 32 (2016) 093001 (56pp).
• [4] Z. Li, Z. Deng and J. Sun, Limited aperture inverse scattering problems using Bayesian approach and extended sampling method, in preparation.
• [5] C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning, The MIT Press, Cambridge, Massachusetts, 2006.

• [6] A. Solin and S. Särkkä, Explicit link between periodic covariance functions and state space models,
• [7] A. Stuart, Uncertainty Quantification in Bayesian Inversion, https://homepages.warwick.ac.uk/~masdr/TALKS/stuartICM.pdf.
• [8] A. Stuart, Inverse problems: a Bayesian perspective, Acta Numerica, 19: 451-559, 2010.
• [9] F. Werner, Inverse problems with Poisson data: Tikhonov-type regularization and iteratively regularized Newton methods, Dissertation, Göttingen, 2011.
• [10] Q. Zhou, X. Zhang and J. Li, Bayesian inference and uncertainty quantification for image reconstruction with Poisson data,