 # An ensemble Kalman filter approach based on level set parameterization for acoustic source identification using multiple frequency information

The spatially dependent unknown acoustic source is reconstructed from noisy multiple frequency data on a remote closed surface. We assume that the unknown function is supported on a bounded domain. To determine the support, we present a statistical inversion algorithm that combines the ensemble Kalman filter approach with the level set technique. Several numerical examples show that the proposed method gives good numerical reconstructions.


## 1 Introduction

Unknown source problems occur widely in many real applications, e.g., remote sensing, radar detection, brain imaging, etc. In this paper, we consider a reconstruction problem of acoustic sources from remote measurements of the acoustic field. For an acoustic source with density $f$ supported on a region $D$, the pressure of the radiated time-harmonic wave can be modeled by

$$(\Delta + k^2)\,u(x,k) = S(k,x),\quad x\in\mathbb{R}^d, \tag{1.1}$$
$$\lim_{r\to\infty} r^{\frac{d-1}{2}}\Big\{\frac{\partial u}{\partial r} - iku\Big\} = 0,\quad r=|x|, \tag{1.2}$$

where $k$ is the wave number, $\omega$ is the radial frequency, $c$ is the speed of sound ($k=\omega/c$), $d$ is the spatial dimension and (1.2) is the Sommerfeld radiation condition. Here we discuss the case $d=2$, and the source function is separable with respect to space and frequency, i.e.,

$$S(k,x) = A(k)\,f(x),\quad \operatorname{supp} f\subset D. \tag{1.3}$$

In real settings, the source is usually unknown and needs to be determined from observed data, and the receiver devices are generally located on a remote closed surface. We can collect the values of $u$ on this surface for many frequencies or wave numbers. For $d=2$, we have the potential representation of the acoustic pressure

$$u(k,x) = \int_D S(k,y)\,H^{(1)}_0(k|x-y|)\,dy = A(k)\int_D f(y)\,H^{(1)}_0(k|x-y|)\,dy, \tag{1.4}$$

where $H^{(1)}_0$ is the cylindrical Hankel function of the first kind and order zero. The data are collected at finitely many points $x_j$, $j=1,\dots,N$, for finitely many wave numbers $k_m$, $m=1,\dots,M$. In Figure 1, the measurement points are shown as circles on the boundary of the square. By virtue of (1.4), we can write

$$H_{k_m}f + \eta_m = [u(x_j,k_m)]_{j=1}^{N} := b_m \quad\text{for } m=1,2,\cdots,M, \tag{1.5}$$

where $H_{k_m}$ depends on the wave number $k_m$ and $\eta_m$ is the noise. Denote $b=(b_1^T,\dots,b_M^T)^T$ and $\eta=(\eta_1^T,\dots,\eta_M^T)^T$. Further assume that the function $f$ is known a priori to have the form

$$f(x) = \sum_{l=1}^{n} w_l\, I_{D_l}(x); \tag{1.6}$$

here $I_{D_l}$ denotes the indicator function of the subset $D_l$; the $D_l$ are pairwise disjoint subsets of $D$, and the $w_l$ are known positive constants. In this setting, the regions $D_l$ determine the unknown function $f$ and therefore become the primary unknowns. This problem has been discussed in [2, 13], from which we can see that the source can be determined uniquely from the multiple frequency data. Many successful approaches have been proposed for the reconstruction of the acoustic source [3, 14, 15, 26, 32, 35, 41, 43]. The main techniques in these works include iteration and sampling algorithms, and the ill-posedness is usually addressed through regularization.
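The parameterization (1.6) is easy to realize on a grid: the source is a weighted sum of indicator functions. The following minimal sketch builds such a piecewise-constant source; the two disk-shaped subdomains and the weights are illustrative assumptions, not the configuration used in the experiments.

```python
import numpy as np

# Piecewise-constant source f(x) = sum_l w_l * I_{D_l}(x) on a Cartesian grid.
# The disks D_1, D_2 and weights w_1, w_2 are purely illustrative.
x = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, x)

D1 = (X - 0.4) ** 2 + Y ** 2 < 0.3 ** 2       # indicator of D_1
D2 = (X + 0.4) ** 2 + Y ** 2 < 0.2 ** 2       # indicator of D_2 (disjoint from D_1)
w = [2.0, 1.0]                                 # known positive weights

f = w[0] * D1.astype(float) + w[1] * D2.astype(float)
# f takes only the values {0, w_1, w_2} since the subdomains do not overlap
```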

For this kind of geometric shape problem, another very powerful tool is the level set method, which was originally introduced by Osher and Sethian [30]. It was devised as a simple and versatile method for computing and analyzing the motion of an interface. In this method, the interface is supposed to move in the normal direction with some speed, and a partial differential equation, in particular a Hamilton-Jacobi equation, for the level set function is established. Due to its efficiency, it has become one of the most popular tools in many disciplines, such as image processing, computer graphics and computational geometry. Besides its computational efficiency, a crucial advantage of the level set approach is that it allows topological changes during the course of the algorithm, which is impossible with classical methods based on curve parameterizations. The level set method has also been widely studied for shape reconstruction inverse problems [1, 6, 10, 20, 37].

Recently, the statistical perspective has been widely applied to inverse problems [21, 36, 39]. In this framework, the unknown, the noise and the data are all viewed as random variables, and the core issue for inverse problems in the statistical framework is how to characterize the posterior distribution, e.g., how to evaluate posterior moments or other posterior expectations. Based on Bayes' formula, plenty of algorithms have been proposed to characterize the posterior distribution. The most widely used classes of approaches include Markov chain Monte Carlo (MCMC) [36, 40], maximum a posteriori (MAP) estimation and the ensemble Kalman filter (EnKF). These methods have been applied to many inverse problems, e.g., electrical impedance tomography [12, 22, 23, 34], Darcy flow [5, 19], fluid mechanics [9, 17], optical flow [4] and inverse scattering [7]. A natural idea for solving the interface reconstruction problem is to combine the statistical perspective with the level set approach. To the authors' knowledge, studies in this direction are rare. Since the level set mapping is not continuous, it is difficult to establish the well-posedness of the Bayesian level set posterior distribution using the theory in [36]. To obtain well-posedness, a Gaussian prior on the level set function is imposed in [16], where almost sure continuity of the forward operator with respect to the level set function is shown and the posterior well-posedness then follows. In [11], a hierarchical level set Bayesian algorithm is proposed for problems with unknown prior parameters. In [8], an EnKF coupled with the level set method is considered and applied to the EIT problem. One can refer to [27, 28, 31, 38] for further real applications, e.g., oil reservoir history matching and image segmentation.

## 2 Level set approach

In this section, we sketch the level set approach for inverse shape reconstruction problems.

According to [10, 11, 16], we can characterize the regions $D_l$ in the following way:

$$D_l = \{x\in D \mid c_{l-1}\le \varphi(x) < c_l\}, \tag{2.1}$$

where $\varphi$ is called the level set function and $c_0 < c_1 < \cdots < c_n$ are constants. Using this characterization and the assumption (1.6), we can define the level set map

$$G(\varphi)(x) = f(x) = \sum_{l=1}^{n} w_l\, I_{D_l}(x). \tag{2.2}$$

It is clear that the mapping $G$ is not an injection, since many different level set functions determine the same shape. However, every given function $\varphi$ uniquely specifies a function $f$. Combining (2.2) and (1.5) gives

$$H_{k_m}\circ G(\varphi) = [u(k_m,x_j)]_{j=1}^{N} \quad\text{for } m=1,2,\cdots,M. \tag{2.3}$$

This can be rewritten in the compact form

$$K(\varphi) + \eta = b, \tag{2.4}$$

where $K := (H_{k_1}\circ G,\dots,H_{k_M}\circ G)$ and $b$ is an $NM$-dimensional column vector.
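A discrete version of the level set map $G$ in (2.2) amounts to thresholding the level set function at the constants $c_l$ and assigning the weights $w_l$. The sketch below is a minimal implementation; the particular thresholds and weights are assumptions for illustration.

```python
import numpy as np

def level_set_map(phi, levels, weights):
    """Discrete version of the level set map G in (2.2):
    f = w_l on D_l = {c_{l-1} <= phi < c_l}."""
    f = np.zeros_like(phi)
    for l, w in enumerate(weights):
        f[(phi >= levels[l]) & (phi < levels[l + 1])] = w
    return f

phi = np.array([-0.7, -0.2, 0.1, 0.6])
levels = [-np.inf, 0.0, np.inf]      # single threshold c_1 = 0: binary source
f = level_set_map(phi, levels, [0.0, 1.0])
print(f)  # -> [0. 0. 1. 1.]
```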

Introduce a time variable $t$ and define

$$\partial D(t) = \{x : \varphi(x,t) = 0\}. \tag{2.5}$$

The dynamic evolution of the level set function is given by the convection equation [6, 37]

$$\frac{\partial\varphi}{\partial t} = -\mathbf{v}\cdot\nabla\varphi = -\mathbf{v}\cdot\frac{\nabla\varphi}{|\nabla\varphi|}\,|\nabla\varphi| := -v\,|\nabla\varphi|, \tag{2.6}$$

which reflects that $\varphi$ varies along the direction of the negative gradient with normal velocity $v$. We take the velocity as

$$v = J(f)^T(b - Hf), \tag{2.7}$$

where $J(f)$ is the Jacobian of the forward map at $f$. For an initial guess $\varphi_0$, we have the following initial value problem for the Hamilton-Jacobi system:

$$\frac{\partial\varphi}{\partial t} = J(f)^T(Hf - b)\,|\nabla\varphi|, \tag{2.8}$$
$$\varphi(x,0) = \varphi_0. \tag{2.9}$$

An explicit discretization of the above system yields

$$\varphi^{n+1} = M(\varphi^n) \tag{2.10}$$

with initial guess $\varphi^0 = \varphi_0$.
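One possible explicit map $M$ in (2.10) is a single forward Euler step of (2.8), with $|\nabla\varphi|$ approximated by finite differences. The sketch below treats the scalar speed $J(f)^T(Hf-b)$ as a given field; the grid, time step and speed value are illustrative assumptions.

```python
import numpy as np

def hj_euler_step(phi, speed, dt, h):
    """One explicit Euler step of d(phi)/dt = speed * |grad phi|,
    cf. (2.8)-(2.10); 'speed' plays the role of J(f)^T (H f - b)."""
    gx, gy = np.gradient(phi, h)
    return phi + dt * speed * np.sqrt(gx ** 2 + gy ** 2)

# toy check: for a linear phi, |grad phi| = 1, so phi moves by dt * speed
h = 0.1
x = np.arange(0.0, 1.0 + h / 2, h)
phi = np.add.outer(x, np.zeros_like(x))   # phi(x, y) = x, so |grad phi| = 1
phi1 = hj_euler_step(phi, speed=2.0, dt=0.05, h=h)
```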

## 3 EnKF for the level set algorithm

We now treat problem (2.4) in the framework of statistical inversion, which views all involved quantities as random variables and explores the posterior distribution of $\varphi$ given $b$. We restate the inverse problem as a state identification problem: the system evolves by the dynamics (2.10) with an unknown initial state. We assume the initial state obeys a Gaussian distribution $\mu_0$ with prescribed prior mean and prior covariance operator. Observations $b_n$ are collected at the discrete times of the dynamics. Thereby, we have the following state identification problem:

$$\varphi_{n+1} = M(\varphi_n),\quad n=0,1,\cdots,N_t, \tag{3.1}$$
$$\varphi_0 \sim \mu_0, \tag{3.2}$$
$$K(\varphi_{n+1}) + \eta_{n+1} = b_{n+1},\quad n=0,1,\cdots,N_t, \tag{3.3}$$

where the $\eta_n$ are i.i.d. Gaussian noise with noise covariance matrix $\Gamma$. We focus on the posterior distribution $\mu_{0\mid 1:N_t}$, the distribution of $\varphi_0$ given $b_{1:N_t}$. Bayes' rule characterizes $\mu_{0\mid 1:N_t}$ via the ratio of its density with respect to that of the prior:

$$\frac{d\mu_{0\mid 1:N_t}}{d\mu_0}(\varphi_0) = \frac{d\mu(b_{1:N_t}\mid\varphi_0)}{d\mu(b_{1:N_t})}. \tag{3.4}$$

Therefore, the posterior density satisfies

$$\pi_{0\mid 1:N_t} \propto \pi(b_{1:N_t}\mid\varphi_0)\,\pi_0. \tag{3.5}$$

From the filtering point of view, we are concerned with obtaining the posterior distribution of the current state, i.e., the distribution of the random variable $\varphi_n$ given $b_{1:n}$. Suppose that we have the present system state together with a prior covariance. The covariance operators characterize the model uncertainty and are design parameters. With this information as the prior for the next state, we estimate the new state by blending the observed data into the dynamics. However, there is in general no easily usable closed-form expression for the density of the filtering distribution. We therefore break the update into a two-step process: a prediction step and an analysis step. In the prediction step, we use the present state to predict the new state according to the dynamics as

$$\hat{\varphi}_{n+1\mid n} = M(\varphi_n). \tag{3.6}$$

Using the prediction together with the design covariance, we form the prior distribution $\hat{\mu}_{n+1\mid n}$ and further obtain the analysis distribution according to the observational process (3.3):

$$\frac{d\mu_{n+1\mid n+1}}{d\hat{\mu}_{n+1\mid n}}(\varphi) \propto \exp\Big(-\frac{1}{2}\,|K(\varphi)-b_{n+1}|^2_{\Gamma}\Big). \tag{3.7}$$

When the design covariance is fixed, this filter corresponds to the 3DVar process. Naturally, we can also estimate the prior covariance from samples. To characterize the posterior distribution, the ensemble Kalman filter (EnKF) approach represents the filtering distribution through an ensemble of particles. A prediction is computed for each particle; the prediction sample mean is then taken as the prior mean, and the covariance of the prediction samples is taken as the prior covariance. In detail, suppose that at the $n$-th step we have samples $\varphi_n^{(j)}$, $j=1:J$, where $J$ is the number of samples. For each particle, we have a prediction

$$\hat{\varphi}^{(j)}_{n+1} = M(\varphi^{(j)}_n),\quad j=1:J. \tag{3.8}$$

Define the empirical covariance as

$$\hat{\Xi}_n = \frac{1}{J}\sum_{j=1}^{J}\big(\hat{\varphi}^{(j)}_{n+1}-\bar{\varphi}_{n+1}\big)\otimes\big(\hat{\varphi}^{(j)}_{n+1}-\bar{\varphi}_{n+1}\big), \tag{3.9}$$

where $\bar{\varphi}_{n+1}$ denotes the ensemble mean. The analysis state for each particle is then obtained by minimizing the following functional:

$$J(\varphi) = \frac{1}{2}\int_{\Omega}\big(\varphi-\hat{\varphi}^{(j)}_{n+1}\big)\,\hat{\Xi}_n^{-1}\big(\varphi-\hat{\varphi}^{(j)}_{n+1}\big)\,dx + \frac{1}{2}\,|K(\varphi)-b_{n+1}|^2_{\Gamma},\quad j=1:J. \tag{3.10}$$

Due to the nonlinearity of the observational operator in (3.10), the numerical implementation of the minimization becomes difficult. To overcome this, iterative EnKF approaches have been proposed in [17, 18, 42]. Define the state vector $\varsigma = (\varphi, f)^T$. We modify the prediction process as

$$\hat{\varsigma}_{n+1} = \Theta(\varsigma_n) \tag{3.11}$$

by

$$\hat{\varphi}_{n+1} = M(\varphi_n),\qquad \hat{f}_{n+1} = G(\hat{\varphi}_{n+1}). \tag{3.12}$$

Accordingly, the observation operator is given by $O\varsigma = Hf$. Formally, the minimization functional for the state vector has the form

$$J(\varsigma) = \frac{1}{2}\,\|\varsigma-\hat{\varsigma}_{n+1}\|^2_{\hat{\Xi}^{\varsigma}_n} + \frac{1}{2}\,|O\varsigma-b_{n+1}|^2_{\Gamma}, \tag{3.13}$$

where $\hat{\Xi}^{\varsigma}_n$ is the prior covariance of $\varsigma$. The minimizer is given by

$$\varsigma = \big((\hat{\Xi}^{\varsigma}_n)^{-1} + O^{*}\Gamma^{-1}O\big)^{-1}\big((\hat{\Xi}^{\varsigma}_n)^{-1}\hat{\varsigma}_{n+1} + O^{*}\Gamma^{-1}b_{n+1}\big) = \big(I + \hat{\Xi}^{\varsigma}_n O^{*}\Gamma^{-1}O\big)^{-1}\big(\hat{\varsigma}_{n+1} + \hat{\Xi}^{\varsigma}_n O^{*}\Gamma^{-1}b_{n+1}\big). \tag{3.14}$$
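The second equality in (3.14) follows by multiplying the normal equations through by the prior covariance. It can be verified numerically on random matrices; the dimensions below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3

A = rng.standard_normal((n, n))
Xi = A @ A.T + np.eye(n)                 # SPD prior covariance
O = rng.standard_normal((m, n))          # observation operator
Gam = 0.1 * np.eye(m)                    # noise covariance
s_hat = rng.standard_normal(n)           # prediction
b = rng.standard_normal(m)               # data

Gi = np.linalg.inv(Gam)
# first form: (Xi^-1 + O^T G^-1 O)^-1 (Xi^-1 s_hat + O^T G^-1 b)
lhs = np.linalg.solve(np.linalg.inv(Xi) + O.T @ Gi @ O,
                      np.linalg.inv(Xi) @ s_hat + O.T @ Gi @ b)
# second form: (I + Xi O^T G^-1 O)^-1 (s_hat + Xi O^T G^-1 b)
rhs = np.linalg.solve(np.eye(n) + Xi @ O.T @ Gi @ O,
                      s_hat + Xi @ O.T @ Gi @ b)
```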

Write the prior covariance matrix as

$$\hat{\Xi}^{\varsigma}_n = \begin{pmatrix} \hat{\Xi}^{\varphi\varphi}_n & \hat{\Xi}^{\varphi f}_n \\ \hat{\Xi}^{f\varphi}_n & \hat{\Xi}^{ff}_n \end{pmatrix}. \tag{3.15}$$

In component form, (3.14) reads

$$\varphi = \hat{\varphi}_{n+1} + \hat{\Xi}^{\varphi f}_n H^{*}\big(H\hat{\Xi}^{ff}_n H^{*} + \Gamma\big)^{-1}\big(b_{n+1} - H\hat{f}_{n+1}\big), \tag{3.16}$$
$$f = \big(I + \hat{\Xi}^{ff}_n H^{*}\Gamma^{-1}H\big)^{-1}\big(\hat{f}_{n+1} + \hat{\Xi}^{ff}_n H^{*}\Gamma^{-1}b_{n+1}\big). \tag{3.17}$$

We list this algorithm here.

Algorithm 1:

• Initial particles: Let $\varphi^{(j)}_0$, $j=1:J$, be the initial samples drawn from the prior distribution $\mu_0$, and obtain the initial samples of $f$ according to $f^{(j)}_0 = G(\varphi^{(j)}_0)$.

• Prediction: Evaluate

$$\hat{\varphi}^{(j)}_n = M(\varphi^{(j)}_{n-1}),\quad j=1:J, \tag{3.18}$$
$$\hat{f}^{(j)}_n = G(\hat{\varphi}^{(j)}_n),\quad j=1:J. \tag{3.19}$$

Compute the ensemble means $\bar{\varphi}_n$ and $\bar{f}_n$.

• Analysis: Define $\hat{\Xi}^{\varphi f}_n$ and $\hat{\Xi}^{ff}_n$ by

$$\hat{\Xi}^{\varphi f}_n = \frac{1}{J}\sum_{j=1}^{J}\big(\hat{\varphi}^{(j)}_n-\bar{\varphi}_n\big)\otimes\big(\hat{f}^{(j)}_n-\bar{f}_n\big), \tag{3.20}$$
$$\hat{\Xi}^{ff}_n = \frac{1}{J}\sum_{j=1}^{J}\big(\hat{f}^{(j)}_n-\bar{f}_n\big)\otimes\big(\hat{f}^{(j)}_n-\bar{f}_n\big). \tag{3.21}$$

Update each particle via (3.16) and (3.17). Compute the mean of the parameter update

$$\varphi_n = \frac{1}{J}\sum_{j=1}^{J}\varphi^{(j)}_n, \tag{3.22}$$
$$f_n = \frac{1}{J}\sum_{j=1}^{J}f^{(j)}_n, \tag{3.23}$$

and moreover check the stopping rule.

Alternatively, the state vector can be defined as $\varsigma = (\varphi, K(\varphi))^T$, with observation operator $O\varsigma = K(\varphi)$. In this setting, the EnKF approach reads as follows.

Algorithm 2:

1. Initial particles: Let $\varphi^{(j)}_0$, $j=1:J$, be the initial samples drawn from the prior distribution $\mu_0$.

2. Prediction: Evaluate

$$\hat{\varphi}^{(j)}_n = M(\varphi^{(j)}_{n-1}),\quad j=1:J, \tag{3.24}$$
$$\hat{b}^{(j)}_n = K(\hat{\varphi}^{(j)}_n). \tag{3.25}$$

Compute the ensemble means $\bar{\varphi}_n$ and $\bar{b}_n$.

3. Analysis: Define $\hat{\Xi}^{\varphi b}_n$ and $\hat{\Xi}^{bb}_n$ by

$$\hat{\Xi}^{\varphi b}_n = \frac{1}{J}\sum_{j=1}^{J}\big(\hat{\varphi}^{(j)}_n-\bar{\varphi}_n\big)\otimes\big(\hat{b}^{(j)}_n-\bar{b}_n\big), \tag{3.26}$$
$$\hat{\Xi}^{bb}_n = \frac{1}{J}\sum_{j=1}^{J}\big(\hat{b}^{(j)}_n-\bar{b}_n\big)\otimes\big(\hat{b}^{(j)}_n-\bar{b}_n\big). \tag{3.27}$$

Update each particle by

$$\varphi^{(j)}_n = \hat{\varphi}^{(j)}_n + \hat{\Xi}^{\varphi b}_n\big(\hat{\Xi}^{bb}_n+\Gamma\big)^{-1}\big(b_n-\hat{b}^{(j)}_n\big). \tag{3.28}$$

Compute the mean of the parameter update

$$\varphi_n = \frac{1}{J}\sum_{j=1}^{J}\varphi^{(j)}_n \tag{3.29}$$

and check the stopping rule.
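The update loop of Algorithm 2 can be sketched for a toy problem with trivial dynamics $M(\varphi)=\varphi$ and a *linear* stand-in observation operator $K(\varphi)=H\varphi$, so the behaviour is easy to check; all dimensions, the matrix $H$ and the noise level below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, J = 10, 4, 200                      # state size, data size, ensemble size

H = rng.standard_normal((m, n))           # linear stand-in for K
phi_true = rng.standard_normal(n)
Gam = 1e-4 * np.eye(m)                    # noise covariance Gamma
b = H @ phi_true + 1e-2 * rng.standard_normal(m)

ens = rng.standard_normal((J, n))         # initial particles from N(0, I)
for _ in range(20):
    preds = ens @ H.T                     # prediction step (M is the identity)
    dphi = ens - ens.mean(axis=0)
    db = preds - preds.mean(axis=0)
    C_pb = dphi.T @ db / J                # empirical Xi^{phi b}, cf. (3.26)
    C_bb = db.T @ db / J                  # empirical Xi^{b b},  cf. (3.27)
    gain = C_pb @ np.linalg.inv(C_bb + Gam)
    ens = ens + (b - preds) @ gain.T      # analysis update (3.28), all particles

phi_est = ens.mean(axis=0)                # ensemble-mean estimate, cf. (3.29)
```

In this linear setting the data misfit of the ensemble mean shrinks rapidly, since the residual is multiplied by roughly $\Gamma(\hat{\Xi}^{bb}+\Gamma)^{-1}$ at each iteration.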

## 4 Prior distribution

From the above description, we see that the prior distribution plays a key role in statistical inversion. In this paper, we choose Whittle-Matérn fields as the prior distribution. They are stationary Gaussian random fields with the autocorrelation function

$$C(x) = \frac{2^{1-\nu}}{\Gamma(\nu)}\Big(\frac{|x|}{l}\Big)^{\nu} K_{\nu}\Big(\frac{|x|}{l}\Big),\quad x\in\mathbb{R}^d, \tag{4.1}$$

where $\nu$ is the smoothness parameter, $K_{\nu}$ is the modified Bessel function of the second kind of order $\nu$, and $l$ is the length-scale parameter. The value of $\nu$ determines the mean square differentiability of the underlying process, which matters for predictions made with such a model. This distribution is discussed by many authors [25, 33, 34]. A method for explicit and computationally efficient continuous Markov representations of Gaussian Matérn fields is derived in [25]. The method is based on the fact that a Gaussian Matérn field on $\mathbb{R}^d$ can be viewed as a solution to the stochastic partial differential equation (SPDE)

$$(I - l^2\Delta)^{(\nu+d/2)/2}\,\varphi = \sqrt{\alpha l^2}\,W, \tag{4.2}$$

where $W$ is Gaussian white noise and the constant $\alpha$ is

$$\alpha := \sigma^2\,\frac{2^d\pi^{d/2}\,\Gamma(\nu+d/2)}{\Gamma(\nu)}.$$

Here $\sigma^2$ is the variance of the stationary field. The operator $(I-l^2\Delta)^{(\nu+d/2)/2}$ is a pseudo-differential operator defined via its Fourier transform. When $(\nu+d/2)/2$ is an integer, we can solve (4.2) by the finite element method with suitable boundary conditions. The stochastic weak solution of the SPDE (4.2) is found by requiring that

$$\big(\psi,\,(I-l^2\Delta)^{(\nu+d/2)/2}\varphi\big) = \big(\psi,\,\sqrt{\alpha l^2}\,W\big) \tag{4.3}$$

for all test functions $\psi$. We use linear elements to solve this weak formulation with a homogeneous Dirichlet boundary condition.
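For a quick feel for what a Whittle-Matérn prior sample looks like, one can bypass the SPDE/FEM route in the special case $\nu=1/2$, where (4.1) reduces to the exponential kernel $C(x)=\exp(-|x|/l)$, and draw a sample by dense Cholesky factorization of the covariance matrix. The 1-D grid, the length scale $l=0.2$ and the unit variance are illustrative assumptions.

```python
import numpy as np

# For nu = 1/2 the Matern covariance (4.1) reduces to C(x) = exp(-|x| / l).
n, l = 200, 0.2
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / l)   # covariance matrix
Lc = np.linalg.cholesky(C + 1e-10 * np.eye(n))     # small jitter for stability

rng = np.random.default_rng(2)
phi = Lc @ rng.standard_normal(n)                  # one sample phi ~ N(0, C)
```

The Cholesky route costs $O(n^3)$ and is only practical on small grids, which is precisely why the SPDE/FEM representation above is attractive in higher dimensions.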

## 5 Numerical test

In the numerical tests, we consider multiple frequencies ranging from the Hz to the kHz scale. Figure 1 displays the setting of our problem; the data are taken at the circle dots on the boundary. The prior samples are generated in a disk domain, and we assume that the support of the source lies in this disk. The disk is decomposed into triangular elements, and the potential integral (1.4) is calculated by Gaussian quadrature on each triangle. Moreover, we solve (4.2) on the same mesh using linear finite elements. To avoid an inverse crime as far as possible, we use a finer mesh in the inversion than in the data generation. The time derivative in the Hamilton-Jacobi system is discretized by the forward Euler scheme with a fixed step size, and the sample size $J$, the number of iteration steps and the prior parameters are fixed across all experiments.

We display the numerical reconstructions of $f$ for several different cases. In Figures 2, 3 and 4, we give the numerical results for a single domain, two separate domains and a Taichi graph, respectively. For each case, the data noise levels are taken as $\delta=0.1$ and $\delta=0.01$. Note that when applying Algorithm 1, either (3.22) or (3.23) can be used to generate the estimate of the unknown source. We compare these two estimates with Algorithm 2; whichever approximation is used, we obtain similar numerical results in all examples. In addition, the reconstruction improves for the smaller noise level, as expected.

Figure 2: Single domain: Line 1: True solution f; Lines 2-4: The reconstructions of f for δ=0.1; Lines 5-7: The reconstructions of f for δ=0.01. Lines 2 and 5 give the approximation using Algorithm 2. Lines 3 and 6 show the reconstruction using Algorithm 1 via (3.22). Lines 4 and 7 are the numerical results by (3.23).

Figure 3: Separate domains: Line 1: True solution f; Lines 2-4: The reconstructions of f for δ=0.1; Lines 5-7: The reconstructions of f for δ=0.01. Lines 2 and 5 give the approximation using Algorithm 2. Lines 3 and 6 show the reconstruction using Algorithm 1 via (3.22). Lines 4 and 7 are the numerical results by (3.23).

Figure 4: Taichi graph: Line 1: True solution f; Lines 2-4: The reconstructions of f for δ=0.1; Lines 5-7: The reconstructions of f for δ=0.01. Lines 2 and 5 give the approximation using Algorithm 2. Lines 3 and 6 show the reconstruction using Algorithm 1 via (3.22). Lines 4 and 7 are the numerical results by (3.23).

## References

•  A. Aghasi, M. Kilmer, and E. Miller, Parametric level set methods for inverse problems, SIAM J. Imaging Sciences, 4(2), (2011), 618-650.
•  A. Alzaalig, G. Hu, X. Liu and J. Sun, Fast acoustic source imaging using multi-frequency sparse data, arXiv:1712.02654.
•  G. Bao, S. Lu, W. Rundell and B. Xu, A recursive algorithm for multi-frequency acoustic inverse source problems, SIAM J. Numer. Anal., 53 (2015), 1608-1628.
•  D. Béréziat and I. Herlin, Solving ill-posed image processing problems using data assimilation, Numer Algor (2011) 56: 219–252.
•  A. Beskos, A. Jasra, E. Muzaffer, A. Stuart, Sequential Monte Carlo methods for Bayesian elliptic inverse problems, Stat Comput, 2015, DOI 10.1007/s11222-015-9556-7.
•  M. Burger, A level set method for inverse problems, Inverse problems, 17(5): 1327-1355, 2001.
•  T. Bui-Thanh and O. Ghattas, An analysis of infinite dimensional Bayesian inverse shape acoustic scattering and its numerical approximation, SIAM/ASA Journal on Uncertainty Quantification, 2, 2014, 203-222.
•  N. Chada, M. Iglesias, L. Roininen and A. Stuart, Parameterizations for ensemble Kalman inversion, Inverse Problems 34 (2018) 055009 (31pp).
•  S. Cotter, M. Dashti, J. Robinson and A.M. Stuart, Bayesian inverse problems for functions and applications to fluid mechanics. Inverse Problems 25, 2009, 115008.
•  O. Dorn and D. Lesselier, Level set methods for inverse scattering, Inverse Problems 22 (2006) R67–R131.
•  M. Dunlop, M. Iglesias and A. Stuart, Hierarchical Bayesian level set inversion, Statistics and Computing, November 2017, Volume 27, Issue 6, 1555-1584.
•  M. Dunlop and A. Stuart, The Bayesian formulation of EIT: analysis and algorithms, Inverse Problems and Imaging, 10(4), 2016, 1007–1036.
•  M. Eller and N. Valdivia, Acoustic source identification using multiple frequency information, Inverse Problems 25, 115005(20pp), 2009.
•  R. Griesmaier, Multi-frequency orthogonality sampling for inverse obstacle scattering problems, Inverse Problems, 27 (2011), 085005.
•  R. Griesmaier and C. Schmiedecke, A factorization method for multi-frequency inverse source problems with sparse far field measurements, SIAM J. Imag. Sci., (2017), to appear.
•  M. Iglesias, Y. Lu and A. Stuart, A Bayesian Level Set Method for Geometric Inverse Problems, Interfaces and Free Boundaries 18 (2016), 181-217.
•  M. Iglesias, K. Law and A. Stuart, Ensemble Kalman methods for inverse problems, Inverse Problems 29 (2013) 045001 (20pp).
•  M. Iglesias, A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems, Inverse Problems 32 (2016) 025002 (45pp).
•  M. Iglesias, K. Lin, A. Stuart, Well-posed Bayesian geometric inverse problems arising in subsurface flow, Inverse Problems, 30 2014, 114001.
•  K. Ito, K. Kunisch and Z. Li, Level-set function approach to an inverse interface problem, Inverse Problems 17 (2001) 1225–1242.
•  J. Kaipio, E. Somersalo, Statistical and Computational Inverse Problems, Springer Verlag, London, 2005.
•  J. Kaipio, V. Kolehmainen, M. Vauhkonen and E. Somersalo, Inverse problems with structural prior information, Inverse Problems, 15(3), 1999, 713–729.
•  J. Kaipio, V. Kolehmainen, E. Somersalo and M. Vauhkonen, Statistical inversion and Monte Carlo sampling methods in electrical impedance tomography, Inverse Problems, 16(5), 2000, 1487-1522.
•  J. van der Kruk, C. Wapenaar, J. Fokkema and P. van den Berg, Improved three-dimensional image reconstruction technique for multi-component ground penetrating radar data, Subsurface Sensing Technologies and Applications, 4(1), 61-99, 2003.
•  F. Lindgren, H. Rue and J. Lindström, An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach, J. R. Statist. Soc. B, 73, (2011), 423-498.
•  X. Liu and J. Sun, Reconstruction of Neumann eigenvalues and support of sound hard obstacles, Inverse Problems, 30 (2014), 065011.

•  R. Lorentzen, K. Flornes and G. Nævdal, History matching channelized reservoirs using the ensemble Kalman filter. SPE Journal 17 (2012), 137–151.
•  R. Lorentzen, G. Nævdal, and A. Shafieirad, Estimating facies fields by use of the ensemble Kalman filter and distance functions – applied to shallow-marine environments. SPE Journal 3 (2012), 146–158.
•  S. Osher and R. Fedkiw, The Level Set Method and Dynamic Implicit Surfaces, Springer, New York, 2003.
•  S. Osher and A. Sethian, Fronts propagating with curvature-dependent speed: algorithms based on Hamilton–Jacobi formulations, J. Comput. Phys., 79 (1988), 12–49.
•  J. Ping and D. Zhang, History matching of channelized reservoirs with vector-based level-set. SPE Journal 19 (2014), 514–529.
•  R. Potthast, A study on orthogonality sampling, Inverse Problems, 26 (2010), 074015.
•  C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning, Adaptive Computation and Machine Learning, The MIT Press, 2006.
•  L. Roininen, J. Huttunen and S. Lasanen, Whittle-Matérn priors for Bayesian statistical inversion with applications in electrical impedance tomography, Inverse Problems and Imaging, 8(2), 2014, 561–586.

•  J. Sun, An eigenvalue method using multiple frequency data for inverse scattering problems, Inverse Problems, 28 (2012), 025012.
•  A. Stuart, Inverse problems: A Bayesian perspective, Acta Numerica (2010), 451–559.
•  F. Santosa, A level-set approach for inverse problems involving obstacles, ESAIM: Control, Optimization and Calculus of Variations, 1: 17-33, 1996.
•  X. Tai and T. Chan, A survey on multiple level set methods with applications for identifying piecewise constant functions, International Journal of Numerical Analysis and Modeling 1 (2004), 25–47.
•  A. Tarantola, Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM, 2005.
•  Z. Wang, J. Bardsley, A. Solonen, T. Cui and Y. Marzouk, Bayesian inverse problems with l1 priors: a randomize-then-optimize approach, SIAM J. Sci. Comput., 39(5), 2017, S140-S166.
•  X. Wang, Y. Guo, D. Zhang and H. Liu, Fourier method for recovering acoustic sources from multi-frequency far-field data, Inverse Problems, 33 (2017), 035001.
•  X. Yang and Z. Deng, An ensemble Kalman filter approach based on operator splitting for solving nonlinear Hammerstein type ill-posed operator equations, Modern Physics Letters B, 32(28), (2018), 1850335.
•  R. Zhang and J. Sun, Efficient finite element method for grating profile reconstruction, J. Comput. Phys., 302 (2015), 405-419.