1 Introduction
Unknown source problems arise widely in real applications, e.g., remote sensing, radar detection [24], brain imaging, etc. In this paper, we consider the reconstruction of acoustic sources from remote measurements of the acoustic field. For an acoustic source with density $f$ supported on a region $D$, the pressure $u$ of the radiated time-harmonic wave can be modeled [13] by
(1.1) $\Delta u(x) + k^2 u(x) = f(x), \quad x \in \mathbb{R}^d$,
(1.2) $\lim_{r \to \infty} r^{(d-1)/2} \left( \partial u / \partial r - i k u \right) = 0, \quad r = |x|$,
where $k = \omega/c$ is the wave number, $\omega$ is the radial frequency, $c$ is the speed of sound, $d$ is the spatial dimension and (1.2) is the Sommerfeld radiation condition. Here, we discuss the case $d = 2$ and assume the source function is separable with respect to space and frequency, i.e.,
(1.3) $f(x, \omega) = f(x)\, g(\omega)$.
In real settings, the source $f$ is usually unknown and must be determined from observed data. The receiver devices are generally located on a remote closed surface $\Gamma$. We can collect the values of $u$ on $\Gamma$ for many frequencies $\omega$, i.e., many wave numbers $k$. For $d = 2$, we have the potential representation of the acoustic pressure [13]
(1.4) $u(x) = \frac{i}{4} \int_D H_0^{(1)}(k|x - y|) f(y)\, dy, \quad x \in \Gamma$,
where $H_0^{(1)}$ is the cylindrical Hankel function of the first kind and order zero. The data are collected at finitely many points $\{x_m\}_{m=1}^{M}$ for finitely many wave numbers $\{k_n\}_{n=1}^{N}$. In Figure 1, we show the measurement points as circles on the boundary of the square. By virtue of (1.4), we can write
(1.5) $y_{k_n} = A_{k_n} f + \eta_{k_n}, \quad n = 1, \ldots, N$,
where $A_{k_n}$ depends on the wave number $k_n$ and $\eta_{k_n}$ is the noise. Denote $y = (y_{k_1}, \ldots, y_{k_N})$ and $\eta = (\eta_{k_1}, \ldots, \eta_{k_N})$. Further assume that the function $f$ is known a priori to have the form
(1.6) $f(x) = \sum_{i=1}^{n} c_i \chi_{D_i}(x)$;
here $\chi_{D_i}$ denotes the indicator function of the subset $D_i$, the $D_i$ are subsets of $D$ such that $\cup_{i=1}^{n} D_i = D$ and $D_i \cap D_j = \emptyset$ ($i \neq j$), and the $c_i$ are known positive constants. In this setting, the regions $D_i$ determine the unknown function $f$ and therefore become the primary unknowns. This problem has been discussed in [2, 13], which show that the source can be determined uniquely from multiple frequency data. Many successful approaches have been proposed to solve the reconstruction problem of the acoustic source [3, 14, 15, 26, 32, 35, 41, 43]. The main techniques in these works are iterative and sampling algorithms, and the ill-posedness is usually addressed through regularization.
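To make the discretized forward map (1.5) concrete, the following sketch assembles the matrix sending a piecewise-constant source of the form (1.6) to measurements via the kernel in (1.4). All names, grid sizes and geometry are hypothetical, chosen only so the snippet is self-contained, and a simple midpoint quadrature over source cells stands in for the quadrature used in the paper.

```python
import numpy as np
from scipy.special import hankel1

def forward_matrix(obs_pts, src_pts, areas, k):
    """Discretize u(x) = (i/4) * int_D H_0^(1)(k|x-y|) f(y) dy by midpoint
    quadrature: A[m, q] = (i/4) * H_0^(1)(k * r_mq) * area_q."""
    r = np.linalg.norm(obs_pts[:, None, :] - src_pts[None, :, :], axis=2)
    return 0.25j * hankel1(0, k * r) * areas[None, :]

# toy setup: 16 observation points on a circle of radius 5,
# source grid in the square [-1, 1]^2 (hypothetical geometry)
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
obs = 5.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
gx, gy = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
src = np.stack([gx.ravel(), gy.ravel()], axis=1)
areas = np.full(src.shape[0], (2 / 20) ** 2)

# piecewise-constant source: c_1 = 1 on a disk of radius 0.4 (cf. (1.6))
f = np.where(np.linalg.norm(src, axis=1) < 0.4, 1.0, 0.0)
A = forward_matrix(obs, src, areas, k=2.0)
y = A @ f            # synthetic noiseless data for one wave number
```

For several wave numbers $k_n$, the matrices $A_{k_n}$ are stacked row-wise to form the full operator acting on the same source vector.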
For this kind of geometric shape problem, another very powerful tool is the level set method [29], which was originally introduced by Osher and Sethian [30]. It was devised as a simple and versatile method for computing and analyzing the motion of an interface. In this method, the interface is assumed to move in the normal direction with some speed, and a partial differential equation, in particular a Hamilton-Jacobi equation, for the level set function is established. Due to its high efficiency, it has become one of the most popular tools in many disciplines, such as image processing, computer graphics and computational geometry. Besides its computational efficiency, the crucial advantage of the level set approach is that it allows topological changes to be captured during the course of the algorithm, which is impossible with classical methods based on curve parameterizations. For shape reconstruction inverse problems, the level set method has also been widely studied [1, 6, 10, 20, 37].

Recently, the statistical perspective has been widely applied in the field of inverse problems [21, 36, 39]. In this framework, the unknown, the noise and the data are all viewed as random variables, and the core issue for inverse problems is how to characterize the posterior distribution, e.g., how to evaluate posterior moments or other posterior expectations. Based on Bayes' formula, many algorithms have been proposed to characterize the posterior distribution. The most widely used classes of approaches include Markov chain Monte Carlo (MCMC) [36, 40], maximum a posteriori (MAP) estimation [36], the ensemble Kalman filter (EnKF) [17], etc. These methods have been applied to many inverse problems, e.g., electrical impedance tomography [12, 22, 23, 34], Darcy flow [5, 19], fluid mechanics [9, 17], optical flow [4], inverse scattering [7], etc. A natural idea for solving the interface reconstruction problem is to combine the statistical perspective with the level set approach. To the authors' knowledge, studies in this direction are rare. Since the level set mapping is not continuous, it is difficult to establish the well-posedness of the Bayesian level set posterior distribution according to the theory in [36]. To obtain well-posedness, a Gaussian prior assumption on the level set function is imposed in [16], where the almost sure continuity of the forward operator with respect to the level set function is proved and the well-posedness of the posterior is subsequently established. In [11], a hierarchical Bayesian level set algorithm is proposed to solve this kind of problem with unknown prior parameters. In [8], an EnKF coupled with the level set representation is considered and applied to the EIT problem. One can refer to [27, 28, 31, 38] for more real applications, e.g., the oil reservoir problem, the image segmentation problem, etc.

2 Level set approach
In this section, we sketch the level set approach for inverse shape reconstruction problems.
According to [10, 11, 16], we can characterize the regions $D_i$ in the following way:
(2.1) $D_i = \{x \in D : z_{i-1} \leq \varphi(x) < z_i\}, \quad i = 1, \ldots, n$,
where $\varphi$ is called the level set function and $-\infty = z_0 < z_1 < \cdots < z_n = +\infty$ are constants. Using this characterization and the assumption (1.6), we can define the level set map
(2.2) $F(\varphi) = \sum_{i=1}^{n} c_i \chi_{D_i(\varphi)}$.
It is clear that the mapping $F$ is not injective, since many different level set functions determine the same shape. However, every given function $\varphi$ uniquely specifies a function $f$ [10]. Combining (2.2) and (1.5) gives
(2.3) $y_{k_n} = A_{k_n} F(\varphi) + \eta_{k_n}, \quad n = 1, \ldots, N$.
This can be rewritten in a compact form
(2.4) $y = \mathcal{G}(\varphi) + \eta$,
where $\mathcal{G}(\varphi) = (A_{k_1} F(\varphi), \ldots, A_{k_N} F(\varphi))$ and $y$ is an $MN$-dimensional column vector.
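A minimal sketch of the level set map (2.2) follows, assuming the convention (2.1) that the set between two consecutive level values carries one constant; the level values and constants below are illustrative only.

```python
import numpy as np

def level_set_map(phi, levels, values):
    """Map a level set function phi (sampled on a grid) to a
    piecewise-constant field: f = values[i] where
    levels[i] <= phi < levels[i+1] (cf. (2.1)-(2.2))."""
    f = np.zeros_like(phi)
    for i, c in enumerate(values):
        mask = (phi >= levels[i]) & (phi < levels[i + 1])
        f[mask] = c
    return f

# two-phase example: phi < 0 maps to 0.0, phi >= 0 maps to 2.0
phi = np.array([-0.7, -0.1, 0.3, 0.9])
f = level_set_map(phi, levels=[-np.inf, 0.0, np.inf], values=[0.0, 2.0])
```

Note that many different functions `phi` produce the same `f`, which is the non-injectivity mentioned above.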
Introduce a time variable $t$ and define
(2.5) $\varphi = \varphi(x, t)$.
The level set function evolves dynamically according to the evolution equation [6, 37]
(2.6) $\frac{\partial \varphi}{\partial t} = v(t)$,
which reflects that $\varphi$ evolves along the direction of the negative gradient of the data misfit with velocity $v$. According to [37], we take the velocity as
(2.7) $v(t) = -J(\varphi)^{*} (\mathcal{G}(\varphi) - y)$,
where $J(\varphi)$ is the Jacobian of $\mathcal{G}$ at $\varphi$. For an initial guess $\varphi_0$, we have the following initial value problem for $\varphi$ of Hamilton-Jacobi type:
(2.8) $\frac{\partial \varphi}{\partial t} = -J(\varphi)^{*} (\mathcal{G}(\varphi) - y)$,
(2.9) $\varphi(x, 0) = \varphi_0(x)$.
An explicit discretization of the above system yields
(2.10) $\varphi_{n+1} = \varphi_n - \Delta t\, J(\varphi_n)^{*} (\mathcal{G}(\varphi_n) - y)$,
with initial guess $\varphi_0$.
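The explicit update (2.10) can be sketched for a linear surrogate of the forward map, in which case the Jacobian is the matrix itself and the scheme reduces to a Landweber-type iteration. The matrix, dimensions and step size below are hypothetical; the point is only the time-stepping structure.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))      # linear surrogate: G(phi) = A phi, J = A
phi_true = rng.standard_normal(50)
y = A @ phi_true                       # synthetic noiseless data

phi = np.zeros(50)                     # initial guess phi_0
dt = 1.0 / np.linalg.norm(A, 2) ** 2   # step size chosen for stability
for _ in range(200):
    residual = A @ phi - y
    # phi_{n+1} = phi_n - dt * J^* (G(phi_n) - y), cf. (2.10)
    phi = phi - dt * A.T @ residual
```

For the nonlinear map $\mathcal{G}$ of the paper, `A.T @ residual` would be replaced by the adjoint Jacobian applied to the residual, which in practice requires a smoothed version of the discontinuous level set map.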
3 EnKF for the level set algorithm
We now treat problem (2.4) in the framework of statistical inversion, which regards all involved quantities as random variables and explores the posterior distribution of $\varphi$ given $y$. We restate the inverse problem as a state identification problem: the system evolves according to the dynamics (2.10) with an unknown initial state. We assume the initial state obeys the Gaussian distribution $N(m_0, C_0)$, where $m_0$ is the prior mean and $C_0$ is the prior covariance operator. The observations collected at time $n$ are denoted by $y_n$. Thereby, we have the following state identification problem:
(3.1) $\varphi_{n+1} = \Psi(\varphi_n)$,
(3.2) $\varphi_0 \sim N(m_0, C_0)$,
(3.3) $y_{n+1} = \mathcal{G}(\varphi_{n+1}) + \eta_{n+1}$,
where the $\eta_n \sim N(0, \Gamma)$ are i.i.d. Gaussian noises and $\Gamma$ is the noise covariance matrix. We focus on the posterior distribution, i.e., the distribution of $\varphi_n$ given the data $y_1, \ldots, y_n$. Bayes' rule gives a characterization of the posterior via the ratio of its density with respect to that of the prior:
(3.4) $\frac{d\mu^{y}}{d\mu_0}(\varphi) \propto \exp\!\left(-\frac{1}{2}\big\|\Gamma^{-1/2}(y - \mathcal{G}(\varphi))\big\|^2\right)$.
Therefore, the posterior density satisfies
(3.5) $\pi(\varphi \mid y) \propto \exp\!\left(-\frac{1}{2}\big\|\Gamma^{-1/2}(y - \mathcal{G}(\varphi))\big\|^2\right) \pi_0(\varphi)$.
From the filtering point of view, we are concerned with obtaining the posterior distribution of $\varphi_{n+1}$ given $y_1, \ldots, y_{n+1}$, i.e., the distribution associated with the probability measure on the random variable $\varphi_{n+1} \mid y_1, \ldots, y_{n+1}$. Suppose that we have the present system state $\varphi_n$ together with the prior covariance $\widehat{C}_n$. The covariance operators characterize the model uncertainty and are design parameters. With this information as the prior for the next state $\varphi_{n+1}$, we estimate the new state by blending the observed data into the dynamics. However, there is in general no easily usable closed form expression for the density of $\varphi_{n+1} \mid y_1, \ldots, y_{n+1}$. We can break this into a two-step process: a prediction step and an analysis step. In the prediction step, we use the present state to predict the new state according to the dynamics as
(3.6) $\widehat{\varphi}_{n+1} = \Psi(\varphi_n)$.
Using the prediction together with the design parameter $\widehat{C}_{n+1}$, we form the prior distribution $N(\widehat{\varphi}_{n+1}, \widehat{C}_{n+1})$ and further obtain the analysis state according to the observational process (3.3):
(3.7) $\varphi_{n+1} = \arg\min_{\varphi} \frac{1}{2}\big\|\Gamma^{-1/2}(y_{n+1} - \mathcal{G}(\varphi))\big\|^2 + \frac{1}{2}\big\|\widehat{C}_{n+1}^{-1/2}(\varphi - \widehat{\varphi}_{n+1})\big\|^2$.
When the design parameter $\widehat{C}_{n+1}$ is fixed, this filter corresponds to the 3DVar process. Naturally, we can also estimate the prior covariance from samples. To characterize the posterior distribution, the ensemble Kalman filter (EnKF) approach represents the filtering distribution through an ensemble of particles. A prediction sample is obtained for each particle; then the prediction sample mean is taken as the prior mean and the covariance of the prediction samples is taken as the prior covariance. In detail, suppose that at the $n$th step we have samples $\{\varphi_n^{(j)}\}_{j=1}^{J}$, where $J$ is the number of samples. For each particle, we have a prediction
(3.8) $\widehat{\varphi}_{n+1}^{(j)} = \Psi(\varphi_n^{(j)}), \quad j = 1, \ldots, J$.
Define the empirical covariance as
(3.9) $\widehat{C}_{n+1} = \frac{1}{J} \sum_{j=1}^{J} \big(\widehat{\varphi}_{n+1}^{(j)} - \overline{\varphi}_{n+1}\big)\big(\widehat{\varphi}_{n+1}^{(j)} - \overline{\varphi}_{n+1}\big)^{T}$,
where $\overline{\varphi}_{n+1}$ denotes the ensemble mean. The analysis state for each particle is then obtained by minimizing the following functional:
(3.10) $I_{n+1}(\varphi) = \frac{1}{2}\big\|\Gamma^{-1/2}(y_{n+1} - \mathcal{G}(\varphi))\big\|^2 + \frac{1}{2}\big\|\widehat{C}_{n+1}^{-1/2}(\varphi - \widehat{\varphi}_{n+1}^{(j)})\big\|^2$.
Due to the nonlinearity of the observation operator $\mathcal{G}$ in (3.3), the numerical implementation of the minimization becomes difficult. To overcome this, iterative EnKF approaches have been proposed in [17, 18, 42]. Define the state vector $z = (\varphi, F(\varphi))$. We modify the prediction process as
(3.11) $\widehat{z}_{n+1} = \widetilde{\Psi}(z_n)$,
by
(3.12) $\widetilde{\Psi}(\varphi, w) = \big(\Psi(\varphi), F(\Psi(\varphi))\big)$.
Accordingly, the observation operator is given by $H(z) = H(\varphi, w) = A w$. Formally, the minimization functional for the state vector has the form
(3.13) $I_{n+1}(z) = \frac{1}{2}\big\|\Gamma^{-1/2}(y_{n+1} - H z)\big\|^2 + \frac{1}{2}\big\|\widehat{C}_{n+1}^{-1/2}(z - \widehat{z}_{n+1}^{(j)})\big\|^2$,
where $\widehat{C}_{n+1}$ is the prior covariance of $z$. Since $H$ is linear, the minimizer is given by
(3.14) $z_{n+1}^{(j)} = \widehat{z}_{n+1}^{(j)} + \widehat{C}_{n+1} H^{*} \big(H \widehat{C}_{n+1} H^{*} + \Gamma\big)^{-1} \big(y_{n+1} - H \widehat{z}_{n+1}^{(j)}\big)$.
Write the prior covariance matrix in block form as
(3.15) $\widehat{C}_{n+1} = \begin{pmatrix} C^{\varphi\varphi}_{n+1} & C^{\varphi w}_{n+1} \\ C^{w\varphi}_{n+1} & C^{ww}_{n+1} \end{pmatrix}$.
Then the component form of (3.14) reads
(3.16) $\varphi_{n+1}^{(j)} = \widehat{\varphi}_{n+1}^{(j)} + C^{\varphi w}_{n+1} A^{*} \big(A C^{ww}_{n+1} A^{*} + \Gamma\big)^{-1} \big(y_{n+1} - A \widehat{w}_{n+1}^{(j)}\big)$,
(3.17) $w_{n+1}^{(j)} = \widehat{w}_{n+1}^{(j)} + C^{ww}_{n+1} A^{*} \big(A C^{ww}_{n+1} A^{*} + \Gamma\big)^{-1} \big(y_{n+1} - A \widehat{w}_{n+1}^{(j)}\big)$.
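A sketch of one analysis step built from empirical covariances, in the spirit of the component updates (3.16)-(3.17), follows. For simplicity we assume the observation acts directly on the $w$-component (identity observation matrix), so the gain involves only the cross covariance and the $w$-covariance; the perturbed-observation variant used here is a common EnKF implementation choice, not prescribed by the text.

```python
import numpy as np

def enkf_analysis(phi_hat, w_hat, y, Gamma, rng):
    """One EnKF analysis step for the augmented state (phi, w), observing w
    directly. phi_hat: (J, n) predicted level set particles;
    w_hat: (J, m) predicted observation particles."""
    J = phi_hat.shape[0]
    dphi = phi_hat - phi_hat.mean(axis=0)
    dw = w_hat - w_hat.mean(axis=0)
    C_pw = dphi.T @ dw / J                   # empirical cross covariance
    C_ww = dw.T @ dw / J                     # empirical covariance of w
    K = C_pw @ np.linalg.inv(C_ww + Gamma)   # Kalman gain
    # perturbed observations keep the analysis ensemble spread consistent
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    return phi_hat + (y_pert - w_hat) @ K.T

# toy usage: identity forward map, data near (1, 1, 1)
rng = np.random.default_rng(1)
phi_hat = rng.standard_normal((200, 3))
w_hat = phi_hat.copy()
Gamma = 0.01 * np.eye(3)
phi_new = enkf_analysis(phi_hat, w_hat, np.array([1.0, 1.0, 1.0]), Gamma, rng)
```

The $w$-particles are updated analogously, with $C^{\varphi w}$ replaced by $C^{ww}$ in the gain.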
We summarize this algorithm as follows.
Algorithm 1:

Initial particles: Let $\{\varphi_0^{(j)}\}_{j=1}^{J}$ be the initial samples drawn from the prior distribution $N(m_0, C_0)$, and obtain the initial samples of $w$ according to $w_0^{(j)} = F(\varphi_0^{(j)})$.

Prediction: Evaluate
(3.18) $\widehat{\varphi}_{n+1}^{(j)} = \Psi(\varphi_n^{(j)})$,
(3.19) $\widehat{w}_{n+1}^{(j)} = F(\widehat{\varphi}_{n+1}^{(j)})$.
Compute the ensemble means $\overline{\varphi}_{n+1}$ and $\overline{w}_{n+1}$.

Analysis: Define $C^{\varphi w}_{n+1}$ and $C^{ww}_{n+1}$ by
(3.20) $C^{\varphi w}_{n+1} = \frac{1}{J} \sum_{j=1}^{J} \big(\widehat{\varphi}_{n+1}^{(j)} - \overline{\varphi}_{n+1}\big)\big(\widehat{w}_{n+1}^{(j)} - \overline{w}_{n+1}\big)^{T}$,
(3.21) $C^{ww}_{n+1} = \frac{1}{J} \sum_{j=1}^{J} \big(\widehat{w}_{n+1}^{(j)} - \overline{w}_{n+1}\big)\big(\widehat{w}_{n+1}^{(j)} - \overline{w}_{n+1}\big)^{T}$.
Update each particle by (3.16) and (3.17), and estimate the unknown source by
(3.22) $f_{n+1} = F\!\left(\frac{1}{J} \sum_{j=1}^{J} \varphi_{n+1}^{(j)}\right)$ or
(3.23) $f_{n+1} = \frac{1}{J} \sum_{j=1}^{J} w_{n+1}^{(j)}$,
and moreover check the stopping rule.
Alternatively, the state vector can be defined as $z = (\varphi, \mathcal{G}(\varphi))$ [17]. Accordingly, the observation operator is given as $H(z) = H(\varphi, w) = w$. In this setting, the EnKF approach is listed as follows.
Algorithm 2:

Initial particles: Let $\{\varphi_0^{(j)}\}_{j=1}^{J}$ be the initial samples drawn from the prior distribution $N(m_0, C_0)$.

Prediction: Evaluate
(3.24) $\widehat{\varphi}_{n+1}^{(j)} = \Psi(\varphi_n^{(j)})$,
(3.25) $\widehat{w}_{n+1}^{(j)} = \mathcal{G}(\widehat{\varphi}_{n+1}^{(j)})$.
Compute the ensemble means $\overline{\varphi}_{n+1}$ and $\overline{w}_{n+1}$.

Analysis: Define $C^{\varphi w}_{n+1}$ and $C^{ww}_{n+1}$ by
(3.26) $C^{\varphi w}_{n+1} = \frac{1}{J} \sum_{j=1}^{J} \big(\widehat{\varphi}_{n+1}^{(j)} - \overline{\varphi}_{n+1}\big)\big(\widehat{w}_{n+1}^{(j)} - \overline{w}_{n+1}\big)^{T}$,
(3.27) $C^{ww}_{n+1} = \frac{1}{J} \sum_{j=1}^{J} \big(\widehat{w}_{n+1}^{(j)} - \overline{w}_{n+1}\big)\big(\widehat{w}_{n+1}^{(j)} - \overline{w}_{n+1}\big)^{T}$.
Update each particle by
(3.28) $\varphi_{n+1}^{(j)} = \widehat{\varphi}_{n+1}^{(j)} + C^{\varphi w}_{n+1} \big(C^{ww}_{n+1} + \Gamma\big)^{-1} \big(y_{n+1} - \widehat{w}_{n+1}^{(j)}\big)$.
Compute the mean of the parameter update
(3.29) $\varphi_{n+1} = \frac{1}{J} \sum_{j=1}^{J} \varphi_{n+1}^{(j)}$,
and moreover check the stopping rule.
4 Prior distribution
From the above description, we see that the prior distribution plays a key role in the statistical inversion. In this paper, we choose Whittle-Matérn fields as the prior distribution. These are stationary Gaussian random fields with autocorrelation function
(4.1) $c(r) = \sigma^2 \frac{2^{1-\nu}}{\Gamma(\nu)} (\kappa r)^{\nu} K_{\nu}(\kappa r)$,
where $\nu$ is the smoothness parameter, $K_{\nu}$ is the modified Bessel function of the second kind of order $\nu$ and $\kappa$ is the inverse length-scale parameter. The value of $\nu$ determines the mean square differentiability of the underlying process, which matters for predictions made using such a model. This distribution has been discussed by many authors [25, 33, 34]. A method for explicit, and computationally efficient, continuous Markov representations of Gaussian Matérn fields is derived in [25]. The method is based on the fact that a Gaussian Matérn field on $\mathbb{R}^d$ can be viewed as a solution to the stochastic partial differential equation (SPDE)
(4.2) $(\kappa^2 - \Delta)^{\alpha/2} X(x) = \mathcal{W}(x), \quad x \in \mathbb{R}^d, \quad \alpha = \nu + d/2$,
where $\mathcal{W}$ is Gaussian white noise and the constant $\sigma^2$ is
$\sigma^2 = \frac{\Gamma(\nu)}{\Gamma(\nu + d/2)(4\pi)^{d/2} \kappa^{2\nu}}.$
Here $\sigma^2$ is the variance of the stationary field. The operator $(\kappa^2 - \Delta)^{\alpha/2}$ is a pseudo-differential operator defined by its Fourier transform. When $\alpha = 2$, we can solve (4.2) by the finite element method with a suitable boundary condition. The stochastic weak solution of the SPDE (4.2) is found by requiring that
(4.3) $\langle \psi, (\kappa^2 - \Delta) X \rangle = \langle \psi, \mathcal{W} \rangle$,
where $\psi$ ranges over suitable test functions. We use linear elements to solve the weak formulation with the homogeneous Dirichlet boundary condition, i.e., $X = 0$ on the boundary.
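For illustration, prior samples from the Whittle-Matérn covariance (4.1) on a small grid can also be drawn directly by dense Cholesky factorization, which sidesteps the SPDE/FEM route; the grid and parameter values below are illustrative.

```python
import numpy as np
from scipy.special import kv, gamma

def matern_cov(pts, sigma2, nu, kappa):
    """Whittle-Matern covariance c(r) = sigma2 * 2^{1-nu}/Gamma(nu)
    * (kappa r)^nu * K_nu(kappa r), with c(0) = sigma2 (cf. (4.1))."""
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    c = np.empty_like(r)
    nz = r > 0
    kr = kappa * r[nz]
    c[nz] = sigma2 * 2 ** (1 - nu) / gamma(nu) * kr ** nu * kv(nu, kr)
    c[~nz] = sigma2                 # K_nu diverges at r = 0; the limit is sigma2
    return c

# one prior draw on a 1D grid via Cholesky factorization of the covariance
pts = np.linspace(0.0, 1.0, 50)[:, None]
C = matern_cov(pts, sigma2=1.0, nu=1.5, kappa=10.0)
L = np.linalg.cholesky(C + 1e-8 * np.eye(50))   # small jitter for stability
sample = L @ np.random.default_rng(3).standard_normal(50)
```

The FEM/SPDE route of [25] becomes preferable when the grid is large, since the precision operator it produces is sparse while the covariance matrix above is dense.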
5 Numerical test
In the numerical tests, we consider frequencies varying from Hz to kHz and a sound speed in m/s; these parameter settings are given in [13]. Figure 1 displays the setting of our problem. The data are taken at the circle dots on the boundary. The noise is taken as . The prior samples are generated in the disk domain, and we assume that the support of the source lies in this disk. The disk domain is decomposed into triangular elements and the potential integral is calculated using a Gaussian quadrature formula on each triangle. Moreover, we solve (4.2) on the same mesh using linear finite elements. To avoid an inverse crime as far as possible, we use a finer mesh in the inverse process than in the data generation. The time derivative in the Hamilton-Jacobi system is discretized by the forward Euler scheme. The Euler step size is taken as . The sample size and the iteration number are taken as and , respectively. The prior parameter is set to . We display the numerical reconstructions of the source for several different cases. In Figures 2, 3 and 4, we give the numerical results for the cases of a single domain, two separate domains and a Taichi graph, respectively. For each case, the data noise levels are taken as and . It should be noted that we can use (3.22) or (3.23) to generate the estimate of the unknown source when applying Algorithm 1. We show numerical comparisons between the different estimation strategies in Algorithm 1 and Algorithm 2. It can be seen that, whichever estimation strategy is used, we obtain similar numerical results for all examples. In addition, it is evident that the reconstructions are better for the smaller noise level.
References
 [1] A. Aghasi, M. Kilmer, and E. Miller, Parametric level set methods for inverse problems, SIAM J. Imaging Sciences, 4(2), (2011), 618-650.
 [2] A. Alzaalig, G. Hu, X. Liu and J. Sun, Fast acoustic source imaging using multifrequency sparse data, arXiv:1712.02654.
 [3] G. Bao, S. Lu, W. Rundell and B. Xu, A recursive algorithm for multifrequency acoustic inverse source problems, SIAM J. Numer. Anal., 53 (2015), 1608-1628.
 [4] D. Béréziat and I. Herlin, Solving illposed image processing problems using data assimilation, Numer Algor (2011) 56: 219–252.
 [5] A. Beskos, A. Jasra, E. Muzaffer, A. Stuart, Sequential Monte Carlo methods for Bayesian elliptic inverse problems, Stat Comput, 2015, DOI 10.1007/s11222-015-9556-7.
 [6] M. Burger, A level set method for inverse problems, Inverse Problems, 17(5): 1327-1355, 2001.
 [7] T. Bui-Thanh and O. Ghattas, An analysis of infinite dimensional Bayesian inverse shape acoustic scattering and its numerical approximation, SIAM/ASA Journal on Uncertainty Quantification, 2, 2014, 203-222.
 [8] N. Chada, M. Iglesias, L. Roininen and A. Stuart, Parameterizations for ensemble Kalman inversion, Inverse Problems 34 (2018) 055009 (31pp).
 [9] S. Cotter, M. Dashti, J. Robinson and A.M. Stuart, Bayesian inverse problems for functions and applications to fluid mechanics. Inverse Problems 25, 2009, 115008.
 [10] O. Dorn and D. Lesselier, Level set methods for inverse scattering, Inverse Problems 22 (2006) R67–R131.
 [11] M. Dunlop, M. Iglesias and A. Stuart, Hierarchical Bayesian level set inversion, Statistics and Computing, November 2017, Volume 27, Issue 6, 1555-1584.
 [12] M. Dunlop and A. Stuart, The Bayesian formulation of EIT: analysis and algorithms, Inverse Problems and Imaging, 10(4), 2016, 1007–1036.
 [13] M. Eller and N. Valdivia, Acoustic source identification using multiple frequency information, Inverse Problems 25, 115005(20pp), 2009.
 [14] R. Griesmaier, Multifrequency orthogonality sampling for inverse obstacle scattering problems, Inverse Problems, 27 (2011), 085005.
 [15] R. Griesmaier and C. Schmiedecke, A factorization method for multifrequency inverse source problems with sparse far-field measurements, SIAM J. Imag. Sci., (2017), to appear.
 [16] M. Iglesias, Y. Lu and A. Stuart, A Bayesian level set method for geometric inverse problems, Interfaces and Free Boundaries 18 (2016), 181-217.
 [17] M. Iglesias, K. Law and A. Stuart, Ensemble Kalman methods for inverse problems, Inverse Problems 29 (2013) 045001 (20pp).
 [18] M. Iglesias, A regularizing iterative ensemble Kalman method for PDEconstrained inverse problems, Inverse Problems 32 (2016) 025002 (45pp).
 [19] M. Iglesias, K. Lin, A. Stuart, Wellposed Bayesian geometric inverse problems arising in subsurface flow, Inverse Problems, 30 2014, 114001.
 [20] K. Ito, K. Kunisch and Z. Li, Levelset function approach to an inverse interface problem, Inverse Problems 17 (2001) 1225–1242.
 [21] J. Kaipio, E. Somersalo, Statistical and Computational Inverse Problems, Springer Verlag, London, 2005.
 [22] J. Kaipio, V. Kolehmainen, M. Vauhkonen and E. Somersalo, Inverse problems with structural prior information, Inverse Problems, 15(3), 1999, 713–729.
 [23] J. Kaipio, V. Kolehmainen, E. Somersalo and M. Vauhkonen, Statistical inversion and Monte Carlo sampling methods in electrical impedance tomography, Inverse Problems 16(5), 2000, 1487-1522.
 [24] J. van der Kruk, C. Wapenaar, J. Fokkema and P. van den Berg, Improved three-dimensional image reconstruction technique for multi-component ground penetrating radar data, Subsurface Sensing Technologies and Applications, 4(1), 61-99, 2003.
 [25] F. Lindgren, H. Rue and J. Lindström, An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach, J. R. Statist. Soc. B, 73, (2011), 423-498.

 [26] X. Liu and J. Sun, Reconstruction of Neumann eigenvalues and support of sound hard obstacles, Inverse Problems, 30 (2014), 065011.
 [27] R. Lorentzen, K. Flornes and G. Nævdal, History matching channelized reservoirs using the ensemble Kalman filter. SPE Journal 17 (2012), 137–151.
 [28] R. Lorentzen, G. Nævdal, and A. Shafieirad, Estimating facies fields by use of the ensemble Kalman filter and distance functions – applied to shallowmarine environments. SPE Journal 3 (2012), 146–158.
 [29] S. Osher and R. Fedkiw, The Level Set Method and Dynamic Implicit Surfaces. Springer, New York (2003). Zbl pre01873119
 [30] S. Osher and J. Sethian, Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988), 12-49. Zbl 0659.65132 MR 89h:80012
 [31] J. Ping and D. Zhang, History matching of channelized reservoirs with vectorbased levelset. SPE Journal 19 (2014), 514–529.
 [32] R. Potthast, A study on orthogonality sampling, Inverse Problems, 26 (2010), 074015.

 [33] C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning, Adaptive Computation and Machine Learning, The MIT Press, 2006.

 [34] L. Roininen, J. Huttunen and S. Lasanen, Whittle-Matérn priors for Bayesian statistical inversion with applications in electrical impedance tomography, Inverse Problems and Imaging, 8(2), 2014, 561-586.
 [35] J. Sun, An eigenvalue method using multiple frequency data for inverse scattering problems, Inverse Problems, 28 (2012), 025012.
 [36] A. Stuart, Inverse problems: A Bayesian perspective, Acta Numerica (2010), 451–559.
 [37] F. Santosa, A level-set approach for inverse problems involving obstacles, ESAIM: Control, Optimisation and Calculus of Variations, 1: 17-33, 1996.
 [38] X. Tai and T. Chan, A survey on multiple level set methods with applications for identifying piecewise constant functions, International Journal of Numerical Analysis and Modeling 1 (2004), 25–47.
 [39] A. Tarantola, Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM, 2005.
 [40] Z. Wang, J. Bardsley, A. Solonen, T. Cui and Y. Marzouk, Bayesian inverse problems with $l_1$ priors: a randomize-then-optimize approach, SIAM J. Sci. Comput., 39(5), 2017, S140-S166.
 [41] X. Wang, Y. Guo, D. Zhang and H. Liu, Fourier method for recovering acoustic sources from multifrequency farfield data, Inverse Problems, 33 (2017), 035001.
 [42] X. Yang and Z. Deng, An ensemble Kalman filter approach based on operator splitting for solving nonlinear Hammerstein type ill-posed operator equations, Modern Physics Letters B, 32(28), (2018), 1850335.
 [43] R. Zhang and J. Sun, Efficient finite element method for grating profile reconstruction, J. Comput. Phys., 302 (2015), 405-419.