1 Introduction
Let be the spatial dimension. Let be a function in the class . For , define the operator as
Let , for some given finite number , be a function in the class . Consider the following initial value problem
(1.1) 
where is the initial source function. Let be an open and bounded domain in with smooth boundary . Let be a positive number. Define and The reader can find conditions on and that guarantee the existence, uniqueness and regularity results for (1.1) in many textbooks on PDEs, see e.g. [8, 12]. Studying these properties of (1.1) is beyond the scope of this paper; we consider them as assumptions. The main aim of this paper is to solve the following inverse problem:
Problem 1.1 (An inverse source problem for nonlinear hyperbolic equations).
Determine the source function , from the lateral Cauchy data
(1.2) 
for all
Problem 1.1 can be considered as the nonlinear version of an important problem arising from biomedical imaging, called thermo/photoacoustic tomography. The experiment leading to this problem is as follows, see [35, 36, 46]. One sends nonionizing laser pulses or microwaves to a biological tissue under inspection (for instance, a woman's breast in mammography). A part of the energy is absorbed and converted into heat, causing a thermal expansion and a subsequent ultrasonic wave propagating in space. The ultrasonic pressures on a surface around the tissue are measured. Recovering some initial information of the pressures from these measurements yields the structure inside the tissue. The current techniques to solve the problem of thermo/photoacoustic tomography are only for linear hyperbolic equations. We list here some widely used methods. In the case when the waves propagate in free space, one can find explicit reconstruction formulas in [11, 13, 43, 45], the time reversal method [20, 17, 18, 49, 50], the quasi-reversibility method [10, 41] and iterative methods [19, 47, 48]. The publications above study thermo/photoacoustic tomography for simple models of non-damping and isotropic media. The reader can find publications about thermo/photoacoustic tomography for more complicated models involving a damping or attenuation term in [3, 2, 14, 1, 9, 16, 33, 34, 42]. To the author's knowledge, this inverse source problem has not yet been solved when the governing equation is general and nonlinear like (1.1).
Natural approaches to solve nonlinear inverse problems are based on optimization. These approaches are local in the sense that they only deliver solutions if good initial guesses of the solutions are given. The local convergence does not always hold unless some additional conditions are imposed. We refer the reader to [15] for a condition that guarantees the local convergence of the optimization method using Landweber iteration. The question of how to compute the solutions to nonlinear problems without requesting a good initial guess is very interesting, challenging and significant in the scientific community. There is a general framework to globally solve nonlinear inverse problems, called convexification. The main idea of the convexification method is to employ some suitable Carleman weight functions to convexify the mismatch functional. The convexification phenomenon is rigorously proved by employing well-known Carleman estimates. Several versions of the convexification method [4, 23, 24, 25, 27, 29, 31, 30, 32, 40] have been developed since it was first introduced in [28]. In particular, the convexification was successfully tested with experimental data in [21, 22, 30]. Although effective, the convexification method has a drawback: it is time-consuming. We therefore propose a new method based on fixed-point iteration, the contraction principle and a suitable Carleman estimate to globally solve nonlinear inverse problems. This new method can be considered as the second generation of globally convergent numerical methods to solve nonlinear inverse problems, while the convexification method is the first one.
Our approach to solve Problem 1.1 is to reformulate it to be of the form for some operator . The general framework to find the fixed point of in this paper is to construct a sequence in the same manner as the sequence defined in the classical proof of the contraction principle. Take any initial solution to the nonlinear inverse problem, denoted by , and then recursively define for . In general, the convergence of to the fixed point of is not guaranteed. However, we discovered recently in [39], see also [5, 6], that if we include a suitable Carleman weight function in the definition of , then the convergence holds true. In the current paper, we define by including the Carleman weight function in [26, Lemma 6.1] and [7, Theorem 1.10.2] into the classical quasi-reversibility method, which was first introduced in [37]. The proof of the contraction behavior of relies on the Carleman estimate in [26, Lemma 6.1] and [7, Theorem 1.10.2]. The main theorems in this paper confirm that the recursive sequence above converges to the true solution. The stability with respect to the noise contained in the given data is of the Hölder type.
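The iterative scheme described above follows the classical contraction-principle pattern: iterate the map until successive iterates stabilize. The toy below is only a scalar illustration of that pattern (the map and tolerance are illustrative, not the operator of the paper, which acts on a Sobolev space with a Carleman weight).

```python
def fixed_point(phi, u0, tol=1e-12, max_iter=200):
    """Iterate u_{n+1} = phi(u_n) until |u_{n+1} - u_n| < tol."""
    u = u0
    for _ in range(max_iter):
        u_next = phi(u)
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    return u

# Example: phi(u) = cos(u) is a contraction near its fixed point
# (the Dottie number, approximately 0.739085).
import math
root = fixed_point(math.cos, 0.5)
```

For a contraction with rate q, the error after the stopping test is bounded by q/(1-q) times the tolerance, which is the quantitative form of convergence the paper's Carleman estimate provides for the infinite-dimensional map.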
2 An iterative method to solve Problem 1.1
Problem 1.1 can be reduced to the problem of computing a function defined on satisfying the following problem involving a nonlinear hyperbolic equation, the lateral Cauchy data and an initial condition
(2.1) 
In the case of success, one can set for all as the computed solution to Problem 1.1. Due to the presence of the nonlinearity , numerically solving (2.1) is extremely challenging. Conventional methods to do so are based on optimization. That is, one minimizes some mismatch functional, e.g.,
subject to the boundary and initial constraints in the last three lines of (2.1). The computed solution to (2.1) is set as the obtained minimizer. However, due to the nonlinearity , such a functional is not convex. It might have multiple minima and ravines. Therefore, the success of methods based on optimization depends on the availability of a good initial guess. In reality, good initial guesses are not always available. Motivated by this challenge, we propose to construct a fixed-point-like iterative sequence based on the quasi-reversibility method and a Carleman estimate in [26, Lemma 6.1] and [7, Theorem 1.10.2]. The convergence of this sequence to the solution to (2.1) will be proved by the contraction principle and a Carleman estimate. Our Carleman-based iterative algorithm to numerically solve (2.1) is as follows.
Let be a point in For and , define the Carleman weight function
(2.2) 
Let where is the smallest integer that is greater than or equal to . Then, is continuously embedded into . The convergence of our method to solve (2.1) is based on the following Carleman weighted norms.
Definition 2.1 (Carleman weighted Sobolev norms).
Assume that the set of admissible solutions
is nonempty. We construct a sequence that converges to the solution to (2.1). Take an arbitrary function . Assume by induction that is known. We set where is the unique minimizer of defined as
(2.4) 
See (2.3) for the norm of the regularity term . Due to the compact embedding from to , the functional is weakly lower semicontinuous. The presence of the regularization term guarantees that is coercive. Hence, has a minimizer on the closed and convex set in . The uniqueness of is due to the strict convexity of . An alternative way to obtain the existence and uniqueness of the minimizer is to argue similarly to the proofs of Theorem 2.1 in [4] or Theorem 4.1 in [30]. We will prove that the sequence converges to the solution to (2.1). This suggests Algorithm 1 to solve Problem 1.1.
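The roles of strict convexity and coercivity described above are easy to see in the scalar prototype of Tikhonov-regularized least squares. The numbers below are purely illustrative and not taken from the paper; the point is only that the regularization term makes the minimizer exist, be unique, and be given in closed form.

```python
def tikhonov_minimizer(a, b, eps):
    """Unique minimizer of J(u) = (a*u - b)**2 + eps*u**2 for eps > 0.

    Setting J'(u) = 2*a*(a*u - b) + 2*eps*u = 0 gives the formula below;
    eps > 0 makes J strictly convex and coercive, so the minimizer is unique.
    """
    return a * b / (a * a + eps)

# Illustrative values (an assumption, not the paper's data):
u_star = tikhonov_minimizer(2.0, 3.0, 0.1)
```

In the paper, the same mechanism operates on the weighted Sobolev norm (2.3): the regularization term yields coercivity of on the admissible set, and strict convexity yields uniqueness of the minimizer.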
Remark 2.1.
Remark 2.2.
One can replace the homogeneous initial condition for in (1.1) by the nonhomogeneous one for all for some given function . The analysis leading to the numerical method and the proof of the convergence of the method will be the same.
Remark 2.3.
Besides Algorithm 1, one can apply the convexification method, first introduced in [28], to globally solve (2.1) and Problem 1.1. In fact, by combining the arguments in [4, 10, 30], one can prove that:

for any large ball of where is an arbitrarily large number, the functional
is strictly convex in the large ball of for all sufficiently large and sufficiently small;

the unique minimizer of this functional in is a good approximation of the desired solution.
In theory, both Algorithm 1 and the convexification method deliver reliable solutions to (2.1) without requesting an initial guess. It is interesting to verify the efficiency of the convexification method, while that of Algorithm 1 is confirmed in this paper. Developing the convexification method for (2.1) and verifying it numerically serves as one of our future research topics.
3 The convergence theorem
In this section, we employ a Carleman estimate, see [26, Lemma 6.1] and [7, Theorem 1.10.2] to prove the convergence of the sequence obtained by Algorithm 1 to the true solution to (2.1).
3.1 The statement of the convergence theorem
Since is a bounded domain, it is contained in a large disk for some positive number . Before stating the convergence theorems, we recall an important Carleman estimate. For and , define
(3.1) 
The following lemma plays a key role in our analysis.
Lemma 3.1 (Carleman estimate).
Assume that the point in the definition of the Carleman weight function in (2.2) is in the set and that the coefficient in (1.1) satisfies the condition
(3.2) 
Then, there exists a sufficiently small number such that for any , one can choose a sufficiently large number and a number such that for all and for all , the following pointwise Carleman estimate holds true
(3.3) 
for . The vector-valued function and the function satisfy
(3.4) 
and
(3.5) 
In particular, if either or .
We refer the reader to [26, Lemma 6.1] and [7, Theorem 1.10.2] for the proof of Lemma 3.1. The proof of this Carleman estimate in the case is given in [38]. In those publications, the Carleman estimate (3.3) holds true for the function in where
Hence, to obtain Lemma 3.1, we need to extend the function to .
Let be the number in Lemma 3.1. Fix and . Since , we can choose both of these two numbers sufficiently small; say, e.g., and where , such that
(3.6) 
Remark 3.1.
With the choice of and above, the Carleman estimate (3.3) holds true for all
We now state the main theorem of the paper. Let be a noise level. By noise level , we assume that there is a function in satisfying
(3.7) 
where and are the noiseless version of the data and respectively. The true solution to (2.1), with and replaced by and respectively, is denoted by . That means, and
Theorem 3.1 (The convergence of to as the noise tends to ).
Corollary 3.1.
In practice, the condition that might not hold true. In this case, we need to know in advance a number such that the true solution to (2.1) belongs to the ball . This requirement does not weaken our result since can be arbitrarily large. In order to compute , we proceed as follows. Due to the Sobolev embedding theorem, for some constant . Let be a function in the class and satisfy
Then, it is obvious that solves
(3.10) 
where
We now compute by solving (3.10) using Algorithm 1 and Theorem 3.1 for .
Remark 3.2.
The auxiliary step above and the requirement about the knowledge of the number serve only a theoretical purpose. In our computations in Section 4, we obtain satisfactory numerical results by applying Algorithm 1 directly to (2.1) rather than (3.10). Although the condition is not satisfied, we do not experience any difficulty.
3.2 The proof of Theorem 3.1
Throughout the proof, is a generic constant depending only on , , , , and . In the proof, we use the Carleman weighted Sobolev norms defined in Definition 2.1 with some different values of , and . We split the proof into several steps.
Step 1. Define
Fix . Since is the minimizer of , defined in (2.4), on , by the variational principle, we have for all
(3.11) 
On the other hand, since is the true solution to (2.1), for all , we have
(3.12) 
Combining (3.11) and (3.12), for all , we have
(3.13) 
Due to (3.7), the function
(3.14) 
is in . Using this test function for the identity (3.13), we have
Thus,
(3.15) 
Using the inequality , we deduce from (3.15) that
(3.16) 
Step 2. Due to Remark 3.1, the Carleman estimate (3.3) holds true for the function in the whole set . Integrating (3.3) on , we have
(3.18) 
Since for all , by the divergence theorem, we have
(3.19) 
Using the fundamental theorem of calculus, we have
(3.20) 
Since for all , we have . Using (3.5), we deduce from (3.20) that
(3.21) 
It follows from (3.18) and (3.21) that
(3.22) 
Using the trace theory on we can find a constant such that
(3.23) 
Combining (3.22) and (3.23) gives
(3.24) 
Step 3. It follows from (3.17) and (3.24) that
(3.25) 
Recall from (3.14) that and the inequality . Fix such that . It follows from (3.25) that
(3.26) 
Therefore,
(3.27) 
Since , where . For any , if , then it is obvious that . In this case, it follows from (3.27) that
(3.28) 
where Applying (3.28) with replaced by , we obtain (3.8). The proof is complete.
4 Numerical study
For simplicity, we numerically solve Problem 1.1 in the two-dimensional case. We set . To solve the forward problem, we approximate by the square where is a large number. We then employ the explicit scheme to solve (1.1) on . In our computation, . We set the final time . In this setting, the nonlinear wave generated by an unknown source supported inside does not have enough time to propagate to . Therefore, the computed wave is the restriction of the solution to (1.1) on . Our implementation is based on the finite difference method, in which the step size in space is and the step size in time is . These values of the step sizes in space and time are chosen such that . Thus, the wave computed by the explicit scheme is reliable.
In this section, we set the noise level That means the data we use are given by
for all
and rand is the function returning uniformly distributed random numbers in the range
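The noise model above (the exact formula is not legible in this copy) is the usual multiplicative perturbation of relative level delta; the sketch below states this standard choice as an assumption.

```python
import random

def add_noise(data, delta, rng=None):
    """Multiplicative uniform noise of relative level delta:
    g_noisy = g * (1 + delta * r), with r uniform in [-1, 1].
    This formula is an assumption modeling the description in the text."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    return [g * (1.0 + delta * rng.uniform(-1.0, 1.0)) for g in data]
```

By construction the relative error of each noisy sample is at most delta, which matches the role of the noise level in the stability estimate (3.7).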
In all of our numerical examples, the stopping number in Step 2 of Algorithm 1 is set to be 7. It is not necessary to set as a bigger number because of the fast convergence at the exponential rate as for some ; see Theorem 3.1 and (3.8). Implementing Algorithm 1 is straightforward. The most complicated part is Step 3, namely minimizing the functional defined in (2.4). This step is equivalent to solving a linear equation subject to some boundary and initial constraints by the quasi-reversibility method. We employ the built-in optimization package in MATLAB with the command "lsqlin" for this step. The implementation is very similar to the one described in [44, Section 6]; we do not repeat the details here. Although the norm in the regularization term is , in our computational program we use a simpler norm to simplify the computational scripts. This change does not affect the satisfactory outputs. The "artificial" parameters for all tests below are: , and . These parameters are numerically chosen such that the reconstruction result is satisfactory for Test 1. Then, we use these values for all other tests.
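Step 3 is a linearly constrained least-squares problem, which is what "lsqlin" solves; the toy below illustrates its structure on a two-variable problem with one equality constraint, solved exactly via the Lagrange multiplier. The numbers are illustrative, not from the paper, where the problem is a large discretized PDE system.

```python
def constrained_lsq_2d(target, a, b):
    """Minimize |x - target|^2 over x in R^2 subject to a . x = b.

    The KKT conditions give x = target - lam * a with the multiplier
    lam chosen so the constraint holds."""
    tx, ty = target
    ax, ay = a
    lam = (ax * tx + ay * ty - b) / (ax * ax + ay * ay)
    return (tx - lam * ax, ty - lam * ay)

# Illustrative instance: project (1, 2) onto the line x1 + x2 = 1.
x_star = constrained_lsq_2d((1.0, 2.0), (1.0, 1.0), 1.0)
```

In the actual computation, the quadratic objective comes from the discretized, Carleman-weighted functional (2.4), and the equality constraints encode the boundary and initial conditions of (2.1).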