Carleman estimates and the contraction principle for an inverse source problem for nonlinear hyperbolic equations

08/07/2021 ∙ by Loc H. Nguyen, et al. ∙ UNC Charlotte

The main aim of this paper is to solve an inverse source problem for a general nonlinear hyperbolic equation. Combining the quasi-reversibility method and a suitable Carleman weight function, we define a map whose fixed point is the solution to the inverse problem. To find this fixed point, we define a recursive sequence with an arbitrary initial term, in the same manner as in the classical proof of the contraction principle. Applying a Carleman estimate, we show that this sequence converges to the desired solution at an exponential rate. Therefore, our new method can be considered an analog of the contraction principle. We rigorously study the stability of our method with respect to noise. Numerical examples are presented.







1 Introduction

Let be the spatial dimension. Let be a function in the class . For , define the operator as

Let , for some given finite number , be a function in the class . Consider the following initial value problem


where is the initial source function. Let be an open and bounded domain in with smooth boundary . Let be a positive number. Define and . The reader can find conditions on and that guarantee existence, uniqueness, and regularity results for (1.1) in many textbooks on PDEs; see, e.g., [8, 12]. Studying these properties of (1.1) is beyond the scope of this paper; we take them as assumptions. The main aim of this paper is to solve the following inverse problem:

Problem 1.1 (An inverse source problem for nonlinear hyperbolic equations).

Determine the source function , from the lateral Cauchy data


for all

Problem 1.1 can be considered the nonlinear version of an important problem arising in bio-medical imaging, called thermo/photo-acoustic tomography. The experiment leading to this problem is as follows; see [35, 36, 46]. One sends non-ionizing laser pulses or microwaves to a biological tissue under inspection (for instance, a woman's breast in mammography). Part of the energy is absorbed and converted into heat, causing a thermal expansion and a subsequent ultrasonic wave propagating in space. The ultrasonic pressures on a surface around the tissue are measured. Recovering some initial information about the pressures from these measurements yields the structure inside the tissue. The current techniques to solve the problem of thermo/photo-acoustic tomography apply only to linear hyperbolic equations. We list here some widely used methods. In the case when the waves propagate in free space, one can find explicit reconstruction formulas in [11, 13, 43, 45], the time reversal method [20, 17, 18, 49, 50], the quasi-reversibility method [10, 41], and iterative methods [19, 47, 48]. The publications above study thermo/photo-acoustic tomography for simple models of non-damping and isotropic media. The reader can find publications about thermo/photo-acoustic tomography for more complicated models involving a damping or attenuation term in [3, 2, 14, 1, 9, 16, 33, 34, 42]. To the author's knowledge, this inverse source problem has not yet been solved when the governing equation is general and nonlinear like (1.1).

Natural approaches to solving nonlinear inverse problems are based on optimization. These approaches are local in the sense that they deliver solutions only when good initial guesses of the solutions are available. The local convergence does not always hold unless some additional conditions are imposed. We refer the reader to [15] for a condition that guarantees the local convergence of the optimization method using the Landweber iteration. The question of how to compute solutions to nonlinear problems without requiring a good initial guess is interesting, challenging, and significant in the scientific community. There is a general framework to globally solve nonlinear inverse problems, called convexification. The main idea of the convexification method is to employ suitable Carleman weight functions to convexify the mismatch functional. The convexification phenomenon is rigorously proved by employing well-known Carleman estimates. Several versions of the convexification method [4, 23, 24, 25, 27, 29, 31, 30, 32, 40] have been developed since it was first introduced in [28]. In particular, the convexification was successfully tested with experimental data in [21, 22, 30]. Although effective, the convexification method has a drawback: it is time consuming. We therefore propose a new method based on fixed-point iteration, the contraction principle, and a suitable Carleman estimate to globally solve nonlinear inverse problems. This new method can be considered the second generation of globally convergent numerical methods for nonlinear inverse problems, the convexification method being the first.

Our approach to solving Problem 1.1 is to reformulate it in the form for some operator . The general framework to find the fixed point of in this paper is to construct a sequence in the same manner as the sequence defined in the classical proof of the contraction principle. Take any initial solution to the nonlinear inverse problem, denoted by , and then recursively define for . In general, the convergence of to the fixed point of is not guaranteed. However, we recently discovered in [39], see also [5, 6], that if we include a suitable Carleman weight function in the definition of , then the convergence holds true. In the current paper, we define by including the Carleman weight function in [26, Lemma 6.1] and [7, Theorem 1.10.2] in the classical quasi-reversibility method, which was first introduced in [37]. The proof of the contraction behavior of relies on the Carleman estimate in [26, Lemma 6.1] and [7, Theorem 1.10.2]. The main theorems in this paper confirm that the recursive sequence above converges to the true solution. The stability with respect to the noise contained in the given data is of Hölder type.
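The recursion at the heart of the contraction principle can be illustrated with a minimal, self-contained sketch (a generic scalar example, not the paper's operator):

```python
import math

def fixed_point_iteration(phi, x0, n_iter=100):
    """Recursively apply x_{n+1} = phi(x_n), as in the classical
    proof of the Banach contraction principle."""
    x = x0
    for _ in range(n_iter):
        x = phi(x)
    return x

# phi(x) = cos(x) is a contraction near its unique fixed point
# x* ~ 0.7390851 (the Dottie number); the error |x_n - x*| decays
# at an exponential rate, mirroring the convergence proved in this paper.
x_star = fixed_point_iteration(math.cos, x0=0.0)
print(round(x_star, 6))  # → 0.739085
```

The point of the paper is that, for the inverse problem, the map must be built with a Carleman weight for this kind of geometric convergence to hold.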

The paper is organized as follows. In Section 2, we introduce our iteration. In Section 3, we prove the convergence of the sequence generated in Section 2. In Section 4, we present some numerical tests. Section 5 is for concluding remarks.

2 An iterative method to solve Problem 1.1

Problem 1.1 can be reduced to the problem of computing a function defined on satisfying the following problem involving a nonlinear hyperbolic equation, the lateral Cauchy data and an initial condition


In the case of success, one can set for all as the computed solution to Problem 1.1. Due to the presence of the nonlinearity , numerically solving (2.1) is extremely challenging. Conventional methods to do so are based on optimization. That is, one minimizes some mismatch functional, e.g.,

subject to the boundary and initial constraints in the last three lines of (2.1). The computed solution to (2.1) is set as the obtained minimizer. However, due to the nonlinearity , such a functional is not convex. It might have multiple minima and ravines. Therefore, the success of optimization-based methods depends on a good initial guess. In reality, good initial guesses are not always available. Motivated by this challenge, we propose to construct a fixed-point-like iterative sequence based on the quasi-reversibility method and a Carleman estimate in [26, Lemma 6.1] and [7, Theorem 1.10.2]. The convergence of this sequence to the solution of (2.1) will be proved by the contraction principle and a Carleman estimate. Our Carleman-based iterative algorithm to numerically solve (2.1) is as follows.
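The dependence on the initial guess can already be seen in one dimension: gradient descent on a non-convex functional lands in whichever local minimum is nearest to the starting point. A minimal illustration with a toy functional (not the mismatch functional above):

```python
def grad_descent(grad_f, x0, lr=0.05, n_steps=500):
    """Plain gradient descent; it only finds the local minimum
    nearest to the initial guess x0."""
    x = x0
    for _ in range(n_steps):
        x = x - lr * grad_f(x)
    return x

# A toy non-convex functional f(x) = (x^2 - 1)^2 with two minima,
# at x = -1 and x = 1; its gradient is 4x(x^2 - 1).
grad_f = lambda x: 4 * x * (x ** 2 - 1)

print(round(grad_descent(grad_f, x0=0.9), 4))   # → 1.0
print(round(grad_descent(grad_f, x0=-0.9), 4))  # → -1.0
```

Two nearby starting points converge to two different minimizers, which is exactly the failure mode the globally convergent method is designed to avoid.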

Let be a point in For and , define the Carleman weight function


Let , where is the smallest integer greater than or equal to . Then, is continuously embedded into . The convergence of our method to solve (2.1) is based on the following Carleman weighted norms.

Definition 2.1 (Carleman weighted Sobolev norms).

For all and we define the norm


The notation in (2.3) is understood in the usual sense; that is, with being a -dimensional vector of nonnegative integers,
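Numerically, the Carleman weighted norms of Definition 2.1 are ordinary Sobolev norms with the squared weight inserted under the integral. The sketch below assumes a weight of the common hyperbolic form exp(λ(|x − x0|² − βt²)), since the exact formula in (2.2) was lost in extraction; all parameter values are likewise assumptions for illustration:

```python
import numpy as np

def carleman_weight(x, t, x0, lam, beta):
    """Hypothetical Carleman weight exp(lam * (|x - x0|^2 - beta*t^2)).
    This common hyperbolic form is an assumption; the exact exponent
    of (2.2) is not reproduced here."""
    r2 = np.sum((np.asarray(x) - np.asarray(x0)) ** 2, axis=-1)
    return np.exp(lam * (r2 - beta * t ** 2))

# Carleman-weighted L^2 norm of a sample function on a space-time grid,
# in the spirit of Definition 2.1.
xs = np.linspace(-1.0, 1.0, 101)
ts = np.linspace(0.0, 1.0, 51)
X, T = np.meshgrid(xs, ts, indexing="ij")
U = np.sin(np.pi * X) * np.cos(T)                # sample function u(x, t)
W = carleman_weight(X[..., None], T, x0=[-3.0], lam=1.5, beta=1.0)
dx, dt = xs[1] - xs[0], ts[1] - ts[0]
weighted_norm = np.sqrt(np.sum(W * U ** 2) * dx * dt)
print(np.isfinite(weighted_norm) and weighted_norm > 0)  # → True
```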


Assume that the set of admissible solutions

is nonempty. We construct a sequence that converges to the solution to (2.1). Take an arbitrary function . Assume by induction that is known. We set where is the unique minimizer of defined as


See (2.3) for the norm of the regularity term . Due to the compact embedding of into , the functional is weakly lower semi-continuous. The presence of the regularization term guarantees that is coercive. Hence, has a minimizer on the closed and convex set in . The uniqueness of is due to the strict convexity of . An alternative way to obtain the existence and uniqueness of the minimizer is to argue similarly to the proofs of Theorem 2.1 in [4] or Theorem 4.1 in [30]. We will prove that the sequence converges to the solution to (2.1). This suggests Algorithm 1 to solve Problem 1.1.

1:   Choose a number that represents the maximal number of iterations. Choose an arbitrary initial solution
2:  for  to  do
3:      Minimize the function , defined in (2.4), on .
4:     Set as the obtained minimizer.
5:  end for
6:  Set the computed solution to (2.1) as
7:  Set the computed solution to Problem 1.1 as for all
Algorithm 1 A procedure to compute the numerical solution to Problem 1.1
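The existence and uniqueness argument for each minimizer above has a simple finite-dimensional analogue: after discretization, the regularized functional is a strictly convex quadratic, so its minimizer is unique and solves a linear system. A sketch with random stand-in matrices (not the actual discretized operators):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # stand-in for the discretized operator
b = rng.standard_normal(20)         # stand-in for the data
eps = 1e-3                          # regularization parameter

# J(u) = ||A u - b||^2 + eps * ||u||^2 is strictly convex, so its
# unique minimizer solves the normal equations (A^T A + eps I) u = A^T b.
u = np.linalg.solve(A.T @ A + eps * np.eye(10), A.T @ b)

# First-order optimality: the gradient of J vanishes at the minimizer.
grad = 2 * (A.T @ (A @ u - b) + eps * u)
print(np.linalg.norm(grad) < 1e-10)  # → True
```

Coercivity and strict convexity come entirely from the eps-term, which is the role the regularization term plays in (2.4).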
Remark 2.1.

The presence of the Carleman weight functions and that of the weight function in the regularization term in (2.4) play the key role in proving the convergence of the sequence to the true solution to See Theorem 3.1 for details.

Remark 2.2.

One can replace the homogeneous initial condition for in (1.1) with the non-homogeneous one for all , for some given function . The analysis leading to the numerical method and the proof of the convergence of the method remain the same.

Remark 2.3.

Besides Algorithm 1, one can apply the convexification method, first introduced in [28], to globally solve (2.1) and Problem 1.1. In fact, by combining the arguments in [4, 10, 30], one can prove that:

  1. for any large ball of where is an arbitrarily large number, the functional

    is strictly convex in the large ball of for all sufficiently large and sufficiently small;

  2. the unique minimizer of this functional in is a good approximation of the desired solution.

In theory, both Algorithm 1 and the convexification method deliver reliable solutions to (2.1) without requiring an initial guess. It would be interesting to verify the efficiency of the convexification method, while that of Algorithm 1 is confirmed in this paper. Developing the convexification method for (2.1) and verifying it numerically is one of our future research topics.

3 The convergence theorem

In this section, we employ a Carleman estimate, see [26, Lemma 6.1] and [7, Theorem 1.10.2] to prove the convergence of the sequence obtained by Algorithm 1 to the true solution to (2.1).

3.1 The statement of the convergence theorem

Since is a bounded domain, it is contained in a large disk for some positive number . Before stating the convergence theorems, we recall an important Carleman estimate. For and , define


The following lemma plays a key role in our analysis.

Lemma 3.1 (Carleman estimate).

Assume that the point in the definition of the Carleman weight function in (2.2) is in the set and that the coefficient in (1.1) satisfies the condition


Then, there exists a sufficiently small number such that for any , one can choose a sufficiently large number and a number such that for all and for all , the following pointwise Carleman estimate holds true


for . The vector valued function and the function satisfy




In particular, if either or .

We refer the reader to [26, Lemma 6.1] and [7, Theorem 1.10.2] for the proof of Lemma 3.1. The proof of this Carleman estimate in the case is given in [38]. In those publications, the Carleman estimate (3.3) holds true for functions in , where

Hence, to obtain Lemma 3.1, we need to extend the function to .

Let be the number in Lemma 3.1. Fix and . Since , we can choose both of these numbers sufficiently small, say, e.g., and where , such that

Remark 3.1.

With the choice of and above, the Carleman estimate (3.3) holds true for all

We now state the main theorem of the paper. Let be the noise level. By noise level , we mean that there is a function in satisfying


where and are the noiseless version of the data and respectively. The true solution to (2.1), with and replaced by and respectively, is denoted by . That means, and

Theorem 3.1 (The convergence of to as the noise tends to ).

Assume that is a finite number and that the function satisfies (3.2). Fix and let be such that (3.6) holds true. Fix . Then, there exist , depending only on , , , , and , such that for all , we have


provided that , where . Here, is a positive number depending only on , , , and .

Corollary 3.1.

By the trace theory, it follows from (3.8) that for all


where and are two positive numbers depending only on , , and . In (3.9), , is the true source function, and

In practice, the condition that might not hold true. In this case, we need to know in advance a number such that the true solution to (2.1) belongs to the ball . This requirement does not weaken our result since can be arbitrarily large. In order to compute , we proceed as follows. Due to the Sobolev embedding theorem, for some constant . Let be a function in the class satisfying

Then, it is obvious that solves



We now compute by solving (3.10) using Algorithm 1 and Theorem 3.1 for .

Remark 3.2.

The auxiliary step above and the required knowledge of the number are only for theoretical purposes. In our computations in Section 4, we obtain satisfactory numerical results by applying Algorithm 1 directly to (2.1) rather than to (3.10). Although the condition is not satisfied, we do not experience any difficulties.

We present the proof of this theorem in the next subsection. The proof is motivated by [5, 6, 39].

3.2 The proof of Theorem 3.1

Throughout the proof, is a generic constant depending only on , , , , and . In the proof, we use the Carleman weighted Sobolev norms defined in Definition 2.1 with some different values of , and . We split the proof into several steps.

Step 1. Define

Fix . Since is the minimizer of , defined in (2.4), on , by the variational principle, we have for all


On the other hand, since is the true solution to (2.1), for all , we have


Combining (3.11) and (3.12), for all , we have


Due to (3.7), the function


is in . Using this test function for the identity (3.13), we have



Using the inequality , we deduce from (3.15) that


Since , we can find a constant such that

Using this and (3.16), we have


Step 2. Due to Remark 3.1, the Carleman estimate (3.3) holds true for the function in the whole set . Integrating (3.3) on , we have


Since for all , by the divergence theorem, we have


Using the fundamental theorem of calculus, we have


Since for all , we have . Using (3.5), we deduce from (3.20) that


It follows from (3.18) and (3.21) that


Using the trace theory on we can find a constant such that


Combining (3.22) and (3.23) gives


Step 3. It follows from (3.17) and (3.24) that


Recall from (3.14) that and the inequality . Fix such that . It follows from (3.25) that




Since , where . For any , if , then it is obvious that . In this case, it follows from (3.27) that


where Applying (3.28) with replaced by , we obtain (3.8). The proof is complete.

4 Numerical study

For simplicity, we numerically solve Problem 1.1 in the two-dimensional case. We set . To solve the forward problem, we approximate by the square , where is a large number. We then employ the explicit scheme to solve (1.1) on . In our computation, . We set the final time . In this setting, the nonlinear wave generated by an unknown source supported inside does not have enough time to propagate to . Therefore, the computed wave is the restriction of the solution to (1.1) on . Our implementation is based on the finite difference method, in which the step size in space is and the step size in time is . These step sizes in space and time are chosen such that . Thus, the wave computed by the explicit scheme is reliable.
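The explicit time-stepping and the CFL-type restriction on the step sizes can be sketched for the linear part of the model; the nonlinearity in (1.1) and the paper's exact parameter values are omitted, so all numbers below are assumptions for illustration:

```python
import numpy as np

# Explicit finite-difference scheme for the linear 2-D wave equation
# u_tt = u_xx + u_yy on a square with homogeneous Dirichlet boundary.
L, N = 5.0, 101                  # half-width of the square and grid size (assumed)
h = 2 * L / (N - 1)              # spatial step
dt = 0.4 * h                     # time step, dt/h < 1/sqrt(2) (CFL condition)
n_steps = 100

x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x, indexing="ij")
p = np.exp(-10 * (X ** 2 + Y ** 2))  # initial source u(x, 0) = p(x)
u_prev = p.copy()
u = p.copy()

# First step uses u_t(x, 0) = 0: u^1 = u^0 + (dt^2 / 2) * Laplacian(u^0).
lap = np.zeros_like(u)
lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                   - 4 * u[1:-1, 1:-1]) / h ** 2
u = u_prev + 0.5 * dt ** 2 * lap

# Leapfrog time-stepping: u^{n+1} = 2 u^n - u^{n-1} + dt^2 * Laplacian(u^n).
for _ in range(n_steps):
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                       - 4 * u[1:-1, 1:-1]) / h ** 2
    u_next = 2 * u - u_prev + dt ** 2 * lap
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0
    u_prev, u = u, u_next

print(np.isfinite(u).all())  # → True (stable under the CFL condition)
```

Violating the CFL condition (e.g., dt = h) makes the leapfrog scheme blow up, which is why the step sizes are constrained as described above.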

In this section, we set the noise level . That means the data we use are given by

for all

and rand is a function returning uniformly distributed random numbers in the range
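A common way to generate such noisy data, assumed here since the exact formula did not survive extraction, is a multiplicative perturbation g(1 + δ·rand) with rand uniform on [-1, 1]:

```python
import numpy as np

rng = np.random.default_rng(1)

def add_multiplicative_noise(g, delta):
    """Perturb data as g * (1 + delta * rand), where rand is uniform
    on [-1, 1]. This standard noise model is an assumption, not a
    quotation of the paper's formula."""
    rand = rng.uniform(-1.0, 1.0, size=g.shape)
    return g * (1.0 + delta * rand)

g = np.sin(np.linspace(0.0, np.pi, 200))    # clean boundary data (toy example)
g_noisy = add_multiplicative_noise(g, delta=0.05)

rel_err = np.linalg.norm(g_noisy - g) / np.linalg.norm(g)
print(rel_err < 0.05)  # → True (relative error bounded by delta)
```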

In all of our numerical examples, the stopping number in Step 2 of Algorithm 1 is set to 7. It is not necessary to set to a larger number because of the fast exponential convergence rate as , for some . See Theorem 3.1 and (3.8).

Implementing Algorithm 1 is straightforward. The most complicated part is Step 3, namely, minimizing the functional defined in (2.4). This step is equivalent to solving a linear equation subject to some boundary and initial constraints by the quasi-reversibility method. We employ the built-in optimization package in MATLAB via the command “lsqlin” for this step. The implementation is very similar to the one described in [44, Section 6]. We do not repeat the details of the implementation here. Although the norm in the regularization term is , in our computational program we use a simpler norm to simplify the computational scripts. This change does not affect the satisfactory outputs. The “artificial” parameters for all tests below are , and . These parameters are chosen numerically so that the reconstruction result is satisfactory for test 1. We then use these values for all other tests.
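A rough open-source analogue of this lsqlin step (with random stand-ins for the actual discretized operators) appends heavily weighted constraint rows to the least-squares system and solves it with NumPy:

```python
import numpy as np

# The quasi-reversibility step solves a linear equation L u = f together
# with boundary/initial constraints B u = g in the least-squares sense.
# One simple formulation: minimize ||L u - f||^2 + w^2 * ||B u - g||^2
# by stacking weighted constraint rows onto the system.
rng = np.random.default_rng(2)
L_op = rng.standard_normal((30, 15))   # stand-in for the discretized PDE operator
B = rng.standard_normal((5, 15))       # stand-in for boundary/initial conditions
f = rng.standard_normal(30)
g = rng.standard_normal(5)
w = 1e3                                # large weight enforcing the constraints

A = np.vstack([L_op, w * B])
rhs = np.concatenate([f, w * g])
u, *_ = np.linalg.lstsq(A, rhs, rcond=None)

print(np.linalg.norm(B @ u - g) < 1e-3)  # → True (constraints nearly satisfied)
```

MATLAB's lsqlin enforces such constraints exactly; the weighted-rows trick above is a simple approximation that keeps the problem an unconstrained least-squares solve.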

Regarding the initial solution , we have mentioned in Step 1 of Algorithm 1 that can be any function in . That means we can take any function in such that , and A natural way to compute such a function is to find a function satisfying