A Carleman-based numerical method for quasilinear elliptic equations with over-determined boundary data and applications

We propose a new iterative scheme to compute the numerical solution to an over-determined boundary value problem for a general quasilinear elliptic PDE. The main idea is to repeatedly solve its linearization by the quasi-reversibility method with a suitable Carleman weight function. The presence of the Carleman weight function allows us to employ a Carleman estimate to prove that the sequence generated by this iterative scheme converges to the desired solution. The iteration converges exponentially fast, without the need for a good initial guess. We apply this method to compute solutions to some general quasilinear elliptic equations and to a large class of first-order Hamilton-Jacobi equations. Numerical results are presented.


1 Introduction

The main aim of this paper is to develop a numerical method based on Carleman estimates to solve quasilinear elliptic PDEs with over-determined boundary data. We consider this new method the second generation of Carleman-based numerical methods; the first generation, called the convexification, will be discussed in detail later. Let Ω be an open and bounded domain in ℝ^d, d ≥ 2, with smooth boundary ∂Ω. Let f and g be two smooth functions defined on ∂Ω. Let F = F(x, s, p): Ω̄ × ℝ × ℝ^d → ℝ be a function in the class C¹. Let A = (a_{mn})_{m,n=1}^d be a C¹, symmetric, and positive definite matrix-valued function, that is,

A(x)ξ · ξ ≥ Λ|ξ|²  for all x ∈ Ω̄, ξ ∈ ℝ^d,

for some fixed Λ > 0. The following problem is of interest to us.

Problem 1.1.

Assume that the over-determined boundary value problem

div(A(x)∇u(x)) + F(x, u(x), ∇u(x)) = 0 in Ω,  u = f and ∂_ν u = g on ∂Ω,   (1.1)

has a solution u*. Compute the function u*.

Problem 1.1 is motivated by a class of nonlinear inverse problems for PDEs. One important goal of inverse problems is to reconstruct the internal structure of a domain from boundary measurements, which allows us to impose both the Dirichlet and Neumann data of the unknown u in (1.1). Recently, a unified framework to solve such inverse problems was developed by the research group of the first and second authors; it has two main steps. In the first step, by introducing a change of variables, one derives a PDE of the form (1.1) from the given inverse problem, in which f and g can be computed directly from the given boundary data. In the second step, one numerically solves (1.1) to find u. The knowledge of u directly yields that of the solution to the corresponding inverse problem under consideration. See [KhoaKlibanovLoc:SIAMImaging2020, LeNguyen:2020] and the references therein for some works in this framework. Moreover, this unified framework was successfully tested with experimental data in [VoKlibanovNguyen:IP2020, Khoaelal:IPSE2021, KlibanovLeNguyen:preprint2021]. Another motivation to study Problem 1.1 is to seek solutions to Hamilton-Jacobi equations under circumstances in which the Neumann data of the unknown can be computed from its Dirichlet data and the given form of the Hamiltonian, see e.g., [KlibanovNguyenTran:preprint2021, Assumption 1.1 and Remark 1.1]. Since inverse problems are beyond the scope of this paper, we only focus on the applications to solving quasilinear elliptic PDEs and first-order Hamilton-Jacobi equations.

A natural approach to solve (1.1) is based on optimization. That means one sets the computed solution to (1.1) as a minimizer of a mismatch functional, e.g.,

J(u) = ∫_Ω |div(A(x)∇u(x)) + F(x, u(x), ∇u(x))|² dx,

subject to the Cauchy boundary conditions u = f and ∂_ν u = g on ∂Ω. The methods based on optimization are widely used in the scientific community, especially in computational mathematics, physics, and engineering. Although effective and popular, the optimization-based approaches have some drawbacks:

  1. In general, it is not clear that the obtained minimizer approximates the true solution to (1.1).

  2. The mismatch functional is not convex, and it might have multiple minima and ravines (see an example in [ScalesSmithFischerLjcp1992] for illustration). To deliver reliable numerical solutions, one must have a good initial guess of the true solution. A toy sketch illustrating this non-convexity issue is given right after this list.

  3. The computation is expensive and time-consuming.
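The following minimal Python sketch is entirely our illustration (the 1D toy equation, the Cauchy data, and the discretization are assumptions, not the paper's test cases). It shows the plain, unweighted least-squares approach the drawbacks above refer to: a generic descent method applied to a non-convex mismatch functional from a poor starting point.

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration (all choices ours): solve u'' + u^3 = 0 on (0, 1)
# with over-determined Cauchy data u(0) = 1, u'(0) = 0 by minimizing
# an unweighted mismatch functional.
n = 50
h = 1.0 / (n - 1)

def mismatch(u):
    # discrete residual of u'' + u^3 = 0 at the interior nodes
    r = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2 + u[1:-1] ** 3
    # penalties enforcing the Cauchy data at x = 0
    bc = np.array([(u[0] - 1.0) / h, (u[1] - u[0]) / h - 0.0])
    return h * np.sum(r**2) + np.sum(bc**2)

# A generic descent method started from a poor initial guess: without a
# Carleman weight, nothing prevents it from stalling in a local minimum
# or a ravine of this non-convex functional.
res = minimize(mismatch, np.zeros(n), method="L-BFGS-B")
print("final mismatch value:", res.fun)
```

Depending on the starting point, such a method may stall far from the true solution, which is precisely what the Carleman-based methods discussed below are designed to avoid.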

Drawbacks #1 and #2 can be treated by the convexification method, which is designed to globalize the optimization methods. The main idea of the convexification method is to employ suitable Carleman weight functions to convexify the mismatch functionals. The convexity of the weighted mismatch functionals is rigorously proved via Carleman estimates. Several versions of the convexification method have been developed in [KlibanovNik:ra2017, KhoaKlibanovLoc:SIAMImaging2020, Klibanov:sjma1997, Klibanov:nw1997, Klibanov:ip2015, KlibanovLeNguyen:preprint2021] since it was first introduced in [KlibanovIoussoupova:SMA1995]. Moreover, we recently discovered that the convexification method can be used to solve a large class of first-order Hamilton-Jacobi equations [KlibanovNguyenTran:preprint2021].

In this paper, we introduce a new method to solve (1.1) based on linearization and Carleman estimates. Like the convexification method, our method delivers a reliable solution to (1.1) without requiring a good initial guess. This fact is rigorously proved. Unlike the convexification method, which is time-consuming, our new method quickly provides the desired solutions: its convergence rate is O(θⁿ) as n → ∞ for some θ ∈ (0, 1).

We find the numerical solution to (1.1) by repeatedly solving the linearization of (1.1) by a new "Carleman weighted" quasi-reversibility method. The classical quasi-reversibility method was first proposed in [LattesLions:e1969], and it has been studied intensively since then (see [Klibanov:anm2015] for a survey). By a Carleman weighted quasi-reversibility method, we mean that we include a suitable Carleman weight function in the cost functional suggested by the classical quasi-reversibility method. The presence of the Carleman weight function is the key for us to prove our convergence theorem. Our process to solve Problem 1.1 is as follows. We first choose an initial solution u_0, which might be far away from the true one. Linearizing (1.1) about u_0, we obtain a linear PDE. We then solve this linear PDE by the Carleman weighted quasi-reversibility method to obtain an updated solution u_1. Using the Carleman weighted quasi-reversibility method rather than the classical one in this step is the key to our success. By iterating this step, we construct a sequence (u_n)_{n≥0}. The convergence of this sequence to the true solution to (1.1) is proved by using Carleman estimates. We then apply this method to numerically solve some quasilinear elliptic equations. It is important to note that our approach also works well for systems of quasilinear elliptic PDEs.

Next, we use our method to solve some first-order Hamilton-Jacobi equations. More precisely, to find viscosity solutions to the first-order equation

H(x, ∇u(x)) = 0,  x ∈ Ω,

we use the vanishing viscosity procedure and consider, for ε > 0, the regularized equation

−εΔu_ε(x) + H(x, ∇u_ε(x)) = 0,  x ∈ Ω,

with given Cauchy boundary data. Our new method is robust in the sense that it works for a general Hamiltonian H that might not be convex in the gradient variable. We refer the readers to [CrandallLions83, CrandallEvansLions84, Tran19] and the references therein for the theory of viscosity solutions. A weakness of our new approach in computing the viscosity solutions to Hamilton-Jacobi equations is that we need to require both the Dirichlet and Neumann data of the unknown. See [KlibanovNguyenTran:preprint2021, Remark 1.1] for some circumstances in which this requirement is fulfilled. There have been many important methods to solve Hamilton-Jacobi equations in the literature. For finite difference monotone and consistent schemes for first-order equations and their applications, see [BS-num, CL-rate, OsFe, Sethian, Sou1] for details and recent developments. If H is convex in the gradient variable and satisfies appropriate conditions, it is possible to construct semi-Lagrangian approximations by discretizing the dynamic programming principle associated to the problem; see [FaFe1, FaFe2] and the references therein. A classical one-dimensional example of the vanishing viscosity selection is sketched below.
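The following classical eikonal model problem (our illustration, not one of the paper's tests) shows how the vanishing viscosity procedure singles out the viscosity solution among the many almost-everywhere solutions:

```latex
% Eikonal model problem on (-1, 1):
|u'(x)| = 1 \quad \text{in } (-1,1), \qquad u(-1) = u(1) = 0.
% Among the infinitely many a.e. solutions, the vanishing viscosity
% approximations
-\varepsilon u_\varepsilon'' + |u_\varepsilon'| = 1, \qquad u_\varepsilon(\pm 1) = 0,
% converge, as \varepsilon \to 0^+, to the viscosity solution
u(x) = 1 - |x|,
% i.e., the distance function to the boundary.
```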

The paper is organized as follows. In Section 2, we recall a Carleman estimate and an example in which Problem 1.1 appears. In Section 3, we introduce the new iterative method based on linearization and Carleman estimates. In Section 4, we prove the convergence of our method. Some numerical results for quasilinear elliptic equations and first-order Hamilton-Jacobi equations are presented in Section 5. Concluding remarks are given in Section 6.

2 Preliminaries

In this section, we recall a Carleman estimate, which plays a key role in the proof of the convergence theorem in this paper. We then present an inverse scattering problem in which Problem 1.1 appears.

2.1 A Carleman estimate

In this section, we present a simple form of Carleman estimates. Carleman estimates were first employed to prove unique continuation principles, see e.g., [Carleman:1933, Protter:1960AMS], and they quickly became a powerful tool in many areas of PDEs afterwards. Fix a point x₀ ∈ ℝ^d ∖ Ω̄, so that |x − x₀| > 0 for all x ∈ Ω̄. For each λ > 0, define the Carleman weight function

(2.1)
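The precise formula in (2.1) is not reproduced above. For orientation, weights of the following general shape are typical in this line of work; this is an illustrative assumption on our part, not necessarily the paper's exact choice:

```latex
\mu_\lambda(x) = e^{2\lambda \varphi(x)}, \qquad
\varphi(x) = |x - x_0|^{-\beta}, \quad \beta > 0,
```

where λ > 0 is the large Carleman parameter. What matters for the analysis is that μ_λ weighs different parts of Ω very unevenly, so that the dominant contributions in the estimate can absorb the troublesome lower-order terms.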

We have the following lemma.

Lemma 2.1 (Carleman estimate).

There exist positive constants, depending only on Ω, x₀, d, and the matrix A, such that for every function v satisfying

(2.2)

the following estimate holds true:

(2.3)

for all sufficiently large values of the Carleman parameters. Here, the constant in the estimate depends only on the listed parameters.

Proof.

Lemma 2.1 is a direct consequence of [MinhLoc:tams2015, Lemma 5]. One first extends the function under consideration to the whole space ℝ^d. Using a change of variables and [MinhLoc:tams2015, Lemma 5], one obtains, for all sufficiently large values of the Carleman parameters, an estimate of the form

(2.4)

for some constant depending only on the data. Allowing the constant to depend additionally on Ω and on the extension, we deduce the Carleman estimate (2.3) from (2.4). ∎

An alternative way to obtain (2.3) is to apply the Carleman estimate in [Lavrentiev:AMS1986, Chapter 4, §1, Lemma 3] for general parabolic operators. The arguments to obtain (2.3) using [Lavrentiev:AMS1986, Chapter 4, §1, Lemma 3] are similar to those in [LeNguyenNguyenPowell:JOSC2021, Section 3], with the Laplacian replaced by the operator div(A∇·).

Remark 2.1.

We especially draw the reader's attention to the different forms of Carleman estimates for all three main kinds of differential operators (elliptic, parabolic, and hyperbolic) and their applications in inverse problems and computational mathematics [BeilinaKlibanovBook, BukhgeimKlibanov:smd1981, KlibanovLiBook]. It is worth mentioning that some Carleman estimates hold true for all functions v satisfying v = ∂_ν v = 0 on Γ, where Γ is only a part of ∂Ω, see e.g., [KlibanovNguyenTran:preprint2021, NguyenLiKlibanov:IPI2019]; such estimates can be used to solve quasilinear elliptic PDEs when the boundary data are given only on part of the boundary.

2.2 An inverse scattering problem

As mentioned in Section 1, Problem 1.1 arises from nonlinear inverse problems. We present here an important example in the context of inverse scattering problems in the frequency domain. Let c = c(x) be the spatially distributed dielectric constant of the medium and let [k̲, k̄] ⊂ (0, ∞) be an interval of wavenumbers with k̲ > 0. Since the dielectric constant of the air is 1, we set c(x) = 1 for all x outside Ω. For each k ∈ [k̲, k̄], let u(x, k), x ∈ ℝ^d, be the wave field generated by a point source located at a fixed point x_s outside Ω̄, with wavenumber k. The function u is governed by the Helmholtz equation and the Sommerfeld radiation condition

Δu(x, k) + k² c(x) u(x, k) = −δ(x − x_s),  x ∈ ℝ^d,
lim_{|x|→∞} |x|^{(d−1)/2} ( ∂_{|x|} u(x, k) − i k u(x, k) ) = 0.   (2.5)

The inverse scattering problem is formulated as the problem of computing the function c(x), x ∈ Ω, from the measurements of

f(x, k) = u(x, k) and g(x, k) = ∂_ν u(x, k),  x ∈ ∂Ω,   (2.6)

for all k ∈ [k̲, k̄]. The knowledge of the function c partly provides the internal structure of the domain Ω. In other words, solving the inverse scattering problem allows us to examine a domain from external measurements, which has applications in security, sonar imaging, geophysical exploration, medical imaging, near-field optical microscopy, and nano-optics; see, e.g., [ColtonKress:2013] and references therein for more details.

Remark 2.2.

In practice, measuring the wave field is expensive, and one does not measure both u and ∂_ν u on ∂Ω directly. Rather, one measures the wave on a surface far away from Ω; we call the information of u on that surface the far field. Then, we employ a technique called "data propagation" to compute the near field on ∂Ω. This data propagation is based on the angular spectrum representation; see, e.g., [NovotnyHecht:cup2012, Chapter 2] and [LiemKlibanovLocAlekFiddyHui:jcp2017, Section 4.2.1] for details of the implementation. This technique allows us to compute the wave field in the region between the measurement surface and ∂Ω. The knowledge of f and g then follows.

There have been many important methods to solve inverse scattering problems in the literature. Each method has its own advantages and disadvantages. A common drawback of the widely used optimization-based methods for inverse scattering problems is the need for a good initial guess of the true solution c. We recall from [LeNguyen:preprint2021] a method to solve the above inverse scattering problem in which such a need is relaxed. One introduces from u an auxiliary function that satisfies a PDE in which the unknown coefficient c no longer appears explicitly.

Let (Ψ_n)_{n≥1} be the orthonormal basis of L²(k̲, k̄) introduced in [Klibanov:jiip2017] and define the coefficients v_n, n ≥ 1, of the auxiliary function with respect to this basis:

(2.7)

We approximate the auxiliary function by the partial sum of its series with respect to (Ψ_n)_{n≥1}, for a suitable cut-off number N. Then, the vector V = (v_1, …, v_N) of coefficients "approximately" satisfies a system of quasilinear elliptic equations

(2.8)

whose coefficients and nonlinearity are computed explicitly from the basis (Ψ_n)_{n≥1}; see [LeNguyen:preprint2021, Section 6] for details. The Dirichlet and Neumann boundary conditions for V can be computed from the knowledge of f, g, and (2.7). Solving the system of quasilinear elliptic equations (2.8) with the provided Dirichlet and Neumann data is basically a goal of Problem 1.1. Doing so is the key step to compute c. See [LeNguyen:preprint2021] for the convexification method to compute V and for the procedure to obtain c from the knowledge of V. A small numerical sketch of the truncation step is given below.
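The following Python sketch illustrates the truncation step (2.7) using normalized Legendre polynomials as a stand-in orthonormal basis of L²(k̲, k̄); the actual basis of [Klibanov:jiip2017] is a different, specially constructed one, and the interval and cut-off N below are hypothetical values of ours.

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical wavenumber interval and cut-off number
kmin, kmax, N = 1.0, 2.0, 10

nodes, weights = legendre.leggauss(60)                 # Gauss-Legendre rule
k = 0.5 * (kmax - kmin) * nodes + 0.5 * (kmax + kmin)  # map nodes to (kmin, kmax)
w = 0.5 * (kmax - kmin) * weights

def psi(n, k):
    """n-th normalized Legendre polynomial on (kmin, kmax) (illustrative basis)."""
    t = (2.0 * k - kmin - kmax) / (kmax - kmin)
    c = np.zeros(n + 1); c[n] = 1.0
    return legendre.legval(t, c) * np.sqrt((2 * n + 1) / (kmax - kmin))

v = np.sin(3.0 * k)  # placeholder for the auxiliary function at one fixed x
coeffs = [np.sum(w * v * psi(n, k)) for n in range(N)]      # v_n as in (2.7)
v_trunc = sum(c * psi(n, k) for n, c in enumerate(coeffs))  # truncated series
print("max truncation error:", np.max(np.abs(v - v_trunc)))
```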

We refer the reader to [KhoaKlibanovLoc:SIAMImaging2020, Khoaelal:IPSE2021, VoKlibanovNguyen:IP2020] for another setup of the inverse scattering problem and for numerical results with experimental data. Since inverse problems are beyond the scope of this paper, we mention this example only to explain the significance of Problem 1.1. In future works, we will use our solver for Problem 1.1 to solve inverse problems.

3 The iteration and linearization approach for Problem 1.1

Our approach to solve (1.1) is based on linearization and iteration. Assume that the solution to (1.1) is in the space H^p(Ω) for some p exceeding the smallest integer that is greater than d/2 (this regularity is used via the Sobolev embedding theorem). Define the set of admissible solutions

(3.1)

Then, the assumption in Problem 1.1 implies that the admissible set is nonempty and contains the true solution u*. We now construct a sequence (u_n)_{n≥0} that converges to the solution to (1.1). Take a function u_0 in the admissible set. Assume by induction that we have the knowledge of u_n for some n ≥ 0. We find u_{n+1} as follows. Assume that u_{n+1} = u_n + h for some function h.

Plugging u_{n+1} = u_n + h into (1.1), we have

div(A(x)∇(u_n + h)) + F(x, u_n + h, ∇u_n + ∇h) = 0.   (3.2)

Heuristically, we assume at this moment that h is small. This assumption plays a role in establishing the numerical scheme to find u_{n+1}, while it does not play a key role in the proof of the convergence theorem. By Taylor's expansion, we approximate (3.2) as

div(A(x)∇h) + ∂_s F(x, u_n, ∇u_n) h + ∇_p F(x, u_n, ∇u_n) · ∇h = −div(A(x)∇u_n) − F(x, u_n, ∇u_n)   (3.3)

for all x ∈ Ω. Here, ∂_s F and ∇_p F are the partial derivative of F with respect to its second variable and its gradient vector with respect to its third variable, respectively.
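To make the linearization (3.3) concrete, consider the model nonlinearity F(x, s, p) = s³ (our illustrative choice, not one of the paper's test cases). Then

```latex
F(x, u_n + h, \nabla u_n + \nabla h) = (u_n + h)^3
  = u_n^3 + 3 u_n^2\, h + O(h^2),
\qquad
\partial_s F(x, u_n, \nabla u_n) = 3 u_n^2, \quad \nabla_p F = 0,
```

so, dropping the O(h²) term, (3.3) reads div(A∇h) + 3u_n² h = −div(A∇u_n) − u_n³.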

The next step is to compute a function h satisfying (3.3). Since there is no guarantee for the existence of such a function h, we only compute a "best fit" function by the Carleman-based quasi-reversibility method described below. For each n ≥ 0, λ > 0, and ε > 0, define the functional

J_n(h) = ∫_Ω μ_λ(x) |div(A(x)∇h) + ∂_s F(x, u_n, ∇u_n) h + ∇_p F(x, u_n, ∇u_n) · ∇h + div(A(x)∇u_n) + F(x, u_n, ∇u_n)|² dx + ε‖h‖².

Here, the weight μ_λ is defined in (2.1) and ε‖h‖² is a regularization term.

For each n, we minimize J_n over the increments h compatible with the boundary data, so that u_n + h still satisfies the Cauchy conditions in (1.1). The unique minimizer is the desired function h_n. We then set

u_{n+1} = u_n + h_n.   (3.4)

The construction of the sequence (u_n)_{n≥0} above is summarized in Algorithm 1. We will prove in Section 4 that (u_n)_{n≥0} converges to the true solution as n → ∞, for λ sufficiently large. The presence of the Carleman weight μ_λ is the key point that allows us to prove this convergence result.

1:  Choose a threshold number η > 0. Choose an arbitrary initial solution u_0.
2:  Set n = 0.
3:  Solve the linear equation (3.3) for a function h_n by minimizing the functional J_n. Set u_{n+1} = u_n + h_n.
4:  if ‖h_n‖ > η then
5:     Replace n by n + 1.
6:     Go back to Step 3.
7:  else
8:     Set the computed solution u_comp = u_{n+1}.
9:  end if
Algorithm 1 The procedure to compute the numerical solution to (1.1)
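The following Python sketch mirrors the outer loop of Algorithm 1 on a 1D toy problem. Every concrete choice here, including the equation, the domain, the exponential weight, and all parameter values, is our illustrative assumption, not the paper's setup; in particular the weight exp(2λφ) with φ(x) = −x merely stands in for the Carleman weight (2.1).

```python
import numpy as np

# Toy model: u'' + u^3 = s(x) on (0,1) with over-determined Cauchy data,
# manufactured so that the true solution is u*(x) = sin(pi x).
n, lam, eps, eta = 101, 1.0, 1e-8, 1e-10
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
s = -np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x) ** 3
w = np.exp(2.0 * lam * (-x))    # stand-in exponential weight

def residual(u):
    # interior residual of u'' + u^3 - s
    r = np.zeros_like(u)
    r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2 + u[1:-1] ** 3 - s[1:-1]
    return r

def step(u):
    """One weighted quasi-reversibility solve of the linearization (3.3)."""
    rows, rhs = [], []
    r = residual(u)
    for i in range(1, n - 1):   # weighted linearized interior equations
        row = np.zeros(n)
        row[i - 1] = row[i + 1] = 1.0 / dx**2
        row[i] = -2.0 / dx**2 + 3.0 * u[i] ** 2
        rows.append(np.sqrt(w[i]) * row); rhs.append(-np.sqrt(w[i]) * r[i])
    pen = 1.0 / dx              # penalize the Cauchy data for u + h
    e = np.eye(n)
    du = np.diff(u) / dx
    for row, val in [
        (e[0], -u[0]), (e[-1], -u[-1]),                    # Dirichlet data
        ((e[1] - e[0]) / dx, np.pi - du[0]),               # Neumann at 0
        ((e[-1] - e[-2]) / dx, -np.pi - du[-1]),           # Neumann at 1
    ]:
        rows.append(pen * row); rhs.append(pen * val)
    A = np.vstack(rows + [np.sqrt(eps) * np.eye(n)])       # regularization
    b = np.concatenate([rhs, np.zeros(n)])
    return np.linalg.lstsq(A, b, rcond=None)[0]

u = np.zeros(n)                 # arbitrary (poor) initial guess
for it in range(50):            # Steps 2-7 of Algorithm 1
    h = step(u)
    u = u + h
    if np.sqrt(dx) * np.linalg.norm(h) < eta:   # threshold test
        break
print(it + 1, "iterations, L2 error:",
      np.sqrt(dx) * np.linalg.norm(u - np.sin(np.pi * x)))
```

The design point to note is that each pass through Step 3 only solves a linear least-squares problem; the nonlinearity enters solely through the coefficients and the right-hand side evaluated at the current iterate.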

4 The convergence analysis

In this section, we prove that the sequence generated by Algorithm 1 converges to the solution to (1.1). The following result is the main theorem in this paper.

Theorem 4.1.

Assume that F, together with its derivatives ∂_s F and ∇_p F, is bounded (see Remark 4.1 for the removal of this boundedness condition). Assume further that (1.1) has a unique solution u*. Let (u_n)_{n≥0} be the sequence generated by Algorithm 1, where u_0 is chosen arbitrarily. Then, there exist λ₀ > 0 and θ ∈ (0, 1) such that, for all n ≥ 1,

(4.1)

Here, the constant θ depends only on the parameters listed above.

Proof.

Fix n ≥ 0 and set h_n = u_{n+1} − u_n. Due to Step 3 in Algorithm 1, h_n is the minimizer of J_n. By the variational principle, we have

(4.2)

for all admissible test functions. Since u_{n+1} = u_n + h_n, we can rewrite (4.2) as

(4.3)

for all admissible test functions. As u* is the solution to (1.1), we have

(4.4)

for all admissible test functions. It follows from (4.3) and (4.4) that

(4.5)

Taking the test function to be u_{n+1} − u*, it follows from (4.5) that

(4.6)

Since F has bounded derivatives, we can estimate the remainder terms arising from the Taylor expansion. These estimates, together with an elementary inequality, imply

(4.7)

Applying the Carleman estimate (2.3) to the function u_{n+1} − u*, we have

(4.8)

Combining (4.7) and (4.8), we have

(4.9)

Taking λ sufficiently large, we can simplify (4.9) as

(4.10)

Recall that θ ∈ (0, 1). We get from (4.10) that

(4.11)

Applying (4.11) repeatedly and arguing by induction, we obtain

(4.12)

which implies (4.1). The proof is complete. ∎
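For the reader's convenience, the induction that yields (4.12) from the one-step estimate (4.11) is the standard unrolling of a contraction bound. In generic notation of ours, with a_n standing for the squared weighted error at step n and θ, η for the contraction factor and the regularization contribution (the exact constants are those in (4.10)–(4.11)):

```latex
a_{n+1} \le \theta\, a_n + \eta, \quad 0 < \theta < 1
\;\Longrightarrow\;
a_n \le \theta^{\,n} a_0 + \eta \sum_{j=0}^{n-1} \theta^{\,j}
     \le \theta^{\,n} a_0 + \frac{\eta}{1-\theta},
```

so the error decays exponentially in n, up to a fixed small remainder controlled by the regularization.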

Remark 4.1 (Removing the boundedness condition on F in Theorem 4.1).

In the case when F is not bounded, we need to assume that we know in advance that the true solution to (1.1) lies in a ball of radius M for some M > 0. This assumption does not weaken the result since M can be arbitrarily large. Define a suitable smooth cut-off function χ that equals 1 on this ball and vanishes outside a larger ball, and set the truncated nonlinearity to be χF, where the cut-off acts in the (s, p) variables. Since the cut-off equals 1 near the true solution, it is obvious that the true solution also solves the problem

(4.13)

Now, we can apply Algorithm 1 to (4.13) to compute the true solution.
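For illustration, one standard smooth cut-off with the properties needed here (our generic choice; the paper's exact formula is not reproduced above) is built from the usual bump profile:

```latex
\psi(s) = \begin{cases} e^{-1/s}, & s > 0,\\[2pt] 0, & s \le 0, \end{cases}
\qquad
\chi(t) = \frac{\psi(2M - |t|)}{\psi(2M - |t|) + \psi(|t| - M)} .
```

Then χ ∈ C^∞(ℝ), 0 ≤ χ ≤ 1, χ ≡ 1 for |t| ≤ M, and χ ≡ 0 for |t| ≥ 2M, so multiplying F by a cut-off of this type leaves the equation unchanged near the true solution.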

Remark 4.2.

Theorem 4.1 and estimate (4.1) rigorously guarantee that the sequence generated by Algorithm 1 converges to the true solution at an exponential rate. This fact is confirmed by our numerical results in Section 5.

Remark 4.3.

As seen in the proof of Theorem 4.1, the efficiency of Algorithm 1 is guaranteed by the Carleman estimate (2.3). Therefore, we call the proposed method described in Algorithm 1 the second generation of Carleman-based numerical methods. The first generation of Carleman-based numerical methods, called the convexification, was developed in [KlibanovIoussoupova:SMA1995]. See [KlibanovNik:ra2017, KlibanovNguyenTran:preprint2021, LeNguyen:2020, LeNguyen:preprint2021] for some follow-up results. Like the convexification method, Algorithm 1 can be used to compute solutions to nonlinear PDEs without requiring a good initial guess. The advantage of our new method is its fast convergence rate; see Remark 4.2.

5 Numerical study

In this section, we present some numerical results obtained by Algorithm 1. For simplicity, we set d = 2 and take Ω to be a square. On Ω̄ we arrange a uniform N × N grid of points x_{ij}, 1 ≤ i, j ≤ N, with mesh size h; the value of N is fixed throughout our numerical scripts.
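A minimal Python sketch of such a grid follows; the square (−1, 1)² and the value N = 80 are placeholders we chose for illustration, not values from the paper.

```python
import numpy as np

N = 80                          # hypothetical grid resolution
xs = np.linspace(-1.0, 1.0, N)  # hypothetical square domain (-1, 1)^2
h = xs[1] - xs[0]               # mesh size
X, Y = np.meshgrid(xs, xs, indexing="ij")  # grid points x_{ij} = (xs[i], xs[j])
```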

In Step 1 of Algorithm 1, we choose an initial function u_0. It is natural to take u_0 to satisfy the linear equation obtained by removing the nonlinearity F from (1.1). We apply the Carleman-based quasi-reversibility method to do so. That means u_0 is the minimizer of

(5.1)

subject to the boundary conditions in (5.7). To simplify the implementation, the norm in the regularization term is a lower-order Sobolev norm rather than the stronger norm used in the analysis. This change does not affect the performance of Algorithm 1, which still provides satisfactory solutions to (1.1).

Remark 5.1.

We employ the Carleman-based quasi-reversibility method to find for the consistency to Step 3 of Algorithm 1. Since can be chosen arbitrarily, one can use the quasi-reversibility method without the presence of the Carleman weight function . In our computation for all numerical examples below, , and The regularized parameter is . The threshold number

We minimize the functional in (5.1) by the least-squares MATLAB routine "lsqlin". The implementation of the quasi-reversibility method for minimizing functionals more general than this one was described in [LeNguyen:2020, §5.3] and [Nguyen:CAMWA2020, §5]; we do not repeat it here. In Step 3 of Algorithm 1, given u_n, we minimize the functional J_n in the same manner. Again, we refer the reader to [LeNguyen:2020, §5.3] and [Nguyen:CAMWA2020, §5] for details of the implementation. The scripts for the other steps of Algorithm 1 are straightforward.
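For readers without MATLAB, here is a rough open-source analogue of this least-squares step (our sketch, not the authors' script). It minimizes the weighted, regularized residual with scipy, assuming the matrix Amat and right-hand side b have already been assembled from the discretization of (3.3) and that w holds the values of the Carleman weight at the grid points. Unlike lsqlin, this sketch imposes no constraints; boundary conditions would be handled by penalization as extra rows.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def weighted_regularized_lsq(Amat, b, w, eps):
    """Minimize ||diag(sqrt(w)) (Amat h - b)||^2 + eps ||h||^2,
    a stand-in for the lsqlin call mentioned in the text."""
    Amat = sp.csr_matrix(Amat)
    W = sp.diags(np.sqrt(w))
    # stack the regularization block under the weighted system
    A_aug = sp.vstack([W @ Amat,
                       np.sqrt(eps) * sp.identity(Amat.shape[1], format="csr")])
    b_aug = np.concatenate([np.sqrt(w) * b, np.zeros(Amat.shape[1])])
    return lsqr(A_aug, b_aug)[0]
```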

5.1 Quasilinear elliptic equations

The convexification method, first introduced in [KlibanovIoussoupova:SMA1995], was used to numerically solve quasilinear elliptic equations in [KlibanovNik:ra2017, KlibanovNguyenTran:preprint2021, LeNguyen:preprint2021]. Our approach here is different: it is based on iteration and linearization. In particular, Step 3 in Algorithm 1 is very efficient because we only need to solve the linear equation (3.3), as opposed to directly solving a nonlinear quasilinear elliptic equation. In this subsection, we present two numerical tests. In both tests, we choose the same fixed matrix A.

In test 1, we solve (1.1) with the nonlinearity F given by

(5.2)

and with the boundary conditions given by

(5.3)

The true solution of (1.1) is a known smooth function.

Figure 1: Test 1. The numerical solution to (1.1) where F is given in (5.2) and the boundary data are given in (5.3). (a) The computed function; (b) the relative error; (c) the convergence history, with the iteration number on the horizontal axis.

It is evident from Figure 1 that Algorithm 1 provides a highly accurate solution in test 1; the relative error is reported in Figure 1(b). One can see from Figure 1(c) that Algorithm 1 converges at the third iteration.

In test 2, we solve (1.1) when