On stable invertibility and global Newton convergence for convex monotonic functions

We derive a simple criterion that ensures uniqueness, Lipschitz stability and global convergence of Newton's method for finite-dimensional inverse problems with a continuously differentiable, componentwise convex and monotonic forward function. Our criterion merely requires evaluating the directional derivative of the forward function at finitely many evaluation points and in finitely many directions. Using a relation to monotonicity and localized potentials techniques for inverse coefficient problems in elliptic PDEs, we then show that a discretized inverse Robin transmission problem always fulfills our criterion if enough measurements are used. Our result thus enables us to determine those boundary measurements from which an unknown coefficient can be uniquely and stably reconstructed, with a given desired resolution, by a globally convergent Newton iteration.


1 Disclaimer

This is a preliminary draft version. It is missing references, an introduction and numerical examples. It also requires significant proofreading and polishing.

2 Uniqueness, stability and global convergence of the Newton method

We consider a continuously differentiable, componentwise convex and monotonic function

 F: U ⊆ ℝ^n → ℝ^m,

where n, m ∈ ℕ, n ≤ m, and U ⊆ ℝ^n is a convex open set. In this section, we will derive a simple criterion that implies injectivity of F on a multidimensional interval, allows us to estimate the Lipschitz stability constant of its left inverse and, for n = m, ensures global convergence of Newton's method.

Remark

Throughout this work, "≤" is always understood componentwise for finite-dimensional vectors and matrices, and "≰" denotes the converse, i.e., x ≰ y means that x − y has at least one positive entry.

Monotonicity and convexity are understood with respect to this componentwise partial order, i.e., F is monotonic if

 x ≤ y implies F(x) ≤ F(y) for all x, y ∈ U,

and F is convex if

 F((1−t)x + ty) ≤ (1−t)F(x) + tF(y) for all x, y ∈ U, t ∈ [0,1].

For continuously differentiable functions, it is easily shown that monotonicity is equivalent to

 F′(x)y ≥ 0 for all x ∈ U, 0 ≤ y ∈ ℝ^n, (1)

and thus equivalent to F′(x) ≥ 0 for all x ∈ U. It is also well known (cf., e.g., [7, Thm. 13.3.2]) that convexity is equivalent to

 F(y) − F(x) ≥ F′(x)(y − x) for all x, y ∈ U. (2)

All the proofs in this section use the monotonicity and convexity assumptions in the forms (1) and (2).
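As a quick illustration of (1) and (2), the following sketch (ours, not from the paper) numerically spot-checks both properties for the hypothetical example F(x) = A eˣ, applied componentwise, with a componentwise nonnegative matrix A:

```python
import numpy as np

# Hypothetical example (not from the paper): F(x) = A @ exp(x) with A >= 0
# is componentwise monotonic and convex, since its Jacobian A @ diag(exp(x))
# is entrywise nonnegative (condition (1)) and each component of F is a
# nonnegative combination of convex functions.
rng = np.random.default_rng(0)
A = rng.random((4, 3))          # nonnegative m x n matrix, here m = 4, n = 3

def F(x):
    return A @ np.exp(x)

def Fprime(x):
    return A * np.exp(x)        # A @ diag(exp(x)), written via broadcasting

# Spot-check (1): F'(x) y >= 0 for componentwise nonnegative directions y.
x = rng.standard_normal(3)
y = rng.random(3)
assert np.all(Fprime(x) @ y >= 0)

# Spot-check (2): F(z) - F(x) >= F'(x)(z - x), componentwise.
z = rng.standard_normal(3)
assert np.all(F(z) - F(x) >= Fprime(x) @ (z - x) - 1e-12)
```

The small tolerance in the last assertion only guards against floating-point round-off; the inequality itself is exact.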

Throughout this work, we denote by e_j the j-th unit vector in ℝ^n, j ∈ {1,…,n}, by e′_j := 1 − e_j its complement, and by 1 := (1,…,1)^T the all-ones vector. I_n denotes the n-dimensional identity matrix, and 11^T is the matrix containing 1 in all of its entries.

2.1 A simple criterion for uniqueness and Lipschitz stability

Before we state our result in its final form in subsection 2.3, we derive two weaker results that motivate our arguments and may be of independent interest. We first show a simple criterion that yields injectivity of F and allows us to estimate the Lipschitz stability constant of its left inverse.

Theorem 2.1. Let F: U ⊆ ℝ^n → ℝ^m, n ≤ m, be a continuously differentiable, componentwise convex and monotonic function on a convex open set U containing [−1, 3]^n. If

 F′(−e_j + 3e′_j)(e_j − 3e′_j) ≰ 0 for all j ∈ {1,…,n}, (3)

then

1. F is injective on [0,1]^n,

2. F′(x) is injective for all x ∈ [0,1]^n.

3. With

 L := 2 (min_{j=1,…,n} max_{k=1,…,n} e_k^T F′(−e_j + 3e′_j)(e_j − 3e′_j))^{−1} > 0, (4)

we have that for all x, y ∈ [0,1]^n

 ∥x − y∥_∞ ≤ L ∥F(x) − F(y)∥_∞, and ∥F′(x)^{−1}∥_∞ ≤ L,

where F′(x)^{−1} denotes the left inverse of F′(x).
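Criterion (3) and the constant L from (4) can be checked numerically for a concrete forward map. The following sketch is our illustration (the map F(x) = A eˣ and the matrix A are hypothetical examples, not from the paper); note that (3) is demanding and here requires A to be very strongly diagonally dominant:

```python
import numpy as np

# Hypothetical example: F(x) = A @ exp(x) with a componentwise nonnegative,
# strongly diagonally dominant A. Condition (3) evaluates one directional
# derivative per index j and only asks for at least one positive entry.
n = 3
A = np.eye(n) + 1e-4 * np.ones((n, n))

def Fprime(x):
    return A * np.exp(x)            # Jacobian of F(x) = A @ exp(x)

vals = []
for j in range(n):
    e_j = np.eye(n)[j]
    e_j_prime = 1.0 - e_j           # e'_j = 1 - e_j, as in the paper
    v = Fprime(-e_j + 3 * e_j_prime) @ (e_j - 3 * e_j_prime)
    assert v.max() > 0              # (3): F'(-e_j+3e'_j)(e_j-3e'_j) is not <= 0
    vals.append(v.max())            # max_k e_k^T F'(-e_j+3e'_j)(e_j-3e'_j)

L = 2.0 / min(vals)                 # the Lipschitz constant L from (4)
assert L > 0
```

For this particular A, each maximum in (4) is attained in the j-th row, and L evaluates to roughly 5.6.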

Proof. We first note that (2) also implies that

 F′(y)(y − x) ≥ F(y) − F(x) ≥ F′(x)(y − x) for all x, y ∈ U.

Let j ∈ {1,…,n}. Writing ~x^{(j)} := −e_j + 3e′_j, we have that

 e_j − 3e′_j ≤ x − ~x^{(j)} ≤ 2e_j − 2e′_j for all x ∈ [0,1]^n.

Thus we deduce from (1) and (2)

 F′(x)(e_j − e′_j) = (1/2) F′(x)(2e_j − 2e′_j) ≥ (1/2) F′(x)(x − ~x^{(j)}) ≥ (1/2)(F(x) − F(~x^{(j)})) ≥ (1/2) F′(~x^{(j)})(x − ~x^{(j)}) ≥ (1/2) F′(~x^{(j)})(e_j − 3e′_j).

With the definition of L in (4) this shows that

 max_{k=1,…,n} e_k^T F′(x)(e_j − e′_j) ≥ L^{−1} for all x ∈ [0,1]^n, j ∈ {1,…,n}. (5)

To prove injectivity of F on [0,1]^n and the Lipschitz bound on its inverse, let x, y ∈ [0,1]^n with x ≠ y. Then at least one of the entries of (y − x)/∥y − x∥_∞ must be either 1 or −1.

1. In the case that (y_j − x_j)/∥y − x∥_∞ = 1 for some j ∈ {1,…,n}, we have that

 (y − x)/∥y − x∥_∞ ≥ e_j − e′_j,

so that we obtain using (2) and (1)

 (F(y) − F(x))/∥y − x∥_∞ ≥ F′(x)(y − x)/∥y − x∥_∞ ≥ F′(x)(e_j − e′_j).

Using (5) this shows that

 ∥F(y) − F(x)∥_∞ / ∥y − x∥_∞ ≥ L^{−1}.

2. In the case that (y_j − x_j)/∥y − x∥_∞ = −1 for some j ∈ {1,…,n}, we use the first case with interchanged roles of x and y and also obtain

 ∥F(y) − F(x)∥_∞ / ∥y − x∥_∞ ≥ L^{−1}.

With the same arguments, we obtain that for all x ∈ [0,1]^n and 0 ≠ y ∈ ℝ^n, at least one of the entries of y/∥y∥_∞ must be either 1 or −1, so that there exists j ∈ {1,…,n} with either y/∥y∥_∞ ≥ e_j − e′_j or −y/∥y∥_∞ ≥ e_j − e′_j.

In both cases it follows from (5) that

 ∥F′(x)y∥_∞ / ∥y∥_∞ ≥ max_{k=1,…,n} e_k^T F′(x)(e_j − e′_j) ≥ L^{−1}.

This proves injectivity of F′(x) and the Lipschitz bound on its left inverse.

2.2 A simple criterion for global convergence of the Newton iteration

We will now show that we can also ensure that a convex monotonic function has a unique zero, and that the Newton method converges globally to this zero.

Theorem 2.2. Let F: U ⊆ ℝ^n → ℝ^m, n ≤ m, be a continuously differentiable, componentwise convex and monotonic function on a convex open set U.

If [−2, n(n+3)]^n ⊆ U, and

 F′(z^{(j)}) d^{(j)} ≰ 0 for all j ∈ {1,…,n}, (6)

with

 z^{(j)} := −2e_j + n(n+3)e′_j, and d^{(j)} := e_j − (n² + 3n + 1)e′_j,

then the following holds:

1. F is injective on [−1, n]^n, F′(x) is injective for all x ∈ [−1, n]^n, and for all x, y ∈ [−1, n]^n

 ∥x − y∥_∞ ≤ L ∥F(x) − F(y)∥_∞, and ∥F′(x)^{−1}∥_∞ ≤ L, (7)

where

 L := (n+2)(min_{j=1,…,n} max_{k=1,…,n} e_k^T F′(z^{(j)}) d^{(j)})^{−1} > 0. (8)
2. If, additionally, n = m and F(1) ≥ 0 ≥ F(0), then there exists a unique

 ^x ∈ (−1/(n−1), 1 + 1/(n−1))^n ⊂ (−1, 2)^n with F(^x) = 0.

The Newton iteration sequence

 x^{(k+1)} := x^{(k)} − F′(x^{(k)})^{−1} F(x^{(k)}), with initial value x^{(0)} := 1, (9)

is well defined (i.e., F′(x^{(k)}) is invertible in each step) and converges to ^x.

For all k ∈ ℕ₀,

 x^{(k)} ∈ (−1, n)^n and 0 ≤ M^x ≤ Mx^{(k+1)} ≤ Mx^{(k)} ≤ Mx^{(0)} = (n+1)1,

where M := 11^T + I_n.

The rate of convergence of (x^{(k)})_{k∈ℕ} is superlinear. If F′ is locally Lipschitz continuous in a neighbourhood of ^x, then the rate of convergence is quadratic.
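To make the iteration concrete, here is a minimal sketch (ours; the map F(x) = A eˣ − b is a hypothetical example chosen so that F(1) ≥ 0 ≥ F(0) holds, and we do not verify criterion (6) for it) of the Newton iteration (9) together with the monotonicity 0 ≤ Mx^{(k+1)} ≤ Mx^{(k)} asserted in the theorem:

```python
import numpy as np

# Hypothetical convex monotonic example: F(x) = A @ exp(x) - b with A >= 0
# invertible and b = 1.5 * A @ 1, so that F(1) >= 0 >= F(0) and the unique
# zero is x = log(1.5) * 1 (componentwise).
n = 3
A = np.eye(n) + 1e-4 * np.ones((n, n))
b = 1.5 * (A @ np.ones(n))

def F(x):
    return A @ np.exp(x) - b

def Fprime(x):
    return A * np.exp(x)

M = np.ones((n, n)) + np.eye(n)          # M = 11^T + I_n as in Theorem 2.2
x = np.ones(n)                           # starting value x^(0) := 1
for _ in range(25):
    x_next = x - np.linalg.solve(Fprime(x), F(x))   # iteration (9)
    # monotone decrease of M x^(k), as in the proof of Theorem 2.2:
    assert np.all(M @ x_next <= M @ x + 1e-12)
    assert np.all(M @ x_next >= -1e-12)
    x = x_next

assert np.allclose(x, np.log(1.5))       # converged to the unique zero
```

For this decoupled example the iterates decrease componentwise toward the zero, so the monotone decrease of Mx^{(k)} is visible already after the first step.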

To prove Theorem 2.2 we will first show the following lemma.

Lemma 2.2. Under the assumptions and with the notations of Theorem 2.2, the following holds:

1. For all x ∈ [−1, n]^n and j ∈ {1,…,n},

 max_{k=1,…,n} e_k^T F′(x)(e_j − ne′_j) ≥ L^{−1}.
2. F is injective on [−1, n]^n, F′(x) is injective for all x ∈ [−1, n]^n, and for all x, y ∈ [−1, n]^n

 ∥x − y∥_∞ ≤ L ∥F(x) − F(y)∥_∞, and ∥F′(x)^{−1}∥_∞ ≤ L.
3. For all x ∈ [−1, n]^n and y ∈ ℝ^n,

 F′(x)y ≥ 0 implies min_{j=1,…,n} y_j > −(1/n)∥y∥_∞.
4. With M := 11^T + I_n, for all x ∈ [−1, n]^n and y ∈ ℝ^n,

 F′(x)y ≥ 0 implies My ≥ 0.

M is invertible and M^{−1} = I_n − (1/(n+1))11^T.

5. For all x ∈ [−1, n]^n,

 MF′(x)^{−1} ≥ 0.

The proof is similar to that of Theorem 2.1.

1. Let j ∈ {1,…,n}. Using the definition of z^{(j)}, we have that for all x ∈ [−1, n]^n

 d^{(j)} = e_j − (n² + 3n + 1)e′_j ≤ x − z^{(j)} ≤ (n+2)e_j − n(n+2)e′_j

and thus

 F′(x)(e_j − ne′_j) = (1/(n+2)) F′(x)((n+2)e_j − n(n+2)e′_j) ≥ (1/(n+2)) F′(x)(x − z^{(j)}) ≥ (1/(n+2))(F(x) − F(z^{(j)})) ≥ (1/(n+2)) F′(z^{(j)})(x − z^{(j)}) ≥ (1/(n+2)) F′(z^{(j)}) d^{(j)},

which proves (a).

2. Since (a) implies a fortiori that

 max_{k=1,…,n} e_k^T F′(x)(e_j − e′_j) ≥ L^{−1},

the assertion (b) follows by the same arguments as in the proof of Theorem 2.1.

3. Let x ∈ [−1, n]^n and 0 ≠ y ∈ ℝ^n. If there exists an index j ∈ {1,…,n} with y_j ≤ −(1/n)∥y∥_∞, then y ≤ −(∥y∥_∞/n)(e_j − ne′_j), so that by (1) and (a)

 min_{k=1,…,n} e_k^T F′(x)y ≤ −(∥y∥_∞/n) max_{k=1,…,n} e_k^T F′(x)(e_j − ne′_j) ≤ −(∥y∥_∞/n) L^{−1} < 0,

and thus F′(x)y ≥ 0 cannot hold. By contraposition, this shows that

 F′(x)y ≥ 0 implies min_{j=1,…,n} y_j > −(1/n)∥y∥_∞,

which also shows that ∥y∥_∞ = max_{j=1,…,n} y_j.

4. Using (c) it follows that F′(x)y ≥ 0 implies that for all k ∈ {1,…,n},

 e_k^T M y = ∑_{j=1}^n y_j + y_k ≥ max_{j=1,…,n} y_j + n min_{j=1,…,n} y_j ≥ 0,

so that F′(x)y ≥ 0 implies My ≥ 0. Also, it is easily checked that

 (I_n − (1/(n+1))11^T)(11^T + I_n) = 11^T + I_n − (1/(n+1))11^T 11^T − (1/(n+1))11^T = (n/(n+1))11^T + I_n − (n/(n+1))11^T = I_n.
5. For all x ∈ [−1, n]^n and all d ∈ ℝ^n with d ≥ 0, it follows from (d) that

 F′(x) F′(x)^{−1} d = d ≥ 0 implies M F′(x)^{−1} d ≥ 0,

which proves (e).

Proof of Theorem 2.2. The assertion (a) has already been proven in lemma 2.2(b). To motivate the proof of (b), let us first note that, by lemma 2.2(e), x ↦ F(M^{−1}x) is a convex function with Collatz monotone derivative [1], i.e., its derivative possesses the componentwise nonnegative inverse M F′(M^{−1}x)^{−1} ≥ 0. If the Newton iterates do not leave the region where convexity and Collatz monotonicity hold, then classical results on monotone Newton methods (cf., e.g., [7, Thm. 13.3.4]) yield global Newton convergence for x ↦ F(M^{−1}x), and thus for F, since the Newton method is invariant under linear transformations. The following proof combines the classical arguments in [7, Thm. 13.3.4] with a homotopy argument to bound the Newton iterates.

We first prove that for all x^{(k)} ∈ (−1, n)^n with F(x^{(k)}) ≥ 0 and Mx^{(k)} ≤ M1, the next Newton iterate is well-defined and fulfills

 x^{(k+1)} ∈ (−1, n)^n, F(x^{(k+1)}) ≥ 0, 0 ≤ Mx^{(k+1)} ≤ Mx^{(k)} ≤ M1. (10)

To show this let x^{(k)} ∈ (−1, n)^n fulfill F(x^{(k)}) ≥ 0 and Mx^{(k)} ≤ M1. Then F′(x^{(k)}) is invertible, so that we can define the intermediate Newton steps

 x^{(k+t)} := x^{(k)} − t F′(x^{(k)})^{−1} F(x^{(k)}) for all t ∈ [0,1].

Then, by convexity, we have for all t ∈ [0,1]

 F(x^{(k+t)}) = F(x^{(k)} − t F′(x^{(k)})^{−1} F(x^{(k)})) ≥ F(x^{(k)}) − t F(x^{(k)}) ≥ 0.

Moreover, it follows from M F′(x^{(k)})^{−1} ≥ 0, cf. lemma 2.2(e), that

 Mx^{(k)} − Mx^{(k+t)} = t M F′(x^{(k)})^{−1} F(x^{(k)}) ≥ 0.

Using also that F(0) ≤ 0 and the convexity assumption, we have that

 0 ≤ −t M F′(x^{(k)})^{−1} F(0) = Mx^{(k+t)} − (Mx^{(k)} − t M F′(x^{(k)})^{−1} F(x^{(k)})) − t M F′(x^{(k)})^{−1} F(0) ≤ Mx^{(k+t)} − Mx^{(k)} + t M F′(x^{(k)})^{−1} F′(x^{(k)})(x^{(k)} − 0) = Mx^{(k+t)} − (1−t) Mx^{(k)}.

Hence, for all t ∈ [0,1],

 (1−t) Mx^{(k)} ≤ Mx^{(k+t)} ≤ Mx^{(k)} ≤ M1, and in particular 0 ≤ Mx^{(k+1)} ≤ Mx^{(k)} ≤ M1.

It remains to prove that x^{(k+1)} ∈ (−1, n)^n. We argue by contradiction and assume that this is not the case. Then, by continuity, there exists t ∈ (0, 1] with x^{(k+t)} ∈ [−1, n]^n ∖ (−1, n)^n. Hence, by convexity,

 F′(x^{(k+t)})(x^{(k+t)} − 0) ≥ F(x^{(k+t)}) − F(0) ≥ 0,

and using lemma 2.2(c) this would imply

 min_{j=1,…,n} x^{(k+t)}_j > −(1/n) max_{j=1,…,n} x^{(k+t)}_j ≥ −1. (11)

Let l be an index at which x^{(k+t)} attains its maximum. Then the l-th component of Mx^{(k+t)} ≤ M1 = (n+1)1 gives the inequality

 2x^{(k+t)}_l + ∑_{j=1, j≠l}^n x^{(k+t)}_j ≤ n + 1,

and (11) yields that x^{(k+t)}_l < n. Hence x^{(k+t)} ∈ (−1, n)^n, which contradicts the assumption, and thus shows that x^{(k+1)} ∈ (−1, n)^n. This finishes the proof of (10).

It now follows from (10) that for x^{(0)} := 1, the Newton algorithm produces a well-defined sequence (x^{(k)})_{k∈ℕ} ⊆ (−1, n)^n, for which (Mx^{(k)})_{k∈ℕ} is monotonically non-increasing and bounded from below. Hence, (Mx^{(k)})_{k∈ℕ} and thus also (x^{(k)})_{k∈ℕ} converge. We define

 ^x := lim_{k→∞} x^{(k)} ∈ [−1, n]^n.

Since F is continuously differentiable and F′(^x) is invertible, it follows from the Newton iteration formula (9) that F(^x) = 0. Also, the monotone convergence of (Mx^{(k)})_{k∈ℕ} shows that

 0 ≤ M^x ≤ Mx^{(k+1)} ≤ Mx^{(k)} ≤ Mx^{(0)} = (n+1)1 for all k ∈ ℕ₀.

To show that ^x ∈ (−1/(n−1), 1 + 1/(n−1))^n, we use the convexity to obtain

 F′(^x)(^x − 0) ≥ F(^x) − F(0) ≥ 0, F′(1)(1 − ^x) ≥ F(1) − F(^x) ≥ 0,

which then implies by lemma 2.2(c) that

 min_{j=1,…,n} ^x_j > −(1/n) max_{j=1,…,n} ^x_j, min_{j=1,…,n} (1 − ^x_j) > −(1/n) max_{j=1,…,n} (1 − ^x_j).

From this we obtain that

 min_{j=1,…,n} ^x_j > −(1/n) max_{j=1,…,n} ^x_j = (1/n) min_{j=1,…,n} (1 − ^x_j) − 1/n (12)
 > −(1/n²) max_{j=1,…,n} (1 − ^x_j) − 1/n = (1/n²) min_{j=1,…,n} ^x_j − 1/n² − 1/n,

which yields min_{j=1,…,n} ^x_j > −1/(n−1). Using (12) again, we then obtain

 −(1/n) max_{j=1,…,n} ^x_j > (1/n²) min_{j=1,…,n} ^x_j − 1/n² − 1/n > −1/(n−1),

which shows max_{j=1,…,n} ^x_j < n/(n−1) = 1 + 1/(n−1).

Finally, since this is the standard Newton iteration, the convergence speed is superlinear, and it is quadratic if F′ is Lipschitz continuous in a neighbourhood of ^x.

2.3 A result with tighter domain assumptions

Our results in subsections 2.1 and 2.2 require the considered function to be defined (and convex and monotonic) on a much larger set than [0,1]^n. For some applications (such as the inverse coefficient problem in section 3), the following more technical variant of Theorem 2.2 may be useful, as it allows us to treat the case where the domain of definition is only an arbitrarily small neighbourhood of [−1, 1]^n.

Theorem 2.3. Let ϵ > 0 and c ≥ 1. Let F: U ⊆ ℝ^n → ℝ^m, n ≤ m, be a continuously differentiable, componentwise convex and monotonic function on a convex open set U containing [−1, 1 + 2ϵ]^n.

If

 F′(z^{(j,k)}) d^{(j)} ≰ 0 for all j ∈ {1,…,n}, k ∈ {1,…,K}, (13)

where K is the smallest natural number with −1 + ϵ/(cn) + K ϵ/(2cn) ≥ 1 + ϵ, and

 z^{(j,k)} := (−1 + ϵ/(cn) + (k−2) ϵ/(2cn)) e_j + (1 + 2ϵ) e′_j, (14)
 d^{(j)} := (1/2) e_j − ((ϵ + 2cn + 2ϵcn)/ϵ) e′_j, (15)

then the following holds:

1. F is injective on [−1 + ϵ/(cn), 1 + ϵ]^n, F′(x) is injective for all x ∈ [−1 + ϵ/(cn), 1 + ϵ]^n, and for all x, y ∈ [−1 + ϵ/(cn), 1 + ϵ]^n

 ∥x − y∥_∞ ≤ L ∥F(x) − F(y)∥_∞, and ∥F′(x)^{−1}∥_∞ ≤ L,

where

 L := (min_{j=1,…,n} min_{k=1,…,K} max_{i=1,…,n} e_i^T F′(z^{(j,k)}) d^{(j)})^{−1} > 0. (16)
2. If, additionally, n = m and F(1) ≥ 0 ≥ F(0), then there exists a unique

 ^x ∈ [−1 + ϵ/(cn), 1 + ϵ]^n with F(^x) = 0.

The Newton iteration sequence

 x^{(k+1)} := x^{(k)} − F′(x^{(k)})^{−1} F(x^{(k)}), with initial value x^{(0)} := 1, (17)

is well defined (i.e., F′(x^{(k)}) is invertible in each step) and converges to ^x.

For all k ∈ ℕ₀,

 x^{(k)} ∈ (−1 + ϵ/(cn), 1 + ϵ)^n, and 0 ≤ M^x ≤ Mx^{(k+1)} ≤ Mx^{(k)} ≤ Mx^{(0)} = (1 + cn)1,

where M := 11^T + (1 + (c−1)n) I_n.

The rate of convergence of (x^{(k)})_{k∈ℕ} is superlinear. If F′ is locally Lipschitz in a neighbourhood of ^x, then the rate of convergence is quadratic.

To prove Theorem 2.3 we first prove a variant of lemma 2.2 with tighter domain assumptions.

Lemma 2.3. Under the assumptions and with the notations of Theorem 2.3, the following holds:

1. For all x ∈ [−1 + ϵ/(cn), 1 + ϵ]^n and j ∈ {1,…,n},

 max_{k=1,…,n} e_k^T F′(x)(e_j − cn e′_j) ≥ L^{−1}.
2. F is injective on [−1 + ϵ/(cn), 1 + ϵ]^n, F′(x) is injective for all x ∈ [−1 + ϵ/(cn), 1 + ϵ]^n, and for all x, y ∈ [−1 + ϵ/(cn), 1 + ϵ]^n

 ∥x − y∥_∞ ≤ L ∥F(x) − F(y)∥_∞, and ∥F′(x)^{−1}∥_∞ ≤ L.
3. For all x ∈ [−1 + ϵ/(cn), 1 + ϵ]^n and y ∈ ℝ^n,

 F′(x)y ≥ 0 implies min_{j=1,…,n} y_j > −(1/(cn))∥y∥_∞.
4. With M := 11^T + (1 + (c−1)n) I_n, for all x ∈ [−1 + ϵ/(cn), 1 + ϵ]^n and y ∈ ℝ^n,

 F′(x)y ≥ 0 implies My ≥ 0.

M is invertible and M^{−1} = (1/(1 + (c−1)n))(I_n − (1/(1 + cn))11^T).

5. For all x ∈ [−1 + ϵ/(cn), 1 + ϵ]^n,

 M F′(x)^{−1} ≥ 0.

To prove (a) let x ∈ [−1 + ϵ/(cn), 1 + ϵ]^n and j ∈ {1,…,n}. Then, by the definition of K, there exists k ∈ {1,…,K}, so that

 −1 + ϵ/(cn) + (k−1) ϵ/(2cn) ≤ x_j ≤ −1 + ϵ/(cn) + k ϵ/(2cn).

Then it follows from the definitions of z^{(j,k)} and d^{(j)} in (14) and (15) that

 x − z^{(j,k)} ≥ (ϵ/(2cn)) e_j − (2 + 2ϵ + ϵ/(cn)) e′_j = (ϵ/(cn)) d^{(j)},
 x − z^{(j,k)} ≤ (ϵ/(cn)) e_j − ϵ e′_j.

We thus obtain

 F′(x)(e_j − cn e′_j) = (cn/ϵ) F′(x)((ϵ/(cn)) e_j − ϵ e′_j) ≥ (cn/ϵ) F′(x)(x − z^{(j,k)}) ≥ (cn/ϵ)(F(x) − F(z^{(j,k)})) ≥ (cn/ϵ) F′(z^{(j,k)})(x − z^{(j,k)}) ≥ F′(z^{(j,k)}) d^{(j)},

which proves (a).

The proofs of (b) and (c) are analogous to those in lemma 2.2.

For the proof of (d) note that, using (c), F′(x)y ≥ 0 implies that for all k ∈ {1,…,n},

 e_k^T M y = ∑_{j=1}^n y_j + (1 + (c−1)n) y_k ≥ max_{j=1,…,n} y_j + (n−1) min_{j=1,…,n} y_j + (1 + (c−1)n) min_{j=1,…,n} y_j = max_{j=1,…,n} y_j + cn min_{j=1,…,n} y_j ≥ 0,

so that F′(x)y ≥ 0 implies My ≥ 0. Also, it is easily checked that

 (1/(1 + (c−1)n))(I_n − (1/(1 + cn))11^T)(11^T + (1 + (c−1)n) I_n) = I_n.

(e) follows from (d) as in the proof of lemma 2.2.
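The inversion formula for M can also be verified numerically; the following snippet (ours) checks it for a few sample values of n and c (for c = 1 it reduces to the formula M^{−1} = I_n − (1/(n+1))11^T used in lemma 2.2):

```python
import numpy as np

# Check, for sample n and c, that with M = 11^T + (1+(c-1)n) I_n one has
# M^{-1} = (1/(1+(c-1)n)) (I_n - (1/(1+cn)) 11^T).
for n, c in [(2, 1.0), (5, 2.0), (7, 3.5)]:
    ones = np.ones((n, n))
    M = ones + (1 + (c - 1) * n) * np.eye(n)
    M_inv = (np.eye(n) - ones / (1 + c * n)) / (1 + (c - 1) * n)
    assert np.allclose(M_inv @ M, np.eye(n))
    assert np.allclose(M @ M_inv, np.eye(n))
```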

Proof of Theorem 2.3. We proceed as in the proof of Theorem 2.2. Assertion (a) has already been proven in lemma 2.3(b). To show assertion (b), we first prove that for all x^{(k)} ∈ (−1 + ϵ/(cn), 1 + ϵ)^n with F(x^{(k)}) ≥ 0 and Mx^{(k)} ≤ M1, the next Newton iterate is well-defined and fulfills