On condition numbers of symmetric and nonsymmetric domain decomposition methods

05/30/2019 · Juan Galvis et al.

Using oblique projections and angles between subspaces, we derive condition number estimates for abstract nonsymmetric domain decomposition methods. In particular, we design and estimate the condition number of restricted additive Schwarz methods. We also obtain non-negativity of the preconditioned operator. Condition number estimates alone are not enough to guarantee the convergence of an iterative method such as GMRES, but these bounds may lead to further understanding of restricted methods.


1 Introduction

The restricted additive Schwarz (RAS) preconditioner was originally introduced by Cai and Sarkis in [4] in 1999. RAS outperforms the classical additive Schwarz (AS) preconditioner in the sense that it requires fewer iterations, as well as lower communication and CPU time costs, when implemented on distributed memory computers [4]. Unfortunately, RAS in its original form is nonsymmetric, and therefore the conjugate gradient (CG) method cannot be used. Pursuing the analysis of RAS, several interesting methods have been developed. Some of these variants have been completely or partially analyzed, and some of them outperform the classical AS. Despite many contributions, the analysis of this method remains incomplete.
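The contrast between AS and RAS is easiest to see in matrix form: both methods solve the same local problems and differ only in how the local solutions are prolonged back to the global space. Below is a minimal NumPy/SciPy sketch of this difference on a 1D Poisson model problem; the partition, the overlap width `ov`, and all function names are our own illustrative choices, not taken from [4].

```python
import numpy as np
import scipy.sparse.linalg as spla
from scipy.sparse import diags

n = 199                                    # interior grid points, 1D Poisson matrix
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Non-overlapping partition into 4 index blocks, extended by `ov` points each.
blocks = np.array_split(np.arange(n), 4)
ov = 4
odoms = [np.arange(max(b[0] - ov, 0), min(b[-1] + ov, n - 1) + 1) for b in blocks]

# Factor each local (overlapping) Dirichlet problem A_i once.
solves = [spla.factorized(A[np.ix_(i, i)]) for i in odoms]

def apply_as(r):
    """Classical AS: sum of R_i^T A_i^{-1} R_i r, prolonging over the full overlap."""
    z = np.zeros_like(r)
    for i, solve in zip(odoms, solves):
        z[i] += solve(r[i])
    return z

def apply_ras(r):
    """RAS: the same local solves, but the prolongation keeps only the
    non-overlapping part of each subdomain (the 'restricted' prolongation)."""
    z = np.zeros_like(r)
    for bk, i, solve in zip(blocks, odoms, solves):
        z[bk] += solve(r[i])[np.isin(i, bk)]
    return z

# Assemble the preconditioned matrices column by column and inspect the spectra:
# the AS operator is similar to an SPD one (real positive spectrum), RAS is not.
I = np.eye(n)
for name, apply_pc in [("AS", apply_as), ("RAS", apply_ras)]:
    P = np.column_stack([apply_pc(A @ I[:, j]) for j in range(n)])
    ev = np.linalg.eigvals(P)
    print(f"{name}: Re in [{ev.real.min():.3f}, {ev.real.max():.3f}], "
          f"max |Im| = {abs(ev.imag).max():.2e}")
```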

We mention some of the developments related to the RAS method. The method was introduced in [4], where the authors presented RAS as a cheaper and faster variant of the classical AS preconditioner for general sparse linear systems. The new method was shown to perform better than AS in the numerical studies presented there (see also [3]). The authors of [4] remarked that

…RAS was found accidentally. While working on a AS/GMRES algorithm in a Euler simulation, we removed part of the communication routine and surprisingly the “then AS” method converged faster both in terms of iteration counts and CPU time. We note that RAS is the default parallel preconditioner for nonsymmetric sparse linear systems in PETSc …

Many works have been devoted to RAS, and it would therefore be difficult to present a complete review of them. Here we mention that an algebraic convergence analysis is presented in [7, 6]. In [2] the authors provide an extension of RAS using the so-called harmonic overlaps (RASHO). Both RAS and RASHO outperform their classical additive Schwarz counterparts, and an almost optimal convergence theory is presented for RASHO. In [5], it is shown that a matrix interpretation of the RAS iteration can be related to the continuous level of the underlying problem; the authors explain how this interpretation reveals why RAS converges faster than classical AS. Still, a satisfactory explanation of the condition number of RAS is missing. In [12], a by now classical book introducing domain decomposition methods, the authors comment

To our knowledge, a comprehensive theory of this algorithm is still missing. We note however that the restricted additive Schwarz preconditioner is the default parallel preconditioner for nonsymmetric systems in the PETSc library …and has been used for the solution of very large problems…

In this paper we revisit the method proposed by Cai and Sarkis. We reinterpret the method as an iterative procedure where each iteration requires the solution of elliptic interface problems in each overlapping subdomain. The analysis of the method is presented in an abstract setting. First we write a Hilbert space framework for the analysis of the classical additive method. Then we generalize this Hilbert space framework and apply this extension to compute the condition number of several methods that use restrictions onto the original subdomains in their construction (instead of restrictions onto the overlapping subdomains). We present abstract results that may be useful for analyzing nonsymmetric domain decomposition methods in general. In particular, we illustrate how to use the results for a one-level restricted additive method. Several other models and similar methods can be considered as well, for instance, restricted methods for the elasticity equation and two-level domain decomposition methods with classical or modern coarse space designs.

The rest of the paper is organized as follows. In Section 2 we review classical domain decomposition results in a simple Hilbert space framework. In Section 3 we recall the classical one-level AS method. In Section 4 we construct nonsymmetric methods obtained by changing restrictions and the inner product. In Section 5 we present the abstract condition number analysis: we first revisit the analysis of symmetric methods using projections and angles between subspaces, and then generalize this analysis to nonsymmetric methods, applying it in particular to a special family of nonsymmetric methods. In Section 6 we define the restricted method that we analyze. In Section 7 we use the previously obtained results to write a condition number estimate for the restricted method defined before.

2 A Hilbert space framework

Let $V$ and $W$ be real Hilbert spaces with inner products $(\cdot,\cdot)_V$ and $(\cdot,\cdot)_W$, respectively. The case of complex Hilbert spaces is similar. Consider a bounded operator $R: V\to W$ with operator norm $\|R\|$. In the domain decomposition literature, $R$ is referred to as a restriction operator. Introduce the transpose operator $R^T: W\to V$ defined by

(1) $(R^T w, v)_V = (w, Rv)_W \quad$ for all $v\in V$ and $w\in W$.

Despite the fact that $R^T$ and the operator norms depend on the inner products $(\cdot,\cdot)_V$ and $(\cdot,\cdot)_W$, our notation makes explicit only the dependence on the operator $R$. We use this convention also for operator norms.
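In finite dimensions, definition (1) has a concrete matrix form. If $M_V$ and $M_W$ are the (symmetric positive definite) Gram matrices of the two inner products and $\mathsf{R}$ is the matrix of $R$, then the transpose in the sense of (1) is $M_V^{-1}\mathsf{R}^{\mathsf{T}}M_W$, not the plain matrix transpose. A minimal sketch with randomly chosen Gram matrices (our own toy setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6
R = rng.standard_normal((m, n))            # restriction R: V -> W as an m x n matrix

# SPD Gram matrices: (u, v)_V = u^T M_V v and (w, z)_W = w^T M_W z.
A = rng.standard_normal((n, n)); M_V = A @ A.T + n * np.eye(n)
B = rng.standard_normal((m, m)); M_W = B @ B.T + m * np.eye(m)

# Transpose in the sense of (1): (R^T w, v)_V = (w, R v)_W for all v, w.
RT = np.linalg.solve(M_V, R.T @ M_W)

v = rng.standard_normal(n); w = rng.standard_normal(m)
print(np.isclose((RT @ w) @ (M_V @ v), (M_W @ w) @ (R @ v)))   # True
```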

Assume there is a (closed) subspace $W_0\subset W$, with $R(V)\subset W_0$, such that

(2) $E := R^T|_{W_0} : W_0\to V$

is easy to compute. The operator $E$ is known as an extension operator. Note also that $W_0^\perp\subset\ker(R^T)$ and that we have $R^T = E\,\Pi_{W_0}$ with

(3) $\Pi_{W_0}: W\to W_0,$

where $\Pi_{W_0}$ is the orthogonal projection on $W_0$ using the inner product $(\cdot,\cdot)_W$. We want to study the operator

(4) $EE^T: V\to V.$

This operator is clearly symmetric and non-negative in the $(\cdot,\cdot)_V$ inner product. If we want $EE^T$ to be non-singular, and since $\overline{\operatorname{range}(E)}$ and $\ker(E^T)$ are orthogonal in $V$, we need to be sure $E^T$ is 1-1 or, equivalently, $E$ is onto. A sufficient condition for the symmetric operator $EE^T$ to be invertible is given by the following lemma, known as the stable decomposition lemma or Lions' lemma in the domain decomposition community. For the sake of completeness we show a detailed proof as it is usually presented in the domain decomposition literature; see for instance [12, Chapter 2] or [9] and references therein. We note that we do not need to refer to the space $W$ at this moment. Later we revise some of these inequalities in a more natural way to obtain a sharper estimate.

Lemma 1 (Lions Lemma)

Assume that there exists a bounded right inverse of $E$. That is, there exists a bounded operator $\mathcal{R}: V\to W_0$ such that $E\mathcal{R}v = v$ for all $v\in V$. Then, the mapping $EE^T$ is non-singular. Moreover, we have

$\frac{1}{\|\mathcal{R}\|^2}\,(v,v)_V \le (EE^T v, v)_V \le \|E\|^2\,(v,v)_V$

for all $v\in V$.


Proof. Note that for $v\in V$ we have,

$(v,v)_V = (E\mathcal{R}v, v)_V = (\mathcal{R}v, E^Tv)_W \le \|\mathcal{R}\|\,\|v\|_V\,\|E^Tv\|_W.$

Using this last inequality we obtain

$(EE^Tv, v)_V = \|E^Tv\|_W^2 \ge \frac{1}{\|\mathcal{R}\|^2}\,(v,v)_V.$

To obtain the upper bound we proceed as follows, using properties of the subordinated norm of operators,

$\|E^Tv\|_W \le \|E^T\|\,\|v\|_V = \|E\|\,\|v\|_V,$

and therefore $(EE^Tv, v)_V = \|E^Tv\|_W^2 \le \|E\|^2\,(v,v)_V$. We also have,

$\|EE^T\| \le \|E\|\,\|E^T\| = \|E\|^2.$

This finishes the proof. □

Remark 2

Note that what is needed is the existence of an operator $S: V\to W_0$ such that $ES$ is invertible. In this case we have that $S(ES)^{-1}$ is a stable right inverse of $E$.

If, in addition, the extension operator $E$ comes from a restriction operator $R$, as in (2), we can state the following corollaries.

Corollary 3

Let $R: V\to W$ be a restriction operator such that $E = R^T|_{W_0}$ as in (2). Assume that there exists a bounded operator $\mathcal{R}: V\to W_0$ such that $E\mathcal{R}v = v$ for all $v\in V$. Then, the mapping $EE^T$ is non-singular with

$\frac{1}{\|\mathcal{R}\|^2}\,(v,v)_V \le (EE^Tv, v)_V \le \|R\|^2\,(v,v)_V$

for all $v\in V$.

Corollary 4

Let $R: V\to W$ be a restriction operator such that $E = R^T|_{W_0}$ as in (2). Assume that there exists a bounded operator $\mathcal{R}: V\to W$ such that $R^T\mathcal{R}v = v$ for all $v\in V$. Then, the mapping $R^TR$ is non-singular with

$\frac{1}{\|\mathcal{R}\|^2}\,(v,v)_V \le (R^TRv, v)_V \le \|R\|^2\,(v,v)_V$

for all $v\in V$.

Assume that $u\in V$ is the solution of the following variational equation,

(5) $(u, v)_V = f(v) \quad$ for all $v\in V,$

and assume that $f(Ew)$ is easy to compute for $w\in W_0$. We see that, for the solution $u$, it is possible to compute $E^Tu$ using this variational equation (without explicitly knowing or computing the function $u$). In fact, we have

(6) $(E^Tu, w)_W = (u, Ew)_V = f(Ew) \quad$ for all $w\in W_0.$

This equation might be easier to solve numerically than the original problem. Therefore, we can alternatively compute the solution $u$ of (5) by iteratively solving the equation,

(7) $EE^Tu = b,$

where $b = E(E^Tu)$ is computed from (6). When implementing an iterative method, in each iteration we have to apply the operator $EE^T$ to a residual vector, say $r\in V$. More precisely, we have to

  1. Compute $w = E^Tr$; this can be done by solving the equation

    (8) $(w, z)_W = (r, Ez)_V \quad$ for all $z\in W_0.$

    In terms of the restriction operator we have $E^Tr = \Pi_{W_0}Rr$.

  2. Compute $Ew$ by applying the extension operator $E$, that is, $Ew = R^Tw$, which is assumed possible and numerically efficient to compute.

The practicality of using this iteration depends on the possibility of inexpensively computing the right-hand side and, of course, on the condition number of $EE^T$. Once the right-hand side is computed, the performance of the iterative procedure depends on the condition number of the associated operator equation. If we use the spectral condition number of the operator $EE^T$, we see from Lemma 1 that

$\kappa_{sp}(EE^T) \le \|E\|^2\,\|\mathcal{R}\|^2.$

Then, the number of iterations for solving equation (7) (up to a desired tolerance) will depend on $\kappa_{sp}(EE^T)$. In this case, due to the symmetry and positivity of the operator, we can use, for instance, a conjugate gradient algorithm to solve the equation above.
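A minimal sketch of this iteration, with $V = \mathbb{R}^n$, $W_0 = \mathbb{R}^m$ and Euclidean inner products, so that $E$ is simply a full-row-rank matrix and steps 1 and 2 above are the two matrix-vector products inside the `matvec` (the sizes and names are our own):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, m = 50, 80
E = rng.standard_normal((n, m))            # extension operator with full row rank

u_exact = rng.standard_normal(n)
b = E @ (E.T @ u_exact)                    # right-hand side b = E E^T u

# Each CG iteration applies E E^T: step 1 is w = E^T r, step 2 is E w.
T = LinearOperator((n, n), matvec=lambda r: E @ (E.T @ r))
u, info = cg(T, b, maxiter=500)
print(info, np.linalg.norm(u - u_exact) / np.linalg.norm(u_exact))
```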

In general, the condition number of an invertible operator $B$ is defined by

$\kappa(B) = \|B\|\,\|B^{-1}\|.$

Recall that we use the operator norm notation $\|B\| = \sup_{0\ne v\in V}\|Bv\|_V/\|v\|_V$, where we make explicit the dependence on $V$ only.
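For a nonsymmetric matrix the spectral ratio and the operator-norm condition number can be very different. A quick check with our own toy matrix:

```python
import numpy as np

B = np.array([[1.0, 100.0],
              [0.0,   1.0]])               # nonsymmetric; both eigenvalues equal 1

ev = np.linalg.eigvals(B)
spectral = abs(ev).max() / abs(ev).min()   # spectral ratio = 1
operator = np.linalg.norm(B, 2) * np.linalg.norm(np.linalg.inv(B), 2)
print(spectral, operator)                  # 1.0 versus roughly 1.0e4
```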

3 Classical additive method for the Laplace equation

In this section we use the Hilbert space framework above to review the analysis of the classical additive method. As usual, we consider a domain $D$ with a non-overlapping partition into subdomains $\{D_j\}_{j=1}^J$. By enlarging these subdomains by a specific width $\delta$ we obtain an overlapping decomposition $\{D_j'\}_{j=1}^J$. For more details see [12].

Let $V = H_0^1(D)$ and $(u,v)_V = \int_D\nabla u\cdot\nabla v\,dx$. In this case consider

$W = \prod_{j=1}^J H^1(D_j').$

Denoting by $w = (w_1,\dots,w_J)$ and $z = (z_1,\dots,z_J)$ the elements of $W$, we define,

$(w, z)_W = \sum_{j=1}^J (w_j, z_j)_{W_j}, \quad\text{with}\quad (w_j, z_j)_{W_j} = \int_{D_j'}\nabla w_j\cdot\nabla z_j\,dx + b_j(w_j, z_j),$

where we have put $b_j$ for a bilinear boundary term on $\partial D_j'$ and $\|w_j\|_{W_j}^2 = (w_j, w_j)_{W_j}$.

Remark 5 (Norm boundary term)

The role of $b_j$ is not essential and it can be replaced by any other bilinear form that vanishes for functions that are zero on $\partial D_j'$ and makes $(\cdot,\cdot)_W$ a norm on $W$.

Introduce also $R: V\to W$ defined by $Rv = (v|_{D_1'},\dots,v|_{D_J'})$. Equation (1) defining $R^T$ corresponds to

$(R^Tw, v)_V = \sum_{j=1}^J (w_j, v|_{D_j'})_{W_j} \quad\text{for all } v\in V.$

This implies that $E$, defined as $E = R^T|_{W_0}$ with $W_0 = \prod_{j=1}^J H_0^1(D_j')$, is also given by

$Ew = \sum_{j=1}^J E_j^0 w_j,$

where $E_j^0$ is the extension by zero outside $D_j'$ operator. To see this note that, for $w_j\in H_0^1(D_j')$, the boundary term vanishes and

$(E_j^0 w_j, v)_V = \int_{D_j'}\nabla w_j\cdot\nabla v\,dx = (w_j, v|_{D_j'})_{W_j}.$

We have that $E^T: V\to W_0$ is given by

$E^Tv = (w_1,\dots,w_J),$

where $w_j\in H_0^1(D_j')$ solves the local equation

$\int_{D_j'}\nabla w_j\cdot\nabla z_j\,dx = \int_{D_j'}\nabla v\cdot\nabla z_j\,dx \quad\text{for all } z_j\in H_0^1(D_j').$

Observe that each $w_j$ can be obtained by solving a local problem. Then

$EE^Tv = \sum_{j=1}^J E_j^0 w_j.$

In this case we denote $T_{AS} = EE^T$. The existence of a right inverse can be stated as follows, as is common in the domain decomposition literature.

Stable decomposition: There exists a constant $C_0$ such that for all $v\in V$ there exist $v_j\in H_0^1(D_j')$, $j=1,\dots,J$, such that $v = \sum_{j=1}^J E_j^0 v_j$ and

(9) $\sum_{j=1}^J \|v_j\|_{W_j}^2 \le C_0^2\,(v,v)_V.$

This clearly implies the existence of $\mathcal{R}$ with $\|\mathcal{R}\|\le C_0$. In fact, $\mathcal{R}v = (v_1,\dots,v_J)$, where the functions $v_j$ are the ones given by the stable decomposition assumption.

Strengthened Cauchy inequalities: There exists a matrix $\mathcal{E} = (\varepsilon_{ij})$ with $0\le\varepsilon_{ij}\le 1$ such that

$(E_i^0 w_i, E_j^0 w_j)_V \le \varepsilon_{ij}\,\|w_i\|_{W_i}\,\|w_j\|_{W_j} \quad\text{for all } w_i\in H_0^1(D_i'),\ w_j\in H_0^1(D_j').$

By using bilinearity and the vector Cauchy inequality, this clearly implies that

$\|Ew\|_V^2 = \sum_{i,j=1}^J (E_i^0 w_i, E_j^0 w_j)_V \le \rho(\mathcal{E})\sum_{j=1}^J\|w_j\|_{W_j}^2,$

where $\rho(\mathcal{E})$ is the spectral radius of the matrix above. Then $\|E\|^2\le\rho(\mathcal{E})$, and using $\|\mathcal{R}\|\le C_0$ and $\|E\|^2\le\rho(\mathcal{E})$ in Lemma 1 we have the following result.

Corollary 6

For all $v\in V$ we have

(10) $\frac{1}{C_0^2}\,(v,v)_V \le (EE^Tv, v)_V \le \rho(\mathcal{E})\,(v,v)_V,$

where $C_0$ is the stable decomposition constant and $\rho(\mathcal{E})$ is the spectral radius of the matrix above.
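For intuition about $\rho(\mathcal{E})$: if each subdomain overlaps only its immediate neighbors, as in a chain of 1D subdomains, $\mathcal{E}$ can be taken tridiagonal and its spectral radius stays bounded as $J$ grows. A quick check with our own toy choice $\varepsilon_{ij} = 1$ for overlapping pairs:

```python
import numpy as np

for J in (4, 16, 64):
    # epsilon_ij = 1 when subdomains i and j overlap (|i - j| <= 1), else 0.
    Eps = np.zeros((J, J))
    for i in range(J):
        for j in range(max(i - 1, 0), min(i + 2, J)):
            Eps[i, j] = 1.0
    rho = max(abs(np.linalg.eigvals(Eps)))
    print(J, round(rho, 3))    # stays below 3 (one plus the number of neighbors)
```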

Remark 7

Let us consider the case of the one-level additive method setting. More levels can be analyzed in a similar way. In the one-level setting, with original subdomains of diameter $H$ and overlap of width $\delta$, a usual bound for $C_0$ is obtained by constructing a stable decomposition as follows; see [12]. Start by constructing cutoff functions $\chi_j$ such that

(11) $\chi_j = 1$ on $D_j$, $\ \operatorname{supp}(\chi_j)\subset\overline{D_j'}$, $\ \|\nabla\chi_j\|_{L^\infty} \le C/\delta.$

Define the partition of unity functions

$\widehat{\chi}_j = \frac{\chi_j}{\sum_{i=1}^J\chi_i}.$

We see that $\sum_{i=1}^J\chi_i\ge 1$ on $D$ and therefore

(12) $\|\nabla\widehat{\chi}_j\|_{L^\infty} \le C/\delta.$

We should have $\sum_{j=1}^J\widehat{\chi}_j = 1$. Then define $v_j = \widehat{\chi}_j v$. Define

(13) $\mathcal{R}v = (\widehat{\chi}_1 v,\dots,\widehat{\chi}_J v).$

We have (see [12])

$\sum_{j=1}^J\|\widehat{\chi}_j v\|_{W_j}^2 \le C\Big(1 + \frac{1}{H\delta}\Big)\,(v,v)_V.$

The norms of $\mathcal{R}$ and $E$ are then bounded by

$\|\mathcal{R}\|^2 \le C\Big(1 + \frac{1}{H\delta}\Big) \quad\text{and}\quad \|E\|^2 \le \rho(\mathcal{E}),$

where $\rho(\mathcal{E})$ is bounded by the maximum number of overlapping subdomains that share a common point.
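A 1D illustration of the cutoff and partition of unity construction in (11)-(12); the grid, the sizes $H$ and $\delta$, and the function names are our own:

```python
import numpy as np

H, delta = 0.25, 0.05                      # subdomain size and overlap width
x = np.linspace(0.0, 1.0, 1001)
centers = np.arange(H / 2, 1.0, H)         # J = 4 subdomains of size H

def cutoff(c):
    """1 on the original subdomain, linear decay to 0 over a width-delta band,
    so the slope is bounded by 1/delta as required in (11)."""
    dist = np.maximum(abs(x - c) - H / 2, 0.0)
    return np.clip(1.0 - dist / delta, 0.0, 1.0)

chi = np.array([cutoff(c) for c in centers])
chi_hat = chi / chi.sum(axis=0)            # partition of unity functions

print(np.allclose(chi_hat.sum(axis=0), 1.0))                        # sums to 1
print(abs(np.gradient(chi_hat, x, axis=1)).max() <= 1.0 / delta + 1e-9)
```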

In case we want to approximate the solution of problem (5), we see that $u$ also solves the operator equation,

$EE^Tu = b = Eg,$

where $g = E^Tu = (g_1,\dots,g_J)$ is obtained by $E$-assembling the solutions of the local problems,

$\int_{D_j'}\nabla g_j\cdot\nabla z\,dx = f(E_j^0 z) \quad\text{for all } z\in H_0^1(D_j').$

The linear system in (7) is then well conditioned.

4 Nonsymmetric methods obtained by changing restrictions and the inner product

We use the Hilbert space framework introduced in Section 2. Recall that we have Hilbert spaces $V$ and $W$ with inner products $(\cdot,\cdot)_V$ and $(\cdot,\cdot)_W$, respectively. We also used the bounded restriction operator $R: V\to W$ for the construction.

To obtain nonsymmetric methods we additionally introduce a second bilinear form $[\cdot,\cdot]_W$ defined on $W$. Let us introduce a possibly different bounded restriction operator $\widetilde{R}: V\to W$ and the transpose $\widetilde{R}^T$ defined analogously to (1) by

(14) $(\widetilde{R}^T w, v)_V = [w, \widetilde{R}v]_W \quad$ for all $v\in V$ and $w\in W.$

Define $\widetilde{E} := \widetilde{R}^T|_{W_0}$ as a second extension operator. As before, assume that there is a stable right inverse for $\widetilde{E}$, say $\widetilde{\mathcal{R}}$, such that $\widetilde{E}\widetilde{\mathcal{R}}v = v$ for all $v\in V$ (that is, $\widetilde{\mathcal{R}}$ is bounded in the $V$ and $W$ inner product norms). We can then conclude, for $\widetilde{E}\widetilde{E}^T$, inequalities similar to the ones given before in the case the bilinear form $[\cdot,\cdot]_W$ is symmetric and positive definite. In particular, $\widetilde{E}\widetilde{E}^T$ is a bijective application from $V$ onto $V$.

Corollary 8

Let $\widetilde{R}$ be a restriction operator such that $\widetilde{E} = \widetilde{R}^T|_{W_0}$. Assume that there exists a bounded right inverse of $\widetilde{E}$, say $\widetilde{\mathcal{R}}$. Then, the mapping $\widetilde{E}\widetilde{E}^T$ is non-singular with

$\frac{1}{\|\widetilde{\mathcal{R}}\|^2}\,(v,v)_V \le (\widetilde{E}\widetilde{E}^T v, v)_V$

and

$(\widetilde{E}\widetilde{E}^T v, v)_V \le \|\widetilde{E}\|^2\,(v,v)_V$

for all $v\in V$.

We want to study the nonsingularity of the operator $E\widetilde{E}^T$. Note that

(15) $(E\widetilde{E}^T u, v)_V = (\widetilde{E}^T u, E^T v)_W \quad$ for all $u, v\in V.$

As a particular case we can put $\widetilde{R} = R$. In this case, $\widetilde{R}^T$ and $R^T$ differ only through the bilinear forms $[\cdot,\cdot]_W$ and $(\cdot,\cdot)_W$, and so do $\widetilde{E}$ and $E$. We can then obtain the operator

(16) $E\widetilde{E}^T: V\to V.$

See (4). This operator is also nonsymmetric for general bilinear forms $(\cdot,\cdot)_W$ and $[\cdot,\cdot]_W$. This is due to the fact that $E\widetilde{E}^T$ might not be symmetric in the $(\cdot,\cdot)_V$ bilinear form.

Remark 9 (Perturbation theory)

Note that we can write

$E\widetilde{E}^T = EE^T + E(\widetilde{E}^T - E^T),$

where $E(\widetilde{E}^T - E^T)$ is a perturbation of $EE^T$ of size $\|E\|\,\|\widetilde{E}^T - E^T\|$. Several results of the following type can be pursued: if the perturbation is small, then the operator $E\widetilde{E}^T$ is invertible and it is possible to estimate its condition number. Here we found these results are not practical for analyzing domain decomposition methods.

5 Condition number estimates using norms of projections

In this section we present a different analysis that may turn out to be useful when estimating condition numbers of preconditioned operators (not necessarily constructed by a domain decomposition design). We present a series of projection arguments in order to be able to study nonsymmetric methods. As presented earlier, the idea is to estimate the condition number of an operator of the form $E_1E_2^T$, where $E_1$ and $E_2$ are different extension operators. In particular, we are able to bound the condition number for the family of nonsymmetric methods presented in Section 4, where the extension operators are defined from restriction operators from $V$ to a bigger space $W$. Before going to nonsymmetric methods we revisit the condition number bound of symmetric methods.

5.1 Condition number of symmetric methods revisited

There is a simple way to interpret the non-singularity of $EE^T$ obtained from the existence of $\mathcal{R}$, the right inverse of $E$. We can construct solutions of the equation

(17) $EE^Tu = b,$

which is equivalent to

(18) $Ew = b, \quad E^Tu = w, \quad w\in\operatorname{range}(E^T),$

as follows. The solution can be constructed by applying projections defined on $W_0$. First recall that if we have $E\mathcal{R} = I_V$, and we take transposes, where $I_V$ is the identity operator on $V$, we also have that $\mathcal{R}^TE^T = I_V$. Therefore, $E^T$ has a stable left inverse that can be used in the analysis.

Let $b\in V$ be given. Equation (17) is equivalent to

$Ew = b \quad\text{with}\quad w = E^Tu.$

Note that $\operatorname{range}(E^T) = \ker(E)^\perp$ (here $\operatorname{range}(E^T)$ is closed since $E^T$ is bounded below), so that we have $w\perp\ker(E)$, and therefore we conclude that $w$ is the orthogonal projection of any solution of $Ew_0 = b$ on $\operatorname{range}(E^T)$. We can then construct $u$ as follows:

  1. Define $w_0 = \mathcal{R}b$. Then we readily see that $Ew_0 = b$. By assumption we then have

    (19) $\|w_0\|_W \le \|\mathcal{R}\|\,\|b\|_V.$

  2. Construct $w$ such that $Ew = b$ and $w\in\operatorname{range}(E^T)$. In fact, we can use the orthogonal projection onto the subspace $\operatorname{range}(E^T)$, which is denoted by $\Pi$. In this case we take $w = \Pi w_0$. Note that $w_0 - w\in\ker(E)$. Therefore, since this projection is along the subspace $\ker(E)$, we have $Ew = Ew_0 = b$. In this case we have the obvious estimate

    (20) $\|w\|_W \le \|w_0\|_W.$

    See Figure 1 for an illustration.

  3. Observe that $w\in\operatorname{range}(E^T)$ and therefore $w = E^Tu$ for some $u\in V$. By applying $\mathcal{R}^T$ we can make $u$ explicit to get $u = \mathcal{R}^Tw$. Then $u$ is the solution of (18). We obviously have the estimate

    (21) $\|u\|_V \le \|\mathcal{R}\|\,\|w\|_W.$

Combining (19), (20) and (21) gives us

$\|u\|_V \le \|\mathcal{R}\|^2\,\|b\|_V.$

We then obtain that $EE^T$ is invertible and $\|(EE^T)^{-1}\| \le \|\mathcal{R}\|^2$.

Figure 1: Illustration of subspaces of $W_0$. In order to illustrate angles we picture $\operatorname{range}(E^T)$ as a cone. We also illustrate the projection $w = \Pi w_0$.

It is easy to see that the construction above finally gives a bound for the condition number of $EE^T$ as

$\kappa(EE^T) \le \|E\|^2\,\|\mathcal{R}\|^2.$

Note that this is not a spectral condition number but rather the condition number of the operator $EE^T$, that is, $\kappa(EE^T) = \|EE^T\|\,\|(EE^T)^{-1}\|$.

A first consequence of this analysis is that the estimate in (20) is, in general, not sharp: it does not take into account the relative position of the subspace $\operatorname{range}(\mathcal{R})$ with respect to $\operatorname{range}(E^T)$, which may be taken into account.

We need the following definitions and results; see [11, 1, 8]. Let $M$ and $N$ be subspaces of $W$ (or $V$). Introduce the minimal angle $\theta_0(M,N)$ between the subspaces $M$ and $N$, with respect to the inner product $(\cdot,\cdot)_W$, as

(22) $\cos\big(\theta_0(M,N)\big) = \sup\Big\{\dfrac{(m,n)_W}{\|m\|_W\,\|n\|_W} :\ 0\ne m\in M,\ 0\ne n\in N\Big\}.$

Equivalently, we have

(23) $\cos\big(\theta_0(M,N)\big) = \|P_M P_N\|,$

where $P_M$ and $P_N$ denote the orthogonal projections onto $M$ and $N$. Still equivalently, we have,

(24) $\sin\big(\theta_0(M,N)\big) = \dfrac{1}{\|P_{M,N}\|},$

where $P_{M,N}$ is the (oblique) projection on $M$ and in the direction of $N$ (defined when $M\cap N = \{0\}$ and $M + N$ is closed).

Introduce the maximal angle $\theta_1(M,N)$ between the subspaces $M$ and $N$, as

$\cos\big(\theta_1(M,N)\big) = \inf_{0\ne m\in M}\ \sup_{0\ne n\in N}\ \dfrac{(m,n)_W}{\|m\|_W\,\|n\|_W}.$

Equivalently we have $\cos(\theta_1(M,N)) = \inf_{0\ne m\in M} \|P_N m\|_W / \|m\|_W$, where

(25) $\|P_N m\|_W = \sup_{0\ne n\in N}\dfrac{(m,n)_W}{\|n\|_W}.$

We also have,

$\|(I - P_N)m\|_W = \operatorname{dist}_W(m, N) \quad\text{for } m\in M,$

and

(26) $\sin\big(\theta_1(M,N)\big) = \sup_{0\ne m\in M}\dfrac{\operatorname{dist}_W(m,N)}{\|m\|_W}.$

Denote by $\Pi|_{\operatorname{range}(\mathcal{R})}$ the restriction of $\Pi$ (the orthogonal projection onto $\operatorname{range}(E^T)$) to $\operatorname{range}(\mathcal{R})$. Then we can replace (20) by the sharper estimate

(27) $\|w\|_W \le \big\|\Pi|_{\operatorname{range}(\mathcal{R})}\big\|\,\|w_0\|_W = \cos\big(\theta_0(\operatorname{range}(\mathcal{R}),\operatorname{range}(E^T))\big)\,\|w_0\|_W.$

Observe that (see [11, 1, 8]) for an oblique projection $P_{M,N}$ with $P_{M,N}\ne 0, I$ we have

(28) $\|P_{M,N}\| = \|I - P_{M,N}\| = \dfrac{1}{\sin\big(\theta_0(M,N)\big)} \ge 1.$

Further identities of this type, (29)-(31), can be found in [11, 1, 8]. See Figure 1 for an illustration. See [11, 1, 8] for more details and related results on oblique projections.
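A quick numerical illustration of the minimal angle and of the projection norm identity (28), with Euclidean inner products and randomly chosen subspaces (our own toy setup; `subspace_angles` computes the principal angles):

```python
import numpy as np
from scipy.linalg import orth, subspace_angles

rng = np.random.default_rng(1)
n, k = 6, 3
Qm = orth(rng.standard_normal((n, k)))     # orthonormal basis of M
Qn = orth(rng.standard_normal((n, k)))     # orthonormal basis of N; M + N = R^6

theta0 = subspace_angles(Qm, Qn).min()     # minimal (smallest principal) angle

# Oblique projection with range M and null space N: P [Qm Qn] = [Qm 0].
X = np.hstack([Qm, Qn])
P = np.hstack([Qm, np.zeros((n, k))]) @ np.linalg.inv(X)
assert np.allclose(P @ P, P)               # P is indeed a projection

# Norm identities: ||P|| = ||I - P|| = 1 / sin(theta_0).
print(np.linalg.norm(P, 2),
      np.linalg.norm(np.eye(n) - P, 2),
      1.0 / np.sin(theta0))                # the three numbers agree
```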

Remark 10

It is clear that for the case of the classical method of Section 3, where the overlap parameter $\delta$ is small, we have $\cos(\theta_0)\approx 1$, but in general (for wide overlap, for instance) we may have $\cos(\theta_0) < 1$.

We then have the following somewhat sharper result for the abstract case.

Lemma 11

Assume that there exists a bounded right inverse of $E$. That is, there exists a bounded operator $\mathcal{R}: V\to W_0$ such that $E\mathcal{R}v = v$ for all $v\in V$. Then, the mapping $EE^T$ is non-singular. Moreover, we have

$\|(EE^T)^{-1}\| \le \cos(\theta_0)\,\|\mathcal{R}\|^2,$

where $\theta_0$ is the minimal angle between the subspaces $\operatorname{range}(\mathcal{R})$ and $\operatorname{range}(E^T)$, that is,

$\cos(\theta_0) = \cos\big(\theta_0(\operatorname{range}(\mathcal{R}),\operatorname{range}(E^T))\big) = \big\|\Pi|_{\operatorname{range}(\mathcal{R})}\big\|.$

We then have the bound

$\kappa(EE^T) \le \cos(\theta_0)\,\|E\|^2\,\|\mathcal{R}\|^2.$


Proof. The estimate is obtained by combining (19), (21) and the bound (27).  

Remark 12

The operator $EE^T$ is obviously non-negative, and therefore, together with the condition number estimate in Lemma 11, we have useful bounds for the convergence of iterative methods such as Krylov subspace methods (when $V$ is of finite dimension, for instance).

There is another interesting observation that is useful for the analysis and is worth stating as a result before going to nonsymmetric methods.

Lemma 13

The operator $\mathcal{R}E$ is a projection on $\operatorname{range}(\mathcal{R})$ and along $\ker(E)$. Analogously, the operator $E^T\mathcal{R}^T$ is a projection on $\operatorname{range}(E^T)$ and along $\ker(\mathcal{R}^T)$.

Using this lemma we can study the relative position of the subspaces of interest. For instance we have,

(32) $W_0 = \operatorname{range}(\mathcal{R})\oplus\ker(E) = \operatorname{range}(E^T)\oplus\ker(\mathcal{R}^T).$
Remark 14

Note that $\mathcal{R}: V\to\operatorname{range}(\mathcal{R})$ and $E|_{\operatorname{range}(\mathcal{R})}:\operatorname{range}(\mathcal{R})\to V$ are inverse to each other.

5.2 General nonsymmetric method analysis using projections

Let $E_2: W_0\to V$ be a second extension operator and write $E_1 = E$ for the first one. We want to study the operator $E_1E_2^T$. See Figure 2 for an illustration.

Figure 2: Illustration of subspaces of $W_0$. In order to illustrate angles we picture $\operatorname{range}(E_2^T)$ as a cone. We also illustrate the procedure presented in the proof of Theorem 15 and the oblique projection $P$.
Theorem 15

Consider extension operators $E_1$ and $E_2$ with stable right inverses $\mathcal{R}_1$ and $\mathcal{R}_2$, respectively. Assume the boundedness of $P$, the oblique projection onto $\operatorname{range}(E_2^T)$ and in the direction of $\ker(E_1)$. Then, the operator $E_1E_2^T$ is invertible. Moreover,

$\big\|(E_1E_2^T)^{-1}\big\| \le \|\mathcal{R}_1\|\,\|P\|\,\|\mathcal{R}_2\|.$


Proof. As introduced before, we solve the equation

$E_1E_2^Tu = b.$

Let $b\in V$ be given.

  1. Define $w_0 = \mathcal{R}_1b$. Then we readily see that $E_1w_0 = b$. By assumption we then have

    (33) $\|w_0\|_W \le \|\mathcal{R}_1\|\,\|b\|_V.$

  2. Construct $w$ such that $E_1w = b$ and $w\in\operatorname{range}(E_2^T)$. Here we use the oblique projection $P$. See Figure 2. In fact, $w = Pw_0$. By definition of the projection we have $w_0 - w\in\ker(E_1)$, so that $E_1w = E_1w_0 = b$. We have,

    (34) $\|w\|_W \le \|P\|\,\|w_0\|_W.$

  3. Take $u$ such that $E_2^Tu = w$. In fact, $u = \mathcal{R}_2^Tw$. This is the solution of the equation above since we have $E_1E_2^Tu = E_1w = b$. We can bound

    (35) $\|u\|_V \le \|\mathcal{R}_2\|\,\|w\|_W.$

By combining the estimates in (33), (34) and (35) above we finish the proof.  
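A numerical sanity check of Theorem 15, with Euclidean inner products, random full-row-rank matrices standing in for $E_1$ and $E_2$, and the pseudoinverses as the right inverses $\mathcal{R}_i$ (all our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 8
E1 = rng.standard_normal((n, m))           # extension operators, full row rank
E2 = rng.standard_normal((n, m))

R1 = np.linalg.pinv(E1)                    # right inverses: E_i @ R_i = identity
R2 = np.linalg.pinv(E2)

# Oblique projection onto range(E2^T) along ker(E1):
P = E2.T @ np.linalg.solve(E1 @ E2.T, E1)
assert np.allclose(P @ P, P)

# The proof constructs (E1 E2^T)^{-1} = R2^T P R1, hence the bound below.
lhs = np.linalg.norm(np.linalg.inv(E1 @ E2.T), 2)
rhs = np.linalg.norm(R1, 2) * np.linalg.norm(P, 2) * np.linalg.norm(R2, 2)
print(lhs <= rhs, lhs, rhs)
```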

We can now give a bound for the condition number of the operator $E_1E_2^T$.

Corollary 16

We have

$\kappa(E_1E_2^T) = \|E_1E_2^T\|\,\big\|(E_1E_2^T)^{-1}\big\| \le \|E_1\|\,\|E_2\|\,\|\mathcal{R}_1\|\,\|P\|\,\|\mathcal{R}_2\|.$