1 Introduction
The convergence of domain decomposition methods relies heavily on the efficiency of the coarse space used in the second level, see [15, 22, 16] and references therein. These methods are based on two ingredients: a coarse space (CS) and a correction formula (see e.g. [21]). The GenEO coarse space introduced in [19] has been shown to lead to a robust two-level Schwarz preconditioner which scales well over multiple cores. The robustness is due to its good approximation properties for problems with highly heterogeneous material parameters. This approach is closely related to [4]; we refer to the introduction of [19] for more details on the differences and similarities between both approaches. Here we will mainly work with a slight modification of the GenEO CS introduced in [2] for the additive Schwarz method (see e.g. [22]) and with the GenEO2 CS introduced in [7] for the P.L. Lions algorithm [11]. These variants are easier to implement and in practice have similar performance, although they may lead to a larger CS. More details are given in Annex 5, where we explain how to adapt the framework of [2] to the GenEO CS of [19].
We focus in this paper on a modification of the coarse component of the correction formula. Indeed, the coarse component of the preconditioner can ultimately become a bottleneck if the number of subdomains is very large and exact solves are used. It is therefore interesting to consider the effect of inexact coarse solves on the robustness. We show that the additive Schwarz method is naturally robust. Interestingly, the GenEO2 method introduced in [7] has to be modified in order to be able to prove its robustness in this context. In the context of domain decomposition methods, the robustness of the BDDC method w.r.t. inexact coarse solves has been studied in [23, 24] and in [13]. We focus here on GenEO methods. Compared to works on multilevel methods such as [25, 3], which are concerned with Schwarz multilevel methods where the coarse space is obtained by a coarse grid discretisation of the elliptic problem, we explicitly state robustness results of the two-level method with respect to inexact coarse solves when the coarse space is obtained by the solution of local generalized eigenvalue problems. Moreover, we are concerned not only with Schwarz methods but also with the P.L. Lions algorithm.
The general framework of our work is the following. Let $M^{-1}$ be a one-level preconditioner for a symmetric positive definite linear system with matrix $A$, further enhanced by a second-level correction based on a rectangular matrix $Z$ whose columns are a basis of a coarse space $V_0$. The coarse space correction is
(1) $Q := Z\, E^{-1} Z^T\,,$
and the coarse operator $E$ is defined by
(2) $E := Z^T A\, Z\,.$
Let $M^{-1}$ denote a one-level preconditioner; the hybrid two-level method is defined by:
$M_2^{-1} := Q + (I - Q\,A)\, M^{-1} (I - A\,Q)\,;$
see the balancing domain decomposition method by J. Mandel [12]. This formula also appeared in an unpublished work by Schnabel [18]; see [5] for more details on the connections between these works.
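The key algebraic property of the balancing formula can be checked numerically. The sketch below uses assumed standard notation ($A$ the global SPD matrix, $M^{-1}$ a one-level preconditioner, $Z$ the coarse basis); the random matrices and the Jacobi choice of $M^{-1}$ are purely illustrative. It verifies that the hybrid preconditioned operator reproduces coarse vectors exactly, i.e. $M_2^{-1} A\, Z = Z$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 4

# Random SPD matrix A and a full-rank coarse basis Z (illustrative)
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)
Z = rng.standard_normal((n, m))

E = Z.T @ A @ Z                     # coarse operator, as in (2)
Q = Z @ np.linalg.solve(E, Z.T)     # coarse correction, as in (1)

M1 = np.diag(1.0 / np.diag(A))      # a one-level preconditioner (Jacobi, for the sketch)
I = np.eye(n)
M2 = Q + (I - Q @ A) @ M1 @ (I - A @ Q)   # hybrid (balancing) formula

# The hybrid preconditioned operator acts as the identity on the coarse space
residual = np.linalg.norm(M2 @ A @ Z - Z)
```

The identity holds because $Q\,A\,Z = Z$, so the one-level term is applied to a residual that vanishes on the coarse space.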
We consider GenEO methods, where the coarse space spanned by the columns of $Z$ is built from solving generalized eigenvalue problems (GEVP) in the subdomains. Recall that these GEVP solves are purely parallel tasks with no communication involved; this part of the preconditioner setup does not penalize parallelism. Actually, in strong scaling experiments, where the number of degrees of freedom per subdomain gets smaller and smaller, the elapsed time taken by these tasks decreases. Thus, this task scales strongly. On the other hand, as the size of the matrix $E$ typically increases linearly with the number of subdomains, solving the corresponding linear systems, for instance with a direct factorization, becomes a bottleneck in two-level domain decomposition methods. It is therefore interesting to estimate the robustness of the modified two-level method when the inverse $E^{-1}$ of the operator defined in (2) is approximated by some operator $\tilde E^{-1}$, since this paves the way to inexact coarse solves or to methods with three or more levels. The operator $\tilde E^{-1}$ may be obtained in many ways: approximate LU factorizations (e.g. ILU(k), ILU or single precision factorizations), sparse approximate inverses, Krylov subspace recycling methods, multigrid methods and of course domain decomposition methods. In the latter case, we would have a multilevel method. Note that our results are expressed in terms of the spectral properties of $\tilde E^{-1}$, so that an approximation method for which such results exist is preferable.
More precisely, formula (2) is modified and the preconditioner we study is defined by:
$\tilde Q := Z\, \tilde E^{-1} Z^T\,, \qquad M_{2,\tilde E}^{-1} := \tilde Q + (I - \tilde Q\,A)\, M^{-1} (I - A\,\tilde Q)\,,$
and throughout the paper we make
Assumption 1.1
The operator $\tilde E^{-1}$ is symmetric positive definite.
2 Basic definitions
The problem to be solved is defined via a variational formulation on a domain $\Omega \subset \mathbb{R}^d$ for $d \in \{2, 3\}$:
Find $u \in V$ such that $a_\Omega(u, v) = l(v)$ for all $v \in V$,
where $V$ is a Hilbert space of functions from $\Omega$ with real values. The problem we consider is given through a symmetric positive definite bilinear form that is defined in terms of an integral over any open set $D \subset \Omega$. Typical examples are the heterogeneous diffusion equation ($\kappa$ is a diffusion tensor)
$a_D(u, v) := \int_D \kappa\, \nabla u \cdot \nabla v\; dx\,,$
or the elasticity system ($C$ is the fourth-order stiffness tensor and $\varepsilon(u)$ is the strain tensor of a displacement field $u$):
$a_D(u, v) := \int_D C\, \varepsilon(u) : \varepsilon(v)\; dx\,.$
The problem is discretized by a finite element method. Let $\mathcal{N}$ denote the set of degrees of freedom and $(\phi_k)_{k \in \mathcal{N}}$ be a finite element basis on a mesh $\mathcal{T}_h$. Let $A$ be the associated finite element matrix, $A_{kl} := a_\Omega(\phi_l, \phi_k)$, $k, l \in \mathcal{N}$. For some given right hand side $f$, we have to solve a linear system in $x$ of the form $A\,x = f\,.$
Domain $\Omega$ is decomposed into $N$ (overlapping or non-overlapping) subdomains $(\Omega_i)_{1 \le i \le N}$ so that all subdomains are a union of cells of the mesh $\mathcal{T}_h$. This decomposition induces a natural decomposition of the set of indices $\mathcal{N}$ into $N$ subsets of indices $(\mathcal{N}_i)_{1 \le i \le N}$:
(3) $\mathcal{N}_i := \{ k \in \mathcal{N} \,:\, \operatorname{supp}(\phi_k) \cap \Omega_i \neq \emptyset \}\,, \quad 1 \le i \le N\,.$
For all $1 \le i \le N$, let $R_i$ be the restriction matrix from $\mathbb{R}^{\#\mathcal{N}}$ to the subset $\mathbb{R}^{\#\mathcal{N}_i}$ and $D_i$ be a diagonal matrix of size $\#\mathcal{N}_i \times \#\mathcal{N}_i$, so that we have a partition of unity at the algebraic level,
(4) $\sum_{i=1}^{N} R_i^T\, D_i\, R_i = I\,,$
where $I \in \mathbb{R}^{\#\mathcal{N} \times \#\mathcal{N}}$
is the identity matrix.
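The algebraic partition of unity (4) can be illustrated on a toy 1-D example. The sketch assumes the common choice where the diagonal weights are the inverse multiplicities of the degrees of freedom; the index sets are illustrative:

```python
import numpy as np

n = 10
# Two overlapping subsets of a 1-D set of n degrees of freedom; overlap = {4, 5}
subsets = [list(range(0, 6)), list(range(4, 10))]

# Multiplicity of each dof = number of subsets containing it
mult = np.zeros(n)
for s in subsets:
    mult[s] += 1

R = [np.eye(n)[s, :] for s in subsets]            # Boolean restriction matrices R_i
D = [np.diag(1.0 / mult[s]) for s in subsets]     # partition-of-unity weights D_i

pou = sum(Ri.T @ Di @ Ri for Ri, Di in zip(R, D)) # should equal the identity, as in (4)
```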
We also define for all subdomains $1 \le i \le N$ the $\#\mathcal{N}_i \times \#\mathcal{N}_i$ matrix $\tilde A_i$ defined by
(5) $(\tilde A_i)_{kl} := a_{\Omega_i}(\phi_l, \phi_k)\,, \quad k, l \in \mathcal{N}_i\,.$
When the bilinear form results from the variational formulation of a Laplace problem, the previous matrix corresponds to the discretization of a local Neumann boundary value problem. For this reason we will call it the "Neumann" matrix even in a more general setting.
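A minimal 1-D sketch of the subassembly property behind these "Neumann" matrices: when the elements are partitioned (without overlap) among subdomains, the prolongated local Neumann matrices sum back to the global matrix. The P1 Laplacian element and the two-subdomain split are illustrative; no boundary condition is imposed, so all matrices are of pure Neumann type:

```python
import numpy as np

ne = 8                                       # number of elements; ne + 1 nodes
n = ne + 1
ke = np.array([[1.0, -1.0], [-1.0, 1.0]])    # P1 element stiffness (unit coefficient)

def assemble(elems, dofs):
    """Assemble the stiffness matrix of a list of 1-D elements on a dof subset."""
    loc = {d: i for i, d in enumerate(dofs)}
    K = np.zeros((len(dofs), len(dofs)))
    for e in elems:                          # element e connects nodes e and e + 1
        for a, da in enumerate((e, e + 1)):
            for b, db in enumerate((e, e + 1)):
                K[loc[da], loc[db]] += ke[a, b]
    return K

A = assemble(range(ne), list(range(n)))      # global matrix

# Two subdomains = two element groups; their dof sets share the interface node 4
parts = [(range(0, 4), list(range(0, 5))), (range(4, 8), list(range(4, 9)))]
A_sub = np.zeros((n, n))
for elems, dofs in parts:
    R = np.eye(n)[dofs, :]                   # restriction to the subdomain dofs
    A_sub += R.T @ assemble(elems, dofs) @ R # subassembly of local Neumann matrices
```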
We also make use of two numbers $k_0$ and $k_1$ related to the domain decomposition. Let
(6) $k_0 := \max_{1 \le i \le N} \#\{\, j \,:\, 1 \le j \le N \ \text{and}\ R_j\, A\, R_i^T \neq 0 \,\}$
be the maximum multiplicity of the interaction between
subdomains plus one. Let $k_1$ be the maximal multiplicity of the subdomain intersections, i.e. the largest integer $m$ such that there exist $m$ different subdomains whose intersection has a non zero measure.
Let $\tilde\pi_0$ be defined as:
(7) $\tilde\pi_0 := Z\, \tilde E^{-1} Z^T A\,;$
the operator $\tilde\pi_0$ is thus an approximation to the $a$-orthogonal projection $\pi_0 := Z\, E^{-1} Z^T A$ on the coarse space $V_0$,
which corresponds to an exact coarse solve.
Note that although $\tilde\pi_0$ is not a projection, it has the same kernel and range as $\pi_0$:
Lemma 2.1
We have
$\ker \tilde\pi_0 = V_0^{\perp_a} \quad \text{and} \quad \operatorname{range} \tilde\pi_0 = V_0\,,$
where $V_0^{\perp_a}$
is the vector space
orthogonal to $V_0$, that is when $\mathbb{R}^{\#\mathcal{N}}$ is endowed with the scalar product induced by $A$: $(u, v)_A := (A\,u,\, v)$. Proof. First note that the kernel of $\tilde\pi_0$ contains $V_0^{\perp_a}$: if $Z^T A\, u = 0$, then $\tilde\pi_0\, u = 0$. On the other hand, if $\tilde\pi_0\, u = 0$, we have:
$0 = (A\,u)^T Z\, \tilde E^{-1} Z^T A\, u\,.$
Since $\tilde E^{-1}$ is SPD, it means that $Z^T A\, u = 0$, that is $u \in V_0^{\perp_a}$. We have thus $\ker \tilde\pi_0 = V_0^{\perp_a}$. Note that $\mathbb{R}^{\#\mathcal{N}} = V_0 \oplus V_0^{\perp_a}$.
As for the range of $\tilde\pi_0$, since the last operation in its definition is the multiplication by the matrix $Z$, we have $\operatorname{range} \tilde\pi_0 \subset V_0$. Conversely, let $v \in V_0$; there exists $\beta$ such that $v = Z\,\beta$. It is easy to check that $\tilde\pi_0\, (Z\, E^{-1} \tilde E\, \beta) = Z\,\beta$. Thus, $\operatorname{range} \tilde\pi_0 = V_0$.
The same arguments hold if $\tilde E^{-1}$ is replaced by $E^{-1}$. Thus, $\pi_0$ and $\tilde\pi_0$ have the same kernel and range.
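Lemma 2.1 can be checked numerically. In the sketch below (all matrices and the diagonal SPD surrogate for the inverse coarse operator are illustrative), both the exact operator and its inexact counterpart have rank equal to the coarse dimension and annihilate the same kernel, while only the exact one is a projection:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 12, 3
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)              # illustrative SPD matrix
Z = rng.standard_normal((n, m))          # illustrative coarse basis

E = Z.T @ A @ Z
H = np.diag(1.0 / np.diag(E))            # some SPD approximation of E^{-1}

pi0 = Z @ np.linalg.solve(E, Z.T) @ A    # exact: a-orthogonal projection onto range(Z)
pi0t = Z @ H @ Z.T @ A                   # inexact counterpart (not a projection)

# Same range: both operators have rank m, the dimension of the coarse space
rank_exact = np.linalg.matrix_rank(pi0)
rank_inexact = np.linalg.matrix_rank(pi0t)

# Same kernel: a basis of ker(Z^T A) is annihilated by both operators
_, _, Vt = np.linalg.svd(Z.T @ A)
ker = Vt[m:].T                           # (n, n - m) basis of the common kernel
res_exact = np.linalg.norm(pi0 @ ker)
res_inexact = np.linalg.norm(pi0t @ ker)

# Only the exact operator satisfies the projection identity
proj_defect_exact = np.linalg.norm(pi0 @ pi0 - pi0)
```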
3 Inexact Coarse Solves for GenEO
The GenEO coarse space was introduced in [19] and the slight modification of it that we use is defined as follows, see [2]:
Definition 3.1 (Generalized Eigenvalue Problem for GenEO)
For each subdomain $1 \le i \le N$, we introduce the generalized eigenvalue problem
(8) 
Let $\tau > 0$ be a user-defined threshold; we define $V_{i,\tau}$ as the vector space spanned by the eigenvectors of (8) corresponding to eigenvalues larger than $\tau$.
Let $\xi_i$ be the projection from $\mathbb{R}^{\#\mathcal{N}_i}$ on $V_{i,\tau}$ parallel to the span of the remaining eigenvectors.
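Since the local problem (8) is not reproduced here, the following generic sketch only illustrates the selection mechanism of Definition 3.1: solve a symmetric generalized eigenvalue problem (reduced to a standard one through a Cholesky factorization of the SPD right-hand side) and keep the eigenvectors whose eigenvalues exceed the threshold $\tau$. The matrices and the threshold value are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 8
GA = rng.standard_normal((k, k))
GB = rng.standard_normal((k, k))
A_loc = GA @ GA.T + np.eye(k)        # left-hand side matrix of the GEVP
B_loc = GB @ GB.T + np.eye(k)        # SPD right-hand side matrix

# Reduce A_loc v = lam * B_loc v to a standard symmetric problem via Cholesky
L = np.linalg.cholesky(B_loc)
Linv = np.linalg.inv(L)
C = Linv @ A_loc @ Linv.T            # symmetric, same eigenvalues as the GEVP
lam, W_std = np.linalg.eigh(C)       # eigenvalues in ascending order
V = Linv.T @ W_std                   # generalized eigenvectors

tau = 1.0                            # user-defined threshold
W = V[:, lam > tau]                  # spans the local contribution to the coarse space
```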
In this section, $Z$ denotes a rectangular matrix whose columns are a basis of the coarse space $V_0$ built from Definition 3.1; the dimension of $V_0$ is the number of columns of $Z$. The GenEO preconditioner with inexact coarse solve reads:
(9) 
The study of the spectrum of the preconditioned operator is based on the Fictitious Space lemma, which is recalled here; see [14] for the original paper and [6] for a modern presentation.
Lemma 3.1 (Fictitious Space Lemma, Nepomnyaschikh 1991)
Let $H$ and $H_D$ be two Hilbert spaces, with the scalar products denoted by $(\cdot,\cdot)$ and $(\cdot,\cdot)_D$. Let the symmetric positive bilinear forms $a$ and $b$ be generated by the s.p.d. operators $A : H \to H$ and $B : H_D \to H_D$, respectively (i.e. $(A\,u, v) = a(u, v)$ for all $u, v \in H$ and $(B\,u_D, v_D)_D = b(u_D, v_D)$ for all $u_D, v_D \in H_D$). Suppose that there exists a linear operator $\mathcal{R} : H_D \to H$ that satisfies the following three assumptions:
(i) 
$\mathcal{R}$ is surjective.
(ii) 
Continuity of $\mathcal{R}$: there exists a positive constant $c_R$ such that for all $u_D \in H_D$
(10) $a(\mathcal{R}\,u_D,\, \mathcal{R}\,u_D) \le c_R\; b(u_D, u_D)\,.$
(iii) Stable decomposition: there exists a positive constant $c_T$ such that for all $u \in H$ there exists $u_D \in H_D$ with $\mathcal{R}\,u_D = u$ and
(11) $c_T\; b(u_D, u_D) \le a(\mathcal{R}\,u_D,\, \mathcal{R}\,u_D) = a(u, u)\,.$
We introduce the adjoint operator $\mathcal{R}^* : H \to H_D$ by
$(\mathcal{R}\,u_D,\, u) = (u_D,\, \mathcal{R}^*\,u)_D$ for all $u_D \in H_D$ and
$u \in H$.
Then, we have the following spectral estimate
(12) $c_T\; a(u, u) \le a\left(\mathcal{R}\,B^{-1}\mathcal{R}^*\,A\,u,\; u\right) \le c_R\; a(u, u) \quad \text{for all } u \in H\,,$
which proves that the eigenvalues of the operator $\mathcal{R}\,B^{-1}\mathcal{R}^*\,A$ are bounded from below by $c_T$ and from above by $c_R$.
Loosely speaking, the first assumption corresponds to equation (2.3), page 36 of [22], where the global Hilbert space is assumed to admit a decomposition into subspaces. The second assumption is related to Assumptions 2.3 and 2.4, page 40 of [22]. The third assumption corresponds to the stable decomposition Assumption 2.2, page 40 of [22].
In order to apply this lemma to the preconditioned operator, we introduce the Hilbert spaces $H$ and $H_D$ as follows:
$H := \mathbb{R}^{\#\mathcal{N}}$, endowed with the bilinear form $a(u, v) := (A\,u,\, v)$, and
$H_D := \mathbb{R}^{\dim V_0} \times \prod_{i=1}^{N} \mathbb{R}^{\#\mathcal{N}_i}$, endowed with the following bilinear form
(13) 
We denote by $B : H_D \to H_D$ the operator such that $(B\,\mathcal{U},\, \mathcal{V})_D = b(\mathcal{U}, \mathcal{V})$ for all $\mathcal{U}, \mathcal{V} \in H_D$.
Let $\mathcal{R} : H_D \to H$ be defined by
(14) 
where . Recall that if we had used an exact coarse space solve, we would have introduced:
(15) 
Note that we have
It can be checked that $\mathcal{R}\,B^{-1}\mathcal{R}^*$ coincides with the preconditioner defined in (9). In order to apply the fictitious space Lemma 3.1, its three assumptions have to be checked.
Continuity of $\mathcal{R}$
We have to estimate a constant $c_R$ such that for all $\mathcal{U} \in H_D$, we have:
Let $\alpha$ be some positive number. Using that the image of is orthogonal to the image of , the Cauchy-Schwarz inequality and the orthogonality of the projection , we have:
It is possible to minimize the resulting factor over $\alpha$ by using the
Lemma 3.2
Let $a$ and $b$ be positive constants; we have
$\min_{\alpha > 0} \left( (1 + \alpha)\, a + (1 + \alpha^{-1})\, b \right) = \left( \sqrt{a} + \sqrt{b} \right)^2\,.$
Proof
The optimal value $\alpha = \sqrt{b/a}$ corresponds to the equality $\alpha\, a = \alpha^{-1}\, b$.
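Lemma 3.2 is easy to verify numerically; the values of $a$ and $b$ below are arbitrary:

```python
import numpy as np

a, b = 2.0, 3.0

# Brute-force minimization of (1 + alpha) * a + (1 + 1/alpha) * b over alpha > 0
alphas = np.linspace(1e-3, 10.0, 200000)
vals = (1 + alphas) * a + (1 + 1 / alphas) * b
numeric_min = vals.min()

closed_form = (np.sqrt(a) + np.sqrt(b)) ** 2   # claimed minimum value
alpha_opt = np.sqrt(b / a)                     # where alpha * a = b / alpha
```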
Let
(16) 
the formula of Lemma 3.2 yields
(17) 
Actually, it can be expressed in terms of the minimal eigenvalue of .
Lemma 3.3
Another formula for :
Proof. Since the operator is symmetric, its norm is also given by
We can go further by using the fact that $\pi_0$ is an $a$-orthogonal projection and that $\pi_0$ and $\tilde\pi_0$ have the same kernel and range:
This means that formula (17) for can be expressed explicitly in terms of and of the minimal and maximal eigenvalues of .
Stable decomposition
Let $u \in H$ be decomposed as follows:
Let be such that , we choose the following decomposition:
The stable decomposition consists in estimating a constant $c_T$ such that:
(18) 
Since the second term in the left-hand side is the same as in the exact coarse solve method, we have (see [2], page 177, Lemma 7.15):
(19) 
We now focus on the first term of the left-hand side of (18). Let $\alpha$ be some positive number; using again (19), the following auxiliary result holds:
The best possible value for $\alpha$ is
Hence, we have:
(20) 
Thus, we have:
This last estimate along with (19) proves that in (18) it is possible to take
(21) 
Overall, with $c_T$ given by (21) and $c_R$ by (17), we have proved the following spectral estimate:
(22) 
The constants $c_T$ and $c_R$ are stable with respect to the number of subdomains and the spectrum of the approximate coarse operator, so that (22) proves the stability of the preconditioner w.r.t. inexact coarse solves.
4 Inexact Coarse Solves for GenEO2
The GenEO2 coarse space construction was introduced in [8, 7]; see also [2], § 7.7, page 186. It is motivated by domain decomposition methods for which the local solves are not necessarily Dirichlet solves, e.g. discretizations of Robin boundary value problems, see [20]. We have not been able to prove the robustness of the GenEO2 coarse space with respect to inexact coarse solves when it is used in the original GenEO2 preconditioner (40), see Remark 4.3. For this reason, we study here a slight modification of the preconditioner, eq. (4.3), for which we prove robustness. The more intricate analysis of GenEO2 compared to that of GenEO is related to the differences between the Schwarz and P.L. Lions algorithms themselves. Indeed, in the Schwarz method, Assumption (ii) of the fictitious space Lemma 3.1 comes almost for free even for a one-level method, whereas Assumption (iii) (stable decomposition) can only be fulfilled with a two-level method. In the P.L. Lions algorithm, neither of the two assumptions is satisfied by the one-level method. This is reflected in the fact that the proofs for GenEO2 are more intricate than those for GenEO.
For all subdomains $1 \le i \le N$, let $B_i$ be a matrix of size $\#\mathcal{N}_i \times \#\mathcal{N}_i$, which typically comes from the discretization of local boundary value problems using optimized transmission conditions or Neumann boundary conditions. Recall that by construction the matrix $B_i$ is symmetric positive semi-definite, and we make the following extra assumption:
Assumption 4.1
For all subdomains $1 \le i \le N$, the matrix $B_i$ is symmetric positive semi-definite and either of the two following conditions holds:

is definite,

and is definite.
We first consider the case where $B_i$ is definite. The other case will be treated in Remark 4.4. We recall the coarse space defined in [8, 7, 2]. Let $\gamma$ and $\tau$ be two user-defined thresholds. We introduce two generalized eigenvalue problems which, by Assumption 4.1, are regular.
Definition 4.1 (Generalized Eigenvalue Problem for the lower bound)
For each subdomain $1 \le i \le N$, we introduce the generalized eigenvalue problem
(23) 
Let $\gamma$ be a user-defined threshold and be the projection from on parallel to . We define $V_{i,\gamma}$ as the vector space spanned by the eigenvectors corresponding to eigenvalues smaller than $\gamma$. Let $V_\gamma$ be the vector space spanned by the collection over all subdomains of the vector spaces $V_{i,\gamma}$.
Definition 4.2 (Generalized Eigenvalue Problem for the upper bound)
For each subdomain $1 \le i \le N$, we introduce the generalized eigenvalue problem
(24) 
Let $\tau$ be a user-defined threshold; we define $V_{i,\tau}$ as the vector space spanned by the eigenvectors corresponding to eigenvalues larger than $\tau$. Let $V_\tau$ be the vector space spanned by the collection over all subdomains of the vector spaces $V_{i,\tau}$.
Now, let denote the orthogonal projection from on
parallel to
The coarse space built from the above generalized eigenvalues is defined as the following sum:
It is spanned by the columns of a full-rank rectangular matrix $Z$. The projection and its approximation are defined by the same formula as above, see (7).
We have the following
Lemma 4.1
For , let us introduce the orthogonal projection from on
Then for all , we have:
Moreover, for all , we have:
Proof Let , we have:
Since, by Lemma 7.6, page 167 in [2], we have:
the conclusion follows by summation over all subdomains.
The definition of the stable preconditioner is based on a pseudo-inverse of that we introduce now. Let denote the restriction of to where is endowed with the Euclidean scalar product:
(25) 
By the Riesz representation theorem, there exists a unique isomorphism into itself so that for all , we have:
The inverse of will be denoted by and is given by the following formula
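Although the formulas above are not reproduced here, the construction of such a pseudo-inverse can be sketched numerically: if $P$ has orthonormal columns spanning a subspace $V$ on which a symmetric positive definite matrix $B$ (an illustrative stand-in for the local operator) is restricted, then $P (P^T B P)^{-1} P^T$ is the symmetric operator supported on $V$ that inverts $B$ there:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 10, 3

G = rng.standard_normal((n, n))
B = G @ G.T + np.eye(n)                 # SPD stand-in for the local operator

# Orthonormal basis P of a subspace V (Euclidean scalar product)
P, _ = np.linalg.qr(rng.standard_normal((n, m)))

BV = P.T @ B @ P                        # Riesz representative of the restriction of B to V
BV_pinv = P @ np.linalg.inv(BV) @ P.T   # pseudo-inverse supported on V

# BV_pinv inverts B on V: for any v in V, BV_pinv @ (B @ v) = v
v = P @ rng.standard_normal(m)
err = np.linalg.norm(BV_pinv @ B @ v - v)
```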