## 1. Introduction

In 2009, Nielsen, Tveito, and Hackbusch studied in [12] the spectra of elliptic differential operators with scalar coefficient functions, defined in infinite-dimensional spaces and preconditioned by the inverse of the Laplacian. They proved that the range of the scalar coefficient is contained in the spectrum of the preconditioned operator, provided that the coefficient is continuous. Ten years later, Gergelits, Mardal, Nielsen, and Strakoš showed in [5], without any assumption on the continuity of the scalar coefficient, that there exists a one-to-one pairing between the eigenvalues of the discretized operator preconditioned by the inverse of the discretized Laplacian and the intervals determined by the images, under the coefficient, of the supports of the conforming finite element (FE) nodal basis functions used for the discretization.

The present paper contributes to the results of [5] and generalizes some of them. While in [5] a one-to-one pairing between the eigenvalues and the images of the scalar data defined on the supports of the FE basis functions is proved, we introduce guaranteed two-sided bounds on all individual eigenvalues. Our approach is based on the Courant–Fischer min-max principle. As in [5], the bounds can be obtained solely from the data of the original and preconditioning problems restricted to the supports of the FE basis functions. While in [12] and [5] only the diffusion operator with scalar data is considered and the Laplacian is used for preconditioning, we also treat the diffusion operator with tensor data and with Dirichlet or Robin boundary conditions for both the original and preconditioning operators. Our theory further applies to operators with non-trivial null spaces and to operators with vector valued unknown functions; as an example, we study the elasticity operator with general tensor data. Any kind of conforming FE basis functions can be employed for the discretization; the sets of FE basis functions must, however, be the same for the original and preconditioning operators. For the sake of brevity, the name preconditioning matrix (or operator) will be used for the matrix (or operator) which is (spectrally) close to the original matrix (or operator, respectively), rather than for its inverse. In contrast, in the literature, including [5], the inverse is often called the preconditioning matrix.

For the numerical solution of sparse discretized elliptic partial differential equations, the conjugate gradient (CG) method (or Krylov subspace methods in general) is the method of choice; see, e.g.,

[8, 16, 13]. It is well known that its convergence depends on the distribution (clustering) of the eigenvalues of the involved matrices and on the weights of the associated eigenvectors in the initial residual. For example, large outlying eigenvalues or well-separated clusters of large eigenvalues can accelerate convergence; see, e.g., [8, 14] or the example in [5, Section 2]. Computations in finite precision arithmetic, however, can slow down the convergence rate; see, e.g., [10, 15] and the recent comprehensive paper [6]. In any case, controlling or estimating not only condition numbers but also the distributions of all eigenvalues of the preconditioned matrices can yield faster convergence. Our approach can also provide guaranteed, easily accessible lower bounds on the smallest eigenvalue of the preconditioned problem, which is demanded, for example, by accurate algebraic error estimates; see, e.g., [9].

The structure of the paper is as follows. In the subsequent section, we introduce the diffusion and linear elasticity equations as examples of scalar and vector valued elliptic differential equations to which our approach can be applied. In the third section, the discretization and the preconditioning setting are described. In the fourth section, the main part of the paper, we suggest a method of estimating the eigenvalues of the preconditioned matrices. The theoretical developments are accompanied by illustrative examples. Finally, we compare our method with the recent results from [5]. A short conclusion summarizes the paper.
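The influence of eigenvalue clustering on CG convergence mentioned above can be illustrated by a small self-contained experiment (ours, not from the paper; all data are made up): plain CG is applied to two diagonal SPD matrices with the same condition number but differently distributed eigenvalues, and the clustered spectrum needs far fewer iterations.

```python
import numpy as np

def cg_iters(A_diag, b, tol=1e-10, maxit=1000):
    """Plain CG on a diagonal SPD matrix (stored as a vector of eigenvalues);
    returns the number of iterations needed to reach the relative tolerance."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for k in range(1, maxit + 1):
        Ap = A_diag * p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            return k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return maxit

rng = np.random.default_rng(0)
n = 200
b = rng.standard_normal(n)

# Same condition number (100), different eigenvalue distributions:
spread = np.linspace(1.0, 100.0, n)        # eigenvalues spread uniformly
clustered = np.ones(n)                     # cluster at 1 ...
clustered[-5:] = 100.0                     # ... plus a few large outliers

it_spread = cg_iters(spread, b)
it_clustered = cg_iters(clustered, b)
print(it_spread, it_clustered)
```

With only two distinct eigenvalues, CG converges in essentially two steps, while the uniformly spread spectrum of the same condition number requires many more iterations.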

## 2. Diffusion and elasticity problems

Our theory of estimating the eigenvalues will be applied to two common types of scalar and vector valued elliptic partial differential equations: the diffusion and the linear elasticity equations, respectively. To this end, let us briefly introduce the associated definitions and notation; see, e.g., [2, 3, 4, 11] for further details. We assume general mixed boundary conditions for the diffusion equation and, for simplicity of exposition, homogeneous Dirichlet boundary conditions for the elasticity equation.

Let the computational domain be a bounded polygonal domain in two or three space dimensions. We consider the diffusion equation with Dirichlet and Robin boundary conditions

where the Dirichlet and the Robin parts are two disjoint parts of the boundary that together cover it, and the outer unit normal to the boundary appears in the Robin condition. After lifting the solution by a differentiable function that fulfills the non-homogeneous Dirichlet boundary condition and substituting accordingly, the weak form of the new problem reads: find the new unknown such that

(2.1) |

where

for all admissible test functions; see, e.g., [4] for details. Standard integrability and sign assumptions are imposed on the data and on the function in the Robin condition. The material data are assumed to be essentially bounded, symmetric, and uniformly elliptic (positive definite) in the domain. Thus there exist constants such that

(2.2) |
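For concreteness, uniform ellipticity of tensor data as in (2.2) is commonly written in the following standard form; the symbols $\mathsf{K}$ for the tensor data and $c_1, c_2$ for the constants are our hypothetical notation, not necessarily that of the original text.

```latex
% standard uniform ellipticity and boundedness of the tensor data K(x)
c_1 \, |\xi|^2 \;\le\; \xi^{\mathsf T} \mathsf{K}(x)\, \xi \;\le\; c_2 \, |\xi|^2
\qquad \text{for a.e. } x \in \Omega \text{ and all } \xi \in \mathbb{R}^d,
\quad 0 < c_1 \le c_2 .
```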

The weak form of the linear elasticity problem with homogeneous Dirichlet boundary conditions reads: find the displacement field such that

(2.3) |

where

for all admissible test functions, where the right-hand side represents the body forces. Due to the homogeneous Dirichlet boundary conditions on the whole boundary, we use a special notation for the solution space. Let

(2.4) |

be the components of the Cauchy stress tensor with the strain components obtained from the displacement vector as

Assuming and denoting , , we can write

We assume that the coefficients of the tensor in (2.4) are bounded measurable functions defined in the domain, fulfilling the symmetry conditions

(2.5) |

Further, we assume there exists a constant such that

Under these assumptions, due to the symmetries (2.5), there exist coefficients such that the stress vector can be obtained from the strain vector as

From this point on, we will use only the new set of material coefficients (instead of the original tensor entries) and denote the associated matrix accordingly. Some special material properties imply certain structures of this matrix. For example, a homogeneous cubic 3D material is determined by a few groups of equal coefficients and annihilates the other components. In particular, for an isotropic material, we have

where the two material parameters are Young's modulus and the Poisson ratio [11].

The vector of external forces fulfills

yielding

Thus and can be equivalently written as

If the problem is posed in two dimensions, the dimensions of the arrays naturally reduce. For example, for a cubic material we get

## 3. Discretization and preconditioning

We assume that a conforming FE method is employed for the discretization of the diffusion and elasticity problems defined by (2.1) and (2.3), respectively. The domain is thus decomposed into a finite number of elements. Continuous FE basis functions (with compact supports) are used as approximation and test functions. For each basis function, we denote the smallest patch of elements covering its support. In correspondence with section 2, we denote the material data of the diffusion and elasticity operators, and the data of the associated preconditioning operators, by distinct symbols. The function entering the Robin boundary conditions is allowed to be different in the original and preconditioning operators; it is therefore denoted differently in the latter.

The stiffness matrices and of the systems of linear equations of the discretized problems (2.1) and (2.3), respectively, have elements

and

(3.1) |

respectively, where , and . The preconditioning matrices and obtained for the material data and , respectively, have elements

and

respectively. All integrals are supposed to be carried out exactly.

The idea of preconditioning, see, e.g., [7, Section 10.3] or [13, Chapters 9 and 10], is based on the assumptions that a system of linear equations with the preconditioning matrix is relatively easy to solve and that the spectrum of the preconditioned matrix is more favorable than that of the original one with respect to some iterative solution method, which does not necessarily mean a smaller condition number [5]. Substituting the original equation with the preconditioned one thus leads to an equivalent problem that can be solved more efficiently than the original one.
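To make the preconditioning step concrete, here is a minimal self-contained sketch, ours rather than the paper's: a textbook preconditioned CG in which every iteration solves one auxiliary system with the preconditioning matrix, applied to a hypothetical 1D finite element problem with the discretized Laplacian as preconditioner. All names and data are made up for illustration.

```python
import numpy as np

def stiffness_1d(coef):
    """1D FE stiffness matrix for -(c u')' on (0,1), homogeneous Dirichlet
    BC, uniform mesh, one piecewise-constant coefficient coef[e] per element."""
    h = 1.0 / len(coef)
    main = (coef[:-1] + coef[1:]) / h      # diagonal entries (interior nodes)
    off = -coef[1:-1] / h                  # couplings between neighbors
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

def pcg(A, B, b, tol=1e-8, maxit=1000):
    """Preconditioned CG for A x = b; each step solves one system with B.
    Dense linear algebra for clarity only; returns (x, iteration count)."""
    x = np.zeros_like(b)
    r = b.copy()
    z = np.linalg.solve(B, r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        z = np.linalg.solve(B, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

rng = np.random.default_rng(0)
n_el = 64
k = rng.uniform(1.0, 10.0, n_el)       # hypothetical diffusion data
A = stiffness_1d(k)                    # original stiffness matrix
B = stiffness_1d(np.ones(n_el))        # discretized Laplacian preconditioner
b = rng.standard_normal(n_el - 1)

x_pcg, it_pcg = pcg(A, B, b)                   # preconditioned iteration
x_cg, it_cg = pcg(A, np.eye(n_el - 1), b)      # plain CG (B = identity)
print(it_pcg, it_cg)
```

The preconditioned iteration count is governed by the spread of the data rather than by the mesh size, so it is much smaller than the plain CG count.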

## 4. Bounds on eigenvalues of preconditioned problems

The main results of the paper are introduced in this section. Instead of presenting our results for a general second-order elliptic partial differential equation with tensor data and a vector valued unknown function, we first present our theory for the (scalar) diffusion equation with tensor data in full detail. Then we apply the same approach to the elasticity equation. The section is concluded by some general remarks, mainly on the relationship between our results and the recent results from [5].

### 4.1. Diffusion equation

In this part, we introduce the lower and upper bounds on the eigenvalues of the preconditioned matrix for any uniformly positive definite measurable data. The boundary conditions of the original and preconditioning problems may differ at most in the coefficient function of the Robin condition, i.e., a different coefficient function can be used in the Robin boundary condition of the preconditioning problem. We assume, however, that there exist constants such that

The number of FE basis functions determines the dimension of the matrices. We now build two sequences of positive real numbers, one of lower and one of upper bounds. Let us first set

if no edge of lies in , and

if at least one edge of the patch lies in the Robin part of the boundary. If the material data of both the original and the preconditioning problem are element-wise constant and if the functions in the Robin conditions are constant on every edge (of any element) lying in the Robin boundary, the computation of the bounds reduces to calculating the extreme eigenvalues of small matrices on all individual elements and possibly comparing them with the boundary contributions on some of the attached edges. For every FE basis function, supported on its patch, let us set

(4.1) |

Thus, in the above sense, these are the smallest and the largest eigenvalues on the patch, or the extremes along the parts of the boundary of the patch lying in the Robin boundary. After inspecting all patches, we sort the two series in (4.1) non-decreasingly. Thus we obtain two bijections

such that

(4.2) |

Note that we could define and compute the sorted bounds directly, without introducing the auxiliary patch-wise constants. Dealing with these constants is, however, algorithmically preferable, because it allows us to avoid repeated evaluations of the eigenvalues on every element.
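For a concrete, hypothetical illustration of this remark, assume 1D piecewise-constant scalar data: a diffusion coefficient k and a preconditioner coefficient c per element. The element-wise extreme eigenvalues then reduce to the ratios k/c, the patch of an interior node consists of its two adjacent elements, and one sort per series yields the bound sequences. The following Python sketch (ours, with made-up data) computes them:

```python
import numpy as np

rng = np.random.default_rng(1)
n_el = 9                               # elements; n_el - 1 interior nodes
k = rng.uniform(1.0, 10.0, n_el)       # diffusion coefficient per element
c = rng.uniform(0.5, 2.0, n_el)        # preconditioner coefficient per element

ratio = k / c                          # element-wise extreme eigenvalue
# the patch of interior node i consists of elements i and i + 1
lower = np.minimum(ratio[:-1], ratio[1:])   # smallest value on each patch
upper = np.maximum(ratio[:-1], ratio[1:])   # largest value on each patch
lamL = np.sort(lower)                  # sorted lower bounds
lamU = np.sort(upper)                  # sorted upper bounds
print(lamL)
print(lamU)
```

Each element's ratio is computed once and reused for both patches containing that element, which is exactly the economy the remark refers to.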

Next we prove an auxiliary lemma. Let denote the spectrum of the matrix .

###### Lemma 4.1.

Let be symmetric and positive definite for all . Let there exist constants and such that

(4.3) |

and

Then for we get

(4.4) |

and

(4.5) | |||

###### Proof.

Now we introduce the first part of the main results of this paper.

###### Theorem 4.2.

Let us assume that the measure of the Dirichlet part of the boundary is positive. The lower and upper bounds on the eigenvalues of the preconditioned matrix are given by (4.2), i.e.,

(4.6) |

###### Proof.

Due to the positive measure of the Dirichlet boundary, the matrices are positive definite. We only prove the lower bounds of (4.6); the upper bounds can be proved analogously. Due to the Courant–Fischer min-max theorem, see, e.g., [7, Theorem 8.1.2],

where denotes a subspace of . Then we have

where the inequality follows from Lemma 4.1. Indeed, using the definitions (4.1) and Lemma 4.1, we get

Then we proceed to

where the last inequality follows from Lemma 4.1, in which the relevant set now contains only the patches associated with the remaining FE basis functions,

and from

We can proceed further in the same manner to get all inequalities of (4.6). ∎
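Theorem 4.2 can be checked numerically in a simple hypothetical setting, ours rather than the paper's: 1D diffusion on (0,1), homogeneous Dirichlet boundary conditions, element-wise constant scalar data, linear elements, and the discretized Laplacian as preconditioner. The sketch assembles both stiffness matrices, computes the eigenvalues of the preconditioned matrix via a Cholesky-based symmetric reduction, and compares them with the sorted bounds:

```python
import numpy as np

def stiffness_1d(coef):
    """1D FE stiffness matrix for -(c u')' on (0,1), homogeneous Dirichlet
    BC, uniform mesh, piecewise-constant coefficient coef[e] per element."""
    h = 1.0 / len(coef)
    main = (coef[:-1] + coef[1:]) / h
    off = -coef[1:-1] / h
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

rng = np.random.default_rng(2)
n_el = 10
k = rng.uniform(0.5, 4.0, n_el)        # element-wise constant diffusion data
A = stiffness_1d(k)                    # original stiffness matrix
B = stiffness_1d(np.ones(n_el))        # discretized Laplacian

# eigenvalues of B^{-1} A via the symmetric reduction L^{-1} A L^{-T}
Lc = np.linalg.cholesky(B)
M = np.linalg.solve(Lc, np.linalg.solve(Lc, A).T).T
eig = np.sort(np.linalg.eigvalsh(M))

# sorted bounds: the patch of interior node i consists of elements i, i + 1,
# and the preconditioner coefficient is identically one
lamL = np.sort(np.minimum(k[:-1], k[1:]))
lamU = np.sort(np.maximum(k[:-1], k[1:]))
print(np.all(lamL <= eig + 1e-12), np.all(eig <= lamU + 1e-12))
```

Every sorted eigenvalue is enclosed between the corresponding sorted lower and upper bound, as the theorem asserts.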

In Theorem 4.2, we consider positive definite problems with homogeneous Dirichlet and/or general Robin boundary conditions. The Neumann boundary condition is a special case of the Robin boundary condition with a zero coefficient function. In practical implementations of non-homogeneous Dirichlet boundary conditions, the lifting function does not necessarily have to be employed. If the same non-homogeneous Dirichlet boundary conditions are considered for the original and preconditioning problems, the method of obtaining the lower and upper bounds (4.2) can be used unchanged. Our theory, however, does not cover settings where the original and preconditioning problems are considered under different non-homogeneous Dirichlet boundary conditions or different right-hand side functions in the Robin boundary conditions, or where the boundary setting of the preconditioning problem does not otherwise coincide with that of the original problem.

If periodic or Neumann boundary conditions are applied along the whole boundary and if they are the same for the original and preconditioning problems, then both matrices are singular; they share the smallest (zero) eigenvalue and the associated eigenvector. We can then use the same method again to get the bounds on all of the eigenvalues of the preconditioned matrix; however, we must omit the null space (which is the same for both matrices) from the respective formulas. To justify the method, we can proceed analogously to the proof of Theorem 4.2, where the vectors are now additionally required to be orthogonal to the null space. Then

We can proceed further, analogously to the proof of Theorem 4.2,

In this way we get lower bounds on the non-zero eigenvalues of the preconditioned matrix, where both matrices are now considered restricted to the subspace orthogonal to the null space. Analogously, we get the upper bounds; thus finally,
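A hypothetical 1D illustration of the singular case, ours rather than the paper's: with pure Neumann conditions at both ends, both matrices share the constant null vector. Deflating it, i.e., restricting both matrices to the orthogonal complement of the constants, leaves a positive definite pencil; the sketch below verifies that its eigenvalues stay within the range of the coefficient.

```python
import numpy as np

def neumann_stiffness_1d(coef):
    """1D FE stiffness for -(c u')' on (0,1) with pure Neumann BC; all
    nodes are kept, so the matrix is singular (null space = constants)."""
    n_el = len(coef)
    h = 1.0 / n_el
    A = np.zeros((n_el + 1, n_el + 1))
    for e in range(n_el):
        A[e:e + 2, e:e + 2] += (coef[e] / h) * np.array([[1.0, -1.0],
                                                         [-1.0, 1.0]])
    return A

rng = np.random.default_rng(3)
n_el = 8
k = rng.uniform(1.0, 5.0, n_el)        # hypothetical element-wise data
A = neumann_stiffness_1d(k)
B = neumann_stiffness_1d(np.ones(n_el))

# orthonormal basis Z of the complement of the shared null space (constants)
n = n_el + 1
e0 = np.ones(n) / np.sqrt(n)
U, s, _ = np.linalg.svd(np.eye(n) - np.outer(e0, e0))
Z = U[:, :n - 1]                       # columns orthogonal to the constants

Am, Bm = Z.T @ A @ Z, Z.T @ B @ Z      # restricted pencil, positive definite
Lc = np.linalg.cholesky(Bm)
M = np.linalg.solve(Lc, np.linalg.solve(Lc, Am).T).T
eig = np.sort(np.linalg.eigvalsh(M))
print(eig.min(), eig.max())
```

On the complement of the constants, the generalized Rayleigh quotient is a weighted average of the element coefficients, so all deflated eigenvalues lie between their minimum and maximum.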

Let us now apply our method to some examples.

###### Example 4.3.

Assume , , ,

and two preconditioning operators, a simple and a more sophisticated one, with

respectively. Let us consider one of the following settings:

(a) uniform grid with piece-wise bilinear FE functions, or , ; see Figure 1;

(b) uniform grid with piece-wise bilinear FE functions, periodic boundary conditions, ; see Figure 2;

(c) nonuniform grid and triangular elements with piece-wise linear FE functions, , ; see Figure 3.

The numerical experiments illustrate that the bounds on the eigenvalues are guaranteed for different types of boundary conditions. We can also notice that, since the data of the more sophisticated preconditioning operator are point-wise closer to the original data, the spectrum of the second preconditioned problem (together with its bounds) is closer to unity than the spectrum of the problem preconditioned using the simpler operator. Note also that refining the mesh does not, in general, lead to more accurate bounds. This is caused by the difference between the extreme eigenvalues of the data on individual elements; see also section 4.3.

The numbers of the CG steps needed to reduce the energy norm of the errors by the factor (starting with zero initial vectors) for setting (a) with in are 17 and 13 for and , respectively, for , and 20 and 15 for and , respectively, for .

### 4.2. Elasticity equation

In the elasticity problem, or in vector valued problems in general, the unknown function has multiple components, which are coupled within the equation. For the approximation of the scalar components, we use the same sets of FE basis functions, supported again inside the patches. Recall that, for the sake of simplicity, we consider homogeneous Dirichlet boundary conditions only.

###### Lemma 4.4.

Let , where if , and if . Let and be symmetric and positive definite for all . Let there exist constants such that

(4.7) |

Then for we get

(4.8) |

We now show how to obtain the guaranteed bounds on all individual eigenvalues of the preconditioned elasticity problem for any positive definite material data. Since the same FE basis functions are used to approximate each component of the unknown, the total number of unknowns is the number of basis functions multiplied by the number of components. We now build two sequences to bound the eigenvalues of the preconditioned matrix. In contrast to subsection 4.1, for the sake of brevity, we do not define auxiliary element-wise constants, but directly set

Similarly to the case of the diffusion equation in subsection 4.1, we sort these two series non-decreasingly, and thus get bijections

such that

Moreover, we double (in two dimensions) or triple (in three dimensions) all items in the two series of lower and upper bounds and get two new, correspondingly longer series
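The duplication of the series can be sketched as follows (a trivial illustration with made-up numbers): each bound is repeated once per displacement component and the series re-sorted.

```python
import numpy as np

d = 2                                   # space dimension: double for d = 2
lamL = np.array([0.8, 1.3, 2.1])        # hypothetical sorted lower bounds
lamU = np.array([1.1, 2.0, 3.5])        # hypothetical sorted upper bounds

lamL_d = np.sort(np.repeat(lamL, d))    # d-times longer lower-bound series
lamU_d = np.sort(np.repeat(lamU, d))
print(lamL_d)                           # [0.8 0.8 1.3 1.3 2.1 2.1]
```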
