A note on using the mass matrix as a preconditioner for the Poisson equation

11/04/2021
by Chen Greif et al.
The University of British Columbia

We show that the mass matrix derived from finite elements can be effectively used as a preconditioner for iteratively solving the linear system arising from finite-difference discretization of the Poisson equation with the conjugate gradient method. We derive analytically the condition number of the preconditioned operator. Theoretical analysis shows that the ratio of the condition number of the Laplacian to that of the preconditioned operator is 8/3 in one dimension, 9/2 in two dimensions, and 2^9/3^4 ≈ 6.3 in three dimensions. It follows that the expected iteration count for achieving a fixed reduction of the residual norm is less than half that of unpreconditioned CG in 2D and 3D. The scheme is easy to implement, and numerical experiments show its efficiency.


1 Introduction

Consider a standard finite difference discretization of the Poisson equation in one, two and three dimensions:

−Δu = f    (1)

on a simple domain Ω, e.g., the unit interval, square or cube, respectively, and subject to simple boundary conditions such as Dirichlet. Suppose we discretize the problem on a uniform mesh whose size is h, where n is the number of meshpoints in a single direction of the domain.

The computational stencil for the Laplacian is given in 1D by

A = (1/h^2) [ -1   2   -1 ],    (2)

in 2D by the five-point stencil

A = (1/h^2) [  0  -1   0
              -1   4  -1
               0  -1   0 ],    (3)

and in 3D by the analogous seven-point stencil, with the value 6/h^2 at the center point and -1/h^2 at each of its six neighbors.    (4)

For large problems in two and three dimensions we are interested in iterative methods, and specifically, in solving the resulting linear system by the Conjugate Gradient (CG) method.
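To make the setting concrete, the following short sketch (ours, in Python with SciPy; it is not part of the original note) assembles the 1D and 2D Laplacians of (2)-(3) on a uniform mesh with n interior points per direction and homogeneous Dirichlet boundary conditions, assuming h = 1/(n+1):

    # Assemble the finite-difference Laplacians of (2)-(3); illustrative sketch only.
    import scipy.sparse as sp

    def laplacian_1d(n):
        """1D stencil (1/h^2)[-1 2 -1] with homogeneous Dirichlet boundary conditions."""
        h = 1.0 / (n + 1)
        return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2

    def laplacian_2d(n):
        """Five-point stencil (3) via Kronecker products: A_2D = A_1D (x) I + I (x) A_1D."""
        A1 = laplacian_1d(n)
        I = sp.identity(n, format="csr")
        return (sp.kron(A1, I) + sp.kron(I, A1)).tocsr()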

In our recent work [CH2021addVanka], we showed that in one dimension and in the context of multigrid, an element-wise Vanka smoother is equivalent to the scaled mass operator obtained from the linear finite element method, and in two dimensions, the element-wise Vanka smoother is equivalent to the scaled mass operator discretized by the bilinear finite element method plus a scaled identity operator. While the context of that work is different, this has motivated us to ask whether the mass matrix obtained from the finite element method can be utilized as a preconditioner for the Laplacian. Here, we mean that preconditioning would amount to multiplying by the mass matrix; no inversion is involved. Such a possibility seems attractive given the ease of multiplying the Laplacian by the mass matrix, which is sparse and well conditioned. In this short note we provide analytical and numerical evidence that using the mass matrix in this manner at least doubles the convergence speed of CG in 2D and 3D at a modest computational cost.
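As a minimal illustration of "preconditioning = one multiplication by the mass matrix" (our sketch, 1D for brevity; the h-scaling of M below is our choice and does not affect iteration counts), one can pass the mass matrix directly as the preconditioner to SciPy's CG, which then applies it to the residual via a matrix-vector product at each iteration:

    # Minimal sketch: mass matrix as a CG preconditioner in 1D (assumptions noted above).
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    n = 1023
    h = 1.0 / (n + 1)
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2   # Laplacian (2)
    M = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr") * (h / 6.0)  # linear-FE mass
    b = np.ones(n)

    for name, prec in (("unpreconditioned", None), ("mass-preconditioned", M)):
        its = [0]
        _, info = cg(A, b, M=prec, maxiter=10000,
                     callback=lambda xk: its.__setitem__(0, its[0] + 1))
        print(name, "iterations:", its[0])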

In Section 2 we provide analytical observations on the condition number and the spectral distribution of the Laplacian scaled by the mass matrix, in comparison with the Laplacian. In Section 3 we validate our analysis experimentally. Brief concluding remarks are given in Section 4.

2 Convergence analysis

The stencil of the mass matrix in 1D using linear finite elements is given by

M = (h/6) [ 1   4   1 ].

In the context of solving the Poisson equation (1), we consider the scaled mass matrix M as a preconditioner for the Laplacian operator A defined in (2), (3) and (4); in two and three dimensions the mass matrix is the tensor (Kronecker) product of the corresponding 1D mass matrices.
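For concreteness, under the standard tensor-product construction on a uniform grid (our addition; the note's scaling of the preconditioner may differ by a constant factor), the 2D bilinear mass stencil is

    M = (h^2/36) [ 1   4   1
                   4  16   4
                   1   4   1 ],

and the 3D trilinear stencil is the corresponding triple tensor product.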

A well-known convergence bound for CG is determined by the condition number of the coefficient matrix, and we will study the condition numbers of A and of the preconditioned operator MA. We hasten to add that formally one would need to consider a symmetric positive definite similarity transformation of the latter, such as M^{1/2} A M^{1/2}, but the spectrum and the condition number are not affected by that transformation.

The following results are straightforward and/or well known, and are provided without proof.

Lemma 2.1.

[MR1807961]

The eigenvalues of the Laplacian A are given by

λ_k = (4/h^2) ( sin^2(k_1 π h/2) + ⋯ + sin^2(k_d π h/2) ),    k_j = 1, …, n,

where d is the spatial dimension and h = 1/(n+1).

Lemma 2.2.

The eigenvalues of the mass matrix M are given by

μ_k = (h/3)^d (2 + cos(k_1 π h)) ⋯ (2 + cos(k_d π h)),    k_j = 1, …, n.

Lemma 2.3.

The condition number of A, κ(A), is given by

κ(A) = cos^2(πh/2) / sin^2(πh/2) = cot^2(πh/2).

From Lemma 2.3, we see that as h → 0 the condition number goes to infinity, which means that the iteration count of CG without a preconditioner will increase dramatically.
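As a quick check of this formula (our arithmetic, with h = 1/(n+1)):

    κ(A) = cot^2(πh/2) ≈ 32.16 for n = 8,   ≈ 116.46 for n = 16,   ≈ 440.69 for n = 32,

which are precisely the values of κ(A) that reappear in Table 1 below.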

We now consider the preconditioned operator MA and analyze its eigenvalues.

Theorem 2.1.

The eigenvalues of MA are given by

(5)

Furthermore, the condition number of MA is as follows:
In 1D,

(6)

where ⌊·⌋ stands for the integer part of a number.

In 2D

(7)

In 3D

(8)

where .

Proof.

We use local Fourier analysis [MR1807961] to compute the eigenvalues of MA. When M and A are obtained from a periodic operator, the eigenvalues of MA are the products of the eigenvalues of M and A.

When d = 1, from (5), we have

Let us consider the resulting symbol as a function of a single frequency. Note that the maximum is achieved at an interior frequency and the minimum is achieved at the lowest frequency. Thus,

which leads to (6).

When d = 2, from (5), we have

Let us consider the corresponding symbol as a function of the two frequencies θ_1 and θ_2. We compute its partial derivatives with respect to θ_1 and θ_2, given by

Setting the derivatives to zero gives an interior critical point, which is readily seen to be a local maximum. Next, we consider the boundary of the frequency domain and determine the extreme values there. Thus,

which leads to (7).

When d = 3, from (5), we have

Let us consider the corresponding symbol as a function of the three frequencies θ_1, θ_2 and θ_3. We compute its partial derivatives with respect to θ_1, θ_2 and θ_3, given by

Setting the derivatives to zero gives an interior critical point, which is a local maximum.

Next, we consider the boundary of the frequency domain and, due to the symmetry of the symbol, we only need to consider two cases. In the first case, the analysis from the proof of the case d = 2 gives the maximum and the minimum of the symbol on that part of the boundary. In the second case, the maximum and the minimum can easily be computed directly. Combining the interior and boundary values gives the overall maximum and minimum of the symbol. This means that

which yields (8). ∎
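As a worked illustration of the 1D computation (our addition; it assumes the Fourier symbol of the Laplacian is (4/h^2) sin^2(θ/2) and that of the scaled mass operator is, up to a constant factor, (2 + cos θ)/3):

    (4/h^2) sin^2(θ/2) · (2 + cos θ)/3 = (4/(3h^2)) t (3 − 2t),    t = sin^2(θ/2) ∈ (0, 1].

The function t(3 − 2t) attains its maximum 9/8 at t = 3/4, giving a largest eigenvalue of approximately 3/(2h^2), while the smallest eigenvalue, attained at the lowest frequency θ = πh, is approximately π^2. Hence κ(MA) ≈ 3/(2π^2 h^2), compared with κ(A) ≈ 4/(π^2 h^2), and the ratio tends to 8/3 as h → 0, in agreement with Theorem 2.2.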

Next, we describe the relationship between the two condition numbers, κ(A) and κ(MA).

Theorem 2.2.

Define the ratio of the two condition numbers, κ(A)/κ(MA). The ratio satisfies the following:
In 1D, the ratio tends to 8/3 as h → 0.

In 2D, the ratio tends to 9/2 as h → 0.

In 3D, the ratio tends to 2^9/3^4 ≈ 6.3 as h → 0.

From Theorem 2.2 it is interesting to note that the gains in terms of condition number ratios grow with the dimension; this suggests that our approach is particularly effective in 3D.
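As a sanity check (ours, not part of the note), the following sketch computes κ(A) and κ(MA) directly from the eigenvalue formulas above, assuming the Dirichlet frequencies θ_k = kπh with h = 1/(n+1), and compares their ratio with the limits 8/3, 9/2 and 2^9/3^4:

    # Compute cond(A)/cond(MA) from the eigenvalue products and compare with the limits.
    # Assumes h = 1/(n+1) and the 1D eigenvalues of Lemmas 2.1-2.2 (up to scaling).
    import itertools
    import numpy as np

    def cond_ratio(d, n):
        h = 1.0 / (n + 1)
        theta = np.pi * h * np.arange(1, n + 1)
        a1 = (4.0 / h**2) * np.sin(theta / 2.0) ** 2   # 1D Laplacian eigenvalues
        m1 = (2.0 + np.cos(theta)) / 3.0               # 1D scaled-mass eigenvalues
        lam_A, lam_MA = [], []
        for idx in itertools.product(range(n), repeat=d):
            s = sum(a1[i] for i in idx)                # eigenvalue of A
            p = np.prod([m1[i] for i in idx])          # eigenvalue of M (up to scaling)
            lam_A.append(s)
            lam_MA.append(s * p)                       # eigenvalue of M A
        return (max(lam_A) / min(lam_A)) / (max(lam_MA) / min(lam_MA))

    for d, limit in ((1, 8 / 3), (2, 9 / 2), (3, 2**9 / 3**4)):
        print(f"d={d}: ratio(n=32) = {cond_ratio(d, 32):.2f}, limit = {limit:.2f}")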

It is well known that the convergence of CG satisfies the bound [saad2003iterative]

‖x − x_k‖_A / ‖x − x_0‖_A ≤ 2 ( (√κ − 1)/(√κ + 1) )^k,

where κ is the condition number of the coefficient matrix. Requiring the right-hand side to be at most a tolerance ε gives

k ≈ (√κ / 2) ln(2/ε).

It follows that to achieve the same convergence tolerance ε, the ratio of the iteration count of CG without a preconditioner to that of CG with the preconditioner is approximately

√( κ(A) / κ(MA) ).    (9)
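For illustration (our arithmetic), evaluating (9) with the limiting ratios of Theorem 2.2 gives

    √(8/3) ≈ 1.63 in 1D,    √(9/2) ≈ 2.12 in 2D,    √(2^9/3^4) ≈ 2.51 in 3D;

the 2D and 3D values reappear as the 'th-itn-ratio' column of Table 2.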

3 Numerical experiments

To demonstrate the efficiency of the mass matrix as a preconditioner for the Laplacian, we consider the Poisson equation in two and three dimensions on the unit square and cube, respectively, subject to homogeneous Dirichlet boundary conditions. We discretize it on a uniform mesh, as briefly described in Section 1. We run CG with and without the preconditioner and stop the iteration when the residual norm falls below a fixed tolerance.
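A minimal version of this experiment (our sketch; the right-hand side, stopping tolerance, and the h-independent scaling of the mass matrix are our choices, so the counts need not reproduce Table 2 exactly) can be set up with Kronecker-product assembly and a hand-written preconditioned CG in which preconditioning is a single multiplication by M per iteration:

    # Sketch: CG for the 2D/3D Poisson problem, with and without the mass matrix
    # as a preconditioner (preconditioning = one multiplication by M per iteration).
    import numpy as np
    import scipy.sparse as sp

    def poisson_and_mass(n, d):
        """d-dimensional FD Laplacian and tensor-product FE mass matrix (h-scaling dropped)."""
        h = 1.0 / (n + 1)
        A1 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
        M1 = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr") / 6.0
        I1 = sp.identity(n, format="csr")
        A, M = A1, M1
        for _ in range(d - 1):
            A = sp.kron(A, I1) + sp.kron(sp.identity(A.shape[0], format="csr"), A1)
            M = sp.kron(M, M1)
        return A.tocsr(), M.tocsr()

    def pcg(A, b, M=None, tol=1e-8, maxiter=5000):
        """Preconditioned CG; the preconditioner is applied as a plain matrix-vector product."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M @ r if M is not None else r
        p = z.copy()
        rz = r @ z
        nb = np.linalg.norm(b)
        for k in range(1, maxiter + 1):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            if np.linalg.norm(r) < tol * nb:
                return x, k
            z = M @ r if M is not None else r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, maxiter

    for d, n in ((2, 64), (3, 32)):
        A, M = poisson_and_mass(n, d)
        b = np.ones(A.shape[0])
        _, it_plain = pcg(A, b)
        _, it_mass = pcg(A, b, M=M)
        print(f"d={d}, n={n}: unpreconditioned {it_plain}, mass-preconditioned {it_mass}")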

In Figure 1 we illustrate the effect that preconditioning has on the eigenvalues of the matrix. It is evident that most of the eigenvalues of the preconditioned matrix have values relatively close to 1, which explains the effectiveness of the preconditioner.

Figure 1: Eigenvalues of the Laplacian vs. the product of the mass matrix by the Laplacian. This is a 2D problem with , i.e., the matrix is of dimensions .

In Table 1, we numerically compute the ratio κ(A)/κ(MA) for different values of the mesh parameter n and the dimension d. The results in Table 1 very closely match the analytical results in Theorem 2.2. For fixed d, as n increases the ratio increases, but it remains bounded.

           n = 8                      n = 16                      n = 32
d = 1      32.1634/12.6914   2.5      116.4612/44.2414   2.6      440.6886/165.8836   2.7
d = 2      32.1634/7.6173    4.2      116.4612/26.3451   4.4      440.6886/98.3943    4.5
d = 3      32.1634/5.5393    5.8      116.4612/18.8900   6.2      440.6886/70.1771    6.3

Table 1: The two condition numbers (written as κ(A)/κ(MA)) and their ratio, for d = 1, 2, 3 and n = 8, 16, 32

We now show convergence results in 2D and 3D. In Table 2 we summarize our findings.

Type   n     mtx-size    itn-unprec   itn-prec   th-itn-ratio   itn-ratio
2D     32    1,024       62           30         2.12           2.07
       64    4,096       122          58                        2.10
       128   16,384      231          110                       2.10
       256   65,536      454          215                       2.11
3D     32    32,768      81           33         2.51           2.45
       64    262,144     158          63                        2.51
       96    884,736     225          90                        2.50
       128   2,097,152   296          118                       2.51
Table 2: Iteration counts for 2D and 3D experiments. The column under 'mtx-size' gives the sizes of the linear systems considered (number of degrees of freedom). The term 'th-itn-ratio' stands for 'theoretical iteration counts ratio' (see (9)), as explained in the text. The term 'itn-ratio' refers to the ratio between iteration counts for the unpreconditioned case ('itn-unprec') and iteration counts for the preconditioned case ('itn-prec').

The results in Table 2 are consistent with our theoretical findings. When d = 2, κ(A)/κ(MA) tends to 9/2 and √(9/2) ≈ 2.12. Thus, unpreconditioned CG is expected to take approximately 2.12 times as many iterations as preconditioned CG. When d = 3, κ(A)/κ(MA) tends to 2^9/3^4 and √(2^9/3^4) ≈ 2.51. Thus, unpreconditioned CG is expected to take approximately 2.51 times as many iterations as preconditioned CG. The table shows that these predictions are remarkably accurate in both 2D and 3D.

In Figure 2 we show the convergence history with and without the preconditioner for two mesh sizes in 2D, and in Figure 3 for two mesh sizes in 3D.

Figure 2: Convergence history of CG with and without a preconditioner in 2D; the two panels correspond to two values of n.
Figure 3: Convergence history of CG with and without a preconditioner in 3D; the two panels correspond to two values of n.

4 Concluding remarks

Our analytical results provide a remarkably accurate estimate of the condition number and iteration counts for CG. At a minimal cost that amounts to a matrix-vector product with the sparse and well-conditioned mass matrix, the convergence speed is at least doubled. The gains are stronger in the 3D case. The cost of the additional matrix-vector product per iteration is modest, especially in a parallel computing environment. Therefore, the overall computational gains are meaningful.

The proposed scheme is extremely simple and easy to implement, and it may make it possible to utilize the mass matrix in other problems in potentially useful ways.

References

[CH2021addVanka] C. Greif and Y. He. A closed-form multigrid smoothing factor for an additive Vanka-type smoother applied to the Poisson equation. 2021.

[MR1807961] U. Trottenberg, C. W. Oosterlee, and A. Schüller. Multigrid. Academic Press, London, 2001.

[saad2003iterative] Y. Saad. Iterative Methods for Sparse Linear Systems. Second edition, SIAM, Philadelphia, 2003.