A new block diagonal preconditioner for a class of 3×3 block saddle point problems

09/23/2021
by   Maryam Abdolmaleki, et al.
University of Guilan

We study the performance of a new block preconditioner for a class of 3×3 block saddle point problems which arise from finite element methods for solving the time-dependent Maxwell equations and some other practical problems. We also estimate the lower and upper bounds of the eigenvalues of the preconditioned matrix. Finally, we apply the new preconditioner to accelerate the convergence of the GMRES method; numerical experiments show its effectiveness.


1 Introduction

Consider the following class of block saddle point problems:

(1.1)
\[
\mathcal{A}u \equiv \begin{bmatrix} A & B^T & 0 \\ B & 0 & C^T \\ 0 & C & 0 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} f \\ g \\ h \end{bmatrix} \equiv b,
\]

where $A \in \mathbb{R}^{n\times n}$ is a symmetric positive definite (SPD) matrix, and $B \in \mathbb{R}^{m\times n}$ and $C \in \mathbb{R}^{l\times m}$ have full row rank. In addition, $f$, $g$ and $h$ are given vectors, and $u = [x; y; z]$

is the unknown vector which is to be determined. Furthermore, we presume that the matrices $A$, $B$ and $C$ are large and sparse. It is not difficult to check that under the above conditions the coefficient matrix $\mathcal{A}$ is nonsingular, and as a result system (1.1) has a unique solution [27]. In this work, we focus on preconditioned Krylov subspace methods, especially the preconditioned GMRES method (see [23, 24]).
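To fix ideas, the following small sketch (not taken from the paper; the block sizes and random blocks are our own illustrative assumptions) assembles a matrix with the block structure of (1.1) and checks that it is nonsingular under the stated conditions.

```python
# Illustrative sketch: assemble the 3x3 block saddle point matrix of (1.1)
# from small random blocks and confirm nonsingularity. Sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, m, l = 8, 5, 3                        # block sizes with n >= m >= l

G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)              # SPD block
B = rng.standard_normal((m, n))          # full row rank (generically)
C = rng.standard_normal((l, m))          # full row rank (generically)

K = np.block([
    [A,                B.T,              np.zeros((n, l))],
    [B,                np.zeros((m, m)), C.T             ],
    [np.zeros((l, n)), C,                np.zeros((l, l))],
])

# Under the stated assumptions K is nonsingular, so (1.1) is uniquely solvable.
print(np.linalg.matrix_rank(K) == n + m + l)   # expected: True
```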

Linear systems of the form (1.1) appear in a variety of scientific and engineering problems, for instance, fully discrete finite element methods for solving the time-dependent Maxwell equations with discontinuous coefficients [1, 8, 13], quadratic programming problems with equality constraints [15], Picard iteration schemes for the variational formulation of the stationary incompressible magnetohydrodynamics system [21], and least squares problems [28].

Linear systems of the block form

(1.2)
\[
\begin{bmatrix} M & E^T \\ E & -N \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix},
\]

known as traditional saddle point problems, have been studied extensively for decades; here $M$ and $N$ are positive definite and positive semi-definite matrices, respectively, and $E$ has full row rank. Constructing preconditioners to improve the convergence speed of Krylov subspace methods for solving (1.2) has produced a considerable amount of literature, e.g., shift-splitting preconditioners [5, 11, 12, 29], block triangular preconditioners [10, 14, 20, 25], inexact constraint preconditioners [2, 6, 30], and so on. Obviously, the block linear system (1.1) can be seen as a special case of the traditional form (1.2) via the following partitioning strategies,

(1.3)
\[
\mathcal{A} = \left[\begin{array}{cc|c} A & B^T & 0 \\ B & 0 & C^T \\ \hline 0 & C & 0 \end{array}\right]
\]

or

(1.4)
\[
\mathcal{A} = \left[\begin{array}{c|cc} A & B^T & 0 \\ \hline B & 0 & C^T \\ 0 & C & 0 \end{array}\right].
\]

However, most of the methods developed for the latter matrices cannot be applied directly to (1.1), because the sub-matrices in (1.3) and (1.4) have properties different from those of the traditional form (1.2). Indeed, the leading block in (1.3) is symmetric indefinite and is itself a standard saddle point matrix; thus the linear system with the coefficient matrix partitioned as in (1.3) can be considered a double saddle point problem, studied recently in [3, 4]. In addition, the (2,2) block in (1.4) is a symmetric matrix and the (2,1) block $[B; 0]$ is rank deficient. Therefore, it is of great interest to find new efficient preconditioners for the block saddle point problem (1.1).
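Continuing the first sketch above (reusing K, A, n and m from it), one can verify numerically the claim that the leading block of partitioning (1.3) is symmetric indefinite:

```python
# The leading block of partitioning (1.3) is the 2x2 saddle point matrix
# [A, B^T; B, 0], which is symmetric indefinite (it has eigenvalues of
# both signs when A is SPD and B has full row rank).
import numpy as np

lead = K[:n + m, :n + m]                 # leading block of (1.3)
eigs = np.linalg.eigvalsh(lead)
print(eigs.min() < 0 < eigs.max())       # expected: True (indefinite)
```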

Several preconditioners have been studied to accelerate the convergence rate of Krylov subspace methods for solving this class of systems; see, for example, [9, 16, 17]. Recently, Huang and Ma [18] proposed two block diagonal (BD) preconditioners,

(1.5)

for solving (1.1), in which the diagonal blocks are SPD matrices (or SPD approximations of the corresponding exact blocks). They also derived all the eigenpairs of the corresponding preconditioned matrices. Subsequently, Xie and Li [27] proposed three new preconditioners for solving the linear system (1.1), which take the following forms:

(1.6)

They analyzed the spectral properties of the corresponding preconditioned matrices and showed that the proposed preconditioners significantly accelerate the convergence rate of the GMRES method. However, when the inner systems are solved inexactly, the elapsed CPU time increases drastically and the computed solutions are often of unacceptable accuracy.

Here, we consider the following equivalent form of (1.1):

(1.7)
\[
\hat{\mathcal{A}}u \equiv \begin{bmatrix} A & B^T & 0 \\ -B & 0 & -C^T \\ 0 & C & 0 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} f \\ -g \\ h \end{bmatrix} \equiv \hat{b},
\]

obtained by multiplying the second block row of (1.1) by $-1$. Although the coefficient matrix $\hat{\mathcal{A}}$ of the system (1.7) is not symmetric, it has some desirable properties. For instance, $\hat{\mathcal{A}}$ is positive semi-definite, i.e., its symmetric part $\mathcal{H} = \frac{1}{2}(\hat{\mathcal{A}} + \hat{\mathcal{A}}^T) = \operatorname{diag}(A, 0, 0)$ is symmetric positive semi-definite. This property is significant for the GMRES method; in fact, the restarted version GMRES($m$) then converges for all $m \geq 1$. It is noticeable that setting up the exact preconditioners mentioned above is very time-consuming, and the inexact preconditioners need SPD approximations of the corresponding Schur complement matrices. In this paper, we establish a new block preconditioner for solving the linear system (1.7) which is easy to implement and is computationally much more efficient than the preconditioners studied recently.
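The transformation to (1.7) and the semi-definiteness of its symmetric part are easy to check numerically; the sketch below (again reusing K, A, n and m from the first snippet) is illustrative only.

```python
# Form the equivalent nonsymmetric matrix of (1.7) by negating the second
# block row of K, and check that the symmetric part
# H = (Ahat + Ahat^T)/2 = diag(A, 0, 0) is positive semi-definite.
import numpy as np

Ahat = K.copy()
Ahat[n:n + m, :] *= -1.0                 # second block row times -1

H = 0.5 * (Ahat + Ahat.T)
print(np.allclose(H[:n, :n], A), np.allclose(H[n:, n:], 0.0))
print(np.linalg.eigvalsh(H).min() >= -1e-10)   # PSD up to round-off
```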

It is noteworthy that few analytical results on spectral bounds are available for block matrices of this $3\times 3$ form. In fact, contrary to the $2\times 2$ block case, the eigen-system of a $3\times 3$ block matrix leads to a cubic equation, so estimating bounds for the eigenvalues brings some difficulties. Consider the following monic polynomial of degree $n$:

(1.8)
\[
p(z) = z^n + a_{n-1}z^{n-1} + \cdots + a_1 z + a_0, \qquad a_0 \neq 0.
\]

Approximating the roots of $p$ through simple operations on its coefficients has produced a wealth of studies; see, for example, the comprehensive surveys [22, 26] and the references therein for further details. We will derive spectral bounds for the corresponding preconditioned system by using one of the classical and sharp bounds, which gives simple lower and upper bounds on the absolute values of the roots of $p$.

The remainder of this paper is organized as follows. In Section 2, the new preconditioner is presented and the clustering properties of the corresponding preconditioned system are discussed. Numerical results are given in Section 3 to demonstrate the effectiveness of the new preconditioner. The paper ends with some concluding remarks in Section 4.

We end this section by introducing some notation that will be used in the subsequent sections. The symbol $x^*$ denotes the conjugate transpose of a vector $x$. For any square matrix $W$ with real eigenvalues, the minimum and maximum eigenvalues of $W$ are denoted by $\lambda_{\min}(W)$ and $\lambda_{\max}(W)$, respectively. The norm $\|\cdot\|$ stands for the Euclidean norm. Moreover, for vectors $x$, $y$ and $z$, we use the Matlab notation $[x; y; z]$ to denote the vector $(x^T, y^T, z^T)^T$.

2 The new block diagonal preconditioner

We propose the following block diagonal preconditioner for solving the linear system (1.7):

(2.1)

where the parameters involved are positive constants whose choice is discussed below. The main advantage of the preconditioner over the preconditioners mentioned in Section 1 is that it is free of Schur complement matrices and easy to implement. Concerning the clustering properties of the eigenvalues of the preconditioned matrix, we have the following theorems.

Theorem 2.1.

[19] Let $p(z) = z^n + a_{n-1}z^{n-1} + \cdots + a_1 z + a_0$, with $a_0 \neq 0$, be a monic polynomial with complex coefficients, and let $\lambda$ be any root of $p$. Then $\lambda$ satisfies the following inequalities:

  • Cauchy’s lower and upper bounds,
\[
\frac{|a_0|}{|a_0| + \max\{1, |a_1|, \ldots, |a_{n-1}|\}} \;\leq\; |\lambda| \;\leq\; 1 + \max\{|a_0|, |a_1|, \ldots, |a_{n-1}|\};
\]

  • Montel’s lower and upper bounds;

  • Carmichael-Mason’s lower and upper bounds;

  • Frobenius’ lower and upper bounds.

It is noted that Cauchy’s bounds are essentially the sharpest ones in Theorem 2.1 and have been used for testing the sharpness of newly proposed bounds. Hence, we will utilize these bounds to locate the eigenvalues of the corresponding preconditioned matrix.
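As a quick illustration of how Cauchy's bounds localize the roots of a monic cubic such as the one arising in (2.3), consider the following sketch; the coefficients are arbitrary test values of our own choosing, not those of (2.4).

```python
# Cauchy's root bounds for a monic cubic z^3 + a2 z^2 + a1 z + a0 (a0 != 0):
#   |a0| / (|a0| + max{1, |a1|, |a2|}) <= |root| <= 1 + max{|a0|, |a1|, |a2|}.
import numpy as np

a2, a1, a0 = -2.0, 0.75, 0.5             # arbitrary test coefficients

upper = 1.0 + max(abs(a0), abs(a1), abs(a2))
lower = abs(a0) / (abs(a0) + max(1.0, abs(a1), abs(a2)))

roots = np.roots([1.0, a2, a1, a0])      # coefficients, highest degree first
assert all(lower <= abs(r) <= upper for r in roots)
print(lower, sorted(abs(r) for r in roots), upper)
```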

Theorem 2.2.

Suppose that is SPD, and and have full row rank. Then the preconditioned matrix has an eigenvalue of algebraic multiplicity , i.e. ,

, and the corresponding eigenvectors are of the form

where is a basis for the null space of . Moreover, for the remaining eigenvalues we have the following statements:

  • If , ,

  • If , ,

where

for some and .

Proof.

Let be an eigenpair of . Then, we have

which can be rewritten as

(2.2a)
(2.2b)
(2.2c)

If , then from (2.2a) we obtain , which shows that . This, together with (2.2c), yields , which contradicts the fact that is an eigenvector. Next, we complete the proof by considering the following cases.

First, we consider the case that . Since and is nonsingular, we deduce that . Then (2.2a) and (2.2b) reduce to

respectively. This shows that and the corresponding eigenvectors are of the form with

Next, we consider the case that . Then from (2.2a), we know that . Noticing that both matrices and are nonsingular, it follows from (2.2a) and (2.2c) that

Substituting the preceding equalities into (2.2b) gives

By premultiplying the above equation by and , we obtain the following cubic equation:

(2.3)

where

(2.4)

To obtain the clustering properties of the eigenvalues around the point , we set . Substituting this in (2.3), we get the following cubic equation

(2.5)

and find upper and lower bounds for . From Theorem 2.1, we deduce that Cauchy’s lower and upper bounds for are

(2.6a)
(2.6b)

which completes the proof. ∎

As we see in Theorem 2.2, the lower and upper bounds of some of the eigenvalues of the preconditioned matrix depend on the parameters and . Using (2.4), the Schur decompositions of the matrices and , and straightforward computations, the following upper and lower bounds for and are obtained:

and

Hence, we conclude that

(2.7)

Now, we consider the following two cases:

Case I:  From (2.7), we deduce that all we need is to seek the parameters and such that

For the sake of simplicity, let

Therefore, we have the following inequality

It is clear that matrices and are both SPD under our assumptions. Hence, we infer that . Therefore, we can set

and derive that

Note that ; it then follows that . Therefore, the above inequality is easily deduced.

Case II:  In this case, it is enough to find and such that

Hence, we have

where

Similar to the argument in the previous case, we set

and finally conclude that

According to the above discussion, we can state the following theorem, which gives conditions on and under which or .

Theorem 2.3.

We have the following cases:

Case I: If

then, .

Case II: If

then, .

In each iteration of a Krylov subspace method for solving the preconditioned system, we need to compute vectors of the form $w = \mathcal{P}^{-1}r$. To do so, it is enough to solve the system $\mathcal{P}w = r$ for $w$. Since $\mathcal{P}$ is a block diagonal matrix, the solution of this system reduces to the solution of three subsystems whose coefficient matrices are all SPD. Summarizing the above notes, we can state Algorithm 1.

  1. Solve for .

  2. Solve for .

  3. Solve for .

Algorithm 1: Computation of $w = \mathcal{P}^{-1}r$

Since the coefficient matrices of the subsystems in Algorithm 1 are SPD, they can be solved either exactly using the Cholesky factorization or inexactly using the conjugate gradient (CG) method. It is noted that the shift matrices in steps 2 and 3 of Algorithm 1 increase the convergence speed of the CG method considerably.
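A minimal sketch of Algorithm 1 follows; the SPD blocks P1, P2, P3 stand for the three (here unspecified) coefficient matrices of the subsystems from (2.1), which is an assumption of the sketch. The Cholesky factorizations are deliberately computed once, up front, so that each Krylov iteration only performs triangular solves.

```python
# Sketch of Algorithm 1: apply P^{-1} for P = blkdiag(P1, P2, P3), all SPD.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def make_block_diag_prec(P1, P2, P3):
    """Return a function r -> w solving P w = r via Cholesky factorizations
    of the three SPD diagonal blocks (factored once, reused thereafter)."""
    n1, n2 = P1.shape[0], P2.shape[0]
    f1, f2, f3 = cho_factor(P1), cho_factor(P2), cho_factor(P3)

    def apply(r):
        w1 = cho_solve(f1, r[:n1])           # step 1: solve P1 w1 = r1
        w2 = cho_solve(f2, r[n1:n1 + n2])    # step 2: solve P2 w2 = r2
        w3 = cho_solve(f3, r[n1 + n2:])      # step 3: solve P3 w3 = r3
        return np.concatenate([w1, w2, w3])

    return apply
```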

3 Numerical experiments

In this section, we experimentally compare the effectiveness of our proposed preconditioner with the two preconditioners defined in Eqs. (1.5) and (1.6), respectively, for solving the saddle point linear system (1.7). All the numerical experiments were performed in double precision using Matlab on a laptop with an Intel Core i7 2.9 GHz CPU and 16GB of RAM.

We apply the preconditioners to accelerate the convergence of the GMRES method (flexible and full versions) [23, 24]. In the implementation of the preconditioners within the GMRES method, three subsystems with SPD coefficient matrices need to be solved. When the subsystems are solved inexactly, we apply the flexible GMRES (FGMRES) method to the preconditioned system; when they are solved exactly, we employ the full version of GMRES (Full-GMRES).

In the subsequently presented numerical results, the right-hand side vector is set to $\hat{b} = \hat{\mathcal{A}}e$, where $e$ is the vector of all ones. We use the zero vector as the initial guess for the GMRES method, and the iteration is stopped once the relative residual 2-norm satisfies

(3.1)
\[
\frac{\|\hat{b} - \hat{\mathcal{A}} x_k\|_2}{\|\hat{b}\|_2} \leq \mathrm{tol},
\]

where $x_k$ is the $k$th computed approximate solution. The maximum number of iterations is set to 1000. To show the accuracy of the methods, we report the error values

(3.2)

In the implementation of the preconditioners, three subsystems with SPD coefficient matrices need to be solved in each iteration of the GMRES method. These systems are solved using Cholesky factorizations when we apply the Full-GMRES method to the preconditioned system. On the other hand, the subsystems are solved using the CG method when we employ the FGMRES method. In this case, the initial guess for each subsystem is the zero vector, and the CG iteration is stopped as soon as the residual norm is reduced by a prescribed factor or the number of iterations exceeds a prescribed maximum.
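The inexact variant can be sketched as follows. Here SciPy's gmres merely stands in for a true FGMRES implementation (with a varying inner solve, FGMRES is the theoretically appropriate method), and P1, P2, P3, Ahat and b are assumed given, e.g., from a construction such as the one shown after Example 1 below.

```python
# Hedged sketch of the inexact preconditioner: each application solves the
# three SPD subsystems by CG instead of a Cholesky factorization.
import numpy as np
import scipy.sparse.linalg as spla

def make_inexact_prec(P1, P2, P3, inner_maxiter=100):
    sizes = [P.shape[0] for P in (P1, P2, P3)]

    def apply(r):
        parts, start = [], 0
        for P, sz in zip((P1, P2, P3), sizes):
            w, _ = spla.cg(P, r[start:start + sz], maxiter=inner_maxiter)
            parts.append(w)
            start += sz
        return np.concatenate(parts)

    N = sum(sizes)
    return spla.LinearOperator((N, N), matvec=apply)

# Usage (shapes must match the block partition of Ahat):
#   M = make_inexact_prec(P1, P2, P3)
#   x, info = spla.gmres(Ahat, b, M=M)   # info == 0 on convergence
```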

Numerical results are presented in the tables below. To show the effectiveness of the preconditioners, we also report the numerical results of Full-GMRES without any preconditioner.

Example 1.

[18, 9] Consider the saddle point problem (1.7) for which

and

with $\otimes$ denoting the Kronecker product and $h$ the discretization meshsize. Here, the total number of unknowns is $n + m + l$.
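For readers who wish to reproduce a test problem of this general type, the following Kronecker-product construction is one common variant from the related literature (cf. [9, 18]); it is an illustrative assumption, not necessarily the exact blocks of this example.

```python
# One common Kronecker-product construction for saddle point test problems
# of this type (cf. [9, 18]); assumed here for illustration only.
import scipy.sparse as sp

m = 16
h = 1.0 / (m + 1)                          # discretization meshsize
I = sp.identity(m, format='csr')

T = (1.0 / h**2) * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
F = (1.0 / h) * sp.diags([-1.0, 1.0], [-1, 0], shape=(m, m))

A = sp.block_diag((sp.kron(I, T) + sp.kron(T, I),
                   sp.kron(I, T) + sp.kron(T, I)), format='csr')   # SPD
B = sp.hstack((sp.kron(I, F), sp.kron(F, I)), format='csr')        # full row rank
C = sp.kron(F, I).tocsr()     # assumed choice; F nonsingular => full row rank

print(A.shape, B.shape, C.shape)   # (2m^2, 2m^2), (m^2, 2m^2), (m^2, m^2)
```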

Numerical results are presented in Table 1 for increasing problem sizes. This table shows the number of iterations (denoted by “Iters”) and the CPU time (denoted by “CPU”) of FGMRES without preconditioning and with the three preconditioners. Moreover, we provide the elapsed CPU time (denoted by “Prec.CPU”) for setting up the preconditioners of (1.5) and (1.6), and the total CPU time (denoted by “Total.CPU”). Furthermore, the parameters of the new preconditioner were set to fixed values. In all the tables, missing entries indicate that the method did not converge within 1000 seconds or within 1000 iterations. As we see, the new preconditioner outperforms the other examined preconditioners in terms of iteration counts, CPU time, and the accuracy of the computed solutions. It should be mentioned that, for the larger problems, the FGMRES method without preconditioning and the FGMRES method with the preconditioners of (1.5) and (1.6) fail to converge within 1000 iterations. Therefore, our preconditioner is more effective and practical than the preconditioners of (1.5) and (1.6) for solving saddle point problems of the form (1.7).

Numerical results of the Full-GMRES method in conjunction with the three preconditioners are shown in Table 2. We observe that the new preconditioner provides quite suitable results, in good agreement with what we claimed above for the preconditioned FGMRES method. In addition, for the largest problems the preconditioned GMRES method with the preconditioners of (1.5) and (1.6) fails to converge within 1000 seconds, whereas the FGMRES method with the new preconditioner requires less CPU time. It is noted that we could not set up the preconditioners of (1.5) and (1.6) for the largest problem because of memory limitations.

16 32 64 128 256
No preconditioning:
  Iters 425 949
  CPU 0.24 5.02
New preconditioner:
  Iters 109 80 65 71 78
  CPU 0.15 0.25 1.17 6.66 36.99
Preconditioner of (1.5):
  Iters 124 616
  Prec.CPU 0.0006 0.0015 0.0082
  CPU 0.81 25.53 288.93
  Total.CPU 0.81 25.53 288.93
Preconditioner of (1.6):
  Iters 163 785
  Prec.CPU 0.0006 0.0015 0.0082
  CPU 0.99 34.66 286.02
  Total.CPU 0.99 34.36 286.03
Table 1: Numerical results of FGMRES for Example 1.
16 32 64 128 256
No preconditioning:
  Iters 425 949
  CPU 0.21 5.35 14.27 45.34 182.99
New preconditioner:
  Iters 109 75 54 60 74
  Prec.CPU 0.0016 0.0037 0.0389 0.2334 1.8452
  CPU 0.05 0.08 0.24 1.67 14.08
  Total.CPU 0.05 0.08 0.28 1.91 15.92
Preconditioner of (1.5):
  Iters 115 402
  Prec.CPU 0.0286 0.2170 6.0734 261.1566
  CPU 0.05 1.77 42.95 454.22
  Total.CPU 0.08 1.98 49.02 715.38
Preconditioner of (1.6):
  Iters 156 569
  Prec.CPU 0.0286 0.2170 6.0734 261.1566
  CPU 0.05 3.15 41.72 451.88
  Total.CPU 0.08 3.37 47.80 713.04
Table 2: Numerical results of Full-GMRES for Example 1.
Example 2.

[18] Consider the saddle point problem (1.1), for which

is a block-diagonal matrix,

are both full row-rank matrices, where , ; with ;

is an identity matrix;

, are diagonal matrices with

and

Similar to Example 1, the remaining matrix is set as before. In addition, we choose suitable fixed values of the parameters. Numerical results for increasing problem sizes are listed in Table 3. As we observe, the numerical results illustrate that the new preconditioner considerably reduces the CPU time compared with the FGMRES method without preconditioning.

We have also applied the Full-GMRES method incorporated with the three preconditioners. In this case, the subsystems were solved exactly using Cholesky factorizations. Our numerical results show that the unpreconditioned Full-GMRES method outperforms the Full-GMRES method in conjunction with all three preconditioners in terms of elapsed CPU time for the largest problems. Hence, we have not reported these numerical results.

16 32 64 128 256 512
No preconditioning:
  Iters 186 190 187 180 172 163
  CPU 0.08 0.37 0.97 2.86 17.78 96.70
New preconditioner:
  Iters 70 69 68 65 63 60
  CPU 0.09 0.29 0.85 2.18 10.30 45.97
Preconditioner of (1.5):
  Iters 13 13 13 13
  Prec.CPU 0.0015 0.01 0.37 13.75
  CPU 0.02 0.18 2.59 34.55
  Total.CPU 0.03 0.20 2.96 48.31
Preconditioner of (1.6):
  Iters 19 18 18 18
  Prec.CPU 0.0015 0.01 0.37 13.75
  CPU 0.03 0.25 3.30 49.54
  Total.CPU 0.03 0.26 3.67 63.29
Table 3: Numerical results of FGMRES for Example 2.

4 Conclusion

A new block diagonal preconditioner has been presented for a class of 3×3 block saddle point problems. This preconditioner is based on augmentation, performs well in practice, is easy to implement, and is computationally more efficient than recently proposed preconditioners. We have further estimated lower and upper bounds for the eigenvalues of the preconditioned matrix. We have examined the new preconditioner as an accelerator for the FGMRES method as well as for Full-GMRES. Our numerical tests illustrate that the proposed preconditioner is quite suitable and is superior to the other tested preconditioners from the literature.

Acknowledgments

The authors would like to thank the anonymous referees for their useful comments and suggestions.

References

  • [1] F. Assous, P. Degond, E. Heintze, P.A. Raviart, J. Segre, On a finite-element method for solving the three-dimensional Maxwell equations. J. Comput. Phys. 109 (1993), 222-237.
  • [2] Z.-Z. Bai, M.K. Ng, Z.-Q. Wang, Constraint preconditioners for symmetric indefinite matrices. SIAM J. Matrix Anal. Appl. 31 (2009), 410-433.
  • [3] F.P.A. Beik, M. Benzi, Iterative methods for double saddle point systems. SIAM J. Matrix Anal. Appl. 39 (2018), 902-921.
  • [4] F.P.A. Beik, M. Benzi, Block Preconditioners for Saddle Point Systems Arising from Liquid Crystal Directors Modeling. Calcolo, 55 (2018), Article 29.
  • [5] M. Benzi, G.H. Golub, A preconditioner for generalized saddle point problems. SIAM J. Matrix Anal. Appl. 26 (2004), 20-41.
  • [6] L. Bergamaschi, On eigenvalue distribution of constraint-preconditioned symmetric saddle point matrices. Numer. Linear Algebra Appl. 19 (2012), 754-772.
  • [7] D.P. Bertsekas, Nonlinear Programming. 2nd edn, Athena Scientific, (1999).
  • [8] Z.-M. Chen, Q. Du, J. Zou, Finite element methods with matching and nonmatching meshes for Maxwell equations with discontinuous coefficients. SIAM J. Numer. Anal. 37 (2000), 1542-1570.
  • [9] Y. Cao, Shift-splitting preconditioners for a class of block three-by-three saddle point problems. Appl. Math. Lett. 96 (2019), 40-46.
  • [10] Z.H. Cao, Positive stable block triangular preconditioners for symmetric saddle point problems. Appl. Numer. Math. 57 (2007), 899-910.
  • [11] Y. Cao, J. Du, Q. Niu, Shift-splitting preconditioners for saddle point problems. J. Comput. Appl. Math. 272 (2014), 239-250.
  • [12] C. Chen, C.-F Ma, A generalized shift-splitting preconditioner for saddle point problems. Appl. Math. Lett. 43 (2015), 49-55.
  • [13] P. Ciarlet, J. Zou, Finite element convergence for Darwin model to Maxwell’s equations. RAIRO Math. Modeling Numer. Anal. 31 (1997), 213-249.
  • [14] H.C. Elman, D.J. Silvester, A.J. Wathen, Performance and analysis of saddle point preconditioners for the discrete steady-state Navier-Stokes equations. Numer. Math. 90 (2002), 665-688.
  • [15] D. Han, X. Yuan, Local linear convergence of the alternating direction method of multipliers for quadratic programs. SIAM J. Numer. Anal. 51 (2013), 3446-3457.
  • [16] N. Huang, Variable parameter Uzawa method for solving a class of block three-by-three saddle point problems. Numerical Algorithms, 85 (2020), 1233-1254.
  • [17] N. Huang, Y.-H. Dai, Q. Hu, Uzawa methods for a class of block three-by-three saddle point problems. Numer. Linear Algebra Appl. 26 (2019), e2265.
  • [18] N. Huang, C.-F. Ma, Spectral analysis of the preconditioned system for the block saddle point problem. Numer. Algorithms 81 (2019), 421-444.
  • [19] R.A. Horn, C.R. Johnson, Matrix Analysis. 2nd Edition, Cambridge University Press, Cambridge, (2012).
  • [20] M.-Q. Jiang, Y. Cao, L.-Q. Yao, On parameterized block triangular preconditioners for generalized saddle point problems. Appl. Math. Comput. 216 (2010), 1777-1789.
  • [21] K.B. Hu, J.C. Xu, Structure-preserving finite element methods for stationary MHD models. Math. Comp. 88 (2019), 553-581.
  • [22] M. Marden, Geometry of Polynomials. Math. Surveys Monogr., vol. 3, AMS, Providence, RI, USA, (1966).
  • [23] Y. Saad, A flexible inner-outer preconditioned GMRES algorithm. SIAM J. Sci. Comput. 14 (1993), 461-469.
  • [24] Y. Saad, Iterative Methods for Sparse Linear Systems. 2nd edn, SIAM, Philadelphia, (2003).
  • [25] D.K. Salkuyeh, M. Abdolmaleki, S. Karimi. On a splitting preconditioner for saddle point problems. J. Appl. Math. & Informatics. 36 (2018), 459-474.
  • [26] Bl. Sendov, A. Andreev, N. Kjurkchiev, Numerical solution of polynomial equations