Consider the following block saddle point problem:
where $A$ is a symmetric positive definite (SPD) matrix, and $B$ and $C$ have full row rank. In addition, the right-hand side vectors are given, and the solution vector is to be determined. Furthermore, we presume that the matrices $A$, $B$ and $C$ are large and sparse. It is not difficult to check that, under the above conditions, the coefficient matrix is nonsingular, and as a result system (1.1) has a unique solution. In this work, we focus on preconditioned Krylov-subspace methods, especially the preconditioned GMRES method (see [23, 24]).
Linear systems of the form (1.1) appear in a variety of scientific and engineering problems, for instance, fully discrete finite element methods for solving the time-dependent Maxwell equations with discontinuous coefficients [1, 8, 13], and the following quadratic program:
Linear systems of block form,
known as traditional saddle point problems, have been extensively studied for decades; here the (1,1) block is positive definite, the (2,2) block is positive semi-definite, and the off-diagonal block has full row rank. Constructing preconditioners to improve the convergence speed of Krylov-subspace methods for solving (1.2) has produced a considerable amount of literature, e.g., shift-splitting preconditioners [5, 11, 12, 29], block triangular preconditioners [10, 14, 20, 25], inexact constraint preconditioners [2, 6, 30], and so on. It is obvious that the block linear system (1.1) can be seen as a special case of the traditional form (1.2) via the following partitioning strategies:
However, most of the methods for the latter matrices cannot be directly applied to (1.1), because the properties of the sub-matrices in (1.3) and (1.4) differ from those of the traditional form (1.2). Indeed, the leading block in (1.3) is symmetric indefinite and is itself a standard saddle point matrix; thus, the linear system with the coefficient matrix (1.3) can be considered as a double saddle point problem, studied recently in [3, 4]. In addition, the leading block in (1.4) is symmetric, while the off-diagonal block is rank deficient. Therefore, it is of considerable interest to find new efficient preconditioners for the block saddle point problems (1.1).
Some preconditioners have been studied to accelerate the convergence rate of Krylov-subspace methods for solving this class of systems; see, for example, [9, 16, 17]. Recently, Huang and Ma [18] proposed two block diagonal (BD) preconditioners,
for solving (1.1), in which the diagonal blocks are SPD approximations of the corresponding blocks of the coefficient matrix. They also derived all the eigenpairs of the preconditioned matrices. Subsequently, Xie and Li proposed three new preconditioners for solving the linear system (1.1), which take the following form
They analyzed the spectral properties of the corresponding preconditioned matrices and showed that the proposed preconditioners significantly accelerate the convergence rate of the GMRES method. However, when the inner systems are solved inexactly, the elapsed CPU time increases drastically and the computed solutions are often unacceptable.
Here, we consider the following equivalent form of (1.1):
Although the coefficient matrix of the system (1.7) is not symmetric, it has some desirable properties. For instance, the matrix is positive semi-definite, i.e., its symmetric part is symmetric positive semi-definite. This is significant for the GMRES method: in fact, the restarted GMRES($m$) method converges for all restart values $m$. It is noticeable that setting up the exact preconditioners mentioned above is very time-consuming, and the inexact preconditioners need SPD approximations of certain blocks. In this paper, we establish a new block preconditioner for solving the linear system (1.7) which is easy to implement and has much better computational efficiency than the preconditioners studied recently.
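The positive semi-definiteness just mentioned can be checked numerically. The following Python sketch builds a small matrix with an assumed block structure for (1.7) (a sign-flipped three-by-three block form with SPD block $A$ and full row rank blocks $B$, $C$; this structure is our illustrative assumption, since the displayed equations are not reproduced here) and verifies that its symmetric part is positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, l = 8, 5, 3
# A: SPD block; B, C: generic dense stand-ins for the full row rank blocks
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))
C = rng.standard_normal((l, m))

# Assumed sign-flipped form: negating the constraint rows makes the
# skew parts cancel, so the symmetric part is diag(A, 0, 0).
Z1 = np.zeros((n, l)); Z2 = np.zeros((m, m)); Z3 = np.zeros((l, l))
cal_A = np.block([[A,    B.T, Z1],
                  [-B,   Z2,  C.T],
                  [Z1.T, -C,  Z3]])

H = 0.5 * (cal_A + cal_A.T)        # symmetric part of the coefficient matrix
eigs = np.linalg.eigvalsh(H)
assert eigs.min() > -1e-8          # positive semi-definite (up to rounding)
```

Because the skew-symmetric off-diagonal blocks cancel, the symmetric part carries only the SPD leading block, which is the property behind the convergence statement for restarted GMRES quoted above.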
It is noteworthy that few analytical results on spectral bounds are available for block matrices of this form. In fact, contrary to the $2\times 2$ block case, the eigen-system of a $3\times 3$ block matrix leads to a cubic equation, so estimating the bounds of the eigenvalues brings some difficulties. Consider the following monic polynomial:
Finding approximate roots of a polynomial through simple operations on its coefficients has generated a wealth of studies; see, for example, the comprehensive surveys [22, 26] and the references therein for further details. We will derive spectral bounds for the corresponding preconditioned system by using one of the classical sharp bounds on the absolute values of the roots of a monic polynomial.
The remainder of this paper is organized as follows. In Section 2, the new preconditioner is presented and the clustering properties of the corresponding preconditioned system are discussed. Numerical results are given in Section 3 to demonstrate the effectiveness of the new preconditioner. The paper ends with some concluding remarks in Section 4.
We end this section by introducing some notation that will be used in the subsequent sections. The conjugate transpose of a vector $u$ is denoted by $u^{*}$. For any square matrix with real eigenvalues, its minimum and maximum eigenvalues are denoted by $\lambda_{\min}$ and $\lambda_{\max}$, respectively. The norm $\|\cdot\|$ denotes the Euclidean norm. Moreover, we use the Matlab notation $(x; y; z)$ to denote the stacked vector $[x^{T}, y^{T}, z^{T}]^{T}$.
2 The new block diagonal preconditioner
We propose the following block diagonal preconditioner for solving the linear system (1.7):
The main advantage of this preconditioner over the preconditioners mentioned in Section 1 is that it is free of the Schur complement matrix and is easy to implement. Concerning the clustering properties of the eigenvalues of the preconditioned matrix, we have the following theorems.
Let a monic polynomial with complex coefficients be given. Then any root of it satisfies the following inequalities:
Cauchy’s lower and upper bounds
Montel’s lower and upper bounds
Carmichael-Mason’s lower and upper bounds
Frobenius’ lower and upper bounds
Note that Cauchy's bounds are essentially the sharpest ones in Theorem 2.1 and have been used for testing the sharpness of other, newer bounds. Hence, we will utilize these bounds to locate the eigenvalues of the corresponding preconditioned matrix.
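As an illustration of the bounds in Theorem 2.1, the following sketch implements Cauchy's lower and upper bounds for a monic polynomial and checks them against the roots of a small test cubic (the function name and the test polynomial are ours, chosen for illustration; they are not taken from the paper):

```python
import numpy as np

def cauchy_bounds(coeffs):
    """Cauchy's bounds on |z| for the roots z of the monic polynomial
    x^n + c[n-1] x^(n-1) + ... + c[1] x + c[0], with coeffs = [c0, ..., c_{n-1}]."""
    c = np.abs(np.asarray(coeffs, dtype=float))
    # upper bound: |z| <= 1 + max_i |c_i|
    upper = 1.0 + c.max()
    # lower bound: apply the upper bound to the reversed polynomial,
    # whose roots are the reciprocals 1/z (assumes c[0] != 0)
    lower = c[0] / (c[0] + max(1.0, c[1:].max() if len(c) > 1 else 0.0))
    return lower, upper

# sanity check on x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
lo, up = cauchy_bounds([-6.0, 11.0, -6.0])
roots = np.roots([1.0, -6.0, 11.0, -6.0])
assert lo <= np.abs(roots).min() and np.abs(roots).max() <= up
```

For this cubic the bounds are $6/17 \le |z| \le 12$, which indeed enclose the roots $1, 2, 3$; the same mechanism is what localizes the eigenvalues of the preconditioned matrix via the cubic equation derived below.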
Suppose that $A$ is SPD, and $B$ and $C$ have full row rank. Then the preconditioned matrix has a multiple eigenvalue, and the corresponding eigenvectors are of the form given below, where the stated columns form a basis for the null space of the corresponding block. Moreover, for the remaining eigenvalues we have the following statements:
If , ,
If , ,
for some and .
Let an eigenpair of the preconditioned matrix be given. Then we have
which can be rewritten as
If , then from (2.2a) we obtain , which shows that . This, along with (2.2c), yields , which contradicts the fact that is an eigenvector. Next, we complete the proof by considering the following cases.
respectively. This shows the claimed eigenvalue, and the corresponding eigenvectors are of the stated form.
Substituting the preceding equalities into (2.2b) gives
Premultiplying the above equation by suitable vectors, we obtain the following cubic equation:
To obtain the clustering properties of the eigenvalues around the point we set . By substituting in (2.3) we get the following cubic equation
and find upper and lower bounds for it. From Theorem 2.1, we deduce that Cauchy's lower and upper bounds are
which completes the proof. ∎
As we see in Theorem 2.2, the lower and upper bounds of some eigenvalues of the preconditioned matrix depend on the two parameters. Using (2.4), the Schur decompositions of the relevant matrices, and straightforward computations, the following upper and lower bounds are obtained:
Hence, we conclude that
Now, we consider the following two cases:
Case I: From (2.7) we deduce that it suffices to seek the two parameters such that
For the sake of simplicity, let
Therefore, we have the following inequality
It is clear that the two matrices involved are both SPD under our assumptions; hence the stated inequality holds. Therefore, we can set
and derive that
Noting these relations, the above inequality is easily deduced.
Case II: In this case, it is enough to find the two parameters such that
Hence, we have
Similar to the argument in the previous case we set
and finally conclude that
According to the above discussion, we state the following theorem, which gives conditions on the two parameters under which each of the cases occurs.
We have the following cases:
Case I: If
Case II: If
In each iteration of a Krylov subspace method for solving the preconditioned system, we need to compute a preconditioned vector. To do so, it is enough to solve a system with the preconditioner as coefficient matrix. Since the preconditioner is a block diagonal matrix, the solution of this system reduces to the solution of three systems whose coefficient matrices are all SPD. Summarizing the above notes, we can state Algorithm 1.
Since the coefficient matrices of the subsystems in Algorithm 1 are SPD, they can be solved either exactly, using the Cholesky factorization, or inexactly, using the conjugate gradient (CG) method. It is noted that the shift matrix in steps 3 and 4 of Algorithm 1 increases the convergence speed of the CG method considerably.
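The per-iteration work described above can be sketched as follows. The particular diagonal blocks used here (shifted, augmented forms built from $A$, $B$, $C$ and shifts $\alpha$, $\beta$) are hypothetical stand-ins, since the paper's exact blocks are not reproduced in this excerpt; the point is the solve pattern: factor the three SPD blocks once, then back-solve at every Krylov iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, l = 8, 5, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # SPD leading block
B = rng.standard_normal((m, n))
C = rng.standard_normal((l, m))
alpha, beta = 1.0, 1.0                      # illustrative shift parameters

# Hypothetical block-diagonal preconditioner with shifted, augmented blocks;
# all three blocks are SPD, as required by Algorithm 1.
blocks = [A,
          alpha * np.eye(m) + B @ B.T,
          beta * np.eye(l) + C @ C.T]

# Cholesky-factor each SPD block once; reuse the factors at every iteration.
factors = [np.linalg.cholesky(P) for P in blocks]

def apply_Pinv(r):
    """Solve P v = r via three independent SPD (Cholesky) solves."""
    parts, offset = [], 0
    for L in factors:
        k = L.shape[0]
        rk = r[offset:offset + k]
        # forward then backward substitution: L (L^T v_k) = r_k
        parts.append(np.linalg.solve(L.T, np.linalg.solve(L, rk)))
        offset += k
    return np.concatenate(parts)

r = rng.standard_normal(n + m + l)
v = apply_Pinv(r)
```

Factoring once and reusing the factors is what makes the block diagonal structure cheap inside GMRES: the cost per iteration is three triangular back-substitutions rather than three fresh factorizations.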
3 Numerical experiments
In this section, we experimentally compare the effectiveness of our proposed preconditioner with the two preconditioners defined in Eqs. (1.5) and (1.6), respectively, for solving the saddle point linear system (1.7). All the numerical experiments were performed in double precision using Matlab on a laptop with an Intel Core i7 2.9 GHz CPU and 16 GB of RAM.
We apply the preconditioners to accelerate the convergence of the GMRES method (flexible and full versions) [23, 24]. In the implementation of the preconditioners within the GMRES method, three subsystems with SPD coefficient matrices need to be solved. When the subsystems are solved inexactly, we apply the flexible GMRES (FGMRES) method to solve the preconditioned system, and when they are solved exactly, we employ the full version of GMRES (Full-GMRES).
In the numerical results presented below, the right-hand side vector is constructed from a vector of all ones. We use the zero vector as the initial guess for the GMRES method, and the iteration is stopped once the relative residual 2-norm satisfies
where the argument is the $k$th computed approximate solution. The maximum number of iterations is set to 1000. To show the accuracy of the methods, we report the values
In the implementation of the preconditioners, three subsystems with SPD coefficient matrices need to be solved in each iteration of the GMRES method. When we apply the Full-GMRES method to the preconditioned system, these subsystems are solved using Cholesky factorizations. On the other hand, when we employ the FGMRES method, the subsystems are solved using the CG method; in this case, the initial guess is the zero vector, and the iteration is stopped as soon as the residual norm is reduced by a prescribed factor or the number of inner iterations exceeds a prescribed maximum.
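A minimal conjugate gradient routine with a zero initial guess and a relative-residual stopping rule mirrors the inner solver just described (the tolerance and iteration cap below are illustrative, since the paper's exact values are not reproduced in this excerpt):

```python
import numpy as np

def cg(A, b, rtol=1e-6, maxiter=200):
    """Plain CG with a zero initial guess; stops when the residual norm
    is reduced by the factor rtol (illustrative value) or maxiter is hit."""
    x = np.zeros_like(b)
    r = b.copy()               # residual for x0 = 0 is just b
    p = r.copy()
    rs = r @ r
    rs0 = rs
    for _ in range(maxiter):
        if np.sqrt(rs / rs0) <= rtol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# small SPD test system standing in for one inner subsystem
rng = np.random.default_rng(2)
nn = 20
M = rng.standard_normal((nn, nn))
A = M @ M.T + nn * np.eye(nn)
b = rng.standard_normal(nn)
x = cg(A, b)
```

The shift in the second and third diagonal blocks improves the conditioning of exactly these inner systems, which is why the text notes a considerable speed-up of the CG iterations.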
Numerical results are presented in the tables. To show the effectiveness of the preconditioners we also report the numerical results of Full-GMRES without any preconditioner.
with $\otimes$ being the Kronecker product symbol and $h$ the discretization meshsize. Here, the total number of unknowns is determined by the meshsize.
Numerical results are presented in Table 1 for different problem sizes. This table shows the number of iterations (denoted by “Iters”) and the CPU time (denoted by “CPU”) of FGMRES without preconditioning and with the three preconditioners. Moreover, we provide the elapsed CPU time for setting up the preconditioners (denoted by “Prec.CPU”) and the total CPU time (denoted by “Total.CPU”), together with the parameter values used for the new preconditioner. In all the tables, special symbols indicate that the method did not converge within 1000 seconds or within the maximum number of iterations, respectively. As we see, the new preconditioner outperforms the other examined preconditioners in terms of iteration counts, CPU time, and the accuracy of the computed solution. It should be mentioned that FGMRES without preconditioning and FGMRES with the two earlier preconditioners fail to converge within 1000 iterations. Therefore, our preconditioner is more effective and practical than the earlier preconditioners for solving saddle point problems of the form (1.7).
Numerical results of the Full-GMRES method in conjunction with the three preconditioners are shown in Table 2. We observe that the new preconditioner provides quite suitable results, in good agreement with what we claimed above for the preconditioned FGMRES method. In addition, when the problem is large, the preconditioned GMRES method with the two earlier preconditioners fails to converge within 1000 seconds, whereas the FGMRES method with the new preconditioner requires less CPU time. It is noted that we could not set up the earlier preconditioners for the largest problems because of memory limitations.
is a block-diagonal matrix, and the constraint blocks are both full row-rank matrices, built from an identity matrix and diagonal matrices with
Similar to Example 1, the matrix is set as before. In addition, we choose the two parameters appropriately. Numerical results for different problem sizes are listed in Table 3. As we observe, the numerical results illustrate that the new preconditioner considerably reduces the CPU time relative to the FGMRES method without preconditioning.
We have also applied the Full-GMRES iteration method incorporated with the three preconditioners. In this case, the subsystems were solved exactly using the Cholesky factorization. Our numerical results show that the FGMRES method outperforms the Full-GMRES method in conjunction with all three preconditioners in terms of elapsed CPU time for large problem sizes. Hence, we have not reported these results.
4 Concluding remarks
A new block diagonal preconditioner has been presented for a class of block saddle point problems. This preconditioner is based on augmentation and performs well in practice. It is also easy to implement and much more efficient than recently proposed preconditioners. We have further estimated lower and upper bounds for the eigenvalues of the preconditioned matrix. We have examined the new preconditioner as an accelerator for both the FGMRES and the Full-GMRES methods. Our numerical tests illustrate that the proposed preconditioner is quite suitable and is superior to the other preconditioners tested from the literature.
The authors would like to thank the anonymous referees for their useful comments and suggestions.
-  F. Assous, P. Degond, E. Heintze, P.A. Raviart, J. Segre, On a finite-element method for solving the three-dimensional Maxwell equations. J. Comput. Phys. 109 (1993), 222-237.
-  Z.-Z. Bai, M.K. Ng, Z.-Q. Wang, Constraint preconditioners for symmetric indefinite matrices. SIAM J. Matrix Anal. Appl. 31 (2009), 410-433.
-  F.P.A. Beik, M. Benzi, Iterative methods for double saddle point systems. SIAM J. Matrix Anal. Appl. 39 (2018), 902-921.
-  F.P.A. Beik, M. Benzi, Block Preconditioners for Saddle Point Systems Arising from Liquid Crystal Directors Modeling. Calcolo, 55 (2018), Article 29.
-  M. Benzi, G.H. Golub, A preconditioner for generalized saddle point problems. SIAM J. Matrix Anal. Appl. 26 (2004), 20-41.
-  L. Bergamaschi, On eigenvalue distribution of constraint-preconditioned symmetric saddle point matrices. Numer. Linear Algebra Appl. 19 (2012), 754-772.
-  D.P. Bertsekas, Nonlinear Programming. 2nd edn, Athena Scientific, (1999).
-  Z.-M. Chen, Q. Du, J. Zou, Finite element methods with matching and nonmatching meshes for Maxwell equations with discontinuous coefficients. SIAM J. Numer. Anal. 37 (2000), 1542-1570.
-  Y. Cao, Shift-splitting preconditioners for a class of block three-by-three saddle point problems. Appl. Math. Lett. 96 (2019), 40-46.
-  Z.H. Cao, Positive stable block triangular preconditioners for symmetric saddle point problems. Appl. Numer. Math. 57 (2007), 899-910.
-  Y. Cao, J. Du, Q. Niu, Shift-splitting preconditioners for saddle point problems. J. Comput. Appl. Math. 272 (2014), 239-250.
-  C. Chen, C.-F. Ma, A generalized shift-splitting preconditioner for saddle point problems. Appl. Math. Lett. 43 (2015), 49-55.
-  P. Ciarlet, J. Zou, Finite element convergence for Darwin model to Maxwell’s equations. RAIRO Math. Modeling Numer. Anal. 31 (1997), 213-249.
-  H.C. Elman, D.J. Silvester, A.J. Wathen, Performance and analysis of saddle point preconditioners for the discrete steady-state Navier-Stokes equations. Numer. Math. 90 (2002), 665-688.
-  D. Han, X. Yuan, Local linear convergence of the alternating direction method of multipliers for quadratic programs. SIAM J. Numer. Anal. 51 (2013), 3446-3457.
-  N. Huang, Variable parameter Uzawa method for solving a class of block three-by-three saddle point problems. Numerical Algorithms, 85 (2020), 1233-1254.
-  N. Huang, Y.H. Dai, Q. Hu, Uzawa methods for a class of block three-by-three saddle point problems. Numer Linear Algebra Appl. 26 (2019), e2265.
-  N. Huang, C.-F. Ma, Spectral analysis of the preconditioned system for the block saddle point problem. Numer. Algorithms 81 (2019), 421-444.
-  R.A. Horn, C.R. Johnson, Matrix Analysis. 2nd Edition, Cambridge University Press, Cambridge, (2012).
-  M.-Q. Jiang, Y. Cao, L.-Q. Yao, On parameterized block triangular preconditioners for generalized saddle point problems. Appl. Math. Comput. 216 (2010), 1777-1789.
-  K.B. Hu, J.C. Xu, Structure-preserving finite element methods for stationary MHD models. Math. Comp. 88 (2019), 553-581.
-  M. Marden, Geometry of Polynomials. Math. Surveys Monogr., vol. 3, AMS, Providence, RI, USA, (1966).
-  Y. Saad, A flexible inner-outer preconditioned GMRES algorithm. SIAM J. Sci. Comput. 14 (1993), 461-469.
-  Y. Saad, Iterative Methods for Sparse Linear Systems. 2nd edn, SIAM, Philadelphia, (2003).
-  D.K. Salkuyeh, M. Abdolmaleki, S. Karimi, On a splitting preconditioner for saddle point problems. J. Appl. Math. & Informatics 36 (2018), 459-474.
-  Bl. Sendov, A. Andreev, N. Kjurkchiev, Numerical solution of polynomial equations