1 Introduction
Symmetric indefinite linear systems
(1) Ax = b
arise in many applications ranging from optimization problems to problems in computational physics, see e.g. [2, 24]. In this paper we assume that the coefficient matrix A is sparse, full-rank, symmetric and indefinite with only a few negative eigenvalues. Our motivation to develop a new preconditioned iterative method arises from an application in the automotive industry. In order to control brake squeal, large-scale eigenvalue problems are solved via a shift-and-invert Arnoldi method to obtain a reduced model that can be used for parameter studies and optimization, see [15] and Section 3.1. We propose the use of a two-level preconditioned iterative method with a positive definite preconditioner for the solution of the arising linear systems. The basic idea of such a preconditioned iteration is well-known. In the context of optimization problems, see [14], a sparse Bunch–Parlett factorization
(2) A = P L D L^T P^T
is suggested as a solver for the systems involving the indefinite blocks of various preconditioners. Here P is a permutation matrix (with P^T P = I), L is a sparse lower triangular matrix (typically with some fill-in compared to the sparsity pattern of A), and D is a block-diagonal matrix that contains either 1×1 or 2×2 blocks. Given such a factorization, one can modify the block-diagonal matrix D to obtain a positive definite |D| such that the eigenvalues of |D| are the absolute values of the eigenvalues of D, so that M = P L |D| L^T P^T also is positive definite. If a diagonal block of D is 1×1 and negative, then one replaces it with its absolute value. Otherwise, it is a symmetric 2×2 block,
(3) D_i = [ d_11  d_21 ; d_21  d_22 ],
and one computes the spectral decomposition
(4) D_i = U_i Λ_i U_i^T,
where U_i and Λ_i = diag(λ_1, λ_2) satisfy U_i^T U_i = I, and one replaces the block with
(5) U_i |Λ_i| U_i^T.
The matrix M = P L |D| L^T P^T, if easily available, is a good preconditioner for a preconditioned Krylov subspace method, such as the Minimum Residual method (MINRES) [20]: since the spectrum of the preconditioned matrix M^{-1}A consists only of the values +1 and -1, MINRES would converge in at most two iterations in exact arithmetic if the factorization is exact. However, this preconditioner is, in general, not practical for large problems due to fill-in and large storage requirements. In [16], therefore, an incomplete factorization based preconditioner for MINRES is proposed.
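A small numerical sketch of this modification, using the dense `scipy.linalg.ldl` routine as a stand-in for a sparse Bunch–Parlett factorization (all sizes and the random test matrix are illustrative assumptions, not from the paper):

```python
import numpy as np
from scipy.linalg import ldl, eigh

# Small dense stand-in for the sparse Bunch-Parlett factorization A = P L D L^T P^T.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2  # symmetric and (generically) indefinite

lu, d, _ = ldl(A)  # A == lu @ d @ lu.T; d has 1x1 / 2x2 diagonal blocks

# Replace every diagonal block of d by its matrix absolute value.  Because d is
# block diagonal, a global spectral decomposition acts blockwise, so this is
# equivalent to the 1x1 / 2x2 block replacements described in the text.
lam, U = eigh(d)
d_abs = U @ np.diag(np.abs(lam)) @ U.T

M = lu @ d_abs @ lu.T  # positive definite modification of A

# The spectrum of the preconditioned matrix M^{-1} A consists of +1 and -1 only.
ev = np.linalg.eigvals(np.linalg.solve(M, A))
```

For a sparse matrix one would of course keep the factors sparse and modify the blocks in place; the dense calls here only illustrate the algebra.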
Another suggestion for a preconditioner of MINRES, proposed in [28], is the positive definite absolute value of A, defined as |A| = V|Λ|V^T, in which A = VΛV^T is the spectral decomposition of A. However, to avoid the high computational complexity of the spectral decomposition, in [28] it is suggested to use a geometric multigrid approximation of the absolute value preconditioner, and it is illustrated via a model problem that this approach is very effective when the system matrix arises from elliptic partial differential equations.
In our motivating problem, the indefinite matrix arises from a perturbed wave equation, where the resulting linear system depends on parameters and has the extra property that the number of negative eigenvalues is much smaller than the number of positive eigenvalues. For this class of problems we propose a new two-level iterative scheme that combines the absolute value preconditioner approach with a deflation procedure, and we show that this method is also very effective for a large class of indefinite problems arising in other applications.
2 A two-level iterative scheme
In this section we describe a new two-level preconditioned iterative scheme for symmetric indefinite linear systems where the coefficient matrix has only very few negative eigenvalues. The method employs MINRES together with a modified absolute value preconditioner that is constructed via a deflation procedure which, however, is not carried out explicitly. The linear systems involving the preconditioner are themselves solved iteratively via the preconditioned Conjugate Gradient method (CG) [11], which can be preconditioned via an incomplete factorization, see e.g. [24], of the original coefficient matrix A, or via any other preconditioner obtained from A. These include, but are not limited to, Sparse Approximate Inverse [4, 17], Algebraic Multigrid [30, 5], and banded [19] preconditioners. We note that, just like any iterative method for symmetric linear systems, the proposed scheme would benefit from symmetric permutations and scalings that enhance the numerical stability of the preconditioner. Alternatively, one can use nonsymmetric permutations and scalings [9] to further improve the numerical stability of the preconditioner [3]. However, this would destroy the symmetry, with the disadvantage that robust Krylov subspace methods with short-term recurrences are no longer applicable. In this work, we show the robustness of the proposed iterative scheme with classical preconditioners. We illustrate that this MINRES-CG iterative scheme is very effective and more robust than other preconditioned general Krylov subspace methods, such as the restarted Generalized Minimum Residual method (GMRES) [25], the stabilized Bi-Conjugate Gradient method (BiCGStab) [27], and inner-outer FGMRES-GMRES [23] using the same incomplete LU (ILU) preconditioner, or MINRES with a modified incomplete factorization preconditioner.
As an approximation to the absolute value preconditioner we use
(6) M = A + 2 V |Λ| V^T,
where the columns of V span an approximate invariant subspace of A associated with the (say k) negative eigenvalues and |Λ| is the absolute value of the diagonal matrix of these negative eigenvalues. Since we have assumed that k is much smaller than n, the modification (or, as it is sometimes called, deflation) is of small rank. In each iteration of MINRES applied to (1) a system of the form
(7) M z = r
has to be solved, and again the preconditioned matrix M^{-1}A has only the eigenvalues +1 or -1, so that MINRES with the exact preconditioner converges theoretically again in at most two iterations. However, since M is symmetric and positive definite, we propose to use a preconditioned CG iteration for solving system (7) approximately, with an indefinite preconditioner that is an approximation of the original coefficient matrix A itself. Note that the eigenvalues of the preconditioned matrix for CG, A^{-1}M, would again be either +1 or -1 if the exact matrix A were used.
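The ±1 clustering is easy to verify numerically. The following sketch builds a symmetric matrix with a few negative eigenvalues and forms the deflation-based preconditioner from the exact negative eigenpairs (sizes and spectra are illustrative assumptions):

```python
import numpy as np

# Build a symmetric matrix with exactly three negative eigenvalues.
rng = np.random.default_rng(1)
n, k = 50, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.concatenate([[-2.0, -1.0, -0.5], np.linspace(0.1, 10.0, n - k)])
A = (Q * evals) @ Q.T  # A = Q diag(evals) Q^T

# Deflation-based absolute value preconditioner M = A + 2 V |Lambda| V^T,
# here with the exact invariant subspace of the k negative eigenvalues.
V = Q[:, :k]
lam = evals[:k]
M = A + 2.0 * V @ np.diag(np.abs(lam)) @ V.T

# With exact eigenpairs M equals |A|, so A^{-1} M has eigenvalues +1 and -1 only.
mu = np.linalg.eigvals(np.linalg.solve(A, M))
```

With an inexact invariant subspace the eigenvalues only cluster around ±1 instead of matching them exactly.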
Indefinite preconditioning for symmetric positive definite systems is studied in [12], where the preconditioned system is solved via a Krylov subspace method other than CG that does not require positive definiteness of the coefficient matrix. Indefinite preconditioning for the CG method is, however, rarely applied, with the exception of [22], where CG for indefinite systems with an indefinite preconditioner is used, but there it is assumed that the preconditioned matrix is positive definite. In our setting, this assumption does not hold.
The first-level preconditioner M is symmetric and positive definite, but dense, so it should not be formed explicitly. On the other hand, the second-level preconditioner is sparse and symmetric but not positive definite. However, the preconditioned CG iteration is still guaranteed not to break down (see [24, p. 277]) with an indefinite preconditioner, which can be seen as follows. It is well-known, see e.g. [24, p. 279], that preconditioned CG with a preconditioner B applied to a system with symmetric positive definite coefficient matrix M can be expressed in an indefinite scalar product by replacing the Euclidean inner products in CG by B-inner products. If M is symmetric positive definite and B is symmetric indefinite (but invertible), then we can define the indefinite inner product <x, y>_B := x^T B y, so B^{-1}M is positive definite with respect to the B-inner product, since <x, B^{-1}Mx>_B = x^T M x > 0 for all x ≠ 0.
Given the system M y = c, an initial guess y_0, and a preconditioner B, as CG is a projection-based Krylov subspace method, the iterates y_m must satisfy the orthogonality condition
(8) <r_m, v>_B = 0 for all v in K_m,
where r_m = B^{-1}(c - M y_m) and K_m = span{z_0, (B^{-1}M) z_0, ..., (B^{-1}M)^{m-1} z_0} with z_0 = B^{-1}(c - M y_0). Note that (8) is equivalent to the orthogonality condition of CG without preconditioning,
(9) (c - M y_m)^T v = 0 for all v in K_m.
Therefore, indefinitely preconditioned CG minimizes the error
(10) || y - y_m ||_M
in the energy norm defined by the positive definite matrix M.
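This error-minimization property can be observed directly with a hand-rolled preconditioned CG in which the preconditioner is deliberately indefinite. The setup below (M = |A| as the SPD system matrix, B = A as the indefinite preconditioner, random spectra and sizes) is an illustrative assumption, not the paper's test problem:

```python
import numpy as np

def pcg(Mmat, c, apply_Binv, maxiter=10, tol=1e-10):
    """Standard preconditioned CG; apply_Binv may apply an *indefinite* preconditioner."""
    y = np.zeros_like(c)
    r = c.copy()
    z = apply_Binv(r)
    p = z.copy()
    rz = r @ z                      # may be negative for an indefinite preconditioner
    iterates = [y.copy()]
    for _ in range(maxiter):
        Mp = Mmat @ p
        alpha = rz / (p @ Mp)       # denominator positive since Mmat is SPD
        y = y + alpha * p
        r = r - alpha * Mp
        iterates.append(y.copy())
        if np.linalg.norm(r) <= tol * np.linalg.norm(c):
            break
        z = apply_Binv(r)
        rz_new = r @ z
        beta = rz_new / rz
        rz, p = rz_new, z + beta * p
    return y, iterates

# M = |A| is SPD, while the preconditioner B = A is indefinite.
rng = np.random.default_rng(2)
n, k = 40, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.concatenate([[-3.0, -1.5, -0.7], np.linspace(0.5, 5.0, n - k)])
A = (Q * evals) @ Q.T
M = (Q * np.abs(evals)) @ Q.T
c = rng.standard_normal(n)

y_exact = np.linalg.solve(M, c)
_, its = pcg(M, c, lambda r: np.linalg.solve(A, r))
# Energy-norm errors ||y - y_m||_M along the iteration:
errs = [np.sqrt((y_exact - y) @ M @ (y_exact - y)) for y in its]
```

Since B^{-1}M = A^{-1}|A| has only the eigenvalues ±1, the error drops to (numerical) zero within two steps, and it decreases monotonically in the M-norm as claimed.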
In summary, the proposed scheme consists of three stages. First, an initial preconditioner is obtained from the coefficient matrix A, e.g. using an ILU factorization. Second, we compute approximations to the negative eigenvalues and the corresponding invariant subspace. This computation itself may be very expensive, even if the invariant subspace has small dimension. However, in our motivating application many linear systems with the same coefficient matrix (or closely related coefficient matrices) need to be solved, hence this potentially expensive initial cost is quickly amortized. This is typical when solving eigenvalue problems with the shift-and-invert Arnoldi method as in [15]. The third stage is the iterative solution stage, consisting of nested MINRES and CG iterations (Algorithm 1). Note that while the outer MINRES iterations require matrix-vector multiplications with the original sparse coefficient matrix A, the inner CG iterations require matrix-vector multiplications with M, which are efficiently performed using one sparse matrix-vector multiplication with A together with dense matrix-vector operations (BLAS Level 2) and vector-vector operations (BLAS Level 1) in the following procedure
(11) M x = A x + 2 V (|Λ| (V^T x)).
The total cost of each such matrix multiplication operation is O(nnz(A) + kn) arithmetic operations, where nnz(A), k and n are the number of nonzeros of A, the number of negative eigenvalues, and the number of rows of A, respectively. We note that this extra operation is much more cache friendly than constructing an orthonormal basis in GMRES, which relies on inner products (BLAS Level 1). Alternatively, to further speed up the convergence, the proposed scheme can be implemented using RMINRES [13, 29], i.e. recycled and deflated MINRES, as the outer solver instead of plain MINRES. The trade-off would be increased storage and computation requirements due to the necessary orthogonalization against the recycled subspace and the updates of the recycled subspace [29]. Furthermore, finding subspaces that lead to improved convergence is considered to be a highly challenging task and is application specific [13].
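The three stages above can be sketched end to end with SciPy standing in for the MATLAB tools used in the paper. All problem data below are illustrative assumptions; the exact inverse of A plays the role of the inner (incomplete-factorization) preconditioner, and a dense eigensolver call stands in for a large-scale ARPACK computation:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg, minres, eigsh

rng = np.random.default_rng(3)
n, k = 120, 4
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.concatenate([-np.linspace(0.4, 2.0, k), np.linspace(0.3, 8.0, n - k)])
A = (Q * evals) @ Q.T
b = rng.standard_normal(n)

# Stage 2: approximate the k negative eigenpairs ('SA' = smallest algebraic).
lam, V = eigsh(A, k=k, which='SA')

# First-level preconditioner M = A + 2 V |Lambda| V^T, applied matrix-free as in (11).
def M_matvec(x):
    return A @ x + 2.0 * V @ (np.abs(lam) * (V.T @ x))
M_op = LinearOperator((n, n), matvec=M_matvec, dtype=np.float64)

# Second-level (indefinite) preconditioner for the inner CG: here the exact A^{-1}
# as a stand-in for an incomplete factorization of A (small dense example only).
A_inv = np.linalg.inv(A)
N_op = LinearOperator((n, n), matvec=lambda r: A_inv @ r, dtype=np.float64)

# Inner solver: CG on M z = r, preconditioned by the indefinite approximation of A.
def apply_Minv(r):
    z, _ = cg(M_op, r, M=N_op, maxiter=50)
    return z
Minv_op = LinearOperator((n, n), matvec=apply_Minv, dtype=np.float64)

# Outer solver: preconditioned MINRES on A x = b.
x, info = minres(A, b, M=Minv_op, maxiter=100)
```

Because the inner solves are inexact, the outer preconditioner varies slightly from step to step; with a reasonably tight inner tolerance this is harmless in practice, as the experiments in Section 4 show.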
2.1 Improvement via the Sherman–Morrison–Woodbury Formula
The preconditioner M in MINRES-CG is a rank-k update of A. Therefore, one can use the Sherman–Morrison–Woodbury formula to express M^{-1}. Given
(12) M = A + 2 V |Λ| V^T,
after applying the Sherman–Morrison–Woodbury formula and some algebraic manipulations (using A V = V Λ and Λ = -|Λ|), we obtain
(13) M^{-1} = A^{-1} + 2 V |Λ|^{-1} V^T.
Note that since we do not have the exact invariant subspace, but use an approximation of it, the resulting preconditioner is not necessarily positive definite. Still, we can use it as the preconditioner for the MINRES iterations. In other words, we apply (13) as the preconditioner, in which the action of A^{-1} is approximated, for example by an incomplete factorization of A. In the improved scheme, application of the preconditioner involves additional dense matrix-vector multiplications (BLAS Level 2) with a cost of O(kn) arithmetic operations but no additional storage requirement. Hereafter, we refer to this improved version as MINRESCG.
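The simplified Woodbury identity can be checked numerically with exact eigenpairs (the matrix and spectrum below are illustrative assumptions; with an approximate V the identity, and hence definiteness, holds only approximately):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 30, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.concatenate([[-2.0, -1.0, -0.4], np.linspace(0.2, 6.0, n - k)])
A = (Q * evals) @ Q.T

V, lam = Q[:, :k], evals[:k]          # exact negative eigenpairs: A V = V Lambda
M = A + 2.0 * V @ np.diag(np.abs(lam)) @ V.T

# Sherman-Morrison-Woodbury, simplified using A V = V Lambda and Lambda = -|Lambda|:
Minv = np.linalg.inv(A) + 2.0 * V @ np.diag(1.0 / np.abs(lam)) @ V.T
```

In the actual scheme A^{-1} would of course be replaced by the action of an incomplete factorization, so only dense BLAS-2 products with V are added per application.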
3 Applications of the two-level method
In this section we describe the applications to which we apply the proposed two-level procedure.
3.1 Finite Element Models of Disk Brakes
In the context of noise reduction in disk brakes, reduced order models are determined from the finite element model [15] by computing the eigenvalues in the right half plane and close to the imaginary axis of a parametric Quadratic Eigenvalue Problem (QEP)
(14) 
in which
(15) 
and
(16) 
where and are symmetric positive definite,
is skew-symmetric,
are symmetric indefinite, and the remaining matrix is general [15]. Here Ω denotes the angular velocity of the disk and Ω_ref the reference angular velocity. The QEP is solved by first rewriting it as a linear eigenvalue problem, using a companion linearization of (14) given by
(17) 
Audible brake squeal is associated with eigenvalues in the right half plane. For this reason we are interested in those eigenvalues that lie in a rectangular domain in the complex plane corresponding to the audible range.
Solving the eigenvalue problem (17) via an eigensolver such as the shift-and-invert Arnoldi method [18] or contour-integration based methods [21, 26] requires the solution of a shifted linear system of equations in each iteration; see [15] for details of the eigensolver. To apply our two-level linear system solver, we consider the solution of the following shifted linear system with complex shifts inside the rectangular domain of interest,
(18) 
where , , and
(19) 
In [15] this complex linear system is solved with a sparse complex direct solver. To solve the problem iteratively, we follow [1] and map the complex system (18) to an equivalent real system of twice the size.
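One standard real-equivalent formulation (the particular block arrangement in [1] may differ; this is the common (R, -S; S, R) variant, shown here on random illustrative data) is:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # complex matrix
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)            # complex rhs

R, S = C.real, C.imag
K = np.block([[R, -S],
              [S,  R]])                      # double-size real coefficient matrix
rhs = np.concatenate([f.real, f.imag])

xy = np.linalg.solve(K, rhs)
z = xy[:n] + 1j * xy[n:]                     # recover the complex solution
```

Multiplying out the blocks shows the equivalence: (R + iS)(x + iy) = (Rx - Sy) + i(Sx + Ry).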
Splitting the coefficient matrix, the solution, and the right-hand side into their real and imaginary parts, we obtain
(20) 
for the real and imaginary parts of the solution, respectively. This leads to the real system
(21) 
which we then solve via a preconditioned Krylov subspace method with preconditioner
(22) 
where
(23) 
Note that both diagonal blocks are symmetric and positive definite. The preconditioner can be block-factorized as
(24) 
Hence, the major cost in solving systems involving the preconditioner is the solution of two linear systems where the coefficient matrix is (i) the leading block and (ii) the Schur complement. Since the solution of (i) is quite trivial, we only discuss how to solve systems involving the Schur complement matrix, which typically is dense, see [2], but in our case it has the factorization
(25) 
Solving systems involving the Schur complement matrix therefore requires two steps: (i) scaling the right-hand side vector and (ii) solving systems with the remaining factor as coefficient matrix. Step (i) is again trivial, hence we now look into (ii), which we solve iteratively using a Krylov subspace method with the preconditioner
(26) 
since in our case . Hence, the main cost in solving the block triangular systems lies in the solution of
(27) 
or, after multiplying both sides of the system, we obtain
(28) 
Even though the blocks are symmetric and positive definite, there is no guarantee that the symmetric coefficient matrix is positive definite. However, system (28) is perfectly suitable for the proposed MINRES-CG scheme, since in our application it has only a few negative eigenvalues, and they need to be computed only once. Furthermore, the preconditioner (22) is completely independent of the parameters, and the coefficient matrices of the inner systems (28) that have to be solved are the same for a given shift modulus. This means that a factorization (incomplete or exact) or an approximation of the coefficient matrix can be computed once and reused for all shifts of the same absolute value and for all corresponding parameter values.
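The two-sweep application of a symmetric 2x2-block preconditioner via its block factorization can be sketched generically. The SPD blocks below are illustrative stand-ins, since the actual blocks of preconditioner (22) are problem- and parameter-dependent:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20
# Generic SPD blocks standing in for the blocks of preconditioner (22).
E = np.eye(n) + 0.1 * rng.standard_normal((n, n)); E = E @ E.T + n * np.eye(n)
G = np.eye(n) + 0.1 * rng.standard_normal((n, n)); G = G @ G.T + n * np.eye(n)
F = rng.standard_normal((n, n))
P = np.block([[E, F], [F.T, G]])             # symmetric 2x2-block preconditioner

rhs = rng.standard_normal(2 * n)
r1, r2 = rhs[:n], rhs[n:]

# Apply P^{-1} via the block factorization: forward sweep, Schur solve, back sweep.
Schur = G - F.T @ np.linalg.solve(E, F)      # Schur complement of E in P
u1 = np.linalg.solve(E, r1)                  # (i) solve with the leading block
u2 = np.linalg.solve(Schur, r2 - F.T @ u1)   # (ii) solve with the Schur complement
x1 = u1 - np.linalg.solve(E, F @ u2)         # back substitution
x = np.concatenate([x1, u2])
```

In the application, step (ii) is exactly the system that is handed to the inner MINRES-CG solver, since the Schur complement need not be definite.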
Numerical experiments for this class of problems are presented in Section 4.
3.2 Other applications
As further applications we consider all symmetric indefinite problems in the SuiteSparse Matrix Collection [8] of sizes between and and with at most negative eigenvalues. Since this includes matrices from the PARSEC group [6], we exclude the smallest matrices from this group. Furthermore, since shifts around the so-called Fermi level are also of interest in the PARSEC group of matrices, we shift the largest matrix (SiO) by . For the shift, we chose three values which approximately correspond to the gaps in the spectrum of the matrix. The properties of these matrices are given in Table 1. Note that we include two examples (*) that arise in finite element discretizations of structural problems. These are not full eigenvalue problems but just mass matrices; solving these linear systems is useful if eigenvalues in the inner product space defined by the inverse mass matrix are computed.
Matrix  Application  

Bcsstm  Structural Engineering  
Bcsstm  Structural Engineering  
Nasa  Structural Engineering  
Meg  RAM Simulation  
Benzene  Real-space pseudopotential method  
SiH  Real-space pseudopotential method  
SiH  Real-space pseudopotential method  
SiO  Real-space pseudopotential method  
SiO()  Real-space pseudopotential method  
SiO()  Real-space pseudopotential method  
SiO()  Real-space pseudopotential method 
4 Numerical results
In this section, we study the robustness of the proposed twolevel scheme for indefinite linear systems described in the previous section. All experiments are performed using MATLAB R2018a.
In MINRES-CG, we use the MATLAB eigs function to compute the negative eigenvalues and the corresponding eigenvectors. We use an indefinite preconditioner obtained either by an incomplete LU or an incomplete LDL^T factorization of the coefficient matrix for the inner CG iterations. The former is the only suitable preconditioner available in MATLAB, and we refer to it as the ILU preconditioner. Hence, even though it does not exploit symmetry, we use it to show the robustness of the proposed scheme in Section 4.2.1. For the latter, on the other hand, we use sym-ildl, an external package [16] that has an interface for MATLAB and is robust; we use it to show, in Sections 4.1 and 4.2.2, that the proposed scheme is competitive against other solvers in terms of the number of iterations even when a much more robust preconditioner is used. For a fair comparison, exactly the same preconditioner is used for BiCGStab and GMRES() (with several restart values), as well as for another outer-inner scheme with Flexible GMRES (FGMRES) as the outer solver and GMRES as the inner solver [23]. In Sections 4.2.2 and 4.1, we also use a MINRES preconditioner based on the modified incomplete LDL^T factorization: after computing the factorization, the matrix is modified as described in [14] in order to obtain a positive definite preconditioner to be used with MINRES. For FGMRES-GMRES we use the same restart value for both inner and outer iterations. The iterative solvers, except FGMRES, are the implementations available in MATLAB. We note that MATLAB's BiCGStab implementation terminates early, before completing a full iteration, if the relative residual is already small enough; this counts as a half iteration. We modified GMRES to stop the iteration based on the true relative residual rather than the preconditioned relative residual. Storage requirements for MINRES, CG, BiCGStab, FGMRES and GMRES are given in [7, 23, 24, 25, 27], respectively. In Table 2, we illustrate the storage requirement of each of the iterative solvers via the number of vectors needed in addition to the coefficient matrix, the preconditioner (i.e. the incomplete factors) and the right-hand side vector, which are common to all solvers.
MINRES  MINRES-CG  GMRES  FGMRES-GMRES  BiCGStab 

4.1 Disk brake example
In the following we solve (28) for the small and large test problems of [15]. Note again that (28) is independent of the damping parameters. For the first set of experiments we fix the shift to be the largest value in the range of values of interest. This also happens to be the most challenging case, since the number of negative eigenvalues is then also the largest.
For the proposed scheme, an incomplete LDL^T factorization of the coefficient matrix is used as the preconditioner of the inner CG iteration. We use the same factorization for BiCGStab, GMRES() and FGMRES-GMRES, and its modified version for MINRES. For the smaller problem, we also use MATLAB's incomplete factorization with no fill-in as a preconditioner.
For all experiments a moderate outer stopping tolerance on the relative residual norm is used, and for the MINRES-CG and FGMRES-GMRES schemes a tighter inner stopping tolerance. For all methods, a maximum (total) number of iterations is imposed for the small and large problems. In all experiments, the right-hand side is a random vector.
The required numbers of iterations for the proposed scheme as well as for the baseline algorithms when solving the small problem with the three preconditioners are given in Table 3. GMRES() reaches the maximum number of iterations without converging, irrespective of the preconditioner. When the no-fill preconditioner is used, BiCGStab converges but requires twice as many iterations as MINRES-CG, while all other solvers reach the maximum number of iterations without converging. With the other preconditioners, BiCGStab, MINRES and GMRES() converge as well.
In Table 4, results are presented for solving the large problem using the seven iterative methods with several preconditioners. Note that a much smaller dropping tolerance is required for the large problem. The incomplete factors contain relatively few nonzeros per row, which is small considering that the complete factorization would produce far more. In fact, an incomplete factorization may not be an efficient preconditioner for this problem; we still include these results here only to show the robustness of the proposed scheme in terms of the number of iterations. While for all preconditioners GMRES(), FGMRES-GMRES, BiCGStab and MINRES reach the maximum number of iterations without converging, MINRES-CG still converges in a small number of outer iterations, albeit with a large number of inner iterations.
Preconditioner  Solver  Outer its.  Inner its. (Avg.)  Total its. 

BiCGStab    
GMRES()    
GMRES()    
GMRES()    
GMRES()    
FGMRES-GMRES  
MINRES-CG  
BiCGStab    
GMRES()    
GMRES()    
GMRES()    
GMRES()    
MINRES    
FGMRES-GMRES  
MINRES-CG  
BiCGStab    
GMRES()    
GMRES()    
GMRES()    
GMRES()    
MINRES    
FGMRES-GMRES  
MINRES-CG 
Preconditioner  Solver  Outer its.  Inner its. (Avg.)  Total its. 

BiCGStab    
GMRES()    
GMRES()    
GMRES()    
GMRES()    
MINRES    
FGMRES-GMRES  
MINRES-CG  
BiCGStab    
GMRES()    
GMRES()    
GMRES()    
GMRES()    
MINRES    
FGMRES-GMRES  
MINRES-CG  
BiCGStab    
GMRES()    
GMRES()    
GMRES()    
GMRES()    
MINRES    
FGMRES-GMRES  
MINRES-CG 
In Figure 1, the relative residual history is given for three algorithms for the small test problem. Note that for MINRES-CG the relative residual is only available at each outer iteration; hence, only those are presented in the figure.
As a second application we fix the preconditioner and vary the shift in the complex domain of interest for the small test problem. The shift is a parameter that we change in the context of the eigenvalue problem, and it is of interest to see how the method behaves as it is varied. In Figure 2, the total number of iterations is presented. Preconditioned BiCGStab fails to converge for some shifts (shown as the white area in the figure), while MINRES-CG converges for all of them. Figure 3 depicts the number of outer iterations and the average number of inner iterations for MINRES-CG.
4.2 Test cases from the SuiteSparse matrix collection
In this subsection, results are presented for systems obtained from the SuiteSparse Matrix Collection. In the first set we compare the proposed method against the classical general iterative schemes GMRES() and BiCGStab, as well as a nested FGMRES-GMRES scheme, using an incomplete LU factorization based preconditioner. In the second set, we compare the proposed method against the same iterative methods but using the incomplete LDL^T preconditioner, and against MINRES with the modified incomplete LDL^T preconditioner.
4.2.1 Comparison using the ILU preconditioner
We use the same ILU preconditioner for all cases except Meg, where the incomplete LU factorization fails due to a zero pivot; for this case only, we use the modified incomplete LU factorization in MATLAB with a dropping tolerance. Since in practice GMRES is always used with a restart value, we choose two restart values. In FGMRES-GMRES, we use the same restart for both inner and outer iterations. We stop the iterations when the relative residual norm is below the outer tolerance for all cases. The inner iterations of MINRES-CG and FGMRES-GMRES stop when the relative residual norms of CG and GMRES, respectively, fall below the inner tolerance. Both in MINRES-CG and BiCGStab, iterations stop when the true relative residual is less than the tolerance. For preconditioned GMRES the available residual is only the preconditioned residual; in order to have a fair comparison, we explicitly compute the true residual at each GMRES iteration and stop the iteration based on the true relative residual norm. For all methods, a maximum (total) number of iterations is imposed.
In Table 6, the detailed numbers of iterations for preconditioned MINRES-CG, GMRES(), FGMRES-GMRES and BiCGStab are given. GMRES() fails in several cases: for one problem it stagnates, while for the others the maximum number of iterations is reached without convergence. If the restart is increased, the number of failing cases decreases. BiCGStab fails for two problems, in one case because the maximum number of iterations is reached without convergence and in the other because a scalar quantity became too large or too small during the iteration. FGMRES-GMRES fails in some cases because the maximum number of iterations is reached without convergence. The proposed MINRES-CG method does not fail on any of the test problems. Although the cost per iteration is different for each method, the total numbers of iterations are presented in Table 7. For the cases in which they do not fail, GMRES() and FGMRES-GMRES require fewer iterations than MINRES-CG, but they also require more storage. In a number of cases MINRES-CG requires fewer iterations than BiCGStab. It is possible to improve the total number of iterations of MINRES-CG by using the algorithm described in Section 2.1; Table 5 shows the improved numbers of iterations, and the improvement is significant especially for the cases where the inner or outer number of iterations is high.
In order to study the effect of the inner stopping tolerance on the eigenvalues of the preconditioned matrix, we explicitly compute the preconditioned matrix using preconditioned CG iterations with three stopping tolerances. In Figure 5, a clear clustering of the eigenvalues of the preconditioned matrix around +1 and -1 is visible, while the unpreconditioned coefficient matrix has no clustering of eigenvalues (see Figure 4). As expected, the clustering around +1 and -1 improves as the stopping tolerance for the inner CG iterations is decreased.
MINRES-CG  MINRES-CG  

Name  MINRES  CG (Avg.)  MINRES  CG (Avg.) 
Bcsstm  
Bcsstm  
Nasa  
Meg  
Benzene  
SiH  
SiH  
SiO  
SiO()  
SiO()  
SiO() 
MINRES-CG  GMRES()  FGMRES()-GMRES()  BiCGStab  

Name  MINRES  CG  
Bcsstm  
Bcsstm  
Nasa  
Meg  
Benzene  
SiH  
SiH  
SiO  
SiO ()  
SiO ()  
SiO () 
MINRES-CG  GMRES()  FGMRES()  BiCGStab  

Name  GMRES()  
Bcsstm  
Bcsstm  
Nasa  
Meg  
Benzene  
SiH  
SiH  
SiO  
SiO ()  
SiO ()  
SiO () 
4.2.2 Comparisons using the incomplete LDL^T preconditioner
In this subsection, we compare the proposed scheme against a preconditioner based on a robust incomplete LDL^T (Bunch–Parlett) factorization [10].
We use the implementation of [16] in MATLAB, which computes an incomplete Bunch–Parlett factorization of the coefficient matrix. Default values are used for the level of fill-in and the dropping tolerance. Furthermore, the package uses by default the Approximate Minimum Degree reordering, rook pivoting and scaling to improve the numerical stability of the incomplete factors. Note that all of these enhancements make the preconditioner much more robust than the ILU preconditioner. In the following experiments all methods are applied to the permuted and scaled linear systems. We use the modified incomplete LDL^T factorization as a preconditioner for MINRES. To have a fair comparison, the same factorization (without the modification) is used as the preconditioner for GMRES(), FGMRES-GMRES and BiCGStab, and as the inner preconditioner for MINRES-CG. The stopping tolerances and the maximum number of iterations allowed are set exactly as in Section 4.2.
In Table 8 the detailed numbers of iterations for all methods are given. Even though the preconditioner is much more robust, MINRES with the modified incomplete LDL^T preconditioner stagnates for one problem. For the same problem BiCGStab, FGMRES-GMRES and GMRES() (for all restart values) reach the maximum number of iterations without converging. On the other hand, MINRES-CG converges for all problems, which confirms the robustness of the proposed scheme. In Table 9, the total numbers of iterations for all methods are given. GMRES() with larger restart values and BiCGStab require the fewest iterations, and FGMRES-GMRES requires fewer iterations than MINRES-CG. On the other hand, MINRES requires more iterations than MINRES-CG in some cases, and in the other cases its iteration counts are only marginally better than those of MINRES-CG.
MINRES-CG  GMRES()  FGMRES()-GMRES()  BiCGStab  MINRES  

Name  MINRES  CG  
Bcsstm  
Bcsstm  
Nasa  
Meg  
Benzene  
SiH  
SiH  
SiO  
SiO()  
SiO()  
SiO() 
MINRES-CG  GMRES()  FGMRES()  BiCGStab  MINRES  

Name  GMRES()  
Bcsstm  
Bcsstm  
Nasa  
Meg  
Benzene  
SiH  
SiH  
SiO  
SiO ()  
SiO ()  
SiO () 
5 Conclusions
A two-level nested iterative scheme is proposed for solving sparse linear systems of equations where the coefficient matrix is symmetric indefinite with few negative eigenvalues. The outer level is MINRES with an approximate absolute value preconditioner; the systems involving this preconditioner are solved by an inner CG iteration, which in turn is preconditioned via an approximation of the original indefinite coefficient matrix. The robustness of the proposed scheme is illustrated for linear systems that arise in disk brake squeal simulation as well as for a variety of test cases from the SuiteSparse Matrix Collection.
References
 [1] O. Axelsson, M. Neytcheva, and B. Ahmad, A comparison of iterative methods to solve complex valued linear algebraic systems, Numerical Algorithms, 66 (2014), pp. 811–841, https://doi.org/10.1007/s1107501397641.
 [2] M. Benzi, G. Golub, and J. Liesen, Numerical solution of saddle point problems, Acta Numerica, 14 (2005), pp. 1–137.
 [3] M. Benzi, J. C. Haws, and M. Tuma, Preconditioning highly indefinite and nonsymmetric matrices, SIAM Journal on Scientific Computing, 22 (2000), pp. 1333–1353.
 [4] M. Benzi, C. D. Meyer, and M. Tuma, A sparse approximate inverse preconditioner for the conjugate gradient method, SIAM Journal on Scientific Computing, 17 (1996), pp. 1135–1149.
 [5] A. Brandt, S. McCormick, and J. Ruge, Algebraic multigrid (AMG) for sparse matrix equations, Sparsity and its Applications, 257 (1985).
 [6] J. R. Chelikowsky, N. Troullier, and Y. Saad, Finitedifferencepseudopotential method: Electronic structure calculations without a basis, Physical review letters, 72 (1994), p. 1240.
 [7] S.-C. T. Choi, C. C. Paige, and M. A. Saunders, MINRES-QLP: A Krylov subspace method for indefinite or singular symmetric systems, SIAM Journal on Scientific Computing, 33 (2011), pp. 1810–1836.
 [8] T. Davis and Y. Hu, The University of Florida sparse matrix collection, ACM Trans. Math. Softw., 38 (2011), pp. 1:1–1:25, https://doi.org/10.1145/2049662.2049663.
 [9] I. S. Duff and J. Koster, On algorithms for permuting large entries to the diagonal of a sparse matrix, SIAM Journal on Matrix Analysis and Applications, 22 (2001), pp. 973–996.
 [10] B. Fischer, Polynomial Based Iteration Methods for Symmetric Linear Systems, Society for Industrial and Applied Mathematics, 2011, https://doi.org/10.1137/1.9781611971927.
 [11] L. Fox, H. D. Huskey, and J. H. Wilkinson, Notes on the solution of algebraic linear simultaneous equations, The Quarterly Journal of Mechanics and Applied Mathematics, 1 (1948), pp. 149–173, https://doi.org/10.1093/qjmam/1.1.149.
 [12] R. Freund, On polynomial preconditioning and asymptotic convergence factors for indefinite hermitian matrices, Linear algebra and its applications, 154 (1991), pp. 259–288.
 [13] A. Gaul, M. H. Gutknecht, J. Liesen, and R. Nabben, A framework for deflated and augmented Krylov subspace methods, SIAM Journal on Matrix Analysis and Applications, 34 (2013), pp. 495–518.
 [14] P. Gill, W. Murray, D. B. Ponceleon, and M. A. Saunders, Preconditioners for indefinite systems arising in optimization, SIAM Journal on Matrix Analysis and Applications, 13 (1992), pp. 292–311, https://doi.org/10.1137/0613022.

[15] N. Gräbner, V. Mehrmann, S. Quraishi, C. Schröder, and U. von Wagner, Numerical methods for parametric model reduction in the simulation of disk brake squeal, ZAMM - Zeitschrift für Angewandte Mathematik und Mechanik, 96 (2016), pp. 1388–1405.
 [16] C. Greif, S. He, and P. Liu, SYM-ILDL: Incomplete LDL^T factorization of symmetric indefinite and skew-symmetric matrices, ACM Trans. Math. Softw., 44 (2017), pp. 1:1–1:21, https://doi.org/10.1145/3054948.
 [17] M. J. Grote and T. Huckle, Parallel preconditioning with sparse approximate inverses, SIAM Journal on Scientific Computing, 18 (1997), pp. 838–853.
 [18] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK Users Guide: Solution of LargeScale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods, SIAM, 1998.
 [19] M. Manguoglu, M. Koyutürk, A. H. Sameh, and A. Grama, Weighted matrix ordering and parallel banded preconditioners for iterative linear system solvers, SIAM journal on Scientific Computing, 32 (2010), pp. 1201–1216.
 [20] C. C. Paige and M. A. Saunders, Solution of sparse indefinite systems of linear equations, SIAM Journal on Numerical Analysis, 12 (1975), pp. 617–629, https://doi.org/10.1137/0712047.
 [21] E. Polizzi, Densitymatrixbased algorithm for solving eigenvalue problems, Physical Review B, 79 (2009), p. 115112.
 [22] M. Rozložník and V. Simoncini, Krylov subspace methods for saddle point problems with indefinite preconditioning, SIAM Journal on Matrix Analysis and Applications, 24 (2002), pp. 368–391, https://doi.org/10.1137/S0895479800375540.
 [23] Y. Saad, A flexible inner-outer preconditioned GMRES algorithm, SIAM Journal on Scientific Computing, 14 (1993), pp. 461–469.
 [24] Y. Saad, Iterative Methods for Sparse Linear Systems, Society for Industrial and Applied Mathematics, second ed., 2003, https://doi.org/10.1137/1.9780898718003.
 [25] Y. Saad and M. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM Journal on scientific and statistical computing, 7 (1986), pp. 856–869.
 [26] T. Sakurai, H. Tadano, et al., CIRR: a Rayleigh-Ritz type method with contour integral for generalized eigenvalue problems, Hokkaido Mathematical Journal, 36 (2007), pp. 745–757.
 [27] H. A. van der Vorst, BiCGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM Journal on Scientific and Statistical Computing, 13 (1992), pp. 631–644, https://doi.org/10.1137/0913035.
 [28] E. Vecharynski and A. Knyazev, Absolute value preconditioning for symmetric indefinite linear systems, SIAM Journal on Scientific Computing, 35 (2013), pp. A696–A718, https://doi.org/10.1137/120886686.
 [29] S. Wang, E. de Sturler, and G. H. Paulino, Large-scale topology optimization using preconditioned Krylov subspace methods with recycling, International Journal for Numerical Methods in Engineering, 69 (2007), pp. 2441–2468.
 [30] U. M. Yang et al., BoomerAMG: a parallel algebraic multigrid solver and preconditioner, Applied Numerical Mathematics, 41 (2002), pp. 155–177.