1 Introduction
Many problems in scientific computing lead to matrix equations. Matrix equations form one of the most interesting and intensively studied classes of mathematical problems and play a vital role in applications; many researchers have studied matrix equations and their applications, see [6, 7, 8, 14, 16, 17, 21, 22] and the references therein. Nowadays, the continuous Sylvester equation is possibly the most famous and most broadly employed linear matrix equation, and is given as
(1) $AX + XB = C$,
where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{m \times m}$, and $C \in \mathbb{R}^{n \times m}$ are given matrices and $X \in \mathbb{R}^{n \times m}$ is an unknown matrix. A Lyapunov equation is a special case with $m = n$, $B = A^{T}$, and $C = C^{T}$. Here and in the sequel, $W^{T}$ is used to denote the transpose of the matrix $W$. Equation (1) has a unique solution if and only if $A$ and $-B$ have no common eigenvalues, which will be assumed throughout this paper.
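This solvability condition is easy to check numerically. A minimal sketch (editor's illustration; the helper name `has_unique_solution`, the tolerance, and the test matrices are not from the paper):

```python
import numpy as np

# Editor's sketch: the Sylvester equation AX + XB = C has a unique solution
# iff A and -B share no eigenvalues.  Helper name and tolerance are made up.
def has_unique_solution(A, B, tol=1e-12):
    """True when the spectra of A and -B are disjoint (up to tol)."""
    eig_A = np.linalg.eigvals(A)
    eig_negB = np.linalg.eigvals(-B)
    gap = np.abs(eig_A[:, None] - eig_negB[None, :])  # all pairwise distances
    return bool(gap.min() > tol)

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 4.0]])
print(has_unique_solution(A, B))    # spec(A) = {2, 3}, spec(-B) = {-1, -4}
print(has_unique_solution(A, -A))   # spectra overlap: no unique solution
```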
Many results have been obtained about the Sylvester equation. It appears frequently in many areas of applied mathematics and plays a vital role in a number of applications such as control theory [6], model reduction [4] and image processing [5]; see [1, 3, 7, 10, 11, 13, 15, 18, 19, 20] and the references therein for more details.
In general, the dimensions of $A$ and $B$ may be orders of magnitude different, and this fact is key in selecting the most appropriate numerical solution strategy [21]. Sylvester equations of small size are usually solved by direct methods, such as the Bartels–Stewart [3] and the Hessenberg–Schur [13] methods, which consist of transforming the coefficient matrices $A$ and $B$ into triangular or Hessenberg form by an orthogonal similarity transformation and then solving the resulting system directly by a back-substitution process. When the coefficient matrices $A$ and $B$ are large and sparse, iterative methods are often the methods of choice for solving the Sylvester equation (1) efficiently and accurately. Many iterative methods have been developed for solving matrix equations, such as the alternating direction implicit (ADI) method [4], Krylov subspace based algorithms [11, 15, 20], the Hermitian and skew-Hermitian splitting (HSS) method and its inexact variant (IHSS) [2], the nested splitting conjugate gradient (NSCG) method [18], and the nested splitting CGNR (NSCGNR) method [19]. When both coefficient matrices are (non-Hermitian) positive semidefinite, and at least one of them is positive definite, the HSS method [1] and the NSCG method [18] are often the methods of choice for efficiently and accurately solving the Sylvester equation (1).
In order to study numerical methods, we often rewrite the continuous Sylvester equation (1) as the mathematically equivalent linear system of equations

(2) $\mathcal{A}x = b$,

where the matrix $\mathcal{A}$ is of dimension $mn \times mn$ and is given by

(3) $\mathcal{A} = I_m \otimes A + B^{T} \otimes I_n$,

where $\otimes$ denotes the Kronecker product, $x = \mathrm{vec}(X)$ and $b = \mathrm{vec}(C)$.
Of course, this is a numerically poor way to determine the solution of the Sylvester equation (1), as the linear system of equations (2) is costly to solve and can be ill-conditioned.
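The equivalence of (1) and (2) is easy to verify numerically. A small sketch, assuming column-stacking $\mathrm{vec}(\cdot)$ and hypothetical random data; it also checks the norm identity $\|X\|_F = \|\mathrm{vec}(X)\|_2$ used later:

```python
import numpy as np

# Editor's sketch: with column-stacking vec(.),
#   vec(AX + XB) = (I_m (x) A + B^T (x) I_n) vec(X),
# and ||X||_F = ||vec(X)||_2.  The data below are made up.
rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
X = rng.standard_normal((n, m))

vec = lambda M: M.flatten(order="F")                  # column-major stacking
K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))   # the mn-by-mn matrix of (3)

print(np.allclose(K @ vec(X), vec(A @ X + X @ B)))    # Kronecker identity
print(np.isclose(np.linalg.norm(X, "fro"), np.linalg.norm(vec(X))))
```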
Motivated by [23, 24], we apply the minimal residual technique to the Hermitian and skew-Hermitian iteration scheme and introduce a nonstationary iteration method, named the minimal residual Hermitian and skew-Hermitian splitting (MRHSS) iteration method, to solve the continuous Sylvester equation.
In the remainder of this paper, we use $\|\cdot\|_2$, $\|\cdot\|_F$ and $I_n$ to denote the spectral norm, the Frobenius norm of a matrix, and the identity matrix of dimension $n$, respectively. Note that $\|\cdot\|_2$ is also used to represent the 2-norm of a vector. Furthermore, we have the following equivalence between the Frobenius norm of a matrix $X$ and the 2-norm of the vector $\mathrm{vec}(X)$: $\|X\|_F = \|\mathrm{vec}(X)\|_2$.

2 Main results
For the linear system of equations (2), we consider the Hermitian and skew-Hermitian splitting $\mathcal{A} = H + S$, where

(4) $H = \tfrac{1}{2}(\mathcal{A} + \mathcal{A}^{*}), \qquad S = \tfrac{1}{2}(\mathcal{A} - \mathcal{A}^{*})$

are the Hermitian and skew-Hermitian parts of the matrix $\mathcal{A}$, respectively. Then, the iteration scheme of the MRHSS iteration method [23, 24] for the system of linear equations (2) is

(5) $x^{(k+1/2)} = x^{(k)} + \alpha_k (\alpha I + H)^{-1} r^{(k)}, \qquad x^{(k+1)} = x^{(k+1/2)} + \beta_k (\alpha I + S)^{-1} r^{(k+1/2)}$,

where $\alpha > 0$, $r^{(k)} = b - \mathcal{A}x^{(k)}$, $r^{(k+1/2)} = b - \mathcal{A}x^{(k+1/2)}$, and $\alpha_k$, $\beta_k$ are the minimal residual parameters. Let $p^{(k)} = \mathcal{A}(\alpha I + H)^{-1} r^{(k)}$ and $q^{(k+1/2)} = \mathcal{A}(\alpha I + S)^{-1} r^{(k+1/2)}$. The residual form of the iteration scheme (5) can be written as

(6) $r^{(k+1/2)} = r^{(k)} - \alpha_k p^{(k)}, \qquad r^{(k+1)} = r^{(k+1/2)} - \beta_k q^{(k+1/2)}$.
Denote $M = (\alpha I + H)^{-*}(\alpha I + H)^{-1}$. Then, an inner product can be defined as

(7) $\langle x, y \rangle_M = \langle Mx, y \rangle$,

where $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product of two vectors. Thus, for a vector $x$ and a matrix $B$, the induced vector and matrix norms can be defined as $\|x\|_M = \sqrt{\langle x, x \rangle_M}$ and $\|B\|_M = \max_{x \neq 0} \|Bx\|_M / \|x\|_M$, respectively. Now, the parameter $\alpha_k$ is determined by minimizing the 2-norm of the residual, and we have

(8) $\alpha_k = \dfrac{\langle p^{(k)}, r^{(k)} \rangle}{\langle p^{(k)}, p^{(k)} \rangle}$.

However, the parameter $\beta_k$ will be determined by minimizing the $M$-norm of the residual rather than the 2-norm, see [23]. Therefore, we have

(9) $\beta_k = \dfrac{\langle q^{(k+1/2)}, r^{(k+1/2)} \rangle_M}{\langle q^{(k+1/2)}, q^{(k+1/2)} \rangle_M}$.
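The steplength in (8) is just the familiar least-squares projection coefficient: $\alpha = \langle p, r \rangle / \langle p, p \rangle$ minimizes $\|r - \alpha p\|_2$ along the search direction $p$. A quick numerical check (editor's illustration with made-up vectors):

```python
import numpy as np

# Editor's sketch: alpha = <p, r>/<p, p> minimizes ||r - alpha*p||_2, as in (8).
rng = np.random.default_rng(1)
r = rng.standard_normal(50)
p = rng.standard_normal(50)

alpha_opt = np.dot(p, r) / np.dot(p, p)
base = np.linalg.norm(r - alpha_opt * p)

# perturbing the optimal steplength can only increase the residual norm
worse = [np.linalg.norm(r - (alpha_opt + d) * p) for d in (-1.0, -0.1, 0.1, 1.0)]
print(all(w >= base for w in worse))
```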
According to the following theorem, the iteration scheme (5) yields an unconditionally convergent MRHSS iteration method [23].
Theorem 2.1
Let $\mathcal{A}$ be a non-Hermitian positive definite matrix. Then, the MRHSS iteration method used for solving the system of linear equations (2) is unconditionally convergent for any $\alpha > 0$ and any initial guess $x^{(0)}$.
Proof. See [23].
Let $H_A, S_A$ and $H_B, S_B$ be the Hermitian and skew-Hermitian parts of $A$ and $B$, respectively. For the Sylvester equation (1), according to the iteration scheme (5), we have the following iteration scheme:

(10) $X^{(k+1/2)} = X^{(k)} + \alpha_k Z^{(k)}, \qquad X^{(k+1)} = X^{(k+1/2)} + \beta_k Z^{(k+1/2)}$,

where $Z^{(k)}$ is obtained from the Sylvester equation

(11) $\alpha Z^{(k)} + H_A Z^{(k)} + Z^{(k)} H_B = R^{(k)}$,

and $Z^{(k+1/2)}$ is obtained from the Sylvester equation

(12) $\alpha Z^{(k+1/2)} + S_A Z^{(k+1/2)} + Z^{(k+1/2)} S_B = R^{(k+1/2)}$,

with $R^{(k)} = C - AX^{(k)} - X^{(k)}B$ and $R^{(k+1/2)} = C - AX^{(k+1/2)} - X^{(k+1/2)}B$. We state how to update $\alpha_k$ and $\beta_k$ a little later.
If the Sylvester equation (1) has a unique solution, then under the assumption that $H_A$ and $H_B$ are positive semidefinite and at least one of them is positive definite, we can easily see that there is no common eigenvalue between the matrices $\alpha I + H_A$ and $-(\alpha I + H_B)$ (and likewise for $\alpha I + S_A$ and $-(\alpha I + S_B)$), so the Sylvester equations (11) and (12) have a unique solution for every given right-hand side matrix.
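Note that (11) can be passed directly to a standard Sylvester solver, since $\alpha Z + H_A Z + Z H_B = R$ is the Sylvester equation $(\alpha I + H_A)Z + Z H_B = R$; the same applies to (12) with the skew-Hermitian parts. A sketch assuming SciPy and made-up real matrices:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Editor's sketch: the inner equation (11), alpha*Z + H_A Z + Z H_B = R,
# rewritten as (alpha*I + H_A) Z + Z H_B = R for a standard Sylvester solver.
rng = np.random.default_rng(2)
n, m, alpha = 5, 4, 1.0
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted so H_A is definite
B = rng.standard_normal((m, m)) + m * np.eye(m)
R = rng.standard_normal((n, m))

HA, HB = (A + A.T) / 2, (B + B.T) / 2             # Hermitian parts (real case)
Z = solve_sylvester(alpha * np.eye(n) + HA, HB, R)
print(np.allclose(alpha * Z + HA @ Z + Z @ HB, R))
```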
From (3) and (4), by using the Kronecker product's properties, we have

(13) $H = I_m \otimes H_A + H_B^{T} \otimes I_n$,

(14) $S = I_m \otimes S_A + S_B^{T} \otimes I_n$,

where $H_A, S_A$ and $H_B, S_B$ are as defined above. From the relations (6), we can obtain

(15) $R^{(k+1/2)} = R^{(k)} - \alpha_k W^{(k)}, \qquad R^{(k+1)} = R^{(k+1/2)} - \beta_k W^{(k+1/2)}$,
where $W^{(k)} = AZ^{(k)} + Z^{(k)}B$ and $W^{(k+1/2)} = AZ^{(k+1/2)} + Z^{(k+1/2)}B$. Moreover, similar to (8) and (9), we can obtain

(16) $\alpha_k = \dfrac{\langle R^{(k)}, W^{(k)} \rangle_F}{\langle W^{(k)}, W^{(k)} \rangle_F}$

and

(17) $\beta_k = \dfrac{\langle V^{(k+1/2)}, U^{(k+1/2)} \rangle_F}{\langle U^{(k+1/2)}, U^{(k+1/2)} \rangle_F}$,

where $\langle \cdot, \cdot \rangle_F$ denotes the Frobenius inner product, $U^{(k+1/2)}$ is obtained from the Sylvester equation

$\alpha U^{(k+1/2)} + H_A U^{(k+1/2)} + U^{(k+1/2)} H_B = W^{(k+1/2)}$,

and $V^{(k+1/2)}$ is obtained from the Sylvester equation

$\alpha V^{(k+1/2)} + H_A V^{(k+1/2)} + V^{(k+1/2)} H_B = R^{(k+1/2)}$.

On the surface, four Sylvester equations should be solved at each step of the MRHSS method for the system of linear equations (2), but this number can be reduced to three. Since $R^{(k+1)} = R^{(k+1/2)} - \beta_k W^{(k+1/2)}$, the matrix $Z^{(k+1)}$ in step $k+1$ can be calculated as

$Z^{(k+1)} = V^{(k+1/2)} - \beta_k U^{(k+1/2)}$,

where $U^{(k+1/2)}$ and $V^{(k+1/2)}$ have already been calculated in step $k$. Therefore, in (10) we can update $Z^{(k+1)}$ without solving (11) again.
In addition, we choose the value of the parameter $\alpha$ as in [1].
Therefore, an implementation of the MRHSS method for the continuous Sylvester equation can be given by the following algorithm.
Algorithm 2.2

The MRHSS algorithm for the Sylvester equation

Select an initial guess $X^{(0)}$, compute $R^{(0)} = C - AX^{(0)} - X^{(0)}B$

Solve $\alpha Z^{(0)} + H_A Z^{(0)} + Z^{(0)} H_B = R^{(0)}$ for $Z^{(0)}$

For $k = 0, 1, 2, \ldots$ until convergence, Do:

$W^{(k)} = AZ^{(k)} + Z^{(k)}B$

$\alpha_k = \langle R^{(k)}, W^{(k)} \rangle_F \,/\, \langle W^{(k)}, W^{(k)} \rangle_F$

$X^{(k+1/2)} = X^{(k)} + \alpha_k Z^{(k)}$

$R^{(k+1/2)} = R^{(k)} - \alpha_k W^{(k)}$

Solve $\alpha Z^{(k+1/2)} + S_A Z^{(k+1/2)} + Z^{(k+1/2)} S_B = R^{(k+1/2)}$ for $Z^{(k+1/2)}$

$W^{(k+1/2)} = AZ^{(k+1/2)} + Z^{(k+1/2)}B$

Solve $\alpha U + H_A U + U H_B = W^{(k+1/2)}$ for $U$

Solve $\alpha V + H_A V + V H_B = R^{(k+1/2)}$ for $V$

$\beta_k = \langle V, U \rangle_F \,/\, \langle U, U \rangle_F$

$X^{(k+1)} = X^{(k+1/2)} + \beta_k Z^{(k+1/2)}$

$R^{(k+1)} = R^{(k+1/2)} - \beta_k W^{(k+1/2)}$

$Z^{(k+1)} = V - \beta_k U$

End Do
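Algorithm 2.2 can be sketched in code. This is a hedged illustration, not the authors' implementation: it assumes SciPy's dense solver `scipy.linalg.solve_sylvester` for the inner solves, real coefficient matrices, this editor's reading of the steplength formulas (16)–(17), and a made-up test problem.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Hedged sketch of an MRHSS-type iteration for AX + XB = C (real case).
# Inner equations alpha*Z + P Z + Z Q = R are rewritten as
# (alpha*I + P) Z + Z Q = R for the dense solver; the beta step (two extra
# H-shifted solves) is this editor's reading of (16)-(17) -- check against [23].
def mrhss(A, B, C, alpha, tol=1e-10, maxit=500):
    n, m = C.shape
    HA, SA = (A + A.T) / 2, (A - A.T) / 2
    HB, SB = (B + B.T) / 2, (B - B.T) / 2
    In = np.eye(n)

    frob = lambda P, Q: np.sum(P * Q)                     # Frobenius inner product
    h_solve = lambda R: solve_sylvester(alpha * In + HA, HB, R)
    s_solve = lambda R: solve_sylvester(alpha * In + SA, SB, R)

    X = np.zeros((n, m))
    R = C.copy()                                          # residual for X = 0
    r0 = np.linalg.norm(R, "fro")
    Z = h_solve(R)                                        # solve of type (11)
    for k in range(maxit):
        W = A @ Z + Z @ B
        a_k = frob(R, W) / frob(W, W)                     # steplength (16)
        X = X + a_k * Z
        R = R - a_k * W
        Zs = s_solve(R)                                   # solve of type (12)
        Ws = A @ Zs + Zs @ B
        U, V = h_solve(Ws), h_solve(R)                    # weighted-norm solves
        b_k = frob(V, U) / frob(U, U)                     # steplength (17)
        X = X + b_k * Zs
        R = R - b_k * Ws
        Z = V - b_k * U                                   # next (11)-solve for free
        if np.linalg.norm(R, "fro") <= tol * r0:
            break
    return X, k + 1

# made-up positive definite test problem
n, m = 8, 6
A = 2 * np.eye(n) + 0.5 * np.diag(np.ones(n - 1), 1) - 0.3 * np.diag(np.ones(n - 1), -1)
B = 2 * np.eye(m) + 0.4 * np.diag(np.ones(m - 1), 1) - 0.2 * np.diag(np.ones(m - 1), -1)
X_true = np.outer(np.arange(1.0, n + 1), np.ones(m)) / n
C = A @ X_true + X_true @ B

X, its = mrhss(A, B, C, alpha=1.0)
print(np.allclose(X, X_true, atol=1e-6))
```

The update `Z = V - b_k * U` is the three-solves-per-step reduction described above: the $H$-shifted solve for the next step comes for free from linearity of (11) in its right-hand side.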
Theorem 2.3
Suppose that the coefficient matrices $A$ and $B$ in the continuous Sylvester equation (1) are non-Hermitian positive semidefinite, and at least one of them is positive definite. Then the MRHSS iteration method (10) for solving the Sylvester equation (1) is unconditionally convergent for any $\alpha > 0$ and any initial guess $X^{(0)}$.
Proof. The continuous Sylvester equation (1) is mathematically equivalent to the linear system of equations (2). Therefore, the proof is similar to that of Theorem 3.3 in [23] with only technical modifications.
2.1 Using the MRHSS splitting as a preconditioner
Any matrix splitting naturally induces a splitting preconditioner for Krylov subspace methods (see [2]). In Section 3, we show by numerical computation that the minimal residual Hermitian and skew-Hermitian splitting can be used as a splitting preconditioner and induces accurate, robust and effective preconditioned Krylov subspace iteration methods for solving the continuous Sylvester equation.
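As an illustration of the splitting-preconditioner idea (an editor's sketch under assumptions, not the paper's experiment): the HSS-type splitting matrix $P(\alpha) = \frac{1}{2\alpha}(\alpha I + H)(\alpha I + S)$ can be applied as a preconditioner inside SciPy's BiCGSTAB on the Kronecker system (2), each application of $P^{-1}$ costing one $H$-shifted and one $S$-shifted Sylvester solve. All matrices here are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_sylvester
from scipy.sparse.linalg import LinearOperator, bicgstab

# Editor's sketch: P = (1/(2*alpha)) (alpha*I + H)(alpha*I + S) used as a
# BiCGSTAB preconditioner on the Kronecker system (2).
rng = np.random.default_rng(4)
n = m = 6
A = 2 * np.eye(n) + 0.3 * np.triu(rng.standard_normal((n, n)), 1)
B = 2 * np.eye(m) + 0.3 * np.tril(rng.standard_normal((m, m)), -1)
C = rng.standard_normal((n, m))
alpha = 1.0

HA, SA = (A + A.T) / 2, (A - A.T) / 2
HB, SB = (B + B.T) / 2, (B - B.T) / 2
In, Im = np.eye(n), np.eye(m)

K = np.kron(Im, A) + np.kron(B.T, In)            # the matrix of system (2)
vec = lambda M: M.flatten(order="F")

def apply_Pinv(v):
    # P^{-1} v = 2*alpha * (alpha*I + S)^{-1} (alpha*I + H)^{-1} v,
    # computed as two Sylvester solves in matrix form
    Y = solve_sylvester(alpha * In + HA, HB, v.reshape((n, m), order="F"))
    Z = solve_sylvester(alpha * In + SA, SB, 2 * alpha * Y)
    return vec(Z)

P_inv = LinearOperator((n * m, n * m), matvec=apply_Pinv, dtype=float)
x, info = bicgstab(K, vec(C), M=P_inv, maxiter=200)
print(info == 0, np.linalg.norm(K @ x - vec(C)) / np.linalg.norm(vec(C)))
```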
3 Numerical results
In this section, we present a few numerical results to show the effectiveness of the MRHSS method by comparing it with the HSS method. All numerical experiments were computed in double precision with MATLAB codes. All iterations are started from the zero matrix as the initial guess $X^{(0)}$ and terminated when the current iterate satisfies

$\|R^{(k)}\|_F \le \varepsilon \, \|R^{(0)}\|_F$,

where $R^{(k)} = C - AX^{(k)} - X^{(k)}B$ is the residual of the $k$th iterate and $\varepsilon$ is the stopping tolerance. Also, we use a separate tolerance for the inner iterations in the corresponding methods. We report the CPU time (CPU), the number of iteration steps (IT) and the norm of the residual (resnorm) in the tables, and compare the HSS iteration method [1] with the MRHSS iteration method for solving the continuous Sylvester equation (1).
Example 3.1
This class of problems may arise in the preconditioned Krylov subspace iteration methods used for solving the systems of linear equations resulting from the finite difference or Sinc-Galerkin discretization of various differential equations and boundary value problems [1].
We apply the iteration methods to this problem with different dimensions. The results are given in Tables 1 and 2. From these results, we observe that the MRHSS method is more efficient than the HSS method in terms of CPU time. However, when the dimension increases, the HSS method becomes more efficient than the MRHSS method in terms of the number of iterations (IT).
        HSS                           MRHSS
CPU     IT    resnorm         CPU     IT    resnorm
0.04    14    2.3191e-6       0.02    7     2.3518e-6
0.05    26    1.2712e-6       0.03    16    1.3088e-6
0.16    48    1.3215e-6       0.12    37    1.1597e-6
1.02    89    1.5946e-6       0.91    85    1.6722e-6
13.09   164   2.2369e-6       11.51   188   2.2271e-6
85.04   298   3.2107e-6       75.06   404   3.2155e-6
        HSS                           MRHSS
CPU     IT    resnorm         CPU     IT    resnorm
0.95    20    6.9889e-6       0.12    11    6.8967e-6
2.64    36    5.0093e-6       0.71    24    4.9071e-6
6.95    67    3.6776e-6       2.56    53    3.1928e-6
25.01   122   3.4599e-6       9.73    126   3.2791e-6
90.23   218   3.4718e-6       39.60   272   3.5181e-6
370.45  365   3.8891e-6       206.07  517   3.9374e-6
Example 3.2
The results of this problem are given in Table 3. Here, we observe that the MRHSS method is more efficient than the HSS method in terms of both CPU time and the number of iterations (IT).
        HSS                           MRHSS
CPU     IT    resnorm         CPU     IT    resnorm
0.04    19    6.9896e-6       0.02    11    7.3379e-6
0.07    24    1.7183e-5       0.03    16    2.2925e-5
0.14    31    8.1598e-5       0.07    22    9.4150e-5
0.41    40    4.0795e-4       0.26    29    4.3751e-4
5.42    54    0.0016          2.71    37    0.0018
27.70   73    0.0070          12.53   45    0.0071
326.71  99    0.0288          135.82  49    0.0304
Example 3.3
Method            IT     CPU       resnorm
HSS               †      †         2.32
MRHSS             †      †         1.3021
BiCGSTAB          †      †         NaN
HSS-BiCGSTAB      †      †         NaN
MRHSS-BiCGSTAB    12     2483.35   7.7951e-6
For this problem, the HSS and the MRHSS methods converge very slowly. We applied the BiCGSTAB method to this problem and observed that it diverges. In Table 4, a dagger (†) shows that no convergence has been obtained. Motivated by [18] and [19], we use each of the MRHSS and the HSS methods as a splitting preconditioner in the BiCGSTAB method. We observe that using the MRHSS method as a preconditioner improves the results of the corresponding method (MRHSS-BiCGSTAB). However, using the HSS method as a preconditioner cannot improve the results.
4 Conclusion
In this paper, we have proposed an efficient iterative method, called the MRHSS method, for solving the continuous Sylvester equation $AX + XB = C$. We have compared the MRHSS method with the HSS method on several problems and observed that, for these problems, the MRHSS method is more efficient than the HSS method. Moreover, using the MRHSS splitting as a preconditioner can induce an accurate and effective preconditioned BiCGSTAB method.
References
 [1] Z.-Z. Bai, On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations, J. Comput. Math., 29 (2011) 185–198.
 [2] Z.-Z. Bai, J.-F. Yin and Y.-F. Su, A shift-splitting preconditioner for non-Hermitian positive definite matrices, J. Comput. Math., 24 (2006) 539–552.
 [3] R. H. Bartels and G. W. Stewart, Algorithm 432: Solution of the matrix equation AX+XB=C, Comm. ACM, 15 (1972) 820–826.
 [4] P. Benner, R.-C. Li and N. Truhar, On the ADI method for Sylvester equations, J. Comput. Appl. Math., 233 (2009) 1035–1045.
 [5] A. Bouhamidi and K. Jbilou, Sylvester Tikhonov-regularization methods in image restoration, J. Comput. Appl. Math., 206 (2007) 86–98.
 [6] B. Datta, Numerical Methods for Linear Control Systems, Elsevier Academic Press, 2004.
 [7] M. Dehghan and M. Hajarian, Two algorithms for finding the Hermitian reflexive and skew-Hermitian solutions of Sylvester matrix equations, Appl. Math. Lett., 24 (2011) 444–449.
 [8] M. Dehghan and A. Shirilord, The double-step scale splitting method for solving complex Sylvester matrix equation, Comput. Appl. Math., 38 (2019), Article 146.
 [9] I. S. Duff, R. G. Grimes and J. G. Lewis, User's guide for the Harwell–Boeing sparse matrix collection, Technical Report RAL-92-086, Rutherford Appleton Laboratory, Chilton, UK, 1992.
 [10] D. J. Evans and C. R. Wan, A preconditioned conjugate gradient method for , Intern. J. Computer Math., 49 (1993) 207–219.
 [11] A. El Guennouni, K. Jbilou and J. Riquet, Block Krylov subspace methods for solving large Sylvester equations, Numer. Algorithms, 29 (2002) 75–96.
 [12] A. El Guennouni, K. Jbilou and H. Sadok, A block version of BiCGSTAB for linear systems with multiple right-hand sides, Electron. Trans. Numer. Anal., 16 (2004) 243–256.
 [13] G. H. Golub, S. Nash and C. Van Loan, A Hessenberg–Schur method for the problem AX+XB=C, IEEE Trans. Automat. Control, AC-24 (1979) 909–913.
 [14] M. Hajarian, Solving the general Sylvester discrete-time periodic matrix equations via the gradient based iterative method, Appl. Math. Lett., 52 (2016) 87–95.
 [15] D. Y. Hu and L. Reichel, Krylov-subspace methods for the Sylvester equation, Linear Algebra Appl., 172 (1992) 283–313.
 [16] Y.-F. Ke and C.-F. Ma, A preconditioned nested splitting conjugate gradient iterative method for the large sparse generalized Sylvester equation, Comput. Math. Appl., 68 (2014) 1409–1420.
 [17] M. Khorsand Zak and F. Toutounian, Nested splitting conjugate gradient method for matrix equation and preconditioning, Comput. Math. Appl., 66 (2013) 269–278.
 [18] M. Khorsand Zak and F. Toutounian, Nested splitting CG-like iterative method for solving the continuous Sylvester equation and preconditioning, Adv. Comput. Math., 40 (2014) 865–880.
 [19] M. Khorsand Zak and F. Toutounian, An iterative method for solving the continuous Sylvester equation by emphasizing on the skew-Hermitian parts of the coefficient matrices, Intern. J. Computer Math., 94 (2017) 633–649.
 [20] D. K. Salkuyeh and F. Toutounian, New approaches for solving large Sylvester equations, Appl. Math. Comput., 173 (2006) 9–18.
 [21] V. Simoncini, Computational methods for linear matrix equations, SIAM Rev., 58 (2016) 377–441.
 [22] E. Tohidi and M. Khorsand Zak, A new matrix approach for solving second-order linear matrix partial differential equations, Mediterr. J. Math., 13 (2016) 1353–1376.
 [23] A.-L. Yang, On the convergence of the minimum residual HSS iteration method, Appl. Math. Lett., 94 (2019) 210–216.
 [24] A.-L. Yang, Y. Cao and Y.-J. Wu, Minimum residual Hermitian and skew-Hermitian splitting iteration method for non-Hermitian positive definite linear systems, BIT Numer. Math., 59 (2019) 299–319.