In many problems in scientific computing we encounter matrix equations. Matrix equations are one of the most interesting and intensively studied classes of mathematical problems; they play vital roles in applications, and many researchers have studied matrix equations and their applications, see [6, 7, 8, 14, 16, 17, 21, 22] and their references. Nowadays, the continuous Sylvester equation is possibly the most famous and most broadly employed linear matrix equation, and is given as
$$AX + XB = C, \qquad (1)$$
where $A \in \mathbb{R}^{m \times m}$, $B \in \mathbb{R}^{n \times n}$ and $C \in \mathbb{R}^{m \times n}$ are given matrices and $X \in \mathbb{R}^{m \times n}$ is an unknown matrix. A Lyapunov equation is a special case with $m = n$, $B = A^T$ and $C = C^T$. Here and in the sequel, $A^T$ is used to denote the transpose of the matrix $A$. Equation (1) has a unique solution if and only if $A$ and $-B$ have no common eigenvalues, which will be assumed throughout this paper.
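This solvability condition is easy to check numerically; a minimal sketch in Python (the matrices here are arbitrary illustrative choices):

```python
import numpy as np

# Example coefficient matrices (arbitrary choices for illustration).
A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[1.0, 0.0], [2.0, 4.0]])

# Equation (1), AX + XB = C, has a unique solution iff the spectra of
# A and -B are disjoint.
eig_A = np.linalg.eigvals(A)
eig_negB = np.linalg.eigvals(-B)
unique_solution = all(abs(la - mu) > 1e-12 for la in eig_A for mu in eig_negB)
```

Here `eig_A` = {3, 2} and `eig_negB` = {-1, -4} are disjoint, so a unique solution exists for every right-hand side $C$.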
Many results have been obtained about the Sylvester equation. It appears frequently in many areas of applied mathematics and plays a vital role in a number of applications such as control theory, model reduction and image processing; see [1, 3, 7, 10, 11, 13, 15, 18, 19, 20] and their references for more details.
In general, the dimensions of $A$ and $B$ may be orders of magnitude different, and this fact is key in selecting the most appropriate numerical solution strategy. For solving general Sylvester equations of small size, one can use direct methods such as the Bartels–Stewart and the Hessenberg–Schur methods, which consist of transforming the coefficient matrices $A$ and $B$ into triangular or Hessenberg form by an orthogonal similarity transformation and then solving the resulting system directly by a back-substitution process. When the coefficient matrices $A$ and $B$ are large and sparse, iterative methods are often the methods of choice for solving the Sylvester equation (1) efficiently and accurately. Many iterative methods have been developed for solving matrix equations, such as the alternating direction implicit (ADI) method, the Krylov subspace based algorithms [15, 20, 11], the Hermitian and skew-Hermitian splitting (HSS) method and its inexact variant (IHSS), the nested splitting conjugate gradient (NSCG) method, and the nested splitting CGNR (NS-CGNR) method.
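For small dense problems, the Bartels–Stewart approach is available off the shelf: SciPy's `solve_sylvester` reduces $A$ and $B$ to (quasi-)triangular Schur form and back-substitutes. A minimal sketch (the random, diagonally shifted test matrices are our own choices, made so that $A$ and $-B$ have well-separated spectra):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 5
# Shift by 3*I so that eig(A) and eig(-B) are well separated,
# guaranteeing a unique solution of AX + XB = C.
A = rng.standard_normal((n, n)) + 3.0 * np.eye(n)
B = rng.standard_normal((n, n)) + 3.0 * np.eye(n)
C = rng.standard_normal((n, n))

# Bartels-Stewart: Schur-reduce A and B, solve the triangular system
# by back-substitution, transform back.
X = solve_sylvester(A, B, C)
residual = np.linalg.norm(A @ X + X @ B - C)
```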
When both coefficient matrices $A$ and $B$ are (non-Hermitian) positive semi-definite, and at least one of them is positive definite, the Hermitian and skew-Hermitian splitting (HSS) method and the nested splitting conjugate gradient (NSCG) method are often the methods of choice for efficiently and accurately solving the Sylvester equation (1).
In order to study the numerical methods, we often rewrite the continuous Sylvester equation (1) as the mathematically equivalent linear system of equations
$$\mathcal{A}x = c, \qquad (2)$$
where the matrix $\mathcal{A}$ is of dimension $mn \times mn$ and is given by
$$\mathcal{A} = I_n \otimes A + B^T \otimes I_m,$$
where $\otimes$ denotes the Kronecker product and $x = \mathrm{vec}(X)$, $c = \mathrm{vec}(C)$ are the vectors obtained by stacking the columns of $X$ and $C$, respectively.
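The Kronecker identity behind (2) can be verified directly; a small sketch (column-major `vec`, random test matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
X = rng.standard_normal((m, n))

# vec(X) stacks the columns of X; with column stacking,
# vec(AX + XB) = (I_n kron A + B^T kron I_m) vec(X).
vec = lambda M: M.flatten(order="F")
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
err = np.linalg.norm(K @ vec(X) - vec(A @ X + X @ B))
```

Note that forming $K$ explicitly costs $O(m^2 n^2)$ storage, which is why the matrix form (1) is preferred in actual computations.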
Motivated by [23, 24], we apply the minimal residual technique to the Hermitian and skew-Hermitian iteration scheme and introduce a non-stationary iteration method, named the minimal residual Hermitian and skew-Hermitian splitting (MRHSS) iteration method, to solve the continuous Sylvester equation.
In the remainder of this paper, we use $\|\cdot\|_2$, $\|\cdot\|_F$ and $I_n$ to denote the spectral norm, the Frobenius norm of a matrix, and the identity matrix of dimension $n$, respectively. Note that $\|\cdot\|_2$ is also used to represent the 2-norm of a vector. Furthermore, we have the following equivalent relationship between the Frobenius norm of a matrix $X$ and the 2-norm of the vector $x = \mathrm{vec}(X)$:
$$\|X\|_F = \|\mathrm{vec}(X)\|_2.$$
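This norm equivalence is what lets us state stopping criteria interchangeably in matrix or vector form; a one-line numerical check:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 3))

# The Frobenius norm of X equals the 2-norm of the stacked vector vec(X).
fro = np.linalg.norm(X, "fro")
two = np.linalg.norm(X.flatten(order="F"), 2)
gap = abs(fro - two)
```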
2 Main results
For the linear system of equations (2), we consider the Hermitian and skew-Hermitian splitting $\mathcal{A} = \mathcal{H} + \mathcal{S}$, where
$$\mathcal{H} = \frac{1}{2}\left(\mathcal{A} + \mathcal{A}^H\right) \quad \text{and} \quad \mathcal{S} = \frac{1}{2}\left(\mathcal{A} - \mathcal{A}^H\right),$$
and $\alpha > 0$ is a given parameter. Let $r^{(k)} = c - \mathcal{A}x^{(k)}$ and $r^{(k+\frac12)} = c - \mathcal{A}x^{(k+\frac12)}$. The residual form of iteration scheme (5) can be written as
$$r^{(k+\frac12)} = r^{(k)} - \alpha_k \mathcal{A}(\alpha I + \mathcal{H})^{-1} r^{(k)}, \qquad r^{(k+1)} = r^{(k+\frac12)} - \beta_k \mathcal{A}(\alpha I + \mathcal{S})^{-1} r^{(k+\frac12)}, \qquad (6)$$
where $\alpha_k$ and $\beta_k$ are the step sizes of the two half-steps.
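The two splitting matrices are cheap to form and satisfy the defining properties exactly; a quick numerical check (the shifted random matrix is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
Amat = rng.standard_normal((n, n)) + 4.0 * np.eye(n)  # non-Hermitian example

# Hermitian/skew-Hermitian splitting: A = H + S with
# H = (A + A^H)/2 Hermitian and S = (A - A^H)/2 skew-Hermitian.
H = 0.5 * (Amat + Amat.conj().T)
S = 0.5 * (Amat - Amat.conj().T)

split_err = np.linalg.norm(Amat - (H + S))
herm_err = np.linalg.norm(H - H.conj().T)
skew_err = np.linalg.norm(S + S.conj().T)
```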
Denote $M = \alpha I + \mathcal{S}$. Then, an inner product can be defined as
$$\langle x, y \rangle_M = \langle Mx, My \rangle,$$
where $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product of two vectors. Thus, for a vector $x$ and a matrix $X$, the induced vector and matrix norms can be defined as $\|x\|_M = \|Mx\|_2$ and $\|X\|_M = \|MXM^{-1}\|_2$, respectively. Now, if the parameter $\beta_k$ is determined by minimizing the 2-norm of the residual, we have
$$\beta_k = \frac{\langle \mathcal{A}q^{(k)}, r^{(k+\frac12)} \rangle}{\langle \mathcal{A}q^{(k)}, \mathcal{A}q^{(k)} \rangle}, \qquad q^{(k)} = (\alpha I + \mathcal{S})^{-1} r^{(k+\frac12)}.$$
However, the parameter $\beta_k$ will be determined by minimizing the $M$-norm of the residual rather than the 2-norm, see . Therefore, we have
$$\beta_k = \frac{\langle \mathcal{A}q^{(k)}, r^{(k+\frac12)} \rangle_M}{\langle \mathcal{A}q^{(k)}, \mathcal{A}q^{(k)} \rangle_M}.$$
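The minimal residual step length is just the solution of a scalar least-squares problem; the following sketch checks, in the 2-norm case, that the formula $\theta^\* = \langle \mathcal{A}p, r\rangle / \langle \mathcal{A}p, \mathcal{A}p\rangle$ cannot be improved by perturbing $\theta$ (the direction $p$ and the matrix are random illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
Amat = rng.standard_normal((n, n)) + 4.0 * np.eye(n)
r = rng.standard_normal(n)   # current residual
p = rng.standard_normal(n)   # update direction

Ap = Amat @ p
# Step length minimizing || r - theta * A p ||_2 over theta.
theta = (Ap @ r) / (Ap @ Ap)

best = np.linalg.norm(r - theta * Ap)
# Perturbing theta in either direction increases the residual norm.
worse = min(np.linalg.norm(r - (theta + d) * Ap) for d in (-0.1, 0.1))
```

The $M$-norm variant is obtained by replacing the Euclidean inner products with $\langle u, v\rangle_M = \langle Mu, Mv\rangle$.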
Let $\mathcal{A}$ be a non-Hermitian positive definite matrix. Then, the MRHSS iteration method used for solving the system of linear equations (2) is unconditionally convergent for any $\alpha > 0$ and any initial guess $x^{(0)}$.
Proof. See .
where $Z^{(k)}$ is obtained from the Sylvester equation
$$(\alpha I_m + H_A) Z^{(k)} + Z^{(k)} H_B = R^{(k)}, \qquad (11)$$
and $W^{(k)}$ is obtained from the Sylvester equation
$$(\alpha I_m + S_A) W^{(k)} + W^{(k)} S_B = R^{(k+\frac12)}, \qquad (12)$$
with $H_A = \frac12(A + A^H)$, $S_A = \frac12(A - A^H)$, $H_B = \frac12(B + B^H)$ and $S_B = \frac12(B - B^H)$. We state how to update the residual matrices a few lines later.
If the Sylvester equation (1) has a unique solution, then under the assumption that $A$ and $B$ are positive semi-definite and at least one of them is positive definite, we can easily see that there is no common eigenvalue between the matrices $\alpha I + H_A$ and $-H_B$ (and likewise for $\alpha I + S_A$ and $-S_B$), so the Sylvester equations (11) and (12) have a unique solution for every given right-hand side matrix.
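The spectral separation argument can be seen numerically: the eigenvalues of $\alpha I + H_A$ stay on the positive real axis while those of $-H_B$ are non-positive. A sketch (the matrices are built so that the Hermitian parts are positive definite; all names and the construction are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
m = 5
alpha = 1.0

# Non-Hermitian positive definite examples: skew part plus a positive
# diagonal, so the Hermitian part H_A (resp. H_B) is positive definite.
G = rng.standard_normal((m, m))
A = 0.5 * (G - G.T) + np.diag(rng.uniform(1.0, 2.0, m))
B = 0.5 * (G.T - G) + np.diag(rng.uniform(1.0, 2.0, m))
H_A = 0.5 * (A + A.T)
H_B = 0.5 * (B + B.T)

# eig(alpha*I + H_A) >= alpha + lambda_min(H_A) > 0, while
# eig(-H_B) <= 0, so the two spectra cannot intersect.
lo = np.linalg.eigvalsh(alpha * np.eye(m) + H_A).min()
hi = np.linalg.eigvalsh(-H_B).max()
separated = lo > 0 and hi < 0
```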
where $R^{(k)} = C - AX^{(k)} - X^{(k)}B$. From relations (6), we can obtain
$$R^{(k+\frac12)} = R^{(k)} - \alpha_k \left(A Z^{(k)} + Z^{(k)} B\right), \qquad R^{(k+1)} = R^{(k+\frac12)} - \beta_k \left(A W^{(k)} + W^{(k)} B\right),$$
where $Z^{(k)}$ is obtained from the Sylvester equation (11) and $W^{(k)}$ is obtained from the Sylvester equation (12).
On the surface, four systems of linear equations should be solved at each step of the MRHSS method for the system of linear equations (2), but this number can be reduced to three. Denote $U^{(k)} = A Z^{(k)} + Z^{(k)} B$ and $V^{(k)} = A W^{(k)} + W^{(k)} B$. The quantities required in the second half-step can then be obtained from $U^{(k)}$ and $V^{(k)}$, which have already been computed in the first half-step. Therefore, in (10) we can update the iterates and residuals accordingly.
In addition, we choose the value of the parameter $\alpha$ as in .
Therefore, an implementation of the MRHSS method for the continuous Sylvester equation can be given by the following algorithm.
The MRHSS algorithm for the Sylvester equation
1. Select an initial guess $X^{(0)}$, compute $R^{(0)} = C - AX^{(0)} - X^{(0)}B$.
2. For $k = 0, 1, 2, \ldots$ until convergence, Do:
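A runnable Python sketch of an MRHSS-type iteration of this form is given below. The function name `mrhss_sylvester`, the default $\alpha = 1$, and the random test matrices are our own illustrative choices; the half-steps solve the shifted Sylvester equations (11)–(12) via SciPy's `solve_sylvester`, and, for simplicity, both step lengths minimize the Frobenius norm of the residual (rather than the $M$-norm discussed above):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def mrhss_sylvester(A, B, C, alpha=1.0, tol=1e-10, maxit=100):
    """Sketch of an MRHSS-type iteration for AX + XB = C.

    Each half-step solves a shifted Sylvester equation with the Hermitian
    (resp. skew-Hermitian) parts of A and B, then takes a minimal-residual
    step length in the Frobenius norm.
    """
    m, n = C.shape
    HA, SA = 0.5 * (A + A.T), 0.5 * (A - A.T)
    HB, SB = 0.5 * (B + B.T), 0.5 * (B - B.T)
    X = np.zeros((m, n))
    R = C.copy()                       # residual for X = 0
    hist = [np.linalg.norm(R)]
    for _ in range(maxit):
        for P, Q in ((HA, HB), (SA, SB)):
            # Correction from (alpha*I + P) Z + Z Q = R (one half-step).
            Z = solve_sylvester(alpha * np.eye(m) + P, Q, R)
            U = A @ Z + Z @ B
            theta = np.sum(U * R) / np.sum(U * U)   # minimal residual step
            X += theta * Z
            R -= theta * U
        hist.append(np.linalg.norm(R))
        if hist[-1] <= tol * hist[0]:
            break
    return X, hist

# Illustrative well-conditioned test problem.
rng = np.random.default_rng(6)
m = n = 6
A = np.diag(rng.uniform(2.0, 3.0, m)) + 0.2 * rng.standard_normal((m, m))
B = np.diag(rng.uniform(2.0, 3.0, n)) + 0.2 * rng.standard_normal((n, n))
C = rng.standard_normal((m, n))
X, hist = mrhss_sylvester(A, B, C)
```

Since each half-step length is chosen by exact line search, the residual norm is non-increasing by construction.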
Suppose that the coefficient matrices $A$ and $B$ in the continuous Sylvester equation (1) are non-Hermitian positive semi-definite, and at least one of them is positive definite. Then the MRHSS iteration method (10) for solving the Sylvester equation (1) is unconditionally convergent for any $\alpha > 0$ and any initial guess $X^{(0)}$.
Proof. The continuous Sylvester equation (1) is mathematically equivalent to the linear system of equations (2). Therefore, the proof is similar to that of Theorem 3.3 in  with only technical modifications.
2.1 Using the MRHSS splitting as a preconditioner
Any matrix splitting can naturally induce a splitting preconditioner for the Krylov subspace methods (see ). In Section 3, we show by numerical computation that the minimal residual Hermitian and skew-Hermitian splitting can be used as a splitting preconditioner and induces accurate, robust and effective preconditioned Krylov subspace iteration methods for solving the continuous Sylvester equation.
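As an illustration of how a splitting induces a preconditioner, the sketch below preconditions BiCGSTAB on the Kronecker form (2) with the classical two-factor HSS-type matrix $\frac{1}{2\alpha}(\alpha I + \mathcal{H})(\alpha I + \mathcal{S})$; the small random test problem and the choice $\alpha = 1$ are our own illustrative assumptions:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, bicgstab

rng = np.random.default_rng(7)
m = n = 5
A = np.diag(rng.uniform(2.0, 3.0, m)) + 0.3 * rng.standard_normal((m, m))
B = np.diag(rng.uniform(2.0, 3.0, n)) + 0.3 * rng.standard_normal((n, n))
C = rng.standard_normal((m, n))

# Kronecker form (2): K x = c with K = I kron A + B^T kron I.
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
c = C.flatten(order="F")

# HSS-type splitting preconditioner P = (1/(2*alpha))(alpha I + H)(alpha I + S).
alpha = 1.0
H = 0.5 * (K + K.T)
S = 0.5 * (K - K.T)
I = np.eye(m * n)
luH = lu_factor(alpha * I + H)
luS = lu_factor(alpha * I + S)

def apply_Pinv(v):
    # Apply P^{-1}: solve (alpha I + H) y = v, then (alpha I + S) z = 2*alpha*y.
    y = lu_solve(luH, v)
    return lu_solve(luS, 2.0 * alpha * y)

M = LinearOperator((m * n, m * n), matvec=apply_Pinv)
x, info = bicgstab(K, c, M=M)
X = x.reshape((m, n), order="F")
res = np.linalg.norm(A @ X + X @ B - C)
```

For large problems one would of course apply the preconditioner by solving shifted Sylvester equations in matrix form rather than factoring the $mn \times mn$ Kronecker matrix.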
3 Numerical results
In this section, we use a few numerical results to show the effectiveness of the MRHSS method by comparing its results with those of the HSS method. All numerical experiments were computed in double precision with a number of MATLAB codes. All iterations are started from the zero matrix as the initial guess $X^{(0)} = 0$ and terminated when the current iterate satisfies
$$\frac{\|R^{(k)}\|_F}{\|R^{(0)}\|_F} \leq \mathrm{tol},$$
where $R^{(k)} = C - AX^{(k)} - X^{(k)}B$ is the residual of the $k$th iterate. Also, we use a tolerance for the inner iterations in the corresponding methods. We report the CPU time (CPU), the number of iteration steps (IT) and the norm of the residual (res-norm) in the tables, and compare the HSS iterative method with the MRHSS iterative method for solving the continuous Sylvester equation (1).
This class of problems may arise in the preconditioned Krylov subspace iteration methods used for solving the systems of linear equations resulting from the finite difference or Sinc-Galerkin discretization of various differential equations and boundary value problems .
We apply the iteration methods to this problem with different dimensions. The results are given in Tables 1 and 2. From the results presented in Tables 1 and 2, we observe that the MRHSS method is more efficient than the HSS method in terms of CPU time. However, when the dimension increases, the HSS method becomes more efficient than the MRHSS method in terms of the number of iterations (IT).
The results for this problem are given in Table 3. Here, we observe that the MRHSS method is more efficient than the HSS method in terms of both CPU time and number of iterations (IT).
For this problem, the HSS and MRHSS methods converge very slowly. We applied the BiCGSTAB method to this problem and observed that it diverges. In Table 4, a dagger shows that no convergence was obtained. Motivated by  and , we use each of the MRHSS and HSS methods as a splitting preconditioner in the BiCGSTAB method. We observe that using the MRHSS splitting as a preconditioner improves the results obtained by the corresponding method (MRHSS-BiCGSTAB). However, using the HSS splitting as a preconditioner does not improve the results.
In this paper, we have proposed an efficient iterative method, named the MRHSS method, for solving the continuous Sylvester equation. We have compared the MRHSS method with the HSS method on several problems and observed that, for these problems, the MRHSS method is more efficient than the HSS method. Moreover, the use of the MRHSS splitting as a preconditioner can induce an accurate and effective preconditioned BiCGSTAB method.
-  Z. Z. Bai, On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations, J. Comput. Math., 29:2 (2011) 185–198.
-  Z.-Z. Bai, J.-F. Yin and Y.-F. Su, A shift-splitting preconditioner for non-Hermitian positive definite matrices, J. Comput. Math., 24 (2006) 539–552.
-  R. H. Bartels and G. W. Stewart, Algorithm 432: Solution of the matrix equation AX+XB=C, Comm. ACM, 15 (1972) 820–826.
-  P. Benner, R. C. Li and N. Truhar, On the ADI method for Sylvester equations, J. Comput. Appl. Math. 233 (2009) 1035–1045.
-  A. Bouhamidi and K. Jbilou, Sylvester Tikhonov-regularization methods in image restoration, J. Comput. Appl. Math. 206 (2007) 86–98.
-  B. Datta, Numerical methods for linear control systems, Elsevier Academic Press, 2004.
-  M. Dehghan and M. Hajarian, Two algorithms for finding the Hermitian reflexive and skew-Hermitian solutions of Sylvester matrix equations, Appl. Math. Lett., 24 (2011) 444–449.
-  M. Dehghan and A. Shirilord, The double-step scale splitting method for solving complex Sylvester matrix equation, Comp. Appl. Math., 38 (2019), Article 146.
-  I. S. Duff, R. G. Grimes and J. G. Lewis, User's guide for the Harwell-Boeing sparse matrix collection, Technical Report RAL-92-086, Rutherford Appleton Laboratory, Chilton, UK, 1992.
-  D. J. Evans and C. R. Wan, A preconditioned conjugate gradient method for AX+XB=C, Intern. J. Computer Math., 49 (1993) 207–219.
-  A. El Guennouni, K. Jbilou and J. Riquet, Block Krylov subspace methods for solving large Sylvester equation, Numer. Algorithms, 29 (2002) 75–96.
-  A. El Guennouni, K. Jbilou and H. Sadok, A block version of BiCGSTAB for linear systems with multiple right-hand sides, Electron. Trans. Numer. Anal., 16 (2004) 243–256.
-  G. H. Golub, S. Nash and C. Van Loan, A Hessenberg-Schur method for the problem AX+XB=C, IEEE Trans. Automat. Control, AC-24 (1979) 909–913.
-  M. Hajarian, Solving the general Sylvester discrete-time periodic matrix equations via the gradient based iterative method, Appl. Math. Lett., 52 (2016) 87–95.
-  D. Y. Hu and L. Reichel, Krylov-subspace methods for the Sylvester equation, Linear Algebra Appl., 172 (1992) 283–313.
-  Y.-F. Ke and C.-F. Ma, A preconditioned nested splitting conjugate gradient iterative method for the large sparse generalized Sylvester equation, Comput. Math. Appl., 68 (2014) 1409–1420.
-  M. Khorsand Zak and F. Toutounian, Nested splitting conjugate gradient method for matrix equation AXB=C and preconditioning, Comput. Math. Appl., 66 (2013) 269–278.
-  M. Khorsand Zak and F. Toutounian, Nested splitting CG-like iterative method for solving the continuous Sylvester equation and preconditioning, Adv. Comput. Math., 40 (2014) 865–880.
-  M. Khorsand Zak and F. Toutounian, An iterative method for solving the continuous Sylvester equation by emphasizing on the skew-Hermitian parts of the coefficient matrices, Intern. J. Computer Math., 94 (2017) 633–649.
-  D. K. Salkuyeh and F. Toutounian, New approaches for solving large Sylvester equations, Appl. Math. Comput. 173 (2006) 9–18.
-  V. Simoncini, Computational methods for linear matrix equations, SIAM Review, 58 (2016) 377–441.
-  E. Tohidi and M. Khorsand Zak, A new matrix approach for solving second-order linear matrix partial differential equations, Mediterr. J. Math., 13 (2016) 1353–1376.
-  A.-L. Yang, On the convergence of the minimum residual HSS iteration method, Appl. Math. Lett., 94 (2019) 210–216.
-  A.-L. Yang, Y. Cao and Y.-J. Wu, Minimum residual Hermitian and skew-Hermitian splitting iteration method for non-Hermitian positive definite linear systems, BIT Numer. Math., 59 (2019) 299–319.