1 Introduction
In this paper we are concerned with numerical methods for large-scale systems of stiff matrix Riccati differential equations (MRDEs) of the form
(1) 
where the unknown matrix-valued function is sought. MRDEs of this form occur in many important applications such as optimal control, filtering, boundary value problems for systems of ODEs, and many others (see e.g. Abou ; Ascher ; Ichikawa ; Jacobs ). In most control problems, the coefficient matrices are obtained from the discretization of operators defined on infinite-dimensional spaces, and both fast and slow modes exist. This means that the associated MRDEs will be fairly large and stiff.
The most important case of (1) is the symmetric MRDE
(2) 
It is obvious that the solution of the symmetric MRDE is symmetric, since the transpose is also a solution. The symmetric MRDE is one of the most widely studied equations due to its importance in linear-quadratic optimal control problems. If the quadratic term is absent, the equations (1) reduce to the so-called matrix Sylvester differential equations (MSDEs). For a thorough description of these equations and some qualitative issues, we refer the reader to Abou ; Fital ; Jacobs ; Juang and the references appearing therein.
Many numerical methods have been developed in the past for solving MRDEs. Perhaps the most natural numerical technique is to use Kronecker products to rewrite (1) as a system of vector ODEs and then apply standard numerical integrators such as Runge-Kutta or linear multistep solvers
Butcher . However, these approaches are not suitable for the solution of large stiff MRDEs: they are generally computationally expensive and fail to exploit the structure inherent in many large practical problems. For stiff MRDEs, matrix-valued versions of implicit time integration schemes, such as the BDF and Rosenbrock methods, have been explored through a direct time discretization of (1), see e.g. Benner01 ; Choi ; Dieci . Recently, some other unconventional numerical methods have also been developed for MRDEs and related problems, including splitting methods Stillfjord1 ; Stillfjord2 and projection methods Behr ; Jbilou ; Hached ; Koskela , etc. The aim of this paper is to introduce exponential integrators for large-scale stiff problems of the forms (1) and (2). In the past two decades exponential integrators have become a popular tool for solving large-scale stiff semilinear systems of ODEs
(3) 
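As a minimal illustration of the Kronecker/vectorization approach mentioned above, the following Python sketch turns a small symmetric MRDE into a vector ODE and hands it to a stiff solver. The coefficient matrices `A`, `B`, `C` and the initial value are hypothetical stand-ins, not data from this paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical small symmetric MRDE:  X' = A^T X + X A - X B X + C,  X(0) = X0.
rng = np.random.default_rng(0)
n = 4
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = np.eye(n)                      # symmetric quadratic coefficient (assumed)
C = np.eye(n)                      # symmetric constant term (assumed)
X0 = np.zeros((n, n))

def rhs(t, x):
    # Vectorize: reshape the unknown back into a matrix, apply the Riccati
    # right-hand side, and flatten again for the ODE solver.
    X = x.reshape(n, n)
    return (A.T @ X + X @ A - X @ B @ X + C).ravel()

sol = solve_ivp(rhs, (0.0, 1.0), X0.ravel(), method="BDF", rtol=1e-10, atol=1e-12)
X1 = sol.y[:, -1].reshape(n, n)
# Symmetric data and a symmetric initial value give a symmetric solution.
assert np.allclose(X1, X1.T, atol=1e-6)
```

This is only viable for moderate dimensions, since the vectorized system has quadratically many unknowns; for large problems the structure-exploiting methods discussed below are needed.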
A general derivation of exponential integrators is based on the variation-of-constants formula
(4) 
By approximating the nonlinear term by an appropriate algebraic polynomial, various types of exponential integrators can be derived. Different approximations result in different types of exponential integrators of either multistep type or Runge-Kutta type, see e.g. MH2 ; MH4 ; MH3 ; AK . A main advantage of exponential integrators satisfying stiff order conditions is that they do not suffer from order reduction even if the matrix is a discretization of an unbounded linear operator. For a full overview of exponential integrators and associated software, we refer the reader to the reviews Hochbruck2010 ; BV2005 and references therein. Although the matrix differential equations (1) can be reformulated in the form (3) and solved by an exponential integrator, this approach generates very large systems and is not appropriate.
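To make the construction from the variation-of-constants formula concrete, here is a minimal exponential Euler sketch for a semilinear system of the form (3). The matrix `A` and nonlinearity `g` are hypothetical, and `phi1` is evaluated densely, which is only viable for small dimensions:

```python
import numpy as np
from scipy.linalg import expm

def phi1(M):
    # phi1(M) = M^{-1} (e^M - I); valid here since M is nonsingular.
    return np.linalg.solve(M, expm(M) - np.eye(M.shape[0]))

# Exponential Euler: u_{n+1} = e^{hA} u_n + h * phi1(hA) g(u_n).
A = np.array([[-2.0, 1.0], [0.0, -3.0]])            # stiff linear part (assumed)
g = lambda u: np.array([np.sin(u[0]), u[1] ** 2])   # nonlinearity (assumed)

h = 0.01
E, P = expm(h * A), h * phi1(h * A)                 # precompute once per step size
u = np.array([1.0, 0.5])
for _ in range(100):
    u = E @ u + P @ g(u)
```

The linear part is propagated exactly, so the stability of the step is not limited by the stiffness of the linear operator.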
In the present work we present matrix-valued versions of exponential integrators for stiff MRDEs (1). The methods provide an efficient alternative to implicit integrators for computing solutions of MRDEs. For large-scale systems arising in many applications, it is often observed, both in practice and in theory, that the solution has low numerical rank and can be approximated by products of low-rank matrices Lang ; Stillfjord3 . To exploit such structure, we show how a low-rank implementation can be applied to the exponential integrators. Thus we are able to reduce the computational cost and memory requirements compared to a naive application of an exponential integrator.
The remainder of the paper is organized as follows. In Section 2, we give some basic results and properties of MRDEs. In Section 3, exponential Rosenbrock-type methods are introduced for the application to MRDEs. In Section 4 we discuss implementation issues and derive low-rank approximations for both of the exponential integration schemes considered. Section 5 is devoted to some numerical examples and comparisons with splitting methods of similar orders. Finally, we draw some conclusions in Section 6.
2 Preliminaries
We start by recalling a general result on the solution of the MRDEs. The following result shows that the MRDEs (1) can be equivalently rewritten in an integral form (see e.g. Kucera ).
Theorem 1.
The exact solution of the MRDEs (1) is given by
(5) 
Proof.
∎
Over a single time interval, a change of variables in (5) gives
(6) 
The formula (6) also holds for time-varying coefficient matrices. To simplify the construction of methods for matrix differential equations, we use
(7) 
to denote the linear operator from the right-hand side of the MRDEs (1). The operator (7) is the well-known Sylvester operator. The operator exponential satisfies the following relation (see Behr ):
(8) 
Then the expression (6) takes the simplified form
(9) 
The first term on the right-hand side of (9) involves the operator exponential and represents the homogeneous part of the solution, whereas the other two terms consist of integrals, again involving operator exponentials. A natural idea for constructing exponential integrators is to approximate the integrals on the right-hand side of (9) by a quadrature formula, in which only the nonlinear terms are approximated while the operator exponentials are treated exactly. In particular, for MSDEs, we have the following result.
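The relation (8) can be checked numerically: for the Sylvester operator S[X] = AX + XD, the operator exponential acts as e^{tS}[X] = e^{tA} X e^{tD}. The sketch below verifies this against the Kronecker (vectorized) representation of the operator for random illustrative matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n, t = 3, 0.7
A, D, X = rng.standard_normal((3, n, n))

# Vectorized form: vec(S[X]) = (I ⊗ A + D^T ⊗ I) vec(X) (column-major vec).
S = np.kron(np.eye(n), A) + np.kron(D.T, np.eye(n))
lhs = (expm(t * S) @ X.ravel(order="F")).reshape((n, n), order="F")

# Closed form from (8): e^{tS}[X] = e^{tA} X e^{tD}.
rhs = expm(t * A) @ X @ expm(t * D)
assert np.allclose(lhs, rhs)
```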
Lemma 0.
Assume the inhomogeneity is a sufficiently differentiable matrix-valued function; then the exact solution of the matrix differential equation
(10) 
can be represented by the expansion
(11) 
where
(12) 
(13) 
Proof.
The functions satisfy the following recurrence relations
(16) 
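The standard recurrence for the φ-functions, which the relations (16) presumably instantiate, is φ₀(M) = e^M and φ_{k+1}(M) = M⁻¹(φ_k(M) − I/k!) for nonsingular M. A small sketch with illustrative data:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def phi(k, M):
    # phi_0(M) = e^M;  phi_{j+1}(M) = M^{-1} (phi_j(M) - I / j!).
    P, I = expm(M), np.eye(M.shape[0])
    for j in range(k):
        P = np.linalg.solve(M, P - I / factorial(j))
    return P

z = np.array([-1.0, -2.0])
M = np.diag(z)
# Scalar checks: phi_1(z) = (e^z - 1)/z,  phi_2(z) = (e^z - 1 - z)/z^2.
assert np.allclose(np.diag(phi(1, M)), (np.exp(z) - 1.0) / z)
assert np.allclose(np.diag(phi(2, M)), (np.exp(z) - 1.0 - z) / z**2)
```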
3 Exponential Rosenbrock-type integrators for MRDEs
In this section, we consider the time discretization of MRDEs (1). Rewrite the equations (1) as
(18) 
where the two terms denote the Fréchet derivative and the nonlinear remainder at the current approximation, respectively:
(19) 
with and
The Fréchet derivative is obviously a Sylvester operator. Formally, by the variation-of-constants formula (9), the exact solution of (18) satisfies
(20) 
The above expression has a structure similar to (4), but with the Sylvester operator exponential in place of the matrix exponential. We can therefore apply various existing exponential integrators to (18). The application of the general exponential Runge-Kutta type methods MH2 to the MRDEs (18) yields
(21) 
Here the nodes and the coefficients, which are linear combinations of the corresponding φ-type functions, can be determined by a stiff error analysis adapted from the stiff order theory presented in Hochbruck2010 ; Luan2013 . The process is rather involved and is omitted here.
In our context, we only consider two specific exponential integration schemes. They will be used in our numerical experiments in Section 5. The first and simplest exponential integration scheme is the exponential Rosenbrock-type Euler scheme
(22) 
The scheme is computationally attractive since it achieves second order with only one stage. The second scheme is of third order (denoted Erow3) and can be regarded as a simple modification of the exponential Rosenbrock-type Euler scheme
(23) 
The internal stage has the same structure as the exponential Rosenbrock-type Euler scheme (22), and the external stage is a perturbation of the internal stage. The two schemes can be embedded together to create an adaptive time-stepping method.
4 Implementation issues
For exponential integrators, the main computational cost lies in approximating the exponential and exponential-type functions at each time step. To our knowledge, there is no explicit method in the literature for evaluating these functions of a Sylvester operator. For the computation of the first function, the following formula gives an indirect way.
Define the augmented matrix by
(24) 
By formula (10.40) in Higham 2008 , we have
(25) 
Then the scheme (22) can be rewritten as
(26) 
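The augmented-matrix identity behind (25), formula (10.40) of Higham 2008 , states that the exponential of the augmented matrix carries φ₁(A)B in its upper-right block, so one matrix exponential yields the φ₁-product. A numerical check with illustrative random data:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# exp([[A, B], [0, 0]]) = [[e^A, phi1(A) B], [0, I]].
M = np.block([[A, B], [np.zeros((n, n)), np.zeros((n, n))]])
phi1B = expm(M)[:n, n:]

# Direct evaluation: phi1(A) B = A^{-1} (e^A - I) B.
ref = np.linalg.solve(A, (expm(A) - np.eye(n)) @ B)
assert np.allclose(phi1B, ref)
```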
For the computation of a single matrix exponential or its action on a thin matrix, a number of methods have been proposed in the literature, see e.g. AlMohy 2009 ; AlMohy 2011 ; Sidje 1998 and the review Moler 2003 . This approach has the major advantage of simplicity, but it is likely to be too expensive for large dimensions.
A more general strategy for approximating these functions is to construct a quadrature formula. For a given function, the Sylvester operator and a given matrix, we approximate the action by a quadrature formula with quadrature nodes and weights
(27) 
Thus, to evaluate this expression, we need to compute several operator exponentials acting on the same matrix. In practical applications, if the matrix has a low-rank factorization in which both outer factors have full column rank and the middle factor is nonsingular, then the block Krylov subspace method can be applied to the computation of the operator exponentials involved.
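For the operator exponentials acting on a thin low-rank factor, a Krylov-based action routine avoids forming the exponential explicitly. The sketch below uses SciPy's `expm_multiply` on an illustrative sparse matrix (not the paper's test data) and checks it against the dense exponential:

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

# Illustrative sparse, stable matrix (1D second-difference stencil).
n, r, t = 200, 2, 0.5
A = sp.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)], [-1, 0, 1],
             format="csr")
Z = np.ones((n, r))                       # thin block factor

approx = expm_multiply(t * A, Z)          # action e^{tA} Z, no dense e^{tA} formed
dense = expm(t * A.toarray()) @ Z         # reference for this small example
assert np.allclose(approx, dense, atol=1e-8)
```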
In fact, as shown in the schemes (22) and (23), every stage in an exponential integrator can be expressed as a linear combination of the form
(28) 
Using the recurrence relations (16) we can calculate (28) recursively. Two alternatives are available. The first is a forward recursion, i.e.,
(29) 
then
(30) 
The main computational cost of this process consists of actions of the Sylvester operator and one operator exponential. The other approach is a backward recursion
(31) 
(32) 
This process requires the solution of algebraic Sylvester equations and one Sylvester operator exponential acting on a matrix.
In many practical applications the MRDEs have a low-rank structure and the solution has the low-rank property. In such cases it is necessary to avoid forming the iteration matrices explicitly, because this in general leads to dense computations. In the remainder of the section we briefly describe how to implement the two schemes above in a low-rank fashion for the symmetric MRDEs.
Suppose the constant and quadratic coefficient matrices in (2) are symmetric positive semidefinite and are given in the low-rank form
(33) 
This implies that the solution to the MRDEs (2) is also symmetric positive semidefinite for all times. First, we consider the exponential Rosenbrock-type Euler scheme (22). Assume that the previous solution approximation admits a low-rank decomposition. The right-hand side in scheme (22) can then be written in the form
(34) 
The new factors have more columns than the previous ones, and typically more columns than their rank. As the number of columns in the decomposition increases, the computational cost becomes prohibitively expensive. To overcome this difficulty we can incorporate the column compression strategy Lang to find more suitable low-rank factors. Then, we apply the numerical quadrature formula (27), and the decomposition is given by the factors
(35) 
and
(36) 
Note that this evaluation requires the computation of products between matrix exponentials and a thin matrix. For large matrices, the block Krylov projection algorithm is a popular choice Saad . An advantage of this computation is that one can project the operator exponentials onto the same search subspace and evaluate them simultaneously.
Now, using the above splittings, the approximation to the solution is given by
(37) 
Again, we can employ the column compression strategy to obtain low-rank splitting factors.
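A column compression step of the kind referred to above can be realized with an SVD of the tall factor. This sketch assumes the simpler Z Zᵀ format rather than the LDLᵀ variant of Lang , and the tolerance handling is illustrative:

```python
import numpy as np

def compress(Z, tol=1e-12):
    # Truncated SVD of the factor: X = Z Z^T = U diag(s)^2 U^T; keep s > tol * s_max.
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return U[:, :r] * s[:r]              # new factor with only r columns

rng = np.random.default_rng(3)
# Rank-3 matrix stored with 10 redundant columns.
Z = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 10))
Zc = compress(Z)
assert Zc.shape[1] == 3
assert np.allclose(Zc @ Zc.T, Z @ Z.T)   # the product X = Z Z^T is preserved
```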
We now describe an alternative low-rank implementation of the exponential Rosenbrock-type Euler scheme. Applying the backward recursion (31)-(32) to the scheme (22) gives
(38) 
This results in solving, at each time step, one algebraic Lyapunov equation (ALE) whose right-hand side has low-rank form. There are many methods for solving Lyapunov equations with low-rank right-hand sides, for instance the low-rank ADI iteration Benner or Krylov subspace based methods Simoncini . Due to the availability of codes based on the low-rank ADI iteration Benner , we limit ourselves to this procedure here.
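To illustrate the kind of equation solved at each step, the following uses SciPy's dense Lyapunov solver as a small-scale stand-in for the low-rank ADI iteration; the matrices are illustrative, with A chosen stable and a rank-one right-hand side:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(4)
n = 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # assumed stable
G = rng.standard_normal((n, 1))                       # low-rank right-hand side factor

# Solve the ALE  A X + X A^T = -G G^T.
X = solve_continuous_lyapunov(A, -G @ G.T)
assert np.allclose(A @ X + X @ A.T, -G @ G.T)
```

For large dimensions one would instead compute a low-rank factor of the solution directly, e.g. with the low-rank ADI codes mentioned above.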
For the third-order exponential integration scheme (23), the previous solution approximation is again assumed to be given in low-rank format. Note that the internal stage is the same as in the exponential Rosenbrock-type Euler scheme and can thus be written in low-rank form. The external stage is a perturbation of the internal stage. In order to find a low-rank factorization of the entire right-hand side, we first consider the corresponding splitting of the perturbation term. Direct calculation shows that
(39) 
Inserting the splitting factors into (39) finally gives the splitting with factors
(40) 
Again, using (27), the splitting factors can be computed as follows:
(41) 
and
(42) 
Now, using this splitting, we obtain
(43) 
with
(44) 
and
(45) 
In an actual implementation, once the new splitting factors are formed, the column compression strategy should be applied to eliminate redundant information.
5 Numerical experiments
In this section, we present some numerical experiments to illustrate the behaviour of the exponential integration methods. We compare the numerical performance of the exponential Rosenbrock-type Euler scheme (22) (denoted ExpEuler) and the third-order exponential integration scheme Erow3 (23) with the splitting schemes in Stillfjord2 . For ExpEuler, we consider all three implementations described above, marked as follows: the general implementation (26) (denoted GExpEuler), the low-rank implementation (denoted LrExpEuler) and the backward recursion implementation (38) (denoted BrExpEuler). For the low-rank implementations, the tolerance for the column compression strategy is set in terms of the system dimension and the machine precision. All experiments are performed under Windows 10 and MATLAB R2018b running on a laptop with an Intel Core i7 processor with 1.8 GHz and 8 GB RAM. Unless otherwise stated, we report the relative errors at the final time, measured in the Frobenius norm.
Experiment 1. As a first test, we consider the matrix obtained from the standard 5-point finite-difference discretization of the two-dimensional PDE
(46) 
on the domain with homogeneous Dirichlet boundary conditions, and with entries chosen randomly from [0, 1]. The matrix is a negative stiffness matrix which can be generated by the MATLAB function fdm_2d_matrix from the LYAPACK toolbox Penzl . We consider the discretization of (46) for two different choices of the coefficient functions: the first choice generates a symmetric matrix (denoted fdmsym), while the second is nonsymmetric (denoted fdmnonsym). The corresponding initial values are chosen as a low-rank product of randomly generated factors. To ensure the availability of a reference solution, we consider two problem sizes on the time interval [0, 1]. The reference solutions are obtained by the MATLAB built-in function ode15s with tight absolute and relative tolerances; this has been done by vectorizing the MRDEs into a vector-valued system of ODEs. We use the above-mentioned methods to integrate the four systems over the time interval [0, 1] with a fixed time step size.
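A Python stand-in for the LYAPACK routine fdm_2d_matrix in the symmetric constant-coefficient case: the 5-point discretization of the Laplacian on the unit square with Dirichlet boundary conditions, built via Kronecker sums (grid size and scaling are illustrative):

```python
import numpy as np
import scipy.sparse as sp

def laplacian_2d(n):
    # 1D second-difference matrix on n interior points, step h = 1/(n+1).
    h = 1.0 / (n + 1)
    T = sp.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)],
                 [-1, 0, 1]) / h**2
    I = sp.identity(n)
    # Kronecker-sum assembly of the 2D 5-point stencil (n^2 x n^2 matrix).
    return sp.kron(I, T) + sp.kron(T, I)

A = laplacian_2d(8)
assert A.shape == (64, 64)
# Negative definiteness makes the semidiscrete system stiff and stable.
assert np.linalg.eigvalsh(A.toarray()).max() < 0
```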
Table 1 lists the relative errors at the final time as well as the total runtimes (in seconds) of the methods for these systems. The results show that the two exponential integration schemes achieve high precision in all cases and attain a higher order of convergence than expected. We interpret this as an indication that the exponential integrators are well suited to the structure of the MRDEs and capture some of their qualitative properties. For comparison, we also present the results for the additive symmetric scheme of order 4 from Stillfjord2 (denoted Additive4) with the same time step, and for ode15s. The code for Additive4 contains parallel loops, which use 4 workers on our machine. We can see that the exponential integration schemes accomplish higher computational accuracy than Additive4 and require less runtime than ode15s.
Figures 1 and 2 show plots of the Frobenius norms of the solutions obtained by ExpEuler and of the reference solutions provided by ode15s for each test system. Erow3 yields very similar behaviour to ExpEuler and is omitted here. From these figures we see that ExpEuler follows the behaviour of the reference solutions well. Although it is not very accurate in the first few time steps, the behaviour is completely corrected as time increases, and at the final time the relative error levels out at a very small value. This is also true in the subsequent experiments, and we interpret it as exponential integration schemes being favourable for MRDEs.
size(A)  GExpEuler  LrExpEuler  BrExpEuler  Erow3  Additive4  ode15s  

Error  Time  Error  Time  Error  Time  Error  Time  Error  Time  Error  Time  
fdmsym  64×64  1.22e-14  0.95  1.31e-14  3.36  4.58e-14  1.56  1.30e-14  4.25  2.09e-04  2.67  1.78e-14  334.81 
100×100  1.57e-14  1.81  1.73e-14  6.28  4.46e-13  1.82  1.77e-14  7.51  8.53e-04  3.07  2.29e-14  3708.20  
fdmnonsym  64×64  2.01e-14  0.96  2.16e-14  2.12  8.61e-14  2.12  2.15e-14  3.11  5.17e-04  4.02  2.89e-14  423.85 
100×100  2.26e-14  2.16  2.78e-14  12.67  3.21e-14  2.58  2.79e-14  15.37  1.89e-03  5.73  3.19e-14  4773.10 
Experiment 2. As a second test, we consider a finite element discretization of a heat equation arising from the optimal control of steel cooling Benner . The matrix is symmetric and stable. The initial value is set either from the benchmark data or as random vectors with elements from the normal (0, 1) distribution. To ensure the availability of a reference solution, we consider two problem sizes on the time interval [0, 45]. The reference solutions are computed by the MATLAB built-in function ode45 with tight absolute and relative tolerances. Similar to the previous experiment, Figures 3 and 4 show the convergence behaviour of the solutions of ExpEuler to the reference solution for the two sizes. From these figures, we observe that ExpEuler gives fairly good approximations of the reference solutions. We also present the numerical results of the different methods mentioned above in Table 2. We see that the proposed methods perform quite well in terms of accomplished accuracy and computational time.
GExpEuler  LrExpEuler  BrExpEuler  Erow3  Additive4  

Error  Time  Error  Time  Error  Time  Error  Time  Error  Time  
rail371  8.92e-14  0.75  7.19e-12  0.11  7.07e-11  1.10  7.19e-12  0.16  3.38e-11  0.33  
rand(371,2)  8.61e-14  0.99  4.23e-12  0.13  7.25e-11  1.18  4.22e-12  0.19  3.26e-11  0.42  
rail1357  2.23e-14  21.29  7.66e-12  0.56  2.48e-10  14.55  7.66e-12  0.80  3.60e-11  0.52  
rand(1357,2)  1.59e-14  29.54  4.18e-12  0.67  1.78e-10  16.11  4.27e-12  0.97  2.57e-11  0.56 
Experiment 3. The third experiment is the same problem as in Experiment 1 but with larger dimensions. We choose the same setting as in Experiment 1 and compare the low-rank approximations LrExpEuler, Erow3 and BrExpEuler to the symmetric splitting schemes of orders 4 and 6 for systems of several dimensions. For larger-scale stiff matrices, our tests indicate that the numerical integration formula (27) causes a loss of computational accuracy. In the following tests, we therefore use relatively smaller time steps, except for BrExpEuler. Due to the system sizes, it is infeasible to use ode15s to compute accurate reference solutions. Instead, we use the eighth-order symmetric splitting scheme in Stillfjord2 with a small time step. Table 3 presents the relative errors and the corresponding computation times for each of the systems with different values of the system size. It is noted that BrExpEuler produces the smallest errors of all methods even though its time step is larger. This is due to the poor performance of the numerical quadrature applied to the φ-type functions. It also shows that the exponential integrators have large computational potential and motivates us to explore other efficient numerical approximations of these functions. We plan to investigate this option in future work. In terms of CPU times, we observe that in some cases the exponential integration methods require more time than the symmetric splitting schemes. This comparison would be less favourable to the splitting schemes on a single-core machine, since the symmetric splitting methods employ parallel loops with four workers on our laptop. Again, LrExpEuler attains almost the same accuracy as Erow3, which illustrates the feasibility of adaptivity. Finally, we observe that for all methods the accuracy of the resulting solutions decreases as the system size and stiffness increase. We attribute this mainly to the limited accuracy of the reference solutions.
Matrix  size(A)  LrExpEuler  BrExpEuler  Erow3  Additive4  Additive6  

Error  Time  Error  Time  Error  Time  Error  Time  Error  Time  
fdmsym  400×400  8.21e-07  16.64  1.46e-08  10.79  8.21e-07  41.31  3.68e-05  64.69  8.03e-07  86.99 
900×900  7.67e-05  110.77  3.06e-06  77.50  7.67e-05  191.85  5.39e-04  148.54  4.18e-05  189.66  
1600×1600  7.84e-04  500.91  6.21e-05  470.55  7.84e-04  711.84  2.53e-03  293.53  3.42e-04  348.13  
2500×2500  3.14e-03  1641.29  3.70e-04  1953.69  3.14e-03  2067.08  7.51e-03  541.69  1.15e-03  957.70  
fdmnonsym  400×400  1.18e-06  28.03  1.96e-08  19.37  1.18e-06  87.31  4.40e-05  116.79  1.03e-06  170.70 
900×900  7.80e-05  112.67  3.37e-06  116.65  7.80e-05  263.88  5.64e-04  392.62  4.51e-05  491.76  
1600×1600  8.16e-04  558.12  6.52e-05  702.36  8.16e-04  925.01  2.56e-03  917.33  3.58e-04  990.88  
2500×2500  3.22e-03  2013.00  3.92e-04  2822.60  3.22e-03  2629.20  8.00e-03  1671.76  1.22e-03  2120.35 
6 Conclusion
In this paper, we have shown how to apply exponential integrators to compute approximate solutions of large stiff MRDEs. We have developed low-rank implementations of such schemes for large-scale applications and compared them with current state-of-the-art integrators. Numerical experiments illustrate that the exponential integration methods can achieve convergence beyond the expected order. Thus the schemes can provide an efficient alternative to standard integrators for large-scale stiff problems. The study of the performance and application of higher-order exponential integration schemes and their comparison with implicit schemes will be presented elsewhere. We also plan to develop new, more efficient algorithms to approximate the exponential and related functions of a Sylvester operator acting on a matrix.
References
 (1) H. Abou-Kandil, G. Freiling, V. Ionescu and G. Jank, Matrix Riccati Equations in Control and Systems Theory, Birkhäuser, Basel, Switzerland, 2003.
 (2) A. Al-Mohy and N. Higham, A new scaling and squaring algorithm for the matrix exponential, SIAM J. Matrix Anal. Appl., 31 (2009), pp. 970-989.
 (3) A. Al-Mohy and N. Higham, Computing the action of the matrix exponential, with an application to exponential integrators, SIAM J. Sci. Comput., 33 (2011), pp. 488-511.
 (4) A.C. Antoulas, Approximation of Large-Scale Dynamical Systems, SIAM, Philadelphia, 2009.
 (5) U.M. Ascher, R.M. Mattheij and R.D. Russell, Numerical Solution of Boundary Value Problems for Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, NJ, 1988.
 (6) M. Behr, P. Benner and J. Heiland, Solution formulas for differential Sylvester and Lyapunov equations, arXiv:1811.08327 [math.NA], 2018.
 (7) P. Benner, M. Köhler and J. Saak, M-M.E.S.S. - the matrix equations sparse solvers library, https://www.mpimagdeburg.mpg.de/projects/mess.
 (8) P. Benner and H. Mena, Rosenbrock methods for solving Riccati differential equations, IEEE Trans. Automat. Control, 58 (2013), pp. 2950-2956.
 (9) P. Benner and J. Saak, A semi-discretized heat transfer model for optimal cooling of steel profiles, in Dimension Reduction of Large-Scale Systems, vol. 45 of Lect. Notes Comput. Sci. Eng., P. Benner, V. Mehrmann, and D. Sorensen, eds., Springer-Verlag, Berlin/Heidelberg, 2005, pp. 353-356.
 (10) J.C. Butcher, Numerical Methods for Ordinary Differential Equations, John Wiley and Sons Ltd, Chichester, 2008.
 (11) C.H. Choi and A.J. Laub, Efficient matrix-valued algorithms for solving stiff Riccati differential equations, IEEE Trans. Automat. Contr., 35 (1990), pp. 770-776.
 (12) L. Dieci, Numerical integration of the differential Riccati equation and some related issues, SIAM J. Numer. Anal., 29 (1992), pp. 781-815.
 (13) S. Fital and C.-H. Guo, Convergence of the solution of a nonsymmetric matrix Riccati differential equation to its stable equilibrium solution, J. Math. Anal. Appl., 318 (2006), pp. 648-657.
 (14) Y. Güldoǧan, M. Hached, K. Jbilou and M. Kurulay, Low rank approximate solutions to large-scale differential matrix Riccati equations, arXiv:1612.00499v2 [math.NA], 2017.
 (15) M. Hached and K. Jbilou, Approximate solution to large nonsymmetric differential Riccati problems, arXiv:1801.01291 [math.NA], 2018.
 (16) N.J. Higham, Functions of matrices: theory and computation, SIAM, Philadelphia, 2008.
 (17) M. Hochbruck and A. Ostermann, Explicit exponential Runge-Kutta methods for semilinear parabolic problems, SIAM J. Numer. Anal., 43 (2005), pp. 1069-1090.
 (18) M. Hochbruck and A. Ostermann, Exponential multistep methods of Adams-type, BIT Numer. Math., 51 (2011), pp. 889-908.
 (19) M. Hochbruck and A. Ostermann, Exponential integrators, Acta Numer., 19 (2010), pp. 209-286.
 (20) M. Hochbruck, A. Ostermann and J. Schweitzer, Exponential Rosenbrock-type methods, SIAM J. Numer. Anal., 47 (2009), pp. 786-803.
 (21) A. Ichikawa and H. Katayama, Remarks on the time-varying Riccati equations, Systems Control Lett., 37 (1999), pp. 335-345.
 (22) C. Moler and C. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later, SIAM Review, 45 (2003), pp. 3-49.
 (23) O.L.R. Jacobs, Introduction to Control Theory, Oxford Science Publications, Oxford, UK, 2nd ed., 1993.
 (24) J. Juang, Global existence and stability of solutions of matrix Riccati equations, J. Math. Anal. Appl., 258 (2001), pp. 1-12.
 (25) A.K. Kassam and L.N. Trefethen, Fourth-order time stepping for stiff PDEs, SIAM J. Sci. Comput., 26 (2005), pp. 1214-1233.
 (26) A. Koskela and H. Mena, A structure preserving Krylov subspace method for large scale differential Riccati equations, arXiv:1705.07507v1, 2017.
 (27) V. Kučera, A review of the matrix Riccati equation, Kybernetika, 9 (1973), pp. 42-61.
 (28) N. Lang, H. Mena and J. Saak, On the benefits of the LDL^T factorization for large-scale differential matrix equation solvers, Linear Algebra Appl., 480 (2015), pp. 44-71.
 (29) V.T. Luan and A. Ostermann, Exponential B-series: the stiff case, SIAM J. Numer. Anal., 51 (2013), pp. 3431-3445.
 (30) H. Mena, A. Ostermann, L. Pfurtscheller and C. Piazzola, Numerical low-rank approximation of matrix differential equations, arXiv:1705.10175, 2017.
 (31) B.V. Minchev and W.M. Wright, A review of exponential integrators for first order semilinear problems, Tech. report 2/05, Department of Mathematics, NTNU, 2005.
 (32) J. Niesen and W.M. Wright, Algorithm 919: A Krylov subspace algorithm for evaluating the phifunctions appearing in exponential integrators, ACM Trans. Math. Softw., 38 (3), Article 22, 2012.
 (33) T. Penzl, LYAPACK: A MATLAB Toolbox for Large Lyapunov and Riccati Equations, Model Reduction Problems, and LinearQuadratic Optimal Control Problems, Users’ Guide (Version 1.0), 1999.
 (34) W.T. Reid, Riccati Differential Equations, Academic Press, New York, 1972.
 (35) Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, Philadelphia, 2003.
 (36) L.A. Safonov and V.V. Strygin, The convergence of the solution of a matrix Riccati equation to the maximal stationary solution in the critical case, Differential Equations, 38 (2002), pp. 29-34.
 (37) R.B. Sidje, Expokit: A software package for computing matrix exponentials, ACM Trans. Math. Softw., 24 (1998), pp. 130-156.
 (38) T. Stillfjord, Low-rank second-order splitting of large-scale differential Riccati equations, IEEE Trans. Automat. Control, 60 (2015), pp. 2791-2796.
 (39) T. Stillfjord, Adaptive high-order splitting schemes for large-scale differential Riccati equations, Numer. Algor., 78 (2018), pp. 1129-1151.
 (40) T. Stillfjord, Singular value decay of operator-valued differential Lyapunov and Riccati equations, SIAM J. Control Optim., 56 (2018), pp. 3598-3618.
 (41) V. Simoncini, A new iterative method for solving large-scale Lyapunov matrix equations, SIAM J. Sci. Comput., 29 (2007), pp. 1268-1288.