Exponential integrators for large-scale stiff matrix Riccati differential equations

07/30/2019
by   Dongping Li, et al.

Matrix Riccati differential equations arise in many different areas and are particularly important within the field of control theory. In this paper we consider numerical integration for large-scale systems of stiff matrix Riccati differential equations. We show how to apply exponential Rosenbrock-type integrators to obtain approximate solutions. Two typical exponential integration schemes are considered. Implementation issues are addressed, and low-rank approximations are exploited based on high-quality numerical linear algebra codes. Numerical comparisons demonstrate that the exponential integrators can attain high accuracy and efficiency for solving large-scale systems of stiff matrix Riccati differential equations.



1 Introduction

In this paper we are concerned with numerical methods for large-scale systems of stiff matrix Riccati differential equations (MRDEs) of the following form

(1)

where the unknown is a matrix-valued function. MRDEs of this form occur in many important applications such as optimal control, H∞-control, filtering, boundary value problems for systems of ODEs and many others (see e.g. Abou ; Ascher ; Ichikawa ; Jacobs ). In most control problems, the coefficient matrices are obtained from the discretization of operators defined on an infinite-dimensional space, and both fast and slow modes are present. This means that the associated MRDEs will be fairly large and stiff.

The most important case of (1) is the symmetric MRDE

(2)

It is clear that the solution of the symmetric MRDE is symmetric, since its transpose is also a solution. The symmetric MRDE is one of the most widely studied equations due to its importance in linear-quadratic optimal control problems. If the quadratic term in (1) vanishes, the equations reduce to the so-called matrix Sylvester differential equations (MSDEs). For a thorough description of these equations and some qualitative issues, we refer the reader to Abou ; Fital ; Jacobs ; Juang and the references therein.

Many numerical methods have been developed in the past for solving MRDEs. Perhaps the most natural technique is to use Kronecker products to rewrite (1) as a system of vector-valued ODEs

and then apply standard numerical integrators such as Runge-Kutta or linear multistep solvers

Butcher . However, this approach is not suitable for large stiff MRDEs: it is generally computationally expensive and makes it hard to exploit the structure inherent in many large practical problems. For stiff MRDEs, matrix-valued versions of implicit time integration schemes, such as BDF and Rosenbrock methods, have been explored through a direct time discretization of (1), see e.g. Benner01 ; Choi ; Dieci . Recently, some other unconventional numerical methods have also been developed for MRDEs and related problems, including splitting methods Stillfjord1 ; Stillfjord2 and projection methods Behr ; Jbilou ; Hached ; Koskela , etc.

The aim of this paper is to introduce exponential integrators for large-scale stiff problems of the forms (1) and (2). In the past two decades exponential integrators have become a popular tool for solving large-scale stiff semi-linear systems of ODEs

(3)

A general derivation of exponential integrators is based on the variation-of-constants formula

(4)

By approximating the nonlinear term by an appropriate algebraic polynomial, various types of exponential integrators can be derived. Different approximations result in different types of exponential integrators of either multistep type or Runge-Kutta type, see e.g. MH2 ; MH4 ; MH3 ; AK . A main advantage is that exponential integrators satisfying stiff order conditions do not suffer from order reduction even if the matrix is the discretization of an unbounded linear operator. For a full overview of exponential integrators and associated software, we refer the reader to the reviews Hochbruck2010 ; BV2005 and references therein. Although the matrix differential equations (1) can be reformulated in the form (3) and solved by an exponential integrator, this approach generates very large systems and is therefore not appropriate here.
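To make the variation-of-constants construction concrete, the following sketch implements the simplest such method, the exponential Euler scheme u_{n+1} = e^{hA} u_n + h φ1(hA) g(u_n), for a stiff semilinear heat equation. It is written in Python with NumPy/SciPy as a stand-in for the MATLAB codes used later; the reaction term g and the grid size are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Stiff linear part: 1D Laplacian on an interior grid of (0, 1).
n = 50
hx = 1.0 / (n + 1)
x = np.linspace(hx, 1 - hx, n)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / hx**2

def g(u):
    # Illustrative nonlinearity (not from the paper).
    return u - u**3

h = 0.01
E = expm(h * A)                               # e^{hA}
P = np.linalg.solve(h * A, E - np.eye(n))     # phi_1(hA) = (hA)^{-1}(e^{hA} - I)

u0 = np.sin(np.pi * x)
u = u0.copy()
for _ in range(100):                          # integrate to t = 1
    u = E @ u + h * (P @ g(u))                # exponential Euler step
```

Note that the stiff linear part is propagated exactly through the matrix exponential, so the step size is not restricted by the Laplacian's large eigenvalues.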

In the present work we introduce matrix-valued versions of exponential integrators for stiff MRDEs (1). The methods provide an efficient alternative to implicit integrators for computing solutions of MRDEs. For large-scale systems, it is often observed in many applications, both in practice and in theory, that the solution has low numerical rank and can be approximated by products of low-rank matrices Lang ; Stillfjord3 . To exploit this structure, we show how a low-rank implementation can be applied to exponential integrators. In this way we reduce both the computational cost and the memory requirements compared to a naive application of an exponential integrator.

The remainder of the paper is organized as follows. In Section 2, we give some basic results and properties of MRDEs. In Section 3, exponential Rosenbrock-type methods are introduced for application to MRDEs. In Section 4 we address implementation issues, and low-rank approximations for both exponential integration schemes are developed. Section 5 is devoted to some numerical examples and comparisons with splitting methods of similar orders. Finally, we draw some conclusions in Section 6.

2 Preliminaries

We start by recalling a general result on the solution of the MRDEs. The following result shows that the MRDEs (1) can be equivalently rewritten in integral form (see e.g. Kucera ).

Theorem 1.

The exact solution of the MRDEs (1) is given by

(5)
Proof.

Over a general time interval, a change of variables in (5) gives

(6)

The formula (6) also holds for time-varying coefficient matrices. To simplify the construction of methods for matrix differential equations, we use

(7)

to denote the linear operator on the right-hand side of the MRDEs (1). The operator (7) is the well-known Sylvester operator. The operator exponential satisfies the following relation (see Behr ):

(8)

With this notation, the expression (6) takes the simplified form

(9)

The first term on the right-hand side of (9) involves the operator exponential and represents the homogeneous part of the solution, whereas the other two terms consist of integrals, again involving the operator exponential. A natural idea for constructing exponential integrators is to approximate the integrals on the right-hand side of (9) by a quadrature formula in which only the nonlinear term is approximated while the operator exponential is treated exactly. In particular, for MSDEs, we have the following result.
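The relation (8) can be illustrated numerically. Assuming the common convention L[X] = AX + XD for the Sylvester operator (the sizes and matrices below are arbitrary test data), the operator exponential applied to a matrix is a two-sided product of ordinary matrix exponentials, which can be checked against the vectorized (Kronecker) form:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n))
D = rng.standard_normal((m, m))
X0 = rng.standard_normal((n, m))
t = 0.7

# Operator form: e^{tL}[X0] = e^{tA} X0 e^{tD} for L[X] = A X + X D.
direct = expm(t * A) @ X0 @ expm(t * D)

# Vectorized form: vec(L[X]) = (I_m ⊗ A + D^T ⊗ I_n) vec(X).
L = np.kron(np.eye(m), A) + np.kron(D.T, np.eye(n))
via_vec = (expm(t * L) @ X0.flatten(order='F')).reshape(n, m, order='F')
```

Both expressions give the solution at time t of the Sylvester differential equation with initial value X0; the two-sided form only requires exponentials of the small factors.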

Lemma 0.

Suppose the nonhomogeneous term is a sufficiently differentiable matrix-valued function. Then the exact solution of the matrix differential equations

(10)

can be represented by the expansion

(11)

where

(12)
(13)
Proof.

By the formula (8), the solution of equation (10) can be written

(14)

Substituting the Taylor series expansion of

(15)

into the formula (14) and applying the definition (13) we arrive at the required result. ∎

The φ-functions satisfy the following recurrence relations

(16)

A particular case arises when the nonhomogeneous term in (10) is a matrix polynomial; then the exact solution can be represented by the expansion

(17)
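The recurrence relations (16) can be checked numerically. The sketch below assumes the standard definitions φ0(z) = e^z and φ_{k+1}(z) = (φ_k(z) − 1/k!)/z, builds the matrix φ-functions from the matrix exponential via the recurrence, and compares with the scalar formulas on a diagonal test matrix:

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

def phi(k, M):
    # phi_0(M) = e^M; phi_{j+1}(M) = M^{-1} (phi_j(M) - I / j!),
    # the matrix form of the standard phi-function recurrence.
    P = expm(M)
    I = np.eye(M.shape[0])
    for j in range(k):
        P = np.linalg.solve(M, P - I / factorial(j))
    return P

# Diagonal test matrix, so the matrix phi-functions act entrywise.
z = np.array([-1.0, -2.0, -5.0])
M = np.diag(z)
p1 = np.diag(phi(1, M))   # should equal (e^z - 1)/z
p2 = np.diag(phi(2, M))   # should equal (e^z - 1 - z)/z^2
```

In practice one avoids forming M^{-1} for large problems; the solve-based recurrence here is only for a clear, small-scale check.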

3 Exponential Rosenbrock-type integrators for MRDEs

In this section, we consider the time discretization of the MRDEs (1). We rewrite the equations (1) as

(18)

where the two terms on the right-hand side denote the Fréchet derivative and the nonlinear remainder, respectively:

(19)

with and

This is obviously a Sylvester operator. Formally, by the variation-of-constants formula (9), the exact solution of (18) satisfies

(20)

The above expression has a similar structure to (4), but with the Sylvester operator exponential in place of the matrix exponential. We can therefore apply various existing exponential integrators to (18). Applying the general exponential Runge-Kutta type methods MH2 to the MRDEs (18) yields

(21)

Here, the nodes and the coefficients are linear combinations of the φ-functions. They can be determined by a stiff error analysis, which can be adapted from the stiff order theory presented in Hochbruck2010 ; Luan2013 . The process is highly technical and is omitted here.

In our context, we consider only two specific exponential integration schemes; they will be used in our numerical experiments in Section 5. The first and simplest is the exponential Rosenbrock-type Euler scheme

(22)

This scheme is computationally attractive since it achieves second order with only one stage. The second scheme is of third order (denoted Erow3) and can be regarded as a simple modification of the exponential Rosenbrock-type Euler scheme

(23)

The internal stage has the same structure as the exponential Rosenbrock-type Euler scheme (22), and the external stage is a perturbation of the internal stage. The two schemes can be embedded to create an adaptive time-stepping method.
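A small dense sketch of the exponential Rosenbrock-type Euler scheme for a symmetric RDE may clarify the construction. It assumes the model form X' = AᵀX + XA + Q − XSX (a standard symmetric Riccati form; the matrices, sizes and step size are illustrative) and evaluates φ1 of the vectorized Fréchet derivative; this is the naive Kronecker route that the paper argues against for large problems, used here only to check the scheme on a tiny example against a stiff ODE solver.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
n = 3
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable drift
Q = np.eye(n)
S = np.eye(n)
X = np.zeros((n, n))

def F(X):
    # Assumed symmetric RDE right-hand side.
    return A.T @ X + X @ A + Q - X @ S @ X

h, steps = 0.05, 20
I = np.eye(n * n)
for _ in range(steps):
    Ac = A - S @ X                       # closed-loop matrix at X_n
    # Fréchet derivative J[H] = Ac^T H + H Ac, vectorized via Kronecker products.
    J = np.kron(np.eye(n), Ac.T) + np.kron(Ac.T, np.eye(n))
    phi1 = np.linalg.solve(h * J, expm(h * J) - I)
    X = X + h * (phi1 @ F(X).flatten(order='F')).reshape(n, n, order='F')

# Reference: vectorize the RDE and integrate with a stiff solver.
ref = solve_ivp(lambda t, x: F(x.reshape(n, n, order='F')).flatten(order='F'),
                (0, h * steps), np.zeros(n * n), method='BDF',
                rtol=1e-10, atol=1e-12).y[:, -1].reshape(n, n, order='F')
```

The scheme reproduces the reference to second-order accuracy while preserving the symmetry of the iterates.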

4 Implementation issues

For exponential integrators, the main computational cost lies in approximating the exponential and exponential-type functions at each time step. To our knowledge, there is no explicit method in the literature for evaluating these functions of a Sylvester operator. For the computation of the first φ-function, the following formula gives an indirect way.

Define the augmented matrix by

(24)

By formula (10.40) in Higham 2008 , we have

(25)

Then the scheme (22) can be rewritten as

(26)

A number of methods have been proposed in the literature for computing a single matrix exponential or its action on a thin matrix, see e.g. Al-Mohy 2009 ; Al-Mohy 2011 ; Sidje 1998 and the review Moler 2003 . This approach has the major advantage of simplicity but is likely to be too expensive for large problems.
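The augmented-matrix identity behind (24)-(25) is easy to verify in the ordinary matrix case: by Higham's formula, the exponential of the block matrix [[A, B], [0, 0]] contains φ1(A)B in its upper-right block. A minimal check with random matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n, p = 5, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, p))

# expm([[A, B], [0, 0]]) = [[e^A, phi_1(A) B], [0, I]].
M = np.block([[A, B], [np.zeros((p, n)), np.zeros((p, p))]])
top_right = expm(M)[:n, n:]

# Direct evaluation: phi_1(A) B = A^{-1} (e^A - I) B.
phi1B = np.linalg.solve(A, (expm(A) - np.eye(n)) @ B)
```

One matrix exponential of the (slightly larger) augmented matrix thus delivers the φ1-product without ever forming φ1(A) explicitly.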

A more general strategy for approximating the φ-functions is to construct a quadrature formula. Given a function, the Sylvester operator and a matrix, we approximate the corresponding integral by a quadrature rule with quadrature nodes and weights

(27)

Thus, to evaluate the φ-function we need to compute several operator exponentials acting on the same matrix. In practical applications, if the matrix has a low-rank factorization in which both outer factors have full column rank and the middle factor is nonsingular, then the block Krylov subspace method can be applied to compute the operator exponentials involved.
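A sketch of the quadrature idea (27) for the first φ-function in the ordinary matrix case, using the integral representation φ1(A)b = ∫₀¹ e^{(1−s)A} b ds with Gauss-Legendre nodes mapped to [0, 1]; the node count and test matrices are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)
b = rng.standard_normal(n)

# Gauss-Legendre rule on [-1, 1], mapped to the integration interval [0, 1].
nodes, weights = np.polynomial.legendre.leggauss(12)
s = 0.5 * (nodes + 1.0)
w = 0.5 * weights
quad = sum(wi * (expm((1.0 - si) * A) @ b) for wi, si in zip(w, s))

# Closed form for comparison: phi_1(A) b = A^{-1} (e^A - I) b.
exact = np.linalg.solve(A, (expm(A) - np.eye(n)) @ b)
```

Each quadrature node costs one exponential action on the same vector (or thin matrix), which is exactly the structure a block Krylov method can exploit by projecting all nodes onto one search subspace.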

In fact, as shown in the schemes (22) and (23), every stage in an exponential integrator can be expressed as a linear combination of the form

(28)

Using the recurrence relation (16), we can calculate (28) recursively. Two alternatives are available. The first one is a forward recursion, i.e.,

(29)

then

(30)

The main computational cost of this process consists of repeated actions of the Sylvester operator and one operator exponential evaluation. The other approach is a backward recursion

(31)
(32)

This process requires the solution of algebraic Sylvester equations and one Sylvester operator exponential acting on a matrix.

In many practical applications the MRDEs have a low-rank structure and the solution has the low-rank property. In such cases it is necessary to avoid forming the iteration matrices explicitly, because doing so generally leads to dense computations. In the remainder of this section we briefly describe how to implement the two schemes above in a low-rank fashion for symmetric MRDEs.

Suppose the coefficient matrices in (2) are symmetric positive semi-definite and are given in the low-rank form

(33)

This implies that the solution of the MRDEs (2) is symmetric positive semi-definite for all times. First, we consider the exponential Rosenbrock-type Euler scheme (22). Assume that the previous solution approximation admits a low-rank decomposition. Then the nonlinear remainder in scheme (22) can be written in the form

(34)

The new factor matrices have more columns than the previous ones, and typically more than their rank. As the number of columns in the decomposition grows, the computational cost becomes prohibitively expensive. To overcome this difficulty we can incorporate the column compression strategy Lang and find more suitable low-rank factors. Then we apply the numerical quadrature formula (27), and the decomposition is given by the factors

(35)

and

(36)

Note that this evaluation requires computing products between a matrix exponential and a thin matrix. For large matrices the block Krylov projection algorithm is a popular choice Saad . An advantage of this computation is that one can project the operator exponentials onto the same search subspace and evaluate them simultaneously.

Now, using this splitting together with that of the solution, the new approximation is given by

(37)

Again, we can employ the column compression strategy to obtain low-rank splitting factors.
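The column compression step can be realized with a truncated SVD of the tall factor: keep only the singular directions above a drop tolerance, so that the product Z Zᵀ is preserved while the number of columns shrinks to the numerical rank. A minimal sketch (the tolerance and test matrices are illustrative):

```python
import numpy as np

def column_compress(Z, tol=1e-10):
    # Thin SVD of the tall factor; drop singular directions below
    # tol * s_max so that Ztilde @ Ztilde.T ≈ Z @ Z.T with fewer columns.
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :r] * s[:r]

rng = np.random.default_rng(4)
base = rng.standard_normal((100, 5))
# 25 columns, but numerical rank only 5.
Z = np.hstack([base, base @ rng.standard_normal((5, 20))])

Zt = column_compress(Z)
```

After each time step the redundant columns accumulated by the quadrature and stage combinations are removed this way, keeping the factor width proportional to the numerical rank of the solution.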

We now describe an alternative low-rank implementation of the exponential Rosenbrock-type Euler scheme. Applying the backward recursion (31)-(32) to the scheme (22) gives

(38)

This results in solving, at each time step, one algebraic Lyapunov equation (ALE) whose right-hand side has low-rank form. There are many methods for solving Lyapunov equations with low-rank right-hand sides, for instance the low-rank ADI iteration Benner or Krylov subspace based methods Simoncini . Due to the availability of codes based on the low-rank ADI iteration Benner , we limit ourselves to this procedure here.
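For small examples one can mimic this step with SciPy's dense Lyapunov solver in place of a low-rank ADI code; the sketch below (the matrices are illustrative assumptions) also shows the rapid singular value decay of the solution, which is what the low-rank methods exploit:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(5)
n = 50
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stable test matrix
G = rng.standard_normal((n, 2))                        # thin right-hand-side factor

# Solve A X + X A^T = -G G^T (dense Bartels-Stewart here;
# a low-rank ADI iteration would return a factor of X directly).
X = solve_continuous_lyapunov(A, -G @ G.T)
residual = A @ X + X @ A.T + G @ G.T

# Singular values of X decay rapidly for a low-rank right-hand side.
s = np.linalg.svd(X, compute_uv=False)
```

A production implementation would use an ADI-based solver such as the M-M.E.S.S. codes cited in the text, which never form X densely and return a low-rank factor.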

For the third-order exponential integration scheme (23), the previous solution approximation is again assumed to be given in low-rank format. Note that the internal stage is the same as in the exponential Rosenbrock-type Euler scheme and thus can be written in low-rank form. The external stage is a perturbation of the internal stage. In order to find a low-rank factorization of the entire right-hand side, we first consider the corresponding splitting. A direct calculation shows that

(39)

Inserting these splitting factors into (39) finally gives the splitting with

(40)

Again, using (27), the splitting factors of can be computed as follows:

(41)

and

(42)

Now, using the same type of splitting, we obtain

(43)

with

(44)

and

(45)

In the actual implementation, once the new splitting factors are formed, the column compression strategy should be applied to eliminate redundant information.

5 Numerical experiments

In this section, we present some numerical experiments to illustrate the behaviour of the exponential integration methods. We compare the numerical performance of the exponential Rosenbrock-type Euler scheme (22) (denoted ExpEuler) and the third-order exponential integration scheme Erow3 (23) with the splitting schemes in Stillfjord2 . For ExpEuler, we consider all three implementations mentioned above, marked as follows: the general implementation (26) (denoted GExpEuler), the low-rank implementation (denoted LrExpEuler) and the backward recursion implementation (38) (denoted BrExpEuler). For the low-rank implementations, the tolerance for the column compression strategy is set relative to the system dimension and the machine precision. All experiments are performed under Windows 10 and MATLAB R2018b on a laptop with an Intel Core i7 processor at 1.8 GHz and 8 GB RAM. Unless otherwise stated, we report the relative errors at the final time, measured in the Frobenius norm.

Experiment 1. As a first test, we consider the matrix obtained from the standard 5-point difference discretization of the two-dimensional PDE

(46)

on the unit square with homogeneous Dirichlet boundary conditions, and with coefficients chosen randomly from [0, 1]. The matrix is a negative stiffness matrix which can be generated by the MATLAB function fdm2dmatrix from the LYAPACK toolbox Penzl . We consider the discretization of (46) for two different choices of the coefficient functions: the first generates a symmetric matrix (denoted fdm-sym), while the second yields an unsymmetric matrix (denoted fdm-nonsym). The corresponding initial values are chosen as a low-rank product of randomly generated factors. To ensure the availability of a reference solution, we consider two problem sizes on the time interval [0, 1]. The reference solutions are obtained by the MATLAB built-in function ode15s with tight absolute and relative tolerances; this is done by vectorizing the MRDEs into a vector-valued ODE. We use the methods mentioned above to integrate the four systems over the time interval [0, 1] with a fixed time step size.
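For readers without LYAPACK, a matrix with the same 5-point structure as fdm2dmatrix produces in the constant-coefficient symmetric case can be sketched via a Kronecker sum; this is a stand-in for illustration, not the LYAPACK routine, and variable convection/reaction coefficients would modify the stencil:

```python
import numpy as np

def fdm_2d_laplacian(n0):
    # 5-point finite-difference Laplacian on an n0 x n0 interior grid
    # of the unit square with homogeneous Dirichlet conditions.
    h = 1.0 / (n0 + 1)
    T = (np.diag(-2.0 * np.ones(n0)) + np.diag(np.ones(n0 - 1), 1)
         + np.diag(np.ones(n0 - 1), -1)) / h**2
    I = np.eye(n0)
    return np.kron(I, T) + np.kron(T, I)   # Kronecker-sum structure

A = fdm_2d_laplacian(8)                    # 64 x 64 matrix
evals = np.linalg.eigvalsh(A)
```

The resulting matrix is symmetric negative definite, and its stiffness grows like the square of the grid resolution, which is exactly the setting the exponential integrators are designed for.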

Table 1 lists the relative errors at the final time as well as the total runtime (in seconds) of the methods on these systems. The results show that the two exponential integration schemes achieve high precision in all cases and attain a higher order of convergence than expected. We interpret this as the exponential integrators being well suited to the structure of the MRDEs and capturing some of their qualitative properties. For comparison, we also present the results for the additive symmetric scheme of order 4 from Stillfjord2 (denoted Additive4) with the same time step, and for ode15s. The code for Additive4 contains parallel loops, which use 4 workers on our machine. We can see that the exponential integration schemes attain higher accuracy than Additive4 and require less runtime than ode15s.

Figures 1 and 2 show plots of the F-norm of the solutions obtained by ExpEuler and of the reference solutions provided by ode15s for each test system. Erow3 behaves very similarly to ExpEuler and is omitted here. From these figures we see that ExpEuler follows the behaviour of the reference solutions well. Although it is less accurate during the first few time steps, the behaviour is corrected as time increases, and at the final time the relative error levels out. This is also true in the subsequent experiments, and we interpret it as the exponential integration schemes being favourable for MRDEs.

                      GExpEuler        LrExpEuler       BrExpEuler       Erow3            Additive4        ode15s
            size(A)   Error     Time   Error     Time   Error     Time   Error     Time   Error     Time   Error     Time
fdm-sym     64x64     1.22e-14  0.95   1.31e-14  3.36   4.58e-14  1.56   1.30e-14  4.25   2.09e-04  2.67   1.78e-14  334.81
            100x100   1.57e-14  1.81   1.73e-14  6.28   4.46e-13  1.82   1.77e-14  7.51   8.53e-04  3.07   2.29e-14  3708.20
fdm-nonsym  64x64     2.01e-14  0.96   2.16e-14  2.12   8.61e-14  2.12   2.15e-14  3.11   5.17e-04  4.02   2.89e-14  423.85
            100x100   2.26e-14  2.16   2.78e-14  12.67  3.21e-14  2.58   2.79e-14  15.37  1.89e-03  5.73   3.19e-14  4773.10
Table 1: The CPU times and the relative errors with respect to the reference solution.
Figure 1: The F-norm of the solutions using LrExpEuler (o) and ode15s (-) for fdm-sym and fdm-nonsym on [0, 1]. Panels: (a) fdm-sym, (b) zoom of (a), (c) fdm-nonsym, (d) zoom of (c).
Figure 2: The F-norm of the solutions using LrExpEuler (o) and ode15s (-) for fdm-sym and fdm-nonsym of the larger size on [0, 1]. Panels: (a) fdm-sym, (b) zoom of (a), (c) fdm-nonsym, (d) zoom of (c).

Experiment 2. As a second test, we consider a finite element discretization of a heat equation arising from the optimal control of steel cooling Benner . The matrix is symmetric and stable. The initial value is set either as a given low-rank factor or as random vectors with elements from the normal (0, 1) distribution. To ensure the availability of a reference solution, we consider two problem sizes on the time interval [0, 45]. The reference solutions are computed by the MATLAB built-in function ode45 with tight absolute and relative tolerances. Similar to the previous experiment, Figures 3 and 4 show the convergence behaviour of the ExpEuler solutions to the reference solution for the two sizes. From these figures, we observe that ExpEuler gives fairly good approximations of the reference solutions. We also present the numerical results of the different methods mentioned above in Table 2. We see that the proposed methods perform quite well in terms of accomplished accuracy and computational time.

              GExpEuler        LrExpEuler       BrExpEuler       Erow3            Additive4
              Error     Time   Error     Time   Error     Time   Error     Time   Error     Time
rail371       8.92e-14  0.75   7.19e-12  0.11   7.07e-11  1.10   7.19e-12  0.16   3.38e-11  0.33
rand(371,2)   8.61e-14  0.99   4.23e-12  0.13   7.25e-11  1.18   4.22e-12  0.19   3.26e-11  0.42
rail1357      2.23e-14  21.29  7.66e-12  0.56   2.48e-10  14.55  7.66e-12  0.80   3.60e-11  0.52
rand(1357,2)  1.59e-14  29.54  4.18e-12  0.67   1.78e-10  16.11  4.27e-12  0.97   2.57e-11  0.56
Table 2: The CPU times and the relative errors with respect to the reference solution.
Figure 3: The F-norm of the solutions using LrExpEuler (o) and ode45 (-) on [0, 45].

Figure 4: The F-norm of the solutions using LrExpEuler (o) and ode45 (-) for the larger size on [0, 45].

Experiment 3. The third experiment is the same problem as in Experiment 1, but for larger dimensions; we keep the same setting as in Experiment 1. We compare the low-rank approximations LrExpEuler, Erow3 and BrExpEuler to the symmetric splittings of orders 4 and 6. For larger-scale stiff matrices, our tests indicate that the numerical integration formula (27) causes low computational accuracy; in the following tests we therefore use relatively small step sizes, except for BrExpEuler, which uses a larger one. Due to the system sizes, it is infeasible to use ode15s to compute an accurate reference solution. Instead, we use the eighth-order symmetric splitting scheme in Stillfjord2 with a small time step. Table 3 presents the relative errors and the corresponding computation times for each of the systems for different values of the system size. It is notable that BrExpEuler produces the smallest errors of all methods even though its time step is larger. This is due to the poor performance of the numerical quadrature for the φ-function; it shows that the exponential integrators have considerable computational potential and motivates us to explore other efficient numerical approximations to the φ-functions, which we plan to investigate in future work. In terms of CPU times, we observe that in some cases the exponential integration methods require more time than the symmetric splitting schemes. This effect might be weakened on a single-core machine, as the symmetric splitting methods employ parallel loops with four workers on our laptop. Again, LrExpEuler attains almost the same accuracy as Erow3, which illustrates the feasibility of adaptivity. We observe that for all methods the accuracy of the resulting solutions decreases as the size of the system and its stiffness increase; we attribute this mainly to the pessimistic reference solutions.

                        LrExpEuler        BrExpEuler        Erow3             Additive4         Additive6
Matrix      size(A)     Error     Time    Error     Time    Error     Time    Error     Time    Error     Time
fdm-sym     400x400     8.21e-07  16.64   1.46e-08  10.79   8.21e-07  41.31   3.68e-05  64.69   8.03e-07  86.99
            900x900     7.67e-05  110.77  3.06e-06  77.50   7.67e-05  191.85  5.39e-04  148.54  4.18e-05  189.66
            1600x1600   7.84e-04  500.91  6.21e-05  470.55  7.84e-04  711.84  2.53e-03  293.53  3.42e-04  348.13
            2500x2500   3.14e-03  1641.29 3.70e-04  1953.69 3.14e-03  2067.08 7.51e-03  541.69  1.15e-03  957.70
fdm-nonsym  400x400     1.18e-06  28.03   1.96e-08  19.37   1.18e-06  87.31   4.40e-05  116.79  1.03e-06  170.70
            900x900     7.80e-05  112.67  3.37e-06  116.65  7.80e-05  263.88  5.64e-04  392.62  4.51e-05  491.76
            1600x1600   8.16e-04  558.12  6.52e-05  702.36  8.16e-04  925.01  2.56e-03  917.33  3.58e-04  990.88
            2500x2500   3.22e-03  2013.00 3.92e-04  2822.60 3.22e-03  2629.20 8.00e-03  1671.76 1.22e-03  2120.35
Table 3: The CPU times and the relative errors with respect to the reference solution.

6 Conclusion

In this paper, we have shown how to apply exponential integrators to obtain approximate solutions of large stiff MRDEs. We addressed the low-rank implementation of such schemes for large-scale applications and compared them with current state-of-the-art integrators. Numerical experiments illustrate that the exponential integration methods can achieve convergence beyond the expected order. Thus the exponential integration schemes can provide an efficient alternative to standard integrators for large-scale stiff problems. The study of the performance and application of higher-order exponential integration schemes and their comparative performance with implicit schemes will be presented elsewhere. We also plan to develop new, more efficient algorithms to approximate the exponential and related functions of a Sylvester operator acting on a matrix.

References

  • (1) H. Abou-Kandil, G. Freiling, V. Ionescu and G. Jank, Matrix Riccati Equations in Control and Systems Theory, Birkhäuser, Basel, Switzerland, 2003.
  • (2) A. Al-Mohy and N. Higham, A new scaling and modified squaring algorithm for matrix functions, SIAM J. Matrix Anal. Appl. 31(2009), pp. 970-989.
  • (3) A. Al-Mohy and N. Higham, Computing the action of the matrix exponential, with an application to exponential integrators, SIAM J. Sci. Comput., 33 (2011), pp. 488-511.
  • (4) A.C. Antoulas, Approximation of Large-Scale Dynamical Systems, SIAM, Philadelphia, 2009.
  • (5) U.M. Ascher, R.M. Mattheij and R.G. Russell, Numerical Solution of Boundary Value Problems for Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, NJ, 1988.
  • (6) M. Behr, P. Benner and J. Heiland, Solution Formulas for Differential Sylvester and Lyapunov Equations, arXiv:1811.08327[math.NA], 2018.
  • (7) P. Benner, M. Köhler, and J. Saak, M.E.S.S.-the matrix equations sparse solvers library, https://www.mpi-magdeburg.mpg.de/projects/mess.
  • (8) P. Benner and H. Mena, Rosenbrock methods for solving Riccati differential equations, IEEE Trans. Automat. Control, 58(2013), pp. 2950-2956.
  • (9) P. Benner and J. Saak, A semi-discretized heat transfer model for optimal cooling of steel profiles, in Dimension Reduction of Large-Scale Systems, volume 45 of Lect. Notes Comput. Sci. Eng., P. Benner, V. Mehrmann, and D. Sorensen, Eds., Berlin/Heidelberg, Germany: Springer-Verlag, 2005, pp. 353-356.
  • (10) J.C. Butcher, Numerical Methods for Ordinary Differential Equations, John Wiley and Sons Ltd, Chichester, 2008.
  • (11) C.H. Choi and A. J. Laub, Efficient matrix-valued algorithms for solving stiff Riccati differential equations, IEEE T. Automat. Contr., 35 (1990), pp. 770-776.
  • (12) L. Dieci, Numerical integration of the differential Riccati equation and some related issues, SIAM J. Numer. Anal., 29 (1992), pp. 781-815.
  • (13) S. Fital, C.H. Guo, Convergence of the solution of a nonsymmetric matrix Riccati differential equation to its stable equilibrium solution, J. Math. Anal. Appl., 318 (2006), pp. 648-657.
  • (14) Y. Güldoǧan, M. Hached, K. Jbilou, M. Kurulay, Low rank approximate solutions to large-scale differential matrix Riccati equations, arXiv:1612.00499v2[math.NA], 2017.
  • (15) M. Hached and K. Jbilou, Approximate solution to large nonsymmetric differential Riccati problems, arXiv:1801.01291[math.NA], 2018.
  • (16) N.J. Higham, Functions of matrices: theory and computation, SIAM, Philadelphia, 2008.
  • (17) M. Hochbruck and A. Ostermann, Explicit Exponential Runge-Kutta Methods for Semilinear Parabolic Problems, SIAM J. Numer. Anal., 43 (2006), pp. 1069-1090.
  • (18) M. Hochbruck and A. Ostermann, Exponential multistep methods of Adams-type, BIT Numer. Math., 51 (2011), pp. 889-908.
  • (19) M. Hochbruck and A. Ostermann, Exponential Integrators, Acta Numer., 19 (2010), pp. 209-286.
  • (20) M. Hochbruck, A. Ostermann and J. Schweitzer, Exponential Rosenbrock-Type Methods, SIAM J. Numer. Anal., 47 (2009), pp. 786-803.
  • (21) A. Ichikawa and H. Katayama, Remarks on the time-varying Riccati equations, Systems Control Lett., 37 (1999), pp. 335-345.
  • (22) C. Moler, C.V. Loan, Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later, SIAM Review, 45 (2003), pp. 3-49.
  • (23) O.L.R. Jacobs, Introduction to Control Theory, Oxford Science Publications, Oxford, UK, 2nd ed., 1993.
  • (24) J. Juang, Global existence and stability of solutions of matrix Riccati equations, J. Math. Anal. Appl., 258(2001), pp. 1-12.
  • (25) A.K. Kassam and L.N. Trefethen, Fourth-order time stepping for stiff PDEs, SIAM J. Sci. Comput., 26 (2005), pp. 1214-1233.
  • (26) A. Koskela and H. Mena, A structure preserving Krylov subspace method for large scale differential Riccati equations, arXiv:1705.07507v1 (2017).
  • (27) V. Kučera, A review of the matrix Riccati equation, Kybernetika, 9 (1973), pp. 42-61.
  • (28) N. Lang, H. Mena and J. Saak, On the benefits of the factorization for large-scale differential matrix equation solvers, Linear Algebra Appl., 480 (2015), pp. 44-71.
  • (29) V.T. Luan and A. Ostermann, Exponential B-series: the stiff case, SIAM J. Numer. Anal., 51 (2013), pp. 3431-3445.
  • (30) H. Mena, A. Ostermann, L. Pfurtscheller, C. Piazzola, Numerical low-rank approximation of matrix differential equations, arXiv:1705.10175, 2017.
  • (31) B.V. Minchev and W.M. Wright, A review of exponential integrators for first order semi-linear problems, Tech. report 2/05, Department of Mathematics, NTNU, 2005.
  • (32) J. Niesen and W.M. Wright, Algorithm 919: A Krylov subspace algorithm for evaluating the phi-functions appearing in exponential integrators, ACM Trans. Math. Softw., 38 (3), Article 22, 2012.
  • (33) T. Penzl, LYAPACK: A MATLAB Toolbox for Large Lyapunov and Riccati Equations, Model Reduction Problems, and Linear-Quadratic Optimal Control Problems, Users’ Guide (Version 1.0), 1999.
  • (34) W.T. Reid, Riccati Differential Equations, Academic Press, New York, 1972.
  • (35) Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, Philadelphia, 2003.
  • (36) L.A. Safonov, V.V. Strygin, The convergence of the solution of a matrix Riccati equation to the maximal stationary solution in the critical case, Differential Equations, 38 (2002), pp. 29-34.
  • (37) R.B. Sidje, Expokit: A software package for computing matrix exponentials, ACM Trans. Math. Softw., 24 (1998), pp. 130-156.
  • (38) T. Stillfjord, Low-rank second-order splitting of large-scale differential Riccati equations, IEEE Trans. Automat. Control, 60 (2015), pp. 2791-2796.
  • (39) T. Stillfjord, Adaptive high-order splitting schemes for large-scale differential Riccati equations, Numer. Algor., 78 (2018), pp. 1129-1151.
  • (40) T. Stillfjord, Singular value decay of operator-valued differential Lyapunov and Riccati equations, SIAM J. Control Optim., 56 (2018), pp. 3598-3618.
  • (41) V. Simoncini, A new iterative method for solving large-scale Lyapunov matrix equations, SIAM J. Scient. Computing, 29 (03) (2007), pp. 1268-1288.