1 Introduction
Many applications in science and engineering require the simulation of dynamical systems whose components evolve at different characteristic time scales. Such systems challenge traditional time discretizations that use a single step size for the entire system: either the fast components are resolved inaccurately, or the slow components are resolved more accurately than required, thereby increasing the computational cost. Multirate methods exploit the partitioning of a system into components with different time scales, using small step sizes to discretize the fast components and large step sizes to discretize the slow components.
A first approach to constructing multirate methods is to employ traditional integrators with different time steps for different components, and to carefully orchestrate the coupling between these components. Early efforts to develop multirate Runge–Kutta methods are due to Rice [Rice_1960] and Andrus [Andrus_1979, Andrus_1993]. In the first discussion of “multirate methods,” Gear and Wells [Gear_1984_MRLMM] propose pairing various linear multistep methods. This fundamental contribution already points to a number of challenges facing multirate methods, such as coupling, automatic step size selection, and efficiency of the overall computational process. Other work on multirate linear multistep schemes includes [Kato_1999]. Günther et al. [Guenther_2001_MRPRK, Guenther_1994_partitioncircuits] developed multirate methods for partitioned Runge–Kutta schemes, as well as Rosenbrock–W methods [Guenther_1997_ROW] of order three that are well-suited for treating systems with both stiff and non-stiff variables. Similarly, Kværnø and Rentrop [Kvaerno_2000_stabilityMRK, Kvaerno_1999_MRRK] constructed explicit multirate Runge–Kutta methods of order three. Bartel et al. [Bartel_2002_MRW] propose one-step methods where internal stages provide the coupling between the fast and slow components. Constantinescu and Sandu developed strong stability preserving (SSP) multirate methods of Runge–Kutta [Sandu_2007_MR_RK2] and linear multistep [Sandu_2009_MR_LMM] type that are suited for solving hyperbolic partial differential equations (PDEs).
Another approach, tracing back to Engstler et al. [Engstler_1997_MRextrapolation], derives multirate methods using Richardson extrapolation. Here, the solution is recursively improved on partitions of the system until the required tolerances are satisfied. An important advantage of such schemes is the naturally available dynamic partitioning of the system. Constantinescu and Sandu [Sandu_2008_MREXTRAP, Sandu_2013_extrapolatedMR, Sandu_2009_ICNAAMMultirate] considered explicit and implicit base methods for multirate extrapolation methods and studied their stability properties. Other multirate approaches include Galerkin [Logg_2003_MAG1] and combined multiscale [Engquist_2005] methodologies.
Günther and Sandu [Sandu_2016_GARKMR] built a class of multirate methods based on the generalized additive Runge–Kutta (GARK) framework [Sandu_2015_GARK]. Bremicker-Trübelhorn and Ortleb [Bremicker_2017_MGARK] developed third-order multirate GARK (MrGARK) methods for fluid–structure interaction, and allowed for non-uniform fast steps in the order conditions of the methods.
This study develops a systematic design approach for constructing multirate methods in the MrGARK framework of Günther and Sandu [Sandu_2016_GARKMR, Sandu_2015_GARK]. Several high-order schemes are constructed that combine implicit and explicit components for the fast and slow subsystems.
The paper is organized as follows. Section 2 reviews the GARK framework and the multirate GARK family. We study order conditions for high-order MrGARK methods in Section 3, and their scalar linear stability in Section 4. Section 5 provides insight into fast-slow coupling requirements. Section 6 discusses the design criteria for practical methods, and Section 7 studies error estimation and the adaptivity of both micro- and macrosteps. Newly constructed methods are listed in Section 8, and method coefficients are detailed in Appendix A. Numerical tests are performed on different test problems and the results are reported in Section 9.
2 Multirate generalized additive Runge–Kutta schemes (MrGARK)
In this section we review some background on MrGARK methods.
2.1 GARK methods
The generalized additive Runge–Kutta (GARK) methods introduced in [Sandu_2014_SCEE, Sandu_2015_GARK] allow the derivation of advanced multimethods for solving additively partitioned systems of ordinary differential equations:
(1) y' = \sum_{q=1}^{N} f^{\{q\}}(y), \quad y(t_0) = y_0,
where the right-hand side function is split into N partitions based on properties such as stiffness, nonlinearity, dynamical behavior, and evaluation cost. We note that additive partitioning includes the case of component partitioning, in which the state vector is split into a number of subsets.
A GARK method advances the numerical solution as follows [Sandu_2015_GARK]:
(2a) Y_i^{\{q\}} = y_n + h \sum_{m=1}^{N} \sum_{j=1}^{s^{\{m\}}} a_{i,j}^{\{q,m\}}\, f^{\{m\}}\big(Y_j^{\{m\}}\big), \quad i = 1,\dots,s^{\{q\}}, \quad q = 1,\dots,N,
(2b) y_{n+1} = y_n + h \sum_{q=1}^{N} \sum_{i=1}^{s^{\{q\}}} b_i^{\{q\}}\, f^{\{q\}}\big(Y_i^{\{q\}}\big).
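As a concrete (hedged) illustration of the step (2), the following Python sketch advances a scalar additively partitioned ODE with an explicit GARK method; the function name and data layout are our own, and the simple partition-by-partition sweep assumes the tableau coefficients only reference stages that are already available:

```python
def gark_step(y, h, fs, A, b):
    # One explicit GARK step for the additive system y' = sum_q f^{q}(y).
    # fs[q] is the partition right-hand side f^{q}; A[q][m] is the coupling
    # block a^{q,m} (size s^{q} x s^{m}); b[q] holds the weights b^{q}.
    N = len(fs)
    s = [len(b[q]) for q in range(N)]
    F = [[None] * s[q] for q in range(N)]  # stage derivatives f^{q}(Y_i^{q})
    for q in range(N):
        for i in range(s[q]):
            # Stage Y_i^{q} = y_n + h * sum_{m,j} a^{q,m}_{ij} f^{m}(Y_j^{m});
            # only already-computed stages with nonzero coefficients contribute.
            Y = y + h * sum(A[q][m][i][j] * F[m][j]
                            for m in range(N) for j in range(s[m])
                            if F[m][j] is not None and A[q][m][i][j] != 0.0)
            F[q][i] = fs[q](Y)
    # Solution update (2b)
    return y + h * sum(b[q][i] * F[q][i] for q in range(N) for i in range(s[q]))
```

For two one-stage forward-Euler partitions with zero coupling blocks the step reduces to additive forward Euler, y_{n+1} = y_n + h (f^{1}(y_n) + f^{2}(y_n)).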
2.2 Multirate GARK methods
In the case of a two-way partitioned system Eq. 1, with slow component f^{\{s\}} and fast component f^{\{f\}}, we have:
(3) y' = f^{\{s\}}(y) + f^{\{f\}}(y).
A multirate GARK method [Sandu_2015_GARK] integrates the slow component with a Runge–Kutta method (A^{\{s\}}, b^{\{s\}}, c^{\{s\}}) and a large step size H, and the fast component with another Runge–Kutta method (A^{\{f\}}, b^{\{f\}}, c^{\{f\}}) and a small step size h = H/M. Here M represents the (integer) number of fast steps that are executed for each of the slow steps. MrGARK methods are formally derived in [Sandu_2016_GARKMR]. One step of the method utilizes s^{\{s\}} slow stages, denoted by Y_i^{\{s\}}, and M \cdot s^{\{f\}} fast stages, denoted by Y_i^{\{f\},\lambda} for microsteps \lambda = 1,\dots,M:
(4a) Y_i^{\{s\}} = y_n + H \sum_{j=1}^{s^{\{s\}}} a_{i,j}^{\{s,s\}}\, f^{\{s\}}\big(Y_j^{\{s\}}\big) + h \sum_{\lambda=1}^{M} \sum_{j=1}^{s^{\{f\}}} a_{i,j}^{\{s,f\},\lambda}\, f^{\{f\}}\big(Y_j^{\{f\},\lambda}\big),
(4b) Y_i^{\{f\},\lambda} = \widetilde{y}_{\lambda-1} + h \sum_{j=1}^{s^{\{f\}}} a_{i,j}^{\{f,f\}}\, f^{\{f\}}\big(Y_j^{\{f\},\lambda}\big) + H \sum_{j=1}^{s^{\{s\}}} a_{i,j}^{\{f,s\},\lambda}\, f^{\{s\}}\big(Y_j^{\{s\}}\big),
(4c) \widetilde{y}_{\lambda} = \widetilde{y}_{\lambda-1} + h \sum_{i=1}^{s^{\{f\}}} b_i^{\{f\}}\, f^{\{f\}}\big(Y_i^{\{f\},\lambda}\big), \quad \widetilde{y}_0 = y_n.
The full step is then formed with:
(4d) y_{n+1} = y_n + h \sum_{\lambda=1}^{M} \sum_{i=1}^{s^{\{f\}}} b_i^{\{f\}}\, f^{\{f\}}\big(Y_i^{\{f\},\lambda}\big) + H \sum_{i=1}^{s^{\{s\}}} b_i^{\{s\}}\, f^{\{s\}}\big(Y_i^{\{s\}}\big).
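To make the computational pattern of one macrostep concrete, consider the simplest possible special case: forward Euler for both partitions (one slow stage, one fast stage per microstep), with the slow stage evaluated at y_n and its tendency held frozen during the M fast microsteps. This is a first-order illustration only, not one of the high-order schemes constructed in this paper; the function name and the trivial coupling choice are ours:

```python
def mr_euler_step(y, H, M, f_slow, f_fast):
    # One macrostep of a frozen-slow multirate Euler scheme:
    #   slow stage Y^{s} = y_n (evaluated first),
    #   then M fast Euler microsteps of size h = H/M carrying the frozen
    #   slow tendency f_slow(y_n) along, as in (4b)-(4c) with one stage each.
    h = H / M
    fslow_n = f_slow(y)          # slow tendency, frozen over the macrostep
    ytilde = y
    for _ in range(M):           # fast microsteps
        ytilde = ytilde + h * (f_fast(ytilde) + fslow_n)
    return ytilde
```

A Taylor expansion shows the update equals y_n + H (f_slow + f_fast)(y_n) + O(H^2), i.e., the scheme is first-order consistent for any M.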
Let the coupling between the two methods be described by the matrices A^{\{s,f\},\lambda} and A^{\{f,s\},\lambda} for \lambda = 1,\dots,M. The GARK Butcher tableau for the method (4) is [Sandu_2016_GARKMR]:
(5) 
Remark (Slow and fast stage numbers)
Definition 1 (Telescopic MrGARK schemes)
A telescopic MrGARK method [Sandu_2016_GARKMR] uses the same base scheme for the slow and the fast partitions:
(6) A^{\{s\}} = A^{\{f\}} =: A, \quad b^{\{s\}} = b^{\{f\}} =: b, \quad c^{\{s\}} = c^{\{f\}} =: c.
In this paper we focus on schemes with the telescopic property, since it allows a simple extension of the MrGARK method to an arbitrary number of partitions (time scales) using successively larger time steps, each an integer multiple of the previous one.
3 Order conditions for multirate GARK methods
We consider the following internal consistency conditions [Sandu_2016_GARKMR, Sandu_2015_GARK] to ensure that fast and slow right-hand side evaluations are performed at the same points in time:
(7) A^{\{f,s\},\lambda}\, \mathbb{1}_{s^{\{s\}}} = c^{\{f\},\lambda}, \qquad \frac{1}{M} \sum_{\lambda=1}^{M} A^{\{s,f\},\lambda}\, \mathbb{1}_{s^{\{f\}}} = c^{\{s\}},
where c^{\{f\},\lambda} := \big( (\lambda - 1)\, \mathbb{1}_{s^{\{f\}}} + c^{\{f\}} \big) / M.
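Assuming the fast-slow half of the internal consistency condition takes the form A^{\{f,s\},\lambda} 1 = ((\lambda-1) 1 + c^{\{f\}})/M (our reconstruction above), it can be verified mechanically for candidate coupling coefficients. A small checker in exact rational arithmetic, so that verification is not clouded by floating-point round-off (names are ours):

```python
from fractions import Fraction as Fr

def fast_slow_consistent(A_fs, c_fast, lam, M):
    # Check that the row sums of the fast-slow coupling block A^{f,s,lam}
    # equal (lam - 1 + c_i^{f}) / M for every fast stage i, i.e. that slow
    # right-hand side data enters fast stage i at that stage's time point.
    return all(sum(row, Fr(0)) == (lam - 1 + Fr(ci)) / M
               for row, ci in zip(A_fs, c_fast))
```

For example, with M = 2 micro-steps and fast abscissae c^{f} = (0, 1/2), the first microstep requires row sums (0, 1/4) and the second requires (1/2, 3/4).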
Assume that the internal consistency conditions Eq. 7 hold, and further assume that each of the base methods (A^{\{s\}}, b^{\{s\}}, c^{\{s\}}) and (A^{\{f\}}, b^{\{f\}}, c^{\{f\}}) has at least order four. Then the MrGARK scheme (4)–(5) has order four if and only if the following coupling conditions hold [Sandu_2016_GARKMR]:
(8a)  
(8b)  
(8c)  
(8d)  
(8e)  
(8f)  
(8g)  
(8h)  
(8i)  
(8j)  
(8k)  
(8l) 
Here “\odot” denotes component-wise multiplication of two vectors.
Without imposing any special structure on the coupling matrices, we can rewrite Eq. 8 using the block structure shown in Eq. 5. Consider the following intermediate simplifications:
(9a)  
(9b)  
(9c)  
(9d) 
The order conditions Eq. 8 can be written in terms of the base method and coupling coefficients as follows:
(10a)  
(10b)  
(10c)  
(10d)  
(10e)  
(10f)  
(10g)  
(10h)  
(10i)  
(10j)  
(10k)  
(10l) 
Remark (Design process)
A practical design procedure for MrGARK schemes is to first select the base slow and fast methods of the desired order, and then solve the order conditions Eq. 10 for the coupling coefficients A^{\{s,f\},\lambda} and A^{\{f,s\},\lambda}.
4 Linear stability analysis
Following [Sandu_2016_GARKMR, Sandu_2015_GARK] we consider the scalar model problem:
(11) y' = \lambda^{\{s\}}\, y + \lambda^{\{f\}}\, y,
where the ratio of the fast to the slow variable is M, the step size ratio. Denote z^{\{s\}} = H \lambda^{\{s\}}, z^{\{f\}} = H \lambda^{\{f\}}, and
(12) 
It was shown in [Sandu_2016_GARKMR, Sandu_2015_GARK] that the application of the MrGARK method Eq. 5 to Eq. 11 leads to the solution y_{n+1} = R(z^{\{s\}}, z^{\{f\}})\, y_n, with the stability function:
(13) 
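The amplification of y_n can also be read off numerically by applying one macrostep of a given multirate scheme to the scalar test problem. A hedged illustration using a frozen-slow multirate Euler scheme (a first-order toy, not one of the methods constructed in this paper):

```python
def mr_euler_amp(zs, zf, M):
    # Amplification factor R(zs, zf) of a frozen-slow multirate Euler scheme
    # applied to y' = lambda_s*y + lambda_f*y, with zs = H*lambda_s and
    # zf = H*lambda_f, using M fast microsteps of size h = H/M.
    h = 1.0 / M          # microstep measured in units of H
    amp = 1.0            # tracks y_tilde / y_n across microsteps
    for _ in range(M):
        # microstep: y <- y + h*(lambda_f*y + lambda_s*y_n)
        amp = amp * (1.0 + h * zf) + h * zs
    return amp
```

For M = 1 this collapses to the forward Euler stability function R = 1 + z^{\{s\}} + z^{\{f\}}; scanning |R| over a grid of (z^{\{s\}}, z^{\{f\}}) values traces out the stability region in the same way as for the full MrGARK stability function (13).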
In order to visualize the stability region of a method, we choose:
(14) 
such that the ratio of the variable magnitudes matches the ratio of the step sizes. This reduces Eq. 13 to a function of three real variables by assuming the fast eigenvalue is M times larger in magnitude than the slow eigenvalue. The stability region is the volume in this three-dimensional space where the magnitude of the stability function (13) is at most one. Stability regions for each method developed here are provided in Appendix A. For some of the methods the region shrinks with increasing M. The plots for different values of M correspond to different test problems: when M increases, the scale separation of the test problem also increases (we apply smaller microsteps to faster problems). The increasing stiffness of the fast component restricts the macrostep, a phenomenon known as stiffness leakage. Coupling coefficients whose magnitude grows rapidly with M can lead to a degradation of stability.
5 Decoupled MrGARK methods
We now discuss in detail the structure of the coupling coefficient matrices, and how this structure defines the way the fast and slow stage computations Eq. 4 are carried out, and therefore determines the practicality of the MrGARK method. In order to construct practical MrGARK methods we need to avoid complex couplings between multiple fast and slow stages. To this end we define decoupled MrGARK methods.
Definition 2 (Decoupled MrGARK methods)
An MrGARK method is decoupled if the computation of its stages proceeds in sequence, such that each slow stage uses only information from other slow stages and from already computed fast stages, and vice-versa. There is no coupling that requires fast and slow stages to be solved together. Any form of implicitness is contained entirely within the fast or within the slow system.
5.1 Structure of the slow-fast coupling (including fast information into the slow stage calculations)
We first introduce a notation for the order of computation of the slow stages with respect to the fast stages. Consider the i-th slow stage (4a), i.e., its corresponding row in the Butcher tableau Eq. 5. We denote its order of computation by the pair (\lambda(i), k(i)), i.e., the i-th slow stage is computed immediately after stage k(i) of the \lambda(i)-th microstep is computed. This means that the first \lambda(i)-1 microsteps have been completed, and the \lambda(i)-th microstep has partially progressed to compute stage k(i), when the i-th slow stage is evaluated.
When the slow stage is evaluated after the last stage of microstep \lambda, but before the first stage of the (\lambda+1)-st microstep, we have (\lambda(i), k(i)) = (\lambda, s^{\{f\}}). Equivalently, this situation can be represented by (\lambda(i), k(i)) = (\lambda+1, 0).
In order to construct practical MrGARK methods we require that the evaluation of the i-th slow stage depends only on those fast stages that have been completed; in the GARK Butcher tableau Eq. 5, the i-th slow stage may depend only on the first (\lambda(i)-1)\, s^{\{f\}} + k(i) fast rows.
The i-th row of the slow-fast coupling matrix:
(15a) \mathbf{A}^{\{s,f\}} := \big[\, A^{\{s,f\},1} \;\; A^{\{s,f\},2} \;\; \cdots \;\; A^{\{s,f\},M} \,\big]
contains the coefficients that bring fast information into the computation of slow stage i. A decoupled MrGARK scheme enjoys the following properties:

The coupling matrices with the completed microsteps \mu < \lambda(i) can have full rows.

Rows of the coupling matrix of the current microstep \lambda(i) can have nonzero entries only in the first k(i) positions.

The coupling matrices with the future microsteps \mu > \lambda(i) are zero.

Consequently, for decoupled MrGARK schemes the slow-fast coupling matrix is lower block triangular:
(15b) 
where we used the fact that fast stage j of microstep \lambda has GARK stage number (\lambda - 1)\, s^{\{f\}} + j in the Butcher tableau Eq. 5.
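The lower block-triangular constraint can be encoded programmatically. The following sketch (function and variable names are ours) builds the admissible boolean sparsity mask of the slow-fast coupling matrix from the computation-order pairs (\lambda(i), k(i)):

```python
def slow_fast_mask(M, sF, order_of_comp):
    # Admissible sparsity mask of the slow-fast coupling matrix A^{s,f}.
    # Rows: slow stages; columns: the M*sF fast stages in GARK numbering,
    # where fast stage j of microstep lam occupies column (lam-1)*sF + j.
    # order_of_comp[i] = (lam, k) means slow stage i is computed right after
    # stage k of microstep lam, so only the first (lam-1)*sF + k fast stages
    # are available to it.
    mask = []
    for (lam, k) in order_of_comp:
        avail = (lam - 1) * sF + k
        mask.append([col < avail for col in range(M * sF)])
    return mask
```

For M = 2, s^{\{f\}} = 2, and computation order [(1, 2), (2, 2)], the first slow stage may use only the two stages of the first microstep, while the second slow stage may use all four fast stages.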
5.2 Structure of the fast-slow coupling (including slow information into the fast stage calculations)
We next introduce a notation for the order of computation of the fast stages with respect to the slow stages. We denote by \sigma(\lambda, i) the index of the last slow stage computed before starting fast stage i of microstep \lambda. Specifically, fast stage i of microstep \lambda is computed after slow stage \sigma(\lambda, i), but before slow stage \sigma(\lambda, i) + 1. For a decoupled MrGARK method this stage can only use information from slow stages 1 to \sigma(\lambda, i). Consequently, the i-th row of the coupling matrix A^{\{f,s\},\lambda} can have nonzero entries only in the first \sigma(\lambda, i) columns.
The coupling matrix:
(16a)  
has a block lower triangular structure:  
(16b) 
where we used the fact that fast stage j of microstep \lambda has GARK stage number (\lambda - 1)\, s^{\{f\}} + j in the Butcher tableau Eq. 5.
5.3 Relation between the coupling matrices
From Eqs. 15 and 16 we see that the two coupling matrices \mathbf{A}^{\{s,f\}} and \mathbf{A}^{\{f,s\}} have related sparsity structures. For decoupled MrGARK methods the sparsity structures are complementary, in the sense that one matrix must have zeros in the entries where the other matrix can have nonzero elements:
(17) \mathbf{A}^{\{s,f\}} \odot \big(\mathbf{A}^{\{f,s\}}\big)^T = 0,
where \odot denotes element-by-element multiplication. This is schematically illustrated in Fig. 0(a).
For coupled MrGARK methods the sparsity structures can overlap, in the sense that both matrices can have nonzero entries in the same location:
(18) \mathbf{A}^{\{s,f\}} \odot \big(\mathbf{A}^{\{f,s\}}\big)^T \neq 0.
This is illustrated in Fig. 0(c). The overlapping nonzero coupling coefficients, indicated by dashed boxes in Fig. 0(c), imply that the corresponding fast and slow stages need to be computed together, in a step that involves the entire non-partitioned system. Coupled methods are not pursued further in this paper.
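The complementarity requirement reduces to a simple element-wise test on the two boolean sparsity masks (rows of the first mask index slow stages, rows of the second index fast stages); a sketch with names of our choosing:

```python
def is_decoupled(mask_sf, mask_fs):
    # Decoupled MrGARK: the slow-fast and fast-slow sparsity patterns must be
    # complementary, i.e. entry (i, j) of the slow-fast mask and entry (j, i)
    # of the fast-slow mask are never both True (the element-wise product of
    # A^{s,f} with the transpose of A^{f,s} vanishes).
    return all(not (mask_sf[i][j] and mask_fs[j][i])
               for i in range(len(mask_sf))
               for j in range(len(mask_sf[0])))
```

A True entry marks a position where the corresponding coupling coefficient is allowed to be nonzero; any overlapping pair would force a fast stage and a slow stage to be solved simultaneously.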
5.4 Order of the slow and fast stage evaluation
The sparsity structure of the coupling matrices \mathbf{A}^{\{s,f\}} and \mathbf{A}^{\{f,s\}} determines the order in which the fast and slow stages can be evaluated such as to respect the data dependencies between them. Using the notation defined in Section 5.1 and Section 5.2, the general order in which stages are evaluated is as follows:

Start by evaluating the fast stages needed for the computation of the first slow stage Y_1^{\{s\}}. If the entire first row of the slow-fast coupling matrix is zero, then proceed directly with evaluating the first slow stage Y_1^{\{s\}}.

After stage k(i) of the \lambda(i)-th microstep is computed, the i-th stage of the slow method is evaluated.

Continue with evaluating the stages of the fast method, and stop after stage k(i+1) of the \lambda(i+1)-st microstep to compute stage i+1 of the slow method.

Continue until all fast and slow stages are evaluated.
Remark (Time ordering)
Assuming that the slow stage abscissae are increasing, and that the fast stage abscissae are nondecreasing in time:
(19) c_1^{\{s\}} \le c_2^{\{s\}} \le \dots \le c_{s^{\{s\}}}^{\{s\}}, \qquad c_1^{\{f\}} \le c_2^{\{f\}} \le \dots \le c_{s^{\{f\}}}^{\{f\}},
a natural order to evaluate the MrGARK stages follows the time ordering of their abscissae. Note that the i-th slow stage approximates the solution at time t_n + c_i^{\{s\}} H, while the j-th fast stage of the \lambda-th microstep approximates the solution at time t_n + (\lambda - 1 + c_j^{\{f\}})\, h. The pairs (\lambda(i), k(i)) are chosen such that the stages are evaluated in increasing order of their approximation times:
(20a)  
(20b)  
Remark (Simpler time ordering)
It is possible to simplify the time ordering such that the slow stages are evaluated only at the ends of full microsteps. The slow stage Y_i^{\{s\}} is evaluated after the end of microstep \lambda, but before microstep \lambda + 1, if (\lambda - 1)/M < c_i^{\{s\}} \le \lambda/M, in which case \lambda(i) = \lambda and k(i) = s^{\{f\}}. When c_i^{\{s\}} = 0 we take \lambda(i) = 0.
5.5 Reordering the GARK Butcher tableau
In the standard form of the GARK Butcher tableau Eq. 5, the first M \cdot s^{\{f\}} stages belong to the fast method, and stages M \cdot s^{\{f\}} + 1 to M \cdot s^{\{f\}} + s^{\{s\}} belong to the slow method.
Let ic be the vector of GARK tableau stage indices sorted in the order of computation, as summarized in Fig. 2. A renumbering of the stages leads to a row and column permutation of the Butcher tableau Eq. 5; the reordered Butcher matrix is \mathbf{A}(ic, ic). The reordering is illustrated in Fig. 1. We distinguish the following cases:

If the reordered Butcher tableau is strictly lower triangular then the MrGARK method is explicit, and each stage uses only previously computed information.

If the reordered Butcher tableau is lower triangular, with some nonzero diagonal entries, then the MrGARK method is implicit, but decoupled. The nonzero diagonal entries correspond to implicit fast or slow stages. This case is illustrated in Fig. 0(b).

Finally, if the reordered Butcher tableau has nontrivial diagonal blocks, then the MrGARK method is coupled, in the sense that several fast and slow stages need to be solved together, in a coupled manner. This case is illustrated in Fig. 0(d).
As an example let us consider the GARK Butcher matrix for the explicit method EX2EX2 2(1)[A] introduced in Section A.1 for :
where . The permuted Butcher tableau confirms the explicit nature of the method, and also provides a progression of stage computations that is easy to implement in practice.
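The explicitness test described above reduces to permuting the Butcher matrix by the computation order and checking strict lower triangularity; a small sketch (names are ours):

```python
def is_explicit_in_order(A, ic):
    # Permute a GARK Butcher matrix A by the computation-order index vector
    # ic and check that the reordered matrix is strictly lower triangular,
    # i.e. every stage uses only previously computed stages.
    n = len(ic)
    P = [[A[ic[r]][ic[c]] for c in range(n)] for r in range(n)]
    return all(P[r][c] == 0 for r in range(n) for c in range(r, n))
```

A matrix that fails the check in the natural stage order may still pass under a suitable computation order, which is exactly the situation for decoupled explicit MrGARK schemes.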
6 Design of practical decoupled MrGARK methods
In this section we discuss several desirable properties of practical MrGARK methods, and the design methodologies to incorporate them in the construction of new high order schemes.
6.1 Design principles
The following set of design principles incorporates properties that are important for ensuring the practicality of MrGARK schemes:

Practical MrGARK methods need to be of high order (have an order of accuracy larger than two), and generic in the multirate ratio (have the same order of accuracy for any M). This typically requires that the coupling coefficients are functions of M. We have addressed these requirements via the order conditions derived in Section 3.

Methods where the fast and slow base schemes are both explicit, or both implicit, should be telescopic Eq. 6 in order to be easily applicable to systems partitioned into multiple time scales. All explicit-explicit methods proposed in this paper are telescopic.

In order to maintain computational efficiency, complex coupling between multiple fast and slow stages needs to be avoided. In this paper we focus on decoupled multirate methods satisfying Eq. 17, as they are far less computationally demanding than coupled methods. However, decoupling may affect the overall stability of the scheme.

Methods should be optimized for general-purpose time integration, i.e., have small principal error terms and large stability regions. Note that the principal error of an MrGARK method is a function of M. If not properly controlled, the coupling errors can grow with M, thereby reducing the overall efficiency of the method. All schemes derived herein have coefficients optimized for small principal error and a large stability region.

Consider the case where one of the base methods is implicit. A favorable property of the entire MrGARK method is stiff accuracy [Sandu_2016_GARKMR]:
(21) This property implies that the base implicit Runge–Kutta method is stiffly accurate, and that the MrGARK stability function approaches zero as the implicit component of the system becomes infinitely stiff [Sandu_2016_GARKMR]. All schemes derived herein that have an implicit base method enjoy property Eq. 21.

Practical use of MrGARK methods demands an error control mechanism capable of adapting both the macrostep size H and the multirate ratio M. The error control problem in multirate integration is therefore fundamentally more complex than in traditional, single rate integration. We fully address the error control issue in Section 7.

It is desirable to derive multirate methods with reduced coupling errors by requiring that both the main and the embedded methods satisfy higher order coupling conditions. This strategy isolates the dominant local truncation errors in the slow and fast components, and greatly simplifies the task of adaptively choosing H and M, as discussed in Section 7.
Definition 3 (Naturally adaptive schemes)
An MrGARK scheme of order p is naturally adaptive if the fast and slow local truncation error terms (corresponding to N-trees with nodes of the same color) are O(H^{p+1}), and the coupling local truncation error terms (corresponding to N-trees with nodes of different colors) are O(H^{p+2}).
We construct naturally adaptive second and third order schemes in Appendix A.

Methods of type S are optimized for simplicity and stability. First, by design, one seeks to maximize the sparsity of the coupling coefficient matrices, so as to simplify the implementation and decrease data dependencies. Next, the coupling coefficients can potentially become large in magnitude when M increases, which can lead to large cancellation errors in finite precision arithmetic, as well as to a degradation of numerical stability. It is therefore desirable to optimize the coupling coefficients such that their magnitude remains bounded for large values of M.
6.2 Design process
The order conditions Eq. 8 and the additional constraints associated with the design principles of Section 6.1 lead to large nonlinear systems of equations that need to be solved for the method coefficients. The resulting coupling coefficients A^{\{s,f\},\lambda} and A^{\{f,s\},\lambda} are functions of the microstep index \lambda and the multirate step size ratio M. Our proposed approach for designing MrGARK methods of order p is a three-step process, as follows.

The first step is to construct optimized base slow and fast schemes of order p. Implicit base methods are selected to be stiffly accurate SDIRK schemes. Explicit base methods are chosen to have small principal error components. When both the slow and the fast methods are of the same type (explicit or implicit) we choose the same coefficients (A, b, c), such as to obtain a telescopic MrGARK method.

The second step is to define the sparsity patterns of the coupling matrices A^{\{s,f\},\lambda} and A^{\{f,s\},\lambda}. These patterns determine the computational flow of the method and its implementation complexity, and influence the overall stability and accuracy properties. We manually test various sparsity patterns to balance all of these properties.

The third step computes the coupling coefficients A^{\{s,f\},\lambda} and A^{\{f,s\},\lambda} such as to satisfy the coupling conditions Eq. 8 up to order p. Any free parameters in the resulting family are used to minimize the Euclidean norm of the residuals of the order p + 1 coupling conditions; for naturally adaptive MrGARK methods all of these residuals are cancelled. In this work the solution of the order conditions and the minimization of the error coefficients are carried out with Mathematica Version 11.2.
An alternative to this derivation strategy is a monolithic constrained optimization procedure that minimizes the residuals of the order p + 1 coupling conditions, subject to satisfying the conditions Eq. 8 up to order p, and subject to structural constraints such as decoupling (17) and stiff accuracy (21).
The proposed three-step procedure is preferable for two reasons. First, we expect MrGARK methods to be applied to problems with a rather weak coupling between partitions, where the primary sources of error are the base methods, and not the coupling. Our approach gives precedence to reducing the base method errors first. Second, our procedure is practical, as it reduces the number of nonlinear equations that need to be solved simultaneously during the design.
7 Error estimation and adaptive MrGARK methods
Adaptivity of traditional (single rate) methods adjusts the step size so as to ensure the desired accuracy of the solution at a minimal computational effort. In the context of Runge–Kutta schemes, a second “embedded” method is used to provide an a posteriori estimate of the local truncation error [Hairer_book_I, Ch. II.4]. The step size is adjusted to bring the local error estimate to the user-prescribed level, using the asymptotic relation that this error is proportional to h^{p+1}.
Adaptivity of multirate methods is more complex, as there are two independent parameters that control the solution accuracy and efficiency: the macrostep size H and the multirate ratio M (or, equivalently, the macrostep H and the microstep h = H/M). In order to achieve adaptivity in both H and M we propose to construct error estimates using not one, but several embedded methods, such as to obtain additional information about the structure of the local truncation error. Analytical asymptotic formulas are used to quantify the dependency of the local error terms on both H and M. Armed with this information, we develop several criteria for adaptively selecting both the macrostep size and the multirate ratio.
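As a hedged sketch of how two separate error estimates can drive both parameters, suppose (for a naturally adaptive method of order p) the slow error behaves like C_s H^{p+1} and the fast error like C_f (H/M)^{p+1}. One simple controller, illustrative only and not the paper's exact formulas, picks H from the slow estimate and M from the fast one:

```python
import math

def adapt_H_and_M(H, M, err_slow, err_fast, tol, p, Mmax=64):
    # Joint macrostep / multirate-ratio controller sketch: the slow error
    # estimate drives the new macrostep H_new via the classic asymptotic
    # formula, and the fast error estimate drives the new microstep h_new;
    # the new multirate ratio is then M_new = ceil(H_new / h_new).
    safety = 0.9
    H_new = H * safety * (tol / max(err_slow, 1e-16)) ** (1.0 / (p + 1))
    # keep the microstep small enough that the fast error also meets tol
    h_new = (H / M) * safety * (tol / max(err_fast, 1e-16)) ** (1.0 / (p + 1))
    M_new = min(Mmax, max(1, math.ceil(H_new / h_new)))
    return H_new, M_new
```

When both estimates sit exactly at the tolerance, the controller keeps M fixed and applies only the safety factor to H, which is the expected single rate limit.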
7.1 Local error structure for a second order MrGARK method
In order to understand the structure of the local truncation error, we first consider a second order, internally consistent MrGARK scheme. The principal error terms, of order O(H^3), are associated with the third order conditions, and are provided in Table 1.
Table 1: Principal (third order) local error terms of a second order, internally consistent MrGARK method, and the corresponding residuals for method IM2EX2 2(1)[A]. Columns: error type, error term, elementary differential, order condition, and residual. The rows comprise two slow, two fast, and two coupling error terms.
As an example, consider the second order MrGARK method IM2EX2 2(1)[A] from Section A.4. We note from Table 1 that the third order residuals for this method are all asymptotically bounded as M increases, and the local truncation error behaves like: