## 1 Introduction

Many dynamical systems of practical interest consist of multiple components that evolve at different characteristic time scales. As a representative model we consider a two-way additively partitioned ordinary differential equation (ODE) driven by a slow process $f^{\mathrm{s}}$ and a fast process $f^{\mathrm{f}}$, acting simultaneously:

$$y' = f(y) := f^{\mathrm{s}}(y) + f^{\mathrm{f}}(y), \qquad y(t_0) = y_0. \tag{1}$$

Multirate methods are efficient numerical solution strategies for (1) that use small step sizes to discretize the fast components, and large step sizes to discretize the slow components. The approach goes back to the pioneering work on multirate Runge-Kutta methods of Rice [38] and Andrus [2, 3].

The main approach for the construction of multirate methods is to employ traditional integrators with different time steps for different components, and to couple the fast and slow components so as to ensure the overall stability and accuracy of the discretization. Following this philosophy, multirate schemes have been developed in the context of Runge-Kutta methods [9, 19, 20, 30, 31], linear multistep methods [15, 26, 39], Rosenbrock-W methods [18], extrapolation methods [10, 11, 14, 40], Galerkin discretizations [33], and combined multiscale methodologies [13].

Multirate time integration has been adopted by many applications including the simulation of electronic circuits [20, 8], subsurface flows [1], ocean modeling [12, 46], energy conversion [36], multibody dynamics [4], electromagnetic fields [16], and fluids in complex geometries [35], to name just a few.

In a seminal paper Knoth and Wolke [29] proposed a hybrid approach to solve (1). First, the slow component is discretized with an $s$-stage explicit Runge-Kutta method with increasing abscissae. Next, the solution is advanced between the consecutive stages of this method by solving a modified fast ODE system. For autonomous systems (1) the solution process reads:

$$Y_1 = y_n; \qquad \begin{cases} v_i(0) = Y_i, \\ v_i'(\theta) = f^{\mathrm{f}}(v_i) + \displaystyle\sum_{j=1}^{i} \frac{a_{i+1,j} - a_{i,j}}{c_{i+1} - c_i}\, f^{\mathrm{s}}(Y_j), \quad \theta \in \big[0, (c_{i+1} - c_i) H\big], \\ Y_{i+1} = v_i\big((c_{i+1} - c_i) H\big), \end{cases} \quad i = 1, \dots, s; \qquad y_{n+1} = Y_{s+1}, \tag{2}$$

with the convention $a_{s+1,j} := b_j$ and $c_{s+1} := 1$.

The modified fast ODE (2) is composed of the original fast component from (1) plus a piece-wise constant “slow tendency” term. Wensch, Knoth, and Galant [48] generalized (2) by adding linear combinations of the stages to both the initial condition and the slow tendency term. Since the fast ODE is solved exactly (or, equivalently, is solved numerically with an infinitesimally small step size), they aptly named the resulting schemes “multirate infinitesimal step” (MIS) methods. The methods were interpreted as exponential integrators, and order conditions up to order three were derived. Schlegel, Knoth, Wolke, and Arnold cast MIS schemes as traditional partitioned Runge-Kutta methods [44], and furthered the theoretical and practical understanding of this family [28, 42, 34, 43].
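To make the hybrid construction concrete, the following sketch (in Python; all function and variable names are ours, not from [29]) implements one macro-step of a Knoth-Wolke-type process: between consecutive slow stages, the fast ODE plus a piecewise-constant slow tendency is integrated, here with RK4 micro-steps standing in for an exact solve.

```python
import numpy as np

def mis_step(y, H, f_slow, f_fast, A, b, c, micro=50):
    """One hybrid (MIS-type) macro-step in the spirit of (2): between
    consecutive slow stages, the fast ODE augmented with a piecewise-constant
    slow tendency is integrated with many RK4 micro-steps (a stand-in for an
    exact solve)."""
    A_ext = np.vstack([A, b])   # padded tableau: last row holds the weights b
    c_ext = np.append(c, 1.0)   # padded abscissae: c_{s+1} = 1
    Y = [np.array(y, dtype=float)]
    for i in range(len(b)):
        dc = c_ext[i + 1] - c_ext[i]
        if dc == 0.0:
            Y.append(Y[i].copy())
            continue
        # constant slow tendency on this interval; this choice reproduces the
        # base method exactly when f_fast vanishes
        g = sum((A_ext[i + 1, j] - A_ext[i, j]) * f_slow(Y[j])
                for j in range(i + 1)) / dc
        v, h = Y[i].copy(), dc * H / micro
        rhs = lambda w: f_fast(w) + g
        for _ in range(micro):  # classical RK4 micro-steps
            k1 = rhs(v); k2 = rhs(v + 0.5 * h * k1)
            k3 = rhs(v + 0.5 * h * k2); k4 = rhs(v + h * k3)
            v = v + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        Y.append(v)
    return Y[-1]
```

With the explicit midpoint base tableau and a split linear problem $y' = \lambda^{\mathrm{s}} y + \lambda^{\mathrm{f}} y$, halving $H$ reduces the global error by roughly a factor of four, consistent with second order.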

The hybrid dynamical approach has proved fruitful not only in the construction of new numerical schemes, but also in the analysis of existing ones. For example, a hybrid approach has been used to abstract away the details of subsystem integration and study only the coupling aspects of waveform relaxation methods [5, 6]. The hybrid dynamical approach is also related to the engineering concept of co-simulation [17, 45].

Günther and Sandu [21] explained MIS schemes in the framework of General-structure Additive Runge-Kutta (GARK) methods [41]. A multirate GARK (MR-GARK) method [21] integrates the slow component with a Runge–Kutta method and a large step size $H$, and the fast component with another Runge–Kutta method and a small step size $h = H/M$. Here $M$ represents the (integer) number of fast steps that are executed for each of the slow steps. One step of the method computes the slow stages, denoted by $Y_i^{\mathrm{s}}$, and the fast stages, denoted by $Y_i^{\mathrm{f},\lambda}$:

(3a)

(3b)

(3c)

(3d)

The Butcher tableau for the MR-GARK method (3) is [21]:

(4)

This work constructs a family of multirate infinitesimal GARK schemes (named MRI-GARK) that extend the hybrid dynamics approach of [29, 48] in several ways. Time dependent “slow tendency” terms are used to construct the modified fast system. The MRI-GARK general order conditions theory is developed by leveraging the GARK accuracy theory [41]. This allows the construction of the first fourth order multirate infinitesimal methods, as well as the first multirate infinitesimal schemes that are implicit in the slow component. Matrix stability analyses are carried out using a new simplified two-dimensional linear test problem.

The paper is organized as follows. The new family of MRI-GARK schemes is defined in Section 2. The order condition theory is developed in Section 3, and the stability analysis in Section 4. Practical explicit methods of order up to four are presented in Section 6, and decoupled implicit methods in Section 7. Numerical results are reported in Section 8, and conclusions are drawn in Section 9.

## 2 Multirate Infinitesimal GARK Methods

We start with a “slow” Runge-Kutta base method with non-decreasing abscissae:

$$\left(\begin{array}{c|c} c^{\mathrm{s}} & A^{\mathrm{s}} \\ \hline & b^{\mathrm{s}\,T} \end{array}\right), \qquad c_1^{\mathrm{s}} \le c_2^{\mathrm{s}} \le \dots \le c_{s^{\mathrm{s}}}^{\mathrm{s}}, \tag{5a}$$

and define the increments between consecutive stages of the base method:

$$\Delta c_i^{\mathrm{s}} := c_{i+1}^{\mathrm{s}} - c_i^{\mathrm{s}}. \tag{5b}$$

###### Definition 1 (MRI-GARK methods for additively partitioned systems)

A Multirate Infinitesimal GARK (MRI-GARK) scheme applied to the additively partitioned system (1) advances the solution from $y_n$ to $y_{n+1}$ as follows:

$$Y_1^{\mathrm{s}} := y_n, \tag{6a}$$

$$\left\{\begin{aligned} v_i(0) &:= Y_i^{\mathrm{s}}, \\ v_i'(\theta) &= \Delta c_i^{\mathrm{s}}\, f^{\mathrm{f}}(v_i) + \sum_{j} \gamma_{i,j}\!\left(\tfrac{\theta}{H}\right) f^{\mathrm{s}}(Y_j^{\mathrm{s}}), \quad \theta \in [0, H], \\ Y_{i+1}^{\mathrm{s}} &:= v_i(H), \end{aligned}\right. \qquad i = 1, \dots, s^{\mathrm{s}}, \tag{6b}$$

$$y_{n+1} := Y_{s^{\mathrm{s}}+1}^{\mathrm{s}}. \tag{6c}$$

Linear combinations of the slow function values are added as forcing to the modified fast ODE system (6b); in order to use only already computed slow stages one needs $\gamma_{i,j}(\tau) \equiv 0$ for $j > i$.
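As an illustration of Definition 1, the sketch below (Python; the coefficient choices and all names are ours, for illustration only) advances one MRI-GARK macro-step, with each stage ODE (6b) integrated by RK4 micro-steps and the tendency coefficients supplied as polynomials in $\tau = \theta/H$.

```python
import numpy as np

def mri_gark_step(y, H, f_slow, f_fast, dc, gammas, micro=40):
    """One MRI-GARK macro-step: stage i solves
        v'(theta) = dc[i] * f_fast(v) + sum_j gamma_ij(theta/H) * f_slow(Y_j)
    on theta in [0, H].  gammas[i][j] is the coefficient list (constant term
    first) of the polynomial gamma_ij(tau); RK4 micro-steps stand in for an
    exact solve of the stage ODE."""
    Y = [np.array(y, dtype=float)]
    for i, dci in enumerate(dc):
        slow_vals = [f_slow(Yj) for Yj in Y]  # frozen slow function values

        def tendency(tau):
            # sum_j gamma_ij(tau) * f_slow(Y_j); polyval wants high degree first
            return sum(np.polyval(g[::-1], tau) * fv
                       for g, fv in zip(gammas[i], slow_vals))

        v, h = Y[i].copy(), H / micro
        for m in range(micro):  # RK4 on the non-autonomous stage ODE
            th = m * h
            k1 = dci * f_fast(v) + tendency(th / H)
            k2 = dci * f_fast(v + 0.5 * h * k1) + tendency((th + 0.5 * h) / H)
            k3 = dci * f_fast(v + 0.5 * h * k2) + tendency((th + 0.5 * h) / H)
            k4 = dci * f_fast(v + h * k3) + tendency((th + h) / H)
            v = v + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        Y.append(v)
    return Y[-1]
```

With a vanishing fast component and the constant tendency coefficients $\overline{\Gamma} = \begin{pmatrix} 1/2 & 0 \\ -1/2 & 1 \end{pmatrix}$ (an assumed illustrative choice), the step reproduces the explicit midpoint rule exactly, as the definition requires of an internally consistent scheme.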

###### Definition 2 (Slow tendency coefficients)

It is convenient to define the time-dependent combination coefficients as polynomials in time, and to consider their integrals:

$$\gamma_{i,j}(\tau) := \sum_{k \ge 0} \gamma_{i,j}^{(k)}\, \tau^{k}, \qquad \overline{\gamma}_{i,j} := \int_0^1 \gamma_{i,j}(\tau)\, d\tau = \sum_{k \ge 0} \frac{\gamma_{i,j}^{(k)}}{k+1}. \tag{7}$$
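As a quick sanity check of the integral relation above (the particular coefficients are arbitrary, chosen by us for illustration), integrating a polynomial $\gamma_{i,j}(\tau)$ over $[0,1]$ term by term gives $\sum_k \gamma_{i,j}^{(k)}/(k+1)$:

```python
import numpy as np

g = [0.3, -1.2, 0.75]   # arbitrary example coefficients gamma^{(k)}
gamma = lambda tau: sum(gk * tau**k for k, gk in enumerate(g))

# midpoint-rule quadrature of gamma over [0, 1]
N = 20000
tau = (np.arange(N) + 0.5) / N
quad = gamma(tau).mean()

# closed-form integral: sum_k gamma^{(k)} / (k + 1)
closed = sum(gk / (k + 1) for k, gk in enumerate(g))
```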

###### Remark 1 (Equal abscissae)

###### Remark 2 (Embedded method)

The base slow scheme (5a) computes the main solution using the weights $b^{\mathrm{s}}$, and the embedded solution using the weights $\widehat{b}^{\mathrm{s}}$. The MRI-GARK scheme computes the main solution (6c) via the last stage (6b), which advances $Y_{s^{\mathrm{s}}}^{\mathrm{s}}$ to $y_{n+1}$ by solving a modified fast system with the slow weights $b^{\mathrm{s}}$. An embedded solution $\widehat{y}_{n+1}$ is computed via an additional stage that advances $Y_{s^{\mathrm{s}}}^{\mathrm{s}}$ using the modified weights $\widehat{b}^{\mathrm{s}}$.

###### Definition 3 (MRI-GARK methods for component partitioned systems)

Consider the component partitioned system:

$$\begin{aligned} {y^{\mathrm{s}}}' &= f^{\mathrm{s}}(y^{\mathrm{s}}, y^{\mathrm{f}}), \\ {y^{\mathrm{f}}}' &= f^{\mathrm{f}}(y^{\mathrm{s}}, y^{\mathrm{f}}). \end{aligned} \tag{8}$$

An MRI-GARK scheme applied to (8) computes the solution as follows:

(9a)

(9b)

(9c)

We require that $\gamma_{i,j}(\tau) \equiv 0$ for $j > i$, such that only previously computed slow stage values are used in the formulation of the modified fast ODE. The additive (6) and component-based (9) formulations are equivalent to each other.

###### Example 1 (Second order MRI-GARK methods)

Consider the explicit midpoint method and the implicit trapezoidal method, each paired with a first order embedded scheme:

The explicit midpoint method is the slow component (5a) of the following second order explicit MRI-GARK scheme (6):

(10)

The implicit trapezoidal method is the slow component (5a) of the following second order implicit MRI-GARK scheme (6):

(11)

## 3 Order conditions

For accuracy analysis we replace the continuous fast process by a discrete Runge-Kutta method of arbitrary accuracy, having an arbitrary number of stages. This approach casts the MRI-GARK scheme (6) into the multirate GARK framework (3), where each fast sub-step is carried out between one slow stage and the next, advancing the fast solution from $Y_i^{\mathrm{s}}$ to $Y_{i+1}^{\mathrm{s}}$.
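The replacement argument can be seen numerically on a scalar modified fast ODE $v' = \lambda^{\mathrm{f}} v + g$ with a constant tendency $g$ (an illustrative setup of ours): as the discrete fast method, here an RK4 micro-stepper, is refined, its result converges at the method's order to the exact stage solution used by the infinitesimal formulation.

```python
import numpy as np

lam_f, g, v0, T = -8.0, 1.3, 1.0, 0.5   # v' = lam_f * v + g,  v(0) = v0

def rk4_solve(m):
    """Integrate the modified fast ODE on [0, T] with m RK4 micro-steps."""
    v, h = v0, T / m
    rhs = lambda w: lam_f * w + g
    for _ in range(m):
        k1 = rhs(v); k2 = rhs(v + 0.5 * h * k1)
        k3 = rhs(v + 0.5 * h * k2); k4 = rhs(v + h * k3)
        v += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return v

# exact (infinitesimal) stage solution via the variation-of-constants formula
exact = np.exp(lam_f * T) * v0 + np.expm1(lam_f * T) / lam_f * g
err_coarse = abs(rk4_solve(4) - exact)
err_fine = abs(rk4_solve(8) - exact)
```

Doubling the number of micro-steps shrinks the gap to the exact stage solution by roughly the fourth-order factor, so in the limit the discrete cast recovers the infinitesimal scheme.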

###### Remark 3 (Matrix notation)

It is convenient to gather the coefficients $\gamma_{i,j}^{(k)}$ in the matrices $\Gamma^{(k)} := \big(\gamma_{i,j}^{(k)}\big)_{i,j}$, and express (7) as follows:

The structure of the coefficient matrices is lower triangular, in the sense that:

(12) |

###### Lemma 1 (The MRI-GARK method (6) as a particular instance of a GARK method)

##### Fast component

(13a)

(13b)

(13c)

Here $s^{\mathrm{f}}$ is the total number of stages of the fast method and $\otimes$ is the Kronecker product.

##### Slow component

(14a)

(14b)

(14c)

##### Slow-fast coupling

(15a)

(15b)

where for each $\lambda$ we define:

##### Fast-slow coupling

(16a)

(16b)

###### Proof

Replacing the $i$-th continuous stage (6b) by the equivalent $i$-th fast discrete sub-step, advancing from $Y_i^{\mathrm{s}}$ to $Y_{i+1}^{\mathrm{s}}$, leads to:

(17a)

where $Y_j^{\mathrm{f},i}$ denotes the $j$-th stage of the $i$-th fast discrete sub-step. The corresponding slow stages are advanced as follows:

(17b)

Iterating the recurrence (17b) over the slow stages yields:

(17c)

where the last equation writes (17c) as a slow MR-GARK stage (3a). From this we identify the slow method coefficients as:

Using (7) and the fact that $b^{\mathrm{f}\,T} \big(c^{\mathrm{f}}\big)^{k} = \frac{1}{k+1}$ for any $k \ge 0$ (with the power taken element-wise), which holds due to the arbitrary accuracy of the fast scheme, we have that:

Consequently, the slow method coefficients satisfy:

which establishes the equations (14).

We identify the slow-fast coupling coefficients as follows:

which establishes (15a). Moreover:

which proves (15b).

### 3.1 Order conditions

In this section we derive the order conditions of MRI-GARK schemes up to order four. First, we define a set of useful coefficients.

###### Definition 4 (Some B-series coefficients)

Consider the following Butcher tree [22]:

where $\tau$ denotes the tree of order one and $[\,\cdot\,, \dots, \cdot\,]$ the operation of joining subtrees by a root, and the following B-series coefficients of the exact solution of the fast sub-system:

(19a)

For a fast Runge-Kutta method of arbitrary accuracy it holds that:

(19b) |

Using (19) we define the following matrix:

(20) |

#### 3.1.1 Internal consistency

###### Theorem 1 (Internal consistency conditions)

The MRI-GARK method fulfills the “internal consistency” conditions:

(21) |

for any fast method iff the following conditions hold:

(22) |

###### Proof

From (14c) and (15b) we see that the first equation of (21) is automatically satisfied. Next, we write the second internal consistency equation (21) in the following equivalent form by equating (18) and (16b):

This relation has to be satisfied for any choice of the fast discretization method. To achieve this we equate the coefficients of like powers separately, obtaining:

which establishes (22).

###### Remark 4

Assuming that both the fast and the slow methods have order at least two, the internal consistency conditions (22) imply that the overall scheme is second order.

#### 3.1.2 Fast order conditions

Since the fast component is solved exactly (or, equivalently, using a base Runge-Kutta scheme of arbitrary accuracy), the fast order conditions at any order are automatically satisfied.

#### 3.1.3 Slow order conditions

The slow component of the MRI-GARK method (14) is the Runge-Kutta base scheme (5a). The MRI-GARK slow order conditions are therefore fulfilled by selecting a slow base method of the desired order.

###### Remark 5 (Coefficients of the slow base method)

###### Remark 6 (Coefficients of the slow embedded method)

Consider now the embedded slow method with weights $\widehat{b}^{\mathrm{s}}$, and the coefficients of the corresponding MRI-GARK method given by equations (14a) and (14b). From the structure of the error controller discussed in Remark 2, the coefficients of the main and the embedded methods differ only in the last row. Using (23), equation (14b) for the embedded method reads:

(24) |

#### 3.1.4 Third order coupling conditions

###### Theorem 2 (Third order coupling condition)

###### Proof

For internally consistent MR-GARK schemes there are two third order coupling conditions [21, 41]. We proceed with checking them. If the slow base method is of at least third order, then (25) implies that the first coupling condition is automatically satisfied:

Using (29) the second coupling condition reads:

which establishes (30).

#### 3.1.5 Fourth order conditions

###### Theorem 3 (Fourth order coupling conditions)

###### Proof

For internally consistent MR-GARK schemes there are ten coupling conditions for order four [41]. We proceed with checking each of them.

Condition a.

Condition b. Using (25) and the fourth order of the slow base method one checks that the second MR-GARK coupling condition is satisfied automatically:

Condition d. Using (15), (13c), and (28) the fourth coupling condition reads:

We have that:

(32)

for order four slow base methods with . Consequently, the fourth MR-GARK coupling condition is automatically satisfied.

Condition e. Using (25) one verifies that the fifth coupling condition always holds:

Condition j. Using (16a) and (29) the tenth coupling condition reads:

which is the same as condition (31b).

## 4 Linear stability analysis

### 4.1 Scalar stability analysis

For additively partitioned systems (1) we consider the following scalar model problem:

$$y' = \lambda^{\mathrm{s}}\, y + \lambda^{\mathrm{f}}\, y, \qquad \lambda^{\mathrm{s}}, \lambda^{\mathrm{f}} \in \mathbb{C}. \tag{33}$$

Let $z^{\mathrm{s}} := H \lambda^{\mathrm{s}}$ and $z^{\mathrm{f}} := H \lambda^{\mathrm{f}}$. The stage computation (6b) applied to (33) solves exactly a linear ODE with the following analytical solution:

where the coefficients are functions of the fast variable:

and are defined using the following family of analytical functions:

$$\varphi_0(z) := e^{z}, \qquad \varphi_k(z) := \int_0^1 \frac{\tau^{k-1}}{(k-1)!}\, e^{z(1-\tau)}\, d\tau, \quad k \ge 1. \tag{34}$$

The MRI-GARK scheme (9) applied to (33) yields a stability function $R(z^{\mathrm{s}}, z^{\mathrm{f}})$ that depends on both the slow and the fast variables:

###### Definition 5 (Scalar stability)

The scalar slow stability region is defined as:

$$\mathcal{S}_{\alpha,\rho} := \left\{ z^{\mathrm{s}} \in \mathbb{C} \;:\; \big| R(z^{\mathrm{s}}, z^{\mathrm{f}}) \big| \le 1 \ \ \forall\, z^{\mathrm{f}} \in \mathbb{C}^{-} \text{ with } \big| \arg(z^{\mathrm{f}}) - \pi \big| \le \alpha, \ |z^{\mathrm{f}}| \le \rho \right\}. \tag{35}$$

Thus $z^{\mathrm{s}} \in \mathcal{S}_{\alpha,\rho}$ ensures that the solution is stable for any system (33) with $z^{\mathrm{f}}$ in an $\alpha$-wedge in the left complex semi-plane and of absolute value bounded by $\rho$.

###### Example 2 (Stability functions of second order methods)

The stability function of the explicit midpoint MRI-GARK scheme (10) is a quadratic polynomial in $z^{\mathrm{s}}$, with coefficients that are analytic functions of $z^{\mathrm{f}}$:

Similarly, the stability function of the implicit trapezoidal MRI-GARK scheme (11) is a rational function in $z^{\mathrm{s}}$, with coefficients that are analytic functions of $z^{\mathrm{f}}$:
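Since the coefficients of (10) are not reproduced above, the sketch below evaluates the stability function of a midpoint-type scheme with piecewise-constant tendency coefficients (the classical MIS choice; our own stand-in construction, not necessarily identical to (10)). Each stage of (6b) applied to (33) is propagated exactly using the exponential and the function $\varphi_1(z) = (e^{z} - 1)/z$:

```python
import numpy as np

def phi1(z):
    """phi_1(z) = (exp(z) - 1) / z, with the removable singularity at 0 handled."""
    return 1.0 + z / 2.0 if abs(z) < 1e-8 else (np.exp(z) - 1.0) / z

def R(zs, zf):
    """Stability function of a midpoint-type scheme with constant tendency
    coefficients Gamma = [[1/2], [-1/2, 1]] and stage lengths dc = [1/2, 1/2];
    each stage ODE is solved exactly for the scalar problem (33)."""
    G = [[0.5], [-0.5, 1.0]]
    dc = [0.5, 0.5]
    Y = [1.0 + 0.0j]
    for i in range(2):
        E = np.exp(dc[i] * zf)                       # exact fast propagator
        forcing = zs * sum(G[i][j] * Y[j] for j in range(len(G[i])))
        Y.append(E * Y[i] + phi1(dc[i] * zf) * forcing)
    return Y[-1]
```

For $z^{\mathrm{f}} = 0$ this $R$ reduces to the stability polynomial of the explicit midpoint rule, $1 + z^{\mathrm{s}} + (z^{\mathrm{s}})^2/2$, and for small arguments it agrees with the exact propagator $e^{z^{\mathrm{s}} + z^{\mathrm{f}}}$ to second order.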

### 4.2 Matrix stability analysis

Following Kværnø [30], for component partitioned systems (8) we consider the following model problem:

(36) |

where

The eigenvalues of

are linear combinations of the slow and fast diagonal terms, $\lambda^{\mathrm{s}}$ and $\lambda^{\mathrm{f}}$. For the fast sub-system has a weak influence on the slow one; the first eigenvalue is slow and the second one is fast. For the slow sub-system has a weak influence on the fast one, and the first eigenvalue is fast.

###### Remark 7 (Scale considerations)

In order to have the slow and fast contributions to of similar magnitude, and the contributions to of similar magnitude as well, one needs a coupling coefficient inversely proportional to the scale ratio of the system, .

The general system (36) is complex, and some simplifications are needed to ease the analysis. First, the factor represents a scaling of the fast variable, as can be seen by rewriting the system (36) in terms of the variables . To simplify the analysis we take , i.e., we assume that the multirate method is applied to the system (36) after rescaling the fast variables. It is possible, however, that the stability of some multirate schemes is affected by the scaling, in the sense that the same scheme, applied to systems with different scalings, has different stability properties.

In order to further simplify analysis we restrict ourselves to considering the case where
