Design of High-Order Decoupled Multirate GARK Schemes

04/20/2018 · by Arash Sarshar, et al. · Virginia Polytechnic Institute and State University

Multirate time integration methods apply different step sizes to resolve different components of the system based on the local activity levels. This local selection of step sizes allows increased computational efficiency while achieving the desired solution accuracy. While the multirate idea is elegant and has been around for decades, multirate methods are not yet widely used in applications. This is due, in part, to the difficulties raised by the construction of high order multirate schemes. Seeking to overcome these challenges, this work focuses on the design of practical high-order multirate methods using the theoretical framework of multirate generalized additive Runge-Kutta (MrGARK) methods, which provides the generic order conditions and the linear and nonlinear stability analyses. A set of design criteria for practical multirate methods is defined herein: method coefficients should be generic in the step size ratio, but should not depend strongly on this ratio; unnecessary coupling between the fast and the slow components should be avoided; and the step size controllers should adjust both the micro- and the macro-steps. Using these criteria, we develop MrGARK schemes of up to order four that are explicit-explicit (both the fast and the slow components are treated explicitly), implicit-explicit (implicit in the fast component and explicit in the slow one), and explicit-implicit (explicit in the fast component and implicit in the slow one). Numerical experiments illustrate the performance of these new schemes.







1 Introduction

Many applications in science and engineering require the simulation of dynamical systems where different components evolve at different characteristic time scales. Clearly, these systems challenge traditional time discretizations that use a single time step for the entire system: either the fast components are resolved inaccurately, or the slow components are resolved with more accuracy than required, therefore increasing computational costs. Multirate methods exploit the partitioning of a system into components with different time scales, and use small step sizes to discretize the fast components, and large step sizes to discretize the slow components.
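To make the idea concrete, the following sketch integrates a hypothetical fast-slow linear system with a multirate forward Euler scheme: M micro-steps for the fast variable per macro-step for the slow one, with the slow value frozen during the micro-steps ("slowest-first"). All names and parameter values are illustrative; this is not one of the schemes constructed later in the paper.

```python
# Illustrative multirate idea on a hypothetical fast-slow linear system:
#   y_f' = lam_f * y_f + y_s   (fast, resolved with M micro-steps of size h)
#   y_s' = lam_s * y_s         (slow, resolved with one macro-step of size H)
# The slow value is frozen while the fast micro-steps run ("slowest-first").

def multirate_euler(lam_f, lam_s, yf, ys, H, M, n_macro):
    h = H / M                          # micro-step for the fast component
    for _ in range(n_macro):
        for _ in range(M):             # M fast micro-steps per macro-step
            yf = yf + h * (lam_f * yf + ys)
        ys = ys + H * (lam_s * ys)     # one slow macro-step
    return yf, ys

# integrate to t = 2: the micro-step h = H/M keeps h*|lam_f| small
yf, ys = multirate_euler(lam_f=-50.0, lam_s=-1.0, yf=1.0, ys=1.0,
                         H=0.02, M=10, n_macro=100)
```

The slow variable is advanced with only one function evaluation per macro-step, while the fast variable receives the ten-times smaller step it needs for stability.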

A first approach to constructing multirate methods is to employ traditional integrators with different time steps for different components, and to carefully orchestrate the coupling between these components. Early efforts to develop multirate Runge-Kutta methods are due to Rice [Rice_1960] and Andrus [Andrus_1979, Andrus_1993]. In the first discussion of “multirate methods,” Gear and Wells [Gear_1984_MR-LMM] proposed pairing various linear multistep methods. This fundamental contribution already points to a number of challenges facing multirate methods, such as coupling, automatic step size selection, and efficiency of the overall computational process. Other work to construct multirate linear multistep schemes includes [Kato_1999]. Günther et al. [Guenther_2001_MR-PRK, Guenther_1994_partition-circuits] developed multirate methods for partitioned Runge-Kutta schemes, as well as Rosenbrock-W methods [Guenther_1997_ROW] of order three that are well-suited for the treatment of systems with both stiff and non-stiff variables. Similarly, Kværnø and Rentrop [Kvaerno_2000_stability-MRK, Kvaerno_1999_MR-RK] constructed explicit multirate Runge-Kutta methods of order three. Bartel et al. [Bartel_2002_MR-W] proposed one-step methods where internal stages are used to provide the coupling between the fast and slow components. Constantinescu and Sandu developed strong stability preserving (SSP) multirate methods of Runge-Kutta [Sandu_2007_MR_RK2] and linear multistep [Sandu_2009_MR_LMM] type that are suited for solving hyperbolic partial differential equations (PDEs).

Another approach, tracing back to Engstler et al. [Engstler_1997_MR-extrapolation], derives multirate methods using Richardson extrapolation. Here, the solution is recursively improved on partitions of the system until required tolerances are satisfied. An important advantage of such schemes is the naturally available dynamic partitioning of the system. Constantinescu and Sandu [Sandu_2008_MR-EXTRAP, Sandu_2013_extrapolatedMR, Sandu_2009_ICNAAM-Multirate] considered explicit and implicit base methods for multirate extrapolation methods and study their stability properties. Other multirate approaches include Galerkin [Logg_2003_MAG1], and combined multiscale [Engquist_2005] methodologies.

Günther and Sandu [Sandu_2016_GARK-MR] built a class of multirate methods based on the generalized additive Runge–Kutta framework (GARK) [Sandu_2015_GARK]. Bremicker-Trübelhorn and Ortleb [Bremicker_2017_MGARK] developed third order multirate GARK (MrGARK) methods for fluid-structure interaction, and allowed for non-uniform fast steps in the order conditions of the methods.

This study develops a systematic design approach for constructing multirate methods in the MrGARK framework of Günther and Sandu [Sandu_2016_GARK-MR, Sandu_2015_GARK]. Several high-order schemes are constructed that combine implicit and explicit components for the fast and slow subsystems.

The paper is organized as follows. Section 2 reviews the GARK framework and the multirate GARK family. We study order conditions for high order MrGARK methods in Section 3, and their scalar linear stability in Section 4. Section 5 provides insight into fast-slow coupling requirements. Section 6 discusses the design criteria for practical methods, and Section 7 studies error estimation and the adaptivity of both micro- and macro-steps. Newly constructed methods are listed in Section 8, and method coefficients are detailed in Appendix A. Numerical tests are performed on different test problems and the results are reported in Section 9.

2 Multirate generalized additive Runge–Kutta schemes (MrGARK)

In this section we review some background on MrGARK methods.

2.1 GARK methods

The generalized additive Runge–Kutta (GARK) methods introduced in [Sandu_2014_SCEE, Sandu_2015_GARK] allow derivation of advanced multi-methods for solving additively partitioned systems of ordinary differential equations:

\[ y' = f(y) = \sum_{q=1}^{N} f^{\{q\}}(y), \qquad y(t_0) = y_0, \tag{1} \]

where the right-hand side function is split into $N$ different partitions based on properties such as stiffness, non-linearity, dynamical behavior, and evaluation cost. We note that additive partitioning includes the case of component partitioning, in which the state vector is split into a number of subsets.
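As a minimal illustration of this observation, component partitioning can be recast as additive partitioning by masking the full right-hand side with indicator vectors. The helper name and the two-component test system below are hypothetical.

```python
import numpy as np

# Component partitioning recast as additive partitioning: indicator masks
# select the fast and slow subsets of the state, so that f = f_fast + f_slow.
# The helper name and the 2-component test system are illustrative.

def make_additive_split(f, fast_idx, n):
    mask_f = np.zeros(n)
    mask_f[fast_idx] = 1.0
    mask_s = 1.0 - mask_f
    f_fast = lambda y: mask_f * f(y)   # nonzero only in the fast components
    f_slow = lambda y: mask_s * f(y)   # nonzero only in the slow components
    return f_fast, f_slow

f = lambda y: np.array([-100.0 * y[0], -1.0 * y[1]])
f_fast, f_slow = make_additive_split(f, fast_idx=[0], n=2)
y = np.array([1.0, 2.0])
total = f_fast(y) + f_slow(y)          # recovers the full right-hand side
```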

A GARK method advances the numerical solution as follows [Sandu_2015_GARK]:

\[ Y_i^{\{q\}} = y_n + h \sum_{m=1}^{N} \sum_{j=1}^{s^{\{m\}}} a_{i,j}^{\{q,m\}}\, f^{\{m\}}\big(Y_j^{\{m\}}\big), \quad i = 1, \dots, s^{\{q\}}, \quad q = 1, \dots, N, \tag{2a} \]
\[ y_{n+1} = y_n + h \sum_{q=1}^{N} \sum_{i=1}^{s^{\{q\}}} b_i^{\{q\}}\, f^{\{q\}}\big(Y_i^{\{q\}}\big). \tag{2b} \]
2.2 Multirate GARK methods

In the case of a two-way partitioned system Eq. 1 with slow component $f^{\{s\}}$ and fast component $f^{\{f\}}$ we have:

\[ y' = f^{\{s\}}(y) + f^{\{f\}}(y), \qquad y(t_0) = y_0. \tag{3} \]
A multirate GARK method [Sandu_2015_GARK] integrates the slow component with a Runge–Kutta method $\big(A^{\{s,s\}}, b^{\{s\}}\big)$ and a large step size $H$, and the fast component with another Runge–Kutta method $\big(A^{\{f,f\}}, b^{\{f\}}\big)$ and a small step size $h = H/M$. Here $M$ represents the (integer) number of fast steps that are executed for each of the slow steps. MrGARK methods are formally derived in [Sandu_2016_GARK-MR]. One step of the method utilizes $s^{\{s\}}$ slow stages, denoted by $Y_i^{\{s\}}$, and $M \cdot s^{\{f\}}$ fast stages, denoted by $Y_i^{\{f,\lambda\}}$:

\[ Y_i^{\{s\}} = y_n + \frac{H}{M} \sum_{\lambda=1}^{M} \sum_{j=1}^{s^{\{f\}}} a_{i,j}^{\{s,f,\lambda\}}\, f^{\{f\}}\big(Y_j^{\{f,\lambda\}}\big) + H \sum_{j=1}^{s^{\{s\}}} a_{i,j}^{\{s,s\}}\, f^{\{s\}}\big(Y_j^{\{s\}}\big), \tag{4a} \]
\[ Y_i^{\{f,\lambda\}} = \widetilde{y}_{n+(\lambda-1)/M} + h \sum_{j=1}^{s^{\{f\}}} a_{i,j}^{\{f,f\}}\, f^{\{f\}}\big(Y_j^{\{f,\lambda\}}\big) + H \sum_{j=1}^{s^{\{s\}}} a_{i,j}^{\{f,s,\lambda\}}\, f^{\{s\}}\big(Y_j^{\{s\}}\big), \tag{4b} \]

where $\widetilde{y}_{n+\lambda/M} = \widetilde{y}_{n+(\lambda-1)/M} + h \sum_{j=1}^{s^{\{f\}}} b_j^{\{f\}}\, f^{\{f\}}\big(Y_j^{\{f,\lambda\}}\big)$ and $\widetilde{y}_n = y_n$.

The full step is then formed with:

\[ y_{n+1} = y_n + h \sum_{\lambda=1}^{M} \sum_{j=1}^{s^{\{f\}}} b_j^{\{f\}}\, f^{\{f\}}\big(Y_j^{\{f,\lambda\}}\big) + H \sum_{i=1}^{s^{\{s\}}} b_i^{\{s\}}\, f^{\{s\}}\big(Y_i^{\{s\}}\big). \tag{4c} \]

Let the coupling between the two methods be described by $A^{\{s,f,\lambda\}}$ and $A^{\{f,s,\lambda\}}$ for $\lambda = 1, \dots, M$. The GARK Butcher tableau for the method (4) is [Sandu_2016_GARK-MR]:

\[
\begin{array}{c|c}
\dfrac{1}{M}
\begin{pmatrix}
A^{\{f,f\}} & 0 & \cdots & 0 \\
\mathbb{1} b^{\{f\}\,T} & A^{\{f,f\}} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
\mathbb{1} b^{\{f\}\,T} & \mathbb{1} b^{\{f\}\,T} & \cdots & A^{\{f,f\}}
\end{pmatrix}
&
\begin{pmatrix} A^{\{f,s,1\}} \\ A^{\{f,s,2\}} \\ \vdots \\ A^{\{f,s,M\}} \end{pmatrix}
\\[0.5em]
\dfrac{1}{M}
\begin{pmatrix} A^{\{s,f,1\}} & A^{\{s,f,2\}} & \cdots & A^{\{s,f,M\}} \end{pmatrix}
& A^{\{s,s\}}
\\ \hline
\dfrac{1}{M}
\begin{pmatrix} b^{\{f\}\,T} & \cdots & b^{\{f\}\,T} \end{pmatrix}
& b^{\{s\}\,T}
\end{array}
\tag{5}
\]
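The macro-step structure above can be sketched in code. The snippet below assumes the stage form of Eq. 4 with forward Euler as both base methods (one stage each, hence only first order) and an illustrative fast-slow coupling coefficient; it is a sketch of the computational flow, not a method tabulated in this paper.

```python
import numpy as np

# Minimal sketch of one MrGARK macro-step: explicit, decoupled, with the
# single slow stage evaluated first. Base methods are forward Euler (one
# stage each), so the scheme is only first order; the coupling coefficient
# A^{f,s,lambda} = (lambda-1)/M is an illustrative choice, not a method
# tabulated in this paper.

def mrgark_euler_step(f_f, f_s, y, H, M):
    h = H / M
    Ys = y                               # single slow stage: Y^{s} = y_n
    fs_Ys = f_s(Ys)                      # slow right-hand side, evaluated once
    acc = np.zeros_like(y)               # h * (sum of completed fast increments)
    for lam in range(1, M + 1):
        # fast stage of micro-step lam; slow coupling weight (lam-1)/M
        Yf = y + acc + H * ((lam - 1) / M) * fs_Ys
        acc = acc + h * f_f(Yf)
    return y + acc + H * fs_Ys           # b^{f} = b^{s} = (1)

# illustrative fast/slow right-hand sides of a small linear test system
f_fast = lambda v: np.array([-40.0 * v[0], 0.0])
f_slow = lambda v: np.array([0.0, -1.0 * v[1]])

y = np.array([1.0, 1.0])
for _ in range(200):                     # integrate to t = 2 with H = 0.01
    y = mrgark_euler_step(f_fast, f_slow, y, H=0.01, M=8)
```

At t = 2 the slow component is close to exp(-2) while the fast component has decayed to roundoff level, even though the slow function was evaluated only once per macro-step.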

Remark (Slow and fast stage numbers)

The structure of the Butcher tableau implies that the $i$-th fast stage of the $\lambda$-th micro-step corresponds to row $(\lambda - 1)\, s^{\{f\}} + i$ in Eq. 5, and the $i$-th slow stage corresponds to row $M\, s^{\{f\}} + i$ in Eq. 5.

Definition 1 (Telescopic MrGARK schemes)

A telescopic MrGARK method [Sandu_2016_GARK-MR] uses the same base scheme for the slow and fast partitions:

\[ A^{\{f,f\}} = A^{\{s,s\}}, \qquad b^{\{f\}} = b^{\{s\}}. \tag{6} \]


In this paper we will focus on schemes with the telescopic property, since it allows a simple extension of the MrGARK method to an arbitrary number of partitions (time scales) using successively larger time steps, each an integer multiple of the previous one.

3 Order conditions for multirate GARK methods

We consider the following internal consistency conditions [Sandu_2016_GARK-MR, Sandu_2015_GARK] to ensure that fast and slow right-hand side evaluations are performed at the same points in time:

\[ \frac{1}{M} \sum_{\lambda=1}^{M} A^{\{s,f,\lambda\}}\, \mathbb{1} = c^{\{s\}}, \qquad A^{\{f,s,\lambda\}}\, \mathbb{1} = \frac{(\lambda - 1)\, \mathbb{1} + c^{\{f\}}}{M}, \quad \lambda = 1, \dots, M, \tag{7} \]

where $c^{\{q\}} := A^{\{q,q\}}\, \mathbb{1}$ for $q \in \{f, s\}$.

Assume that the internal consistency conditions Eq. 7 hold, and further assume that each of the individual methods $\big(A^{\{s,s\}}, b^{\{s\}}\big)$ and $\big(A^{\{f,f\}}, b^{\{f\}}\big)$ has at least order four. Then the MrGARK scheme (4)–(5) has order four if and only if the following coupling conditions hold [Sandu_2016_GARK-MR]:


Here “$\times$” denotes component-wise multiplication of two vectors.

Without imposing any special structure on the coupling matrices, we can rewrite Eq. 8 using the block structure shown in Eq. 5. We consider the following intermediate simplifications:


The order conditions Eq. 8 can then be written in terms of the base methods and the coupling coefficients as follows:

Remark (Design process)

A practical design procedure for MrGARK schemes is to first select the base slow and fast methods of the desired order, and then solve the order conditions Eq. 10 for the coupling coefficients $A^{\{s,f,\lambda\}}$ and $A^{\{f,s,\lambda\}}$.

4 Linear stability analysis

Following [Sandu_2016_GARK-MR, Sandu_2015_GARK] we consider the following scalar model problem:

\[ y' = \lambda^{\{f\}}\, y + \lambda^{\{s\}}\, y, \tag{11} \]

where the ratio of the fast to the slow eigenvalue will be tied to $M$, the step size ratio, as discussed below. Denote $z^{\{f\}} = H \lambda^{\{f\}}$ and $z^{\{s\}} = H \lambda^{\{s\}}$.


It was shown in [Sandu_2016_GARK-MR, Sandu_2015_GARK] that application of the MrGARK method Eq. 5 to Eq. 11 leads to the solution $y_{n+1} = R\big(z^{\{f\}}, z^{\{s\}}\big)\, y_n$, with the stability function:


In order to visualize the stability region of a method, we choose:

\[ \lambda^{\{f\}} = M\, \lambda^{\{s\}}, \]

such that the ratio of the variable magnitudes matches the ratio of the step sizes. This reduces Eq. 13 to a function of three real variables by assuming the fast eigenvalue is $M$ times larger in magnitude than the slow eigenvalue. The stability region is the volume in the $\{\Re(z^{\{s\}}), \Im(z^{\{s\}}), M\}$ space where the magnitude of the stability function (13) is less than or equal to one. Stability regions for each method developed here are provided in Appendix A. For some of the methods the region shrinks with increasing $M$. The plots for different values of $M$ correspond to different test problems: when $M$ increases, the scale separation of the test problem also increases (we apply smaller micro-steps to faster problems). The increasing stiffness of the fast component restricts the macro-step, a phenomenon known as stiffness leakage. Coupling coefficients whose magnitude increases rapidly with $M$ can lead to a degradation of stability.
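The scan below illustrates this visualization numerically, under the assumption $\lambda^{\{f\}} = M \lambda^{\{s\}}$ made above. Since the paper's stability functions are not reproduced here, a simple first-order multirate Euler amplification factor (fast term integrated with M micro-steps, slow term frozen over the macro-step) serves as an illustrative stand-in.

```python
import numpy as np

# Numerical scan of a scalar multirate stability function, assuming the
# choice lambda^{f} = M * lambda^{s} discussed above. The scheme used is a
# simple first-order multirate Euler (fast term integrated with M micro-
# steps, slow term frozen over the macro-step) -- an illustrative stand-in,
# not one of the paper's methods.

def amplification(zs, M):
    """One macro-step applied to y' = lam_f*y + lam_s*y with H*lam_s = zs."""
    zf = M * zs                  # fast eigenvalue M times larger in magnitude
    h_lf, h_ls = zf / M, zs / M  # micro-step times each eigenvalue
    y, y0 = 1.0 + 0.0j, 1.0 + 0.0j
    for _ in range(M):
        y = y + h_lf * y + h_ls * y0   # slow contribution frozen at y_n
    return y

def real_axis_limit(M, x_min=-4.0, n=4000):
    """Largest |x| with |R(x)| <= 1 for real negative zs, on a grid scan."""
    xs = np.linspace(0.0, x_min, n)
    stable = np.abs([amplification(x, M) for x in xs]) <= 1.0
    if stable.all():
        return abs(xs[-1])
    return abs(xs[np.argmin(stable) - 1])
```

For M = 2 this frozen-slow Euler scheme is stable on roughly the real interval [-2, 0]; sweeping complex values of $z^{\{s\}}$ for several M produces the kind of region plots described above.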

5 Decoupled MrGARK methods

We now discuss in detail the structure of the coupling coefficient matrices, and how this structure defines the way the fast and slow stage computations Eq. 4 are carried out, and therefore determines the practicality of the MrGARK method. In order to construct practical MrGARK methods we need to avoid complex couplings between multiple fast and slow stages. To this end we define decoupled MrGARK methods.

Definition 2 (Decoupled MrGARK methods)

An MrGARK method is decoupled if the computation of its stages proceeds in sequence, such that each slow stage uses only information from other slow stages and the already computed fast stages, and vice-versa. There is no coupling that requires fast and slow stages to be solved together. Any form of implicitness is entirely within the fast or within the slow system.

5.1 Structure of the slow-fast coupling (including fast information into the slow stage calculations)

We first introduce a notation for the order of computation of the slow stages with respect to the fast stages. Consider the $i$-th slow stage (4a) at abscissa $c_i^{\{s\}}$, i.e., the row $M s^{\{f\}} + i$ in the Butcher tableau Eq. 5. We denote its order of computation by the pair $(\lambda(i), k(i))$, i.e., the $i$-th slow stage is computed immediately after the $k(i)$-th stage of the $\lambda(i)$-th micro-step is computed. This means that the first $\lambda(i) - 1$ micro-steps have been completed, and the $\lambda(i)$-th micro-step has partially progressed to compute stage $k(i)$, when the $i$-th slow stage is evaluated.

When the slow stage is evaluated after the last stage of micro-step $\lambda$, but before the first stage of the $(\lambda + 1)$-st micro-step, we have $(\lambda(i), k(i)) = (\lambda, s^{\{f\}})$. Equivalently, this situation can be represented by $(\lambda(i), k(i)) = (\lambda + 1, 0)$.

In order to construct practical MrGARK methods we require that the evaluation of the $i$-th slow stage depends only on those fast stages that have been completed; in the GARK Butcher tableau Eq. 5, the $i$-th slow stage depends only on the rows $1, \dots, (\lambda(i) - 1)\, s^{\{f\}} + k(i)$.

The $i$-th row of the slow-fast coupling matrices $A^{\{s,f,\lambda\}}$, $\lambda = 1, \dots, M$, contains the coefficients that bring fast information into the computation of slow stage $i$. A decoupled MrGARK scheme enjoys the following properties:
  • The coupling matrices with the completed micro-steps can have full rows: the entries $a_{i,j}^{\{s,f,\mu\}}$ with $\mu < \lambda(i)$ are unrestricted.
  • Rows of the coupling matrix of the current micro-step can have non-zero entries only in the first $k(i)$ positions: $a_{i,j}^{\{s,f,\lambda(i)\}} = 0$ for $j > k(i)$.
  • The coupling matrices with the future micro-steps are zero: $a_{i,j}^{\{s,f,\mu\}} = 0$ for $\mu > \lambda(i)$.

Consequently, for decoupled MrGARK schemes, the slow-fast coupling matrix is lower block triangular:


where we used the fact that the $j$-th fast stage of micro-step $\lambda$ has GARK stage number $(\lambda - 1)\, s^{\{f\}} + j$ in the Butcher tableau Eq. 5.

5.2 Structure of the fast-slow coupling (including slow information into the fast stage calculations)

We next introduce a notation for the order of computation of the fast stages with respect to the slow stages. We denote by $\sigma(i, \lambda)$ the index of the last slow stage computed before starting the $i$-th fast stage of micro-step $\lambda$. Specifically, the $i$-th fast stage at micro-step $\lambda$ is computed after the slow stage $\sigma(i, \lambda)$, but before the slow stage $\sigma(i, \lambda) + 1$. For a decoupled MrGARK this stage can only use information from slow stages $1$ to $\sigma(i, \lambda)$. Consequently, the $i$-th row of the coupling matrix $A^{\{f,s,\lambda\}}$ can have nonzero entries only in the first $\sigma(i, \lambda)$ columns:

The coupling matrix $\mathbf{A}^{\{f,s\}} = \big(A^{\{f,s,1\}\,T}, \dots, A^{\{f,s,M\}\,T}\big)^{T}$ has a block lower triangular structure:

where we used the fact that the $j$-th fast stage of micro-step $\lambda$ has GARK stage number $(\lambda - 1)\, s^{\{f\}} + j$ in the Butcher tableau Eq. 5.

5.3 Relation between the coupling matrices

From Eqs. 15 and 16 we see that the two coupling matrices $\mathbf{A}^{\{s,f\}}$ and $\mathbf{A}^{\{f,s\}}$ have related sparsity structures. For decoupled MrGARK methods the sparsity structures are complementary, in the sense that one matrix must have zeros in the entries where the other matrix can have nonzero elements:

\[ \mathbf{A}^{\{s,f\}} \circ \big(\mathbf{A}^{\{f,s\}}\big)^{T} = 0, \tag{17} \]

where $\circ$ denotes element-by-element multiplication. This is schematically illustrated in Fig. 1(a).
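This complementarity can be checked mechanically. The sketch below tests element-wise disjointness of the two coupling sparsity patterns, based on one plausible reading of Eq. 17; the matrices are made up for illustration.

```python
import numpy as np

# Sketch of a decoupling check based on one plausible reading of Eq. 17:
# the slow-fast matrix and the transpose of the fast-slow matrix must not
# both be nonzero in the same position. The matrices below are made up,
# with M = 2 micro-steps and one stage per base method.

def is_decoupled(A_sf, A_fs):
    """True if A^{s,f} and (A^{f,s})^T have disjoint nonzero patterns."""
    return not np.any((A_sf != 0) & (A_fs.T != 0))

A_sf = np.array([[1.0, 0.0]])      # the slow stage uses only micro-step 1
A_fs = np.array([[0.0], [1.0]])    # micro-step 2 uses the slow stage
decoupled = is_decoupled(A_sf, A_fs)
```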

For coupled MrGARK methods the sparsity structures can overlap, in the sense that they both can have non-zero entries in the same location:

\[ \mathbf{A}^{\{s,f\}} \circ \big(\mathbf{A}^{\{f,s\}}\big)^{T} \neq 0. \]

This is illustrated in Fig. 1(c). The overlapping non-zero coupling coefficients, indicated by dashed boxes in Fig. 1(c), imply that the corresponding fast and slow stages need to be computed together, in a step that involves the entire non-partitioned system. Coupled methods are not pursued further in this paper.

(a) Butcher tableau of a decoupled MrGARK. The coupling matrices have complementary sparsity patterns.
(b) Permuted Butcher tableau of a decoupled MrGARK shows a lower triangular structure.
(c) Butcher tableau of a coupled MrGARK. The dashed entries of the coupling matrices violate the sparsity complementarity.
(d) Permuted Butcher tableau of a coupled MrGARK. The first slow and the first fast steps, dashed, need to be computed together in a coupled manner.
Figure 1: Example of decoupled and coupled MrGARK with and . Blue is the slow method, pink the first fast step, dark pink the second fast step, and green and yellow are the couplings. The permuted versions of the tableaus reflect the sequential order of stage computations. Note the entry above the diagonal for the coupled, permuted tableau due to the non-complementary coupling structure.

5.4 Order of the slow and fast stage evaluation

The sparsity structure of the coupling matrices $\mathbf{A}^{\{s,f\}}$ and $\mathbf{A}^{\{f,s\}}$ determines the order in which the fast and slow stages can be evaluated such as to respect the data dependencies between them. Using the notation defined in Section 5.1 and Section 5.2, the general order in which stages are evaluated is as follows:

  1. Start by evaluating the fast stages that precede the first slow stage, i.e., stages $1, \dots, (\lambda(1) - 1)\, s^{\{f\}} + k(1)$; these are the fast stages needed by the computation of the first slow stage $Y_1^{\{s\}}$. If the entire first row of the slow-fast coupling is zero then proceed with evaluating the first slow stage $Y_1^{\{s\}}$ directly.

  2. After stage $k(i)$ of the $\lambda(i)$-th micro-step is computed, the $i$-th stage of the slow method is evaluated.

  3. Continue with evaluating stages of the fast method, and stop after stage $k(i+1)$ of the $\lambda(i+1)$-st micro-step to compute stage $i + 1$ of the slow method.

  4. Continue until all fast and slow stages are evaluated.

Remark (Time ordering)

Assuming that the slow stage abscissae are increasing, and that the fast stage abscissae are non-decreasing in time, a natural order to evaluate the MrGARK stages follows the time ordering of their abscissae. Note that the $i$-th slow stage approximates the solution at time $t_n + c_i^{\{s\}}\, H$, while the $j$-th fast stage of the $\lambda$-th micro-step approximates the solution at time $t_n + \big((\lambda - 1) + c_j^{\{f\}}\big)\, h$. The pairs $(\lambda(i), k(i))$ are chosen such that the stages are evaluated in increasing order of their approximation times:


Remark (Simpler time ordering)

It is possible to simplify the time ordering such that the slow stages are evaluated only at the end of full micro-steps. The $i$-th slow stage is evaluated after the end of micro-step $\lambda$, but before micro-step $\lambda + 1$, if $(\lambda - 1)/M < c_i^{\{s\}} \le \lambda/M$, in which case $\lambda(i) = \lambda$ and $k(i) = s^{\{f\}}$. When $c_i^{\{s\}} = 0$ we take $\lambda(i) = 0$.

5.5 Reordering the GARK Butcher tableau

In the standard form of the GARK Butcher tableau Eq. 5 the first $M\, s^{\{f\}}$ stages are for the fast method, and stages $M\, s^{\{f\}} + 1$ to $M\, s^{\{f\}} + s^{\{s\}}$ are for the slow method.

Figure 2: An example of the order of computing stages in MrGARK with stage sequence .

Let ic be the vector of GARK tableau stage indices sorted in the order of computations, as summarized in Fig. 2. A renumbering of stages leads to a row and column permutation of the Butcher tableau Eq. 5. The reordered Butcher matrix is $\mathbf{A}(\text{ic}, \text{ic})$. The reordering is illustrated in Fig. 1. We distinguish the following cases:

  • If the reordered Butcher tableau is strictly lower triangular then the MrGARK method is explicit, and each stage uses only previously computed information.

  • If the reordered Butcher tableau is lower triangular, with some non-zero diagonal entries, then the MrGARK method is implicit, but decoupled. The non-zero diagonal entries correspond to implicit fast or slow stages. This case is illustrated in Fig. 1(b).

  • Finally, if the reordered Butcher tableau has block-diagonal entries then the MrGARK method is coupled, in the sense that several fast and slow stages need to be solved together, in a coupled manner. This case is illustrated in Fig. 1(d).
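The classification above can be sketched as a small routine that permutes a Butcher matrix into computation order and inspects its triangular structure; the matrices and the ordering ic below are illustrative, not coefficients from this paper.

```python
import numpy as np

# Sketch: permute a GARK Butcher matrix into computation order and classify
# the method as explicit, decoupled-implicit, or coupled by inspecting the
# permuted matrix. The 3x3 matrix and the ordering ic are illustrative.

def classify(A, ic):
    """Classify from the reordered Butcher matrix A(ic, ic)."""
    P = A[np.ix_(ic, ic)]
    if np.allclose(np.triu(P), 0.0):
        return "explicit"               # strictly lower triangular
    if np.allclose(np.triu(P, k=1), 0.0):
        return "decoupled implicit"     # lower triangular, nonzero diagonal
    return "coupled"                    # entries above the diagonal remain

A = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.2, 0.3, 0.0]])
ic = [0, 1, 2]
kind = classify(A, ic)
```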

As an example let us consider the GARK Butcher matrix for the explicit method EX2-EX2 2(1)[A] introduced in Section A.1 for $M = 2$:

where . The permuted Butcher tableau verifies the explicit nature of the method, and also provides a progression of stage computations that is practically easy to implement.

6 Design of practical decoupled MrGARK methods

In this section we discuss several desirable properties of practical MrGARK methods, and the design methodologies to incorporate them in the construction of new high order schemes.

6.1 Design principles

The following set of design principles incorporates properties that are important for ensuring the practicality of MrGARK schemes:

  1. Practical MrGARK methods need to be high order (have an order of accuracy larger than two), and generic in the multirate ratio (have the same order of accuracy for any $M$). This typically requires that the coupling coefficients are functions of $M$. We have addressed these requirements via the order conditions derived in Section 3.

  2. Methods where the fast and slow base schemes are both either explicit or implicit should be telescopic Eq. 6 in order to be easily applicable to multi-partitioned systems with multiple time scales. All explicit-explicit methods proposed in this paper are telescopic.

  3. In order to maintain computational efficiency, a complex coupling between multiple fast and slow stages needs to be avoided. In this paper, we focus on decoupled multirate methods satisfying Eq. 17, as they are far less computationally demanding than coupled methods. However, the overall stability of the scheme may be affected by decoupling.

  4. Methods should be optimized for general-purpose time integration, i.e., have small principal error terms and large stability regions. Note that the principal error for MrGARK methods is a function of $M$. If not properly controlled, the coupling errors can grow with $M$, thereby reducing the overall efficiency of the method. All schemes derived herein have coefficients optimized for small principal error and large stability region.

  5. Consider the case where one of the base methods, say the one for partition $q$, is implicit. A favorable property of the entire MrGARK method is stiff accuracy [Sandu_2016_GARK-MR]:

\[ b^{\{m\}\,T} = e_{s^{\{q\}}}^{T}\, \mathbf{A}^{\{q,m\}} \quad \text{for all partitions } m. \tag{21} \]

    This property implies that the base implicit Runge–Kutta method is stiffly accurate, and that the MrGARK stability function approaches zero as the $q$-th component of the system becomes infinitely stiff [Sandu_2016_GARK-MR]. All schemes derived herein having an implicit base method enjoy the property Eq. 21.

  6. Practical use of MrGARK methods demands an error control mechanism that is capable of adapting both the step size $H$ and the multirate ratio $M$. Therefore the error control problem in multirate integration is fundamentally more complex than in traditional, single rate integration. We fully address the error control issue in Section 7.

  7. It is desirable to derive multirate methods with reduced coupling errors by requiring that both the main and the embedded methods satisfy higher order coupling conditions. This strategy isolates the dominant local truncation errors to the solution of the slow and fast components and greatly simplifies the task of adaptively choosing $H$ and $M$, as discussed in Section 7.

    Definition 3 (Naturally adaptive schemes)

    An MrGARK scheme of order $p$ is naturally adaptive if the fast and slow local truncation error terms (corresponding to N-trees with nodes of the same color) are $\mathcal{O}\big(H^{p+1}\big)$, and the coupling local truncation error terms (corresponding to N-trees with nodes of different colors) are $\mathcal{O}\big(H^{p+2}\big)$.

    We construct naturally adaptive second and third order schemes in Appendix A.

  8. Methods of type S are optimized for simplicity and stability. First, by design, one seeks to maximize the sparsity of the coupling coefficient matrices, such as to simplify the implementation and decrease data dependencies. Next, the coupling coefficients can potentially become large in magnitude as $M$ increases, and this can lead to large cancellation errors in the context of finite precision arithmetic, as well as to a degradation of numerical stability. It is desirable that the coupling coefficients are optimized such that their magnitude remains bounded for large values of $M$.

6.2 Design process

The order conditions Eq. 8 and the additional constraints associated with the design principles of Section 6.1 lead to large, nonlinear systems of equations that need to be solved for the method coefficients. The resulting coupling coefficients $A^{\{s,f,\lambda\}}$ and $A^{\{f,s,\lambda\}}$ are functions of the micro-step number $\lambda$ and the multirate step size ratio $M$. Our proposed approach for designing MrGARK methods of order $p$ is a three-step process, as follows.

  1. The first step is to construct optimized base slow and fast schemes of order $p$. Implicit base methods are selected to be stiffly-accurate SDIRK schemes. Explicit base methods are chosen to have small principal error components. When both the slow and the fast methods are of the same type (explicit or implicit) we choose the same discretization coefficients, such as to obtain a telescopic MrGARK method.

  2. The second step is to define the sparsity patterns of the coupling matrices $A^{\{s,f,\lambda\}}$ and $A^{\{f,s,\lambda\}}$. These patterns determine the computational flow of the method and its implementation complexity, and influence the overall stability and accuracy properties. We manually test various sparsity patterns to balance all of these properties.

  3. The third step computes the coupling coefficients $A^{\{s,f,\lambda\}}$ and $A^{\{f,s,\lambda\}}$ such as to satisfy the coupling conditions Eq. 8 up to order $p$. Any free parameters in the family are used to minimize the Euclidean norm of the residuals of the order-$(p+1)$ coupling conditions; for naturally adaptive MrGARK methods all these residuals are cancelled. In this work the solution of the order conditions and the minimization of the error coefficients are carried out with Mathematica Version 11.2.

An alternative to this derivation strategy is a monolithic constrained optimization procedure that minimizes the residuals of the order-$(p+1)$ coupling conditions, subject to solving the conditions Eq. 8 up to order $p$, and subject to structural constraints such as decoupling (17) and stiff accuracy (21).

The proposed three-step procedure is preferable for two reasons. First, we expect MrGARK methods to be applied to problems with a rather weak coupling between partitions, where the primary sources of error are the base methods and not the coupling. Our approach therefore gives precedence to reducing the base method errors. Second, our procedure is practical, as it reduces the number of nonlinear equations that need to be solved together during the design.

7 Error estimation and adaptive MrGARK methods

Adaptivity of traditional (single rate) methods adjusts the step size such as to ensure the desired accuracy of the solution at a minimal computational effort. In the context of Runge–Kutta schemes a second “embedded” method is used to provide an a posteriori estimate of the local truncation error [Hairer_book_I, Ch. II.4]. The step size is adjusted to bring the local error estimate to the user-prescribed level using the asymptotic relation that this error is proportional to $H^{p+1}$.
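The single-rate controller described above can be sketched as follows; the safety factor and the growth limits are conventional choices, not values prescribed in this paper.

```python
# Sketch of the asymptotic step-size controller described above. The error
# estimate behaves like C * H^(p+1), so rescaling by (tol/err)^(1/(p+1))
# targets the tolerance; the safety factor and clamping bounds below are
# conventional choices, not values prescribed in this paper.

def new_step_size(H, err, tol, p, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Return the next macro-step for a method of order p."""
    if err == 0.0:
        return H * fac_max                      # error negligible: grow fast
    factor = safety * (tol / err) ** (1.0 / (p + 1))
    return H * min(fac_max, max(fac_min, factor))
```

Clamping the rescaling factor prevents the controller from reacting too aggressively to a single large or small error estimate.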

Adaptivity of multirate methods is more complex, as there are two independent parameters that control the solution accuracy and efficiency: the macro-step size $H$ and the multirate ratio $M$ (or, equivalently, the macro-step $H$ and the micro-step $h$). In order to achieve adaptivity of both $H$ and $M$ we propose to construct error estimates using not one, but several embedded methods, such as to obtain additional information about the structure of the local truncation error. Analytical asymptotic formulas are used to quantify the dependency of the local error terms on both $H$ and $M$. Armed with this information, we develop several criteria for adaptively selecting both the macro-step size and the multirate ratio.
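A possible two-parameter controller in this spirit is sketched below. It assumes the error estimator can be split into a slow/coupling part scaling like $H^{p+1}$ and a fast part scaling like $h^{p+1}$ with $h = H/M$; both the split (the hypothetical names err_slow and err_fast) and the constants are illustrative, not the paper's concrete controllers.

```python
import math

# Hypothetical two-parameter controller: assumes the error estimate can be
# split into a slow/coupling part (scaling like H^(p+1)) and a fast part
# (scaling like h^(p+1) with h = H/M). Both the split and the constants are
# illustrative sketches of the strategy described in the text.

def adapt_H_and_M(H, M, err_slow, err_fast, tol, p, safety=0.9):
    # rescale the macro-step from the slow error estimate
    H_new = H * safety * (tol / max(err_slow, 1e-16)) ** (1.0 / (p + 1))
    # rescale the micro-step from the fast error estimate
    h_new = (H / M) * safety * (tol / max(err_fast, 1e-16)) ** (1.0 / (p + 1))
    # smallest integer ratio that keeps the micro-step small enough
    M_new = max(1, math.ceil(H_new / h_new))
    return H_new, M_new
```

Rescaling $H$ first and then choosing $M$ keeps the two decisions decoupled, mirroring the separation of slow and fast error contributions assumed above.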

7.1 Local error structure for a second order MrGARK method

In order to understand the structure of the local truncation error we first consider a second order, internally consistent MrGARK scheme. The principal error terms of order $\mathcal{O}\big(H^3\big)$ are associated with the third order conditions, and are provided in Table 1.

Table 1: Principal error terms of $\mathcal{O}\big(H^3\big)$ for second order MrGARK schemes. For each error type, the table lists the error term, the corresponding elementary differential, the associated order condition, and the residuals for IM2-EX2 2(1)[A].

As an example, consider the second order MrGARK method IM2-EX2 2(1)[A] from Section A.4. We note from Table 1 that the third order residuals for this method are all asymptotically bounded as $M$ increases, and the local truncation error behaves like: