# Inter/extrapolation-based multirate schemes – a dynamic-iteration perspective

Multirate behavior of ordinary differential equations (ODEs) and differential-algebraic equations (DAEs) is characterized by widely separated time constants in different components of the solution or in different additive terms of the right-hand side. Classical multirate schemes are dedicated solvers that apply, e.g., micro and macro steps to resolve fast and slow changes in a transient simulation accordingly. The use of extrapolation and interpolation procedures is a natural way of coupling the different parts, which are defined on different time grids. This paper contains, to the best knowledge of the authors, the first complete convergence theory for inter/extrapolation-based multirate schemes for both ODEs and DAEs of index one, covering the fully-decoupled, the slowest-first and the fastest-first approach. The convergence theory is based on linking these schemes to multirate dynamic iteration schemes, i.e., dynamic iteration schemes without further iterations. This link naturally defines stability conditions for the DAE case.


## 1 Introduction

In practice, technical applications are often modeled as coupled systems of ordinary differential equations (ODEs) or differential-algebraic equations (DAEs). Furthermore, it is very common that the transient behavior of such applications is characterized by different time constants: at a given instance of time, certain parts of a dynamical system evolve slowly, while others exhibit fast dynamics in direct comparison. Here, this is referred to as multirate behavior. To name but a few applications: multibody systems [1, 10], electric circuits [8, 12], climate models [18] and, of course, multiphysical systems, e.g. field/circuit coupling [17]. For an efficient numerical treatment of systems with multirate behavior, special integration schemes have been developed, so-called multirate schemes. To the best knowledge of the authors, the multirate history goes back to Rice [20] in 1960, where step sizes for time integration are adapted to the activity level of subsystems. Much work followed, and we give only a partial list here: schemes based on BDF methods [11], on ROW methods [15], on extrapolation methods [9], partitioned RK and compound step [13], mixed multirate with ROW methods [3], based on a refinement strategy [21], for conservation laws [6], compound-fast [22], infinitesimal step [23], implicit-explicit [7], and based on GARK methods [14].

The fundamental idea of a multirate scheme is the following: an efficient algorithm should (if there are no stability issues) sample a certain component/subsystem according to its activity level. The more active a component is, the shorter are its time scales and the higher the sampling rate should be chosen to achieve a given level of accuracy. In other words, there is not one global time step, but a local one, which should reflect the inherent time scale of an unknown or of some subsystem. For simplicity, we work here with only two time scales. That is, we allow for a fast subsystem y_F (of higher dynamics), which employs a small step of size h (micro step), and a slow subsystem y_S, which employs a larger step size H (macro step). Furthermore, we assume for simplicity the relation H = m·h with m ∈ ℕ. In fact, the main feature of a certain multirate scheme is how it defines the coupling variables. Here we focus on inter- and extrapolation strategies for coupling both subsystems, since we aim at highlighting the connection to dynamic iteration schemes.
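The macro/micro stepping described above can be sketched in a few lines. The following is a minimal illustration only, not a scheme from this paper's analysis: a two-rate forward Euler step with constant (order-0) extrapolation of the coupling variables, applied to a hypothetical toy system chosen for demonstration.

```python
# Hypothetical toy multirate system (illustration only):
# a slow component y_S and a fast component y_F with widely separated rates.
def f_slow(t, y_s, y_f):
    return -y_s + 0.1 * y_f

def f_fast(t, y_s, y_f):
    return -50.0 * y_f + y_s

def multirate_euler_step(t, y_s, y_f, H, m):
    """One macro step on [t, t+H]: a single Euler step of size H for y_S
    and m Euler micro steps of size h = H/m for y_F; the coupling
    variables are frozen at their values at time t (order-0 extrapolation,
    both parts computable in parallel)."""
    h = H / m
    # slow part: one macro step, y_F extrapolated as a constant
    y_s_new = y_s + H * f_slow(t, y_s, y_f)
    # fast part: m micro steps, y_S extrapolated as a constant
    y_f_new = y_f
    for j in range(m):
        y_f_new = y_f_new + h * f_fast(t + j * h, y_s, y_f_new)
    return y_s_new, y_f_new

# integrate over [0, 1] with H = 0.01 and m = 10 micro steps per macro step
y_s, y_f = 1.0, 1.0
H, m = 0.01, 10
for k in range(100):
    y_s, y_f = multirate_euler_step(k * H, y_s, y_f, H, m)
```

Note that only the fast subsystem pays for the small step size h; the slow subsystem is sampled m times less often.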

The work is structured as follows: in Sec. 2, the formulation of multirate initial value problems is given on the basis of ordinary differential equations (ODEs). Furthermore, various known versions of extra- and interpolation coupling are explained. Following this, the consistency of multirate one-step methods is discussed for ODEs (Sec. 3). Then, in Sec. 4, the ODE results are generalized to the DAE case. Conclusions complete the presentation.

## 2 Notation for coupled systems and multirate extra/interpolation

We start from an initial value problem (IVP) based on a model of ordinary differential equations (ODEs):

 ẇ = h(t, w),  w(t_0) = w_0,  t ∈ (t_0, t_end],   (1)

where h is continuous in t and Lipschitz continuous in w, and w_0 is given. Moreover, let w, or h respectively, be comprised of some slower changing parts (in the time domain), whereas the remaining parts are faster changing. This is referred to as multirate behavior. Now, there are two equivalent ways of partitioning:

a) The component-wise partitioning splits the unknown w into slow components y_S and fast components y_F, such that w = (y_S^⊤, y_F^⊤)^⊤ and

 ẏ_S = f_S(t, y_S, y_F),  y_S(t_0) = y_{S,0},
 ẏ_F = f_F(t, y_S, y_F),  y_F(t_0) = y_{F,0},   (2)

with the corresponding splitting of the right-hand side h.

b) The right-hand side partitioning is an additive splitting of h into slow and fast summands:

 ẇ = h_s(t, w) + h_f(t, w),  w(t_0) = w_0,   (3)

such that h = h_s + h_f with a slowly changing part h_s and a fast changing part h_f. Of course, the initial data needs to be split in a suitable way. If the dynamics are solely determined by h_s and h_f, the splitting is arbitrary to some extent.

Since both ways of partitioning are equivalent, we choose for the work at hand the formulation (2), without loss of generality. Moreover, the partitioning (2) can be generalized to the case of differential algebraic equations (DAEs) with certain index-1 assumptions. This DAE setting is treated in Sec. 4.
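The equivalence of the two partitionings can be made concrete: a right-hand side splitting (3) is cast into component-wise form (2) by introducing components y_s, y_f with w = y_s + y_f, each driven by one summand. A minimal numerical sketch (the scalar toy ODE and its splitting are assumptions for illustration):

```python
# Toy scalar ODE w' = -w, split additively into "slow" and "fast" summands
# (hypothetical splitting, for illustration): h_s = -0.1*w, h_f = -0.9*w.
def h_s(t, w):
    return -0.1 * w

def h_f(t, w):
    return -0.9 * w

dt, n = 1e-3, 1000  # forward Euler up to t = 1

# right-hand side partitioning (3): integrate w directly
w = 1.0
for i in range(n):
    w = w + dt * (h_s(i * dt, w) + h_f(i * dt, w))

# component-wise form (2): introduce y_s + y_f = w, split the initial
# data accordingly, and let each component follow one summand
y_s, y_f = 0.4, 0.6
for i in range(n):
    s = y_s + y_f
    y_s, y_f = y_s + dt * h_s(i * dt, s), y_f + dt * h_f(i * dt, s)
```

Both integrations propagate the same sum y_s + y_f = w, so the two formulations agree up to floating-point round-off.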

In this work, we study multirate methods which belong to the framework of one-step methods (and multistep schemes, too; see Remark 7 below) and which are based on extrapolation and interpolation for the coupling variables. To describe these methods, let us assume that the computation of the coupled system (2) has reached time t̄ with

 ẏ_S = f_S(t, y_S, y_F),  y_S(t̄) = y_{S,t̄},
 ẏ_F = f_F(t, y_S, y_F),  y_F(t̄) = y_{F,t̄}.   (4)

Now, the multirate integration of the whole coupled system is defined for one macro step, i.e., on [t̄, t̄+H]. It comprises a single step of macro step size H for the subsystem y_S and m steps of micro step size h for y_F. To this end, the respective coupling variables need to be evaluated. Here, our presentation is restricted to extrapolation and interpolation for the coupling variables, although there are several other techniques. Depending on the sequence in which the unknowns y_S and y_F are computed, one distinguishes the following three versions of extra-/interpolation techniques:

a) fully-decoupled approach [5]: fast and slow variables are integrated in parallel, using in both cases extrapolated waveforms based on information from the initial data of the current macro step at t̄;

b) slowest-first approach [11]: in a first step, the slow variables y_S are integrated, using an extrapolated waveform of y_F based on information available at t̄ for evaluating the coupling variable in the current macro step. In a second step, m micro steps are performed to integrate the fast variables from t̄ to t̄+H, using an interpolated waveform of y_S based on information from the current macro step for evaluating the coupling variable.

c) fastest-first approach [11]: in a first step, m micro steps are performed to integrate the fast variables, using an extrapolated waveform of y_S based on information available at t̄ for evaluating the coupling variable in the current macro step. In a second step, one macro step is performed to integrate the slow variables from t̄ to t̄+H, using an interpolated waveform of y_F based on information from the current macro step for evaluating the coupling variable.
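The ordering distinction above can be illustrated in code. The sketch below (a hypothetical toy system and forward Euler as basic scheme, for illustration only) implements one slowest-first macro step: the slow variable is advanced first with frozen y_F, and the resulting slow data is then interpolated linearly during the fast micro steps.

```python
# Hypothetical toy system (illustration only):
def f_slow(t, y_s, y_f):
    return -y_s + 0.1 * y_f

def f_fast(t, y_s, y_f):
    return -50.0 * y_f + y_s

def slowest_first_step(t, y_s, y_f, H, m):
    """One slowest-first macro step on [t, t+H]: first the slow variable
    (with y_F frozen at time t, i.e. order-0 extrapolation), then m fast
    micro steps using linear interpolation of the fresh slow data."""
    h = H / m
    y_s_new = y_s + H * f_slow(t, y_s, y_f)      # step 1: slow macro step

    def y_s_interp(tau):                         # interpolated slow waveform
        theta = (tau - t) / H
        return (1.0 - theta) * y_s + theta * y_s_new

    y_f_new = y_f                                # step 2: m fast micro steps
    for j in range(m):
        tau = t + j * h
        y_f_new = y_f_new + h * f_fast(tau, y_s_interp(tau), y_f_new)
    return y_s_new, y_f_new

y_s, y_f = 1.0, 1.0
H, m = 0.01, 10
for k in range(100):                             # integrate to t = 1
    y_s, y_f = slowest_first_step(k * H, y_s, y_f, H, m)
```

The fastest-first variant simply swaps the two steps: the fast micro steps run first with extrapolated y_S, and the slow macro step then interpolates the fast data.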

###### Remark 1

The restriction that the extrapolation can only be based on the information at t̄ can be relaxed to the data of the preceding macro step [t̄−H, t̄]. In fact, one can encode such information, e.g., as a spline model, which is updated and transported from macro step to macro step.

## 3 The ODE case

The details presented in this section are based on a result first presented in [5]. Starting from this result, we use the underlying strategy to extend it to the three multirate versions named in the previous section. Basically, for ODE systems, all variants of extrapolation/interpolation-based multirate schemes have convergence order p (in the final asymptotic phase) provided that:

a) the basic integration scheme (i.e., the scheme for both the slow and the fast subsystem with given coupling data) has order p, and

b) the extrapolation/interpolation schemes are of approximation order p.

For the fully decoupled approach, this is a consequence of the following result, which is a particular case of a more general setting presented in [5]:

###### Theorem 2 (Consistency of fully-decoupled multirate schemes)

Given the coupled ODE-IVP (2), where f_S and f_F are Lipschitz continuous w.r.t. the sought solution. Furthermore, we apply two basic integration schemes of order p: one for y_S with macro step size H, a second for y_F with fixed multirate factor m, i.e., m steps of size h = H/m. If these integration schemes are combined with two extrapolation procedures for the coupling variables of order p, the resulting fully-decoupled multirate scheme has order p.

Since the strategy of the proof is needed for the further new results, we present the proof in detail, although a slightly more general version can be found in [5].

Proof. We consider the case that we have computed the IVP system (2) until time t̄ with initial data y_{S,t̄}, y_{F,t̄}, i.e., we have the setting given in system (4). Moreover, the unique solution of (4) is referred to as

 (y_S(t; y_{S,t̄}, y_{F,t̄})^⊤, y_F(t; y_{S,t̄}, y_{F,t̄})^⊤)  or  (y_S(t)^⊤, y_F(t)^⊤) as short-hand.

Next, we provide extrapolated, known quantities ỹ_S and ỹ_F for the coupling variables of order p (with constants L_S and L_F, respectively):

 y_S(t) − ỹ_S(t) = L_S·H^p + O(H^{p+1})  for any t ∈ [t̄, t̄+H], and
 y_F(t) − ỹ_F(t) = L_F·H^p + O(H^{p+1})  for any t ∈ [t̄, t̄+H].   (5)

Replacing the coupling variables in (4) by ỹ_S and ỹ_F, we obtain the following modified system

 ẏ_S = f_S(t, y_S, ỹ_F) =: f̃_S(t, y_S),  y_S(t̄) = y_{S,t̄},
 ẏ_F = f_F(t, ỹ_S, y_F) =: f̃_F(t, y_F),  y_F(t̄) = y_{F,t̄},   (6)

which is fully decoupled (for known extrapolations ỹ_S, ỹ_F). Its unique solution is referred to as

 (ŷ_S(t; y_{S,t̄}, y_{F,t̄})^⊤, ŷ_F(t; y_{S,t̄}, y_{F,t̄})^⊤).

Now, we apply the two basic integration schemes of order p in multirate fashion to the decoupled model (6), and we refer to the numerical solution at t* ∈ (t̄, t̄+H] as

 (y_{S,H}(t*)^⊤, y_{F,H}(t*)^⊤)^⊤.

Then, the distance between the multirate and the exact solution can be estimated as follows:

 ‖y_{S,H}(t*) − y_S(t*)‖ ≤ ‖y_{S,H}(t*) − ŷ_S(t*)‖ + ‖ŷ_S(t*) − y_S(t*)‖,
 ‖y_{F,H}(t*) − y_F(t*)‖ ≤ ‖y_{F,H}(t*) − ŷ_F(t*)‖ + ‖ŷ_F(t*) − y_F(t*)‖.   (7)

The fully decoupled multirate scheme gives for the first term on the right-hand side:

 ‖y_{S,H}(t*) − ŷ_S(t*)‖ ≤ c_S·H^{p+1} + O(H^{p+2}),
 ‖y_{F,H}(t*) − ŷ_F(t*)‖ ≤ c_F·H^{p+1} + O(H^{p+2}),   (8)

employing constants c_S, c_F (for the leading errors). Using the Lipschitz continuity of f_S and f_F for the second summand on the right-hand side of (7), we find

 ‖ŷ_S(t*) − y_S(t*)‖ ≤ ∫_{t̄}^{t*} ( L_{S,F}·‖ỹ_F(τ) − y_F(τ)‖ + L_{S,S}·‖ŷ_S(τ) − y_S(τ)‖ ) dτ,
 ‖ŷ_F(t*) − y_F(t*)‖ ≤ ∫_{t̄}^{t*} ( L_{F,S}·‖ỹ_S(τ) − y_S(τ)‖ + L_{F,F}·‖ŷ_F(τ) − y_F(τ)‖ ) dτ   (9)

with respective Lipschitz constants L_{λ,ρ} (for system λ and dependent variable ρ, λ, ρ ∈ {S, F}). We remark that this estimate is decoupled. Inserting the extrapolation estimates (5), we deduce further

 ‖ŷ_S(t*) − y_S(t*)‖ ≤ L_{S,F}·L_F·H^{p+1} + L_{S,S} ∫_{t̄}^{t*} ‖ŷ_S(τ) − y_S(τ)‖ dτ + O(H^{p+2}),
 ‖ŷ_F(t*) − y_F(t*)‖ ≤ L_{F,S}·L_S·H^{p+1} + L_{F,F} ∫_{t̄}^{t*} ‖ŷ_F(τ) − y_F(τ)‖ dτ + O(H^{p+2}).

Via Gronwall’s lemma, we deduce:

 ‖ŷ_S(t*) − y_S(t*)‖ ≤ L_{S,F}·L_F·e^{L_{S,S}(t*−t̄)}·H^{p+1} + O(H^{p+2}),
 ‖ŷ_F(t*) − y_F(t*)‖ ≤ L_{F,S}·L_S·e^{L_{F,F}(t*−t̄)}·H^{p+1} + O(H^{p+2}).   (10)

In combination with the integration estimate (8), the error (7) of the fully-decoupled multirate scheme has consistency order p on the macro scale level (local error O(H^{p+1})), which is the claim.
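The claimed order behavior can be observed numerically. The sketch below is an illustration under assumptions of our own choosing: forward Euler (p = 1) as basic scheme, constant extrapolation of the coupling variables (approximation order 1), and a toy linear system with a known exact solution; halving H should then roughly halve the error at the final time.

```python
import math

# Toy system (illustration only): y_S' = -y_S + y_F, y_F' = -2*y_F,
# with exact solution y_F = exp(-2t), y_S = 2*exp(-t) - exp(-2t).
def f_slow(t, ys, yf):
    return -ys + yf

def f_fast(t, ys, yf):
    return -2.0 * yf

def exact(t):
    return 2.0 * math.exp(-t) - math.exp(-2.0 * t), math.exp(-2.0 * t)

def multirate_euler(H, m, T=1.0):
    """Fully-decoupled multirate Euler: coupling variables frozen at the
    start of each macro step (order-0/constant extrapolation)."""
    t, ys, yf = 0.0, 1.0, 1.0
    for _ in range(round(T / H)):
        h = H / m
        ys_new = ys + H * f_slow(t, ys, yf)      # slow: y_F frozen at t
        yf_new = yf
        for j in range(m):                       # fast: y_S frozen at t
            yf_new = yf_new + h * f_fast(t + j * h, ys, yf_new)
        ys, yf, t = ys_new, yf_new, t + H
    return ys, yf

errs = []
for H in (0.1, 0.05, 0.025):
    ys, yf = multirate_euler(H, m=4)
    es, ef = exact(1.0)
    errs.append(abs(ys - es) + abs(yf - ef))

# successive error ratios should approach 2 for a first-order scheme
ratios = [errs[i] / errs[i + 1] for i in range(len(errs) - 1)]
```

With a basic scheme of order p and extrapolation of order p, the ratios would approach 2^p in the same manner.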

The proof can be slightly adapted to verify the convergence result for both remaining variants as well:

###### Corollary 3 (Consistency of slowest-first multirate schemes)

The convergence result of Theorem 2 remains valid if the fully-decoupled approach is replaced by the slowest-first approach, i.e., the coupling variables (during the integration of y_F) are evaluated using interpolation of the already computed slow data in the current macro step.

Proof. We just give the changes to the above proof. For the slowest-first variant, the modified equation on the current macro step reads

 ẏ_S = f_S(t, y_S, ỹ_F) =: f̃_S(t, y_S),  y_S(t̄) = y_{S,t̄},
 ẏ_F = f_F(t, y_S^int, y_F) =: f̃_F(t, y_F),  y_F(t̄) = y_{F,t̄}   (11)

with extrapolated values ỹ_F as in the fully-decoupled approach and interpolated values y_S^int of order p, based on the numerical approximations of the slow variables in the current macro step, such that it holds:

 ŷ_S(t) − y_S^int(t) = L̃_S·H^p + O(H^{p+1})  for any t ∈ [t̄, t̄+H].   (12)

The hat notation is again employed for the exact solution of system (11). The computation of the slow part still employs extrapolated coupling variables. This decouples the slow part from the fast part as before, and hence the error estimates for y_S are unchanged. In fact, we can use the estimates (8) and (10) for any time t* ∈ [t̄, t̄+H].

Now, for the fast part, the corresponding estimate to (9) reads (using the triangle inequality with ŷ_S)

 ‖ŷ_F(t*) − y_F(t*)‖ ≤ ∫_{t̄}^{t*} ( L_{F,S}·( ‖ŷ_S(τ) − y_S^int(τ)‖ + ‖ŷ_S(τ) − y_S(τ)‖ ) + L_{F,F}·‖ŷ_F(τ) − y_F(τ)‖ ) dτ.

Using (10) (with τ instead of t*) and using (12), we find

 ‖ŷ_F(t*) − y_F(t*)‖ ≤ ∫_{t̄}^{t*} ( L_{F,S}·L̃_S·H^p + O(H^{p+1}) + L_{F,S}·L_{S,F}·L_F·e^{L_{S,S}(τ−t̄)}·H^{p+1} + O(H^{p+2}) + L_{F,F}·‖ŷ_F(τ) − y_F(τ)‖ ) dτ
  ≤ L_{F,S}·L̃_S·H^{p+1} + L_{F,F} ∫_{t̄}^{t*} ‖ŷ_F(τ) − y_F(τ)‖ dτ + O(H^{p+2}).

Now, the application of Gronwall’s lemma leads to

 ‖ŷ_F(t*) − y_F(t*)‖ ≤ L_{F,S}·L̃_S·e^{L_{F,F}(t*−t̄)}·H^{p+1} + O(H^{p+2}).

Finally, we need to form the total error in the fast components, the equivalent to (7). Since the numerical scheme for the fast components is of order p, we can still employ (8), and we get the estimate

 ‖y_{F,H}(t*) − y_F(t*)‖ ≤ (c_F + L_{F,S}·L̃_S·e^{L_{F,F}·H})·H^{p+1} + O(H^{p+2}).   (13)

###### Remark 4

If one uses interpolation schemes of order p+1 instead of p, which is the case if dense output is used within embedded Runge-Kutta schemes, for example, one has to replace the term L_{F,S}·L̃_S·H^{p+1} by L_{F,S}·L̃_S·H^{p+2}, which yields the estimate

 ‖y_{F,H}(t*) − y_F(t*)‖ ≤ c_F·H^{p+1} + O(H^{p+2}),   (14)

that is, the extra-/interpolation error is dominated by the error of the numerical integration scheme.

###### Corollary 5 (Consistency of fastest-first multirate schemes)

The convergence result of Theorem 2 remains valid if the fully-decoupled approach is replaced by the fastest-first one, i.e., the coupling variables (during the integration of y_S) are evaluated using interpolation instead of extrapolation.

Proof. For the fastest-first variant, the modified equation (6) reads on [t̄, t̄+H]

 ẏ_S = f_S(t, y_S, y_F^int) =: f̃_S(t, y_S),  y_S(t̄) = y_{S,t̄},
 ẏ_F = f_F(t, ỹ_S, y_F) =: f̃_F(t, y_F),  y_F(t̄) = y_{F,t̄}   (15)

with extrapolated values ỹ_S as in the fully-decoupled approach and interpolated values y_F^int of order p, based on the numerical approximations of the fast variables in the current macro step:

 ŷ_F(t) − y_F^int(t) = L̃_F·H^p + O(H^{p+1})  for any t ∈ [t̄, t̄+H].   (16)

Here, the second equation for y_F is unchanged with respect to Theorem 2, since the extrapolation ỹ_S is still used. Hence, we still have all respective estimates for the fast part, in particular (8) and (10). For the slow part, the corresponding estimate to (9) now reads (using the triangle inequality with ŷ_F)

 ‖ŷ_S(t*) − y_S(t*)‖ ≤ ∫_{t̄}^{t*} ( L_{S,F}·( ‖ŷ_F(τ) − y_F^int(τ)‖ + ‖ŷ_F(τ) − y_F(τ)‖ ) + L_{S,S}·‖ŷ_S(τ) − y_S(τ)‖ ) dτ.

Using (10) (with t* replaced by τ) and using (16), we find

 ‖ŷ_S(t*) − y_S(t*)‖ ≤ ∫_{t̄}^{t*} ( L_{S,F}·L̃_F·H^p + O(H^{p+1}) + L_{S,F}·L_{F,S}·L_S·e^{L_{F,F}(τ−t̄)}·H^{p+1} + O(H^{p+2}) + L_{S,S}·‖ŷ_S(τ) − y_S(τ)‖ ) dτ
  ≤ L_{S,F}·L̃_F·H^{p+1} + L_{S,S} ∫_{t̄}^{t*} ‖ŷ_S(τ) − y_S(τ)‖ dτ + O(H^{p+2}).

Applying now Gronwall’s lemma leads to

 ‖ŷ_S(t*) − y_S(t*)‖ ≤ L_{S,F}·L̃_F·e^{L_{S,S}·H}·H^{p+1} + O(H^{p+2}).

Finally, we use both the above deduced error and the numerical error (8) in the general error sum (7), and we find for the slow part

 ‖y_{S,H}(t*) − y_S(t*)‖ ≤ (c_S + L_{S,F}·L̃_F·e^{L_{S,S}·H})·H^{p+1} + O(H^{p+2}).   (17)
###### Remark 6

If one uses interpolation schemes of order p+1 instead of p, which is the case if dense output is used within embedded Runge-Kutta schemes, for example, one has to replace the term L_{S,F}·L̃_F·H^{p+1} by L_{S,F}·L̃_F·H^{p+2}, which yields the estimate

 ‖y_{S,H}(t*) − y_S(t*)‖ ≤ c_S·H^{p+1} + O(H^{p+2}),   (18)

that is, the extra-/interpolation error is dominated by the error of the numerical integration scheme.

###### Remark 7

For the basic integration schemes employed in Thm. 2, Cor. 3 and Cor. 5, we can use either

1. one-step integration schemes, or

2. multistep schemes,

where in both cases the schemes are assumed to be zero-stable.

###### Remark 8 (Schemes)

Extrapolation of order 0 and 1 can easily be obtained from the initial data at t̄ and derivative information, which is provided by the ODE. This directly allows the construction of multirate methods of order 2.
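The construction mentioned in this remark can be sketched as follows (a minimal illustration; the test problem y' = -y is our own choice): both extrapolants use only the data available at t̄, with the slope of the linear one supplied by evaluating the right-hand side of the ODE.

```python
import math

def extrapolants(t_bar, y_bar, f):
    """Order-0 (constant) and order-1 (linear) extrapolants on
    [t_bar, t_bar + H], built solely from the initial data y(t_bar);
    the slope of the linear one is f(t_bar, y_bar) from the ODE y' = f."""
    slope = f(t_bar, y_bar)
    const = lambda t: y_bar                           # error O(H)
    linear = lambda t: y_bar + (t - t_bar) * slope    # error O(H^2)
    return const, linear

# usage on y' = -y, y(0) = 1, whose exact solution is exp(-t)
c0, c1 = extrapolants(0.0, 1.0, lambda t, y: -y)
H = 0.1
err0 = abs(c0(H) - math.exp(-H))   # roughly of size H
err1 = abs(c1(H) - math.exp(-H))   # roughly of size H^2 / 2
```

Combined with a basic scheme of order 2, the linear extrapolant realizes the order-2 multirate methods mentioned above.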

###### Remark 9

Notice that for a working multirate scheme, we still have to specify the extrapolation/interpolation formulas. In fact, arbitrarily high orders of the extra-/interpolation are only possible if information from previous time steps is used. Generally, this may turn a one-step scheme into a multistep scheme and raise questions concerning stability. However, if the extrapolation is computed sequentially in a spline-oriented fashion (see Remark 1), the modified functions f̃_S and f̃_F are the same for all time intervals inside [t̄, t̄+H], and the extrapolation/interpolation-based multirate scheme can still be considered as a one-step scheme applied to the modified ODE equations.

## 4 The DAE case

The component-wise partitioning (2) (as well as the right-hand side partitioning (3)) can be generalized to the case of differential-algebraic equations (DAEs). Let us assume that the slow and the fast subsystem can each be written as a semi-explicit system of index 1, for given corresponding coupling terms regarded as time functions. This reads:

 ẏ_S = f_S(t, y_S, y_F, z_S, z_F),  y_S(t_0) = y_{S,0},
 ẏ_F = f_F(t, y_S, y_F, z_S, z_F),  y_F(t_0) = y_{F,0},
 0 = g_S(t, y_S, y_F, z_S, z_F),
 0 = g_F(t, y_S, y_F, z_S, z_F).   (19)

Moreover, the overall system is assumed to be of index 1 as well. All index-1 conditions lead to the assumption that the following Jacobians

 ∂g_S/∂z_S,  ∂g_F/∂z_F  and  [ ∂g_S/∂z_S , ∂g_S/∂z_F ; ∂g_F/∂z_S , ∂g_F/∂z_F ]  are regular   (20)

in a neighborhood of the solution. For later use, we introduce Lipschitz constants with respect to the algebraic variables:

 ‖g_S(t, y_S, y_F, z_S, z_F) − g_S(t, y_S, y_F, ẑ_S, ẑ_F)‖ ≤ L^{g_S}_S·‖z_S − ẑ_S‖ + L^{g_S}_F·‖z_F − ẑ_F‖   (21)

and analogously L^{g_F}_S, L^{g_F}_F for g_F, as well as L^{f_λ}_S, L^{f_λ}_F for f_λ with λ ∈ {S, F}. Furthermore, for the Lipschitz constants with respect to the differential variables, we use the symbol M^{f_λ}_ρ (with λ, ρ ∈ {S, F}), e.g.,

 ‖f_S(t, y_S, y_F, z_S, z_F) − f_S(t, ŷ_S, ŷ_F, z_S, z_F)‖ ≤ M^{f_S}_S·‖y_S − ŷ_S‖ + M^{f_S}_F·‖y_F − ŷ_F‖.   (22)

To analyze inter-/extrapolation-based multirate schemes for these general index-1 DAEs, we consider dynamic iteration schemes with old, known iterates (y^{(i)}_S, y^{(i)}_F, z^{(i)}_S, z^{(i)}_F) and new iterates (y^{(i+1)}_S, y^{(i+1)}_F, z^{(i+1)}_S, z^{(i+1)}_F) to be computed, defined by the following dynamic system

 ẏ^{(i+1)}_S = F_S(t, y^{(i+1)}_S, y^{(i+1)}_F, z^{(i+1)}_S, z^{(i+1)}_F, y^{(i)}_S, y^{(i)}_F, z^{(i)}_S, z^{(i)}_F),
 0 = G_S(t, y^{(i+1)}_S, y^{(i+1)}_F, z^{(i+1)}_S, z^{(i+1)}_F, y^{(i)}_S, y^{(i)}_F, z^{(i)}_S, z^{(i)}_F),
 ẏ^{(i+1)}_F = F_F(t, y^{(i+1)}_S, y^{(i+1)}_F, z^{(i+1)}_S, z^{(i+1)}_F, y^{(i)}_S, y^{(i)}_F, z^{(i)}_S, z^{(i)}_F),
 0 = G_F(t, y^{(i+1)}_S, y^{(i+1)}_F, z^{(i+1)}_S, z^{(i+1)}_F, y^{(i)}_S, y^{(i)}_F, z^{(i)}_S, z^{(i)}_F)

based on splitting functions F_S, F_F and G_S, G_F. To have a simpler notation, we introduce the abbreviations

 x := (y_S, y_F, z_S, z_F),  x_S := (y_S, z_S),  x_F := (y_F, z_F).

The above splitting functions have to be consistent, that is,

 F_λ(t, x, x) = f_λ(t, x),
 G_λ(t, x, x) = g_λ(t, x),  for λ ∈ {F, S}.

For the different multirate approaches, we have the following splitting functions:

a) Fully-decoupled approach:

 F_S(t, x^{(i+1)}, x^{(i)}) = f_S(t, x^{(i+1)}_S, x^{(i)}_F),  F_F(t, x^{(i+1)}, x^{(i)}) = f_F(t, x^{(i)}_S, x^{(i+1)}_F),
 G_S(t, x^{(i+1)}, x^{(i)}) = g_S(t, x^{(i+1)}_S, x^{(i)}_F),  G_F(t, x^{(i+1)}, x^{(i)}) = g_F(t, x^{(i)}_S, x^{(i+1)}_F).

b) Slowest-first approach:

 F_S(t, x^{(i+1)}, x^{(i)}) = f_S(t, x^{(i+1)}_S, x^{(i)}_F),  F_F(t, x^{(i+1)}, x^{(i)}) = f_F(t, x^{(i+1)}_S, x^{(i+1)}_F),
 G_S(t, x^{(i+1)}, x^{(i)}) = g_S(t, x^{(i+1)}_S, x^{(i)}_F),  G_F(t, x^{(i+1)}, x^{(i)}) = g_F(t, x^{(i+1)}_S, x^{(i+1)}_F).

c) Fastest-first approach:

 F_S(t, x^{(i+1)}, x^{(i)}) = f_S(t, x^{(i+1)}_S, x^{(i+1)}_F),  F_F(t, x^{(i+1)}, x^{(i)}) = f_F(t, x^{(i)}_S, x^{(i+1)}_F),
 G_S(t, x^{(i+1)}, x^{(i)}) = g_S(t, x^{(i+1)}_S, x^{(i+1)}_F),  G_F(t, x^{(i+1)}, x^{(i)}) = g_F(t, x^{(i)}_S, x^{(i+1)}_F).

It has been shown that convergence of a dynamic iteration scheme for DAEs can no longer be guaranteed by choosing the window step size small enough; see, e.g., [2, 19]. An additional contractivity condition has to hold to guarantee convergence. We have to distinguish the following two aspects of contraction:

a) Convergence within one window [t̄, t̄+H]: in this case, it is sufficient to have [19]

 α := max_{t̄ ≤ τ ≤ t̄+H} ‖ (∂G/∂z^{(i+1)})^{−1} · ∂G/∂z^{(i)} ‖ < 1,  with G := (G_S^⊤, G_F^⊤)^⊤,  z := (z_S^⊤, z_F^⊤)^⊤,

using a suitable matrix norm and evaluation at the analytic solution x(t). The quantity α is referred to as contraction number. For the type of norm employed on the above left-hand side, we use later the following short-hand

 ‖ (∂G_ρ/∂x^{(i+1)}_λ)^{−1} ∂G_λ/∂x^{(i)}_τ ‖ := max_{t̄ ≤ s ≤ t̄+H} ‖ ( (∂G_ρ/∂x^{(i+1)}_λ)^{−1} · ∂G_λ/∂x^{(i)}_τ )|_{(s, x(s), x(s))} ‖

(for λ, ρ, τ ∈ {S, F}).

b) Stable error propagation from window to window: let us assume that k iterations are performed on the current time window. Then a sufficient condition for stable error propagation from window to window is given by [2]

 L_Φ · α^k < 1

with Lipschitz constant L_Φ of the extrapolation operator Φ.
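The role of the contraction number can be illustrated on a deliberately simple example. The toy constraints, the coefficients a, b and the starting values below are our own assumptions; the Jacobi-type sweep mirrors the fully-decoupled splitting, and in this scalar case the contraction number in the infinity-norm reduces to max(|a|, |b|).

```python
# Toy linear coupled constraints (hypothetical, for illustration only):
#   g_S(z_S, z_F) = z_S - a*z_F - 1 = 0,   g_F(z_S, z_F) = z_F - b*z_S - 2 = 0.
a, b = 0.3, 0.4

# contraction number of the fully-decoupled (Jacobi-type) splitting,
# infinity-norm, scalar case:
alpha = max(abs(a), abs(b))

# exact solution of the coupled constraints
z_s_exact = (1 + 2 * a) / (1 - a * b)
z_f_exact = (2 + b) / (1 - a * b)

# dynamic-iteration sweeps: each sweep solves g_S and g_F for the new
# algebraic iterates while the coupling variables keep their old values
z_s, z_f = 0.0, 0.0
errors = []
for _ in range(20):
    z_s, z_f = a * z_f + 1, b * z_s + 2
    errors.append(abs(z_s - z_s_exact) + abs(z_f - z_f_exact))
```

Since alpha < 1 here, the error shrinks with every sweep; for |a| or |b| above 1 the same sweep would diverge no matter how small the window is chosen, which is precisely why the contractivity condition cannot be replaced by a step size restriction.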

###### Remark 10

i) Notice that for the stable error propagation in b) it might be necessary that more than one iteration is performed, although the error reduction within the window (i.e., α < 1) holds.

ii) If one employs a dynamic iteration with only one iteration (one solve of the DAE systems), then a multirate scheme is obtained. These schemes are referred to as multirate co-simulation, see [4].

As we did for the ODE case, interpolation/extrapolation-based multirate schemes of convergence order p for coupled index-1 DAEs can now be obtained by replacing the exact solution of the DAE system with splitting functions

a) by a numerical integration of convergence order p,

b) with stopping after the first iteration (i.e., k = 1), plus

c) employing extrapolation/interpolation schemes of order p and

d) having satisfied the contractivity condition L_Φ · α < 1.

For the different coupling strategies, this condition reads:

a) fully-decoupled approach:

 L_Φ · max_{t̄ ≤ τ ≤ t̄+H} ‖ [ ∂G_S/∂z^{(i+1)}_S , 0 ; 0 , ∂G_F/∂z^{(i+1)}_F ]^{−1} · [ 0 , ∂G_S/∂z^{(i)}_F ; ∂G_F/∂z^{(i)}_S , 0 ] ‖ < 1