Multirate Timestepping for the Incompressible Navier-Stokes Equations in Overlapping Grids

02/29/2020
by Ketan Mittal, et al.

We develop a multirate timestepper for semi-implicit solutions of the unsteady incompressible Navier-Stokes equations (INSE) based on a recently-developed multidomain spectral element method (SEM). For incompressible flows, multirate timestepping (MTS) is particularly challenging because of the tight coupling implied by the incompressibility constraint, which manifests as an elliptic subproblem for the pressure at each timestep. The novelty of our approach stems from the development of a stable overlapping Schwarz method applied directly to the Navier-Stokes equations, rather than to the convective, viscous, and pressure substeps that are at the heart of most INSE solvers. Our MTS approach is based on a predictor-corrector (PC) strategy that preserves the temporal convergence of the underlying semi-implicit timestepper. We present numerical results demonstrating that this approach scales to an arbitrary number of overlapping grids, accurately models complex turbulent flow phenomena, and improves computational efficiency in comparison to singlerate timestepping-based calculations.


1 Introduction

Computational simulations, driven by the advent of high-performance computing resources, have become ubiquitous for modeling and understanding complex flow phenomena. The accuracy of these computational simulations primarily depends on two key aspects of the method used to solve the partial differential equations (PDEs) of interest: (i) the spatial discretization and (ii) the temporal discretization. For spatial discretization, methods such as the finite difference (FD), finite element (FEM), finite volume (FVM), and spectral element (SEM) methods have become popular for representing the solution of the PDE on a discrete set of nodes/volumes/elements covering the domain gresho1978time; rosenfeld1988solution; dfm02. The spatial accuracy of the discrete solution obtained using these methods depends on the size of the local grid spacing used to model the domain and the order of accuracy of the spatial discretization. Similarly, the temporal accuracy of the solution depends on the timestep size ($\Delta t$) and the order ($k$) of the timestepper used for temporal integration. Some popular methods for time-integration include the Runge-Kutta (RK) method, the Adams-Bashforth (AB) method, and the backward differentiation formula (BDF) method. The focus of this work is on the temporal integration of the incompressible Navier-Stokes equations for modeling fluid dynamics and heat transfer in complex domains.

For the unsteady Navier-Stokes equations (NSE), the nonlinear convective term is typically treated explicitly tomboulides1997, and the maximum allowable timestep size for stable time-integration is determined by the Courant-Friedrichs-Lewy (CFL) number lewy1928partiellen. For a problem in $d$ space dimensions, we define a local CFL,

$\mathrm{CFL}_i = \Delta t \sum_{j=1}^{d} \dfrac{|u_{ij}|}{\Delta x_{ij}}$,   (1)

where, at gridpoint $i$, $u_{ij}$ is the $j$th component of velocity, $\Delta x_{ij}$ is an approximate grid spacing in the $j$th direction, and $\Delta t$ is the timestep size. We also define a global CFL,

$\mathrm{CFL} = \max_i \mathrm{CFL}_i$.   (2)

To within a scaling factor, the CFL is a robust and easily evaluated surrogate for $|\lambda|\Delta t$, where $|\lambda|$ is the spectral radius of the (assumed skew-symmetric) convection operator. The CFL limit associated with explicit time-advancement of the convection operator is governed by a scale factor $S$, such that stability requires $|\lambda|\Delta t \leq S$, and $S$ depends on the chosen timestepper and spatial discretization; for example, second-order centered differences on a one-dimensional periodic domain with third-order Adams-Bashforth timestepping, Fourier methods, and the SEM each have different stability constants dfm02.
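To make the definitions in (1)-(2) concrete, the following minimal sketch evaluates the local and global CFL for an array of gridpoint velocities and grid spacings. The data layout (one row per gridpoint, one column per direction) and the toy numbers are our own illustrative assumptions, not part of the paper.

```python
import numpy as np

def local_cfl(u, dx, dt):
    """Local CFL at each gridpoint: dt * sum_j |u_ij| / dx_ij, cf. (1).

    u, dx : arrays of shape (npts, d) with velocity components and approximate
            grid spacings per direction (illustrative layout).
    """
    return dt * np.sum(np.abs(u) / dx, axis=1)

def global_cfl(u, dx, dt):
    """Global CFL: the maximum of the local values, cf. (2)."""
    return np.max(local_cfl(u, dx, dt))

# Toy usage: 1000 gridpoints in d = 2 dimensions; a small cluster of finely
# resolved points (as near an inlet pipe or trailing edge) dominates the result.
rng = np.random.default_rng(0)
u = rng.normal(size=(1000, 2))
dx = np.concatenate([np.full((900, 2), 1e-1), np.full((100, 2), 1e-3)])
dt = 1e-3
print(global_cfl(u, dx, dt))  # set by the few finely resolved, fast points
```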

Most numerical approximations use the same timestep size throughout the domain. These methods are classified as singlerate timestepping (STS) methods. A well-known challenge in STS-based methods, due to the nature of (2), is that even a single point in the domain having a high speed-to-grid-size ratio can have the undesirable effect of limiting the allowable timestep size throughout the domain. This situation occurs, for example, near airfoil trailing edges, where flow speeds are high and computational meshes are often dense. Another common case is in the simulation of plumes fabregat2016dynamics, as illustrated in Fig. 1. Figure 1(a) shows a slice-view of the spectral element mesh used to model the domain and Fig. 1(c) shows the instantaneous velocity magnitude contours. Here, resolution requirements for the turbulence in and near the inlet pipe result in the finest (minimum $\Delta x$) meshes lying in the region where velocities are the largest. Away from the inlet pipe, the turbulence intensity is lower and the meshes are correspondingly coarser. Consequently, we observe that the local CFL is almost two to three orders of magnitude higher for the elements in the plume region as compared to elements in the far-field. The CFL variation throughout the domain is shown on a log-scale in Fig. 1(d). Using the same timestep size for integrating the NSE thus leads to unnecessary computational effort for elements in the far-field.

The scale disparity in CFL is even more evident when nonconforming overlapping grids are used for this domain (Fig. 1(b)). It is well known that overlapping grids are highly effective in reducing the computational cost of calculations for domains featuring flow structures with widely varying spatial scales in different regions of the domain. Since overlapping grids relax the constraint of mesh conformity, they reduce the total element count (10% in this case) with grids that are constructed according to the physics in the region that they cover. In this example, since the outer grid is much coarser than the inner grid, the outer grid should be able to use a timestep size that is orders of magnitude larger than that of the inner grid. Most overlapping grid-based methods, however, use an STS-based approach mittal2019nonconforming; henshaw2012cgins; angel2018, which results in superfluous computational work for the outer grid.

Figure 1: Slice-view of the (a) conforming monodomain spectral element mesh for modeling an axisymmetric thermally buoyant plume, (b) nonconforming overlapping spectral element meshes, (c) velocity magnitude plot, and (d) CFL distribution (log-scale) for a given timestep size.

Multirate timestepping methods were first introduced in the seminal work of Rice in 1960 rice1960split. Rice developed a Runge-Kutta-based timestepping strategy for solving a system of two ordinary differential equations (ODEs) with different integration step sizes. Shortly after, multirate timestepping methods were popularized by Gear gear1974multirate; gear1984multirate and Andrus andrus1979numerical; andrus1993stability. In this pioneering work, Gear analyzed the stability and accuracy of Euler-method-based strategies for MTS. Similarly, Andrus derived conditions for absolute stability of a high-order Runge-Kutta-based approach for MTS in a system of first-order ODEs. The methods developed by Gear and Andrus were slowest-first- or fastest-first-based, where the ODE with the slower component is solved first followed by the ODE with the faster component, or vice-versa. Other similar works include gunther2001multirate, where Gunther describes a multirate partitioned Runge-Kutta-based (slowest-first) scheme for solving a system of ODEs with stiff components, and verhoeven2007stability, where Verhoeven analyzes the stability of BDF-based (slowest-first) MTS methods for understanding the time behavior of electrical circuits.

While generally useful, a drawback of slowest-first or fastest-first schemes is that they limit the parallelism of the calculation, since the ODEs/PDEs are integrated sequentially. In engstler1997multirate, Engstler proposed a method based on Richardson extrapolation to simultaneously solve for the slow and the fast-moving components of a system of ODEs. Similarly, for PDEs, Dawson dawson2001high, Constantinescu constantinescu2007multirate, and Seny seny2013multirate have developed parallel-in-time MTS methods for solving hyperbolic or parabolic equations with different timestep sizes for different element groups in a mesh, based on the local CFL number.

Other notable developments in the area of MTS methods include savcenco2007multirate; rybak2015multirate; emmett2014high; trahan2012local; gupta2016multi; mikida2018multi. In savcenco2007multirate, Savcenco introduced a novel approach for MTS where all the equations in a system of ODEs are first integrated using a large global timestep size everywhere in the domain, followed by error indicators to determine the equations that require a smaller timestep size. This approach thus avoids unnecessary computation by using a smaller timestep size only for the ODEs that require it. Rybak rybak2015multirate has proposed an MTS method for solving fluid flow in a coupled free-flow and porous-medium domain. In rybak2015multirate, the PDE for the free-flow domain (INSE) is first temporally integrated using a CFL-dependent smaller timestep size, followed by a larger timestep size to solve the PDE for the porous medium. While Rybak's and Savcenco's approaches are effective for MTS, they are sequential and lack parallelism, similar to the slowest-first- and fastest-first-based methods. In emmett2014high, Emmett used different timestep sizes for solving the fluid motion and the relatively stiff chemical mechanisms to model compressible reacting flow with complex chemistry. This approach can also be extended to conjugate heat transfer problems, where the time scales associated with energy transfer in the fluid and solid media are very different. Trahan trahan2012local has developed a fastest-first approach for solving the shallow water equations in monodomain conforming grids, Gupta et al. gupta2016multi use multirate timestepping for modeling subsurface methane hydrate reservoirs, and Mikida et al. mikida2018multi solve the compressible Navier-Stokes equations using different timestep sizes in overlapping grids with a fastest-first Adams-Bashforth-based scheme.

A survey of the literature shows that multirate timestepping methods have mainly been developed for parabolic and hyperbolic problems gear1984multirate; verhoeven2007stability; engstler1997multirate; constantinescu2007multirate; seny2013multirate; rybak2015multirate; emmett2014high; trahan2012local; mikida2018multi; klockner2010high. MTS methods are virtually nonexistent for the incompressible Navier-Stokes equations because the solution is very sensitive to the pressure, which satisfies an elliptic Poisson problem at every timestep dfm02. Since the characteristic propagation speed of pressure perturbations is infinite in incompressible flows, existing approaches for multirate timestepping do not extend to a single conforming mesh. Overlapping grids, however, can decouple the pressure Poisson solve across the different grids modeling a domain, which allows us to develop a multirate timestepping method. Note that while the MTS method of rybak2015multirate pertains to the INSE, it solves the INSE with a fixed timestep size in the entire free-flow domain, followed by a different timestep size for the porous-medium equations.

In the current work, we develop a parallel multirate timestepping strategy in which the INSE are integrated simultaneously in all the overlapping grids. This method circumvents the difficulty of the global divergence-free constraint through a combination of stable high-order predictor-corrector time integrators and mass-flux corrections for time advancement of the unsteady Stokes problems. The nonlinear terms continue to be treated explicitly in time, as in the case of a single conforming domain, but are now advanced without being constrained by the widely disparate CFL values throughout the global domain. The method scales to an arbitrary number of overlapping grids and supports arbitrarily high (integer) timestep size ratios. Additionally, the approach presented in this paper is agnostic to the spatial discretization (FEM, FVM, SEM, etc.) and can be readily integrated into existing solvers.

The remainder of the paper is organized as follows. Section 2 summarizes the monodomain and overlapping grid-based framework for solving the incompressible Navier-Stokes equations using an STS-based approach tomboulides1997; mittal2019nonconforming. Section 3 builds upon the STS-based method to describe the MTS-based approach for solving the INSE in overlapping grids. In Section 4, we demonstrate that this novel MTS-based approach maintains the temporal accuracy of the underlying BDF/EXT-based timestepper and accurately models complex turbulent flow and heat transfer phenomena. Here, we also demonstrate that multirate timestepping reduces the computational cost of a calculation in comparison to the STS-based approach. Finally, in Section 5 we discuss some directions for future work.

2 Preliminaries

This section provides a description of the singlerate timestepping-based framework for solving the incompressible Navier-Stokes equations in mono- and multi-domain settings, with the latter based on overlapping grids.

All the work presented here is based on the SEM dfm02, but the approach readily extends to other spatial discretizations. Introduced by Patera in 1984, the SEM is a high-order weighted residual method that combines the geometric flexibility of finite elements (the domain is decomposed into $E$ smaller elements) with the rapid convergence of spectral methods. The basis functions in the SEM are $N$th-order tensor-product Lagrange polynomials on the Gauss-Lobatto-Legendre (GLL) quadrature points inside each element patera84. Due to this tensor-product configuration, all operators in the SEM can be expressed in a factored matrix-free form, which leads to fast operator evaluation ($O(EN^{d+1})$ work in $d$ dimensions) and low operator storage ($O(EN^{d})$). The method requires only function continuity at element interfaces yet yields exponential convergence of the solution with $N$, resulting in a flexible method with low numerical dispersion. These features make the SEM well-suited for simulation of turbulent flow in complex domains.
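As an illustration of the factored, matrix-free operator evaluation mentioned above, the sketch below applies the two reference-space derivative operators on a single 2D tensor-product element using small dense matrix products, so that the full $(N+1)^2 \times (N+1)^2$ operators are never formed. The differentiation matrix D is a random placeholder here; in an actual SEM code it would be built from the GLL points.

```python
import numpy as np

def grad_ref_2d(u, D):
    """Reference-space derivatives on one 2D tensor-product element.

    u : (N+1, N+1) nodal values on the element's GLL points.
    D : (N+1, N+1) one-dimensional differentiation matrix.
    (D kron I) u and (I kron D) u are applied as dense (N+1)x(N+1) products,
    costing O(N^3) per element instead of O(N^4) for a fully assembled operator.
    """
    ur = D @ u       # derivative along the first reference direction
    us = u @ D.T     # derivative along the second reference direction
    return ur, us

# Toy usage with a placeholder differentiation matrix.
N = 7
rng = np.random.default_rng(1)
u = rng.normal(size=(N + 1, N + 1))
D = rng.normal(size=(N + 1, N + 1))  # stand-in; a real D comes from the GLL basis
ur, us = grad_ref_2d(u, D)
```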

2.1 Governing Equations

We consider the incompressible Navier-Stokes equations in nondimensional form,

$\dfrac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u} = -\nabla p + \dfrac{1}{Re}\nabla^{2}\mathbf{u} + \mathbf{f}$,   (3)
$\nabla\cdot\mathbf{u} = 0$,   (4)

where $\mathbf{u}$ and $p$ represent the unknown velocity and pressure, which are functions of position ($\mathbf{x}$) and time ($t$), and $\mathbf{f}$ is the prescribed forcing. Here, $Re = UL/\nu$ is the Reynolds number based on the characteristic length scale $L$, velocity scale $U$, and kinematic viscosity of the fluid $\nu$. In addition to the INSE, we also consider the energy equation

$\dfrac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T = \dfrac{1}{Pe}\nabla^{2} T + \dot{q}$,   (5)

where $T$ represents the temperature solution and $\dot{q}$ is an energy source term. $Pe = Re\,Pr$ is the Peclet number, which depends on the Reynolds number and the Prandtl number. The Prandtl number ($Pr$) is the ratio of the momentum diffusivity ($\nu$) and the thermal diffusivity ($\alpha$). The solution of (3)-(5) also depends on the initial conditions (for time-dependent problems) and boundary conditions.

2.2 Solution of the INSE in a Monodomain Grid

In our framework, we solve the unsteady INSE in velocity-pressure form using semi-implicit BDF/EXT timestepping in which the time derivative is approximated by a $k$th-order backward difference formula (BDF), the nonlinear terms (and any other forcing) are treated with a $k$th-order extrapolation (EXT), and the viscous and pressure terms are treated implicitly. (From here on, we use $k$ to represent the order of accuracy of our temporal discretization, unless otherwise stated.) This approach leads to a linear unsteady Stokes problem to be solved at each timestep, which is split into independent viscous and pressure (Poisson) updates tomboulides1997.
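As a concrete illustration of this semi-implicit splitting (written in our own notation with the standard BDF3/EXT3 coefficients; the paper's exact symbols may differ), the $k = 3$ discretization of (3) reads:

```latex
% Illustrative BDF3/EXT3 semi-implicit step with constant timestep \Delta t:
% pressure and viscous terms are implicit at t^n, nonlinear and forcing terms extrapolated.
\frac{1}{\Delta t}\left(\tfrac{11}{6}\,\mathbf{u}^{n} - 3\,\mathbf{u}^{n-1}
   + \tfrac{3}{2}\,\mathbf{u}^{n-2} - \tfrac{1}{3}\,\mathbf{u}^{n-3}\right)
 = -\nabla p^{n} + \frac{1}{Re}\nabla^{2}\mathbf{u}^{n}
   + \sum_{j=1}^{3}\alpha_j\left(\mathbf{f}^{\,n-j} - \mathbf{u}^{n-j}\cdot\nabla\mathbf{u}^{n-j}\right),
\qquad (\alpha_1,\alpha_2,\alpha_3) = (3,\,-3,\,1).
```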

Assuming the solution is known at previous timesteps $t^{n-1}, t^{n-2}, \dots$ and that a constant timestep size $\Delta t$ is used for all timesteps, we compute a tentative velocity field at time $t^n$ with contributions from the BDF and the explicit terms as

(6)

where the explicit contributions (the extrapolated nonlinear and forcing terms) are collected as

(7)

Here, the superscripts indicate quantities evaluated at earlier timesteps, and the BDF and EXT coefficients weight the solution history and the extrapolated terms, respectively. The tentative velocity field constitutes the nonlinear update but does not account for the divergence-free constraint or viscous effects. The divergence-free constraint (4) is enforced through a pressure correction: a pressure Poisson equation is obtained by taking the divergence of the momentum equation, assuming the solution is divergence-free at time $t^n$, and using the identity $\nabla^{2}\mathbf{u} = \nabla(\nabla\cdot\mathbf{u}) - \nabla\times\nabla\times\mathbf{u}$:

(8)
(9)

where the right-hand-side data is given by

(10)

The advantage of using the curl-curl form of the viscous term to decouple the velocity and pressure solves is that the equation governing the error in the divergence is an elliptic PDE instead of a parabolic PDE. As a result, this formulation is stable, with splitting-induced divergence errors that remain small tomboulides1997; fischer17.

Substituting the pressure solution into (3), the velocity at time $t^n$ is obtained by solving the Helmholtz equation

(11)

Similar to (11), using implicit treatment of the diffusion term and explicit treatment of the advection term in the energy equation, the solution for temperature is obtained by solving the Helmholtz equation

(12)

Spatial discretization of (8)-(12) is based on variational projection operators dfm02. We impose either essential (Dirichlet) boundary conditions or natural (Neumann) boundary conditions on a surface for velocity (and temperature). As expected, surfaces that have Dirichlet conditions for velocity have Neumann conditions for pressure, and vice-versa. We note that we use $\partial\Omega^D$ to denote the subset of the domain boundary on which Dirichlet conditions are imposed on the velocity, and $\partial\Omega^O$ for the subset (e.g., outflow) on which pressure is prescribed.

Using (8)-(12), the Navier-Stokes solution time-advancement can be summarized as:

  1. Compute the tentative velocity field using (6), which accounts for the BDF terms and the time-extrapolated nonlinear (EXT) terms.

  2. Solve the linear Stokes subproblems (8) and (11) to compute the velocity-pressure solution,

    (13)

Here, the inhomogeneous data, determined using the tentative velocity field, accounts for all inhomogeneities for both pressure and velocity, given on the right-hand sides of (8) and (11), respectively:

(14)

In (13), the prescribed velocity is imposed on all Dirichlet surfaces ($\partial\Omega^D$) of the domain, homogeneous Dirichlet conditions are imposed for pressure on outflow surfaces ($\partial\Omega^O$), and homogeneous Neumann conditions are imposed for velocity on $\partial\Omega^O$. It is straightforward to show that the Neumann conditions for pressure on $\partial\Omega^D$ can be represented as a function of the Dirichlet condition for velocity tomboulides1997. In (13), we have omitted the solution for temperature (12) for brevity, since it is similar to the Helmholtz solve for velocity. Note that since we always impose Dirichlet conditions for velocity and pressure on $\partial\Omega^D$ and $\partial\Omega^O$, respectively, we will omit them in the description of timestepping for overlapping grids.
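The structure of the two steps above can be summarized in schematic code. The solver routines and the data layout are placeholders of our own (the actual discrete operators live behind solve_pressure and solve_viscous); only the BDF3/EXT3 coefficients and the ordering of the substeps follow the splitting described in (6)-(13).

```python
import numpy as np

# Standard BDF3 / EXT3 coefficients for a constant timestep.
BDF3 = np.array([11.0 / 6.0, -3.0, 3.0 / 2.0, -1.0 / 3.0])  # u^n, u^{n-1}, u^{n-2}, u^{n-3}
EXT3 = np.array([3.0, -3.0, 1.0])                            # levels n-1, n-2, n-3

def tentative_velocity(u_hist, rhs_hist, dt):
    """Tentative field combining BDF history and extrapolated explicit terms, cf. (6)-(7).

    u_hist   : [u^{n-1}, u^{n-2}, u^{n-3}] velocity arrays.
    rhs_hist : explicit right-hand sides (forcing minus convection) at the same levels.
    The normalization by the leading BDF coefficient is omitted for clarity.
    """
    ext = sum(a * r for a, r in zip(EXT3, rhs_hist))
    bdf = sum(b * u for b, u in zip(BDF3[1:], u_hist))
    return dt * ext - bdf

def advance_one_step(u_hist, rhs_hist, dt, solve_pressure, solve_viscous):
    """One split timestep: tentative velocity -> pressure Poisson -> viscous Helmholtz."""
    u_hat = tentative_velocity(u_hist, rhs_hist, dt)
    p_new = solve_pressure(u_hat, dt)        # elliptic solve for p^n, cf. (8)
    u_new = solve_viscous(u_hat, p_new, dt)  # Helmholtz solve for u^n, cf. (11)
    return u_new, p_new

# Toy usage with trivial stand-in solvers.
u_hist = [np.ones(4)] * 3
rhs_hist = [np.zeros(4)] * 3
u, p = advance_one_step(u_hist, rhs_hist, 1e-2,
                        solve_pressure=lambda u_hat, dt: np.zeros_like(u_hat),
                        solve_viscous=lambda u_hat, p, dt: u_hat)
```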

2.3 Solution of the INSE on Overlapping Grids

The overlapping Schwarz method for solving a PDE in overlapping domains was introduced by Schwarz in 1870 schwarz1870. The decomposition for Schwarz’s initial model problem is illustrated in Fig. 2, where the domain $\Omega$ is partitioned into two subdomains, a rectangle ($\Omega_1$) and a circle ($\Omega_2$), with nonzero overlap such that $\Omega = \Omega_1 \cup \Omega_2$ and $\Omega_1 \cap \Omega_2 \neq \emptyset$. We use $\partial\Omega^I$ to denote the “interdomain boundary”, namely the segment of the subdomain boundary that is interior to another subdomain. The interdomain boundaries $\partial\Omega_1^I$ and $\partial\Omega_2^I$ are highlighted in Fig. 2(b).

Figure 2: (left to right) (a) Composite domain $\Omega$, (b) modeled by overlapping rectangular ($\Omega_1$) and circular ($\Omega_2$) subdomains. $\partial\Omega^I$ denotes the segment of the subdomain boundary that is interior to another subdomain.

There are two key aspects to solving a PDE (here, the INSE) in overlapping grids. First, since overlapping grids introduce interdomain boundaries, a robust mechanism is required to interpolate boundary data for the grid points discretizing $\partial\Omega^I$ from the subdomain that they overlap. Second, Schwarz iterations are required to ensure that the solution is consistent across the different overlapping grids.

There are two popular approaches for Schwarz iterations. In the alternating Schwarz method, given $S$ overlapping subdomains, the PDE is solved in the first subdomain and that solution is used to update the interdomain boundary data in all other subdomains. This process is repeated sequentially for all $S$ subdomains. A drawback of the alternating Schwarz method is that it does not scale with the number of subdomains, since it requires at least $S$ sequential solves to obtain the solution of a PDE. In contrast, the simultaneous Schwarz method solves the PDE simultaneously in all subdomains, followed by interdomain boundary data exchange. This iterative process is repeated until the solution converges to the desired accuracy in the overlap region. Thus, assuming there is a robust mechanism to effect interdomain boundary data exchange, the scalability of the simultaneous Schwarz iterations is not restricted by the number of subdomains. The Schwarz-SEM framework that we describe next is based on the simultaneous Schwarz method, and the reader is referred to smith2004domain for additional details on different overlapping Schwarz (OS) techniques.
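The simultaneous Schwarz idea is easy to see on a toy problem. The sketch below solves $-u'' = 1$ on $(0,1)$ with homogeneous Dirichlet ends using two overlapping 1D subdomains: at every iteration, both subdomains update their interdomain boundary values from the other's previous iterate and are then solved independently (and could therefore be solved in parallel). The second-order finite-difference discretization is used purely for brevity and is not the SEM of this paper.

```python
import numpy as np

def poisson_solve_1d(x, f, ua, ub):
    """Solve -u'' = f on the grid x with Dirichlet values ua (left) and ub (right)."""
    n, h = len(x), x[1] - x[0]
    A = (np.diag(2.0 * np.ones(n - 2)) - np.diag(np.ones(n - 3), 1)
         - np.diag(np.ones(n - 3), -1)) / h**2
    b = f[1:-1].copy()
    b[0] += ua / h**2
    b[-1] += ub / h**2
    return np.concatenate([[ua], np.linalg.solve(A, b), [ub]])

# Two overlapping subdomains of (0, 1); interdomain boundaries at x = 0.6 and x = 0.4.
x1 = np.linspace(0.0, 0.6, 31)
x2 = np.linspace(0.4, 1.0, 25)
u1, u2 = np.zeros_like(x1), np.zeros_like(x2)

for _ in range(30):                       # simultaneous Schwarz iterations
    g1 = np.interp(0.6, x2, u2)           # boundary data for subdomain 1 from subdomain 2
    g2 = np.interp(0.4, x1, u1)           # boundary data for subdomain 2 from subdomain 1
    u1 = poisson_solve_1d(x1, np.ones_like(x1), 0.0, g1)   # both solves are independent
    u2 = poisson_solve_1d(x2, np.ones_like(x2), g2, 0.0)

print(abs(np.interp(0.5, x1, u1) - 0.125))  # exact solution is u(x) = x(1-x)/2
```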

For notational purposes, we introduce $\mathbf{u}^{n,q}_s$ as the solution at the $q$th Schwarz iteration, $q = 1, \dots, Q$, in subdomain $\Omega_s$ at time level $t^n$. Thus, assuming that the solution is known up to time $t^{n-1}$ and has been converged using $Q$ Schwarz iterations at the previous timestep, $\mathbf{u}^{n-1,Q}_s$ represents the solution at time $t^{n-1}$. With this notation, and assuming a constant timestep size (which is equal for all overlapping grids), we define the Schwarz update procedure as follows:

  1. Compute the tentative velocity field using (6) with the solution from previous timesteps in each subdomain $\Omega_s$:

    (15)

    where the tentative field has contributions from the BDF and EXT terms. We note that we do not use the Schwarz-iteration superscript $q$ on the tentative field because it depends only on the solution at previous timesteps and does not change at each Schwarz iteration.

  2. Use simultaneous Schwarz iterations to solve the linear Stokes subproblems (8) and (11) to yield the velocity-pressure pair in each subdomain.

Since the solution is known up to $t^{n-1}$, the initial iterate ($q = 1$, the predictor step) uses interdomain boundary data based on $\hat{k}$th-order extrapolation in time. The subsequent Schwarz iterations ($q > 1$, the corrector steps) directly interpolate the interdomain boundary data from the most recent iteration:

(16)
(17)

where $\hat{k}$ is the order of extrapolation for the interdomain boundary data at the predictor iteration, $\hat{\alpha}_j$ are the corresponding extrapolation weights that are computed using the routines described in fornberg1998practical, $\mathcal{I}$ is the interpolation operator that we describe in Section 2.3.1, and

(18)

In (16) and (17), we solve the full unsteady Stokes problem for velocity and pressure in each subdomain at each Schwarz iteration. Here, we have omitted the Dirichlet boundary conditions for velocity and pressure on $\partial\Omega^D$ and $\partial\Omega^O$, respectively, for brevity.
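Written out schematically in the notation reconstructed above (treating $\mathcal{I}$ as spatial interpolation from the donor subdomain and suppressing the analogous conditions for pressure; this is our sketch, not necessarily the paper's exact formulas), the interdomain boundary conditions in (16)-(17) take the form:

```latex
% Predictor (q = 1): extrapolate the interdomain boundary data in time from converged earlier steps.
\mathbf{u}^{n,1}_s\big|_{\partial\Omega_s^{I}}
   \;=\; \sum_{j=1}^{\hat{k}} \hat{\alpha}_j\, \mathcal{I}\!\left(\mathbf{u}^{\,n-j,Q}\right),
\qquad
% Correctors (q > 1): interpolate directly from the most recent Schwarz iterate at t^n.
\mathbf{u}^{n,q}_s\big|_{\partial\Omega_s^{I}}
   \;=\; \mathcal{I}\!\left(\mathbf{u}^{\,n,q-1}\right).
```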

The temporal accuracy of this singlerate timestepping scheme (16)-(17) is $k$th order. We typically use a fixed extrapolation order and number of corrector iterations, unless otherwise stated, and Peet and Fischer peet2012 have shown that a modest number of corrector iterations is sufficient from a stability point of view for the extrapolation orders of interest. In mittal2019nonconforming, we further demonstrated that this is also sufficient from an accuracy point of view for basic statistics (e.g., mean or rms) of turbulent flows. This predictor-corrector approach has been used in the Schwarz-SEM framework merrill2017effect; merrill2019moving; mittal2019direct to demonstrate that it maintains the spatial and temporal convergence of the underlying monodomain SEM framework, and is effective for solving highly turbulent flow phenomena in complex domains using an arbitrary number of overlapping grids.

2.3.1 Interpolation

Since overlapping grids rely on interpolation for interdomain boundary data, the interpolation operator ($\mathcal{I}$) is of central significance for overlapping Schwarz-based methods. In our framework, $\mathcal{I}$ is effected (in parallel) via findpts, a scalable high-order interpolation utility that is part of gslib gslibrepo, an open-source communication library that readily links with Fortran, C, and C++ codes.

findpts provides two key functionalities. First, for a given set of interdomain boundary points that are tagged with the associated subdomain number, findpts determines the computational coordinates of each point. These computational coordinates specify, for each point, the subdomain that it overlaps, the element in which the point was found, and the corresponding reference-space coordinates inside that element. Since a mesh can be partitioned onto many MPI ranks, findpts also specifies the MPI rank on which the donor element is located. For cases with $S = 2$ subdomains, the donor element search is straightforward noorani2016particle because findpts is only concerned with the elements that are not located in the same subdomain as the sought point. In cases with $S > 2$, an interdomain boundary point can overlap multiple subdomains; the donor element is then chosen from the subdomain that minimizes the error due to the simultaneous Schwarz iterations mittal2019nonconforming. If the nodal positions of all the overlapping grids are fixed in time, the computational coordinate search needs to be done only at the beginning of the calculation. Otherwise, the computational coordinate search is done at the beginning of each timestep.

The second key functionality of findpts is that, for a given set of computational coordinates, it can interpolate any scalar function defined on the spectral element mesh. All the parallel communication in findpts is handled by gslib’s generalized and scalable all-to-all utility, gs_crystal, which is based on the crystal router algorithm of tufo2001. Using gslib, findpts has demonstrated excellent parallel scaling for finding the computational coordinates of a given set of points and interpolating the solution in a monodomain mesh duttaefficient and in overlapping meshes mittal2019nonconforming. In mittal2019nonconforming, we demonstrated that the computational coordinate search and interpolation account for about 10% and 1%, respectively, of the total time to solution per timestep.
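The following toy sketch mimics, in one dimension, the two functionalities described above: locating a query point's donor element and reference coordinate (its "computational coordinates"), and interpolating a nodal field there through the element's Lagrange basis. It is a conceptual illustration only; it does not reflect gslib's actual API, data structures, or parallel search.

```python
import numpy as np

def locate(x_query, elem_bounds):
    """Return (element index, reference coordinate in [-1, 1]) for a 1D query point."""
    for e, (xa, xb) in enumerate(elem_bounds):
        if xa <= x_query <= xb:
            return e, 2.0 * (x_query - xa) / (xb - xa) - 1.0
    raise ValueError("point not found in any element")

def lagrange_eval(r, nodes, values):
    """Evaluate the Lagrange interpolant through (nodes, values) at reference coordinate r."""
    p = 0.0
    for i, xi in enumerate(nodes):
        li = np.prod([(r - xj) / (xi - xj) for j, xj in enumerate(nodes) if j != i])
        p += values[i] * li
    return p

# Two 1D "elements" covering [0, 2] with 5 nodes each (Chebyshev points stand in for GLL).
ref_nodes = np.cos(np.pi * np.arange(5) / 4)[::-1]                     # nodes in [-1, 1]
elem_bounds = [(0.0, 1.0), (1.0, 2.0)]
phys_nodes = [xa + 0.5 * (ref_nodes + 1.0) * (xb - xa) for xa, xb in elem_bounds]
field = [np.sin(xn) for xn in phys_nodes]                              # nodal values of sin(x)

e, r = locate(1.3, elem_bounds)              # "computational coordinates" of the query point
print(lagrange_eval(r, ref_nodes, field[e])) # approximately sin(1.3)
```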

3 Methodology for Multirate Timestepping

In this section, we introduce the parallel multirate timestepping scheme for solving the INSE in overlapping subdomains. We consider only integer timestep ratios,

$\eta = \Delta t_c / \Delta t_f \in \mathbb{Z}^{+}$,   (19)

where $\Delta t_c$ corresponds to the subdomain ($\Omega_c$) with slower time-scales and $\Delta t_f$ corresponds to the subdomain ($\Omega_f$) with faster time-scales. Figure 3 shows a schematic of the discrete time-levels for the STS scheme and the MTS scheme with $\eta = 2$. Here, the black circles indicate the timestep levels shared by both $\Omega_c$ and $\Omega_f$, and the blue circles indicate the sub-timestep levels for $\Omega_f$.

Figure 3: Schematic showing discrete time-levels for singlerate and multirate timestepping ($\eta = 2$).

3.1 Multirate Timestepping for $\eta = 2$

For simplicity, we first introduce the multirate timestepping scheme for $\eta = 2$ and then extend it to an arbitrary $\eta$. In a slowest- or fastest-first method, assuming the solution is known up to the previous timestep, the PDE of interest is temporally integrated in one of the domains (e.g., $\Omega_c$) to obtain the solution at the new time level, which is then used to obtain interdomain boundary data for advancing the solution in the other domain (e.g., $\Omega_f$). For a parallel multirate scheme, however, we wish to simultaneously advance the solution in $\Omega_c$ and $\Omega_f$. As a result, the interdomain boundary data is exchanged prior to starting the solution process such that the sub-timesteps in $\Omega_f$ can be completed independently of $\Omega_c$. The synchronization time-levels at which the interdomain boundary data is exchanged are marked in Fig. 3.

Similar to the singlerate timestepping scheme (e.g., see (16)), high-order temporal accuracy is achieved in the multirate setting by extrapolating the interdomain boundary data obtained from the solution at previous (sub-)timesteps. For $\eta = 2$, the interdomain boundary data dependency for the predictor step is depicted in Fig. 4 and discussed in the context of the unsteady Stokes solve later in this section (e.g., (20)). For the solutions at the two sub-timesteps in $\Omega_f$, the boundary data is interpolated from the known solutions at previous timesteps in $\Omega_c$. Simultaneously, the interdomain boundary data for the solution in $\Omega_c$ is interpolated from the known solutions at previous sub-timesteps in $\Omega_f$. This interdomain boundary data exchange occurs at the synchronization time-level, prior to starting the solution process for the new timestep.

Figure 4: Schematic showing the dependence of the interdomain boundary data for the predictor step.

Figure 5: Schematic showing the dependence of the interdomain boundary data for corrector steps.

Once the predictor solutions in $\Omega_f$ and $\Omega_c$ have been determined, correction iterations are needed (similar to the singlerate timestepping scheme) to stabilize the solution if high-order extrapolation is used for the interdomain boundary data during the predictor step. The interdomain boundary data dependency for the corrector steps is depicted in Fig. 5. In $\Omega_f$, the interdomain boundary data comes from the most recent iteration in $\Omega_c$ and the converged solution at previous timesteps. For the solution in $\Omega_c$ at the new time level, the interdomain boundary data depends only on the solution from the most recent iteration at the two sub-timesteps in $\Omega_f$.

Using this approach for obtaining interdomain boundary data, we now summarize the multirate timestepping scheme for $\eta = 2$. Recall our notation from singlerate timestepping: $\mathbf{u}^{n,q}_s$ denotes the solution in $\Omega_s$ at the $q$th Schwarz iteration at time $t^n$.

  1. For the predictor step ($q = 1$), assuming that the solution is known up to the previous timestep, compute the tentative velocity field using (6), and solve the linear Stokes problem in each subdomain:

    • Unsteady Stokes solve for the first sub-timestep in $\Omega_f$:

      (20a)
      (20b)
    • Unsteady Stokes solve for the second sub-timestep in $\Omega_f$:

      (21a)
      (21b)
    • Unsteady Stokes solve for the only timestep in $\Omega_c$:

      (22a)
      (22b)

    In (20), we first compute the tentative velocity field, similar to (15) for singlerate timestepping, and then solve for the first sub-timestep solution using the interdomain boundary data extrapolated from the solution at previous timesteps in $\Omega_c$. The definition of the explicit contribution is unchanged from (18) for the singlerate timestepping scheme, and we have introduced additional notation for the coefficients that are used to extrapolate the interdomain boundary data at each sub-timestep for $\Omega_f$.

    Once the first sub-timestep solution is determined, the second sub-timestep solution is computed using (21), which completes the predictor step for advancing the solution of the INSE in $\Omega_f$. In parallel with (20) and (21), (22) is used for the solution at the new time level in $\Omega_c$.

    From an implementation perspective, the extrapolation coefficients used in (20)-(22) are computed based on the time-levels and the order of extrapolation, using the routines described in fornberg1998practical; a small numerical sketch of such a weight computation is given at the end of this subsection.

  2. Once the predictor step is complete, corrector iterations can be done to improve the accuracy of the solution and stabilize the method:

    • Unsteady Stokes solve for the first sub-timestep in $\Omega_f$:

      (23a)
      (23b)
    • Unsteady Stokes solve for the second sub-timestep in $\Omega_f$:

      (24a)
      (24b)
    • Unsteady Stokes solve for the only timestep in $\Omega_c$:

      (25)

    In (23), we compute the first sub-timestep solution using the interdomain boundary data obtained from $\Omega_c$. To maintain high-order temporal accuracy, this boundary data is interpolated in time from the most recent solution at the current timestep and the converged solution at previous timesteps. The corresponding coefficients for this temporal interpolation are represented by the interpolation weights in (23). In our framework, these weights are computed assuming linear interpolation when $k = 1$ or $2$, and quadratic interpolation when $k = 3$. This approach ensures that the desired temporal accuracy is maintained. After (23) is used to compute the first sub-timestep solution, (24) is used to determine the second sub-timestep solution in $\Omega_f$. In a similar fashion, the solution in $\Omega_c$ is (concurrently) computed using (25).

    We note that in the singlerate timestepping scheme, the tentative velocity field was computed only once and reused for the corrector iterations (15). In contrast, we recompute the tentative velocity field in (23)-(24) at each corrector iteration for $\Omega_f$, because the solution process spans multiple sub-timesteps and saving the tentative velocity field for each of the sub-timesteps is not a scalable approach (e.g., $\eta = 100$ would require storing the tentative velocity field for 100 sub-timesteps). Using (23)-(25), simultaneous Schwarz iterations can be used to determine the solution in $\Omega_f$ and $\Omega_c$ at the new time level.

Equations (20)-(25) describe the predictor-corrector strategy for multirate timestepping with $\eta = 2$. We now extend this methodology to an arbitrary timestep ratio.
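As an illustration of how the time-level-dependent extrapolation and interpolation weights can be obtained (a minimal Lagrange-weight computation of our own, standing in for the routines of fornberg1998practical), the coefficients for reaching a target time from a set of known time levels are:

```python
import numpy as np

def lagrange_weights(t_known, t_target):
    """Weights w_j such that sum_j w_j * f(t_known[j]) approximates f(t_target).

    The same formula serves extrapolation (predictor) and interpolation (corrector);
    it is exact for polynomials of degree len(t_known) - 1.
    """
    t_known = np.asarray(t_known, dtype=float)
    w = np.empty_like(t_known)
    for j, tj in enumerate(t_known):
        others = np.delete(t_known, j)
        w[j] = np.prod((t_target - others) / (tj - others))
    return w

# Extrapolate slow-grid data to its next time level t = 1 from t = 0, -1, -2 (dt_c = 1):
print(lagrange_weights([0.0, -1.0, -2.0], 1.0))   # -> [3., -3., 1.], the EXT3 weights
# Extrapolate to the fast grid's first sub-timestep t = 0.5 for eta = 2:
print(lagrange_weights([0.0, -1.0, -2.0], 0.5))
```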

3.2 Multirate Timestepping for Arbitrary $\eta$

From the preceding discussion, we can anticipate that the generalization of this multirate scheme will require $\eta$ sub-timesteps in $\Omega_f$ and only one timestep in $\Omega_c$. Figure 6 shows a schematic with time-levels for an arbitrary timestep ratio.

Figure 6: Schematic showing discrete time-levels for the multirate timestepping with an arbitrary timestep ratio.

Similar to (20)-(23), the timestepping strategy for an arbitrary (integer) $\eta$ is:

  1. Compute the tentative velocity field and solve the linear Stokes problem in each subdomain for the predictor step.

    • Unsteady Stokes solve for the $\eta$ sub-timesteps of $\Omega_f$:

      (26a)
      (26b)
    • Unsteady Stokes solve for the only timestep of $\Omega_c$:

      (27a)
      (27b)

    In (26), we compute the sub-timestep solutions for $\Omega_f$ sequentially, from the first sub-timestep to the $\eta$th, and in (27), we compute the solution in $\Omega_c$ at the new time level.

  2. Once the predictor step is complete, corrector iterations are done as follows:

    • Unsteady Stokes solve for the $\eta$ sub-timesteps of $\Omega_f$:

      (28a)
      (28b)
    • Unsteady Stokes solve for the only timestep of $\Omega_c$:

      (29)

Using the high-order multirate timestepping strategy in (26)-(29), the solution to the incompressible Navier-Stokes equations can be advanced in two overlapping grids for an arbitrary timestep ratio.
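The control flow of (26)-(29) can be summarized in schematic pseudocode. The object interface below (extrapolate_bdata, interpolate_bdata, stokes_solve) is an illustrative placeholder of our own, not the interface of any particular solver; the point is the ordering of the boundary-data exchanges relative to the predictor and corrector solves.

```python
def mts_advance(slow, fast, eta, n_corrector, dt_slow):
    """Advance one slow-grid timestep; the fast grid takes eta sub-timesteps of size dt_slow/eta.

    `slow` and `fast` are assumed to provide:
      extrapolate_bdata(times) - interdomain boundary data extrapolated from converged history
      interpolate_bdata(times) - interdomain boundary data interpolated from the latest iterate
      stokes_solve(bdata, dt)  - one unsteady Stokes solve with the given boundary data
    """
    dt_fast = dt_slow / eta
    sub_times = [i * dt_fast for i in range(1, eta + 1)]

    # Predictor: exchange extrapolated boundary data once, then advance both grids concurrently.
    bdata_for_fast = slow.extrapolate_bdata(sub_times)   # slow -> fast, all eta sub-levels
    bdata_for_slow = fast.extrapolate_bdata([dt_slow])   # fast -> slow, synchronization level
    for bd in bdata_for_fast:
        fast.stokes_solve(bd, dt_fast)                   # cf. (26), sequential sub-timesteps
    slow.stokes_solve(bdata_for_slow[0], dt_slow)        # cf. (27), concurrent with the fast grid

    # Correctors: boundary data now comes from the most recent Schwarz iterate at the new level.
    for _ in range(n_corrector):
        bdata_for_fast = slow.interpolate_bdata(sub_times)
        bdata_for_slow = fast.interpolate_bdata([dt_slow])
        for bd in bdata_for_fast:
            fast.stokes_solve(bd, dt_fast)               # cf. (28)
        slow.stokes_solve(bdata_for_slow[0], dt_slow)    # cf. (29)
```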

3.3 Multirate Timestepping in Overlapping Domains

With the multirate timestepping scheme that we have described, it is straightforward to scale this method to an arbitrary number of domains. Figure 7 shows an example of a schematic with time-levels for MTS in several subdomains.

Figure 7: Schematic showing discrete time-levels for the multirate timestepping in an arbitrary number of subdomains.

For notational purposes, we will use $\Omega_1$ to represent the subdomain with the slowest time-scales. With each subdomain $\Omega_s$, we associate the timestep ratio with respect to the timestep size of $\Omega_1$:

$\eta_s = \Delta t_1 / \Delta t_s$,   (30)

such that $\eta_1 = 1$ and $\eta_s \geq 1$ for $s = 2, \dots, S$, where $S$ is the number of subdomains. For the example in Fig. 7, each subdomain has its own (integer) timestep ratio with respect to $\Omega_1$.

Using (30), and assuming that the interdomain boundary data for points on the interdomain boundary of $\Omega_s$ is interpolated from the subdomains that they overlap, the system in (26)-(29) can be simplified for all subdomains ($s = 1, \dots, S$):

  1. Unsteady Stokes solve for the $\eta_s$ sub-timesteps during the predictor stage:

    (31a)
    (31b)
  2. Once the predictor step is complete, corrector iterations are done for the sub-timesteps:

    (32a)
    (32b)

Equations (31)-(32) describe the MTS method for solving the INSE in an arbitrary number of overlapping grids. From an implementation perspective, since different interdomain boundary points in a grid can overlap different grids, the extrapolation and interpolation coefficients are computed for each point based on the timestep size of its donor subdomain. Note that for cases where all the meshes are fixed and the timestep size is constant in each subdomain, these coefficients need to be computed only at the beginning of the calculation.

Figure 8 shows an example of the interdomain boundary data dependency for the schematic shown in Fig. 7. Here, we assume that the gridpoints on the interdomain boundary of $\Omega_1$ overlap one of the faster subdomains. Assuming that the solution is known up to the current synchronization level, the boundary data for the predictor step is extrapolated from the known solutions at previous timesteps in the donor subdomain. Similarly, for the corrector steps, the boundary data is interpolated from the most recent Schwarz iteration and the known solutions at previous timesteps.

Figure 8: Schematic showing the interdomain boundary data dependency for $\Omega_1$ from the subdomains that it overlaps.

We note that unlike the singlerate timestepping scheme, where only one interpolation is required at each predictor and corrector iteration, the multirate timestepping scheme requires interpolations for all sub-timestep levels at the beginning of each predictor step and one interpolation at the beginning of each corrector iteration. Because the STS scheme must interpolate at every (fine) sub-timestep while the MTS scheme exchanges data only at the synchronization levels, the MTS scheme typically requires fewer interpolations in comparison to the STS scheme. In Section 4.3, we will use the example of a thermally buoyant plume to compare the total time to solution between the STS and MTS schemes.

3.4 Stability Considerations

An underlying assumption of the MTS scheme is that each subdomain has a timestep size that satisfies its CFL stability criterion (2). This requirement ensures intradomain stability, i.e., the unsteady Stokes solve for time-advancing the solution of the INSE is stable in each subdomain. Interdomain stability, however, is similar to the singlerate timestepping scheme and depends on the order of extrapolation ($\hat{k}$) used for the interdomain boundary data and the number of Schwarz iterations ($Q$) used at each timestep.

Peet and Fischer peet2012 have analyzed the stability of the singlerate timestepping scheme using an FD-based framework to show that i