1 Introduction
Computational simulations, driven by the advent of high-performance computing resources, have become ubiquitous for modeling and understanding complex flow phenomena. The accuracy of these simulations primarily depends on two key aspects of the method used to solve the partial differential equations (PDEs) of interest: (i) the spatial discretization and (ii) the temporal discretization. For spatial discretization, various methods such as the finite difference (FD), finite element (FEM), finite volume (FVM), and spectral element (SEM) methods have become popular for representing the solution of the PDE on a discrete set of nodes/volumes/elements covering the domain [gresho1978time; rosenfeld1988solution; dfm02]. The spatial accuracy of the discrete solution obtained using these methods depends on the size of the local grid spacing used to model the domain and the order of accuracy of the spatial discretization. Similarly, the temporal accuracy of the solution depends on the timestep size ($\Delta t$) and the order ($k$) of the timestepper used for temporal integration. Some popular methods for time-integration include the Runge-Kutta (RK), Adams-Bashforth (AB), and backward differentiation formula (BDF) methods. The focus of this work is on the temporal integration of the incompressible Navier-Stokes equations for modeling fluid dynamics and heat transfer in complex domains.

For the unsteady Navier-Stokes equations (NSE), the nonlinear convective term is typically treated explicitly [tomboulides1997], and the maximum allowable timestep size for stable time-integration is determined by the Courant-Friedrichs-Lewy (CFL) number [lewy1928partiellen]. For a problem in $D$ space dimensions, we define a local CFL,
(1)   $\mathrm{CFL}_i := \Delta t \sum_{d=1}^{D} \frac{|u_{d,i}|}{\Delta x_{d,i}},$
where, at gridpoint $\mathbf{x}_i$, $u_{d,i}$ is the $d$th component of velocity, $\Delta x_{d,i}$ is an approximate grid spacing in the $d$ direction, and $\Delta t$ is the timestep size. We also define a global CFL,
(2)   $\mathrm{CFL} := \max_i\, \mathrm{CFL}_i.$
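The following minimal Python sketch (an illustration, not the production implementation) evaluates (1) and (2) for velocity samples and grid spacings stored as per-point arrays:

```python
# A minimal sketch of the local CFL (1) and global CFL (2).
import numpy as np

def global_cfl(u, dx, dt):
    """u, dx: arrays of shape (npts, D); dt: timestep size."""
    cfl_local = dt * np.sum(np.abs(u) / dx, axis=1)  # eq. (1), per point
    return cfl_local.max()                           # eq. (2)

# Example: the point with the largest speed-to-grid-size ratio sets the limit.
u  = np.array([[1.0, 0.5], [10.0, 2.0]])   # velocities at two gridpoints
dx = np.array([[0.1, 0.1], [0.01, 0.01]])  # local grid spacings
print(global_cfl(u, dx, dt=1e-3))          # dominated by the second point
```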
To within a scaling factor, the CFL is a robust and easily evaluated surrogate for $|\lambda_{\max}|\,\Delta t$, where $\lambda_{\max}$ is the spectral radius of the (assumed skew-symmetric) convection operator. The CFL limit associated with explicit time-advancement of the convection operator is governed by a scale factor $C^*$, such that stable integration requires $\mathrm{CFL} \leq C^*$, with $C^*$ set by the spatial discretization and the chosen timestepper. For example, for second-order centered differences on a one-dimensional periodic domain with third-order Adams-Bashforth (AB3) timestepping, the eigenvalues of the convection operator are purely imaginary and $C^* \approx 0.7236$, the imaginary-axis stability limit of AB3. The corresponding limits for Fourier methods and for the SEM are smaller because of the larger spectral radii of their convection operators [dfm02].

Most numerical approximations use the same timestep size throughout the domain. These methods are classified as singlerate timestepping (STS) methods. A well-known challenge in STS-based methods, due to the nature of (2), is that even a single point in the domain having a high speed-to-grid-size ratio can have the undesirable effect of limiting the allowable timestep size throughout the domain. This situation occurs, for example, near airfoil trailing edges, where flow speeds are high and computational meshes are often dense. Another common case is in the simulation of plumes [fabregat2016dynamics], as illustrated in Fig. 1. Figure 1(a) shows a slice-view of the spectral element mesh used to model the domain and Fig. 1(c) shows the instantaneous velocity magnitude contours. Here, resolution requirements for the turbulence in and near the inlet pipe result in having the finest (minimum $\Delta x$) meshes in the region where velocities are the largest. Away from the inlet pipe, the turbulence intensity is lower and the meshes are correspondingly coarser. Consequently, we observe that the local CFL is almost two to three orders of magnitude higher for the elements in the plume region as compared to elements in the far-field. The CFL variation throughout the domain is shown on a log-scale in Fig. 1(d). Using the same timestep size throughout the domain to integrate the NSE thus leads to unnecessary computational effort for elements in the far-field.

The scale disparity in CFL is even more evident when nonconforming overlapping grids are used for this domain (Fig. 1(b)). It is well known that overlapping grids are highly effective in reducing the computational cost of calculations for domains featuring flow structures with widely varying spatial scales in different regions of the domain. Since overlapping grids relax the constraint of mesh conformity, they reduce the total element count (by about 10% in this case) with grids that are constructed according to the physics of the region that they cover. In this example, since the outer grid is much coarser than the inner grid, the outer grid should be able to use a timestep size that is orders of magnitude larger than that of the inner grid. Most overlapping grid-based methods, however, use an STS-based approach [mittal2019nonconforming; henshaw2012cgins; angel2018], which results in superfluous computational work for the outer grid.
Multirate timestepping methods were first introduced in the seminal work of Rice in 1960 [rice1960split]. Rice developed a Runge-Kutta-based timestepping strategy for solving a system of two ordinary differential equations (ODEs) with different integration step sizes. Shortly after, multirate timestepping methods were popularized by Gear [gear1974multirate; gear1984multirate] and Andrus [andrus1979numerical; andrus1993stability]. In this pioneering work, Gear analyzed the stability and accuracy of Euler-method-based strategies for MTS. Similarly, Andrus derived conditions for absolute stability of a high-order Runge-Kutta-based approach for MTS in a system of first-order ODEs. The methods developed by Gear and Andrus were slowest-first or fastest-first based, where the ODE with the slower component is solved first followed by the ODE with the faster component, or vice-versa. Other similar works include [gunther2001multirate], where Gunther describes a multirate partitioned Runge-Kutta-based (slowest-first) scheme for solving a system of ODEs with stiff components, and [verhoeven2007stability], where Verhoeven analyzes the stability of BDF-based (slowest-first) MTS methods for understanding the time behavior of electrical circuits.

While generally useful, a drawback of slowest-first or fastest-first schemes is that they limit the parallelism of the calculation since the ODEs/PDEs are integrated sequentially. In [engstler1997multirate], Engstler proposed a method based on Richardson extrapolation to simultaneously solve for the slow- and fast-moving components of a system of ODEs. Similarly, for PDEs, Dawson [dawson2001high], Constantinescu [constantinescu2007multirate], and Seny [seny2013multirate] have developed parallel-in-time MTS methods for solving hyperbolic or parabolic equations with different timestep sizes for different element groups in a mesh, based on the local CFL number.
Other notable developments in the area of MTS methods include [savcenco2007multirate; rybak2015multirate; emmett2014high; trahan2012local; gupta2016multi; mikida2018multi]. In [savcenco2007multirate], Savcenco introduced a novel approach for MTS where all the equations in a system of ODEs are first integrated using a large global timestep size everywhere in the domain, followed by error indicators to determine the equations that require a smaller timestep size. This approach thus avoids unnecessary computation by using a smaller timestep size only for the ODEs that require it. Rybak [rybak2015multirate] has proposed an MTS method for solving fluid flow in a coupled free-flow domain and porous media. In [rybak2015multirate], the PDE for the free-flow domain (INSE) is first temporally integrated using a CFL-dependent smaller timestep size, followed by a larger timestep size to solve the PDE for the porous media. While Rybak's and Savcenco's approaches are effective for MTS, they are sequential and lack parallelism, similar to the slowest-first and fastest-first based methods. In [emmett2014high], Emmett has used different timestep sizes for solving the fluid motion and the relatively stiff chemical mechanisms to model compressible reacting flow with complex chemistry. This approach can also be extended to conjugate heat transfer problems, where the time scales associated with the energy transfer in the fluid and solid media are very different. Trahan [trahan2012local] has developed a fastest-first approach for solving the shallow water equations in monodomain conforming grids, Gupta et al. [gupta2016multi] use multirate timestepping for modeling subsurface methane hydrate reservoirs, and Mikida et al. [mikida2018multi] solve the compressible Navier-Stokes equations using different timestep sizes in overlapping grids with a fastest-first Adams-Bashforth-based scheme.
A survey of the literature shows that multirate timestepping methods have mainly been developed for parabolic and hyperbolic problems [gear1984multirate; verhoeven2007stability; engstler1997multirate; constantinescu2007multirate; seny2013multirate; rybak2015multirate; emmett2014high; trahan2012local; mikida2018multi; klockner2010high]. MTS methods are virtually nonexistent for the incompressible Navier-Stokes equations because the solution is very sensitive to the pressure, which satisfies an elliptic Poisson problem at every timestep [dfm02]. Since the characteristic propagation speed of pressure perturbations is infinite in incompressible flows, existing approaches for multirate timestepping do not extend to a single conforming mesh. Overlapping grids, however, can decouple the pressure Poisson solve across the different grids modeling a domain, which allows us to develop a multirate timestepping method. Note that while the MTS method of [rybak2015multirate] pertains to the INSE, it solves the INSE with a fixed timestep size in the entire domain, followed by a different timestep size for the shallow water equations.
In the current work, we develop a parallel multirate timestepping strategy in which the INSE are integrated simultaneously in all the overlapping grids. This method circumvents the difficulty of the global divergence-free constraint through a combination of stable high-order predictor-corrector time integrators and mass-flux corrections for time advancement of the unsteady Stokes problems. The nonlinear terms continue to be treated explicitly in time, as in the case of a single conforming domain, but are now advanced without the widely disparate values of CFL throughout the global domain. The method scales to an arbitrary number of overlapping grids and supports arbitrarily high (integer) timestep size ratios. Additionally, the approach presented in this paper is agnostic to the spatial discretization (FEM, FVM, SEM, etc.) and can be readily integrated into existing solvers.
The remainder of the paper is organized as follows. Section 2 summarizes the monodomain and overlapping grid-based frameworks for solving the incompressible Navier-Stokes equations using an STS-based approach [tomboulides1997; mittal2019nonconforming]. Section 3 builds upon the STS-based method to describe the MTS-based approach for solving the INSE in overlapping grids. In Section 4, we demonstrate that this novel MTS-based approach maintains the temporal accuracy of the underlying BDF/EXT-based timestepper and accurately models complex turbulent flow and heat transfer phenomena. There, we also demonstrate that multirate timestepping reduces the computational cost of a calculation in comparison to the STS-based approach. Finally, in Section 5 we discuss some directions for future work.
2 Preliminaries
This section provides a description of the singlerate timestepping-based framework for solving the incompressible Navier-Stokes equations in mono- and multi-domain settings, with the latter based on overlapping grids.
All the work presented here is based on the SEM [dfm02], but the approach readily extends to other spatial discretizations. Introduced by Patera in 1984 [patera84], the SEM is a high-order weighted residual method that combines the geometric flexibility of finite elements (the domain $\Omega$ is decomposed into $E$ smaller elements) with the rapid convergence of spectral methods. The basis functions in the SEM are $N$th-order tensor-product Lagrange polynomials on the Gauss-Lobatto-Legendre (GLL) quadrature points inside each element. Due to this tensor-product configuration, all operators in the SEM can be expressed in a factored matrix-free form, which leads to fast operator evaluation ($O(N^{d+1})$ work per element in $d$ space dimensions) and low operator storage ($O(N^d)$). The method requires only function continuity at element interfaces yet yields exponential convergence of the solution with $N$, resulting in a flexible method with low numerical dispersion. These features make the SEM well-suited for simulation of turbulent flow in complex domains.
2.1 Governing Equations
We consider the incompressible Navier-Stokes equations in nondimensional form,
(3)   $\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u} = -\nabla p + \frac{1}{Re}\nabla^2\mathbf{u} + \mathbf{f},$
(4)   $\nabla\cdot\mathbf{u} = 0,$
where $\mathbf{u}$ and $p$ represent the unknown velocity and pressure, which are functions of position ($\mathbf{x}$) and time ($t$), and $\mathbf{f}$ is the prescribed forcing. Here, $Re = UL/\nu$ is the Reynolds number based on the characteristic length scale $L$, velocity scale $U$, and kinematic viscosity of the fluid $\nu$. In addition to the INSE, we also consider the energy equation
(5)   $\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T = \frac{1}{Pe}\nabla^2 T + q,$
where $T$ represents the temperature solution and $q$ is an energy source term. $Pe = Re\,Pr$ is the Peclet number, which depends on the Reynolds number and the Prandtl number. The Prandtl number ($Pr = \nu/\alpha$) is the ratio of the momentum diffusivity ($\nu$) and the thermal diffusivity ($\alpha$). The solution of (3)–(5) also depends on the initial conditions (for time-dependent problems) and boundary conditions.
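As a simple illustration of these definitions (our example, not taken from the paper): for water at room temperature, $Pr \approx 7$, so a flow at $Re = 10^3$ has
\[
Pe \;=\; Re\,Pr \;\approx\; 10^3 \times 7 \;=\; 7\times 10^3,
\]
i.e., temperature is transported far more strongly by advection than by diffusion in this regime.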
2.2 Solution of the INSE in a Monodomain Grid
In our framework, we solve the unsteady INSE in velocity-pressure form using semi-implicit BDF/EXT timestepping in which the time derivative is approximated by a $k$th-order backward difference formula (BDF), the nonlinear terms (and any other forcing) are treated with a $k$th-order extrapolation (EXT) (from here on, we use $k$ to represent the order of accuracy of our temporal discretization, unless otherwise stated), and the viscous and pressure terms are treated implicitly. This approach leads to a linear unsteady Stokes problem to be solved at each timestep, which is split into independent viscous and pressure (Poisson) updates [tomboulides1997].
Assuming the solution is known at $t^{\,n}, t^{\,n-1}, \ldots$, and that a constant timestep size $\Delta t = t^{\,n+1} - t^{\,n}$ is used for all timesteps, we compute a tentative velocity field at time $t^{\,n+1}$ with contributions from the BDF and the explicit terms as
(6)   $\hat{\mathbf{u}} = \sum_{j=1}^{k}\left(-\frac{\beta_j}{\beta_0}\,\mathbf{u}^{\,n+1-j} + \frac{\Delta t\,\alpha_j}{\beta_0}\,\mathbf{N}^{\,n+1-j}\right),$
where we use $\mathbf{N}$ to represent the explicit contributions:
(7)   $\mathbf{N}(\mathbf{u}) = -\mathbf{u}\cdot\nabla\mathbf{u} + \mathbf{f},$
and the superscript $n+1-j$ indicates quantities evaluated at earlier timesteps, $t^{\,n+1-j} = t^{\,n+1} - j\,\Delta t$, and $\beta_j$ and $\alpha_j$ are the BDF and EXT coefficients, respectively. The tentative field $\hat{\mathbf{u}}$ constitutes the nonlinear update but does not account for the divergence-free constraint or viscous effects. The divergence-free constraint (4) is enforced through a pressure correction. A pressure Poisson equation is obtained by taking the divergence of the momentum equation, assuming the solution is divergence-free at time $t^{\,n+1}$ ($\nabla\cdot\mathbf{u}^{\,n+1} = 0$), and using the identity $\nabla^2\mathbf{u} = \nabla(\nabla\cdot\mathbf{u}) - \nabla\times\nabla\times\mathbf{u}$:
(8)   $\nabla^2 p^{\,n+1} = \nabla\cdot\left(\frac{\beta_0}{\Delta t}\,\hat{\mathbf{u}}\right) \quad \text{in } \Omega,$
(9)   $\nabla p^{\,n+1}\cdot\hat{\mathbf{n}} = \left(\frac{\beta_0}{\Delta t}\,\hat{\mathbf{u}} - \frac{\beta_0}{\Delta t}\,\mathbf{u}^{\,n+1} - \frac{1}{Re}\,\nabla\times\nabla\times\tilde{\mathbf{u}}\right)\cdot\hat{\mathbf{n}} \quad \text{on } \partial\Omega,$
where $\hat{\mathbf{n}}$ is the outward unit normal on $\partial\Omega$, and $\tilde{\mathbf{u}}$ is the velocity extrapolated to time $t^{\,n+1}$,

(10)   $\tilde{\mathbf{u}} = \sum_{j=1}^{k} \alpha_j\,\mathbf{u}^{\,n+1-j}.$
The advantage of using the curl-curl form of the viscous term to decouple the velocity and pressure solves is that the equation governing the error in divergence ($\nabla\cdot\mathbf{u}$) is an elliptic PDE instead of a parabolic PDE. As a result, this formulation is stable, with splitting-induced divergence errors that are only $O(\Delta t^k)$ [tomboulides1997; fischer17].
Substituting the pressure solution into (3), $\mathbf{u}^{\,n+1}$ is obtained by solving the Helmholtz equation
(11)   $\left(-\frac{1}{Re}\nabla^2 + \frac{\beta_0}{\Delta t}\right)\mathbf{u}^{\,n+1} = \frac{\beta_0}{\Delta t}\,\hat{\mathbf{u}} - \nabla p^{\,n+1}.$
Similar to (11), using implicit treatment of the diffusion term and explicit treatment of the advection term for the energy equation, the solution for temperature is obtained by solving the Helmholtz equation
(12)   $\left(-\frac{1}{Pe}\nabla^2 + \frac{\beta_0}{\Delta t}\right)T^{\,n+1} = \frac{\beta_0}{\Delta t}\,\hat{T},$
where $\hat{T}$ is the tentative temperature field computed analogously to (6).
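The structure of one BDF$k$/EXT$k$ step is illustrated by the following 1D sketch (a model advection-diffusion problem of our construction, not the SchwarzSEM code), which mirrors the tentative-field construction (6) and the Helmholtz solve (11):

```python
# One semi-implicit BDF3/EXT3 step for u_t = N(u) + (1/Re) u_xx:
# N(u) is extrapolated explicitly; diffusion is treated implicitly.
import numpy as np

def bdf3_ext3_step(u_hist, N_hist, dt, Lap, Re):
    """u_hist = [u^n, u^{n-1}, u^{n-2}]; N_hist holds N() of the same
    states; Lap is a discrete Laplacian matrix."""
    beta0 = 11.0 / 6.0                # BDF3: (beta0 u^{n+1} - ...) / dt
    beta  = [-3.0, 1.5, -1.0 / 3.0]   # beta_1 .. beta_3
    alpha = [3.0, -3.0, 1.0]          # EXT3 extrapolation weights
    # Tentative field, cf. (6): BDF history plus extrapolated explicit terms
    u_hat = sum(-b / beta0 * u + dt * a / beta0 * Nu
                for b, a, u, Nu in zip(beta, alpha, u_hist, N_hist))
    # Helmholtz solve, cf. (11): (beta0/dt I - (1/Re) Lap) u^{n+1} = (beta0/dt) u_hat
    H = beta0 / dt * np.eye(Lap.shape[0]) - Lap / Re
    return np.linalg.solve(H, beta0 / dt * u_hat)
```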
Spatial discretization of (8)–(12) is based on variational projection operators [dfm02]. We impose either essential (Dirichlet) boundary conditions or natural (Neumann) boundary conditions on a surface for velocity (and temperature). As expected, surfaces that have Dirichlet conditions for velocity have Neumann conditions for pressure, and vice-versa. We note that we use $\partial\Omega_v$ to denote the subset of the domain boundary on which Dirichlet conditions are imposed on velocity and $\partial\Omega_o$ for the subset (e.g., outflow) on which pressure is prescribed.

Compute the tentative velocity field $\hat{\mathbf{u}}$ using (6), which accounts for the BDF and time-extrapolated nonlinear (EXT) terms.
Solve the resulting unsteady Stokes problem for pressure and velocity,

(13)   $-\nabla^2 p^{\,n+1} = g_p \ \text{in } \Omega, \qquad \left(-\frac{1}{Re}\nabla^2 + \frac{\beta_0}{\Delta t}\right)\mathbf{u}^{\,n+1} = g_v \ \text{in } \Omega, \qquad \mathbf{u}^{\,n+1} = \mathbf{u}^b \ \text{on } \partial\Omega_v, \qquad p^{\,n+1} = 0 \ \text{on } \partial\Omega_o.$

Here $g = (g_p, g_v)$, determined using $\hat{\mathbf{u}}$, accounts for all inhomogeneities for both pressure and velocity, given on the right-hand sides of (8) and (11), respectively:

(14)   $g_p = -\nabla\cdot\left(\frac{\beta_0}{\Delta t}\,\hat{\mathbf{u}}\right), \qquad g_v = \frac{\beta_0}{\Delta t}\,\hat{\mathbf{u}} - \nabla p^{\,n+1}.$

In (13), $\mathbf{u}^b$ is the prescribed velocity on all Dirichlet surfaces ($\partial\Omega_v$) of the domain, homogeneous Dirichlet conditions are imposed for pressure on outflow surfaces ($\partial\Omega_o$), and homogeneous Neumann conditions are imposed for velocity on $\partial\Omega_o$. It is straightforward to show that the Neumann conditions for pressure on $\partial\Omega_v$ can be represented as a function of the Dirichlet condition for velocity ($\mathbf{u}^b$) [tomboulides1997]. In (13), we have omitted the solution to temperature (12) for brevity since it is similar to the Helmholtz solve for velocity. Note that since we always impose Dirichlet conditions for velocity and pressure on $\partial\Omega_v$ and $\partial\Omega_o$, respectively, we will omit them in the description of timestepping for overlapping grids.
2.3 Solution of the INSE on Overlapping Grids
The overlapping Schwarz method for solving a PDE in overlapping domains was introduced by Schwarz in 1870 [schwarz1870]. The decomposition for Schwarz's initial model problem is illustrated in Fig. 2, where the domain $\Omega$ is partitioned into two subdomains, a rectangle ($\Omega_1$) and a circle ($\Omega_2$), with nonzero overlap such that $\Omega = \Omega_1 \cup \Omega_2$ and $\Omega_1 \cap \Omega_2 \neq \emptyset$. We use $\partial\Omega_s^I$ to denote the "interdomain boundary", namely the segment of the subdomain boundary that is interior to another subdomain. The interdomain boundaries $\partial\Omega_1^I$ and $\partial\Omega_2^I$ are highlighted in Fig. 2(b).
There are two key aspects to solving a PDE (INSE) in overlapping grids. First, since overlapping grids introduce interdomain boundaries, a robust mechanism is required to interpolate boundary data for the grid points discretizing $\partial\Omega_s^I$ from the subdomain that they overlap. Second, Schwarz iterations are required to ensure that the solution is consistent across the different overlapping grids.

There are two popular approaches for Schwarz iterations. In the alternating Schwarz method, given $S$ overlapping subdomains, the PDE is solved in the first subdomain and that solution is used to update the interdomain boundary data in all other subdomains. This process is repeated sequentially for all $S$ subdomains. A drawback of the alternating Schwarz method is that it does not scale with the number of subdomains, since it requires at least $S$ sequential steps to obtain the solution of a PDE. In contrast, the simultaneous Schwarz method solves the PDE simultaneously in all subdomains, followed by interdomain boundary data exchange. This iterative process is repeated until the solution converges to the desired accuracy in the overlap region. Thus, assuming there is a robust mechanism to effect interdomain boundary data exchange, the scalability of the simultaneous Schwarz iterations is not restricted by the number of subdomains. The SchwarzSEM framework that we describe next is based on the simultaneous Schwarz method, and the reader is referred to [smith2004domain] for additional details on different overlapping Schwarz-based techniques.
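The following runnable sketch (a 1D Poisson model problem of our construction, not the SchwarzSEM framework) illustrates the simultaneous variant: both subdomain solves in each iteration use interdomain boundary data from the previous iterate, and are therefore mutually independent:

```python
# Simultaneous Schwarz iterations for -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# split into overlapping subdomains (0, 0.6) and (0.4, 1).
import numpy as np

def dirichlet_poisson(f, ua, ub, h):
    # Second-order FD solve of -u'' = f with Dirichlet end values ua, ub
    n = len(f)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    rhs = f + np.r_[ua / h**2, np.zeros(n - 2), ub / h**2]
    return np.concatenate(([ua], np.linalg.solve(A, rhs), [ub]))

h = 1.0 / 100
f = np.ones(101)
i1, i2 = 60, 40               # Omega_1 ends at x=0.6; Omega_2 starts at x=0.4
u1 = np.zeros(i1 + 1)         # iterate on Omega_1
u2 = np.zeros(101 - i2)       # iterate on Omega_2
for it in range(20):          # each sweep could run both solves in parallel
    g1 = u2[i1 - i2]          # boundary data for Omega_1 from the old u2
    g2 = u1[i2]               # boundary data for Omega_2 from the old u1
    u1 = dirichlet_poisson(f[1:i1], 0.0, g1, h)
    u2 = dirichlet_poisson(f[i2 + 1:-1], g2, 0.0, h)
```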
For notational purposes, we introduce $\mathbf{u}_s^{\,n+1,\,l}$ as the solution on the $l$th Schwarz iteration in subdomain $\Omega_s$ at time level $t^{\,n+1}$, for $s = 1, \ldots, S$. Thus, assuming that the solution is known up to time $t^{\,n}$ and has been converged using $L$ Schwarz iterations at the previous timestep, $\mathbf{u}_s^{\,n,\,L}$ represents the solution at time $t^{\,n}$. With this notation, and assuming a constant timestep size $\Delta t$ (which is equal for all overlapping grids), we define the Schwarz update procedure as follows:

Compute the tentative velocity field using (6) with the solution from previous timesteps in each subdomain $\Omega_s$:

(15)   $\hat{\mathbf{u}}_s = \sum_{j=1}^{k}\left(-\frac{\beta_j}{\beta_0}\,\mathbf{u}_s^{\,n+1-j} + \frac{\Delta t\,\alpha_j}{\beta_0}\,\mathbf{N}\big(\mathbf{u}_s^{\,n+1-j}\big)\right),$

where $\hat{\mathbf{u}}_s$ has contributions from the BDF and EXT terms. We note that we do not use the Schwarz-iteration superscript $l$ in $\hat{\mathbf{u}}_s$ because it depends only on the solution at previous timesteps and does not change at each Schwarz iteration.
Since the solution is known up to $t^{\,n}$, the initial iterate ($l = 1$, the predictor step) uses interdomain boundary data based on $e$th-order extrapolation in time. The subsequent Schwarz iterations ($l = 2, \ldots, L$, the corrector steps) directly interpolate the interdomain boundary data from the most recent iteration:
(16)   Stokes solve (13) in $\Omega_s$ with $\ \mathbf{u}_s^{\,n+1,\,1}\big|_{\partial\Omega_s^I} = \sum_{j=1}^{e} \hat{\alpha}_j\,\mathcal{I}\,\mathbf{u}^{\,n+1-j,\,L},$
(17)   Stokes solve (13) in $\Omega_s$ with $\ \mathbf{u}_s^{\,n+1,\,l}\big|_{\partial\Omega_s^I} = \mathcal{I}\,\mathbf{u}^{\,n+1,\,l-1}, \qquad l = 2, \ldots, L,$
where $e$ is the order of extrapolation for the interdomain boundary data at the predictor iteration, $\hat{\alpha}_j$ are the corresponding extrapolation weights that are computed using the routines described in [fornberg1998practical], $\mathcal{I}$ is the interpolation operator that we describe in Section 2.3.1, and

(18)   $\mathcal{I}\,\mathbf{u}\,(\mathbf{x}^*) := \mathbf{u}_t(\mathbf{x}^*) \quad \text{for each } \mathbf{x}^* \in \partial\Omega_s^I \text{ with donor subdomain } \Omega_t,\ t \neq s.$
In (16) and (17), we solve the full unsteady Stokes problem for velocity and pressure in $\Omega_s$ at each Schwarz iteration. Here, we have omitted the Dirichlet boundary conditions for velocity and pressure on $\partial\Omega_v$ and $\partial\Omega_o$, respectively, for brevity.
The temporal accuracy of this singlerate timestepping scheme (16)–(17) is determined by the order $k$ of the BDF/EXT timestepper and the order $e$ of the interdomain boundary data extrapolation. We typically set $e = k$, unless otherwise stated, and Peet and Fischer [peet2012] have characterized, from a stability point of view, the number of corrector iterations that is sufficient for a given order of interface extrapolation. In [mittal2019nonconforming], we further demonstrated that a single corrector iteration is sufficient from an accuracy point of view for basic statistics (e.g., mean or rms) of turbulent flows. This predictor-corrector approach has been used in the SchwarzSEM framework [merrill2017effect; merrill2019moving; mittal2019direct] to demonstrate that it maintains the spatial and temporal convergence of the underlying monodomain SEM framework, and is effective for solving highly turbulent flow phenomena in complex domains using an arbitrary number of overlapping grids.
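The predictor-corrector mechanics of (16)–(17) can be illustrated with a toy model (our construction: two coupled scalar ODEs standing in for two subdomains, with the coupling value playing the role of interdomain boundary data):

```python
# Predictor-corrector handling of "boundary" data for two coupled ODEs:
# x' = a x + c * (partner state). The predictor extrapolates the partner
# state to t^{n+1}; each corrector re-solves with the latest iterate.
import numpy as np

a, c, dt = -1.0, 0.4, 0.01
alpha_ext = np.array([3.0, -3.0, 1.0])         # 3rd-order extrapolation
u_hist = [np.exp(-m * dt) for m in (0, 1, 2)]  # u^n, u^{n-1}, u^{n-2}
v_hist = [np.exp(-m * dt) for m in (0, 1, 2)]

def be_step(x_old, coupling):
    # Backward-Euler stand-in for the implicit "subdomain solve"
    return (x_old + dt * c * coupling) / (1.0 - dt * a)

# Predictor (l = 1): extrapolate the partner's value to t^{n+1}
u_new = be_step(u_hist[0], alpha_ext @ np.array(v_hist))  # both solves are
v_new = be_step(v_hist[0], alpha_ext @ np.array(u_hist))  # independent

# Correctors (l = 2..L): use the partner's most recent iterate
for l in range(2):
    u_new, v_new = be_step(u_hist[0], v_new), be_step(v_hist[0], u_new)
```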
2.3.1 Interpolation
Since overlapping grids rely on interpolation for interdomain boundary data, the interpolation operator ($\mathcal{I}$) is of central significance for overlapping Schwarz-based methods. In our framework, $\mathcal{I}$ is effected (in parallel) via findpts, a scalable high-order interpolation utility that is part of gslib [gslibrepo], an open-source communication library that readily links with Fortran, C, and C++ codes.
findpts provides two key functionalities. First, for a given set of interdomain boundary points that are tagged with the associated subdomain number, findpts determines the computational coordinates of each point. The computational coordinates of a point specify the subdomain number that it overlaps, the element number in which the point was found, and the corresponding reference-space coordinates ($r, s, t$) inside that element. Since a mesh could be partitioned onto many MPI ranks, findpts also specifies the MPI rank on which the donor element is located. For cases where $S = 2$, the donor element search is straightforward [noorani2016particle] because findpts is only concerned with the elements that are not located in the same subdomain as the sought point. In cases where $S > 2$, an interdomain boundary point can overlap multiple subdomains. In these cases, the donor element is chosen from the subdomain that minimizes the error due to simultaneous Schwarz iterations [mittal2019nonconforming]. If the nodal positions of all the overlapping grids are fixed in time, the computational coordinate search needs to be done only at the beginning of the calculation. Otherwise, the computational coordinate search is done at the beginning of each timestep.
The second key functionality of findpts is that, for a given set of computational coordinates, it can interpolate any scalar function defined on the spectral element mesh. All the parallel communication in findpts is handled by gslib's generalized and scalable all-to-all utility, gs_crystal, which is based on the crystal router algorithm of [tufo2001]. Using gslib, findpts has demonstrated excellent parallel scaling for finding the computational coordinates of a given set of points and for interpolating the solution in a monodomain mesh [duttaefficient] and in overlapping meshes [mittal2019nonconforming]. In [mittal2019nonconforming], we demonstrated that the computational coordinate search and interpolation account for about 10% and 1%, respectively, of the total time to solution per timestep.
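Schematically, the information associated with each interdomain boundary point, and how a donor-side tensor-product interpolation would use it, can be pictured as follows (a hypothetical data layout for illustration; not gslib's actual API):

```python
# Hypothetical per-point computational coordinates and donor-side
# interpolation; names and layout are illustrative only.
from dataclasses import dataclass
import numpy as np

@dataclass
class CompCoords:
    subdomain: int   # donor subdomain (overlapping grid) number
    rank: int        # MPI rank that owns the donor element
    elem: int        # donor element number on that rank
    rst: tuple       # reference-space coordinates (r, s, t)

def interpolate(fields, cc, basis_1d):
    """Evaluate a nodal SEM field at the point cc via tensor-product
    Lagrange interpolation inside the donor element; basis_1d(x) returns
    the values of the N+1 one-dimensional basis functions at x."""
    lr, ls, lt = (basis_1d(x) for x in cc.rst)
    return np.einsum('i,j,k,ijk->', lr, ls, lt, fields[cc.elem])
```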
3 Methodology for Multirate Timestepping
In this section, we introduce the parallel multirate timestepping scheme for solving the INSE in overlapping subdomains. We consider only integer timestep ratios,

(19)   $\eta := \frac{\Delta t^c}{\Delta t^f} \in \{1, 2, 3, \ldots\},$

where $\Delta t^c$ corresponds to the subdomain ($\Omega^c$) with slower timescales and $\Delta t^f$ corresponds to the subdomain ($\Omega^f$) with faster timescales. Figure 3 shows a schematic of the discrete timelevels for the STS scheme and the MTS scheme with $\eta = 2$. Here, the black circles indicate the timestep levels for both $\Omega^c$ and $\Omega^f$, and the blue circles indicate the subtimestep levels for $\Omega^f$.
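In this notation, the subtimestep levels of $\Omega^f$ between two consecutive synchronization levels are
\[
t^{\,n+q/\eta} \;=\; t^{\,n} + q\,\Delta t^f, \qquad q = 0, 1, \ldots, \eta, \qquad \Delta t^f = \Delta t^c/\eta,
\]
so the two grids coincide at every $\eta$th fine-grid subtimestep.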
3.1 Multirate Timestepping for $\eta = 2$
For simplicity, we first introduce the multirate timestepping scheme for $\eta = 2$ and then extend it to an arbitrary $\eta$. In a slowest-first or fastest-first method, assuming the solution is known up to $t^{\,n}$, the PDE of interest is temporally integrated in one of the domains (e.g., $\Omega^c$) to obtain the solution at time $t^{\,n+1}$, which is then used to obtain interdomain boundary data for advancing the solution in the other domain (e.g., $\Omega^f$). For a parallel multirate scheme, however, we wish to simultaneously advance the solution in $\Omega^c$ and $\Omega^f$. As a result, the interdomain boundary data is exchanged prior to starting the solution process such that the subtimesteps in $\Omega^f$ can be completed independently of $\Omega^c$. The synchronization timelevels at which the interdomain boundary data is exchanged are indicated in Fig. 3.
Similar to the singlerate timestepping scheme (e.g., see (16)), high-order temporal accuracy is achieved in the multirate setting by extrapolating the interdomain boundary data obtained from the solution at previous (sub)timesteps. For $\eta = 2$, the interdomain boundary data dependency for the predictor step is depicted in Fig. 4 and discussed in the context of the unsteady Stokes solve later in this section (e.g., (20)). For the solutions $\mathbf{u}_f^{\,n+1/2,\,1}$ and $\mathbf{u}_f^{\,n+1,\,1}$, the boundary data is interpolated from the known solutions in $\Omega^c$: $\mathbf{u}_c^{\,n}$, $\mathbf{u}_c^{\,n-1}$, and $\mathbf{u}_c^{\,n-2}$. Simultaneously, the interdomain boundary data for the solution $\mathbf{u}_c^{\,n+1,\,1}$ is interpolated from the known solutions in $\Omega^f$: $\mathbf{u}_f^{\,n}$, $\mathbf{u}_f^{\,n-1/2}$, and $\mathbf{u}_f^{\,n-1}$. This interdomain boundary data exchange occurs at synchronization timelevel $t^{\,n}$, prior to starting the solution process for times $t^{\,n+1/2}$ and $t^{\,n+1}$.
Once the solutions $\mathbf{u}_f^{\,n+1/2,\,1}$, $\mathbf{u}_f^{\,n+1,\,1}$, and $\mathbf{u}_c^{\,n+1,\,1}$ have been determined, correction iterations are needed (similar to the singlerate timestepping scheme) to stabilize the solution if high-order extrapolation is used for the interdomain boundary data during the predictor step. The interdomain boundary data dependency for the corrector steps is depicted in Fig. 5. In $\Omega^f$, the interdomain boundary data for $\mathbf{u}_f^{\,n+1/2,\,l}$ comes from the most recent iteration in $\Omega^c$ ($\mathbf{u}_c^{\,n+1,\,l-1}$) and the converged solution at previous timesteps, $\mathbf{u}_c^{\,n}$ and $\mathbf{u}_c^{\,n-1}$. For the solution at time $t^{\,n+1}$ ($\mathbf{u}_f^{\,n+1,\,l}$ and $\mathbf{u}_c^{\,n+1,\,l}$), the interdomain boundary data depends only on the solution from the most recent iteration ($\mathbf{u}_c^{\,n+1,\,l-1}$ and $\mathbf{u}_f^{\,n+1,\,l-1}$).
Using this approach for obtaining interdomain boundary data, we now summarize the multirate timestepping scheme for $\eta = 2$. Recall our notation from singlerate timestepping: $\mathbf{u}_s^{\,n+1,\,l}$ denotes the solution in $\Omega_s$ at the $l$th Schwarz iteration at time $t^{\,n+1}$.

For the predictor step ($l = 1$), assuming that the solution is known up to time $t^{\,n}$, compute the tentative velocity field using (6), and solve the linear Stokes problem in each subdomain:

Unsteady Stokes solve for the first subtimestep in $\Omega^f$:

(20a)   $\hat{\mathbf{u}}_f^{\,n+1/2} = \sum_{j=1}^{k}\left(-\frac{\beta_j}{\beta_0}\,\mathbf{u}_f^{\,n+(1-j)/2} + \frac{\Delta t^f\,\alpha_j}{\beta_0}\,\mathbf{N}\big(\mathbf{u}_f^{\,n+(1-j)/2}\big)\right),$
(20b)   Stokes solve (13) in $\Omega^f$ with $\ \mathbf{u}_f^{\,n+1/2,\,1}\big|_{\partial\Omega_f^I} = \sum_{j=1}^{k} \hat{\alpha}_j^{\,1}\,\mathcal{I}\,\mathbf{u}_c^{\,n+1-j}.$
Unsteady Stokes solve for the second subtimestep in $\Omega^f$:

(21a)   $\hat{\mathbf{u}}_f^{\,n+1} = \sum_{j=1}^{k}\left(-\frac{\beta_j}{\beta_0}\,\mathbf{u}_f^{\,n+(2-j)/2} + \frac{\Delta t^f\,\alpha_j}{\beta_0}\,\mathbf{N}\big(\mathbf{u}_f^{\,n+(2-j)/2}\big)\right),$
(21b)   Stokes solve (13) in $\Omega^f$ with $\ \mathbf{u}_f^{\,n+1,\,1}\big|_{\partial\Omega_f^I} = \sum_{j=1}^{k} \hat{\alpha}_j^{\,2}\,\mathcal{I}\,\mathbf{u}_c^{\,n+1-j}.$
Unsteady Stokes solve for the only timestep in $\Omega^c$:

(22a)   $\hat{\mathbf{u}}_c^{\,n+1} = \sum_{j=1}^{k}\left(-\frac{\beta_j}{\beta_0}\,\mathbf{u}_c^{\,n+1-j} + \frac{\Delta t^c\,\alpha_j}{\beta_0}\,\mathbf{N}\big(\mathbf{u}_c^{\,n+1-j}\big)\right),$
(22b)   Stokes solve (13) in $\Omega^c$ with $\ \mathbf{u}_c^{\,n+1,\,1}\big|_{\partial\Omega_c^I} = \sum_{j=1}^{k} \hat{\alpha}_j\,\mathcal{I}\,\mathbf{u}_f^{\,n+(1-j)/2}.$
In (20), we first compute the tentative velocity field, similar to (15) for singlerate timestepping, and then solve for $\mathbf{u}_f^{\,n+1/2,\,1}$ using the interdomain boundary data extrapolated from the solution at previous timesteps in $\Omega^c$. The definition of $\mathcal{I}$ is unchanged from (18) for the singlerate timestepping scheme, and we have introduced the notation $\hat{\alpha}_j^{\,q}$ to denote the coefficients that are used to extrapolate the interdomain boundary data at the $q$th subtimestep for $\Omega^f$.


Once the predictor step, $l = 1$, is complete, corrector iterations can be done to improve the accuracy of the solution and stabilize the method:

Unsteady Stokes solve for the first subtimestep in $\Omega^f$:

(23a)   $\hat{\mathbf{u}}_f^{\,n+1/2}$ is recomputed via (20a) using the most recent iterates,
(23b)   Stokes solve (13) in $\Omega^f$ with $\ \mathbf{u}_f^{\,n+1/2,\,l}\big|_{\partial\Omega_f^I} = \tilde{\alpha}_1\,\mathcal{I}\,\mathbf{u}_c^{\,n+1,\,l-1} + \tilde{\alpha}_2\,\mathcal{I}\,\mathbf{u}_c^{\,n} + \tilde{\alpha}_3\,\mathcal{I}\,\mathbf{u}_c^{\,n-1}.$
Unsteady Stokes solve for the second subtimestep in $\Omega^f$:

(24a)   $\hat{\mathbf{u}}_f^{\,n+1}$ is recomputed via (21a) using the most recent iterates,
(24b)   Stokes solve (13) in $\Omega^f$ with $\ \mathbf{u}_f^{\,n+1,\,l}\big|_{\partial\Omega_f^I} = \mathcal{I}\,\mathbf{u}_c^{\,n+1,\,l-1}.$
Unsteady Stokes solve for the only timestep in $\Omega^c$:

(25)   Stokes solve (13) in $\Omega^c$ with $\ \mathbf{u}_c^{\,n+1,\,l}\big|_{\partial\Omega_c^I} = \mathcal{I}\,\mathbf{u}_f^{\,n+1,\,l-1}.$
In (23), we compute $\mathbf{u}_f^{\,n+1/2,\,l}$ using the interdomain boundary data obtained from $\Omega^c$. To maintain high-order temporal accuracy, this boundary data is interpolated from the most recent solution at the current timestep ($\mathbf{u}_c^{\,n+1,\,l-1}$) and the converged solution at previous timesteps ($\mathbf{u}_c^{\,n}$ and $\mathbf{u}_c^{\,n-1}$). The corresponding coefficients for this temporal interpolation are represented by $\tilde{\alpha}_j$ in (23). In our framework, $\tilde{\alpha}_j$ are computed assuming linear interpolation when $k = 1$ or $k = 2$, and quadratic interpolation when $k = 3$. This approach ensures that the desired temporal accuracy is maintained. After (23) is used to compute $\mathbf{u}_f^{\,n+1/2,\,l}$, (24) is used to determine $\mathbf{u}_f^{\,n+1,\,l}$ in $\Omega^f$. In a similar fashion, $\mathbf{u}_c^{\,n+1,\,l}$ is (concurrently) computed using (25).
We note that in the singlerate timestepping scheme, the tentative velocity field was computed only once and reused for all the corrector iterations (15). In contrast, we recompute the tentative velocity field in (23)–(24) at each corrector iteration for $\Omega^f$, because the solution process spans multiple subtimesteps and saving the tentative velocity field for each of the subtimesteps is not a scalable approach (e.g., $\eta = 100$ would require us to save the tentative velocity field for 100 subtimesteps). Using (23)–(25), simultaneous Schwarz iterations can be used to determine the solution in $\Omega^f$ and $\Omega^c$ at time $t^{\,n+1}$.

3.2 Multirate Timestepping for Arbitrary $\eta$
From the preceding discussion, we can anticipate that the generalization of this multirate scheme to ratio $\eta$ will require $\eta$ subtimesteps in $\Omega^f$ and only one timestep in $\Omega^c$. Figure 6 shows a schematic with timelevels for an arbitrary timestep ratio.

Compute the tentative velocity field and solve the linear Stokes problem in each subdomain for the predictor step ($l = 1$):

Unsteady Stokes solve for the $\eta$ subtimesteps of $\Omega^f$:

(26a)   $\hat{\mathbf{u}}_f^{\,n+q/\eta} = \sum_{j=1}^{k}\left(-\frac{\beta_j}{\beta_0}\,\mathbf{u}_f^{\,n+(q-j)/\eta} + \frac{\Delta t^f\,\alpha_j}{\beta_0}\,\mathbf{N}\big(\mathbf{u}_f^{\,n+(q-j)/\eta}\big)\right), \qquad q = 1, \ldots, \eta,$
(26b)   Stokes solve (13) in $\Omega^f$ with $\ \mathbf{u}_f^{\,n+q/\eta,\,1}\big|_{\partial\Omega_f^I} = \sum_{j=1}^{k} \hat{\alpha}_j^{\,q}\,\mathcal{I}\,\mathbf{u}_c^{\,n+1-j}.$
Unsteady Stokes solve for the only timestep of $\Omega^c$:

(27a)   $\hat{\mathbf{u}}_c^{\,n+1} = \sum_{j=1}^{k}\left(-\frac{\beta_j}{\beta_0}\,\mathbf{u}_c^{\,n+1-j} + \frac{\Delta t^c\,\alpha_j}{\beta_0}\,\mathbf{N}\big(\mathbf{u}_c^{\,n+1-j}\big)\right),$
(27b)   Stokes solve (13) in $\Omega^c$ with $\ \mathbf{u}_c^{\,n+1,\,1}\big|_{\partial\Omega_c^I} = \sum_{j=1}^{k} \hat{\alpha}_j\,\mathcal{I}\,\mathbf{u}_f^{\,n+(1-j)/\eta}.$


Once the predictor step is complete, corrector iterations are done as

Unsteady Stokes solve for the $\eta$ subtimesteps of $\Omega^f$:

(28a)   $\hat{\mathbf{u}}_f^{\,n+q/\eta}$ is recomputed via (26a) using the most recent iterates, $q = 1, \ldots, \eta$,
(28b)   Stokes solve (13) in $\Omega^f$ with $\ \mathbf{u}_f^{\,n+q/\eta,\,l}\big|_{\partial\Omega_f^I} = \tilde{\alpha}_1^{\,q}\,\mathcal{I}\,\mathbf{u}_c^{\,n+1,\,l-1} + \tilde{\alpha}_2^{\,q}\,\mathcal{I}\,\mathbf{u}_c^{\,n} + \tilde{\alpha}_3^{\,q}\,\mathcal{I}\,\mathbf{u}_c^{\,n-1}.$
Unsteady Stokes solve for the only timestep of $\Omega^c$:

(29)   Stokes solve (13) in $\Omega^c$ with $\ \mathbf{u}_c^{\,n+1,\,l}\big|_{\partial\Omega_c^I} = \mathcal{I}\,\mathbf{u}_f^{\,n+1,\,l-1}.$

Using the high-order multirate timestepping strategy in (26)–(29), the solution to the incompressible Navier-Stokes equations can be advanced in two overlapping grids for an arbitrary integer timestep ratio $\eta$.
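The control flow of one MTS step can be summarized by the following runnable sketch (the Stokes solves are stubbed out; this illustrates the schedule in (26)–(29), not the authors' implementation):

```python
# Schedule of one MTS step with ratio eta: the printout shows which
# (sub)steps use extrapolated boundary data (predictor) versus data
# interpolated from the partner's latest iterate (correctors). All
# solves within one sweep are mutually independent.
def mts_timestep(eta=3, n_corrector=1):
    # Predictor, l = 1: boundary data extrapolated from t^n and earlier
    for q in range(1, eta + 1):                       # cf. (26)
        print(f"fine   solve t^(n+{q}/{eta}), l=1: extrapolated BC from coarse history")
    print("coarse solve t^(n+1),    l=1: extrapolated BC from fine history")  # cf. (27)
    # Correctors, l = 2..L: boundary data from the partner's latest iterate
    for l in range(2, n_corrector + 2):
        for q in range(1, eta + 1):                   # cf. (28)
            print(f"fine   solve t^(n+{q}/{eta}), l={l}: BC interpolated using coarse l={l-1}")
        print(f"coarse solve t^(n+1),    l={l}: BC from fine l={l-1}")         # cf. (29)

mts_timestep()
```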
3.3 Multirate Timestepping in Overlapping Domains
With the multirate timestepping scheme that we have described, it is straightforward to scale the method to an arbitrary number of domains. Figure 7 shows an example of a schematic with timelevels for MTS in $S$ subdomains.
For notational purposes, we will use $\Omega_1$ to represent the subdomain with the slowest timescales. With each subdomain, we associate a timestep ratio with respect to the timestep size of $\Omega_1$:

(30)   $\eta_s := \frac{\Delta t_1}{\Delta t_s}, \qquad s = 1, \ldots, S,$

such that $\eta_1 = 1$ and $\eta_s \geq 1$ for $s = 2, \ldots, S$. For the example in Fig. 7, each subdomain has its own integer timestep ratio $\eta_s$.
Using (30), and assuming that the interdomain boundary data for points on $\partial\Omega_s^I$ is interpolated from a donor subdomain $\Omega_d$, the system in (26)–(29) can be simplified for all subdomains ($s = 1, \ldots, S$):

Unsteady Stokes solve for the $\eta_s$ subtimesteps during the predictor stage:

(31a)   $\hat{\mathbf{u}}_s^{\,n+q/\eta_s} = \sum_{j=1}^{k}\left(-\frac{\beta_j}{\beta_0}\,\mathbf{u}_s^{\,n+(q-j)/\eta_s} + \frac{\Delta t_s\,\alpha_j}{\beta_0}\,\mathbf{N}\big(\mathbf{u}_s^{\,n+(q-j)/\eta_s}\big)\right), \qquad q = 1, \ldots, \eta_s,$
(31b)   Stokes solve (13) in $\Omega_s$ with $\ \mathbf{u}_s^{\,n+q/\eta_s,\,1}\big|_{\partial\Omega_s^I} = \sum_{j=1}^{k} \hat{\alpha}_j^{\,q}\,\mathcal{I}\,\mathbf{u}_d^{\,n+(1-j)/\eta_d}.$
Once the predictor step is complete, corrector iterations are done for the subtimesteps:

(32a)   $\hat{\mathbf{u}}_s^{\,n+q/\eta_s}$ is recomputed via (31a) using the most recent iterates,
(32b)   Stokes solve (13) in $\Omega_s$ with $\ \mathbf{u}_s^{\,n+q/\eta_s,\,l}\big|_{\partial\Omega_s^I} = \tilde{\alpha}_1^{\,q}\,\mathcal{I}\,\mathbf{u}_d^{\,n+1,\,l-1} + \tilde{\alpha}_2^{\,q}\,\mathcal{I}\,\mathbf{u}_d^{\,n} + \tilde{\alpha}_3^{\,q}\,\mathcal{I}\,\mathbf{u}_d^{\,n-1}.$
Equations (31)–(32) describe the MTS method for solving the INSE in an arbitrary number of overlapping grids. From an implementation perspective, since different interdomain boundary points in a grid can overlap different grids, the coefficients $\hat{\alpha}_j^{\,q}$ and $\tilde{\alpha}_j^{\,q}$ are computed for each point based on the timestep size of the donor subdomain ($\Delta t_d$). Note that for cases where all the meshes are fixed and the timestep size is constant in each subdomain, these coefficients need to be computed only at the beginning of the calculation.
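For illustration, such extrapolation/interpolation weights for arbitrary (sub)timestep levels can be generated from Lagrange cardinal polynomials; the sketch below is ours (the paper cites Fornberg's routines [fornberg1998practical] for this purpose) and reproduces, e.g., the third-order extrapolation weights used in the predictor:

```python
# Weights w such that u(t_target) ~ sum_j w[j] * u(t_known[j]),
# via the Lagrange cardinal polynomials on the known time levels.
def time_weights(t_known, t_target):
    w = []
    for j, tj in enumerate(t_known):
        lj = 1.0
        for m, tm in enumerate(t_known):
            if m != j:
                lj *= (t_target - tm) / (tj - tm)
        w.append(lj)
    return w

# Example: 3rd-order extrapolation to t^{n+1} from t^n, t^{n-1}, t^{n-2}
print(time_weights([0.0, -1.0, -2.0], 1.0))   # -> [3.0, -3.0, 1.0]
```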
Figure 8 shows an example of the interdomain boundary data dependency for the schematic shown in Fig. 7, where the gridpoints on each interdomain boundary may overlap different donor subdomains. Assuming that the solution is known up to time $t^{\,n}$, the boundary data for the predictor step in each subdomain is extrapolated from the known donor-subdomain solutions at the three most recent (sub)timestep levels. Similarly, for the corrector steps ($l > 1$), the boundary data is interpolated from the most recent Schwarz iterate at the current timelevel and the known solutions at previous timesteps.
We note that unlike the singlerate timestepping scheme, where only one interpolation is required at each predictor and corrector iteration, the multirate timestepping scheme requires $\eta_s$ interpolations at the beginning of each predictor step and one interpolation at the beginning of each corrector iteration. Thus, with $L$ Schwarz iterations per (sub)timestep (one predictor and $L-1$ correctors) and timestep ratio $\eta_s$, the STS scheme requires a total of $\eta_s L$ interpolations over one coarse timestep (one at each of the $\eta_s$ subtimesteps per iteration), whereas the MTS scheme requires a total of $\eta_s + L - 1$ interpolations. Consequently, the MTS scheme typically requires fewer interpolations in comparison to the STS scheme. In Section 4.3, we will use the example of a thermally buoyant plume to compare the total time to solution between the STS and MTS schemes.
3.4 Stability Considerations
An underlying assumption of the MTS scheme is that each subdomain uses a timestep size that satisfies its CFL stability criterion (2). This requirement ensures intradomain stability, i.e., the unsteady Stokes solve for time-advancing the solution of the INSE is stable in each subdomain. Interdomain stability, however, is similar to the singlerate timestepping scheme and depends on the order of extrapolation ($e$) used for the interdomain boundary data and the number of Schwarz iterations ($L$) used at each timestep.
Peet and Fischer [peet2012] have analyzed the stability of the singlerate timestepping scheme using an FD-based framework to show that the stability of the predictor-corrector approach depends on the order of extrapolation used for the interdomain boundary data and on the number of corrector iterations employed at each timestep.