## 1 Introduction

Accurate and efficient simulation of waves is important in many areas of science and engineering because waves can carry information over large distances. This ability stems from the fact that waves do not change shape in free space. When the background medium varies, however, the wave forms propagating through it change, and the waves can then be used to probe the interior material properties of objects.

In order to preserve the properties of waves from the continuous setting it is preferable to use high order accurate discretizations that are able to control dispersive errors. The development of high order methods for wave propagation problems has been an active area of research for a long time and there are by now many attractive methods. Examples include (but are not limited to) finite difference methods, SBP ; Virta2014 ; Wang2016 ; Petersson2018 ; Hagstrom2012 , embedded boundary finite differences, appelo2012fourth ; FCAD1 ; FCAD2 ; Li2004295 ; Wandzura2004763 , element based methods like discontinuous Galerkin (DG) methods, Wilcox:2010uq ; Upwind2 ; ChouShuXing2014 ; ChungEngquist06 ; ChungEngquist09 ; GSSwave , hybridized discontinuous Galerkin (HDG) methods, Nguyen2011 ; Stanglmeier2016 , cut-cell finite elements STICKO2016364 ; sticko2016higher and Galerkin-difference methods BANKS2016310 .

An advantage of summation-by-parts finite differences and Galerkin type methods is that stability is guaranteed; however, this guarantee comes with some drawbacks. For diagonal norm summation-by-parts finite differences the order of accuracy near boundaries is reduced to roughly half of that in the interior. Further, the need for multi-block grids restricts the geometrical flexibility.

As DG and HDG methods are naturally formulated on unstructured grids, they have good geometric flexibility. However, Galerkin based polynomial methods often require small timesteps when combined with explicit timestepping methods (the Galerkin-difference and cut-cell finite element methods are less affected by this). On the other hand, they preserve high order accuracy all the way up to the boundary, and boundary conditions are easy to implement independent of the order of the method.

The pioneering work by Henshaw and co-authors, see for example chess1990 , describes techniques for generating overset grids as well as how they can be used to solve elliptic and first order time-dependent partial differential equations (PDEs) by second order accurate finite differences. In an overset grid method the geometry is discretized by narrow body-fitted curvilinear grids while the volume is discretized on one or more Cartesian grids. The generation of such body-fitted grids is local and typically produces grids of very high quality, OGEN . The grids overlap (we say that they are overset) so that the solution on an interior (often referred to as non-physical or ghost) boundary can be transferred from the interior of another grid. In chess1990 and in most other overset grid methods the transfer of solutions between grids is done by interpolation. Since the bulk of the domain can be discretized on a Cartesian grid, the efficiency asymptotically approaches that of a Cartesian solver while retaining the geometrical flexibility of an unstructured grid method. The same type of efficiency can be expected for embedded boundary and cut-cell finite element methods, but the errors close to physical boundaries are typically smoother and smaller when body-fitted grids are used.

Here we are concerned with the approximation of the scalar wave equation on overset grids. To our knowledge, high order overset grid methods for wave equations in second order form have been restricted to finite difference discretizations. For example, in henshaw:1730 high order centered finite difference approximations to Maxwell’s equations (written as a system of second order wave equations) were introduced. More recently, in ANGEL2018534 , the upwind discretizations of Banks and Henshaw introduced in BANKS20125854 were generalized to overset grids. A second order accurate overset grid method for elastic waves can be found in smog .

We use the recently introduced dissipative Hermite methods for the scalar wave equation in second order form, secondHermite , for the approximation on Cartesian grids. To handle geometry we use the energy based DG methods of Upwind2 on thin grids that are grown out from physical boundaries. We use projection to transfer the solutions between grids rather than interpolation.

Both the Hermite and DG methods we employ increase the order of accuracy by increasing the number of degrees of freedom on an element or cell. This has practical implications for grid generation as a single grid with minimal overlap can be used independent of order, reducing the complexity of the grid generation step. This can be important for example in problems like optimal shape design, where the boundary changes throughout the optimization. This is different from the finite difference methods where, due to the wider finite difference stencils, the overlap must grow as the order is increased.

The transfer of solutions between overset grids typically causes a perturbation to the discrete operators which, especially for hyperbolic problems, results in instabilities, see smog for example. These instabilities are often weak and can thus be suppressed by a small amount of artificial dissipation. There are two drawbacks to this added dissipation. First, it is often not easy to determine the suitable amount, i.e. large enough to suppress instabilities but small enough not to reduce the accuracy or the timestep too severely. Second, in certain cases the instabilities are strong enough that the dissipation must scale with the discretization parameter (the grid size) in such a way that the order of accuracy of the overall method is reduced by one.

Similar to ANGEL2018534 , we use a dissipative method that has naturally built-in damping that is sufficient to suppress the weak instabilities caused by the overset grids. The order of the hybrid overset grid method is the design order of the Hermite method or the DG method, whichever is smaller.

In the hybrid H–DG overset grid method the Hermite method is used on a Cartesian grid in the interior of the domain, and the discontinuous Galerkin method on another, curvilinear grid at the boundary. The numerical solution is evolved independently on these grids for one timestep of the Hermite method. By using the Hermite method in the interior the strict timestep constraints of the DG method are relaxed by a factor that grows with the order of the method. Asymptotically, as discussed above, the complexity of the hybrid H–DG solver approaches that of the Cartesian Hermite solver secondHermite .

The paper is organized as follows. The Hermite method is described in the next section: we first explain the method in a simple one dimensional setting and then describe how it generalizes to two dimensions. The DG method is described in section 3. The details of the overset grids and the hybridization of the DG and Hermite methods are described in section 4. We illustrate the hybrid H–DG method with numerical simulations in section 5.

## 2 Dissipative Hermite method for the scalar wave equation

We present the Hermite method in some detail here and refer the reader to the original work secondHermite for convergence analysis and error estimates.

Consider the one dimensional wave equation in second order form in space and first order in time

(1) u_t = v,

(2) v_t = c^2 u_xx + f(x, t).

Here the forcing f is assumed smooth enough for optimal convergence. We refer to u as the displacement and to v as the velocity. The speed of sound is c. We consider boundary conditions of Dirichlet or Neumann type

and initial conditions

Let the spatial domain be . The domain will be discretized by a primal grid

and a dual grid

The use of staggered grids is essential for being able to take large timesteps. In time we discretize using a uniform grid with increments , that is

At each grid point the approximation to the solution is represented by its degrees of freedom (DOF) that approximate the values and spatial derivatives of and . Equivalently, the approximations to and can be represented as polynomials centered at grid points . The Taylor coefficients of these polynomials are scaled versions of the degrees of freedom. To achieve the optimal order of accuracy we require the and first derivatives of and respectively to be stored at each grid point.
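The relation between nodal derivative data and scaled Taylor coefficients can be sketched as follows. This is an illustrative example, not the authors' code; the scaling c_l = h^l u^(l)(x_i)/l! is one common choice and the function names are hypothetical.

```python
from math import factorial
import numpy as np

def dofs_from_derivatives(derivs, h):
    """Map nodal derivative values [u, u', ..., u^(m)] at x_i to scaled
    Taylor coefficients c_l = h^l u^(l)(x_i) / l!  (an assumed scaling)."""
    return np.array([h**l * d / factorial(l) for l, d in enumerate(derivs)])

def eval_local_poly(c, x, x_i, h):
    """Evaluate the local polynomial sum_l c_l ((x - x_i)/h)^l."""
    s = (x - x_i) / h
    return sum(cl * s**l for l, cl in enumerate(c))

# Example: represent u(x) = sin(x) by 5 derivatives at x_i = 0.3 with h = 0.1.
x_i, h, m = 0.3, 0.1, 5
derivs = [np.sin(x_i), np.cos(x_i), -np.sin(x_i),
          -np.cos(x_i), np.sin(x_i), np.cos(x_i)]
c = dofs_from_derivatives(derivs, h)
err = abs(eval_local_poly(c, x_i + 0.5 * h, x_i, h) - np.sin(x_i + 0.5 * h))
```

The local error at half a cell width away is the Taylor remainder, here of size (h/2)^(m+1)/(m+1)! or so.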

At the initial time (which we take to be ) these polynomials are approximations to the initial condition on the primal grid

The coefficients and are assumed to be accurate approximations to the scaled Taylor coefficients of the initial data. If expressions for the derivatives of the initial data are known we simply set

(3)

Alternatively, if only the functions and are known, we may use a projection or interpolation procedure to find the coefficients in (3).

The numerical algorithm for a single timestep consists of two phases, an interpolation step and an evolution step. First, during the interpolation phase the spatial piecewise polynomials are constructed to approximate the solution at the current time. Then, in the time evolution phase we use the spatial derivatives of the interpolation polynomials to compute time derivatives of the solution using the PDE. We compute new values of the DOF on the next time level by evaluating the obtained Taylor series. We now describe each step separately.

### 2.1 Hermite interpolation

At the beginning of a timestep at time (or at the initial time) we consider a cell and construct the unique local Hermite interpolant of degree for the displacement and degree for the velocity. The interpolating polynomials are centered at the dual grid points and can be written in Taylor form

(4)

(5)

The interpolants and are determined by the local interpolation conditions:

We find the coefficients in (4) and (5) by forming a generalized Newton table as described in hagstrom2015solving .
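A generalized Newton (divided-difference) table for Hermite data with repeated nodes can be sketched as below; this is a generic two-point illustration in the spirit of hagstrom2015solving, not the paper's implementation, and the function names are our own.

```python
import numpy as np
from math import factorial

def hermite_newton(x_nodes, derivs):
    """Generalized Newton table for Hermite interpolation.
    x_nodes: distinct nodes; derivs[i]: [f(x_i), f'(x_i), ..., f^(k_i)(x_i)].
    Returns the confluent node list z and the Newton coefficients."""
    z, vals = [], {}
    for x, d in zip(x_nodes, derivs):
        z += [x] * len(d)
        vals[x] = d
    n = len(z)
    dd = np.zeros((n, n))
    for i, zi in enumerate(z):
        dd[i, 0] = vals[zi][0]
    for j in range(1, n):
        for i in range(n - j):
            if z[i] == z[i + j]:
                # repeated node: divided difference equals f^(j)(z_i)/j!
                dd[i, j] = vals[z[i]][j] / factorial(j)
            else:
                dd[i, j] = (dd[i + 1, j - 1] - dd[i, j - 1]) / (z[i + j] - z[i])
    return z, dd[0, :]

def newton_eval(z, coeffs, x):
    """Evaluate the Newton-form polynomial by Horner's scheme."""
    p = coeffs[-1]
    for c, zk in zip(coeffs[-2::-1], z[-2::-1]):
        p = p * (x - zk) + c
    return p

# Degree-3 Hermite interpolant of f(x) = x^3 from values and first
# derivatives at x = 0 and x = 1; it reproduces f exactly.
z, coef = hermite_newton([0.0, 1.0], [[0.0, 0.0], [1.0, 3.0]])
```

With m derivatives of the displacement at the two cell endpoints this yields the unique degree 2m+1 interpolant.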

### 2.2 Time evolution

To evolve the solution in time we further expand the coefficients of and . At each point on the dual grid, we seek temporal Taylor series

(6)

(7)

where and . The coefficients and are given by the coefficients of (4) and (5). At this time the scaled time derivatives, and , are unknown and must be determined. Once they are determined we may simply evaluate (6) and (7) at to find the solution at the next half timestep.

In Hermite methods the coefficients of temporal Taylor polynomials are determined by collocating the differential equation, good2006 ; secondHermite ; hagstrom2015solving . In particular, by differentiating (1) and (2) in space and time the time derivatives of the solution can be directly expressed in terms of spatial derivatives

(8) ∂t^(s+1) u = ∂t^s v,

(9) ∂t^(s+1) v = c^2 ∂x^2 (∂t^s u) + ∂t^s f.

Substituting (6) and (7) into (8) and (9) and evaluating at and , we can match powers to find the recursion relations

(10)

(11)

Here are the coefficients of the Taylor expansion of , or of the polynomial which interpolates in time around . Note that since there are a finite number of coefficients, representing the spatial derivatives at the time , the recursions truncate and only and terms need to be considered.
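The flavor of this collocation recursion can be illustrated for the model system u_t = v, v_t = c^2 u_xx with zero forcing. The sketch below works with unscaled derivative arrays rather than the scaled coefficients of (10)-(11), so it is a simplification, not the paper's recursion.

```python
import numpy as np
from math import factorial
from numpy.polynomial import polynomial as P

def time_taylor(u0, v0, c, n_terms):
    """Return lists U, V where U[s] holds the spatial polynomial coefficients
    of the s-th time derivative of u (similarly V for v), so that
    u(x,t) ~ sum_s U[s](x) t^s / s!.  Model problem: u_t = v, v_t = c^2 u_xx."""
    U, V = [np.asarray(u0, float)], [np.asarray(v0, float)]
    for s in range(n_terms - 1):
        U.append(V[s])                        # differentiate u_t = v in time
        V.append(c**2 * P.polyder(U[s], 2))   # differentiate v_t = c^2 u_xx
    return U, V

# The polynomial d'Alembert solution u(x,t) = (x + c t)^3 is recovered
# exactly, consistent with the remark below about exact time evolution.
c = 2.0
U, V = time_taylor([0, 0, 0, 1.0], [0, 0, 3 * c], c, 5)
x, t = 0.2, 0.1
u_val = sum(P.polyval(x, U[s]) * t**s / factorial(s) for s in range(5))
```

Since the spatial polynomials have finite degree, the recursion truncates after finitely many terms, just as noted above.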

To complete a half timestep we evaluate the approximation at for the and first derivatives

(12)

(13)

###### Remark 1

For a piecewise polynomial solution to the wave equation all the terms in the Taylor series are present in the right-hand side of equations (12)-(13). A consequence is that the time evolution is exact whenever the forcing is zero (or a polynomial of degree ) and each cell includes the domain of dependence of the solution at a dual grid point at time , that is, when

### 2.3 Imposing boundary conditions

Physical boundary conditions are enforced at the half time level, i.e. when the solution on the dual grid is to be advanced back to the primal grid. As there are many degrees of freedom that are located on the boundary the physical boundary condition must be augmented by the differential equation to generate more equations so that the degrees of freedom can be uniquely determined. The basic principle, often referred to as compatibility boundary conditions (see e.g. henshaw:1730 ), is to take tangential derivatives of the PDE and the boundary conditions so that a sufficient number of boundary conditions are obtained.

For example, assume we want to impose the boundary condition

(14)

Then, as is a boundary grid point, the Taylor polynomials (6)-(7) centered at should satisfy the boundary condition (14). In addition they should also satisfy the differential equation and derivatives of (14). We thus seek a polynomial outside the domain which, together with the polynomial just inside the boundary, forms a Hermite interpolant that satisfies the boundary and compatibility conditions.

Precisely, to find the polynomial describing at time we must determine unknowns that specify the polynomial at the boundary. First, this polynomial must interpolate the data describing the current approximation of at , this yields independent linear equations. Second, the first of the remaining independent linear equations can be obtained by requiring that the polynomial coincides with the boundary condition . The next equation is , and so forth.

Once the interpolant is determined on the boundary we evolve it as in the interior (see section 2.2).

###### Remark 2

We note that in the special case of a flat boundary and homogeneous Dirichlet or Neumann boundary conditions, enforcing the boundary conditions reduces to requiring that the polynomial on the boundary is odd or even, respectively, in the normal direction. The correct odd polynomial can then be obtained by constructing the polynomial outside the domain (often referred to as the ghost polynomial) by mirroring: the coefficients corresponding to even powers in the normal coordinate variable change sign while the coefficients corresponding to odd powers keep their sign.

Boundary conditions at interior overset grid boundaries are supplied by projection of the known solutions from other grids and will be discussed below.
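The mirror construction of Remark 2 can be sketched in a few lines. Here s is the normal coordinate with the flat boundary at s = 0, and the function name is illustrative: for homogeneous Dirichlet data the ghost polynomial is q(s) = -p(-s), for Neumann data q(s) = p(-s).

```python
import numpy as np
from numpy.polynomial import polynomial as P

def ghost_coeffs(p, bc):
    """Mirror the interior coefficients p (in the normal coordinate s) across
    a flat boundary at s = 0: Dirichlet flips even powers, Neumann flips odd
    powers."""
    p = np.asarray(p, float)
    signs = (-1.0) ** np.arange(len(p))
    return -signs * p if bc == "dirichlet" else signs * p

p = np.array([0.5, 1.0, -2.0, 0.25])   # interior coefficients in s
q_d = ghost_coeffs(p, "dirichlet")     # q_d(s) = -p(-s): odd joint extension
q_n = ghost_coeffs(p, "neumann")       # q_n(s) =  p(-s): even joint extension
```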

### 2.4 Higher dimensions

In higher dimensions the approximations to the displacement and the velocity take the form of centered tensor product Taylor polynomials. In two dimensions (plus time) the coefficients carry three indices, with the first two representing the powers in the two spatial directions and the third representing time. For the scalar wave equation

the recursion relations for computing the time derivatives are a straightforward generalization of the one dimensional case

(15)

(16)

As noted in secondHermite , using this recursion for all the time derivatives does not produce a method with an order independent CFL condition but one whose timestep size decreases slightly as the order increases. For optimally large timesteps it is necessary to use the special start up procedure

Here are the coefficients of the interpolating polynomial of degree in and degree in and are the coefficients of the interpolating polynomial of degree in and degree in . For the remaining coefficients we use (15) and (16) with . Further details of the two dimensional method can be found in secondHermite .

## 3 Energy based discontinuous Galerkin methods for the wave equation

Our spatial discontinuous Galerkin discretization is a direct application of the energy based formulation described for general second order wave equations in Upwind2 ; el_dg_dath ; fluid_solid_DG . Here, our energy based DG method starts from the energy of the scalar wave equation

where is the potential energy density and is the velocity, i.e. the time derivative of the displacement.

Now, the wave equation, written as a second order equation in space and first order in time takes the form

where is the variational derivative of the potential energy

For the continuous problem the change in energy is

where the last equality follows from integration by parts together with the wave equation.

A variational formulation that mimics the above energy identity can be obtained if the equation is tested with the variational derivative of the potential energy. Let be an element and and be the spaces of tensor product polynomials of degrees and . Then, the variational formulation on that element is:

###### Problem 1

Find , such that for all ,

(17)

(18)

Let and denote the jump and average of a quantity at the interface between two elements, then, choosing the numerical fluxes as

yields a contribution from each element face to the change of the discrete energy, guaranteeing that

Physical boundary conditions are enforced through the numerical fluxes, see Upwind2 for details.

Note that the above energy estimate follows directly from the formulation (17) - (18) but as the energy is invariant to constants equation (17) must be supplemented by the equation

Our implementation uses quadrilaterals and approximates the solution by tensor product Chebyshev polynomials on the reference element . That is, on each quadrilateral we have approximations of the form

We choose (so called upwind or Sommerfeld fluxes) which result in methods where is observed to be order accurate in space Upwind2 .

### 3.1 Taylor series time-stepping

In order to match the order of accuracy in space and time for the DG method we employ Taylor series time-stepping. Assuming that all the degrees of freedom have been assembled into a vector

we can write the semi-discrete method as with being the matrix representing the spatial discretization. If we know the discrete solution at time we can advance it to the next time step by a simple truncated Taylor series formula. As we use dissipative fluxes this timestepping method is stable as long as the number of stages in the Taylor series is greater than the order of accuracy in space and the timestep is small enough.
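Taylor series time-stepping for a linear semi-discrete system w' = A w can be sketched as follows. The matrix A below is a small illustrative oscillator, not the DG discretization matrix.

```python
import numpy as np

# One Taylor-series timestep: w^{n+1} = sum_{k=0}^{K} (dt*A)^k / k! w^n,
# accumulated without ever forming matrix powers explicitly.
def taylor_step(A, w, dt, K):
    term, w_new = w.copy(), w.copy()
    for k in range(1, K + 1):
        term = (dt / k) * (A @ term)   # term now holds (dt*A)^k / k! w
        w_new = w_new + term
    return w_new

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # w = (u, u_t) for u'' = -u
dt, K = 0.1, 8
w1 = taylor_step(A, np.array([1.0, 0.0]), dt, K)  # exact: (cos dt, -sin dt)
```

With K stages the local truncation error is O(dt^(K+1)), so matching K to the spatial order keeps space and time accuracy balanced.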

## 4 Overset grid methods

In this section we explain how we use the two discretization techniques described above on overset grids to approximate solutions to the scalar wave equation.

The idea behind the overset grid methods is to cover the bulk of the domain with a Cartesian grid, where efficient methods can be employed, and to discretize the geometry with narrow body-fitted grids. In Figure 1 we display two overset grids, a blue Cartesian grid, which we denote , and a red curvilinear grid, which we denote , that are used to discretize a geometry consisting of a circular hole cut out from a square region. Note that the grids overlap, hence the name overset grids. Also, note that the annular grid cuts out a part of the Cartesian grid. This cut of the Cartesian grid creates an internal, non-physical boundary in the blue grid.

Here physical boundary conditions are enforced on the red grid at the black boundary which defines the inner circle and on the outermost boundary on the blue grid.

In order to use the Hermite or DG methods on the grids we will need to supply boundary conditions at the interior boundaries. In the example in Figure 1 this means that we would have to specify the solution on the outer part of the annular grid and on the staircase boundary (marked with filled black circles) that has been cut out from the Cartesian grid.

In most methods that use overset grids, in particular those using finite differences, the communication of the solution on the interior boundaries is done by interpolation, see e.g. chess1990 . For the methods we use here we have found that the stability properties are greatly enhanced if we instead transfer volumetric data (the numerical solution) in the elements / gridpoints near the internal boundaries by projection rather than by interpolation. In fact, when we use volume data the resulting methods are stable without adding artificial dissipation; when we use interpolation they are not.

As mentioned above, in a Hermite method, we can think of the degrees of freedom as either being nodal data, consisting of function and derivative values, or as coefficients in a Taylor polynomial. Thus, when transferring data to a grid where a Hermite method is used (like the example in the left subfigure of Figure 1) we must determine a tensor product polynomial centered around a gridpoint local to that grid (the points we would center around are indicated by black points in Figure 1). Below we will explain in detail how we determine this polynomial.

For elements with an internal boundary face (denoted by thick red lines in Figure 1) we could in principle transfer the solution by specifying a numerical flux on that face, however we have found that this approach results in weakly unstable methods. Instead we transfer volumetric data to each element that has an internal boundary face, we give details below. Given the timestep constraints of DG methods we must march the DG solution using much smaller timesteps than those used for the Hermite method. This necessitates the evaluation of the Hermite data not only at the beginning of a Hermite timestep but at many intermediate times.

### 4.1 Determining internal boundary data for the Hermite solver

We first consider the problem of determining internal boundary data required by the Hermite method. An example of how to compute solution data at the gridpoints on the boundary of the Cartesian grid (filled black circles) is depicted in Figure 1.

In general, the tensor product polynomial centered around is found by a two step procedure. First we project into a local basis spanned by Legendre polynomials and perform a numerically stable and fast change of basis into the monomial basis. Then we truncate the monomial expansion to the degree required by the Hermite method.

To carry out the projection we introduce a local tensor product Gauss-Legendre-Lobatto (GLL) grid centered around . These points are marked as filled blue circles in the left subfigure of Figure 2. The number of grid points in the local grids is determined by the order of the projection. To maintain the order of the method, the order of the projection should be at least the same as the order of the spatial discretization; thus it is sufficient to have points in each direction. The GLL quadrature nodes are defined on the reference element that maps to a cell defined by the dual gridpoints closest to .

Let be the numerical solution on the red grid. In the first step of the communication we compute the coefficients of a polynomial approximating by projecting onto the space of tensor product Legendre polynomials , that is

(19)

Here denotes the inner product on and is the norm induced by the inner product. Note that the expression (19) is particularly simple since the Legendre polynomials are orthogonal on the domain of integration. To do this we evaluate at the underlying blue quadrature points in the left subfigure of Figure 2.

Once the polynomial (19) has been found we perform a change of basis into the local monomial basis used by the Hermite method. Such a change of basis can be done with the fast Vandermonde techniques of Björck and Pereyra, see e.g. bp70 ; Dahlquist:2008fu . At this stage the polynomial is of total degree so the final step is to truncate it to total degree or , depending on whether we are considering the displacement or the velocity. With the and degrees of freedom determined everywhere on a Hermite grid we may evolve the solution as described in section 2.
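The project-then-change-basis step can be sketched in one dimension as below. This is an illustrative simplification: Gauss-Legendre nodes are used instead of the GLL nodes described above, and numpy's `leg2poly` stands in for the fast Vandermonde solve.

```python
import numpy as np
from numpy.polynomial import legendre as L

def project_to_monomials(f, q, m_trunc):
    """Project samples of f on [-1,1] onto Legendre polynomials of degree < q
    by Gauss-Legendre quadrature, convert to the monomial basis, and truncate
    to degree m_trunc (the degree the receiving Hermite method needs)."""
    x, w = L.leggauss(q)                      # quadrature nodes and weights
    # c_k = (2k+1)/2 * sum_i w_i f(x_i) P_k(x_i), exact for polynomial f
    c_leg = (2 * np.arange(q) + 1) / 2 * (L.legvander(x, q - 1).T @ (w * f(x)))
    return L.leg2poly(c_leg)[: m_trunc + 1]   # monomial coefficients, truncated

# f(x) = 3x^2 is reproduced exactly: monomial coefficients [0, 0, 3]
c = project_to_monomials(lambda x: 3 * x**2, 4, 2)
```

In the actual transfer the samples f(x_i) come from evaluating the donor-grid solution at the quadrature points, and the projection is done in tensor product form.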

### 4.2 Determining data for DG elements with internal boundary faces

We now consider the problem of determining the data required by the DG method. Here we show how to obtain the data at a single DG element with at least one internal boundary face. As the timesteps of the DG method are significantly smaller than for the Hermite method we must repeat the transfer of data many times. We must also explicitly transfer time derivative data in order to use a Taylor series timestepping approach.

The tensor product polynomials in our implementation of the DG method are products of Chebyshev polynomials expressed on the reference element . Precisely we seek

To determine such polynomials we perform a projection of the solution , i.e. the solution on the Cartesian grid,

but in this case the weighted inner product is

where the Chebyshev polynomials are orthogonal. To carry out this projection we use local tensor product Chebyshev quadrature nodes, in each dimension, as shown in the right subfigure of Figure 2.
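A one dimensional sketch of this weighted projection, using the Chebyshev weight 1/sqrt(1 - x^2) under which the T_k are orthogonal (the function name is illustrative):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_project(f, q):
    """Project f onto T_0,...,T_{q-1} in the weighted inner product
    <f,g> = int f g / sqrt(1-x^2) dx, via Gauss-Chebyshev quadrature."""
    x, w = C.chebgauss(q)          # nodes cos((2i+1)pi/(2q)), weights pi/q
    ck = (2 / np.pi) * (C.chebvander(x, q - 1).T @ (w * f(x)))
    ck[0] /= 2                     # ||T_0||^2 = pi while ||T_k||^2 = pi/2
    return ck

# f(x) = 2x^2 - 1 = T_2(x), so the coefficients come out as [0, 0, 1, 0]
ck = cheb_project(lambda x: 2 * x**2 - 1, 4)
```

As with the Legendre projection for the Hermite grid, the actual transfer applies this in tensor product form with samples of the Cartesian-grid solution.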

The local time levels used by the DG solver during the th Hermite timestep are defined to be

where and are the timesteps taken on grids (Cartesian) and (curvilinear) respectively. For simplicity the starting and final local time levels coincide with consecutive timesteps on the Hermite grid, and

To transfer the solution values and the time derivatives needed at each of the quadrature points and at each we carry out the following “start up” procedure at . For each of the quadrature points we re-center the closest Hermite interpolants and compute the time derivatives precisely by the recursion relations described in section 2. We note that this is an inexpensive computation as the interpolants have already been found as a step in the evolution of the Hermite solution; the only added operation is the re-centering.

## 5 Numerical experiments

The hybrid H–DG method is empirically stable and accurate, as we demonstrate here with numerical experiments. To test the stability of the method in one dimension we define the amplification matrix and compute its spectral radius. To test the stability in two dimensions, where the amplification matrix would take too long to compute, we perform long time simulations and estimate the error growth for multiple refinements. Convergence tests in one and two dimensions are done on domains where the exact solution is known. In the second half of this section we apply the method to a domain with a complex curvilinear boundary in an experiment on wave scattering off a smooth pentagonal object. Finally, we use the method as the forward solver in an inverse problem of locating underground cavities.

### 5.1 Numerical stability test

Unlike the Hermite and DG methods, stability of the hybrid H–DG method cannot be shown analytically. As an alternative, stability can be investigated numerically by examining the spectrum of the amplification matrix associated with the method. A similar stability analysis was done for a finite difference scheme for the wave equation in Strik . To construct the amplification matrix we apply the method to initial data consisting of the unit vectors. The vector returned after one timestep is then placed as a column in a square matrix. The spectral radius of the amplification matrix must be smaller than one for the method to be stable.
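The column-by-column construction can be sketched as follows. Here `step` is a stand-in linear map (a three-stage Taylor step for a small oscillator), not the actual hybrid H–DG timestep, and the function names are our own.

```python
import numpy as np

def amplification_matrix(step, n):
    """Build the amplification matrix of a linear one-step scheme by pushing
    each unit vector through one timestep and recording it as a column."""
    S = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        S[:, j] = step(e)
    return S

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
dt = 0.1
def step(w):                       # 3-stage Taylor step for w' = A w
    Aw = A @ w
    A2w = A @ Aw
    return w + dt * Aw + dt**2 / 2 * A2w + dt**3 / 6 * (A @ A2w)

S = amplification_matrix(step, 2)
rho = max(abs(np.linalg.eigvals(S)))   # spectral radius; < 1 here
```

For a scheme with N degrees of freedom this costs N timesteps, which is why the approach is only practical in one dimension.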

We consider the wave equation (1)-(2) on the unit interval with homogeneous Dirichlet and Neumann boundary conditions at and respectively. We introduce two uniform Cartesian grids which overlap inside a small interval close to one of the boundaries. Precisely, the grids are

The Hermite method is used on grid and the DG method on grid . The grids thus overlap inside the interval . Here the ratio of the overlap size to the discretization width is . This ratio is fixed for all values of and . We also fix so that the amount of work done on grid is constant per timestep for all refinements. Fixing the ratio and makes the efficiency of the overall method asymptotically determined by the efficiency of the Hermite method.

Let be a vector holding the degrees of freedom of both methods at the th timestep, then we may express the complete timestep evolution as where incorporates timestepping and projection. The matrix can be computed column by column via

(20)

where is the th unit vector. The equation

(21)

is equivalent to the timesteps of the hybrid H–DG method. Taking the -norm of both sides of equation (21) and applying the Cauchy–Schwarz inequality we get

where is the spectral radius of . If we conclude

and the solution will remain bounded.

We consider the case and take the parameters to be

Other parameters are for the DG method and , the number of timesteps done by the DG method during one step of the Hermite method. The parameters and are set so the methods used have the same order of accuracy as the approximation of for the Hermite method

To get an optimal , we take the largest possible timestep for the DG method, so that

and

is an integer. Equivalently, if the Hermite method CFL number is set, we get

Following the column-by-column construction (20) described at the start of this section we compute the amplification matrix for the current setting. The spectrum of is shown in Figure 3 for . Displayed results are for the cases and . The CFL numbers set for the Hermite method are and . The absolute values of the eigenvalues do not exceed . We note that if interpolation is used, some eigenvalues of the amplification matrix move outside of the unit circle. Such unstable modes could possibly be stabilized by numerical dissipation / hyperviscosity but we do not pursue such stabilization here. Instead we observe that when projection is used all eigenvalues are inside the unit circle and the method is stable. Although we only display the results for one problem here, the same results were obtained for other grid sizes, various ratios of overlap size to grid spacing, and different CFL numbers for the Hermite method. We stress that it is possible to make the method unstable if we take the CFL number close to one and take larger than 3; thus we only claim that the methods with orders of accuracy up to are stable.

### 5.2 Convergence to an exact solution

Using the same grid setup and boundary conditions as in the example above we test the method for the wave equation (1)-(2) with initial conditions

(22)

(23)

A solution to this problem is the standing wave

The errors for the solution on the grids are

(24)

for the Hermite grid and

(25)

for the DG grid. The maximum error for the total method is

In Figure 4 we display computed maximum errors as functions of time for the method with (i.e. the order of accuracy is 7). In the left subfigure the CFL number for the Hermite method is set to be and in the right subfigure the CFL number is set to be 0.75. For all Hermite grid sizes, the error growth is linear in time (dashed lines display a least squares fit of a linear function), indicating that the solution is stable for long time computations.

In the left subfigure of Figure 5 the numerical solution and the absolute error are shown for the th order accurate method at time . As can be seen in the lower left subfigure in Figure 5 the error is rather smooth across the overlap indicating that the projection is highly accurate.

To the right in Figure 5 the error at the final time is shown as a function of . The dashed lines show least squares fits with polynomial functions of of orders and respectively. The results indicate that the orders of accuracy are as expected. The parameters (, , , etc.) are the same as in the previous example.
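The observed order behind such a fit is simply the least squares slope of log(error) against log(h). The sketch below uses manufactured error values that mimic a 7th order method; in practice the values come from the convergence study.

```python
import numpy as np

def observed_order(h, err):
    """Least squares slope of log(err) vs log(h): the observed convergence
    order over a sequence of grid spacings."""
    return np.polyfit(np.log(h), np.log(err), 1)[0]

h = np.array([0.1, 0.05, 0.025, 0.0125])
err = 3.0 * h**7          # manufactured errors of a 7th order method
p = observed_order(h, err)   # close to 7
```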

### 5.3 Analytical solution in a disk. Rates of convergence

Consider the solution of (1)-(2) with on the unit disk, , with homogeneous Dirichlet boundary conditions. Then the analytical solution can be expressed in polar coordinates as a superposition of modes

(26)

Here is the Bessel function of the first kind of order and is the th zero of . In the following experiment we set , . The initial condition is displayed in the left subfigure of Figure 6.

We set up the overset grids as displayed in Figure 7. Grid is a Cartesian grid discretizing a square domain with grid points in each direction and grid spacing . Grid is a curvilinear grid discretizing a thin annulus with radial grid spacing . For all refinements grid has 7 elements in the radial direction; thus the number of elements (or equivalently the number of DOFs of the DG method) grows linearly with the reciprocal of the discretization size . In contrast, the number of grid points in the Cartesian grid, where the Hermite method is used, grows quadratically with .

To measure the error we evaluate the solution on a finer grid, oversampled with 20 grid points inside each Hermite cell and DG element. The convergence is displayed in the right subfigure of Figure 6. The errors at time as functions of for are displayed as solid lines. The dashed lines show the polynomials in of order . We use in the computations. As can be seen the expected orders of accuracy (3,5 and 7) are observed.

To test the performance of the method we evolve the solution over one time period and measure the CPU time, see the left subfigure of Figure 8. The red curve, displaying the error of the 3rd order accurate method, only reaches an error of in about 1000 seconds, while the 5th and 7th order accurate methods, using the same compute time, yield errors on the order of and respectively. Clearly the higher order methods are more efficient.

To test the stability of the method we evolve the solution until time which is roughly periods of the solution. We set and test methods with orders of accuracy and . The error growth appears to be linear in time as indicated by dashed lines in the right subfigure of Figure 8.

### 5.4 Wave scattering off a smooth pentagon

In this experiment we study scattering off a smooth pentagon in free space. In addition to demonstrating the use of non-reflecting boundary conditions, the experiment demonstrates the hybrid Hermite–DG method for a solution that is propagated over many wavelengths. The geometry of the pentagon is defined by the smooth closed parametric curve:

(27)

(28)

The pentagon is placed in a square domain discretized by a Cartesian grid with grid spacing , . The curvilinear grid has 10 elements in the radial direction and the outer boundary is a circle of radius . The overlap width is at most 5 DG elements.

On the boundary of the body we set Dirichlet data

(29)

The exterior boundary condition is modeled by truncating the domain with a perfectly matched layer governed by the equations (see appelo2012fourth for the derivation)

(30)

where the auxiliary variables satisfy the equations

(31)

The damping profiles , are taken as
