The Euler equations of fluid dynamics are, in their conservative version,
$$\frac{\partial \rho}{\partial t} + \operatorname{div}(\rho\mathbf{u}) = 0, \qquad
\frac{\partial (\rho\mathbf{u})}{\partial t} + \operatorname{div}\big(\rho\mathbf{u}\otimes\mathbf{u} + p\,\mathrm{Id}\big) = 0, \qquad
\frac{\partial E}{\partial t} + \operatorname{div}\big(\mathbf{u}(E + p)\big) = 0. \qquad (1)$$
As usual, $\rho$ is the density, $\mathbf{u}$ is the velocity, $E = \rho e + \tfrac{1}{2}\rho\|\mathbf{u}\|^2$ is the total energy, $e$ is the internal energy, and $p$ is the pressure. The system is closed by an equation of state, $p = p(\rho, e)$. The simplest is that of a calorically perfect gas, where $p = (\gamma - 1)\rho e$, the ratio of specific heats $\gamma$ being a constant.
When the solution is smooth, the system (1) can be equivalently written in nonconservative form as:
$$\frac{\partial \rho}{\partial t} + \operatorname{div}(\rho\mathbf{u}) = 0, \qquad
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} + \frac{\nabla p}{\rho} = 0, \qquad
\frac{\partial e}{\partial t} + \mathbf{u}\cdot\nabla e + \frac{p}{\rho}\operatorname{div}\mathbf{u} = 0. \qquad (2)$$
When the solution is not smooth, the form (2) is meaningless because the differential operators are no longer defined. This is why the form (1) is preferred, in particular in its weak form, see . This fact has a very strong implication for the design of numerical schemes applied to (1): the Lax-Wendroff theorem guarantees that a suitable numerical approximation should be written in terms of fluxes.
However, the form (2) is better suited for engineering purposes, since one has a direct access to the velocity and the internal energy. Hence a rather natural question is to ask how to discretise the Euler equations directly from (2), and still have convergence to the correct weak solutions, at least formally.
One obvious way to achieve this is to start from a conservative approximation of (1) and, by simple algebraic manipulations which amount to multiplying the numerical scheme by approximations of the Jacobian of the change of variables, obtain a scheme working directly on the primitive variables. This "new" scheme is equivalent to the original one.
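To fix ideas, here is a sketch of that manipulation; the notation ($\mathbf{U}$ for the conservative variables, $\mathbf{V}$ for the primitive ones, $\Phi^K_\sigma$ for the residual sent by element $K$ to the degree of freedom $\sigma$, $|C_\sigma|$ for the dual cell measure) is ours and not tied to a specific scheme:

```latex
\mathbf{U}^{n+1}_\sigma
  = \mathbf{U}^{n}_\sigma
    - \frac{\Delta t}{|C_\sigma|} \sum_{K \ni \sigma} \Phi^K_\sigma(\mathbf{U}^n)
\quad \Longrightarrow \quad
\mathbf{V}^{n+1}_\sigma
  \approx \mathbf{V}^{n}_\sigma
    - \frac{\Delta t}{|C_\sigma|}\,
      \frac{\partial \mathbf{V}}{\partial \mathbf{U}}(\mathbf{U}^n_\sigma)
      \sum_{K \ni \sigma} \Phi^K_\sigma(\mathbf{U}^n)
```

up to $O(\Delta t^2)$ terms coming from the Taylor expansion of $\mathbf{V}(\mathbf{U})$.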
This is not exactly the question we want to address here. We are interested in designing locally conservative approximations of (2) for which the thermodynamic variables are approximated discontinuously, element by element, while the velocity is globally continuous. This can be seen as an Eulerian version of the Lagrangian schemes designed in [6, 11] and the related works by these authors. A similar question has been addressed by Herbin and co-authors, see for example [15, 19, 17] in the finite volume context. In these references, the authors describe a class of numerical schemes where the thermodynamic variables and the velocity are piecewise constant but logically described on a staggered mesh. They show convergence towards the weak solution. The scheme can be partially implicit, so that in the low Mach number limit the scheme "degenerates" to a MAC-type scheme, see . The schemes are second order in time and space.
In this paper, we describe a different technique which allows one to reach an arbitrary order of accuracy, both in time and space. Before describing this method, we briefly review the class of residual distribution schemes that will be our main tool, thanks to some reinterpretation. Then we describe the scheme and explain why it is locally conservative. Last, we show a variant of the Lax-Wendroff theorem that is adapted to our setting.
2 A high-order nonconservative approach
We have in mind a numerical approximation where the variables are piecewise polynomial on simplices. We also assume that the velocity is globally continuous, in contrast to Discontinuous Galerkin (DG)-like approximations. This constraint is motivated by the fact that we want to extend the technique of , where a Petrov-Galerkin formulation is used, inspired by  and the references therein. If nothing special is done, we need to invert a mass matrix. This can be cumbersome, and even impossible if we want to extend the techniques of , because the equivalent of the mass matrix changes at every time step. This is why we use a particular time stepping that we describe now. It relies on a series of forward-Euler-type discretisations. This is indeed the essential point: if one prefers to forget the globally continuous methods, rely on a DG-like approach, and use a Strong Stability Preserving (SSP) Runge-Kutta approach, one can extend our correction technique and build schemes that converge to a weak solution of the problem, starting from (2).
2.1 Time stepping approach
Consider a hyperbolic system in the form
$$\frac{\partial u}{\partial t} + \operatorname{div} \mathbf{f}(u) = 0.$$
For the time discretization, we want to get a high order accurate approximation. To do so, we will use the Deferred Correction (DeC) approach. The aim of DeC schemes is to avoid implicit methods without losing the high order of accuracy. The high order method that we want to approximate will be denoted by $\mathcal{L}^2$. To use the DeC procedure, we also need another method, denoted by $\mathcal{L}^1$, which is easy and fast to solve, but only has a low order of accuracy. The DeC algorithm provides an iterative procedure that approximates the solution of the $\mathcal{L}^2$ scheme in the following way:
$$\mathcal{L}^1(u^{(k+1)}) = \mathcal{L}^1(u^{(k)}) - \mathcal{L}^2(u^{(k)}), \qquad k = 0, \ldots, K-1,$$
where $K$ is the number of iterations that we compute. We need as many iterations as the order of accuracy that we want to reach. We know from :
Lemma 2.1. Let $\mathcal{L}^1$ and $\mathcal{L}^2$ be two operators defined on a normed vector space, which depend on the discretization scale $\Delta$, such that
$\mathcal{L}^1$ is coercive with respect to a norm, i.e., there exists $\alpha_1 > 0$, independent of $\Delta$, such that for any $u$, $v$ we have
$$\alpha_1 \| u - v \| \le \| \mathcal{L}^1(u) - \mathcal{L}^1(v) \|;$$
$\mathcal{L}^1 - \mathcal{L}^2$ is Lipschitz with constant $\alpha_2 \Delta$ uniformly with respect to $\Delta$, i.e., for any $u$, $v$,
$$\| (\mathcal{L}^1 - \mathcal{L}^2)(u) - (\mathcal{L}^1 - \mathcal{L}^2)(v) \| \le \alpha_2 \Delta \| u - v \|.$$
We also assume that there exists a unique $u^*_\Delta$ such that $\mathcal{L}^2(u^*_\Delta) = 0$. Then, if $\eta = \alpha_2 \Delta / \alpha_1 < 1$, the DeC is converging to $u^*_\Delta$, and after $k$ iterations the error $\| u^{(k)} - u^*_\Delta \|$ is smaller than $\eta^k \| u^{(0)} - u^*_\Delta \|$.
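To illustrate the DeC mechanism of Lemma 2.1 in the simplest possible setting, here is a sketch for a scalar ODE $u' = f(u)$ with three equispaced sub-time nodes: $\mathcal{L}^1$ is a lumped forward-Euler-type prediction and $\mathcal{L}^2$ the high order residual built from exact integration of the Lagrange interpolant in time. All names and the choice of test problem are ours:

```python
import numpy as np

def dec_step(f, u0, dt, n_iter=3):
    """One Deferred Correction step on [t, t+dt] with 3 equispaced
    sub-time nodes (t, t+dt/2, t+dt).  theta[m, j] is the weight of
    f(u^j) in the exact integral of the degree-2 interpolant from
    the first node to node m."""
    theta = np.array([[0.0,  0.0, 0.0],
                      [5/24, 1/3, -1/24],   # node t + dt/2
                      [1/6,  2/3,  1/6]])   # node t + dt (Simpson rule)
    u = np.array([u0, u0, u0], dtype=float)  # initial guess: copy of u^n
    for _ in range(n_iter):
        fu = np.array([f(v) for v in u])
        # DeC update with forward-Euler L^1:
        # u^{m,(k+1)} = u^0 + dt * sum_j theta[m, j] * f(u^{j,(k)})
        u = u0 + dt * (theta @ fu)
    return u[-1]

# usage: u' = -u on [0, 1], exact solution exp(-t)
u, t, dt = 1.0, 0.0, 0.05
while t < 1.0 - 1e-12:
    u = dec_step(lambda v: -v, u, dt)
    t += dt
print(abs(u - np.exp(-1.0)))  # small: 3 iterations give (at least) order 3
```

Each DeC iteration gains one order of accuracy, which is why the number of iterations is tied to the target order.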
for the conservative form and we define
for the non–conservative form. We will describe the procedure in two steps. First, we consider the case of a scalar problem, and then we look at (1) or (2). The reason is that in (2) not all the variables play the same role, contrary to (1), and it is easier to start with a system with one variable. For now, we set aside the question of local conservation.
2.2 Scalar case
2.2.1 Trial space
We consider a triangulation of the computational domain made of non-overlapping simplices that are generically denoted by $K$. We assume the triangulation to be conformal, and define
where, as usual, $\mathbb{P}^k$ is the set of polynomials of degree less than or equal to $k$. We also define
In each element $K$, a polynomial is defined by a set of degrees of freedom, for example the Lagrange points. We denote by $\sigma$ a generic degree of freedom. Here, for reasons that will become clearer later, we expand the polynomials in terms of Bézier polynomials.
One dimensional elements: In the element $K = [x_i, x_{i+1}]$, we consider the barycentric coordinates $\lambda_i$ and $\lambda_{i+1}$, defined by $x = \lambda_i x_i + \lambda_{i+1} x_{i+1}$ with $\lambda_i + \lambda_{i+1} = 1$. Outside of $K$, the Bézier form vanishes. We describe the two families of Bézier forms we will need:
Linear: The degrees of freedom are the vertices, so $B_i = \lambda_i$ and $B_{i+1} = \lambda_{i+1}$.
Quadratic: the degrees of freedom are identified with the vertices $x_i$, $x_{i+1}$ and the mid-point $x_{i+1/2}$, with $B_i = \lambda_i^2$, $B_{i+1/2} = 2\lambda_i\lambda_{i+1}$ and $B_{i+1} = \lambda_{i+1}^2$.
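A minimal sketch of the quadratic one dimensional Bézier forms (variable names ours):

```python
def bezier_quadratic_1d(x, xi, xip1):
    """Quadratic Bezier (Bernstein) basis on the element [xi, xip1].
    Returns (B_i, B_{i+1/2}, B_{i+1}) evaluated at x."""
    lam1 = (xip1 - x) / (xip1 - xi)   # barycentric coordinate of xi
    lam2 = (x - xi) / (xip1 - xi)     # barycentric coordinate of xip1
    return lam1**2, 2.0 * lam1 * lam2, lam2**2

# The three functions are non-negative and sum to 1 (partition of unity).
B = bezier_quadratic_1d(0.25, 0.0, 1.0)
print(B, sum(B))  # (0.5625, 0.375, 0.0625) 1.0
```

The non-negativity of these basis functions is the property exploited later for mass lumping.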
Multidimensional elements: we only describe the 2D case, with triangles, but similar constructions are obtained for quadrangles or 3D simplices. A triangle is made of 3 vertices denoted by $a_1$, $a_2$ and $a_3$. The barycentric coordinates with respect to the vertices are denoted by $\lambda_1$, $\lambda_2$ and $\lambda_3$.
Linear: the degrees of freedom are the vertices $a_1$, $a_2$ and $a_3$, and $B_i = \lambda_i$.
Quadratic: the degrees of freedom are the 3 vertices $a_1$, $a_2$ and $a_3$, and the midpoints of the edges, $a_{ij} = \frac{a_i + a_j}{2}$ for $i < j$.
The Bézier polynomials are $B_i = \lambda_i^2$ for the vertices and $B_{ij} = 2\lambda_i\lambda_j$ for the edge midpoints.
Then, considering the continuous or the discontinuous approximation space, for any $u_h$ we expand it in the basis
$$u_h = \sum_\sigma u_\sigma B_\sigma,$$
where $B_\sigma$ is any of the linear, quadratic (or higher order) functions defined above. If $u_h$ is globally continuous, then we have the expansion above with one coefficient per global degree of freedom, and if $u_h$ may be discontinuous across element faces, we can expand it as
$$u_h = \sum_K \sum_{\sigma \in K} u_\sigma^K\, B_\sigma|_K.$$
With some abuse of language, we will use the second notation throughout this paper, depending on whether we see $u_h$ per element or more globally.
2.2.2 Test space
As we said earlier, we rely on a Petrov-Galerkin approach. This means that the test functions will belong to a finite dimensional space that can also be described by the degrees of freedom: we can identify functions that are indexed by the degrees of freedom $\sigma$ and span this space; we denote them by $\psi_\sigma$. For example, in the SUPG method, we define, in each $K$ and for $x \in K$,
$$\psi_\sigma(x) = B_\sigma(x) + h_K\, \big(\nabla_u \mathbf{f}(u_h)\cdot\nabla B_\sigma(x)\big)\,\tau.$$
Here $h_K$ is the diameter of $K$ and $\tau$ is a positive matrix. In [5, 13, 3, 21, 2], examples are given where $\tau$ depends on the solution, in order to get stability. In all the examples we are considering, the support of $\psi_\sigma$ is that of $B_\sigma$.
2.2.3 Description of the time discretisation
The first step is to define $\mathcal{L}^2$. Let us integrate the system (4):
Using the locality of the basis functions, we have
so we will define the restriction of $\mathcal{L}^2$ on each degree of freedom. Then we introduce sub-time steps between $t_n$ and $t_{n+1}$: $t_n = t_{n,0} < t_{n,1} < \cdots < t_{n,M} = t_{n+1}$, and for any degree of freedom $\sigma$ the corresponding approximations $u_\sigma^{n,m}$, for $m = 0, \ldots, M$. Of course, $u_\sigma^{n,0}$ is the approximation at time $t_n$. This being done, we will consider Lagrange interpolation in time and will integrate exactly in time. More details can be found in . We describe here two examples as illustrations.
In the first one, we take $M = 1$, so that there are no intermediate points, and we simply get the Crank-Nicolson scheme:
If we look for third order in time, we take $M = 2$, consider the mid-point sub-time step, and get
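As a sketch (the notation $\Phi_\sigma$ for the space residual is ours), the exact time integration of the Lagrange interpolant gives, for the two examples just described:

```latex
% Crank--Nicolson (M = 1, trapezoidal rule in time):
\int_{\Omega} \psi_\sigma \,\big(u_h^{n+1} - u_h^{n}\big)\, d\mathbf{x}
  + \frac{\Delta t}{2}\Big( \Phi_\sigma(u_h^{n}) + \Phi_\sigma(u_h^{n+1}) \Big) = 0,
% third order in time (M = 2, Simpson rule in time):
\int_{\Omega} \psi_\sigma \,\big(u_h^{n+1} - u_h^{n}\big)\, d\mathbf{x}
  + \frac{\Delta t}{6}\Big( \Phi_\sigma(u_h^{n}) + 4\,\Phi_\sigma(u_h^{n+1/2})
  + \Phi_\sigma(u_h^{n+1}) \Big) = 0.
```

The weights are those of the trapezoidal and Simpson quadratures, obtained by integrating the Lagrange interpolant in time exactly.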
The definition of this term is somewhat formal; we replace it by some approximation, to be defined later. The only constraint is that we have the relation
The precise definition of this term depends on whether the approximation is continuous or discontinuous across the element faces.
If it is continuous, the integration is done via a quadrature formula of order at least equal to the polynomial degree.
If it may be discontinuous across the faces of $K$, we only consider a consistent approximation. If the system is in conservation form, then we set
where $\hat{\mathbf{f}}$ is a numerical flux. If instead the system is not in conservation form, for example
when one of the matrices involved is not a Jacobian, we simply set
which is evaluated by means of a quadrature formula.
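As an example of such a consistent numerical flux, here is a sketch of the Rusanov (local Lax-Friedrichs) flux for a scalar conservation law; it is one common choice, but the text does not mandate it:

```python
def rusanov_flux(f, df, uL, uR):
    """Rusanov (local Lax-Friedrichs) numerical flux: a consistent
    flux \\hat f(uL, uR) with \\hat f(u, u) = f(u)."""
    alpha = max(abs(df(uL)), abs(df(uR)))  # local bound on the wave speed
    return 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL)

# consistency check with the Burgers flux f(u) = u^2 / 2
f = lambda u: 0.5 * u * u
df = lambda u: u
print(rusanov_flux(f, df, 2.0, 2.0))  # 2.0 = f(2.0): consistent
```

Consistency of the flux is exactly the requirement placed on the face term above.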
We set, for (7), and define $\mathcal{L}^2$
as a vector indexed by the degrees of freedom, whose components are:
and for (8), we define $\mathcal{L}^2$ by
So again we have a "vector" of "components".
The operator $\mathcal{L}^1$ is defined by replacing the temporal terms by
and then by "lumping" the mass matrix: this is why we use the Bézier approximation, since we are sure that the lumped mass is non zero, because it is the integral of a non-negative function, $\int_K B_\sigma \, d\mathbf{x} > 0$.
We set (it does not depend on ) and define, for (7),
and similarly, for (8),
The generalisation to formally higher order temporal schemes is obvious. Last, one can show that the two operators satisfy the requirements of Lemma 2.1.
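The positivity of the lumped Bézier mass can be checked directly. The sketch below (our own illustration) integrates the quadratic Bernstein basis on $[0, 1]$ with a Gauss-Legendre rule; every entry is strictly positive, which is not guaranteed for, e.g., quadratic Lagrange bases on triangles, whose vertex entries vanish:

```python
import numpy as np

# Lumped mass entries m_sigma = \int_0^1 B_sigma(x) dx for the
# quadratic Bernstein basis on [0, 1].
x, w = np.polynomial.legendre.leggauss(4)   # exact for degree <= 7
x = 0.5 * (x + 1.0)
w = 0.5 * w                                  # map nodes/weights to [0, 1]
B = np.array([(1 - x)**2, 2 * x * (1 - x), x**2])
m = B @ w
print(m)  # all entries equal 1/3: strictly positive
```

Positivity of these entries is what makes the lumped operator $\mathcal{L}^1$ invertible at negligible cost.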
The DeC iteration is then defined by
After simplifications, this gives the following iteration: knowing $u^{(k)}$, we get $u^{(k+1)}$ as follows: for all degrees of freedom $\sigma$, we have for (7),
and for (8),
For now, this is all we need.
2.3 Case of system (2).
We describe the residuals, and develop the method first for the case of the second order in time, since it is simpler. The generalisation to higher order is straightforward and done in the paragraph that follows.
We assume that the computational domain is covered by non-overlapping simplices. The velocity field belongs to a kinematic space of finite dimension; it has a basis indexed by the set of kinematic degrees of freedom. The thermodynamic quantities, such as the internal energy, the density and the pressure, belong to a thermodynamic space; this space is also finite dimensional, and its basis is indexed by the set of thermodynamical degrees of freedom. The kinematic space is formed by the quadratic Bernstein elements, while the thermodynamic space has a piecewise-linear basis. The velocity field is approximated by
where the basis functions are the linear/quadratic Bézier polynomials, and the density, the pressure and the internal energy are given by
where the basis functions are the piecewise constant/linear per element functions. Note that the degrees of freedom for the velocity are assumed to be globally continuous, while the thermodynamical ones are discontinuous across the boundaries of elements.
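To make the mixed approximation concrete, the toy computation below (a hypothetical two-triangle mesh of the unit square; all names are ours) counts the globally continuous quadratic kinematic degrees of freedom against the discontinuous linear thermodynamic ones:

```python
# Hypothetical mesh: unit square split into two triangles.
vertices = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangles = [(0, 1, 2), (0, 2, 3)]

# Shared edges are counted once, because the velocity is continuous.
edges = set()
for (a, b, c) in triangles:
    for e in ((a, b), (b, c), (a, c)):
        edges.add(tuple(sorted(e)))

n_kinematic = len(vertices) + len(edges)  # P2 continuous: vertices + edge midpoints
n_thermo = 3 * len(triangles)             # P1 broken: 3 per triangle, no sharing
print(n_kinematic, n_thermo)  # 9 6
```

The thermodynamic count grows with the number of elements, not with the mesh connectivity, precisely because those degrees of freedom are duplicated across element boundaries.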
We can rewrite the Euler equations (2) in the following form
The only thing to do is to describe how the method of the previous section adapts to this case, and this amounts to describing the general structure of the residuals, that is, how (9) is written. Since the velocity is globally continuous, we write
Since these are simply polynomials on each element, the integration is carried out by numerical quadrature.
For the density, the equation is in conservation form and we use a numerical flux:
Last, for the internal energy, we write
and again we use a quadrature formula.
In the numerical section, we will describe the residuals that we use.
3 A discussion on conservation
3.1 A set of sufficient conditions to achieve convergence to a weak solution
Again, to simplify the notation, we focus on the second order case, but the extension to the more general case is straightforward. In the appendix 6, we show a Lax-Wendroff theorem for this type of discretisation. What we do here is show how to go from the system in non-conservative form to the one in conservation form.
There is nothing to do for the density because it is already in conservation form and the standard proof applies. There is no need to repeat it here, since the proof we give for the momentum and the total energy is, modulo some complications, essentially the same.
Let us first look at the momentum. Considering a test function $\varphi$, we define its value at time $t_n$ at the centroid of each element, and consider the following approximation of $\varphi$, that we still denote by $\varphi$:
Then we consider
Then, we can write, introducing and ,
Hence, (14) is rewritten as:
where we have set for simplicity
For the velocity, we have:
where, for the scheme (7),
and for the density, we have
We note that the sum reduces to one term, hence
and is the element such that
Using these relations in (15), we get
We see that
Then, can be written as:
so that, setting
we get the master equation:
Then let us look at the total energy. The first thing to do is to remark (with similar notations as before) that
which, combined with
To simplify, we will set
Using these relations, we see that