1 Introduction
In this work, we introduce a shock capturing procedure for spectral element (SE) approximations of scalar hyperbolic conservation laws
(1) $\partial_t u + \partial_x f(u) = 0$
with appropriate initial and boundary conditions. Here, $u = u(t,x)$ is the unknown function and $f$ is a flux function. We assume that the initial data $u_0$ is a function of bounded total variation in $\Omega$. Equation (1) can equivalently be written in its quasilinear form
(2) $\partial_t u + a(u)\, \partial_x u = 0$
with $a(u) = f'(u)$. We further recall that solutions of (1) may develop spontaneous jump discontinuities (shock waves and contact discontinuities) even when the initial data are smooth
lax1973hyperbolic . This important discovery was first made by Riemann riemann1860fortpflanzung . Hence, the more general class of weak solutions is admitted, where (1) is satisfied in the sense of distribution theory. Since there are many possible weak solutions, however, equation (1) is augmented with an additional entropy condition, requiring
(3) $\partial_t U(u) + \partial_x F(u) \le 0$
to hold. Here, $U$ is a convex entropy function and $F$ is a corresponding entropy flux satisfying $F' = U' f'$. A strict inequality in (3) reflects the presence of physically reasonable shock waves.
Next, we consider spectral/hp element approximations of (1); also see karniadakis2013spectral and references therein. SE approximations were introduced in 1984 by Patera patera1984spectral and can be interpreted as a formulation of finite elements (FE) that uses piecewise polynomials of high degree. SE methods thus combine the advantages of (Galerkin) spectral methods with those of FEs by a simple application of the spectral method to subdomains (elements) of $\Omega$. The elements are mapped to a reference element and all computations are carried out there. In one space dimension, this reference element is typically given by $[-1,1]$. Thus, the SE approximation is based on a polynomial approximation of degree at most $N$, expressed w. r. t. some nodal or modal basis of $\mathbb{P}_N([-1,1])$. This polynomial approximation, in particular, is used to extrapolate the solution to the element boundaries (if these values are not already given as coefficients in a nodal basis). Subsequently, common numerical fluxes are computed at each element boundary toro2013riemann . Numerical fluxes allow information to cross element boundaries and are often essential for the stability of the method. They are incorporated into the approximation by some correction procedure at the boundaries. In discontinuous Galerkin (DG) methods, this is done by applying integration by parts to a weak form of (1), see hesthaven2007nodal . In flux reconstruction (FR) methods, which are based on the strong form of (1), this is done by applying correction functions, see huynh2007flux . Finally, $\partial_x f(u)$ is approximated using exact differentiation of the polynomial approximation.
Convergence of the resulting SE method is obtained either by increasing the degree of the polynomials ($p$-refinement) or by increasing the number of elements ($h$-refinement). SE methods are known to provide highly accurate approximations in smooth regions. But also from a computational point of view, the main difficulty of solving hyperbolic conservation laws is that discontinuities may arise: Resulting from the famous Gibbs–Wilbraham phenomenon hewitt1979gibbs , the polynomial approximation is polluted by spurious oscillations, which might produce unphysical numerical solutions and finally cause the computation to blow up. To overcome these problems, many researchers have extended shock capturing procedures from finite difference (FD) as well as finite volume (FV) methods to (high-order) SE schemes. The idea behind many of these procedures is to add dissipation to the numerical solution. This idea dates back to the pioneering work vonneumann1950method of von Neumann and Richtmyer during the Manhattan project in the 1940s at Los Alamos National Laboratory, where they constructed stable FD schemes for the equations of hydrodynamics by including artificial viscosity (AV) terms. Since then, AV methods have attracted the interest of many researchers and have also been investigated for SE approximations in a large number of works persson2006sub ; klockner2011viscous ; glaubitz2018application ; ranocha2018stability ; glaubitz2019smooth . Despite providing a robust and accurate way to capture (shock) discontinuities, AV terms are not trivial to include in SE approximations. Typically, they are nonlinear and consist of higher (second- and fourth-order) derivatives. Another drawback arises from the fact that AV terms can introduce additional harsh time step restrictions when not constructed with care, and thus decrease the efficiency of the numerical method hesthaven2007nodal ; glaubitz2019smooth .
Finally, we mention methods based on order reduction baumann1999discontinuous ; burbeau2001problem , mesh adaptation dervieux2003theoretical , weighted essentially non-oscillatory (WENO) concepts shu1988efficient ; shu1989efficient , and regularisation applied to high-order approximations of the jump function glaubitz2019high . Yet, a number of issues still remains unresolved. Often, the extension of these methods to multiple dimensions is an open question, or they are regarded as too computationally expensive.
Here, we propose an approach that combines the good properties of SE approximations in smooth regions with the total variation diminishing and shape preserving properties of Bernstein approximations for resolving shocks without spurious oscillations. In this new strategy, the approximation of the solution in each element may vary from the usual (high-order) interpolation polynomial to a Bernstein reconstruction of the solution. Further, by employing a discontinuity sensor, here based on comparing polynomial annihilation operators archibald2005polynomial of increasing orders as proposed in glaubitz2019high , the order of the approximations is reduced to one only in elements where the solution is not smooth. Mesh adaptation, for instance, is hence not mandatory, and (shock) discontinuities can be captured without modifying the number of degrees of freedom, the mesh topology, or even the method. Moreover, a slight modification of the proposed Bernstein reconstruction also allows the preservation of bounds, such as positivity of pressure and water height.
By now, Bernstein polynomials as polynomial bases have been successfully applied to solvers for partial differential equations (PDEs) in a number of works. Especially in kirby2011fast ; ainsworth2011bernstein , efficient FE operators have been constructed by using Bernstein polynomials as shape functions. Also, in beisiegel2015quasi , Bernstein polynomials have been proposed as basis functions in a third-order quasi-nodal DG method for solving flooding and drying problems with shallow water equations. Finally, lohmann2017flux ; anderson2017high have investigated the potential of imposing discrete maximum principles for polynomials expressed in a basis of Bernstein polynomials. Yet, while Bernstein polynomials as basis functions have been studied in a few works by now, the associated approximation procedure resulting from the Bernstein operator is still of very limited use. In contrast to the above mentioned works, we not only utilise Bernstein polynomials as basis functions, but further propose to use the associated Bernstein operator as a building block for new shock capturing procedures.
The rest of this work is organised as follows. In §2, we review bases of Bernstein polynomials and their associated Bernstein approximation operator. This operator will later be used to obtain 'smoother' reconstructions of polynomial approximations near discontinuities and provides certain structure-preserving and approximation properties. Building on these properties, §3 introduces the novel Bernstein procedure, which replaces polluted interpolation polynomials by convex combinations of the original (polluted) approximation and its Bernstein reconstruction. This process is steered by a polynomial annihilation sensor, which is discussed in §3.4. §4 investigates some analytical properties of the proposed Bernstein procedure, such as its effect on entropy, total variation, and monotone (shock) profiles of the numerical solution. We stress that the Bernstein reconstruction is proven to be total variation diminishing (non-increasing). Finally, §5 provides numerical demonstrations for a series of different scalar test problems.
We close this work with concluding thoughts in §6.
2 Bernstein Polynomials and the Bernstein Operator
In this section, we introduce Bernstein polynomials as well as some of their more important properties. On an interval $[a,b]$, the Bernstein basis polynomials of degree $n$ are defined as
(4) $b_{k,n}(x) = \binom{n}{k} \left( \frac{x-a}{b-a} \right)^{k} \left( \frac{b-x}{b-a} \right)^{n-k}, \quad k = 0, 1, \dots, n,$
and form a basis of $\mathbb{P}_n$, the space of polynomials of degree at most $n$. Thus, every polynomial of degree at most $n$ can be written as a linear combination of Bernstein basis polynomials,
(5) $p(x) = \sum_{k=0}^{n} c_k\, b_{k,n}(x),$
called a Bernstein polynomial or polynomial in Bernstein form of degree $n$. The coefficients $c_k$ are referred to as Bernstein coefficients or Bézier coefficients. We further define the linear Bernstein operator $B_n$ of degree $n$ for a function $f$ on $[a,b]$ by
(6) $B_n f = \sum_{k=0}^{n} f(x_k)\, b_{k,n}$ with $x_k = a + \frac{k}{n}(b-a)$.
$B_n$ maps a function $f$ to a Bernstein polynomial of degree $n$ with Bernstein coefficients
(7) $c_k = f\!\left( a + \frac{k}{n}(b-a) \right), \quad k = 0, 1, \dots, n.$
Without loss of generality, we can restrict ourselves to the intervals $[0,1]$ and $[-1,1]$. For the sake of simplicity, the interval $[0,1]$ will subsequently be used for theoretical investigations. The interval $[-1,1]$, on the other hand, is typically used as a reference element in SE approximations. Thus, the proposed Bernstein procedure will be explained for this case.
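For concreteness, the Bernstein operator (6) on $[0,1]$ can be sketched in a few lines of Python; the function names are our own and not taken from any library.

```python
from math import comb

def bernstein_basis(k, n, x):
    """Bernstein basis polynomial b_{k,n} on [0, 1], cf. (4)."""
    return comb(n, k) * x**k * (1 - x)**(n - k)

def bernstein_operator(f, n, x):
    """Evaluate (B_n f)(x) = sum_{k=0}^n f(k/n) * b_{k,n}(x), cf. (6), (7)."""
    return sum(f(k / n) * bernstein_basis(k, n, x) for k in range(n + 1))
```

Note that $B_n$ reproduces linear functions exactly, while, e.g., $B_n(x^2) = x^2 + x(1-x)/n$ only agrees with $x^2$ up to a first-order term, foreshadowing the approximation properties discussed below.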
2.1 Structure-Preserving Properties
Let us consider Bernstein polynomials on $[0,1]$. We start by noting that the Bernstein basis polynomials form a partition of unity, i. e.
(8) $\sum_{k=0}^{n} b_{k,n}(x) = 1$
for all $x \in [0,1]$. This is a direct consequence of the binomial theorem. Thus, we can immediately note
Lemma 1.
Let $p$ be a Bernstein polynomial of degree $n$ with Bernstein coefficients $c_0, \dots, c_n$. Then
(9) $\min_{0 \le k \le n} c_k \le p(x) \le \max_{0 \le k \le n} c_k$
holds for all $x \in [0,1]$.
Proof.
Since the Bernstein basis polynomials are nonnegative on $[0,1]$ and form a partition of unity (8), $p(x) = \sum_{k=0}^{n} c_k b_{k,n}(x)$ is a convex combination of the coefficients $c_0, \dots, c_n$ for every $x \in [0,1]$. The bounds (9) follow immediately. ∎
In particular, Lemma 1 ensures that the Bernstein operator (6) preserves the bounds of the underlying function $f$. In fact, Lemma 1 not only ensures preservation of bounds by the Bernstein procedure, but also allows us to enforce such bounds. Moreover, the Bernstein operator preserves the boundary values of $f$, i. e.
(11) $B_n f(0) = f(0) \quad \text{and} \quad B_n f(1) = f(1)$
hold. This makes the later proposed Bernstein procedure a reasonable shock-capturing method not just in discontinuous FE approximations, such as DG methods, but also in continuous FE approximations, where the numerical solution is required to be continuous across element interfaces. Further, we recall the formula
(12) $\frac{\mathrm{d}^j}{\mathrm{d}x^j} B_n f = \frac{n!}{(n-j)!} \sum_{k=0}^{n-j} \Delta^j f\!\left(\tfrac{k}{n}\right) b_{k,n-j}$
with forward difference operator
(13) $\Delta^j f\!\left(\tfrac{k}{n}\right) = \Delta^{j-1} f\!\left(\tfrac{k+1}{n}\right) - \Delta^{j-1} f\!\left(\tfrac{k}{n}\right), \quad \Delta^0 f = f,$
for derivatives of (6); see (lorentz2012bernstein, Chapter 1.4). In particular, we have
(14) $\frac{\mathrm{d}}{\mathrm{d}x} B_n f = n \sum_{k=0}^{n-1} \Delta f\!\left(\tfrac{k}{n}\right) b_{k,n-1}$
for the first derivative of $B_n f$. This formula will be important later in order to show that the Bernstein procedure is able to preserve monotone (shock) profiles.
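The forward-difference formula (14) translates directly into code; the following minimal Python sketch (our own naming) evaluates the first derivative of $B_n f$ on $[0,1]$:

```python
from math import comb

def bernstein_derivative(f, n, x):
    """First derivative of B_n f on [0, 1], cf. (14):
    (B_n f)'(x) = n * sum_{k=0}^{n-1} (f((k+1)/n) - f(k/n)) * b_{k,n-1}(x)."""
    total = 0.0
    for k in range(n):
        delta = f((k + 1) / n) - f(k / n)   # forward difference, cf. (13)
        total += delta * comb(n - 1, k) * x**k * (1 - x)**(n - 1 - k)
    return n * total
```

For a monotonically increasing $f$, all forward differences are nonnegative, so the derivative is nonnegative everywhere; this is exactly the mechanism behind the preservation of monotone profiles.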
2.2 Approximation Properties
Bernstein polynomials were first introduced by Bernstein bernstein1912demonstration in a constructive proof of the famous Weierstrass theorem, which states that every continuous function on a compact interval can be approximated to arbitrary accuracy by polynomials weierstrass1885analytische . Hence, the sequence of Bernstein polynomials $(B_n f)_{n \in \mathbb{N}}$ converges uniformly to the continuous function $f$. Assuming that $f$ is bounded in $[0,1]$ and that the second derivative $f''(x)$ exists at a point $x \in [0,1]$, we have
(15) $\lim_{n \to \infty} n \left[ B_n f(x) - f(x) \right] = \frac{x(1-x)}{2}\, f''(x);$
see (lorentz2012bernstein, Chapter 1.6.1). In particular, at points where the second derivative exists, the error of the Bernstein polynomial therefore is of first order, i. e.
(16) $|B_n f(x) - f(x)| \le \frac{C}{n}$
for a constant $C > 0$. However, we should remember that solutions of scalar hyperbolic conservation laws – which we intend to approximate – might contain discontinuities. The structure of these solutions has been determined by Oleinik oleinik1957discontinuous ; oleinik1964cauchy , Lax lax1957hyperbolic , Dafermos dafermos1977generalized , Schaeffer schaeffer1973regularity , Tadmor and Tassa tadmor1993piecewise , and many more. Most notably for our purpose, Tadmor and Tassa tadmor1993piecewise showed for scalar convex conservation laws that if the initial speed has a finite number of decreasing inflection points, then this number bounds the number of future shock discontinuities. Thus, in most cases the solution consists of a finite number of smooth pieces, each of which is as smooth as the initial data. Note that it is this type of regularity which is often assumed – sometimes implicitly – in the numerical treatment of hyperbolic conservation laws. Hence, it appears more reasonable to investigate convergence of the sequence of Bernstein polynomials for only piecewise smooth functions $f$. Here, we call a function $f$ piecewise $C^2$ on $[0,1]$ if there is a finite set of points $0 = \xi_0 < \xi_1 < \dots < \xi_J = 1$ such that
$f \in C^2\big( (\xi_{j-1}, \xi_j) \big)$ for $j = 1, \dots, J$, and if the one-sided limits
(17) $\lim_{x \to \xi_j^\pm} f(x)$
exist for all $j$. In this case $f$ is still integrable and the first order convergence of the Bernstein polynomials carries over in an $L^1$ sense.
Theorem 2.
Let $f$ be piecewise $C^2$ on $[0,1]$ and let $(B_n f)_{n \in \mathbb{N}}$ be the sequence of corresponding Bernstein polynomials. Then there is a constant $C > 0$ such that
(18) $\| B_n f - f \|_{L^1([0,1])} \le \frac{C}{n}$
holds for all $n \in \mathbb{N}$.
Proof.
Let $\xi_1 < \dots < \xi_{J-1}$ be the points where $f$ is not $C^2$. When further denoting $\xi_0 = 0$ and $\xi_J = 1$, we have
(19) 
and, since $f$ is piecewise $C^2$, there is a generic constant $C > 0$ such that
(20) 
holds for $x \in (\xi_{j-1}, \xi_j)$. It should be noted, however, that $f''$ might be unbounded on the open interval $(\xi_{j-1}, \xi_j)$. Therefore, let us choose $\delta > 0$ and consider the closed intervals $[\xi_{j-1} + \delta, \xi_j - \delta]$. On these intervals $f''$ is bounded and (20) becomes
(21) 
for . Hence, we have
(22)  
For the two remaining integrals, we remember that $f$ is bounded and therefore all $B_n f$ are bounded. This yields
(23) 
for all and . As a consequence, (22) reduces to
(24) 
and letting $\delta \to 0$ results in
(25) 
Finally, we have
(26)  
which yields the assertion. ∎
One might object that this is an unacceptably low order of convergence. We reply to this argument with a few selected counterarguments.
Remark 3.

For numerical solutions of hyperbolic conservation laws, it is almost universally accepted that near shocks the solution can be at most first order accurate persson2006sub . Thus, the accuracy of the numerical solution will not decrease noticeably by reconstructing it as a Bernstein polynomial.

In fact, high-order methods often do not even provide first-order accuracy but constant (or even decreasing) accuracy. This is due to the Gibbs–Wilbraham phenomenon hewitt1979gibbs ; richards1991gibbs ; gottlieb1997gibbs for (polynomial) higher-order approximations of discontinuous functions. Yet, for instance, Gzyl and Palacios gzyl2003approximation have shown the absence of the Gibbs–Wilbraham phenomenon for Bernstein polynomials. It is still an open problem which properties of an approximation cause the Gibbs–Wilbraham phenomenon, but it is our conjecture that the order of accuracy for smooth functions is the deciding factor.

While Bernstein polynomials converge uniformly for every continuous function, polynomial interpolation, for instance, in general only converges if $f$ is at least (Dini–)Lipschitz continuous bernstein1912best . Even worse, Faber faber1914interpolatorische showed in 1914 that no polynomial interpolation scheme – no matter how the points are distributed – will converge for the whole class of continuous functions. For approximations by orthogonal projection, such as Fourier series, a similar result follows from the divergence of the corresponding operator norms berman1958impossibility ; petras1990minimal .
We conclude from Remark 3 that Bernstein polynomials, while not appearing attractive for approximating sufficiently smooth functions, provide some advantages for the approximation of merely continuous or even discontinuous functions.
3 The Bernstein Procedure
In this section, we introduce a novel subcell shock capturing procedure using Bernstein polynomials. The procedure is described for the reference element $[-1,1]$ and is based on replacing polluted high-order approximations by their Bernstein reconstructions.
3.1 Bernstein Reconstruction
We start by introducing the (modified) Bernstein reconstruction, which is obtained by applying the Bernstein operator (6) to a polynomial approximation. Let $u$ be an approximate solution at time $t$ with coefficients $\hat{u}_0, \dots, \hat{u}_N$ w. r. t. a (nodal or modal) basis $\{\varphi_k\}_{k=0}^{N}$ in the reference element $[-1,1]$. Then, the original approximation, for instance obtained by interpolation or (pseudo) projection, is modified in the following way:

Compute the Bernstein reconstruction $B_N u$ of $u$ by
(27) $B_N u = \sum_{k=0}^{N} u(x_k)\, b_{k,N}$ with equidistant points $x_k = -1 + \frac{2k}{N}$.
Write $B_N u$ w. r. t. the basis $\{\varphi_k\}_{k=0}^{N}$ by a change of basis with transformation matrix $T$, i. e.
(28) $\hat{u}^{B} = T \hat{c}$, where $\hat{c}$
is a vector containing the Bernstein coefficients
$u(x_0), \dots, u(x_N)$.
In a nutshell, the idea is to replace the coefficients in a troubled cell by the coefficients of the Bernstein reconstruction. Table 1 lists the condition numbers trefethen1997numerical of the transformation matrix $T$ for the Lagrange and Legendre bases. Thereby, the nodal basis of Lagrange polynomials is considered w. r. t. the Gauss–Lobatto points in $[-1,1]$. For all reasonable polynomial degrees and both bases, we observe the condition number to be fairly small.
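The flavour of this computation can be reproduced with a short NumPy sketch. For simplicity we use Chebyshev–Gauss–Lobatto nodes as a stand-in for the Legendre–Gauss–Lobatto points, so the resulting condition numbers will differ slightly from those in Table 1; the function name is ours.

```python
import numpy as np
from math import comb

def bernstein_vandermonde(n, nodes):
    """Matrix V with V[i, k] = b_{k,n}(x_i) on the reference element [-1, 1]:
    nodal values of a polynomial are V @ (its Bernstein coefficients)."""
    t = (np.asarray(nodes) + 1.0) / 2.0          # map [-1, 1] -> [0, 1]
    return np.array([[comb(n, k) * s**k * (1 - s)**(n - k)
                      for k in range(n + 1)] for s in t])

n = 6
nodes = -np.cos(np.pi * np.arange(n + 1) / n)    # Chebyshev-Gauss-Lobatto
V = bernstein_vandermonde(n, nodes)
cond = np.linalg.cond(V)                         # modest for moderate n
```

Each row of $V$ sums to one by the partition of unity (8), which is a convenient sanity check on the implementation.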
Concerning the obtained Bernstein reconstruction, Lemma 1 not only ensures preservation of bounds, but also allows us to enforce such bounds. Let us introduce the modified Bernstein reconstruction w. r. t. a lower bound $m$ and an upper bound $M$:
(29) $B_N^{m,M} f = \sum_{k=0}^{N} \min\!\big\{ \max\{ f(x_k), m \}, M \big\}\, b_{k,N}, \quad x_k = -1 + \tfrac{2k}{N}.$
To the best of the authors' knowledge, the modified Bernstein operator (29) has not been defined anywhere else yet. The most beautiful property of the modified Bernstein operator is the preservation – by default – of the lower and upper bounds $m$ and $M$, respectively. This follows directly from Lemma 1 and is summarised in
Theorem 4.
Let $B_N^{m,M}$ be the modified Bernstein operator w. r. t. the lower bound $m$ and the upper bound $M$ given by (29), and let $f$ be some function. Then $B_N^{m,M} f$ fulfils
(30) $m \le B_N^{m,M} f(x) \le M$ for all $x \in [-1,1]$.
We close this subsection by noting that, for instance, positivity (think of density in the Euler equations) can easily be ensured by setting $m = \epsilon$, where $\epsilon$ is a suitable value larger than machine precision.
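Under our reading of (29) – the Bernstein coefficients are simply clipped to $[m, M]$ before summation – the modified operator can be sketched on $[0,1]$ as follows (names are ours):

```python
from math import comb

def modified_bernstein(f, n, x, m, M):
    """Modified Bernstein reconstruction: clip each coefficient f(k/n) to
    [m, M]; by Lemma 1 the result then satisfies m <= value <= M."""
    clipped = [min(max(f(k / n), m), M) for k in range(n + 1)]
    return sum(c * comb(n, k) * x**k * (1 - x)**(n - k)
               for k, c in enumerate(clipped))
```

Positivity, for example, would correspond to a small positive lower bound `m` and `M = float('inf')`; coefficients already inside the bounds are left untouched, so the operator coincides with the plain Bernstein reconstruction whenever no bound is violated.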
3.2 Proposed Procedure
Finally, we propose a procedure for replacing the original polynomial approximation by its (modified) Bernstein reconstruction. Let $\Omega_i$ be a troubled element with approximate solution $u$ and let the coefficients of $u$ w. r. t. a (nodal or modal) basis be given by $\hat{u}_0, \dots, \hat{u}_N$. The Bernstein procedure consists of two steps:

Compute the (modified) Bernstein reconstruction $B_N u$ of $u$.

Build an 'appropriate' convex combination of the original approximation $u$ and its Bernstein reconstruction $B_N u$, i. e.
(31) $u_\vartheta = (1 - \vartheta)\, u + \vartheta\, B_N u$
with $\vartheta \in [0,1]$.
W. r. t. the original basis $\{\varphi_k\}_{k=0}^{N}$, and therefore utilising the transformation (28), the Bernstein reconstruction can be written as
(32) $B_N u = \sum_{k=0}^{N} \hat{u}_k^{B}\, \varphi_k$.
Hence, the coefficients of $u_\vartheta$ are given by
(33) $\hat{u}_k^{\vartheta} = (1 - \vartheta)\, \hat{u}_k + \vartheta\, \hat{u}_k^{B}$.
This procedure can be incorporated easily into an already existing solver. Note that we have $u_\vartheta = u$ and $u_\vartheta = B_N u$ in the extreme cases $\vartheta = 0$ and $\vartheta = 1$, respectively. Thus, the blended approximation (linearly) varies between the original (and potentially oscillating or bound violating) approximation $u$ and the more robust (modified) Bernstein reconstruction $B_N u$. This is illustrated in Figure 1 for the signum function, different parameters $\vartheta$, and fixed $N$.
Obviously, the order of the approximation is reduced to one in elements where $\vartheta = 1$. Yet, in elements with $\vartheta = 0$, which constitute the large majority of the domain, high accuracy is retained. Subsequently, we demonstrate how the parameter $\vartheta$ can be adapted to the regularity of the underlying solution.
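The two steps can be sketched for a nodal basis as follows. The blending convention ($\vartheta = 0$ keeps the original approximation, $\vartheta = 1$ gives the full Bernstein reconstruction) follows the description above, while the helper names are ours:

```python
import numpy as np
from math import comb

def bernstein_blend(u_nodes, nodes, theta):
    """Blend a nodal polynomial on [-1, 1] with its Bernstein reconstruction:
    u_theta = (1 - theta) * u + theta * B_N u, evaluated at the nodes."""
    N = len(nodes) - 1
    # Step 1: the Bernstein coefficients are the values of u at the
    # equidistant points x_k = -1 + 2k/N; obtain them by interpolation.
    interp = np.polynomial.polynomial.Polynomial.fit(nodes, u_nodes, N)
    coeffs = interp(-1.0 + 2.0 * np.arange(N + 1) / N)
    # Step 2: evaluate the Bernstein reconstruction at the original nodes.
    t = (np.asarray(nodes) + 1.0) / 2.0            # map [-1, 1] -> [0, 1]
    B = np.array([[comb(N, k) * s**k * (1 - s)**(N - k)
                   for k in range(N + 1)] for s in t])
    return (1.0 - theta) * np.asarray(u_nodes) + theta * (B @ coeffs)
```

Since the Bernstein operator reproduces linear functions exactly, linear nodal data are left unchanged even for $\vartheta = 1$, so no accuracy is lost on (locally) linear profiles.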
3.3 Selection of the Parameter $\vartheta$
The parameter $\vartheta$ in (31) is adapted to adequately capture different discontinuities and regions of smoothness in the solution. The value of $\vartheta$ adjusts in space and time to accurately capture strong variations in the solution. Hence, the proposed Bernstein procedure is able to calibrate the polynomial approximation to the regularity of the solution. It should be stressed that modification of the mesh topology, the number of degrees of freedom, node positions, or the type of SE method is utterly unnecessary. This makes the Bernstein procedure an efficient and easy to implement shock capturing method. As described above, the extreme values $\vartheta = 0$ and $\vartheta = 1$ yield the original approximation $u$ and the Bernstein reconstruction $B_N u$, respectively. For intermediate parameter values $\vartheta \in (0,1)$, the Bernstein reconstruction renders the convex combination more and more robust. Thus, $\vartheta$ allows us to adapt the amount of stabilisation introduced by the Bernstein reconstruction. Here, we choose $\vartheta$ to be a function of a discontinuity sensor as proposed in persson2006sub ; huerta2012simple and briefly revisited in the next subsection.
3.4 Discontinuity Sensor
In this work, we use a discontinuity sensor proposed in glaubitz2019high to detect troubled elements and to steer the parameter $\vartheta$. This sensor is an element-based function leading to a single scalar measure of the solution's smoothness. It is a nonlinear functional
(34) 
which depends on a sensing variable $v$. For systems, such as the Euler equations, the density or Mach number could be utilised as the sensing variable. Since we only consider scalar conservation laws in this work, $v$ is simply chosen as the conserved quantity $u$. The sensor was first proposed in glaubitz2019high and is based on comparing polynomial annihilation (PA) operators archibald2005polynomial of increasing orders. A PA operator of order $m$,
(35) $L_m v(x) = \frac{1}{q_m(x)} \sum_{x_j \in S} c_j(x)\, v(x_j),$
is designed as a high-order approximation of the jump function
(36) $[v](x) = v(x^+) - v(x^-)$
of the sensing variable $v$. Here, $S$ denotes a set of local grid points around $x$ and the so-called annihilation coefficients $c_j(x)$ are given by
(37) $\sum_{x_j \in S} c_j(x)\, p(x_j) = p^{(m)}(x),$
where $p$ runs through a basis of $\mathbb{P}_m$. An explicit formula for the annihilation coefficients is provided in archibald2005polynomial by
(38) $c_j(x) = \frac{m!}{\prod_{i \ne j} (x_j - x_i)}, \quad x_j \in S.$
The normalisation factor $q_m(x)$ is chosen as
(39) $q_m(x) = \sum_{x_j \in S_m^+} c_j(x)$, where $S_m^+ = \{ x_j \in S : x_j \ge x \}$,
and ensures convergence to the right jump strengths. It has been proved in archibald2005polynomial that
(40) $L_m v(x) = \begin{cases} [v](\xi) + \mathcal{O}\big(h(x)\big) & \text{if } \xi \in I(x), \\ \mathcal{O}\big(h(x)^{\min(m,k)}\big) & \text{if } v \in C^k\big(I(x)\big) \end{cases}$
holds, where $\xi$ denotes the location of a single jump discontinuity of $v$, $h(x)$ is the largest distance between two neighbouring grid points in $S$, and $I(x)$ is the smallest closed interval such that $S \subset I(x)$. We note that PA might be further enhanced, for instance, by applying a minmod limiter as described in archibald2005polynomial .
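A minimal implementation of the PA operator from the explicit formulas above (our own sketch, without the minmod enhancement):

```python
from math import factorial

def pa_jump(u_vals, pts, x):
    """Polynomial annihilation approximation of order m = len(pts) - 1 of
    the jump [u](x): annihilation coefficients from the explicit formula
    (38), normalisation (39) summing over the points to the right of x."""
    m = len(pts) - 1
    c = []
    for j, xj in enumerate(pts):
        denom = 1.0
        for i, xi in enumerate(pts):
            if i != j:
                denom *= xj - xi
        c.append(factorial(m) / denom)
    q = sum(cj for cj, xj in zip(c, pts) if xj >= x)
    return sum(cj * uj for cj, uj in zip(c, u_vals)) / q
```

By construction, the operator annihilates polynomials of degree below $m$ (zero response to locally smooth data) while recovering the jump height of a step exactly for piecewise constant data.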
Next, let the sensor value of order $m$ be given by
(41) $s_m = \max_i \big| L_m v(y_i) \big|,$
i. e. by the greatest absolute value of the PA operator of order $m$ at the midpoints $y_i$ of the collocation points. If $v$ has at least $m$ continuous derivatives, the PA operator converges with order $m$, and we therefore expect the sensor value (41) to decrease for increasing order $m$. In this case, a parameter value $\vartheta < 1$ is chosen, which means that the Bernstein procedure is not (fully) activated. In this work, we only compare the sensor values of orders $m = 2$ and $m = 4$, since it was endorsed in archibald2005polynomial to use the same number of local grid points on both sides of a point $x$. Of course, a variety of modifications is possible for the PA sensor. Now, the parameter value $\vartheta = 1$ is chosen only if
(42) $s_4 \ge s_2$
holds, i. e. if the sensor value does not decrease when going over from order $2$ to order $4$. Finally, we decide for the parameter $\vartheta$ to vary linearly between $0$ and $1$. This is realised by the parameter function
(43) 
where $\kappa$ is a problem-dependent ramp parameter. Figure 2 illustrates the parameter function w. r. t. the values of the PA sensor given by (42).
For the later numerical tests, we investigated various other parameter functions as well, some of which have been discussed in huerta2012simple . Yet, we obtained the best results with (43). It should be noted that the above revisited PA sensor is only recommended for high orders. In our numerical tests, we have observed some misidentifications for low orders. In such cases, other discontinuity sensors could be used instead. We mention the modal-decay based sensor of Persson and Peraire persson2006sub ; huerta2012simple and its refinements barter2010shock ; klockner2011viscous as well as the KXRCF sensor krivodonova2004shock ; qiu2005comparison of Krivodonova et al., which builds on a strong superconvergence phenomenon of the DG method at outflow boundaries. Future work will address a detailed comparison of different shock sensors.
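Since the exact form of the ramp in (43) is problem-dependent, the following is only one plausible realisation under our own assumptions: full activation whenever the sensor value does not decrease with increasing PA order, and a linear decay of width $\kappa$ below that threshold. The function name and the precise ramp shape are ours, not taken from the paper.

```python
def parameter_function(s_low, s_high, kappa):
    """One plausible ramp (an assumption, not the paper's exact (43)) for
    the blending parameter: 1 if the higher-order sensor value s_high does
    not fall below the lower-order value s_low, linear decay of width
    kappa below that threshold, clipped to [0, 1]."""
    return min(1.0, max(0.0, 1.0 + (s_high - s_low) / kappa))
```

With this choice the parameter degenerates to a sharp on/off switch as the ramp width `kappa` tends to zero, which matches the role of a tunable ramp parameter described above.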
Remark 5.
The above sensor is simple to implement, but this comes at a price, namely the introduction of the problem-dependent and tunable ramp parameter $\kappa$ in (43). In practice, we follow a strategy proposed by Guermond, Pasquetti, and Popov for their entropy viscosity method for nonlinear conservation laws guermond2011entropy . For a fixed polynomial degree $N$, the parameter $\kappa$ is tuned by testing the method on a coarse grid. For all problems presented later in §5, the tuning has been done quickly on a coarse mesh. Further, we have observed the Bernstein procedure to be robust w. r. t. the parameter $\kappa$. This will be demonstrated in §5.1.1 for the linear advection equation.
4 Entropy, Total Variation, and Monotone (Shock) Profiles
In this section, we investigate some analytical properties of the proposed Bernstein procedure, such as its effect on the entropy, total variation, and monotone (shock) profiles of the numerical solution. For the sake of simplicity, all subsequent investigations are carried out on the interval $[0,1]$.
4.1 Entropy Stability
Let $U$ be a convex entropy function. We show that the proposed Bernstein reconstruction only yields a change in the total amount of entropy which is consistent with the total amount of entropy of the original approximation $u$. This means that for increasing $n$, the total amount of entropy of the Bernstein reconstruction $B_n u$ converges to the total amount of entropy of the function $u$. Yet, even though the change of entropy is consistent, it does not always yield a decrease of the entropy. This is demonstrated by
Example 6.
Let $u$ be a function with Bernstein reconstruction $B_n u$. For the usual entropy $U(u) = \frac{u^2}{2}$, we have
(44)  
Thus, the total amount of entropy is increased by applying the Bernstein procedure.
In fact, Example 6 is no exceptional case. We note that for every continuous and convex function $u$, the sequence of Bernstein reconstructions converges to $u$ from above, i. e.
(45) $B_n u(x) \ge B_{n+1} u(x) \ge u(x)$
holds for all $x \in [0,1]$ and $n \in \mathbb{N}$; see for instance (phillips2003interpolation, Theorems 7.1.8 and 7.1.9). Hence, for convex $u$ and increasing $U$, the entropy will be increased by the Bernstein procedure. Yet, we can further prove that the change of the total amount of entropy by the Bernstein reconstruction is consistent and vanishes for increasing $n$.
Theorem 7.
Let $U$ be a convex entropy function and let $u$ be piecewise $C^2$. Then,
(46) $\lim_{n \to \infty} \int_0^1 \Big[ U\big(B_n u(x)\big) - U\big(u(x)\big) \Big] \mathrm{d}x = 0$
holds.
Proof.
Since $U$ is continuous and $u$ is piecewise $C^2$, equation (15) yields
(47) 
where $\xi_1, \dots, \xi_{J-1}$ are the points where $u''$ does not exist. Hence, $B_n u$ converges almost everywhere to $u$. Further, $u$ and all $B_n u$ are uniformly bounded, let us say by $M > 0$, i. e.
(48) 
for all $n \in \mathbb{N}$. Since $U$ is continuous, $U$ is also bounded on $[-M, M]$ and there is a constant $K > 0$ such that
(49) 
Thus, we have
(50) 
and the sequence $\big( U(B_n u) \big)_{n \in \mathbb{N}}$ is uniformly bounded. Finally, Lebesgue's dominated convergence theorem (evans2018measure, Chapter 1.3) yields
(51) 
and therefore the assertion. ∎
Note that in the proposed Bernstein procedure, the Bernstein reconstruction is always computed for polynomials. Yet, Theorem 7 holds for general functions $u$ which are piecewise $C^2$.
4.2 Total Variation
A fundamental property of the (exact) solution $u$ of a scalar hyperbolic conservation law (1), assuming the initial data $u_0$ has bounded variation, is that lax1973hyperbolic ; toro2013riemann

no additional spatial local extrema occur.

the values of local minima do not decrease and the values of local maxima do not increase.
As a consequence, the total variation (TV) of the solution,
(52) $| u(t) |_{TV} = \sup_{\{x_i\}} \sum_{i} \big| u(t, x_{i+1}) - u(t, x_i) \big|,$
is a nonincreasing function in time, i. e.
(53) $| u(t_2) |_{TV} \le | u(t_1) |_{TV}$
for $t_2 \ge t_1$. In the presence of (shock) discontinuities, in fact, the TV typically decreases harten1984class . We are thus interested in designing shock capturing methods which mimic this behaviour of being TV diminishing (TVD). The proposed Bernstein procedure is now shown to fulfil the TVD property in the sense that the Bernstein reconstruction of a function $u$ has a reduced (or the same; in all numerical tests, we actually observed the TV to decrease) TV, i. e.
(54) $| B_n u |_{TV} \le | u |_{TV}$
holds for the Bernstein reconstruction (27).
Theorem 8.
Let $B_n u$ be the Bernstein reconstruction of a function $u$. Then, the TV of $B_n u$ is less than or equal to the TV of $u$, and the Bernstein procedure is TVD.
Proof.
Since $B_n u$ is a polynomial and by consulting (14), we have
(55)  
(56) 
where the integrals are given by
(57) 
Thus, inequality
(58) 
follows, and therefore the assertion. ∎
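The TVD property of Theorem 8 is easy to check numerically. A small sketch (our own) sampling the Bernstein reconstruction of a unit step on a fine grid:

```python
import numpy as np
from math import comb

def sampled_tv(vals):
    """Total variation of a function sampled on an ordered grid."""
    return float(np.abs(np.diff(vals)).sum())

n = 12
f = lambda t: 0.0 if t < 0.5 else 1.0       # unit step with TV = 1
x = np.linspace(0.0, 1.0, 401)
Bf = sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1))
tv = sampled_tv(Bf)                          # bounded by TV(f) = 1
```

Since the step is monotone and the Bernstein reconstruction preserves both monotonicity and the endpoint values, the sampled TV comes out as exactly the jump height, with no spurious oscillations added.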
4.3 Monotone (Shock) Profiles
Let $u$ be a piecewise smooth function with a single discontinuity at $\xi$, representing a shock profile in a troubled element. It is desirable for the polynomial approximation of such a function not to introduce new (artificial) local extrema. Yet, typical polynomial approximations, such as interpolation and (pseudo) projections, do so due to the Gibbs–Wilbraham phenomenon hewitt1979gibbs . The Bernstein reconstruction, however, has been proved not to feature such spurious oscillations. This can, for instance, be noted from
Theorem 9 (Theorem 1.9.1 in Lorentz lorentz2012bernstein ).
Suppose that $f$ is bounded in $[0,1]$ and let $M^+, M^-$ respectively denote the right and left upper limits and $m^+, m^-$ the right and left lower limits of $f$ at a point $\xi$. Then
(59) 
Note that Theorem 9 is fairly general. The absence of the Gibbs–Wilbraham phenomenon (without taking convergence into account) could already have been noted from Lemma 1. It should be stressed that we cannot just rule out spurious (Gibbs–Wilbraham) oscillations for the Bernstein reconstruction of discontinuous (shock) profiles, but we are further able to ensure the preservation of monotonicity. Let $f$ be a monotonically increasing (and possibly discontinuous) function on $[0,1]$. Then, the following lemma ensures that the Bernstein reconstruction $B_n f$ is monotonically increasing as well.
Lemma 10.
Let $f$ be monotonically increasing and let $B_n f$ denote the Bernstein reconstruction of $f$. Then, $B_n f$ is monotonically increasing as well.
Proof.
Note that for monotonically increasing $f$, we have
(60) $\Delta f\!\left(\tfrac{k}{n}\right) = f\!\left(\tfrac{k+1}{n}\right) - f\!\left(\tfrac{k}{n}\right) \ge 0, \quad k = 0, \dots, n-1.$
Thus, by consulting (14), the inequality
(61) $\frac{\mathrm{d}}{\mathrm{d}x} B_n f(x) = n \sum_{k=0}^{n-1} \Delta f\!\left(\tfrac{k}{n}\right) b_{k,n-1}(x) \ge 0$
follows, and as a result also the assertion. ∎
Note that the same result holds for monotonically decreasing (shock) profiles.
5 Numerical Tests
In this section, we test the Bernstein procedure incorporated into a nodal collocation-type discontinuous Galerkin finite element method (DGFEM) as described in hesthaven2007nodal . For time integration, we have used the explicit TVD/SSP-RK method of third order with three stages (SSPRK(3,3)) given in gottlieb1998total by Gottlieb and Shu: Let $u^n$ be the solution at time $t^n$; then the solution $u^{n+1}$ at time $t^{n+1} = t^n + \Delta t$ is obtained by
(62) $u^{(1)} = u^n + \Delta t\, L(u^n),$
$u^{(2)} = \tfrac{3}{4} u^n + \tfrac{1}{4} \big( u^{(1)} + \Delta t\, L(u^{(1)}) \big),$
$u^{n+1} = \tfrac{1}{3} u^n + \tfrac{2}{3} \big( u^{(2)} + \Delta t\, L(u^{(2)}) \big),$
where $L$ is a discretisation of the spatial operator of the DGFEM. For the time step size, we have used
(63) 
with a fixed CFL constant and where the maximum wave speed is calculated over all values attained by the numerical solution in all our numerical tests. Following cockburn1991runge in parts, four different problems are investigated for which the exact solutions can be calculated. Note that we assume periodic boundary conditions (BCs) in all numerical tests. This restriction is not necessary for the proposed Bernstein procedure and is only made in order to compactly provide reference solutions, which refer to the exact entropy solutions. Further, for every problem the local Lax–Friedrichs (Rusanov) flux
(64) $f^{\mathrm{num}}(u_-, u_+) = \frac{1}{2} \big( f(u_-) + f(u_+) \big) - \frac{\lambda}{2} \big( u_+ - u_- \big)$
has been applied, where $\lambda$ is a locally determined viscosity coefficient based on the maximum characteristic speed (leveque2002finite, Chapter 12.5).
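For a scalar state, the time integrator (62) and the flux (64) can be sketched as follows (the function names are ours; $\lambda$ is taken as the larger of the two local characteristic speeds):

```python
def ssprk33_step(L, u, dt):
    """One SSPRK(3,3) step of Gottlieb and Shu, cf. (62)."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

def rusanov_flux(um, up, f, df):
    """Local Lax-Friedrichs (Rusanov) flux, cf. (64): central flux plus
    viscosity lambda = max(|f'(u-)|, |f'(u+)|)."""
    lam = max(abs(df(um)), abs(df(up)))
    return 0.5 * (f(um) + f(up)) - 0.5 * lam * (up - um)
```

The flux is consistent, i. e. it reduces to $f(u)$ for equal left and right states, and the SSP structure of (62) is what allows the TVD property of the spatial reconstruction to carry over to the fully discrete scheme.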
5.1 Linear Advection Equation
Let us consider the linear advection equation
(65) $\partial_t u + \partial_x u = 0$
on $\Omega$ with periodic BCs and a discontinuous initial condition (IC)
(66) 
By the method of characteristics, the solution at time $t$ is given by
(67) $u(t,x) = u_0(x - t),$
where $u_0$ is extended periodically.
The linear advection equation is the simplest PDE that can feature discontinuous solutions. Thus, the (shock capturing) method can be observed in a well-understood setting, isolated from nonlinear effects. Yet, the linear advection equation provides a fairly challenging example. Similar to contact discontinuities in Euler's equations, discontinuities are not self-steepening, i. e. once such a discontinuity is smeared by the method, it cannot be recovered to its original sharp shape kuzmin2006flux .
5.1.1 Parameter Study
We start by investigating the ramp parameter $\kappa$, which goes into the PA sensor (43) and steers the Bernstein procedure. Figure 3 demonstrates the effect of the ramp parameter $\kappa$ on the results produced by the Bernstein procedure for the linear advection equation with discontinuous IC. The IC has thereby been evolved until the final time. We observe the Bernstein procedure to be fairly robust w. r. t. the ramp parameter $\kappa$. On coarse meshes, slight differences can be observed between different parameter values. Yet, these differences become less significant when the mesh is refined. As mentioned in Remark 5, the tuning of the ramp parameter $\kappa$ has been done quickly on a coarse mesh for all problems presented.
5.1.2 Comparison with Usual Filtering in DG Methods
Next, we investigate the approximation properties of a DGFEM enhanced with the Bernstein procedure for the linear advection equation with discontinuous IC. Figure 4 illustrates the results at time . Further, we compare our results with the DGFEM without any filtering and with a usual filtering technique, where the DG solution is replaced by its mean,
(68) 
in a troubled element . Troubled elements are detected by a critical value , see (42). Note that the usual filtering is therefore expected to be applied in fewer elements than the Bernstein procedure, since the Bernstein procedure is already activated for , see (43). Yet, we still observe the usual filtering to smear the numerical solution around discontinuities considerably. Over time, this smearing causes the numerical solution to become nearly constant. This can be observed in Figure 5, where the results are evolved further in time until . At the same time, the results of the Bernstein procedure retain their relatively sharp shape, even near discontinuities. Yet, no oscillations are observed, in contrast to the results produced by the DGFEM without any filtering. It should be stressed that this test case is especially challenging for the usual filtering by mean values, since the initial discontinuities have traveled through the domain several times until is reached. The following tests might provide a fairer comparison. However, compared to the Bernstein procedure, the usual filtering by mean values is always observed to smear the numerical solution noticeably to some extent.
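The usual filtering by mean values (68) amounts to replacing all values in a troubled element by a single quadrature-based cell average. A minimal sketch, assuming a nodal basis on the reference element $[-1, 1]$ with associated quadrature weights (names and interface are our own):

```python
import numpy as np

def filter_to_mean(u_elem, w):
    """Replace the nodal values in a troubled element by the cell mean,
    computed with the quadrature weights w belonging to the nodes."""
    mean = np.dot(w, u_elem) / np.sum(w)
    return np.full_like(u_elem, mean)
```

This projection is conservative by construction, but, as the experiments show, it discards all subcell information and therefore smears discontinuities heavily over long times.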
Remark 11.
In Figure 3(g) (and Figure 8(d)) we note stronger oscillations for the DGFEM with mean value filtering than without any filtering. Typically, this is not expected. A reason for this behaviour might be that the mean value filtering, when activated by the above sensor, is neither ensured to be TVD nor to preserve the relation between boundary values at element interfaces (while holds for the original approximation, mean value filtering in one or both elements connected by the interface might result in a reversed relation ). Such behaviour could be prevented by local projection limiters steered by a modified minmod function; see cockburn1991runge ; cockburn1989tvb .
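For completeness, the (unmodified) minmod function underlying the local projection limiters of cockburn1989tvb can be sketched as follows; the TVB modification additionally returns the first argument unchanged whenever its magnitude lies below a mesh-dependent threshold:

```python
def minmod(*args):
    """Standard minmod: the argument of smallest magnitude if all
    arguments share the same sign, and zero otherwise."""
    if all(a > 0 for a in args):
        return min(args)
    if all(a < 0 for a in args):
        return max(args)
    return 0.0
```

Returning zero for mixed signs is what enforces the TVD property: slopes are flattened at local extrema rather than allowed to overshoot.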
5.2 Inviscid Burgers' Equation
Let us now consider the nonlinear inviscid Burgers' equation
(69) $\partial_t u + \partial_x\!\left( \frac{u^2}{2} \right) = 0$
on with smooth IC
(70) 
and periodic BCs. For this problem a shock develops in the solution when the wave breaks at time
(71) $t_b = -\left( \min_{x} u_0'(x) \right)^{-1}$.
In the subsequent numerical tests, we consider the solution at times and . The reference solutions have been computed using characteristic tracing, solving the implicit equation in smooth regions. The jump location, separating these regions, can be determined by the Rankine–Hugoniot condition. Figure 6 illustrates the results at time , at which the shock wave starts to form. As a consequence, only slight differences are observed at this stage. In particular, we can observe that the Bernstein procedure does not smear the solution in smooth regions, even when steep gradients arise. A slight smearing can be observed for the usual filtering by mean values around the location of the arising shock at . Figure 7 illustrates the results at time , for which the shock has fully developed and has already traveled through the whole domain once. While we observe oscillations for the DGFEM without filtering and smearing for the usual filtering by mean values, the Bernstein procedure still provides sharp profiles for the discontinuous solution.
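The characteristic tracing used for the reference solutions can be sketched as a Newton iteration on the implicit relation $u = u_0(x - u t)$, valid in smooth regions away from the shock. The function below is an illustration with a user-supplied smooth IC and its derivative (both placeholders here, since the concrete IC is problem-specific):

```python
import numpy as np

def burgers_exact_smooth(u0, du0, x, t, tol=1e-12, max_iter=50):
    """Solve u = u0(x - u*t) for u by Newton's method (smooth regions only)."""
    u = np.array(u0(x), dtype=float)  # initial guess: the IC itself
    for _ in range(max_iter):
        g = u - u0(x - u * t)           # residual of the implicit relation
        dg = 1.0 + t * du0(x - u * t)   # derivative of the residual w.r.t. u
        step = g / dg
        u -= step
        if np.max(np.abs(step)) < tol:
            break
    return u
```

The denominator $1 + t\,u_0'(x - u t)$ vanishes exactly at the wave-breaking time, which is why the iteration is restricted to smooth regions and the jump location is supplied separately via the Rankine–Hugoniot condition.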
5.3 A Concave Flux Function
Next, we investigate the conservation law
(72) 
with a concave flux function on , periodic BCs, and a discontinuous IC