Entropy-dissipating finite-difference schemes for nonlinear fourth-order parabolic equations

01/11/2020
by Marcel Braukhoff, et al.
TU Wien

Structure-preserving finite-difference schemes for general nonlinear fourth-order parabolic equations on the one-dimensional torus are derived. Examples include the thin-film and the Derrida-Lebowitz-Speer-Spohn equations. The schemes conserve the mass and dissipate the entropy. The scheme associated to the logarithmic entropy also preserves the positivity. The idea of the derivation is to reformulate the equations in such a way that the chain rule is avoided. A central finite-difference discretization is then applied to the reformulation. In this way, the same dissipation rates as in the continuous case are recovered. The strategy can be extended to a multi-dimensional thin-film equation. Numerical examples in one and two space dimensions illustrate the dissipation properties.

1. Introduction

The design of numerical schemes that preserve the structure of the associated partial differential equations is an important task in numerical mathematics. In this paper, we develop new finite-difference approximations conserving the mass, preserving the positivity, and dissipating the entropy of nonlinear fourth-order parabolic equations of the type

(1)

where $\mathbb{T}$ is the one-dimensional torus, the parameters of the equation are given real constants, and an initial condition is prescribed in $\mathbb{T}$. We also briefly discuss the discretization of multi-dimensional equations; see Section 4.

A special case of (1) (for a particular choice of the parameters) is the thin-film equation

(2) $\partial_t u + (u^\beta u_{xxx})_x = 0, \qquad x \in \mathbb{T},\ t > 0,$

which models the flow of a thin liquid along a solid surface with film height $u$, or the thin neck of a Hele-Shaw flow, in the lubrication approximation [1]. The multi-dimensional version is given by $\partial_t u + \operatorname{div}(u^\beta \nabla\Delta u) = 0$, and its discretization will be discussed in Section 4. Another example (for another choice of the parameters) is the Derrida–Lebowitz–Speer–Spohn (DLSS) equation

(3) $\partial_t u + \bigl(u (\log u)_{xx}\bigr)_{xx} = 0, \qquad x \in \mathbb{T},\ t > 0,$

which arises as a scaling limit of interface fluctuations in spin systems [7] and describes the evolution of the electron density in a quantum semiconductor under simplifying assumptions.

Equations (2) and (3) conserve the mass and preserve the positivity of the solution, even for the corresponding multi-dimensional versions [6, 14]. They also dissipate the entropy (strictly speaking, equation (1) produces entropy and dissipates energy, but the notion of entropy dissipation seems to be common for numerical schemes) in the sense that there exists a Lyapunov functional with entropy density such that the entropy production

provides gradient estimates. For instance, (2) and (3) dissipate the entropy density defined by

(4)

for a suitable range of the entropy parameter in the case of the thin-film equation and another range in the case of the DLSS equation; see [15, Section 4]. These bounds are optimal. We call the corresponding functionals the Shannon entropy (density), the logarithmic entropy, and, in the remaining cases, Rényi entropies; the Rényi entropies converge pointwise to the Shannon and logarithmic entropies in the respective parameter limits. We prefer to define the entropy densities as in (4) in order to avoid additional lower-order terms, which would complicate the subsequent computations.
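For orientation, one common normalization of such a family of entropy densities (an assumption for illustration; the definition (4) used in the paper may differ from it by terms that are affine in $u$, which do not change the entropy production because the mass $\int_{\mathbb{T}} u\,dx$ is conserved) is

\[
\phi_\gamma(u) \;=\; \frac{u^\gamma - \gamma u + \gamma - 1}{\gamma(\gamma-1)} \quad (\gamma \neq 0,1),
\qquad
\lim_{\gamma\to 1}\phi_\gamma(u) = u\log u - u + 1,
\qquad
\lim_{\gamma\to 0}\phi_\gamma(u) = u - 1 - \log u ,
\]

so that the Shannon entropy corresponds to the limit $\gamma \to 1$, the logarithmic entropy to $\gamma \to 0$, and the remaining exponents to Rényi entropies; dropping the affine terms, as indicated above for (4), simplifies the algebra without affecting the dissipation.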

The proof of the dissipation of the entropy is based on suitable integrations by parts and the chain rule [15]. On the discrete level, we face the problem that the chain rule is generally not available. We are aware of two general strategies.

The first strategy is to exploit the gradient-flow structure of the parabolic equation (if it exists). It involves only one integration by parts, and the discrete chain rule can be formulated by means of suitable mean functions. This idea was elaborated as the Discrete Variational Derivative Method for finite-difference approximations [11]. The gradient-flow formulation (with respect to the $L^2$-Wasserstein metric) yields a natural semi-discretization in time of the evolution via the minimizing movement scheme, which can be combined with finite-dimensional spaces arising from finite-volume or finite-difference approximations. These techniques were used in [8, 13, 19, 20] for the multi-dimensional DLSS equation. They allow for a proof of the dissipation of the Shannon entropy and of the Fisher information, but not of general Rényi entropies, since no Wasserstein gradient-flow formulation seems to be available for these functionals. The thin-film equation (for a certain parameter range) is shown in [18] to be a gradient flow with respect to a weighted Wasserstein metric. In [22], a finite-difference scheme that dissipates a discrete norm of the solution to the one-dimensional thin-film equation was analyzed.
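For reference, the minimizing movement (or JKO) step underlying this strategy reads, for a time step $\tau>0$ and an entropy functional $E$,

\[
u^{k+1} \in \operatorname*{arg\,min}_{u}\left\{ \frac{1}{2\tau}\,W_2^2\bigl(u,u^k\bigr) + E[u] \right\},
\]

where $W_2$ denotes the $L^2$-Wasserstein distance (replaced by the corresponding weighted transport distance in the setting of [18]). Comparing the minimizer with the competitor $u=u^k$ immediately gives $E[u^{k+1}] \le E[u^k]$, so the entropy is dissipated along the time-discrete evolution by construction.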

The minimizing movement scheme is based on the implicit Euler method. We mention that higher-order time discretizations have also been investigated in the framework of semi-discrete problems; see [4] for the two-step BDF method and [17] for one-leg multistep generalizations. A generic framework for Galerkin methods in space and discontinuous Galerkin methods in time was presented in [9].
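As an illustration of such a higher-order time discretization (a generic sketch, not the specific schemes of [4] or [17]), the two-step BDF method applied to an abstract evolution equation $\partial_t u = A(u)$ with step size $\tau$ reads

\[
\frac{3u^{k+1} - 4u^{k} + u^{k-1}}{2\tau} = A\bigl(u^{k+1}\bigr), \qquad k \ge 1,
\]

and is second-order accurate in time.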

The second strategy uses time-continuous Markov chains on finite state spaces. Birth-death processes that define the Markov chain can be interpreted as a finite-volume discretization of a one-dimensional Fokker–Planck equation, and the dissipation of the discrete Shannon entropy can be proved. The nonlinear integrations by parts are reduced to a discrete Bochner-type inequality [5, 10, 16], which is obtained by identifying the Radon–Nikodym derivative of a measure involving the jump rates of the Markov chain [3, Section 2]. It seems that this idea is restricted to linear diffusion equations.
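A schematic example of such a construction (a generic birth-death master equation; the specific rates and discretizations are those of the cited references): a continuous-time Markov chain on grid points $i=1,\dots,N$ with nearest-neighbor jump rates $r_{i\to i\pm1}$ evolves its law $p_i(t)$ according to

\[
\frac{dp_i}{dt} \;=\; r_{i-1\to i}\,p_{i-1} + r_{i+1\to i}\,p_{i+1} - \bigl(r_{i\to i-1} + r_{i\to i+1}\bigr)p_i ,
\]

and a suitable choice of the rates in terms of the drift and diffusion coefficients turns this master equation into a finite-volume discretization of the Fokker–Planck equation whose discrete Shannon entropy is dissipated.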

In this paper, we suggest a third strategy. The idea is to write the flux as a combination of derivatives of the function . This allows for integrations by parts that can be extended to the discrete level and it avoids the application of the chain rule. More precisely, we determine two functions and depending on and its derivatives such that . The function is known in thermodynamics as the chemical potential, and the formulation of the flux in terms of the chemical potential seems to be natural from a thermodynamic viewpoint. We apply this idea to fourth-order parabolic equations for the first time. It turns out that for with , we can write

where , , and the coefficients depend on , , , and ; see (10) and (11). Integrating by parts twice and using equation (1) gives for :

The task is to show that the integrand, written as a polynomial in , is nonnegative for all values of . It follows from the product rule that, for ,

(5)

Under certain conditions on the parameters, we expect the integrand to be bounded from below, up to a constant factor, by a nonnegative expression that yields the gradient estimates.
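The pointwise nonnegativity of such an integrand is typically verified by a discriminant criterion. Schematically (the actual polynomial and its coefficients come from (10)-(11) and are not reproduced here), for a quadratic form in two variables one has

\[
a\,\xi^2 + b\,\xi\eta + c\,\eta^2 \;\ge\; 0 \quad \text{for all } (\xi,\eta)\in\mathbb{R}^2
\qquad\Longleftrightarrow\qquad
a \ge 0,\quad c \ge 0,\quad b^2 \le 4ac ,
\]

and the free parameter from the flux reformulation can then be chosen so that this condition becomes as weak as possible; this is how the parameter condition (7) arises in the proofs below.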

On the discrete level, we imitate this idea: the flux and the variables of the polynomials are discretized by central finite differences. For this, we introduce a discrete torus with uniform grid size and define the scheme

(6)

with the discrete initial condition, where the discrete coefficients in (6) are the two polynomials of the continuous reformulation, evaluated at suitable grid values; see (19). We show below that, with the discrete entropy, the discrete analog of (5) becomes

where the discrete quantities approximate their continuous counterparts. This yields exactly the polynomial of the continuous case. Thus, we obtain the same conditions on the parameters as for the continuous equation.

We still need a discrete analog of the product rule to conclude. This is done by carefully choosing the two auxiliary discrete variables; definition (20) ensures a discrete identity that approximates the continuous product rule. This choice is used in the central scheme (6). Noncentral schemes require different definitions of these variables; see Remark 7.
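To make the construction concrete, here is a minimal, self-contained sketch of a central finite-difference discretization on a periodic grid, written for the special case of the one-dimensional thin-film equation (2) with an implicit Euler step in time. It is not the authors' scheme (6): the polynomials from (19) and the averaged variables from (20) are not reproduced here, and the naive flux below is not guaranteed to dissipate the discrete entropy, which is precisely the point of the reformulation. The grid size, time step, mobility exponent beta, and the monitored Rényi-type entropy are illustrative choices.

import numpy as np
from scipy.optimize import fsolve

def dx(v, h):
    """Central first difference on a periodic grid."""
    return (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * h)

def dxx(v, h):
    """Central second difference on a periodic grid."""
    return (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / h**2

def thin_film_rhs(u, h, beta=2.0):
    """Naive central discretization of du/dt = -(u^beta u_xxx)_x."""
    flux = u**beta * dx(dxx(u, h), h)
    return -dx(flux, h)

def implicit_euler_step(u_old, tau, h, beta=2.0):
    """Solve u - u_old - tau * F(u) = 0 with u_old as the initial guess."""
    residual = lambda u: u - u_old - tau * thin_film_rhs(u, h, beta)
    return fsolve(residual, u_old)

def discrete_entropy(u, h, gamma=0.5):
    """Discrete Renyi-type entropy  h * sum_i u_i^gamma / (gamma*(gamma-1))."""
    return h * np.sum(u**gamma) / (gamma * (gamma - 1.0))

if __name__ == "__main__":
    N, tau, steps = 64, 1.0e-6, 200
    h = 1.0 / N
    x = h * np.arange(N)
    u = 1.0 + 0.5 * np.cos(2.0 * np.pi * x)   # smooth positive initial datum
    mass0 = h * u.sum()
    entropies = [discrete_entropy(u, h)]
    for _ in range(steps):
        u = implicit_euler_step(u, tau, h)
        entropies.append(discrete_entropy(u, h))
    diffs = np.diff(entropies)
    print("relative mass defect:", abs(h * u.sum() - mass0) / mass0)
    print("entropy monotonically nonincreasing:", bool(np.all(diffs <= 1e-12)))

In the scheme (6) of the paper, the flux above is replaced by its reformulation in terms of the polynomials from (19), evaluated at the averaged grid values defined in (20); it is this reformulated discretization that recovers the continuous dissipation rates.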

A drawback of our technique is that scheme (6) depends on the entropy to be dissipated. The scheme does not dissipate all admissible entropies. In applications, however, one usually wants to dissipate only that entropy which is physically relevant.

Our main results can be sketched as follows:

  • Lyapunov functional: Let , and assume that

    (7)

    Then the continuous entropy and the discrete entropy are dissipated in the sense that and , i.e., and are Lyapunov functionals for and , respectively; see Theorems 2 and 5. Condition (7) is optimal for the thin-film and DLSS equations.

  • Entropy dissipation: Let , and assume that . Then there exists a constant such that for all ,

    (8)
    (9)

    where , , , is an arbitrary average of , and , are defined in (20). This result is proved in Theorems 2 and 5.

  • Case : For the case , we need the formulation instead of , where and , since generally does not depend on the logarithm. For details, see Proposition 8.

  • Case : We show that scheme (6) with possesses global positive solutions. This result is a consequence of the discrete entropy inequality and mass conservation, which imply that the discrete entropy is bounded for all times. Consequently, the entropy density evaluated at each grid value is bounded, and since this function tends to infinity as its argument tends to zero or to infinity, the solution is bounded from below and above. We refer to Proposition 6 for details; a schematic version of this argument is sketched after this list.

  • Multi-dimensional case: In principle, the multi-dimensional case can be treated using functions with many variables. Practically, however, the computations become too involved, and it may be unclear how to discretize mixed derivatives. One idea to overcome this issue is to use scalar variables only. This allows us to treat the multi-dimensional thin-film equation; see Proposition 10.
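A schematic version of the positivity argument sketched in the case above, under the illustrative assumption (not taken from the paper) that the logarithmic entropy density has the form $\phi_0(s) = s - \log s$ and writing $E_h$ for the discrete entropy: since $\phi_0 \ge 1 > 0$ and $E_h$ is nonincreasing along the scheme,

\[
h\,\phi_0\bigl(u_i(t)\bigr) \;\le\; \sum_{j} h\,\phi_0\bigl(u_j(t)\bigr) \;=\; E_h[u(t)] \;\le\; E_h[u(0)]
\qquad\text{for every grid point } i \text{ and } t>0 .
\]

Because $\phi_0(s) \to \infty$ both as $s \to 0^+$ and as $s \to \infty$, the bound $\phi_0(u_i(t)) \le E_h[u(0)]/h$ confines every $u_i(t)$ to a compact subinterval of $(0,\infty)$, which gives the claimed positivity and boundedness.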

The paper is organized as follows. We prove the continuous entropy inequality (8) in Section 2 and the discrete entropy inequality (9) in Section 3. A scheme for the multi-dimensional thin-film equation is proposed and analyzed in Section 4. Numerical simulations for the thin-film and DLSS equations in one space dimension and for a thin-film equation in two space dimensions are presented in Section 5.

2. General continuous equation

To prepare the discretization, we need to analyze the entropy dissipation properties of the continuous equation (1). We first show that the flux can be written in the form stated in the following lemma, with functions that depend on the solution and its derivatives.

Lemma 1.

Let be given as in (1) and as in (4) and let satisfy . Then , where

and

(10)

with being a free parameter.

The lemma shows in particular that the formulation is not unique. This fact is used later to optimize the range of admissible parameters.

Proof.

Let with . A direct computation yields

Inserting these expressions into and simplifying leads to

We identify the coefficients with those in the expression (1) for :

The general solution of this linear system for , with free parameter , gives (10).

Next, let . The ansatz for and becomes

Then

Identifying the coefficients with those in (1) again gives a linear system for the parameters . The general solution reads as

(11)

These expressions are the same as (10) with . ∎

For the following theorem, we recall definition (4) of and set .

Theorem 2.

Let be a smooth positive solution to (1) and let . If (see definition (7)), then is a Lyapunov functional, i.e.  for all . If then there exists such that for all ,

and there exists another constant such that for all ,

Proof.

Let first with . We calculate the time derivative of the entropy, using integration by parts twice:

(12)

where

(13)

The right-hand side of (12) is nonpositive if for all . Taking into account that , this is the case if and only if

In view of definition (10) of and , we may interpret as a free parameter and and as affine functions in . Therefore, we require that

(14)

We choose the optimal value of by computing the critical value of :

This yields

(15)

Inserting in (14) leads to

which is equivalent to condition (7).

If , there exists such that for all , . Inserting this information in (12), we infer that

The discriminant equals

and it is positive for all . Therefore, there exists such that

and this gives the conclusion for and when .

It remains to analyze the case . Here, we cannot formulate the flux as with , since does not contain logarithmic terms. Our idea is to write with and functions and that depend on , , and . The time derivative of the entropy can be written in terms of and its derivatives only, since the logarithmic term only appears with its derivatives.

The formulation corresponds to the expression used for . In fact, we have

where are given by (11) and is a free parameter. As before, the time derivative becomes

Set and . Since and , we obtain

where

(16)

Observe that . Thus, the entropy is nonincreasing if the polynomial is nonnegative for all values of its arguments, which is the case if and only if

We choose the optimal value from , i.e.

Then is equivalent to .

Finally, if , we have and consequently,

The discriminant is positive and there exists such that . We infer that

which concludes the proof with and . ∎

Remark 3 (Examples).

The DLSS equation corresponds to (1) if , , and . Then condition (7) becomes , which is the optimal interval. Choosing , we obtain the thin-film equation, and condition (7) is equivalent to , which again is the optimal parameter range. ∎

Remark 4 (Systematic integration by parts).

The result of Theorem 2 can also be derived by the method of systematic integration by parts of [15]. Our proof is tailored in such a way that it can be directly “translated” to the discrete level. Indeed, the method of [15] requires several chain rules that are not available on the discrete level, which is why our technique is needed.
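A concrete instance of such an integration-by-parts formula (with an illustrative exponent $a$; the formulas actually needed depend on the parameters in (1)): for smooth positive periodic $u$, the product rule and the periodicity of the torus give

\[
0 \;=\; \int_{\mathbb{T}} \partial_x\!\bigl( u^{a} u_x\, u_{xx} \bigr)\,dx
 \;=\; \int_{\mathbb{T}} \bigl( a\,u^{a-1} u_x^2\, u_{xx} + u^{a} u_{xx}^2 + u^{a} u_x\, u_{xxx} \bigr)\,dx ,
\]

so that the integral of one derivative monomial can be traded against the others.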

Still, systematic integration by parts and our strategy are strongly related. Systematic integration by parts means that we add so-called integration-by-parts formulas, i.e., integrals of derivatives, which vanish due to the periodic boundary conditions. In this way, we can derive (12),

by choosing