Multilevel Monte Carlo Finite Volume Methods for Random Conservation Laws with Discontinuous Flux

06/21/2019
by   Jayesh Badwaik, et al.

We consider a random scalar hyperbolic conservation law in one spatial dimension with bounded random flux functions which are discontinuous in the spatial variable. We show that there exists a unique random entropy solution to the conservation law corresponding to the specific entropy condition used to solve the deterministic case. Assuming empirical convergence rates of the underlying deterministic problem over a broad range of parameters, we present a convergence analysis of a multilevel Monte Carlo Finite Volume Method (MLMC-FVM), based on a pathwise application of the finite volume method for the deterministic conservation laws. We show that the work required to compute the MLMC-FVM solutions is one order lower than the work required to compute Monte Carlo Finite Volume Method solutions of equal accuracy.

1 Introduction

One-dimensional scalar conservation laws with spatially discontinuous flux are often used to model phenomena such as traffic flow [20, 26, 11], two-phase flow in porous media [9, 27, 10, 13, 12, 14] and sedimentation processes [5, 7, 4]. However, the data needed to describe the equation is often not known deterministically; instead, it is only available via certain statistical quantities of interest such as the mean, the variance and the higher moments. This uncertainty in the Cauchy data carries over to the solution of the conservation law. Uncertainty in the Cauchy data and the corresponding solution is frequently modeled probabilistically. A common point of view is that the Cauchy data is a random variable described by a probability distribution. The development of efficient algorithms to quantify the uncertainty in the solutions of random conservation laws is an active field of research. The number of random sources driving the uncertainty may be very large, possibly countably infinite, and a numerical method should be able to deal with the corresponding, possibly infinite-dimensional, spaces efficiently.

There are different methods to quantify the uncertainty in solutions of conservation laws, namely the stochastic Galerkin method [1, 6, 21, 25, 30] based on generalized polynomial chaos, stochastic collocation [22, 31] and statistical sampling methods, especially Monte Carlo (MC) methods [23]. As was shown in [24], MC methods converge to the mean at the rate $M^{-1/2}$ as the number of samples $M$ increases. This rate is due to the central limit theorem and is hence optimal for this class of methods, but it makes MC methods computationally expensive. In [8], Giles showed that a multilevel Monte Carlo method can be used to reduce the computational complexity of estimating the expected value of solutions of stochastic differential equations. Since then, multiple authors have applied the method to different problems.

For conservation laws, [24] proposed a multilevel Monte Carlo (MLMC) method for a random initial condition and a smooth flux. Later, the method was extended to random but smooth fluxes in [23]. In the same paper, the authors described a method to determine the optimized combination of sample sizes at each level of spatial and temporal discretization; the optimal number of samples at each level depends on the error estimates of the numerical method. We are interested in extending the method to conservation laws with a random initial condition and a discontinuous flux, in the special case where the discontinuous flux is multiplicative. In this case, the conservation law can be written as

(1a) $u_t + \big(k(x, \omega)\, f(u)\big)_x = 0, \qquad (x, t) \in \mathbb{R} \times (0, T),$
(1b) $u(x, 0, \omega) = u_0(x, \omega), \qquad x \in \mathbb{R},$

where $k$ and $u_0$ are random variables satisfying the following conditions: there exist constants such that, for almost every $\omega$, we have

(2a)
(2b)
(2c)
(2d)
(2e)
(2f)
(2g)

The deterministic case of Eq. 1 has been studied extensively in [3, 28, 16], and different admissibility conditions have been developed to select a unique solution. However, the different conditions, though each yielding uniqueness, select different solutions. In this paper, we use a modified Kruzkov entropy condition to select the unique solution. Finite volume methods for the deterministic case have been developed in [28, 16]. The stability of solutions with respect to the Cauchy data was shown in [17].

We define a pathwise entropy condition for the random conservation law based on the modified Kruzkov entropy condition of the deterministic case. Using the results from the deterministic case, we then prove the uniqueness and stability of the solution to the random conservation law. Following [23], the next step would be to use the error estimates for the finite volume method from the deterministic case to derive the optimal sample numbers for the MLMC method. However, such error estimates do not yet exist. Moreover, [2] presents an example where the total variation of the solution is unbounded near the flux discontinuity. This indicates that determining convergence rates which are uniformly valid can be a difficult task. A general practice has therefore been to estimate such rates empirically.

In this paper, we use the empirically estimated convergence rates to compute the optimized combination of sample sizes, using the method of Lagrange multipliers. We then show that the sample sizes thus computed indeed reduce the computational complexity compared with the Monte Carlo method. Finally, we present numerical experiments to verify the theoretical computations.
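As an illustration of the optimization step, the standard Lagrange-multiplier computation (minimize total work subject to a variance budget) admits a closed-form sample-size rule. The sketch below is not the paper's specific calculation; the per-level variances, costs and tolerance are illustrative placeholders.

```python
import math

def optimal_sample_sizes(variances, costs, eps):
    """Minimize total work sum_l M_l * C_l subject to the statistical
    error constraint sum_l V_l / M_l <= eps**2.  Setting the gradient of
    the Lagrangian to zero gives M_l proportional to sqrt(V_l / C_l)."""
    s = sum(math.sqrt(v * c) for v, c in zip(variances, costs))
    return [max(1, math.ceil(eps ** -2 * math.sqrt(v / c) * s))
            for v, c in zip(variances, costs)]

# Illustrative inputs: variances decay, per-sample costs grow with level.
M = optimal_sample_sizes([1.0, 0.25, 0.0625], [1.0, 2.0, 4.0], eps=0.1)
# More samples on the cheap coarse levels, fewer on the expensive fine ones.
```

Note the characteristic MLMC allocation: the level with large variance and low cost receives the bulk of the samples.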

The rest of the paper is organized as follows: we cover some preliminary background in Section 2. In Section 3, we prove that a unique solution to Eq. 1 exists, and in Section 4, we analyze the Monte Carlo and multilevel Monte Carlo methods. Finally, we test the method developed in this section on some numerical examples in Section 5 and describe the conclusions of our analysis in Section 6.

2 Preliminaries

2.1 Preliminaries from Probability

We first introduce some preliminary concepts which are needed in the exposition. A large part of the exposition has been adapted from [19, 29]. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let $E$ be a Banach space with $\mathcal{B}(E)$ the Borel $\sigma$-algebra over $E$. A map $s \colon \Omega \to E$ is called a $\mathbb{P}$-simple function if it is of the form

(3) $s(\omega) = \sum_{n=1}^{N} \mathbb{1}_{A_n}(\omega)\, x_n,$

with $A_n \in \mathcal{F}$ and $x_n \in E$ for $n = 1, \dots, N$. A map $f \colon \Omega \to E$ is strongly $\mathbb{P}$-measurable if there exists a sequence of simple functions $(s_n)_{n \in \mathbb{N}}$ converging to $f$ in the $E$-norm $\mathbb{P}$-almost everywhere on $\Omega$. A strongly $\mathbb{P}$-measurable map is called an $E$-valued random variable. We call two strongly $\mathbb{P}$-measurable functions $f, g \colon \Omega \to E$ $\mathbb{P}$-versions of each other if they agree $\mathbb{P}$-almost everywhere on $\Omega$.

Lemma 1

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let $E$ and $F$ be two Banach spaces. Let $f \colon \Omega \to E$ be a strongly measurable function and $g \colon E \to F$ be a continuous function. Then the composition $g \circ f \colon \Omega \to F$ is a strongly measurable function.

Definition 1 (Integration on Banach Spaces)

The integral of a simple function is defined as

(4)

A strongly measurable map is said to be Bochner integrable if there exists a sequence of simple functions ,converging to , -almost everywhere such that

(5)

The Bochner integral of a strongly measurable map is then defined by

(6)

Theorem 1

A strongly measurable map is Bochner integrable if and only if

(7)

in which case, we have

(8)

Definition 2 ( Spaces on Banach Spaces)

For each , we define the space to consist of all strongly measurable functions for which . These spaces are Banach spaces under the norm

For , we define the space as the space of all strongly measurable functions for which there exists a such that . This space is a Banach space under the norm

(9)

Definition 3 (Banach Space of Type [19, page 246])

Let be a sequence of independent Rademacher random variables. A Banach space is said to have the type if there is a constant (known as the type constant) such that for all finite sequences

(10)

Theorem 2 ([19, page 246])

Let and let be a measure space and let be a Banach space having the type , then the space has the type . In particular, the Banach space has the type .

Theorem 3 ([19, Proposition 9.11])

Let be a Banach space having the type with the type constant . Then, for every finite sequence of zero mean independent random variables in , we have

(11)

Given a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and a Banach space $E$, let $X \colon \Omega \to E$ be a random variable. Given $M$ independent, identically distributed samples $X_1, \dots, X_M$ of $X$, the Monte Carlo estimator of $\mathbb{E}[X]$ is defined as the sample average

(12) $E_M[X] := \frac{1}{M} \sum_{i=1}^{M} X_i.$
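As a minimal numerical illustration of the sample-average estimator above (not the paper's solver), the following sketch estimates the mean of a scalar random variable:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(sample, M):
    """Monte Carlo estimator: the sample average of M i.i.d. draws,
    where sample(rng) returns one realization of the random variable."""
    return sum(sample(rng) for _ in range(M)) / M

# X uniform on [0, 1]: the estimate approaches E[X] = 1/2 at the
# central-limit-theorem rate M**(-1/2).
est = mc_estimate(lambda r: r.uniform(0.0, 1.0), M=10_000)
```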
Theorem 4 ([18, Corollary 2.5])

Let the Banach space have the type with the type constant . Let be a zero mean random variable. Then for every finite sequence of independent, identically distributed samples of , we have

(13)

Theorem 5 ([18, Theorem 4.1])

Let , then the Monte Carlo estimate converges in for and we have the bound

(14)

2.2 Conservation Law with Discontinuous Flux

We next present a series of results for conservation laws with spatially discontinuous flux that we will need later in the paper. In Eq. 1, for a fixed $\omega \in \Omega$, the problem reduces to the deterministic conservation law

(15a)
(15b)
Definition 4 (Weak Solution)

A weak solution to Eq. 15a is a bounded measurable function , , satisfying, for all ,

(16)

Theorem 6 (Existence of Weak Solution [15])

If the conditions Eq. 2a, the concavity condition, Eq. 2c and the regularity condition on the initial data are satisfied, then there exists a weak solution to Eq. 15a; moreover, the weak solution is bounded in $L^1$ and $L^\infty$, and hence, by interpolation, in $L^p$ for all $p \in [1, \infty]$.

Definition 5 (Modified Kruzkov Entropy Condition [28])

Consider a convex entropy/entropy-flux pair for Eq. 15a. A weak solution of Eq. 15a is said to be an entropy solution if, for every smooth test function with compact support and with the quantity defined in Eq. 2d, it satisfies the inequality

(17)

Theorem 7 (Uniqueness of Entropy Solution [15])

If, in addition to the conditions required in Theorem 6, the condition in Eq. 2d is also satisfied, then there exists a unique weak entropy solution to Eq. 15a satisfying the inequality

(18)

where

(19)

Theorem 8 ( Stability Result [17])

If, in addition to the conditions required in Theorem 7, the conditions in Eq. 2e and the strict-positivity condition are also satisfied, then we have

(20)

where

(21a)
(21b)
(21c)

3 Random Conservation Law with Discontinuous Flux

For a conservation law with a random flux and a random initial condition, we consider the Cauchy problem given in Eq. 1. In this section, we show that solutions to Eq. 1 exist and are pathwise unique under an entropy condition. For this purpose, we first define the allowed space of Cauchy data.

Definition 6 (Random Data)

Define a norm

(22)

where . Let be a fixed constant; then we assume that is the space of functions which satisfy the conditions in Eq. 2 almost everywhere. We note that is a Banach space under the given norm. Let be the Borel -algebra on and let be some fixed constant. Then, we assume that the random data is a strongly measurable map such that

(23)

From the results for the deterministic conservation law, we would expect the random solution to be a random variable taking values in for . Denote the space of solutions as and write the solution in terms of a mapping , whence we have,

(24)
Definition 7 (Random Entropy Solution)

Given a probability space , a random variable is said to be a random entropy solution for Eq. 1 if the following conditions are satisfied:

  1. Weak Solution: for -a.e. , satisfies

    (25)

    for all .

  2. Entropy Condition: for -a.e. , satisfies the entropy condition as in Definition 5.

    (26)

Theorem 9 (Existence and -wise Uniqueness of a Random Entropy Solution)

Let the random data be as defined in Definition 6, satisfying Eq. 2. Then there exists a unique random entropy solution to the random conservation law Eq. 1.

Proof

By Eq. 23, for almost all , the random data is such that there exists a corresponding unique entropy solution . By the assumptions of the theorem, the data map is strongly measurable. Further, by Eq. 20, the solution map is continuous. Lemma 1 then shows that the composed map is strongly measurable. Hence, there exists a random entropy solution to Eq. 1.

Next, let the random variables and be -versions of each other. For all times , let be the random entropy solution corresponding to at time and be the random entropy solution corresponding to at time . Then, by Eq. 20 and the norm defined in Eq. 22, for and -almost everywhere, we have

This implies that, for -almost every , we have equality almost everywhere in . Hence, for all , the random entropy solution is unique in and therefore in .

Theorem 10

Let be a random entropy solution to Eq. 1 as per Definition 7. For any and , we have and

(27)

where the constant depends linearly on the bound on the data. Given a bounded interval , we also have the estimate

(28)

where is as defined in Theorem 6.

Proof

By Theorem 6, for any , we have, for -almost every ,

Hence, we have, for all , . Also, by the fact that , we have

4 Multilevel Monte Carlo Finite Volume Method

Monte Carlo methods are a class of methods where repeated random sampling is used to obtain the mean and the higher moments of a random variable. In our case, the samples are the entropy solutions to the deterministic Cauchy problems for the corresponding samples of Cauchy data. The exact solutions to those deterministic problems are, however, unavailable, and we must instead use a numerical approximation. Here, we use a finite volume method to compute the numerical approximation to each deterministic problem.

The error introduced by Monte Carlo methods depends on the number of samples used, while the error introduced by finite volume methods depends on the resolution of the grid. Different combinations of grids can be used to compute the finite volume approximations for different samples. In this section, we describe and analyze two such combinations: the Monte Carlo Finite Volume Method (MC-FVM) and the Multilevel Monte Carlo Finite Volume Method (MLMC-FVM). We solve our random conservation law on a compact spatial interval and on a finite time interval.

Definition 8 (Cauchy Problem Sample, Solution Sample and FVM Solution Sample)

In the context of Monte Carlo methods, given a sample of the random variable , the corresponding deterministic Cauchy problem will be referred to as the Cauchy problem sample for . Similarly, the unique entropy solution and the finite volume solution for the Cauchy problem sample will be referred to as the solution sample for and the finite volume solution sample for respectively.

Definition 9 (-discretization)

Let the domain be given. Divide it into N uniform cells of size , with where and . The -discretization is defined as . Additionally, the cell center and the cell length are given as

(29)
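A minimal sketch of the uniform discretization of Definition 9, assuming the domain is an interval [a, b] (the function and variable names are illustrative):

```python
import numpy as np

def discretize(a, b, N):
    """Uniform discretization of [a, b] into N cells of size dx,
    returning the cell interfaces x_{i-1/2} and the cell centers x_i."""
    dx = (b - a) / N
    interfaces = a + dx * np.arange(N + 1)  # N + 1 interfaces
    centers = interfaces[:-1] + dx / 2      # N cell centers
    return dx, interfaces, centers

dx, xf, xc = discretize(0.0, 1.0, N=4)
# dx = 0.25, interfaces at 0, 0.25, ..., 1, centers at 0.125, 0.375, ...
```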

4.1 Finite Volume Method For Conservation Law with Discontinuous Flux

Given a Cauchy problem sample and a -discretization of the domain containing cells, we now describe a method to compute the FVM approximation to the solution of the given Cauchy problem. The initial conditions of the Cauchy problem dictate that

(30)

We denote the cell average of the solution in the -th cell at the -th time step by ,

(31)

The function is computed at the cell interfaces by considering the cell averages on the staggered grid as shown below

(32)

The finite volume scheme is then defined as

(33a)
where is given by
(33b)

where is a monotone numerical flux. A few different numerical fluxes are appropriate for this problem; we use the following numerical fluxes, motivated by [28].

  1. Godunov Flux

    (34)
  2. Engquist-Osher Flux

    (35)
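The precise flux formulas are given in Eq. 34 and Eq. 35. Purely as an illustration of the two classical constructions for a plain scalar flux f (not the interface form with the coefficient k used by the scheme above), a sketch:

```python
import numpy as np

def _trap(y, x):
    """Oriented trapezoidal rule for the integral of y dx along nodes x."""
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2))

def godunov_flux(f, a, b, n=201):
    """Godunov flux: min of f over [a, b] if a <= b, else max over [b, a].
    Evaluated here by sampling; closed forms exist for convex/concave f."""
    u = np.linspace(min(a, b), max(a, b), n)
    return f(u).min() if a <= b else f(u).max()

def engquist_osher_flux(f, df, a, b, n=201):
    """Engquist-Osher flux: f(0) + int_0^a max(f', 0) + int_0^b min(f', 0)."""
    sa, sb = np.linspace(0.0, a, n), np.linspace(0.0, b, n)
    return (f(0.0) + _trap(np.maximum(df(sa), 0.0), sa)
                   + _trap(np.minimum(df(sb), 0.0), sb))

# Burgers flux f(u) = u^2 / 2 with a shock configuration a = 1, b = -1.
f, df = lambda u: 0.5 * u ** 2, lambda u: u
F_god = godunov_flux(f, 1.0, -1.0)            # max of f over [-1, 1] = 0.5
F_eo = engquist_osher_flux(f, df, 1.0, -1.0)  # = 0 + 1/2 + 1/2 = 1
```

Both fluxes are monotone (non-decreasing in the first argument, non-increasing in the second), which is the property Theorem 11 relies on.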
Theorem 11 ([28, Theorem 3.2])

Let the Cauchy data satisfy the conditions in Eq. 2. Additionally, assume

(36)

Then, the finite volume scheme Eq. 33 converges to the entropy solution of the Cauchy problem Eq. 15a, provided that the time step satisfies the CFL condition

(37)
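The exact constant in Eq. 37 is given in [28]. The time-step selection such a condition implies can be sketched as follows, assuming a CFL condition of the generic form dt * max|k| * max|f'| / dx <= C (the bound names and the constant are illustrative):

```python
def cfl_timestep(dx, k_max, df_max, cfl=0.5):
    """Largest time step allowed by a CFL condition of the form
    dt * k_max * df_max / dx <= cfl (the constant cfl is illustrative)."""
    return cfl * dx / (k_max * df_max)

dt = cfl_timestep(dx=0.01, k_max=2.0, df_max=1.0)  # dt = 0.0025
```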

Remark 1

The convergence rates for these numerical methods are as yet unknown. However, we need convergence rates to determine the optimal sample numbers in the analysis of the MLMC method. Hence, we assume that the convergence rate for the -error between the numerical solution and the exact solution is

(38)
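The empirical rate assumed in Eq. 38 is typically estimated by a least-squares fit of log error against log mesh width; a minimal sketch with manufactured data (the rate 1/2 below is illustrative, not a claim about the scheme):

```python
import numpy as np

def empirical_rate(dxs, errors):
    """Estimate s in error ~ C * dx**s: the slope of the least-squares
    line through the points (log dx, log error)."""
    slope, _ = np.polyfit(np.log(dxs), np.log(errors), 1)
    return slope

# Manufactured errors decaying exactly at rate 1/2.
dxs = np.array([0.1, 0.05, 0.025, 0.0125])
rate = empirical_rate(dxs, 2.0 * dxs ** 0.5)  # recovers 0.5
```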

4.2 Monte Carlo Finite Volume Methods

We now describe and analyze the MC-FVM for computing a numerical approximation to the random entropy solution of the conservation law Eq. 1. The underlying idea of MC-FVM is to use an identical uniform discretization to solve each of the Cauchy problem samples.

Definition 10 (MC-FVM Approximation)

Generate independent, identically distributed samples . Let be a -discretization of the spatial domain . Let denote the FVM solution sample corresponding to at time . Then, the -sample MC-FVM approximation to is defined as

(39)

The variance is defined as

(40)
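The MC-FVM estimator and its sample variance (Eq. 39 and Eq. 40) amount to an average over per-sample finite volume solutions on one fixed grid. A sketch with a toy stand-in for the FVM solver (the solver itself is not reproduced here):

```python
import numpy as np

def mc_fvm_estimate(solve_sample, M):
    """Average M finite volume solution samples on a fixed grid and
    form the unbiased sample variance (divides by M - 1)."""
    sols = np.stack([solve_sample(i) for i in range(M)])
    return sols.mean(axis=0), sols.var(axis=0, ddof=1)

# Toy stand-in: a constant profile on 8 cells plus per-sample noise.
rng = np.random.default_rng(1)
mean, var = mc_fvm_estimate(lambda i: 1.0 + 0.1 * rng.standard_normal(8),
                            M=500)
```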

Theorem 12 (MC-FVM Error Bound)

Assume the random data is as defined in Definition 6. Then, for the random conservation law Eq. 1, the MC-FVM approximation converges to in as the number of samples increases and the mesh is refined. Furthermore, we have the bounds

(41)

where is the rate of convergence determined empirically from the numerical experiments.

Proof

For the first inequality, using the triangle inequality, we can write

Using the fact that and applying the Hölder inequality, Theorem 5 and Eq. 27 to the first term, we have

Let be such that

Then, by the Hölder inequality for probability spaces, we have

Since , we can set and and obtain

(42)

For the second term, by Eq. 38, we have

Remark 2

We would like to point out that in the proof above, the error bounds are derived by assuming that . A similar procedure will be carried out for the MLMC-FVM. Furthermore, we use the same error bounds for the calculation of optimal sample numbers. Hence, we require that the error bounds and the error rate used for the optimal sample numbers hold for .

4.3 Multilevel Monte Carlo Finite Volume Methods

The underlying idea of the MLMC-FVM is to use a hierarchy of nested discretizations and to solve several Cauchy problem samples on each of them.

Definition 11 (MLMC-FVM Approximation)

Let be a sequence of nested discretizations of the spatial domain . For each level , denote by the random finite volume approximation on with . Then, given , the MLMC-FVM approximation to is defined as

(43a)
(43b)
(43c)
(43d)
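The telescoping structure of Eq. 43 can be sketched as follows: a plain MC average on the coarsest level plus MC averages of level differences, with fresh samples per level and the same draw used for both solves inside a difference. The level solver below is a toy placeholder, not the FVM scheme:

```python
import numpy as np

def mlmc_estimate(solve, sample, Ms):
    """MLMC estimator: E_{M_0}[u_0] + sum_{l>=1} E_{M_l}[u_l - u_{l-1}].
    solve(l, w) is the level-l approximation for the draw w; draws are
    independent across levels, shared between the two solves in a pair."""
    est = np.mean([solve(0, sample()) for _ in range(Ms[0])])
    for l in range(1, len(Ms)):
        diffs = []
        for _ in range(Ms[l]):
            w = sample()                      # one draw, two levels
            diffs.append(solve(l, w) - solve(l - 1, w))
        est += np.mean(diffs)
    return est

# Toy levels: solve(l, w) = w + 2**-l, so the level-L expectation is
# 2**-L; with L = 2 the estimate should be near 0.25 up to sampling noise.
rng = np.random.default_rng(2)
est = mlmc_estimate(lambda l, w: w + 2.0 ** -l, rng.standard_normal,
                    Ms=[400, 100, 25])
```

Note the decreasing sample counts across levels: the level differences have small variance, so few samples are needed on the fine, expensive levels.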

It must be emphasized here that every discretization apart from the coarsest and the finest is used twice: the approximate solution is computed once for the difference at its level and another time for the difference at the next level, in an independent manner. This is important to ensure that the level differences are independent of each other.

Theorem 13 (MLMC-FVM Error Bound)

Assume the random data is as defined in Definition 6. Then, for the random conservation law Eq. 1, the MLMC-FVM approximation converges as the number of samples at each level increases and the mesh is refined. Further, we have the bound

(44)

where .

Proof

We first derive the result for the error in . Using the triangle inequality, we can write