I Introduction
In the United States alone, the cost of direct and indirect consequences of traffic congestion was estimated at 124 billion USD in 2013, this cost taking the form of time spent by commuters in traffic jams, air pollution, accidents, etc. It represents almost 1% of the country's GDP and is expected to grow by 50% within the next 15 years. Dealing with this issue is becoming a priority of government agencies, as the U.S. Department of Transportation budget rose to almost 100 billion USD in 2016. In this context, any improvement to travel times on highways can lead to tremendous nationwide and worldwide benefits for the economy and the environment.
Maintaining and building road infrastructure, as well as urban planning, are the most obvious ways to adapt the traffic network to the ever-growing demand for mobility. However, changing the network architecture can occur only seldom and at great cost.
Control of traffic flow is an alternative approach to addressing this issue, as it aims at using existing infrastructure more efficiently and adapting it dynamically to the demand. In this article, we introduce new techniques for traffic control based on advances in Reinforcement Learning (RL) and Neural Networks. As opposed to the most commonly used approaches in traffic control, we want to achieve control in a model-free fashion, meaning that we do not assume any prior knowledge of a model or the parameters of its dynamics, and thus do not need to rely on expensive and time-consuming model calibration procedures.
Recent developments in Reinforcement Learning (RL) have enabled machines to learn how to play video games with no other information than the screen display [mnih2013playing], remarkably beat champion human players at Go with the AlphaGo program [alphago], or complete various tasks including locomotion and simple problem solving [levine2015end, finn2015deep, duan2016benchmarking]. The advent of policy gradient-based optimization algorithms enabling RL for high-dimensional continuous control systems, as can be found in [levine2013guided, schulman2015high], has generalized model-free control to systems that were characteristically challenging for Q-learning. Q-learning approaches, although successful in [mnih2013playing], suffer from a curse of dimensionality in continuous control if they rely on discretizing the action space.
In this article, RL trains a traffic management policy able to control the metering of highway on-ramps. The current state-of-the-art ramp-metering policy, ALINEA [alinea], controls incoming flow in order to reach an optimal density locally. This optimal density depends on the model used and has to be specified manually to obtain optimal control. Recently, nonparametric approaches based on Reinforcement Learning, such as [fares2014freeway] or [rezaee2014decentralized], have been proposed to achieve ramp metering, but they face two main limitations: they do not scale beyond a few on-ramps, and they limit traffic management to on-ramp control.
We introduce a way to learn an optimal control policy with numerous agents, and demonstrate the flexibility of our approach by applying it to different scenarios. This article presents the following contributions of our work:

We introduce a framework that uses RL as a generic optimization scheme enabling the control of discretized Partial Differential Equations (PDEs) in a robust and nonparametric manner. To our knowledge, this is the first use of RL for control of PDEs discretized by finite differencing. Discretized nonlinear PDEs are notoriously difficult to control when the difference scheme used is non-smooth and discontinuous, which is usually required to capture nonlinear and non-smooth features of the solution (as is the case here).

In the case of PDEs used to model traffic, we demonstrate on different examples an extensive control over boundary conditions as well as the inner domain, for the first time in a nonparametric way. We showcase the robustness of the approach and its ability to learn from real-world examples by assessing its performance after an extremely noisy training phase with stochastic variations in the underlying PDE parameters of the model used.

We introduce an algorithm to train Neural Networks that we denote Mutual Weight Regularization (MWR), which enables the sharing of experience between agents and the specialization of each agent thanks to Multi-Task Learning [caruana1998multitask]. MWR is a Neural Network training approach that allows Reinforcement Learning to train a policy in a multi-agent environment without being hampered by a curse of dimensionality with respect to the number of agents. Applied to the actual traffic control problem of ramp metering, our model-free approach achieves control of a level comparable to the currently used, model-dependent implementation of ALINEA, which constitutes the state of the art and was in our case calibrated by a world-renowned traffic engineer.
We first present the PDEs used to simulate traffic and introduce the generic PDE discretization scheme. A first simulator based on a Godunov scheme [Godunov] is used to demonstrate the efficiency of our approach in multiple situations. The Berkeley Advanced Transportation Simulator (BeATS), a state-of-the-art macroscopic simulator implementing a particular instantiation of the Godunov scheme sometimes referred to as the Cell Transmission Model [muralidharan2009freeway] and used in traffic research [wan2013prediction], is also introduced, as we use it for our final benchmark. Traffic control is presented in the form of a Reinforcement Learning problem, and we present the MWR algorithm to mitigate the issues arising from the high dimensionality of the problem. We eventually present the results we achieve and compare our algorithm to the pre-existing state-of-the-art techniques. This state-of-practice reference, an ALINEA control scheme calibrated by traffic engineers at California PATH, can be considered representative of state-of-the-art expert performance. To the best of our knowledge, we are the first to present a nonparametric scalable method calibrated with RL that performs as well as the pre-existing parametric solutions provided by traffic engineering experts.
II Models and methodology
II-A Highway traffic PDE
A highway vehicle density may be modeled by a Partial Differential Equation (PDE) of the following form:
$\partial_t \rho(t,x) + \partial_x f(\rho(t,x)) = 0$ (LWR PDE)
For a given uniform vehicle density $\rho$, $f(\rho)$ is the flow of the vehicles (in vehicles per time unit); $f$ is the fundamental diagram; the density $\rho_c$ at which $f$ attains its maximum is called the critical density, and corresponds to the optimal density to maximize the flow.
$f$ usually has the following typical shape:

When the density is lower than the critical density $\rho_c$, the vehicle flow increases with the density

When the density exceeds this value, congestion appears, and adding more vehicles actually reduces the flow.
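As an illustration, the two regimes above can be sketched with a triangular fundamental diagram, a common choice in traffic engineering; all numerical values below are illustrative placeholders, not calibrated parameters:

```python
# Hypothetical triangular fundamental diagram for the LWR PDE.
# Free-flow speed, congestion wave speed and jam density are
# illustrative values, not calibrated parameters.
V_FREE = 60.0      # free-flow speed (mph)
W_CONG = 15.0      # congestion wave speed (mph)
RHO_MAX = 200.0    # jam density (veh/mile)

# Critical density: where the two branches of the diagram meet.
RHO_CRIT = W_CONG * RHO_MAX / (V_FREE + W_CONG)

def flux(rho):
    """Flow f(rho) in vehicles per hour for a uniform density rho."""
    if rho <= RHO_CRIT:
        return V_FREE * rho              # free-flow branch: flow grows with density
    return W_CONG * (RHO_MAX - rho)      # congested branch: flow decreases

# The maximum flow (capacity) is attained exactly at the critical density.
capacity = flux(RHO_CRIT)
```

The capacity is reached at `RHO_CRIT`; below it the flow rises with density, above it congestion sets in and the flow drops, exactly as described in the two bullets above.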
II-B Control of PDEs with reinforcement learning
In the present article, nonparametric control of PDEs takes the form of a Markov Decision Process (MDP), which is formally introduced in Section III-A. The PDE in its discretized form defines a transition probability between two different states of the system. Solving an MDP is a known procedure when the transition probability is known beforehand, using techniques such as Dynamic or Linear Programming. When it is unknown, the problem is more challenging; however, it is also a more appealing setting, as deriving the transition probability for discretized PDEs is generally intractable and requires estimates of the parameters of the system. Also, as we operate in a continuous action space, we will not consider Q-learning based approaches, which are typically challenged by high dimensionality in that setting. Therefore, policy gradient algorithms present a compelling opportunity: model dynamics are sampled from the simulation trials in algorithms such as [kakade2001natural, peters2006policy, van2015learning, zzzheess2015learning], and no prior knowledge of the model is necessary to train the policy. This creates a model-independent paradigm which abstracts out the model and makes the approach generic.

II-C Simulation
The experimental method we followed consists of two steps.
The first step uses a coarser, less realistic discretization scheme for the LWR PDE named after its inventor, Godunov [Godunov]. We provide more details about this scheme in Appendix A. This step serves as a proof of concept that discretized PDEs can be controlled by a neural net trained with an RL scheme, in completely different situations and with different objectives. Our first contribution is to show that policy gradient algorithms achieve that aim.
In our second step, we use a more accurate cell transmission model [muralidharan2009freeway] scheme implemented in the state-of-the-art BeATS simulator, in order to show that the procedure we present is still valid in a more realistic setting. We also change the tasks we assign to the control scheme in order to precisely account for the actual needs of traffic management systems used in production. We show that training our neural net based policy by policy gradient methods achieves performance comparable with the state-of-the-art ALINEA control scheme [alinea], although the former is nonparametric while the latter requires a calibration of traffic-related parameters. In both cases, the neural net manages to implicitly learn the intrinsic properties of the road segment under consideration and provide a good control policy.
III Controlling cyber-physical systems with Neural Networks trained by Reinforcement Learning
In order to control a complex cyber-physical system in a nonparametric manner, we adopt a Reinforcement Learning formulation.
III-A Reinforcement Learning formulation
RL is concerned with solving a finite-horizon discounted Markov Decision Process (MDP). An MDP is defined by a tuple $(\mathcal{S}, \mathcal{A}, P, r, \rho_0, \gamma, T)$. The set of states is denoted $\mathcal{S}$ and will typically be $\mathbb{R}^n$ in our instance, where $n$ is the number of finite-difference cells as in [Godunov]. The action space $\mathcal{A}$ will correspond to a vector in $\mathbb{R}^m$ which represents the vehicular flow that the actuator lets enter the freeway, corresponding in the present case to the weak boundary condition implemented in the form of a ghost cell. The transition probability $P$ is determined by the freeway traffic simulator we use, i.e. the Godunov discretization scheme and the stochastic queue arrival model devised, discussed below. Random events such as perturbations to the input flow of vehicles or accidents affect the otherwise deterministic dynamics of the discretized system. The transition probability is affected by these events and their likelihood, but does not need to be known analytically for the system to operate, nor be estimated. This is one of the key advantages offered by Reinforcement Learning over other approaches. The real-valued reward function $r$ is for the practitioner to define, which implies that the same training algorithm can be used to achieve different objectives. The initial state distribution is denoted $\rho_0$, the discount factor $\gamma$ and the horizon $T$. Generically, RL consists in finding the policy that maximizes the expected discounted reward $\mathbb{E}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)\right]$. We denote $\tau = (s_0, a_0, \ldots, s_T, a_T)$ the representation of a trajectory, whose distribution is defined by the initial state distribution $\rho_0$, the state transition probability $P$ and the reward distribution. We will consider a stochastic policy $\pi_\theta$ which defines a probability distribution of the action $a_t$ conditional on $s_t$ (or the observation of the state at time $t$), parametrized by $\theta$. This creates a stochastic regularization of the objective to maximize and enables the computation of the gradient of a policy with respect to its parameters, in spite of the dynamics of the system under consideration not being differentiable, continuous, or even known.

III-A1 Reinforcement Learning based control of discretized PDEs
The recent developments in RL featured in [levine2014learning, schulman2015high, schulman2015trust] guarantee convergence properties similar to those of standard control methods and therefore strongly motivate their usage for the control of PDEs. Moreover, being model-independent, they are intrinsically robust to varying parameters and are able to track parameter slippage. This leads us to consider them as the new generation of generic control schemes.
We show how the use of RL on discretized PDEs enables the extension of schemes to systems featuring random dynamics, unknown parameters and regime changes, hence surpassing parametric control schemes.
The MPC approach in [zzzbellemans2006model] and the adjoint-method-based technique of [reilly2015adjoint] both rely on the definition of a cost function which needs to be minimized. For PDEs such as the LWR PDE, one typically maximizes throughput, minimizes delay, or optimizes a functional of the state (for example encompassing energy emissions). An RL approach will therefore focus on maximizing a decreasing function of that cost, which will be our accumulated reward. This is standard practice to encode an operational objective.
III-A2 State and action space
Consider a discretized approximation of the solution to Eq. (3) (see appendix) by the Godunov scheme described in the appendix. The solution domain is $[0, T] \times [0, L]$, and the discretization resolutions $\Delta t$ and $\Delta x$, satisfying $\lambda_{\max} \Delta t \leq \Delta x$, are chosen to meet well-posedness requirements (the Courant–Friedrichs–Lewy condition, where $\lambda_{\max}$ is the maximum characteristic speed of Eq. (3)). The solution to the equation is approximated by a piecewise constant solution computed at the discrete time-space points $(n \Delta t, i \Delta x)$. The action space for this system consists of the incoming flows at the discretized elements, and generally belongs to a bounded domain. The policy will control this vector of incoming flows at each time step.
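As a concrete illustration of the well-posedness requirement, the sketch below picks a time step satisfying the CFL condition for hypothetical free-flow and congestion wave speeds; all numerical values are assumptions for illustration only:

```python
# Sketch of choosing a time step satisfying the CFL condition
# dt * lambda_max <= dx, where lambda_max bounds the characteristic
# speed of the LWR PDE. Speeds here are illustrative placeholders.
V_FREE = 60.0   # free-flow speed (mph): fastest forward characteristic
W_CONG = 15.0   # congestion wave speed (mph): fastest backward characteristic

def cfl_time_step(dx, safety=0.9):
    """Largest stable time step for cell width dx, with a safety margin."""
    lambda_max = max(V_FREE, W_CONG)
    return safety * dx / lambda_max

dt = cfl_time_step(dx=0.125)               # hypothetical 0.125-mile cells
assert dt * max(V_FREE, W_CONG) <= 0.125   # well-posedness requirement holds
```

Larger cells allow proportionally larger time steps, which is why the spatial and temporal resolutions must be chosen jointly.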
III-B Neural Networks for parametrized stochastic policies
In this paragraph we show how to construct an actuator based on a Neural Network.
III-B1 Parametrized stochastic policies
A vast family of stochastic policies is available for us to choose an action conditional on an observation of the state. A common paradigm is to create a regression operator, typically a Neural Network, which determines the values of the parameters of a probability distribution over the actions based on the state observation. We practically use a Multilayer Perceptron that determines the mean and covariance of a Gaussian distribution over the action space. The action the policy undertakes is sampled from this parametrized distribution, and the policy learns to maximize its expected rewards provided a reliable training algorithm is used.
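A minimal sketch of this paradigm, assuming a one-hidden-layer perceptron and a fixed log standard deviation; all sizes and weights are illustrative, not the architecture used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small perceptron maps the state observation to the mean of a Gaussian
# over actions; a (here fixed) log standard deviation completes the
# distribution. Sizes and weights are illustrative placeholders.
OBS_DIM, HIDDEN, ACT_DIM = 6, 16, 2
W1 = rng.normal(0.0, 0.1, (HIDDEN, OBS_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (ACT_DIM, HIDDEN))
b2 = np.zeros(ACT_DIM)
log_std = np.full(ACT_DIM, -1.0)

def policy_mean(obs):
    h = np.tanh(W1 @ obs + b1)   # hidden layer
    return W2 @ h + b2           # mean of the Gaussian over actions

def sample_action(obs):
    mu = policy_mean(obs)
    return mu + np.exp(log_std) * rng.normal(size=ACT_DIM)

obs = np.ones(OBS_DIM)
action = sample_action(obs)      # stochastic action for this observation
```

Sampling rather than taking the mean is what makes the policy stochastic, which is what later enables the policy gradient computation.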
III-C Neural Networks
The policies we train are implemented as Artificial Neural Networks, consisting of artificial neurons wired together.
III-C1 Artificial neuron model
For an input vector $x \in \mathbb{R}^d$, an artificial neuron computes an output $y$ with the following formula:

$y = \sigma(w^\top x + b)$

where $\sigma$ is called the activation function, which responds to the outcome of an affine transformation of its input space parametrized by the weights $w$ and the bias $b$.

III-C2 Networks
A group of $k$ artificial neurons can be linked to a single input $x$ to create an output vector $y \in \mathbb{R}^k$. This forms a neural layer. When several layers are stacked, with the output of one being the input of the next, one speaks of an Artificial Neural Network, whose input is the input of the first layer and whose output is the output of the last layer. The general organization and architecture of such a network may vary depending on usage and the type of input data to process. In the setting of computer vision, convolutional neural networks famously achieved human-level image classification thanks to the translation and rotation invariance of the convolution masks they progressively learn [krizhevsky2012imagenet].

Back-propagation. In order to train Neural Networks, back-propagation [le1990handwritten] is a key algorithm that performs a stochastic gradient descent on a non-convex function [bishop1995neural]. Approaches to training such a Neural Network for control in the Q-learning framework were adopted in [mnih2013playing] and were successful in a discrete control setting. With continuous control, a different family of methods is generally used that encourages the policy entailed in the network parameters to take actions that are on average advantageous, and discourages actions that have an expected negative reward.

III-D Training algorithms in a RL context
Modern training algorithms for continuous-control stochastic policies can be divided into policy gradient-based approaches and non-gradient-based approaches. The former family encompasses first-order methods such as REINFORCE [schulman2015high], which we will denote Vanilla Policy Gradient (VPG), approximated second-order methods based on the use of natural gradient descent [kakade2001natural], local line search methods such as Trust Region Policy Optimization (TRPO) [schulman2015trust], and L-BFGS-inspired methods such as Penalized Policy Optimization (PPO) [duan2016benchmarking]; gradient-free approaches include the cross-entropy method [zzzszita2006learning]. The performance of these algorithms has been thoroughly compared in [duan2016benchmarking], where the natural-gradient-based method Truncated Natural Policy Gradient (TNPG) and TRPO generally outperformed other approaches. In our numerical experiments, we find that when the statistical patterns at the stochastic boundary conditions are stationary enough, all approaches perform conveniently; however, TNPG and TRPO outperform other methods when regime changes occur.
III-D1 REINFORCE
The REINFORCE algorithm [schulman2015high] has been used to train our policy to maximize the expected discounted reward.
We consider a parametric policy and denote by $\theta$ its parameters. In our case, the policy is a neural network parametrized by its weights. The input layer is filled with the environment observation, and the output layer contains the action probability distribution.
For a state $s$ and an action $a$, we denote by $\pi_\theta(a \mid s)$ the probability of taking action $a$ while in state $s$, following the policy parametrized by $\theta$.
In practice, we use the following equality to compute a gradient average across multiple trajectories:

$\nabla_\theta \, \mathbb{E}_\tau[R(\tau)] = \mathbb{E}_\tau\!\left[ R(\tau) \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right]$
The right-hand side can be approximated by simply running enough simulations with the given policy, according to the law of large numbers.
Once we obtain the gradient, we can perform gradient ascent on the parameters $\theta$ to incrementally improve our policy.
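The procedure can be sketched on a toy one-dimensional problem; the Gaussian policy, quadratic reward, target value and step size below are illustrative assumptions, not the actual traffic setting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy REINFORCE sketch: the policy is a Gaussian with learnable mean theta
# and unit variance; the hypothetical reward prefers actions near a target.
# The score-function identity grad J = E[ grad log pi(a) * R(a) ] is
# approximated by averaging over sampled actions ("trajectories" of length 1).
TARGET = 3.0

def reward(a):
    return -(a - TARGET) ** 2        # higher is better near the target

def grad_log_pi(a, theta):
    return a - theta                  # d/dtheta of log N(a; theta, 1)

theta = 0.0
for _ in range(2000):
    actions = theta + rng.normal(size=32)      # 32 sampled rollouts
    rewards = reward(actions)
    baseline = rewards.mean()                  # variance-reducing baseline
    grad = np.mean(grad_log_pi(actions, theta) * (rewards - baseline))
    theta += 0.05 * grad                       # gradient ascent step
```

Despite never differentiating the reward itself, the mean `theta` drifts toward the high-reward region, which is exactly the property exploited for the non-differentiable traffic dynamics.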
III-D2 Architecture and choices
Besides these theoretical considerations and algorithms, the choice of network architecture has a crucial impact on the training of the policy. If the layers are not adapted to the input data, the training algorithm will converge to a bad local minimum, or may not converge at all.
In our setting, the observation consists of a $K \times 3$ array, where $K$ is the number of discretization cells of the highway in the simulator, and every cell contains 3 pieces of information:

The vehicle density, scaled to have a median value of 0 and a standard deviation of 1 on average

A boolean value indicating the presence of an off-ramp

A logarithmically scaled value indicating the number of vehicles waiting in the on-ramp queue if there is one, 0 otherwise.
As this data is spatially structured, we chose to process it with convolutional neural layers, the core idea being to handle data in a spatially invariant way. Local features are created as a function of these local values, independently of the highway location. A pipeline of convolutional neural network layers is stacked to create local features. A last layer on top of these convolutions takes the action, which consists in deciding how many vehicles can enter the highway at the respective on-ramp. This is practically achieved by ramp-metering traffic lights. This architecture is illustrated in Figure 3.
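A minimal sketch of such a spatially invariant feature pipeline, using a plain 1-D convolution implemented by hand; the filter count, kernel size and weights are illustrative placeholders, not the trained network:

```python
import numpy as np

rng = np.random.default_rng(2)

# The same small filter bank slides over every highway cell, so the
# features depend only on the local state, independently of location.
N_CELLS, N_CHANNELS = 20, 3      # cells x (density, off-ramp flag, queue)
N_FILTERS, KERNEL = 8, 3         # small local receptive field

filters = rng.normal(0.0, 0.1, (N_FILTERS, KERNEL, N_CHANNELS))
bias = np.zeros(N_FILTERS)

def conv_features(obs):
    """obs: (N_CELLS, N_CHANNELS) -> features (N_CELLS - KERNEL + 1, N_FILTERS)."""
    out_len = N_CELLS - KERNEL + 1
    out = np.empty((out_len, N_FILTERS))
    for i in range(out_len):
        window = obs[i:i + KERNEL]   # local neighborhood of cell i
        out[i] = np.tanh(
            np.tensordot(filters, window, axes=([1, 2], [0, 1])) + bias
        )
    return out

obs = rng.normal(size=(N_CELLS, N_CHANNELS))
feats = conv_features(obs)           # same filters applied at every location
```

Because the filters are shared, shifting the input along the highway shifts the features identically, which is the spatial invariance exploited in the text.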
III-D3 Sharing information while allowing specialization among agents: the Mutual Weight Regularization algorithm
The features used for decision making in the low-level layers of the network are created with a convolutional neural network, which exploits the spatial invariance of the problem.
There are two possible situations for the last layer:

Parameter sharing for all on-ramp policies. This results in having the exact same policy for all agents, which should not be the case because of local specificities of the highway, such as a reduced number of lanes or a different speed limit.

Every on-ramp has its own dedicated set of parameters. This allows more flexibility and a different control for every on-ramp, but dramatically increases the number of parameters, does not share learning between agents, and ultimately does not converge to a good policy.
The novel approach we introduce, called Mutual Weight Regularization (MWR), lies between these two extremes. It acknowledges that experience and feedback should be shared between agents to mitigate the combinatorial explosion as the number of agents scales, while still allowing agent-specific modifications to adapt to local variations.
Let us consider:

$F$, the number of features computed per cell.

$C$, the set of cells linked to a controllable on-ramp.

$\phi$, the output of the convolutional layers; we denote by $\phi_i$ the features of agent $i \in C$.
We introduce $w_i$, the distinct parameters of every agent, and define, for $i \in C$, the agent's pre-activation as $z_i = w_i^\top \phi_i$. The mean of the distribution of actions of a given agent is determined by a nonlinear transform of $z_i$, and similarly for the variance of this Gaussian stochastic policy distribution [reinforce]. The MWR method consists in adding a regularization term to the global gradient used for the gradient ascent:
(1) $\tilde{\nabla}_{w_i} J = \nabla_{w_i} J + \kappa \, (w_{\mathrm{ref}} - w_i)$

(2) $\tilde{\nabla}_{w_{\mathrm{ref}}} J = \kappa \sum_{i \in C} (w_i - w_{\mathrm{ref}})$
where the hyperparameter $\kappa$ defines the strength of the regularization and therefore how much mutual information is shared between agents in the gradient ascent. Note that:

$\kappa = 0$ is equivalent to having independent policies for every on-ramp.

$\kappa \to \infty$ is equivalent to having a shared policy-making algorithm for every on-ramp (shared weights).

$w_{\mathrm{ref}}$ is not actually used for control computation, but rather serves as a reference weight.
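One plausible reading of the MWR update is sketched below, assuming an L2 penalty tying each agent's last-layer weights to a shared reference vector; the penalty form, the names `w`, `w_ref` and `kappa`, the sizes, and the zero task gradient are all illustrative assumptions, written in the gradient descent convention:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each on-ramp agent i keeps its own last-layer weights w[i]; a shared
# reference vector w_ref ties them together through an L2 penalty
# (kappa / 2) * ||w[i] - w_ref||^2. All names and sizes are assumptions.
N_AGENTS, N_FEATURES = 4, 8
w = rng.normal(0.0, 1.0, (N_AGENTS, N_FEATURES))  # agent-specific weights
w_ref = np.zeros(N_FEATURES)                      # shared reference weights

def mwr_penalty(w, w_ref):
    return 0.5 * np.sum((w - w_ref) ** 2)

def mwr_gradients(w, w_ref, task_grads, kappa):
    """Descent gradients: per-agent task term plus the regularization term.

    kappa = 0 recovers fully independent per-agent policies; a large
    kappa pulls every agent toward the shared reference weights.
    """
    g_w = task_grads + kappa * (w - w_ref)        # per-agent update
    g_ref = -kappa * np.sum(w - w_ref, axis=0)    # reference weight update
    return g_w, g_ref

task_grads = np.zeros((N_AGENTS, N_FEATURES))     # placeholder task gradient
g_w, g_ref = mwr_gradients(w, w_ref, task_grads, kappa=0.1)
```

Stepping against `g_w` shrinks the spread of the agent weights around the reference, while the reference itself drifts toward the agents' mean, so experience flows both ways without forcing identical policies.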
IV Experimental results
IV-A Proof of concept on different scenarios
In this first set of experiments, we demonstrate that RL can be used to control PDEs in a robust and generic way: the same training procedure converges to a successful policy for two very different tasks. In this section, simulations are run using a simple Godunov discretization scheme for the LWR PDE (Eq. (3)), see appendix.
IV-A1 Highway outflow control
Traffic management scenario. We consider a 5-mile section of I-80 W starting from the metering toll plaza and ending within San Francisco (see Figure 5). The flow is metered at the toll plaza at a rate shown in Figure 5 (i.e. a vehicle rate). The Godunov scheme is implemented over a grid of cells and simulated over a time span enabling several bridge crossings. The inflow integrates random arrival rates for inbound traffic, consisting of a sinusoid perturbed by random noise. The state is the vehicular density $\rho$ (as defined by the LWR PDE, in vehicles per unit length), from which the flow (number of vehicles per unit time) can be computed.
Action space and environment. We consider the following operational scenario. The number of vehicles upstream of the meter (see Figure 5) is randomized for each simulation in order to reproduce the diversity of freeway flow dynamics as they actually occur. At every time step, the observation forward-propagated through the policy Neural Network is the value of the current time step (in time units). An action is a positive scalar representing the number of cars permitted to enter the highway per time unit. We train the policy to reproduce a target pattern on the outflow, with a reward penalizing the distance between the realized outflow and this target.
The choice of this reward function encodes our intention to replicate the target function on the downstream boundary condition. In particular, what is remarkable here is that this is the only way the environment provides information to the policy about the state of the freeway. Indeed, we only provide the current time step as state observation to our policy, along with the reward associated with the result obtained after a simulation rollout.
Learning in the presence of disruptive perturbations. The discretized model we use is well-suited to representing traffic accidents. The state update mechanism may be randomly altered by accidents, which drastically change the maximum flow the freeway can carry at local points in space. Accidents are simulated by locally decreasing the maximum flow speed in a given discretization cell for a given time interval. A key goal of our work is to alleviate the impact of accidents while simultaneously tracking operational objectives (e.g. a desired outflow of the bridge into the city); achieving this with a robust and generic method is a tremendous breakthrough for urban planning. See the appendix for a presentation of the accident scenario.
Learning algorithms and convergence to an effective policy. In Figure 5, we analyze the results of the control scheme learnt by a policy consisting of two fully connected hidden layers. The policy controls the inflow of cars (boundary control). We choose an arrival rate sufficiently high on average to provide the controller with enough vehicles to match the prescribed downstream conditions (in congestion, this is obviously always the case). The results prove that, in spite of the problem being non-differentiable, non-smooth, nonlinear and drastically perturbed by unpredictable accidents and random input queues, the policy converges to a control scheme that manages to replicate the objective density. The learning phase uses different policy update methods, such as [kakade2001natural, levine2013guided, schulman2015trust], benchmarked in [duan2016benchmarking]. In this benchmark, among gradient-based methods, TNPG, TRPO and PPO seem to outperform the simpler REINFORCE method, which only leverages first-order gradient information. In Figure 6 we show that PPO, TRPO and REINFORCE are all reliable in this instance and converge to more effective policies than TNPG. It is also noteworthy that PPO converges faster to a plateau of rewards.
IV-A2 Inner domain control
Reward shaping, in the form of assigning a target density and penalizing the distance between the observed density and the objective, enables us to reproduce an arbitrary image with the solution density in the solution domain, only by controlling it on the boundaries. The results in Figure 7 demonstrate the ability of the method we present to train a policy to extensively control the values of a solution to the discretized PDE we study in its solution domain. The action space here is of much higher dimension, as ramps are present all along the freeway and can let cars in at a sequence of spatial offsets separated by a fixed number of cells. An off-ramp split model handles the vehicles leaving the freeway. In spite of the increased dimensionality, a neural net with three fully connected hidden layers trained by the TRPO method converges to a policy capable of reproducing a target solution in the interior domain, as shown in Figure 7, whereas TNPG failed in this instance to converge to an efficient policy. From a practitioner's perspective, this example is very powerful: it shows the ability to generate arbitrary congestion patterns by metering along the freeway. From a PDE control standpoint, this is even more powerful, as direct state actuation is a very hard problem in manufacturing and has many applications with PDEs such as the (nonlinear) heat equation, Hamilton-Jacobi equations, and several others.
Figure 7: Objective assigned, and objective achieved after 2000 iterations.
IV-B Optimal ramp metering control
In order to demonstrate the applicability of this novel method to real-world cases, we consider the ramp-metering problem on a 20-mile (33 km) section of the 210 Eastbound freeway in southern California, as illustrated in Figure 8. For this simulation, we use the BeATS simulator, calibrated by traffic experts based on real-world data. Every simulation run lasts 4 hours, after a 30-minute warm-up period to initialize the freeway. The simulation starts at 12 pm, and the traffic peak happens between 3 pm and 4 pm, as the demand curve reaches a maximum (Fig. 14).
Two reinforcement learning algorithms and the ALINEA control scheme are benchmarked against the baseline scenario in which no control occurs at all:

NoRM, baseline: the baseline without any ramp metering. Cars instantly enter the freeway when they reach an on-ramp, provided the freeway has enough capacity.

NoMWR, standard deep reinforcement learning: the Reinforcement Learning based policy we introduce, trained with shared weights for the last layer.

MWR, novel approach to training: The same policy as NoMWR, but trained with MWR.

ALINEA, parametric control: the state-of-the-art reference algorithm, using as model and parameters the exact same values used in the simulator it is benchmarked on.
IV-B1 Reinforcement Learning problem
In this scenario, the agent takes an action every 32 seconds. An action is a vector of the ramp-metering rates for the 29 on-ramps of the highway section, in vehicles per time unit. The reward collected at every time step is the total outflow over the last 32 seconds (in vehicles per time unit).
The highway is discretized into 167 cells of 200 meters each to generate an observation vector in $\mathbb{R}^{167 \times 3}$. For every cell, the following 3 data values are provided to the network:

Density in vehicles per space unit, normalized

A boolean value indicating the presence of an off-ramp on this cell

The number of cars waiting in this cell's on-ramp queue (logarithmically scaled), or 0 if there is no on-ramp
It is worth mentioning that the Reinforcement Learning policies are trained in a stochastic way to ensure that a generic policy is learned. This is done by introducing noise in the actions taken by the Neural Network: the ramp-metering rate actually applied is sampled from a Gaussian distribution centered on the network output. This strategy, along with the use of shared learning techniques over the 29 on-ramps, globally prevents overfitting issues.
IV-B2 Numerical results
After training, we compared the results of our approach to existing algorithms on several criteria. The average speed is globally increased, as expected (Fig. 9). We also report the Total Vehicle Miles in Fig. 13, along with the Total Vehicle Hours (Fig. 12), which assess the performance of our approach with a single score. In both cases, the MWR training approach provides a significant performance increase over regular parameter sharing (NoMWR), and almost reaches the performance of the reference state-of-the-art parametric method ALINEA.
Approach | Score in veh.hr (lower is better) | Score in veh.mile (higher is better)
ALINEA | 10514 | 644522
MWR | 10575 | 644334
NoMWR | 10617 | 643605
NoRM | 11085 | 639709
V Conclusion
We have demonstrated how neural RL substantially improves upon the state of the art in the field of control of discretized PDEs. It enables reliable nonparametric control while offering theoretical guarantees similar to those of classic parametric control techniques. In particular, neural RL can be applied without an explicit model of the system dynamics, and instead only requires the ability to simulate the system under consideration. Through our experimental evaluation, we demonstrated that the neural RL approach can be used to control discretized macroscopic vehicular traffic equations through their boundary conditions, in spite of accidents drastically perturbing the system. Achieving such robustness is a significant breakthrough in the field of control of cyber-physical systems. Specific to the practice of transportation, the results are a major disruption, as they enable adaptive control that competes with current controllers without the need for model calibration. By eliminating the need for calibration, our method addresses one of the critical challenges and dominant causes of controller failure, making our approach particularly promising in the field of traffic management. We also introduced a novel algorithm, MWR, to achieve multi-agent control and leverage trial-and-error experience across different agents, while at the same time allowing each agent to learn how to tailor its behavior to its location in the large cyber-physical system under study.
Appendix A Godunov discretization scheme
Because of the presence of discontinuities in their solutions, the benchmark PDEs we consider are formulated in the weak sense. We consider an open set $\Omega \subseteq \mathbb{R}^2$; a measurable function $\rho$ is a distributional solution to the system of conservation laws
(3) $\partial_t \rho + \partial_x f(\rho) = 0$
if, for every function $\varphi$ defined over $\Omega$ with compact support, one has
(4) $\displaystyle\int_{\Omega} \left( \rho \, \partial_t \varphi + f(\rho) \, \partial_x \varphi \right) dx \, dt = 0$
The operator $f$ in Eq. (3) will be referred to as the flux function, also called the "fundamental diagram" in transportation engineering. The operator $f$ entirely defines the dynamics at stake and is therefore often domain specific. Given an initial condition
(5) $\rho(0, x) = \bar{\rho}(x)$
where $\bar{\rho}$ is locally integrable, $\rho$ is a distributional solution to the Cauchy problem defined by Eq. (4) and Eq. (5) if
(6) $\displaystyle\int_0^{+\infty}\!\!\int_{\mathbb{R}} \left( \rho \, \partial_t \varphi + f(\rho) \, \partial_x \varphi \right) dx \, dt + \int_{\mathbb{R}} \bar{\rho}(x) \, \varphi(0, x) \, dx = 0$
for every function $\varphi$ with compact support contained in $\mathbb{R} \times [0, +\infty)$. If $t \mapsto \rho(t, \cdot)$ is a continuous function from $[0, +\infty)$ into the set of locally integrable functions on $\mathbb{R}$, and $\rho$ solves the Cauchy problem above in the distributional sense, then $\rho$ is referred to as a weak solution to the Cauchy problem.
The Dirichlet problem corresponding to a boundary condition (as used later in the article) can be formulated in a similar manner and is left out of the article for brevity. Such a definition of weak solutions is not sufficient to guarantee that they are admissible solutions. The entropy condition guarantees uniqueness for the problem and continuous dependence with respect to the initial data (the derivation is also left out of the article for brevity).
Godunov's scheme computes an approximate weak solution to the Dirichlet problem Eq. (4), Eq. (5) with the following recursive equation and Godunov flux $G$:

$\rho_i^{n+1} = \rho_i^n - \dfrac{\Delta t}{\Delta x} \left( G(\rho_i^n, \rho_{i+1}^n) - G(\rho_{i-1}^n, \rho_i^n) \right)$
The Godunov scheme is first-order accurate. Unfortunately, like most numerical schemes, it is non-differentiable because of the presence of the "if-then-else" statements in its explicit form. Another problematic aspect of computing numerical weak entropy solutions with most numerical schemes (incl. Godunov) is their reliance on a numerical evaluation of $f$, which often takes the form of a parametrized function. The estimation of these parameters is often difficult, and it is practically intractable to assess the impact of the parameter uncertainty on the solutions because of the nonlinearity, non-smoothness and non-differentiability of the schemes.
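A hedged sketch of one Godunov time step for the LWR PDE with a concave triangular fundamental diagram, using the standard demand/supply form of the Godunov flux for concave flux functions; all parameter values are illustrative:

```python
# Godunov update sketch for the LWR PDE with a triangular fundamental
# diagram. Parameter values are illustrative placeholders.
V_FREE, W_CONG, RHO_MAX = 60.0, 15.0, 200.0
RHO_CRIT = W_CONG * RHO_MAX / (V_FREE + W_CONG)
CAPACITY = V_FREE * RHO_CRIT

def flux(rho):
    return V_FREE * rho if rho <= RHO_CRIT else W_CONG * (RHO_MAX - rho)

def godunov_flux(rho_left, rho_right):
    # The "if-then-else" structure below is exactly what makes the
    # scheme non-differentiable, as noted in the text.
    demand = flux(rho_left) if rho_left <= RHO_CRIT else CAPACITY
    supply = CAPACITY if rho_right <= RHO_CRIT else flux(rho_right)
    return min(demand, supply)

def step(rho, dt, dx, inflow, outflow):
    """One Godunov time step with prescribed boundary fluxes."""
    fluxes = [inflow]
    fluxes += [godunov_flux(rho[i], rho[i + 1]) for i in range(len(rho) - 1)]
    fluxes.append(outflow)
    return [rho[i] - dt / dx * (fluxes[i + 1] - fluxes[i])
            for i in range(len(rho))]
```

With zero boundary fluxes the total number of vehicles is conserved by construction, since each interface flux appears once with each sign in the update.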