1. Introduction
Solving differential equations with numerical optimization, specifically with minimization of the equation residual, is not a new idea. Optimization-based methods have been popular for some time in the form of least-squares finite element methods (LSFEM) [1, 2, 3], and element-free Galerkin methods [4, 5], where even for meshfree formulations, theoretical convergence criteria have been examined [6]. Nevertheless, recent work by Han et al. [7], Berg and Nyström [8], Sirignano and Spiliopoulos [9], Raissi et al. [10], and Xu and Darve [11] has shown that remarkably simple implementations of deep neural networks (DNNs) can also solve relatively diverse differential equations by utilizing very similar residual-minimizing formulations but with the DNNs replacing the usual finite element discretization strategies. From a method development point of view, further demonstrating the capability of DNNs to solve differential equations is of intrinsic interest.
To explore how efficient, reliable, robust, and generalizable DNN-based solutions are will undoubtedly require careful and prolonged study. Here, we offer a practical starting point by asking how well, and in what ways, DNNs can manage one of the cleanest, simplest, and most ubiquitous irregularities observed in differential systems, namely shock fronts. Shock fronts provide a quintessential test bed for a numerical method in that they contain an isolated, analytically non-differentiable feature that propagates through the differential system. The feature reduces the local regularity of the solution both in space and time and leads to numerical representations that must accommodate local first-order analytic discontinuities while still maintaining some concept of numerical stability and robustness. Indeed, the way a numerical method responds to a local shock front tends to reveal its fundamental signature response to local irregularities in the solution, which can serve as an indicator by which to gauge the method's applicability.
A tremendous amount of work spanning pure mathematical analysis [12, 13], numerical analysis [14, 15], physics [16, 17], etc., has been done on differential systems that demonstrate shock-like behavior. Shock-like numerical behavior is deeply rooted in discrete function representation and approximation theory, and further in functional and discrete functional analysis, specifically in what it means to locally and globally approximate a member of a function space [18, 19, 20, 21].
Along these lines, the first half of this paper is dedicated to exploring the behavior of the conventional Sod shock tube solution to the compressible Euler equations of gas dynamics [22]. We use the shock tube solution as a setting in which to explore what is meant by a DNN solution to a system of PDEs, what is meant by a regular (and irregular) solution to a system of PDEs both analytically as well as numerically, how these two frames of reference relate to each other, and where they differ. Then we discuss in some detail how DNN solutions compare and relate to more traditional and conventional ways of solving systems of PDEs and how the validation of DNN-based methods differs from, say, typical numerical convergence analysis.
The second half of the paper is focused more directly on how to improve the performance of DNN solutions in the context of irregular PDE solvers, and how to capitalize on the operational details of DNN solvers to perform complete and simultaneous parameter space exploration. We also consider how to extend the framework of physics-based modeling to incorporate experimental data into the analytic workflow. These questions have growing importance in science and engineering applications as data becomes increasingly available. Many of the more traditional approaches, including data-informed parameter estimation methods [23] and real-time filtering [24], are frequently incorporated into more standard forward PDE solvers. We argue that DNN-based PDE solvers provide a natural interface between physics-based PDE solvers and advanced data analytics. The general observation is that because DNNs are able to combine a PDE solver framework with a simple and flexible interface for optimization, it becomes natural to ask how sufficient a modeling system is at representing experimental observations and to explore the nature of the mismatch between a simulated theoretical principle and an experimentally measured phenomenon. In this context, the DNN framework almost automatically inherits the ability to couple many practical data-driven concepts, from signal analysis, to optimal control, to data-driven PDE enrichment and discovery.

To provide some insight into the utility of the DNN framework for solving PDEs, it is instructive to discuss some of the apparent strengths and drawbacks. Three strengths of the framework are:

(I) Phenomenal ease of prototyping PDEs,

(II) Natural incorporation of large data,

(III) Simultaneous solution over the entire parameter space.
Regarding (I), complicated systems of highly parameterized and multidimensional PDEs can be prototyped in TensorFlow or PyTorch in hundreds of lines of code, in a day or two. This might be compared to the decade-long development cycle of many legacy PDE solvers. For (II), incorporating and then utilizing experimental data in the PDE workflow as a straightforward supervised machine learning task is remarkably rewarding and provides a painless integration with the empirical scientific method. It is easy to see how this feature could be naturally leveraged for practical uncertainty quantification, risk assessment, data-driven exploration, optimal control, and even discovery and identification of the theoretical underpinnings of physical systems (as discussed in some detail in section 8 below). In (III), another powerful and practical advantage is identified, where parameter space exploration simply requires augmenting the solution domain (space-time) with additional parameter axes, and then optimizing the DNN to solve the PDEs as a function of the parameters as inputs. This input space augmentation adds little algorithmic or computational complexity over solving on the space-time domain at a single parameter point and is drastically simpler than exploring parameter space points sequentially.

Some of the more immediate drawbacks of the DNN-based approach to solving PDEs include:

(i) Absence of theoretical convergence guarantees for the nonconvex PDE residual minimization,

(ii) Slower overall run time per forward solve,

(iii) Weaker theoretical grounding of the method in PDE analysis.
We touch on these weaknesses throughout the paper, but briefly, (i) indicates the challenge posed by understanding convergence criteria in nonconvex optimization, where solutions may get trapped in local minima. The concern of (ii) is substantially subtler than it appears at first glance, and strongly depends on many aspects of the chosen neural network architecture, how optimized the hyperparameters are, what the ultimate goal of the simulation is, etc. Finally, (iii) merely highlights that DNNs have only recently been seriously considered as PDE solvers, and thus remain largely theoretically unexamined.
The paper is organized as follows: Section 2 shows how a simple DNN can be used to solve a nonlinear system of PDEs. Section 3 proceeds with a discussion about the relationship between different notions of solutions to PDEs and attempts to provide some context in which to understand where DNN solutions can be placed relative to solutions provided by more conventional numerical methods. Section 4 demonstrates how the DNN solution behaves given unbounded gradients. Section 5 explores solutions to the Sod shock tube problem and compares aspects of the DNN method to more standard finite element and finite volume methods. Section 6 discusses a method of dissipative annealing to improve (in a specific sense) the rate of convergence relative to the number of iterations required in the descent algorithm, and then how to use the DNN framework to recover simultaneous parameter scans. In section 7, the modified network with residual connections and multiplicative gates from [9], which resembles the "highway network" and "long short-term memory" (LSTM) architectures, is compared to the more standard DNN model in the specific context of the Sod shock tube. Finally, in section 8, the concept of data-enriched PDEs is presented through the lens of a hypothetical experimental scenario, in which the experimenter finds that the anticipated physical model of the system does not adequately validate against the experimental data. The accompanying code to this manuscript will be made available on GitHub upon publication.

2. Solving PDEs using Deep Neural Networks
In this paper we are interested in a broad system of coupled, potentially nonlinear PDEs that can be written in terms of the initial-boundary value problem, for $(t, x, \lambda) \in [0,T] \times \Omega \times \mathcal{P}$:

(2.1)
$$\partial_t u = \mathcal{F}(u; \lambda) \quad \text{in } (0,T] \times \Omega,$$
$$\mathcal{B}(u) = g \quad \text{on } (0,T] \times \partial\Omega,$$
$$u(0, x; \lambda) = u_0(x; \lambda),$$

over the domain $\Omega \subset \mathbb{R}^d$ with boundary $\partial\Omega$, and the parameter domain $\mathcal{P} \subset \mathbb{R}^p$ given parameters $\lambda \in \mathcal{P}$, where $u$ is the $n$-dimensional solution vector with $i$-th scalar-valued component $u_i$, $T$ is the duration in time, $\mathcal{F}$ is generally a nonlinear differential operator, and arbitrary boundary conditions are expressed through the componentwise operator $\mathcal{B}$.

Here we are interested in approximating the solutions to (2.1) with DNNs. A feedforward network can be described in terms of the input $x \in \mathbb{R}^{d_{in}}$, the output $y \in \mathbb{R}^{d_{out}}$, and an input-to-output mapping $\mathcal{N}_\theta : \mathbb{R}^{d_{in}} \to \mathbb{R}^{d_{out}}$, where $d_{in}$ and $d_{out}$ are the input and output dimensions. The components of the pre- and post-activations of hidden layer $\ell$ are denoted $z_i^{(\ell)}$ and $a_i^{(\ell)}$, respectively, where the activation function $\sigma$ is a sufficiently differentiable function $\sigma : \mathbb{R} \to \mathbb{R}$. The $i$-th component of the activations in the $\ell$-th hidden layer of the network is given by

(2.2)
$$a_i^{(\ell)} = \sigma\big(z_i^{(\ell)}\big), \qquad z_i^{(\ell)} = \sum_{j=1}^{n_{\ell-1}} w_{ij}^{(\ell)} a_j^{(\ell-1)} + b_i^{(\ell)},$$

where $n_1, \dots, n_L$ are the numbers of neurons in the hidden layers of the network, and $w^{(\ell)}$ and $b^{(\ell)}$ are the weight and bias parameters of layer $\ell$. These parameters make up the tunable, or "trainable", parameters of the network, which can be thought of as the degrees of freedom that parametrize the representation space of the DNN. Note that in section 7 we will consider a slightly more exotic network architecture, but for now, we concern ourselves with the standard "multilayer perceptron" described in (2.2).

In this context, the output of the neural network $u_\theta = \mathcal{N}_\theta(t, x, \lambda)$, where the number of units is chosen and fixed for each layer, realizes a family of approximate solutions to (2.1), such that for each component of (2.1),

(2.3)
$$u_{\theta,i}(t, x, \lambda) \approx u_i(t, x; \lambda), \qquad i = 1, \dots, n,$$

where we have set the parameters $\theta = \{w, b\}$. Similarly, activation functions $\sigma$ must be chosen such that the differential operators in $\mathcal{F}$ can be readily and robustly evaluated using reverse-mode automatic differentiation [25].
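As a concrete, framework-agnostic sketch of the multilayer perceptron (2.2), the following NumPy fragment evaluates a tanh network; the layer widths and the Xavier-style initialization bounds are illustrative choices, not the exact configuration used in the experiments below:

```python
import numpy as np

def init_mlp(layer_widths, seed=0):
    """Xavier-uniform initialization of the weights w and biases b in (2.2)."""
    rng = np.random.default_rng(seed)
    params = []
    for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
        bound = np.sqrt(6.0 / (n_in + n_out))
        w = rng.uniform(-bound, bound, size=(n_out, n_in))
        b = np.zeros(n_out)
        params.append((w, b))
    return params

def mlp_forward(params, x):
    """Evaluate (2.2): affine maps with tanh activations on the hidden
    layers and a linear output layer."""
    a = x
    for w, b in params[:-1]:
        z = w @ a + b          # pre-activation z^(l)
        a = np.tanh(z)         # post-activation a^(l) = sigma(z^(l))
    w, b = params[-1]
    return w @ a + b           # linear output: the surrogate u_theta

# Example: a network mapping (t, x) in R^2 to a 3-component state vector
params = init_mlp([2, 20, 20, 3])
u = mlp_forward(params, np.array([0.1, 0.5]))
```

In an actual solver the forward pass would be written in TensorFlow or PyTorch so that the differential operators in the residual can be obtained by automatic differentiation rather than by hand.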
We proceed by defining a slightly more general objective function to that presented in [10, 9],

(2.4)
$$J(\theta) = \sum_{i=1}^{n} \big( J_i^{PDE}(\theta) + J_i^{BC}(\theta) + J_i^{IC}(\theta) \big),$$

where

(2.5)
$$J_i^{PDE}(\theta) = \mathbb{E}_{\nu_1}\big[ \ell_i\big( \partial_t u_{\theta,i} - \mathcal{F}_i(u_\theta; \lambda),\, 0 \big) \big],$$
$$J_i^{BC}(\theta) = \mathbb{E}_{\nu_2}\big[ \ell_i\big( \mathcal{B}_i(u_\theta) - g_i,\, 0 \big) \big],$$
$$J_i^{IC}(\theta) = \mathbb{E}_{\nu_3}\big[ \ell_i\big( u_{\theta,i}(0, x, \lambda) - u_{0,i},\, 0 \big) \big],$$

and $\ell_i$ indicates the form of the $i$-th loss and $\nu = (\nu_1, \nu_2, \nu_3)$ denotes the probability densities with respect to which the expected values of the loss are calculated. In particular, the form of the loss function can correspond to any standard norm (e.g., the discrete kernel of an $L^p$ norm), the mean square error, or specialized hybrids such as the Huber loss [26]. For example, choosing the mean square error $\ell_i(a, b) = (a - b)^2$, the corresponding PDE loss is given by the expected value $J_i^{PDE}(\theta) = \mathbb{E}_{\nu_1}\big[ ( \partial_t u_{\theta,i} - \mathcal{F}_i(u_\theta; \lambda) )^2 \big]$. We define any particular approximate solution to (2.1) as finding a $u_\theta$ that minimizes the loss relative to the parameter set $\theta$. That is:

(2.6)
$$\theta^* = \arg\min_\theta J(\theta).$$
To solve this optimization problem, one must approximate the loss function and couple this approximation with an optimization algorithm. For instance, using stochastic gradient descent (SGD), one uses a Monte Carlo approximation of the loss coupled with gradient descent. Thus, the loss is computed by drawing a set of samples $S = \{s_1, \dots, s_{N_b}\}$ from the joint distribution $\nu$ and forming the following approximation:

$$J(\theta; S) \approx \frac{1}{N_b} \sum_{k=1}^{N_b} \sum_{i=1}^{n} \Big( \ell_i\big(r_i^{PDE}(s_k; \theta), 0\big) + \ell_i\big(r_i^{BC}(s_k; \theta), 0\big) + \ell_i\big(r_i^{IC}(s_k; \theta), 0\big) \Big),$$

where $N_b$ is the batch size and $r_i^{PDE}(s_k; \theta)$ denotes the $i$-th PDE residual evaluated at the batch point $s_k$, which is drawn according to $\nu_1$, and similarly for the BC and IC terms. Then, the SGD algorithm for the $m$-th iteration can be written as

(2.7)
$$\theta_{m+1} = \theta_m - \eta \nabla_\theta J(\theta_m; S_m),$$

where $S_m$ denotes the batch set drawn at iteration $m$ and $\eta$ is the learning rate.
In practice we find the Adam optimizer [27] to be the most efficient SGD-family algorithm for the cases run in this paper. For network initialization we use the Xavier uniform initializer [28]. It is also worth noting that the learning rate $\eta$ and batch size $N_b$, as well as the sampling distribution $\nu$, are additional hyperparameters that could be optimized over. For simplicity, however, we do not perform hyperparameter optimization in this paper.
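The sampling-and-update loop of (2.4)-(2.7) can be sketched on a deliberately tiny model problem. The ODE, the one-parameter ansatz, and all numerical settings below are hypothetical stand-ins chosen so that the residual gradient is analytic; they are not the DNN/Euler setup studied in this paper:

```python
import numpy as np

# Toy instance of the residual-minimization loop (2.4)-(2.7): fit the ansatz
# u_theta(t) = theta * t against the ODE u' = u + 1 on t in [0, 1], whose
# residual is r(t) = theta - theta*t - 1 = theta*(1 - t) - 1.  The mean-square
# loss E[r^2] under t ~ U(0, 1) is minimized at theta = 3/2.
rng = np.random.default_rng(0)

theta = 0.0   # the single trainable parameter
eta = 0.05    # learning rate, cf. (2.7)

for m in range(500):
    batch = rng.uniform(0.0, 1.0, size=64)    # resample S_m from nu each step
    r = theta * (1.0 - batch) - 1.0           # PDE residual at the batch points
    grad = np.mean(2.0 * r * (1.0 - batch))   # Monte Carlo gradient of E[r^2]
    theta -= eta * grad                       # SGD update (2.7)
```

Because the batch is redrawn every iteration, the iterates fluctuate around the minimizer rather than converging exactly, which is the same qualitative behavior the DNN losses exhibit under SGD and Adam.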
3. Irregular Solutions to PDEs
In this section we briefly review what a solution to a PDE represents. Although most PDEs have been derived from and are studied for their ability to represent natural phenomena, at their core they remain mathematical abstractions that are only approximations to the phenomena that inspire them. These abstractions are often derived from simple variational perturbation theories, and are subsequently celebrated for their ability to capture and predict the physical behavior of complex natural systems. This “capturing” of physical systems is conventionally accomplished at the level of experimental validation, which is often many steps removed from the abstraction that the PDE itself represents.
In the study and understanding of PDEs and mathematical analysis, it is difficult to overemphasize the importance of stability and regularity. One of the most challenging problems in modern theoretical science and engineering is developing a strategy for solving a PDE and ultimately interpreting the solution. In this context, irregular solutions to PDEs are easy to come by. To account for this, convention introduces the concept of mathematically wellposed solutions (in the sense of Hadamard [29]). Wellposedness circumscribes what is meant, in some sense, by a “good” or admissible solution to a PDE. It is within this framework that the concept of classical and regular solutions, strong solutions, weak solutions, and so forth has evolved.
A “regular solution” can be thought of as one where the actual PDE is satisfied in an intuitively sensible and classical way. Namely, in a regular solution, enough derivatives exist at every point in the solution so that operations within the PDE make sense at every point in the entire domain of relevance. A wellposed regular solution is formally defined to provably exist, be unique, and depend continuously on the data given in the problem [30]. In contrast to regular solutions, the concept of a weak solution was developed to substantially weaken the notion of what is meant by a solution to a PDE. In weak solutions derivatives only exist in a “weak” distributional sense, and the solution is subsequently recast in terms of distributions and integral norms.
For example, in the case of a "strong solution" to a PDE, which is a "weak solution" that is bounded and stable in an appropriate norm and thus preserves important geometric properties, a solution may admit, e.g., pointwise discontinuities and yet remain admissible. Such solutions are the strongest, and/or most well-behaved, of the weak solutions; still, interpreting the space of strong solutions from a physical point of view can be challenging. Moreover, weak solutions in the sense of Hadamard are only proper solutions to a PDE if they are adjoined with a concept of stability that implies a bound in a given normed space. This is a rather practical condition, in that a solution in a distributional sense that is not bounded in norm is too permissive of pointwise "variation" to be physically interpretable.
A substantial fraction of solutions arising in applied science are weak solutions that demonstrate stability in notably irregular spaces. Even the simplest of equations can admit solutions of highly irregular character (e.g., the Poisson equation [31]). In fact, in many applied areas the concept of turbulence becomes important, which from a mathematical point of view deals with PDEs in regimes where neither regularity nor stability is observed, a fact with broad conceptual ramifications [32]. This irregular and unexpected behavior in PDE analysis leads to a rather basic and inevitable question that we address in the next section: how well do our specific discrete numerical approximate solutions to certain PDEs capture their rigorously established mathematical properties?
3.1. Numerical solutions
Discrete numerical systems cannot, in general, exactly replicate infinite dimensional spaces (except in an abstract limit), and thus approximate numerical solutions to PDEs remain just that: approximations to mathematical PDEs. As a matter of practice, weak formulations are often implemented in numerical methods, but while these methods often preserve the spirit of their mathematical counterparts, they subsequently lose much of their content. For example, solutions in discontinuous finite element methods are frequently couched in terms of residuals and are presented along with numerical stability results, when in effect the discrete spaces they are purported to represent are simply piecewise continuous polynomial spaces that are capable of representing only a tiny fraction of the mathematically admissible solutions.
It is then a natural question to ask, what has become of all the admissible solutions that even the most rigorous numerical methods discard by construction? And how exactly has it been determined which solutions are to be preserved? A pragmatist might respond with the argument that numerical methods develop, primarily, to deliver practical utility, adjoining as a central thematic pillar to their evolution the notion of “physical relevance,” where physical relevance is notionally defined in terms of the utility that the numerical solution provides in the elucidation of a practical observation.
While this may certainly be the case, and while we adopt the pragmatic perspective in this paper, we would like to raise a consideration to the reader: if numerical methods, particularly those driving the evaluation of predictive simulation models, employ schemes that to some extent arbitrarily select solutions of a particular kind from the space of mathematically admissible solutions, and then use that subset of solutions as the justification upon which interpretive, predictive physical understanding is derived, then how can one evaluate the predictive capability of a model without directly comparing it to observed data?
3.2. Shocks
Perhaps the simplest type of irregularity that confronts a numerical method is a numerical shock. A numerical shock need not be a mathematical discontinuity, and as such it is a fundamentally different concept than both a mathematical shock and a physical shock. One definition of a numerical shock is: the numerical response (in the form of numerical instability) of a method when the representation space that the method is constructed in does not support the function space it is interrogating. Examples of this behavior can be found from the Gibbs phenomenon in spectral methods to Runge's phenomenon in polynomial-based methods and elsewhere. More broadly, one can simply posit that the radius of convergence of the solution must be contained within the support of its discrete representation, or else instabilities are likely to arise [33, 34]. As a consequence, numerical approaches to the shock problem display unique characteristics that often expose deep numerical insights into the nature of the method. The unique signature that a numerical method displays when dispersing its instabilities throughout a solution foretells the predictive characteristics of the simulation itself.
Many successful numerical methods can be viewed as variations of general strategies with important but subtle differences in the underlying detail. Consider, for example, finite volume methods (FVM), spectral, pseudospectral, and Fourier Galerkin methods, continuous and discontinuous Galerkin finite element methods, etc. In each of these cases, the system of PDEs is cast into a weak formulation, where the spatial derivatives are effectively transferred to analytically robust basis functions, and fluxes are recovered from integration by parts. For example, if we choose some adequately smooth test function $\varphi$, multiply it by some transport equation $\partial_t u + \nabla \cdot f(u) = 0$, where $u$ is a state vector, and integrate by parts, we have an equation that can be written in the form:

(3.8)
$$\int_\Omega \varphi \circ \partial_t u \, dx - \int_\Omega \nabla\varphi \circ f(u) \, dx + \oint_{\partial\Omega} \varphi \circ \big( f(u) \cdot n \big) \, ds = 0,$$

where $\circ$ denotes componentwise multiplication.
To see the relation between distinct numerical methods, recall here that when $\varphi$ is a high-order polynomial, and the discrete domain is, for example, a single element, the recovered method is a type of spectral element method (SEM). Whereas, when the discretization is turned into a finite element mesh comprised of subintervals, and the basis functions are chosen to be continuous or piecewise discontinuous polynomials, we have the continuous Galerkin (CG) or discontinuous Galerkin (DG) finite element methods, respectively. Similarly, when the elements are viewed as finite volumes and the basis functions are chosen to be piecewise constant, a finite volume method is easily recovered.
As discussed above, the weak formulation (3.8) of a method should not really be viewed, in and of itself, as an advantage. Indeed, the DNN loss functions (2.4) could equally be cast into a weak formulation (as in [2], for example), paying a price in the simplicity of the method. Similarly, by using different activation functions (e.g., $\sigma$ taken as the ReLU), the universal approximation theorem can, in a limiting sense, recover all Lebesgue-integrable function spaces [35]. We sidestep these issues, viewing them as unnecessary complications that muddy the simplicity and elegance of the DNN solution. Instead, we choose the hyperbolic tangent activation function and interpret solutions to shock problems through the lens that smooth solutions are dense in the relevant solution spaces.

3.3. Euler's equations and the Sod shock tube
For simplicity, we start by considering the dimensionless form of the compressible Euler equations in 1D, solved over only the space-time domain $[0,T] \times \Omega$, with $\Omega \subset \mathbb{R}$ and boundary $\partial\Omega$. We are interested in solutions to the initial-boundary value problem:

(3.9)
$$\partial_t U + \partial_x F(U) = 0,$$

for a state vector $U = (\rho, \rho u, E)^T$ and corresponding flux $F(U) = (\rho u, \rho u^2 + p, u(E + p))^T$. The state vector is comprised of the density $\rho$, velocity $u$, and total energy of the gas $E$, while the pressure-temperature relation is the single-component ideal gas law $p = (\gamma - 1)\big(E - \tfrac{1}{2}\rho u^2\big)$. The constant $\gamma$ denotes the adiabatic index and, unless otherwise noted, is taken throughout the remainder of the paper to be $\gamma = 1.4$, while the speed of sound is $c = \sqrt{\gamma p / \rho}$.
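The flux and thermodynamic closure above can be sketched in a few lines of plain NumPy (an illustrative fragment, not the paper's solver code):

```python
import numpy as np

GAMMA = 1.4  # adiabatic index gamma

def euler_flux(U):
    """Flux F(U) = (rho*u, rho*u^2 + p, u*(E + p)) of the 1D Euler
    system (3.9) for the conservative state U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)  # ideal gas closure
    return np.array([mom, mom * u + p, u * (E + p)])

def sound_speed(rho, p):
    """Speed of sound c = sqrt(gamma * p / rho)."""
    return np.sqrt(GAMMA * p / rho)

# Left state of the Sod problem: rho = 1, u = 0, p = 1, so E = p/(gamma-1) = 2.5
F_left = euler_flux(np.array([1.0, 0.0, 2.5]))
```

In the DNN formulation these algebraic relations appear inside the residual of (2.4), with the derivatives supplied by automatic differentiation.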
To analyze the behavior of the network on a simple and classic 1D shock problem, we solve the Sod shock tube. The initial condition is chosen to be piecewise constant:

(3.10)
$$(\rho, u, p)(0, x) = \begin{cases} (\rho_L, u_L, p_L), & x \le x_0, \\ (\rho_R, u_R, p_R), & x > x_0, \end{cases}$$

where the left state is defined by $(\rho_L, u_L, p_L) = (1, 0, 1)$ and the right by $(\rho_R, u_R, p_R) = (0.125, 0, 0.1)$. This also defines the left and right Dirichlet boundaries. The exact solution to this system is readily obtained by a standard textbook procedure [36]. Except for in section 8, the simulation is run to a fixed final time $T$.
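The initial data (3.10) can be sampled as follows; note that the diaphragm location $x_0 = 0.5$ used here is an assumed (standard) placement:

```python
import numpy as np

def sod_initial_state(x, x0=0.5):
    """Piecewise-constant Sod initial data (3.10) in primitive variables.
    The diaphragm location x0 = 0.5 is an assumption (standard placement)."""
    x = np.asarray(x, dtype=float)
    left = x <= x0
    rho = np.where(left, 1.0, 0.125)   # (rho_L, rho_R) = (1, 0.125)
    u = np.zeros_like(x)               # gas initially at rest on both sides
    p = np.where(left, 1.0, 0.1)       # (p_L, p_R) = (1, 0.1)
    return rho, u, p

rho, u, p = sod_initial_state(np.linspace(0.0, 1.0, 5))
```

In the DNN setting this function supplies the targets for the initial-condition residuals in (2.5), evaluated at batch points drawn on the $t = 0$ slice.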
4. Numerical Experiments
In this section we show the basic behavior of DNN solutions to the Sod shock tube. In section 4.1 we show how the DNN solution behaves without any regularization and discuss how that compares to other numerical methods. In section 4.2 we discuss how to add analytic diffusion to regularize the solution.
4.1. An unlimited solution using DNNs
The standard solution to the Sod shock tube problem for (3.9) poses real challenges to most numerical methods. In the standard DG method, for example, when the polynomial degree is nonzero, the solution becomes numerically unstable without the use of some form of slope limiting [37, 38]. Similarly, when FVM is evaluated at higher than first-order accuracy, the absence of flux limiting leads to unstable solutions [39, 40]. This behavior is also observed in spectral methods [41, 42], continuous Galerkin methods [43], and so on.
One of the more robust regimes for solving shock problems can be found in piecewise-constant DG methods with degree-zero basis functions (or, equivalently, FVM methods with first-order accuracy), wherein both a stable and relatively robust solution can be readily calculated, but at the cost of large numerical diffusion. Indeed, the ability to solve these types of irregular problems helps explain the preference practitioners often demonstrate for low-order methods when computing solutions to irregular systems, where resolving solution features relies solely on mesh/grid refinement.
In contrast to lower-order approaches, the DNN solution to the Sod shock tube problem is able to exploit the relative flexibility of DNN function approximators, as determined by the composition of affine transformations and elementwise activation nonlinearities. For example, with hyperbolic tangent activations, the DNN solution to the shock tube problem becomes a high-order-accurate approximation to a regular solution of the Sod problem, in the sense of the approximation order achievable by a linear combination of smooth transcendental functions.
In the case of the DNN solution, even though the discontinuous shock fronts display unbounded derivatives in the classical sense, the DNN is able to find a relatively well-behaved solution, as shown in Fig. 1. The DNN formulation with the hyperbolic tangent activation, if the optimization is to converge to a parameter point $\theta^*$, is predicated on a regular solution. Failure to converge could arise if the derivatives at the locus of discontinuity, and the components of $\theta$, enter unstable unbounded growth over the iterations of the optimization protocol. Indeed, the DNN approximates the discontinuous mathematical shock precisely in the limit in which certain components of $\theta$ tend to infinity. We observe that in practice, however, this does not happen. The inherent spectral bias [44] of DNNs seems to contain the potential instability of the solution until the optimization has settled in a "reasonable" local minimum. This leads to an interesting feature of the DNN solutions, namely that they seem to demonstrate robustness while broadly maintaining many of the small-scale features of the exact solution, even in the presence of irregularity in the formulation.
4.1.1. DNNs as gridless adaptive inverse PDE solvers
To explain why an approximate regular DNN solution to a shock problem can perform as well as it does, as shown in Fig. 1, we note that a DNN solver can really be viewed as a type of gridless adaptive inverse solver. First, the absence of a fixed grid in the evaluation of the loss functions (2.4) and determination of the optimization direction (2.7) means that the probability density $\nu$ can be sampled in space-time as densely as the method can computationally afford, at every iteration and for as many iterations as can be afforded. Given the computational resources, this can lead to a relatively dense, adaptively sampled interrogation of the space-time domain. Moreover, adaptive mesh methods for shock problems have long been known to be highly effective [45, 46], particularly when shock fronts can be precisely tracked. However, AMR and adaptive methods are constrained by their meshing/gridding geometries, even as isoparametric and isogeometric methods have been developed to, at least in part, reduce these dependencies [47].
The second pertinent observation about the DNN solution is that because it utilizes an optimization method to arrive at the solution, it can be viewed as a type of inverse problem. Unlike a forward numerical method, which accumulates temporal and spatial numerical error as it runs, the DNN solution becomes more accurate with iterations, being a global-in-space-time, inverse-like solver. This means that it becomes entirely reasonable to consider running such systems in single or even half precision floating point formats, leading to potential performance benefits in comparison to more standard PDE solvers.
4.2. Diffusing along unbounded gradients
As discussed above, solving a shock problem without adding some form of diffusion (often in the form of limiters or low-order-accurate methods) tends to lead to unstable solutions in classical numerical methods. Similarly here, even in the case of the DNN solution, when searching for an approximation to the regular solution shown in Fig. 1, the resulting solution, though reasonably well-behaved, is of course spurious along the undefined shock fronts.
As in all numerical methods, to ameliorate this problem, all that needs to be done is to add some small amount of diffusion to constrain the derivatives along the shock fronts. In this case, we slightly alter (3.9) into the non-conductive compressible Navier-Stokes type system,

(4.11)
$$\partial_t U + \partial_x F(U) = \partial_x F_\varepsilon(U),$$

given a dissipative flux $F_\varepsilon$. Again here we start by merely solving over the space-time domain $[0,T] \times \Omega$, given a fixed and positive constant viscosity $\varepsilon$. As we show in more detail below, adding a modest amount of viscosity leads to a DNN solution that is competitive with some of the most effective numerical methods available for solving the Sod shock tube problem.
5. Numerical Comparison Tests
In this section we show numerical comparison tests between DNNs, the DG finite element methods (FEMs), and FVMs for the Sod shock tube problem. We show solutions in the eyeball norm for each, relative to, first, grid density, and then degrees of freedom in the method. It should be noted that accuracy in the DNN solution is somewhat arbitrary, as it can be tuned via metaheuristics to search for local minima that improve upon basic accuracy measures, such as the mean square error. This is in contrast to, for example, DG and FVM methods, which have established convergence criteria, so that expected improvements in smooth solutions can be anticipated as a function of spatial order and mesh width.
Table 1: MSE of each solution in density, pressure, and velocity.

| Solution Type | Density | Pressure | Velocity |
| --- | --- | --- | --- |
| Unlimited DNN | 0.00199 | 0.00074 | 0.01272 |
| FVM 1st Order | 0.00117 | 0.00139 | 0.00804 |
| FVM 2nd Order | 0.00030 | 0.00019 | 0.00164 |
| DG, | 0.00163 | 0.00259 | 0.01413 |
| DG, | 0.00314 | 0.00811 | 0.02945 |
| DNN, | 0.00020 | 0.00025 | 0.00136 |
The DNN solution method is effectively adaptive and gridless and relies on stochastic optimization to establish minimizing solutions. In the standard DNN formulation of (3.9), a list of tunable parameters would include the choice of activation function $\sigma$, the number of optimization iterations, the number and distribution of points sampled from $\nu$, the optimization method stopping criteria, the form and weighting of the loss functions $\ell_i$, the learning rate $\eta$ of the optimizer, etc. The result is that comparing the accuracy of a DNN solution to a more standard forward solver in some specified norm is not particularly insightful (though we still show the MSE of the solutions in Tables 1 and 2). Rather, below we try to establish some criteria by which to compare DNN solutions to FVM and DG-FEM solutions that do not draw us into the quicksand of hyperparameter optimization.
5.1. Solutions as a function of spatial grid density
One way to evaluate the relationship between the DNN method and FVM or DG-FEM is by comparing the relative density of effective residual evaluation points in space. Broadly, in the DG solution, for degree-$p$ polynomial solutions, this corresponds to $(p+1)K$ for $K$ elements. In the FVM method this corresponds to simply $K$, the number of finite volumes. In the DNN, the residual is evaluated at batch points determined by the sampling distribution $\nu$. One nuance in comparing the methods through residual evaluation points is that in standard FVM and DG-FEM methods, these points are spatially fixed. In the DNN approach, however, the batch points are resampled after every evaluation of the objective function gradients. As such, it might seem more appropriate to compare DNNs to adaptive DG-FEM and FVM that use adaptive mesh refinement (AMR). Note, however, that adaptive forward problems using DG-FEM and FVM are adaptive in a fundamentally different way relative to the residual evaluation as compared to the DNN method. In forward problems, the residual evaluation shifts in space relative to the current solution in time. In the DNN solution, however, though the batch points are resampled, the samples can be chosen independent of the solution behavior, and are so chosen over the whole space-time domain.
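The point-counting comparison above can be tallied in a few lines; this sketch assumes the standard $(p+1)$-nodes-per-element count for DG, and the particular sizes are illustrative only:

```python
def dg_points(K, p):
    """Spatial residual evaluation points for degree-p DG on K elements,
    assuming (p + 1) evaluation nodes per element."""
    return (p + 1) * K

def fvm_points(K):
    """One residual evaluation per finite volume."""
    return K

def dnn_points(batch_size):
    """Batch points resampled from nu at each optimizer iteration."""
    return batch_size
```

Matching these counts across methods is the rule of thumb used for the grid-density comparison in this section, e.g. `dg_points(100, 1)`, `fvm_points(200)`, and `dnn_points(200)` all describe 200 effective spatial evaluation points.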
Table 1: Mean square error of the computed density, pressure, and velocity.

Solution Type     Density    Pressure   Velocity
FVM, 1st order    0.00027    0.00017    0.00136
FVM, 2nd order    0.00011    0.00005    0.00084
DG                0.00021    0.00013    0.00182
DG                0.00042    0.00029    0.00257
DNN               0.00020    0.00025    0.00136
This dependence of the sampled batch points on both space and time distinguishes the DNN solution even from adaptive FVM and FEM solutions. Thus, in order to compare the spatial resolution of the two different types of solutions, we compare solutions using a rule of thumb that equates the density of DNN batch points with the density of fixed residual evaluation points.
Results of this comparison are shown in Fig. 2 and Table 1, where the time step is chosen sufficiently small. Here the DGFEM solution is run at two resolutions, using a Rusanov Riemann solver with adaptive hierarchical slope limiting [37]. Both first- and second-order FVM solutions are run at a comparable resolution, where the first-order solution is unlimited and the second-order solution uses a standard per-face slope limiter. The DNN solution is run at a matched sampling density. The results indicate that the DNN solution performs favorably relative to the FVM and DGFEM solutions as a function of spatial grid density.
5.2. Solutions as a function of degrees of freedom
Another way of comparing the DNN solution to the FVM and DGFEM methods is to look at solutions as a function of their respective degrees of freedom (DoFs). Heuristically this can be thought of as comparing the number of “tunable parameters” within the representation space of the solution.
Since the DNN solution is global in space and time, we consider the DoFs spanning the whole space-time domain. In the DG solution this corresponds to the spatial DoFs multiplied by the number of temporal grid points and the number of stages in the Runge-Kutta discretization. The FVM setting is analogous. In the case of the DNN, on the other hand, the degrees of freedom are recovered by counting the scalar components of the vector parametrizing the network architecture (2.2), which yields
(5.12)  $\mathrm{DoF} = \sum_{i=1}^{H+1} \left( w_{i-1} w_i + w_i \right), \qquad w_0 = d_{\mathrm{in}}, \quad w_{H+1} = d_{\mathrm{out}},$
where $H$ is the number of hidden layers, $w_i$ is the width of the $i$-th layer, and $d_{\mathrm{in}}$ and $d_{\mathrm{out}}$ are the input and output dimensions, respectively.
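This dense-layer count can be sketched directly; the input dimension 2 (space-time) and output dimension 3 used as defaults below are illustrative assumptions, not values specified in the text:

```python
def dnn_dofs(widths, d_in=2, d_out=3):
    """Degrees of freedom (weights + biases) of a standard fully connected
    network: each layer contributes fan_in * fan_out weights plus fan_out
    biases. d_in=2 (x, t) and d_out=3 are illustrative defaults."""
    dims = [d_in] + list(widths) + [d_out]
    return sum(a * b + b for a, b in zip(dims[:-1], dims[1:]))

# e.g., one hidden layer of width 4: (2*4 + 4) + (4*3 + 3) = 27 parameters
```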
The results are presented in Fig. 3 and Table 2. In the DNN solution, the degrees of freedom follow from the chosen network depth and width. The DGFEM solutions are run as in Section 5.1, with the element counts, polynomial orders, and temporal grids chosen to fix the resulting DoF counts at sixth and third order. The first-order-accurate FVM solution is run as in Section 5.1, except that a forward Euler time integrator is used; at second order the FVM method uses a predictor-corrector method. Again, the results show that the DNN solution compares favorably as a function of degrees of freedom.
6. Natural Extensions
In this section we discuss a pair of natural extensions to the DNN framework applied to the Sod shock tube problem. The first is an application of a type of “simulated annealing” along the viscous and/or dissipative parameter directions, which can be used to improve the effective time to convergence of the method. Next we show how, even more powerfully, solving the DNN framework along a full parameter axis, in this case the viscous axis, can lead to a simultaneous and fully parameterized solution over the entire parameter space at nearly no additional cost.
6.1. Dissipative annealing
One of the potential limitations of the DNN approach, in comparison to other numerical methods, is the compute time-to-convergence. Discussion of convergence in the DNN/PDE setting is complicated by the fact that “convergence” refers to different concepts in numerical PDE analysis and in numerical optimization. In numerical PDE analysis, convergence usually refers to the rate at which an approximate solution tends to the exact solution as a function of the representation parameters, e.g., the mesh width and/or polynomial order. In iterative numerical optimization, however, convergence refers to the rate at which the parameters converge to a (possibly local) minimum as a function of the number of iterations. The global minimum may exist only in the limit in which some components of the parameter vector tend to infinity, so heuristics mandate that we seek convergence to an acceptable local minimum. It is thus difficult to arrive at a formal definition of time-to-convergence in the DNN setting. In this section, we resort to a hybrid heuristic between the two concepts of convergence discussed above, and use “time-to-convergence” to mean the number of computational cycles required to reach a level of accuracy comparable to that of a more traditional PDE solution method. It is in this sense that DNN solutions can appear noticeably more expensive, though below we discuss some of the nuances that underlie this observation, and why the unique capacity and flexibility of DNN solutions makes the time-to-convergence less restrictive than it seems.
The premise we assume here about slowly converging solutions (in the sense of optimization) is that the large-scale features of a smooth solution are relatively easy to converge to, but when they are accompanied by finer-scale features, the optimization noise from attempting to fit the finer scales can obscure the larger-scale features from the optimizer. To mitigate this effect, inspired by the simulated annealing method [48], we recast the dissipative flux in (4.11) as
where the numerical viscosity now depends on the iteration number, scaling a base viscosity by a schedule function. The schedule function is chosen as a fractional stepwise function bounded from above by unity.
The idea of this schedule is to converge early and fast in the iteration cycle to an overtly smooth solution of (4.11) with large viscosity. Since such a solution has dramatically fewer fine-scale features, it is conjectured that the optimizer can more easily find a stable minimum of the objective function for such a smooth solution. As a consequence, fewer total iterations are required to converge to the minimum.
We have tested this idea on the Sod problem, and have found that with minimal effort it seems to reduce the number of iterations needed to arrive at similar results. For example, in Figs. 4–6, we test a DNN using a decreasing learning rate schedule. The base case is run with a fixed minibatch size and an initial learning rate that is reduced by a constant factor every 25,000 iterations, for a total of 100,000 iterations. This case is compared to a dissipatively annealed solution obtained with the same minibatch size, but with fewer iterations per learning-rate decrement, and with the viscosity reduced after each boundary. As a result, the dissipatively annealed solution takes only 60,000 iterations to reach the final viscosity.
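A minimal sketch of such a dissipative annealing schedule follows; the base viscosity, boundaries, and decrement factor are hypothetical placeholders, not the values used in the runs above:

```python
def annealed_viscosity(step, eps0=0.1, boundaries=(15000, 30000, 45000), factor=0.5):
    """Stepwise dissipative-annealing schedule: start with a large
    artificial viscosity so the optimizer first converges to an overtly
    smooth solution, then cut the viscosity by a fixed factor at each
    iteration boundary. The multiplier factor**k is a fractional stepwise
    function bounded above by unity; all numbers are illustrative."""
    k = sum(step >= b for b in boundaries)  # number of boundaries passed
    return eps0 * factor ** k
```

In a training loop this would be evaluated once per iteration, alongside the learning-rate schedule, so the viscosity and learning rate decrease together.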
As is clear from Figs. 4–6, the results are largely indistinguishable, and in some ways the dissipatively annealed solution actually looks better at 60% of the computational cost. This result, however, should be taken with a grain of salt. In this example the two runs are set up exactly the same, up to the dissipative annealing algorithm, and it is not clear that this is an entirely fair comparison given the number of hyperparameters that can be tuned in each case.
6.2. Simultaneous parameter scans
Dense parameter space exploration is the most reliable way to develop predictive input-output mappings for scenario development, optimal design and control, risk assessment, uncertainty quantification, etc., in many real-world applications. Without understanding the response of the system to its parameters, be they variable boundary conditions, interior shaping constraints, or constitutive coefficients, it is all but impossible to examine the utility an engineered model might have in some particular design process. PDEs clearly help in this regard, since they are fully parametrized models of anticipated physical responses. That being said, forward simulations of PDEs are also often fairly expensive, frequently requiring large HPC machines to simulate even modest systems [49, 50]. Consequently, parameter space exploration of a PDE system is usually both very important and very computationally expensive. To mitigate the expense of running forward models, surrogate models [51] and multifidelity hierarchies [52] are often conceived in order to develop cost-effective ways of examining the response of a PDE relative to its parametric dependencies. One of the ways these methods function is by developing reduced models that emulate the parametric dependencies from a relatively sparse sampling of the space. In this way it becomes possible, for a sufficiently smooth response surface, to tractably parametrize the input-output mapping and thus interrogate the response at significantly reduced cost, lending itself to statistical inference, uncertainty quantification, and real-world validation studies.
However, when, as is commonly the case, the response surface is not sufficiently smooth, or its correlation scale-length cannot be determined a priori, these reduced-order mappings can become highly inaccurate. In such circumstances, having a way to emulate the exact response surface would clearly be of high practical value. As a potential solution, one of the most powerful immediate features of the DNN framework is the ability to perform simultaneous and dense parameter space scans with very little effort. The framework is thus a natural, and in some sense exact, emulator for complicated system response models. In the case of the Sod shock problem this can be accomplished by simply recasting (4.11) relative to its adiabatic index, where instead of solving over space-time alone, the solution is found over the parameter-augmented space.
Intuitively one might expect the increase in dimensionality to be prohibitive, as the input space grows from a discrete 2D system to a discrete 3D system. However, in analogy with the observation of Section 6.1, where the viscous parameter is dissipatively annealed, this is not what is observed. Instead, using the same DNN parameter sets, the same minibatch size, and the same learning rate schedule, optimization over the parameter-augmented input space reaches similar loss values with almost no computational overhead. Contour and time-slice plots of the solutions are shown in Fig. 7 and Fig. 8 for the density, velocity, pressure, and temperature, each as a function of the adiabatic index.
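The recasting amounts to sampling collocation points over the parameter-augmented domain rather than over space-time alone; a sketch follows, with illustrative bounds (including the adiabatic-index interval, which is an assumption):

```python
import numpy as np

def sample_augmented_batch(n, x_bounds=(0.0, 1.0), t_bounds=(0.0, 0.2),
                           gamma_bounds=(1.2, 1.8), rng=None):
    """Collocation points drawn over the parameter-augmented domain
    (x, t, gamma) rather than (x, t): minimizing the residual over such
    batches trains a single network simultaneously in the adiabatic
    index. All bounds are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    cols = [rng.uniform(lo, hi, size=n)
            for lo, hi in (x_bounds, t_bounds, gamma_bounds)]
    return np.stack(cols, axis=1)  # network input dimension grows from 2 to 3
```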
Remarkably, in this example, the DNN framework demonstrates that it is as cost-effective to perform dense parameter space exploration along the dimension of the adiabatic index as it is to solve at a single fixed value. It remains unclear how robust this behavior is over the physical parameters of the model, in this case (4.11). But however robust this behavior turns out to be, even if only across isolated and specific parameters, it demonstrates a remarkably advantageous aspect of the DNN formulation.
7. DNNs With Residual Connections and Multiplicative Gates
As pointed out in [9], the architecture of a DNN can influence its behavior. In [9] it is claimed that an LSTM-inspired feedforward network architecture with residual connections and multiplicative gates can help improve performance for some parabolic PDE problems. These models are effectively an alternative to the more standard deep neural network architecture presented in (2.2).
We implement these LSTM-like architectures for our 1D mixed hyperbolic-parabolic PDE system (3.9) to see how they behave relative to standard DNNs in the context of more irregularity in the solution space. We test the Euler system (3.9) on the following architecture, comprising a number of hidden layers:
(7.13)  
where the activation is the sigmoid function and the network parameters are given by:
(7.14)  
where the number of units per layer is held fixed. For more details on the network architecture we refer the reader to [9].
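For concreteness, a single gated, residual layer of this general type might look as follows in a NumPy sketch; the gate wiring here follows a common LSTM-like pattern and is an illustration, not a transcription of (7.13):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_like_layer(x, S, p):
    """One hidden layer with multiplicative gates and a gated residual
    state update, in the spirit of the LSTM-like architecture of [9].
    x is the network input (kept as a skip connection into every layer),
    S is the running hidden state, and p holds the per-layer input
    matrices U*, state matrices W*, and biases b* (names are assumed)."""
    Z = sigmoid(x @ p["Uz"] + S @ p["Wz"] + p["bz"])           # update gate
    G = sigmoid(x @ p["Ug"] + S @ p["Wg"] + p["bg"])           # forget-style gate
    R = sigmoid(x @ p["Ur"] + S @ p["Wr"] + p["br"])           # reset-style gate
    H = sigmoid(x @ p["Uh"] + (S * R) @ p["Wh"] + p["bh"])     # candidate state
    return (1.0 - G) * H + Z * S   # gated residual combination of H and S
```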
To understand whether the standard DNN architecture is more effective, in some sense, than (7.13), it is essential to recognize that the degrees of freedom scale differently for the LSTM-like system (7.13) than they do in (5.12): the LSTM-like system is substantially more expensive per network layer. More explicitly, assuming that each layer has the same width, the degrees of freedom in the LSTM-like system can be calculated as:
(7.15) 
Comparing (5.12) to (7.15) for a fixed number of layers, and fixing the input and output dimensions for (3.9), the relationship between the number of units per layer in the LSTM-like network and in the standard DNN can be computed by matching the two counts and rounding up to the nearest integer width.
For our numerical test, we fix the depth and the reference LSTM-like width, and solve the matching relationship for the corresponding DNN width. The results are presented in Fig. 9, and show fairly unambiguously that for this particular test case the LSTM-like architecture does not perform as well as the standard DNN. As above, this result must be taken in context. It may well be, for example, that hyperparameters tuned for standard DNNs do not suit the LSTM-like networks, and that the result presented is not a proper comparison. Again, fully revealing the relationship between these architectures, even on just this one simple hyperbolic test case, would require a full study of its own.
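The width matching can be sketched numerically as below; the per-layer gate count of four in `lstm_like_dofs` is an assumption standing in for the exact constants of (7.15), which are not reproduced here, and the input/output dimensions are illustrative:

```python
def dense_dofs(width, n_layers, d_in=2, d_out=3):
    """Parameter count of a uniform-width dense network, per (5.12)."""
    dims = [d_in] + [width] * n_layers + [d_out]
    return sum(a * b + b for a, b in zip(dims[:-1], dims[1:]))

def lstm_like_dofs(width, n_layers, d_in=2, d_out=3, n_gates=4):
    """Hypothetical count for the LSTM-like network: each hidden layer
    carries n_gates gate blocks, each with an input matrix (d_in x w),
    a state matrix (w x w), and a bias, plus a dense input layer and a
    linear output layer. The gate count and wiring are assumptions."""
    per_layer = n_gates * (d_in * width + width * width + width)
    return (d_in * width + width) + n_layers * per_layer + (width * d_out + d_out)

def matched_dense_width(width_lstm, n_layers, d_in=2, d_out=3):
    """Smallest dense width whose DoF count reaches that of the LSTM-like
    network, i.e. the integer (ceiling) solution of the matching relation."""
    target = lstm_like_dofs(width_lstm, n_layers, d_in, d_out)
    w = 1
    while dense_dofs(w, n_layers, d_in, d_out) < target:
        w += 1
    return w
```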
That said, from a purely practical point of view, the implementation of the DNN versus that of the LSTM-like network leads to architectures with a large disparity in the number of graph edges. The DNN has effectively one dense layer per level of depth, while the LSTM-like network has, in some sense, five layer-like networks per level, as seen in (7.13). The result is that in our implementation, even at a fixed number of degrees of freedom, the LSTM-like network takes almost five times longer in compute time than the standard DNN at a fixed number of iterations. Again, it is not clear whether this slowdown can be reduced by a more clever implementation of the LSTM-like networks, or whether it is simply a result of the increased complexity of the network graph.
8. Data-enriched PDEs
The simplicity and apparent robustness of DNNs for solving systems of nonlinear and irregular differential equations raises the question: how should experimental data be incorporated into such a framework? Work combining DNN-based approaches to solving PDEs with conventional supervised machine learning has already started to emerge; we presume that this trend will only accelerate. Raissi et al. [10] introduced “data discovered PDEs,” in which relatively traditional parameter estimation is performed over differential operators that are discovered through optimization; for example, parameter estimation can be performed for a coefficient that factors through a differential operator. It is easy to see that this can be expanded to data-driven discovery of PDEs, where parameter hierarchies can be used to generate libraries of operators that are selected on the basis of system-specific physical criteria and constraints, in order to effectively “construct” and/or discover PDE systems from large data sets [53]. These types of approaches are emerging with increasing interest [54, 55, 56, 57, 58], and it is easy to see how they naturally lend themselves to DNN frameworks.
These types of empirical PDE discovery techniques offer clear advantages in cases where the systems are assumed to be too complex to easily construct first-principles models. However, researchers also find themselves in a different type of situation, where the physics model describing the system response is confidently prescribed, so that any mismatch between the model and the data raises as many questions about the experimental setup as it does about the model.
To explore this common circumstance, we address a situation relating to the experimental validation of a first-principles model system. First-principles models are predicated on the idea that the physical dependencies within an experimental system are, or can be, fully understood. Validation studies, however, frequently discover that this is not the case [59, 60]. Systematic behaviors, engineering details, and aleatoric and/or epistemic uncertainties can cascade, leading to systems whose behaviors are uncertain and/or different from those of the model systems anticipated to describe them.
In this circumstance, a researcher might be in a situation where a prescribed model system does not plausibly validate against the large quantities of high resolution experimentally measured data that have been collected. Some of the scientific questions one might want to raise in such circumstances are:

Are my experimental results reliable?

Is my model system sufficient to describe the experimental data?

How far from my model system is the measured data?

What is the form of the mismatch between the data and the model?

How might I enrich my model system to more accurately capture the observed behavior?

Are there characteristics in the mismatch that reveal missing physical subsystems?
Below we present a simple example of how to approach such a circumstance, and provide an outline of how such a series of questions can be systematically retired.
8.1. Hypothetical experimental scenario
We consider the following hypothetical scenario: an experimental test facility is set up to run experiments interrogating the behavior of transonic gas dynamics. Although the experimental setup is mildly exotic, the responses of the system are anticipated to obey classical gas dynamics (3.9). Experiments are run, and a process is initially set up to validate relative to the following dimensionless ideal model system:
(8.16)  
where the total energy density is given by
and the initial-boundary data is set the same as in (3.9). A constant heat conductivity is also assumed. In the process of validating the system, the classical gas dynamics model (8.16) repeatedly fails to validate strongly against the experimentally measured data. After making sure the measurement equipment is well calibrated, so that confidence in the data is high, the researchers are left with a quandary: what is happening?
A traditional approach to this problem might begin with a vigorous debate between the simulation experts and the laboratory experts, each side claiming systematic errors on behalf of the other. Both camps might proceed by testing their subsystems to the best of their ability, and come to the conclusion that nothing is wrong per se, but that some other physics is occurring in the test facility that has not been properly anticipated. The next step might then be to slowly add physics terms back into (8.16), followed by parameter estimation on the remaining uncertain parameters in the model until a better fit is recovered. In this case, however, such a process would be extremely slow and tedious, as illustrated below.
Unbeknownst to both the modellers and the experimentalists, the actual dynamics inside the experiment are those of a substantially more complicated system. In fact, it turns out that a feature of the experimental setup is inadvertently inducing ionization of the gas in the chamber. As a result, the gas actually behaves like a reactive photoactivated autocatalytic plasma in the center of the reactor, characterized exactly by the following equations:
(8.17)  
where
Here a magnetic field enters, thought inconsequential in the erroneously presumed absence of ions, along with the bulk velocity field of the gas. In this hypothetical scenario, just as for the model system (8.16), the initial conditions for all unknowns are initialized with a jump discontinuity, as inspired by the RP2 case in [61]. The remaining boundary conditions are listed in Table 3.

Table 3: Boundary conditions for the model (Mod.) and experimental (Exp.) systems.

Left Mod. B.C.    1.08    1.2     –       –         0.8796  –     –       –
Right Mod. B.C.   0.9891  0.0131  –       –         0.9823  –     –       –
Left Exp. B.C.    1.08    1.2     0.01    0.5       0.8796  2.0   3.6     2.0
Right Exp. B.C.   0.9891  0.0131  0.0269  0.010037  0.9823  2.0   4.0244  2.0026
The autocatalytic reactive subsystem is induced by virtue of an unexpected ionization pulse in the reactor, leading to an oscillating chaotic attractor characterized by the reactions:
(8.18)  
where the first and second chemical species carry their own number densities, the remaining species are excess bulk species, and the reaction rates are dimensional rate constants. The rate constants are set so that the condition for instability is satisfied, and for simplicity the excess bulk concentrations are taken equal. Photoactivation occurs only in the center of the reactor, so the reactive forcing is localized there. The solution to this subsystem is shown in Fig. 10.
To reveal the mismatch between the model and the underlying system actually observed in the experiment, a DNN is trained on the following enriched PDE system instead of (8.16):
(8.19)  
where
again with the same initial-boundary data from Table 3.
The solution of the experimental system (8.17) is now used as training data to supervise (8.19). Formally, the objective function from (2.4) is enriched by appending new loss functions built from the data of (8.17), where the new losses are mean square errors evaluated at the experimentally measured data support points. For simplicity, the support points are chosen as a regular point grid in space-time. In addition, the neural network outputs associated with the enrichment terms are regularized, where the sampling distributions are chosen the same as for (8.19), and the regularization losses carry small weights in order to minimize the penalization for accumulating nonzero enrichment terms.
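Schematically, the enriched objective combines the residual loss, the data-supervision MSE, and the enrichment regularization; all function names, signatures, and weights below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def enriched_loss(residual_fn, model, pde_points, data_points, data_values,
                  aux_fn=None, w_data=1.0, w_reg=1e-3):
    """Data-enriched objective: the PDE residual loss of the base
    formulation, plus a mean-square-error term tying the model to the
    measured data at the support points, plus a lightly weighted penalty
    on the enrichment outputs so that nonzero enrichment accumulates
    only where the data demand it. Weights are illustrative."""
    loss_pde = float(np.mean(residual_fn(model, pde_points) ** 2))
    loss_data = float(np.mean((model(data_points) - data_values) ** 2))
    loss_reg = 0.0 if aux_fn is None else w_reg * float(np.mean(aux_fn(pde_points) ** 2))
    return loss_pde + w_data * loss_data + loss_reg
```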
The resulting system matches the experimental data reasonably well. As can be seen in Fig. 11, the simulated density and the experimental density match to within the eyeball norm, with the initial mismatch corrected by the enrichment function. Though the total density nearly completely obscures the oscillations from the reactive subsystem in Fig. 10, it is off primarily near the initial state, where the reactive subsystem indicates a dramatic influx of mass relative to the species being tracked by the sensor data in the experiment. This type of mismatch in the mass conservation clearly indicates chemical reactions relative to the tracked species densities, unless, of course, the system is somehow not closed. In Fig. 12 the velocity mismatch is similarly convincing, showing complicated behavior with nontrivial variation that cannot be readily explained by the mass influx alone. Similarly, the mismatch in the temperature in Fig. 13 shows an unusual signature not present in the model system.
Returning to our list of potential questions about how the model system relates to the experimental system (8.17), we can now draw some fairly strong conclusions. First, there is clearly a mass mismatch in the measured species density. The mismatch is substantial and cannot be captured by simple parameter estimation; therefore, there is physics in the mass equation that is not accounted for by the prescribed model system (8.16). Moreover, it is straightforward to conclude that the large influx of mass makes a chemical reaction a very likely explanation of the experimental behavior, assuming the system is closed.
In contrast to the mass equation, the momentum and energy equations in (8.16) show perturbations away from the model Sod shock solution indicating that complicated system dynamics are occurring in the continuum. While this behavior might be more challenging to diagnose, it is clear in these cases too that the base equations (8.16), even with the enrichment terms added, cannot account for the system response, and physics is missing. As it happens, knowing simply the magnetic field variation of the experiment, shown in Fig. 14, is enough to relate the mismatch in both the velocity and the temperature to the magnetic field, since both exhibit signature features that track the magnetic field variation.
9. Conclusion
Gridless representations provided by deep neural networks, in combination with numerical optimization, can be used to solve complicated systems of differential equations that display irregular solutions. This requires fairly straightforward techniques and, for irregular solutions, benefits from the usual trick of adding numerical diffusion to smooth numerically unstable function representations. The DNN method compares favorably in accuracy and stability with more conventional numerical methods, and yet enables one-shot exploration of an entire parameter space. The incorporation of efficient optimization algorithms into the PDE solver also lends itself to an ease and simplicity of integrating advanced data-analytic techniques into physics-enriched model systems. Early results indicate that DNNs enable a simple and powerful framework for exploring and advancing predictive capabilities in science and engineering.
10. Acknowledgements
This work was supported by U.S. DOE Contract No. DE-FG02-04ER54742 and by the U.S. DOE Office of Fusion Energy Sciences Scientific Discovery through Advanced Computing (SciDAC) program under Award Number DE-SC0018429. This work was also supported by NSF grant AST-1413501. We acknowledge the Texas Advanced Computing Center at The University of Texas at Austin for providing HPC resources. Computations were performed on the Maverick2 GPU cluster.
References
 [1] B.-N. Jiang and G. F. Carey, “Least-squares finite element methods for compressible Euler equations,” Internat. J. Numer. Methods Fluids, vol. 10, no. 5, pp. 557–568, 1990.
 [2] P. B. Bochev and M. D. Gunzburger, Least-squares finite element methods, vol. 166 of Applied Mathematical Sciences. Springer, New York, 2009.
 [3] G. Strang and G. J. Fix, An analysis of the finite element method. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1973. Prentice-Hall Series in Automatic Computation.
 [4] T. Belytschko, Y. Y. Lu, and L. Gu, “Element-free Galerkin methods,” Internat. J. Numer. Methods Engrg., vol. 37, no. 2, pp. 229–256, 1994.
 [5] X. Li, “Error estimates for the moving least-squares approximation and the element-free Galerkin method in n-dimensional spaces,” Appl. Numer. Math., vol. 99, pp. 77–97, 2016.
 [6] I. Babuška, U. Banerjee, and J. E. Osborn, “Survey of meshless and generalized finite element methods: a unified approach,” Acta Numer., vol. 12, pp. 1–125, 2003.

 [7] J. Han, A. Jentzen, and W. E, “Solving high-dimensional partial differential equations using deep learning,” Proceedings of the National Academy of Sciences, vol. 115, no. 34, pp. 8505–8510, 2018.
 [8] J. Berg and K. Nyström, “A unified deep artificial neural network approach to partial differential equations in complex geometries,” Neurocomputing, vol. 317, pp. 28–41, 2018.
 [9] J. Sirignano and K. Spiliopoulos, “DGM: A deep learning algorithm for solving partial differential equations,” Journal of Computational Physics, vol. 375, pp. 1339 – 1364, 2018.
 [10] M. Raissi, P. Perdikaris, and G. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational Physics, vol. 378, pp. 686–707, 2019.
 [11] K. Xu and E. Darve, “The neural network approach to inverse problems in differential equations,” 2019.
 [12] P. D. Lax, Hyperbolic systems of conservation laws and the mathematical theory of shock waves. Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1973. Conference Board of the Mathematical Sciences Regional Conference Series in Applied Mathematics, No. 11.
 [13] J. Smoller, Shock waves and reaction-diffusion equations, vol. 258 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Science]. Springer-Verlag, New York-Berlin, 1983.
 [14] R. Abgrall and R. Saurel, “Discrete equations for physical and numerical compressible multiphase mixtures,” Journal of Computational Physics, vol. 186, pp. 361–396, APR 10 2003.
 [15] T. Ohwada and S. Kobayashi, “Management of discontinuous reconstruction in kinetic schemes,” Journal of Computational Physics, vol. 197, pp. 116–138, JUN 10 2004.
 [16] R. Chevalier, J. Blondin, and R. Emmering, “Hydrodynamic instabilities in supernova remnants: self-similar driven waves,” Astrophysical Journal, vol. 392, pp. 118–130, 1992.
 [17] Z. Zhang and G. Gogos, “Theory of shock wave propagation during laser ablation,” Physical Review B, vol. 69, Jun 2004.
 [18] J. S. Hesthaven and T. Warburton, Nodal discontinuous Galerkin methods, vol. 54 of Texts in Applied Mathematics. Springer, New York, 2008. Algorithms, analysis, and applications.
 [19] T. Gerstner and M. Griebel, “Numerical integration using sparse grids,” Numer. Algorithms, vol. 18, no. 3-4, pp. 209–232, 1998.
 [20] J. B. Garnett, Bounded analytic functions, vol. 96 of Pure and Applied Mathematics. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New YorkLondon, 1981.
 [21] R. A. DeVore and G. G. Lorentz, Constructive approximation, vol. 303 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. SpringerVerlag, Berlin, 1993.
 [22] G. A. Sod, “A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws,” Journal of Computational Physics, vol. 27, no. 1, pp. 1 – 31, 1978.
 [23] R. Aster, B. Borchers, and C. Thurber, “Parameter Estimation and Inverse Problems,” in Parameter Estimation and Inverse Problems, vol. 90 of International Geophysics Series, pp. 1–303, 2005.
 [24] N. Gordon, D. Salmond, and A. Smith, “Novel approach to nonlinear/non-Gaussian Bayesian state estimation,” IEE Proceedings F (Radar and Signal Processing), vol. 140, pp. 107–113, 1993.
 [25] D. A. Fournier, H. J. Skaug, J. Ancheta, J. Ianelli, A. Magnusson, M. N. Maunder, A. Nielsen, and J. Sibert, “AD Model Builder: using automatic differentiation for statistical inference of highly parameterized complex nonlinear models,” Optimization Methods & Software, vol. 27, no. 2, pp. 233–249, 2012.
 [26] P. J. Huber, “Robust estimation of a location parameter,” Ann. Math. Statist., vol. 35, pp. 73–101, 03 1964.
 [27] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” 2014.

 [28] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (Y. W. Teh and M. Titterington, eds.), vol. 9 of Proceedings of Machine Learning Research, (Chia Laguna Resort, Sardinia, Italy), pp. 249–256, PMLR, 13–15 May 2010.
 [29] J. Hadamard, “Sur les problèmes aux dérivées partielles et leur signification physique,” Princeton University Bulletin, vol. 13, pp. 49–52, 1902.
 [30] L. C. Evans, Partial differential equations. Providence, R.I.: American Mathematical Society, 2010.
 [31] L. Wang, F. Yao, S. Zhou, and H. Jia, “Optimal regularity for the Poisson equation,” Proceedings of the American Mathematical Society, vol. 137, no. 6, pp. 2037–2047, 2009.
 [32] P. Isett, “A proof of Onsager’s conjecture,” Annals of Mathematics, vol. 188, pp. 871–963, NOV 2018.
 [33] C. Domb, M. F. Sykes, and J. T. Randall, “On the susceptibility of a ferromagnetic above the Curie point,” Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences, vol. 240, no. 1221, pp. 214–228, 1957.
 [34] G. Mercer and A. Roberts, “A Centre Manifold Description of Contaminant Dispersion in Channels with Varying Flow Properties,” SIAM Journal on Applied Mathematics, vol. 50, no. 6, pp. 1547–1565, 1990.
 [35] Z. Lu, H. Pu, F. Wang, Z. Hu, and L. Wang, “The expressive power of neural networks: A view from the width,” in Advances in Neural Information Processing Systems 30 (I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, eds.), pp. 6231–6239, Curran Associates, Inc., 2017.
 [36] R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems. Cambridge University Press, 2002.
 [37] C. Michoski, C. Mirabito, C. Dawson, D. Wirasaet, E. J. Kubatko, and J. J. Westerink, “Adaptive hierarchic transformations for dynamically p-enriched slope-limiting over discontinuous Galerkin systems of generalized equations,” Journal of Computational Physics, vol. 230, pp. 8028–8056, 2011.
 [38] C. Michoski, C. Dawson, E. J. Kubatko, D. Wirasaet, S. Brus, and J. J. Westerink, "A Comparison of Artificial Viscosity, Limiters, and Filters, for High Order Discontinuous Galerkin Solutions in Nonlinear Settings," Journal of Scientific Computing, vol. 66, pp. 406–434, Jan. 2016.
 [39] Z. Xu, "Parametrized Maximum Principle Preserving Flux Limiters for High Order Schemes Solving Hyperbolic Conservation Laws: One-Dimensional Scalar Problem," Mathematics of Computation, vol. 83, pp. 2213–2238, Sep. 2014.
 [40] M. Dumbser, M. Kaeser, V. A. Titarev, and E. F. Toro, "Quadrature-free non-oscillatory finite volume schemes on unstructured meshes for nonlinear hyperbolic systems," Journal of Computational Physics, vol. 226, pp. 204–243, Sep. 2007.
 [41] Z. Xu, Y. Liu, and C.-W. Shu, "Hierarchical reconstruction for spectral volume method on unstructured grids," Journal of Computational Physics, vol. 228, pp. 5787–5802, Sep. 2009.
 [42] J. Giannakouros and G. Karniadakis, "Spectral Element FCT Method for Scalar Hyperbolic Conservation Laws," International Journal for Numerical Methods in Fluids, vol. 14, pp. 707–727, Mar. 1992.
 [43] S. Mabuza, J. N. Shadid, and D. Kuzmin, "Local bounds preserving stabilization for continuous Galerkin discretization of hyperbolic systems," Journal of Computational Physics, vol. 361, pp. 82–110, May 2018.
 [44] N. Rahaman, A. Baratin, D. Arpit, F. Draxler, M. Lin, F. A. Hamprecht, Y. Bengio, and A. Courville, “On the spectral bias of neural networks,” 2018.
 [45] M. Berger and P. Colella, “Local Adaptive Mesh Refinement for Shock Hydrodynamics,” Journal of Computational Physics, vol. 82, pp. 64–84, May 1989.
 [46] R. LeVeque, "Wave propagation algorithms for multidimensional hyperbolic systems," Journal of Computational Physics, vol. 131, pp. 327–353, Mar. 1997.
 [47] C. Michoski, J. Chan, L. Engvall, and J. Evans, "Foundations of the blended isogeometric discontinuous Galerkin (BIDG) method," Computer Methods in Applied Mechanics and Engineering, vol. 305, pp. 658–681, 2016.
 [48] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
 [49] M. Bremer, K. Kazhyken, H. Kaiser, C. Michoski, and C. Dawson, “Performance Comparison of HPX Versus Traditional Parallelization Strategies for the Discontinuous Galerkin Method,” Journal of Scientific Computing, vol. In Press, 2019.
 [50] N. Malaya, D. McDougall, C. Michoski, M. Lee, and C. S. Simmons, "Experiences porting scientific applications to the Intel (KNL) Xeon Phi platform," in Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact, PEARC17, (New York, NY, USA), pp. 40:1–40:8, ACM, 2017.
 [51] A. Forrester, A. Sóbester, and A. Keane, Engineering Design via Surrogate Modelling: A Practical Guide. Chichester: Wiley, 2008.
 [52] B. Peherstorfer, K. Willcox, and M. Gunzburger, “Survey of multifidelity methods in uncertainty propagation, inference, and optimization,” SIAM Review, vol. 60, no. 3, pp. 550–591, 2018.
 [53] Z. Long, Y. Lu, X. Ma, and B. Dong, "PDE-Net: Learning PDEs from data," 2018.
 [54] S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz, "Data-driven discovery of partial differential equations," Science Advances, vol. 3, no. 4, 2017.
 [55] H. Schaeffer, “Learning partial differential equations via data discovery and sparse optimization,” Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 473, no. 2197, p. 20160446, 2017.
 [56] J. Berg and K. Nyström, "Data-driven discovery of PDEs in complex datasets," Journal of Computational Physics, vol. 384, pp. 239–252, 2019.
 [57] K. Champion, B. Lusch, J. N. Kutz, and S. L. Brunton, "Data-driven discovery of coordinates and governing equations," 2019.
 [58] M. Raissi, “Deep hidden physics models: Deep learning of nonlinear partial differential equations,” J. Mach. Learn. Res., vol. 19, pp. 932–955, Jan. 2018.
 [59] R. Ghanem, D. Higdon, and H. Owhadi, Handbook of uncertainty quantification. 2017.
 [60] T. A. Oliver, G. Terejanu, C. S. Simmons, and R. D. Moser, "Validating predictions of unobserved quantities," Computer Methods in Applied Mechanics and Engineering, vol. 283, pp. 1310–1335, 2015.
 [61] M. Dumbser, D. Balsara, M. Tavelli, and F. Fambri, "A divergence-free semi-implicit finite volume scheme for ideal, viscous, and resistive magnetohydrodynamics," International Journal for Numerical Methods in Fluids, vol. 89, no. 1–2, pp. 16–42, 2019.