NumericalGP
We introduce the concept of numerical Gaussian processes, which we define as Gaussian processes with covariance functions resulting from temporal discretization of time-dependent partial differential equations. Numerical Gaussian processes, by construction, are designed to deal with cases where: (1) all we observe are noisy data on black-box initial conditions, and (2) we are interested in quantifying the uncertainty associated with such noisy data in our solutions to time-dependent partial differential equations. Our method circumvents the need for spatial discretization of the differential operators by proper placement of Gaussian process priors. This is an attempt to construct structured and data-efficient learning machines, which are explicitly informed by the underlying physics that possibly generated the observed data. The effectiveness of the proposed approach is demonstrated through several benchmark problems involving linear and nonlinear time-dependent operators. In all examples, we are able to recover accurate approximations of the latent solutions, and consistently propagate uncertainty, even in cases involving very long time integration.
Data-driven methods are taking center stage across many disciplines of science, and machine learning techniques have achieved groundbreaking results across a diverse spectrum of pattern recognition tasks krizhevsky2012imagenet; hochreiter1997long; ghahramani2015probabilistic; lecun2015deep; jordan2015machine. Despite their disruptive implications, many of these methods are blind to any underlying laws of physics that may have shaped the distribution of the observed data. A natural question, then, is how one can construct efficient learning machines that explicitly leverage such structured prior information. To answer this question, we have to turn our attention to the immense collective knowledge originating from centuries of research in applied mathematics and mathematical physics. Modeling the physical world through the lens of mathematics typically translates into deriving conservation laws from first principles, which often take the form of systems of partial differential equations. In many practical settings, the solution of such systems is only accessible by means of numerical algorithms that provide sensible approximations to given quantities of interest. In this work, we aim to capitalize on the long-standing developments of classical methods in numerical analysis and revisit partial differential equations from a statistical inference viewpoint. The merits of this approach are twofold. First, it enables the construction of data-efficient learning machines that can encode physical conservation laws as structured prior information. Second, it allows the design of novel numerical algorithms that can seamlessly blend equations and noisy data, infer latent quantities of interest (e.g., the solution to a partial differential equation), and naturally quantify uncertainty in computations. This approach is aligned in spirit with the emerging field of probabilistic numerics probnum, which has roots reaching all the way back to Poincaré's courses on probability theory
poincare1896, and has been recently revived by the pioneering works of diaconis1988bayesian; o1992some; HenOsbGirRSPA2015; conrad2016statistical. To illustrate the key ingredients of this study, let us start by considering linear partial differential equations of the form (non-linear equations have to be studied on a case-by-case basis; see, e.g., section 2.6)
(1)  $u_t = \mathcal{L}_x u, \qquad x \in \Omega, \; t \in [0, T].$
where $\mathcal{L}_x$ is a linear operator and $u(t, x)$ denotes the latent solution. As an example, the one dimensional heat equation corresponds to the case where $\mathcal{L}_x = \partial^2/\partial x^2$. Moreover, $\Omega$ is a subset of $\mathbb{R}^D$. All we observe are noisy data $\{x^0, u^0\}$ on the black-box initial function $u(0, x)$, as well as some information on the domain boundary to be specified later. Our goal is to predict the latent solution $u(t, x)$ at $t > 0$, and propagate the uncertainty due to noise in the initial data. For starters, let us try to convey the main ideas of this work using the Euler time stepping scheme
(2)  $u^n = u^{n-1} + \Delta t \, \mathcal{L}_x u^{n-1}.$
Here, $u^n(x) = u(t^n, x)$. Building upon Raissi et al. Raissi2017736; raissi2017machine, we place a Gaussian process Rasmussen06gaussianprocesses prior on $u^{n-1}$, i.e.,
(3)  $u^{n-1}(x) \sim \mathcal{GP}(0, k(x, x'; \theta)).$
Here, $\theta$ denotes the hyper-parameters of the covariance function $k$. Gaussian process regression (see Rasmussen06gaussianprocesses; murphy2012machine
) is a non-parametric Bayesian machine learning technique that provides a flexible prior distribution over functions, enjoys analytical tractability, and has a fully probabilistic work-flow that returns robust posterior variance estimates, which quantify uncertainty in a natural way. Moreover, Gaussian processes are among a class of methods known as kernel machines (see
vapnik2013nature; scholkopf2002learning; tipping2001sparse) and are analogous to regularization approaches (see tikhonov1963solution; Tikhonov/Arsenin/77; poggio1990networks). They can also be viewed as a prior on one-layer feed-forward Bayesian neural networks with an infinite number of hidden units
neal2012bayesian. The Gaussian process prior assumption (3), along with the Euler scheme (2), will allow us to capture the entire structure of the differential operator as well as the Euler time-stepping rule in the resulting multi-output Gaussian process

(4)  $\begin{bmatrix} u^n \\ u^{n-1} \end{bmatrix} \sim \mathcal{GP}\left(0, \begin{bmatrix} k^{n,n} & k^{n,n-1} \\ k^{n-1,n} & k^{n-1,n-1} \end{bmatrix}\right).$
The specific forms of the kernels $k^{n,n}$ and $k^{n,n-1}$ are direct functions of the Euler scheme (2) as well as the prior assumption (3), and will be discussed in more detail later. The multi-output process (4) is an example of a numerical Gaussian process, because the covariance functions $k^{n,n}$ and $k^{n,n-1}$ result from a numerical scheme, in this case, the Euler method. Essentially, this introduces a structured prior that explicitly encodes the physical law modeled by the partial differential equation (1). In the following, we will generalize the framework outlined above to arbitrary linear multi-step methods, originally proposed by Bashforth and Adams bashforth1883attempt, as well as Runge-Kutta methods, generally attributed to Runge runge1895numerische. The biggest challenge here is the proper placement of the Gaussian process prior (see, e.g., equation (3)) in order to avoid inversion of differential operators and to bypass the classical need for spatial discretization of such operators. For instance, in the above example (see equations (2) and (3)), it would have been an inappropriate choice to start by placing a Gaussian process prior on $u^n$, rather than on $u^{n-1}$, as obtaining the numerical Gaussian process (4) would then involve inverting operators of the form $(I + \Delta t \, \mathcal{L}_x)^{-1}$ corresponding to the Euler method. Moreover, propagating the uncertainty associated with the noisy initial observations through time is another major challenge addressed in the following.
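To make the effect of the prior placement concrete, the Euler-induced kernels can be derived symbolically. The sketch below assumes a squared exponential prior and the one dimensional heat operator $\mathcal{L}_x = d^2/dx^2$; the kernel choice and the helper `euler_op` are illustrative stand-ins, not notation from the text.

```python
import sympy as sp

x, xp = sp.symbols("x x'", real=True)
dt, l = sp.symbols("dt l", positive=True)

# Hypothetical squared exponential prior on u^{n-1}.
k = sp.exp(-(x - xp) ** 2 / (2 * l ** 2))

# Forward Euler for the heat equation: u^n = (I + dt * d^2/dx^2) u^{n-1}.
def euler_op(expr, var):
    return expr + dt * sp.diff(expr, var, 2)

# Kernels of the induced numerical Gaussian process, cf. equation (4):
k_n_nm1 = euler_op(k, x)              # cov(u^n(x), u^{n-1}(x'))
k_n_n = euler_op(euler_op(k, xp), x)  # cov(u^n(x), u^n(x'))

print(sp.simplify(k_n_nm1.subs(dt, 0) - k))  # prints 0: reduces to the prior
```

As $\Delta t \to 0$ both kernels collapse back to the prior covariance; the $\mathcal{O}(\Delta t)$ correction terms are exactly where the differential operator enters the prior.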
Let us start with the most general form of the linear multi-step methods butcher2016numerical applied to equation (1); i.e.,
(5)  $u^n = \sum_{i=1}^{m} \alpha_i u^{n-i} + \Delta t \sum_{i=0}^{m} \beta_i \mathcal{L}_x u^{n-i}.$
Different choices for the parameters $\alpha_i$ and $\beta_i$ result in specific schemes. For instance, in table 1, we present some specific members of the family of linear multi-step methods (5).
| Forward Euler | $u^n = u^{n-1} + \Delta t\,\mathcal{L}_x u^{n-1}$ |
|---|---|
| Backward Euler | $u^n = u^{n-1} + \Delta t\,\mathcal{L}_x u^{n}$ |
| Trapezoidal Rule | $u^n = u^{n-1} + \frac{\Delta t}{2}\mathcal{L}_x u^{n-1} + \frac{\Delta t}{2}\mathcal{L}_x u^{n}$ |
We encourage the reader to keep these special cases in mind while reading the rest of this section. Linear multi-step methods (5) can be equivalently written as
(6)  $\mathcal{Q}_x u^n = \sum_{i=1}^{m} \mathcal{P}^i_x u^{n-i},$
where $\mathcal{Q}_x u := u - \Delta t \, \beta_0 \mathcal{L}_x u$ and $\mathcal{P}^i_x u := \alpha_i u + \Delta t \, \beta_i \mathcal{L}_x u$. Some special cases of equation (6) are given in table 2.
| Forward Euler | $\mathcal{Q}_x u = u$ | $\mathcal{P}_x u = u + \Delta t\,\mathcal{L}_x u$ |
|---|---|---|
| Backward Euler | $\mathcal{Q}_x u = u - \Delta t\,\mathcal{L}_x u$ | $\mathcal{P}_x u = u$ |
| Trapezoidal Rule | $\mathcal{Q}_x u = u - \frac{\Delta t}{2}\mathcal{L}_x u$ | $\mathcal{P}_x u = u + \frac{\Delta t}{2}\mathcal{L}_x u$ |
For every $n$ and some $\tau \in [0, 1]$, which depends on the specific choices for the values of the parameters $\alpha_i$ and $\beta_i$, we define $u^{n-1+\tau}$ to be given by

(7)  $u^{n-1+\tau} := \mathcal{Q}_x u^{n}, \qquad u^{n-1+\tau} := \sum_{i=1}^{m} \mathcal{P}^i_x u^{n-i}.$
Definition (7) takes the specific forms given in table 3 for some example schemes.
| Forward Euler | $u^{n} := u^{n-1} + \Delta t\,\mathcal{L}_x u^{n-1}$ $(\tau = 1)$ |
|---|---|
| Backward Euler | $u^{n-1} := u^{n} - \Delta t\,\mathcal{L}_x u^{n}$ $(\tau = 0)$ |
| Trapezoidal Rule | $u^{n-1/2} := u^{n} - \frac{\Delta t}{2}\mathcal{L}_x u^{n} = u^{n-1} + \frac{\Delta t}{2}\mathcal{L}_x u^{n-1}$ $(\tau = \tfrac{1}{2})$ |
Shifting every term involved in the above definition (7) by $\Delta t$ yields

(8)  $u^{n+\tau} := \mathcal{Q}_x u^{n+1}, \qquad u^{n+\tau} := \sum_{i=1}^{m} \mathcal{P}^i_x u^{n-i+1}.$
To give an example, for the trapezoidal rule we obtain $u^{n+1/2} := u^{n+1} - \frac{\Delta t}{2}\mathcal{L}_x u^{n+1}$ and $u^{n+1/2} := u^{n} + \frac{\Delta t}{2}\mathcal{L}_x u^{n}$. Therefore, as a direct consequence of equations (7) and (8), we have

(9)  $u^{n+\tau} = \sum_{i=1}^{m} \mathcal{P}^i_x u^{n-i+1}, \qquad u^{n-1+\tau} = \mathcal{Q}_x u^{n}.$
This, in the special case of the trapezoidal rule, translates to $u^{n+1/2} = u^{n} + \frac{\Delta t}{2}\mathcal{L}_x u^{n}$ and $u^{n-1/2} = u^{n} - \frac{\Delta t}{2}\mathcal{L}_x u^{n}$. It is worth noting that by assuming $u^{n}(x) \sim \mathcal{GP}(0, k(x, x'; \theta))$, we can capture the entire structure of the trapezoidal rule in the resulting joint distribution of $u^{n+1/2}$ and $u^{n-1/2}$. This proper placement of the Gaussian process prior is key to the proposed methodology, as it allows us to avoid any spatial discretization of differential operators since no inversion of such operators is necessary. We will capitalize on this idea in the following. Assuming that
(10)  $u^{n-i+1}(x) \sim \mathcal{GP}(0, k_i(x, x'; \theta_i)), \qquad i = 1, \ldots, m,$
are independent processes, we obtain the following numerical Gaussian process

$\begin{bmatrix} u^{n+\tau} \\ u^{n-1+\tau} \end{bmatrix} \sim \mathcal{GP}\left(0, \begin{bmatrix} k^{\tau,\tau} & k^{\tau,\tau-1} \\ k^{\tau-1,\tau} & k^{\tau-1,\tau-1} \end{bmatrix}\right),$

where

(11)  $k^{\tau,\tau} = \sum_{i=1}^{m} \mathcal{P}^i_x \mathcal{P}^i_{x'} k_i(x, x'), \qquad k^{\tau,\tau-1} = \mathcal{P}^1_x \mathcal{Q}_{x'} k_1(x, x'), \qquad k^{\tau-1,\tau-1} = \mathcal{Q}_x \mathcal{Q}_{x'} k_1(x, x').$
It is worth noting that the entire structure of linear multi-step methods (5) is captured by the kernels given in equations (11). Note that although we start from an independence assumption in equation (10), the resulting numerical Gaussian process exhibits a fully correlated structure, as illustrated in equations (11). Moreover, the information on the boundary of the domain can often be summarized by noisy observations of the solution there; i.e., noisy data on the boundary. Using this, we obtain the corresponding covariance functions involving the boundary by applying the same operators to the prior covariance functions.
The numerical examples accompanying this manuscript are designed to showcase different special treatments of boundary conditions, including Dirichlet, Neumann, mixed, and periodic boundary conditions.
The proposed work flow is summarized below:
Starting from the initial data $\{x^0, u^0\}$ and the boundary data, we train the kernel hyper-parameters as outlined in section 2.3. This step carries the main computational burden, as it scales cubically with the total number of training points since it involves Cholesky factorization of full symmetric positive-definite covariance matrices Rasmussen06gaussianprocesses.
Having identified the optimal set of kernel hyper-parameters, we utilize the conditional posterior distribution to predict the solution at the next time-step and generate the artificial data $\{x^1, u^1\}$. Note that $x^1$ is randomly sampled in the spatial domain according to a uniform distribution, and $u^1$ is a normally distributed random vector, as outlined in section 2.4.

Given the artificial data $\{x^1, u^1\}$ and boundary data, we proceed with training the kernel hyper-parameters for the second time-step (to be precise, we use the mean of the random vector $u^1$ for training purposes; see section 2.3).
Having identified the optimal set of kernel hyper-parameters, we utilize the conditional posterior distribution to predict the solution at the next time-step and generate the artificial data $\{x^2, u^2\}$, where $x^2$ is randomly sampled in the spatial domain according to a uniform distribution. However, since $u^1$ is a random vector, we have to marginalize it out in order to obtain consistent uncertainty estimates for $u^2$. This procedure is outlined in section 2.5.
Steps 3 and 4 are repeated until the final integration time is reached.
In summary, the proposed methodology boils down to a sequence of Gaussian process regressions at every time-step. To accelerate training, one can use the optimal set of hyper-parameters from the previous time-step as an initial guess for the current one.
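The work flow above can be mimicked with a minimal sketch: a few mock "time steps" of maximum-likelihood training with warm starts. The plain RBF kernel, the synthetic data, and the update of `u` are all hypothetical stand-ins, not the numerical Gaussian process kernels of the text.

```python
import numpy as np
from scipy.optimize import minimize

def nlml(log_params, X, y):
    """Negative log marginal likelihood of a toy 1D RBF-kernel GP
    (a stand-in for the numerical Gaussian process of the text)."""
    l, s, sig = np.exp(log_params)            # length-scale, signal, noise
    K = s**2 * np.exp(-0.5 * (X[:, None] - X[None, :])**2 / l**2)
    K += (sig**2 + 1e-8) * np.eye(X.size)     # noise variance plus jitter
    L = np.linalg.cholesky(K)                 # the O(N^3) step noted above
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha + np.sum(np.log(np.diag(L)))
            + 0.5 * X.size * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 30)
u = np.sin(2 * np.pi * X) + 0.05 * rng.normal(size=X.size)  # mock data

theta = np.log([0.5, 1.0, 0.1])               # initial guess at step 1
for n in range(3):                            # a few mock time steps
    res = minimize(nlml, theta, args=(X, u), method="L-BFGS-B")
    theta = res.x                             # warm start for the next step
    u = 0.95 * u                              # mock "artificial data"
print(np.exp(theta))                          # trained hyper-parameters
```

Reusing `theta` across iterations is exactly the warm-start strategy described in the text: successive time-steps pose nearly identical regression problems, so the previous optimum is a good initial guess.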
In the following, for notational convenience and without loss of generality, we will operate under the assumption that $m = 1$ (see equation (5)); the reader should be able to fill in the details for cases with $m > 1$ without much difficulty, and more details for the examples accompanying this manuscript are provided in the appendix. The hyper-parameters $\theta$ can be trained by employing the negative log marginal likelihood resulting from
(12)  $\begin{bmatrix} u^n_b \\ u^{n-1} \end{bmatrix} \sim \mathcal{N}(0, \mathbf{K}),$

where $u^n_b$ are the (noisy) data on the boundary, $\{x^{n-1}, u^{n-1}\}$ are artificially generated data to be explained later (see equation (14)), and

$\mathbf{K} = \begin{bmatrix} k^{n,n}(x^n_b, x^n_b) + \sigma_n^2 I & k^{n,n-1}(x^n_b, x^{n-1}) \\ k^{n-1,n}(x^{n-1}, x^n_b) & k^{n-1,n-1}(x^{n-1}, x^{n-1}) + \sigma_{n-1}^2 I \end{bmatrix}.$
It is worth mentioning that the marginal likelihood provides a natural regularization mechanism that balances the trade-off between data fit and model complexity. This effect is known as Occam’s razor rasmussen2001occam after William of Occam 1285–1349 who encouraged simplicity in explanations by the principle: “plurality should not be assumed without necessity”.
In order to predict $u^n(x)$ at a new test point $x$, we use the following conditional distribution

$u^n(x) \mid u^n_b, u^{n-1} \sim \mathcal{N}\left(q^\top \mathbf{K}^{-1} \begin{bmatrix} u^n_b \\ u^{n-1} \end{bmatrix}, \; k^{n,n}(x, x) - q^\top \mathbf{K}^{-1} q\right),$

where

$q^\top := \begin{bmatrix} k^{n,n}(x, x^n_b) & k^{n,n-1}(x, x^{n-1}) \end{bmatrix}.$
However, to properly propagate the uncertainty associated with the initial data through time, one should not stop here. Since $\{x^{n-1}, u^{n-1}\}$ are artificially generated data (see equation (14)), we have to marginalize them out by employing

$u^{n-1} \sim \mathcal{N}\left(\mu^{n-1}, \Sigma^{n-1,n-1}\right)$

to obtain

(13)  $u^n(x) \sim \mathcal{N}\left(\mu^n(x), \Sigma^{n,n}(x, x)\right),$

where

$\mu^n(x) = q^\top \mathbf{K}^{-1} \begin{bmatrix} u^n_b \\ \mu^{n-1} \end{bmatrix}$

and

$\Sigma^{n,n}(x, x) = k^{n,n}(x, x) - q^\top \mathbf{K}^{-1} q + q^\top \mathbf{K}^{-1} \begin{bmatrix} 0 & 0 \\ 0 & \Sigma^{n-1,n-1} \end{bmatrix} \mathbf{K}^{-1} q.$
Now, one can use the resulting posterior distribution (13) to obtain the artificially generated data $\{x^n, u^n\}$ for the next time step with

(14)  $u^{n} \sim \mathcal{N}\left(\mu^{n}, \Sigma^{n,n}\right).$

Here, $\mu^n = \mu^n(x^n)$ and $\Sigma^{n,n} = \Sigma^{n,n}(x^n, x^n)$.
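The marginalization step amounts to a rank correction of the usual Gaussian process conditional variance. The numpy sketch below uses a hypothetical RBF kernel and made-up boundary/artificial data purely to show the algebra; it is not the implementation accompanying the text.

```python
import numpy as np

def rbf(A, B, l=0.3):
    return np.exp(-0.5 * (A[:, None] - B[None, :])**2 / l**2)

# Hypothetical training inputs: 2 boundary points and 9 artificial points.
Xb = np.array([0.0, 1.0]); ub = np.zeros(2)   # boundary data u_b^n
Xa = np.linspace(0.1, 0.9, 9)                 # artificial inputs x^{n-1}
mu_a = np.sin(np.pi * Xa)                     # mean of u^{n-1}
S_a = 0.01 * rbf(Xa, Xa)                      # covariance of u^{n-1}

X = np.concatenate([Xb, Xa])
K = rbf(X, X) + 1e-8 * np.eye(X.size)         # joint training covariance
xs = np.linspace(0, 1, 50)                    # test points
q = rbf(xs, X)                                # cross-covariance

Kinv = np.linalg.inv(K)
mean = q @ Kinv @ np.concatenate([ub, mu_a])  # posterior mean

# Plain conditional variance (treats artificial data as exact) ...
var_cond = rbf(xs, xs) - q @ Kinv @ q.T
# ... plus the correction from marginalizing u^{n-1} ~ N(mu_a, S_a);
# the top-left 2x2 block (boundary data) stays zero.
P = np.zeros((X.size, X.size)); P[2:, 2:] = S_a
var_marg = var_cond + q @ Kinv @ P @ Kinv @ q.T

print(np.all(np.diag(var_marg) >= np.diag(var_cond) - 1e-12))  # prints True
```

The inflated variance `var_marg` is never smaller than the plain conditional variance, which is exactly how uncertainty in the artificial data is carried forward to the next time-step.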
Burgers’ equation is a fundamental partial differential equation arising in various areas of applied mathematics, including fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow basdevant1986spectral. In one space dimension the equation reads as
(15)  $u_t + u u_x = \nu u_{xx},$
along with Dirichlet boundary conditions, where $u(t, x)$ denotes the unknown solution and $\nu$ is a viscosity parameter. Let us assume that all we observe are noisy measurements $\{x^0, u^0\}$ of the black-box initial function $u(0, x)$. Given such measurements, we would like to solve the Burgers' equation (15) while propagating through time the uncertainty associated with the noisy initial data (see figure 1).
This example is important because it involves solving a non-linear partial differential equation. To illustrate how one can encode the structure of the physical laws expressed by Burgers' equation in a numerical Gaussian process, let us apply the backward Euler scheme to equation (15). This can be written as

(16)  $u^n = u^{n-1} + \Delta t \left( \nu \frac{d^2}{dx^2} u^n - u^n \frac{d}{dx} u^n \right).$
We would like to place a Gaussian process prior on $u^n$. However, the nonlinear term $u^n \frac{d}{dx} u^n$ is causing problems, simply because the product of two Gaussian processes is no longer Gaussian. Hence, we will approximate the nonlinear term with $\mu^{n-1} \frac{d}{dx} u^n$, where $\mu^{n-1}$ is the posterior mean of the previous time step. Therefore, the backward Euler scheme (16) can be approximated by
(17)  $u^n = u^{n-1} + \Delta t \left( \nu \frac{d^2}{dx^2} u^n - \mu^{n-1} \frac{d}{dx} u^n \right).$
Rearranging the terms, we obtain
(18)  $u^n + \Delta t \, \mu^{n-1} \frac{d}{dx} u^n - \Delta t \, \nu \frac{d^2}{dx^2} u^n = u^{n-1}.$
Let us make the prior assumption that
(19)  $u^n(x) \sim \mathcal{GP}(0, k(x, x'; \theta))$
is a Gaussian process with a neural network Rasmussen06gaussianprocesses covariance function
(20)  $k(x, x'; \theta) = \frac{2}{\pi} \sin^{-1}\left( \frac{2\left(\sigma_0^2 + \sigma^2 x x'\right)}{\sqrt{\left(1 + 2\left(\sigma_0^2 + \sigma^2 x^2\right)\right)\left(1 + 2\left(\sigma_0^2 + \sigma^2 x'^2\right)\right)}} \right),$
where $\theta = (\sigma_0, \sigma)$ denotes the hyper-parameters. Here, we have chosen a non-stationary prior motivated by the fact that the solution to the Burgers' equation can develop discontinuities for small values of the viscosity parameter $\nu$. This enables us to obtain the following Numerical Gaussian Process

$\begin{bmatrix} u^n \\ u^{n-1} \end{bmatrix} \sim \mathcal{GP}\left(0, \begin{bmatrix} k^{n,n} & k^{n,n-1} \\ k^{n-1,n} & k^{n-1,n-1} \end{bmatrix}\right),$
with covariance functions $k^{n,n}$, $k^{n,n-1}$, and $k^{n-1,n-1}$ given in section 5.1 of the appendix. Training, prediction, and propagating the uncertainty associated with the noisy initial observations can be performed as in sections 2.3, 2.4, and 2.5, respectively. Figure 1 depicts the noisy initial data along with the posterior distribution of the solution to the Burgers' equation (15) at different time snapshots. It is remarkable that the proposed methodology can effectively propagate an infinite collection of correlated Gaussian random variables (i.e., a Gaussian process) through the complex nonlinear dynamics of the Burgers' equation.
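For reference, one standard one dimensional form of the neural network covariance function of the kind used in (20) (see Rasmussen and Williams, Ch. 4) can be implemented as follows; the hyper-parameter values are arbitrary placeholders, not trained values.

```python
import numpy as np

def nn_kernel(x, xp, sigma0=1.0, sigma=1.0):
    """Neural-network covariance in its standard 1D form;
    hyper-parameters theta = (sigma0, sigma)."""
    num = 2.0 * (sigma0**2 + sigma**2 * np.outer(x, xp))
    den = np.sqrt((1 + 2 * (sigma0**2 + sigma**2 * x[:, None]**2)) *
                  (1 + 2 * (sigma0**2 + sigma**2 * xp[None, :]**2)))
    return (2.0 / np.pi) * np.arcsin(num / den)

x = np.linspace(-1, 1, 20)
K = nn_kernel(x, x)
print(K.shape, np.allclose(K, K.T))  # symmetric Gram matrix
```

Note that the kernel is non-stationary: it depends on $x$ and $x'$ individually rather than only on $x - x'$, which is what makes it suitable for solutions that steepen into near-discontinuities.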
It must be re-emphasized that numerical Gaussian processes, by construction, are designed to deal with cases where: (1) all we observe is noisy data on black-box initial conditions, and (2) we are interested in quantifying the uncertainty associated with such noisy data in our solutions to time-dependent partial differential equations. In fact, we recommend resorting to other alternative classical numerical methods such as Finite Differences, Finite Elements, and Spectral methods in cases where: (1) the initial function is not a black-box function and we have access to noiseless data, or (2) we are not interested in quantifying the uncertainty in our solutions. However, in order to be able to perform a systematic numerical study of the proposed methodology and despite the fact that this defeats the whole purpose of the current work, sometimes we will operate under the assumption that we have access to noiseless initial data. For instance, concerning the Burgers’ equation, if we had access to such noiseless data, we would obtain results similar to the ones reported in figure 2.
Moreover, in order to make sure that the numerical Gaussian process resulting from the backward Euler scheme (18) applied to the Burgers' equation is indeed first-order accurate in time, we perform the numerical experiments reported in figures 3 and 4. Specifically, in figure 3 we report the time-evolution of the relative spatial error until the final integration time. We observe that the error indeed grows as $\mathcal{O}(\Delta t)$, and its resulting behavior reveals both the shock development region as well as the energy dissipation due to diffusion at later times. Moreover, in figure 4 we fix the final integration time and the number of initial and artificial data points to 50, and vary the time-step size. As expected, we recover the first-order convergence properties of the backward Euler scheme, except for a saturation region arising when we further reduce the time-step size. This behavior is not a result of the time-stepping scheme, but is attributed to the underlying Gaussian process regression and the finite number of spatial data points used for training and prediction. To investigate the accuracy of the posterior mean in predicting the solution as the number of training points is increased, we perform the numerical experiment reported in figure 5. Here, we consider two cases in which we fix the time-step size to two representative values and increase the number of initial as well as artificial data points. A similar accuracy saturation is also observed here as the number of training points is increased. In this case, this is attributed to the error accumulated by time-stepping with the relatively large time-step sizes of the first-order accurate Euler scheme. If we further decrease the time-step, this saturation behavior occurs at higher numbers of total training points. The key point here is that although Gaussian processes can yield satisfactory accuracy, they, by construction, cannot force the approximation error down to machine precision. This is due to the fact that Gaussian processes are suitable for solving regression problems. This is exactly why we recommend alternative classical numerical methods for solving partial differential equations in cases where one has access to noiseless data. In such cases, it is desirable to use numerical schemes that are capable of performing exact interpolation on the data, rather than a mere regression.
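Temporal convergence rates of the kind discussed above are typically estimated as the slope of a log-log fit of error against time-step size. A small sketch with synthetic numbers (the error constants and accuracy floor are made up for illustration):

```python
import numpy as np

# Synthetic (dt, error) pairs mimicking a first-order method with an
# accuracy floor, as in the saturation behavior described in the text.
dt = np.logspace(-1, -4, 7)
err = 0.5 * dt + 1e-4        # error ~ C*dt until it hits the floor

# Estimated order = slope of log(err) vs log(dt) in the pre-saturation
# regime (here: the four largest time steps).
slope, _ = np.polyfit(np.log(dt[:4]), np.log(err[:4]), 1)
print(round(slope, 2))       # close to 1 for a first-order scheme
```

Including the saturated points in the fit would bias the estimated order toward zero, which is why the fit is restricted to the regime where the time-stepping error dominates.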
The wave equation is an important second-order linear partial differential equation for the description of wave propagation phenomena, including sound waves, light waves, and water waves. It arises in many scientific fields such as acoustics, electromagnetics, and fluid dynamics. In one space dimension the wave equation reads as
(21)  $u_{tt} = u_{xx}.$
The function $u(t, x)$ solves this equation and satisfies the following initial and homogeneous Dirichlet boundary conditions

(22)  $u(0, x) = f(x), \qquad u_t(0, x) = g(x), \qquad u\big|_{\partial\Omega} = 0.$
Now, let us assume that all we observe are noisy measurements of the black-box initial functions $f(x)$ and $g(x)$, respectively. Given this data, we are interested in solving the wave equation (21) and quantifying the uncertainty in our solution associated with the noisy initial data (see figure 6).
To proceed, let us define $v := u_t$ and rewrite the wave equation as a system of equations given by
(23)  $u_t = v, \qquad v_t = u_{xx}.$
This example is important because it involves solving a system of partial differential equations. One could rewrite the system of equations (23) in matrix-vector notation and obtain

$\frac{\partial}{\partial t} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ \frac{\partial^2}{\partial x^2} & 0 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix},$

which takes the form of (1) with the solution vector $(u, v)$ and the operator $\mathcal{L}_x = \begin{bmatrix} 0 & 1 \\ \partial^2/\partial x^2 & 0 \end{bmatrix}$.
This form is now amenable to the previous analysis provided for general linear multi-step methods. However, for pedagogical purposes, let us walk slowly through the trapezoidal rule and apply it to the system of equations (23). This can be written as
(24)  $u^n = u^{n-1} + \frac{\Delta t}{2}\left(v^{n-1} + v^{n}\right), \qquad v^n = v^{n-1} + \frac{\Delta t}{2}\left(u^{n-1}_{xx} + u^{n}_{xx}\right).$
Rearranging the terms yields

$u^n - \frac{\Delta t}{2} v^n = u^{n-1} + \frac{\Delta t}{2} v^{n-1}, \qquad v^n - \frac{\Delta t}{2} u^n_{xx} = v^{n-1} + \frac{\Delta t}{2} u^{n-1}_{xx}.$
Now, let us define $u^{n-1/2}$ and $v^{n-1/2}$ to be given by

(25)  $u^{n-1/2} := u^n - \frac{\Delta t}{2} v^n = u^{n-1} + \frac{\Delta t}{2} v^{n-1}, \qquad v^{n-1/2} := v^n - \frac{\Delta t}{2} u^n_{xx} = v^{n-1} + \frac{\Delta t}{2} u^{n-1}_{xx}.$
As outlined in section 2, this is a key step in the proposed methodology, as it hints at the proper location to place the Gaussian process prior. Expressing the integer time levels in terms of the half-step values, i.e., shifting forward and backward by half a time step, we obtain

(26)  $u^{n} = u^{n-1/2} + \frac{\Delta t}{2} v^{n-1/2}, \qquad v^{n} = v^{n-1/2} + \frac{\Delta t}{2} u^{n-1/2}_{xx}$

and

(27)  $u^{n-1} = u^{n-1/2} - \frac{\Delta t}{2} v^{n-1/2}, \qquad v^{n-1} = v^{n-1/2} - \frac{\Delta t}{2} u^{n-1/2}_{xx},$
respectively. Now we can proceed with encoding the structure of the wave equation into a numerical Gaussian process prior for performing Bayesian machine learning of the solution $u(t, x)$ at any point in time.
Let us make the prior assumption that
(28)  $u^{n-1/2}(x) \sim \mathcal{GP}(0, k_u(x, x'; \theta_u)), \qquad v^{n-1/2}(x) \sim \mathcal{GP}(0, k_v(x, x'; \theta_v))$
are two independent Gaussian processes with squared exponential Rasmussen06gaussianprocesses covariance functions
(29)  $k_u(x, x'; \theta_u) = \gamma_u^2 \exp\left(-\frac{(x - x')^2}{2 \ell_u^2}\right), \qquad k_v(x, x'; \theta_v) = \gamma_v^2 \exp\left(-\frac{(x - x')^2}{2 \ell_v^2}\right),$
where $\theta_u = (\gamma_u, \ell_u)$ and $\theta_v = (\gamma_v, \ell_v)$. From a theoretical point of view, each covariance function gives rise to a Reproducing Kernel Hilbert space aronszajn1950theory; saitoh1988theory; berlinet2011reproducing that defines a class of functions that can be represented by this kernel. In particular, the squared exponential covariance function chosen above implies smooth approximations. More complex function classes can be accommodated by appropriately choosing kernels (see, e.g., equation (20)). This enables us to obtain the following numerical Gaussian process

$\begin{bmatrix} u^n \\ v^n \\ u^{n-1} \\ v^{n-1} \end{bmatrix} \sim \mathcal{GP}\left(0, \begin{bmatrix} k^{n,n}_{uu} & k^{n,n}_{uv} & k^{n,n-1}_{uu} & k^{n,n-1}_{uv} \\ & k^{n,n}_{vv} & k^{n,n-1}_{vu} & k^{n,n-1}_{vv} \\ & & k^{n-1,n-1}_{uu} & k^{n-1,n-1}_{uv} \\ & & & k^{n-1,n-1}_{vv} \end{bmatrix}\right),$
which captures the entire structure of the trapezoidal rule (24), applied to the wave equation (21), in its covariance functions given in section 5.2 of the appendix. Training, prediction, and propagating the uncertainty associated with the noisy initial observations can be performed as in section 5.2 of the appendix. Figure 6 depicts the noisy initial data along with the posterior distribution (50) of the solution to the wave equation (21) at different time snapshots.
In the case where we have access to noiseless initial data we obtain the results depicted in figure 7.
Moreover, we perform a numerical study similar to the one reported in section 2.6.2, to verify that the numerical Gaussian process resulting from the trapezoidal rule (24) applied to the wave equation is indeed second-order accurate in time. In particular, the numerical experiment shown in figure 8 illustrates the time evolution of the relative spatial error until the final integration time. The second-order convergence of the algorithm is also demonstrated in figure 9, where we have fixed the number of noiseless initial and artificially generated data while decreasing the time-step size. We also investigate the convergence behavior of the algorithm for a fixed time-step as the number of training points is increased; the results are summarized in figure 10. The analysis of both temporal and spatial convergence properties yields qualitatively similar conclusions to the ones reported in section 2.6.2. One thing worth mentioning here is that the errors in $u$ and $v$ do not always maintain the same ordering (as seen in figures 7 and 10); the ordering observed just happens to hold at the reported times.
Let us now focus on the general form of Runge-Kutta methods alexander1977diagonally with $q$ stages applied to equation (1); i.e.,

(30)  $u^{n+\tau_i} = u^n + \Delta t \sum_{j=1}^{q} a_{ij} \mathcal{L}_x u^{n+\tau_j}, \quad i = 1, \ldots, q, \qquad u^{n+1} = u^n + \Delta t \sum_{j=1}^{q} b_j \mathcal{L}_x u^{n+\tau_j}.$
Here, $u^{n+\tau_j}(x) = u(t^n + \tau_j \Delta t, x)$. This general form encapsulates both implicit and explicit time-stepping schemes, depending on the choice of the weights $a_{ij}$ and $b_j$. An important feature of the proposed methodology is that it is oblivious to the choice of these parameters; hence, the implicit or explicit nature of the time-stepping scheme is ultimately irrelevant. This is in sharp contrast to classical numerical methods, in which implicit time-integration is burdensome due to the need for repeatedly solving linear or nonlinear systems. Here, for a fixed number of stages $q$, the cost of performing implicit or explicit time-marching is identical. This is attributed to the fact that the structure of the time-stepping scheme is encoded in the numerical Gaussian process prior, and the algorithm only involves solving a sequence of regression problems as outlined in section 2.2. This allows us to enjoy the favorable stability properties of fully implicit schemes at no extra cost, and thus perform long-time integration using very large time-steps. Equations (30) can be equivalently written as
(31)  $u^n = u^{n+\tau_i} - \Delta t \sum_{j=1}^{q} a_{ij} \mathcal{L}_x u^{n+\tau_j}, \quad i = 1, \ldots, q, \qquad u^n = u^{n+1} - \Delta t \sum_{j=1}^{q} b_j \mathcal{L}_x u^{n+\tau_j}.$
Let us make the prior assumption that
(32)  $u^{n+1}(x) \sim \mathcal{GP}(0, k_{q+1}(x, x'; \theta_{q+1})), \qquad u^{n+\tau_j}(x) \sim \mathcal{GP}(0, k_j(x, x'; \theta_j)), \quad j = 1, \ldots, q,$
are mutually independent Gaussian processes. Therefore, we can write the joint distribution of $u^{n+1}, u^{n+\tau_q}, \ldots, u^{n+\tau_1}$, and $u^n$, which will capture the entire structure of the Runge-Kutta methods in the resulting numerical Gaussian process. However, rather than getting bogged down in heavy notation, and without sacrificing any generality, we will present the main ideas through the lens of an example.
We have chosen this classical pedagogical example as a prototype benchmark problem for testing the limits of long-time integration. This example also highlights the implementation of periodic constraints at the domain boundaries (see section 3.1). The advection equation in one space dimension takes the form
(33)  $\frac{\partial u}{\partial t} = -\frac{\partial u}{\partial x}.$
The function $u(t, x)$ solves this equation and satisfies the following initial and periodic boundary conditions

(34)  $u(0, x) = f(x), \qquad u(t, 0) = u(t, L),$

where $L$ denotes the length of the spatial domain.
However, let us assume that all we observe are noisy measurements of the black-box initial function $f(x)$. Given this data, we are interested in encoding the structure of the advection operator in a numerical Gaussian process prior and using it to infer the solution $u(t, x)$ with quantified uncertainty for any $t > 0$ (see figure 11).
Let us apply the Gauss-Legendre time-stepping quadrature iserles2009first with two stages (thus fourth-order accurate) to the advection equation (33). Referring to equations (31), we obtain
(35)  $u^n = u^{n+\tau_i} + \Delta t \sum_{j=1}^{2} a_{ij} \frac{\partial}{\partial x} u^{n+\tau_j}, \quad i = 1, 2, \qquad u^n = u^{n+1} + \Delta t \sum_{j=1}^{2} b_j \frac{\partial}{\partial x} u^{n+\tau_j}.$

Here, $\tau_1 = \frac{1}{2} - \frac{\sqrt{3}}{6}$, $\tau_2 = \frac{1}{2} + \frac{\sqrt{3}}{6}$, $a_{11} = a_{22} = \frac{1}{4}$, $a_{12} = \frac{1}{4} - \frac{\sqrt{3}}{6}$, $a_{21} = \frac{1}{4} + \frac{\sqrt{3}}{6}$, and $b_1 = b_2 = \frac{1}{2}$.
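These coefficients form the standard two-stage Gauss-Legendre (Butcher) tableau; its fourth-order accuracy can be sanity-checked against the classical Runge-Kutta order conditions:

```python
import numpy as np

# Two-stage Gauss-Legendre tableau (fourth-order, A-stable).
s3 = np.sqrt(3.0)
A = np.array([[1/4,        1/4 - s3/6],
              [1/4 + s3/6, 1/4       ]])
b = np.array([1/2, 1/2])
c = np.array([1/2 - s3/6, 1/2 + s3/6])

# A few classical order conditions up to order 4.
checks = [
    np.isclose(b.sum(), 1.0),            # order 1
    np.isclose(b @ c, 1/2),              # order 2
    np.isclose(b @ c**2, 1/3),           # order 3
    np.isclose(b @ (A @ c), 1/6),        # order 3
    np.isclose(b @ c**3, 1/4),           # order 4
    np.isclose(b @ (c * (A @ c)), 1/8),  # order 4
]
print(all(checks))  # prints True
```

Since none of the order conditions involve inverting an operator, the check is the same whether the tableau describes an explicit or a fully implicit scheme, mirroring the cost argument made above.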
We make the prior assumption that
(36)  $u^{n+1}(x), \qquad u^{n+\tau_2}(x), \qquad u^{n+\tau_1}(x)$
are three independent Gaussian processes with squared exponential covariance functions similar to the kernels used in equations (29). This assumption yields the following numerical Gaussian process

$\begin{bmatrix} u^{n+1} \\ u^{n+\tau_2} \\ u^{n+\tau_1} \\ u^{n} \end{bmatrix} \sim \mathcal{GP}\left(0, \mathbf{K}\right),$
where the covariance functions are given in section 5.3 of the appendix.