Functional Gaussian processes for regression with linear PDE models

05/29/2014
by   Ngoc-Cuong Nguyen, et al.
MIT

In this paper, we present a new statistical approach to the problem of incorporating experimental observations into a mathematical model described by linear partial differential equations (PDEs) to improve the prediction of the state of a physical system. We augment the linear PDE with a functional that accounts for the uncertainty in the mathematical model and is modeled as a Gaussian process. This gives rise to a stochastic PDE which is characterized by the Gaussian functional. We develop a functional Gaussian process regression method to determine the posterior mean and covariance of the Gaussian functional, thereby solving the stochastic PDE to obtain the posterior distribution for our prediction of the physical state. Our method has the following features which distinguish it from other regression methods. First, it incorporates both the mathematical model and the observations into the regression procedure. Second, it can handle observations given in the form of linear functionals of the field variable. Third, the method is non-parametric in the sense that it provides a systematic way to optimally determine the prior covariance operator of the Gaussian functional based on the observations. Fourth, it provides the posterior distribution quantifying the magnitude of uncertainty in our prediction of the physical state. We present numerical results to illustrate these features of the method and compare its performance to that of standard Gaussian process regression.


1 Introduction

Partial differential equations (PDEs) are used to mathematically model a wide variety of physical phenomena such as heat transfer, fluid flows, electromagnetism, and structural deformations. A PDE model of a physical system is typically described by conservation laws, constitutive laws, material properties, boundary conditions, boundary data, and geometry. In practical applications, the mathematical model described by the PDEs is only an approximation to the real physical system due to (i) the deliberate simplification of the mathematical model to keep it tractable (by ignoring certain physics or certain boundary conditions that pose computational difficulties), and (ii) the uncertainty of the available data (by using geometry, material property and boundary data that are not exactly the same as those of the physical system). We refer to the PDE model (available to us) as the best knowledge PDE model [23] and to its solution as the best knowledge state. To assess the accuracy of the best knowledge model in predicting the physical system, the best knowledge state needs to be compared against experimental data, which typically will have some level of noise.

In cases where the discrepancy between the PDE model and the experimental data is beyond an acceptable level of accuracy, we need to improve the current PDE model. There are several approaches to defining a new, improved model. Parameter estimation [19, 24] involves calibrating some parameters in the model to match the data. An alternative approach to obtaining an improved model is data assimilation [7, 8, 9, 10, 18]. Broadly speaking, data assimilation is a numerical procedure by which we incorporate observations into a mathematical model to reflect the errors inherent in our mathematical modeling of the physical system. Although data assimilation shares the same objective as parameter estimation, it differs from the latter in methodology. More specifically, data assimilation does not assume any parameters to be calibrated; instead, it defines a new model that matches the observations as well as possible while remaining as close as possible to the best knowledge model. Another approach is data interpolation [2, 5, 11, 12, 14, 21], which involves computing a collection of solutions (snapshots) of a parametrized or time-varying mathematical model and reconstructing the physical state by fitting the experimental data to the snapshots.

A widely used technique for obtaining an improved model in parameter estimation and data assimilation is least-squares regression [3, 8, 22]. Least squares is a deterministic regression approach that provides an estimate of the physical state which is optimal in the least-squares sense. However, it does not provide a means to quantify the prediction uncertainty. A recent work [23] poses least-squares regression as a regularized saddle point Galerkin formulation, which admits an interpretation within a variational framework and permits its extension to a Petrov-Galerkin formulation. While the Petrov-Galerkin formulation provides more flexibility than the Galerkin formulation, it does not quantify the uncertainty in the prediction either. A popular statistical approach to parameter estimation and data assimilation is Bayesian inference [4, 6, 17, 20]. In Bayesian inference, an estimate of the physical state is described by random variables and the posterior probability distribution of the estimate is determined from the data according to Bayes' rule [6, 17]. Therefore, Bayesian inference provides a powerful framework to quantify prediction uncertainties.

In this paper, we introduce a new statistical data assimilation approach to the problem of incorporating observations into the best knowledge model to predict the state of a physical system. Our approach has its roots in Gaussian process (GP) regression [13, 15, 16]. We augment the linear PDE with a functional that accounts for the uncertainty in the mathematical model and is modeled as a Gaussian process. (In the cases considered, the physical system is not stochastic but deterministic; the Gaussian functional serves to represent uncertainties in the best knowledge model and in the data, not in the physical system per se.) This gives rise to a stochastic PDE whose solutions are characterized by the Gaussian functional. By extending standard GP regression from functions of vectors to functionals of functions, we develop a functional Gaussian process regression method to determine the posterior distribution of the Gaussian functional, thereby solving the stochastic PDE for our prediction of the physical state. Our method is devised as follows. We first derive a functional regression problem by making use of the adjoint states and the observations. We next solve the functional regression problem by applying the principles of Gaussian processes to obtain the posterior mean and covariance of the Gaussian functional. Finally, we compute the posterior distribution for our estimate of the physical state. A crucial ingredient in our method is the covariance operator representing the prior of the Gaussian functional. The bilinear covariance operators considered incorporate a number of free parameters (the so-called hyperparameters) that can be optimally determined from the measured data by maximizing a marginal likelihood.

Our functional Gaussian process regression method can be viewed as a generalization of the standard GP regression from a finite dimensional vector (input) space to an infinite dimensional function (input) space. GP regression is a well-established technique to construct maps between inputs and outputs based on a set of sample, or training, input and output pairs, but does not offer a direct method to incorporate prior knowledge, albeit approximate, from an existing model relating the inputs and outputs. By combining the best knowledge model with the data, our method can greatly improve the prediction of the physical system.

Furthermore, we introduce a nonparametric Bayesian inference method for linear functional regression with Gaussian noise. It turns out that nonparametric Bayesian inference and functional GP regression represent two different views of the same procedure. Specifically, we can think of functional GP regression as defining a distribution over functionals and doing inference in the space of functionals — the functional-space view. We can think of nonparametric Bayesian inference as defining a distribution over weights and doing inference in the space of weights — the weight-space view. Theoretically, functional GP regression can be interpreted as an application of the kernel trick [15] to nonparametric Bayesian inference, thereby avoiding an explicit construction of the feature map.

The paper is organized as follows. In Section 2, we present a description of the problem considered. In Section 3, we give an overview of Gaussian processes for regression problems. In Section 4, we introduce functional Gaussian processes for functional regression problems via linear PDE models. In Section 5, we present numerical results to demonstrate our method and compare its performance to that of function Gaussian process regression. In Section 6, we provide some concluding remarks and discuss future research. Finally, in the Appendix, we describe our nonparametric Bayesian inference method.

2 Motivation and Problem Statement

Consider a bounded open domain with Lipschitz boundary and an appropriate real-valued function space in which the true state resides. A weak formulation of the true PDE model can be stated as: find the true state and the associated true outputs such that

(1a)
(1b)

where the left-hand side of (1a) is defined by a bilinear form, the right-hand side by a linear functional, and the outputs in (1b) by observation functionals. We assume that the true PDE model (1) is well defined and accurately describes the physical system of interest.

In actual practice, we do not have access to the true bilinear form, linear functional, and observation functionals. Hence, we cannot compute the true state and the true outputs. However, we assume that we have access to the "best knowledge" counterparts of these quantities, denoted with the superscript "bk". We then define the best knowledge PDE model: find the best knowledge state and outputs such that

(2a)
(2b)

In the remainder of this paper, we shall drop the superscript “bk” for the quantities associated with the best knowledge model to simplify the notation. (In practice, we replace the continuous function space with a discrete approximation space, which is assumed to be large enough that the discrete solution is indistinguishable from the continuous one.)

We now assume that we are given observed data consisting of measurements of the true output vector. We further assume that the measurements differ from the true outputs by additive Gaussian noise, namely,

(3)

where the noise terms are independent, identically distributed Gaussian random variables with zero mean and common variance. If the noise level is sufficiently small relative to the acceptable accuracy, then we can use the observed data to validate the best knowledge model (2): if the best knowledge outputs are close enough to the observed data within the noise level, then we may trust the best knowledge model to predict the behavior of the true model. In many cases, however, the best knowledge outputs do not match the observed data due to various sources of uncertainty in the physical modeling, constitutive laws, boundary conditions, boundary data, material properties, and geometry.
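As a concrete illustration of the additive-noise model (3), the following minimal Python sketch generates synthetic measurements of the kind assumed here; the vector of true outputs and the noise level in the example are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_observations(true_outputs, sigma):
    """Simulate measurements y_i = o_i + e_i with e_i ~ N(0, sigma^2),
    i.e. the additive Gaussian noise model of equation (3)."""
    true_outputs = np.asarray(true_outputs, dtype=float)
    return true_outputs + rng.normal(0.0, sigma, size=true_outputs.shape)

# Hypothetical true outputs measured with noise level sigma = 0.01.
y = noisy_observations([0.3, 0.7, 1.1, 0.9, 0.4], sigma=0.01)
```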

We are interested in improving the best knowledge model when it does not produce a good estimate of the true state. In particular, we propose a method to compute a better estimate of the true state by combining the best knowledge model with the observed data. Our method has its roots in Gaussian process regression. Before proceeding to describe the proposed method, we review the ideas behind Gaussian processes.

3 Gaussian Process Regression

We begin by assuming that we are given a training set of observations

(4)

where each training example consists of an input vector and a scalar real-valued output. The training input vectors are aggregated into a real-valued matrix, and the outputs are collected into a real-valued vector. We assume that

(5)

where the outputs are related through (5) to a true but unknown function which we want to infer. The unknown function is modeled as a Gaussian process (a generalization of the Gaussian probability distribution: a Gaussian process governs the properties of Gaussian random functions, whereas a Gaussian probability distribution describes the properties of Gaussian random variables, i.e., scalars or vectors) with zero mean (this entails no loss of generality: we can subtract the average from the outputs, work with the centered outputs, and add the average back to the Gaussian process estimator) and covariance function, namely,

(6)

From a Bayesian perspective, we encode our belief that instances of the unknown function are drawn from a Gaussian process with zero mean and the chosen covariance function prior to taking into account observations. (One may ask what happens if this belief is wrong, that is, if the covariance function is not chosen correctly. A poorly chosen covariance function will indeed result in poor predictions, so the covariance function should not be chosen arbitrarily. As discussed later, Gaussian processes provide a framework for selecting a covariance function based on the observed data.) Mathematically speaking, the unknown function is assumed to reside in a reproducing kernel Hilbert space [1] spanned by the eigenfunctions of the covariance function.

Let a second matrix contain the test input vectors as its columns. Since the observed outputs and the function values at the test input vectors are jointly Gaussian under the prior (6), their joint distribution is

(7)

where the covariance matrices and vectors in (7) have entries

(8)

respectively. We next apply the conditional distribution formula (see [15]) to the joint distribution (7) to obtain the predictive distribution for as

(9)

where

(10)

and the predictive mean and covariance are given by

(11)

Note that the predictive mean is a linear combination of kernel functions, each one centered on a training input vector. Note also that the predictive covariance does not explicitly depend on the observed data, but only on the training input vectors and the covariance function. This is a property of the Gaussian distribution. However, as discussed below, the covariance function can be determined by using the observed data. As a result, the predictive covariance implicitly depends on the observed data.

Typically, the covariance function has some free parameters, so that the covariance matrix depends on them. These free parameters are called hyperparameters. The hyperparameters have a significant impact on the predictive mean and covariance. They are determined by maximizing the log marginal likelihood (see [15]):

(12)

Once we choose a specific form for the covariance function and determine its hyperparameters, we can compute the predictive mean and covariance for any given test inputs. Gaussian processes also provide us with a means to choose an appropriate family among many possible families of covariance functions. Choosing a covariance function for a particular application involves both determining hyperparameters within a family and comparing across different families. This step is termed model selection [15].
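To make the preceding formulas concrete, the following self-contained Python sketch implements GP regression with a squared-exponential covariance function (an assumed choice; the paper does not fix a particular kernel here). The predictive mean and covariance correspond to (9)-(11), and the hyperparameters are fit by minimizing the negative log marginal likelihood, as in (12). All data in the example are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def sq_exp_kernel(X1, X2, length_scale, signal_var):
    """Squared-exponential covariance k(x, x') = s^2 exp(-||x - x'||^2 / (2 l^2)).
    X1: (n1, d), X2: (n2, d)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X, y, Xstar, length_scale, signal_var, noise_var):
    """Posterior mean and covariance at test inputs Xstar, cf. (9)-(11)."""
    K = sq_exp_kernel(X, X, length_scale, signal_var) + noise_var * np.eye(len(X))
    Ks = sq_exp_kernel(X, Xstar, length_scale, signal_var)
    Kss = sq_exp_kernel(Xstar, Xstar, length_scale, signal_var)
    alpha = np.linalg.solve(K, y)              # coefficients of the kernel expansion
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)  # does not depend on y explicitly
    return mean, cov

def neg_log_marginal_likelihood(theta, X, y):
    """Negative of the log marginal likelihood (12); theta = log hyperparameters."""
    length_scale, signal_var, noise_var = np.exp(theta)
    K = sq_exp_kernel(X, X, length_scale, signal_var) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha + np.log(np.diag(L)).sum()
            + 0.5 * len(y) * np.log(2.0 * np.pi))

# Hypothetical 1-D training data; hyperparameters fit by model selection.
X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * np.random.default_rng(1).normal(size=8)
res = minimize(neg_log_marginal_likelihood, x0=np.log([0.2, 1.0, 1e-2]), args=(X, y))
mean, cov = gp_predict(X, y, np.linspace(0, 1, 50).reshape(-1, 1), *np.exp(res.x))
```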

We see that standard GP regression provides us not only a posterior mean, but also a posterior covariance which characterizes the uncertainty in our prediction of the true function. Moreover, it allows us to determine the optimal covariance function and thus the optimal reproducing kernel Hilbert space in which the true function is believed to reside. These features differentiate GP regression from parametric regression methods such as least-squares regression, which typically provide only the maximum likelihood estimate. However, GP regression tends to require larger sample sizes than parametric regression methods because the data must supply enough information to yield a good covariance function through model selection.

There are a number of obstacles that prevent us from applying standard GP regression to the problem of interest described in the previous section. First, our outputs are in general not evaluations of the state at spatial coordinates; instead, they are linear functionals of the state. Second, the standard GP regression described here does not allow us to make use of the best knowledge model. The best knowledge model plays an important role because it carries crucial prior information about the true model. By taking advantage of the best knowledge model, we may be able to use far fewer observations to obtain a good prediction and thus address the main disadvantage of standard GP regression. We propose a new approach to overcome these obstacles.

4 Functional Gaussian Process Regression

4.1 A stochastic PDE model

Let us introduce a linear functional and a new mathematical model: find the state and outputs such that

(13a)
(13b)

Notice that the new model (13) differs from the best knowledge model (2) only by the additional functional. We can determine the state and outputs of (13) only if this functional is known; the functional thus characterizes the solution and the output vector of the model (13). We note that for a particular choice of the functional the model reproduces the true state and the true outputs exactly. Unfortunately, this particular choice requires the true state, which we do not know and wish to infer.

In order to capture the various sources of uncertainty in the best knowledge model, we represent this functional as a functional Gaussian process (for now, a functional Gaussian process can be thought of as a generalization of a Gaussian process from a finite-dimensional vector space to an infinite-dimensional function space) with zero mean and a covariance operator, namely,

(14)

Notice that there are three main differences between the functional Gaussian process (14) and the Gaussian process (6). First, the object being modeled is a functional rather than a function. Second, its argument is a function rather than a vector. And third, the covariance operator is in general a differential and integral operator of its two function arguments, whereas the covariance function is simply a function of two input vectors. We require the covariance operator to be symmetric positive-definite. That is,

(15)

As the covariance operator characterizes the space of all possible functionals prior to taking into account the observations, it plays an important role in our method. The selection of a covariance operator will be discussed later.

Since the functional is modeled as a functional Gaussian process, the model (13) becomes a stochastic PDE. In order to solve the stochastic PDE (13), we need to compute the posterior mean and posterior covariance of the functional after accounting for the observed data. To this end, we formulate a functional regression problem and describe a procedure for solving it as follows.

4.2 Functional regression problem

We first introduce the adjoint problems: for each observation functional we find an adjoint state such that

(16)

We note that the adjoint states depend only on the output functionals and the bilinear form. It follows from (2), (13), and (16) that

(17)

for each observation. Moreover, we would like our stochastic PDE to produce outputs that are consistent with the observed data, in the sense that

(18)

This equation is analogous to (3), which relates the observed data to the true outputs. Substituting it into (17), we obtain

(19)

Notice that this expression characterizes the relationship between the observed data and the unknown functional in the same way that (5) characterizes the relationship between the outputs and the unknown function.

We now introduce a training set of observations

(20)

and use this training set to learn about the unknown functional. More specifically, we wish to determine the value of the functional at any given test function based on the training set. This problem is similar to the regression problem described in the previous section and is named the functional regression problem to emphasize that the object of interest is a functional. We next describe the solution of the functional regression problem.
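In a discrete setting, the adjoint problems (16) reduce to linear solves with the transpose of the best knowledge system matrix. The minimal Python sketch below assumes a generic Galerkin-type discretization with matrix entries A[i, j] = a(phi_j, phi_i) and observation functionals represented by vectors; the names and conventions are illustrative, not the paper's notation.

```python
import numpy as np

def adjoint_states(A, ell):
    """Discrete analogue of the adjoint problems (16), under the assumed
    convention A[i, j] = a(phi_j, phi_i) for basis functions phi_1, ..., phi_N.

    A   : (N, N) discretized bilinear form
    ell : (N, M) matrix whose i-th column collects l_i(phi_j), j = 1, ..., N
    Returns an (N, M) matrix whose i-th column is the adjoint state psi_i,
    i.e. the solution of a(v, psi_i) = l_i(v) for all basis functions v,
    which in matrix form reads A^T psi_i = ell[:, i].
    """
    return np.linalg.solve(A.T, ell)
```

Because the adjoint states depend only on the bilinear form and the output functionals, they can be precomputed once and reused for any set of observed data.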

4.3 Regression procedure

Let us collect the adjoint states determined by (16) and choose a collection of test functions. The joint distribution of the observed outputs and the functional values at the test functions, according to the prior (14), is given by

(21)

where the covariance matrices and vectors in (21) have entries

(22)

respectively. It thus follows that the predictive distribution for is

(23)

where

(24)

and the predictive mean and covariance are given by

(25)

Notice the correspondence with the function Gaussian process regression described in the previous section: the test functions play the role of the test input vectors, the adjoint states play the role of the training input vectors, and the covariance operator plays the role of the covariance function.

While our approach shares similarities with function Gaussian process regression, it differs from it in several important ways. We summarize in Table 1 the differences between function Gaussian process regression and functional Gaussian process regression.

Quantity         | Function Gaussian process         | Functional Gaussian process
Input            | vector                            | function
Output           | function                          | functional
Prior            | Gaussian process (6)              | functional Gaussian process (14)
Kernel           | covariance function               | covariance operator
Training inputs  | training input vectors            | adjoint states
Observations     | observed outputs                  | observed data shifted by the best knowledge outputs
Coefficients     | see (11)                          | see (25)
Test inputs      | test input vectors                | test functions
Mean             | see (10)                          | see (24)
Covariance       | see (10)                          | see (24)
Table 1: Comparison between function Gaussian process regression and functional Gaussian process regression. Note that the best knowledge model enters the functional Gaussian process regression through the adjoint states and the best knowledge outputs.

In Appendix A, we introduce a nonparametric Bayesian framework for linear functional regression with Gaussian noise. It turns out that this nonparametric Bayesian framework is equivalent to the functional GP regression described here. In fact, functional GP regression can be viewed as an application of the kernel trick to nonparametric Bayesian inference for linear functional regression, thereby avoiding the computation of the eigenfunctions of the covariance operator. We next introduce a family of bilinear covariance operators and then describe a method for determining the hyperparameters.

4.4 Covariance operators

The covariance operator is a crucial ingredient in our approach. Here, we consider a class of bilinear covariance operators, parametrized by a set of hyperparameters, of the form:

(26)

More general forms of the covariance operator are possible provided that they are symmetric and positive definite.

In order for a covariance operator to be used in our method, we need to specify its hyperparameters. Fortunately, Gaussian processes allow us to determine the hyperparameters by using the observed data. To do this, we first calculate the probability of the observed data given the hyperparameters, i.e., the marginal likelihood, and choose the hyperparameters so that this likelihood is maximized. We note from (21) that

(27)

where the matrix defined in (25) depends on the covariance operator and thus on the hyperparameters as well. Rather than maximizing (27), it is more convenient to maximize the log marginal likelihood, which is given by

(28)

Thus, we find the hyperparameters by solving the maximization problem

(29)

Hence, the hyperparameters are chosen as the maximizer of the log marginal likelihood.
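Continuing the discrete sketch above, the following Python fragment fits the hyperparameters of a simple two-parameter bilinear covariance operator of the form c(u, v) = theta_1 (integral of u v) + theta_2 (integral of u' v'), an assumed instance of (26) rather than the operator used in the paper, by minimizing the negative log marginal likelihood (28). As (17)-(19) suggest, the regression targets are taken to be the observed data minus the best knowledge outputs; M and D denote assumed mass and stiffness matrices of the discretization.

```python
import numpy as np
from scipy.optimize import minimize

def gram_matrix(Psi, M, D, theta):
    """Gram matrix K[i, j] = c(psi_i, psi_j) for the assumed covariance operator
    c(u, v) = theta[0] * (integral of u v) + theta[1] * (integral of u' v').
    Psi : (N, m) adjoint-state coefficient vectors (columns);
    M, D: (N, N) mass and stiffness matrices of the discretization."""
    return theta[0] * Psi.T @ M @ Psi + theta[1] * Psi.T @ D @ Psi

def neg_log_marginal_likelihood(log_theta, Psi, M, D, y_shift, noise_var):
    """Negative log marginal likelihood, cf. (28), of the shifted data
    y_shift = y - o_bk under the prior induced by the covariance operator."""
    K = gram_matrix(Psi, M, D, np.exp(log_theta)) + noise_var * np.eye(len(y_shift))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_shift))
    return (0.5 * y_shift @ alpha + np.log(np.diag(L)).sum()
            + 0.5 * len(y_shift) * np.log(2.0 * np.pi))

# Hyperparameters chosen as the maximizer of the log marginal likelihood, cf. (29):
# res = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 1.0]),
#                args=(Psi, M, D, y_shift, noise_var))
# theta_opt = np.exp(res.x)
```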

Once we determine the covariance operator, we can compute the predictive distribution for any given set of test functions as described in Subsection 4.3. Therefore, functional GP regression is non-parametric in the sense that the hyperparameters, and hence the prior covariance, are chosen in light of the observed data. In other words, the data are used to define both the prior covariance and the posterior covariance. In contrast, parametric regression methods use a number of parameters to define the prior and combine this prior with the data to determine the posterior prediction. It remains to describe how to compute the posterior mean and covariance of the solution of the stochastic PDE.

4.5 Computation of the mean state and covariance

We recall that our stochastic PDE model consists of finding the state such that

(30)

Let us introduce a "suitable" basis of the (finite-dimensional) function space. Since the functional is Gaussian and the best knowledge model is linear, we can express the solution of the stochastic PDE (30) as

(31)

In order to determine the coefficients in this expansion, we choose the test functions in (30) to be the basis functions and arrive at the stochastic linear system:

(32)

where the entries of the system matrix and the right-hand side vectors are given by

(33)

It thus follows from (32) that

(34)

since the stochastic forcing term is Gaussian and the system matrix is invertible.

Now choose a set of spatial points at which we would like to evaluate the predictive mean and covariance of the state, and let an evaluation matrix collect the values of the basis functions at these points. It then follows from (31) that

(35)

It then follows from (34) and (35) that

(36)

where

(37)

We now examine the posterior distribution given by (37). Note first that the posterior mean is the difference of two terms: the first term is simply the best knowledge state evaluated at the chosen spatial points; the second term is a correction to the best knowledge state obtained from our functional Gaussian process regression. Note also that the posterior covariance is a quadratic form involving the posterior covariance matrix of the Gaussian functional, so the predictive uncertainty grows with the magnitude of that matrix; it also depends on the inverse of the system matrix. The implementation of our method for computing the posterior distribution (37) is summarized in Figure 1.

Figure 1: Main algorithm for computing the posterior distribution of .
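The structure of the algorithm in Figure 1 can be condensed into a short numerical sketch. The fragment below continues the assumptions made in the earlier sketches (Galerkin-type matrices A, M, D, adjoint states Psi, shifted data y_shift, and the two-parameter covariance operator) and additionally assumes the sign convention that the Gaussian functional enters the discrete system as A u = f - g, so that the posterior mean is the best knowledge state minus a correction, as described above. It illustrates the structure of (31)-(37), not the paper's exact formulas.

```python
import numpy as np

def posterior_state(A, u_bk, Psi, M, D, theta, noise_var, y_shift, E):
    """Posterior mean and covariance of the state at selected points (cf. Sec. 4.5).

    A       : (N, N) discrete best knowledge operator (A @ u_bk is the load vector)
    u_bk    : (N,)   best knowledge state coefficients
    Psi     : (N, m) adjoint-state coefficients (columns)
    M, D    : (N, N) mass and stiffness matrices of the assumed covariance operator
    y_shift : (m,)   observed data minus the best knowledge outputs
    E       : (P, N) evaluation matrix, E[p, j] = phi_j(x_p)
    """
    C = theta[0] * M + theta[1] * D                  # discretized covariance operator
    K = Psi.T @ C @ Psi + noise_var * np.eye(len(y_shift))
    alpha = np.linalg.solve(K, y_shift)              # regression coefficients, cf. (25)

    # Posterior of the Gaussian functional evaluated on the basis functions.
    k_basis = C @ Psi                                # k_basis[j, i] = c(psi_i, phi_j)
    g_mean = k_basis @ alpha
    g_cov = C - k_basis @ np.linalg.solve(K, k_basis.T)

    # Push the Gaussian functional through the assumed linear model A u = f - g.
    u_mean = u_bk - np.linalg.solve(A, g_mean)
    Ainv_gcov = np.linalg.solve(A, g_cov)
    u_cov = np.linalg.solve(A, Ainv_gcov.T).T        # A^{-1} g_cov A^{-T}
    return E @ u_mean, E @ u_cov @ E.T
```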

4.6 Relationship with least-squares regression

Here, we show an alternative approach to computing the posterior mean in (37) by solving a deterministic least-squares problem. We note that the posterior mean satisfies

(38)

Here, for any test function, the posterior mean of the Gaussian functional is given by

(39)

where the adjoint states satisfy (16) and the coefficient vector is the solution of (25). It thus follows that the mean state satisfies

(40)

where the new function appearing in (40) is the weighted sum of the adjoint states. We then evaluate the mean outputs of the stochastic PDE model as

(41)

The following lemma sheds light on the relationship between the mean outputs and the observed data.

Lemma 1.

Assume that the covariance operator is a bilinear form. Then the mean outputs differ from the observed data by the product of the noise level and the coefficient vector.

Proof.

We first note from the adjoint equation (16) and (40) that

(42)

We next recall that satisfies

(43)

where is the Kronecker delta. Moreover, we obtain from the best knowledge model (2) and the adjoint equation (16) that

(44)

The desired result immediately follows from the above three equations. This completes the proof. ∎

This lemma shows that the mean outputs differ from the observed data by the product of the noise level and the coefficient vector. When the observed data are noise-free, the mean output vector is exactly equal to the observed data. Hence, whenever the number of observations is sufficiently large and the noise level is relatively small, we expect our method to yield a much better estimate of the true state than the best knowledge model. The following theorem shows the optimality of the mean state.

Theorem 2.

Assume that the covariance operator is a bilinear form. Then the posterior mean state coincides with the solution of the least-squares minimization problem

(45)
Proof.

We introduce the Lagrangian

(46)

where the Lagrange multipliers enforce the constraints. The optimal solution satisfies

(47)

which yields

(48a)
(48b)
(48c)
(48d)
(48e)

Note that when taking the partial derivatives we have used the assumption that is bilinear. The first two equations of (48) yield that

(49)

where the adjoint states are those defined by (16). The third equation (48c) gives

(50)

Therefore, if we can show one remaining identity, then (48d) and (49) imply the desired result. To this end, we note from (48d) and the adjoint equation (16) that

(51)

Moreover, we obtain from the best knowledge model (2) and the adjoint equation (16) that

(52)

Finally, it follows from (48e), (49), (50), (51), and (52) that

(53)

which implies that . This completes the proof. ∎

This theorem establishes a connection between functional GP regression and traditional least-squares regression when the covariance operator is bilinear. In particular, the posterior mean state is the optimal solution of a least-squares minimization. This is hardly a surprise, as the posterior mean state is also the maximum a posteriori (MAP) estimate of the linear functional regression model in the Bayesian framework discussed in Appendix A. It is well known that the MAP estimate coincides with the least-squares solution. The main advantage of our approach over least-squares regression is that we can compute not only the posterior mean state but also the posterior covariance. Another advantage of our approach is that it allows us to choose a covariance operator based on the observed data by exploiting the marginal likelihood function, whereas least-squares regression does not provide a mechanism to optimally set the prior covariance operator.

5 A Simple Heat Conduction Example

5.1 Problem description

For the true PDE model we consider a one-dimensional heat equation:

(54)

with Dirichlet boundary conditions. The function space is then given by

(55)

The true state satisfies

(56)

where

(57)

We prescribe a synthetic source term. It is easy to see that

(58)

The true state is unknown to us and will serve to assess the performance of our method.
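As a rough indication of how the state of such a one-dimensional problem can be computed numerically, the sketch below solves a steady diffusion model problem -u'' = s on (0, 1) with homogeneous Dirichlet boundary conditions by centered finite differences. The source term shown is hypothetical; the paper's actual equation (54), source term, and boundary data are not reproduced here.

```python
import numpy as np

def solve_poisson_dirichlet(source, n=200):
    """Finite-difference solve of the model problem -u'' = s on (0, 1)
    with homogeneous Dirichlet boundary conditions.
    source : callable s(x); n : number of interior grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    main = 2.0 * np.ones(n) / h**2
    off = -1.0 * np.ones(n - 1) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.linalg.solve(A, source(x))
    return x, u

# Hypothetical smooth source term; the exact solution is sin(pi x) / pi^2.
x, u = solve_poisson_dirichlet(lambda x: np.sin(np.pi * x))
```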

We next assume that we know almost everything about the true model except for the source term and the boundary data. We introduce a function space with as