Fast sampling of parameterised Gaussian random fields

04/30/2018 ∙ by Jonas Latz, et al. ∙ Technische Universität München

Gaussian random fields are popular models for spatially varying uncertainties, arising for instance in geotechnical engineering, hydrology or image processing. A Gaussian random field is fully characterised by its mean function and covariance operator. In more complex models these can also be partially unknown. In this case we need to handle a family of Gaussian random fields indexed with hyperparameters. Sampling for a fixed configuration of hyperparameters is already very expensive due to the nonlocal nature of many classical covariance operators. Sampling from multiple configurations increases the total computational cost severely. In this report we employ parameterised Karhunen-Loève expansions for sampling. To reduce the cost we construct a reduced basis surrogate built from snapshots of Karhunen-Loève eigenvectors. In particular, we consider Matérn-type covariance operators with unknown correlation length and standard deviation. We suggest a linearisation of the covariance function and describe the associated online-offline decomposition. In numerical experiments we investigate the approximation error of the reduced eigenpairs. As an application we consider forward uncertainty propagation and Bayesian inversion with an elliptic partial differential equation where the logarithm of the diffusion coefficient is a parameterised Gaussian random field. In the Bayesian inverse problem we employ Markov chain Monte Carlo on the reduced space to generate samples from the posterior measure. All numerical experiments are conducted in 2D physical space, with non-separable covariance operators, and finite element grids with ∼ 10^4 degrees of freedom.

1 Introduction

Many mathematical models in science and engineering require input parameters which are often not fully known or are perturbed by observational noise. In recent years it has become standard to incorporate the noise or lack of knowledge in a model by using uncertain inputs. In this work we are interested in models based on partial differential equations (PDEs) where the inputs are spatially varying random functions. These so-called random fields are characterised by a probability measure on certain function spaces.

We consider two typical tasks in uncertainty quantification (UQ): the forward propagation of uncertainty (forward problem) Ghanem2017 ; Smith2014 , and the (Bayesian) inverse problem Dashti2017 ; Stuart2010 . In the forward problem we study the impact of uncertain model inputs on the model outputs and quantities of interest. The mathematical task is to derive the pushforward measure of the model input under the PDE solution operator. In the inverse problem we update a prior distribution of the random inputs using observations; this gives the posterior measure. Mathematically, this amounts to deriving the conditional measure of the inputs given the observations using a suitable version of Bayes' formula. Unfortunately, in most practical cases there are no analytical expressions for either the pushforward or the posterior measure. We focus on sampling-based measure approximations, specifically Monte Carlo (MC) for the forward problem, and Markov chain Monte Carlo (MCMC) for the Bayesian inverse problem. Importantly, MC and MCMC require efficient sampling procedures for the random inputs.

In this work we consider Gaussian random fields which are popular models e.g. in hydrology. We recall the following sampling approaches for Gaussian random fields. Factorisation methods construct either a spectral or Cholesky decomposition of the covariance matrix. The major computational bottleneck is the fact that the covariance operator is often nonlocal, and a discretisation will almost always result in a dense covariance matrix which is expensive to handle. Circulant embedding ChanWood:1997 ; Dietrich1997 ; Graham2018 relies on structured spatial grids and stationary covariance operators. In this case, the covariance matrix can be factorised using the fast Fourier transform. Alternatively, we can approximate the covariance matrix by hierarchical matrices and low-rank techniques, see e.g. Benner2018 ; ChenStein:2017 ; DElia2013 ; Feischl2018 ; Khoromskij2009 . A pivoted Cholesky decomposition is studied in Harbrechtetal:2015 . More recently, the so-called SPDE-based sampling has been developed in the works BolinKirchner:2017 ; BKK ; Osbornetal:2017a ; Osb2018 ; Roininenetal:2018 . The major idea is to generate samples of Gaussian fields with Matérn covariance operators by solving a certain discretised fractional PDE with white noise forcing. The Karhunen-Loève (KL) expansion Karhunen1947 ; Loeve1978 is another popular approach for sampling Gaussian random fields; however, it also suffers from the nonlocal nature of many covariance operators. See e.g. the works Betzetal:2014 ; Rizzi2018 ; Khoromskij2009 ; PraneshGhosh:2015 ; Saibabaetal:2016 ; SchwabTodor:2006 ; ZhengDai:2017 for the efficient computation of the KL expansion.

Gaussian random fields are completely characterised by the mean function and covariance operator, and are thus simple models of spatially varying functions. They are also flexible; depending on the regularity of the covariance operator it is possible to generate realisations with different degrees of smoothness. However, in some practical situations the full information on the covariance operator might not be available, e.g. the correlation length, smoothness, and point-wise variance of the random field are not known. Of course, these model parameters can be fixed a priori. However, the posterior measure of a Bayesian inverse problem is often very sensitive to prior information. We illustrate this in the following simple example.

Example 1.1

We consider a Gaussian random field with exponential covariance operator on the unit square and a fixed correlation length. The goal is to estimate the statistics of this field given 9 noisy point observations within the framework of Bayesian inversion (cf. §A.2). The noise is centered Gaussian with a noise level of about 1%. In Figure 1.1 we plot a realisation of the true field together with the posterior mean and variance associated with four prior fields, each with a different (fixed) correlation length. The posterior mean and variance have been computed analytically. Note that this is possible since the correlation length is fixed, the prior and noise are Gaussian, and the forward response operator is linear.

Figure 1.1: Estimation of a Gaussian random field. The top-left figure shows a realisation of the true random field. The task is to estimate this field given 9 noisy point evaluations (white dots). The four top-right (bottom-right) figures show the posterior mean (pointwise posterior variance) for mean-zero Gaussian prior random fields with exponential covariance operator, fixed standard deviation, and four different correlation lengths.

We make two observations in Figure 1.1. First, we see that it is not possible to identify the true random field perfectly in this experiment. This is due to the sparsity of the data; it is not a defect of the Bayesian inversion. The posterior measure is well-defined, and has been computed analytically without sampling error in this experiment.

The second observation is the main motivation for our work. We clearly see in Figure 1.1 that the posterior measure depends crucially on the underlying prior measure and associated correlation length. If the assumed correlation length is too small compared to the truth, then the posterior mean estimate is only accurate close to the observation locations. If, on the other hand, the assumed correlation length is too large, we obtain an overconfident posterior mean estimate. Inaccurate, fixed prior random field parameters can substantially deteriorate the estimation result in Bayesian inverse problems. We treat this problem by modelling unknown hyperparameters as random variables.

In statistics, a model with multiple levels of randomness is called a hierarchical model. In Bayesian statistics, this means that we work with parameterised prior measures (hyperpriors). Hierarchical models in forward and inverse UQ are discussed in Wikle2017 . We also refer to (Robert2007, Ch. 10) for general hierarchical Bayesian analyses. Hierarchical Bayesian inverse problems with spatially varying random fields are discussed in BolinKirchner:2017 ; Dunlop2017 ; JiangOu:2017 ; Roininen2016 ; Sraj2016 ; TagadeChoi:2014 . Hierarchical models are also considered in the frequentist approach to inference, see e.g. Haskard2007 ; Minden2017 for random field models and spatial statistics.

In our work we consider parameterised Gaussian random fields where the covariance operator depends on random variables. Notably, the resulting random field is not necessarily Gaussian, and we can thus model a larger class of spatial variations. However, the greater flexibility of parameterised Gaussian fields brings new computational challenges as we explain next.

Assume that we discretise a Gaussian random field by a KL expansion. The basis functions in this expansion are the eigenfunctions of the covariance operator. For fixed, deterministic hyperparameters it is sufficient to compute the KL eigenpairs only once since the covariance operator is fixed. However, changing the hyperparameters changes the covariance operator and often requires re-computing the KL eigenpairs. The associated computational cost and memory requirement scale at least quadratically in the number of spatial unknowns. Hence it is often practically impossible to use uncertain hyperparameters in a (Gaussian) random field model in 2D or 3D physical space. To overcome this limitation we suggest and study a reduced basis (RB) surrogate for the efficient computation of parameter-dependent KL expansions. In Sraj2016 the authors use a polynomial chaos surrogate for this task. In contrast, our reduced basis surrogate approximates the KL eigenpairs by a linear combination of snapshot eigenpairs associated with certain fixed values of the hyperparameters. Since this requires the solution of eigenproblems in a small number of unknowns (equal to the number of reduced basis functions) the computational cost can be reduced significantly.

Reduced basis methods were introduced in Noor1980 , and are typically used to solve PDEs for a large number of parameter configurations, see e.g. Book2 ; Book1 . In contrast, parameterised eigenproblems have not been treated extensively with reduced basis ideas. We refer to Fumagalli2016 ; Horger2016 ; Horger2017 ; Machiels2000 ; Sirkovic2016 ; Vallaghe2015 for applications and reviews of reduced basis surrogates for parameterised eigenproblems with PDEs. For non-parametric KL eigenproblems we mention that reduced basis methods have been combined with domain decomposition ideas Rizzi2018 . In this situation we need to solve several low-dimensional eigenproblems on the subdomains in the physical space. It is then possible to construct an efficient reduced basis by combining the subdomain solutions, see Rizzi2018 for details.

The forward and inverse uncertainty quantification with PDE-based models typically requires many expensive model evaluations. These evaluations are the basis but also the bottleneck in virtually any UQ method. While we construct a surrogate to replace the expensive computation of KL eigenpairs, it is much more common in the UQ literature to construct surrogates for the underlying computational model. See e.g. Farcas2018 ; Peherstorfer2016 ; Peherstorfer2018 for the forward problem and Chen2015 ; Drohmann2015 ; Lieberman2010 ; Manzoni2016 ; Mattis2018 ; Rubio2018 ; Stuart2018 for the Bayesian inverse problem.

We point out that many differential operators are local, and hence the associated discretised operator is often sparse. Linear equations or eigenvalue problems with sparse coefficient matrices can often be solved with a cost that scales linearly in the number of unknowns. In contrast, the reduced linear system matrix is often dense and the solution cost is at least quadratic in the number of unknowns. Hence, for reduced basis methods to be cheaper compared to solves with the full discretised PDE operator it is necessary that the size of the reduced basis is not too large compared to the number of unknowns of the full system. Notably, in most cases the discretised KL eigenproblem results in a dense matrix since the covariance integral operator is nonlocal. Hence we expect a significant reduction of the total computational cost even if the size of the reduced basis is only slightly smaller than the number of unknowns in the unreduced eigenspace.

The remainder of this report is organised as follows. In §2 we establish briefly the mathematical theory and computational framework for working with parameterised Gaussian random fields. In §3 we propose and study a reduced basis surrogate for the parametric KL eigenpairs. Specifically, we consider Matérn-type covariance operators, and suggest a linearisation to enable an efficient offline-online decomposition within the reduced basis solver. In §4 we introduce reduced basis sampling, and analyse its computational cost and memory requirements. Finally, we present results of numerical experiments in §5. We study the approximation accuracy and speed-up of the reduced basis surrogate, and illustrate the use of the reduced basis sampling for hierarchical forward and Bayesian inverse problems.

2 Mathematical and computational framework

To begin, we introduce the notation and some key elements, in particular, Gaussian and parameterised Gaussian measures. Moreover, we discuss sampling strategies and their associated costs for Gaussian and parameterised Gaussian measures. We focus on samplers which make use of truncated KL expansions.

2.1 Gaussian measures

Let $(\Omega, \mathcal{F}, \mathbb{P})$ denote a probability space. We recall the notion of a real-valued Gaussian random variable, which induces a Gaussian measure on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$.

Definition 2.1

The real-valued random variable $\xi$ follows a non-degenerate Gaussian measure $\mathrm{N}(m, \sigma^2)$ if its distribution has the Lebesgue density

$$\pi(x) = (2\pi\sigma^2)^{-1/2} \exp\!\Big(-\frac{(x-m)^2}{2\sigma^2}\Big), \qquad x \in \mathbb{R},$$

for some $m \in \mathbb{R}$ and $\sigma^2 > 0$. The Gaussian measure is degenerate if $\sigma^2 = 0$. In this case, we define $\mathrm{N}(m, 0) := \delta_m$, the Dirac measure concentrated in $m$.

Let $H$ denote a separable $\mathbb{R}$-Hilbert space with Borel $\sigma$-algebra $\mathcal{B}(H)$. We now introduce Gaussian measures on $(H, \mathcal{B}(H))$.

Definition 2.2

The $H$-valued random variable $\theta$ has a Gaussian measure if $f(\theta)$ follows a (possibly degenerate) Gaussian measure for every $f$ in the dual space of $H$. We write $\theta \sim \mathrm{N}(m, \mathcal{C})$, where $m$ denotes the mean and $\mathcal{C}$ the covariance operator of $\theta$.

In Definition 2.2 we distinguish two cases. If $H$ is finite-dimensional, then we call $\theta$ a (multivariate) Gaussian random variable with mean vector $m$ and covariance matrix $\mathcal{C}$. If $H$ is infinite-dimensional, then $\theta$ is called a Gaussian random field with mean function $m$ and covariance operator $\mathcal{C}$.

We now discuss two options to construct a Gaussian measure with given mean function $m$ and covariance operator $\mathcal{C}$. While any $m \in H$ can be used as a mean function, we assume that $\mathcal{C}$ is a linear, continuous, trace-class, positive semidefinite and self-adjoint operator. This ensures that $\mathcal{C}$ is a valid covariance operator, see Adler1990 ; Lifshits1995 . We denote the set of valid covariance operators on $H$ by $\mathrm{CO}(H)$.

If $H = \mathbb{R}^n$, we can identify a Gaussian measure on $\mathbb{R}^n$ in terms of a probability density function w.r.t. the Lebesgue measure.

Proposition 2.3

Let $m \in \mathbb{R}^n$ and let $\mathcal{C} \in \mathbb{R}^{n \times n}$ be a covariance matrix with full rank. Then, the Gaussian measure $\mathrm{N}(m, \mathcal{C})$ can be written as

$$\mathrm{N}(m, \mathcal{C})(A) = \int_A \pi(x)\, \mathrm{d}x, \qquad A \in \mathcal{B}(\mathbb{R}^n), \qquad (2.1)$$

where

$$\pi(x) = (2\pi)^{-n/2} \det(\mathcal{C})^{-1/2} \exp\!\Big(-\tfrac{1}{2}(x - m)^\top \mathcal{C}^{-1}(x - m)\Big) \qquad (2.2)$$

is the associated probability density function.

If $H$ is infinite-dimensional, we can construct a Gaussian measure on $H$ using the so-called Karhunen-Loève (KL) expansion.

Definition 2.4

Let $\mathcal{C} \in \mathrm{CO}(H)$ and let $(\lambda_i, \psi_i)_{i \in \mathbb{N}}$ denote the eigenpairs of $\mathcal{C}$, where the eigenfunctions $(\psi_i)_{i \in \mathbb{N}}$ form an orthonormal basis of $H$. Let $m \in H$ be a measurable function. Furthermore, let the components of $\xi = (\xi_i)_{i \in \mathbb{N}}$ form a sequence of independent and identically distributed (i.i.d.) random variables, where $\xi_i \sim \mathrm{N}(0, 1)$. Then, the expansion

$$\theta = m + \sum_{i=1}^{\infty} \sqrt{\lambda_i}\, \xi_i \psi_i$$

is called KL expansion.

One can easily verify the following proposition, see Karhunen1947 ; Loeve1978 .

Proposition 2.5

The KL expansion $\theta$ is distributed according to $\mathrm{N}(m, \mathcal{C})$.

In the remainder of this paper we assume that all eigenpairs are ordered in descending order with respect to the absolute value of the associated eigenvalues. For illustration purposes we give an example of a Gaussian measure on an infinite-dimensional, separable Hilbert space.

Example 2.6

Let $H = L^2(D)$, where $D \subseteq \mathbb{R}^d$ ($d \in \mathbb{N}$) is open, bounded and connected. We define the exponential covariance operator $\mathcal{C}(\ell, \sigma)$ with correlation length $\ell > 0$ and standard deviation $\sigma > 0$ as follows,

$$(\mathcal{C}(\ell, \sigma)\varphi)(x) = \int_D \sigma^2 \exp\!\Big(-\frac{\|x - y\|_2}{\ell}\Big)\, \varphi(y)\, \mathrm{d}y, \qquad \varphi \in H,\ x \in D, \qquad (2.3)$$

where $\|x - y\|_2$ is the metric induced by the 2-norm.

We now briefly discuss the implications of $\ell$ and $\sigma$ on a (mean-zero) Gaussian measure with exponential covariance operator as in (2.3). The correlation length $\ell$ models the strength of the correlation between the random variables $\theta(x)$ and $\theta(y)$ at two points $x, y \in D$ of the random field $\theta$. If $\ell$ is small, the marginals $\theta(x)$ and $\theta(y)$ are only strongly correlated if $\|x - y\|_2$ is small. Otherwise, if $\ell$ is large, then $\theta(x)$ and $\theta(y)$ are also strongly correlated if $\|x - y\|_2$ is large. In Figure 2.1, we plot samples of mean-zero Gaussian measures with exponential covariance and different correlation lengths. The pointwise standard deviation $\sigma$ is constant for all $x \in D$. One can show that the realisations of $\theta$ are a.s. continuous, independently of $\ell$ and $\sigma$.

Figure 2.1: Samples of mean-zero Gaussian random fields with exponential covariance and different correlation lengths.
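
As an illustration, the exponential covariance (2.3) can be assembled as a dense matrix on a set of evaluation points and sampled directly. The following NumPy sketch assumes a hypothetical 32 × 32 grid of points on the unit square and illustrative values for $\ell$ and $\sigma$; it is a minimal example rather than the implementation used for Figure 2.1.

```python
import numpy as np

def exponential_covariance(points, corr_length, std_dev):
    """Dense covariance matrix C_ij = sigma^2 * exp(-||x_i - x_j||_2 / ell)."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return std_dev**2 * np.exp(-dist / corr_length)

# Hypothetical 32 x 32 grid of evaluation points on the unit square.
grid = np.linspace(0.0, 1.0, 32)
xx, yy = np.meshgrid(grid, grid)
points = np.column_stack([xx.ravel(), yy.ravel()])

C = exponential_covariance(points, corr_length=0.25, std_dev=1.0)

# One mean-zero realisation via a Cholesky factor (small jitter for numerical stability).
L = np.linalg.cholesky(C + 1e-10 * np.eye(C.shape[0]))
sample = L @ np.random.default_rng(0).standard_normal(C.shape[0])
```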

2.2 Parameterised Gaussian measures

We now construct parameterised measures. We denote the space of parameters by $T$ and assume that $T$ is non-empty and finite-dimensional. Moreover, we assume that $T$ is a measurable space. The associated $\sigma$-algebra is the trace $\sigma$-algebra $\mathcal{B}(T)$. In the following we denote an element of $T$ by $\tau$ and a $T$-valued random variable by $\zeta$. We recall the notion of a Markov kernel; this is a measure-theoretically consistent object for the representation of a parameterised probability measure.

Definition 2.7

A Markov kernel from $(T, \mathcal{B}(T))$ to $(H, \mathcal{B}(H))$ is a function $K : \mathcal{B}(H) \times T \to [0, 1]$, where

  1. $K(B \mid \cdot) : T \to [0, 1]$ is measurable for all $B \in \mathcal{B}(H)$,

  2. $K(\cdot \mid \tau)$ is a probability measure for all $\tau \in T$.

We consider Markov kernels where $K(\cdot \mid \tau)$ is a Gaussian measure for all $\tau \in T$. Hence, we define a parameterised Gaussian measure in terms of a Markov kernel from $(T, \mathcal{B}(T))$ to $(H, \mathcal{B}(H))$. Particularly, we define

$$K(\cdot \mid \tau) := \mathrm{N}\big(m(\tau), \mathcal{C}(\tau)\big), \qquad \tau \in T, \qquad (2.4)$$

where

$$m : T \to H \quad \text{and} \quad \mathcal{C} : T \to \mathrm{CO}(H)$$

are measurable functions. The family of Gaussian measures in (2.4) does not define a valid parameterised measure yet; it remains to identify a measure on the parameter space $T$. To this end let $\zeta$ be a $T$-valued random variable. We assume that $\zeta$ is distributed according to some probability measure $\mu'$ on $(T, \mathcal{B}(T))$. Now, let $\theta$ be an $H$-valued random variable. We assume that, conditionally on $\zeta = \tau$, the random variable $\theta$ is distributed according to $K(\cdot \mid \tau)$. Then, the joint measure of $(\theta, \zeta)$ is given by

$$\mu(B \times A) = \int_A K(B \mid \tau)\, \mathrm{d}\mu'(\tau), \qquad B \in \mathcal{B}(H),\ A \in \mathcal{B}(T). \qquad (2.5)$$

The measure $\mu$ in (2.5) models a two-step sampling mechanism. Indeed, to generate a sample $(\theta, \zeta) \sim \mu$ we proceed as follows:

  1. Sample $\zeta \sim \mu'$,

  2. Sample $\theta \sim K(\cdot \mid \zeta) = \mathrm{N}(m(\zeta), \mathcal{C}(\zeta))$.

Finally, let $\mu_0$ be the marginal of $\mu$ w.r.t. $\theta$, i.e.,

$$\mu_0(B) = \mu(B \times T) = \int_T K(B \mid \tau)\, \mathrm{d}\mu'(\tau), \qquad B \in \mathcal{B}(H). \qquad (2.6)$$

Alternatively, $\mu_0$ can be defined in terms of the composition of $K$ and $\mu'$, $\mu_0 = K \circ \mu'$. Note that $\mu_0$ is a measure on $(H, \mathcal{B}(H))$, and that the Markov kernel $K$ is the parameterised Gaussian measure which we wanted to construct.

Remark 2.8

We point out that even if $K(\cdot \mid \tau)$ is a Gaussian measure for any $\tau \in T$, the marginal $\mu_0$ is not necessarily a Gaussian measure. We give two examples.

  • Let $H = \mathbb{R}$, let $\mu'$ be a Gaussian measure and $K(\cdot \mid \tau) = \mathrm{N}(\tau, \sigma^2)$. Then $\mu_0$ is a Gaussian measure. Indeed, this construction models a family of Gaussian random variables where the mean value is Gaussian.

  • Let $T$ be a finite set. Then $\mu_0$ is called a Gaussian mixture and is typically not Gaussian. See §1.1 and §2.1 in Everitt1981 .

Now, we return to Example 2.6 and construct an associated Markov kernel.

Example 2.9

We consider again the exponential covariance operator $\mathcal{C}(\ell, \sigma)$ in (2.3). Let $\ell > 0$ and $\sigma > 0$. For any such $\ell$ and $\sigma$, one can show that $\mathcal{C}(\ell, \sigma)$ is a valid covariance operator. The parameters $\tau = (\ell, \sigma)$ are random variables on a non-empty set $T$. The associated probability measure is a product measure $\mu' = \mu_\ell \otimes \mu_\sigma$. Here, $\mu_\ell$ is given such that $\ell$ takes only admissible (positive) values, and $\mu_\sigma$ is a Gaussian measure that is truncated outside of a set of admissible standard deviations; $\sigma$ models the standard deviation of $\theta(x)$, for any $x \in D$. The measure $\mu'$ and the Markov kernel $K(\cdot \mid \ell, \sigma)$ induce a joint measure $\mu$. This can now be understood as follows:

  1. Sample $(\ell, \sigma)$ from $\mu'$:

    • Sample the correlation length $\ell \sim \mu_\ell$,

    • Sample the standard deviation $\sigma \sim \mu_\sigma$.

  2. Sample the random field $\theta \sim K(\cdot \mid \ell, \sigma)$, i.e. with exponential covariance operator, standard deviation $\sigma$ and correlation length $\ell$.

Hence, we modelled a Gaussian random field with exponential covariance, where the correlation length and standard deviation are unknown.

In the following sections we study parameterised Gaussian measures in the setting of forward uncertainty propagation and Bayesian inversion.

2.3 Sampling of Gaussian random fields

Consider a Gaussian random field $\theta \sim \mathrm{N}(m, \mathcal{C})$ on $D$. For practical computations the infinite-dimensional parameter space $H = L^2(D)$ must be discretised. Here, we use finite elements. Let $\{\varphi_1, \dots, \varphi_N\}$ denote an $N$-dimensional basis of a finite element space. We approximate $H$ by $H_N := \operatorname{span}\{\varphi_1, \dots, \varphi_N\}$. Note that we can identify $H_N \cong \mathbb{R}^N$.

Let $\langle \cdot, \cdot \rangle_2$ denote the Euclidean inner product on $\mathbb{R}^N$. Note that $\mathbb{R}^N$ is a separable Hilbert space with inner product $\langle M \cdot, \cdot \rangle_2$, where $M$ is the Gramian matrix associated with the finite element basis $\{\varphi_1, \dots, \varphi_N\}$. The Gaussian measure $\mathrm{N}(m, \mathcal{C})$ on $H$ can then be represented on $\mathbb{R}^N$ by the measure $\mathrm{N}(m_N, C)$ with mean vector $m_N \in \mathbb{R}^N$ and covariance matrix $C \in \mathbb{R}^{N \times N}$.

From now on we assume that $H = \mathbb{R}^N$ is a finite-dimensional space with inner product $\langle M \cdot, \cdot \rangle_2$. Moreover, we assume that the Gaussian measure has mean zero, that is $m_N = 0$. Note that the covariance operator is now a matrix $C \in \mathbb{R}^{N \times N}$. A simple sampling strategy uses the Cholesky decomposition of $C$. Let $C = LL^\top$ and $\xi \sim \mathrm{N}(0, \mathrm{Id}_N)$. Then, it is easy to see that $L\xi \sim \mathrm{N}(0, C)$. The computational cost of a Cholesky decomposition is $\mathcal{O}(N^3)$.

Alternatively, we can use the KL expansion (see Definition 2.4)

$$\theta = \sum_{i=1}^{N} \sqrt{\lambda_i}\, \xi_i \psi_i,$$

where $(\lambda_i, \psi_i)_{i=1}^{N}$ are the eigenpairs of $C$. Recall that the eigenvectors form an orthonormal basis of $\mathbb{R}^N$ and that $\xi_i \sim \mathrm{N}(0, 1)$ i.i.d. Computing the spectral decomposition of a symmetric matrix is typically more expensive compared to the Cholesky decomposition. However, for some special cases the spectral decomposition can be computed cheaply. One example is circulant embedding Dietrich1997 which requires a structured mesh and a stationary covariance function. Another example can be found in Dunlop2017 where the eigenpairs of the covariance operator are known analytically.

The KL expansion offers a natural way to reduce the stochastic dimension of an $\mathbb{R}^N$-valued Gaussian random variable by simply truncating the expansion. This can be interpreted as dimension reduction from a high-dimensional to a low-dimensional stochastic space. A reduction from an infinite-dimensional to a finite-dimensional stochastic space is also possible. Let $N_{\mathrm{KL}} \leq N$ and consider the truncated KL expansion

$$\theta_{N_{\mathrm{KL}}} = \sum_{i=1}^{N_{\mathrm{KL}}} \sqrt{\lambda_i}\, \xi_i \psi_i.$$

The sum of the remaining eigenvalues in the truncated KL expansion gives the following error bound:

$$\mathbb{E}\big[\|\theta - \theta_{N_{\mathrm{KL}}}\|^2\big] = \sum_{i = N_{\mathrm{KL}}+1}^{N} \lambda_i. \qquad (2.7)$$

Furthermore, among all expansions in $N_{\mathrm{KL}}$ orthonormal basis vectors, the truncated KL expansion minimises the mean square error $\mathbb{E}[\|\theta - \hat{\theta}\|^2]$, for any given $N_{\mathrm{KL}}$. Hence, the truncated KL expansion is the optimal $N_{\mathrm{KL}}$-dimensional function which approximates $\theta$. In the statistics literature this method is called principal component analysis.

Observe that $\theta_{N_{\mathrm{KL}}}$ is a Gaussian random field on $\mathbb{R}^N$ with covariance operator

$$C_{N_{\mathrm{KL}}} = \sum_{i=1}^{N_{\mathrm{KL}}} \lambda_i\, \psi_i \psi_i^\top. \qquad (2.8)$$

This covariance operator has rank $N_{\mathrm{KL}}$. Sampling with the truncated KL expansion requires only the $N_{\mathrm{KL}}$ leading eigenpairs. We assume that this reduces the computational cost asymptotically. We discuss this in more detail in §2.4. In the remainder of this report, we generate samples with the truncated KL expansion.
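
A minimal sketch of truncated KL sampling, assuming the Euclidean inner product (identity Gramian) and an illustrative one-dimensional covariance matrix, reads as follows. The leading eigenpairs are computed with an iterative (ARPACK-type) solver, and the neglected variance corresponds to the bound (2.7).

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Illustrative 1D exponential covariance matrix (N = 500 points on [0, 1]).
x = np.linspace(0.0, 1.0, 500)[:, None]
C = np.exp(-np.abs(x - x.T) / 0.3)

def truncated_kl_sample(C, n_kl, rng):
    """Draw one mean-zero sample from N(0, C) using the n_kl leading eigenpairs."""
    eigvals, eigvecs = eigsh(C, k=n_kl, which="LA")   # largest algebraic eigenvalues
    order = np.argsort(eigvals)[::-1]                 # descending order
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    xi = rng.standard_normal(n_kl)
    sample = eigvecs @ (np.sqrt(np.maximum(eigvals, 0.0)) * xi)
    tail_variance = np.trace(C) - eigvals.sum()       # neglected variance, cf. (2.7)
    return sample, tail_variance

sample, tail = truncated_kl_sample(C, n_kl=50, rng=np.random.default_rng(0))
```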

2.4 Sampling of parameterised Gaussian random fields

We use sample-based techniques to approximate the pushforward and posterior measure in forward propagation of uncertainty problems and Bayesian inverse problems, respectively. To this end we require samples $\theta^{(1)}, \dots, \theta^{(K)} \sim \mu_0$. We assume that sampling from $\mu'$ is accessible and inexpensive. However, for each parameter sample $\tau^{(k)} \sim \mu'$ we also need to sample $\theta^{(k)} \sim \mathrm{N}(0, \mathcal{C}(\tau^{(k)}))$ using the truncated KL expansion. This requires the assembly of the (dense) covariance matrix $C(\tau^{(k)})$, and the computation of its $N_{\mathrm{KL}}$ leading eigenpairs. We abbreviate this process by the function $\texttt{eigs}$, which returns the leading eigenpairs. The complete sampling procedure is given in Algorithm 1.

The cost for the assembly of the covariance matrix is of order $N^2$. We assume that the cost of a single $\texttt{eigs}$ call is of order $\mathcal{O}(N_{\mathrm{KL}} N^2)$, i.e. the cost of $N_{\mathrm{KL}}$ dense matrix-vector products. This corresponds to an Implicitly Restarted Lanczos Method with $N_{\mathrm{KL}} \ll N$. See Calvetti1994 ; GuRuhe2000 for details. Also note that this method is implemented in ARPACK (and thus for instance in Matlab) as eigs. Thus, the total computational cost of Algorithm 1 is $\mathcal{O}(K N_{\mathrm{KL}} N^2)$ for $K$ samples. The largest contribution to the computational cost is the repeated computation of the leading eigenpairs of $C(\tau)$. As mentioned before, we can avoid this cost in certain special cases, e.g. when considering a covariance operator that allows for circulant embedding or that has known eigenpairs. However, in this paper we focus on parameterised covariance operators where the full eigenproblems have to be solved numerically for all parameter values.

for $k = 1, \dots, K$ do
       Sample $\tau^{(k)} \sim \mu'$
       $(\lambda_i(\tau^{(k)}), \psi_i(\tau^{(k)}))_{i=1}^{N_{\mathrm{KL}}} \leftarrow \texttt{eigs}(C(\tau^{(k)}), N_{\mathrm{KL}})$
       Sample $\theta^{(k)} = \sum_{i=1}^{N_{\mathrm{KL}}} \sqrt{\lambda_i(\tau^{(k)})}\, \xi_i \psi_i(\tau^{(k)})$ with $\xi_i \sim \mathrm{N}(0, 1)$ i.i.d.
end for
Algorithm 1: Sampling from a parameterised Gaussian measure
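
A Python sketch with the same structure as Algorithm 1 is shown below. The hyperprior (uniform correlation length, fixed standard deviation), the grid and all numerical values are hypothetical placeholders, and the exponential covariance of Example 2.6 with the Euclidean inner product is used for simplicity.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def sample_hyperparameters(rng):
    """Hypothetical hyperprior: uniform correlation length, fixed standard deviation."""
    return {"corr_length": rng.uniform(0.1, 0.5), "std_dev": 1.0}

def covariance_matrix(points, corr_length, std_dev):
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return std_dev**2 * np.exp(-dist / corr_length)

def algorithm_1(points, n_kl, n_samples, rng):
    """Sampling from a parameterised Gaussian measure (structure of Algorithm 1)."""
    samples = []
    for _ in range(n_samples):
        tau = sample_hyperparameters(rng)                   # step 1: tau ~ mu'
        C = covariance_matrix(points, **tau)                # assemble C(tau), O(N^2)
        lam, psi = eigsh(C, k=n_kl, which="LA")             # leading eigenpairs
        xi = rng.standard_normal(n_kl)
        theta = psi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)  # truncated KL sample
        samples.append((theta, tau))
    return samples

grid = np.linspace(0.0, 1.0, 20)
pts = np.column_stack([g.ravel() for g in np.meshgrid(grid, grid)])
out = algorithm_1(pts, n_kl=30, n_samples=5, rng=np.random.default_rng(2))
```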

We conclude this section by discussing the choice of $N_{\mathrm{KL}}$ for parameterised Gaussian random fields. In §2.3 we study the truncation error of the Karhunen-Loève expansion of a Gaussian random field. We now extend this study to parameterised Gaussian random fields. Let

$$\theta_{N_{\mathrm{KL}}}(\tau) := \sum_{i=1}^{N_{\mathrm{KL}}} \sqrt{\lambda_i(\tau)}\, \xi_i \psi_i(\tau),$$

where $(\lambda_i(\tau), \psi_i(\tau))$ are the eigenpairs of $C(\tau)$ and $\xi_1, \dots, \xi_{N_{\mathrm{KL}}} \sim \mathrm{N}(0, 1)$ i.i.d. Note that $\theta_{N_{\mathrm{KL}}}(\tau)$ is an approximation to the parameterised Gaussian random field $\theta(\tau)$. The mean square error of $\theta(\tau)$ and $\theta_{N_{\mathrm{KL}}}(\tau)$ can be computed as follows:

$$\mathbb{E}\big[\|\theta(\tau) - \theta_{N_{\mathrm{KL}}}(\tau)\|^2 \,\big|\, \tau\big] = \sum_{i=N_{\mathrm{KL}}+1}^{N} \lambda_i(\tau).$$

For Gaussian random fields, $N_{\mathrm{KL}}$ is typically chosen such that the root mean square error fulfils a certain threshold. For example,

$$\Big(\sum_{i=N_{\mathrm{KL}}+1}^{N} \lambda_i\Big)^{1/2} \leq \delta \Big(\sum_{i=1}^{N} \lambda_i\Big)^{1/2},$$

where $\delta > 0$ is a fixed factor. Looking at the error bound in (2.7) we see that $\delta$ determines which amount of the total variance of the exact (Gaussian) random field is captured by the truncated KL expansion. The same strategy can be applied for parameterised Gaussian random fields. Let

$$N_{\mathrm{KL}} := \min\Big\{ n : \sum_{i=n+1}^{N} \lambda_i(\tau) \leq \delta^2 \sum_{i=1}^{N} \lambda_i(\tau) \ \text{ for all } \tau \in T \Big\}$$

be the number of terms that fulfils the threshold for all parameters $\tau \in T$. Then, the mean square error is bounded by

$$\mathbb{E}\big[\|\theta(\tau) - \theta_{N_{\mathrm{KL}}}(\tau)\|^2 \,\big|\, \tau\big] \leq \delta^2 \sum_{i=1}^{N} \lambda_i(\tau), \qquad \tau \in T. \qquad (2.9)$$

Alternatively, it is possible to choose the truncation level individually for each $\tau \in T$,

$$N_{\mathrm{KL}}(\tau) := \min\Big\{ n : \sum_{i=n+1}^{N} \lambda_i(\tau) \leq \delta^2 \sum_{i=1}^{N} \lambda_i(\tau) \Big\}.$$

This gives the truncated representation

$$\theta_{N_{\mathrm{KL}}(\tau)}(\tau) = \sum_{i=1}^{N_{\mathrm{KL}}(\tau)} \sqrt{\lambda_i(\tau)}\, \xi_i \psi_i(\tau).$$

Clearly, the mean square error of this expansion fulfils the exact same error bound as in (2.9). However, the total number of terms in the expansion for a fixed parameter value might be smaller. Recall that the cost of the sampling depends (linearly) on the number of KL terms. Observe that $N_{\mathrm{KL}}$ is a sharp upper bound for $N_{\mathrm{KL}}(\tau)$, $\tau \in T$. Hence, using $N_{\mathrm{KL}}(\tau)$ is overall not more expensive than using $N_{\mathrm{KL}}$, and the truncated expansion satisfies the same error bound. Moreover, the numbers $N_{\mathrm{KL}}(\tau)$ are a priori unknown and have to be computed. To avoid this additional cost and to simplify the following discussion, we use $N_{\mathrm{KL}}$ independently of $\tau$.
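
For illustration, the truncation level can be read off a computed spectrum as follows, assuming the root mean square criterion with factor $\delta$ stated above; the spectrum in the example is artificial.

```python
import numpy as np

def choose_n_kl(eigvals, delta):
    """Smallest truncation level whose neglected variance is at most delta^2 of the total."""
    eigvals = np.sort(np.asarray(eigvals))[::-1]
    total = eigvals.sum()
    kept = np.cumsum(eigvals)
    ok = (total - kept) <= delta**2 * total   # squared RMSE criterion
    return int(np.argmax(ok)) + 1             # first index satisfying the criterion

# Artificial, rapidly decaying spectrum just for illustration.
lam = 2.0 ** (-np.arange(200, dtype=float))
n_kl = choose_n_kl(lam, delta=0.01)
```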

3 Reduced basis approach to parameterised eigenproblems

In §2.4 we discuss the sampling of parameterised Gaussian random fields. The largest contribution to the computational cost is the repeated computation of eigenpairs of the associated parameterised covariance matrix $C(\tau)$ for multiple parameter values $\tau \in T$. Reduced basis (RB) methods construct a low-dimensional trial space for a family of parameterised eigenproblems. This is the cornerstone of our fast sampling algorithm. To begin we explain the basic idea behind reduced basis (RB) approaches for eigenproblems. There are many options to construct a reduced basis, such as the proper orthogonal decomposition (POD), as well as single- and multi-choice greedy approaches. POD and greedy approaches for parameterised eigenproblems are discussed and compared in Horger2016 . In this paper we focus on the POD.

RB algorithms have two parts, an offline or preprocessing phase and an online phase. The offline phase consists of the construction of the reduced basis. In the online phase the reduced basis is used to solve a large number of low-dimensional eigenproblems. Finally, in §3.3, we discuss the offline-online decomposition for Matérn-type covariance operators. To be able to implement the RB approach efficiently we approximate the full covariance operators by linearly separable operators. We explain how this can be done, and analyse the proposed class of approximate covariance operators.

3.1 Basic idea

Let $C : T \to \mathbb{R}^{N \times N}$ be a measurable map, where $\mathbb{R}^N$ is a finite-dimensional space arising from the discretisation of an infinite-dimensional function space (see §2.3). Recall that in Algorithm 1 we need to solve the generalised eigenproblem associated with $C(\tau)$ for multiple parameter values $\tau \in T$. That is, we want to find $(\lambda_i(\tau), \psi_i(\tau))$, $i = 1, \dots, N_{\mathrm{KL}}$, such that

$$C(\tau)\, \psi_i(\tau) = \lambda_i(\tau)\, M \psi_i(\tau). \qquad (3.1)$$

The space $\mathbb{R}^N$ is in general high-dimensional, which results in a large computational cost for solving the eigenproblems. However, it is often not necessary to consider the full space $\mathbb{R}^N$. If we assume that the eigenpairs corresponding to different parameter values are closely related, then the space

$$\mathcal{V} := \operatorname{span}\{\psi_i(\tau) : i = 1, \dots, N_{\mathrm{KL}},\ \tau \in T\}$$

can be approximated by a low-dimensional subspace $\mathcal{V}_{\mathrm{RB}} \subseteq \mathbb{R}^N$ of dimension $N_{\mathrm{RB}}$, where $N_{\mathrm{RB}} \ll N$. We point out that the truncated KL expansion requires $N_{\mathrm{KL}}$ eigenpairs by assumption. However, the reduced operators are $N_{\mathrm{RB}} \times N_{\mathrm{RB}}$ matrices with at most $N_{\mathrm{RB}}$ eigenpairs. Hence, $N_{\mathrm{RB}} \geq N_{\mathrm{KL}}$ is required.

Now, let $V = [v_1, \dots, v_{N_{\mathrm{RB}}}] \in \mathbb{R}^{N \times N_{\mathrm{RB}}}$ be an orthonormal basis of $\mathcal{V}_{\mathrm{RB}}$ with respect to the inner product $\langle M \cdot, \cdot \rangle_2$. $V$ is called reduced basis and $\mathcal{V}_{\mathrm{RB}}$ is called reduced space. We can represent any function $\psi \in \mathcal{V}_{\mathrm{RB}}$ by a coefficient vector $r \in \mathbb{R}^{N_{\mathrm{RB}}}$, such that $\psi = V r$. The reduced eigenproblem is obtained by a Galerkin projection of the full eigenproblem in (3.1), and is again a generalised eigenproblem. The task is to find $(\lambda_i^{\mathrm{RB}}(\tau), r_i(\tau))$, $i = 1, \dots, N_{\mathrm{KL}}$, such that

$$V^\top C(\tau) V\, r_i(\tau) = \lambda_i^{\mathrm{RB}}(\tau)\, V^\top M V\, r_i(\tau). \qquad (3.2)$$

In (3.2) we have the reduced operator $C^{\mathrm{RB}}(\tau) := V^\top C(\tau) V$ and the reduced Gramian matrix $M^{\mathrm{RB}} := V^\top M V$, which are both $N_{\mathrm{RB}} \times N_{\mathrm{RB}}$ matrices. The eigenvector approximation in $\mathbb{R}^N$ can then be obtained by $\psi_i^{\mathrm{RB}}(\tau) = V r_i(\tau)$.
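
A compact sketch of the Galerkin projection and the reduced eigensolve, assuming the matrices $C(\tau)$, $M$ and the reduced basis $V$ are given as dense arrays, reads as follows.

```python
from scipy.linalg import eigh

def reduced_eigenpairs(C_full, M_full, V, n_kl):
    """Galerkin projection of the generalised eigenproblem (3.1) onto span(V).

    C_full : (N, N) covariance matrix C(tau)
    M_full : (N, N) Gramian matrix of the finite element basis
    V      : (N, N_RB) reduced basis with N_RB >= n_kl columns
    """
    C_rb = V.T @ C_full @ V                  # reduced operator, (N_RB, N_RB)
    M_rb = V.T @ M_full @ V                  # reduced Gramian
    lam, R = eigh(C_rb, M_rb)                # generalised eigensolve, ascending order
    lam, R = lam[::-1], R[:, ::-1]           # reorder descending
    return lam[:n_kl], V @ R[:, :n_kl]       # eigenvalues and lifted eigenvectors
```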

3.2 Offline-online decomposition

A reduced basis method typically has two phases. In the offline phase the reduced basis is constructed. In the online phase the reduced operator is assembled, and the reduced eigenproblem (3.2) is solved for selected parameter values $\tau \in T$. To be able to shift a large part of the computational cost from the online to the offline phase we assume that the following offline-online decomposition is available for the family of parameterised covariance operators.

Assumption 3.1

Let $C : T \to \mathbb{R}^{N \times N}$. We assume that there are functions $\vartheta_1, \dots, \vartheta_Q : T \to \mathbb{R}$ and linear operators $C_1, \dots, C_Q \in \mathbb{R}^{N \times N}$, such that

$$C(\tau) = \sum_{k=1}^{Q} \vartheta_k(\tau)\, C_k, \qquad \tau \in T.$$

In this case, $C$ is called a linearly separable operator.

3.2.1 Offline phase

We use snapshots of the full eigenvectors to construct the reduced basis. That means we choose a vector of snapshot parameter values $(\tau^{(1)}, \dots, \tau^{(S)}) \in T^S$ and solve the full eigenproblem (3.1) for all of its elements. We then have the snapshot matrix

$$\Psi := \big[\psi_1(\tau^{(1)}), \dots, \psi_{N_{\mathrm{KL}}}(\tau^{(1)}), \dots, \psi_1(\tau^{(S)}), \dots, \psi_{N_{\mathrm{KL}}}(\tau^{(S)})\big],$$

where all of the computed eigenfunctions are included and here represented as column vectors. Hence, we obtain a matrix $\Psi \in \mathbb{R}^{N \times S N_{\mathrm{KL}}}$. Moreover, we define the reduced space $\mathcal{V}_{\mathrm{RB}} := \operatorname{span}\{\psi_i(\tau^{(s)}) : i = 1, \dots, N_{\mathrm{KL}},\ s = 1, \dots, S\}$. Next, we construct an orthonormal basis for this vector space. One option to do this is the proper orthogonal decomposition (POD). As a result of the POD we obtain a spectral decomposition of $\Psi \Psi^\top$ of the form

$$\Psi \Psi^\top = W D W^\top,$$

where $D$ is a diagonal matrix containing the eigenvalues of $\Psi \Psi^\top$ and each column of $W$ contains the associated orthonormal eigenvectors. We use the eigenvectors with non-zero eigenvalues as basis vectors of $\mathcal{V}_{\mathrm{RB}}$, that is,

$$V := [w_1, \dots, w_{N_{\mathrm{RB}}}], \qquad N_{\mathrm{RB}} := \operatorname{rank}(\Psi).$$

The magnitude of the eigenvalues of $\Psi \Psi^\top$ is an indicator for the error when the corresponding eigenvectors are not included in $V$. See the discussion in §2.3. Neglecting reduced basis vectors, however, is beneficial due to the smaller dimension of the reduced basis. Depending on the pay-off of the dimension reduction compared to the approximation accuracy of the reduced basis one can choose a threshold $\mathrm{tol} > 0$ and work with the basis

$$V := [w_1, \dots, w_{N'}], \qquad N' := \max\{ i : D_{ii} \geq \mathrm{tol} \}.$$

In this case, we redefine $N_{\mathrm{RB}} := N'$.

Remark 3.2

In this paper we compute the singular value decomposition (SVD) of the snapshot matrix $\Psi$ instead of the spectral decomposition of $\Psi \Psi^\top$. This is possible since the squares of the singular values of $\Psi$ are identical to the eigenvalues of $\Psi \Psi^\top$.
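
A minimal POD-via-SVD sketch, assuming the Euclidean inner product (an $M$-weighted variant would factorise $M^{1/2}\Psi$ instead), is given below.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-10):
    """Orthonormal reduced basis from the snapshot matrix Psi via the SVD (cf. Remark 3.2).

    snapshots : (N, S * N_KL) array whose columns are the snapshot eigenvectors.
    tol       : singular values below tol * s_max are discarded.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    n_rb = int(np.sum(s >= tol * s[0]))      # squared singular values = POD eigenvalues
    return U[:, :n_rb], s[:n_rb] ** 2        # reduced basis V and retained POD eigenvalues
```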

There are many options to choose the snapshot parameter values $\tau^{(1)}, \dots, \tau^{(S)}$. In our applications the hyperparameter is a random variable with probability measure $\mu'$. Hence, a straightforward method is to sample the snapshot parameters independently from $\mu'$. Alternatively, one can select deterministic points in $T$, e.g. quadrature nodes. We will come back to this question in §5 where we discuss the numerical experiments. Furthermore, note that it is generally possible to use different reduced bases for different subsets of hyperparameters and/or index sets of eigenpairs.

3.2.2 Online phase

In the online phase we iterate over various hyperparameter values $\tau \in T$. In every step, we assemble the reduced operator $C^{\mathrm{RB}}(\tau)$, and then we solve the associated eigenproblem (3.2). By Assumption 3.1 it holds

$$C(\tau) = \sum_{k=1}^{Q} \vartheta_k(\tau)\, C_k.$$

Hence, the reduced operator can be assembled efficiently as follows,

$$C^{\mathrm{RB}}(\tau) = V^\top C(\tau) V = \sum_{k=1}^{Q} \vartheta_k(\tau)\, V^\top C_k V =: \sum_{k=1}^{Q} \vartheta_k(\tau)\, C_k^{\mathrm{RB}}.$$

The reduced operators $C_1^{\mathrm{RB}}, \dots, C_Q^{\mathrm{RB}}$ can be computed in the offline phase and stored in memory. In the online phase, we then only need to compute a certain linear combination of $C_1^{\mathrm{RB}}, \dots, C_Q^{\mathrm{RB}}$. This reduces the computational cost of the assembly of the reduced operator significantly. After the assembly step we solve the reduced eigenproblem (3.2) to obtain the approximate eigenvalues $\lambda_i^{\mathrm{RB}}(\tau)$ and eigenfunctions $\psi_i^{\mathrm{RB}}(\tau) = V r_i(\tau)$.
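
A sketch of the online assembly and solve, assuming the coefficient functions $\vartheta_k(\tau)$ and the precomputed reduced components are available, reads as follows.

```python
from scipy.linalg import eigh

# Offline (once): project each affine component of C(tau) onto the reduced basis V,
#   C_rb_components = [V.T @ C_k @ V for C_k in C_components]
#   M_rb = V.T @ M @ V

def online_solve(theta_k, C_rb_components, M_rb, n_kl):
    """Assemble C_RB(tau) = sum_k theta_k(tau) C_k_RB and solve the reduced problem (3.2)."""
    C_rb = sum(t * C_k for t, C_k in zip(theta_k, C_rb_components))
    lam, R = eigh(C_rb, M_rb)                          # dense generalised eigensolve
    return lam[::-1][:n_kl], R[:, ::-1][:, :n_kl]      # leading eigenpairs (coefficients)
```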

3.3 Matérn-type covariance operators

Matérn-type covariance operators are widely used in spatial statistics and uncertainty quantification. They are particularly popular for modelling spatially variable uncertainties in porous media. We are interested in solving the KL eigenproblem for Matérn covariance operators with hyperparameters such as the correlation length. Unfortunately, the Matérn-type covariance operators are not linearly separable with respect to the hyperparameters of interest (see Assumption 3.1). For this reason we introduce and analyse a class of linearly separable covariance operators which can approximate Matérn-type covariance operators with arbitrary accuracy.

Definition 3.3

Let $D \subseteq \mathbb{R}^d$ be an open, bounded and connected domain, and let $\sigma, \ell > 0$. Furthermore, let $\nu > 0$. Define the covariance kernel as

$$c_\nu(r) := \sigma^2 \frac{2^{1-\nu}}{\Gamma(\nu)} \Big(\frac{\sqrt{2\nu}\, r}{\ell}\Big)^{\nu} K_\nu\!\Big(\frac{\sqrt{2\nu}\, r}{\ell}\Big), \qquad r \geq 0,$$

where $K_\nu$ is the modified Bessel function of the second kind. Then, the Matérn-type covariance operator with smoothness $\nu$, standard deviation $\sigma$ and correlation length $\ell$ is given by

$$(\mathcal{C}(\nu, \ell, \sigma)\varphi)(x) = \int_D c_\nu(\|x - y\|_2)\, \varphi(y)\, \mathrm{d}y, \qquad \varphi \in L^2(D),\ x \in D,$$

where $\|x - y\|_2$ is the Euclidean distance in $\mathbb{R}^d$.

Remark 3.4

The exponential covariance operator in Examples 2.6 and 2.9 is a Matérn-type covariance operator. Indeed, it corresponds to the smoothness parameter $\nu = 1/2$.
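
For reference, the Matérn kernel of Definition 3.3 can be evaluated with standard special functions. The sketch below assumes the common $\sqrt{2\nu}$ scaling, which reduces to the exponential kernel of Example 2.6 at $\nu = 1/2$.

```python
import numpy as np
from scipy.special import kv, gamma

def matern_kernel(r, nu, corr_length, std_dev):
    """Matérn covariance c(r) for distances r >= 0 (nu = 0.5 recovers the exponential kernel)."""
    r = np.asarray(r, dtype=float)
    scaled = np.sqrt(2.0 * nu) * r / corr_length
    c = np.full_like(r, std_dev**2)                       # limit value at r = 0
    pos = scaled > 0
    c[pos] = (std_dev**2 * 2.0**(1.0 - nu) / gamma(nu)
              * scaled[pos]**nu * kv(nu, scaled[pos]))
    return c

# Sanity check against the exponential kernel at nu = 1/2.
r = np.linspace(0.0, 1.0, 5)
print(np.allclose(matern_kernel(r, 0.5, 0.2, 1.0), np.exp(-r / 0.2)))
```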

In Example 2.9 we discussed the possibility of using the standard deviation and the correlation length as hyperparameters in a Matérn-type covariance operator. What are the computational implications for the KL expansion? Changing $\sigma$ only rescales the KL eigenvalues, and does not require a re-computation of the KL expansion. However, changing the correlation length $\ell$ clearly changes the KL eigenfunctions. We can see this in Figure 3.1. The good news is that the KL eigenfunctions for different correlation lengths are very similar; for example, the number and location of extrema is preserved. This suggests that we might be able to construct a useful reduced basis from selected snapshots of KL eigenpairs corresponding to different correlation lengths.

Figure 3.1: Eigenfunctions 1, 11 and 94 of the Matérn-type covariance operator for two different correlation lengths.

Constructing and using the reduced basis efficiently requires the linear separability of the covariance operator, see Assumption 3.1. The Matérn operator is linearly separable with respect to $\sigma$. Unfortunately, it is not linearly separable with respect to $\ell$, since the covariance function is not linearly separable in $\ell$. However, it is possible to approximate the Matérn covariance operator with any precision by a linearly separable operator. Using the approximate, linearly separable operator allows us to construct an offline-online decomposition for the exact Matérn covariance operator without the need to use advanced linearisation techniques, such as the discrete empirical interpolation method ((D)EIM). We show this in the remainder of this section for the exponential covariance operator, i.e. $\nu = 1/2$. Similar approximations for other smoothness parameters follow from the analyticity of the covariance function.

Assumption 3.5

The correlation length satisfies $\ell \geq \ell_{\min}$ with a fixed $\ell_{\min} > 0$.

Definition 3.6

Let $Q \in \mathbb{N}$ and $\sigma > 0$, $\ell \geq \ell_{\min}$. Moreover, let Assumption 3.5 hold. We define the $Q$-term approximation $c_Q$ of the exponential covariance kernel by truncating a series expansion of $\exp(-r/\ell)$ after $Q$ terms; the precise expansion is given in the proof of Lemma 3.7. The associated covariance operator is then defined as

$$(\mathcal{C}_Q(\ell, \sigma)\varphi)(x) = \int_D c_Q(\|x - y\|_2)\, \varphi(y)\, \mathrm{d}y, \qquad \varphi \in L^2(D),\ x \in D.$$

Note that the operator $\mathcal{C}_Q(\ell, \sigma)$ is linearly separable w.r.t. $(\ell, \sigma)$. In particular, we have a representation of the form

$$\mathcal{C}_Q(\ell, \sigma) = \sum_{k=0}^{Q} \vartheta_k(\ell, \sigma)\, \mathcal{C}_k$$

for any $\ell \geq \ell_{\min}$ and $\sigma > 0$.

The operator $\mathcal{C}_Q(\ell, \sigma)$ arises from a truncation of a series expansion of the exponential covariance function. This is detailed in the proof of the following lemma, where we derive an error bound between the exact Matérn covariance operator and the linearly separable approximation $\mathcal{C}_Q(\ell, \sigma)$.
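
The precise expansion and its remainder are given in Appendix B and are not reproduced here. Purely to illustrate the idea of linear separability, the following sketch uses a plain Taylor expansion of $\exp(-d/\ell)$ in $1/\ell$ as a stand-in: parameter-dependent coefficients $\sigma^2/\ell^k$ multiply parameter-independent kernels $(-d)^k/k!$.

```python
import numpy as np
from math import factorial

def separable_exponential_kernel(dist, corr_length, std_dev, n_terms):
    """Q-term separable stand-in for sigma^2 * exp(-d / ell) via a truncated Taylor series."""
    approx = np.zeros_like(dist, dtype=float)
    for k in range(n_terms + 1):
        theta_k = std_dev**2 / corr_length**k     # depends only on (ell, sigma)
        A_k = (-dist) ** k / factorial(k)         # depends only on the distances
        approx += theta_k * A_k
    return approx

d = np.linspace(0.0, 1.0, 5)
error = np.max(np.abs(separable_exponential_kernel(d, 0.5, 1.0, n_terms=20) - np.exp(-d / 0.5)))
```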

Lemma 3.7

Let $Q \in \mathbb{N}$, let $\sigma > 0$ and let Assumption 3.5 hold. Then the error between the exact covariance operator and its linearly separable approximation $\mathcal{C}_Q(\ell, \sigma)$ can be bounded explicitly; the bound depends on $Q$, $\ell_{\min}$, $\sigma$ and the domain $D$, and it tends to zero as $Q \to \infty$.

Proof.

See Appendix B. ∎

The covariance operator approximation brings new issues. The Matérn-type covariance operators are valid covariance operators on $L^2(D)$. However, this is not necessarily the case for