
# Finite element approximation of non-Markovian random fields

In this paper, we present finite element approximations of a class of Generalized random fields defined over a bounded domain of ℝ^d or a smooth d-dimensional Riemannian manifold (d > 1). An explicit expression for the covariance matrix of the weights of the finite element representation of these fields is provided and an analysis of the approximation error is carried out. Finally, a method to generate simulations of these weights while limiting computational and storage costs is presented.


## 1 Introduction

The "SPDE approach", as popularized by lindgren2011explicit, consists in characterizing stationary continuous Markov random fields on ℝ^d as solutions of stochastic partial differential equations (SPDEs). This approach has two benefits:

• Discrete approximations of the solutions of these SPDEs, obtained using Galerkin methods such as the Finite Element method, are used in place of the original field in numerical computations. lindgren2011explicit actually derived the expression of the precision matrix of the weights of the discrete representation of the solution, thus facilitating the use of this approach.

• By tinkering with these same SPDEs, generalizations of stationary continuous Markov random fields on ℝ^d can be defined on manifolds, and oscillating and even non-stationary random fields can be produced (lindgren2011explicit; fuglstad2015exploring).

This work aims at generalizing the SPDE approach to fields that are not continuously Markovian, while keeping the benefits mentioned above. First, the motivations for this work are detailed in order to point out the type of random fields that will be used throughout the developments. Then, an explicit formula for the covariance matrix of the weights of the finite element representation of such fields is provided, and an error analysis is carried out. Finally, an algorithm based on a Chebyshev polynomial approximation, which computes simulations of these weights with a linear computational complexity, is introduced.

Note: This paper is a stub presenting the main results obtained by the authors. It is planned to be expanded.

## 2 Motivations

Denote by W the spatial Gaussian white noise on ℝ^d, defined on a complete probability space (Ω, 𝒜, ℙ). It can be seen as a Gaussian random measure satisfying:

 ∀A, B ∈ ℬ(ℝ^d), Cov[W(A), W(B)] = Leb(A ∩ B)

where ℬ(ℝ^d) is the collection of bounded Borel sets of ℝ^d and Leb denotes the Lebesgue measure.

### 2.1 Solutions of stochastic partial differential equations

Let g be a continuous, polynomially bounded function such that:

 ∃N > 0, ∫_{ℝ^d} |g(‖ω‖²)|^{−2} (1 + ‖ω‖²)^{−N} dω < ∞

And let L_g be the pseudo-differential operator defined over sufficiently regular functions of ℝ^d by:

 L_g[·] = F^{−1}[ω ↦ g(‖ω‖²) F[·](ω)]

Consider then the stochastic partial differential equation (SPDE) defined over ℝ^d by (vergara2018general):

 L_g Z = W (1)

where W is a spatial Gaussian white noise and the equality is understood in the second-order sense, i.e. both sides have the same (generalized) covariance. The existence and uniqueness of a stationary solution of (1) are guaranteed if |g| is bounded from below by the inverse of a strictly positive polynomial (vergara2018general).

Numerical solutions of (1) on a triangulated domain D can be obtained using the finite element method. A finite element approximation of the solution of (1) is then built as:

 Z(x) = ∑_{i=1}^n z_i ψ_i(x), x ∈ D

where (ψ_i)_{1≤i≤n} is a family of deterministic basis functions and z = (z_1, …, z_n)^T is a vector of Gaussian weights.

lindgren2011explicit provided an expression for the precision matrix of these weights in the special case where g is a real polynomial taking strictly positive values on ℝ_+, which corresponds to the case where Z is a continuous Markov random field.

A first motivation for this work is to come up with numerical solutions of (1) for a wider class of functions g, using the fact that, within the framework presented above, the solution of (1) is actually the Generalized random field defined by (vergara2018general):

 Z = L_{1/g} W

### 2.2 Generalized random fields

Let Z be an isotropic stationary real Gaussian random field on ℝ^d with spectral density f. In particular, f is a positive radial function of the frequency ω. lang2011fast showed that Z can then be seen as a Generalized random field defined by:

 Z = L_{√f} W = F^{−1}[ω ↦ √(f(‖ω‖²)) F[W](ω)]

where W is once again a spatial Gaussian white noise. They used this characterization of Gaussian fields with spectral density f to derive algorithms for the fast generation of samples of such fields over rectangular lattices of ℝ^d using the fast Fourier transform.
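As an illustration of this spectral characterization, the following sketch samples such a field on a regular 1D lattice by filtering white noise in the Fourier domain. It is only a minimal sketch of the idea, not the algorithm of lang2011fast: the function name, the periodic boundary treatment and the white-noise normalization are our own illustrative choices.

```python
import numpy as np

def sample_spectral_1d(n, dx, f, rng=None):
    """Sample a stationary Gaussian field on a regular 1D lattice of n points
    with spacing dx, given its (radial) spectral density f, by filtering white
    noise in the Fourier domain: Z = F^{-1}[ sqrt(f(|omega|^2)) F[W] ]."""
    rng = np.random.default_rng(rng)
    # Discrete white noise with variance 1/dx (approximates continuous white noise)
    w = rng.standard_normal(n) / np.sqrt(dx)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular frequencies
    # Multiply the noise spectrum by sqrt(f(||omega||^2)), then invert
    z_hat = np.sqrt(f(omega ** 2)) * np.fft.fft(w)
    return np.real(np.fft.ifft(z_hat))
```

The periodicity imposed by the discrete Fourier transform is precisely the lattice restriction that the SPDE approach removes.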

A second motivation for this work is to combine this characterization of Gaussian fields with a given spectral density and the SPDE approach to come up with a way to generate samples of these fields on domains more complex than lattices, namely irregular grids, general bounded domains of ℝ^d and even Riemannian manifolds. This type of generalization was exploited by lindgren2011explicit in the particular case of continuous Markov random fields.

In the next section, the approximation of such Generalized random fields using the Finite element method is presented, and an error analysis on this approximation is carried out.

## 3 Finite element approximation of Generalized random fields

Let D be either a bounded (convex and polygonal) domain of ℝ^d, or a compact d-dimensional smooth Riemannian manifold. Denote H := L²(D), the separable Hilbert space of (real) square-integrable functions on D, and denote (·,·)_H the inner product of H.

Let L denote a densely defined, self-adjoint, positive semi-definite linear differential operator of second order, defined on a domain of H, with Dirichlet boundary conditions on ∂D. L is diagonalizable in an orthonormal basis of H. In particular, the eigenvalue–eigenfunction pairs of L, denoted {(λ_j, e_j)}_{j∈ℕ}, are arranged so that the eigenvalues satisfy (courant1966methods):

 0 ≤ λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_j ≤ …, lim_{j→+∞} λ_j = +∞

### 3.1 Theoretical framework

#### Differential operator on H

For a function γ : ℝ_+ → ℝ, denote H_γ := {φ ∈ H : ∑_{j∈ℕ} γ(λ_j)² (φ, e_j)_H² < ∞}. Then we define the action of the differential operator γ(L) on H_γ by:

 γ(L)φ := ∑_{j∈ℕ} γ(λ_j) (φ, e_j)_H e_j, φ ∈ H_γ (2)

Remark that the subspace H_γ is itself a Hilbert space with respect to the inner product and corresponding norm defined by:

 (φ, ψ)_γ = (γ(L)φ, γ(L)ψ)_H = ∑_{j∈ℕ} γ(λ_j)² (φ, e_j)_H (ψ, e_j)_H (3)

The following lemma gives a sufficient condition so that H_γ = H.

###### Lemma 3.1.

If γ satisfies γ(λ) → 0 as λ → +∞, then H_γ = H.

###### Proof.

Since λ_j → +∞ and γ(λ) → 0 as λ → +∞, the sequence (γ(λ_j))_{j∈ℕ} converges to 0, so in particular it is bounded. Therefore, there exists C > 0 such that ∀j ∈ ℕ, |γ(λ_j)| ≤ C, and so γ(λ_j)² (φ, e_j)_H² ≤ C² (φ, e_j)_H² for any φ ∈ H, which allows to conclude that the series ∑_j γ(λ_j)² (φ, e_j)_H² is convergent given that the series ∑_j (φ, e_j)_H² is convergent. ∎

In the particular case where L = −Δ, where Δ denotes the Laplacian (or the Laplace–Beltrami operator) on D, the operator γ(−Δ) satisfies the following property:

###### Lemma 3.2.

∀φ ∈ H_γ,

 γ(−Δ)φ = F^{−1}[ω ↦ γ(‖ω‖²) F[φ](ω)] = L_γ φ

where F denotes the extension of the Fourier transform to D.

Details and proof of this lemma are provided in Appendix A. In particular, the motivational cases presented in Section 2 correspond to the choices γ = 1/g for Section 2.1 and γ = √f for Section 2.2.

#### Generalized random fields of H

Denote W the spatial Gaussian white noise on D, defined on a complete probability space (Ω, 𝒜, ℙ). A characterization of W based on the Hilbert space H is given by the following lemma.

###### Lemma 3.3.

Let (ξ̃_j)_{j∈ℕ} be a sequence of independent, standard normally-distributed random variables. Then, the linear functional Ξ defined over H by:

 Ξ : φ ∈ H ↦ ∑_{j∈ℕ} ξ̃_j (φ, e_j)_H

is a Gaussian white noise (which is also denoted W). In particular, it satisfies:

 ∀φ, ψ ∈ H, Cov[Ξ(φ), Ξ(ψ)] = (φ, ψ)_H

###### Proof.

For an integer m ≥ 1, take a set {φ_1, …, φ_m} of linearly independent elements of H and denote X = (Ξ(φ_1), …, Ξ(φ_m))^T. Let's show that X is a Gaussian vector. Indeed, the characteristic function of this vector is:

 Φ(w) = E[e^{i w^T X}] = E[exp(i ∑_{k=1}^m w_k Ξ(φ_k))] = E[exp(i ∑_{j∈ℕ} ξ̃_j ∑_{k=1}^m w_k (φ_k, e_j)_H)]

Using the fact that the ξ̃_j are independent standard Gaussian variables yields:

 Φ(w) = ∏_{j∈ℕ} E[exp(i ξ̃_j ∑_{k=1}^m w_k (φ_k, e_j)_H)] = ∏_{j∈ℕ} exp(−(1/2)(∑_{k=1}^m w_k (φ_k, e_j)_H)²)
  = exp(−(1/2) ∑_{j∈ℕ} ∑_{k=1}^m ∑_{l=1}^m w_k (φ_k, e_j)_H w_l (φ_l, e_j)_H) = exp(−(1/2) ∑_{k=1}^m ∑_{l=1}^m w_k w_l ∑_{j∈ℕ} (φ_k, e_j)_H (φ_l, e_j)_H)
  = exp(−(1/2) ∑_{k=1}^m ∑_{l=1}^m w_k w_l (φ_k, φ_l)_H) = exp(−(1/2) w^T K w)

where K is the positive definite matrix whose entries are K_{kl} = (φ_k, φ_l)_H, 1 ≤ k, l ≤ m. Therefore, we can conclude that X is a Gaussian vector and, in particular, by definition of K, the covariance identity of Lemma 3.3 is satisfied by Ξ.
Hence, Ξ can be identified with the spatial Gaussian white noise on D by defining, for A ∈ ℬ(D), the measure W(A) := Ξ(1_A), where 1_A is the indicator function of the set A. ∎
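The covariance identity of Lemma 3.3 can be checked numerically: truncating the sum ∑_j (φ, e_j)_H (ψ, e_j)_H recovers (φ, ψ)_H by Parseval's identity. The sketch below is our own illustration (with hypothetical helper names), using the Dirichlet eigenbasis e_j(x) = √(2/π) sin(jx) of −Δ on (0, π) and a simple rectangle-rule quadrature.

```python
import numpy as np

def wn_covariance_partial_sum(phi_vals, psi_vals, x, n_modes):
    """Partial sum sum_{j<=n_modes} (phi,e_j)_H (psi,e_j)_H for the Dirichlet
    eigenbasis e_j(x) = sqrt(2/pi) sin(j x) of the negative Laplacian on
    (0, pi). By Lemma 3.3 this is Cov[W(phi), W(psi)] up to truncation; by
    Parseval's identity it converges to the inner product (phi, psi)_H."""
    j = np.arange(1, n_modes + 1)
    E = np.sqrt(2.0 / np.pi) * np.sin(np.outer(x, j))   # basis on the grid
    dx = x[1] - x[0]
    c_phi = E.T @ phi_vals * dx      # quadrature for (phi, e_j)_H
    c_psi = E.T @ psi_vals * dx
    return float(c_phi @ c_psi)
```

For φ = ψ = sin, the partial sum approaches (sin, sin)_H = π/2 as soon as the first mode is included, since sin is itself proportional to e_1.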

Denote L²(Ω, H) the Hilbert space of H-valued random variables X satisfying E[‖X‖_H²] < ∞, equipped with the scalar product (X, Y) ↦ E[(X, Y)_H]. The next result introduces a class of Generalized random fields defined through the white noise W that can be identified with elements of L²(Ω, H).

###### Definition 3.1.

Let γ : ℝ_+ → ℝ be such that:

 ∑_{j∈ℕ} γ(λ_j)² < ∞ (4)

Then, γ(L)W denotes the Generalized random field defined by:

 (γ(L)W)[φ] := W(γ(L)[φ]), φ ∈ H (5)
###### Lemma 3.4.

γ(L)W can be identified with an element Z of L²(Ω, H) defined by:

 Z = ∑_{j∈ℕ} ξ̃_j γ(λ_j) e_j

for a sequence (ξ̃_j)_{j∈ℕ} of independent standard normally-distributed random variables, through the linear functional of H: φ ↦ (Z, φ)_H, φ ∈ H.

###### Proof.

Clearly, Z is an element of L²(Ω, H) given that

 ‖Z‖²_{L²(Ω,H)} = E[‖Z‖_H²] = ∑_{j∈ℕ} γ(λ_j)² < ∞

Let's now show that the linear functional γ(L)W is equal to φ ↦ (Z, φ)_H. Indeed, ∀φ ∈ H,

 (γ(L)W)(φ) = W(γ(L)φ) = W(∑_{j∈ℕ} γ(λ_j) (φ, e_j)_H e_j)

So, using Lemma 3.3,

 (γ(L)W)(φ) = ∑_{j∈ℕ} ξ̃_j γ(λ_j) (φ, e_j)_H = (Z, φ)_H ∎

From now on, Generalized random fields of the form γ(L)W will be directly identified with their L²(Ω, H) representation, and we will write:

 Z = γ(L)W = ∑_{j∈ℕ} ξ̃_j γ(λ_j) e_j (6)

where (ξ̃_j)_{j∈ℕ} is a sequence of independent, standard normally-distributed random variables. In the next section, the simulation of such random fields through a finite element scheme is presented.
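Before turning to finite elements, note that representation (6) can itself be sampled by truncation whenever the eigenpairs of L are known in closed form. The sketch below is an illustration of ours (not part of the paper's algorithm) for L = −Δ on (0, π) with Dirichlet conditions, where λ_j = j² and e_j(x) = √(2/π) sin(jx).

```python
import numpy as np

def sample_truncated_field(x, gamma, n_modes, rng=None):
    """Draw one sample of the truncated expansion (6),
    Z = sum_{j<=n_modes} xi_j * gamma(lambda_j) * e_j,
    for L = -Laplacian on (0, pi) with Dirichlet conditions:
    lambda_j = j^2 and e_j(x) = sqrt(2/pi) sin(j x)."""
    rng = np.random.default_rng(rng)
    j = np.arange(1, n_modes + 1)
    xi = rng.standard_normal(n_modes)                 # i.i.d. N(0, 1) weights
    E = np.sqrt(2.0 / np.pi) * np.sin(np.outer(x, j))
    return E @ (gamma(j.astype(float) ** 2) * xi)
```

For instance, γ(λ) = (1 + λ)^{−1} satisfies condition (4) here, since ∑_j (1 + j²)^{−2} < ∞. On general domains the eigenpairs are not available in closed form, which is what motivates the finite element construction below.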

### 3.2 Finite element approximation of a Generalized random field

Let 𝒯_h denote a triangulation of D with mesh size h, and let (ψ_i)_{1≤i≤n} be a family of linearly independent basis functions associated with 𝒯_h, each lying in the domain of L. Denote V_h the linear span of (ψ_i)_{1≤i≤n}, which is an n-dimensional space, and denote (f_{j,h})_{1≤j≤n} an orthonormal basis of V_h with respect to the scalar product (·,·)_H. The discretization of the operator L on V_h is denoted L_h and is defined by:

 L_h : V_h → V_h, ψ ↦ L_h ψ = ∑_{j=1}^n (Lψ, f_{j,h})_H f_{j,h} (7)

Let C and G be the n × n matrices defined by:

 C = [(ψ_i, ψ_j)_H]_{1≤i,j≤n}, G = [(Lψ_i, ψ_j)_H]_{1≤i,j≤n}

C is a symmetric positive definite matrix called the mass matrix, and G is a symmetric positive semi-definite matrix called the stiffness matrix. Denote C^{1/2} the symmetric positive definite square root of C, and C^{−1/2} its inverse.

###### Lemma 3.5.

L_h is diagonalizable on V_h and its eigenvalues are those of the matrix S defined by:

 S = C^{−1/2} G C^{−1/2}

In particular, the application:

 E : v ∈ ℝ^n ↦ ∑_{j=1}^n [C^{−1/2}v]_j ψ_j

is an isometric isomorphism that maps the eigenvectors of S to the eigenfunctions of L_h.

###### Proof.

Take λ an eigenvalue of S and denote v an associated eigenvector. Then, Sv = λv, and so Gu = λCu where u = C^{−1/2}v. Hence, ∀i ∈ {1, …, n}, (Lψ_i, E(v))_H = λ (ψ_i, E(v))_H, which, using the fact that L is self-adjoint, gives

 (L E(v), ψ_i)_H = λ (E(v), ψ_i)_H, 1 ≤ i ≤ n (8)

In particular, given that (f_{j,h})_{1≤j≤n} is also a basis of V_h, we denote A the invertible change-of-basis matrix between (ψ_i)_{1≤i≤n} and (f_{j,h})_{1≤j≤n}. Then (8) can be written,

 A ((L E(v), f_{1,h})_H, …, (L E(v), f_{n,h})_H)^T = λ A ((E(v), f_{1,h})_H, …, (E(v), f_{n,h})_H)^T

And therefore, ∀j ∈ {1, …, n}, (L E(v), f_{j,h})_H = λ (E(v), f_{j,h})_H. And so, given that E(v) ∈ V_h,

 L_h E(v) = ∑_j (L E(v), f_{j,h})_H f_{j,h} = ∑_j λ (E(v), f_{j,h})_H f_{j,h} = λ E(v)

Therefore λ is an eigenvalue of L_h and E maps the eigenvectors of S to the eigenfunctions of L_h.

E is an isometry: indeed, ∀v ∈ ℝ^n, ‖E(v)‖_H² = (C^{−1/2}v)^T C (C^{−1/2}v) = ‖v‖². Consequently, given that it is also linear, E is injective. And finally, using the rank–nullity theorem, E is bijective (as an injective linear map between two vector spaces of the same dimension). ∎

Let's denote λ_{1,h} ≤ ⋯ ≤ λ_{n,h} the eigenvalues of L_h (or equivalently of S) and (e_{j,h})_{1≤j≤n} the associated orthonormal eigenfunctions, obtained by mapping through E the orthonormal eigenvectors (v_j)_{1≤j≤n} of S. In particular, denoting V = (v_1 ⋯ v_n),

 S = V diag(λ_{1,h}, …, λ_{n,h}) V^T, and e_{j,h} = E(v_j), 1 ≤ j ≤ n (9)
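The construction above can be made concrete in one dimension. The sketch below is our own illustration (with hypothetical function names): it assembles the P1 mass and stiffness matrices for L = −d²/dx² on (0, 1) with Dirichlet conditions and forms S = C^{−1/2} G C^{−1/2}, whose smallest eigenvalues then approximate the exact Laplacian eigenvalues (kπ)².

```python
import numpy as np

def p1_matrices(n):
    """Mass matrix C and stiffness matrix G for P1 (hat) basis functions on
    (0, 1), uniform mesh with n interior nodes, Dirichlet boundary conditions,
    discretizing L = -d^2/dx^2."""
    h = 1.0 / (n + 1)
    C = (h / 6.0) * (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
    G = (1.0 / h) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    return C, G

def spectral_matrix(C, G):
    """S = C^{-1/2} G C^{-1/2}, with C^{-1/2} built from the eigendecomposition
    of the (symmetric positive definite) mass matrix."""
    w, U = np.linalg.eigh(C)
    C_inv_sqrt = (U / np.sqrt(w)) @ U.T   # columns of U scaled by 1/sqrt(w_j)
    return C_inv_sqrt @ G @ C_inv_sqrt
```

With n = 200 interior nodes, the smallest eigenvalue of S agrees with π² to a relative error below 10⁻³.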

Take γ : ℝ_+ → ℝ. The discretization of the operator γ(L) on V_h, denoted γ(L_h), can now be defined as:

 γ(L_h) : V_h → V_h, ψ ↦ γ(L_h)ψ := ∑_{j=1}^n γ(λ_{j,h}) (ψ, e_{j,h})_H e_{j,h}
###### Definition 3.2.

Let W_h be the V_h-valued random variable defined by:

 W_h = ∑_{j=1}^n ξ̃_j e_{j,h}

where ξ̃ = (ξ̃_1, …, ξ̃_n)^T is a vector whose entries are independent zero-mean (standard) normally-distributed random variables. Then, W_h is called white noise on V_h.

###### Lemma 3.6.

Let W_h be a white noise on V_h. Then W_h can be written W_h = ∑_{i=1}^n w_i ψ_i, where w = C^{−1/2} V ξ̃.

###### Proof.

Using the linearity of E, W_h can be written W_h = ∑_{j=1}^n ξ̃_j E(v_j) = E(Vξ̃). But also, in the basis (ψ_i)_{1≤i≤n}, we get W_h = ∑_{i=1}^n w_i ψ_i = E(C^{1/2}w). In particular, using the fact that E is bijective, C^{1/2}w = Vξ̃, which proves the result. ∎

###### Definition 3.3.

The V_h-valued random variable Z_h defined by:

 Z_h = γ(L_h) W_h (10)

is called the finite element approximation of the Generalized random field Z defined by (6).

###### Theorem 3.1.

The finite element approximation Z_h defined by (10) can be decomposed as:

 Z_h = ∑_{i=1}^n z_i ψ_i

for some multivariate normally-distributed weights z = (z_1, …, z_n)^T with mean 0 and covariance matrix:

 Σ_z = C^{−1/2} γ²(S) C^{−1/2}, where γ²(S) := V diag(γ(λ_{1,h})², …, γ(λ_{n,h})²) V^T (11)
###### Proof.

Notice that Z_h = ∑_{i=1}^n z_i ψ_i = E(C^{1/2}z). But also,

 Z_h = γ(L_h)W_h = ∑_{j=1}^n γ(λ_{j,h}) ξ̃_j E(v_j) = E(∑_{j=1}^n γ(λ_{j,h}) ξ̃_j v_j) = E(V diag(γ(λ_{1,h}), …, γ(λ_{n,h})) ξ̃)

Therefore, given that E is bijective, z = C^{−1/2} V diag(γ(λ_{1,h}), …, γ(λ_{n,h})) ξ̃, which proves the result. ∎

Theorem 3.1 provides an explicit expression for the covariance matrix of the weights of the finite element approximation of a field defined by (6). This expression agrees in particular with the expression of the precision matrix of those same weights derived in (lindgren2011explicit) for the particular case of continuous Markovian fields on ℝ^d. Given the generality of the admissible domains (bounded convex domains or Riemannian manifolds), differential operators L and functions γ, a wide class of fields is now open to study.
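Expression (11) can be evaluated directly for small problems. The sketch below is an illustration of ours (the dense eigendecomposition route is exactly what Section 4 avoids at scale): it computes Σ_z from the mass and stiffness matrices. As a sanity check, for γ ≡ 1 one recovers Σ_z = C^{−1}, the covariance of the white-noise weights of Lemma 3.6.

```python
import numpy as np

def weight_covariance(C, G, gamma):
    """Covariance matrix of the finite element weights, eq. (11):
    Sigma_z = C^{-1/2} gamma^2(S) C^{-1/2}, computed by a dense
    eigendecomposition (illustration only; see Section 4 for a scalable way)."""
    w, U = np.linalg.eigh(C)
    C_inv_sqrt = (U / np.sqrt(w)) @ U.T          # C^{-1/2}
    S = C_inv_sqrt @ G @ C_inv_sqrt
    lam, V = np.linalg.eigh(S)                   # S = V diag(lam) V^T
    gamma2 = (V * gamma(lam) ** 2) @ V.T         # gamma^2(S)
    return C_inv_sqrt @ gamma2 @ C_inv_sqrt
```

Cholesky factorization of the returned Σ_z then yields exact (but expensive) samples of the weights.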

Note in particular that Riemannian manifolds provide a generic framework for the study of non-stationary fields. Indeed, stationary fields on a Riemannian manifold equipped with a metric g can be seen as non-stationary random fields on the same manifold with locally-varying anisotropies defined by g.

In the next section, an error bound between a generalized random field defined by (6) and its finite element approximation defined by (10) is provided, using the same framework as in (bolin2017numerical).

### 3.3 Error analysis of the finite element approximation

First, we recall the notations used in this paper.
Let (V_h)_{h>0} be a family of finite element spaces indexed by a mesh size h over a domain D, and denote N_h the number of basis functions associated with the triangulation of D with mesh size h.
Let L denote a densely defined, self-adjoint, positive semi-definite linear differential operator of second order defined on a subset of H = L²(D), and let L_h denote its discretization over V_h. Let (λ_j)_{j∈ℕ} and (λ_{j,h})_{1≤j≤N_h} be the eigenvalues of L and L_h respectively, listed in non-decreasing order.
Let γ : ℝ_+ → ℝ.

The following assumptions are considered to derive an error bound between a Generalized random field defined by (6) and its finite element approximation defined by (10).

###### Assumption 3.1 (Growth of the eigenvalues of L).

There exist three constants α > 0, c_λ > 0 and C_λ > 0 such that:

 ∀j ∈ ℕ, λ_j > 0 ⇒ c_λ j^α ≤ λ_j ≤ C_λ j^α
###### Assumption 3.2 (Differentiability of γ).

γ is differentiable on (0, +∞), and there exist C_Deriv > 0 and a > 0 such that:

 ∀x > 0, |γ′(x)| ≤ C_Deriv x^{−a}
###### Assumption 3.3 (Asymptotic behavior of γ).

There exists a constant β > 0 such that γ satisfies γ(λ) = O(λ^{−β}) as λ → +∞, i.e.

 ∃C_γ > 0, ∃R_γ > 0, λ ≥ R_γ ⇒ |γ(λ)| ≤ C_γ λ^{−β}
###### Assumption 3.4 (Dimension of the finite element space).

There exist two constants C_FES > 0 and ~d > 0 such that:

 N_h = dim(V_h) = C_FES h^{−~d}
###### Assumption 3.5 (Mesh size).

The mesh size h shall satisfy:

 h ≤ ((1/C_FES) ⌈(R_γ/c_λ)^{1/α}⌉)^{−1/~d}

where C_FES, ~d, R_γ, c_λ and α are the constants defined in Assumptions 3.4, 3.3 and 3.1.
Consequently, following Assumptions 3.4 and 3.1, for all j ≥ N_h, λ_j ≥ R_γ.

###### Assumption 3.6 (Eigenvalues and eigenvectors of Lh).

There exist constants C_1 > 0, C_2 > 0 and exponents r > 0, s > 0 and q ≥ 0 such that the eigenvalues and the eigenvectors of the discretization L_h of the operator L onto the finite element space V_h associated with a triangulation of mesh size h satisfy:

 0 ≤ λ_{j,h} − λ_j ≤ C_1 h^r λ_j^q
 ‖e_{j,h} − e_j‖²_H ≤ C_2 h^{2s} λ_j^q

for all h and for all 1 ≤ j ≤ N_h.

Following the notations of the previous sections, let Z and Z_h be the random fields defined by:

 Z = γ(L)W = ∑_{j∈ℕ} γ(λ_j) ξ̃_j e_j (12)

and

 Z_h = γ(L_h)W_h = ∑_{j=1}^{N_h} γ(λ_{j,h}) ξ̃_j e_{j,h} (13)

The expected error, defined by:

 ‖Z − Z_h‖_{L²(Ω;H)} = √(E[‖Z − Z_h‖_H²]) (14)

is bounded using the following result.

###### Theorem 3.2.

If L, γ and V_h satisfy Assumptions 3.1 to 3.6, and if the growth exponent α of the eigenvalues defined in Assumption 3.1 satisfies:

 max(1/(2β), 1/(2a)) ≤ α ≤ min(2s/(dq), r/(dq))

where the constants are defined in Assumptions 3.1 to 3.6, then, for h sufficiently small (cf. Assumption 3.5), the error between the Generalized random field Z defined by (6) and its finite element approximation Z_h defined by (10) is bounded by:

 ‖Z − Z_h‖_{L²(Ω;H)} ≤ M h^{min(s − dqα/2, r − dqα, ~d(αβ − 1/2), ~d(αa − 1/2))} (15)

where M > 0 is a constant independent of h.

The proof of this theorem is provided in Appendix B and is an adaptation of the proof of Theorem 2.10 of (bolin2017numerical), which provides a bound for the same approximation error in the particular case where γ is of the form λ ↦ λ^{−β} and L is a positive definite operator.

###### Example 3.1.

In the particular case where L = −Δ, L is a strongly elliptic differential operator of order 2. Weyl's law (weyl1911asymptotische) gives an exponent for which Assumption 3.1 is satisfied:

 α = 2/d

Moreover, if we assume that the finite element spaces are quasi-uniform and composed of continuous piecewise polynomial functions of degree p, then Assumptions 3.4 and 3.6 are satisfied for the exponents (bolin2017numerical; strang1973analysis):

 r = 2(p − 1), s = min{p + 1, 2(p − 1)}, q = (p + 1)/2

## 4 Finite element simulations

Simulating the finite element approximation (10) of a field satisfying (6) comes down to simulating the weights z of its decomposition onto the basis functions of the finite element space. These weights are Gaussian, with zero mean and covariance matrix (11).

A straightforward method to generate admissible samples would consist in computing:

 z = C^{−1/2} V diag(γ(λ_{1,h}), …, γ(λ_{n,h})) ε

where ε is a vector with independent standard Gaussian components. But this method supposes that the matrix S has been diagonalized and that its eigenvalues and eigenvectors have been stored. Both operations being tremendously costly (O(n³) for the diagonalization and O(n²) for the storage), another method that doesn't involve them is highly preferable.

We propose instead to simulate the weights using the Chebyshev simulation algorithm presented in (pereira2018efficient). Indeed, this algorithm provides vector samples with covariance matrix (11) with a computational complexity that is linear in the number of non-zero entries of S (which is small since S is a sparse matrix), and an order of approximation fixed using a criterion that ensures that the statistical properties of the output are valid. Concerning storage needs, only a matrix as sparse as S needs to be stored, as the algorithm relies on matrix-vector multiplications. This algorithm is recalled below.
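The core of that algorithm, applying a polynomial in S to a vector using only matrix-vector products, can be sketched as follows. This is our own minimal illustration of the Chebyshev matrix-function trick, not the full algorithm of pereira2018efficient, which additionally chooses the expansion order automatically and handles the C^{−1/2} factor (e.g. through a sparse approximation of the mass matrix).

```python
import numpy as np

def chebyshev_matvec(S, gamma, v, order, lam_max):
    """Approximate gamma(S) @ v with a degree-`order` Chebyshev expansion of
    gamma on [0, lam_max], evaluated with the three-term recurrence so that
    only matrix-vector products with S are needed (S is never factorized)."""
    # Fit the Chebyshev series of gamma on [0, lam_max], mapped to [-1, 1]
    nodes = np.cos(np.pi * (np.arange(order + 1) + 0.5) / (order + 1))
    coeffs = np.polynomial.chebyshev.chebfit(
        nodes, gamma((nodes + 1.0) * lam_max / 2.0), order)
    # Affine map of S to T with spectrum in [-1, 1]
    matvec = lambda u: (2.0 / lam_max) * (S @ u) - u
    t_prev, t_curr = v, matvec(v)                 # T_0(T) v and T_1(T) v
    out = coeffs[0] * t_prev + coeffs[1] * t_curr
    for c in coeffs[2:]:                          # T_k = 2 T T_{k-1} - T_{k-2}
        t_prev, t_curr = t_curr, 2.0 * matvec(t_curr) - t_prev
        out = out + c * t_curr
    return out
```

Applying this routine to a standard Gaussian vector ε yields an approximation of γ(S)ε, from which a sample of the weights follows as z = C^{−1/2} γ(S) ε.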

## 5 Conclusion

In this work, we provided an explicit expression for the covariance matrix of the weights of the finite element approximation of a Generalized random field Z = γ(L)W, defined over a domain D consisting of either a bounded domain of ℝ^d or a d-dimensional smooth Riemannian manifold, where γ : ℝ_+ → ℝ, L is a second-order self-adjoint positive definite differential operator, and W is a Gaussian white noise. An error bound on this approximation was derived, and an algorithm for fast and efficient sampling of this field was presented.

## Appendix A Laplacian and Fourier transform

### A.1 Multivariate Fourier series and transform

Let g be a 2π-periodic function of ℝ^d, i.e. g is 2π-periodic with respect to each variable, and suppose that g ∈ L²([−π, π]^d). Then g can be represented as the limit in L²([−π, π]^d) of its Fourier series, defined by (OSBORNE2010115):

 S_F[g](x) = ∑_{j∈ℤ^d} c_j(g) e^{i j^T x}

where the coefficients c_j(g) are given by:

 c_j(g) = (1/(2π)^d) ∫_{[−π,π]^d} e^{−i j^T x} g(x) dx

The Fourier transform of g is then defined from its Fourier series as a train of impulses (corresponding to the Fourier transform of each term of the Fourier series):

 F[g](ω) = (2π)^d ∑_{j∈ℤ^d} c_j(g) δ_j(ω) (16)

where ω ∈ ℝ^d and δ_j is the Dirac impulse at j.

### A.2 Proof

Let's first consider the case D = [0, π]^d and denote L = −Δ. The eigenvalues and eigenfunctions of −Δ on D with Dirichlet boundary conditions are given, for j ∈ (ℕ*)^d, by (grebenkov2013geometrical):

 e_j(x) = (2/π)^{d/2} ∏_{k=1}^d sin(j_k x_k), λ_j = ∑_{k=1}^d j_k² (17)
###### Lemma A.1.

Let f ∈ L²([0, π]^d) and define f̃ : ℝ^d → ℝ by:

 f̃(x) = f(x), ∀x ∈ [0, π]^d
 f̃(x_1, …, −x_k, …, x_d) = −f̃(x_1, …, x_k, …, x_d), ∀x ∈ ℝ^d, 1 ≤ k ≤ d
 f̃(x + 2πn) = f̃(x), ∀x ∈ ℝ^d, n ∈ ℤ^d (18)

Then the coefficients of the Fourier series of f̃ satisfy:

 c_j(f̃) = (1/(2i)^d) ε(j) (f, e_{|j|})_H (19)

where |j| := (|j_1|, …, |j_d|), ε(j) := ∏_{k=1}^d sign(j_k), and e_{|j|} is an eigenfunction of −Δ on [0, π]^d with Dirichlet boundary conditions, as defined in (17). (Note: sign(j_k) = 1 if j_k > 0, and −1 otherwise.)
In particular, the Fourier series of f̃ (restricted to [0, π]^d) coincides (up to a normalization constant) with the development of f in the eigenbasis of the Laplacian.

###### Proof.
 c_j(f̃) = (1/(2π)^d) ∫_{[−π,π]^d} e^{−i j^T x} f̃(x) dx = (1/(2π)^d) ∫_{[−π,π]^{d−1}} e^{−i ∑_{k=1}^{d−1} j_k x_k} (∫_{[−π,π]} e^{−i j_d x_d} f̃(x) dx_d) dx_1 ⋯ dx_{d−1}

And,

 ∫_{[−π,π]} e^{−i j_d x_d} f̃(x) dx_d = ∫_{[−π,0]} e^{−i j_d x_d} f̃(x) dx_d + ∫_{[0,π]} e^{−i j_d x_d} f̃(x) dx_d
  = ∫_{[0,π]} (−e^{i j_d x_d} + e^{−i j_d x_d}) f̃(x) dx_d = −2i ∫_{[0,π]} sin(j_d x_d) f̃(x) dx_d

So,

 c_j(f̃) = (−2i/(2π)^d) ∫_{[−π,π]^{d−1}} e^{−i ∑_{k=1}^{d−1} j_k x_k} (∫_{[0,π]} sin(j_d x_d) f̃(x) dx_d) dx_1 ⋯ dx_{d−1}

By induction, the same process yields,

 c_j(f̃) = ((−2i)^d/(2π)^d) ∫_{[0,π]^d} ∏_{k=1}^d sin(j_k x_k) f̃(x) dx