# Global sensitivity analysis for models described by stochastic differential equations

Many mathematical models involve input parameters, which are not precisely known. Global sensitivity analysis aims to identify the parameters whose uncertainty has the largest impact on the variability of a quantity of interest. Among the statistical tools used to quantify the influence of each input variable on the quantity of interest are the Sobol' sensitivity indices. In this paper, we consider stochastic models described by stochastic differential equations (SDE). We focus the study on mean quantities, defined as the expectation with respect to the Wiener measure of a quantity of interest related to the solution of the SDE itself. Our approach is based on a Feynman-Kac representation of the quantity of interest, from which we derive a parametrized partial differential equation (PDE) representation of our initial problem. We then handle the uncertainty on the parametrized PDE using polynomial chaos expansion and a stochastic Galerkin projection.


## 1 Introduction

Many mathematical models (or numerical simulators) encountered in applied sciences involve numerous poorly-known input parameters. Moreover, stochastic models are often necessary for a realistic description of the physical phenomena. In deterministic models, the model response (output) is fully determined by the values of the input parameters. A specificity of nondeterministic simulators is that the same set of parameter values will lead to an ensemble of different model responses. It is critical for practitioners to evaluate the impact of this lack of knowledge of the parameters on the model response. This is the aim of sensitivity analysis. We consider in the present work the framework of global sensitivity analysis (GSA).

In the framework of GSA for deterministic models (see, e.g., Saltelli et al., 2000, and references therein), the uncertain vector of input parameters is modeled by some random vector, whose probability distribution accounts for the practitioner's belief about the input parameters uncertainty. This turns the model response into a random response, whose total variance can be decomposed into partial variances under the assumption that the input parameters are independent (this is the so-called Hoeffding decomposition, see van der Vaart, 1998). Each partial variance corresponds to the contribution of a given set of input parameters to the variance of the model response. By considering the ratio of each partial variance to the variance of the model response, we obtain a measure of importance for each set of input parameters, called the Sobol' index or sensitivity index Sobol (1993); the most influential sets of input parameters can then be identified and ranked according to their Sobol' indices.

Considering a similar stochastic formalism for modeling the uncertainty on the parameters of a nondeterministic model, we obtain a random response whose randomness has two sources: the randomness from the parameters, and the one due to the stochasticity inherent to the model itself. There are then two different settings: either one is interested in the full probability distribution of the output, or one is only concerned with the output averaged over the inherent randomness of the physical system. Note that in the following, we consider that the stochasticity of the simulator can be modeled by an additional parameter, independent of the initial uncertain parameters and regarded as a random seed variable, which can be set by the user. In other, more complex frameworks, the randomness of the process is uncontrollable in the sense that it is managed by the simulator itself, and classical sensitivity analysis tools, such as Monte Carlo algorithms or metamodeling, cannot be used.

In case we are only interested in the mean value relative to the intrinsic randomness of the model, a quite common procedure consists in replacing the mean quantity by the empirical mean, and then in performing usual GSA.

If one is interested in the full probability distribution of the output, both papers Iooss and Ribatet (2009) and Marrel et al. (2012) propose a strategy based on a joint modeling of the mean and dispersion of stochastic model outputs. From these joint models, it is possible to compute, for each uncertain parameter, the first-order Sobol' index and any Sobol' index associated to an interaction with a subset of uncertain parameters. It is also possible to compute a total Sobol' index which contains the part of the output variance explained by the intrinsic noise of the model, by itself or in interaction with the uncertain parameters. However, such an approach does not allow one to separate the effects enclosed in this total index. In Hart et al. (2017), the authors propose another point of view: they define Sobol' indices as random variables and study their statistical properties.

In the present paper, we focus our analysis on stochastic models described by stochastic differential equations (SDE). Such equations are frequently used for the simulation of complex systems, such as chemical kinetics (see, e.g., Gillespie and Petzold, 2003, and related works) or ocean models relying on stochastic parametrizations of unresolved scales (see, e.g., Cooper, 2017, and references therein), to cite only a few. Global sensitivity analysis for parametrized SDEs has been considered in Le Maître and Knio (2015); Jimenez et al. (2017). In these two papers, the authors assume that the Wiener noise and the uncertain parameters in the parametrized SDE are independent. We will also consider this assumption. They are then interested in the full probability distribution of the solution of the parametrized SDE. Their study relies on polynomial chaos (PC) expansion Wiener (1938); Cameron and Martin (1947); Ghanem and Spanos (1991); Le Maître and Knio (2010) to represent the uncertain parameters, leading through a Galerkin formalism to a hierarchy of stochastic differential equations governing the evolution of the modes in the PC expansion of the overall solution. The main advantage of the PC expansion of their uncertain stochastic process is that it allows a readable expression of the conditional expectations and variances.

Contrary to the setting of both aforementioned papers, we are not considering the full probability distribution of the solution. We are rather considering mean quantities, such as the expectation with respect to the Wiener measure of a quantity of interest related to the solution itself. We focus the study on two quantities: the first one is the expectation of the exit time from a regular bounded domain ; the second one is the expectation of a functional of the solution at a given time , on the event that the process has not yet left the domain. In this framework, we introduce a new methodology. We first use a Feynman-Kac representation of the quantity of interest (see, e.g., Karatzas and Shreve, 1991; Gobet, 2016). This leads to a parametrized partial differential equation (PDE) representation of our problem. We then handle the uncertainty on the parametrized PDE using polynomial chaos expansion and a stochastic Galerkin projection. The use of PC expansion for sensitivity analysis, or more generally uncertainty quantification, of parametrized PDEs is common, as it leads to analytical representations of the conditional expectations and variances, as already mentioned.

Our paper not only proposes a practical implementation of our new methodology but also studies its theoretical justification. It is organised as follows: in Section 2 we fix some notation, state our problem and give some first results on the integrability of our quantities of interest. We also give a brief overview of existing methods for addressing our problem (Subsection 2.3). In Section 3 we present our methodology based on Feynman-Kac formulae. Both theoretical and numerical issues are discussed. Section 4 presents some numerical experiments on a toy model. Section 5 is an appendix gathering the proofs of several technical results, mostly related to stochastic partial differential equations.

## 2 Notation, problem statement and first results

### 2.1 First notation and assumptions

Let , equipped with the Borel -field . Let be a probability measure on . Note that by construction the random variable , , defined on has law . In the sequel we will denote in short . We introduce the following assumption.

Assumption 1 (A1). Assume that the components of are independent so that and have product structures

$$\Xi=\prod_{j=1}^{m}\Xi_j,\qquad P_\xi(dz)=\prod_{j=1}^{m}P_{\xi_j}(dz_j).$$

Assume moreover that for any , the probability measure is characterized by its moments. One sufficient condition for this last assertion is that the moment generating function of has positive radius of convergence. Assumption (A1) ensures the existence of an orthonormal polynomial basis of .

The random variable will represent the uncertain parameter in the SDEs we will consider.

Now let denote the usual -dimensional Wiener space. That is to say is the space of continuous functions from to , equipped with the -field generated by the open sets for the metric defined on p. 60 of Karatzas and Shreve (1991). We recall that the Wiener measure is such that the canonical process , , is a -Brownian motion on .

We now set , and . A natural consequence of the product structure of the probability space is Assumption 2 below:

Assumption 2 (A2). On the random variables and are independent.

Consider now and . Denoting both the Euclidean norm on and on (i.e. for any ), we introduce the following assumptions.

Assumption 3 (A3). There exists a constant such that for all and all ,

$$|b(x,z)-b(y,z)|+|\sigma(x,z)-\sigma(y,z)|\le M\,|x-y|$$

and

$$|b(x,z)|^2+|\sigma(x,z)|^2\le M^2(1+|x|^2). \qquad (1)$$

We now consider the SDE with uncertain parameter ,

$$dX_t=b(X_t,\xi)\,dt+\sigma(X_t,\xi)\,dW_t,\quad t\ge 0,\qquad X_0=x. \qquad (2)$$

Thanks to (A3), and using Theorem 5.2.9 in Karatzas and Shreve (1991), we can claim that for every value of the SDE (2) has a unique strong solution . That is to say there exists defined on such that for a.e. , we have: , ,

$$X^i_t(\omega)=x^i+\int_0^t b^i\big(X_s(\omega),\xi(z)\big)\,ds+\sum_{j=1}^{d}\int_0^t \sigma^{ij}\big(X_s(\omega),\xi(z)\big)\,dW^j_s(\theta).$$
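In practice, paths of such an SDE are typically approximated by an Euler scheme, as done later in Section 4. The sketch below is a minimal Euler-Maruyama discretization for a fixed parameter value; the mean-reverting drift and constant diffusion are illustrative placeholders, not the paper's example.

```python
# Minimal Euler-Maruyama sketch for dX_t = b(X_t, xi) dt + sigma(X_t, xi) dW_t.
# The drift/diffusion used at the bottom are illustrative placeholders.
import numpy as np

def euler_maruyama(b, sigma, x0, xi, T=1.0, n_steps=1000, rng=None):
    """Simulate one path of the parametrized SDE for a fixed parameter xi."""
    rng = rng if rng is not None else np.random.default_rng()
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over dt
        x[k + 1] = x[k] + b(x[k], xi) * dt + sigma(x[k], xi) * dw
    return x

# Illustrative model: mean-reverting drift with uncertain rate xi.
b = lambda x, xi: -xi * x
sigma = lambda x, xi: 0.5
path = euler_maruyama(b, sigma, x0=1.0, xi=2.0, rng=np.random.default_rng(0))
print(path.shape)  # (1001,)
```

For the quantities studied below, one would additionally record the first step at which the discretized path leaves the domain, giving an approximate exit time.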

In fact, under (A3), such a process has all its moments as we state in the following lemma, whose proof is postponed to the Appendix.

###### Lemma 1

Assume (A2) and (A3). Let be the solution of (2) and . For any we have (in other words ).

###### Remark 1

Assumption 3, in order to get the existence of the solution and its square integrability, could be weakened. But it provides a reasonable setting to study the partial differential equation associated to our original problem (see Section 3).

###### Remark 2

Let measurable such that belongs to . Note that thanks to (A2),

$$E\big[F(X)\,\big|\,\xi\big]=\int_{C}F\big(X(\theta,\xi)\big)\,W(d\theta),$$

that is to say can be seen as the average with respect to the Brownian motion driving (2).

### 2.2 Problem statement

As already mentioned in the introduction, the -Brownian motion in our study models the physical system randomness. The quantities we are interested in are then mean values, with respect to the -Brownian motion, of outputs related to the SDE. These quantities depend on uncertain parameters, and we aim at determining which parameters are influential on them by performing a global sensitivity analysis. This section aims at properly defining the two quantities we are interested in (see Equations (3) and (6)).

We first need to introduce a set of assumptions and notation, as well as some preliminary results.

Let denote the diffusion term in Equation (2), we then define and introduce the following uniform ellipticity assumption:

Assumption 4 (A4). There exists such that

$$\forall\, x,y\in\mathbb{R}^d,\ \forall\, z\in\Xi,\qquad y^{T}a(x,z)\,y\ge\lambda\,|y|^2.$$

Let be an open bounded subset of . Recall that is solution of (2). We define the first exit time of the process from as:

$$\tau_D=\inf\{t\ge 0:\ X_t\notin D\}.$$

We have the following result (Lemma 2) whose proof is postponed to the Appendix.

###### Lemma 2

Assume (A2)-(A4). For any starting point of (2) we have .

In the sequel the starting point of solution of (2) may vary. We classically denote by the expectation with respect to computed under the initial condition .

The first quantity we are interested in is the averaged (with respect to ) exit time of the process from the domain (we assume , the closure of ). It is defined as:

$$U(x,\xi)=E_x\big[\tau_D\,\big|\,\xi\big]. \qquad (3)$$

To ensure that is in , we add a regularity assumption on the domain :

Assumption 5 (A5). The boundary is of class (see Gilbarg and Trudinger, 1983, p. 94 for a definition).

Then, it is possible to prove the following lemma:

###### Lemma 3

Assume (A2)-(A5). For any and almost every value of we have where is a finite constant.

###### Proof

Note that Lemma 3 is a corollary of Theorem 3.1 stated in Section 3. Its proof is postponed to the Appendix.

###### Remark 3

Even if the result of Theorem 3.1 is required for the proof of Lemma 3, we decided to postpone its statement to Section 3, devoted to the introduction of our new methodology, based on Feynman-Kac representations of the mean quantities we are interested in. Theorem 3.1 provides indeed a clear interpretation of as the solution of an elliptic stochastic partial differential equation.

Let now . Let be a function from to . We introduce the following set of assumptions on :

Assumption 6 (A6). The function is of class and satisfies the compatibility condition

$$f\equiv 0\quad\text{and}\quad \frac12\sum_{k,j=1}^{d}a_{kj}\,\partial^2_{x_kx_j}f+\sum_{j=1}^{d}b_j\,\partial_{x_j}f\equiv 0\ \text{ on }\partial D. \qquad (4)$$

Besides, for all ,

$$|f(x)-f(y)|+\sum_{k=1}^{d}\big|\partial_{x_k}f(x)-\partial_{x_k}f(y)\big|+\sum_{k,j=1}^{d}\big|\partial^2_{x_jx_k}f(x)-\partial^2_{x_jx_k}f(y)\big|\le M\,|x-y|$$

and

$$|f(x)|\le M(1+|x|^{\mu}), \qquad (5)$$

with constants .

From Lemma 1 we immediately get Lemma 4 just below.

###### Lemma 4

Assume (A2)-(A3) and (5). Then for any we have , for any .

It is then possible to define the second quantity we are interested in. The quantity is defined as the expectation (with respect to ) of , computed on the event :

$$V(t,x,\xi)=E_x\big[f(X_t)\,\mathbf{1}_{t\le\tau_D}\,\big|\,\xi\big]. \qquad (6)$$
###### Remark 4

Lemma 4 ensures that, under (A2)-(A3) and (A6), the quantity is in .

In the sequel, in order to lighten notation, we will sometimes drop any reference to or in the quantities of interest or .

Recall that we aim at determining which parameters, or set of parameters, are influential for , resp. . We propose to compute Sobol’ sensitivity indices, with respect to the components of  assumed to be independent. These indices are defined (see, e.g., Sobol, 1993) as:

$$S_I(U)=\frac{\mathbb{V}\big[E\big(U(\xi)\,\big|\,\xi_I\big)\big]}{\mathbb{V}\big(U(\xi)\big)},\qquad \forall\, I\subset\{1,\dots,m\},\quad \xi_I=\{\xi_\ell,\ \ell\in I\} \qquad (7)$$

and

$$S_I(V)=\frac{\mathbb{V}\big[E\big(V(\xi)\,\big|\,\xi_I\big)\big]}{\mathbb{V}\big(V(\xi)\big)},\qquad \forall\, I\subset\{1,\dots,m\},\quad \xi_I=\{\xi_\ell,\ \ell\in I\}. \qquad (8)$$
###### Remark 5

In the following, we assume that and to ensure and .

###### Remark 6

Assumption (A6) on is a technical assumption which may appear restrictive. In practice, if does not satisfy (A6), we approximate it by some satisfying (A6) and propose a bound to control the approximation error. E.g., for on , and for any , it is possible via convolution arguments to construct a function satisfying (A6), such that and on , with a compact subset of with (note that the existence of is ensured by Assumption (A5) on the regularity of the boundary). Then we have:

$$\big|E_x\big[f(X_t)\mathbf{1}_{t\le\tau_D}\,\big|\,\xi\big]-E_x\big[f_\varepsilon(X_t)\mathbf{1}_{t\le\tau_D}\,\big|\,\xi\big]\big|\le P_x\big(X_t\in D\setminus K_\varepsilon\,\big|\,\xi\big). \qquad (9)$$

But, modifying and outside so that they are bounded and Lipschitz on (with constants uniform in ), and using Theorems 4.5 and 5.1 in Friedman (1975), we get the following estimate for the transition probability density of , under assumptions (A3) and (A4):

 (10)

with depending on , , and (but not on ). We thus conclude that the right hand side in (9) behaves as .

### 2.3 State of the art

In this section, we first recall the crude Monte Carlo procedure for the estimation of and . We then present the approach introduced recently in Jimenez et al. (2017); Le Maître and Knio (2015) for the estimation of the , with , for .

#### 2.3.1 Crude Monte Carlo estimation procedure

The crude Monte Carlo estimation procedure is probably the most intuitive. However, its main issue is its computational cost. Let us detail below what we mean by crude Monte Carlo estimation procedure.

Let (resp. ) be the map such that (resp. ). Note that the deterministic maps and exist because, for fixed , one passes from a path of to a path of in a deterministic way, thanks to the Yamada-Watanabe causality principle (see Karatzas and Shreve, 1991).

In practice our Monte Carlo procedures will involve some Euler scheme to approximate the paths of , so that we will in fact work with approximations and of and (e.g., in Section 4).

In the sequel we focus on , as can be treated in a similar manner. The quantity is first approximated by

$$E\big[\hat{F}_U(W,\xi)\,\big|\,\xi\big].$$

Second, let ( is supposed to be large) and , , be independent samples of the Brownian motion . We approximate by the Monte Carlo mean

$$\overline{E}_M[\hat{F}_U](\xi):=\frac{1}{M}\sum_{k=1}^{M}\hat{F}_U(W^{(k)},\xi).$$

We then perform a sensitivity analysis of with respect to the components of . Sobol' indices can be estimated with a classical pick-freeze procedure (see, e.g., Sobol, 1993, 2001). Let . Let ( is supposed to be large). Let , , be independent samples of . We then construct samples , in the following manner:

$$\forall\,1\le l\le N,\ \forall\,1\le \ell\le m,\qquad \xi^{I,(l)}_{\ell}=\begin{cases}\xi^{B,(l)}_{\ell} & \text{if }\ell\in I,\\ \xi^{A,(l)}_{\ell} & \text{otherwise.}\end{cases}$$

Then the Sobol’ index is approximated by

$$S_I\big(\overline{E}_M[\hat{F}_U](\xi)\big)=\frac{\mathbb{V}\big[E\big(\overline{E}_M[\hat{F}_U](\xi)\,\big|\,\xi_I\big)\big]}{\mathbb{V}\big[\overline{E}_M[\hat{F}_U](\xi)\big]}=\frac{\mathbb{V}\big[\overline{E}_M[\hat{F}_U](\xi_I)\big]}{\mathbb{V}\big[\overline{E}_M[\hat{F}_U](\xi)\big]}$$

which itself is estimated by

$$\frac{\dfrac{1}{N}\sum_{l=1}^{N}\overline{E}_M[\hat{F}_U](\xi^{I,(l)})\;\overline{E}_M[\hat{F}_U](\xi^{B,(l)})-\overline{C}^{\,2}}{\dfrac{1}{N}\sum_{l=1}^{N}\dfrac{\big\{\overline{E}_M[\hat{F}_U](\xi^{I,(l)})\big\}^2+\big\{\overline{E}_M[\hat{F}_U](\xi^{B,(l)})\big\}^2}{2}-\overline{C}^{\,2}} \qquad (11)$$

with

$$\overline{C}=\frac{1}{N}\sum_{l=1}^{N}\frac{\overline{E}_M[\hat{F}_U](\xi^{I,(l)})+\overline{E}_M[\hat{F}_U](\xi^{B,(l)})}{2}.$$

This estimator was first introduced in Monod et al. (2006) and its asymptotic properties have been studied in Janon et al. (2014). The main advantage of pick-freeze estimators is that they only require square integrability of the model output. However, the number of model evaluations required can be huge. It is equal to , with the number of uncertain parameters.
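As a concrete illustration of the pick-freeze procedure, here is a minimal sketch in which the costly SDE-based model is replaced by a cheap analytic function `g`; the test model, input law and sample size are illustrative assumptions, not the paper's setting.

```python
# Pick-freeze estimation of a closed first-order Sobol index, in the
# spirit of estimator (11). `g` stands in for the Monte Carlo mean of
# the SDE functional; here it is a cheap additive test model.
import numpy as np

def pick_freeze_sobol(g, I, m, N, rng=None):
    """Estimate S_I for the inputs indexed by I using two samples A, B."""
    rng = rng if rng is not None else np.random.default_rng()
    xi_a = rng.uniform(-1, 1, size=(N, m))
    xi_b = rng.uniform(-1, 1, size=(N, m))
    xi_i = xi_b.copy()                    # keep the B sample on indices in I...
    keep = [j for j in range(m) if j not in I]
    xi_i[:, keep] = xi_a[:, keep]         # ...and take the A sample elsewhere
    yb, yi = g(xi_b), g(xi_i)
    c = np.mean((yb + yi) / 2)            # shared empirical mean
    num = np.mean(yb * yi) - c**2
    den = np.mean((yb**2 + yi**2) / 2) - c**2
    return num / den

# Additive test model 2*xi_1 + xi_2 with uniform inputs on [-1, 1]:
# the exact first-order index of xi_1 is (4/3)/(5/3) = 0.8.
g = lambda x: 2 * x[:, 0] + x[:, 1]
est = pick_freeze_sobol(g, I=[0], m=2, N=200_000, rng=np.random.default_rng(1))
print(round(est, 2))
```

For the SDE setting of this subsection, each evaluation of `g` would itself be a Monte Carlo average over M simulated Brownian paths, which is what makes the overall cost so high.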

#### 2.3.2 Hybrid Galerkin-Monte Carlo procedure

In the setting of parametrized SDEs, we can exploit regularity properties of the underlying model to propose alternatives to the crude Monte Carlo procedure. In this section, we present the procedure introduced in Le Maître and Knio (2015); Jimenez et al. (2017). It is based on a polynomial chaos analysis with stochastic expansion coefficients. In these papers, the uncertainty due to the Wiener noise and the one due to the uncertain parameters are handled at the same level for sensitivity analysis. Recall that the components of are assumed to be independent so that and have product structures

$$\Xi=\prod_{j=1}^{m}\Xi_j,\qquad P_\xi(dz)=\prod_{j=1}^{m}P_{\xi_j}(dz_j).$$

We endow with the inner product and associated norm, denoted and respectively:

$$\forall\, U,V\in L^2(\Xi,P_\xi),\quad \langle U,V\rangle=\int_\Xi U(z)V(z)\,P_\xi(dz);$$

$$\|U\|=\langle U,U\rangle^{1/2}<\infty\iff U\in L^2(\Xi,P_\xi).$$

Then, introducing an orthonormal basis for (e.g., any tensorized basis), any second-order random variable can be expanded as

$$U(\xi)=\sum_{q\in\mathbb{N}}[U_q]\,\Psi_q(\xi).$$
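As a one-dimensional illustration of such an expansion (not taken from the paper; the input law, basis and quadrature order are illustrative assumptions): for a single uniform variable on [-1, 1], the orthonormal basis is ψ_q = √(2q+1) P_q, with P_q the Legendre polynomials, and the coefficients [U_q] = E[U(ξ)ψ_q(ξ)] can be computed by Gauss-Legendre quadrature.

```python
# PC coefficients of U(xi) for xi ~ Uniform(-1, 1), via the orthonormal
# Legendre basis psi_q = sqrt(2q+1) * P_q and Gauss-Legendre quadrature.
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def pc_coefficients(u, order, n_quad=64):
    nodes, weights = leggauss(n_quad)  # integrates against dz on [-1, 1]
    coeffs = []
    for q in range(order + 1):
        psi_q = Legendre.basis(q)(nodes) * np.sqrt(2 * q + 1)
        # the uniform density on [-1, 1] contributes the factor 1/2
        coeffs.append(0.5 * np.sum(weights * u(nodes) * psi_q))
    return np.array(coeffs)

# U(xi) = xi^2 has mean 1/3, and only psi_0 and psi_2 contribute.
c = pc_coefficients(lambda z: z**2, order=4)
print(np.round(c, 6))
```

By orthonormality, the variance of U is then the sum of the squared coefficients for q ≥ 1, which is what makes this representation convenient for sensitivity analysis.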

The authors in Le Maître and Knio (2015); Jimenez et al. (2017) consider then the following tensor representation: for a.e. ,

$$X^i_t(\theta,z)=\sum_{q\in\mathbb{N}}\big[(X^i_t)_q\big](\theta)\,\Psi_q(z),\qquad i=1,\dots,d. \qquad (12)$$

Then, they consider a stochastic Galerkin projection (see, e.g., Ghanem and Spanos (1991)) to derive equations that enable the determination of the stochastic processes , . From Assumption (A2), we know that and are independent.

For computational purposes, the authors truncate this infinite sequence of coupled problems to an order : . They thus obtain a system of coupled stochastic differential equations, with initial conditions obtained by projecting the initial data onto the stochastic basis . The system is then solved using standard Monte Carlo simulation, introducing a time scheme and generating trajectories of the -Brownian motion.

Consider now the quantities of interest analyzed in Jimenez et al. (2017); Le Maître and Knio (2015):

$$\forall\,1\le i\le d,\qquad V^i(t,x,\xi)=E_x\big[X^i_t\,\big|\,\xi\big],$$

denoted in short , , and recall (12).

Then, due to the orthonormality of the stochastic basis , we get

$$\mathbb{V}\big(V^i(\xi)\big)\approx \mathbb{V}\big\{[(X^i_t)_0]\big\}+\sum_{q=1}^{P}E\big\{[(X^i_t)_q]^2\big\} \qquad (13)$$

and

$$\mathbb{V}\big[E\big(V^i(\xi)\,\big|\,\xi\big)\big]\approx\sum_{q=1}^{P}\big\{E[(X^i_t)_q]\big\}^2 \qquad (14)$$

for the estimation of Sobol’ indices defined by (8) with .

The application of such a procedure requires technical assumptions on the initial parametrized SDE, which we do not develop here for the sake of concision.
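For a scalar quantity of interest with deterministic PC coefficients, formulas of the type (13)-(14) turn Sobol' index computation into ratios of sums of squared coefficients. A minimal sketch follows; the coefficient values and the dependence patterns are illustrative, and `multi_indices[q]` is a hypothetical bookkeeping structure recording which inputs the basis function Ψ_q depends on, as known from the tensorized construction.

```python
# Closed Sobol index of a subset I from deterministic PC coefficients,
# using the orthonormality of the basis (illustrative values only).

def sobol_from_pc(coeffs, multi_indices, I):
    """coeffs[0] is the mean mode (Psi_0 = 1); the variance excludes it."""
    I = set(I)
    var = sum(c**2 for c in coeffs[1:])
    num = sum(c**2 for c, dep in zip(coeffs[1:], multi_indices[1:])
              if set(dep) <= I)
    return num / var

# Toy expansion in two inputs: U = 1 + 3*Psi_a + 1*Psi_b + 2*Psi_ab,
# where Psi_a depends on input 0, Psi_b on input 1, Psi_ab on both.
coeffs = [1.0, 3.0, 1.0, 2.0]
deps = [(), (0,), (1,), (0, 1)]
print(sobol_from_pc(coeffs, deps, I=[0]))  # 9/14
```

In the hybrid setting above the coefficients are themselves stochastic processes, so the squared coefficients in (13)-(14) are replaced by their second moments, estimated by Monte Carlo.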

## 3 A new methodology based on Feynman-Kac representation formulas

This section is devoted to the presentation of the new methodology we propose for performing sensitivity analysis of parametrized stochastic differential equations. Let us recall that we consider in our approach that our model is a stochastic model depending on unknown parameters , the stochasticity being induced by the -Brownian motion. We are then only interested in the mean value of a functional of the model output, where the mean is taken with respect to the -Brownian motion which drives the inherent randomness of the model. This point of view differs in nature from the one adopted in Le Maître and Knio (2015); Jimenez et al. (2017), in which the authors are interested in the sensitivity of the model output with respect to both the uncertain parameters and the noise inherent to the system. Our approach is based on Feynman-Kac representation formulas, which establish a link between parabolic or elliptic partial differential equations and stochastic processes. More precisely, Feynman-Kac formulas allow interpreting the quantities and as the solutions of some partial differential equations. The literature proposes a broad range of methods for the sensitivity analysis of parametrized partial differential equations (see, e.g., Nouy, 2017, and references therein). Hereafter, for each of the two quantities of interest under study in this paper, we focus on a method based on a stochastic Galerkin polynomial chaos approximation of the solution of the parametrized PDE we obtain from the Feynman-Kac representation.

In Subsection 3.1 below, we state in Theorems 3.1 and 3.2 the Feynman-Kac representations of our quantities of interest and . Then in Subsection 3.2 (resp. Subsection 3.3), we introduce our methodology based on stochastic Galerkin approximation for the estimation of Sobol’ indices associated to (resp. ).

### 3.1 Link between U and V and some stochastic partial differential equations

Theorem 3.1 below provides an interpretation of as the solution of an elliptic problem.

###### Theorem 3.1

Let us consider the following elliptic stochastic partial differential equation

$$\begin{cases}\dfrac12\sum_{k,j=1}^{d}a_{kj}(x,\xi)\,\partial^2_{x_kx_j}u(x,\xi)+\sum_{k=1}^{d}b_k(x,\xi)\,\partial_{x_k}u(x,\xi)=-1 & \forall x\in D,\\[2pt] u(x,\xi)=0 & \forall x\in\partial D.\end{cases} \qquad (15)$$
1. Assume (A3)-(A5). Then, for almost every value of , there exists a unique solution to (15) in .

2. If moreover (A2) is satisfied, then for a.e. , the quantity is solution of the partial differential equation defined by (15).

###### Proof (Proof of Theorem 3.1)

Point is a direct consequence of Theorem 6.14 in Gilbarg and Trudinger (1983). Taking into account Remark 2 and Lemma 2, follows simply from the elliptic Feynman-Kac formula given in Theorem 5.7.2 in Karatzas and Shreve (1991).
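A quick one-dimensional sanity check of this representation (with illustrative choices, not the paper's numerical setting): for standard Brownian motion, i.e. a ≡ 1 and b ≡ 0 on D = (0, 1), equation (15) reduces to (1/2)u'' = -1 with u(0) = u(1) = 0, whose solution, the mean exit time, is u(x) = x(1 - x). A finite-difference solve recovers it:

```python
# Finite-difference check that (1/2) u'' = -1, u(0) = u(1) = 0
# is solved by the Brownian mean exit time u(x) = x(1 - x).
import numpy as np

n = 99                              # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
# Tridiagonal central-difference discretization of (1/2) d^2/dx^2.
A = (0.5 / h**2) * (np.diag(-2.0 * np.ones(n))
                    + np.diag(np.ones(n - 1), 1)
                    + np.diag(np.ones(n - 1), -1))
u = np.linalg.solve(A, -np.ones(n))
err = np.max(np.abs(u - x * (1 - x)))
print(err)  # near machine precision: central differences are exact on quadratics
```

The same elliptic solve, repeated over a PC basis in ξ, is precisely what the stochastic Galerkin method of Subsection 3.2 organizes into one coupled system.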

We now turn to the interpretation of .

###### Theorem 3.2

Let us consider the following parabolic stochastic partial differential equation

$$\begin{cases}\partial_t v(t,x,\xi)=\dfrac12\sum_{k,j=1}^{d}a_{kj}(x,\xi)\,\partial^2_{x_kx_j}v(t,x,\xi)+\sum_{k=1}^{d}b_k(x,\xi)\,\partial_{x_k}v(t,x,\xi), & \forall (t,x)\in(0,T]\times D,\\[2pt] v(t,x,\xi)=0, & \forall (t,x)\in(0,T]\times\partial D,\\[2pt] v(0,x,\xi)=f(x), & \forall x\in\overline{D}.\end{cases} \qquad (16)$$

i) Assume (A3)-(A6). Then for almost every value of , there exists a unique solution to (16) in .

ii) Assume in addition (A2). Then the quantity , , is given by the solution at time  of the parabolic stochastic partial differential equation (16).

###### Proof (Proof of Theorem 3.2)

As satisfies the compatibility condition (4), one may extend it for any into a function of class , with uniformly Lipschitz derivatives (to order in time and up to order in space), satisfying on , and on and on (e.g., let us consider ). Point i) then follows from Theorem 5.14 in Lieberman (1996). Taking into account Remark 2, the result ii) follows simply from the parabolic Feynman-Kac formula given in Theorem 4.4.5 in Gobet (2016). Note however that this Feynman-Kac formula is given for parabolic problems in backward form, with a terminal condition instead of an initial one, as the authors deal with the general case of time-inhomogeneous coefficients. In the case of time-homogeneous coefficients, one may use a time-reversal argument, together with the time-homogeneous Markov property of , to obtain the Feynman-Kac formula for the parabolic problem in forward form with an initial condition.

At this stage of the paper, we are concerned with classical solutions of stochastic PDEs, in the sense of Definition 1 below. In Subsections 3.2 and 3.3, we will introduce the notion of weak solutions at the stochastic level.

###### Definition 1

A function (resp. ) is a classical solution of (15) (resp. (16)) if for a.e. value of the function (resp. ) is a solution to (15) (resp. (16)) in (resp. ).

###### Corollary 1

Assume (A3)-(A5) (resp. (A3)-(A6)). There exists a unique classical solution of the stochastic PDE (15) (resp. (16)). If in addition is satisfied, this classical solution is provided by (resp. ).

From the Feynman-Kac representations of our quantities of interest stated in Corollary 1, we propose in Subsections 3.2 and 3.3 to apply a methodology based on stochastic Galerkin polynomial chaos approximations to approximate Sobol' indices. For clarity of exposition, we find it convenient to present first the elliptic case (Subsection 3.2), and then to turn to the parabolic one, which is technically more cumbersome (Subsection 3.3).

### 3.2 Approximation of SI(U) using the elliptic stochastic PDE

We now want to construct a discretized approximation of . So far (Corollary 1) the function is seen as a classical solution of (15), but it can be shown to be a weak solution at the stochastic level of some equivalent PDE in divergence form, living in the space (this will be addressed in Theorem 3.3 below; the definition of will be recalled in Subsubsection 3.2.1). Therefore it is possible to perform a Galerkin projection on some finite dimensional subspace where with introduced in Subsection 2.3.2 and with some finite element basis of functions in (see Subsubsection 3.2.1 for details). That is to say will be discretized both with respect to the space variable and the uncertain parameter . Concerning weak solutions and Galerkin projection, we are inspired by the framework in Nouy (2009), Nouy (2017). In this article we provide a rigorous proof of the convergence of our Galerkin approximation toward the weak solution when the discretization parameters in and simultaneously converge (see in particular the proof of Lemma 5, postponed to the Appendix). Once we have obtained an approximation (also called reduced model or metamodel) of the weak solution, and therefore of , we deduce an analytical formula for approximating the Sobol' indices , . The arguments used to derive such an analytical formula are similar to the ones used to derive (13) and (14) in Subsection 2.3.2, and mainly rely on the orthonormality of the basis .

The subsection is organised as follows: in Subsubsection 3.2.1 we introduce some notation on the functional spaces we will use both in this subsection and in Subsection 3.3 about the parabolic case. In Subsubsection 3.2.2 we reformulate the elliptic stochastic PDE (15) in divergence form. This is needed in order to introduce in Subsubsection 3.2.3 the definition of weak solutions at the stochastic level of elliptic stochastic PDEs, and of their Galerkin approximation. The aforementioned Theorem 3.3 is stated in Subsubsection 3.2.4 and establishes the convergence of the Galerkin approximation toward the quantity of interest . In Subsubsection 3.2.5 it is explained how to use in order to compute approximated Sobol' indices of .

#### 3.2.1 Functional spaces notation

We denote by the usual Sobolev space of functions in vanishing on  (note that in the sequel the derivatives may be understood in the weak sense). The space is equipped with the norm . We denote the topological dual space of . Recall that we have the Gelfand triple (embeddings are continuous and dense).

We then introduce the tensor Hilbert spaces and . The space is isomorphic to and . The space is isomorphic to and . The space is equipped with the scalar product defined by

$$\forall\, v,w\in H,\qquad \langle v,w\rangle_H=E_\xi\!\left(\int_D vw\right),$$

and with the norm . The space is equipped with the norm defined as:

$$u\mapsto E_\xi\,\big\|u(\cdot,\xi)\big\|^2_{H^1(D)}=:\|u\|^2_V.$$

Denoting the topological dual of we have the new Gelfand triple . Note that . Note that will also denote the extension by continuity of the previous scalar product to the dual pairing .

We introduce the finite dimensional approximation space defined as the tensor space

$$V_{N,P}=V_N\otimes S_P. \qquad (17)$$

Recall that the space in (17) is defined as with introduced in Subsection 2.3.2. Recall also that the space is defined as with some finite element basis of functions in . More precisely, we choose , , such that . This can be done for example by using linear B-spline basis functions. Using interpolation properties, we get that is dense in .

We then have for any , and is dense in .

#### 3.2.2 Elliptic PDE in divergence form

For any vector field we denote the divergence of and, for any we denote the gradient of .

Assume that for a.e. value of the function is of class and define the coefficients , , the vector field , and the function on by

It is then clear that (15) is equivalent to

$$\begin{cases}-\nabla\cdot\big(\tilde{a}(x,\xi)\nabla u(x,\xi)\big)+\big(\nabla u(x,\xi)\big)^{T}\tilde{b}(x,\xi)=\tilde{f}(x,\xi) & \forall x\in D,\\ u(x,\xi)=0 & \forall x\in\partial D.\end{cases} \qquad (19)$$

#### 3.2.3 Weak solutions at the stochastic level to (19) and their Galerkin projection

We now introduce the bilinear form

$$\forall\, u,v\in V,\quad A(u,v)=E_\xi\!\left(\int_D\big(\nabla u(\cdot,\xi)\big)^{T}\tilde{a}(\cdot,\xi)\,\nabla v(\cdot,\xi)+\int_D\big(\nabla u(\cdot,\xi)\big)^{T}\tilde{b}(\cdot,\xi)\,v(\cdot,\xi)\right) \qquad (20)$$

and the linear form

$$\forall\, v\in V,\quad F(v)=E_\xi\!\left(\int_D\tilde{f}(\cdot,\xi)\,v(\cdot,\xi)\right)=\langle\tilde{f},v\rangle_H.$$
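To make the Galerkin construction concrete, here is a hypothetical one-dimensional sketch; all modeling choices (a diffusion coefficient ξ uniform on [1, 3], a finite-difference space discretization, the PC order) are illustrative assumptions, not the paper's example. It solves -(ξ u')' = 1 on (0, 1) with u(0) = u(1) = 0, whose exact solution is u(x, ξ) = x(1 - x)/(2ξ), by coupling a spatial stiffness matrix with the matrix G[p, q] = E[ξ ψ_p ψ_q] arising from the bilinear form:

```python
# One-dimensional stochastic Galerkin sketch: -(xi u')' = 1 on (0, 1),
# u(0) = u(1) = 0, with xi = 2 + z, z ~ Uniform(-1, 1). Illustrative only.
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

n, P = 49, 4                          # spatial grid points / PC order
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2    # stiffness for -d^2/dx^2

# Orthonormal Legendre basis psi_q = sqrt(2q+1) P_q, and the Galerkin
# coupling matrix G[p, q] = E[xi * psi_p * psi_q] by quadrature.
nodes, w = leggauss(32)
psi = np.array([Legendre.basis(q)(nodes) * np.sqrt(2 * q + 1)
                for q in range(P + 1)])
G = 0.5 * (psi * (w * (2 + nodes))) @ psi.T

# Coupled block system: sum_q G[p, q] K u_q = E[psi_p] * 1.
A = np.kron(G, K)
rhs = np.zeros((P + 1) * n)
rhs[:n] = 1.0                         # E[psi_0] = 1, E[psi_q] = 0 for q >= 1
U = np.linalg.solve(A, rhs).reshape(P + 1, n)

# Evaluate the metamodel at xi = 2.5 (z = 0.5) and compare to exact.
z0 = 0.5
u_z0 = sum(U[q] * Legendre.basis(q)(z0) * np.sqrt(2 * q + 1)
           for q in range(P + 1))
err = np.max(np.abs(u_z0 - x * (1 - x) / (2 * (2 + z0))))
print(f"max pointwise error at xi = 2.5: {err:.1e}")
```

Once the modes U[q] are available, variances of the metamodel follow from sums of squared coefficients, by orthonormality of the basis.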

We now introduce assumptions which ensure the continuity and the coercivity of the bilinear form .

Assumption 7 (A7). There is a constant s.t.

$$\forall\,1\le i,j\le d,\ \forall x\in\overline{D},\ \forall z\in\Xi,\qquad |\tilde{a}_{ij}(x,z)|\le\tilde{\Lambda}.$$

Assumption 8 (A8). There is a constant s.t.

$$\forall\, y\in\mathbb{R}^d,\ \forall x\in\overline{D},\ \forall z\in\Xi,\qquad y^{T}\tilde{a}(x,z)\,y\ge\tilde{\lambda}\,|y|^2.$$

Let us now recall that, thanks to Poincaré's inequalities, the norm is equivalent to on . More precisely

 (∫D|∇v|2)1/2≤||v||H1(D)≤C(d,|D