Global Sensitivity Analysis via Multi-Fidelity Polynomial Chaos Expansion

10/23/2017 ∙ by Pramudita Satria Palar, et al. ∙ 0

The presence of uncertainties is inevitable in engineering design and analysis, where failure to understand their effects might lead to the structural or functional failure of the system. The role of global sensitivity analysis in this aspect is to quantify and rank the effects of input random variables and their combinations on the variance of the random output. In problems where the use of expensive computer simulations is required, metamodels are widely used to speed up the process of global sensitivity analysis. In this paper, a multi-fidelity framework for global sensitivity analysis using polynomial chaos expansion (PCE) is presented. The goal is to accelerate the computation of Sobol sensitivity indices when the deterministic simulation is expensive and simulations with multiple levels of fidelity are available. This is especially useful in cases where a partial differential equation solver is utilized to solve engineering problems. The multi-fidelity PCE is constructed by combining a low-fidelity PCE with a correction PCE; the Sobol indices are then computed using this combined PCE. The PCE coefficients for both the low-fidelity and correction PCE are computed with the spectral projection technique and sparse grid integration. Several simulations are conducted to demonstrate the capability of the proposed method for sensitivity analysis. On the aerodynamic example, the multi-fidelity approach is able to obtain accurate values of the Sobol indices at 36.66% of the computational cost of the single-fidelity approach, with a nearly similar accuracy.


1 Introduction

Computational methods are widely deployed to predict the behavior and to compute the outputs of interest (e.g., stress distribution or force) of physical or engineering systems. In engineering design and analysis, computational partial differential equation (PDE) solvers are routinely employed to aid and enhance such processes. However, due to the existence of uncertainties in the system, deterministic analysis might fail to capture the true probabilistic nature of the system. An example of this is in the field of aerospace engineering, where manufacturing error and environmental uncertainties such as gust, turbulence, and changes in design conditions perturb the nominal condition of the aircraft. Understanding the effect of the uncertainties on the system is then a vital task in many engineering and scientific problems. Depending on their nature, uncertainties can be categorized as either aleatory or epistemic. While aleatory uncertainties are irreducible and inherent to the system, epistemic uncertainties are caused by our lack of knowledge of the system being investigated. Aleatory uncertainties can be conveniently expressed by probability theory using specific measures such as statistical moments.

Sensitivity analysis (SA) in probabilistic modeling plays an important role in understanding the impact of input random variables and their combinations on the model output. By performing SA, the contributions of the input random variables to the model output can be ranked, giving the analyst information about which variables are the most and least responsible. This is very useful in uncertainty quantification (UQ), where the dimensionality of the random input can be high and it is desirable to reduce the complexity beforehand. Moreover, sensitivity information can provide important knowledge about the physics of the system being investigated. In engineering analysis, numerical evaluation of PDEs is often required in order to obtain the necessary information for SA. In the fields of fluid and solid mechanics, the model responses are typically evaluated using computational fluid dynamics (CFD) or finite element methods (FEM), respectively.

Based on the range of the domain to be analyzed, SA is commonly classified into two groups saltelli2000sensitivity :

  • local sensitivity analysis, which studies the local impact of the input parameters on the model output and typically provides the partial derivatives of the output with respect to the input parameters.

  • global sensitivity analysis, which studies the uncertainties in the output due to changes of the input parameters over the entire domain.

Based on how it exploits the model response, SA can be further categorized into saltelli2000sensitivity :

  • regression-based methods, which fit a linear regression to the relationship between the model input and output. These methods work well for linear or nearly linear relationships but are inadequate for highly non-linear ones.

  • variance-based methods, which decompose the variance of the output into the sum of contributions of each input variable and their combinations. In the statistics literature, variance-based methods are widely known as ANOVA (ANalysis Of VAriance) efron1981jackknife .

In this paper, we are interested in global SA using variance-based methods since our main focus is on UQ, although the methodology itself can be used for other applications such as optimization. More specifically, this paper focuses on the Sobol sensitivity analysis method sobol1990sensitivity .

Monte Carlo simulation (MCS) is the most conventional method to perform SA sobol1990sensitivity . The advantage of MCS for SA is that it is straightforward and very simple to implement. However, it is infeasible to obtain accurate results using MCS if a computationally demanding simulation is used, so more advanced techniques have to be employed. SA using a cheap analytical replacement of the true function, or a metamodel, is one way to handle this. Metamodeling techniques such as radial basis functions park1991universal , Kriging krige1951statistical ; matheron1963principles ; cressie2015statistics ; sacks1989design ; gandin1965objective , probabilistic collocation loeven2007probabilistic , and polynomial chaos expansion (PCE) knio2001stochastic ; ghiocel2002stochastic ; xiu2002wiener ; ghanem2003stochastic can be employed for SA purposes.

In the closely related field of structural reliability analysis, metamodeling techniques have been widely employed to accelerate the computation of failure probability das2000cumulative ; papadrakakis2002reliability ; kaymaz2005application ; dubourg2011metamodel ; bichon2011efficient ; balesdent2013kriging . Compared to other metamodels, the Kriging metamodel is particularly useful for reliability analysis due to the direct availability of an error estimate that allows one to deploy an active learning strategy bichon2008efficient ; echard2011ak ; huang2016assessing (i.e., Kriging with adaptive sampling). Active learning works by adding more samples so as to increase the accuracy of the metamodel near the region of interest, that is, the limit state. Besides Kriging, PCE has also been employed for structural reliability analysis choi2004structural ; berveiller2006stochastic ; hu2011adaptive ; sudret2013response . Recently, a combination of PCE and Kriging within the framework of universal Kriging and active learning was developed to handle rare event estimation problems schobi2016rare .

Metamodels in UQ and SA are required to be globally accurate for a precise estimation of sensitivity indices. Although not in the context of global SA, the use of Kriging models with adaptive sampling to progressively refine the global accuracy has been explored by several researchers within the context of UQ bilionis2012multi ; shimoyama2013dynamic . Dwight and Han proposed an adaptive sampling technique for UQ that uses a criterion based on the product of the Kriging uncertainty and the probability density function of the random inputs dwight2009efficient , which is useful for cases with non-uniform distributions. Another alternative is to construct local surrogate models such as Dutch intrapolation rumpfkeil2011dynamic and multivariate interpolation and regression wang2010high ; boopathy2014unified in order to provide the error measure. A recent survey of adaptive sampling for Kriging and other surrogate models for various applications can be found in Liu et al. liu2017survey . Other than Kriging, polynomial-based metamodels, especially PCE, are highly useful and competitive with the Kriging model when the quantities of interest are the statistical moments and sensitivity indices.

The advantage of PCE for UQ and SA purposes is that the calculation of the Sobol sensitivity indices can be performed directly as a post-processing step sudret2008global ; crestaux2009polynomial . This is in contrast to other metamodeling techniques, where MCS still needs to be performed on the analytical metamodel to obtain the Sobol indices. The approximation quality of the metamodel can be further enhanced by including gradient information. Some methods that have been reformulated in order to include gradient information are gradient-enhanced Kriging morris1993bayesian ; liu2003development ; dwight2009efficient ; de2014improvements ; de2015exploiting , compressive sensing-based PCE peng2016polynomial , and the sparse grid method de2015gradient . However, it is worth noting that obtaining gradient information is a tedious task and is not always possible for many applications. In this paper, we assume that gradient information is not available.

The PCE method is based on homogeneous chaos expansion, which itself is based on the seminal work of Wiener wiener1938homogeneous . The earliest version of PCE computes the coefficients by Galerkin minimization. This intrusive scheme needs modification of the governing equations and the simulation code in order to perform the UQ procedure ghanem2003stochastic . The difficulty with the intrusive scheme is that the derivation of the modified governing equations might be very complex and highly time-consuming. In this paper, the method of interest is non-intrusive PCE ghanem2003stochastic ; ghiocel2002stochastic , which allows the use of legacy code since the simulation can be treated as a black box. Based on how the PCE coefficients are estimated, non-intrusive PCE can be classified into two categories:

  1. Spectral projection, which estimates the coefficients by exploiting the orthogonality of the polynomial and quadrature knio2001stochastic ; ghiocel2002stochastic .

  2. Regression, which estimates the coefficients by using least-square minimization choi2004polynomial ; berveiller2006stochastic .

Note that regression-based PCE is different from regression-based SA: the former uses a regression technique to build the PCE metamodel, while the latter uses a linear approximation within the SA framework.

PCE has been developed and implemented as a tool for SA since the work of Sudret sudret2008global . Further simulations are not necessary because the Sobol indices in PCE are obtained in the post-processing phase sudret2008global ; crestaux2009polynomial by exploiting the orthogonality of the polynomial bases. PCE is now a widely used approach in the field of UQ due to its strong mathematical basis eldred2009comparison . For regression-based PCE, sparse PCE based on least-angle regression is an efficient method to obtain the Sobol indices without the need to determine the polynomial bases a priori blatman2010efficient ; blatman2011adaptive . Another alternative for sparse PCE is a compressive sensing-based technique doostan2011non ; jakeman2015enhancing that employs orthogonal matching pursuit to scan for the most influential polynomial bases. For spectral-projection-based PCE, sparse grid techniques smolyak1960interpolation ; gerstner1998numerical ; bungartz2004sparse ; garcke2012sparse can be employed to reduce the number of collocation points in high dimensionality for SA purposes smolyak1963quadrature ; xiu2005high ; constantine2012sparse ; buzzard2012global ; buzzard2011variance . Recent advances in this field include the use of PCE for multivariate SA garcia2014global . Our interest in this paper is to obtain Sobol sensitivity indices for global SA by using PCE.

In some applications, simulations with multiple levels of fidelity are available. Fidelity here is defined as a measure of how accurately the model approximates reality. The fidelity level is typically categorized into high- and low-fidelity, where the categorization of a model as high- or low-fidelity depends on the case being investigated. The high-fidelity (HF) simulation is the most accurate but is typically more computationally demanding than the low-fidelity (LF) simulation. On the other hand, the LF simulation is less accurate, but many evaluations can be performed since it is cheaper to evaluate. For example, a PDE solver with a very fine mesh and one with a coarse mesh can be treated as the HF and LF simulation, respectively. An approach that utilizes simulations with multiple levels of accuracy is commonly called a multi-fidelity (MF) method. The common technique for applying MF simulations is to use LF simulations to capture the response trend and employ HF simulations to correct the response.

Multi-fidelity simulation is commonly used to aid and accelerate the optimization process. Co-Kriging kennedy2000predicting ; forrester2007multi and space mapping bandler2004space ; shah2015multi are two examples of surrogate-based methods that rely on MF simulations, mainly for optimization purposes. Multi-fidelity techniques for UQ have appeared in the literature only recently. Among the first is the multi-level Monte Carlo (MLMC) method, which was first developed in the context of solving stochastic PDE problems giles2008multilevel ; barth2011multi and was further developed to handle general black-box problems ng2014multifidelity . Multi-fidelity Kriging has also been applied for UQ de2015uncertainty . Recently, a non-intrusive MF-PCE-based technique to solve UQ problems was developed, which works by combining the LF and correction PCE into a single MF-PCE ng2012multifidelity . The coefficients of the LF and correction PCE can be obtained by employing spectral projection ng2012multifidelity or regression-based techniques palar2015decomposition ; palar2016multi . Another similar technique is MF stochastic collocation, which relies on Lagrange polynomial interpolation narayan2014stochastic . Although the use of MF simulations is common in the optimization and, recently, UQ literature, its application to SA is still scarce. One example of work on MF SA is the application of Co-Kriging for SA of a spherical tank under internal pressure le2014bayesian .

To the best of our knowledge, there is still no research that investigates the capability of MF-PCE for SA purposes. Such work is important to further extend the capability of MF-PCE to goals other than standard UQ. Furthermore, it has the potential to reduce the computational cost of performing SA, which is desirable in applications that involve expensive simulations. In this paper, we investigate and demonstrate the usefulness and capabilities of MF-PCE for performing SA when simulations with different levels of fidelity are available.

The rest of this paper is structured as follows: The explanation of the general framework of SA and the methodology of MF-PCE are given in Section 2. The numerical studies on artificial and real-world test problems we undertook are discussed in Section 3. Finally, conclusions are drawn and future work is discussed in Section 4.

2 Methodology

2.1 Global SA

2.1.1 Probabilistic formulation of the problem

We first consider a physical model with M input parameters, where the relationship between the input vector x and the scalar output y is defined by

y = f(x),   (1)

where f : R^M \to R and x = \{x_1, x_2, \ldots, x_M\}^T is the vector of the input variables. Because our interest in this paper is in UQ, the input vector is modeled as a vector of random variables X with joint probability density function (PDF) p(X). Due to the assumption of independent components of X, one gets p(X) = \prod_{i=1}^{M} p_i(X_i), where p_i is the marginal PDF of X_i. The random output is then defined by

Y = f(X).   (2)

2.1.2 Sobol decomposition

The model response f(x) can be decomposed into summands of its main effects and interactions (called the Sobol decomposition sobol1990sensitivity ), which reads

f(x) = f_0 + \sum_{i=1}^{M} f_i(x_i) + \sum_{1 \le i < j \le M} f_{ij}(x_i, x_j) + \ldots + f_{12 \ldots M}(x_1, \ldots, x_M),   (3)

where f_0 is a constant equal to the mean value of the function, defined by

f_0 = \int_{\Omega} f(x) \, p(x) \, dx,   (4)

where \Omega is the support of the PDF. A property of the Sobol decomposition is that its summands satisfy

\int f_{i_1 \ldots i_s}(x_{i_1}, \ldots, x_{i_s}) \, p_{i_k}(x_{i_k}) \, dx_{i_k} = 0 \quad \text{for } 1 \le k \le s.   (5)

As a consequence of Eq. 5, the summands except f_0 are mutually orthogonal in the following sense:

\int_{\Omega} f_{i_1 \ldots i_s} \, f_{j_1 \ldots j_t} \, p(x) \, dx = 0 \quad \text{for } \{i_1, \ldots, i_s\} \ne \{j_1, \ldots, j_t\}.   (6)

Based on this decomposition, we can then derive the formula for the sensitivity indices.

2.1.3 Sensitivity indices

As a consequence of the orthogonality property shown in Eq. 6, the total variance D of the model response can be decomposed as

D = \sum_{i=1}^{M} D_i + \sum_{1 \le i < j \le M} D_{ij} + \ldots + D_{12 \ldots M},   (7)

where the D_{i_1 \ldots i_s} are the partial variances, defined as

D_{i_1 \ldots i_s} = \int f_{i_1 \ldots i_s}^2(x_{i_1}, \ldots, x_{i_s}) \, p(x) \, dx.   (8)

We can then define the Sobol indices

S_{i_1 \ldots i_s} = \frac{D_{i_1 \ldots i_s}}{D},   (9)

where they satisfy the following relationship:

\sum_{i=1}^{M} S_i + \sum_{1 \le i < j \le M} S_{ij} + \ldots + S_{12 \ldots M} = 1.   (10)

This means that each index is a measure of sensitivity describing the portion of the total variance due to the uncertainties in the set of input parameters \{i_1, \ldots, i_s\}. With this formula, one can determine which variables are the most and least influential on the output. Furthermore, one can also investigate the interactions between variables.

It is sometimes necessary to rank the total contribution of each variable to the output. For this purpose, the total effect of each input parameter, denoted as the total sensitivity index homma1996importance , can be used. The total sensitivity indices are defined by

S_{T_i} = \sum_{\mathcal{I} \ni i} S_{\mathcal{I}},   (11)

where the sum is taken over all index sets \mathcal{I} that contain i. The total sensitivity index measures the total contribution of each variable to the output variance, including all variances caused by its interactions with any other input variables. For example, if we have M = 3 input variables,

S_{T_1} = S_1 + S_{12} + S_{13} + S_{123}.   (12)

To compute the value of the Sobol indices for a given problem, the most standard method is MCS, which offers simplicity and ease of implementation. However, this might come at a high computational cost, because MCS typically needs more than a thousand simulations to obtain accurate values of the Sobol indices xiu2003modeling . This is clearly infeasible if each deterministic simulation is time-demanding. To handle this, PCE can be used as a metamodel to accelerate the computation of the Sobol indices. More specifically, the MF scheme of non-intrusive PCE is considered as a tool to accelerate SA in this paper.
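To make the variance-based MCS approach concrete, the following sketch estimates first-order Sobol indices with a standard Saltelli-style pick-freeze estimator, applied to the Ishigami function (an analytical SA benchmark referenced later in this paper). This is our own illustration, not the authors' implementation; the sample size and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ishigami(x, a=7.0, b=0.1):
    """Ishigami benchmark; inputs uniform on [-pi, pi]^3."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

def sobol_first_order_mc(f, dim, n=2**16):
    """Pick-freeze MC estimator of the first-order Sobol indices S_i."""
    A = rng.uniform(-np.pi, np.pi, (n, dim))
    B = rng.uniform(-np.pi, np.pi, (n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))   # total variance D
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # freeze variable i from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

S = sobol_first_order_mc(ishigami, 3)
```

For the Ishigami function the analytical indices are roughly S_1 ≈ 0.31, S_2 ≈ 0.44, S_3 = 0, and the estimate above converges toward these values at the slow Monte Carlo rate, which is exactly the cost issue motivating the PCE approach below.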

2.2 Non-Intrusive PCE

2.2.1 PCE

PCE approximates the relationship between the stochastic response output and the random inputs with the following expansion:

f(x) \approx \sum_{i=0}^{\infty} \alpha_i \Psi_i(x),   (13)

where \alpha_i and \Psi_i are the PCE coefficients and the multivariate polynomials (each the product of one-dimensional orthogonal polynomials), respectively. For practical purposes the expansion has to be truncated, so the expression becomes

f(x) \approx \sum_{i=0}^{P-1} \alpha_i \Psi_i(x),   (14)

where P is the number of retained polynomial terms. The orthogonal polynomial sequence is a family of polynomials that are orthogonal to each other by the following relation:

\langle \Psi_i, \Psi_j \rangle = \int \Psi_i(x) \, \Psi_j(x) \, p(x) \, dx = \langle \Psi_i^2 \rangle \, \delta_{ij},   (15)

where \delta_{ij} = 1 if i = j and 0 if i \ne j. The type of the polynomials used depends on the given probability distribution. Table 1 shows the Askey scheme of some continuous hyper-geometric polynomials corresponding to their probability density functions askey1985some ; xiu2002wiener .

Distribution type Polynomial chaos Support
Gaussian Hermite chaos (-∞, ∞)
Gamma Laguerre chaos [0, ∞)
Beta Jacobi chaos [a, b]
Uniform Legendre chaos [a, b]
Table 1: Standard forms of continuous PDFs and their Askey scheme polynomials.

A tensor product expansion that includes all combinations of one-dimensional polynomial bases can be utilized in order to extend the PCE to a multi-variable approximation. The polynomial bases are then expanded using the tensor product operator of order p via

\Psi_i(x) = \prod_{k=1}^{M} \psi_{i_k}(x_k), \quad 0 \le i_k \le p,   (16)

where i = (i_1, \ldots, i_M) is the multi-index of one-dimensional polynomial degrees. The total number of polynomial terms for the tensor product expansion is

P = (p+1)^M.   (17)

A total-order expansion can be used as an alternative to the tensor product operator eldred2009comparison . For the total-order expansion of order p, the index set is defined by

A = \{ i \in \mathbb{N}^M : \|i\|_1 \le p \},   (18)

where \|i\|_1 = i_1 + \ldots + i_M. The total number of polynomial terms for the total-order expansion is

P = \frac{(M+p)!}{M! \, p!}.   (19)

After the PCE has been built, the next task is to compute the coefficients.
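The two truncation counts in Eqs. 17 and 19 are easy to check numerically; the snippet below is our own illustration, with M and p denoting the dimension and polynomial order as above.

```python
from math import comb

def n_terms_tensor(M, p):
    # tensor-product expansion: every 1-D degree up to p in every dimension
    return (p + 1) ** M

def n_terms_total_order(M, p):
    # total-order expansion: multi-indices with |i|_1 <= p
    return comb(M + p, p)

# e.g. M = 8 variables (the borehole case), order p = 3
print(n_terms_tensor(8, 3))       # 65536
print(n_terms_total_order(8, 3))  # 165
```

The comparison makes the curse of dimensionality of the tensor-product rule explicit: for M = 8 and p = 3 the total-order basis is almost 400 times smaller.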

2.2.2 Spectral-projection based PCE

In this paper, we focus on the spectral projection method, which calculates the coefficients based on the orthogonality of the bases. The coefficients are computed via

\alpha_i = \frac{\langle f, \Psi_i \rangle}{\langle \Psi_i^2 \rangle} = \frac{1}{\langle \Psi_i^2 \rangle} \int f(x) \, \Psi_i(x) \, p(x) \, dx.   (20)

The main part of the work is the calculation of the multi-dimensional integral in the numerator. For this purpose, tensor-product quadrature can be efficiently employed to perform the integration in low-dimensional problems. However, tensor-product quadrature might produce a very large number of collocation points in high-dimensional problems. Sparse grid quadrature is an important alternative to tensor-product quadrature for performing this integration; in this paper, we use sparse grid integration to compute the numerator in all examples. By using a sparse grid rule, the number of collocation points is reduced compared to the tensor-product rule, thus making it feasible to use spectral-projection-based PCE in high dimension. Moreover, if a sparse grid is employed, it is necessary to use polynomial bases built from the sum of tensor product expansions to avoid numerical noise in the coefficients of the higher-order terms constantine2012sparse ; we used this approach in all examples.

We define the isotropic sparse grid quadrature at level l for an M-dimensional problem smolyak1960interpolation ; bungartz2004sparse ; garcke2012sparse ; griebel1990combination by

A(l, M) = \sum_{l+1 \le \|\mathbf{i}\|_1 \le l+M} (-1)^{l+M-\|\mathbf{i}\|_1} \binom{M-1}{l+M-\|\mathbf{i}\|_1} \, (U^{i_1} \otimes \cdots \otimes U^{i_M}),   (21)

where the U^{i_k} are the one-dimensional quadrature formulas entering the sparse grid equation. The form of these formulas depends on the type of polynomial used in the PCE. For example, Gauss-Legendre and Gauss-Hermite quadrature are used when Legendre and Hermite polynomials are employed in the PCE approximation, respectively.

Another way to express the sparse grid is in terms of difference formulas gerstner1998numerical :

A(l, M) = \sum_{\|\mathbf{i}\|_1 \le l+M} (\Delta^{i_1} \otimes \cdots \otimes \Delta^{i_M}),   (22)

where \Delta^{k} = U^{k} - U^{k-1} and U^{0} = 0.
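As a minimal one-dimensional illustration of the spectral projection in Eq. 20 (without the sparse grid machinery), the sketch below projects f(x) = x^3 + x onto Legendre polynomials for a uniform input on [-1, 1], using Gauss-Legendre quadrature for the integral; with four nodes the quadrature is exact for this polynomial integrand. This is our own toy example, not the paper's implementation.

```python
import numpy as np
from numpy.polynomial import legendre as L

def pce_coeffs_projection(f, degree):
    """1-D spectral projection onto Legendre polynomials for X ~ U(-1, 1)."""
    # Gauss-Legendre nodes/weights; rescale weights to the uniform density 1/2
    x, w = L.leggauss(degree + 1)
    w = w / 2.0
    coeffs = np.empty(degree + 1)
    for i in range(degree + 1):
        Pi = L.Legendre.basis(i)(x)        # P_i evaluated at the nodes
        norm = 1.0 / (2 * i + 1)           # <P_i^2> under the uniform density
        coeffs[i] = np.sum(w * f(x) * Pi) / norm
    return coeffs

c = pce_coeffs_projection(lambda x: x**3 + x, 3)
# exactly recovers x^3 + x = 1.6 P_1(x) + 0.4 P_3(x)
```

The same projection formula carries over to the multi-dimensional case, with the one-dimensional quadrature replaced by the sparse grid rule of Eqs. 21-22.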

2.2.3 Multi-fidelity extension

The PCE can be extended into an MF version by employing a combination of the LF and correction PCE ng2012multifidelity . The correction PCE, which serves as a corrector for the LF-PCE, is built using the difference between the LF and HF functions at selected sample locations. The combination can be of additive or multiplicative form ng2012multifidelity . In this work, we only use the additive form defined by

f_{hf}(x) \approx f_{lf}(x) + \delta(x),   (23)

so that

\hat{f}_{mf}(x) = \hat{f}_{lf}(x) + \hat{\delta}(x),   (24)

where \delta(x) = f_{hf}(x) - f_{lf}(x) is the difference between the LF function (f_{lf}) and the HF function (f_{hf}), which serves as a correction term.

In integrating the sparse grid with PCE, we first define \hat{f}^{l} as the PCE of f at sparse grid level l with dimension M. Within the MF framework, the HF-PCE can be constructed by

\hat{f}_{mf}(x) = \hat{f}_{lf}^{l+s}(x) + \hat{\delta}^{l}(x),   (25)

where \hat{f}_{lf}^{l+s} and \hat{\delta}^{l} are the LF and correction PCE, respectively. Here, s is defined as the sparse grid level offset between the LF and correction PCE.

If \alpha_i and \beta_i are the coefficients of the LF and correction PCE, respectively, and A(l, M) is the set of multi-indices of the M-dimensional PCE on sparse grid level l, we can then formulate the following:

\hat{f}_{lf}^{l+s}(x) = \sum_{i \in A(l+s, M)} \alpha_i \Psi_i(x),   (26)
\hat{\delta}^{l}(x) = \sum_{i \in A(l, M)} \beta_i \Psi_i(x).   (27)

The MF polynomial expansion can then be expressed as

\hat{f}_{mf}(x) = \sum_{i \in A(l, M)} (\alpha_i + \beta_i) \Psi_i(x) + \sum_{i \in A(l+s, M) \setminus A(l, M)} \alpha_i \Psi_i(x),   (28)

where the \Psi_i with i \in A(l, M) are the common bases of the LF and correction expansions.

After the PCE has been built and the coefficients computed, the statistical moments can be obtained directly from the PCE coefficients. The sparse grid growth rule used in this paper is nested, so that the nodes of a lower-level grid are a subset of those of a higher-level grid. Figure 1 shows examples of sparse grids with various values of l and s for MF-PCE purposes. It can be seen in this figure that the HF samples (lower-level sparse grid) are a subset of the LF samples (higher-level sparse grid), which makes it possible to correct the LF values at the intersecting samples.

Figure 1: Examples of sparse grids for MF-PCE, where blue filled and red circles are HF and LF samples, respectively. The HF sparse grid is always on a lower sparse grid level than that of the LF one.
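In implementation terms, the additive combination of Eq. 28 amounts to adding the LF and correction coefficients on their common bases and keeping the remaining LF coefficients unchanged. A small sketch of this merge (the multi-indices and coefficient values are invented for illustration):

```python
import numpy as np

def combine_mf_coeffs(alpha_lf, idx_lf, alpha_corr, idx_corr):
    """Additive MF-PCE: sum LF and correction coefficients over the union of bases.

    idx_* are lists of multi-index tuples identifying each basis polynomial.
    """
    combined = {}
    for a, idx in zip(alpha_lf, idx_lf):
        combined[idx] = combined.get(idx, 0.0) + a
    for b, idx in zip(alpha_corr, idx_corr):
        combined[idx] = combined.get(idx, 0.0) + b
    idx_mf = sorted(combined)
    return np.array([combined[i] for i in idx_mf]), idx_mf

# the LF-PCE (higher sparse grid level) carries more terms than the correction PCE
a_mf, i_mf = combine_mf_coeffs(
    [1.0, 0.5, 0.1], [(0, 0), (1, 0), (0, 1)],   # LF coefficients alpha_i
    [0.2, -0.1],     [(0, 0), (1, 0)])           # correction coefficients beta_i
```

On the shared bases (0, 0) and (1, 0) the coefficients are summed; the extra LF term (0, 1) passes through unchanged, exactly as in Eq. 28.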

2.2.4 Calculation of statistical moments

The mean for the standard single-fidelity HF-PCE is simply the first PCE coefficient:

\mu = \alpha_0.   (29)

The variance can be obtained as follows:

\sigma^2 = \sum_{i=1}^{P-1} \alpha_i^2 \langle \Psi_i^2 \rangle,   (30)

where P is the size of the polynomial basis.

For the MF-PCE expression shown in Eq. 28, the moments are calculated via the following formulations:

\mu_{mf} = \alpha_0 + \beta_0,   (31)
\sigma_{mf}^2 = \sum_{i \in A(l, M) \setminus \{0\}} (\alpha_i + \beta_i)^2 \langle \Psi_i^2 \rangle + \sum_{i \in A(l+s, M) \setminus A(l, M)} \alpha_i^2 \langle \Psi_i^2 \rangle.   (32)

Furthermore, as the core of this work, the PCE coefficients are used in the computation of the PCE-based Sobol indices.

2.2.5 PCE-based Sobol indices

Obtaining the PCE-based Sobol indices is straightforward when the PCE coefficients for a given expansion have already been obtained sudret2008global . This can be done by first deriving the Sobol decomposition of the PCE approximation of f. First, we define A_{i_1 \ldots i_s} as the set of multi-indices in which only the components i_1, \ldots, i_s are nonzero:

A_{i_1 \ldots i_s} = \{ i : i_k > 0 \text{ for } k \in (i_1, \ldots, i_s), \; i_k = 0 \text{ for } k \notin (i_1, \ldots, i_s) \}.   (33)

By gathering the polynomials according to the parameters they depend on, the Sobol decomposition of the PCE can then be written as

\hat{f}(x) = \alpha_0 + \sum_{s=1}^{M} \; \sum_{i_1 < \ldots < i_s} \; \sum_{i \in A_{i_1 \ldots i_s}} \alpha_i \Psi_i(x).   (34)

Based on the above decomposition, the PCE-based Sobol indices can then be simply defined as

S_{i_1 \ldots i_s} = \frac{1}{\sigma^2} \sum_{i \in A_{i_1 \ldots i_s}} \alpha_i^2 \langle \Psi_i^2 \rangle.   (35)

It is clear from the above equation that the PCE-based Sobol indices can be obtained directly once the PCE coefficients are available. Using a similar procedure, we can easily derive the expression for the PCE-based total sensitivity indices, which can be written as

S_{T_j} = \sum_{(i_1, \ldots, i_s) \ni j} S_{i_1 \ldots i_s},   (36)

which can be simplified into

S_{T_j} = \frac{1}{\sigma^2} \sum_{i \in A_j^{T}} \alpha_i^2 \langle \Psi_i^2 \rangle,   (37)

where A_j^{T} is the set of all multi-indices with a nonzero j-th component, formally defined as

A_j^{T} = \{ i : i_j > 0 \}.   (38)

In MF-PCE, the polynomial terms and coefficients involved in the Sobol indices computation are built from the LF and correction expansions shown in Eq. 28. The process starts by building the sparse grid and the corresponding polynomial bases for the LF-PCE, followed by the correction PCE. After the two PCEs are combined into a single MF-PCE, post-processing can be performed directly to calculate the statistical moments and the Sobol indices.
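The post-processing in Eqs. 35 and 37 can be sketched in a few lines, assuming for simplicity an orthonormal basis so that each partial variance is just the squared coefficient. The multi-indices and coefficients in the usage example are invented for illustration, not taken from the paper.

```python
import numpy as np

def pce_sobol(coeffs, multi_indices):
    """First-order and total Sobol indices from PCE coefficients.

    Assumes an orthonormal basis (so <Psi_i^2> = 1 and partial variances are c_i^2).
    multi_indices[k] is the tuple of 1-D polynomial degrees of basis k.
    """
    coeffs = np.asarray(coeffs, dtype=float)
    mi = np.asarray(multi_indices)
    nonzero = mi.sum(axis=1) > 0
    D = np.sum(coeffs[nonzero] ** 2)              # total variance, Eq. 30
    M = mi.shape[1]
    S, S_tot = np.zeros(M), np.zeros(M)
    for j in range(M):
        # Eq. 35: bases depending on variable j only
        only_j = (mi[:, j] > 0) & (mi.sum(axis=1) == mi[:, j])
        S[j] = np.sum(coeffs[only_j] ** 2) / D
        # Eq. 37: all bases with a nonzero j-th component
        S_tot[j] = np.sum(coeffs[mi[:, j] > 0] ** 2) / D
    return S, S_tot

# toy 2-D expansion: f = 2 + 1.6 Psi_(1,0) + 0.4 Psi_(0,1) + 0.3 Psi_(1,1)
S, St = pce_sobol([2.0, 1.6, 0.4, 0.3], [(0, 0), (1, 0), (0, 1), (1, 1)])
```

For a non-orthonormal basis, each squared coefficient would simply be weighted by its polynomial norm, as in Eqs. 35 and 37.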

3 Results

We first study the capability of MF-PCE on algebraic cases where "high"- and "low"-fidelity models are available. In the algebraic cases considered, the terms "high"- and "low"-fidelity do not mean that the models are expensive and cheap in the real sense. The LF model is artificially constructed so that it represents the HF model but with a discrepancy, which is measured by the correlation coefficient r and the mean absolute relative error (MARE) between the LF and HF functions. These two statistical measures can be used to perform a preliminary analysis of whether the LF function is similar to the HF one or not toal2015some . The correlation r and MARE_{lf} are computed as

r = \frac{ \sum_{k=1}^{n} (y_{hf,k} - \bar{y}_{hf})(y_{lf,k} - \bar{y}_{lf}) }{ \sqrt{ \sum_{k=1}^{n} (y_{hf,k} - \bar{y}_{hf})^2 \sum_{k=1}^{n} (y_{lf,k} - \bar{y}_{lf})^2 } }   (39)

and

\mathrm{MARE}_{lf} = \frac{1}{n} \sum_{k=1}^{n} \left| \frac{ y_{hf,k} - y_{lf,k} }{ y_{hf,k} } \right|,   (40)

respectively, where y_{hf} and y_{lf} are sets of observations of the HF and LF data for identical inputs, with the bar denoting the mean of these sets.

Analysis of the decay of the PCE coefficients is a proper way to analyze whether the LF function can improve the capability of MF-PCE in approximating the HF function ng2012multifidelity . The LF function can assist MF-PCE if the decay of the correction PCE coefficients is faster than that of the HF ones ng2012multifidelity , which indicates that the correction function is less complex than the HF function. Basically, this means that, for the same level of sparse grid, the combination of the correction and LF-PCE captures more relevant information about the HF function than HF-PCE does. As an example, consider a case where the HF and LF functions are perfectly correlated, with r equal to one, but with an offset in magnitude. This means that the correction function is just a constant, which can be approximated by simply a zeroth-order polynomial. Assuming that the LF function is much cheaper than the HF function, so that we can construct a high-level sparse grid for the LF-PCE, one can build a highly accurate MF-PCE by combining this LF-PCE with the constant correction function. In this paper, we plot the coefficients of the correction, LF-, and HF-PCE to perform this analysis. To analyze the decay of the PCE coefficients, the absolute values of the coefficients are first sorted and then shown in a single plot.

We measure the error of the total and of all Sobol sensitivity indices, expressed as

e_{S_T} = \frac{1}{M} \sum_{i=1}^{M} \left| S_{T_i} - S_{T_i}^{ref} \right|   (41)

and

e_{S} = \frac{1}{n_S} \sum \left| S - S^{ref} \right|,   (42)

respectively, where n_S is the number of Sobol indices for a given dimension. Here, S^{ref} and S_{T}^{ref} are the reference values for the single and total Sobol indices, respectively. In the implementation, the values of S^{ref} and S_{T}^{ref} are determined by analytical calculation (such as for the Ishigami function) or by PCE with a sparse grid of sufficiently high level.

Moreover, in order to assess the quality of the PCE approximation, the R^2 and MARE of the prediction with respect to the true function are used, defined by

R^2 = 1 - \frac{ \sum_{k=1}^{n} (y_k - \hat{y}_k)^2 }{ \sum_{k=1}^{n} (y_k - \bar{y})^2 }   (43)

and

\mathrm{MARE} = \frac{1}{n} \sum_{k=1}^{n} \left| \frac{ y_k - \hat{y}_k }{ y_k } \right|,   (44)

respectively, where \hat{y} is the output of the PCE prediction. Notice that R^2 and MARE without any subscript denote the prediction error, and not the discrepancy between the LF and HF functions.
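The discrepancy measures of Eqs. 39-40 are simple to compute; the helper below is our own sketch, and the data in the usage example are hypothetical.

```python
import numpy as np

def correlation_and_mare(y_hf, y_lf):
    """Pearson correlation r (Eq. 39) and MARE (Eq. 40) between two output sets."""
    y_hf = np.asarray(y_hf, dtype=float)
    y_lf = np.asarray(y_lf, dtype=float)
    r = np.corrcoef(y_hf, y_lf)[0, 1]
    mare = np.mean(np.abs((y_hf - y_lf) / y_hf))
    return r, mare

# hypothetical HF/LF observations at identical inputs: a pure offset
r, mare = correlation_and_mare([1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0])
```

Note how a constant offset gives r = 1 but a nonzero MARE, which is exactly the situation (perfect correlation, shifted magnitude) discussed above where a constant correction PCE suffices.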

In the following examples, the sparse grid level for HF-PCE is written as "SG-l", while for MF-PCE it is written as "SG-(l+s)-l". The convergence of the errors is plotted with the horizontal axis showing the number of function evaluations n_F. The calculation of n_F does not distinguish between HF and LF function evaluations when HF- and LF-PCE are considered, since it does not make sense to do so in algebraic problems. However, when calculating n_F for MF-PCE on algebraic functions, the cost of the LF function is not counted, in order to make a fair comparison between HF- and MF-PCE from the algorithmic point of view. Furthermore, we also compare the performance of HF- and MF-PCE in estimating the Sobol indices when a hypothetical cost of the LF simulation is taken into account. This hypothetical cost is expressed as the ratio between the LF and HF evaluation cost (denoted as RT). The purpose is to evaluate the capability and usefulness of MF-PCE in accelerating the computation of Sobol indices when it is applied in a real-world setting, where the cost of the LF simulation is usually not negligible. Here, we use four values for the hypothetical cost ratio RT. The total simulation cost of MF-PCE is then the sum of the HF and LF evaluation costs, in units equivalent to one HF function evaluation.

3.1 Borehole function

The first example is the borehole function, an eight-dimensional function that models water flow through a borehole morris1993bayesian . Mathematically it is expressed as

f_{hf} = \frac{ 2 \pi T_u (H_u - H_l) }{ \ln(r/r_w) \left[ 1 + \dfrac{2 L T_u}{\ln(r/r_w) \, r_w^2 \, K_w} + \dfrac{T_u}{T_l} \right] },   (45)

where the various parameters are as defined in Table 2.

The LF borehole function xiong2013sequential is expressed as

f_{lf} = \frac{ 5 T_u (H_u - H_l) }{ \ln(r/r_w) \left[ 1.5 + \dfrac{2 L T_u}{\ln(r/r_w) \, r_w^2 \, K_w} + \dfrac{T_u}{T_l} \right] }.   (46)

For SA purpose, the random variables are all uniformly distributed as shown in Table 2.

Random variable Definition Probability distribution
r_w radius of borehole (m) Uniform [0.05, 0.15]
r radius of influence (m) Uniform [100, 50000]
T_u transmissivity of upper aquifer (m²/yr) Uniform [63700, 115600]
H_u potentiometric head of upper aquifer (m) Uniform [990, 1100]
T_l transmissivity of lower aquifer (m²/yr) Uniform [63.1, 116]
H_l potentiometric head of lower aquifer (m) Uniform [700, 820]
L length of borehole (m) Uniform [1120, 1680]
K_w hydraulic conductivity of borehole (m/yr) Uniform [9855, 12045]
Table 2: Random variable distributions for the borehole test case.
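The two fidelity levels can be implemented directly from Eqs. 45 and 46. The transcription below follows the commonly cited forms of the borehole test functions and should be checked against the original references; the evaluation point is the mid-range of the Table 2 distributions (our own choice).

```python
import numpy as np

def borehole_hf(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """High-fidelity borehole water-flow model (Eq. 45, morris1993bayesian)."""
    lg = np.log(r / rw)
    return 2 * np.pi * Tu * (Hu - Hl) / (
        lg * (1.0 + 2 * L * Tu / (lg * rw**2 * Kw) + Tu / Tl))

def borehole_lf(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """Low-fidelity variant (Eq. 46, xiong2013sequential): altered constants, same trend."""
    lg = np.log(r / rw)
    return 5 * Tu * (Hu - Hl) / (
        lg * (1.5 + 2 * L * Tu / (lg * rw**2 * Kw) + Tu / Tl))

# mid-range point of the distributions in Table 2
x = dict(rw=0.10, r=25050.0, Tu=89650.0, Hu=1045.0,
         Tl=89.55, Hl=760.0, L=1400.0, Kw=10950.0)
y_hf, y_lf = borehole_hf(**x), borehole_lf(**x)
```

Evaluating both functions over a common sample of the Table 2 inputs is also how the r and MARE discrepancy measures reported below can be reproduced.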

The LF borehole function exhibits an almost perfect correlation with the HF one (r close to one), which means that the trend of the LF response is similar to that of the HF one, but with an offset in the response values (MARE_{lf} = 0.204). To acquire monotonic convergence of the Sobol indices error, the reference values for the error calculation are obtained using a level-5 HF sparse grid; the first-order and total sensitivity indices computed using this high-level sparse grid are shown in Table 3 (note that analytical Sobol indices are not available for the borehole function). Since the total Sobol indices are not equal to the first-order ones, the borehole function exhibits interactions between the variables that need to be captured by a sparse grid level higher than one. The errors of the Sobol sensitivity indices are then computed using these reference values. For this test function, the sparse grid level offset s is set to 1.

Variable Total First order
r_w 0.8668 0.8289
r 0 0
T_u 0 0
H_u 0.0541 0.0414
T_l 0 0
H_l 0.0541 0.0414
L 0.0521 0.0393
K_w 0.0127 0.0095
Table 3: Reference Sobol indices for the borehole function, obtained using HF-PCE SG-5, for computing the error.

The errors of the Sobol indices obtained from the various PCE schemes are shown in Fig. 2. It can be seen in Figs. 2(a) and 2(b) that the MF-PCE method is able to reduce the MARE and the r² error with the same amount of HF function evaluations as HF-PCE. Results also show that MF-PCE reduces the first-order and total Sobol index errors compared to the HF-PCE method, with a monotonically decreasing trend. The convergence trends of the first-order and total index errors are roughly similar for all schemes. Nonetheless, the first-order and total Sobol indices obtained from LF-PCE without correction are already very accurate and a good representation of the HF borehole function from the Sobol indices point of view. This means that the trend of the HF borehole function is clearly captured by the LF borehole function, due to the very high correlation between the LF and HF functions. In fact, the convergence of the r² error is very similar for both HF-PCE and LF-PCE. This suggests that the LF function is adequate for estimating the Sobol indices of the HF function without any correction term if the correlation is near one. However, although the Sobol indices obtained from LF-PCE are a highly accurate representation of the HF function, the obtained statistical moments are inaccurate because the response is not corrected. This means that if the PCE model is also to be used for estimating the statistical moments, the LF function alone is not enough and needs correction to be properly used for both UQ and SA purposes (for this reason, the MARE for LF-PCE is not depicted).

Figure 3 shows the convergence of the first-order and total Sobol index errors for the borehole problem when the hypothetical cost of the LF simulation is counted. Here, the MF-PCE method provides a good alternative for estimating the Sobol indices when one cannot afford the luxury of evaluating HF-PCE with a higher sparse grid level. To reach the threshold values for both the first-order and total index errors, MF-PCE only needs an HF sparse grid one level lower than that of HF-PCE. The total computational cost is, of course, greater when the computational cost ratio is relatively high (i.e., RT = 1/4); however, there is still a clear advantage of using MF-PCE over HF-PCE even when RT is high for the borehole problem. In reaching a threshold value that we consider to be fine accuracy for engineering purposes, the MF-PCE method with RT = 1/4, 1/8, 1/16, and 1/32 needs only a fraction of the cost needed by HF-PCE to reach the same threshold (i.e., HF-PCE with SG-3).

Figure 4 shows the decay of the PCE coefficients for the borehole problem. We use the SG-4-5 scheme to generate this plot. By analyzing Fig. 4, it is clear that the correction function is less complex than the HF and LF functions, as indicated by the lower magnitude and faster decay of its coefficients. Note that the correction PCE always seems to decay faster than LF-PCE due to the higher cardinality of LF-PCE. However, we can see that a significant portion of the correction PCE basis has lower coefficient values than that of LF-PCE. This lower complexity indicates that it is easier to approximate the correction than the HF function, which leads to a more accurate approximation by the MF-PCE model compared to the standard HF-PCE model and, in turn, to a more accurate approximation of the true Sobol indices.
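The combination-and-post-processing step can be sketched as follows. This is a minimal illustration assuming an orthonormal basis and coefficients stored as dicts keyed by multi-indices; the function names and the toy coefficient values are ours, not from the paper:

```python
from collections import defaultdict

def combine_coefficients(lf_coeffs, corr_coeffs):
    """MF-PCE coefficients as the sum of the LF-PCE and correction-PCE
    coefficients over the union of their multi-index sets."""
    mf = defaultdict(float)
    for coeffs in (lf_coeffs, corr_coeffs):
        for alpha, value in coeffs.items():
            mf[alpha] += value
    return dict(mf)

def first_order_sobol(coeffs, dim):
    """First-order Sobol indices from PCE coefficients (orthonormal basis):
    S_i sums the squared coefficients of basis terms involving x_i only,
    divided by the total variance (all non-mean terms)."""
    variance = sum(v * v for alpha, v in coeffs.items() if any(alpha))
    indices = [0.0] * dim
    for alpha, v in coeffs.items():
        active = [i for i, order in enumerate(alpha) if order > 0]
        if len(active) == 1:
            indices[active[0]] += v * v / variance
    return indices

# toy two-variable example with made-up coefficients
lf = {(0, 0): 1.0, (1, 0): 0.8, (0, 1): 0.4}
corr = {(0, 0): 0.1, (1, 0): 0.2, (1, 1): 0.1}
mf = combine_coefficients(lf, corr)
S = first_order_sobol(mf, dim=2)
```

The same bookkeeping extends to total indices by summing over all terms in which a variable is active, which is how the Sobol indices are read off the combined expansion.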

(a) MARE.
(b) r² error.
(c) First-order Sobol index error.
(d) Total Sobol index error.
Figure 2: Convergence of error metrics for the borehole function with respect to the number of function evaluations. The MF-PCE method estimates the Sobol indices better than HF-PCE with the same amount of function evaluations, while the estimated values from HF- and LF-PCE are similar to each other.
(a) First-order Sobol index error.
(b) Total Sobol index error.
Figure 3: Convergence of error metrics for the borehole function with respect to the total simulation cost assuming a hypothetical LF model cost.
Figure 4: The decay of correction, HF-, and LF-PCE coefficients for the borehole function. Here, the correction function is less complex than HF- and LF-PCE, indicating a successful approximation.

3.2 Ishigami function

The Ishigami function is a three-dimensional, highly non-linear function which is frequently used as a benchmark problem for SA purposes ishigami1990importance . Here, we construct three LF representations of the Ishigami function to test the performance of MF-PCE in approximating the Sobol indices of a highly non-linear function. The standard form of the Ishigami function is defined as

f(\mathbf{x}) = \sin(x_1) + a \sin^2(x_2) + b\, x_3^4 \sin(x_1) \qquad (47)

The HF Ishigami function used in this paper, which is the standard expression of the Ishigami function with a = 7 and b = 0.1, is expressed as

f_{\mathrm{HF}}(\mathbf{x}) = \sin(x_1) + 7 \sin^2(x_2) + 0.1\, x_3^4 \sin(x_1) \qquad (48)

We then build three LF representations of the Ishigami function as follows:

(49)
(50)

and

(51)

The r² and MARE values of the LF relative to the HF Ishigami function are listed in Table 4. The distribution of the input random variables is Uniform [−π, π] for all three variables. The three LF Ishigami models represent three scenarios of discrepancy between the HF and LF functions. The 1st LF function has the highest r² correlation and the lowest MARE, while the 2nd one has the opposite characteristics. On the other hand, the 3rd LF model has exactly the same r² as the 2nd LF model but a higher MARE.

LF model r² MARE
1 0.9875 0.4504
2 0.8826 1.0672
3 0.8826 1.5587
Table 4: Statistical similarities between the LF and HF Ishigami functions.

The HF sparse grid level is varied from 1 to 6 with a fixed value of the sparse grid level offset. For the Ishigami function, the true Sobol indices can be obtained analytically; the values are shown in Table 5. These analytical Sobol indices are obtained by decomposing the variance of the Ishigami function as shown in Eq. 52:

D = D_1 + D_2 + D_{13}, \quad D_1 = \dfrac{b\pi^4}{5} + \dfrac{b^2\pi^8}{50} + \dfrac{1}{2}, \quad D_2 = \dfrac{a^2}{8}, \quad D_{13} = \dfrac{8\, b^2 \pi^8}{225} \qquad (52)
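This decomposition can be checked numerically; the short sketch below assumes the standard parameter values a = 7, b = 0.1 with x_i ~ Uniform[−π, π], and reproduces the analytical values listed in Table 5:

```python
import math

def ishigami_sobol(a=7.0, b=0.1):
    """Analytical Sobol indices of the Ishigami function for
    x_i ~ Uniform[-pi, pi] (a standard closed-form result)."""
    p4, p8 = math.pi ** 4, math.pi ** 8
    D1 = b * p4 / 5.0 + b ** 2 * p8 / 50.0 + 0.5   # main effect of x1
    D2 = a ** 2 / 8.0                              # main effect of x2
    D13 = 8.0 * b ** 2 * p8 / 225.0                # x1-x3 interaction
    D = D1 + D2 + D13                              # total variance
    first = (D1 / D, D2 / D, 0.0)                  # first-order indices
    total = ((D1 + D13) / D, D2 / D, D13 / D)      # total indices
    return first, total

S, ST = ishigami_sobol()
```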

The results, as shown in Fig. 5, indicate that all MF-PCE schemes are able to reduce the Sobol indices error compared to HF-PCE. It can also be seen that the convergence trend for all error metrics is roughly similar. The MF-PCE method with the 1st LF model produces the MF model with the lowest error of all, which is reasonable due to its high r² with the HF function. Furthermore, there is no significant difference between the convergence errors of the MF schemes with the 2nd and 3rd LF models. Although the MARE of the 3rd LF model is higher than that of the 2nd model (see Fig. 5(a)), the trends of both functions are similar to each other, which results in almost the same quality of the MF-PCE model. As expected, none of the LF-PCEs without correction (see Fig. 5(b)) can accurately estimate the r² of the true Ishigami function. Thus, LF-PCE fails to estimate the true Sobol indices (see Figs. 5(c) and 5(d)). This is mainly due to the r² correlation not being very close to one, unlike in the borehole case explained before. In cases like this, the LF model without correction should not be trusted as a representation of the HF function for estimating the Sobol indices. However, when the LF model is used within the MF-PCE framework, it can be efficiently used to estimate the Sobol indices of the true HF function with lower computational cost. This is very useful if the computational cost ratio of the HF to the LF function is very high.

The convergence of the error metrics with the hypothetical actual cost is shown in Fig. 6. Here, only MF-PCE with the first LF model is shown, since it is the best LF model available for the Ishigami function. By observing the plot, we can see that MF-PCE fails to yield a significant benefit when the computational cost ratio is at its highest value, RT = 1/4, for which the cost of MF-PCE relative to HF-PCE in reaching the threshold for both the first-order and total index errors does not amount to a significant reduction. On the other hand, the lower the assumed cost ratio, the smaller the fraction of the HF-PCE cost needed to reach this threshold. When the threshold is set to be very strict, the cost reduction becomes significant only for the two lowest computational cost ratios. Conversely, when the computational cost ratio equals 1/4, MF-PCE requires more function evaluations than HF-PCE to reach this very high accuracy.

The decay of the PCE coefficients, as shown in Fig. 7, clearly indicates that the coefficients of all correction PCEs decay faster than those of the LF- and HF-PCE for all the LF models. This means that the correction functions are all less complex than the LF ones. By visual inspection, it can clearly be seen that the gap between the HF and correction PCE models is wider for MF-PCE with the 1st model than for the others. The wider this gap, the less complex the correction to the LF function, which results in an MF model that is more successful in estimating the Sobol indices.

Index Analytical
S_1 0.3138
S_2 0.4424
S_3 0
S_12 0
S_13 0.2436
S_23 0
S_123 0
S_T1 0.5574
S_T2 0.4424
S_T3 0.2436
Table 5: First order and total Sobol indices of the HF Ishigami function.
(a) MARE.
(b) r² error.
(c) First-order Sobol index error.
(d) Total Sobol index error.
Figure 5: Convergence of error metrics for the Ishigami function with respect to the number of function evaluations. The MF-PCE scheme with the first LF model estimates the Sobol indices better than the other methods. Here, LF-PCE alone cannot be relied on to estimate the Sobol indices since the correlation is not near one.
(a) First-order Sobol index error.
(b) Total Sobol index error.
Figure 6: Convergence of error metrics for the Ishigami function with respect to the total simulation cost assuming a hypothetical LF model cost. Only MF-PCE with the first LF model is shown due to its higher accuracy relative to the other LF models.
(a) 1st LF model.
(b) 2nd LF model.
(c) 3rd LF model.
Figure 7: The decay of the correction, HF-, and LF-PCE coefficients for the Ishigami function. The correction functions from all three LF models are less complex (i.e., lower magnitude and faster decay) than their corresponding HF- and LF-PCE, with the gap being wider for the first LF model.

3.3 Short column problem

The third algebraic test problem is a short column problem with five LF models ng2012multifidelity . Furthermore, this problem also involves random variables with non-uniform distributions.

The HF response for the short column problem is defined as

g(\mathbf{x}) = 1 - \dfrac{4M}{b h^2 Y} - \dfrac{P^2}{b^2 h^2 Y^2} \qquad (53)

The LF responses are defined as ng2012multifidelity

(54)
(55)
(56)
(57)

and

(58)

where the random variables and their distributions are listed in Table 6. The five LF short column functions have various values of r² correlation and MARE relative to the HF one, as shown in Table 7. As can be seen from this table, there are LF models with high r² and low MARE (model 1, for example) and LF models with a low correlation value and very large MARE (model 3, for example). We aim to further investigate which characteristics of an LF model yield better accuracy in estimating the Sobol indices when employed in the MF-PCE framework. The sparse grid level offset is set to 2 for this problem.
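As a small illustration, the HF short column response can be sketched as below; the limit-state form is the standard short column expression from the reliability literature (our assumption for Eq. 53), and the nominal input values correspond to the centers of the distributions in Table 6:

```python
def short_column_g(b, h, P, M, Y):
    """Short column limit state (assumed standard form):
    g = 1 - 4M/(b h^2 Y) - P^2/(b^2 h^2 Y^2),
    with width b, depth h, axial force P, bending moment M, yield stress Y."""
    return (1.0
            - 4.0 * M / (b * h ** 2 * Y)
            - P ** 2 / (b ** 2 * h ** 2 * Y ** 2))

# evaluate at nominal values: b = 10, h = 20, P = 500, M = 2000, Y = 5
g_nominal = short_column_g(10.0, 20.0, 500.0, 2000.0, 5.0)
```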

Random variable Probability distribution
b (width) Uniform [5, 15]
h (depth) Uniform [15, 25]
P (axial force) Normal (mean 500, s.d. 100)
M (bending moment) Normal (mean 2000, s.d. 400)
Y (yield stress) Normal (mean 5, s.d. 0.5)
Table 6: Random variable definitions and distributions for the short column test case.
LF model r² MARE
1 0.9234 8.6926
2 0.7612 126.6604
3 0.4780 154.7131
4 0.6931 15.4713
5 0.5914 1547.13
Table 7: Correlation and MARE values between the LF and HF short column responses.
Index MCS
S_b 0.5240
S_h 0.2059
S_P 0.0696
S_M 0.0284
S_Y 0.0532
S_Tb 0.6257
S_Th 0.2708
S_TP 0.1143
S_TM 0.0346
S_TY 0.0799
Table 8: First order and total Sobol indices of the HF short column function.

The convergence of the error metrics for the short column problem is shown in Fig. 8. By closer investigation of this figure, we observe that only the 1st and 4th LF functions are able to consistently reduce the error of the Sobol indices relative to HF-PCE for all levels of sparse grid. As expected, lower MARE and r² error generally correspond to a better estimation of the Sobol indices. However, this is not always the case, as can be observed from the result of MF-PCE using the 3rd LF model, which produces a better estimate of the Sobol indices than HF-PCE from HF sparse grid level 3 onward. The 2nd and 5th LF models produce MF models of lower quality than HF-PCE in terms of estimating the Sobol indices.

Figure 9 depicts the convergence of the error metrics for the short column function when the hypothetical cost of the LF simulation is taken into account. It is clear from this figure that when the computational cost ratio is equal to 1/4, there is no advantage gained by utilizing MF-PCE: MF-PCE with RT = 1/4 needs 110% of the computational cost of HF-PCE to bring the first-order and total Sobol index errors below the threshold. On the other hand, we observe that MF-PCE with RT = 1/8 requires 67.8% of the HF-PCE cost to reach a similar accuracy, which is quite a noteworthy cost reduction for reaching engineering accuracy. The cases with RT = 1/16 and RT = 1/32 require 46.61% and 36.02% of the HF-PCE cost, respectively, to reach this engineering accuracy.

By investigating only the statistical similarities between the HF and LF models, it is hard to determine whether an MF-PCE with a certain LF model can successfully estimate the true Sobol indices. It is clear for the 1st model, but not for the 4th model, since its r² correlation is low and its MARE is relatively high. It becomes much more difficult to make this inference for the 2nd LF model (its correlation is moderate while its MARE is very high). Although for some cases it is obvious whether the LF model is adequate for assisting MF-PCE, inspecting the decay of the PCE coefficients is the proper way to perform this kind of analysis.

The decay of the PCE coefficients for all LF models is shown in Fig. 10. All PCE coefficients are obtained using level 4 correction sparse grids (which means level 6 LF sparse grids, given the offset of 2), which are adequate to obtain good convergence of the Sobol indices. It is very clear from this figure that the coefficient decay behavior of the HF, LF, and correction PCE is similar for the unsuccessful MF models. Here, we can see that the magnitude of the correction PCEs' coefficients is much larger, and also decays more slowly, than that of HF-PCE. Moreover, the coefficients of the LF and correction PCE decay similarly to each other for the 2nd, 3rd, and 5th LF models. Such similar decay means that the trend of the correction PCE almost mimics that of the LF-PCE, indicating that the correction PCE is not less complex than the LF- and HF-PCE. The MF-PCE scheme with the 3rd LF model is a special case, where the convergence of the first-order and total index errors shows that it slightly outperforms HF-PCE when correction sparse grids of level 3 and 4 are used. However, it is not advisable to use an LF model like the 3rd model in a real application, because the inspection of the PCE coefficients decay gives no reason to trust the adequacy of the 3rd model in aiding MF-PCE. Indeed, with a low-level sparse grid, MF-PCE with the 3rd LF model fails to estimate the Sobol indices accurately. On the other hand, the decay trend of MF-PCE with the 1st and 4th models suggests that the MF approximation is better than the HF one. The gap between the decay of the correction and the HF- and LF-PCE is visible for these two LF models, indicating the lower complexity of the correction PCE model. This can be observed especially clearly when MF-PCE with the 1st LF model is compared with the 4th LF model. In addition, if we pay close attention to the decay of the PCE coefficients for the 4th model, the magnitude of the correction PCE's first coefficient is larger than that of HF-PCE (i.e., the mean is higher). However, its correction PCE's coefficients decay faster than those of HF-PCE. In light of these results, we observe that it is the rate of decay of the coefficients that matters for a successful MF-PCE model, and not the difference between the mean values of the correction and HF-PCE. Moreover, it is also better to look at the difference between the correction PCE and both the HF- and LF-PCE.

(a) MARE.
(b) r² error.
(c) First-order Sobol index error.
(d) Total Sobol index error.
Figure 8: Convergence of error metrics for the short column function with respect to the number of function evaluations. The MF-PCE schemes with the first and the fourth LF models consistently outperform HF-PCE for all sparse grid levels.
(a) First-order Sobol index error.
(b) Total Sobol index error.
Figure 9: Convergence of error metrics for the short column function with respect to the total simulation cost assuming a hypothetical LF model cost. Only the first LF model is used since it is the best available LF model for MF approximation.
(a) 1st model.
(b) 2nd model.
(c) 3rd model.
(d) 4th model.
(e) 5th model.
Figure 10: The decay of the correction, HF-, and LF-PCE coefficients for the short column function with various LF models. The correction PCE’s coefficients of the first and the fourth LF models decay faster than those of the corresponding HF-PCE, indicating a successful MF model.

3.4 Transonic airfoil in inviscid flow

The final test is the SA of a transonic airfoil in inviscid flow. The open source CFD code SU2 was used to solve the Euler equations in order to obtain the aerodynamic coefficients palacios2013stanford . The airfoil model in this case is assumed to be subjected to geometrical uncertainties. Here, we used the RAE 2822 airfoil, which was parameterized via the PARSEC method to find the variables that describe the datum shape sobieczky1999parametric . Note that since the datum shape is an approximation of the original one, there is an error between the coordinates of the true and the approximated shape. However, our goal here is to demonstrate the capability of MF-PCE in performing SA, and not to measure the randomness of the original airfoil itself; hence, this test case is appropriate for the purpose of demonstrating an SA algorithm. For this problem, the random output of interest is the lift-to-drag ratio (L/D), under a fixed flight condition with a Mach number of 0.73 and an angle of attack of 2 degrees.

After the suitable PARSEC parameters were found, we assumed that there were five uncertain parameters, that is, the x and z coordinates of the lower crest, the x and z coordinates of the upper crest, and the trailing edge angle, with the distributions listed in Table 9. An example of one realization evaluated by the CFD solver is shown in Fig. 11. The datum shape and several realizations of the uncertain shape are shown in Fig. 12. For this problem, we used fully and partially converged simulations as the HF and LF simulations, respectively. The criterion for the fully converged simulation is that the drag residual reaches a prescribed tolerance, which on average requires 500 iterations. On the other hand, the number of iterations for the partially converged simulation to be used as the LF simulation is set to 100, which means that the computational cost ratio between the LF and HF simulations is 0.2 for this case. This fully and partially converged simulation strategy has been employed for optimization forrester2006optimization ; branke2017efficient , surrogate model construction courrier2014use , and UQ palar2016multi . One advantage of this strategy is that some LF samples are evaluated together with the HF samples, which leads to a further reduction in the computational cost.

Figure 11: An example of one CFD realization for the inviscid transonic airfoil case.
Figure 12: The datum airfoil shape (red line) and the ensemble of various realizations of the uncertain shape (black lines) for the inviscid transonic airfoil case.

Since the statistical moments are typically also quantities of interest in real-world applications, we plot the convergence of the statistical moments together with the Sobol indices. For the reference values, an HF-PCE with a level 4 sparse grid is used to obtain the reference total Sobol indices and statistical moments, as shown in Table 10. Also shown in this table are the Sobol indices obtained by LF-PCE with a level 4 sparse grid. Ranking the total sensitivity indices shows that the most important variable (total index 0.6964) is followed by those with indices 0.3207 and 0.0214, while the contributions of the two remaining variables are very small but not negligible. The sum of the total Sobol indices is 1.015, which means that the largest contributors to the total variance are the first order sensitivity indices, with small interaction between variables. Although the Sobol indices obtained by LF-PCE seem close to the true Sobol indices, they are not accurate representations of the true indices. Moreover, the standard deviations of the HF- and LF-PCE differ by 1.78%. This means that a correction of LF-PCE must be applied within the MF-PCE framework.

No. Random variable Probability distribution
1 Uniform [0.4415, 0.4549]
2 Uniform [0.060, 0.066]
3 Uniform [0.344, 0.3800]
4 Uniform [-0.0618, -0.0560]
5 Uniform [-0.1182, -0.1070]
Table 9: Random variable definitions and distributions for the inviscid transonic airfoil test case.
Sensitivity index HF, SG-4 LF, SG-4
0.3207 0.3136
0.6964 0.6970
0.0044 0.0037
0.0214 0.0198
0.0085 0.0084
Mean 121.9250 121.5645
St. D. 39.6901 37.3553
Table 10: Sobol indices obtained by HF- and LF-PCE with high level sparse grid for the inviscid transonic airfoil problem.
(a) Mean.
(b) Standard deviation.
(c) Total Sobol index error.
Figure 13: Convergence of statistical moments and total Sobol indices for the inviscid transonic airfoil problem. The advantage of utilizing MF-PCE is more evident when it is used to estimate the total Sobol indices, as shown by the faster convergence of MF-PCE toward engineering accuracy.
(a)–(e) Total Sobol index of each of the five random variables.
Figure 14: Convergence of the total Sobol indices for the inviscid transonic airfoil problem. The faster convergence of MF-PCE compared to HF-PCE is especially clear for the total Sobol indices of two of the variables.
Figure 15: The decay of the correction, HF-, and LF-PCE coefficients for the inviscid transonic airfoil problem. The coefficients of the correction PCE evidently decay faster than those of the HF- and LF-PCE.

First, a convergence analysis of the statistical moments and the error of the total Sobol indices is performed; the results are depicted in Fig. 13 (the error over all Sobol indices is not shown since the interaction terms are small). The horizontal axis of Fig. 13 shows the actual cost of MF-PCE, which includes the cost of the LF function. From the mean and standard deviation perspective, the advantage of utilizing MF-PCE over HF-PCE is not clearly seen. Also, note that there is a jump in computational cost for MF-PCE with the larger sparse grid level offset when the HF sparse grid level is set to 2. Since increasing the offset means a higher computational cost, one has to be careful not to set its value too high, especially if the computational cost of the LF function is not negligible. Nonetheless, a clear advantage of MF-PCE is well observed when it is used to estimate the Sobol indices. Here, the two MF-PCE schemes reach the total Sobol index error threshold after the equivalent of 125 and 325 HF function evaluations, respectively, whereas HF-PCE needs 341 function evaluations for the same accuracy; this means that the computational cost is reduced to about 36.66% and 95.3%, respectively. Although the gain from utilizing MF-PCE with the larger offset seems to be minimal in achieving this accuracy, it brings the error of the total Sobol indices below a looser threshold with the equivalent of only 77 HF function evaluations.

Figure 14 shows the convergence of each total Sobol index with respect to the actual cost. The convergence trend is less clear for the three variables whose contributions are relatively small compared to the two dominant ones. We observe that MF-PCE with both offsets reaches a good accuracy for the two dominant variables with an HF sparse grid level of 2. However, MF-PCE with the smaller offset is more efficient here, since it reaches engineering accuracy with fewer function evaluations than both MF-PCE with the larger offset and HF-PCE.

The decay of the correction and LF-PCE coefficients is depicted in Fig. 15. The plot is prepared using MF-PCE with correction and LF-PCE sparse grid levels of 3 and 4, respectively. By investigating the figure, it is obvious that the correction PCE model is less complex than the LF- and HF-PCE. Thus, the given LF model is appropriate for use within the MF-PCE framework for this problem. The reason for the similarity between the decay of the LF- and HF-PCE coefficients is that the difference in magnitude between the two functions is small. However, we can see that for polynomial terms with small coefficients (i.e., beyond the 40th term) the two PCEs start to differ. This is where the correction PCE plays its role.

4 Conclusions

An approach for SA using PCE with an MF model has been presented in this paper. The goal is to accelerate the SA process when simulations with multiple levels of fidelity are available. The spectral projection-based MF-PCE is used as the method of interest. The method uses two PCEs, LF and correction, which are combined into a single MF-PCE. SA is then performed directly in the post-processing phase by computing the Sobol indices from the MF-PCE coefficients.

The proposed method is demonstrated and investigated on several algebraic and engineering test problems. In cases where the correlation of the LF with the HF function is very high (near 1), the Sobol indices obtained from the LF- and HF-PCE are almost the same, revealing that for such cases the LF function alone is enough to obtain accurate Sobol indices. However, the Sobol indices obtained from LF- and HF-PCE differ greatly if the LF and HF functions are not nearly perfectly correlated. It is also worth noting that one usually wants to compute the statistical moments together with the Sobol indices, which means that a correction of the LF function is always desired in order to obtain accurate values. The current study shows that MF-PCE can be employed to obtain accurate Sobol indices with a lower computational cost than HF-PCE, with the requirement that the LF and HF functions are highly correlated with each other. The decrease in the error of the estimated Sobol indices correlates positively with the decrease of the MARE and, especially, of the r² error. Investigation of the artificial problems with a hypothetical LF simulation cost suggests that the computational cost ratio between the LF and HF functions should be lower than 1/4 in order to ensure the advantage of utilizing MF-PCE over HF-PCE in terms of computational efficiency. In the inviscid transonic airfoil case with five uncertain parameters, the MF approach successfully estimates the Sobol indices with a computational cost of about 36.66% of that needed by the HF-PCE method to obtain nearly similar engineering accuracy.

As for future work, implementing an adaptive sparse grid for MF-PCE is a promising approach to efficiently obtain accurate Sobol indices in high-dimensional problems by focusing on the most important dimensions. Moreover, the Sobol indices themselves can be used to guide the adaptivity. Because such an approach has not yet been considered, more work is needed to investigate the adaptive MF-PCE approach for SA purposes. The use of cost-efficient quadrature such as Clenshaw–Curtis quadrature should also be considered. Investigation of regression-based MF-PCE for SA, which allows flexibility in sampling location and size, is also a promising research direction.

Acknowledgement

Part of this work is funded through Riset KK ITB. Koji Shimoyama was supported in part by Grant-in-Aid for Scientific Research (B) No. H1503600, administered by the Japan Society for the Promotion of Science (JSPS).

References

  • (1) A. Saltelli, K. Chan, E. M. Scott, et al., Sensitivity analysis, Vol. 1, Wiley New York, 2000.
  • (2) B. Efron, C. Stein, The jackknife estimate of variance, The Annals of Statistics (1981) 586–596.
  • (3) I. M. Sobol’, On sensitivity estimation for nonlinear mathematical models, Matematicheskoe Modelirovanie 2 (1) (1990) 112–118.
  • (4) J. Park, I. W. Sandberg, Universal approximation using radial-basis-function networks, Neural Computation 3 (2) (1991) 246–257.
  • (5) D. G. Krige, A statistical approach to some mine valuation and allied problems on the Witwatersrand, Ph.D. thesis, University of the Witwatersrand (1951).
  • (6) G. Matheron, Principles of geostatistics, Economic geology 58 (8) (1963) 1246–1266.
  • (7) N. Cressie, Statistics for spatial data, John Wiley & Sons, 2015.
  • (8) J. Sacks, W. J. Welch, T. J. Mitchell, H. P. Wynn, Design and analysis of computer experiments, Statistical science (1989) 409–423.
  • (9) L. S. Gandin, R. Hardin, Objective analysis of meteorological fields, Vol. 242, Israel program for scientific translations Jerusalem, 1965.
  • (10) G. Loeven, J. Witteveen, H. Bijl, Probabilistic collocation: an efficient non-intrusive approach for arbitrarily distributed parametric uncertainties, in: 45th AIAA Aerospace Sciences Meeting and Exhibit, 2007, p. 317.
  • (11) O. M. Knio, H. N. Najm, R. G. Ghanem, et al., A stochastic projection method for fluid flow: I. basic formulation, Journal of computational Physics 173 (2) (2001) 481–511.
  • (12) D. M. Ghiocel, R. G. Ghanem, Stochastic finite-element analysis of seismic soil-structure interaction, Journal of Engineering Mechanics 128 (1) (2002) 66–77.
  • (13) D. Xiu, G. E. Karniadakis, The Wiener–Askey polynomial chaos for stochastic differential equations, SIAM journal on scientific computing 24 (2) (2002) 619–644.
  • (14) R. G. Ghanem, P. D. Spanos, Stochastic finite elements: a spectral approach, Courier Corporation, 2003.
  • (15) P. Das, Y. Zheng, Cumulative formation of response surface and its use in reliability analysis, Probabilistic Engineering Mechanics 15 (4) (2000) 309–315.
  • (16) M. Papadrakakis, N. D. Lagaros, Reliability-based structural optimization using neural networks and Monte Carlo simulation, Computer Methods in Applied Mechanics and Engineering 191 (32) (2002) 3491–3507.
  • (17) I. Kaymaz, Application of kriging method to structural reliability problems, Structural Safety 27 (2) (2005) 133–151.
  • (18) V. Dubourg, F. Deheeger, B. Sudret, Metamodel-based importance sampling for the simulation of rare events, Applications of Statistics and Probability in Civil Engineering 26 (2011) 192.
  • (19) B. J. Bichon, J. M. McFarland, S. Mahadevan, Efficient surrogate models for reliability analysis of systems with multiple failure modes, Reliability Engineering & System Safety 96 (10) (2011) 1386–1395.
  • (20) M. Balesdent, J. Morio, J. Marzat, Kriging-based adaptive importance sampling algorithms for rare event estimation, Structural Safety 44 (2013) 1–10.
  • (21) B. J. Bichon, M. S. Eldred, L. P. Swiler, S. Mahadevan, J. M. McFarland, Efficient global reliability analysis for nonlinear implicit performance functions, AIAA J 46 (10) (2008) 2459–2468.
  • (22) B. Echard, N. Gayton, M. Lemaire, AK-MCS: an active learning reliability method combining kriging and Monte Carlo simulation, Structural Safety 33 (2) (2011) 145–154.
  • (23) X. Huang, J. Chen, H. Zhu, Assessing small failure probabilities by AK–SS: an active learning method combining kriging and subset simulation, Structural Safety 59 (2016) 86–95.
  • (24) S.-K. Choi, R. V. Grandhi, R. A. Canfield, Structural reliability under non-Gaussian stochastic behavior, Computers & Structures 82 (13) (2004) 1113–1121.
  • (25) M. Berveiller, B. Sudret, M. Lemaire, Stochastic finite element: a non intrusive approach by regression, European Journal of Computational Mechanics/Revue Européenne de Mécanique Numérique 15 (1-3) (2006) 81–92.
  • (26) C. Hu, B. D. Youn, Adaptive-sparse polynomial chaos expansion for reliability analysis and design of complex engineering systems, Structural and Multidisciplinary Optimization 43 (3) (2011) 419–442.
  • (27) B. Sudret, G. Blatman, M. Berveiller, Response surfaces based on polynomial chaos expansions, Construction reliability: safety, variability and sustainability (2013) 147–167.
  • (28) R. Schöbi, B. Sudret, S. Marelli, Rare event estimation using polynomial-chaos kriging, ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering 3 (2) (2016) D4016002.
  • (29) R. Jin, W. Chen, A. Sudjianto, On sequential sampling for global metamodeling in engineering design, in: Proceedings of DETC, Vol. 2, 2002, pp. 539–548.
  • (30) I. Bilionis, N. Zabaras, Multi-output local Gaussian process regression: Applications to uncertainty quantification, Journal of Computational Physics 231 (17) (2012) 5718–5746.
  • (31) K. Shimoyama, S. Kawai, J. J. Alonso, Dynamic adaptive sampling based on Kriging surrogate models for efficient uncertainty quantification, in: 15th AIAA Non-Deterministic Approaches Conference, 2013, AIAA Paper 2013-1470.
  • (32) R. P. Dwight, Z.-H. Han, Efficient uncertainty quantification using gradient-enhanced Kriging, AIAA Paper 2009-2276, 2009.
  • (33) M. P. Rumpfkeil, W. Yamazaki, D. J. Mavriplis, A dynamic sampling method for Kriging and cokriging surrogate models, in: 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, American Institute of Aeronautics and Astronautics, 2011.
  • (34) Q. Wang, P. Moin, G. Iaccarino, A high order multivariate approximation scheme for scattered data sets, Journal of Computational Physics 229 (18) (2010) 6343–6361.
  • (35) K. Boopathy, M. P. Rumpfkeil, Unified framework for training point selection and error estimation for surrogate models, AIAA Journal.
  • (36) B. Sudret, Global sensitivity analysis using polynomial chaos expansions, Reliability Engineering & System Safety 93 (7) (2008) 964–979.
  • (37) T. Crestaux, O. Le Maître, J.-M. Martinez, Polynomial chaos expansion for sensitivity analysis, Reliability Engineering & System Safety 94 (7) (2009) 1161–1172.
  • (38) M. D. Morris, T. J. Mitchell, D. Ylvisaker, Bayesian design and analysis of computer experiments: use of derivatives in surface prediction, Technometrics 35 (3) (1993) 243–255.
  • (39) W. Liu, Development of gradient-enhanced kriging approximations for multidisciplinary design optimization, 2003.
  • (40) J. H. de Baar, R. P. Dwight, H. Bijl, Improvements to gradient-enhanced Kriging using a Bayesian interpretation, International Journal for Uncertainty Quantification 4 (3).
  • (41) J. de Baar, T. Scholcz, R. Dwight, Exploiting adjoint derivatives in high-dimensional metamodels, AIAA Journal 53 (5) (2015) 1391–1395.
  • (42) J. Peng, J. Hampton, A. Doostan, On polynomial chaos expansion via gradient-enhanced ℓ1-minimization, Journal of Computational Physics.
  • (43) J. H. de Baar, B. Harding, A gradient-enhanced sparse grid algorithm for uncertainty quantification, International Journal for Uncertainty Quantification 5 (5).
  • (44) N. Wiener, The homogeneous chaos, American Journal of Mathematics 60 (4) (1938) 897–936.
  • (45) S.-K. Choi, R. V. Grandhi, R. A. Canfield, C. L. Pettit, Polynomial chaos expansion with Latin hypercube sampling for estimating response variability, AIAA Journal 42 (6) (2004) 1191–1198.
  • (46) M. S. Eldred, J. Burkardt, Comparison of non-intrusive polynomial chaos and stochastic collocation methods for uncertainty quantification, in: Proceedings of the 47th AIAA Aerospace Sciences Meeting, no. 2009-0976, AIAA, Orlando, FL, 2009.
  • (47) G. Blatman, B. Sudret, Efficient computation of global sensitivity indices using sparse polynomial chaos expansions, Reliability Engineering & System Safety 95 (11) (2010) 1216–1229.
  • (48) G. Blatman, B. Sudret, Adaptive sparse polynomial chaos expansion based on least angle regression, Journal of Computational Physics 230 (6) (2011) 2345–2367.
  • (49) A. Doostan, H. Owhadi, A non-adapted sparse approximation of PDEs with stochastic inputs, Journal of Computational Physics 230 (8) (2011) 3015–3034.
  • (50) J. D. Jakeman, M. S. Eldred, K. Sargsyan, Enhancing ℓ1-minimization estimates of polynomial chaos expansions using basis selection, Journal of Computational Physics 289 (2015) 18–34.
  • (51) S. Smolyak, Interpolation and quadrature formulas for the classes W^a_s and E^a_s, in: Dokl. Akad. Nauk SSSR, Vol. 131, 1960, pp. 1028–1031.
  • (52) T. Gerstner, M. Griebel, Numerical integration using sparse grids, Numerical algorithms 18 (3) (1998) 209–232.
  • (53) H.-J. Bungartz, M. Griebel, Sparse grids, Acta numerica 13 (2004) 147–269.
  • (54) J. Garcke, Sparse grids in a nutshell, in: Sparse grids and applications, Springer, 2012, pp. 57–80.
  • (55) S. A. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions, in: Dokl. Akad. Nauk SSSR, Vol. 4, 1963, p. 123.
  • (56) D. Xiu, J. S. Hesthaven, High-order collocation methods for differential equations with random inputs, SIAM Journal on Scientific Computing 27 (3) (2005) 1118–1139.
  • (57) P. G. Constantine, M. S. Eldred, E. T. Phipps, Sparse pseudospectral approximation method, Computer Methods in Applied Mechanics and Engineering 229 (2012) 1–12.
  • (58) G. T. Buzzard, Global sensitivity analysis using sparse grid interpolation and polynomial chaos, Reliability Engineering & System Safety 107 (2012) 82–89.
  • (59) G. T. Buzzard, D. Xiu, Variance-based global sensitivity analysis via sparse-grid interpolation and cubature, Communications in Computational Physics 9 (3) (2011) 542–567.
  • (60) O. Garcia-Cabrejo, A. Valocchi, Global sensitivity analysis for multivariate output using polynomial chaos expansion, Reliability Engineering & System Safety 126 (2014) 25–36.
  • (61) M. C. Kennedy, A. O’Hagan, Predicting the output from a complex computer code when fast approximations are available, Biometrika 87 (1) (2000) 1–13.
  • (62) A. I. Forrester, A. Sóbester, A. J. Keane, Multi-fidelity optimization via surrogate modelling, in: Proceedings of the Royal Society of London a: mathematical, physical and engineering sciences, Vol. 463, The Royal Society, 2007, pp. 3251–3269.
  • (63) J. W. Bandler, Q. S. Cheng, S. Dakroury, A. S. Mohamed, M. H. Bakr, K. Madsen, J. Søndergaard, et al., Space mapping: the state of the art, IEEE Transactions on Microwave Theory and Techniques 52 (1) (2004) 337–361.
  • (64) H. Shah, S. Hosder, S. Koziel, Y. A. Tesfahunegn, L. Leifsson, Multi-fidelity robust aerodynamic design optimization under mixed uncertainty, Aerospace Science and Technology 45 (2015) 17–29.
  • (65) M. B. Giles, Multilevel Monte Carlo path simulation, Operations Research 56 (3) (2008) 607–617.
  • (66) A. Barth, C. Schwab, N. Zollinger, Multi-level Monte Carlo finite element method for elliptic PDEs with stochastic coefficients, Numerische Mathematik 119 (1) (2011) 123–161.
  • (67) L. W. Ng, K. E. Willcox, Multifidelity approaches for optimization under uncertainty, International Journal for Numerical Methods in Engineering 100 (10) (2014) 746–772.
  • (68) J. de Baar, S. Roberts, R. Dwight, B. Mallol, Uncertainty quantification for a sailing yacht hull, using multi-fidelity kriging, Computers & Fluids 123 (2015) 185–201.
  • (69) L. W.-T. Ng, M. Eldred, Multifidelity uncertainty quantification using nonintrusive polynomial chaos and stochastic collocation, in: Proceedings of the 14th AIAA Non-Deterministic Approaches Conference, no. AIAA-2012-1852, Honolulu, HI, 2012.
  • (70) P. S. Palar, T. Tsuchiya, G. Parks, Decomposition-based evolutionary aerodynamic robust optimization with multi-fidelity point collocation non-intrusive polynomial chaos, in: 17th AIAA Non-Deterministic Approaches Conference, 2015, p. 1377.
  • (71) P. S. Palar, T. Tsuchiya, G. T. Parks, Multi-fidelity non-intrusive polynomial chaos based on regression, Computer Methods in Applied Mechanics and Engineering 305 (2016) 579–606.
  • (72) A. Narayan, C. Gittelson, D. Xiu, A stochastic collocation algorithm with multifidelity models, SIAM Journal on Scientific Computing 36 (2) (2014) A495–A521.
  • (73) L. Le Gratiet, C. Cannamela, B. Iooss, A Bayesian approach for global sensitivity analysis of (multifidelity) computer codes, SIAM/ASA Journal on Uncertainty Quantification 2 (1) (2014) 336–363.
  • (74) T. Homma, A. Saltelli, Importance measures in global sensitivity analysis of nonlinear models, Reliability Engineering & System Safety 52 (1) (1996) 1–17.
  • (75) D. Xiu, G. E. Karniadakis, Modeling uncertainty in flow simulations via generalized polynomial chaos, Journal of Computational Physics 187 (1) (2003) 137–167.
  • (76) R. Askey, J. A. Wilson, Some basic hypergeometric orthogonal polynomials that generalize Jacobi polynomials, Vol. 319, American Mathematical Soc., 1985.
  • (77) M. Griebel, M. Schneider, C. Zenger, A combination technique for the solution of sparse grid problems, Technische Universität, 1990.
  • (78) D. J. Toal, Some considerations regarding the use of multi-fidelity kriging in the construction of surrogate models, Structural and Multidisciplinary Optimization 51 (6) (2015) 1223–1245.
  • (79) S. Xiong, P. Z. Qian, C. J. Wu, Sequential design and analysis of high-accuracy and low-accuracy computer codes, Technometrics 55 (1) (2013) 37–46.
  • (80) T. Ishigami, T. Homma, An importance quantification technique in uncertainty analysis for computer models, in: Proceedings of the First International Symposium on Uncertainty Modeling and Analysis, IEEE, 1990, pp. 398–403.
  • (81) F. Palacios, M. R. Colonno, A. C. Aranake, A. Campos, S. R. Copeland, T. D. Economon, A. K. Lonkar, T. W. Lukaczyk, T. W. Taylor, J. J. Alonso, Stanford University Unstructured (SU2): An open-source integrated computational environment for multi-physics simulation and design, AIAA Paper 2013-0287, 2013.
  • (82) H. Sobieczky, Parametric airfoils and wings, in: Recent Development of Aerodynamic Design Methodologies, Springer, 1999, pp. 71–87.
  • (83) A. I. Forrester, N. W. Bressloff, A. J. Keane, Optimization using surrogate models and partially converged computational fluid dynamics simulations, in: Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, Vol. 462, The Royal Society, 2006, pp. 2177–2204.
  • (84) J. Branke, M. Asafuddoula, K. S. Bhattacharjee, T. Ray, Efficient use of partially converged simulations in evolutionary optimization, IEEE Transactions on Evolutionary Computation 21 (1) (2017) 52–64.
  • (85) N. Courrier, P.-A. Boucard, B. Soulier, The use of partially converged simulations in building surrogate models, Advances in Engineering Software 67 (2014) 186–197.
  • (86) H. Liu, Y.-S. Ong, J. Cai, A survey of adaptive sampling for global metamodeling in support of simulation-based complex engineering design, Structural and Multidisciplinary Optimization (2017) 1–24.
  • (87) P. Wang, Z. Lu, Z. Tang, An application of the Kriging method in global sensitivity analysis with parameter uncertainty, Applied Mathematical Modelling 37 (9) (2013) 6543–6555.