1 Introduction
Computational methods are widely deployed to predict the behavior and to compute the outputs of interest (e.g., stress distribution or force) of physical or engineering systems. In engineering design and analysis, computational partial differential equation (PDE) solvers are routinely employed to aid and enhance such processes. However, due to the existence of uncertainties in the system, a deterministic analysis might fail to capture the true probabilistic nature of the system. An example arises in aerospace engineering, where manufacturing error and environmental uncertainties such as gust, turbulence, and changes in design conditions perturb the nominal condition of the aircraft. Understanding the effect of uncertainties on the system is therefore a vital task in many engineering and scientific problems. Depending on their nature, uncertainties can be categorized as either aleatory or epistemic. While aleatory uncertainties are irreducible and inherent to the system, epistemic uncertainties are caused by our lack of knowledge of the system being investigated. Aleatory uncertainties can be conveniently expressed by probability theory using specific measures such as statistical moments.
Sensitivity analysis (SA) in probabilistic modeling plays an important role in understanding the impact of input random variables and their combinations on the model output. By performing SA, the contributions of the input random variables to the model output can be ranked, thus telling the analyst which variables are the most and least influential. This is very useful in uncertainty quantification (UQ), where the dimensionality of the random input can be high and it is desirable to reduce the complexity beforehand. Moreover, sensitivity information can provide important knowledge about the physics of the system being investigated. In engineering analysis, numerical evaluation of PDEs is often required in order to obtain the information necessary for SA. In the fields of fluid and solid mechanics, the model responses are typically evaluated using computational fluid dynamics (CFD) or finite element methods (FEM), respectively.
Based on the range of the domain to be analyzed, SA is commonly classified into two groups saltelli2000sensitivity:
local sensitivity analysis, which studies the local impact of the input parameters on the model output and typically provides the partial derivatives of the output with respect to the input parameters.

global sensitivity analysis, which studies the uncertainties in the output due to variations of the input parameters over their entire domain.
Based on how the model response is exploited, SA can be further categorized into saltelli2000sensitivity:

regression-based methods, which fit a linear regression to the relationship between the model input and output. These methods are adequate for linear or nearly linear relationships but inadequate for highly nonlinear ones.

variance-based methods, which decompose the variance of the output into the sum of contributions of each input variable and their combinations. In the statistics literature, variance-based methods are widely known as ANOVA (ANalysis Of VAriance) efron1981jackknife.
In this paper, we are interested in global SA using variance-based methods since our main focus is on UQ, although the methodology itself can be used for other applications such as optimization. More specifically, this paper focuses on the Sobol sensitivity analysis method sobol1990sensitivity.
Monte Carlo simulation (MCS) is the most conventional method to perform SA sobol1990sensitivity. The advantage of MCS for SA is that it is straightforward and very simple to implement. However, it is infeasible to obtain accurate results using MCS if a computationally demanding simulation is used. To cope with this, more advanced techniques have to be employed. SA using a cheap analytical replacement of the true function, or a metamodel, is one way to handle this. Metamodeling techniques such as radial basis functions
park1991universal, Kriging krige1951statistical; matheron1963principles; cressie2015statistics; sacks1989design; gandin1965objective, probabilistic collocation loeven2007probabilistic, and polynomial chaos expansion (PCE) knio2001stochastic; ghiocel2002stochastic; xiu2002wiener; ghanem2003stochastic can be employed for SA purposes. In the closely related field of structural reliability analysis, metamodeling techniques have been widely employed to accelerate the computation of failure probability das2000cumulative; papadrakakis2002reliability; kaymaz2005application; dubourg2011metamodel; bichon2011efficient; balesdent2013kriging. Compared to other metamodels, the Kriging metamodel is particularly useful for reliability analysis due to the direct availability of an error estimate that allows one to deploy an active learning strategy bichon2008efficient; echard2011ak; huang2016assessing (i.e., Kriging with adaptive sampling). Active learning works by adding samples so as to increase the accuracy of the metamodel near the region of interest, that is, the limit state. Besides Kriging, PCE has also been employed for structural reliability analysis choi2004structural; berveiller2006stochastic; hu2011adaptive; sudret2013response. Recently, a combination of PCE and Kriging within the framework of universal Kriging and active learning was developed to handle rare event estimation problems schobi2016rare. Metamodels in UQ and SA are required to be globally accurate for a precise estimation of sensitivity indices. Although not in the context of global SA, the use of Kriging models with adaptive sampling to progressively refine the global accuracy has been explored by some researchers within the context of UQ bilionis2012multi; shimoyama2013dynamic. Dwight and Han proposed an adaptive sampling technique for UQ that uses a criterion based on the product of the Kriging uncertainty and the probability density function of the random inputs dwight2009efficient, which is useful for cases with nonuniform distributions. Another alternative is to construct local surrogate models such as Dutch intrapolation rumpfkeil2011dynamic and multivariate interpolation and regression wang2010high; boopathy2014unified in order to provide the error measure. A recent survey of adaptive sampling for Kriging and other surrogate models for various applications can be found in Liu et al. liu2017survey. Besides Kriging, polynomial-based metamodels, especially PCE, are highly useful and competitive with the Kriging model when the quantities of interest are the statistical moments and sensitivity indices. The advantage of PCE for UQ and SA purposes is that the calculation of the Sobol sensitivity indices can be performed directly as a post-processing step sudret2008global; crestaux2009polynomial. This is in contrast to other metamodeling techniques, where MCS still needs to be performed on the analytical metamodel to obtain the Sobol indices. The approximation quality of the metamodel can be further enhanced by including gradient information. Some methods that have been reformulated to include gradient information are gradient-enhanced Kriging morris1993bayesian; liu2003development; dwight2009efficient; de2014improvements; de2015exploiting, compressive sensing-based PCE peng2016polynomial, and the sparse grid method de2015gradient. However, it is worth noting that obtaining gradient information is a tedious task and is not always possible for many applications. In this paper, we assume that gradient information is not available.
The PCE method is based on the homogeneous chaos expansion, which itself is based on the seminal work of Wiener wiener1938homogeneous. The earliest version of PCE computes the coefficients by using Galerkin minimization. This intrusive scheme requires modification of the governing equations and the simulation code in order to perform the UQ procedure ghanem2003stochastic. The difficulty with the intrusive scheme is that the derivation of the modified governing equations might be very complex and highly time-consuming. In this paper, the method of interest is non-intrusive PCE ghanem2003stochastic; ghiocel2002stochastic, which allows the use of legacy code since the simulation can be treated as a black box. Based on how the PCE coefficients are estimated, non-intrusive PCE can be classified into two categories:

Spectral projection, which estimates the coefficients by exploiting the orthogonality of the polynomials together with numerical quadrature knio2001stochastic; ghiocel2002stochastic.

Regression, which estimates the coefficients by using least-squares minimization choi2004polynomial; berveiller2006stochastic.
Note that the definition of regression-based PCE is different from that of regression-based SA. The former uses a regression technique to build the PCE metamodel, while the latter utilizes a linear approximation within the SA framework.
PCE has been developed and implemented as a tool for SA since the work of Sudret sudret2008global. Further simulations are not necessary because the Sobol indices in PCE are obtained in the post-processing phase sudret2008global; crestaux2009polynomial by exploiting the orthogonality of the polynomial bases. PCE is now a widely used approach in the field of UQ due to its strong mathematical basis eldred2009comparison. For regression-based PCE, sparse PCE based on least-angle regression is an efficient method to obtain the Sobol indices without the need to determine the polynomial bases a priori blatman2010efficient; blatman2011adaptive. Another alternative for sparse PCE is a compressive sensing-based technique doostan2011non; jakeman2015enhancing that employs orthogonal matching pursuit to identify the most influential polynomial bases. For spectral-projection-based PCE, the sparse grid interpolation technique smolyak1960interpolation; gerstner1998numerical; bungartz2004sparse; garcke2012sparse can be employed to reduce the number of collocation points in high dimensions for SA purposes smolyak1963quadrature; xiu2005high; constantine2012sparse; buzzard2012global; buzzard2011variance. Recent advances in this field include the use of PCE for multivariate SA garcia2014global. Our interest in this paper is to obtain Sobol sensitivity indices for global SA by using PCE.
In some applications, simulations with multiple levels of fidelity are available. Fidelity here is defined as the measure of how accurately the model approximates reality. The fidelity level is typically categorized into high and low fidelity, where the categorization depends on the case being investigated. The high-fidelity (HF) simulation is the most accurate but is typically more computationally demanding than the low-fidelity (LF) simulation. On the other hand, the LF simulation is less accurate, but many evaluations can be performed since it is cheaper to evaluate. For example, a PDE solver with a very fine mesh and one with a coarse mesh can be treated as the HF and LF simulation, respectively. An approach that utilizes simulations with multiple levels of accuracy is commonly called a multifidelity (MF) method. The common technique in applying MF simulations is to use LF simulations to capture the response trend and employ HF simulations to correct the response.
Multifidelity simulation is commonly used to aid and accelerate the optimization process. Co-Kriging kennedy2000predicting; forrester2007multi and space mapping bandler2004space; shah2015multi are two examples of surrogate-based methods that rely on MF simulations, mainly for optimization purposes. Multifidelity techniques for UQ have appeared in the literature recently. Among the first is the multilevel Monte Carlo (MLMC) method, which was first developed in the context of solving stochastic PDE problems giles2008multilevel; barth2011multi and was further developed to handle general black-box problems ng2014multifidelity. The Kriging method is also applicable for UQ purposes de2015uncertainty. Recently, a non-intrusive MF-PCE-based technique to solve UQ problems was developed. The method works by combining the LF and correction PCE into a single MF-PCE ng2012multifidelity. The coefficients of the LF and correction PCE can be obtained by employing spectral projection ng2012multifidelity or a regression-based technique palar2015decomposition; palar2016multi. Another similar technique is MF stochastic collocation, which relies on Lagrange polynomial interpolation narayan2014stochastic. Although the use of MF simulations is common in the optimization and, recently, the UQ literature, its application to SA is still scarce. One example of work in MF SA is the application of Co-Kriging for SA of a spherical tank under internal pressure le2014bayesian.
To the best of our knowledge, there is still no research that investigates the capability of MF-PCE for SA. Such work is important to further extend the capability of MF-PCE beyond standard UQ. Furthermore, it has the potential to reduce the computational cost of SA, which is desirable in applications that involve expensive simulations. In this paper, we investigate and demonstrate the usefulness and capabilities of MF-PCE for performing SA when simulations with different levels of fidelity are available.
The rest of this paper is structured as follows: The explanation of the general framework of SA and the methodology of MFPCE are given in Section 2. The numerical studies on artificial and realworld test problems we undertook are discussed in Section 3. Finally, conclusions are drawn and future work is discussed in Section 4.
2 Methodology
2.1 Global SA
2.1.1 Probabilistic formulation of the problem
We first consider a physical model with $n$ input parameters, where the relationship between the input vector $\boldsymbol{x}$ and the scalar output $y$ is defined by

$y = \mathcal{M}(\boldsymbol{x})$,   (1)

where $y \in \mathbb{R}$ and $\boldsymbol{x} = \{x_1, x_2, \ldots, x_n\}^{T}$ is the vector of the input variables. Because our interest in this paper is in UQ, the input vector is modeled as a vector of random variables $\boldsymbol{X}$ with joint probability density function (PDF) $p(\boldsymbol{x})$. Due to the assumption of independent components of $\boldsymbol{X}$, one gets $p(\boldsymbol{x}) = \prod_{i=1}^{n} p_i(x_i)$, where $p_i$ is the marginal PDF of $x_i$. The random output is then defined by

$Y = \mathcal{M}(\boldsymbol{X})$.   (2)
2.1.2 Sobol decomposition
The model response can be decomposed into summands of its main effects and interactions (called the Sobol decomposition sobol1990sensitivity), which reads

$\mathcal{M}(\boldsymbol{x}) = \mathcal{M}_0 + \sum_{i=1}^{n} \mathcal{M}_i(x_i) + \sum_{1 \le i < j \le n} \mathcal{M}_{ij}(x_i, x_j) + \ldots + \mathcal{M}_{12 \ldots n}(x_1, \ldots, x_n)$,   (3)

where $\mathcal{M}_0$ is a constant and is the mean value of the function, defined by

$\mathcal{M}_0 = \int_{\Omega} \mathcal{M}(\boldsymbol{x}) \, p(\boldsymbol{x}) \, \mathrm{d}\boldsymbol{x}$,   (4)

where $\Omega$ is the support of the PDF. A property of the Sobol decomposition is that its summands satisfy

$\int \mathcal{M}_{i_1 \ldots i_s}(x_{i_1}, \ldots, x_{i_s}) \, p_{i_k}(x_{i_k}) \, \mathrm{d}x_{i_k} = 0 \quad \text{for } 1 \le k \le s$,   (5)

where the integration is performed over the support of the marginal PDF. As a consequence of Eq. 5, the summands except $\mathcal{M}_0$ are mutually orthogonal in the following sense:

$\int_{\Omega} \mathcal{M}_{i_1 \ldots i_s} \, \mathcal{M}_{j_1 \ldots j_t} \, p(\boldsymbol{x}) \, \mathrm{d}\boldsymbol{x} = 0 \quad \text{for } (i_1, \ldots, i_s) \ne (j_1, \ldots, j_t)$.   (6)
Based on this decomposition, we can then derive the formulas for the sensitivity indices.
2.1.3 Sensitivity indices
As a consequence of the orthogonality property shown in Eq. 6, the total variance $D$ of the model response can then be decomposed as

$D = \sum_{i=1}^{n} D_i + \sum_{1 \le i < j \le n} D_{ij} + \ldots + D_{12 \ldots n}$,   (7)

where the $D_{i_1 \ldots i_s}$ are the partial variances, defined as

$D_{i_1 \ldots i_s} = \int \mathcal{M}^{2}_{i_1 \ldots i_s}(x_{i_1}, \ldots, x_{i_s}) \, p_{i_1}(x_{i_1}) \cdots p_{i_s}(x_{i_s}) \, \mathrm{d}x_{i_1} \cdots \mathrm{d}x_{i_s}$.   (8)
We can then define the Sobol indices

$S_{i_1 \ldots i_s} = \dfrac{D_{i_1 \ldots i_s}}{D}$,   (9)

where they satisfy the following relationship:

$\sum_{i=1}^{n} S_i + \sum_{1 \le i < j \le n} S_{ij} + \ldots + S_{12 \ldots n} = 1$.   (10)
This means that each index is a measure of sensitivity describing the portion of the total variance due to the uncertainties in the set of input parameters $\{x_{i_1}, \ldots, x_{i_s}\}$. With this formula, one can determine which variables are the most and least influential on the output. Furthermore, one can also investigate the interactions between variables.
It is sometimes necessary to rank the total contribution of each variable to the output. For this purpose, the total effect of an input parameter, denoted by the total sensitivity index homma1996importance, can be used. The total sensitivity indices are defined by

$S_i^{T} = \sum_{(i_1, \ldots, i_s) \ni i} S_{i_1 \ldots i_s}$.   (11)

The total sensitivity index measures the total contribution of a variable to the output variance, including all variance contributions caused by its interactions with any other input variable. For example, if we have $n = 3$, the total sensitivity index of $x_1$ is

$S_1^{T} = S_1 + S_{12} + S_{13} + S_{123}$.   (12)
To compute the Sobol indices for a given problem, the most standard method is MCS, which offers simplicity and ease of implementation. However, this may come at a high computational cost, because MCS typically needs more than a thousand simulations to obtain accurate values of the Sobol indices xiu2003modeling. This is clearly infeasible if each deterministic simulation is time-demanding. To handle this, PCE can be used as a metamodel to accelerate the computation of the Sobol indices. More specifically, the MF scheme of non-intrusive PCE is considered as a tool to accelerate SA in this paper.
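For illustration, a pick-freeze Monte Carlo estimator of the first-order indices can be sketched as follows; this is a minimal sketch of a Saltelli-type estimator, assuming independent uniform inputs on [0, 1], and the function name and test model are ours, not from the original study:

```python
import numpy as np

def sobol_first_order_mc(model, n_dim, n_samples, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices.

    `model` maps an (N, n_dim) array of inputs to an (N,) array of
    outputs; inputs are assumed i.i.d. uniform on [0, 1].
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_dim))   # two independent sample matrices
    B = rng.random((n_samples, n_dim))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(n_dim)
    for i in range(n_dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # resample only the i-th column
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# Additive test model y = x1 + 2*x2: analytically S1 = 0.2, S2 = 0.8
S = sobol_first_order_mc(lambda x: x[:, 0] + 2.0 * x[:, 1], 2, 200_000)
print(S)
```

Even for this two-dimensional toy model, hundreds of thousands of evaluations are needed for two-digit accuracy, which is what motivates the metamodel-based approach.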
2.2 Non-Intrusive PCE
2.2.1 PCE
PCE approximates the relationship between the stochastic response output and the random inputs with the following expansion:

$Y = \sum_{\boldsymbol{i}} \alpha_{\boldsymbol{i}} \Psi_{\boldsymbol{i}}(\boldsymbol{X})$,   (13)

where the $\alpha_{\boldsymbol{i}}$ and $\Psi_{\boldsymbol{i}}$ are the PCE coefficients and the multivariate polynomials constructed as products of one-dimensional orthogonal polynomials, respectively. Consider a multi-index $\boldsymbol{i} = (i_1, \ldots, i_n)$ and an index set $\mathcal{A}$; the expansion has to be truncated for practical purposes, so the expression becomes

$Y \approx \sum_{\boldsymbol{i} \in \mathcal{A}} \alpha_{\boldsymbol{i}} \Psi_{\boldsymbol{i}}(\boldsymbol{X})$.   (14)
The orthogonal polynomial sequence is a family of polynomials such that any two polynomials in the sequence are orthogonal to each other in the following sense:

$\langle \Psi_{\boldsymbol{i}}, \Psi_{\boldsymbol{j}} \rangle = \int_{\Omega} \Psi_{\boldsymbol{i}}(\boldsymbol{x}) \, \Psi_{\boldsymbol{j}}(\boldsymbol{x}) \, p(\boldsymbol{x}) \, \mathrm{d}\boldsymbol{x} = \langle \Psi_{\boldsymbol{i}}^{2} \rangle \, \delta_{\boldsymbol{i}\boldsymbol{j}}$,   (15)

where $\delta_{\boldsymbol{i}\boldsymbol{j}} = 1$ if $\boldsymbol{i} = \boldsymbol{j}$ and $0$ otherwise. The type of polynomial used depends on the given probability distribution. Table
1 shows the Askey scheme of some continuous hypergeometric polynomials corresponding to their probability density functions askey1985some; xiu2002wiener.

Distribution type  Polynomial chaos  Support

Gaussian  Hermite chaos  $(-\infty, \infty)$
Gamma  Laguerre chaos  $[0, \infty)$
Beta  Jacobi chaos  $[a, b]$
Uniform  Legendre chaos  $[a, b]$
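The orthogonality relation in Eq. 15 can be checked numerically; the sketch below (our own illustration, not part of the original study) verifies it for Legendre polynomials under the uniform PDF on [-1, 1] using Gauss-Legendre quadrature:

```python
import numpy as np
from numpy.polynomial import legendre

# Gauss-Legendre nodes/weights integrate exactly on [-1, 1]; dividing by 2
# turns the integral into an expectation against the uniform PDF p(x) = 1/2.
x, w = legendre.leggauss(10)

def inner(i, j):
    """<P_i, P_j> under U(-1, 1)."""
    ci = np.zeros(i + 1); ci[i] = 1.0
    cj = np.zeros(j + 1); cj[j] = 1.0
    return np.sum(w * legendre.legval(x, ci) * legendre.legval(x, cj)) / 2.0

print(inner(2, 3))   # ~ 0: polynomials of distinct degree are orthogonal
print(inner(2, 2))   # ~ 1/5: <P_i^2> = 1/(2i + 1) for Legendre under U(-1, 1)
```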
A tensor product expansion that includes all combinations of one-dimensional polynomial bases can be utilized in order to extend PCE to multivariable approximation. The polynomial bases can then be expanded by using the tensor product operator of order $p$ via

$\Psi_{\boldsymbol{i}}(\boldsymbol{x}) = \psi_{i_1}(x_1) \otimes \psi_{i_2}(x_2) \otimes \cdots \otimes \psi_{i_n}(x_n), \quad 0 \le i_k \le p_k$.   (16)

The total number of polynomial terms for the tensor product expansion is calculated by

$P = \prod_{k=1}^{n} (p_k + 1)$,   (17)

where $p_k$ is the polynomial order in the $k$th dimension.
A total-order expansion can be used as an alternative to the tensor product operator eldred2009comparison. For the total-order expansion of order $p$, the index set is defined by

$\mathcal{A} = \{\boldsymbol{i} : \|\boldsymbol{i}\|_1 \le p\}$,   (18)

where $\|\boldsymbol{i}\|_1 = \sum_{k=1}^{n} i_k$. The total number of polynomial terms for the total-order expansion is computed by

$P = \dfrac{(n + p)!}{n! \, p!}$.   (19)
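The difference in growth between the two truncation schemes (Eqs. 17 and 19) is easy to tabulate; the helper names below are ours:

```python
from math import comb, prod

def n_terms_tensor(orders):
    """Tensor-product expansion: prod_k (p_k + 1) terms."""
    return prod(p + 1 for p in orders)

def n_terms_total_order(n_dim, p):
    """Total-order expansion: (n + p)! / (n! p!) terms."""
    return comb(n_dim + p, p)

# Order-3 expansion in 8 dimensions (the dimension of the borehole problem):
print(n_terms_tensor([3] * 8))       # 4**8 = 65536 terms
print(n_terms_total_order(8, 3))     # C(11, 3) = 165 terms
```

The total-order basis therefore stays tractable in dimensions where the tensor product basis explodes.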
After the PCE has been built, the next task is to compute the coefficients.
2.2.2 Spectral-projection-based PCE
In this paper, we focus on the spectral projection method, which calculates the coefficients based on the orthogonality of the bases. The coefficients are computed via

$\alpha_{\boldsymbol{i}} = \dfrac{\langle Y, \Psi_{\boldsymbol{i}} \rangle}{\langle \Psi_{\boldsymbol{i}}^{2} \rangle} = \dfrac{1}{\langle \Psi_{\boldsymbol{i}}^{2} \rangle} \int_{\Omega} \mathcal{M}(\boldsymbol{x}) \, \Psi_{\boldsymbol{i}}(\boldsymbol{x}) \, p(\boldsymbol{x}) \, \mathrm{d}\boldsymbol{x}$.   (20)

The main part of the work is the calculation of the multidimensional integral in the numerator. For this purpose, tensor-product quadrature can be efficiently employed to perform the integration in low-dimensional problems. However, tensor-product quadrature may produce a prohibitively large number of collocation points in high-dimensional problems. Sparse grid quadrature is an important alternative to tensor-product quadrature for performing this integration. In this paper, we use sparse grid integration to compute the numerator in all examples. By using a sparse grid rule, the number of collocation points is reduced compared to the tensor-product rule, thus making it feasible to use spectral-projection-based PCE in high dimensions. Moreover, if a sparse grid is employed, it is necessary to use the polynomial bases built from the sum of tensor product expansions to avoid numerical noise in the coefficients of higher-order terms constantine2012sparse; we used this approach in all examples.
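In one dimension, the spectral projection of Eq. 20 reduces to a single Gauss quadrature per coefficient. The sketch below projects a toy model onto the Legendre basis under U(-1, 1); the toy model and helper names are our own illustration:

```python
import numpy as np
from numpy.polynomial import legendre

x, w = legendre.leggauss(8)       # exact for polynomial integrands up to degree 15
f = lambda t: t**3 + t            # toy "model": f = 1.6*P1 + 0.4*P3 exactly

def pce_coeff(i):
    """alpha_i = <f, Psi_i> / <Psi_i^2>, with Psi_i the Legendre polynomial P_i."""
    c = np.zeros(i + 1); c[i] = 1.0
    Pi = legendre.legval(x, c)
    num = np.sum(w * f(x) * Pi) / 2.0      # expectation under the uniform PDF
    den = np.sum(w * Pi * Pi) / 2.0        # <Psi_i^2>
    return num / den

coeffs = [pce_coeff(i) for i in range(5)]
print(np.round(coeffs, 12))   # [0, 1.6, 0, 0.4, 0]
```

Only the odd-degree coefficients survive because the toy model is odd; the recovered expansion matches the exact one since the quadrature is exact for these integrands.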
We define the isotropic sparse grid quadrature at level $l$, where $l \ge 0$ smolyak1960interpolation; bungartz2004sparse; garcke2012sparse; griebel1990combination, by

$\mathcal{A}(l, n) = \sum_{l+1 \le |\boldsymbol{i}| \le l+n} (-1)^{\,l+n-|\boldsymbol{i}|} \binom{n-1}{l+n-|\boldsymbol{i}|} \left( \mathcal{U}^{i_1} \otimes \cdots \otimes \mathcal{U}^{i_n} \right)$,   (21)

where the $\mathcal{U}^{i_k}$ are the one-dimensional quadrature formulas entering the sparse grid equation. The form of these formulas depends on the type of polynomial used in the PCE. For example, Gauss-Legendre and Gauss-Hermite quadrature are used when Legendre and Hermite polynomials are employed in the PCE approximation, respectively.
Another way to express the sparse grid is in terms of difference formulas gerstner1998numerical:

$\mathcal{A}(l, n) = \sum_{|\boldsymbol{i}| \le l+n} \left( \Delta^{i_1} \otimes \cdots \otimes \Delta^{i_n} \right)$,   (22)

where $\Delta^{k} = \mathcal{U}^{k} - \mathcal{U}^{k-1}$ and $\mathcal{U}^{0} = 0$, with $|\boldsymbol{i}| = i_1 + \cdots + i_n$.
2.2.3 Multifidelity extension
PCE can be extended into an MF version by employing a combination of the LF and correction PCE ng2012multifidelity. The correction PCE, which serves as a corrector for the LF-PCE, is built using the difference between the LF and HF functions at selected sample locations. The combination can be in additive or multiplicative form ng2012multifidelity. In this work, we only use the additive form, defined by

$\hat{f}(\boldsymbol{x}) = \hat{f}_{\mathrm{LF}}(\boldsymbol{x}) + \hat{C}(\boldsymbol{x})$,   (23)

so

$f_{\mathrm{HF}}(\boldsymbol{x}) \approx f_{\mathrm{LF}}(\boldsymbol{x}) + C(\boldsymbol{x})$,   (24)

where $C(\boldsymbol{x}) = f_{\mathrm{HF}}(\boldsymbol{x}) - f_{\mathrm{LF}}(\boldsymbol{x})$ is the difference between the LF ($f_{\mathrm{LF}}$) and HF function ($f_{\mathrm{HF}}$), which serves as a correction term.
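The additive correction of Eqs. 23-24 can be illustrated with a one-dimensional toy pair of models; here a low-order polynomial fit stands in for the correction PCE, and all function names and the toy models are our own illustration:

```python
import numpy as np

f_hf = lambda x: np.sin(2.0 * x) + 0.3 * x   # "expensive" model
f_lf = lambda x: np.sin(2.0 * x)             # captures the trend, misses the linear term

# Fit the correction C(x) = f_HF(x) - f_LF(x) on a handful of HF samples.
x_hf = np.linspace(0.0, 1.0, 4)
corr = np.polynomial.Polynomial.fit(x_hf, f_hf(x_hf) - f_lf(x_hf), deg=1)

f_mf = lambda x: f_lf(x) + corr(x)           # additive multifidelity surrogate

x_test = np.linspace(0.0, 1.0, 50)
err = np.max(np.abs(f_mf(x_test) - f_hf(x_test)))
print(err)   # ~ 0: the correction here is exactly linear
```

Because the discrepancy between the two toy models is far simpler than either model, a very cheap correction recovers the HF response; this is the situation in which the MF approach pays off.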
To integrate the sparse grid with PCE, we first define $\hat{f}^{\,l}$ as the PCE of $f$ at sparse grid level $l$ with dimension $n$. Within the MF framework, the HF-PCE can be constructed by

$\hat{f}_{\mathrm{MF}}^{\,l+s} = \hat{f}_{\mathrm{LF}}^{\,l+s} + \hat{C}^{\,l}$,   (25)

where $\hat{f}_{\mathrm{LF}}^{\,l+s}$ and $\hat{C}^{\,l}$ are the LF and correction PCE, respectively. Here, $s$ is the sparse grid level offset between the LF and correction PCE.
If $\alpha_{\boldsymbol{i}}$ and $\gamma_{\boldsymbol{i}}$ are the coefficients of the LF and correction PCE, respectively, and $\mathcal{A}(l, n)$ is the set of multi-indices of the $n$-dimensional PCE on sparse grid level $l$, we can then formulate the following:

$\hat{f}_{\mathrm{LF}}^{\,l+s}(\boldsymbol{x}) = \sum_{\boldsymbol{i} \in \mathcal{A}(l+s,\,n)} \alpha_{\boldsymbol{i}} \Psi_{\boldsymbol{i}}(\boldsymbol{x})$,   (26)

$\hat{C}^{\,l}(\boldsymbol{x}) = \sum_{\boldsymbol{i} \in \mathcal{A}(l,\,n)} \gamma_{\boldsymbol{i}} \Psi_{\boldsymbol{i}}(\boldsymbol{x})$.   (27)

The MF polynomial expansion can then be expressed as

$\hat{f}_{\mathrm{MF}}^{\,l+s}(\boldsymbol{x}) = \sum_{\boldsymbol{i} \in \mathcal{A}(l+s,\,n)} (\alpha_{\boldsymbol{i}} + \gamma_{\boldsymbol{i}}) \Psi_{\boldsymbol{i}}(\boldsymbol{x})$,   (28)

where the $\Psi_{\boldsymbol{i}}$ are the common bases of the LF and correction expansions and $\gamma_{\boldsymbol{i}} = 0$ for $\boldsymbol{i} \notin \mathcal{A}(l, n)$.
After the PCE has been built and the coefficients have been computed, the statistical moments can be computed directly from the PCE coefficients. The sparse grid growth rule used in this paper is chosen such that the grids are nested. Figure 1 shows examples of the sparse grids with various levels and offsets for MF-PCE purposes. It can be seen in this figure that the HF samples (lower-level sparse grid) are a subset of the LF samples (higher-level sparse grid), thus making it possible to correct the LF values at the intersecting samples.
2.2.4 Calculation of statistical moments
The mean of the standard single HF-PCE is simply the first PCE coefficient:

$\mu = \mathbb{E}[Y] = \alpha_{\boldsymbol{0}}$.   (29)

The variance can be obtained as follows:

$\sigma^{2} = \mathrm{Var}[Y] = \sum_{\boldsymbol{i} \in \mathcal{A} \setminus \{\boldsymbol{0}\}} \alpha_{\boldsymbol{i}}^{2} \langle \Psi_{\boldsymbol{i}}^{2} \rangle$,   (30)

where the sum runs over the $P - 1$ non-constant terms, with $P$ the size of the polynomial basis.
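In code, Eqs. 29-30 amount to reading off the first coefficient and forming a weighted sum of squares. The sketch below uses a one-dimensional Legendre expansion whose moments are known analytically; all names and the toy expansion are our own illustration:

```python
import numpy as np

def pce_moments(coeffs, norms):
    """Mean and variance from PCE coefficients; norms[i] holds <Psi_i^2>."""
    coeffs, norms = np.asarray(coeffs, float), np.asarray(norms, float)
    mean = coeffs[0]                              # Eq. 29
    var = np.sum(coeffs[1:] ** 2 * norms[1:])     # Eq. 30
    return mean, var

# f(x) = x^3 + x + 2 under U(-1, 1) expands as 2*P0 + 1.6*P1 + 0.4*P3,
# with <P_i^2> = 1/(2i + 1) for Legendre polynomials.
coeffs = [2.0, 1.6, 0.0, 0.4]
norms = [1.0 / (2 * i + 1) for i in range(4)]
mean, var = pce_moments(coeffs, norms)
print(mean, var)   # 2.0 and 92/105 ~ 0.87619
```

The variance matches the exact value E[(x^3 + x)^2] = 1/7 + 2/5 + 1/3 = 92/105, so no further sampling of the metamodel is needed.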
For the MF-PCE expression shown in Eq. 28, the moments are calculated via the following formulations:

$\mu_{\mathrm{MF}} = \alpha_{\boldsymbol{0}} + \gamma_{\boldsymbol{0}}$,   (31)

$\sigma^{2}_{\mathrm{MF}} = \sum_{\boldsymbol{i} \in \mathcal{A}(l+s,\,n) \setminus \{\boldsymbol{0}\}} (\alpha_{\boldsymbol{i}} + \gamma_{\boldsymbol{i}})^{2} \langle \Psi_{\boldsymbol{i}}^{2} \rangle$.   (32)
Furthermore, as the core of this work, the PCE coefficients are used in the computation of the PCE-based Sobol indices.
2.2.5 PCE-based Sobol indices
Obtaining the PCE-based Sobol indices is straightforward when the PCE coefficients for a given expansion have already been obtained sudret2008global. This is done by first deriving the Sobol decomposition of the PCE approximation of $\mathcal{M}$. First, we define $\mathcal{I}_{i_1 \ldots i_s}$ as the set of multi-indices whose components are nonzero exactly at the positions $(i_1, \ldots, i_s)$:

$\mathcal{I}_{i_1 \ldots i_s} = \{\boldsymbol{i} : i_k > 0 \text{ for } k \in (i_1, \ldots, i_s) \text{ and } i_k = 0 \text{ for } k \notin (i_1, \ldots, i_s)\}$.   (33)
By gathering the polynomials according to the parameters they depend on, the Sobol decomposition of the PCE can then be written as

$\hat{f}(\boldsymbol{x}) = \alpha_{\boldsymbol{0}} + \sum_{i=1}^{n} \sum_{\boldsymbol{i} \in \mathcal{I}_i} \alpha_{\boldsymbol{i}} \Psi_{\boldsymbol{i}}(x_i) + \sum_{1 \le i < j \le n} \sum_{\boldsymbol{i} \in \mathcal{I}_{ij}} \alpha_{\boldsymbol{i}} \Psi_{\boldsymbol{i}}(x_i, x_j) + \ldots + \sum_{\boldsymbol{i} \in \mathcal{I}_{12 \ldots n}} \alpha_{\boldsymbol{i}} \Psi_{\boldsymbol{i}}(x_1, \ldots, x_n)$.   (34)
Based on the above decomposition, the PCE-based Sobol indices can then be simply defined as

$S_{i_1 \ldots i_s} = \dfrac{1}{\sigma^{2}} \sum_{\boldsymbol{i} \in \mathcal{I}_{i_1 \ldots i_s}} \alpha_{\boldsymbol{i}}^{2} \langle \Psi_{\boldsymbol{i}}^{2} \rangle$.   (35)
It is clear from the above equation that the PCE-based Sobol indices can be obtained directly once the PCE coefficients are available. Using a similar procedure, we can easily derive the expression for the PCE-based total sensitivity indices, which reads

$S_i^{T} = \sum_{(i_1, \ldots, i_s) \ni i} S_{i_1 \ldots i_s}$,   (36)

which can be simplified into

$S_i^{T} = \dfrac{1}{\sigma^{2}} \sum_{\boldsymbol{i} \in \mathcal{I}_i^{T}} \alpha_{\boldsymbol{i}}^{2} \langle \Psi_{\boldsymbol{i}}^{2} \rangle$,   (37)

where $\mathcal{I}_i^{T}$ is the set of all multi-indices with a nonzero $i$th component, formally defined as

$\mathcal{I}_i^{T} = \{\boldsymbol{i} = (i_1, \ldots, i_n) : i_i > 0\}$.   (38)
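Eqs. 35 and 37 translate directly into a loop over the multi-index set; the sketch below computes first-order and total indices for a small two-variable Legendre expansion (the function name and toy expansion are our own illustration):

```python
import numpy as np

def pce_sobol(multi_indices, coeffs, norms):
    """First-order and total Sobol indices from PCE coefficients.

    multi_indices[k] is the tuple of per-variable degrees of basis k;
    index 0 is the constant term, and norms[k] holds <Psi_k^2>.
    """
    part = np.asarray(coeffs, float) ** 2 * np.asarray(norms, float)
    D = part[1:].sum()                            # total variance, Eq. 30
    n = len(multi_indices[0])
    S, ST = np.zeros(n), np.zeros(n)
    for k, mi in enumerate(multi_indices[1:], start=1):
        active = [i for i, d in enumerate(mi) if d > 0]
        if len(active) == 1:                      # depends on one variable only: Eq. 35
            S[active[0]] += part[k] / D
        for i in active:                          # nonzero i-th component: Eq. 37
            ST[i] += part[k] / D
    return S, ST

# f(x1, x2) = P1(x1) + P1(x2) + P1(x1)*P1(x2) under U(-1, 1)^2:
mi = [(0, 0), (1, 0), (0, 1), (1, 1)]
coeffs = [0.0, 1.0, 1.0, 1.0]
norms = [1.0, 1.0 / 3.0, 1.0 / 3.0, 1.0 / 9.0]
S, ST = pce_sobol(mi, coeffs, norms)
print(S, ST)   # S = [3/7, 3/7], ST = [4/7, 4/7]
```

The gap between S and ST comes entirely from the interaction basis (1, 1), mirroring the discussion of Eqs. 11-12.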
In MF-PCE, the polynomial terms and coefficients involved in the Sobol indices computation are built from the LF and correction expansions shown in Eq. 28. The process starts by building the sparse grid and the corresponding polynomial bases for the LF-PCE, followed by the correction PCE. After the two PCEs are combined into a single MF-PCE, post-processing can be performed directly to calculate the statistical moments and the Sobol indices.
3 Results
The capability of MF-PCE is studied on algebraic cases where "high-" and "low-"fidelity models are available. In the algebraic cases considered, the terms "high-" and "low-"fidelity do not mean that the models are expensive and cheap in the real sense. The LF model is artificially constructed so that it represents the HF model but with a discrepancy, which is measured by the correlation value $r$ and the mean absolute relative error (MARE) between the LF and HF functions. These two statistical measures can be used to perform a preliminary analysis of whether the LF function is similar to the HF one or not toal2015some. The correlation $r$ and the MARE are computed as
$r = \dfrac{\sum_{i=1}^{N} (y_{\mathrm{HF}}^{(i)} - \bar{y}_{\mathrm{HF}})(y_{\mathrm{LF}}^{(i)} - \bar{y}_{\mathrm{LF}})}{\sqrt{\sum_{i=1}^{N} (y_{\mathrm{HF}}^{(i)} - \bar{y}_{\mathrm{HF}})^{2}} \, \sqrt{\sum_{i=1}^{N} (y_{\mathrm{LF}}^{(i)} - \bar{y}_{\mathrm{LF}})^{2}}}$   (39)

and

$\mathrm{MARE} = \dfrac{1}{N} \sum_{i=1}^{N} \left| \dfrac{y_{\mathrm{HF}}^{(i)} - y_{\mathrm{LF}}^{(i)}}{y_{\mathrm{HF}}^{(i)}} \right|$,   (40)

respectively, where $\{y_{\mathrm{HF}}^{(i)}\}$ and $\{y_{\mathrm{LF}}^{(i)}\}$ are sets of observations of the HF and LF data for identical inputs, with the bar ($\bar{\,\cdot\,}$) denoting the mean of these sets.
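Both discrepancy measures are a few lines of NumPy; the function name below is ours:

```python
import numpy as np

def r_and_mare(y_hf, y_lf):
    """Pearson correlation r (Eq. 39) and MARE (Eq. 40) between matched
    HF and LF observations."""
    y_hf, y_lf = np.asarray(y_hf, float), np.asarray(y_lf, float)
    dh, dl = y_hf - y_hf.mean(), y_lf - y_lf.mean()
    r = np.sum(dh * dl) / np.sqrt(np.sum(dh**2) * np.sum(dl**2))
    mare = np.mean(np.abs((y_hf - y_lf) / y_hf))
    return r, mare

# An LF model that is a constant offset of the HF one: r = 1 exactly,
# yet the MARE is far from zero -- the two measures capture different things.
y_hf = np.array([1.0, 2.0, 4.0, 5.0])
r, mare = r_and_mare(y_hf, y_hf + 1.0)
print(r, mare)   # 1.0, 0.4875
```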
Analysis of the decay of the PCE coefficients is a proper way to determine whether the LF function can improve the capability of MF-PCE in approximating the HF function ng2012multifidelity. The LF function can assist MF-PCE if the decay of the correction PCE coefficients is faster than that of the HF ones ng2012multifidelity, which indicates that the correction function is less complex than the HF function. Basically, this means that, for the same sparse grid level, the combination of correction and LF-PCE captures more relevant information about the HF function than the HF-PCE does. As an example, consider a case where the HF and LF functions are perfectly correlated, with $r$ equal to one but with an offset in magnitude. The correction function is then just a constant, which can be approximated by a zeroth-order polynomial. Assuming that the LF function is much cheaper than the HF function, so that we can construct a high-level sparse grid for the LF-PCE, one can build a highly accurate MF-PCE by combining this LF-PCE with the constant correction function. In this paper, we plot the coefficients of the correction, LF-, and HF-PCE to perform this analysis. To analyze the decay of the PCE coefficients, the absolute values of the coefficients are first sorted and then plotted in a single plot.
We measure the errors of the total and all Sobol sensitivity indices, expressed as

$e_{T} = \dfrac{1}{n} \sum_{i=1}^{n} \left| S_i^{T} - S_{i,\mathrm{ref}}^{T} \right|$   (41)

and

$e_{S} = \dfrac{1}{N_S} \sum \left| S_{i_1 \ldots i_s} - S_{i_1 \ldots i_s,\mathrm{ref}} \right|$,   (42)

respectively, where $N_S$ is the number of Sobol indices for a given dimension. Here, $S_{\mathrm{ref}}$ and $S_{\mathrm{ref}}^{T}$ are the reference values for the single and total Sobol indices, respectively. In our implementation, the values of $S_{\mathrm{ref}}$ and $S_{\mathrm{ref}}^{T}$ are determined by analytical calculation (such as for the Ishigami function) or by PCE with a sparse grid of sufficiently high level.
Moreover, in order to assess the quality of the PCE approximation, the $r$ and MARE of the prediction relative to the true function are used, defined by

$r = \dfrac{\sum_{i=1}^{N} (y^{(i)} - \bar{y})(\hat{y}^{(i)} - \bar{\hat{y}})}{\sqrt{\sum_{i=1}^{N} (y^{(i)} - \bar{y})^{2}} \, \sqrt{\sum_{i=1}^{N} (\hat{y}^{(i)} - \bar{\hat{y}})^{2}}}$   (43)

and

$\mathrm{MARE} = \dfrac{1}{N} \sum_{i=1}^{N} \left| \dfrac{y^{(i)} - \hat{y}^{(i)}}{y^{(i)}} \right|$,   (44)

respectively, where $\hat{y}^{(i)}$ is the output of the PCE prediction. Notice that $r$ and MARE without any subscript denote the prediction error, and not the discrepancy between the LF and HF function.
In the following examples, the sparse grid level for HF-PCE is written as "SG" followed by the level, while for MF-PCE the levels of both the LF and correction PCE are given (e.g., the "SG45" scheme used later). The convergence of the errors is plotted with the number of function evaluations on the horizontal axis. The count of function evaluations does not distinguish between HF and LF evaluations when HF- and LF-PCE are considered, since it does not make sense to do so for algebraic problems. However, when calculating the cost of MF-PCE on an algebraic function, the cost of the LF function is not counted, to make a fair comparison between HF- and MF-PCE from the algorithmic point of view. Furthermore, we also compare the performance of HF- and MF-PCE in estimating the Sobol indices when a hypothetical cost of the LF simulation is taken into account. This hypothetical cost is expressed as the ratio between the LF and HF evaluation costs (denoted RT). The purpose is to evaluate the capability and usefulness of MF-PCE in accelerating the computation of Sobol indices in a real-world setting, where the cost of the LF simulation is usually not negligible. Here, we use four values for the hypothetical cost ratio RT. The total simulation cost of MF-PCE is then the sum of the HF and LF evaluation costs, in units equivalent to one HF function evaluation.
3.1 Borehole function
The first example is the borehole function, an eight-dimensional function that models water flow through a borehole morris1993bayesian. Mathematically, it is expressed as

$f_{\mathrm{HF}}(\boldsymbol{x}) = \dfrac{2 \pi T_u (H_u - H_l)}{\ln(r / r_w) \left( 1 + \dfrac{2 L T_u}{\ln(r / r_w) \, r_w^{2} K_w} + \dfrac{T_u}{T_l} \right)}$,   (45)
where the various parameters are as defined in Table 2.
The LF borehole function xiong2013sequential is expressed as

$f_{\mathrm{LF}}(\boldsymbol{x}) = \dfrac{5 T_u (H_u - H_l)}{\ln(r / r_w) \left( 1.5 + \dfrac{2 L T_u}{\ln(r / r_w) \, r_w^{2} K_w} + \dfrac{T_u}{T_l} \right)}$.   (46)
For SA purposes, the random variables are all uniformly distributed as shown in Table 2.
Random variable  Definition  Probability distribution

$r_w$  radius of borehole (m)  Uniform [0.05, 0.15]
$r$  radius of influence (m)  Uniform [100, 50000]
$T_u$  transmissivity of upper aquifer (m²/yr)  Uniform [63700, 115600]
$H_u$  potentiometric head of upper aquifer (m)  Uniform [990, 1100]
$T_l$  transmissivity of lower aquifer (m²/yr)  Uniform [63.1, 116]
$H_l$  potentiometric head of lower aquifer (m)  Uniform [700, 820]
$L$  length of borehole (m)  Uniform [1120, 1680]
$K_w$  hydraulic conductivity of borehole (m/yr)  Uniform [9855, 12045]
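For reference, both borehole models (Eqs. 45-46, the latter following xiong2013sequential) can be implemented directly; this is a sketch with our own function names:

```python
import numpy as np

def borehole_hf(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """High-fidelity borehole water-flow model (Eq. 45)."""
    lr = np.log(r / rw)
    denom = lr * (1.0 + 2.0 * L * Tu / (lr * rw**2 * Kw) + Tu / Tl)
    return 2.0 * np.pi * Tu * (Hu - Hl) / denom

def borehole_lf(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """Low-fidelity variant (Eq. 46): 2*pi -> 5 and 1 -> 1.5 in the denominator."""
    lr = np.log(r / rw)
    denom = lr * (1.5 + 2.0 * L * Tu / (lr * rw**2 * Kw) + Tu / Tl)
    return 5.0 * Tu * (Hu - Hl) / denom

# Evaluate both at the midpoints of the ranges in Table 2:
mid = (0.10, 25050.0, 89650.0, 1045.0, 89.55, 760.0, 1400.0, 10950.0)
hf, lf = borehole_hf(*mid), borehole_lf(*mid)
print(hf, lf)   # LF tracks the HF trend but underestimates the flow
```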
The LF borehole function exhibits an almost perfect correlation with the HF one, which means that the response trend of the LF function is similar to that of the HF function, but with an offset in the response values (MARE = 0.204). To obtain monotonic convergence of the Sobol indices errors, the reference values for the error calculations are obtained using a level-5 HF sparse grid; the first-order and total sensitivity indices computed using this high-level sparse grid are shown in Table 3 (note that analytical Sobol indices are not available for the borehole function). Since the total Sobol indices are not equal to the first-order ones, the borehole function exhibits interactions between the variables that need to be captured by a sparse grid level higher than one. The errors of the Sobol sensitivity indices are then computed using these reference values. For this test function, the sparse grid level offset is set to 1.
Random variable  Total  First order

$r_w$  0.8668  0.8289
$r$  0  0
$T_u$  0  0
$H_u$  0.0541  0.0414
$T_l$  0  0
$H_l$  0.0541  0.0414
$L$  0.0521  0.0393
$K_w$  0.0127  0.0095
The errors of the Sobol indices obtained from the various PCE schemes are shown in Fig. 2. It can be seen in Figs. 2(a) and 2(b) that the MF-PCE method is able to improve the MARE and $r$ with the same number of HF function evaluations as HF-PCE. The results also show that MF-PCE reduces $e_S$ and $e_T$ compared to the HF-PCE method, with a monotonically decreasing trend. The convergence trends for both $e_S$ and $e_T$ are roughly similar for all schemes. Nonetheless, the $e_S$ and $e_T$ obtained from LF-PCE without correction are already very accurate, and the LF borehole function is a good representation of the HF borehole function from the Sobol indices point of view. This means that the trend of the HF borehole function is clearly captured by the LF borehole function, due to the very high correlation between the LF and HF functions. In fact, the convergence of $r$ is very similar for HF-PCE and LF-PCE. This suggests that the LF function is adequate for estimating the Sobol indices of the HF function without any correction term if the correlation is near one. However, although the Sobol indices obtained from LF-PCE are a highly accurate representation of those of the HF function, the obtained statistical moments are inaccurate because the response is not corrected. This means that if the PCE model is to be used for estimating the statistical moments, the LF function alone is not enough and needs correction to be properly usable for both UQ and SA purposes (this is why the MARE for LF-PCE is not depicted).
Figure 3 shows the convergence of $e_S$ and $e_T$ for the borehole problem when the hypothetical cost of the LF simulation is counted. Here, the MF-PCE method provides a good alternative for estimating the Sobol indices when one cannot afford the luxury of evaluating HF-PCE with a higher sparse grid level. To reach a given threshold value for both $e_S$ and $e_T$, MF-PCE only needs an HF sparse grid one level lower than that of HF-PCE. The total computational cost is, of course, greater when the computational cost ratio is relatively high; however, there is still a clear advantage to using MF-PCE over HF-PCE for the borehole problem even when RT is high. In reaching a threshold value that we consider fine accuracy for engineering purposes, the MF-PCE method with the four RT values needs only a fraction of the cost needed by HF-PCE to reach the same threshold (i.e., HF-PCE with SG3).
Figure 4 shows the decay of the PCE coefficients for the borehole problem. We use the SG45 scheme to generate this plot. By analyzing Fig. 4, it is clear that the correction function is less complex than the HF and LF functions, as indicated by its lower coefficient magnitudes and faster decay. Note that the correction PCE always seems to decay faster than the LF PCE due to the higher cardinality of the LF PCE. However, we can see that a significant portion of the correction PCE basis has lower coefficient values than that of the LF PCE. This lower complexity indicates that it is easier to approximate the correction than the HF function, which leads to a more accurate approximation by the MFPCE model compared to the standard HFPCE model. Obviously, this leads to a more accurate approximation of the true Sobol indices.
3.2 Ishigami function
The Ishigami function is a three-dimensional, highly nonlinear function that is frequently used as a benchmark problem for SA purposes ishigami1990importance . Here, we construct three LF representations of the Ishigami function to test the performance of MFPCE in approximating the Sobol indices of a highly nonlinear function. The standard form of the Ishigami function is defined as
(47) 
The HF Ishigami function used in this paper, which is the standard expression of the Ishigami function, is expressed as
(48) 
We then build three LF representations of the Ishigami function as follows:
(49) 
(50) 
and
(51) 
The r and MARE of the LF relative to the HF Ishigami function are listed in Table 4. The input random variables are uniformly distributed. The three LF Ishigami models represent three scenarios of discrepancy between the HF and LF functions. The 1^{st} LF function has the highest r correlation and the lowest MARE, while the 2^{nd} has the opposite characteristics. On the other hand, the 3^{rd} LF model has exactly the same r as the 2^{nd} LF model but a higher MARE.
LF Model  r  MARE 

1  0.9875  0.4504 
2  0.8826  1.0672 
3  0.8826  1.5587 
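The r and MARE values reported in Table 4 (and in the later test cases) can be estimated by sampling the HF and LF responses at common input points; r is the Pearson correlation coefficient and MARE the mean absolute relative error. A minimal sketch with a toy HF/LF pair (the functions below are illustrative, not the paper's models):

```python
import numpy as np

def pearson_r(y_hf, y_lf):
    """Pearson correlation coefficient between HF and LF responses."""
    return np.corrcoef(y_hf, y_lf)[0, 1]

def mare(y_hf, y_lf):
    """Mean absolute relative error of the LF response w.r.t. the HF one."""
    return np.mean(np.abs((y_lf - y_hf) / y_hf))

# Toy example: an HF response bounded away from zero and a distorted
# LF version of it, sampled at shared random inputs.
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=10_000)
y_hf = 2.0 + np.sin(x)          # toy "high-fidelity" response
y_lf = 2.1 + np.sin(0.9 * x)    # toy "low-fidelity" response

print(f"r    = {pearson_r(y_hf, y_lf):.4f}")
print(f"MARE = {mare(y_hf, y_lf):.4f}")
```

Note that MARE is only meaningful when the HF response stays away from zero, which holds for the benchmark functions used here.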
The HF sparse grid level is varied from 1 to 6 with a fixed value of . For the Ishigami function, the true Sobol indices can be obtained analytically; the values are shown in Table 5. These analytical Sobol indices are obtained by decomposing the variance of the output as shown in Eq. 52 as
(52)  
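Since the equation bodies above did not survive extraction, note that the analytical indices of Table 5 can be reproduced from the standard Ishigami variance decomposition; the sketch below assumes the usual constants a = 7 and b = 0.1:

```python
import numpy as np

# Standard Ishigami function: f(x) = sin(x1) + a*sin(x2)^2 + b*x3^4*sin(x1),
# with x_i ~ Uniform[-pi, pi]; a = 7 and b = 0.1 are the usual constants.
a, b = 7.0, 0.1
pi = np.pi

# Analytical partial variances of the Sobol decomposition.
V1 = 0.5 * (1.0 + b * pi**4 / 5.0) ** 2   # main effect of x1
V2 = a**2 / 8.0                           # main effect of x2
V3 = 0.0                                  # x3 alone has no effect
V13 = 8.0 * b**2 * pi**8 / 225.0          # x1-x3 interaction
V = V1 + V2 + V3 + V13                    # total variance

S1, S2, S3, S13 = V1 / V, V2 / V, V3 / V, V13 / V
ST1, ST2, ST3 = S1 + S13, S2, S3 + S13    # total indices

print(f"S1 = {S1:.4f}, S2 = {S2:.4f}, S13 = {S13:.4f}")
print(f"ST1 = {ST1:.4f}, ST2 = {ST2:.4f}, ST3 = {ST3:.4f}")
```

These values reproduce the entries of Table 5 (S1 ≈ 0.314, S2 ≈ 0.442, ST3 ≈ 0.244) up to rounding.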
The results, shown in Fig. 5, indicate that all MFPCE schemes are able to reduce the Sobol indices error compared to HFPCE. It can also be seen that the convergence trend is roughly similar for all error metrics. The MFPCE method with the 1^{st} LF model produces the MF model with the lowest error of all, which is reasonable given its high r with the HF function. Furthermore, there is no significant difference between the convergence error of the MF schemes with the 2^{nd} and 3^{rd} LF models. Although the MARE of the 3^{rd} LF model is higher than that of the 2^{nd} (see Fig. 4(a)), the trends of the two functions are similar, which results in almost the same quality of the MFPCE model. As expected, none of the LFPCEs without correction (see Fig. 4(b)) can accurately estimate the r of the true Ishigami function. Thus, LFPCE fails to estimate the true Sobol indices (see Figs. 4(c) and 4(d)). This is mainly because the r correlation is not very close to one, unlike the borehole case explained before. In such cases, the LF model without correction should not be trusted as a representation of the HF function for estimating the Sobol indices. However, when the LF model is used within the MFPCE framework, it can be efficiently used to estimate the Sobol indices of the true HF function at lower computational cost. This is very useful when the computational cost ratio of the HF to the LF function is very high.
The convergence of the error metrics with the hypothetical actual cost is shown in Fig. 6. Here, only MFPCE with the first LF model is shown, since it is the best LF model available for the Ishigami function. Observing the plot, we can see that MFPCE fails to yield a significant benefit when the computational cost ratio is equal to . The computational cost of MFPCE with RT is of that of HFPCE to reach a threshold value of for both e and e, which is not a significant cost reduction. On the other hand, if the computational cost ratio is assumed to be and , the costs to reach this threshold are , , and of the cost needed by HFPCE, respectively. When the threshold is set to be very strict, that is, , the costs of MFPCE to reach this threshold with RT, and are , , and of that of HFPCE, respectively. With this very strict threshold, the cost reduction becomes significant only when the computational cost ratio equals or . Conversely, when the computational cost ratio equals , MFPCE requires more function evaluations than HFPCE to reach this very high accuracy.
The decay of the PCE coefficients, shown in Fig. 7, clearly indicates that the coefficients of all correction PCEs decay faster than those of the LF and HF PCEs for all the LF models. This means that the correction functions are all less complex than the LF ones. Visual inspection clearly shows that the gap between the HF and correction PCE models is wider for MFPCE with the 1^{st} model than for the others. The wider this gap, the less complex the correction to the LF function, which results in a more successful MF model for estimating the Sobol indices.
Index  Analytical 

0.3138  
0.4424  
0  
0  
0.2436  
0  
0  
0.5574  
0.4424  
0.2436 
3.3 Short column problem
The third algebraic test problem is a short column problem with five LF models ng2012multifidelity . Furthermore, this problem also involves random variables with non-uniform distributions.
The HF response for the short column problem is defined as
(53) 
The LF responses are defined as ng2012multifidelity
(54) 
(55) 
(56) 
(57) 
and
(58) 
where the input random variables follow the distributions listed in Table 6. The five LF short column functions have various values of r correlation and MARE relative to the HF one, as shown in Table 7. As can be seen from this table, some LF models have a high r and a low MARE (model 1, for example), while others have a low correlation and a very large MARE (model 3, for example). We aim to further investigate which characteristics of an LF model yield better accuracy in estimating the Sobol indices when employed within the MFPCE framework. The sparse grid level offset is set to 2 for this problem.
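Equations (53)-(58) did not survive extraction. For orientation, the HF short column limit state commonly paired with the distributions of Table 6 in the multi-fidelity literature (e.g., ng2012multifidelity) has the form sketched below; treat this as an assumed reconstruction, not necessarily the paper's exact expression:

```python
def short_column_hf(b, h, P, M, Y):
    """Assumed high-fidelity short column limit state (common form in the
    multi-fidelity literature): width b, depth h, axial force P, bending
    moment M, and yield stress Y."""
    return 1.0 - 4.0 * M / (b * h**2 * Y) - P**2 / (b**2 * h**2 * Y**2)

# Evaluate at the nominal point of the Table 6 distributions
# (b = 10, h = 20, P = 500, M = 2000, Y = 5).
print(short_column_hf(10.0, 20.0, 500.0, 2000.0, 5.0))  # → 0.35
```

The LF variants of Eqs. (54)-(58) perturb the exponents and coefficients of this expression, producing the range of r and MARE values reported in Table 7.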
Random variable  Probability distribution 

Uniform [5,15]  
Uniform [15,25]  
Normal [500,100]  
Normal [2000,400]  
Normal [5,0.5] 
LF model  r  MARE 

1  0.9234  8.6926 
2  0.7612  126.6604 
3  0.4780  154.7131 
4  0.6931  15.4713 
5  0.5914  1547.13 
Index  MCS 

0.5240  
0.2059  
0.0696  
0.0284  
0.0532  
0.6257  
0.2708  
0.1143  
0.0346  
0.0799 
The convergence of the error metrics for the short column problem is shown in Fig. 8. Closer investigation of this figure shows that only the 1^{st} and 4^{th} LF functions are able to successfully reduce the error of the Sobol indices relative to HFPCE for all sparse grid levels. As expected, a decrease in MARE and r corresponds to a better estimation of the Sobol indices. However, this is not always the case, as can be observed from the result of MFPCE using the 3^{rd} LF model, which produces a better estimate of the Sobol indices than HFPCE from HF sparse grid level 3 onward. The 2^{nd} and 5^{th} LF models produce MF models of lower quality than HFPCE in terms of estimating the Sobol indices.
Figure 9 depicts the convergence of the error metrics for the short column function when the hypothetical cost of the LF simulation is taken into account. It is clear from this figure that when the computational cost ratio is equal to 1/4, there is no advantage in utilizing MFPCE, since MFPCE with RT=1/4 needs 110% of the computational cost of HFPCE to bring e and e below . On the other hand, MFPCE with RT=1/8 requires 67.8% of the HFPCE cost to reach a similar accuracy, which is a noteworthy cost reduction for reaching engineering accuracy. The cases with RT=1/16 and RT=1/32 require 46.61% and 36.02% of the HFPCE cost, respectively, to reach the engineering accuracy of .
By investigating only the statistical similarities between the HF and LF models, it is hard to determine whether an MFPCE with a certain LF model can successfully estimate the true Sobol indices. It is clear for the 1^{st} model, but not for the 4^{th}, since its r correlation is low and its MARE is relatively high. It becomes even more difficult to make this inference for the 2^{nd} LF model (its correlation is moderate while its MARE is very high). Although in some cases the adequacy of the LF model for assisting MFPCE is obvious, inspecting the decay of the PCE coefficients is the proper way to perform this kind of analysis.
The decay of the PCE coefficients for all LF models is shown in Fig. 10. All PCE coefficients are obtained using level 4 correction sparse grids (i.e., level 6 LF sparse grids), which are adequate to obtain good convergence of the Sobol indices. It is very clear from this figure that the decay behavior of the HF, LF, and correction PCE coefficients is similar for the unsuccessful MF models. Here, we can see that the magnitude of the correction PCEs' coefficients is much larger, and they also decay more slowly than those of HFPCE. Moreover, the coefficients of the LF and correction PCEs decay similarly to each other for the 2^{nd}, 3^{rd}, and 5^{th} LF models. Such similar decay means that the trend of the correction PCE almost mimics that of the LF PCE, indicating that the correction PCE is no less complex than the LF and HF PCEs. The MFPCE scheme with the 3^{rd} LF model is a special case, where the convergence of e and e shows that it slightly outperforms HFPCE when correction sparse grids of levels 3 and 4 are used. However, using an LF model like the 3^{rd} one is not recommended in a real application, because the adequacy of the 3^{rd} model in aiding MFPCE cannot be trusted when investigating the decay of the PCE coefficients. Indeed, with a low-level sparse grid, MFPCE with the 3^{rd} LF model fails to estimate the Sobol indices accurately. On the other hand, the decay trend of MFPCE with the 1^{st} and 4^{th} models suggests that the MF approximation is better than the HF one. The gap between the decay of the correction PCE and that of the HF and LF PCEs is visible for these two LF models, indicating the lower complexity of the correction PCE model. This difference is especially clear when MFPCE with the 1^{st} LF model is compared with the 4^{th}. In addition, if we pay close attention to the decay of the PCE coefficients for the 4^{th} model, the magnitude of the correction PCE's first coefficient is larger than that of HFPCE (i.e., the mean is higher). However, its correction PCE coefficients decay faster than those of HFPCE. In light of these results, we observe that it is the rate of decay of the coefficients that matters for a successful MFPCE model, not the difference between the mean values of the correction and HF PCE. Moreover, it is also better to look at the difference between the correction PCE and both the HF and LF PCEs.
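The decay diagnostic discussed above can be made quantitative by comparing, for instance, the fraction of spectral energy left in the tail of each sorted coefficient vector. A sketch with hypothetical coefficient vectors (not the paper's):

```python
import numpy as np

def tail_energy_fraction(coeffs, k):
    """Fraction of spectral energy (sum of squared coefficients, mean term
    excluded) carried by all terms after the k largest ones; a fast-decaying
    PCE has a small tail fraction."""
    mags = np.sort(np.abs(np.asarray(coeffs[1:], dtype=float)))[::-1]
    return np.sum(mags[k:] ** 2) / np.sum(mags ** 2)

# Hypothetical coefficient vectors: a "correction" PCE with fast geometric
# decay versus an "HF" PCE with slow algebraic decay.
c_corr = np.r_[1.0, 0.5 ** np.arange(20)]
c_hf = np.r_[1.0, 1.0 / (1.0 + np.arange(20))]

print(tail_energy_fraction(c_corr, 5))  # small tail: simple function
print(tail_energy_fraction(c_hf, 5))    # large tail: complex function
```

A correction PCE with a markedly smaller tail fraction than the HF PCE signals, in the spirit of Fig. 10, that the LF model is adequate for the MF framework.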
3.4 Transonic airfoil in inviscid flow
The final test is the SA of a transonic airfoil in inviscid flow. The open-source CFD code SU2 was used to solve the Euler equations in order to obtain the aerodynamic coefficients palacios2013stanford . The airfoil model in this case is assumed to be subject to geometrical uncertainties. Here, we used the RAE 2822 airfoil, which was parameterized via the PARSEC method to find the variables that describe the datum shape sobieczky1999parametric . Note that since the datum shape is an approximation of the original one, there is an error between the coordinates of the true and approximated shapes. However, our goal here is to demonstrate the capability of MFPCE in performing SA, not to measure the randomness of the original airfoil itself; hence, this test case is appropriate for demonstrating an SA algorithm. For this problem, the random output of interest is the lift-to-drag ratio, under the fixed flight condition of Mach number and angle of attack of 0.73 and 2, respectively.
After suitable PARSEC parameters were found, we assumed five uncertain parameters: the coordinates of the lower crest, the coordinates of the upper crest, and the trailing edge angle, with the distributions listed in Table 9. An example of one realization evaluated by the CFD solver is shown in Fig. 11. The datum shape and several realizations of the uncertain shape are shown in Fig. 12. For this problem, we used fully and partially converged simulations as the HF and LF simulations, respectively. The criterion for a fully converged simulation is that the drag residual reaches , which on average requires 500 iterations. On the other hand, the number of iterations for the partially converged simulations used as LF simulations is set to 100, which means that the computational cost ratio between the LF and HF simulations is 0.2 for this case. This fully and partially converged simulation strategy has been employed for optimization forrester2006optimization ; branke2017efficient , surrogate model construction courrier2014use , and UQ palar2016multi . One advantage of this strategy is that some LF samples are evaluated together with the HF samples, which leads to a further reduction in the computational cost.
Since the statistical moments are typically also quantities of interest in real-world applications, we plot the convergence of the statistical moments together with the Sobol indices. As reference values, an HFPCE with a level 4 sparse grid is used to obtain the reference total Sobol indices and statistical moments, shown in Table 10. Also shown in this table are the Sobol indices obtained by LFPCE with a level 4 sparse grid. Ranking the total sensitivity indices shows that is the most important variable, followed by and , while the contributions of and are very small but not negligible. The sum of the total Sobol indices is 1.015, which means that the largest contributors to the total variance are the first-order sensitivity indices, with only small interaction between variables. Although the Sobol indices obtained by LFPCE appear close to the true Sobol indices, they are not accurate representations of the true indices. Moreover, the standard deviations of the HF and LF PCE differ by 1.78%. This means that a correction of LFPCE must be applied within the MFPCE framework.
No.  Random variable  Probability distribution 

1  Uniform [0.4415, 0.4549]  
2  Uniform [0.060, 0.066]  
3  Uniform [0.344, 0.3800]  
4  Uniform [0.0618, 0.0560]  
5  Uniform [0.1182,0.1070] 
Sensitivity index  HF, SG4  LF, SG4 

0.3207  0.3136  
0.6964  0.6970  
0.0044  0.0037  
0.0214  0.0198  
0.0085  0.0084  
Mean  121.9250  121.5645 
St. D.  39.6901  37.3553 
First, a convergence analysis of the statistical moments and of the total Sobol indices error is performed; the results are depicted in Fig. 13 (the error of all Sobol indices is not shown since the interaction terms are small). The horizontal axis of Fig. 13 shows the actual cost of MFPCE, which includes the cost of the LF function. From the mean and standard deviation perspective, the advantage of MFPCE over HFPCE is not clearly seen. Also note that there is a jump in computational cost for MFPCE with when the HF sparse grid level is set to 2. Since increasing the value of means a higher computational cost, one has to be careful not to set the value too high, especially if the computational cost of the LF function is not negligible. Nonetheless, a clear advantage of MFPCE is observed when it is used to estimate the Sobol indices. Here, MFPCE with and reaches an e level below after the equivalent of 125 and 325 HF function evaluations, respectively, whereas HFPCE needs 341 function evaluations for the same accuracy; this means that the computational cost is reduced to about 36.66% and 95.3% for the MFPCE method with and , respectively. Although the gain from MFPCE with seems minimal in achieving this accuracy, it reaches a total Sobol indices error level below with the equivalent of 77 HF function evaluations.
Figure 14 shows the convergence of each term of the total Sobol indices with respect to the actual cost. The convergence trend is not especially clear for , , and , since their contributions are relatively small compared to and . We observe that MFPCE with both and reaches a good accuracy for and with an HF sparse grid level of 2. However, MFPCE with is more efficient here, since it reaches engineering accuracy with fewer function evaluations than MFPCE with and HFPCE.
The decay of the correction and LF PCE coefficients is depicted in Fig. 15. The plot is prepared using MFPCE with correction and LF PCE sparse grid levels of 3 and 4, respectively. Investigating the figure, it is obvious that the correction PCE model is less complex than the LF and HF PCEs. Thus, the given LF model is appropriate for use within the MFPCE framework for this problem. The similarity between the decay of the LF and HF PCE coefficients arises because the difference in magnitude between the two functions is small. However, we can see that for polynomial terms with small coefficients (i.e., beyond the 40th term), the two PCEs start to differ. This is where the correction PCE plays its role.
4 Conclusions
An approach for SA using PCE with an MF model is presented in this paper. The goal is to accelerate the SA process when simulations with multiple levels of fidelity are available. The spectral projection-based MFPCE is the method of interest in this paper. The method uses two PCEs, an LF and a correction PCE, which are combined into a single MFPCE. SA is performed directly in the post-processing phase by computing the Sobol indices from the MFPCE coefficients.
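The post-processing step described above can be sketched as follows, assuming an orthonormal basis and, for simplicity, a shared multi-index set for the LF and correction PCEs (in the paper the correction basis is smaller and would be zero-padded first); all names here are illustrative:

```python
import numpy as np

def total_sobol_from_pce(multi_indices, coeffs):
    """Total Sobol indices from the coefficients of an orthonormal PCE.

    multi_indices : (P, d) integer array of polynomial multi-indices
    coeffs        : (P,) array of PCE coefficients (the all-zero index
                    is the mean term and is excluded from the variance)
    Returns S_T,i = sum of c_a^2 over all a with a_i > 0, divided by the
    total variance (sum of c_a^2 over all a != 0)."""
    multi_indices = np.asarray(multi_indices)
    coeffs = np.asarray(coeffs, dtype=float)
    nonzero = multi_indices.sum(axis=1) > 0
    variance = np.sum(coeffs[nonzero] ** 2)
    return np.array([
        np.sum(coeffs[nonzero & (multi_indices[:, i] > 0)] ** 2) / variance
        for i in range(multi_indices.shape[1])
    ])

def mf_coeffs(lf_coeffs, corr_coeffs):
    """Additive-correction MFPCE: sum the LF and correction coefficients
    defined on a shared basis."""
    return np.asarray(lf_coeffs) + np.asarray(corr_coeffs)

# Toy 2-variable example on a shared 4-term basis with multi-indices
# (0,0), (1,0), (0,1), (1,1).
A = [[0, 0], [1, 0], [0, 1], [1, 1]]
c_mf = mf_coeffs([1.0, 2.0, 1.0, 0.0], [0.2, 0.0, 0.0, 1.0])
print(total_sobol_from_pce(A, c_mf))
```

The same routine applies to the HF, LF, or MF coefficient vector, which is what makes the Sobol indices essentially free once the expansion is built.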
The proposed method is demonstrated and investigated on several algebraic and engineering test problems. In cases where the correlation of the LF to the HF function is very high (near 1), the Sobol indices obtained from the LF and HF PCE are almost the same, revealing that for such cases the LF function alone is enough to obtain accurate Sobol indices. However, the Sobol indices obtained from the LF and HF PCE differ greatly if the LF and HF functions are not nearly perfectly correlated. It is also worth noting that one usually wants to compute the statistical moments together with the Sobol indices, which means that a correction of the LF function is always desirable in order to obtain accurate values. The current study shows that MFPCE can be employed to obtain accurate Sobol indices at a lower computational cost than HFPCE. Our investigation shows that MFPCE is an efficient method for estimating the Sobol indices, with the requirement that the LF and HF functions be highly correlated with each other. The decrease in the error of the estimated Sobol indices correlates positively with the decrease in MARE and, especially, in r. Investigation of the artificial problems with hypothetical LF simulation costs suggests that the computational cost ratio between the LF and HF function should be at least lower than , in order to ensure the advantage of MFPCE over HFPCE in terms of computational efficiency. In the inviscid transonic airfoil case with five uncertain parameters, the MF approach successfully estimates the Sobol indices at about 36.66% of the computational cost needed by the HFPCE method to obtain nearly similar engineering accuracy.
As for future work, implementing an adaptive sparse grid for MFPCE is a promising approach to efficiently obtain accurate Sobol indices in high-dimensional problems by focusing on the most important dimensions. Moreover, the Sobol indices themselves can be used to guide the adaptivity. Because such an approach has not yet been considered, more work is needed to investigate adaptive MFPCE for SA purposes. The use of cost-efficient quadrature such as Clenshaw-Curtis quadrature should also be considered. Investigation of regression-based MFPCE for SA, which allows flexibility in sampling location and size, is also a promising research direction.
Acknowledgement
Part of this work was funded through Riset KK ITB. Koji Shimoyama was supported in part by Grant-in-Aid for Scientific Research (B) No. H1503600, administered by the Japan Society for the Promotion of Science (JSPS).
References
 (1) A. Saltelli, K. Chan, E. M. Scott, et al., Sensitivity analysis, Vol. 1, Wiley New York, 2000.
 (2) B. Efron, C. Stein, The jackknife estimate of variance, The Annals of Statistics (1981) 586–596.
 (3) I. M. Sobol’, On sensitivity estimation for nonlinear mathematical models, Matematicheskoe Modelirovanie 2 (1) (1990) 112–118.

 (4) J. Park, I. W. Sandberg, Universal approximation using radial-basis-function networks, Neural Computation 3 (2) (1991) 246–257.
 (5) D. G. Krige, A statistical approach to some mine valuation and allied problems on the Witwatersrand, Ph.D. thesis, University of the Witwatersrand (1951).
 (6) G. Matheron, Principles of geostatistics, Economic geology 58 (8) (1963) 1246–1266.
 (7) N. Cressie, Statistics for spatial data, John Wiley & Sons, 2015.
 (8) J. Sacks, W. J. Welch, T. J. Mitchell, H. P. Wynn, Design and analysis of computer experiments, Statistical science (1989) 409–423.
 (9) L. S. Gandin, R. Hardin, Objective analysis of meteorological fields, Vol. 242, Israel program for scientific translations Jerusalem, 1965.
 (10) G. Loeven, J. Witteveen, H. Bijl, Probabilistic collocation: an efficient nonintrusive approach for arbitrarily distributed parametric uncertainties, in: 45th AIAA Aerospace Sciences Meeting and Exhibit, 2007, p. 317.
 (11) O. M. Knio, H. N. Najm, R. G. Ghanem, et al., A stochastic projection method for fluid flow: I. basic formulation, Journal of computational Physics 173 (2) (2001) 481–511.
 (12) D. M. Ghiocel, R. G. Ghanem, Stochastic finiteelement analysis of seismic soilstructure interaction, Journal of Engineering Mechanics 128 (1) (2002) 66–77.
 (13) D. Xiu, G. E. Karniadakis, The Wiener–Askey polynomial chaos for stochastic differential equations, SIAM journal on scientific computing 24 (2) (2002) 619–644.
 (14) R. G. Ghanem, P. D. Spanos, Stochastic finite elements: a spectral approach, Courier Corporation, 2003.
 (15) P. Das, Y. Zheng, Cumulative formation of response surface and its use in reliability analysis, Probabilistic Engineering Mechanics 15 (4) (2000) 309–315.

 (16) M. Papadrakakis, N. D. Lagaros, Reliability-based structural optimization using neural networks and Monte Carlo simulation, Computer Methods in Applied Mechanics and Engineering 191 (32) (2002) 3491–3507.
 (17) I. Kaymaz, Application of kriging method to structural reliability problems, Structural Safety 27 (2) (2005) 133–151.
 (18) V. Dubourg, F. Deheeger, B. Sudret, Metamodelbased importance sampling for the simulation of rare events, Applications of Statistics and Probability in Civil Engineering 26 (2011) 192.
 (19) B. J. Bichon, J. M. McFarland, S. Mahadevan, Efficient surrogate models for reliability analysis of systems with multiple failure modes, Reliability Engineering & System Safety 96 (10) (2011) 1386–1395.
 (20) M. Balesdent, J. Morio, J. Marzat, Krigingbased adaptive importance sampling algorithms for rare event estimation, Structural Safety 44 (2013) 1–10.
 (21) B. J. Bichon, M. S. Eldred, L. P. Swiler, S. Mahadevan, J. M. McFarland, Efficient global reliability analysis for nonlinear implicit performance functions, AIAA J 46 (10) (2008) 2459–2468.
 (22) B. Echard, N. Gayton, M. Lemaire, AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation, Structural Safety 33 (2) (2011) 145–154.
 (23) X. Huang, J. Chen, H. Zhu, Assessing small failure probabilities by AK–SS: an active learning method combining Kriging and subset simulation, Structural Safety 59 (2016) 86–95.
 (24) S.-K. Choi, R. V. Grandhi, R. A. Canfield, Structural reliability under non-Gaussian stochastic behavior, Computers & Structures 82 (13) (2004) 1113–1121.
 (25) M. Berveiller, B. Sudret, M. Lemaire, Stochastic finite element: a non intrusive approach by regression, European Journal of Computational Mechanics/Revue Européenne de Mécanique Numérique 15 (13) (2006) 81–92.
 (26) C. Hu, B. D. Youn, Adaptivesparse polynomial chaos expansion for reliability analysis and design of complex engineering systems, Structural and Multidisciplinary Optimization 43 (3) (2011) 419–442.
 (27) B. Sudret, G. Blatman, M. Berveiller, Response surfaces based on polynomial chaos expansions, Construction reliability: safety, variability and sustainability (2013) 147–167.
 (28) R. Schöbi, B. Sudret, S. Marelli, Rare event estimation using polynomialchaos kriging, ASCEASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering 3 (2) (2016) D4016002.
 (29) R. Jin, W. Chen, A. Sudjianto, On sequential sampling for global metamodeling in engineering design, in: Proceedings of DETC, Vol. 2, 2002, pp. 539–548.
 (30) I. Bilionis, N. Zabaras, Multioutput local gaussian process regression: Applications to uncertainty quantification, Journal of Computational Physics 231 (17) (2012) 5718–5746.
 (31) K. Shimoyama, S. Kawai, J. J. Alonso, Dynamic adaptive sampling based on kriging surrogate models for efficient uncertainty quantification, in: 15th AIAA NonDeterministic Approaches Conference, 2013, pp. 2013–1470.
 (32) R. P. Dwight, Z.H. Han, Efficient uncertainty quantification using gradientenhanced kriging, AIAA paper 2276 (2009) 2009.
 (33) M. P. Rumpfkeil, W. Yamazaki, D. J. Mavriplis, A dynamic sampling method for kriging and cokriging surrogate models, in: 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition. American Institute of Aeronautics et Astronautics (cf. p. 162), 2011.
 (34) Q. Wang, P. Moin, G. Iaccarino, A high order multivariate approximation scheme for scattered data sets, Journal of Computational Physics 229 (18) (2010) 6343–6361.
 (35) K. Boopathy, M. P. Rumpfkeil, Unified framework for training point selection and error estimation for surrogate models, AIAA Journal.
 (36) B. Sudret, Global sensitivity analysis using polynomial chaos expansions, Reliability Engineering & System Safety 93 (7) (2008) 964–979.
 (37) T. Crestaux, O. Le Maître, J.-M. Martinez, Polynomial chaos expansion for sensitivity analysis, Reliability Engineering & System Safety 94 (7) (2009) 1161–1172.
 (38) M. D. Morris, T. J. Mitchell, D. Ylvisaker, Bayesian design and analysis of computer experiments: use of derivatives in surface prediction, Technometrics 35 (3) (1993) 243–255.
 (39) W. Liu, Development of gradientenhanced kriging approximations for multidisciplinary design optimization, 2003.
 (40) J. H. de Baar, R. P. Dwight, H. Bijl, Improvements to gradientenhanced kriging using a bayesian interpretation, International Journal for Uncertainty Quantification 4 (3).
 (41) J. de Baar, T. Scholcz, R. Dwight, Exploiting adjoint derivatives in highdimensional metamodels, AIAA Journal 53 (5) (2015) 1391–1395.
 (42) J. Peng, J. Hampton, A. Doostan, On polynomial chaos expansion via gradient-enhanced ℓ1-minimization, Journal of Computational Physics.
 (43) J. H. de Baar, B. Harding, A gradientenhanced sparse grid algorithm for uncertainty quantification, International Journal for Uncertainty Quantification 5 (5).
 (44) N. Wiener, The homogeneous chaos, American Journal of Mathematics 60 (4) (1938) 897–936.

 (45) S.-K. Choi, R. V. Grandhi, R. A. Canfield, C. L. Pettit, Polynomial chaos expansion with latin hypercube sampling for estimating response variability, AIAA Journal 42 (6) (2004) 1191–1198.
 (46) M. S. Eldred, J. Burkardt, Comparison of nonintrusive polynomial chaos and stochastic collocation methods for uncertainty quantification, in: Proceedings of the 47th AIAA Aerospace Sciences Meeting, no. 20090976, AIAA, Orlando, FL, 2009.
 (47) G. Blatman, B. Sudret, Efficient computation of global sensitivity indices using sparse polynomial chaos expansions, Reliability Engineering & System Safety 95 (11) (2010) 1216–1229.
 (48) G. Blatman, B. Sudret, Adaptive sparse polynomial chaos expansion based on least angle regression, Journal of Computational Physics 230 (6) (2011) 2345–2367.
 (49) A. Doostan, H. Owhadi, A nonadapted sparse approximation of PDEs with stochastic inputs, Journal of Computational Physics 230 (8) (2011) 3015–3034.
 (50) J. D. Jakeman, M. S. Eldred, K. Sargsyan, Enhancing ℓ1minimization estimates of polynomial chaos expansions using basis selection, Journal of Computational Physics 289 (2015) 18–34.
 (51) S. Smolyak, Interpolation and quadrature formulas for the classes W_a^s and E_a^s, in: Dokl. Akad. Nauk SSSR, Vol. 131, 1960, pp. 1028–1031.
 (52) T. Gerstner, M. Griebel, Numerical integration using sparse grids, Numerical algorithms 18 (3) (1998) 209–232.
 (53) H.J. Bungartz, M. Griebel, Sparse grids, Acta numerica 13 (2004) 147–269.
 (54) J. Garcke, Sparse grids in a nutshell, in: Sparse grids and applications, Springer, 2012, pp. 57–80.
 (55) S. A. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions, in: Dokl. Akad. Nauk SSSR, Vol. 4, 1963, p. 123.
 (56) D. Xiu, J. S. Hesthaven, Highorder collocation methods for differential equations with random inputs, SIAM Journal on Scientific Computing 27 (3) (2005) 1118–1139.
 (57) P. G. Constantine, M. S. Eldred, E. T. Phipps, Sparse pseudospectral approximation method, Computer Methods in Applied Mechanics and Engineering 229 (2012) 1–12.
 (58) G. T. Buzzard, Global sensitivity analysis using sparse grid interpolation and polynomial chaos, Reliability Engineering & System Safety 107 (2012) 82–89.
 (59) G. T. Buzzard, D. Xiu, Variancebased global sensitivity analysis via sparsegrid interpolation and cubature, Communications in Computational Physics 9 (03) (2011) 542–567.
 (60) O. GarciaCabrejo, A. Valocchi, Global sensitivity analysis for multivariate output using polynomial chaos expansion, Reliability Engineering & System Safety 126 (2014) 25–36.
 (61) M. C. Kennedy, A. O’Hagan, Predicting the output from a complex computer code when fast approximations are available, Biometrika 87 (1) (2000) 1–13.
 (62) A. I. Forrester, A. Sóbester, A. J. Keane, Multifidelity optimization via surrogate modelling, in: Proceedings of the Royal Society of London a: mathematical, physical and engineering sciences, Vol. 463, The Royal Society, 2007, pp. 3251–3269.
 (63) J. W. Bandler, Q. S. Cheng, S. Dakroury, A. S. Mohamed, M. H. Bakr, K. Madsen, J. Søndergaard, et al., Space mapping: the state of the art, Microwave Theory and Techniques, IEEE Transactions on 52 (1) (2004) 337–361.
 (64) H. Shah, S. Hosder, S. Koziel, Y. A. Tesfahunegn, L. Leifsson, Multifidelity robust aerodynamic design optimization under mixed uncertainty, Aerospace Science and Technology 45 (2015) 17–29.
 (65) M. B. Giles, Multilevel Monte Carlo path simulation, Operations Research 56 (3) (2008) 607–617.
 (66) A. Barth, C. Schwab, N. Zollinger, Multilevel Monte Carlo finite element method for elliptic PDEs with stochastic coefficients, Numerische Mathematik 119 (1) (2011) 123–161.
 (67) L. W. Ng, K. E. Willcox, Multifidelity approaches for optimization under uncertainty, International Journal for Numerical Methods in Engineering 100 (10) (2014) 746–772.
 (68) J. de Baar, S. Roberts, R. Dwight, B. Mallol, Uncertainty quantification for a sailing yacht hull, using multifidelity kriging, Computers & Fluids 123 (2015) 185–201.
 (69) L. W.-T. Ng, M. Eldred, Multifidelity uncertainty quantification using non-intrusive polynomial chaos and stochastic collocation, in: Proceedings of the 14th AIAA Non-Deterministic Approaches Conference, no. AIAA 2012-1852, Honolulu, HI, 2012.
 (70) P. S. Palar, T. Tsuchiya, G. Parks, Decomposition-based evolutionary aerodynamic robust optimization with multi-fidelity point collocation non-intrusive polynomial chaos, in: 17th AIAA Non-Deterministic Approaches Conference, 2015, p. 1377.
 (71) P. S. Palar, T. Tsuchiya, G. T. Parks, Multi-fidelity non-intrusive polynomial chaos based on regression, Computer Methods in Applied Mechanics and Engineering 305 (2016) 579–606.
 (72) A. Narayan, C. Gittelson, D. Xiu, A stochastic collocation algorithm with multifidelity models, SIAM Journal on Scientific Computing 36 (2) (2014) A495–A521.
 (73) L. Le Gratiet, C. Cannamela, B. Iooss, A Bayesian approach for global sensitivity analysis of (multifidelity) computer codes, SIAM/ASA Journal on Uncertainty Quantification 2 (1) (2014) 336–363.
 (74) T. Homma, A. Saltelli, Importance measures in global sensitivity analysis of nonlinear models, Reliability Engineering & System Safety 52 (1) (1996) 1–17.
 (75) D. Xiu, G. E. Karniadakis, Modeling uncertainty in flow simulations via generalized polynomial chaos, Journal of Computational Physics 187 (1) (2003) 137–167.
 (76) R. Askey, J. A. Wilson, Some basic hypergeometric orthogonal polynomials that generalize Jacobi polynomials, Vol. 319, American Mathematical Soc., 1985.
 (77) M. Griebel, M. Schneider, C. Zenger, A combination technique for the solution of sparse grid problems, Technische Universität, 1990.
 (78) D. J. Toal, Some considerations regarding the use of multi-fidelity kriging in the construction of surrogate models, Structural and Multidisciplinary Optimization 51 (6) (2015) 1223–1245.
 (79) S. Xiong, P. Z. Qian, C. J. Wu, Sequential design and analysis of high-accuracy and low-accuracy computer codes, Technometrics 55 (1) (2013) 37–46.
 (80) T. Ishigami, T. Homma, An importance quantification technique in uncertainty analysis for computer models, in: Proceedings of the First International Symposium on Uncertainty Modeling and Analysis, IEEE, 1990, pp. 398–403.
 (81) F. Palacios, M. R. Colonno, A. C. Aranake, A. Campos, S. R. Copeland, T. D. Economon, A. K. Lonkar, T. W. Lukaczyk, T. W. Taylor, J. J. Alonso, Stanford University Unstructured (SU2): An open-source integrated computational environment for multiphysics simulation and design, AIAA Paper 2013-287 (2013).
 (82) H. Sobieczky, Parametric airfoils and wings, in: Recent Development of Aerodynamic Design Methodologies, Springer, 1999, pp. 71–87.
 (83) A. I. Forrester, N. W. Bressloff, A. J. Keane, Optimization using surrogate models and partially converged computational fluid dynamics simulations, in: Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, Vol. 462, The Royal Society, 2006, pp. 2177–2204.
 (84) J. Branke, M. Asafuddoula, K. S. Bhattacharjee, T. Ray, Efficient use of partially converged simulations in evolutionary optimization, IEEE Transactions on Evolutionary Computation 21 (1) (2017) 52–64.
 (85) N. Courrier, P.-A. Boucard, B. Soulier, The use of partially converged simulations in building surrogate models, Advances in Engineering Software 67 (2014) 186–197.
 (86) H. Liu, Y.-S. Ong, J. Cai, A survey of adaptive sampling for global metamodeling in support of simulation-based complex engineering design, Structural and Multidisciplinary Optimization (2017) 1–24.
 (87) P. Wang, Z. Lu, Z. Tang, An application of the Kriging method in global sensitivity analysis with parameter uncertainty, Applied Mathematical Modelling 37 (9) (2013) 6543–6555.