1 Introduction
Mathematical models are essential tools in science and engineering for studying and expressing the behavior of real systems. Exact predictions of system responses can be made only if the model inputs are accurately described, which is rarely achievable in practice because uncertainties are pervasive. Therefore, uncertainty quantification is crucial for mathematical modeling and has been attracting growing interest.
For real structural systems, loadings, material properties and boundary conditions are usually uncertain and can be described with random variables and random fields. In the uncertainty quantification of structural systems, the analysis of both sensitivity and reliability plays an important role. Sensitivity analysis aims at identifying the effect of different inputs on the uncertainties of model outputs, and includes local and global methods [Blatman2010a]. Global methods are under active research since they can quantitatively describe the relative importance of different inputs by taking their variability over the entire distribution space into account. Various global methods have been proposed in the literature, such as variance-based methods [Saltelli2010; Zhang2014], moment-independent methods [Borgonovo2007; Greegar2015] and derivative-based methods [Sobol2009; Sudret2015]. Reliability analysis aims at evaluating the safety of a structure through the failure probability, which is the integral of the joint probability density function of the model inputs over the failure region. Many reliability analysis methods have been proposed in the past decades, including Monte Carlo Simulation (MCS), Importance Sampling [Au1999; Dai2012; Dai2016], Subset Simulation [Papaioannou2015; Zuev2015], Line Sampling [Koutsourelakis2004; Koutsourelakis2004a] and metamodelling approaches such as low-rank tensor approximation [Konakli2016; Dai2015; Dai2017].
The necessary foundation for analyzing the sensitivity and reliability of a structural model is uncertainty propagation, i.e. representing the random response. The random response of the model can be explicitly expressed by projecting it onto a Hilbert space spanned by polynomials that are mutually orthogonal with respect to some probability measure. This method is named polynomial chaos expansion (PCE) [Wiener1938]
with three merits: a sound mathematical background in probability theory and functional analysis, applicability to all random responses with finite variance, and an exponential convergence rate for smooth input-output relationships. It has become a popular tool for the uncertainty quantification of structural systems. The probability distribution of the stochastic response is characterized by a set of expansion coefficients (i.e. coordinates in the Hilbert space) which can be computed intrusively or non-intrusively. In the intrusive methods, such as the spectral stochastic finite element method (SSFEM) [Ghanem2003], the coefficients are obtained with a Galerkin scheme which requires modifications of the original deterministic computer codes, hence the term intrusive. On the contrary, the non-intrusive methods only need repeated calls of the deterministic model, which is convenient for analyzing large complex structural systems with well-validated computer codes. Two approaches are usually distinguished, namely projection- and regression-based ones. The projection-based approaches exploit the orthogonality between different polynomials and essentially amount to computing multidimensional numerical integrals, since closed-form solutions are hardly ever available. The regression-based approaches treat the computation of the coordinates as a statistical regression problem, offering more flexibility in the design of experiments and faster convergence rates [Blatman2009].
The most commonly used regression approach is ordinary least squares regression (OLSR). However, the required number of model evaluations increases dramatically with the number of model inputs. This problem is called the curse of dimensionality. It is crucial to reduce the number of model evaluations because a single simulation of a high-fidelity structural model is often expensive, and large data sets easily lead to memory overflow. Moreover, OLSR unavoidably encounters multicollinearity under small sample sizes, leading to poor estimates of the coordinates. A good way to circumvent these problems is to find an advanced regression technique that is more robust under small sample sizes. Since the randomness of the response is mainly caused by a small subset of the polynomials for most industrial problems, many adaptive sparse PCE approaches have been proposed to mitigate the curse of dimensionality, based on stepwise regression [Blatman2010; Blatman2010a; Abraham2017; Liu2018], least angle regression [Blatman2011a], support vector regression [Cheng2018], D-MORPH regression [Cheng2018a] and compressive sensing [Jakeman2017].
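To make the curse of dimensionality concrete, the size of the full candidate basis of total degree at most $p$ in $n$ inputs is $\binom{n+p}{p}-1$ (excluding the constant term). A minimal sketch (the function name is ours, not from the paper):

```python
from math import comb

def n_basis(n_inputs, max_degree):
    """Size of the candidate set of multivariate polynomials of total
    degree <= max_degree, with the constant term excluded."""
    return comb(n_inputs + max_degree, max_degree) - 1
```

For the 40-dimensional examples considered later, this gives 860 candidate terms at degree 2 and 12340 at degree 3, which is why the number of model evaluations required by OLSR grows so quickly.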
This paper proposes a novel regression-based PCE methodology for analyzing the global sensitivity and reliability of high-dimensional models. Breaking the existing routine in which important terms are selected directly from the huge candidate set of finite-order polynomials, the new method on the one hand builds a hierarchical regression algorithm based on the idea of divide-and-conquer, and on the other hand detects the principal directions which capture the probabilistic content in each subspace by using a state-of-the-art regression approach named partial least squares regression (PLSR) [Rosipal2006; Rosipal2010]. Optimal nonlinearity degrees and interaction degrees are automatically selected with a modified cross-validation scheme. The new method not only achieves significant dimension reduction, but also contributes to uncovering the latent hierarchical low-dimensional structure of the model. This work is distinguished from our previous work [Zhao2019] by two features. Firstly, the manner of subspace division is more elaborate, in order to separate the contributions of terms with different degrees of nonlinearity and interaction. Secondly, the updated subspace division naturally yields a new regression scheme with an additional level of hierarchy: the optimal subspaces are extracted in a consistent manner at all levels rather than selected with a penalized regression scheme at higher levels.
The remainder of this paper is organized as follows. The next section provides a brief review of polynomial chaos representation of the stochastic response. A novel methodology based on partial least squares regression and hierarchical modeling is proposed in Section 3. The computational gain of the proposed method is illustrated in Section 4 with three finite element models.
2 Polynomial chaos representation of the stochastic response
Denote $(\Omega, \mathcal{F}, P)$ as a probability space and $L^2(\Omega, \mathcal{F}, P)$ as the space of random variables with finite second moments:

(1) $L^2(\Omega, \mathcal{F}, P) = \left\{ X : E\left[X^2\right] < +\infty \right\}$
Assume that the stochastic response $Y$ of the model has finite variance, i.e. $Y \in L^2(\Omega, \mathcal{F}, P)$; thus, it can be represented with the following polynomial chaos expansion [Ghanem2003]:

(2) $Y = a_0 + \sum_{j=1}^{\infty} a_j \Psi_j(\boldsymbol{\xi})$

where $a_0$ is the mean value, $\boldsymbol{\xi} = (\xi_1, \ldots, \xi_n)$ is a vector composed of independent standard Gaussian random variables, each $\Psi_j$ is a multidimensional Hermite polynomial and each $a_j$ is a deterministic coefficient. For the sake of simplicity, it is assumed that the random inputs of structures can be described with a finite set of independent random variables. For the case of correlated inputs, readers may refer to [Soize2004; Blatman2010; Rahman2017]. The series is mean-square convergent [Cameron1947], hence it can be truncated at a maximum order $p$:

(3) $Y \approx a_0 + \sum_{j=1}^{P} a_j \Psi_j(\boldsymbol{\xi})$

In this context, each multidimensional Hermite polynomial can be expressed as a tensor product of the corresponding univariate Hermite polynomials,

(4) $\Psi_j(\boldsymbol{\xi}) = \prod_{k=1}^{n} He_{\alpha_k^{(j)}}(\xi_k), \qquad \sum_{k=1}^{n} \alpha_k^{(j)} \le p$

where $\boldsymbol{\alpha}^{(j)}$ is the multi-index of the $j$th polynomial, and the number of retained polynomials is

(5) $P = \dfrac{(n+p)!}{n!\,p!} - 1$
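The tensor-product construction can be sketched as follows, using the probabilists' recurrence $He_{k+1}(x) = x\,He_k(x) - k\,He_{k-1}(x)$ (the function names and the exhaustive multi-index enumeration are illustrative; the enumeration is only practical for small dimensions):

```python
from itertools import product

def hermite_1d(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the three-term recurrence."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, x
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def multi_indices(dim, max_degree):
    """All multi-indices alpha with 0 < |alpha| <= max_degree."""
    return sorted((a for a in product(range(max_degree + 1), repeat=dim)
                   if 0 < sum(a) <= max_degree), key=sum)

def hermite_nd(alpha, xi):
    """Tensor-product Hermite polynomial Psi_alpha(xi)."""
    out = 1.0
    for a, x in zip(alpha, xi):
        out *= hermite_1d(a, x)
    return out
```

For instance, `hermite_1d(2, x)` evaluates $He_2(x) = x^2 - 1$, and `hermite_nd((1, 1), xi)` evaluates the bivariate product $He_1(\xi_1)\,He_1(\xi_2)$.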
The key to PCE is the determination of the expansion coefficients,
in which the whole information about the probability distribution of the stochastic response is encapsulated. Generally, the computational procedures can be classified into two categories, namely intrusive and non-intrusive methods. In the intrusive methods, the deterministic computer codes have to be modified, which is often cumbersome for large complex structural systems. On the contrary, the deterministic model is treated as a black box in the non-intrusive methods; hence the PCE is regarded as a metamodel which can be built with either projection-based or regression-based approaches. The former are based on the orthogonality of the polynomials:
(6) $a_j = \dfrac{E\left[Y\,\Psi_j(\boldsymbol{\xi})\right]}{E\left[\Psi_j^2(\boldsymbol{\xi})\right]}$
where $E[\cdot]$ is the expectation operator. The denominator can be computed analytically, while the numerator needs to be estimated with repeated calls of the deterministic model. The computational burden of this estimation rapidly increases with the number of input variables, even if the Smolyak algorithm [Smolyak1963] can be utilized to alleviate the curse of dimensionality. The regression-based approaches are becoming popular due to their flexibility in the design of experiments and fast convergence. Assume $N$ samples are employed to train the metamodel; the vector of coefficients can then be computed with equation (7):
(7) $\hat{\boldsymbol{a}} = \boldsymbol{\Psi}^{+} \boldsymbol{y}$
where $\boldsymbol{y}$ is the centered response vector and $\boldsymbol{\Psi}$ is the polynomial matrix of equation (8),

(8) $\boldsymbol{\Psi} = \begin{bmatrix} \Psi_1(\boldsymbol{\xi}^{(1)}) & \cdots & \Psi_P(\boldsymbol{\xi}^{(1)}) \\ \vdots & \ddots & \vdots \\ \Psi_1(\boldsymbol{\xi}^{(N)}) & \cdots & \Psi_P(\boldsymbol{\xi}^{(N)}) \end{bmatrix}$
and $\boldsymbol{\Psi}^{+}$ is the Moore-Penrose generalized inverse of $\boldsymbol{\Psi}$. Equation (7) is the well-known solution of OLSR. However, this approach is effective only if the sample size exceeds the number of unknown coefficients, $N > P$. According to equation (5), the required number of samples rapidly increases with the stochastic dimension $n$, which is the so-called curse of dimensionality. On the other hand, severe multicollinearity arises under small sample sizes, which leads to poor accuracy. Therefore it is necessary to find an advanced regression technique that is more robust under small sample sizes.
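As a sketch of equation (7), the toy model below has a known exact expansion ($Y = 2\,He_1(\xi) + 0.5\,He_2(\xi)$, our choice for illustration), so the OLSR fit can be checked against it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D model with known PCE coefficients: Y = 2*He_1(xi) + 0.5*He_2(xi).
N = 50
xi = rng.standard_normal(N)
y = 2.0 * xi + 0.5 * (xi**2 - 1.0)

# Polynomial matrix of equation (8): one column per Hermite basis polynomial.
Psi = np.column_stack([xi, xi**2 - 1.0, xi**3 - 3.0 * xi])

# Equation (7): coefficients via the Moore-Penrose generalized inverse.
a_hat = np.linalg.pinv(Psi) @ y
```

Since $N = 50$ far exceeds $P = 3$ here, the fit recovers the exact coefficients; with $N < P$ the same call would silently return the minimum-norm solution instead, which is exactly the regime where OLSR breaks down.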
3 Second-order hierarchical partial least squares regression-polynomial chaos expansion
This section presents a novel hierarchical regression algorithm for constructing the PCE based on the PLSR technique, which has been applied to deal with multicollinearity in high-dimensional data sets in other scientific disciplines such as chemometrics [Mehmood2012] and bioinformatics [Cao2008]. According to the High Dimensional Model Representation (HDMR) theory [Rabitz1999b] and the relationship between HDMR and PCE [Sudret2008], equation (3) can be rewritten as a summation of component functions with different interaction degrees (i.e. numbers of inputs in a component function), as in equation (9):

(9) $Y = Y_0 + \sum_{i=1}^{n} Y_i(\xi_i) + \sum_{1 \le i < j \le n} Y_{ij}(\xi_i, \xi_j) + \cdots$

where

(10) $Y_i(\xi_i) = \sum_{\boldsymbol{\alpha} \in \mathcal{I}_i} a_{\boldsymbol{\alpha}} \Psi_{\boldsymbol{\alpha}}(\xi_i), \qquad \mathcal{I}_i = \left\{ \boldsymbol{\alpha} : \alpha_i > 0,\ \alpha_k = 0 \ \text{for} \ k \ne i \right\}$

(11) $Y_{ij}(\xi_i, \xi_j) = \sum_{\boldsymbol{\alpha} \in \mathcal{I}_{ij}} a_{\boldsymbol{\alpha}} \Psi_{\boldsymbol{\alpha}}(\xi_i, \xi_j), \qquad \mathcal{I}_{ij} = \left\{ \boldsymbol{\alpha} : \alpha_i > 0,\ \alpha_j > 0,\ \alpha_k = 0 \ \text{for} \ k \notin \{i, j\} \right\}$

and $Y_0 = a_0$. This expression coincides with that of the polynomial dimensional decomposition [Rahman2008]. For most practical problems, the optimal interaction degree is much lower than the number of model inputs; in other words, only a low-order HDMR expansion is needed to approximate the input-output relationship with acceptable accuracy. The proposed algorithm aims to adaptively determine the optimal interaction degree and the optimal order of polynomials under each interaction degree.
3.1 Construction of polynomial chaos expansion
Step 1: Initialization
First, select the maximum polynomial order $p$ using prior knowledge about the nonlinear intensity of the model. Then generate a set of $N$ training samples with the Sobol' quasi-random sampling scheme. Next, transform the sample points from the unit hypercube to the parameter space with an isoprobabilistic transform if necessary, and run the high-fidelity structural model at each sample point to get the corresponding centered model output, denoted as $\boldsymbol{y}$. Meanwhile, transform the sample points from the unit hypercube to the standard Gaussian space and compute the centered polynomial matrix $\boldsymbol{\Psi}$.
Step 2: Partition of polynomial matrix
The dimension of $\boldsymbol{\Psi}$ increases dramatically with the number of model inputs, leading to a high computational burden for the OLSR-PCE method. To deal with this issue, we divide the polynomials into several groups according to their interaction degrees and nonlinearity degrees, and treat them separately and hierarchically. First, $\boldsymbol{\Psi}$ is partitioned into the first order sub-blocks as in equation (12):

(12) $\boldsymbol{\Psi} = \left[ \boldsymbol{\Psi}^{(1)}, \boldsymbol{\Psi}^{(2)}, \ldots, \boldsymbol{\Psi}^{(p)} \right]$

Since $p \ll n$ for high-dimensional problems, the number of first order sub-blocks is small. $\boldsymbol{\Psi}^{(i)}$ is composed of the polynomials which explicitly contain exactly $i$ variables. Then each $\boldsymbol{\Psi}^{(i)}$ is partitioned into the second order sub-blocks as in equation (13),

(13) $\boldsymbol{\Psi}^{(i)} = \left[ \boldsymbol{\Psi}^{(i,i)}, \boldsymbol{\Psi}^{(i,i+1)}, \ldots, \boldsymbol{\Psi}^{(i,p)} \right]$

where $\boldsymbol{\Psi}^{(i,j)}$ contains the polynomials with total order $j$ only.
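In terms of multi-indices, this two-level partition can be sketched as follows (a small, illustrative implementation; the exhaustive enumeration is only meant for low dimensions):

```python
from itertools import product
from collections import defaultdict

def partition_basis(dim, max_degree):
    """Group the candidate multi-indices first by interaction degree i
    (number of active variables) and then by total order j, mirroring
    the sub-blocks of equations (12)-(13)."""
    blocks = defaultdict(list)
    for alpha in product(range(max_degree + 1), repeat=dim):
        if 0 < sum(alpha) <= max_degree:
            i = sum(1 for a in alpha if a > 0)  # interaction degree
            j = sum(alpha)                      # total (nonlinearity) degree
            blocks[(i, j)].append(alpha)
    return blocks
```

For example, with three inputs and $p = 3$, the sub-block of interaction degree 2 and order 3 holds six indices (patterns $(2,1)$ and $(1,2)$ spread over the three variable pairs); note that $j$ always runs from $i$ to $p$ within the $i$th first order sub-block.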
Step 3: Hierarchical regression
To determine the optimal order of the polynomials which compose the first order component functions of HDMR, let the initial response residual be $\boldsymbol{y}$, and perform the first order hierarchical partial least squares regression (FOHPLSR) between $\boldsymbol{y}$ and $\boldsymbol{\Psi}^{(1)}$, as illustrated in Figure 1:
Denote the current response residual and predictor sub-block as $\boldsymbol{r}$ and $\boldsymbol{X}$, respectively. The latent variables of each second order sub-block are extracted by performing PLSR between the current response residual and predictor sub-block. The fundamental assumption of PLSR is that the behavior of the model is actually controlled by only a few factors, called latent variables. First, latent variables are extracted from $\boldsymbol{X}$ and $\boldsymbol{r}$, respectively. The latent variables should not only contain as much information as possible about the variability of the local data sets, but also be maximally correlated. Then, by building regression models among the latent variables, the relationship between $\boldsymbol{X}$ and $\boldsymbol{r}$ can be established. The latent variables are extracted by iteratively solving the following optimization problem:
(14) $(\boldsymbol{w}, \boldsymbol{c}) = \arg\max_{\|\boldsymbol{w}\| = \|\boldsymbol{c}\| = 1} \operatorname{cov}\left( \boldsymbol{X}\boldsymbol{w}, \boldsymbol{r}\boldsymbol{c} \right)$

where $\boldsymbol{w}$ and $\boldsymbol{c}$ are called loadings. The explanatory and response latent variables are expressed as $\boldsymbol{t} = \boldsymbol{X}\boldsymbol{w}$ and $\boldsymbol{u} = \boldsymbol{r}\boldsymbol{c}$, respectively. The optimization problem can be solved with the singular value decomposition algorithm. The relationship between $\boldsymbol{t}$ and $\boldsymbol{u}$ is assumed to be linear:

(15) $\boldsymbol{u} = b\,\boldsymbol{t} + \boldsymbol{e}$

Let $\boldsymbol{p} = \boldsymbol{X}^{\mathrm{T}}\boldsymbol{t} / (\boldsymbol{t}^{\mathrm{T}}\boldsymbol{t})$; then the contributions of $\boldsymbol{t}$ and $\boldsymbol{u}$ are deflated from $\boldsymbol{X}$ and $\boldsymbol{r}$, respectively:

(16) $\boldsymbol{X} \leftarrow \boldsymbol{X} - \boldsymbol{t}\boldsymbol{p}^{\mathrm{T}}$

(17) $\boldsymbol{r} \leftarrow \boldsymbol{r} - b\,\boldsymbol{t}\,\boldsymbol{c}^{\mathrm{T}}$
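The extract-deflate loop of equations (14)-(17) can be sketched for a univariate response as the classical PLS1 (NIPALS) iteration; this is a generic textbook implementation, not the authors' code:

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal PLS1 (NIPALS) for a univariate response: extract latent
    variables t = X w, deflate X and y, and map the inner coefficients
    back to regression coefficients in the original predictor space."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, b = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)           # weight vector (unit norm)
        t = X @ w                        # explanatory latent variable
        tt = t @ t
        p = X.T @ t / tt                 # loading vector
        bk = (y @ t) / tt                # inner (linear) coefficient
        X = X - np.outer(t, p)           # deflation of the predictors
        y = y - bk * t                   # deflation of the response
        W.append(w); P.append(p); b.append(bk)
    W, P, b = np.array(W).T, np.array(P).T, np.array(b)
    # Coefficients in the original space: beta = W (P^T W)^{-1} b
    return W @ np.linalg.solve(P.T @ W, b)
```

With as many components as predictors, PLS1 reproduces the OLS solution; with fewer, it regularizes the fit, which is what makes it robust under small sample sizes.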
After $K$ iterations, the regression model can be built by using the relationship of equation (18),

(18) $\hat{\boldsymbol{r}} = \boldsymbol{T}\boldsymbol{b} = \boldsymbol{X}\boldsymbol{\beta}, \qquad \boldsymbol{\beta} = \boldsymbol{W}\left(\boldsymbol{P}^{\mathrm{T}}\boldsymbol{W}\right)^{-1}\boldsymbol{b}$

where $\boldsymbol{T} = [\boldsymbol{t}_1, \ldots, \boldsymbol{t}_K]$, $\boldsymbol{W} = [\boldsymbol{w}_1, \ldots, \boldsymbol{w}_K]$, $\boldsymbol{P} = [\boldsymbol{p}_1, \ldots, \boldsymbol{p}_K]$ and $\boldsymbol{b} = (b_1, \ldots, b_K)^{\mathrm{T}}$. In most of the PLSR literature, model selection (i.e. selection of the optimal number of iterations) is performed by using the following leave-one-out cross-validation error:
(19) $\varepsilon_{\mathrm{LOO}} = \dfrac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_{(-i)} \right)^2$, where $\hat{y}_{(-i)}$ is the prediction at the $i$th sample of a model trained without it.
When $\varepsilon_{\mathrm{LOO}}$ is lower than a prescribed threshold, the iteration is terminated. However, for PCE, this approach is not only very time-consuming, but also has difficulties in controlling the precision of the model. For OLSR, it can be proven that

(20) $\varepsilon_{\mathrm{LOO}} = \dfrac{1}{N} \sum_{i=1}^{N} \left( \dfrac{y_i - \hat{y}_i}{1 - h_i} \right)^2$

where $h_i$ is the $i$th diagonal element of the matrix $\boldsymbol{H} = \boldsymbol{\Psi} (\boldsymbol{\Psi}^{\mathrm{T}} \boldsymbol{\Psi})^{-1} \boldsymbol{\Psi}^{\mathrm{T}}$. Similarly, for PLSR, let $h_i$ be the $i$th diagonal element of the matrix $\boldsymbol{T} (\boldsymbol{T}^{\mathrm{T}} \boldsymbol{T})^{-1} \boldsymbol{T}^{\mathrm{T}}$; then a pseudo leave-one-out cross-validation error is obtained. This is an approximate expression, since the score vectors change when one sample is removed from the original training set, leading to changes of the matrix. The error can be normalized by dividing it by the sample variance. However, this normalized error may be misleading [Blatman2009], hence the following pseudo cross-validation error is introduced for model selection:
(21) 
The iteration is terminated when this error reaches its minimum, and the corresponding pseudo cross-validation error is recorded.
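The closed-form identity of equation (20) is easy to verify numerically for OLSR; the sketch below (toy data) checks it against a brute-force loop that actually refits with each sample removed:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 3
X = rng.standard_normal((n, m))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

# Closed form: e_i = (y_i - yhat_i) / (1 - h_i), h_i from the hat matrix.
H = X @ np.linalg.pinv(X)                # hat matrix Psi (Psi^T Psi)^-1 Psi^T
e_loo = (y - H @ y) / (1.0 - np.diag(H))

# Brute force: refit the OLS model with sample i left out.
e_brute = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    beta_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    e_brute[i] = y[i] - X[i] @ beta_i
```

The two vectors agree to machine precision, so the leave-one-out error of equation (20) costs a single fit instead of $N$ refits.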
Latent variables of the other second order sub-blocks are sequentially obtained by performing PLSR between the current response residual and the corresponding predictor sub-block. In the process above, latent variables are extracted from level 0 and obtained at level 1. To obtain the latent variables which capture the probabilistic characteristics of the univariate polynomials, PLSR is performed sequentially between the response residual and the level-1 latent variables, deriving the latent variables at level 2. Meanwhile, the corresponding pseudo cross-validation errors are recorded. The optimal number of second order sub-blocks (i.e. the optimal nonlinearity degree) involved in the regression model is selected as the index of the minimum of these errors. The process generating variables at level 2 from level 1 is called a hierarchical operation, hence the name first order hierarchical partial least squares regression. The steps of FOHPLSR are summarized in Algorithm 1.
The optimal order of the polynomials which compose the $i$th order component functions of HDMR is estimated by sequentially performing FOHPLSR on the $i$th first order sub-block. To determine the optimal interaction degree of the regression model, PLSR is performed at level 2 and the corresponding latent variables are obtained at level 3. Meanwhile, the corresponding pseudo cross-validation errors are recorded. The optimal interaction degree is selected as the index of the minimum of these errors. Finally, the expansion coefficient vector can be computed by using equation (18) from the top level down, level by level. The whole process contains two levels of hierarchical operation, hence the resulting method is called second order hierarchical partial least squares regression-polynomial chaos expansion (SOHPLSR-PCE), which is summarized as Algorithm 2. The SOHPLSR-PCE algorithm is capable of group selection (i.e. selecting a group of polynomials with similar importance) by extracting latent variables, which is effective for dealing with high-dimensional expansions. In this respect, the SOHPLSR-PCE has an advantage over approaches such as stepwise regression and least angle regression, which select terms individually. Meanwhile, the latent variables can be detected on the fly (i.e. generated sequentially rather than simultaneously), leading to dramatic savings of computer memory.
3.2 Sensitivity analysis
After obtaining the regression coefficient vector $\hat{\boldsymbol{a}}$, similarly to the work of [Sudret2008], variance-based global sensitivity analysis can be performed by a simple post-processing of $\hat{\boldsymbol{a}}$, as expressed in equations (22) and (23):
(22) $S_i = \dfrac{1}{D} \sum_{\boldsymbol{\alpha} \in \mathcal{I}_i} a_{\boldsymbol{\alpha}}^2 \, E\left[\Psi_{\boldsymbol{\alpha}}^2\right]$

(23) $S_i^{T} = \dfrac{1}{D} \sum_{\boldsymbol{\alpha} :\, \alpha_i > 0} a_{\boldsymbol{\alpha}}^2 \, E\left[\Psi_{\boldsymbol{\alpha}}^2\right]$
where $S_i$ and $S_i^T$ are called the main and total Sobol indices, respectively, and $D = \sum_{\boldsymbol{\alpha} \ne \boldsymbol{0}} a_{\boldsymbol{\alpha}}^2 E[\Psi_{\boldsymbol{\alpha}}^2]$ is the total variance.
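This post-processing can be sketched as follows, using $E[He_k^2] = k!$ for probabilists' Hermite polynomials (the multi-index data structure is ours):

```python
import numpy as np
from math import factorial

def sobol_from_pce(alphas, coeffs):
    """Main and total Sobol indices from PCE coefficients.
    alphas: multi-indices (constant term excluded); coeffs: matching
    coefficients. The partial variance of a term is a^2 * prod(alpha_k!)."""
    dim = len(alphas[0])
    var_terms = [c**2 * np.prod([float(factorial(a)) for a in alpha])
                 for alpha, c in zip(alphas, coeffs)]
    D = sum(var_terms)                       # total variance
    S_main, S_total = np.zeros(dim), np.zeros(dim)
    for v, alpha in zip(var_terms, alphas):
        active = [k for k, a in enumerate(alpha) if a > 0]
        for k in active:
            S_total[k] += v                  # any term involving variable k
        if len(active) == 1:
            S_main[active[0]] += v           # terms involving variable k only
    return S_main / D, S_total / D
```

For instance, a two-variable expansion with unit coefficients on $He_1(\xi_1)$, $He_1(\xi_2)$ and $He_1(\xi_1)He_1(\xi_2)$ gives $S_i = 1/3$ and $S_i^T = 2/3$ for both variables.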
3.3 Reliability analysis
By retaining the input variables whose total Sobol indices are larger than a prescribed threshold, the PCE metamodel can be reconstructed, either from the former samples using OLSR when the cardinality of the polynomial set is smaller than the sample size, or by using the algorithm introduced above. The failure probability can then be estimated with the new metamodel as in equation (24),

(24) $P_f = \displaystyle\int_{g_r(\boldsymbol{\xi}_r) \le 0} f_{\boldsymbol{\xi}_r}(\boldsymbol{\xi}_r) \, \mathrm{d}\boldsymbol{\xi}_r$

where $\boldsymbol{\xi}_r$ is the set of retained input variables, $g_r$ is the limit state function of the reconstructed metamodel and $f_{\boldsymbol{\xi}_r}$ is the joint probability density function of $\boldsymbol{\xi}_r$.
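Once the metamodel depends only on the few retained variables, the integral of equation (24) can be estimated cheaply by MCS on the metamodel. The limit state below is a hypothetical linear stand-in (our choice), used because its exact failure probability $\Phi(-3/\sqrt{2}) \approx 1.69 \times 10^{-2}$ is available for checking:

```python
import numpy as np

rng = np.random.default_rng(2)

def g(xi):
    """Hypothetical reduced limit state in two retained standard normal
    variables; failure corresponds to g(xi) < 0."""
    return 3.0 - xi[:, 0] - xi[:, 1]

# Crude MCS on the (cheap) metamodel: Pf = P[g(xi) < 0].
xi = rng.standard_normal((10**6, 2))
pf_hat = np.mean(g(xi) < 0.0)
```

Because each call evaluates the metamodel rather than the finite element model, sample sizes of $10^6$ or more are affordable even for small failure probabilities.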
4 Case Studies
Three different structures are employed as examples to examine the accuracy and efficiency of the SOHPLSR-PCE. The finite element models of the structures are first built with the MATLAB finite element toolbox [Kattan2008]. Then the computing performance of the SOHPLSR-PCE is compared with that of the OLSR-PCE. For the latter, the sample sizes are limited to the largest cases that our desktop computer can manage.
4.1 Simply supported beam
Figure 2
shows a simply supported beam subjected to a uniformly distributed load.
The beam has length $L$ and moment of inertia $I$, and the intensity of the distributed load is $q$. The elastic modulus of the beam is represented with a non-Gaussian random field as in equation (25),
(25) 
where $G(x)$ is a homogeneous Gaussian random field whose correlation coefficient function is

(26)

with a prescribed correlation length. $G(x)$ is discretized with the first 40 components of its Karhunen-Loève expansion. The mean and coefficient of variation of the elastic modulus are prescribed. The quantity of interest is the vertical midspan displacement, denoted as $\Delta$. The beam is discretized with 100 elements of equal length. The failure event is defined as $\Delta$ exceeding 0.012 m.
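The field discretization can be sketched with a discrete Karhunen-Loève expansion, i.e. an eigendecomposition of the covariance matrix on the finite element grid. The exponential correlation model and the numerical values below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Grid, truncation order and correlation length (illustrative values).
n_nodes, n_terms, l_c = 101, 40, 2.0
x = np.linspace(0.0, 10.0, n_nodes)

# Covariance matrix of the underlying Gaussian field, assuming an
# exponential correlation rho(dx) = exp(-|dx| / l_c).
C = np.exp(-np.abs(x[:, None] - x[None, :]) / l_c)

# Discrete KL expansion: keep the n_terms largest eigenpairs.
lam, phi = np.linalg.eigh(C)
order = np.argsort(lam)[::-1]
lam, phi = lam[order][:n_terms], phi[:, order][:, :n_terms]

# One realisation of the Gaussian field from independent standard normals;
# a lognormal modulus field would then be E(x) = exp(mu_g + sig_g * G).
rng = np.random.default_rng(3)
xi = rng.standard_normal(n_terms)
G = phi @ (np.sqrt(lam) * xi)
```

The 40 standard normal variables $\xi_k$ of the truncated expansion are exactly the stochastic inputs of the PCE in this example.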
Step 1: Construction of polynomial chaos expansion
According to the steps introduced in Section 3, the highest order of polynomials is first selected as $p = 3$, and the corresponding number of polynomials is $P = 12340$. Define the expanded sample ratio $r_e = N/P$, which is selected as 0.05, 0.2, 0.6, 1.2 and 2, with corresponding sample sizes $N$ = 617, 2468, 7403, 14808 and 24680, respectively. For each $r_e$, a metamodel is built with the OLSR-PCE. To illustrate the fast convergence of the SOHPLSR-PCE compared with the OLSR-PCE, in another group of experiments we define the raw sample ratio $r_0 = N/n$ = 2, 4, 6, 8, 10 and 12 ($r_e$ = 0.0065, 0.0130, 0.0194, 0.0259, 0.0324 and 0.0389; $N$ = 80, 160, 240, 320, 400 and 480), and build a metamodel with the SOHPLSR-PCE under each $r_0$.
Step 2: Global sensitivity analysis
The reference solution of each Sobol index is obtained by using MCS. Comparisons of the main and total Sobol indices are illustrated in Figures 3 and 4.
It can be seen from Figures 3 and 4 that the randomness of variables 1 and 3 has a significant impact on the randomness of the model output, while the impact of the other variables can be ignored. Also, the impact of the interactions between different variables is very weak. For the OLSR-PCE, the main Sobol indices of the important variables are inaccurate when $r_e$ is very low, such as 0.05. The solution can qualitatively reflect the distribution of the main Sobol indices (i.e. produce the right ranking) only at larger sample ratios, and to describe the distribution accurately, $r_e$ must be higher than 1.2. The accuracy of the total Sobol indices increases with the sample size more slowly than that of the main Sobol indices, because the former depend more on the coefficients of the cross terms. The accuracy of the individual coefficients cannot be ensured for the OLSR-PCE under small sample ratios, leading to an overestimation of the impact of the cross terms, which in turn leads to wrong rankings of the importance of the input variables. On the contrary, for both the main and total Sobol indices, the SOHPLSR-PCE consistently yields accurate results under all sample ratios, even at the very low expanded sample ratio of 0.0065 ($N$ = 80); hence its computational efficiency is 185 times that of the OLSR-PCE. This is because the optimal interaction degree of the expansion and the corresponding nonlinearity degrees are automatically selected by the proposed method, and multicollinearity is dramatically alleviated by using PLSR. For a more detailed comparison, we compute the distributions of the orders of magnitude of the relative errors of the main and total Sobol indices under each sample ratio using the SOHPLSR-PCE, and under the expanded sample ratio 0.6 using the OLSR-PCE, as shown in Figure 5.
Because the Sobol indices of some variables are very close to zero, the results computed with the SOHPLSR-PCE do not lead to wrong judgments of the important variables and their rankings, even though the relative errors of the Sobol indices of these variables are large. For the main Sobol indices, the proposed method does not have a significant advantage over the traditional one, since the latter already reaches a high level of accuracy except for variables 1 and 3. For the total Sobol indices, however, the relative errors obtained with the SOHPLSR-PCE are 1-2 orders of magnitude lower than those of the OLSR-PCE, whose computational costs are 15-90 times those of the former. Therefore, for global sensitivity analysis, the SOHPLSR-PCE outperforms the traditional method in terms of both computational cost and accuracy.
To uncover the hierarchical structure of the regression model in detail, the numbers of latent variables at different levels are listed in Table 1.
Number of latent variables  
$r_e$  (1,1,1)  (1,1,2)  (1,1,3)  (1,2,2)  (1,2,3)  (1,3,3)  (2,1)  (2,2)  (2,3)  (3)  
0.0065  7  5  3  35  1  1  6  14  1  10 
0.0130  5  3  2  58  3  0  3  21  0  11 
0.0194  4  3  2  67  1  0  3  16  0  8 
0.0259  3  3  2  86  1  1  3  16  1  9 
0.0324  3  3  2  108  1  0  2  17  0  7 
0.0389  4  3  1  138  1  0  2  19  0  7 
In this table, (1,2,3) denotes the latent variables extracted at level 1 from the third order polynomials in the second order HDMR component, (2,1) denotes the latent variables extracted at level 2 from the first order HDMR component, (3) denotes the latent variables extracted at level 3, and so on. It can be seen in Table 1 that the first order HDMR component functions can be represented with a sum of 3-7 linear combinations of first order polynomials, 3-5 linear combinations of second order univariate polynomials and 1-3 linear combinations of third order univariate polynomials. The total number of latent variables at level 1 for the first order component is 8-15, while the univariate polynomials with orders 1-3 number 120. Similar comments can be made for the other orders of HDMR component functions. Thus, the regression model becomes more parsimonious with increasing variable level, which fits the fundamental assumption of PLSR. The optimal interaction degree is 2 or 3 and the optimal order of polynomials under each interaction degree is 3. The numbers are not deterministic due to the randomness of each experiment. The optimal numbers of latent variables at the different variable levels are nearly the same except for the bivariate second order polynomials. However, this is unimportant, since this part has little impact on the variability of the model output.
Step 3: Reliability analysis
The threshold for screening the active random inputs is set to 0.018. The PCE metamodel is then reconstructed with the retained variables based on the results of the global sensitivity analysis. The reference failure probability is 0.0023, obtained by using MCS. The relative errors under different sample ratios are illustrated in Figure 6.
It can be seen in Figure 6 that the SOHPLSR-PCE provides rather stable results, with relative errors of less than 10%, even at the smallest tested sample ratio. Since the proposed method can compute the Sobol indices with high accuracy and low computational cost, the retained variables 1 and 3 are effectively detected; thus the effective stochastic dimension is reduced to two, and the metamodel can easily be reconstructed with OLSR. On the contrary, the OLSR-based method tends to overestimate the contribution of the cross terms under small sample sizes, and thus overestimates the total Sobol indices, so that more unimportant variables are retained in the reconstructed metamodel, which finally leads to overfitting. Therefore, in this case, the OLSR-PCE requires $r_e$ = 1.2 to attain acceptable accuracy and stability, and the SOHPLSR-PCE outperforms it with a computational gain factor of 185.
4.2 Plane truss
Figure 7 shows a plane truss subjected to vertical loads.
Each bar has a random elastic modulus and a diameter of 20 mm. All the inputs are independent random variables and their distribution parameters are listed in Table 2. The quantity of interest is the vertical displacement of node 18, denoted as $\Delta$. The failure event is defined as $\Delta$ exceeding 0.210 m.
Variables  Distribution type  Mean  Standard deviation 

Lognormal  
Extreme 1  
Extreme 1  
Extreme 1  
Extreme 1 
Step 1: Construction of polynomial chaos expansion
According to the steps introduced in Section 3, the highest order of polynomials is first selected as $p = 3$, and the corresponding number of polynomials is $P = 12340$. First, $r_e$ is selected as 0.05, 0.2, 0.6, 1.2 and 2 ($N$ = 617, 2468, 7403, 14808 and 24680), respectively. For each $r_e$, the OLSR-PCE is used to build the metamodel. To illustrate the fast convergence of the SOHPLSR-PCE compared with the OLSR-PCE, in another group of experiments $r_0$ is selected as 2, 4, 6, 8, 10 and 12 ($r_e$ = 0.0065, 0.0130, 0.0194, 0.0259, 0.0324 and 0.0389; $N$ = 80, 160, 240, 320, 400 and 480), respectively. The metamodel is built with the SOHPLSR-PCE under each $r_0$.
Step 2: Global sensitivity analysis
The reference solution of each Sobol index is obtained by using MCS. Comparisons of the main and total Sobol indices are illustrated in Figures 8 and 9.
It can be seen that variables 1, 3, 5, 7, 9, 11, 13, 15 and 35-40 individually have a significant impact on the uncertainty of the model output, but their interactions have a weak impact. Since the OLSR-PCE tends to overestimate the contribution of the cross terms, the main Sobol indices are generally smaller, and the total Sobol indices generally greater, than the reference solution. The discrepancies decrease as the sample ratio increases, and results with acceptable accuracy are obtained only at the largest sample ratios. However, accuracy can be considerably improved by using the SOHPLSR-PCE with noticeably smaller sample sizes. As in the first example, it is acceptable that large errors occur for some variables, because their impact is very weak and the errors do not affect the screening and ranking of the important variables. As expected, the accuracy and stability of the results increase with $r_0$. A small raw sample ratio suffices to describe the distribution of the main and total Sobol indices on the whole, and a moderately larger one yields highly accurate results. Therefore, the SOHPLSR-PCE outperforms the OLSR-PCE in terms of the required number of model evaluations.
Step 3: Reliability analysis
The threshold for screening the important random inputs is set to 0.005. Numerical experiments show that the metamodels reconstructed under the original sample ratios cannot provide results with satisfactory stability. To get more accurate and stable results, $r_0$ is increased to 15, 22.5, 30, 37.5 and 45 ($r_e$ = 0.0486, 0.0729, 0.0972, 0.1216 and 0.1459; $N$ = 600, 900, 1200, 1500 and 1800), respectively. The failure probabilities are then computed with the algorithm introduced in Section 3 under the new sample sets. The reference failure probability is 0.0052, obtained by using MCS. The relative errors under different sample ratios are illustrated in Figure 10.
The global sensitivity analysis above shows that the proposed method can exactly detect the important inputs with low computational cost. However, the number of important variables is around 15, which is much larger than in the first example. It is therefore difficult to ensure the stability and accuracy of the results computed with the metamodel constructed under the original sample ratios, but the accuracy and stability of the SOHPLSR-PCE can be improved by enriching the sample set. As illustrated in Figure 10, the errors converge to less than 10% when $N$ reaches 600. Although the efficiency decreases compared with the first example, it is still 12 times that of the OLSR-PCE, which needs $r_e$ = 0.6 ($N$ = 7403) to maintain accuracy.
4.3 Spatial truss
Figure 11 shows a spatial truss subjected to horizontal loads.
Each bar (with its node numbers listed in Table 3) has a random elastic modulus and a diameter of 14 mm. The plan dimensions of each bay are set to 1 m. All the inputs are independent random variables and their distribution parameters are listed in Table 4. The quantity of interest is the maximum horizontal displacement of the top nodes, denoted as $\Delta$. The failure event is defined as $\Delta$ exceeding 0.004 m.
Elements  Node 1  Node 2  Elements  Node 1  Node 2 

1  1  5  19  5  9 
2  5  4  20  9  8 
3  5  2  21  9  6 
4  2  6  22  6  10 
5  6  1  23  10  5 
6  6  3  24  10  7 
7  3  7  25  7  11 
8  7  2  26  11  6 
9  7  4  27  11  8 
10  4  8  28  8  12 
11  8  3  29  12  7 
12  8  1  30  12  5 
13  5  6  31  9  10 
14  6  7  32  10  11 
15  7  8  33  11  12 
16  8  5  34  12  9 
17  5  7  35  9  11 
18  6  8  36  10  12 
Variables  Distribution type  Mean  Standard deviation 

Lognormal  
Extreme 1 
Step 1: Construction of polynomial chaos expansion
In the last example, we compare the effects of selecting different values of $p$. Values of $p$ are selected as 2 and 3, and the corresponding values of $P$ are 860 and 12340, respectively. Following the steps introduced in Section 3, in each case metamodels are built by using the OLSR-PCE with $r_e$ = 0.05, 0.2, 0.6, 1.2 and 2 ($N$ = 43, 172, 516, 1032 and 1720 for $p$ = 2; $N$ = 617, 2468, 7403, 14808 and 24680 for $p$ = 3) and the SOHPLSR-PCE with $r_0$ = 2, 4, 6, 8, 10 and 12 ($N$ = 80, 160, 240, 320, 400 and 480; $r_e$ = 0.0930, 0.1860, 0.2791, 0.3721, 0.4651 and 0.5581 for $p$ = 2; $r_e$ = 0.0065, 0.0130, 0.0194, 0.0259, 0.0324 and 0.0389 for $p$ = 3), respectively.
Step 2: Global sensitivity analysis
The reference solution of each Sobol index is obtained by using MCS. Comparisons of the main and total Sobol indices are illustrated in Figures 12 and 13.
It can be seen that variables 1, 3, 4, 5, 7, 9, 10, 11 and 37-40 individually have a significant impact on the uncertainty of the model output, but their interactions have a weak impact. For the OLSR, considerably accurate results for both the main and total Sobol indices can be obtained with $p$ = 2, at a computational cost that is only a small fraction of that for $p$ = 3. For the SOHPLSR, satisfactory results for both indices can likewise be obtained with $p$ = 2 at a smaller cost than with $p$ = 3. In this example, the third-order terms contribute little to the variance of the model output; however, to identify their contribution with acceptable accuracy, more samples are needed when $p$ = 3. For both choices of $p$, the computational efficiency of the SOHPLSR is substantially higher than that of the OLSR.
Step 3: Reliability analysis
The reference result of failure probability is 0.0051, which is obtained by using MCS with samples. The threshold for screening the important random inputs is set as 0.01. Relative errors of the failure probability computed by using OLSR under and are illustrated in Figure 14.
It can be seen that the failure probabilities computed by OLSR under fail to converge to the reference result, since the tail of the probability distribution of the model output is more sensitive to higher-order statistics. Thus, to accurately compute the failure probability, higher values of and are needed. For SOHPLSR under , three more groups of experiments (=15, 20, 25; = 600, 800, 1000; = 0.0486, 0.0648, 0.0810) are added. The results are also illustrated in Figure 14. It can be seen that the failure probabilities computed by the proposed method converge to the reference result when reaches 600, indicating that the computational gain is compared to the OLSR-based counterpart.
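Reference failure probabilities like the one above are obtained by counting the fraction of Monte Carlo samples that fall in the failure region. A crude MCS sketch, where the linear limit-state function and the sample size are illustrative assumptions rather than the paper's truss model:

```python
import numpy as np

def mcs_failure_probability(g, dim, n_samples, seed=0):
    """Crude Monte Carlo estimate of Pf = P(g(X) <= 0) for standard
    normal inputs X, together with the estimator's coefficient of variation."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_samples, dim))
    fails = g(X) <= 0.0          # indicator of the failure region
    pf = fails.mean()
    # CoV of the MCS estimator: sqrt((1 - pf) / (pf * N))
    cov = np.sqrt((1.0 - pf) / (pf * n_samples)) if pf > 0 else np.inf
    return pf, cov

# Hypothetical limit state: failure when 3 - x0 - x1 <= 0
g = lambda X: 3.0 - X[:, 0] - X[:, 1]
pf, cov = mcs_failure_probability(g, dim=2, n_samples=200_000)
```

For this limit state the exact value is 1 - Phi(3/sqrt(2)) ≈ 0.017; the coefficient of variation makes explicit why small failure probabilities require large sample sizes, which is what motivates the metamodel-based approach in the paper.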
5 Conclusions
This paper develops a novel non-intrusive method called SOHPLSR-PCE for global sensitivity and reliability analyses of high-dimensional models. The SOHPLSR-PCE sheds new light on PCE by using hierarchical modeling and latent variable extraction. A state-of-the-art regression technique named PLSR is introduced to compute the latent factors that capture the most probabilistic information of polynomials at different variable levels. From the perspective of HDMR, the SOHPLSR-PCE automatically estimates the optimal interaction degree and the corresponding nonlinearity degrees. Three finite element models with different structural types and effective stochastic dimensions are employed to compare the relative performance of the proposed method and the traditional counterpart. The results demonstrate that the proposed method has the following properties: (1) For models with low effective stochastic dimensions (e.g. 2), the computational efficiencies of both the global sensitivity indices and the failure probabilities can be considerably high (e.g. 2 orders of magnitude higher than the traditional counterpart) without additional computational cost; (2) For models with moderate effective stochastic dimensions (e.g. 12–15), if only global sensitivity indices are needed, the computational efficiency is still rather high (e.g. tens of times higher than the traditional counterpart); when failure probabilities are needed, although more samples are required to maintain accuracy, the computational efficiency is still much higher (e.g. ten times) than that of the traditional counterpart; (3) For the selection of , it is feasible to select a low value (e.g. 2) if only global sensitivity indices are needed, whereas a higher value (e.g. 3) should be selected when reliability is to be analyzed.
In summary, the proposed method not only has the potential to improve the computational efficiency of global sensitivity and reliability analyses, but is also promising for uncovering the latent hierarchical low-dimensional structure of the models.
The proposed method has so far been applied to models with univariate output and weak nonlinearities. Advanced methods for models with multivariate outputs and strong nonlinearities are worth future investigation.
Acknowledgments
This research was supported by the National Natural Science Foundation of China (NSFC, 51308158), and the China Postdoctoral Science Foundation (CPSF, 2013M541390), which are gratefully acknowledged by the authors. The valuable suggestions provided by Professor GuangChun Zhou are also gratefully acknowledged.
References
 (1) G. Blatman, B. Sudret, Efficient computation of global sensitivity indices using sparse polynomial chaos expansions, Reliability Engineering & System Safety 95 (11) (2010) 1216–1229.
 (2) A. Saltelli, P. Annoni, I. Azzini, F. Campolongo, M. Ratto, S. Tarantola, Variance based sensitivity analysis of model output. design and estimator for the total sensitivity index, Computer Physics Communications 181 (2) (2010) 259–270.
 (3) X. Zhang, M. D. Pandey, An effective approximation for variance-based global sensitivity analysis, Reliability Engineering & System Safety 121 (2014) 164–174.
 (4) E. Borgonovo, A new uncertainty importance measure, Reliability Engineering & System Safety 92 (6) (2007) 771–784.
 (5) G. Greegar, C. S. Manohar, Global response sensitivity analysis using probability distance measures and generalization of Sobol's analysis, Probabilistic Engineering Mechanics 41 (2015) 21–33.
 (6) I. M. Sobol, S. Kucherenko, Derivative-based global sensitivity measures and their link with global sensitivity indices, Mathematics & Computers in Simulation 79 (10) (2009) 3009–3017.
 (7) B. Sudret, C. V. Mai, Computing derivative-based global sensitivity measures using polynomial chaos expansions, Reliability Engineering & System Safety 134 (2015) 241–250.
 (8) S. K. Au, J. L. Beck, A new adaptive importance sampling scheme for reliability calculations, Structural safety 21 (2) (1999) 135–158.
 (9) H. Dai, H. Zhang, W. Wang, A support vector density-based importance sampling for reliability assessment, Reliability Engineering & System Safety 106 (2012) 86–93.
 (10) H. Dai, H. Zhang, W. Wang, A new maximum entropy-based importance sampling for reliability analysis, Structural Safety 63 (2016) 71–80.
 (11) I. Papaioannou, W. Betz, K. Zwirglmaier, D. Straub, MCMC algorithms for subset simulation, Probabilistic Engineering Mechanics 41 (2015) 89–103.
 (12) K. Zuev, Subset simulation method for rare event estimation: An introduction, arXiv preprint arXiv:1505.03506.
 (13) P. S. Koutsourelakis, Reliability of structures in high dimensions, Part II: theoretical validation, Probabilistic Engineering Mechanics 19 (4) (2004) 419–423.
 (14) P. S. Koutsourelakis, H. J. Pradlwarter, G. I. Schueller, Reliability of structures in high dimensions, Part I: algorithms and applications, Probabilistic Engineering Mechanics 19 (4) (2004) 409–417.
 (15) K. Konakli, B. Sudret, Reliability analysis of high-dimensional models using low-rank tensor approximations, Probabilistic Engineering Mechanics 46 (2016) 18–36.
 (16) H. Dai, H. Zhang, W. Wang, A multiwavelet neural network-based response surface method for structural reliability analysis, Computer-Aided Civil and Infrastructure Engineering 30 (2) (2015) 151–162.
 (17) H. Dai, Z. Cao, A wavelet support vector machine-based neural network metamodel for structural reliability assessment, Computer-Aided Civil and Infrastructure Engineering 32 (4) (2017) 344–357.
 (18) N. Wiener, The homogeneous chaos, American Journal of Mathematics 60 (4) (1938) 897–936.
 (19) R. G. Ghanem, P. D. Spanos, Stochastic finite elements: a spectral approach, Courier Corporation, 2003.
 (20) G. Blatman, Adaptive sparse polynomial chaos expansions for uncertainty propagation and sensitivity analysis, Ph.D. thesis, Clermont-Ferrand 2 (2009).
 (21) G. Blatman, B. Sudret, An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis, Probabilistic Engineering Mechanics 25 (2) (2010) 183–197.
 (22) S. Abraham, M. Raisee, G. Ghorbaniasl, F. Contino, C. Lacor, A robust and efficient stepwise regression method for building sparse polynomial chaos expansions, Journal of Computational Physics 332 (2017) 461–474.
 (23) Q. Liu, X. Zhang, X. Huang, A sparse surrogate model for structural reliability analysis based on the generalized polynomial chaos expansion, Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability. doi:10.1177/1748006X18804047.
 (24) G. Blatman, B. Sudret, Adaptive sparse polynomial chaos expansion based on least angle regression, Journal of Computational Physics 230 (6) (2011) 2345–2367.
 (25) K. Cheng, Z. Lu, Adaptive sparse polynomial chaos expansions for global sensitivity analysis based on support vector regression, Computers & Structures 194 (2018) 86–96.
 (26) K. Cheng, Z. Lu, Sparse polynomial chaos expansion based on D-MORPH regression, Applied Mathematics and Computation 323 (2018) 17–30.
 (27) J. D. Jakeman, A. Narayan, T. Zhou, A generalized sampling and preconditioning scheme for sparse approximation of polynomial chaos expansions, SIAM Journal on Scientific Computing 39 (3) (2017) A1114–A1144.

 (28) R. Rosipal, N. Krämer, Overview and recent advances in partial least squares, in: Subspace, Latent Structure and Feature Selection, Springer, 2006, pp. 34–51.
 (29) R. Rosipal, Nonlinear partial least squares: An overview, in: Chemoinformatics and Advanced Machine Learning Perspectives: Complex Computational Methods and Collaborative Techniques, 2010, pp. 169–189.
 (30) W. Zhao, L. Bu, Global sensitivity analysis with a hierarchical sparse metamodeling method, Mechanical Systems and Signal Processing 115 (2019) 769–781.
 (31) C. Soize, R. Ghanem, Physical systems with random uncertainties: chaos representations with arbitrary probability measure, SIAM Journal on Scientific Computing 26 (2) (2004) 395–410.
 (32) S. Rahman, Wiener-Hermite polynomial expansion for multivariate Gaussian probability measures, Journal of Mathematical Analysis and Applications 454 (1) (2017) 303–334.
 (33) R. H. Cameron, W. T. Martin, The orthogonal development of nonlinear functionals in series of Fourier-Hermite functionals, Annals of Mathematics 48 (1947) 385–392.
 (34) S. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions, in: Soviet Math. Dokl., Vol. 4, 1963, pp. 240–243.
 (35) T. Mehmood, K. H. Liland, L. Snipen, S. Sæbø, A review of variable selection methods in partial least squares regression, Chemometrics & Intelligent Laboratory Systems 118 (16) (2012) 62–69.
 (36) K.-A. Lê Cao, D. Rossouw, C. Robert-Granié, P. Besse, A sparse PLS for variable selection when integrating omics data, Statistical Applications in Genetics and Molecular Biology 7 (1) (2008) Article 35.
 (37) H. Rabitz, Ö. F. Aliş, General foundations of high-dimensional model representations, Journal of Mathematical Chemistry 25 (2–3) (1999) 197–233.
 (38) B. Sudret, Global sensitivity analysis using polynomial chaos expansions, Reliability Engineering & System Safety 93 (7) (2008) 964–979.
 (39) S. Rahman, A polynomial dimensional decomposition for stochastic computing, International Journal for Numerical Methods in Engineering 76 (13) (2008) 2091–2116.
 (40) P. I. Kattan, MATLAB Guide to Finite Elements, Springer Berlin Heidelberg, 2008.