1 Introduction
Structural reliability analysis aims at computing the probability of failure of a system with respect to some performance criterion in the presence of uncertainty in its structural and operating parameters. Such uncertainty can be modelled by a random vector $\boldsymbol{X}$ with prescribed joint probability density function $f_{\boldsymbol{X}}$. The limit-state function $g$ is defined over the support of $\boldsymbol{X}$ such that $g(\boldsymbol{x}) \le 0$ defines the failure domain $\mathcal{D}_f$, while $g(\boldsymbol{x}) > 0$ defines the safe domain. The limit-state surface, implicitly defined by $g(\boldsymbol{x}) = 0$, lies at the boundary between the two domains. The probability of failure of such a system can be defined as (Melchers, 1999; Lemaire, 2009):

$$P_f = \int_{\mathcal{D}_f} f_{\boldsymbol{X}}(\boldsymbol{x})\, \mathrm{d}\boldsymbol{x} = \int_{\{\boldsymbol{x}:\, g(\boldsymbol{x}) \le 0\}} f_{\boldsymbol{X}}(\boldsymbol{x})\, \mathrm{d}\boldsymbol{x} \quad (1)$$
A straightforward approach to computing the integral in Eq. (1) is Monte Carlo simulation (MCS). However, standard MCS often cannot be used with complex and computationally expensive engineering models, because of the large number of samples required to estimate small probabilities with acceptable accuracy (typically in the order of $10^{k+2}$ samples for $P_f \approx 10^{-k}$). Well-known methods based on a local approximation of the limit-state function close to the failure domain, such as FORM (Hasofer and Lind, 1974) and SORM (Rackwitz and Fiessler, 1978), can be more efficient, yet they are usually based on linearisation and tend to fail in real-case scenarios with highly non-linear structural models.
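When the limit-state function is cheap to evaluate, the crude MCS estimator of Eq. (1) and its coefficient of variation can be sketched in a few lines. The linear limit state below is purely illustrative, not one of the engineering models discussed in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_failure_probability(g, x):
    """Crude MCS estimate of P_f = P[g(X) <= 0] and its CoV."""
    failed = g(x) <= 0
    pf = failed.mean()
    n = failed.size
    # CoV of the MCS estimator: sqrt((1 - pf) / (N * pf))
    cov = np.sqrt((1.0 - pf) / (n * pf)) if pf > 0 else np.inf
    return pf, cov

# Hypothetical linear limit state: failure when a standard normal exceeds 3
g = lambda x: 3.0 - x[:, 0]
x = rng.standard_normal((10**6, 1))
pf, cov = mc_failure_probability(g, x)   # pf close to 1 - Phi(3) ~ 1.35e-3
```

The CoV formula makes the cost problem explicit: halving the CoV of the estimator requires four times as many model evaluations, which quickly becomes prohibitive for small $P_f$.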
In contrast, methods based on surrogate modelling have gradually gained momentum in the last few years. Due to the nature of the problem of estimating low probabilities, most recent methods combine active-learning greedy algorithms with Gaussian process surrogate models (Kriging). The earliest applications in this context were the efficient global reliability analysis (EGRA) method by Bichon et al. (2008, 2011) and the active learning reliability method combining Kriging and Monte Carlo simulation (AK-MCS) by Echard et al. (2011). More recently, Kriging has been employed to devise quasi-optimal importance densities in Dubourg et al. (2013); Dubourg and Sudret (2014). Amongst other variations, polynomial-chaos-based Kriging has also been used as an alternative metamodelling technique (Schöbi et al., 2016) to overcome some of the limitations of pure Kriging-based methods. Additional works on Kriging for structural reliability include extensions of the original AK-MCS algorithm to more advanced sampling techniques (Echard et al., 2013; Balesdent et al., 2013), to system reliability (Fauriat and Gayton, 2014) and to the exploration of multiple failure regions (Cadini et al., 2014).
Polynomial chaos expansions (PCE) (Ghanem and Spanos, 1991) are a well-established tool in the context of uncertainty quantification, with applications in uncertainty propagation (Xiu and Karniadakis, 2002), sensitivity analysis (Le Gratiet et al., 2017) and, to a lesser degree, structural reliability (Sudret and Der Kiureghian, 2002). While often considered an efficient surrogate modelling technique due to their global convergence behaviour, PCEs have only seldom been employed in reliability analysis (see, e.g. Notin et al. (2010)), due to their lack of accuracy in the tails of the model response distribution, which are essential in this field.
In addition, most activelearning approaches with surrogates require some form of local error estimate to adaptively enrich a small set of model evaluations close to the limit state surface. Krigingbased methods can rely on the Kriging variance for this task, but PCEs do not provide a natural equivalent.
In this paper, we leverage the properties of regression-based sparse PCE (Blatman and Sudret, 2011) to derive a local error estimator based on bootstrap resampling. We then use this estimator to construct an active-learning strategy that adaptively approximates the limit-state function with PCE, selecting new points through a misclassification-probability-based learning function at every iteration. The method is then showcased on a standard benchmark function representing a series system and on realistic structural engineering examples.
2 Methodology
2.1 Polynomial Chaos Expansions
Consider a finite-variance model $Y = \mathcal{M}(\boldsymbol{X})$ representing the response of some quantity of interest (QoI) to the random input parameters $\boldsymbol{X} = \{X_1, \dots, X_M\}$, modelled by a joint probability density function (PDF) $f_{\boldsymbol{X}}$. Also consider the functional inner product defined by:

$$\langle u, v \rangle = \int_{\mathcal{D}_{\boldsymbol{X}}} u(\boldsymbol{x})\, v(\boldsymbol{x})\, f_{\boldsymbol{X}}(\boldsymbol{x})\, \mathrm{d}\boldsymbol{x} \quad (2)$$
where $\mathcal{D}_{\boldsymbol{X}}$ represents the input domain. Under the assumption of independence of the input variables, that is $f_{\boldsymbol{X}}(\boldsymbol{x}) = \prod_{i=1}^{M} f_{X_i}(x_i)$, one can represent $Y$ as the following generalised polynomial chaos expansion (see, e.g. Ghanem and Spanos (1991); Xiu and Karniadakis (2002)):

$$Y = \mathcal{M}(\boldsymbol{X}) = \sum_{\boldsymbol{\alpha} \in \mathbb{N}^M} a_{\boldsymbol{\alpha}} \Psi_{\boldsymbol{\alpha}}(\boldsymbol{X}) \quad (3)$$

where the $a_{\boldsymbol{\alpha}}$ are real coefficients and $\boldsymbol{\alpha} = \{\alpha_1, \dots, \alpha_M\}$ is a multi-index that identifies the degree of the multivariate polynomial $\Psi_{\boldsymbol{\alpha}}$ in each of the input variables $X_i$:

$$\Psi_{\boldsymbol{\alpha}}(\boldsymbol{x}) = \prod_{i=1}^{M} \phi^{(i)}_{\alpha_i}(x_i) \quad (4)$$
Here $\phi^{(i)}_{\alpha_i}$ is a polynomial of degree $\alpha_i$ that belongs to the family of polynomials orthogonal w.r.t. the marginal PDF $f_{X_i}$. For more details on the construction of such polynomials for both standard and arbitrary distributions, the reader is referred to Xiu and Karniadakis (2002).
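As a quick numerical illustration of this orthogonality, using NumPy's built-in polynomial families: probabilists' Hermite polynomials pair with the standard normal PDF, and Legendre polynomials with the uniform PDF on $[-1, 1]$:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from numpy.polynomial import legendre as Leg

rng = np.random.default_rng(1)

# Probabilists' Hermite polynomials He_n are orthogonal w.r.t. the
# standard normal PDF: E[He_m(Z) He_n(Z)] = 0 for m != n.
z = rng.standard_normal(200_000)
he2 = He.hermeval(z, [0, 0, 1])        # He_2(z) = z^2 - 1
he3 = He.hermeval(z, [0, 0, 0, 1])     # He_3(z) = z^3 - 3z
inner_normal = np.mean(he2 * he3)      # Monte Carlo estimate, ~0

# Legendre polynomials P_n are orthogonal w.r.t. the uniform PDF on
# [-1, 1]; a 10-point Gauss-Legendre rule integrates P_2 * P_3 exactly.
x, w = Leg.leggauss(10)
p2 = Leg.legval(x, [0, 0, 1])
p3 = Leg.legval(x, [0, 0, 0, 1])
inner_uniform = 0.5 * np.sum(w * p2 * p3)  # exactly 0 up to round-off
```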
In the presence of a complex dependence structure between the input variables, it is always possible to construct isoprobabilistic transforms (e.g. Rosenblatt or Nataf transforms, see e.g. Lebrun and Dutfoy (2009)) to decorrelate the input variables prior to the expansion, even in the case of complex dependence modelled by vine copulas (Torre et al., 2017). For the sake of notational simplicity and without loss of generality, we will hereafter assume independent input variables.
In practical applications, the series expansion in Eq. (3) is traditionally truncated based on the maximal degree $p$ of the expansion, thus yielding a set of basis elements identified by the multi-indices $\mathcal{A} = \{\boldsymbol{\alpha} \in \mathbb{N}^M : |\boldsymbol{\alpha}| \le p\}$, with $|\boldsymbol{\alpha}| = \sum_{i=1}^{M} \alpha_i$, or by more advanced truncation schemes that favour sparsity, e.g. hyperbolic truncation (Blatman and Sudret, 2010a). The corresponding expansion coefficients can then be calculated efficiently via least-square analysis based on an existing sample $\mathcal{X} = \{\boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(N)}\}$ of the input random vector $\boldsymbol{X}$, known as the experimental design (ED), and the corresponding model responses, as follows:

$$\hat{\boldsymbol{a}} = \arg\min_{\boldsymbol{a} \in \mathbb{R}^{|\mathcal{A}|}} \frac{1}{N} \sum_{i=1}^{N} \left( \mathcal{M}(\boldsymbol{x}^{(i)}) - \sum_{\boldsymbol{\alpha} \in \mathcal{A}} a_{\boldsymbol{\alpha}} \Psi_{\boldsymbol{\alpha}}(\boldsymbol{x}^{(i)}) \right)^2 \quad (5)$$
When the number of unknown coefficients $|\mathcal{A}|$ is high (e.g. for high-dimensional inputs or high-degree expansions), regression strategies that favour sparsity are needed to avoid overfitting in the presence of a limited-size experimental design and to make the analysis feasible at all with a reasonable sample size $N$. Amongst them, least-angle regression (LARS, Efron et al. (2004)), based on a regularised version of Eq. (5), has proven very effective in tackling realistic engineering problems, even in relatively high dimension. In this paper, we adopt the fully degree-adaptive, sparse PCE based on hybrid LARS introduced in Blatman and Sudret (2011), as implemented in the UQLab Matlab software (Marelli and Sudret, 2014, 2017).
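For a single uniform input, the least-square problem of Eq. (5) on a Legendre basis reduces to an ordinary linear regression. The following minimal sketch (the cubic model is purely illustrative) recovers the exact Legendre coefficients of $x^3 = 0.6\,P_1(x) + 0.4\,P_3(x)$:

```python
import numpy as np
from numpy.polynomial import legendre as Leg

rng = np.random.default_rng(2)

def pce_fit_1d(x, y, degree):
    """Ordinary least-squares PCE coefficients on a Legendre basis
    (uniform input on [-1, 1]), cf. Eq. (5)."""
    # Design matrix Psi[i, j] = P_j(x_i)
    psi = Leg.legvander(x, degree)
    coeffs, *_ = np.linalg.lstsq(psi, y, rcond=None)
    return coeffs

# Experimental design and model evaluations
x_ed = rng.uniform(-1, 1, 50)
y_ed = x_ed**3
c = pce_fit_1d(x_ed, y_ed, degree=4)
# x^3 = 0.6 * P_1(x) + 0.4 * P_3(x) in the Legendre basis
```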
2.2 Bootstrapbased local error estimation in PCE
2.2.1 Bootstrap in leastsquare regression
Adopting a least-square regression strategy to calculate the coefficients in Eq. (5) allows one to use the bootstrap resampling method (Efron, 1982) to obtain information on the variability of the estimated coefficients due to the finite size of the experimental design. Suppose that a set of estimators $\hat{\boldsymbol{\theta}}$ is a function of a finite-size sample $\mathcal{X} = \{\boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(N)}\}$ drawn from the random vector $\boldsymbol{X}$. The bootstrap method then consists in drawing $B$ new sample sets $\mathcal{X}^{(1)}, \dots, \mathcal{X}^{(B)}$ from the original $\mathcal{X}$ by resampling with replacement. This is achieved by randomly assembling $N$ realisations $\boldsymbol{x}^{(i)} \in \mathcal{X}$ for each new set, possibly including the same realisation multiple times within each sample. The quantities of interest can then be recalculated from each of the $B$ samples, thus yielding a set of estimators $\hat{\boldsymbol{\theta}}^{(1)}, \dots, \hat{\boldsymbol{\theta}}^{(B)}$. This set can then be used to directly assess the variability of $\hat{\boldsymbol{\theta}}$ due to the finite size of the experimental design at no additional model-evaluation cost, e.g. by calculating statistics, or by using each realisation separately. Applications of the bootstrap method combined with PCE to provide confidence bounds on the estimated $P_f$ in structural reliability can be found in e.g. Notin et al. (2010); Picheny et al. (2010).
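The resampling scheme can be sketched in a few lines for a generic least-square fit; the noisy linear toy model below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def bootstrap_coefficients(psi, y, n_boot=100):
    """Resample the experimental design with replacement and refit
    the least-squares coefficients B times (Efron, 1982)."""
    n = len(y)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resampling with replacement
        c, *_ = np.linalg.lstsq(psi[idx], y[idx], rcond=None)
        boots.append(c)
    return np.array(boots)  # shape (B, P)

# Noisy linear toy model y = 2x + eps, fitted on the basis [1, x]
x = rng.uniform(-1, 1, 40)
y = 2 * x + 0.1 * rng.standard_normal(40)
psi = np.column_stack([np.ones_like(x), x])
cb = bootstrap_coefficients(psi, y, n_boot=200)
slope_mean = cb[:, 1].mean()    # close to the true slope of 2
```

The spread of `cb` across rows is exactly the kind of finite-sample variability information exploited in the following sections.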
2.2.2 BootstrapPCE
We propose to use the bootstrap technique to provide local error estimates for the PCE predictions. The rationale is the following: the PCE coefficients in Eq. (5) are estimated from the experimental design $\mathcal{X}$, therefore they can be resampled through bootstrap. This can be achieved by first generating a set of $B$ bootstrap-resampled experimental designs $\mathcal{X}^{(1)}, \dots, \mathcal{X}^{(B)}$. For each of the generated designs, one can calculate a corresponding set of coefficients $\{a^{(b)}_{\boldsymbol{\alpha}},\ \boldsymbol{\alpha} \in \mathcal{A}\}$, effectively resulting in a set of $B$ different PCEs. Correspondingly, the response of each PCE can be evaluated at a point $\boldsymbol{x}$ as follows:

$$\hat{Y}^{(b)}(\boldsymbol{x}) = \sum_{\boldsymbol{\alpha} \in \mathcal{A}} a^{(b)}_{\boldsymbol{\alpha}} \Psi_{\boldsymbol{\alpha}}(\boldsymbol{x}), \quad b = 1, \dots, B \quad (6)$$
thus yielding a full response sample $\{\hat{Y}^{(1)}(\boldsymbol{x}), \dots, \hat{Y}^{(B)}(\boldsymbol{x})\}$ at each point $\boldsymbol{x}$. Empirical quantiles can therefore be employed to provide local error bounds on the PCE prediction at each point, as well as on any derived quantity (e.g. $P_f$ or sensitivity indices, see e.g. Picheny et al. (2010); Dubreuil et al. (2014)). The bootstrap-resampling strategy in Eq. (6) in fact yields a family of $B$ surrogate models that can be interpreted as trajectories. Figure 1 showcases how such trajectories can be directly employed to assess confidence bounds on pointwise predictions on a simple 1-D test function given by:
(7)

where the single random variable $X$ is assumed to be uniformly distributed, and where $B$ bootstrap samples have been used. This process of bootstrap-based trajectory resampling to provide better estimates of pointwise confidence bounds has recently been explored in the Gaussian process modelling literature, see e.g. den Hertog et al. (2006); van Beers and Kleijnen (2008).
We refer to this approach as bootstrap-PCE, or bPCE for short.
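The trajectory construction of Eq. (6) can be sketched as follows. The test function of Eq. (7) is not reproduced here, so a simple stand-in 1-D function is used, and a plain (non-sparse) Legendre least-square fit replaces the full sparse PCE machinery:

```python
import numpy as np
from numpy.polynomial import legendre as Leg

rng = np.random.default_rng(4)

f = lambda x: x * np.sin(np.pi * x)        # stand-in 1-D test function
x_ed = rng.uniform(-1, 1, 15)              # small experimental design
y_ed = f(x_ed)

degree, n_boot = 6, 100
x_plot = np.linspace(-1, 1, 200)
trajectories = np.empty((n_boot, x_plot.size))
for b in range(n_boot):
    idx = rng.integers(0, x_ed.size, x_ed.size)   # resample the ED
    psi = Leg.legvander(x_ed[idx], degree)
    coef, *_ = np.linalg.lstsq(psi, y_ed[idx], rcond=None)
    trajectories[b] = Leg.legval(x_plot, coef)

# Pointwise empirical bounds on the bPCE prediction, cf. Eq. (6)
lo, hi = np.quantile(trajectories, [0.025, 0.975], axis=0)
```

Plotting `lo` and `hi` against `x_plot` reproduces the kind of confidence band shown in Figure 1: narrow near the ED points, wide in sparsely sampled regions.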
2.2.3 Fast bPCE
Because the training of a PCE model with sparse least-square analysis may be time-consuming, especially in high dimension and/or when an already large experimental design is available, and because in this particular application we do not need very accurate estimates of the bounds of the derived quantities, we adopt a fast bPCE approach. In this approach, the sparse polynomial basis identified by the LARS algorithm during calibration is calculated only once from the full available experimental design $\mathcal{X}$, and bootstrapping is applied only to the final hybrid step, which consists in a classic ordinary least-square regression on the sparse basis (Blatman and Sudret, 2011). In the presence of a very expensive model, however (i.e. requiring several hours for a single model run), we recommend adopting full bootstrapping, including the estimation of the sparse PCE basis for each of the bootstrapped experimental designs $\mathcal{X}^{(b)}$.
2.3 Active bPCEbased reliability analysis
In this section we present an adaptation of the adaptive PC-Kriging MCS algorithm in Schöbi et al. (2016) (based in turn on the original AK-MCS algorithm by Echard et al. (2011)) that makes use of the bPCE approach just introduced. Consistently with Echard et al. (2011); Schöbi et al. (2016), in the following we will refer to this algorithm as active bootstrap-polynomial-chaos Monte Carlo simulation (A-bPCE). We follow the original idea of adaptively building a surrogate of the limit-state function, starting from a small initial experimental design and subsequently refining it to optimise the surrogate performance for structural reliability. The ultimate goal is to retrieve an estimate of $P_f$ that is comparable to that of a direct Monte Carlo simulation (MCS) on a large sample set, but with a much smaller experimental design. The algorithm is summarised as follows:

Initialization:

(a) Generate an initial experimental design $\mathcal{X}$ with the corresponding model responses, and calibrate an initial bPCE surrogate (see Section 2.3.1).

(b) Generate a large reference MCS sample $\mathcal{X}_{MC}$ of size $N_{MC}$. A discussion on the choice of a suitable MCS sample is given in Section 2.3.2.

Iterations:

1. Calculate a set of MCS estimators of the probability of failure $\{\hat{P}_f^{(1)}, \dots, \hat{P}_f^{(B)}\}$ on $\mathcal{X}_{MC}$ with the current bPCE surrogate.

2. Check the convergence criterion of Section 2.3.3; if it is satisfied, terminate.

3. Select one or more enrichment points by means of the learning function of Sections 2.3.4 and 2.3.5, and evaluate the computational model on them.

4. Update the bPCE surrogate on the new ED and return to Step 1.

Algorithm termination: return the $\hat{P}_f$ resulting from the PCE on the current ED, as well as the error bounds derived e.g. from the extremes or the empirical quantiles of the current $\{\hat{P}_f^{(b)}\}$ set.
A detailed description of each step of the algorithm is given in the following sections.
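The complete loop can be sketched in plain Python for a hypothetical one-dimensional limit state, with an ordinary polynomial least-square fit standing in for the sparse PCE, and the learning function of Section 2.3.4 used for enrichment (all settings here are illustrative):

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(5)

g = lambda x: 3.0 - x                       # toy limit state, failure if g <= 0
x_mc = rng.standard_normal(100_000)         # reference MCS sample
x_ed = list(rng.standard_normal(12))        # small initial experimental design
y_ed = [g(x) for x in x_ed]
B, deg, eps = 50, 3, 0.01

pf_prev = None
for iteration in range(50):
    xa, ya = np.array(x_ed), np.array(y_ed)
    preds = np.empty((B, x_mc.size))
    for b in range(B):                      # bPCE: B surrogates on resampled EDs
        idx = rng.integers(0, xa.size, xa.size)
        coef = P.polyfit(xa[idx], ya[idx], deg)
        preds[b] = P.polyval(x_mc, coef)
    pf_b = (preds <= 0).mean(axis=1)        # B bootstrap estimates of P_f
    pf = np.median(pf_b)
    # stability-based stopping rule (cf. Section 2.3.3)
    if pf_prev is not None and pf > 0 and abs(pf - pf_prev) / pf <= eps:
        break
    pf_prev = pf
    # learning function (cf. Section 2.3.4): small values flag points
    # on which the bootstrap replicates disagree most
    n_fail = (preds <= 0).sum(axis=0)
    u_fbr = np.abs(B - 2 * n_fail) / B
    x_next = x_mc[np.argmin(u_fbr)]         # single-point enrichment
    x_ed.append(x_next)
    y_ed.append(g(x_next))
```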
2.3.1 Initial experimental design
The initial experimental design is usually generated by space-filling sampling techniques of the random vector $\boldsymbol{X}$, such as Latin hypercube sampling (LHS) or pseudo-random sequences (e.g. the Sobol' sequence). Alternative sampling techniques, such as uniform sampling of a ball, have also proven effective in the context of structural reliability when low probabilities of failure are expected (Dubourg, 2011). Note that this initial set of model evaluations does not need to be a subset of the reference sample $\mathcal{X}_{MC}$ used later to evaluate the $\hat{P}_f$ estimates during the iterations of the algorithm.
2.3.2 Inner MCS-based estimate of $P_f$
While the estimation of $P_f$ via MCS is trivial, as it simply entails counting the number of samples that belong to the failure domain, some discussion about the number of samples $N_{MC}$ in this step is needed. Throughout this paper, we opted for a single MCS sample $\mathcal{X}_{MC}$, large enough to ensure a relatively small CoV of the $\hat{P}_f$ estimate at every iteration. This is by no means a requirement of the algorithm, but it simplifies the notation significantly (because $\mathcal{X}_{MC}$ becomes independent of the current iteration) and in some cases (as noted in both Echard et al. (2011) and Schöbi et al. (2016)) it can result in more stable convergence, due to the lower MCS noise in the estimation of $\hat{P}_f$ at each iteration. This technique is known as common random numbers in the context of repeated reliability analysis, e.g. in reliability-based design optimisation (Taflanidis and Beck, 2008). It is entirely possible to redraw $\mathcal{X}_{MC}$ at every iteration, possibly each time with a different number of samples $N_{MC}$.
The chosen $N_{MC}$ ensures that the CoV of the estimated probabilities of failure remains small for the orders of magnitude relevant to our application examples. The choice of a single MCS sample drawn during the algorithm initialisation also allows us to use the application examples to focus on the convergence of the active-learning part of A-bPCE, which is the focus of this paper.
In more general applications, the order of magnitude of $P_f$ may be unknown. In this case, it is recommended instead to set a target CoV for the estimation of $\hat{P}_f$ at each iteration (as proposed in the original AK-MCS algorithm in Echard et al. (2011)), and to gradually add samples to $\mathcal{X}_{MC}$ until it is reached.
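This target-CoV strategy can be sketched as follows; the threshold, sample-doubling rule and toy limit state are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def mcs_sample_for_target_cov(g, sampler, cov_target=0.05,
                              n0=10_000, n_max=10**7):
    """Grow the MC sample until the CoV of the P_f estimate,
    sqrt((1 - pf) / (N * pf)), drops below the target."""
    x = sampler(n0)
    while True:
        pf = np.mean(g(x) <= 0)
        n = len(x)
        cov = np.sqrt((1 - pf) / (n * pf)) if pf > 0 else np.inf
        if cov <= cov_target or n >= n_max:
            return x, pf, cov
        x = np.concatenate([x, sampler(n)])  # double the sample size

g = lambda x: 2.0 - x                        # pf = 1 - Phi(2) ~ 2.3e-2
x, pf, cov = mcs_sample_for_target_cov(g, lambda n: rng.standard_normal(n))
```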
2.3.3 Convergence criteria
The proposed convergence criterion is directly inspired by Schöbi et al. (2016); Notin et al. (2010), and depends on the stability of the $\hat{P}_f$ estimate at the current iteration. Let us define:

$$\eta_{\hat{P}_f} = \frac{\hat{P}_f^{+} - \hat{P}_f^{-}}{\hat{P}_f} \quad (8)$$

where $\hat{P}_f^{-}$ and $\hat{P}_f^{+}$ denote the minimum and maximum of the current set of bootstrap estimates $\{\hat{P}_f^{(b)},\ b = 1, \dots, B\}$. Convergence is achieved when the following condition is satisfied for at least two consecutive iterations of the algorithm:

$$\eta_{\hat{P}_f} \le \epsilon_{\hat{P}_f} \quad (9)$$

where $\epsilon_{\hat{P}_f}$ is a user-defined tolerance, set to a few percent in typical usage scenarios.
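This stopping rule can be sketched with a synthetic history of bootstrap $P_f$ replicates (the tolerance value is illustrative):

```python
import numpy as np

def has_converged(pf_boot_history, eps=0.05, consecutive=2):
    """Stability-based stopping rule, cf. Eqs. (8)-(9): stop when the
    relative spread of the bootstrap P_f replicates stays below eps
    for `consecutive` iterations in a row."""
    ok = 0
    for pf_b in pf_boot_history:
        pf = np.median(pf_b)
        eta = (pf_b.max() - pf_b.min()) / pf if pf > 0 else np.inf
        ok = ok + 1 if eta <= eps else 0
    return ok >= consecutive

# Synthetic history: replicate spread shrinks as the ED is enriched
history = [np.array([1.00e-3, 2.00e-3, 1.50e-3]),   # spread still too large
           np.array([1.40e-3, 1.45e-3, 1.42e-3]),
           np.array([1.41e-3, 1.44e-3, 1.43e-3])]
converged = has_converged(history)   # True: last two iterations are stable
```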
2.3.4 Learning function
A learning function allows one to rank a set of candidate points based on some utility criterion that depends on the desired application. In this case, we adopt the same heuristic approach proposed in Schöbi et al. (2016), by focusing on the probability of misclassification of the bPCE model on the candidate set given by $\mathcal{X}_{MC}$. Due to the availability of the bootstrap response samples $\{\hat{Y}^{(b)}(\boldsymbol{x})\}$, it is straightforward to define a measure of the misclassification probability (where the subscript FBR stands for failed bootstrap replicates) at each point as follows:

$$U_{FBR}(\boldsymbol{x}) = \frac{|N_s(\boldsymbol{x}) - N_f(\boldsymbol{x})|}{B} \quad (10)$$

where $N_s(\boldsymbol{x})$ and $N_f(\boldsymbol{x})$ are the numbers of safe (resp. failed) bPCE replicate predictions at point $\boldsymbol{x}$ (with $N_s(\boldsymbol{x}) + N_f(\boldsymbol{x}) = B$). When all the $B$ replicates consistently classify $\boldsymbol{x}$ in the safe or in the failure domain, $U_{FBR}(\boldsymbol{x}) = 1$ (minimum misclassification probability). In contrast, $U_{FBR}(\boldsymbol{x}) = 0$ corresponds to the case when the replicates are equally distributed between the two domains: 50% of the bootstrap PCEs predict that $\boldsymbol{x}$ is in the safe domain, while the other 50% predict that it belongs to the failure domain. Therefore, maximum epistemic uncertainty on the classification of a point $\boldsymbol{x}$ is attained when $U_{FBR}(\boldsymbol{x})$ is minimum.

2.3.5 Enrichment of the experimental design
The aim of the iterative algorithm described in Section 2.3 is to obtain a surrogate model that minimises the misclassification probability. As a consequence, the learning function in Eq. (10) can be directly used to obtain a single-point enrichment criterion. The next best candidate point for the ED is given by:

$$\boldsymbol{x}^{*} = \arg\min_{\boldsymbol{x} \in \mathcal{X}_{MC}} U_{FBR}(\boldsymbol{x}) \quad (11)$$
Due to the global character of regression-based PCE, it can be beneficial to add multiple points in each iteration, so as to sample several interesting regions of the parameter space simultaneously. The criterion in Eq. (11) can be extended to include $K$ distinct points by following the approach in Schöbi et al. (2016). A limit-state margin region is first defined as the set of points such that $U_{FBR}(\boldsymbol{x}) < 1$ (i.e. those points with non-zero misclassification probability at the current iteration). Subsequently, $K$-means clustering (see, e.g., Zaki and Meira (2014)) can be used at each iteration to identify $K$ disjoint regions $\mathcal{M}_k$ in the limit-state margin. Then, Eq. (11) can be directly applied to each of the subregions to obtain $K$ different enrichment points:

$$\boldsymbol{x}^{*(k)} = \arg\min_{\boldsymbol{x} \in \mathcal{M}_k} U_{FBR}(\boldsymbol{x}), \quad k = 1, \dots, K \quad (12)$$

where $\boldsymbol{x}^{*(k)}$ is the $k$-th enrichment sample, obtained by evaluating the learning function on the $k$-th region of the parameter space.
Note that this approach is also convenient when parallel computing facilities are available and the computational model is expensive, as the evaluation of the model on the $K$ enrichment points can be carried out simultaneously.
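The margin-restricted clustering enrichment of Eq. (12) can be sketched as follows; a small self-contained k-means is used for illustration, and the candidate set and replicate votes are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

def multipoint_enrichment(x_cand, n_fail, B, k=3, iters=20):
    """Cf. Eq. (12): restrict to the limit-state margin (U_FBR < 1),
    split it into k clusters with a small k-means, and pick the point
    of minimum U_FBR (maximum classification doubt) in each cluster."""
    u = np.abs((B - n_fail) - n_fail) / B            # Eq. (10)
    margin = np.where(u < 1)[0]
    pts = x_cand[margin]
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):                           # Lloyd's iterations
        d2 = ((pts[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(d2, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    picks = [margin[labels == j][np.argmin(u[margin][labels == j])]
             for j in range(k) if np.any(labels == j)]
    return x_cand[picks], u

# Synthetic candidate set with three disjoint regions of classification doubt
x_cand = rng.uniform(-5, 5, size=(2000, 2))
centres = np.array([[-3.0, -3.0], [0.0, 3.0], [3.0, -2.0]])
d = np.min(np.linalg.norm(x_cand[:, None] - centres[None], axis=-1), axis=1)
B = 50
n_fail = np.where(d < 1.0, rng.integers(10, 40, len(x_cand)), 0)
x_new, u = multipoint_enrichment(x_cand, n_fail, B, k=3)
```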
3 Results on benchmark applications
All the algorithm development and the final calculations presented in this section were performed with the polynomial chaos expansions and reliability analysis modules of the UQLab software for uncertainty quantification (Marelli and Sudret, 2014, 2017; Marelli et al., 2017).
3.1 Series system
A common benchmark for reliability analysis is the four-branch function, originally proposed in Waarts (2000), which represents a series system comprising four components with different failure criteria. Although it is a simple analytical function, it exhibits multiple failure regions and a composite limit-state surface. Its two-dimensional limit-state function reads:

$$g(x_1, x_2) = \min \begin{cases} 3 + 0.1\,(x_1 - x_2)^2 - \dfrac{x_1 + x_2}{\sqrt{2}} \\[4pt] 3 + 0.1\,(x_1 - x_2)^2 + \dfrac{x_1 + x_2}{\sqrt{2}} \\[4pt] (x_1 - x_2) + \dfrac{6}{\sqrt{2}} \\[4pt] (x_2 - x_1) + \dfrac{6}{\sqrt{2}} \end{cases} \quad (13)$$

where the two random input variables $X_1$ and $X_2$ are modelled as independent standard normal variables. Failure occurs when $g(x_1, x_2) \le 0$.
Due to the multiple-failure-region shape of the limit-state surface (represented as a solid black line in Figure 2), classic methods like FORM/SORM and standard importance sampling tend to fail on this benchmark problem. The reference failure probability is obtained through an extremely large MCS.
The initial experimental design for the A-bPCE algorithm was obtained with a space-filling LHS sample drawn from the input distributions (black dots in Figure 2). Three points at a time were added to the experimental design during the enrichment phase of the algorithm. After extensive testing, the algorithm was found to depend only very weakly on the number of bootstrap replications $B$, provided a minimum number was used. Indeed, the bootstrap samples are used to identify areas of relatively large prediction variability, but an accurate estimate of that variability is never really needed by the algorithm. Degree-adaptive sparse PCE based on LARS (Blatman and Sudret, 2011) was used to calibrate the PCE metamodel at each iteration. For validation and comparison purposes, a similar analysis was performed on the same initial ED with the AK-MCS module of UQLab, with an anisotropic Matérn 5/2 ellipsoidal multivariate Kriging correlation function (Rasmussen and Williams, 2006; Marelli et al., 2017; Lataniotis et al., 2017). The same value of the convergence criterion in Eq. (9) was used for both the AK-MCS and A-bPCE algorithms.
Convergence was achieved after 49 iterations, resulting in a modest total cost (including the initial experimental design) in terms of model evaluations. The experimental design points added during the iterations are marked by red crosses in panel (b) of Figure 2. As expected, the adaptive algorithm tends to enrich the experimental design close to the limit-state surface as the latter is learned during the iterations. A graphical representation of the convergence of the algorithm is shown in Figure 3, where the estimated $\hat{P}_f$ is plotted against the total number of model evaluations. The shaded area represents the confidence bounds based on the empirical quantiles estimated from the bootstrap sample.
The final results of the analysis are summarised in Table 1, where the generalised reliability index $\beta = -\Phi^{-1}(\hat{P}_f)$ is also given for reference. For comparison, the reference MCS probability as well as an estimate from AK-MCS are also given. The latter converged to a comparably accurate estimate of $P_f$, at the cost of a slightly higher number of model evaluations. Note that for engineering purposes the algorithm could have been stopped earlier, i.e. as soon as a sufficient accuracy on the generalised reliability index was attained; in this case, the algorithm would have converged to a comparable result with only 50 runs of the model. The final sparse PCE model after enrichment contained a small number of basis elements.
Algorithm  $\hat{P}_f$  $\beta$  Model evaluations

MCS (ref.)
AK-MCS
A-bPCE
3.2 Two-dimensional truss structure
To test the algorithm on a more realistic engineering benchmark, consider the two-dimensional truss structure sketched in Figure 4. This structure has previously been analysed in several works, see e.g. (Blatman and Sudret, 2011, 2010b; Schöbi et al., 2016). The truss comprises 23 bars and 13 nodes, with deterministic geometry but uncertain material properties and random loads. The components of the input random vector include the cross-section $A_1$ and Young's modulus $E_1$ of the horizontal bars, the cross-section $A_2$ and Young's modulus $E_2$ of the diagonal bars, and six random loads $P_1, \dots, P_6$. They are considered mutually independent and their distributions are given in Table 2. An in-house Matlab-based finite-element solver is used to calculate the displacement at midspan $u$, counted positively downwards.
Variable  Distribution  Mean  Standard Deviation

$E_1$, $E_2$ (Pa)  Lognormal
$A_1$ (m²)  Lognormal
$A_2$ (m²)  Lognormal
$P_1, \dots, P_6$ (N)  Gumbel
This structure operates in its nominal range as long as the midspan displacement $u$ is smaller than a critical threshold $u_{\max}$, which can be cast as the following limit-state function:

$$g(\boldsymbol{x}) = u_{\max} - u(\boldsymbol{x}) \quad (14)$$

where $g(\boldsymbol{x}) \le 0$ if the system is in a failure state.
Because the FEM computational model is relatively cheap to evaluate, we could run a direct MCS analysis with a large sample to provide the reference for validation purposes. Additionally, standard FORM and SORM analyses were run to assess the non-linearity of the limit-state surface. FORM underestimated the failure probability by a factor of almost 2, while SORM achieved good accuracy at the cost of additional model runs, which suggests that the underlying problem is non-linear. Neither of the two methods, however, provides confidence intervals on its estimates.
The A-bPCE algorithm was initialised with an experimental design consisting of a uniform sampling of a ball (for details, see e.g. Dubourg (2011)), while the sparse adaptive PCE was given a range of polynomial degrees, hyperbolic truncation with a q-norm (Blatman and Sudret, 2011) and a maximum allowed interaction order (Marelli and Sudret, 2017). The algorithm was set to add several new samples per iteration. For comparison purposes, we also ran a standard AK-MCS analysis with the same initial experimental design and the same stopping criterion of Eq. (9). The covariance family of choice for the underlying Kriging model was Gaussian.
Algorithm  $\hat{P}_f$  $\beta$  Model evaluations

MCS (ref.)
FORM
SORM
AK-MCS
A-bPCE
Table 3 presents a comparison of the $\hat{P}_f$ estimates obtained with the aforementioned analyses. Both the AK-MCS and A-bPCE estimates of $P_f$ include the reference value within the confidence bounds set by the convergence criterion. However, for this particular example and choice of convergence criterion, A-bPCE achieved convergence significantly faster than AK-MCS in terms of total model evaluations, with a compact final sparse PCE containing only a small number of basis elements.
Overall, A-bPCE provides a stable estimate of the failure probability, together with confidence intervals, at a computational cost that is lower than that of FORM for this example.
3.3 Top-floor displacement of a structural frame
Figure 5 shows a well-known, high-dimensional benchmark in structural reliability applications (Liu and Der Kiureghian, 1991; Blatman and Sudret, 2010a). It consists of a three-span, five-storey frame structure subject to horizontal loads. Both the loads and the properties of the frame elements (see Table 4) are uncertain. The quantity of interest is the horizontal displacement $u$ at the top-right corner.
Element  Young's modulus  Moment of inertia  Cross-sectional area
The uncertainties on the applied loads $P_1, P_2, P_3$, the Young's moduli $E_4$ and $E_5$, the moments of inertia $I_6, \dots, I_{13}$ and the cross-sections $A_{14}, \dots, A_{21}$ are modelled by a 21-dimensional joint random vector with marginal distributions given in Table 5.
Variable  Distribution  Mean  Standard Deviation

$P_1$ (kN)  Lognormal
$P_2$ (kN)  Lognormal
$P_3$ (kN)  Lognormal
$E_4$ (kN/m²)  Truncated Gaussian*
$E_5$ (kN/m²)  Truncated Gaussian*
$I_6$ (m⁴)  Truncated Gaussian*
$I_7$ (m⁴)  Truncated Gaussian*
$I_8$ (m⁴)  Truncated Gaussian*
$I_9$ (m⁴)  Truncated Gaussian*
$I_{10}$ (m⁴)  Truncated Gaussian*
$I_{11}$ (m⁴)  Truncated Gaussian*
$I_{12}$ (m⁴)  Truncated Gaussian*
$I_{13}$ (m⁴)  Truncated Gaussian*
$A_{14}$ (m²)  Truncated Gaussian*
$A_{15}$ (m²)  Truncated Gaussian*
$A_{16}$ (m²)  Truncated Gaussian*
$A_{17}$ (m²)  Truncated Gaussian*
$A_{18}$ (m²)  Truncated Gaussian*
$A_{19}$ (m²)  Truncated Gaussian*
$A_{20}$ (m²)  Truncated Gaussian*
$A_{21}$ (m²)  Truncated Gaussian*


* Truncated to the positive real axis. The quoted moments refer to the full, untruncated Gaussian distributions.
Additionally, a Gaussian copula (Lebrun and Dutfoy, 2009) is used to model the dependence between the variables. The elements of the Gaussian copula correlation matrix $\boldsymbol{\rho}$ are given as follows:

– the two Young's moduli are highly correlated;

– each element's cross-sectional area is highly correlated with the corresponding moment of inertia;

– the correlation between the properties of different elements is much lower;

– all the remaining elements of $\boldsymbol{\rho}$ are set to zero.
A critical displacement $u_{\max}$ is identified as the maximum admissible threshold for the displacement $u$, hence resulting in the limit-state function:

$$g(\boldsymbol{x}) = u_{\max} - u(\boldsymbol{x}) \quad (15)$$
where $u(\boldsymbol{x})$ is the displacement at the top-right corner calculated with an in-house FEM code. Due to the associated computational costs, the maximum available budget for the calculation of a reference solution is limited in this case. Therefore, the reference solution is calculated with standard importance sampling (IS) (Melchers, 1999) instead of direct MCS. In addition to importance sampling, we also ran FORM and SORM. Due to the non-linearity of the problem, FORM significantly underestimated $P_f$, while SORM provided an accurate estimate. However, due to the high dimensionality of the input space, the associated cost in terms of model evaluations was relatively high, since all the gradients of the limit-state function are computed using finite differences.
The A-bPCE algorithm was initialised with an experimental design consisting of an LHS sample of the input random vector. Sparse PCE was carried out with a hyperbolic (q-norm) truncation and a maximum allowed interaction order. Note that the initialisation is essentially the same as for the truss structure in the previous application. Single-point enrichment was used at each iteration. For comparison purposes, an AK-MCS analysis was also run on the same initial design, with similar settings and a Gaussian covariance family.
A comparison of the results is gathered in Table 6. Due to the different estimation methods used for the reference probability (importance sampling) and for the active-learning-based methods (which rely on an inner MCS), no direct comparison of the results is possible as in the previous cases: even fixing the same random seeds would result in different estimates. Therefore, confidence bounds are given for all three methods: 95% confidence bounds for IS (Melchers, 1999), and the $\hat{P}_f^{\pm}$ bounds for both AK-MCS and A-bPCE. The three methods give comparable results, albeit with significant differences in their convergence behaviour. In particular, both AK-MCS and A-bPCE slightly underestimated the probability of failure w.r.t. the reference IS solution, which in turn slightly overestimates the reference result quoted in the literature (Blatman and Sudret, 2010a). However, AK-MCS did not converge within the allotted maximum number of model evaluations, and its confidence bounds remained remarkably large with respect to those of A-bPCE. A-bPCE instead converged to the target tolerance with a final sparse PCE of degree 2 containing only a small number of non-zero coefficients. For both active-learning-based methods, the reference solution lies within the given confidence bounds. Moreover, the confidence bounds on the reliability index show that the results are stable around the calculated values.

Finally, it is interesting to note that for this example the costs of FORM and A-bPCE were comparable, but the latter provides a much less biased estimate and includes confidence bounds.
Algorithm  $\hat{P}_f$  $\beta$  Model evaluations

IS (ref.)
FORM
SORM
AK-MCS
A-bPCE
4 Conclusions and outlook
A novel approach to solving reliability problems with polynomial chaos expansions has been proposed. The combination of the bootstrap method with sparse regression enabled us to introduce local error estimation in the standard PCE predictor. In turn, this allows one to construct active-learning algorithms similar to AK-MCS that greedily enrich a relatively small initial experimental design so as to efficiently estimate the probability of failure of complex systems.
This approach has shown comparable performance w.r.t. the well-established AK-MCS method on a simple analytical benchmark function as well as on two engineering applications of increasing complexity and dimensionality.
Extensions of this approach can be envisioned in two main directions:

the simulation-based reliability analysis method can be extended beyond simple MCS (e.g. by using importance sampling (Dubourg et al., 2013), line sampling (Pradlwarter et al., 2007) or subset simulation (Dubourg et al., 2011)) to achieve better estimates at each iteration, especially for very low probabilities of failure;

remote parallel computing facilities may be used during the enrichment phase of the algorithm with expensive computational models, when adding more than one point at a time.
Additionally, the bPCE approach introduced in this work can also be used outside of a pure reliability analysis context, as it provides an effective local error estimate for PCE; it has been used, e.g., in the context of reliability-based design optimisation in Moustapha and Sudret (2017). Indeed, the lack of such a feature (as opposed to Kriging) has somewhat hindered the use of PCE in more advanced active-learning applications.
References
 Balesdent et al. (2013) Balesdent, M., J. Morio, and J. Marzat (2013). Krigingbased adaptive importance sampling algorithms for rare event estimation. Structural Safety 44, 1–10.
 Bichon et al. (2008) Bichon, B., M. Eldred, L. Swiler, S. Mahadevan, and J. McFarland (2008). Efficient global reliability analysis for nonlinear implicit performance functions. AIAA Journal 46(10), 2459–2468.
 Bichon et al. (2011) Bichon, B., J. McFarland, and S. Mahadevan (2011). Efficient surrogate models for reliability analysis of systems with multiple failure modes. Reliab. Eng. Sys. Safety 96(10), 1386–1395.
 Blatman and Sudret (2010a) Blatman, G. and B. Sudret (2010a). An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis. Prob. Eng. Mech. 25, 183–197.
 Blatman and Sudret (2010b) Blatman, G. and B. Sudret (2010b). Efficient computation of global sensitivity indices using sparse polynomial chaos expansions. Reliab. Eng. Sys. Safety 95, 1216–1229.
 Blatman and Sudret (2011) Blatman, G. and B. Sudret (2011). Adaptive sparse polynomial chaos expansion based on Least Angle Regression. J. Comput. Phys. 230, 2345–2367.
 Cadini et al. (2014) Cadini, F., F. Santos, and E. Zio (2014). An improved adaptive Kriging-based importance technique for sampling multiple failure regions of low probability. Reliab. Eng. Sys. Safety 131, 109–117.
 den Hertog et al. (2006) den Hertog, D., J. P. C. Kleijnen, and A. Y. D. Siem (2006). The correct Kriging variance estimated by bootstrapping. Journal of the Operational Research Society 57(4), 400–409.
 Dubourg (2011) Dubourg, V. (2011). Adaptive surrogate models for reliability analysis and reliability-based design optimization. Ph. D. thesis, Université Blaise Pascal, Clermont-Ferrand, France.
 Dubourg and Sudret (2014) Dubourg, V. and B. Sudret (2014). Meta-model-based importance sampling for reliability sensitivity analysis. Structural Safety 49, 27–36.
 Dubourg et al. (2011) Dubourg, V., B. Sudret, and J.-M. Bourinet (2011). Reliability-based design optimization using Kriging and subset simulation. Struct. Multidisc. Optim. 44(5), 673–690.
 Dubourg et al. (2013) Dubourg, V., B. Sudret, and F. Deheeger (2013). Meta-model-based importance sampling for structural reliability analysis. Prob. Eng. Mech. 33, 47–57.
 Dubreuil et al. (2014) Dubreuil, S., M. Berveiller, F. Petitjean, and M. Salaün (2014). Construction of bootstrap confidence intervals on sensitivity indices computed by polynomial chaos expansion. Reliab. Eng. Sys. Safety 121, 263–275.
 Echard et al. (2011) Echard, B., N. Gayton, and M. Lemaire (2011). AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 33(2), 145–154.
 Echard et al. (2013) Echard, B., N. Gayton, M. Lemaire, and N. Relun (2013). A combined importance sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models. Reliab. Eng. Sys. Safety 111, 232–240.
 Efron (1982) Efron, B. (1982). The jackknife, the bootstrap and other resampling plans, Volume 38. SIAM.
 Efron et al. (2004) Efron, B., T. Hastie, I. Johnstone, and R. Tibshirani (2004). Least angle regression. Annals of Statistics 32, 407–499.
 Fauriat and Gayton (2014) Fauriat, W. and N. Gayton (2014). AK-SYS: An adaptation of the AK-MCS method for system reliability. Reliab. Eng. Sys. Safety, 137–144.
 Ghanem and Spanos (1991) Ghanem, R. and P. Spanos (1991). Stochastic finite elements – A spectral approach. Springer Verlag, New York. (Reedited by Dover Publications, Mineola, 2003).
 Hasofer and Lind (1974) Hasofer, A.M. and N.C. Lind (1974). Exact and invariant second moment code format. J. Eng. Mech. 100(1), 111–121.
 Konakli and Sudret (2016) Konakli, K. and B. Sudret (2016). Reliability analysis of high-dimensional models using low-rank tensor approximations. Prob. Eng. Mech. 46, 18–36.
 Lataniotis et al. (2017) Lataniotis, C., S. Marelli, and B. Sudret (2017). UQLab user manual – Kriging. Technical report, Chair of Risk, Safety & Uncertainty Quantification, ETH Zurich. Report UQLab-V1.0-105.
 Le Gratiet et al. (2017) Le Gratiet, L., S. Marelli, and B. Sudret (2017). Metamodel-based sensitivity analysis: polynomial chaos expansions and Gaussian processes, Chapter 8. Handbook on Uncertainty Quantification, Springer.
 Lebrun and Dutfoy (2009) Lebrun, R. and A. Dutfoy (2009). Do Rosenblatt and Nataf isoprobabilistic transformations really differ? Prob. Eng. Mech. 24(4), 577–584.
 Lemaire (2009) Lemaire, M. (2009). Structural reliability. Wiley.
 Liu and Der Kiureghian (1991) Liu, P.L. and A. Der Kiureghian (1991). Optimization algorithms for structural reliability. Structural Safety 9, 161–177.
 Marelli et al. (2017) Marelli, S., R. Schöbi, and B. Sudret (2017). UQLab user manual – Reliability analysis. Technical report, Chair of Risk, Safety & Uncertainty Quantification, ETH Zurich. Report UQLab-V1.0-107.
 Marelli and Sudret (2014) Marelli, S. and B. Sudret (2014). UQLab: A framework for uncertainty quantification in Matlab. In Vulnerability, Uncertainty, and Risk (Proc. 2nd Int. Conf. on Vulnerability, Risk Analysis and Management (ICVRAM2014), Liverpool, United Kingdom), pp. 2554–2563.
 Marelli and Sudret (2017) Marelli, S. and B. Sudret (2017). UQLab user manual – Polynomial chaos expansions. Technical report, Chair of Risk, Safety & Uncertainty Quantification, ETH Zurich. Report UQLab-V1.0-104.
 Melchers (1999) Melchers, R.E. (1999). Structural reliability analysis and prediction. John Wiley & Sons.
 Moustapha and Sudret (2017) Moustapha, M. and B. Sudret (2017). Quantile-based optimization under uncertainties using bootstrap polynomial chaos expansions. In Proc. 12th International Conference on Structural Safety and Reliability (ICOSSAR), August 6–10, 2017, Vienna, Austria.
 Notin et al. (2010) Notin, A., N. Gayton, J. L. Dulong, M. Lemaire, P. Villon, and H. Jaffal (2010). RPCM: a strategy to perform reliability analysis using polynomial chaos and resampling. European Journal of Computational Mechanics 19(8), 795–830.
 Picheny et al. (2010) Picheny, V., N. Kim, and R. Haftka (2010). Application of bootstrap method in conservative estimation of reliability with limited samples. Struct. Multidisc. Optim. 41(2), 205–217.
 Pradlwarter et al. (2007) Pradlwarter, H., G. Schuëller, P. Koutsourelakis, and D. Charmpis (2007). Application of line sampling simulation method to reliability benchmark problems. Structural Safety 29(3), 208–221.
 Rackwitz and Fiessler (1978) Rackwitz, R. and B. Fiessler (1978). Structural reliability under combined load sequences. Computers & Structures 9, 489–494.

 Rasmussen and Williams (2006) Rasmussen, C. and C. Williams (2006). Gaussian processes for machine learning (Internet ed.). Adaptive computation and machine learning. Cambridge, Massachusetts: MIT Press.
 Schöbi et al. (2016) Schöbi, R., B. Sudret, and S. Marelli (2016). Rare event estimation using Polynomial-Chaos-Kriging. ASCE-ASME J. Risk Uncertainty Eng. Syst., Part A: Civ. Eng. D4016002.
 Sudret and Der Kiureghian (2002) Sudret, B. and A. Der Kiureghian (2002). Comparison of finite element reliability methods. Prob. Eng. Mech. 17, 337–348.
 Taflanidis and Beck (2008) Taflanidis, A. A. and J. L. Beck (2008). Stochastic subset optimization for optimal reliability problems. Prob. Eng. Mech. 23, 324–338.
 Torre et al. (2017) Torre, E., S. Marelli, P. Embrechts, and B. Sudret (2017). A general framework for data-driven uncertainty quantification under complex input dependencies using vine copulas. arXiv 1709.08626. Under revision in Probabilistic Engineering Mechanics.
 van Beers and Kleijnen (2008) van Beers, W. C. and J. P. Kleijnen (2008). Customized sequential designs for random simulation experiments: Kriging metamodeling and bootstrapping. European Journal of Operational Research 186(3), 1099 – 1113.
 Waarts (2000) Waarts, P.H. (2000). Structural reliability using finite element methods: an appraisal of DARS: Directional Adaptive Response Surface Sampling. Ph. D. thesis, Technical University of Delft, The Netherlands.
 Xiu and Karniadakis (2002) Xiu, D. and G. Karniadakis (2002). The WienerAskey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput. 24(2), 619–644.
 Zaki and Meira (2014) Zaki, M. and W. Meira (2014). Data Mining and Analysis: Fundamental Concepts and Algorithms. Cambridge University Press.