1 Introduction
There are two types of computer simulation models: deterministic simulation models and stochastic simulation models. In a deterministic simulation model, the simulator produces a fixed output for the same input values. In this paper, we focus on stochastic simulation models. Due to extra stochastic elements in the simulation model, given a fixed input value, the output is a random variable representing the outcome of the simulation. Despite their flexible modeling capacity, stochastic simulation models can be more complex, and generating a simulation outcome can be computationally costly.
One of the applications of Monte Carlo simulation is to compute an estimator of a system's expected output. In Monte Carlo simulation, one draws inputs from a suitable probability distribution, calculates the output, and repeats the process multiple times to obtain different outputs. The estimate of the expected output can then be the average of those simulated outputs. Monte Carlo simulation is widely used with both deterministic and stochastic simulation models. As computing power has advanced over the past decades, the reliance on Monte Carlo simulation has increased across various engineering disciplines. Particularly in reliability engineering, Monte Carlo simulation is widely used to study the behavior and reliability of a system, often through estimating its failure probability.
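To make the estimation procedure concrete, the following minimal sketch (using a toy stochastic simulator, not any model from this paper) estimates a failure probability by crude Monte Carlo:

```python
import random

def crude_monte_carlo(simulate, sample_input, n, threshold):
    """Estimate P(Y > threshold) by averaging failure indicators over n runs."""
    failures = 0
    for _ in range(n):
        x = sample_input()   # draw an input from its probability distribution
        y = simulate(x)      # run the (possibly stochastic) simulator once
        failures += (y > threshold)
    return failures / n

# Toy stochastic simulator: output = input + standard Gaussian noise.
random.seed(0)
p_hat = crude_monte_carlo(
    simulate=lambda x: x + random.gauss(0.0, 1.0),
    sample_input=lambda: random.gauss(0.0, 1.0),
    n=100_000,
    threshold=3.0,
)
# True value is P(N(0, 2) > 3), roughly 0.017.
```

Note where the budget goes: with a failure probability near 1.7%, roughly 98% of the runs produce no failure at all, which is exactly the inefficiency that the variance reduction techniques discussed next aim to remove.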
However, reliability evaluation based on Monte Carlo simulation remains challenging. The more closely the simulation model mimics the real system, the more computationally expensive each simulation run is. Moreover, highly reliable systems such as wind turbines or nuclear reactors require many simulation replications to encounter a rare event of interest such as a system failure, which translates into further demand for computational resources BOURINET2016210 . These challenges are exacerbated with stochastic simulation models, which often have to be run multiple times at the same input value. They highlight the importance of improving the computational efficiency of reliability studies with stochastic simulation models.
Among techniques to speed up Monte Carlo simulation in rare event probability estimation, importance sampling (IS) is one of the most promising methods kroese:2011handbook ; CADINI2014109 , especially with deterministic simulation models kahn1953 . IS can boost simulation efficiency by reducing the number of simulation runs required to achieve a target variance of the failure probability estimator mcap2017 ; DAI201286 . IS methods are also used with stochastic simulation models in various applications, such as finance glasserman2005 , insurance asmussen2000 , reliability heidelberger1995 ; balesdent2016rare ; GONG2018199 , communication networks chang1994 , and queueing operations sadowsky1991 ; blanchet2009 ; blanchet2014 .
Theoretically, if the inputs are drawn from the optimal IS density, the variance of the probability estimator is minimized and we can save the most computational cost. However, the theoretically optimal IS density is generally not implementable in practice, and it often necessitates approximations such as a metamodel-based method Choe2015 ; ECHARD2013232 ; CADINI2015188 or the cross-entropy (CE) method rubinstein1999 ; CHOI2017999 . Metamodel-based IS aims to represent the original high-fidelity but black-box simulation model by a surrogate model whose outputs can be computed more efficiently. From the surrogate model, the optimal IS density can be approximated to reduce the variance of the failure probability estimator DUBOURG2013 . From another perspective, CE-based IS aims to find an IS density that is as close as possible to the theoretically optimal IS density, where closeness is measured by the Kullback-Leibler divergence rubinstein1999 . The CE method has been widely used for estimating the optimal IS density with deterministic simulation models. However, to the best of our knowledge, there is no literature exploring the CE method's application to stochastic simulation models. Thus, this paper proposes a novel method, called cross-entropy based stochastic importance sampling (CE-SIS), to approximate the optimal IS density for stochastic simulation models.

One challenge of CE-based IS is the dilemma between parametric and nonparametric representations of the candidate optimal IS density. In the standard CE method for deterministic simulation models, the candidate IS density is confined to a parametric family. Popular parametric IS densities include the multivariate Bernoulli and multivariate Gaussian, thanks to their simple random variate generation and convenient updating formulae for CE minimization. However, a parametric IS density can be too rigid to capture a complicated important region, and it is often difficult to validate the parametric model assumptions botev2013 . A nonparametric approach can offer the flexibility to overcome such limitations. One of the most popular nonparametric density approximation methods is the kernel approach MORIO2011178 . Besides its flexibility, however, the kernel approach has its own drawback: the resulting probability density model is not as parsimonious as (a mixture of) parametric density models. This lack of parsimony often makes subsequent inference and analysis of the resulting model computationally intensive rubinstein2005 ; botev2007 . For stochastic simulation models, the parametric IS approach needs a strong assumption, but the variance of the IS estimator converges to the optimal variance faster than with the nonparametric IS approach chen2017oracle . The nonparametric IS approach requires weaker assumptions, but its variance reduction rate is not as fast as that of the parametric IS approach chen2017oracle . To achieve a good balance between flexibility and efficiency in the CE-SIS method, we express the candidate IS density for stochastic simulation models as a mixture of parametric distributions. We particularly focus our study on the Gaussian mixture model due to its flexibility in modeling any smooth distribution.
The proposed method is validated with a numerical example and a case study on wind turbine reliability evaluation. In the case study, our method is benchmarked against a state-of-the-art metamodel-based IS method in the literature. Approximating the optimal IS density automatically, without relying on the expert domain knowledge that is often necessary for metamodel construction, the CE-SIS method demonstrates comparable results. This shows the advantage of the CE-SIS method in situations where building a high-quality metamodel is difficult or time-consuming; in such situations the CE-SIS method can conveniently provide substantial computational savings, potentially more than metamodel-based methods.
2 Background
The crude Monte Carlo (CMC) method samples the input, $X$, from a known probability density function, $f(x)$ kroese:2011handbook . From the input $X$, a simulator generates the output, $Y$. If the simulation model is deterministic, $Y$ is a deterministic function of $X$, i.e., $Y = y(X)$. For the stochastic simulation model, $Y$ is stochastic for a given $X$. Due to the random vector $\epsilon$ within the simulation model, it generates different outputs even at a fixed input $X = x$.

In reliability engineering, a quantity of interest is the so-called failure probability, $P = \mathbb{P}(Y > l)$, where $l$ is a pre-specified threshold on a system that fails if $Y$ exceeds $l$. Note that any failure event set can be expressed as $\{Y > l\}$ using a transformation. To estimate the failure probability, the following CMC estimator is most commonly used,
\[ \hat{P}_{\mathrm{CMC}} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}(Y_i > l), \tag{1} \]
where $N$ is the number of total simulation replications. The estimator in (1) is an unbiased estimator for $P$. For the deterministic simulation model, the failure probability is equivalent to $P = \mathbb{E}_f[\mathbb{1}(y(X) > l)]$, where $\mathbb{1}(\cdot)$ is the indicator function and the subscript, $f$, appended to the expectation operator, $\mathbb{E}$, denotes that the expectation is taken with respect to $f$, the density from which $X$ is drawn. For the stochastic simulation model, the failure probability is the expectation of the conditional failure probability, expressed as $P = \mathbb{E}_f[\mathbb{P}(Y > l \mid X)]$.

In a highly reliable system, the failure probability can be very low, so it may take many simulation runs until a failure occurs. In some cases, each simulation run can also be computationally very expensive. To obtain a good estimator of the failure probability, the number of simulation runs may therefore need to be large. To save computational resources, IS changes the sampling distribution of $X$ from $f$ to another distribution $q$ that makes the failure events more likely. Sampling from $q$ makes the estimator in (1) no longer unbiased.
For the deterministic simulation model, to make the failure probability estimator unbiased, the IS estimator becomes
\[ \hat{P}_{\mathrm{DIS}} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\big(y(X_i) > l\big)\, \frac{f(X_i)}{q(X_i)}, \tag{2} \]
where $X_i$, $i = 1, \ldots, N$, is sampled from $q$ instead of $f$ kroese:2011handbook . $y(X_i)$ is the output from the simulator corresponding to the input $X_i$. The variance of $\hat{P}_{\mathrm{DIS}}$ is minimized if $X_i$ is sampled from the optimal IS density kahn1953
\[ q^{*}(x) = \frac{\mathbb{1}\big(y(x) > l\big)\, f(x)}{P}. \tag{3} \]
Here, the indicator function $\mathbb{1}(y(x) > l)$ can be evaluated only by running the simulator since the function $y(\cdot)$ is unknown. The denominator $P$ is the unknown quantity that we want to estimate. Thus, $q^{*}$ is not implementable in practice, and an approximation of $q^{*}$ is required. As mentioned above, for the deterministic simulation model, the optimal IS density can be approximated using methods such as a metamodel-based method or the CE method. Hereafter, we call the IS method for a deterministic simulation model DIS.
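As a sketch of the DIS estimator in (2), consider a trivially simple "simulator" $y(x) = x$ with $f = N(0,1)$; the IS density $q = N(3,1)$ below is an illustrative assumption chosen to center samples near the failure region, not a prescription from this paper:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def dis_estimate(y, threshold, n, mu_q, seed=1):
    """DIS estimate of P(y(X) > l) for X ~ N(0,1), sampling from q = N(mu_q, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu_q, 1.0)                                # draw X_i from q
        w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, mu_q, 1.0)  # likelihood ratio f/q
        total += (y(x) > threshold) * w                         # weighted indicator
    return total / n

# P(X > 3) for X ~ N(0,1) is about 0.00135.
p_is = dis_estimate(y=lambda x: x, threshold=3.0, n=20_000, mu_q=3.0)
```

Because almost every draw from $q$ lands near the failure region, the weighted estimator attains a far smaller standard error than CMC would with the same number of runs.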
For the stochastic simulation model, to capture the extra randomness, the unbiased IS estimator becomes
\[ \hat{P}_{\mathrm{SIS}} = \frac{1}{m} \sum_{i=1}^{m} \frac{f(X_i)}{q(X_i)} \left( \frac{1}{n_i} \sum_{j=1}^{n_i} \mathbb{1}(Y_{ij} > l) \right), \tag{4} \]
where $m$ is the number of distinct $X$ values and $Y_{ij}$ is the $j$th output from $n_i$ replications at the distinct input, $X_i$. Such replications at the same $X$ value capture the additional randomness within the simulation model. Hereafter, we call the IS method for a stochastic simulation model SIS.
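A hedged sketch of the SIS estimator in (4), again with a toy stochastic simulator $Y = x + N(0,1)$ and an assumed IS density $q = N(2,1)$; the replication loop at each distinct input is what distinguishes SIS from DIS:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def sis_estimate(simulate, f, q, sample_q, m, n_rep, threshold, seed=2):
    """SIS estimate of P(Y > l): draw m distinct inputs from the IS density q,
    run n_rep replications at each input, and weight the estimated conditional
    failure probability by the likelihood ratio f/q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(m):
        x = sample_q(rng)
        s_hat = sum(simulate(x, rng) > threshold for _ in range(n_rep)) / n_rep
        total += s_hat * f(x) / q(x)
    return total / m

# Toy stochastic simulator: Y = x + N(0, 1); inputs originally X ~ N(0, 1).
# True value is P(N(0, sqrt(2)) > 3), roughly 0.017.
p_sis = sis_estimate(
    simulate=lambda x, rng: x + rng.gauss(0.0, 1.0),
    f=lambda x: normal_pdf(x, 0.0, 1.0),
    q=lambda x: normal_pdf(x, 2.0, 1.0),    # IS density shifted toward failures
    sample_q=lambda rng: rng.gauss(2.0, 1.0),
    m=4000,
    n_rep=5,
    threshold=3.0,
)
```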
Similar to DIS, the variance of $\hat{P}_{\mathrm{SIS}}$ is desired to be as small as possible. Minimizing the variance requires two steps Choe2015 :
(a) sampling $X_i$ from
\[ q^{*}(x) = \frac{s(x)\, f(x)}{C}, \tag{5} \]
where $s(x)$ is the conditional failure probability $\mathbb{P}(Y > l \mid X = x)$ and $C = \int s(x) f(x)\, dx$ is the normalizing constant;
(b) allocating the replications $n_i$ to $X_i$ by
\[ n_i^{*} = N\, \frac{\sqrt{(1 - s(X_i))/s(X_i)}}{\sum_{k=1}^{m} \sqrt{(1 - s(X_k))/s(X_k)}}. \tag{6} \]
Since the conditional probability, $s(x)$, is unknown in practice, the IS method for a stochastic simulation model requires the approximation of $q^{*}$, $s(x)$, and $n_i^{*}$. This paper will focus on how the CE method can be extended to approximate $q^{*}$. In the next subsection, we first review how the CE method is applied to approximate $q^{*}$ for DIS.
2.1 CE Method for DIS
The CE method was originally developed to find the density that best approximates the optimal density of DIS rubinstein1999 . The standard parametric CE method limits the search space for the optimal IS density to a pre-specified parametric family (e.g., Gaussian, Poisson, gamma, etc.), $\mathcal{Q} = \{q(\cdot;\theta) : \theta \in \Theta\}$, and seeks the density $q(\cdot;\theta)$ that is closest to the optimal density $q^{*}$. The closeness is measured by the Kullback-Leibler divergence rubinstein1999 ,
\[ \mathcal{D}\big(q^{*}, q(\cdot;\theta)\big) = \int q^{*}(x) \ln \frac{q^{*}(x)}{q(x;\theta)}\, dx. \tag{7} \]
This quantity is always nonnegative and takes zero if and only if $q(x;\theta) = q^{*}(x)$ almost everywhere. Thus, minimizing $\mathcal{D}$ over $\theta \in \Theta$ leads to $q(\cdot;\theta) = q^{*}$ if $q^{*}$ belongs to the same parametric family as $q(\cdot;\theta)$.
Minimizing $\mathcal{D}$ in (7) over $\theta$ is equivalent to minimizing its second term, known as the cross-entropy (CE),
\[ D\big(q^{*}, q(\cdot;\theta)\big) = -\int q^{*}(x) \ln q(x;\theta)\, dx, \tag{8} \]
because the first term in (7) is a constant over $\theta$. As shown in (3), the optimal IS density can be expressed as $q^{*}(x) = H(x) f(x)/P$, where $H(x)$ is $\mathbb{1}(y(x) > l)$ for DIS. The CE method aims to equivalently minimize
\[ \min_{\theta \in \Theta}\; -\,\mathbb{E}_{f}\big[ H(X) \ln q(X;\theta) \big] \tag{9} \]
\[ =\; \min_{\theta \in \Theta}\; -\,\mathbb{E}_{q_s}\big[ H(X)\, W(X) \ln q(X;\theta) \big], \tag{10} \]
where $W(x) = f(x)/q_s(x)$, $X \sim q_s$, and $f$ is the original density of $X$. The likelihood ratio is denoted by $W$. Note that the equality in (9) holds under the condition that $q_s(x) = 0$ implies $H(x) f(x) = 0$. This condition can be easily satisfied by ensuring that the support of $q_s$ includes the support of $f$ even if $H$ is unknown. The condition is always satisfied by the Gaussian mixture model. In practice, the CE method finds $\theta$ that minimizes the following IS estimator of (10),
\[ \hat{D}(\theta) = -\frac{1}{N} \sum_{i=1}^{N} H(X_i)\, W(X_i) \ln q(X_i;\theta), \tag{11} \]
where $X_i$, $i = 1, \ldots, N$, is sampled from $q_s$. Note that by writing $W(X_i)$, we suppress the notation for the dependence of $W$ on $q_s$ in (11). Using this IS estimator, the CE method minimizes the CE iteratively:
1. Sample $X_1, \ldots, X_N$ from $q_s$. At the first iteration, $q_s$ can be flexible (e.g., $q_s = f$ is commonly used).
2. Find $\hat{\theta} = \arg\min_{\theta \in \Theta} \hat{D}(\theta)$, where $\hat{D}(\theta)$ is in (11).
3. Set $q_s = q(\cdot;\hat{\theta})$ and start the next iteration from Step 1 until some stopping criterion is met.
This procedure iteratively refines $q(\cdot;\hat{\theta})$. However, the refinement is limited, as the parameter search space, $\Theta$, is restricted by a predefined parametric family.
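The iterative procedure can be sketched for the simplest case, a one-parameter candidate family $q(\cdot;\mu) = N(\mu, 1)$, where the minimizer of the CE estimator over $\mu$ has a closed form (a weighted sample mean); the simulator $y(x) = x$ and the threshold are toy assumptions for illustration:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def ce_iterations(y, threshold, n_per_iter, iters, seed=3):
    """Iterative CE with candidate family q = N(mu, 1): the minimizer of the
    CE estimator over mu is the average of the failed samples, weighted by the
    likelihood ratio f/q."""
    rng = random.Random(seed)
    mu = 0.0  # start by sampling from the original density f = N(0, 1)
    for _ in range(iters):
        xs = [rng.gauss(mu, 1.0) for _ in range(n_per_iter)]       # Step 1
        wts = [(normal_pdf(x, 0.0, 1.0) / normal_pdf(x, mu, 1.0))  # ratio f/q
               * (y(x) > threshold) for x in xs]                   # times indicator
        if sum(wts) > 0:                                           # Step 2
            mu = sum(w * x for w, x in zip(wts, xs)) / sum(wts)
        # Step 3: the next iteration samples from the updated q = N(mu, 1)
    return mu

# The CE-optimal mean for this family is E[X | X > 3], approximately 3.28.
mu_star = ce_iterations(y=lambda x: x, threshold=3.0, n_per_iter=5_000, iters=5)
```

The first iteration rarely sees failures, but once a few appear, the candidate density shifts toward the failure region and subsequent iterations refine the estimate quickly.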
Some studies rubinstein2005 ; botev2007 explore nonparametric approaches to allow greater flexibility in the candidate IS density than the standard CE method. However, as mentioned before, the flexibility comes with costs: finding the optimal density botev2007 or sampling from the optimized density rubinstein2005 is computationally challenging. Bridging the two extremes of the spectrum (a parametric density with $d \ll N$ or a nonparametric density with $d$ on the order of $N$, where $d$ is the number of parameters in a candidate IS density and $N$ is the number of simulation replications), a few studies botev2013 ; Wang2015 ; kurtz2013 recently consider the mixture of parametric distributions, where $d$ can vary between the two extremes. This approach is particularly desirable for engineering applications because (a) it can be as flexible as we want; (b) it is easy and fast to sample from a mixture of parametric distributions; and (c) the mixture IS density provides insight into the engineering system (e.g., the means of the mixture components often coincide with the so-called 'hot spots', where the system likely fails). Particularly in Choe2017 , the candidate distribution is expressed by the Gaussian mixture model (GMM) and an expectation-maximization (EM) algorithm is used to minimize the cross-entropy estimator in (11).
2.2 Gaussian Mixture Model and EM Algorithm
The GMM of $K$ mixture components takes the following form:
\[ q(x;\theta) = \sum_{k=1}^{K} \pi_k\, \phi(x; \mu_k, \Sigma_k), \tag{12} \]
where the component weights, $\pi_k$, $k = 1, \ldots, K$, are positive and sum to one. The $k$th Gaussian component density, $\phi(x; \mu_k, \Sigma_k)$, is specified by the mean, $\mu_k$, and the covariance, $\Sigma_k$. Thus, the parameter vector of the GMM denotes $\theta = (\pi_1, \ldots, \pi_K, \mu_1, \ldots, \mu_K, \Sigma_1, \ldots, \Sigma_K)$. Since $\theta$ is a function of $K$, the GMM model can be explicitly written as $q(x; \theta_K)$. For simplicity, we will write the GMM as $q(x;\theta)$ unless we should express different models in terms of $K$.
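Sampling from a GMM is straightforward, which is one reason it is attractive as a candidate IS density: pick a component with probability $\pi_k$, then draw from that Gaussian (a 1-D sketch):

```python
import random

def sample_gmm(rng, pis, mus, sigmas):
    """Draw one sample from a 1-D GMM: choose a component by its weight pi_k,
    then draw from the chosen Gaussian component."""
    u, acc = rng.random(), 0.0
    for pi, mu, sigma in zip(pis, mus, sigmas):
        acc += pi
        if u <= acc:
            return rng.gauss(mu, sigma)
    return rng.gauss(mus[-1], sigmas[-1])  # guard against floating-point rounding

# Two well-separated components with weights 0.3 and 0.7.
rng = random.Random(0)
draws = [sample_gmm(rng, [0.3, 0.7], [0.0, 10.0], [1.0, 1.0]) for _ in range(20_000)]
```

Roughly 70% of the draws come from the second component, and the overall sample mean is close to the mixture mean $0.3 \cdot 0 + 0.7 \cdot 10 = 7$.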
To minimize (11), the gradient of (11) with respect to $\theta$ is set to zero:
\[ \nabla_{\theta}\, \hat{D}(\theta) = 0. \tag{13} \]
This leads to the updating equations as derived in kurtz2013 :
\[ \pi_k = \frac{\sum_{i=1}^{N} H(X_i) W(X_i)\, \gamma_{ik}}{\sum_{i=1}^{N} H(X_i) W(X_i)}, \tag{14} \]
\[ \mu_k = \frac{\sum_{i=1}^{N} H(X_i) W(X_i)\, \gamma_{ik}\, X_i}{\sum_{i=1}^{N} H(X_i) W(X_i)\, \gamma_{ik}}, \tag{15} \]
\[ \Sigma_k = \frac{\sum_{i=1}^{N} H(X_i) W(X_i)\, \gamma_{ik}\, (X_i - \mu_k)(X_i - \mu_k)^{\top}}{\sum_{i=1}^{N} H(X_i) W(X_i)\, \gamma_{ik}}, \tag{16} \]
where
\[ \gamma_{ik} = \frac{\pi_k\, \phi(X_i; \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j\, \phi(X_i; \mu_j, \Sigma_j)}. \tag{17} \]
As the name suggests, the right-hand sides of the 'updating' equations (14), (15), (16) involve $\theta$ either explicitly or implicitly through $\gamma_{ik}$. As such, the updating equations are interlocked with each other and cannot be solved analytically. Thus, by starting with an initial value for $\theta$ on the right-hand sides of the updating equations, the left-hand sides are computed and plugged back into the right-hand sides iteratively until convergence is reached. This optimization procedure is a version of the EM algorithm that alternates between the expectation step (computing $\gamma_{ik}$) and the maximization step (updating $\pi_k$, $\mu_k$, and $\Sigma_k$).
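The same iterative structure appears in a plain weighted EM for a 1-D GMM, sketched below; here each sample carries a generic weight $W_i$ (in the CE setting this would be the indicator times the likelihood ratio), and the E-step and M-step mirror the responsibility and updating equations. This is an illustrative sketch, not the paper's full implementation:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def weighted_em_gmm(xs, weights, k, iters=50):
    """Weighted EM for a 1-D GMM: the E-step computes responsibilities
    gamma_{ik}; the M-step applies weighted updates of pi_k, mu_k, sigma_k."""
    lo, hi = min(xs), max(xs)
    mus = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]  # spread initial means
    pis, sigmas = [1.0 / k] * k, [1.0] * k
    for _ in range(iters):
        # E-step: gamma_{ik} = pi_k * phi_k(x_i) / sum_j pi_j * phi_j(x_i)
        gammas = []
        for x in xs:
            dens = [p * normal_pdf(x, m, s) for p, m, s in zip(pis, mus, sigmas)]
            tot = sum(dens)
            gammas.append([d / tot for d in dens])
        # M-step: weighted analogues of the updating equations
        for j in range(k):
            wsum = sum(w * g[j] for w, g in zip(weights, gammas))
            pis[j] = wsum / sum(weights)
            mus[j] = sum(w * g[j] * x for w, g, x in zip(weights, gammas, xs)) / wsum
            var = sum(w * g[j] * (x - mus[j]) ** 2
                      for w, g, x in zip(weights, gammas, xs)) / wsum
            sigmas[j] = math.sqrt(max(var, 1e-6))  # guard against degeneracy
    return pis, mus, sigmas

# Recover two well-separated clusters using equal unit weights.
rng = random.Random(0)
xs = ([rng.gauss(0.0, 1.0) for _ in range(300)]
      + [rng.gauss(5.0, 1.0) for _ in range(300)])
pis, mus, sigmas = weighted_em_gmm(xs, [1.0] * len(xs), k=2)
```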
A common challenge in using the GMM to approximate the IS density is balancing the estimated model's KL divergence (a closeness measure between the approximated IS density and the optimal one) against the model complexity. The studies botev2013 ; Wang2015 ; kurtz2013 that consider mixture models point out the difficulty associated with the choice of the number of mixture components, $K$. They either assume that $K$ is given botev2013 ; kurtz2013 or follow a rule of thumb based on "some understanding of the structure of the problem at hand" Wang2015 . Overcoming the need to pick $K$ a priori when using the GMM to approximate the optimal DIS density, Choe Choe2017 proposes the cross-entropy information criterion (CIC) to select $K$ automatically.
2.3 Cross-Entropy Information Criterion
In order to balance the cross-entropy estimate against the model complexity, the CE estimator is minimized while the model complexity is concurrently penalized. The CIC for deterministic simulation models takes the following form:
\[ \mathrm{CIC}(K) = \hat{D}_t(\hat{\theta}_K) + \hat{P}_t\, \frac{d}{N_t}, \tag{18} \]
where the CE estimator $\hat{D}_t$ and $\hat{P}_t$ are calculated using all the aggregated data up to iteration $t$, with $N_t$ denoting the total number of aggregated samples:
\[ \hat{D}_t(\theta) = -\frac{1}{N_t} \sum_{\tau=1}^{t} \sum_{i=1}^{N_{\tau}} H(X_i^{(\tau)})\, W_{\tau}(X_i^{(\tau)}) \ln q(X_i^{(\tau)};\theta), \tag{19} \]
\[ \hat{P}_t = \frac{1}{N_t} \sum_{\tau=1}^{t} \sum_{i=1}^{N_{\tau}} H(X_i^{(\tau)})\, W_{\tau}(X_i^{(\tau)}). \tag{20} \]
Note that $d$ is the dimension of $\theta_K$ and proportional to $K$; for an input of dimension $p$, the GMM has $d = K - 1 + Kp + Kp(p+1)/2$ free parameters. The second term of the CIC in (18) penalizes the model complexity by being linearly proportional to $d$. The CIC is an asymptotically unbiased estimator of the cross-entropy (up to a multiplicative constant) under one key assumption that there exists $\theta^{*}$ such that $q(\cdot;\theta^{*}) = q^{*}$, and several other regularity conditions (see Choe2017 for details) that are similarly required for the Akaike information criterion (AIC) akaike1974 to be an asymptotically unbiased estimator of the true log-likelihood. Since, for stochastic simulation models, $s(x)$ is unknown, it has to be estimated through $\hat{s}(x)$. We will present the CIC formulation for SIS in the next section.
3 Methodology
This section proposes a model selection solution to the estimation of the optimal IS density for stochastic simulation models. We use the GMM to express candidates for the optimal density. The CIC will automatically determine the mixture order of the GMM and control the model complexity. Among different mixture models, we choose the GMM since it is the most widely used mixture model thanks to its ability to model any smooth distribution. In general, the proposed CE method is not limited to just the GMM. It can be extended to other mixture models as long as we can minimize a CE estimator. In the convenient case of the GMM, an EM algorithm can be applied to estimate the GMM parameters. Moreover, it is computationally efficient to sample from the GMM.
It is natural to apply the concept of the CIC in DIS to SIS. The expression of the CIC involves $H(x)$, which is known in DIS but not in SIS. Hence we use an estimator that is a function of $\hat{s}(x)$, the estimated probability of failure at a particular $X$ value.
3.1 Approximations Necessary for Implementation
To implement the proposed method, we need to know $s(x)$ for computing the CIC, evaluating the EM algorithm equations (14)-(17) to solve for $\theta$, and minimizing the CE estimator. For SIS, $s(x)$ needs to be estimated because the conditional distribution of $Y$ given $X = x$ is unknown. We estimate $s(X_i)$ by
\[ \hat{s}(X_i) = \frac{1}{n_i} \sum_{j=1}^{n_i} \mathbb{1}(Y_{ij} > l), \tag{21} \]
and then estimate the CE by plugging in $\hat{s}$ in place of $H$:
\[ \hat{D}(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \hat{s}(X_i)\, W(X_i) \ln q(X_i;\theta). \tag{22} \]
As the sample size grows large, $\hat{s}(X_i)$ asymptotically converges to $s(X_i)$. Specifically, if each $n_i$ increases sufficiently fast with the sample size, then the convergence holds by the law of large numbers and the continuous mapping theorem.
3.2 Aggregated Failure Probability Estimation
We aggregately use the samples obtained in all of the CE iterations. Instead of $\hat{P}_{\mathrm{SIS}}$ in (4), we compute the failure probability estimator using all the aggregated data up to iteration $t$:
\[ \hat{P}_t = \frac{1}{\sum_{\tau=1}^{t} m_{\tau}} \sum_{\tau=1}^{t} \sum_{i=1}^{m_{\tau}} \hat{s}(X_i^{(\tau)})\, \frac{f(X_i^{(\tau)})}{q_{\tau}(X_i^{(\tau)})}, \tag{23} \]
where $X_i^{(\tau)}$ is the $i$th random input at iteration $\tau$, $m_{\tau}$ is the number of $X$'s drawn at iteration $\tau$, and $n_i^{(\tau)}$ is the number of simulation runs at each $X_i^{(\tau)}$.
The idea of data aggregation leads to the aggregated CIC formulation for stochastic simulation models as follows:
\[ \mathrm{CIC}_t(K) = \hat{D}_t(\hat{\theta}_K) + \hat{P}_t\, \frac{d}{N_t}, \tag{24} \]
where the CE estimator $\hat{D}_t$ is calculated using all the aggregated data up to iteration $t$:
\[ \hat{D}_t(\theta) = -\frac{1}{\sum_{\tau=1}^{t} m_{\tau}} \sum_{\tau=1}^{t} \sum_{i=1}^{m_{\tau}} \hat{s}(X_i^{(\tau)})\, \frac{f(X_i^{(\tau)})}{q_{\tau}(X_i^{(\tau)})} \ln q(X_i^{(\tau)};\theta). \tag{25} \]
Noting that $\hat{P}_t$ in (18) is an unbiased DIS estimator of the failure probability, we similarly set
\[ \hat{P}_t = \frac{1}{\sum_{\tau=1}^{t} m_{\tau}} \sum_{\tau=1}^{t} \sum_{i=1}^{m_{\tau}} \hat{s}(X_i^{(\tau)})\, \frac{f(X_i^{(\tau)})}{q_{\tau}(X_i^{(\tau)})}. \tag{26} \]
3.3 Summary of the Proposed Method (CE-SIS)
For the EM initialization, the grid search for the optimal number of mixture components $K$, the stopping criteria of the EM algorithm to minimize the quantity in (25), and the division of the replication allocation into whole numbers, we follow the same procedures as in Choe2017 . The details of implementation are provided in Appendix 2.
We propose the CE-SIS pseudocode in Algorithm 1:
INPUT:
1. Initialize the GMM for the first iteration (e.g., a single-component GMM with the identity covariance matrix).
2. Given a simulation budget, $N$, pick the actual simulation runs at each iteration, $N_t$, such that $\sum_{t=1}^{T} N_t = N$, where $T$ denotes the number of total CE iterations.
3. Pick the maximum number of components of the GMM, $K_{\max}$, for each iteration $t$. $K_{\max}$ can depend on $t$ since it is bounded by a function of the available sample size.
4 Numerical Example
In this section, we use a numerical example to evaluate the performance of the CE-SIS algorithm presented above. Assuming that the data generating process is unknown, we compare the performance of the CE-SIS algorithm with the state-of-the-art metamodel-based approach in mcap2017 .
Cannamela et al. cannamela2008 originally developed a deterministic simulation model example, which was later modified in Choe2015 into a stochastic simulation model example. We test the CE-SIS algorithm with the latter numerical example. Its data generating structure is as follows:
where the mean and the standard deviation are:
To approximate the optimal density in (5) and the allocation in (6), the metamodel-based method in mcap2017 also needs to estimate the conditional probability $s(x)$ using a metamodel. The metamodel for the conditional distribution of $Y$ given $X = x$ is set as the normal distribution with the parameters, mean and standard deviation, modeled as cubic spline functions in the generalized additive model for location, scale and shape (GAMLSS) framework rigby2005 . The metamodel is built using a pilot dataset of 600 simulation replications, and the IS experiment is based on 1000 simulation replications. To ensure that both methods use the same simulation budget, the CE-SIS method uses the same total number of replications across its CE iterations. The experiment is repeated to obtain the results in Table 1. Since we have full knowledge about the underlying process, we also show in Table 1 the result obtained from the optimal SIS density that uses the true function $s(x)$.
The performance metric we use is the CMC ratio, which is defined as
\[ \mathrm{CMC\ ratio} = \frac{N_{\mathrm{tot}}}{N_{\mathrm{CMC}}}, \]
where $N_{\mathrm{tot}}$ is the total number of simulation replications (e.g., 600 + 1000 for metamodel-based SIS, 600 + 1000 for CE-SIS, and 1000 for the optimal SIS) and $N_{\mathrm{CMC}}$ is the total number of simulation replications required for the crude Monte Carlo simulation to achieve the same standard error as each method, which is calculated as
\[ N_{\mathrm{CMC}} = \frac{\hat{P}(1 - \hat{P})}{\mathrm{SE}^2}. \]
Note that $\mathrm{SE}$ is the standard error of each method. $\hat{P}$ is the probability estimate from the optimal SIS, which is most accurate, i.e., 0.00996. In Table 1, a smaller CMC ratio translates to a smaller number of replications used by each row's method divided by the number of replications necessary for CMC in (1) to achieve the standard error in that row, which means better computational saving for the same accuracy level.
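The CMC ratio computation can be checked directly; plugging in the Table 1 numbers for CE-SIS (600 + 1000 total runs, standard error 0.00073, and the optimal-SIS estimate 0.00996) reproduces the reported 8.65%:

```python
def cmc_ratio(n_used, p_hat, std_err):
    """Ratio of replications used to the replications CMC would need for the
    same standard error: Var_CMC = p(1 - p) / N, so N_CMC = p(1 - p) / SE^2."""
    n_cmc = p_hat * (1.0 - p_hat) / std_err ** 2
    return n_used / n_cmc

ratio = cmc_ratio(n_used=600 + 1000, p_hat=0.00996, std_err=0.00073)
# ratio is about 0.0865, i.e., the 8.65% reported for CE-SIS in Table 1.
```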
It appears that the metamodel-based approach is slightly better than the CE-SIS method. The result is not surprising in the following context. First, the metamodel represents one of the best metamodels that can be built in practice, since the correct distribution family (i.e., Gaussian) is used for building the metamodel. In reality, the underlying data generating distribution is unknown and likely much more complex, rendering the construction of a good metamodel difficult, especially when the pilot dataset is small. In fact, the main advantage of the CE-SIS method is its ability to automatically come up with the best approximated importance sampling density. In other words, scalability is the main advantage of the CE-SIS method, especially when we want to develop an off-the-shelf software package for SIS. Second, the difference between the standard errors of the two methods, 0.00005, is small (i.e., 0.5% of the failure probability), which means the CE-SIS method can achieve performance close to that of the best possible metamodel.
Method  Mean  Standard Error  CMC Ratio 

CE-SIS  0.00972  0.00073  8.65% 
Metamodel  0.01003  0.00068  7.50% 
Optimal SIS  0.00996  0.00052  2.74% 
5 Case Study
This section presents an application of the CE-SIS method to a case study on wind turbine reliability evaluation. We compare the performance of the CE-SIS method and the metamodel-based method Choe2015 , using the same CMC ratio metric as in Section 4.
The reliability of a wind turbine is subject to stochastic weather conditions Byon2010a . To incorporate this randomness into the reliability evaluation of a turbine design, the international standard IEC 61400-1 IEC2005 requires the turbine designer to use stochastic simulations. We evaluate the reliability of a wind turbine using computationally expensive aerodynamic simulators (each simulation run takes roughly 1 minute on a typical PC available nowadays) developed by the U.S. National Renewable Energy Laboratory. Specifically, TurbSim is used to generate a 3-dimensional time-marching wind profile, and FAST is used to simulate a turbine's structural bending moment responses Choe2015 ; Jonkman2005 ; Jonkman2009 . Each simulation represents a 10-min simulated operation of the turbine. Our simulation setup is identical to the benchmark setting used in moriarty2008 . We are interested in estimating the probability of a failure event, which is defined as the bending moment at a turbine's blade root exceeding a specified threshold. In particular, we study two bending moment types: edgewise and flapwise. We estimate the probability that a bending moment exceeds a threshold $l$, where $l$ = 8,600 kNm for the edgewise bending moment and $l$ = 13,800 kNm for the flapwise bending moment, both of which occur with around 5% probability.
The input wind speed, $X$, is drawn from the truncated Rayleigh density Choe2015 ; choe2015extload defined as:
\[ f(x) = \frac{r(x)}{R(v_{\mathrm{out}}) - R(v_{\mathrm{in}})}, \quad v_{\mathrm{in}} \le x \le v_{\mathrm{out}}, \]
where $r(x)$ is the untruncated Rayleigh density with its shape parameter, and $R(x)$ is the corresponding cumulative distribution function. The cut-in and cut-out wind speeds are $v_{\mathrm{in}}$ = 3 m/s and $v_{\mathrm{out}}$ = 25 m/s, respectively, denoting the range of wind speeds over which a wind turbine operates.
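Sampling from such a truncated density is convenient by inverse-CDF restricted to the interval $[v_{\mathrm{in}}, v_{\mathrm{out}}]$; the sketch below uses the closed-form Rayleigh CDF, with the scale value an arbitrary assumption for illustration (not the value used in the case study):

```python
import math
import random

def truncated_rayleigh_sample(rng, scale, lo=3.0, hi=25.0):
    """Inverse-CDF draw from a Rayleigh density truncated to [lo, hi].
    Rayleigh CDF: R(v) = 1 - exp(-v^2 / (2 * scale^2))."""
    R = lambda v: 1.0 - math.exp(-v * v / (2.0 * scale * scale))
    u = R(lo) + rng.random() * (R(hi) - R(lo))   # uniform on [R(lo), R(hi)]
    return math.sqrt(-2.0 * scale * scale * math.log(1.0 - u))  # invert R

rng = random.Random(0)
draws = [truncated_rayleigh_sample(rng, scale=8.0) for _ in range(10_000)]
# Every draw lies in the operating range [3, 25] m/s by construction.
```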
In the metamodel-based method in Choe2015 , the conditional probability $s(x)$ is approximated using a non-homogeneous Generalized Extreme Value (GEV) distribution:
\[ \mathbb{P}(Y \le y \mid X = x) = \exp\left\{ -\left[ 1 + \xi \left( \frac{y - \mu(x)}{\sigma(x)} \right) \right]^{-1/\xi} \right\}, \tag{27} \]
where $\mu(x)$ and $\sigma(x)$ are the location and scale parameter functions, respectively, modeled with cubic smoothing spline functions under the GAMLSS framework. The shape parameter $\xi$ is fixed as a constant at its estimated value for the edgewise (flapwise) case. The goodness-of-fit of the GEV distribution is tested using the Kolmogorov-Smirnov test. The metamodel is built using a pilot sample of 600 simulation replications, and in addition, the metamodel-based method uses 1000 (2000) replications for the failure probability estimation for the edgewise (flapwise) bending moment.
In the CE-SIS method, we allocate the same simulation budget as the metamodel-based method. We use and (200) for the edgewise (flapwise) case.
Table 2 compares the results based on 50 repetitions of each method. The CE-SIS method has a slightly smaller (larger) standard error than the metamodel-based method for the edgewise (flapwise) bending moment. Accordingly, both methods save a similar level of computational resources compared to CMC, as indicated by the CMC ratio. In the metamodel-based method, the metamodel is carefully built by fitting a non-homogeneous GEV distribution to the pilot data. We can see that the performance of the CE-SIS method is comparable to that of the metamodel-based SIS with a high-quality metamodel. Note that since CE-SIS is an automated method, it can be particularly helpful when building a metamodel is difficult. We also observe that the computational saving in the edgewise case is more substantial than in the flapwise case. This is because the approximated SIS density is not very different from the original density in the flapwise case (this phenomenon is first observed and discussed extensively in Choe2015 , to which an interested reader is referred). Nevertheless, we see that the CE-SIS method, without requiring domain knowledge about the underlying process, can still capture this information satisfactorily.
Response  Method  Mean  Standard Error  CMC Ratio 

Edgewise  Metamodel  0.0486  0.0018  7.0% 
CE-SIS  0.0486  0.0015  4.9%  
Flapwise  Metamodel  0.0514  0.0028  32% 
CE-SIS  0.0535  0.0030  37% 
6 Conclusion
We propose a method called cross-entropy based stochastic importance sampling (CE-SIS), which can efficiently construct an importance sampling density for a stochastic simulation model. The CE-SIS method uses an EM algorithm to minimize the estimated cross-entropy from a candidate IS density to the optimal IS density while concurrently penalizing the model complexity. The method automatically refines the estimated IS density, thus not requiring the specific domain knowledge needed for building a metamodel, which is often difficult or time-consuming to acquire in practice. We focus on a candidate IS density expressed as a Gaussian mixture model, which is both flexible and computationally efficient, while the extension to other mixture models is possible. The application of the cross-entropy information criterion allows a sound choice of the mixture model complexity for the CE-SIS method. By aggregating all the data from previous iterations and effectively allocating the number of replications at each input value, the CE-SIS method utilizes the available information efficiently under a given simulation budget. The numerical example and case study show the advantages of CE-SIS over the metamodel-based SIS and crude Monte Carlo simulation.
Appendix 1: Approximation of $n_i^{*}$
We seek an asymptotic approximation of the optimal allocation size,
First, we show . For a large , we can approximate
for any . This asymptotic approximation may not be good for some if is close to zero. However, in that case, is small too, and such is unlikely to be sampled in the first place. Therefore, we can approximate
where the second approximation is based on the fact that the estimated IS density approximates the optimal density.
Furthermore, for a large , we can also approximate
where is the support of . Thus, it follows that
Therefore, for a large ,
Although it does not happen frequently, if , then we set the corresponding as , the smallest allocation possible to maintain the unbiasedness of the SIS estimator.
Appendix 2: Implementation details of the CE-SIS algorithm
Since the EM algorithm can converge to a local optimum for nonconvex problems, we use 10 random initializations of $\theta$ and choose the best minimizer of $\hat{D}_t$ in (25), to reduce the impact of the initial guess of $\theta$ on the algorithm's performance figueiredo2002 .
By monitoring the condition numbers of the Gaussian components' covariances figueiredo2002 , the number of components can also be controlled to prevent overfitting within the EM algorithm. We choose a singularity threshold for the condition number of the covariance matrix; exceeding the threshold signifies an ill-conditioned matrix. If more than half of the 10 initializations encounter ill-conditioning, we stop the EM algorithm and consider that $K_{\max}$ is reached for that iteration. The convergence of the EM algorithm is checked by monitoring the reduction of $\hat{D}_t$ in (25). In the numerical study in Section 4, iterating the updating equations in the EM algorithm is stopped if the reduction of $\hat{D}_t$ is less than 1% or a specified maximum number of iterations is reached.
For the allocation at each iteration $t$, we round the numerical $n_i$'s to the nearest integers. The difference between $\sum_i n_i$ and the iteration budget $N_t$ due to rounding is spread across the largest $n_i$'s equally to make sure we strictly stay within the simulation budget. The purpose of spreading the difference across the largest $n_i$'s (as opposed to, say, the smallest ones) is to minimize the relative deviations from the prescribed $n_i$'s. Note that SIS is generally insensitive to variations of the $n_i$'s Choe2015 .
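A sketch of this rounding-and-spreading step (the helper name and the largest-first adjustment order are implementation assumptions; the text only specifies that the rounding difference is absorbed by the largest allocations):

```python
def round_allocations(raw, budget):
    """Round fractional allocations to integers (at least 1 each, to keep the
    SIS estimator unbiased), then absorb the rounding surplus or deficit by
    adjusting the largest allocations until they sum exactly to the budget.
    Assumes budget >= len(raw)."""
    n = [max(1, round(r)) for r in raw]
    diff = budget - sum(n)
    order = sorted(range(len(n)), key=lambda i: -n[i])  # largest allocations first
    i = 0
    while diff != 0:
        idx = order[i % len(n)]
        step = 1 if diff > 0 else -1
        if n[idx] + step >= 1:      # never drop an allocation below 1
            n[idx] += step
            diff -= step
        i += 1
    return n

alloc = round_allocations([2.5, 2.5, 2.5, 2.5], budget=9)
# alloc sums exactly to 9 and every entry stays >= 1.
```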
References
 (1) J. M. Bourinet, Rareevent probability estimation with adaptive support vector regression surrogates, Reliability Engineering & System Safety 150 (2016) 210 – 221.
 (2) D. P. Kroese, T. Taimre, Z. I. Botev, Handbook of Monte Carlo Methods, New York: John Wiley and Sons., 2011.
 (3) F. Cadini, F. Santos, E. Zio, An improved adaptive krigingbased importance technique for sampling multiple failure regions of low probability, Reliability Engineering & System Safety 131 (2014) 109 – 117.
 (4) H. Kahn, A. W. Marshall, Methods of reducing sample size in Monte Carlo computations, Journal of the Operations Research Society of America 1 (5) (1953) 263–278.
 (5) Y. Choe, H. Lam, E. Byon, Uncertainty quantification of stochastic simulation for blackbox computer experiments, Methodology and Computing in Applied Probability.
 (6) H. Dai, H. Zhang, W. Wang, A support vector densitybased importance sampling for reliability assessment, Reliability Engineering & System Safety 106 (2012) 86 – 93.
 (7) P. Glasserman, J. Li, Importance sampling for portfolio credit risk, Management Science 51 (11) (2005) 1643–1656.
 (8) S. Asmussen, K. Binswanger, B. Højgaard, Rare events simulation for heavytailed distributions, Bernoulli 6 (2) (2000) 303–322.
 (9) P. Heidelberger, Fast simulation of rare events in queueing and reliability models, ACM Transactions on Modeling and Computer Simulation (TOMACS) 5 (1) (1995) 43–85.
 (10) M. Balesdent, J. Morio, L. Brevault, Rare event probability estimation in the presence of epistemic uncertainty on input probability distribution parameters, Methodology and Computing in Applied Probability 18 (1) (2016) 197–216.
 (11) C. Gong, W. Zhou, Importance sampling-based system reliability analysis of corroding pipelines considering multiple failure modes, Reliability Engineering & System Safety 169 (2018) 199–208.
 (12) C. Chang, P. Heidelberger, S. Juneja, P. Shahabuddin, Effective bandwidth and fast simulation of ATM intree networks, Performance Evaluation 20 (1) (1994) 45–65.
 (13) J. S. Sadowsky, Large deviations theory and efficient simulation of excessive backlogs in a GI/GI/m queue, IEEE Transactions on Automatic Control 36 (12) (1991) 1383–1394.
 (14) J. Blanchet, P. Glynn, H. Lam, Rare event simulation for a slotted time M/G/s model, Queueing Systems 63 (1–4) (2009) 33–57.
 (15) J. Blanchet, H. Lam, Rare-event simulation for many-server queues, Mathematics of Operations Research 39 (4) (2014) 1142–1178.
 (16) Y. Choe, E. Byon, N. Chen, Importance sampling for reliability evaluation with stochastic simulation models, Technometrics 57 (3) (2015) 351–361.
 (17) B. Echard, N. Gayton, M. Lemaire, N. Relun, A combined importance sampling and kriging reliability method for small failure probabilities with time-demanding numerical models, Reliability Engineering & System Safety 111 (2013) 232–240.
 (18) F. Cadini, A. Gioletta, E. Zio, Improved metamodel-based importance sampling for the performance assessment of radioactive waste repositories, Reliability Engineering & System Safety 134 (2015) 188–197.
 (19) R. Rubinstein, The cross-entropy method for combinatorial and continuous optimization, Methodology and Computing in Applied Probability 1 (2) (1999) 127–190.
 (20) B.-S. Choi, J. Song, Cross-entropy-based adaptive importance sampling for probabilistic seismic risk assessment of lifeline networks considering spatial correlation, Procedia Engineering 198 (2017) 999–1006. Urban Transitions Conference, Shanghai, September 2016.
 (21) V. Dubourg, B. Sudret, F. Deheeger, Metamodel-based importance sampling for structural reliability analysis, Probabilistic Engineering Mechanics 33 (2013) 47–57.
 (22) Z. I. Botev, D. P. Kroese, R. Y. Rubinstein, P. L’Ecuyer, The cross-entropy method for optimization, Machine Learning: Theory and Applications, V. Govindaraju and C. R. Rao, Eds, Chennai: Elsevier 31 (2013) 35–59.
 (23) J. Morio, Non-parametric adaptive importance sampling for the probability estimation of a launcher impact position, Reliability Engineering & System Safety 96 (1) (2011) 178–183. Special Issue on Safecomp 2008.
 (24) R. Rubinstein, A stochastic minimum cross-entropy method for combinatorial optimization and rare-event estimation, Methodology and Computing in Applied Probability 7 (1) (2005) 5–50.
 (25) Z. I. Botev, D. P. Kroese, T. Taimre, Generalized cross-entropy methods with applications to rare-event simulation and optimization, Simulation 83 (11) (2007) 785–806.
 (26) Y. Chen, Y. Choe, Oracle importance sampling for stochastic simulation models, arXiv preprint arXiv:1710.00473.
 (27) H. Wang, X. Zhou, A cross-entropy scheme for mixtures, ACM Transactions on Modeling and Computer Simulation 25 (1) (2015) 6:1–6:20.
 (28) N. Kurtz, J. Song, Cross-entropy-based adaptive importance sampling using Gaussian mixture, Structural Safety 42 (2013) 35–44.
 (29) Y. Choe, Information criterion for minimum cross-entropy model selection, arXiv preprint arXiv:1704.04315.
 (30) H. Akaike, A new look at the statistical model identification, IEEE Transactions on Automatic Control 19 (6) (1974) 716–723.
 (31) C. Cannamela, J. Garnier, B. Iooss, Controlled stratification for quantile estimation, The Annals of Applied Statistics 2 (4) (2008) 1554–1580.
 (32) R. A. Rigby, D. M. Stasinopoulos, Generalized additive models for location, scale and shape, Journal of the Royal Statistical Society: Series C (Applied Statistics) 54 (3) (2005) 507–554.
 (33) E. Byon, L. Ntaimo, Y. Ding, Optimal maintenance strategies for wind power systems under stochastic weather conditions, IEEE Transactions on Reliability 59 (2) (2010) 393–404.
 (34) International Electrotechnical Commission, IEC/TC88, 61400-1 ed. 3, Wind Turbines – Part 1: Design Requirements (2005).
 (35) J. M. Jonkman, M. L. Buhl Jr., FAST User’s Guide, Tech. Rep. NREL/EL-500-38230, National Renewable Energy Laboratory, Golden, Colorado (2005).
 (36) B. J. Jonkman, TurbSim user’s guide: version 1.50, Tech. Rep. NREL/TP-500-46198, National Renewable Energy Laboratory, Golden, Colorado (2009).
 (37) P. Moriarty, Database for validation of design load extrapolation techniques, Wind Energy 11 (6) (2008) 559–576.
 (38) Y. Choe, Q. Pan, E. Byon, Computationally efficient uncertainty minimization in wind turbine extreme load assessments, ASME Journal of Solar Energy Engineering 138 (4) (2016) 041012–041012–8.
 (39) M. A. Figueiredo, A. K. Jain, Unsupervised learning of finite mixture models, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (3) (2002) 381–396.