Adaptive importance sampling for seismic fragility curve estimation

As part of Probabilistic Risk Assessment studies, it is necessary to study the fragility of mechanical and civil engineered structures when subjected to seismic loads. This risk can be measured with fragility curves, which express the probability of failure of the structure conditionally to a seismic intensity measure. The estimation of fragility curves relies on time-consuming numerical simulations, so that careful experimental design is required in order to gain the maximum information on the structure's fragility with a limited number of code evaluations. We propose and implement an active learning methodology based on adaptive importance sampling in order to reduce the variance of the training loss. The efficiency of the proposed method in terms of bias, standard deviation and prediction interval coverage are theoretically and numerically characterized.


1 Introduction

The notion of fragility curve was developed in the early 80s in the context of seismic probabilistic risk assessment (SPRA) Kennedy1980 ; KENNEDY198447 and performance-based earthquake engineering (PBEE) GHOBARAH2001878 . A fragility curve expresses the probability of failure of a mechanical structure subjected to an earthquake, conditionally on a specific seismic intensity measure (IM) such as the peak ground acceleration. Fragility curves are used in several domains: nuclear safety evaluation RICHARDSON198015 , estimation of the collapse risk of structures in seismic regions Eads13 , and design checking processes Padgett07 . Nonetheless, the use of fragility curves is not limited to seismic loads but extends to other loading sources such as wind and waves Quilligan12 . For complex structures, fragility curve estimation requires a large number of numerical mechanical simulations, involving in most cases nonlinear, computationally expensive calculations. Moreover, they should account for both the uncertainties due to the seismic demand and those due to the lack of knowledge of the system itself, respectively called random and epistemic uncertainties ELLINGWOOD2009 ; KIUREGHIAN2009105 . As failure of a typical, reliable mechanical structure is a rare event, the crude Monte Carlo method cannot be applied because it would require too many numerical simulations to produce a sufficiently large number of failed states (Rubinstein16, p. 27). The computational burden motivates the use of parametric models for fragility curves.

A typical approach is to assume a lognormal form, as first proposed by Kennedy et al. Kennedy1980 and then widely used (see e.g. Shinozuka00 ; ELLINGWOOD2001251 ; MANDAL201611 ; WangF2020 ). Another way to reduce the computational cost is to fit statistical metamodels to the structural response, such as kriging Gidaris15 ; Sainct20 , or polynomial chaos expansions Mai16 .

Nonetheless, for fragility curves to be useful decision-making tools in an industrial context, their estimation must require as few numerical code calls as possible while offering theoretical guarantees on the convergence of the estimates and their confidence intervals. Statistical analysis of fragility curve estimation - including hypothesis testing and confidence interval estimation - was first introduced by Shinozuka et al. in Shinozuka00 . Several works have since addressed these issues from a non-theoretical point of view (see e.g. ZENTNER20101614 ; Gehl2015 ; TREVLOPOULOS2019 ; Sainct20 ).

In this work, we propose and implement a methodology based on Adaptive Importance Sampling (AIS) Chu11 ; Deylon18 in a statistical learning context hastie_09_elements-of.statistical-learning . Such methods have already been discussed, implemented and tested for the estimation of rare-event probabilities (e.g. probabilities of failure states) in reliability analysis GONG2018199 ; Papaioannou2019 . We show by asymptotic analysis and numerical simulations that they allow for fast convergence of the estimated fragility curve to the true (unknown) fragility curve and that the uncertainty of the estimation can be rigorously quantified. The uncertainty quantification involves asymptotic confidence intervals and ellipsoids for the quantities of interest, together with statistical tests to determine whether the asymptotic regime has been reached and thus whether these asymptotic confidence intervals and ellipsoids can be used.

The methodology is applied to different test cases in the application Section 5 and compared with more traditional approaches. For the test cases, the stochastic model of modulated and filtered white-noise processes defined in Rezaeian10 is used in order to enrich a set of real ground motion records selected in a database using magnitude and distance criteria. This stochastic model is chosen because it captures both the temporal and spectral nonstationarities of real seismic signals and has already been used in Mai2017 ; Kwong15 ; Sainct20 . As in Sainct20 , 97 acceleration records selected from the European Strong Motion Database (ESMD) according to magnitude and epicentral-distance criteria are considered in order to identify the parameters of the ground-motion model; realizations of synthetic signals are then generated. Regarding the parametric approximations of the fragility curves, the classical lognormal model is used, taking into account only the variability of the ground motion. Finally, the Peak Ground Acceleration (PGA) and the Pseudo-Spectral Acceleration (PSA) are considered as IMs.

The proposed methodology relies on parametric approximations of fragility curves for any IM of interest. Although the validity of parametric models is both questionable and difficult to assess (see e.g. MANDAL201611 ; Mai2017 ; ZENTNER201754 ; TREVLOPOULOS2019 ), some numerical experiments based on the seismic responses of simple mechanical systems - i.e. systems with few degrees of freedom - suggest that the choice of an appropriate IM makes it possible to reduce the potential biases between reference fragility curves - which can be obtained by massive Monte Carlo simulations - and their parametric approximations. This point will be illustrated in the application Section 5 of this paper. In practice, the selection of an optimal IM is not a trivial matter (see e.g. HARIRIARDEBILI201667 ; Ciano2020 ) and Machine Learning techniques can be used for this purpose (e.g. Sainct20 ), knowing that the references LucoCornell07 and Padgett08 give optimality criteria for the selection of such an IM.

The paper is organized as follows. In Section 2 the estimation problem is defined for generic parametric fragility curves and any IM. Section 3 is dedicated to the AIS algorithm for seismic fragility curve estimation, including its theoretical properties and a stopping criterion to check AIS convergence. Section 4 provides the reader with a practical implementation of AIS for seismic fragility curve estimation. Finally, in Section 5, AIS performance is assessed on analytical and industrial test cases of increasing complexity.

2 Seismic fragility curve estimation: a statistical learning framework

We consider the following situation. Let 𝒳 be a compact subset of ℝ, X an 𝒳-valued random variable and S ∈ {0,1} a random label. In seismic probabilistic risk assessment, a structure is subjected to a seismic load, X is the logarithm of the intensity measure (IM) of the seismic load (we could have X = φ(IM), where φ is another increasing function of the IM, such as a Box-Cox transform boxcox ), and S is the indicator variable of the failure of the structure. The pair (X,S) has the probability distribution P over 𝒳 × {0,1}:

 P(dx,ds) = \big[ \mu(x)\,\delta_1(ds) + (1-\mu(x))\,\delta_0(ds) \big]\, p(x)\, dx , \quad (1)

where p is the marginal probability density function (pdf) of X and the fragility curve μ is the conditional expectation of S (the conditional probability of failure):

 \mu(x) = \mathbb{E}[S \mid X = x] .

The aim of the paper is to estimate μ from datapoints (X_i, S_i) that may be independent and identically distributed with the distribution P or that may be selected by a more appropriate scheme. There are numerous methods for estimating fragility curves; the interested reader is referred to Zentner2017 ; Lallemant2015 for a review of the usual existing methods. It is a classical assumption to use a parametric form for the fragility curve to tackle the need for time-consuming mechanical simulations; we thus consider the space of functions \{ f_\theta, \theta \in \Theta \}, where f_\theta is a function from 𝒳 to [0,1] for any θ in the parameter set Θ ⊂ ℝ^m. The goal is to minimize the quadratic risk:

 g(\theta) = \mathbb{E}\big[ (\mu(X) - f_\theta(X))^2 \big] ,

in order to find (provided it exists and is unique):

 \theta^* = \operatorname*{argmin}_{\theta \in \Theta} g(\theta) .

Unfortunately, only the data (X_i, S_i) are observable; we do not observe μ(X_i) directly. But considering that:

 \mathbb{E}\big[ (S - f_\theta(X))^2 \big] = \mathbb{E}\big[ \mu(X)(1-\mu(X)) \big] + \mathbb{E}\big[ (\mu(X) - f_\theta(X))^2 \big] , \quad (2)

we can observe that the minimization of g is equivalent to the minimization of r. Hence, we will consider the quadratic risk

 r(\theta) = \mathbb{E}\big[ (S - f_\theta(X))^2 \big] .

In the context of classical learning, when we observe n datapoints (X_i, S_i) drawn independently from the probability distribution P over 𝒳 × {0,1}, the expectation can be approximated by the empirical mean:

 \hat{R}_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} \big( S_i - f_\theta(X_i) \big)^2 . \quad (3)

The corresponding passive estimator (the term passive is used to highlight the absence of any particular sampling strategy) is then:

 \hat{\theta}_n = \operatorname*{argmin}_{\theta \in \Theta} \hat{R}_n(\theta) . \quad (4)
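As a concrete illustration, the passive estimator (4) takes only a few lines; this is our own sketch, not the authors' code. The lognormal-CDF family used here is the classical fragility-curve shape discussed later in Section 4.1, and all numerical values (true parameters, sample size) are purely illustrative.

```python
# Sketch of the passive estimator (4): minimize the empirical quadratic
# risk (3) over theta = (alpha, beta). Data are simulated from a known
# (hypothetical) fragility curve so that the fit can be checked.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def f_theta(x, theta):
    alpha, beta = theta
    return norm.cdf(np.log(x / alpha) / beta)   # lognormal fragility curve

def empirical_risk(theta, x, s):
    alpha, beta = theta
    if alpha <= 0 or beta <= 0:
        return np.inf                           # keep the search in the valid domain
    return np.mean((s - f_theta(x, theta)) ** 2)  # estimator (3)

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=2000)   # IMs drawn from p
mu = f_theta(x, (1.0, 0.4))                          # true curve (illustrative)
s = (rng.random(x.size) < mu).astype(float)          # binary failure labels

res = minimize(empirical_risk, x0=(1.5, 0.6), args=(x, s), method="Nelder-Mead")
alpha_hat, beta_hat = res.x
```

With enough datapoints the minimizer recovers the true parameters, but the minimized risk does not go to zero: by (2) it converges to the incompressible term E[μ(X)(1-μ(X))].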

As part of industrial applications, the label S is obtained by a computationally expensive numerical simulation of a mechanical structure subjected to a random seismic load, and X is the intensity measure of the seismic load. The seismic load itself is an inexpensive artificial seismic signal to generate, while the numerical mechanical simulations are computationally expensive.

Conversely to classical learning, active learning aims at selecting the most useful numerical experiments to be carried out in order to form the learning set. In the passive strategy, the datapoints are sampled from the original probability distribution with density p, drawn from the artificial realizations of the stochastic ground-motion model. Our active learning strategy, called Adaptive Importance Sampling (AIS), consists in drawing the datapoints from an instrumental probability distribution with pdf q that is chosen in an adaptive way. In our context, it is straightforward to use a rejection method to generate seismic loads with a desired intensity measure distribution.

The main heuristic of this procedure is to reduce the variance implied by the empirical approximation of the quadratic risk r. Importance Sampling is a classical variance reduction technique for Monte Carlo estimation used in structural reliability Zuniga2021 ; Papaioannou2019 . If the X_i are sampled with the pdf q and the S_i are the labels obtained from calls to the mechanical model of the structure subjected to the seismic loads with intensity measures X_i, then the importance sampling estimator of the empirical quadratic risk is:

 \hat{R}_{n,q}(\theta) = \frac{1}{n} \sum_{i=1}^{n} \frac{p(X_i)}{q(X_i)} \big( S_i - f_\theta(X_i) \big)^2 . \quad (5)
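Assuming the likelihood ratios p(X_i)/q(X_i) are available, estimator (5) is a one-line reweighting of the empirical risk; the function names below are our own illustration.

```python
# Sketch of the importance-sampling risk estimator (5): points X_i drawn
# from an instrumental pdf q are reweighted by the likelihood ratio p/q.
import numpy as np

def is_risk(theta, x, s, p_pdf, q_pdf, f_theta):
    w = p_pdf(x) / q_pdf(x)                 # likelihood ratios p(X_i)/q(X_i)
    return np.mean(w * (s - f_theta(x, theta)) ** 2)
```

When q = p the weights are identically 1 and (5) reduces to the plain empirical risk (3).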

In the rest of the paper, we will denote by \ell_\theta(x,s) a positive loss function, for the sake of generality. For the numerical applications, only the case of the quadratic loss \ell_\theta(x,s) = (s - f_\theta(x))^2 will be considered. The next section will focus on choosing an optimal density q in an adaptive way.

3 Adaptive Importance Sampling for seismic fragility curve estimation

3.1 Principle

The heuristic used to find a good instrumental probability distribution family is presented in Chu2011 . The first idea would be to minimize the variance of the importance sampling risk estimator (5):

 \operatorname{Var}_q\big( \hat{R}_{n,q}(\theta) \big) , \quad (6)

with respect to q within the set of all pdfs. If we denote by \tilde{\ell}_\theta the squared loss averaged over S:

 \tilde{\ell}_\theta(x) = \mu(x)\, \ell_\theta(x,1)^2 + (1 - \mu(x))\, \ell_\theta(x,0)^2 , \quad (7)

the variance of the importance sampling risk estimator (5) can be expressed as \frac{1}{n} \big( \int_{𝒳} \frac{p(x)^2}{q(x)} \tilde{\ell}_\theta(x)\, dx - r(\theta)^2 \big) and we look for

 q^*_\theta = \operatorname*{argmin}_{q} \int_{𝒳} \frac{p(x)^2}{q(x)}\, \tilde{\ell}_\theta(x)\, dx . \quad (8)

Using Lagrange multipliers, we can solve the optimization problem and find that the optimal sampling pdf is of the form

 q^*_\theta(x) \propto \sqrt{ \tilde{\ell}_\theta(x) }\; p(x) ,

which depends on μ because \tilde{\ell}_\theta depends on μ [here and below, ∝ means equality up to a multiplicative constant]. Hence an approximation step is made by replacing μ by f_\theta in (7):

 \tilde{\ell}_\theta(x) \approx f_\theta(x)\, \ell_\theta(x,1)^2 + (1 - f_\theta(x))\, \ell_\theta(x,0)^2 . \quad (9)

Hence the instrumental density becomes:

 q_\theta(x) \propto p(x) \sqrt{ f_\theta(x)\, \ell_\theta(x,1)^2 + (1 - f_\theta(x))\, \ell_\theta(x,0)^2 } . \quad (10)
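For the quadratic loss, \ell_\theta(x,1)^2 = (1 - f_\theta(x))^4 and \ell_\theta(x,0)^2 = f_\theta(x)^4, so (10) can be tabulated on a grid and normalized by quadrature. The sketch below (our own naming, with an illustrative uniform p and a toy curve f) shows one way to do this in practice.

```python
# Sketch of the instrumental density (10) for the quadratic loss:
# q_theta(x) is proportional to p(x) * sqrt(f*(1-f)^4 + (1-f)*f^4),
# tabulated on a uniform grid and normalized by a Riemann sum.
import numpy as np

def q_theta_grid(x_grid, p_vals, f_vals):
    unnorm = p_vals * np.sqrt(f_vals * (1 - f_vals) ** 4
                              + (1 - f_vals) * f_vals ** 4)
    dx = x_grid[1] - x_grid[0]                 # uniform grid spacing
    return unnorm / (unnorm.sum() * dx)        # normalize to a pdf

x = np.linspace(0.0, 1.0, 2001)
p = np.ones_like(x)                            # illustrative uniform marginal p
f = x                                          # illustrative fragility curve
q = q_theta_grid(x, p, f)
```

Note that q_\theta vanishes wherever f_\theta is 0 or 1, i.e. in the tails of the fragility curve; this is precisely what motivates the defensive mixture (29) introduced in Section 4.1.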

Note that the instrumental distribution depends on θ, the parameter we aim to estimate. The procedure for computing the AIS estimator is described in Algorithm 1. Note also that the algorithm needs to start from a certain initial parameter value. We will discuss this issue in Section 4.2.

3.2 Theoretical results

We derive some theoretical properties for the AIS estimator \tilde{\theta}_n, namely its consistency and asymptotic normality. Detailed proofs of the following theorems are given in the Appendix. In the following, if h is a function, then \dot{h}, resp. \ddot{h}, \dddot{h}, stands for the gradient, resp. the Hessian matrix, resp. the tensor of third-order derivatives, of h.

Theorem 1.

[Consistency of \tilde{\theta}_n] Assume θ* exists and is unique. Set L(x,s) = \sup_{\theta \in \Theta} \ell_\theta(x,s). Assume that Θ is a compact set and that:

 \iint L(x,s)\, P(dx,ds) < +\infty ,
 \sup_{\theta \in \Theta} \iint L(x,s)^2\, \frac{p(x)}{q_\theta(x)}\, P(dx,ds) < +\infty ,
 \forall \theta \ne \theta^* , \quad \iint \ell_\theta(x,s)\, P(dx,ds) > \iint \ell_{\theta^*}(x,s)\, P(dx,ds) .

If θ ↦ \ell_\theta(x,s) is continuous for any (x,s), then

 \tilde{\theta}_n \to \theta^* \quad \text{in probability.}
Theorem 2.

[Asymptotic normality of \tilde{\theta}_n] Assume that θ ↦ \ell_\theta(x,s) is three times differentiable at θ* for all (x,s) and that the matrix \ddot{r}(\theta^*) exists and is nonsingular. Assume that the third-order derivatives of θ ↦ \ell_\theta(x,s) are dominated in a neighborhood of θ* by a function that is integrable with respect to P. Assume also that the following conditions are satisfied:

1. The hypotheses of Theorem 1 are satisfied,

2. such that ,

3. ,

4. there exists a neighborhood of such that ,
.

Then the sequence \sqrt{n}\,(\tilde{\theta}_n - \theta^*) is asymptotically normal with mean zero and covariance matrix

 G_{\theta^*} = \ddot{r}(\theta^*)^{-1}\, V_{\theta^*}\, \big( \ddot{r}(\theta^*)^{-1} \big)^T , \quad (12)

where V_{\theta^*} is the matrix

 V_{\theta^*} = \mathbb{E}\left[ \frac{ p(X)\, \dot{\ell}_{\theta^*}(X,S)\, \dot{\ell}_{\theta^*}(X,S)^T }{ q_{\theta^*}(X) } \right] . \quad (13)
Theorem 3.

[Asymptotic normality of the loss] If the conditions of Theorem 2 are satisfied, then the sequence \sqrt{n}\,\big( \tilde{R}_n(\tilde{\theta}_n) - r(\theta^*) \big) is asymptotically normal with mean zero and variance T_{\theta^*} with

 T_{\theta^*} = \mathbb{E}\left[ \left( \frac{p(X)}{q_{\theta^*}(X)}\, \ell_{\theta^*}(X,S) - r(\theta^*) \right)^2 \right] . \quad (14)

A straightforward corollary of Theorem 2 is that, if G_{\theta^*} is nonsingular (which we assume from now on), then for any α ∈ (0,1):

 \mathbb{P}\Big( n\, (\tilde{\theta}_n - \theta^*)^T\, G_{\theta^*}^{-1}\, (\tilde{\theta}_n - \theta^*) \le q^{\chi^2(m)}_{1-\alpha} \Big) \longrightarrow 1 - \alpha ,

with q^{\chi^2(m)}_{1-\alpha} the (1-α)-quantile of the χ²(m) distribution. Remark that the matrix G_{\theta^*} depends on the unknown parameter θ*. It is thus possible to use a plug-in estimator:

 \hat{G}_n = \hat{\ddot{r}}_n(\tilde{\theta}_n)^{-1}\, \hat{V}_n(\tilde{\theta}_n)\, \big( \hat{\ddot{r}}_n(\tilde{\theta}_n)^{-1} \big)^T , \quad (16)

with

 \hat{\ddot{r}}_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} \frac{ p(X_i) }{ q_{\tilde{\theta}_{i-1}}(X_i) }\, \ddot{\ell}_\theta(X_i, S_i) , \quad (17)

 \hat{V}_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} \frac{ p(X_i)^2 }{ q_\theta(X_i)\, q_{\tilde{\theta}_{i-1}}(X_i) }\, \dot{\ell}_\theta(X_i, S_i)\, \dot{\ell}_\theta(X_i, S_i)^T , \quad (18)

and \ddot{\ell}_\theta the Hessian of \ell_\theta(x,s) with respect to the variable θ. The following lemma establishes that \hat{G}_n \to G_{\theta^*} in probability; it is proved in Appendix E.

Lemma 1.

[Uniform convergence of ] Set and and assume that:

1. .

2. .

3. , , .

4. .

Then the sequence \hat{G}_n converges to G_{\theta^*} in probability.

By Theorem 2, \sqrt{n}\,(\tilde{\theta}_n - \theta^*) converges in distribution to \mathcal{N}(0, G_{\theta^*}). Using Slutsky's lemma, we have the following convergence in distribution:

 n\, (\tilde{\theta}_n - \theta^*)^T\, \hat{G}_n^{-1}\, (\tilde{\theta}_n - \theta^*) \longrightarrow \chi^2(m) . \quad (19)

Equation (19) allows us to define asymptotic confidence ellipsoids, which will be made precise in Section 3.3.
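A short sketch of how the convergence (19) is used in practice: given the plug-in matrix \hat{G}_n of (16), a candidate θ lies inside the asymptotic level-(1-α) confidence region when the quadratic form below is under the χ²(m) quantile. The function and its interface are our own illustration.

```python
# Sketch of the ellipsoid membership test implied by (19).
import numpy as np
from scipy.stats import chi2

def in_confidence_ellipsoid(theta, theta_n, G_n, n, alpha=0.05):
    d = np.asarray(theta, dtype=float) - np.asarray(theta_n, dtype=float)
    stat = n * d @ np.linalg.solve(G_n, d)   # n (theta-theta_n)^T G_n^{-1} (theta-theta_n)
    return stat <= chi2.ppf(1 - alpha, df=d.size)
```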

3.3 Stopping criterion using a statistical hypothesis test

The estimation of the generalization error without a validation set is often based on Cross-Validation. When AIS is used, however, the data points are no longer independent and identically distributed. We propose instead a stopping criterion that ensures that asymptotic normality has been reached, so that the asymptotic confidence ellipsoids determined above can then be used, providing reliable confidence ellipsoids for the quantities of interest.

Theorem 4.

[Centered asymptotic normality of the loss gradient] Assume the same hypotheses as in Theorem 2 and:

1. .

Consider two independent datasets generated with the AIS strategy. Let \tilde{R}_{n,j} be the weighted loss for the j-th dataset, j = 1, 2, defined as in (11). Denote:

 \tilde{\theta}_{n,j} = \operatorname*{argmin}_{\theta \in \Theta} \tilde{R}_{n,j}(\theta) , \quad j = 1, 2 .

Then we have:

 \sqrt{n}\, \Big( \dot{\tilde{R}}_{n,1}(\tilde{\theta}_{n,2}) - \dot{\tilde{R}}_{n,2}(\tilde{\theta}_{n,1}) \Big) \xrightarrow{\;\mathcal{L}\;} \mathcal{N}\big( 0,\; 8\, V(q_{\theta^*}, \dot{\ell}_{\theta^*}) \big)

as n → ∞.

Denote

 \hat{W}_n = \frac{n}{8}\, \Big( \dot{\tilde{R}}_{n,1}(\tilde{\theta}_{n,2}) - \dot{\tilde{R}}_{n,2}(\tilde{\theta}_{n,1}) \Big)^T \hat{V}_{n,12}^{-1} \Big( \dot{\tilde{R}}_{n,1}(\tilde{\theta}_{n,2}) - \dot{\tilde{R}}_{n,2}(\tilde{\theta}_{n,1}) \Big) , \quad (20)

 \hat{V}_{n,12} = \frac{1}{2} \Big( \hat{V}_{n,1}(\tilde{\theta}_{n,1}) + \hat{V}_{n,2}(\tilde{\theta}_{n,2}) \Big) , \quad (21)

with \hat{V}_{n,j} the empirical estimator of equation (18) for the j-th AIS dataset, j = 1, 2. By Theorem 4 and Slutsky's lemma, \hat{W}_n converges weakly to the χ²(m) distribution. It is therefore possible to define a stopping criterion inspired by statistical test theory to check the asymptotic normality of the estimator. Our stopping criterion is equivalent to the hypothesis test:

 (H_0) : \hat{W}_n \text{ follows } \chi^2(m) \quad \text{against} \quad (H_1) : \hat{W}_n \text{ does not follow } \chi^2(m) . \quad (22)

For α ∈ (0,1), we then consider the statistical test which rejects (H_0) if:

 \hat{W}_n > q^{\chi^2(m)}_{1-\alpha} , \quad (23)

where q^{\chi^2(m)}_{1-\alpha} denotes the (1-α)-quantile of the χ²(m) distribution. Hence, this statistical test is of asymptotic level α. An apparent drawback of this stopping criterion is that it doubles the computational cost, due to the necessity of having two independent AIS estimators \tilde{\theta}_{n,1} and \tilde{\theta}_{n,2} to compute \hat{W}_n. It is, however, possible to use the estimator

 \tilde{\theta}_{n,12} = \frac{ \tilde{\theta}_{n,1} + \tilde{\theta}_{n,2} }{2} , \quad (24)

which has an asymptotic variance that is half that of \tilde{\theta}_{n,1} and \tilde{\theta}_{n,2}, as shown by the following proposition. There is, therefore, no increase in the computational cost.

Theorem 5.

The sequence \sqrt{n}\,(\tilde{\theta}_{n,12} - \theta^*) is asymptotically normal with mean zero and covariance matrix G_{\theta^*}/2.

Using Equation (19), it is possible to define the asymptotic confidence ellipsoid for the parameter at level 1-α:

 \hat{E}_{n,\alpha} = \Big\{ \theta : (\theta - \tilde{\theta}_{n,12})^T\, \hat{G}_{n,12}^{-1}\, (\theta - \tilde{\theta}_{n,12}) \le q^{\chi^2(m)}_{1-\alpha} \Big\} , \quad (25)

with

 \hat{G}_{n,12} = \frac{ \hat{\ddot{r}}_{n,12}^{-1}\, \hat{V}_{n,12}\, \big( \hat{\ddot{r}}_{n,12}^{-1} \big)^T }{2} , \quad (26)

 \hat{\ddot{r}}_{n,12} = \frac{ \hat{\ddot{r}}_{n,1}(\tilde{\theta}_{n,1}) + \hat{\ddot{r}}_{n,2}(\tilde{\theta}_{n,2}) }{2} , \quad (27)

where \hat{V}_{n,12} is defined by (21), and the estimators \hat{\ddot{r}}_{n,j} correspond to the estimator defined in Equation (17) computed with the j-th AIS sample. Once the stopping criterion is fulfilled, we can claim that asymptotic normality has been reached and the asymptotic confidence ellipsoid (25) can be used.
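The stopping rule (22)-(23) can be sketched as follows: from the two cross-evaluated loss gradients of Theorem 4 and the plug-in matrix (21), form the statistic of (20) and stop once it falls under the χ²(m) quantile. The function and argument names are our own illustration.

```python
# Sketch of the stopping criterion (22)-(23): build W_n of (20) from the
# cross-evaluated gradients and compare it to the chi-square quantile.
import numpy as np
from scipy.stats import chi2

def ais_should_stop(grad_1_at_2, grad_2_at_1, V_12, n, alpha=0.05):
    d = grad_1_at_2 - grad_2_at_1
    w_n = (n / 8.0) * d @ np.linalg.solve(V_12, d)   # statistic (20)
    return w_n <= chi2.ppf(1 - alpha, df=d.size)     # accept (H_0): stop
```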

4 Practical implementation of AIS and comparison with a Random Sampling strategy

This section is primarily dedicated to explaining how to apply AIS to a real case study of seismic fragility curve estimation. For the sake of clarity, we separate the implementation issue (Section 4.1) from the initialization one (Section 4.2). In addition, in order to demonstrate the superiority of the AIS strategy in comparison with a Random Sampling (RS) strategy, performance metrics are presented in Section 4.3.

4.1 Practical implementation

This section is dedicated to explaining how to apply AIS to a real case study of seismic fragility curve estimation. The classical parametric shape for a seismic fragility curve is f_\theta(x) = \Phi\big( \ln(x/\alpha)/\beta \big), with Φ the cumulative distribution function (cdf) of the standard normal distribution and θ = (α, β) Kennedy1980 . Remark that the corresponding parameter set is not compact. From an engineering perspective, it is possible to bound α and β from above and to bound α away from zero. However, the neighborhood of zero for β is problematic and thus, inspired by Bayesian inference theory Box1973 , we consider a regularization to tackle this issue. The AIS loss in Algorithm 1 is replaced by:

 \tilde{R}_{n,\mathrm{reg}}(\theta; \beta_{\mathrm{reg}}) = \frac{1}{n} \sum_{i=1}^{n} \frac{ p(X_i) }{ q_{\tilde{\theta}_{i-1}}(X_i) } \big( S_i - f_\theta(X_i) \big)^2 + \frac{ \Omega(\theta; \beta_{\mathrm{reg}}) }{n} . \quad (28)

This penalization is motivated by the intrinsic difficulty of estimating the slope of the lognormal model when β is small Keller15 . Fragility curves with small β are hard to distinguish due to the convergence towards a degenerate deterministic fragility curve. This phenomenon is magnified with a small-sized sample. Remarking that the penalization term \Omega(\theta; \beta_{\mathrm{reg}})/n vanishes as n grows, the convergence results for the AIS estimator are preserved. The regularization only affects the estimation for small values of β. The regularization hyperparameter \beta_{\mathrm{reg}} is determined using cross-validation; its computation will be detailed in Section 4.2.

Moreover, using AIS with the optimal instrumental density q_θ directly could increase the variance if this density has light tails. We therefore finally propose an Adaptive Defensive and Regularized Importance Sampling (ADRIS) strategy, in the spirit of Owen00 ; Hesterberg95 . The instrumental density becomes

 \tilde{q}_{\theta,\varepsilon}(x) = \varepsilon\, p(x) + (1 - \varepsilon)\, q_\theta(x) , \quad (29)

with ε ∈ (0,1) a mixing parameter between the original marginal pdf p and the instrumental pdf q_θ, meaning that, on average, a fraction ε of the elements are drawn from the pdf p. This distribution allows us to bound the likelihood ratio:

 \frac{ p(x) }{ \tilde{q}_{\theta,\varepsilon}(x) } = \frac{1}{ \varepsilon + (1-\varepsilon)\, \frac{q_\theta(x)}{p(x)} } < \frac{1}{\varepsilon} .

Thus the defensive strategy bounds the variance even if the likelihood ratio is large. In the numerical applications, we sample the seismic signals with the desired seismic intensity measures from a very large dataset of precomputed seismic signals distributed according to the original distribution p, called the unlabeled pool. The number of seismic signals in the pool is very large compared to the number of seismic signals used for fragility curve estimation, so that the theoretical results are still supposed valid. For ADRIS, the seismic signals are first sampled at random from the unlabeled pool, then an acceptance-rejection algorithm is used to sample with the instrumental density \tilde{q}_{\theta,\varepsilon}; the likelihood ratios are computed using Simpson's quadrature.
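The pool-based sampling step described above can be sketched as a standard acceptance-rejection construction: candidates from the unlabeled pool (assumed i.i.d. with density p) are accepted with probability proportional to \tilde{q}_{\theta,\varepsilon}(x)/p(x). The function and its arguments are our own illustration.

```python
# Sketch of acceptance-rejection sampling of the defensive mixture (29)
# from an unlabeled pool assumed i.i.d. with density p. `ratio` must
# return the normalized instrumental ratio q_theta(x)/p(x).
import numpy as np

def sample_from_mixture(pool_x, ratio, eps, n, rng):
    r = eps + (1.0 - eps) * ratio(pool_x)            # mixture ratio q~/p
    accept = rng.random(pool_x.size) < r / r.max()   # envelope = max ratio
    idx = np.flatnonzero(accept)[:n]                 # keep the first n accepted
    return pool_x[idx]
```

When the ratio is constant (q_θ = p) every candidate is accepted, and the method degenerates to plain random sampling from the pool, as expected.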

4.2 Initialization of ADRIS and RS strategies

4.2.1 Initialization of the ADRIS algorithm

A key aspect of the ADRIS procedure is the initialization parameter. As expected, the closer the initial parameter is to the true parameter, the faster ADRIS reaches the asymptotic normal regime. A naive approach is to get a small sample from the original marginal density p of X and to compute the passive learning estimator (equation (4)). This crude estimation gives a sufficient initial parameter to start ADRIS.

A better approach is to consider a metamodel - in the broad sense - of the mechanical simulation. As often done by practitioners, a numerical resolution based on a modal-basis projection can be implemented to get an estimate of the fragility curve corresponding to the linear behavior of the structure of interest. It is then possible to get a huge amount of datapoints from the reduced model (e.g. an independent and identically distributed sample of pairs (X_i, S_i), where X_i is sampled with the original pdf p and S_i is the associated label obtained with the reduced model) in order to get a precise enough initial parameter. Statistical metamodels could also be used, such as Gaussian Processes ECHARD2013232 or Support Vector Machines Sainct20 .

In our applications, reduced models are only used to give us prior knowledge on the fragility curve shape, encapsulated in the initial parameter of the ADRIS procedure. We then initialize ADRIS with a small sample of datapoints drawn with the instrumental density (equation (29)). The regularization parameter is then determined by minimizing the Leave-One-Out error on this initialization sample; we denote by \tilde{\beta}_{\mathrm{reg}} the corresponding regularization parameter.

4.2.2 Initialization of the RS algorithm

The RS estimator is initialized with datapoints sampled from the original distribution p; the penalization parameter is also computed using Leave-One-Out cross-validation, and the corresponding penalization parameter value is denoted by \hat{\beta}_{\mathrm{reg}}.

4.3 Performance metrics of the numerical benchmarks

The next section aims at assessing ADRIS performance in comparison with the RS strategy on test cases. For this, we introduce here performance metrics, inspired by Chabridon2017 ; Morio2015 , to assess the quality of our active learning procedure. In the following, we denote by \hat{\theta}_n the RS estimator and by \tilde{\theta}_n the ADRIS estimator.

• the Relative Standard Deviation

 \widetilde{\mathrm{RSD}}_n = \frac{ \sqrt{ \mathbb{V}\big[ \tilde{R}_{n,\mathrm{reg}}(\tilde{\theta}_n; \tilde{\beta}_{\mathrm{reg}}) \big] } }{ \mathbb{E}\big[ \tilde{R}_{n,\mathrm{reg}}(\tilde{\theta}_n; \tilde{\beta}_{\mathrm{reg}}) \big] } , \qquad \widehat{\mathrm{RSD}}_n = \frac{ \sqrt{ \mathbb{V}\big[ \hat{R}_{n,\mathrm{reg}}(\hat{\theta}_n; \hat{\beta}_{\mathrm{reg}}) \big] } }{ \mathbb{E}\big[ \hat{R}_{n,\mathrm{reg}}(\hat{\theta}_n; \hat{\beta}_{\mathrm{reg}}) \big] } ,

where \mathbb{V} denotes the variance and \mathbb{E} the expectation.

• the Relative Bias

 \mathrm{RB}_n = \frac{ \big| \mathbb{E}\big[ \tilde{R}_{n,\mathrm{reg}}(\tilde{\theta}_n; \tilde{\beta}_{\mathrm{reg}}) \big] - \mathbb{E}\big[ \hat{R}_{n,\mathrm{reg}}(\hat{\theta}_n; \hat{\beta}_{\mathrm{reg}}) \big] \big| }{ \mathbb{E}\big[ \tilde{R}_{n,\mathrm{reg}}(\tilde{\theta}_n; \tilde{\beta}_{\mathrm{reg}}) \big] } .
• the efficiency

 \nu_n = \frac{ \mathbb{V}\big[ \hat{R}_{n,\mathrm{reg}}(\hat{\theta}_n; \hat{\beta}_{\mathrm{reg}}) \big] }{ \mathbb{V}\big[ \tilde{R}_{n,\mathrm{reg}}(\tilde{\theta}_n; \tilde{\beta}_{\mathrm{reg}}) \big] } .

A value of \nu_n greater than 1 shows that ADRIS has a smaller loss variance than RS. For simplicity, we will only show the value of \nu_n for the maximal sample size on each test case. The mean and variance above can be computed empirically using replications of the two procedures (ADRIS and RS) when the test cases are simple enough that many replicates of the ADRIS and RS samples can be run.

These metrics are also defined with the empirical testing error:

 Q(\theta; \beta_{\mathrm{reg}}) = \frac{1}{n_t} \sum_{i=1}^{n_t} \big( S^{(t)}_i - f_\theta(X^{(t)}_i) \big)^2 + \frac{ \Omega(\theta; \beta_{\mathrm{reg}}) }{ n_t } ,

where \big( X^{(t)}_i, S^{(t)}_i \big)_{i=1,\dots,n_t} is a testing set, independent and identically distributed with the original distribution P. The performance metrics are thus also defined with the testing error, by replacing \tilde{R}_{n,\mathrm{reg}} or \hat{R}_{n,\mathrm{reg}} with Q evaluated at the corresponding parameters \tilde{\theta}_n, \tilde{\beta}_{\mathrm{reg}} or \hat{\theta}_n, \hat{\beta}_{\mathrm{reg}}.
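Given replicated final losses for the two strategies, the three metrics above reduce to a few lines; the array-based interface is our own sketch.

```python
# Sketch of the performance metrics of Section 4.3, computed empirically
# from replications of the final losses of the ADRIS and RS strategies.
import numpy as np

def performance_metrics(loss_adris, loss_rs):
    loss_adris, loss_rs = np.asarray(loss_adris), np.asarray(loss_rs)
    rsd_adris = np.std(loss_adris) / np.mean(loss_adris)   # relative std dev
    rsd_rs = np.std(loss_rs) / np.mean(loss_rs)
    rb = abs(np.mean(loss_adris) - np.mean(loss_rs)) / np.mean(loss_adris)
    nu = np.var(loss_rs) / np.var(loss_adris)   # efficiency: nu > 1 favors ADRIS
    return rsd_adris, rsd_rs, rb, nu
```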

5 Numerical results

To evaluate ADRIS efficiency, a numerical benchmark has been performed on three test cases of increasing complexity:
1) a synthetic test case with known fragility curve and known probability distribution of the seismic log-intensity measure,
2) a nonlinear elastoplastic oscillator with kinematic hardening subjected to synthetic signals generated from the modulated and filtered white-noise ground-motion model Rezaeian10 , as in Sainct20 ,
3) an industrial test case of a nuclear facility's pipeline system, subjected to the same artificial signals.
The oscillator test case aims to evaluate the effectiveness of the ADRIS strategy before its application to the industrial test case, which is numerically much more costly. Moreover, since it represents well the essential features of the nonlinear responses of a large variety of real structures subjected to earthquakes, this test case allows us to determine the value of the defensive hyperparameter ε thanks to a numerical benchmark, because there is no ad hoc procedure to do this.

5.1 A synthetic test case

Here we benchmark our methodology while having full knowledge of the true fragility curve. We generate 30,000 datapoints with a known fragility curve f_{\theta^*}. The original marginal distribution of X is here a Gaussian distribution. The unlabeled pool consists of 20,000 datapoints; the remaining 10,000 datapoints form our validation set for testing-error estimation, using crude Monte Carlo. Figure 1 shows (i) the target fragility curve in dashed red line, (ii) a kernel density estimation of the density p based on the whole dataset in green, and (iii) a kernel density estimation based on the 120 datapoints obtained by ADRIS in red. Figure 2 plots the training error and testing error for replications of the RS and ADRIS algorithms. The algorithms are initialized with a small number of datapoints; the remaining datapoints are extracted using ADRIS or RS from the unlabeled dataset.

As depicted in Figure 2 and Table 1, ADRIS does not seem to reduce the training error. This is expected, because ADRIS selects earthquakes whose intensity measures maximize \tilde{\ell}_\theta, which can be seen as a marginalized training-loss variance of the observation. In other words, as illustrated in Figure 1 with the instrumental density, ADRIS selects difficult points - typically values of x for which the fragility curve takes values strictly between 0 and 1 - and therefore the training error can be large because it is not representative of the generalization error. The RS and ADRIS strategies really distinguish themselves on the testing error, which is smaller for ADRIS than for RS. Moreover, ADRIS quickly converges to the known bias. Finally, the efficiency \nu_n shows that the testing error's confidence interval is markedly smaller for ADRIS than for RS after the same number of iterations.

5.2 A nonlinear oscillator

This test case aims to validate the overall strategy developed in this work on a simple but representative case. The goal is to analyze empirically the performance of ADRIS on this test case, which will not be possible for complex structures such as the one addressed in Section 5.3. This section is therefore particularly comprehensive, going from the initialization of the ADRIS algorithm up to the numerical verification of the theorems, by way of the choice of the defensive parameter ε.

5.2.1 Presentation of the oscillator

This second test case - illustrated in Figure 3 - is that of a single degree of freedom elasto-plastic oscillator which exhibits kinematic hardening. It was previously used as a simple mechanical model for fragility curve estimation in TREVLOPOULOS2019 ; Sainct20 . Its equation of motion is:

 \ddot{z}(t) + 2\beta\omega_L\, \dot{z}(t) + f_{NL}(t) = -s(t) ,

with s(t) an artificial seismic signal, \dot{z} and \ddot{z} respectively the velocity and acceleration of the unit mass of the oscillator, β the damping ratio and ω_L the pulsation of the oscillator. Moreover, the nonlinear force f_{NL} is governed by two parameters: the post-yield stiffness and the yield displacement Y. We also define the maximal displacement z_{\max} = \max_{t \in [0,T]} |z(t)|, where T is the duration of the seismic excitation. The failure state for this nonlinear oscillator is defined by the {0,1}-valued variable S = \mathbb{1}_{\{ z_{\max} > C \}}, where the threshold C is chosen so that failure is a rare event, which means that the underlying structure is "rather well designed" with respect to the seismic scenario considered.
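A minimal time-stepping sketch of such an oscillator (not the authors' code): we write the restoring force in the standard bilinear form f_NL = a ω_L² z + (1-a) ω_L² w, where a is the post-yield stiffness ratio and the hysteretic displacement w is clipped at the yield displacement Y, which produces kinematic hardening; all parameter values are illustrative.

```python
# Sketch of a unit-mass bilinear elasto-plastic oscillator with kinematic
# hardening, integrated with an explicit Euler scheme (illustrative only).
import numpy as np

def max_displacement(signal, dt, w_l=2 * np.pi, beta=0.05, a=0.1, y=0.01):
    z = v = w = 0.0                  # displacement, velocity, hysteretic part
    z_max = 0.0
    for acc in signal:               # ground-acceleration samples s(t)
        f_nl = a * w_l**2 * z + (1.0 - a) * w_l**2 * w
        v += (-acc - 2.0 * beta * w_l * v - f_nl) * dt
        dz = v * dt
        z += dz
        w = min(max(w + dz, -y), y)  # clip at the yield displacement
        z_max = max(z_max, abs(z))
    return z_max

# Failure label: S = 1 if the maximal displacement exceeds a threshold C.
```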

5.2.2 Initialisation of the ADRIS procedure

In this test case, for ADRIS initialization, we use the underlying linear elastic oscillator as a cheap model. The initialization parameter is approximated by the passive estimator (equation (4)) computed on a dataset obtained with this cheap model. In addition, the PGA is first considered as IM. Even if the PGA is not known to be the best indicator, doing so helps to verify the relevance of the methodology in a "less favorable" case. Note that the influence of the IM on the results will be discussed in Section 5.2.7. As shown in Figure 4, the initial parameter can be considered "close" to the true parameter. Thus, datapoints are queried on the nonlinear oscillator with the instrumental density (equation (29)) before launching the adaptive strategy.

5.2.3 Choice of ε

The choice of the defensive parameter value ε is cumbersome and there is no direct methodology for it. A benchmark on different values of ε is proposed in Table 2: remark that the test efficiency barely changes between the intermediate values, but it is smaller for the largest value of ε, meaning that this value is too conservative. Accordingly, all the results hereafter are presented with a fixed defensive parameter ε.

5.2.4 Stopping criterion

Figure 5 shows the value of the test statistic \hat{W}_n for two independent ADRIS realizations. This motivates stopping the ADRIS algorithm once the value of \hat{W}_n falls below the (1-α)-quantile of the χ²(m) distribution, at which point we can claim that asymptotic normality has been reached.

5.2.5 Check of performances

In order to check the performance of the ADRIS algorithm, an unlabeled training set and an independent testing set of synthetic seismic signals are used. The benchmark study consists of replications of the ADRIS and RS strategies with the same number of sampled seismic signals.

Figure 6 and Table 3 compare the ADRIS and RS training and testing errors. The mean training loss over the replications is higher for ADRIS than for RS: indeed, the instrumental density is chosen to sample seismic signals that maximize the loss variance, resulting in a high training error. Remark that the ADRIS performance on the testing error is much better in terms of efficiency: the ADRIS confidence intervals for the testing error are thus substantially smaller than those of RS after the same number of iterations. Moreover, the mean test error of ADRIS is also significantly smaller than for random sampling and quickly converges to the "minimal" error related to the term E[μ(X)(1-μ(X))] in (2).

The mean fragility curve and the associated confidence intervals are shown in Figure 7. The parametric fragility curve estimated with a large dataset of seismic ground motions and the nonparametric fragility curve estimated by k-means clustering on the PGA, also with a large dataset of seismic ground motions, are shown as well, in order to validate both the model choice and the uncertainty reduction provided by ADRIS.

Figure 8 helps to visualize how ADRIS reduces the uncertainty of the fragility curve estimation: ADRIS is designed to sample seismic ground motions in the transition zone of the fragility curve, between failure probabilities 0 and 1, and this is responsible for the uncertainty reduction.