1 Introduction
Uncertainty quantification (UQ) plays an increasingly important role in the computational analysis of physical and engineering systems, particularly in performance prediction, risk analysis and decision making. Generally speaking, there are two types of problems in UQ. The first is so-called forward UQ, also named uncertainty propagation, in which the many sources of uncertainty in system inputs and/or parameters are propagated through the computational model to analyze and predict the overall uncertainty in the system response. The second, inverse UQ, aims to infer uncertainty in a model or its parameters from measured data or simulations.
The general application of UQ includes aspects of both forward and inverse UQ. Often, this starts from the Bayesian inference of uncertainty from measured data and proceeds to the propagation of those uncertainties through a computational model using Monte Carlo simulation or some other approach. Finally, the response outputs are systematically analyzed and assessed. One challenge that may arise in this process occurs when data collection and simulation occur concurrently. In such cases, forward UQ must be performed initially from limited data and then updated as additional data are collected. More specifically, Bayesian inference is used to define initial probability distributions for system parameters/inputs from limited data. These distributions are propagated through the model using Monte Carlo simulation. When new data are collected, Bayesian inference is conducted once more and the distributions are updated. But when the distributions are updated, the corresponding Monte Carlo simulations must also be updated. Given the computational expense of Monte Carlo simulation, it is undesirable (and potentially infeasible) to conduct an entirely new study from the updated probabilities.
In this work, we discuss four strategies to update Monte Carlo simulations to accommodate a measure change in the input distributions. The objective is to minimize the number of additional calculations/simulations required while maintaining a high level of statistical confidence in the Monte Carlo results. We specifically compare the widely used importance sampling reweighting strategy ONeill2009 ; fetz2016imprecise ; pmlrv62troffaes17a ; zhang2018mssp with three new approaches based on augmenting the original sample set, filtering the original sample set, and a combined augmenting and filtering approach. The four approaches are systematically compared for various types of measure changes including cases where the support of the distribution changes. These examples serve to show that the importance sampling reweighting is effective when the change in distribution is modest and the support does not increase. The proposed combined augmenting and filtering approach, on the other hand, is robust and efficient in the sense that it minimizes the number of additional simulations needed even when the change in distribution is significant. However, augmenting or filtering alone are highly inefficient and are not recommended in practice. The updating strategy is then put into practice for a plate buckling analysis given ongoing data collection efforts to quantify the distribution of the material yield stress.
2 Bayesian inference and probability model selection
Let $X$ be a random variable defined on a complete probability space $(\Omega, \mathcal{F}, P)$ where $\Omega$ is the sample space of events, $\mathcal{F}$ is the sigma-algebra, and $P$ is a probability measure. The objective of Bayesian inverse UQ is to infer a probability measure from a given dataset $\mathbf{d}$. The measure is assumed to follow a parametric form defined through a probability model $M$ having parameters $\theta$.
Here, Bayesian inference occurs in two stages. First, the model form is inferred. Given a collection of candidate models $\{M_1, M_2, \ldots, M_{N_M}\}$ with parameters $\{\theta_1, \theta_2, \ldots, \theta_{N_M}\}$ and associated prior probabilities $P(M_k)$, $k = 1, \ldots, N_M$, with $\sum_{k=1}^{N_M} P(M_k) = 1$, Bayes' rule is applied to determine posterior model probabilities given the data $\mathbf{d}$ as:

(1) $P(M_k \mid \mathbf{d}) = \dfrac{p(\mathbf{d} \mid M_k)\, P(M_k)}{\sum_{j=1}^{N_M} p(\mathbf{d} \mid M_j)\, P(M_j)}$

where

(2) $p(\mathbf{d} \mid M_k) = \displaystyle\int p(\mathbf{d} \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, d\theta_k$

is the marginal likelihood or evidence of model $M_k$.
One challenge in determining $P(M_k \mid \mathbf{d})$ is that Eq. (2) can be difficult to evaluate. Several methods have been proposed to estimate the evidence such as Laplace's method konishi2008 and information theoretic approximations akaike1974new ; schwarz1978 ; hurvich1989regression . A detailed discussion of the evidence calculation can be found in ZHANG2018cmame2 . For simplicity, in this work we employ a Monte Carlo estimator given by
(3) $p(\mathbf{d} \mid M_k) \approx \dfrac{1}{N_s} \sum_{i=1}^{N_s} p(\mathbf{d} \mid \theta_k^{(i)}, M_k)$

in which samples $\theta_k^{(i)}$ are drawn from the parameter prior distribution $p(\theta_k \mid M_k)$ and $N_s$ is the number of samples.
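As a concrete sketch of this estimator, the snippet below approximates the evidence of a toy model — data assumed i.i.d. normal with unknown mean and an assumed flat prior on that mean. All names and values here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def evidence_mc(data, prior_sampler, n_samples=20_000):
    """Monte Carlo estimate of the evidence p(d | M): the prior-average of the likelihood."""
    thetas = prior_sampler(n_samples)
    # log-likelihood of i.i.d. data under N(mu, 1) for each prior draw
    log_liks = np.array([np.sum(np.log(normal_pdf(data, mu, 1.0))) for mu in thetas])
    m = log_liks.max()                        # log-sum-exp trick for numerical stability
    return np.exp(m) * np.mean(np.exp(log_liks - m))

data = rng.normal(0.3, 1.0, size=10)          # hypothetical measured data
prior = lambda n: rng.uniform(-5, 5, size=n)  # assumed flat prior on the mean
ev = evidence_mc(data, prior)
print(ev)
```

Each candidate model would receive such an evidence estimate, and Eq. (1) then converts the evidences into posterior model probabilities.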
The model $M^*$ with maximum a posteriori probability (MAP) according to Eq. (1) is selected, and the second stage of the inference is to identify the parameters $\theta^*$ of this model. Again, Bayes' rule is applied such that the posterior parameter probabilities given the model and data can be computed from the prior probabilities $p(\theta^* \mid M^*)$ as

(4) $p(\theta^* \mid \mathbf{d}, M^*) = \dfrac{p(\mathbf{d} \mid \theta^*, M^*)\, p(\theta^* \mid M^*)}{p(\mathbf{d} \mid M^*)}$

Here, the model evidence $p(\mathbf{d} \mid M^*)$ does not need to be evaluated explicitly, as $p(\theta^* \mid \mathbf{d}, M^*)$ can be estimated implicitly from samples drawn using the Markov Chain Monte Carlo (MCMC) method.
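For illustration, a minimal random-walk Metropolis sampler — one possible MCMC scheme; the paper does not specify which variant it uses — applied to the posterior of a normal mean under an assumed flat prior:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(log_post, theta0, n_steps=20_000, step=0.5):
    """Random-walk Metropolis: draws samples from an unnormalized log-posterior."""
    theta, lp = theta0, log_post(theta0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy inference: mean of N(mu, 1) from 50 observations, flat prior on mu,
# so log p(mu | d) = log-likelihood + const.
data = rng.normal(1.0, 1.0, 50)
chain = metropolis(lambda mu: -0.5 * np.sum((data - mu) ** 2), theta0=0.0)
post_mean = chain[5000:].mean()                       # discard burn-in
print(post_mean)
```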
From the posterior parameter density $p(\theta^* \mid \mathbf{d}, M^*)$, the precise parameters are estimated using a MAP estimate, which is closely related to the method of maximum likelihood (ML) estimation but employs an augmented optimization objective that incorporates a prior distribution over the quantity one wants to estimate. MAP estimates $\hat{\theta}_{MAP}$ as the mode of the posterior probability measure gauvain1994maximum ; Bassett2018 :

(5) $\hat{\theta}_{MAP} = \arg\max_{\theta^*}\, p(\theta^* \mid \mathbf{d}, M^*) = \arg\max_{\theta^*}\, p(\mathbf{d} \mid \theta^*, M^*)\, p(\theta^* \mid M^*)$

Note that the MAP estimate of $\theta^*$ coincides with the maximum likelihood estimate when the prior probability measure is uniform (that is, a constant function).
In an effort where data collection and simulation occur concurrently, as motivated here, it is straightforward to apply this two-stage Bayesian inference approach to update both the probability model and the associated parameters as new data are collected.
3 Methods for changing measure in Monte Carlo simulation
The Monte Carlo method is used to estimate the probabilistic response of the system $h(X)$ from the deterministic response of the system evaluated at independent statistical samples of $X$. Specifically, given $N$ samples $\{x_1, x_2, \ldots, x_N\}$ drawn independently from the density $p(x)$, the expected value of $h(X)$ can be estimated using Monte Carlo simulation by

(6) $E_p[h(X)] = \displaystyle\int_{\mathcal{X}_p} h(x)\, p(x)\, dx \approx \dfrac{1}{N} \sum_{i=1}^{N} h(x_i) = \tilde{h}_p$

where the subscript $p$ denotes expectation with respect to $p(x)$ and $\mathcal{X}_p$ is the support of $p(x)$.
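In code, this estimator is just a sample mean of the response over draws from $p(x)$. The sketch below uses an illustrative response $h(x) = x^2$ and $p = N(0,1)$, for which the exact expectation is $E_p[h(X)] = 1$:

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_expectation(h, sampler, N=100_000):
    """Plain Monte Carlo estimate of E_p[h(X)] from N i.i.d. draws of p."""
    x = sampler(N)
    return np.mean(h(x))

est = mc_expectation(lambda x: x ** 2, lambda n: rng.normal(0.0, 1.0, n))
print(est)   # close to E[X^2] = 1 for X ~ N(0, 1)
```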
The challenge addressed here is that, as additional data are collected, the probability measure inferred from Bayesian updating may change considerably. That is, if the probability density inferred from the initial Bayesian inference is $p(x)$, the updated probability density may be given by $q(x)$ where $q(x) \neq p(x)$. Moreover, if $\mathcal{X}_p$ is the support of $p(x)$ and $\mathcal{X}_q$ is the support of $q(x)$, then in general $\mathcal{X}_p \not\subseteq \mathcal{X}_q$ and $\mathcal{X}_q \not\subseteq \mathcal{X}_p$. That is, $p(x)$ and $q(x)$ do not necessarily have the same support.
What is to be done then, when Monte Carlo simulation has already been performed using the density $p(x)$ but the density of $X$ has been updated (or changed) to $q(x)$? Certainly, it is undesirable to start again and perform a new Monte Carlo analysis using density $q(x)$. How can the results of the original Monte Carlo analysis be leveraged for Monte Carlo analysis on the updated distribution? In this section, we discuss four different strategies for updating Monte Carlo simulations for such a change in measure. Formally stated, consider that Monte Carlo analysis has been used to estimate $E_p[h(X)]$ using (6). Using the results from Eq. (6), and perhaps some additional samples, we show four strategies to obtain a Monte Carlo estimator for $E_q[h(X)]$. The efficiency of each strategy is compared with the prospect of a new Monte Carlo simulation on the updated density, and the ultimate aim is to achieve the change in distribution with a minimal number of added simulations.
3.1 Importance sampling reweighting
Perhaps the most attractive option is to simply reweight the existing samples drawn from $p(x)$ to convert an evenly weighted Monte Carlo estimator on $p(x)$ into an unevenly weighted estimator on $q(x)$. This can be achieved using the principles of importance sampling as follows. Consider the desired expected value with respect to $q(x)$, $E_q[h(X)]$, which can be written as:

(7) $E_q[h(X)] = \displaystyle\int_{\mathcal{X}_q} h(x)\, q(x)\, dx = \int_{\mathcal{X}_q} h(x)\, \dfrac{q(x)}{p(x)}\, p(x)\, dx = E_p\!\left[h(X)\, \dfrac{q(X)}{p(X)}\right]$

where, again, $E_p[\cdot]$ is the expectation with respect to $p(x)$. Defining weights $w(x_i) = q(x_i)/p(x_i)$ (referred to as the importance weights), the importance sampling estimate of $E_q[h(X)]$ can be expressed as

(8) $\tilde{h}_q = \dfrac{1}{N} \sum_{i=1}^{N} w(x_i)\, h(x_i)$

Thus, the new estimate is obtained simply from the original estimate by applying sample weights $w(x_i)$.
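A small sketch of the reweighting, for an assumed change from $p = N(0,1)$ to $q = N(0.25, 1)$ — a modest shift chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

N = 100_000
x = rng.normal(0.0, 1.0, N)                              # existing samples from p = N(0, 1)
w = normal_pdf(x, 0.25, 1.0) / normal_pdf(x, 0.0, 1.0)   # importance weights w = q/p

# Reweighted estimate of E_q[h(X)] for h(x) = x^2; exact value is 1 + 0.25^2 = 1.0625.
estimate = np.mean(w * x ** 2)
print(estimate)
```

No new evaluations of $h$ are required; the existing responses are simply reweighted.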
This option is attractive because it does not require any additional simulations. But, depending on the change in measure, it can have a strong adverse effect on the estimator. That is, if $p(x)$ and $q(x)$ have large discrepancy, the quality of $\tilde{h}_q$ may be much less than the quality of $\tilde{h}_p$. To formalize this, note that the variance of the importance sampling estimator is given by:

(9) $\mathrm{Var}\left(\tilde{h}_q\right) = \dfrac{1}{N}\, \mathrm{Var}_p\!\left[h(X)\, \dfrac{q(X)}{p(X)}\right] = \dfrac{1}{N}\left(\displaystyle\int_{\mathcal{X}_q} \dfrac{h^2(x)\, q^2(x)}{p(x)}\, dx - E_q[h(X)]^2\right)$

From Eq. (9), it is apparent that we must have $\mathcal{X}_q \subseteq \mathcal{X}_p$. This is a well-known requirement of importance sampling, as $w(x) \to \infty$ when $p(x) = 0$ and $q(x) > 0$. In such cases importance sampling cannot be used.

Another challenge arises when $p(x) \to 0$ faster than $q(x)$, usually in the tails of the distribution. When this occurs, $w(x) \to \infty$ as $p(x) \to 0$ and the variance of the importance sampling estimator grows, yielding a poor IS estimator. In such cases, the importance sampling density is said to be degenerate.
A useful measure for the quality of an importance sampling estimator is the so-called Effective Sample Size ($ESS$), formally defined as the ratio of the estimator variances times the number of samples:

(10) $ESS = N\, \dfrac{\mathrm{Var}\left(\tilde{h}_{q,MC}\right)}{\mathrm{Var}\left(\tilde{h}_q\right)}$

where $\mathrm{Var}(\tilde{h}_q)$ is the variance of the importance sampling estimator of $E_q[h(X)]$ with samples drawn from $p(x)$ given by Eq. (8), and $\mathrm{Var}(\tilde{h}_{q,MC})$ is the variance of the classical Monte Carlo estimator of $E_q[h(X)]$ with samples drawn from $q(x)$ itself. It is rarely feasible to calculate $ESS$ directly, but several approximations have been proposed as discussed in martino2017effective . The most common approximation is given by

(11) $\widehat{ESS} = \dfrac{1}{\sum_{i=1}^{N} \bar{w}_i^2}$

which, as shown by martino2017effective , is related to the $\ell_2$ norm of the discrepancy between the normalized importance sampling weights $\bar{w}_i = w(x_i)/\sum_{j=1}^{N} w(x_j)$ and the uniform Monte Carlo weights $1/N$. It follows from Eq. (11) that the effective sample size decreases as the weights become degenerate (some very large weights and some very small weights), with $\widehat{ESS} = N$ for uniform weights and $\widehat{ESS} \to 1$ as a single weight dominates. The effective sample size in Eq. (11) will be the primary metric for assessing importance sampling quality throughout.
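The approximation in Eq. (11) is one line of code once the weights are normalized; the sketch below also shows how degenerate weights collapse the ESS:

```python
import numpy as np

def effective_sample_size(w):
    """ESS approximation of Eq. (11): inverse sum of squared normalized weights."""
    w_bar = np.asarray(w, dtype=float)
    w_bar = w_bar / w_bar.sum()          # normalize the importance weights
    return 1.0 / np.sum(w_bar ** 2)

N = 10_000
ess_uniform = effective_sample_size(np.ones(N))   # uniform weights: ESS = N
w = np.ones(N)
w[0] = 1e6                                        # one dominant weight
ess_degenerate = effective_sample_size(w)         # ESS collapses toward 1
print(ess_uniform, ess_degenerate)
```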
3.2 Augmenting sample sets
When the conditions for importance sampling reweighting are not ideal (i.e. the effective sample size is small) or are simply not met (i.e. a change in support), it may be necessary to augment the existing sample set with additional samples and perform further simulations. In this section, a simple method is introduced for augmenting a sample set of $N$ samples drawn from $p(x)$ with $N_a$ new samples such that the augmented sample set of $N + N_a$ samples follows $q(x)$.
Consider the updated probability density $q(x)$ as a mixture distribution (convex combination) of the original distribution $p(x)$ and a new, as yet unknown, distribution $g(x)$ as:

(12) $q(x) = a\, p(x) + b\, g(x)$

where $a$ and $b$ are constants subject to constraints discussed below. Solving for $g(x)$ yields:

(13) $g(x) = \dfrac{q(x) - a\, p(x)}{b}$

To be a valid density, $g(x)$ must satisfy $g(x) \ge 0\ \forall x$ and $\int g(x)\, dx = 1$. The first condition implies:

(14) $a \le \min_x \dfrac{q(x)}{p(x)}$

while the second condition states:

(15) $b = 1 - a$

Hence, the constants $a$ and $b$ are not unique, which implies that several distributions $g(x)$ can be selected that satisfy Eq. (12). For reasons that will become clear shortly, we select the constants such that $b$ is minimized. This yields

(16) $a^* = \min_x \dfrac{q(x)}{p(x)}, \qquad b^* = 1 - a^*$

yielding the "augmentation density" or "correction density"

(17) $g(x) = \dfrac{q(x) - a^*\, p(x)}{1 - a^*}$
Hence, when sampled in correct proportion, samples from $p(x)$ and $g(x)$ will combine to form a sample set that follows $q(x)$.

Returning to our initial problem, we have $N$ samples drawn from $p(x)$ and we wish to draw an additional $N_a$ samples from $g(x)$ such that the total set of $N + N_a$ samples follows $q(x)$. How many samples, $N_a$, are required to match $q(x)$? According to the composition method for generation of random variables law2015simulation , the distributions $p(x)$ and $g(x)$ should be sampled in proportions $a$ and $b$ respectively in the general case given in Eq. (12). Applying the values in Eq. (16) implies

(18a) $\dfrac{N}{N + N_a} = a^*$

(18b) $\dfrac{N_a}{N + N_a} = 1 - a^*$

Solving Eq. (18a) for $N_a$ yields

(19) $N_a = N\, \dfrac{1 - a^*}{a^*}$

From Eq. (19), it is clear that defining the mixture with any value $a < a^*$ will require a larger number of augmented samples.

Similar to importance sampling reweighting, this approach has notable shortcomings. Most notably, Eq. (16) requires that $q(x) > 0$ wherever $p(x) > 0$. That is, the support of $q(x)$ must cover the full support of $p(x)$. When this condition is satisfied, the primary drawback of this approach is that $N_a$ can become very large. As expected, when $q(x) \to p(x)$ (i.e. $a^* \to 1$), $N_a \to 0$. However, when $q(x) \to 0$ faster than $p(x)$ anywhere in the support, $a^* \to 0$, $N_a \to \infty$ and the approach is not viable.
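The sketch below carries out this construction for an assumed widening from $p = N(0,1)$ to $q = N(0,1.25)$, for which $a^* = \min_x q(x)/p(x) = 1/1.25 = 0.8$ (attained at $x = 0$) and hence $N_a = N(1-a^*)/a^* = N/4$. The correction density $g$ is sampled here by rejection from $q$ — one convenient choice among several:

```python
import numpy as np

rng = np.random.default_rng(4)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p = lambda x: normal_pdf(x, 0.0, 1.0)     # original density
q = lambda x: normal_pdf(x, 0.0, 1.25)    # updated (wider) density

# a* = min_x q/p, Eq. (16), found here on a grid; N_a from Eq. (19).
grid = np.linspace(-10, 10, 200_001)
a_star = np.min(q(grid) / p(grid))
N = 10_000
N_a = int(round(N * (1 - a_star) / a_star))

x_p = rng.normal(0.0, 1.0, N)             # existing samples from p

# Sample the correction density g = (q - a* p)/(1 - a*) by rejection:
# draw x ~ q and keep it with probability 1 - a* p(x)/q(x), which is
# proportional to q(x) - a* p(x).
x_g = []
while len(x_g) < N_a:
    cand = rng.normal(0.0, 1.25)
    if rng.uniform() < 1.0 - a_star * p(cand) / q(cand):
        x_g.append(cand)

combined = np.concatenate([x_p, np.array(x_g)])   # N + N_a samples following q
print(a_star, N_a, combined.std())
```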
3.3 Filtering sample sets
Another alternative that does not require additional simulations is to "filter" the existing samples drawn from $p(x)$ according to the well-known acceptance/rejection (A/R) method. To apply A/R, let us scale the original density $p(x)$ to serve as a majorizing function for $q(x)$. Recall that the majorizing function, $t(x)$, by definition must envelope $q(x)$ such that $t(x) \ge q(x)\ \forall x$ and, in general, $t(x)$ is not a probability density. Letting $t(x) = c\, p(x)$, we have $\int t(x)\, dx = c$. To satisfy the conditions on $t(x)$ requires that

(20) $c\, p(x) \ge q(x) \quad \forall x$

which can be satisfied by setting

(21) $c = \max_x \dfrac{q(x)}{p(x)}$

Applying the A/R method, we then accept samples $x_i$ with probability

(22) $P_a(x_i) = \dfrac{q(x_i)}{c\, p(x_i)}$

Notice that when $c$ is large, the acceptance rate will decline and a larger proportion of the original samples will be rejected.

Applying A/R, the overall proportion of accepted samples, $N_f$, to the original $N$ samples drawn from $p(x)$ is given by

(23) $\dfrac{N_f}{N} = \dfrac{1}{c}$

As in the previous cases, we note that the A/R method has the following limitations. Most notably, the method requires $\mathcal{X}_q \subseteq \mathcal{X}_p$. That is, the approach is not viable if the support of the updated density is larger than the support of the original density. Notice also that, again, when $q(x) \to p(x)$ (i.e. $c \to 1$), $N_f \to N$, which means that no samples will be rejected. However, when $p(x) \to 0$ faster than $q(x)$, $c \to \infty$, $N_f \to 0$ and the approach is not viable.
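A compact sketch of the A/R filter for an assumed narrowing from $p = N(0,1)$ to $q = N(0,0.8)$, where $c = \max_x q(x)/p(x) = 1.25$ and roughly $N/c = 80\%$ of the samples survive:

```python
import numpy as np

rng = np.random.default_rng(5)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p = lambda x: normal_pdf(x, 0.0, 1.0)    # original density
q = lambda x: normal_pdf(x, 0.0, 0.8)    # updated (narrower) density

grid = np.linspace(-10, 10, 200_001)
c = np.max(q(grid) / p(grid))            # scaling constant of Eq. (21); 1.25 here

N = 10_000
x = rng.normal(0.0, 1.0, N)              # existing samples from p
accept = rng.uniform(size=N) < q(x) / (c * p(x))   # accept w.p. q/(c p), Eq. (22)
x_kept = x[accept]                       # follows q; about N/c samples survive

print(c, x_kept.size / N, x_kept.std())
```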
3.4 Augmenting and filtering sample sets
The final approach combines the benefits of the augmenting and filtering approaches while also addressing their limitations. Consider the original density $p(x)$ and the updated density $q(x)$ such as those shown in Figure 1. For illustration, it is convenient to define the function $\delta(x) = q(x) - p(x)$ having the property $\delta(x) > 0$ when $q(x) > p(x)$ and $\delta(x) \le 0$ when $q(x) \le p(x)$, as also illustrated in Figure 1.

It is convenient to partition the total support $\mathcal{X} = \mathcal{X}_p \cup \mathcal{X}_q$ according to the supports $\mathcal{X}^+ = \{x : \delta(x) > 0\}$ and $\mathcal{X}^- = \{x : \delta(x) \le 0\}$ with $\mathcal{X}^+ \cup \mathcal{X}^- = \mathcal{X}$ and $\mathcal{X}^+ \cap \mathcal{X}^- = \emptyset$. The point(s) satisfying $\delta(x) = 0$ ($p(x) = q(x)$) represent the point(s) at which the support is divided into $\mathcal{X}^+$ and $\mathcal{X}^-$. The region $\mathcal{X}^-$, where $q(x) \le p(x)$, is referred to as the filtered region while the region $\mathcal{X}^+$, where $q(x) > p(x)$, is called the augmented region. Next, define

(24a) $P^+ = \displaystyle\int_{\mathcal{X}^+} p(x)\, dx$

(24b) $P^- = \displaystyle\int_{\mathcal{X}^-} p(x)\, dx$

(24c) $Q^+ = \displaystyle\int_{\mathcal{X}^+} q(x)\, dx$

(24d) $Q^- = \displaystyle\int_{\mathcal{X}^-} q(x)\, dx$
According to this partitioning, the distributions $p(x)$ and $q(x)$ can be expressed as mixture distributions:

(25a) $p(x) = P^+\, p^+(x) + P^-\, p^-(x)$

(25b) $q(x) = Q^+\, q^+(x) + Q^-\, q^-(x)$

where

(26a) $p^+(x) = p(x)/P^+, \quad x \in \mathcal{X}^+$

(26b) $p^-(x) = p(x)/P^-, \quad x \in \mathcal{X}^-$

(26c) $q^+(x) = q(x)/Q^+, \quad x \in \mathcal{X}^+$

(26d) $q^-(x) = q(x)/Q^-, \quad x \in \mathcal{X}^-$

and each is zero elsewhere.
The general idea of the proposed approach is to augment the samples in the region $\mathcal{X}^+$ with new samples and filter the samples in region $\mathcal{X}^-$. According to the mixture distributions in Eq. (25) and given that $\mathcal{X}^+$ and $\mathcal{X}^-$ are disjoint, we can sample independently over the support partitions $\mathcal{X}^+$ and $\mathcal{X}^-$.
Let us focus on $\mathcal{X}^+$ first. The objective is to obtain a set of samples that follows $q^+(x)$ from Eq. (25b). Let us treat $q^+(x)$ as a mixture distribution defined by:

(27) $q^+(x) = \dfrac{P^+}{Q^+}\, p^+(x) + \left(1 - \dfrac{P^+}{Q^+}\right) g^+(x)$

where the "correction" distribution $g^+(x)$ is defined by:

(28) $g^+(x) = \dfrac{q(x) - p(x)}{Q^+ - P^+}, \quad x \in \mathcal{X}^+$

It is straightforward to show that Eqs. (27) and (28) are consistent and that $g^+(x)$ is a valid pdf with $g^+(x) \ge 0$ and $\int_{\mathcal{X}^+} g^+(x)\, dx = 1$.
Given a set of $N$ samples following $p(x)$, we have $N^+$ samples following $p^+(x)$ and $N^-$ samples following $p^-(x)$ with $N^+ = N P^+$, $N^- = N P^-$, and $N^+ + N^- = N$. Over $\mathcal{X}^+$, we propose to add $N_a$ samples from $g^+(x)$ such that the combined distribution of the $N^+ + N_a$ samples follows $q^+(x)$. According to the mixture distribution in Eq. (27), this requires on average a proportion $P^+/Q^+$ of samples from $p^+(x)$ and $1 - P^+/Q^+$ from $g^+(x)$. Therefore, the required number of added samples is given by

(29) $N_a = N^+\, \dfrac{Q^+ - P^+}{P^+} = N\, (Q^+ - P^+)$

It follows directly from the proof of the composition method that the augmented sample set follows $q^+(x)$. Moreover, it is interesting to note that the number of new samples required to correct the distribution scales as half the total variation distance between the original and the updated densities given by:

(30) $d_{TV}(p, q) = \displaystyle\int_{\mathcal{X}} \left| q(x) - p(x) \right| dx = 2\, (Q^+ - P^+)$

That is, $N_a = \frac{N}{2}\, d_{TV}(p, q)$. Proof of Eq. (30) can be found in Appendix A.
Next, consider the range of support $\mathcal{X}^-$ (i.e. where $q(x) \le p(x)$). The objective is to obtain a set of samples that follows $q^-(x)$ from the mixture model in Eq. (25b). Given a set of $N$ samples following $p(x)$, again we have $N^+$ samples following $p^+(x)$ and $N^-$ samples following $p^-(x)$ with $N^+ = N P^+$, $N^- = N P^-$, and $N^+ + N^- = N$.

Let us define $t(x) = \dfrac{P^-}{Q^-}\, p^-(x)$ as a majorizing function for $q^-(x)$ and sample according to the A/R method. Note that $t(x) \ge q^-(x)\ \forall x \in \mathcal{X}^-$ – proof of which is straightforward. The A/R method proceeds as follows. Each of the $N^-$ samples from $p^-(x)$ is accepted with probability $q^-(x_i)/t(x_i) = q(x_i)/p(x_i)$. On average, this will yield a set of $N Q^-$ samples. Note that the number of rejected samples is given by

(31) $N_r = N^- - N Q^- = N\, (P^- - Q^-) = N\, (Q^+ - P^+) = N_a$

That is, the same number of samples are rejected in region $\mathcal{X}^-$ as are added in $\mathcal{X}^+$ (with that number being proportional to the total variation distance), which keeps the total sample size constant on average.

The proposed method has been proven to modify a sample set to follow the updated density $q^+(x)$ over the range $\mathcal{X}^+$ by adding samples according to a carefully defined "correction" distribution (see Eq. (28)) and to follow $q^-(x)$ over the range $\mathcal{X}^-$ by filtering the existing sample set according to the A/R method with a carefully defined majorizing function. Moreover, the samples from $\mathcal{X}^+$ and $\mathcal{X}^-$ are proven to satisfy the correct proportions ($N Q^+$ and $N Q^-$, respectively) so as to follow the mixture distribution given by Eq. (25b). It therefore follows that the combined sample set follows the updated distribution $q(x)$.
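The full procedure can be sketched in a few lines for an assumed shift from $p = N(0,1)$ to $q = N(0.5,1)$ (illustrative densities, not one of the paper's cases). Samples in $\mathcal{X}^-$ are kept with probability $q/p$; samples from $g^+ \propto q - p$ are generated here by rejection from $q$; and the number added equals $N(Q^+ - P^+)$:

```python
import numpy as np

rng = np.random.default_rng(6)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p = lambda x: normal_pdf(x, 0.0, 1.0)    # original density
q = lambda x: normal_pdf(x, 0.5, 1.0)    # updated density (shifted mean)

N = 10_000
x = rng.normal(0.0, 1.0, N)              # existing samples from p

# Filtered region X^- = {q <= p}: keep each sample with probability q/p (<= 1 there).
in_minus = q(x) <= p(x)
keep = ~in_minus | (rng.uniform(size=N) < q(x) / p(x))
x_kept = x[keep]

# Number to add: N (Q+ - P+) = (N/2) * integral of |q - p|, Eqs. (29)-(30),
# evaluated here on a grid.
grid = np.linspace(-10, 10, 200_001)
dx = grid[1] - grid[0]
half_tv = 0.5 * np.sum(np.abs(q(grid) - p(grid))) * dx
N_add = int(round(N * half_tv))

# Augmented region X^+ = {q > p}: draw from g+ ∝ (q - p) by rejection from q,
# keeping candidates in X^+ with probability 1 - p/q.
added = []
while len(added) < N_add:
    cand = rng.normal(0.5, 1.0)
    if q(cand) > p(cand) and rng.uniform() < 1.0 - p(cand) / q(cand):
        added.append(cand)

x_new = np.concatenate([x_kept, np.array(added)])   # follows q; size ≈ N on average
print(half_tv, N_add, x_new.size, x_new.mean())
```

Note how the rejections in $\mathcal{X}^-$ and the additions in $\mathcal{X}^+$ balance, so the total sample size stays near $N$.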
3.5 Discussion of efficiency and degeneracy
The method proposed in Section 3.4 has considerable advantages over the methods in Sections 3.1–3.3. It eliminates the degeneracy problems present in all of the other methods, dramatically reduces the number of additional simulations required when compared to the purely augmented sample sets presented in Section 3.2, and dramatically reduces the number of rejected samples when compared to the purely filtered sample sets presented in Section 3.3.
Consider first the degeneracy issue. Using the method proposed in Section 3.4, there are no restrictions on the supports of $p(x)$ and $q(x)$ as there are in the other methods. In the augmented region $\mathcal{X}^+$, where $q(x) > p(x)$, the correction distribution $g^+(x) \propto q(x) - p(x)$ is always well defined: it is simply equal to a scaled version of $q(x)$ when $p(x) = 0$ and, because $q(x) > p(x)$ on $\mathcal{X}^+$, it remains positive when $p(x) > 0$. Moreover, the number of added samples in $\mathcal{X}^+$ shown in Eq. (29) is stable (proportional to the total variation distance) and cannot cause the sample size to more than double. That is, in the most extreme case where $p(x)$ and $q(x)$ have disjoint support, $N_a = N$. This is in contrast to the purely augmented case, where the number of added samples depends on the minimum of the ratio $q(x)/p(x)$ and goes to infinity under certain conditions (discussed in Section 3.2).
In the filtered region $\mathcal{X}^-$, where $q(x) \le p(x)$, again the method proposed in Section 3.4 places no restrictions on the supports of $p(x)$ and $q(x)$. When $p(x) = 0$, $q(x)$ must also be zero and, when $p(x) > 0$, the acceptance probability $q(x)/p(x)$ is always well defined. Also, the number of rejected samples in $\mathcal{X}^-$ given in Eq. (31) is stable (again proportional to the total variation distance) and equal to the number of samples added in the augmented region. In contrast, the acceptance rate for the purely filtered method in Section 3.3 converges to zero under certain conditions.
Finally, as will be shown in the following section, the method proposed in Section 3.4 proves far more efficient than the other methods in that it requires the addition of fewer samples than the purely augmented case and rejects fewer samples than the purely filtered case. This is directly related to the nature of the scaling. The number of added/rejected samples in both the purely augmented and purely filtered cases scales with the maximum ratio of the densities, which can be very large. The method proposed in Section 3.4, however, scales with half the total variation distance between the densities, which is bounded. Therefore, as previously stated, the method will never add more than $N$ samples and it keeps the sample size constant on average.
Of course, if the support of the distributions allows it and the effective sample size is satisfactorily large, the importance sampling reweighting option affords the benefit that no new simulations are necessary. However, if the supports of the distributions are different or if the penalty in effective sample size is not acceptable, the proposed method provides an efficient alternative that often requires only a small number of correction samples.
These approaches are explored in further detail in the following section.
4 Numerical illustrations
In this section, we illustrate the performance of these four strategies for two support relationships between the original distribution and the updated distribution. The first section considers distributions with common support and the second considers distributions with changing support. The benefits and limitations of each proposed strategy are compared and discussed for representative numerical examples.
4.1 Distributions with common support
Consider $N = 10{,}000$ random samples drawn from an original probability density $p(x)$. Here, we explore the performance of the various sample adjustment strategies for five updated probability densities $q(x)$ with different shapes and locations but identical support, as follows and illustrated in Fig. 2. The objective, in each case, is to maintain a Monte Carlo sample size as close as possible to the original 10,000 samples with minimal additional samples.

Case 1: Normal distribution with a small shift –

Case 2: Normal distribution with a larger shift –

Case 3: Wider normal distribution –

Case 4: Narrower normal distribution –

Case 5: Multimodal mixture distribution –
The results for each sampling strategy applied to all five cases are summarized in Table 1 and a discussion of each case follows.

Table 1: Performance of each updating strategy for the five common-support cases ($N_a$ = added samples, $N_r$ = rejected samples, $N_f$ = retained samples).

Case | IS: ESS | Augment: $N_a$ | Augment: total | Filter: $N_r$ | Filter: $N_f$ | Mixed: $N_a$ | Mixed: total
1 | 9608 | 12214 | 22214 | 5322 | 4678 | 796 | 10000
2 | 3717 | 796882 | 806882 | 9660 | 340 | 3812 | 10000
3 | 4626 | 5000 | 15000 | 9784 | 216 | 1843 | 10000
4 | 6614 | 3.63E+13 (→ ∞) | 3.63E+13 (→ ∞) | 5001 | 4999 | 3227 | 10000
5 | 7115 | 1.17E+8 (→ ∞) | 1.17E+8 (→ ∞) | 5731 | 4269 | 2735 | 10000
Case 1: Normal distribution with a small shift – In this case, the difference between the original and updated distributions is very small. Consequently, importance sampling is effective for reweighting the original samples generated from $p(x)$. The corresponding effective sample size, $ESS = 9608$, remains quite large. A purely augmenting strategy requires a large number of additional samples, $N_a = 12{,}214$, to be drawn from the correction distribution $g(x)$ (black dashed line in Figure 3a). As a result, the total number of samples is more than double the number of original samples. This is not considered a viable option. A purely filtering strategy, on the other hand, does not come at any additional computational cost, but this strategy eliminates $N_r = 5322$ samples, or approximately half the original samples. Although the reduced sample set fits the updated probability density well, as shown in Figure 3b, the smaller sample size reduces the accuracy of statistical estimates from Monte Carlo simulation appreciably. Figure 3c presents the mixed augmenting and filtering strategy, where the crossing point $p(x) = q(x)$ again illustrates the separation between the augmented and filtered regions. In this case, only an additional $N_a = 796$ samples are required to maintain a sample size of 10,000, and the samples match the updated distribution accurately.
Best Option: Importance Sampling Reweighting – Although augmenting and filtering requires only a relatively small number of new samples ($N_a = 796$), importance sampling reweighting requires no new calculations and maintains a large effective sample size ($ESS = 9608$ samples).
Case 2: Normal distribution with a large shift – A large shift between $p(x)$ and $q(x)$ leads to large variation in the importance weights, which yields a poor importance sampling estimator with an effective sample size of only $ESS = 3717$ (Table 1). The performance of the other three strategies is shown in Figure 4. The number of samples generated from the augmenting strategy (Figure 4a), $N_a = 796{,}882$ (Table 1), is unacceptably large. The filtering strategy, meanwhile, removes more than 96% of the original samples and retains only 340 samples, as shown in Figure 4b, which is not nearly enough for Monte Carlo simulation. The mixed augmenting and filtering strategy, as shown in Figure 4c, requires $N_a = 3812$ added samples to maintain the full 10,000 sample size, and while it is not ideal to perform an additional ~3800 calculations, it is preferable to the other options.
Best Option: Mixed Augmenting and Filtering
Case 3: Wider distribution – Significantly widening the distribution results in undersampling the tails of the updated distribution. This will cause importance sampling to place large weight on samples in the tails of the original distribution. Consequently, the importance sampling weights have high variance, which yields a poor estimator with small effective sample size, $ESS = 4626$ (Table 1). Regarding the augmenting strategy, many original samples can be effectively retained such that new samples only need to be added in the tails, as the correction distribution in Figure 5a shows. Nonetheless, $N_a = 5000$, which is still quite large. The filtering strategy, meanwhile, retains only about 2% of the original samples, leaving only 216 samples as illustrated in Figure 5b. The mixed strategy is much more efficient as it adds/filters 1843 samples (Figure 5c and Table 1), which is less than 20% of the original number, in order to maintain a sample size of 10,000.
Best Option: Mixed Augmenting and Filtering
Case 4: Narrower distribution – Narrowing the distribution results in the tails of the updated distribution being oversampled. For importance sampling, this is better than undersampling the tails; it more evenly distributes the sample weights, resulting in an effective sample size $ESS = 6614$. While larger than Cases 2 and 3, this is still only about 66% of the total number of samples and will result in a fair, but not great, estimator. It is easy to show that the augmenting strategy requires infinite additional samples since $q(x) \to 0$ faster than $p(x)$ in the tails, such that $a^* = \min_x q(x)/p(x) \to 0$. This is reflected in Table 1 as $N_a = 3.63\mathrm{E}{+}13$, which is finite only due to numerical discretization of $p(x)$ and $q(x)$ in implementation. It is, therefore, not a viable option. The filtering strategy, as shown in Figure 6a, remains viable as $c = \max_x q(x)/p(x)$ is bounded; it retains approximately half of the original samples, thus performing slightly better than in Cases 2 and 3. The mixed strategy, by comparison, as shown in Fig. 6b, requires 3227 new samples to maintain the sample size of 10,000.
Best Option: Mixed augmenting and filtering OR Importance sampling reweighting – If additional simulations are affordable, the mixed strategy is preferred. If these additional simulations are not affordable, importance sampling reweighting is appropriate.
Case 5: Unimodal to multimodal distribution – This case represents an extreme where the distribution radically changes form. In this specific example (Figure 7), the multimodal updated distribution has tails that are slightly oversampled by the original distribution, making it similar to Case 4. But the central region of the distribution is also oversampled by the original distribution, similar to Case 3. These two facts govern the importance sampling reweighting, yielding an effective sample size $ESS = 7115$. Because it is similar in the tails to Case 4, it is again straightforward to show that $a^* = \min_x q(x)/p(x) \to 0$, thus requiring infinite additional samples to match the updated distribution according to the augmenting strategy. This is reflected in Table 1 by $N_a = 1.17\mathrm{E}{+}8$ samples which, again, results from numerical discretization of the pdf. The filtering strategy, as shown in Figure 7a, retains only 4269 (about 43%) of the original samples. For the mixed strategy, shown in Figure 7b, this example illustrates that the regions $\mathcal{X}^+$ and $\mathcal{X}^-$ may be complex. Here, there are filtering regions ($\mathcal{X}^-$, with $q(x) \le p(x)$) in both tails and the center of the distribution, separated by augmenting regions ($\mathcal{X}^+$, with $q(x) > p(x)$). The resulting number of additional calculations, $N_a = 2735$, is reasonable in comparison to the other strategies.
Best Option: Mixed augmenting and filtering OR Importance sampling reweighting – If additional simulations are affordable, the mixed strategy is preferred. If these additional simulations are not affordable, importance sampling reweighting is appropriate.
4.2 Distributions with changing support
When the support changes from the original distribution to the updated distribution, the resampling process may be more complicated. Here, we explore several such examples based on the following three general classes of support bounds:

Infinite support: $\mathcal{X} = (-\infty, \infty)$, e.g. normal distribution.

Bounded support: $\mathcal{X} = [x_l, x_u]$, e.g. beta distribution.

Semi-infinite support: $\mathcal{X} = [x_l, \infty)$ or $\mathcal{X} = (-\infty, x_u]$, e.g. lognormal distribution.

We specifically select a representative normal, beta, and lognormal distribution for these three support conditions, as shown in Figure 8.
We further select these three distributions to have identical mean. A total of six cases are considered such that each permutation of original and updated distribution is studied. The results for each resampling strategy are given in Table 2 and discussed below.

Table 2: Performance of each updating strategy for the six changing-support cases ($N_a$ = added samples, $N_r$ = rejected samples, $N_f$ = retained samples).

Case | Original | Updated | IS: ESS | Augment: $N_a$ | Augment: total | Filter: $N_r$ | Filter: $N_f$ | Mixed: $N_a$ | Mixed: total
1 | Normal | Beta | 9218 | N/A | N/A | 4266 | 5734 | 1046 | 10000
2 | Normal | Lognormal | 9012 | N/A | N/A | 9406 | 594 | 989 | 10000
3 | Beta | Normal | N/A | 7811 | 17811 | N/A | N/A | 728 | 10000
4 | Beta | Lognormal | N/A | 5.19E+22 (→ ∞) | 5.19E+22 (→ ∞) | N/A | N/A | 1522 | 10000
5 | Lognormal | Normal | 6125 | N/A | N/A | 9718 | 282 | 1875 | 10000
6 | Lognormal | Beta | N/A | N/A | N/A | N/A | N/A | 931 | 10000
Case 1: Infinite to bounded support – If the original distribution $p(x)$ has infinite support while the updated distribution $q(x)$ is bounded, as shown in Figure 9, i.e. $\mathcal{X}_q \subset \mathcal{X}_p$, importance sampling reweighting is effective with a large effective sample size, $ESS = 9218$. It is capable of assigning zero weight to all samples outside $\mathcal{X}_q$ and requires only minor nonuniformity for the remaining samples. The augmenting strategy, however, does not work given this support relationship because the support of $q(x)$ does not cover the full support of $p(x)$. The filtering strategy can be used here, but nearly half of the total original samples are removed, with $N_r = 4266$. The mixed strategy is capable of adding samples where needed and removing all samples outside $\mathcal{X}_q$. It is quite efficient and only requires $N_a = 1046$ additional samples to maintain the 10,000 sample size.
Best Option: Importance Sampling Reweighting
Case 2: Infinite to semi-infinite support – For importance sampling reweighting, this case exhibits similar performance to Case 1 with a slightly smaller effective sample size, $ESS = 9012$. Again it can assign zero weight to those samples not in $\mathcal{X}_q$ and requires only small changes in the weights for the other samples. Also, the augmenting strategy cannot be applied because the support of $q(x)$ does not cover the full support of $p(x)$. The filtering strategy is not a viable option because it removes $N_r = 9406$ samples, leaving the Monte Carlo sample size very small. The mixed augmenting and filtering strategy is very effective here, requiring only $N_a = 989$ samples to maintain the 10,000 sample size.
Best Option: Importance Sampling Reweighting
Case 3: Bounded to infinite support – When $\mathcal{X}_p \subset \mathcal{X}_q$, importance sampling reweighting and the filtering approach cannot be applied. The original samples do not span $\mathcal{X}_q$ and therefore must be supplemented. In this case, the purely augmenting strategy requires $N_a = 7811$ additional samples as shown in Figure 11b and Table 2. The mixed augmenting and filtering, by contrast, requires only $N_a = 728$ additional samples – see Figure 11c. This is clearly the preferred approach.
Best Option: Mixed Augmenting and Filtering
Case 4: Bounded to semi-infinite support – As in Case 3, importance sampling reweighting and the filtering approach are not applicable here because $\mathcal{X}_q \not\subseteq \mathcal{X}_p$. Furthermore, the augmenting strategy theoretically requires infinite added samples because the lognormal $q(x)$ decays exponentially while the beta $p(x)$ decays polynomially near the edge of the support, driving $\min_x q(x)/p(x)$ to zero. This is reflected in Table 2 by an unrealistically large number of added samples ($N_a = 5.19\mathrm{E}{+}22$) that results from numerical discretization of the pdf. Therefore, the mixed augmenting and filtering strategy is the only viable option. This strategy is efficient, requiring only $N_a = 1522$ new samples as illustrated in Figure 12 and Table 2.
Best Option: Mixed Augmenting and Filtering
Case 5: Semi-infinite to infinite support – In this case, as in Cases 1 and 3, the support of one distribution is strictly contained in that of the other. But the effective sample size for importance sampling reweighting, $ESS = 6125$, is considerably smaller than in the other cases where reweighting applies. The augmenting strategy cannot be employed because the normal $q(x)$ decays to zero faster than the lognormal $p(x)$ in the upper tail, again driving $\min_x q(x)/p(x)$ to zero. Moreover, the number of remaining samples in the filtering strategy, $N_f = 282$, is too small for Monte Carlo simulation as illustrated in Figure 13b. The augmenting and filtering strategy is effective for this problem, requiring an additional $N_a = 1875$ samples.
Best Option: Mixed Augmenting and Filtering
Case 6: Semi-infinite to bounded support – With support conditions analogous to Cases 3 and 4 ($\mathcal{X}_q \not\subseteq \mathcal{X}_p$), importance sampling reweighting and filtering cannot be used here. Similarly, the augmenting strategy cannot be employed because $\mathcal{X}_p \not\subseteq \mathcal{X}_q$. Therefore, the mixed augmenting and filtering strategy is the only option, and it is quite efficient and robust, requiring only $N_a = 931$ added samples as illustrated in Figure 14 and Table 2.
Best Option: Mixed Augmenting and Filtering
These six cases illustrate the performance of each method for differing changes in distribution support. From these results, it is clear that the mixed augmenting and filtering approach is the most robust as it can be applied under any support condition. The purely augmenting and purely filtering approaches, when applicable, are very inefficient and are therefore not recommended under any conditions. Importance sampling reweighting, although not universally applicable, may be advantageous when the change in distribution is small.
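As a concrete illustration of the reweighting diagnostic underlying these comparisons, the following sketch computes normalized importance weights and the effective sample size for a measure change between two hypothetical lognormal distributions (chosen here for illustration; they share the same semi-infinite support, so reweighting is applicable):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 10_000

# Hypothetical original (p) and updated (q) distributions with common support.
p = stats.lognorm(s=0.25, scale=1.0)
q = stats.lognorm(s=0.30, scale=1.1)

x = p.rvs(size=N, random_state=rng)   # existing Monte Carlo samples ~ p
w = q.pdf(x) / p.pdf(x)               # importance weights q(x)/p(x)
w /= w.sum()                          # normalize so the weights sum to one

# Effective sample size for normalized weights: N_eff = 1 / sum(w_i^2).
# N_eff near N indicates a modest measure change; N_eff << N signals that
# reweighting alone yields poor Monte Carlo confidence.
n_eff = 1.0 / np.sum(w**2)

# Reweighted estimate of the updated mean E_q[X] using only the old samples.
mean_q = np.sum(w * x)
```

When the support of the updated distribution exceeds that of the original, the weights `q.pdf(x) / p.pdf(x)` cannot cover the new region at all, which is why the reweighting and filtering strategies fail in Cases 3–6.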
5 Application to Bayesian updating of plate buckling strength
Uncertainty in the material and geometric properties of ship structural components can significantly impact the performance, reliability, and safety of the structural system ClassNK . In this work, we study the impact of uncertainty in material properties on the buckling strength of a simply supported rectangular plate under uniaxial compression. An analytical formulation for the normalized buckling strength of a pristine plate was first proposed by Faulkner faulkner1973
\phi = \frac{\sigma_u}{\sigma_y} = \frac{2}{\beta} - \frac{1}{\beta^2} \qquad (32)
where $\sigma_u$ is the ultimate stress at failure, $\sigma_y$ is the yield stress, and $\beta$ is the slenderness of the plate with width $b$, thickness $t$, and elastic modulus $E$, given by
\beta = \frac{b}{t}\sqrt{\frac{\sigma_y}{E}} \qquad (33)
Eq. (32) was further modified by Carlsen carlsen1977 to study the effect of residual stresses and non-dimensional initial deflections associated with welding
\phi = \left(\frac{2.1}{\beta} - \frac{0.9}{\beta^2}\right)\left(1 - \frac{0.75\,\delta_0}{\beta}\right)\left(1 - \frac{2\eta t}{b}\right) \qquad (34)
where $\eta t$ is the width of the zone of tension residual stress.
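The strength model can be sketched directly in code. This follows the standard Faulkner/Carlsen formulas from the literature, evaluated at the nominal design values of Table 3; inch and ksi units are assumed:

```python
import math

def slenderness(b, t, sigma_y, E):
    """Plate slenderness beta = (b/t) * sqrt(sigma_y / E)."""
    return (b / t) * math.sqrt(sigma_y / E)

def phi_faulkner(beta):
    """Faulkner (1973) normalized strength sigma_u/sigma_y for beta > 1."""
    return 2.0 / beta - 1.0 / beta**2

def phi_carlsen(b, t, sigma_y, E, delta0, eta):
    """Carlsen (1977) strength including welding residual stress (tension-zone
    width eta*t) and non-dimensional initial deflection delta0."""
    beta = slenderness(b, t, sigma_y, E)
    return ((2.1 / beta - 0.9 / beta**2)
            * (1.0 - 0.75 * delta0 / beta)
            * (1.0 - 2.0 * eta * t / b))

# Nominal design values from Table 3 (inches and ksi assumed).
phi_nominal = phi_carlsen(b=36, t=0.75, sigma_y=34, E=29000, delta0=0.35, eta=5.25)
```

At the nominal values the slenderness is roughly $\beta \approx 1.64$, giving a normalized design strength of about 0.62; propagating uncertainty in $\sigma_y$ through `phi_carlsen` is the Monte Carlo task that must be updated as the yield stress distribution is revised.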
The design buckling strength is based on nominal values for the six variables in Eq. (34), provided in Table 3. However, the actual values of these variables often differ from the design values due to uncertainties in the material properties and the “as built” geometry, yielding uncertainty in the buckling strength. We are therefore interested in investigating the effect of all six uncertain variables shown in Table 3, but in this work we focus only on the influence of uncertainty in the yield strength, both because it is the most sensitive variable identified by global sensitivity analysis (see Table 3) and for clarity of demonstration.
Variable | Physical Meaning | Nominal Value | Mean | COV | Global Sensitivity
$b$ | width (in) | 36 | 0.992*36 | 0.028 | 0.017
$t$ | thickness (in) | 0.75 | 1.05*0.75 | 0.044 | 0.045
$\sigma_y$ | yield strength (ksi) | 34 | 1.023*34 | 0.116 | 0.482
$E$ | Young’s modulus (ksi) | 29000 | 0.987*29000 | 0.076 | 0.194
$\delta_0$ | initial deflection | 0.35 | 1.0*0.35 | 0.05 | 0.043
$\eta$ | residual stress | 5.25 | 1.0*5.25 | 0.07 | 0.233
The material is considered to be ABS-B type marine grade steel with properties described through historical US Navy testing programs atua1996 ; kufman1990 ; gabriel1962 . A total of 79 yield stress data points are available from the historical reports; these are shown in the histogram in Figure 15 and described statistically in Table 4 (replicated from hess2002 ).
Steel type | Min (ksi) | Max (ksi) | Mean (ksi) | COV | Number of tests | Data Sources
ABS-B | 27.6 | 46.8 | 34.782 | 0.116 | 79 | atua1996 ; kufman1990 ; gabriel1962
For the purposes of this study, we consider that these 79 data points are being actively collected and that, at specific intervals of the data collection, we will perform Bayesian inference on the data to identify a distribution for Monte Carlo simulation of the plate buckling strength. In the Bayesian inference, we consider the following seven parametric probability distributions: Normal, Lognormal, Gamma, Logistic, Weibull, Loglogistic, and Nakagami. Both the distribution form and the distribution parameters are selected using the MAP selection criterion described in Section 2.
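The model-selection step can be sketched as follows. Assuming a flat prior over the candidate families, MAP model selection reduces to comparing maximized (log-)posterior scores; this sketch substitutes SciPy's maximum-likelihood `fit` for the MAP parameter estimates of Section 2, uses a subset of the seven families, and runs on synthetic data standing in for the 79 historical yield stress values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for the 79 yield stress data (ksi): mean ~34.8, COV ~0.115.
data = rng.gamma(shape=75.0, scale=34.8 / 75.0, size=79)

# Candidate parametric families (a subset of the seven in the paper).
# For the positive-support families we fix loc = 0 to stabilize the fit.
candidates = [
    ("Normal",    stats.norm,     {}),
    ("Lognormal", stats.lognorm,  {"floc": 0.0}),
    ("Gamma",     stats.gamma,    {"floc": 0.0}),
    ("Logistic",  stats.logistic, {}),
]

# With a flat model prior, the MAP model is the one with the highest
# maximized log-likelihood at its fitted parameters.
scores = {}
for name, dist, kw in candidates:
    params = dist.fit(data, **kw)
    scores[name] = float(np.sum(dist.logpdf(data, *params)))

best = max(scores, key=scores.get)
```

The paper's criterion additionally accounts for the parameter prior; the comparison structure, however, is the same: fit each family, score it, and select the highest-scoring form.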
We start by drawing 10 samples randomly from the 79 yield stress data and consider this our initial test program. Bayesian inference on these data suggests an initial Gamma distribution fit, , as illustrated in Figure 16. We then consider four subsequent extensions of the testing program having 20, 35, 55, and 79 test data. For each extension of the testing program, a new “updated” distribution (labeled ) is identified from the data using Bayesian inference, as shown in Figure 17 and detailed in Table 5.
With each change in distribution, the Monte Carlo sample set (originally drawn from the Gamma distribution, ) must be modified to match the new distribution. For each update, from , we start by evaluating the effective sample size for importance sampling reweighting. If the effective sample size exceeds 90% of the total number of samples (i.e. importance sampling effectively retains more than 90% of the samples), then importance sampling reweighting is used. Otherwise, the mixed augmenting and filtering strategy is employed to retain a set of 10,000 samples that matches the updated distribution.
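This decision logic can be sketched as follows. The 90% effective-sample-size threshold follows the text; the augment-and-filter branch shown here is a simple rejection-plus-top-up scheme standing in for the paper's mixed strategy, and the two Gamma fits are purely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N = 10_000

# Hypothetical original and updated fits (parameters are illustrative only).
p = stats.gamma(a=60, scale=34.0 / 60)    # original Gamma fit, mean ~34
q = stats.gamma(a=150, scale=36.0 / 150)  # updated Gamma fit, mean ~36

x = p.rvs(size=N, random_state=rng)       # existing Monte Carlo sample set

w_raw = q.pdf(x) / p.pdf(x)               # unnormalized importance weights
w = w_raw / w_raw.sum()
n_eff = 1.0 / np.sum(w**2)                # effective sample size

if n_eff > 0.9 * N:
    # Modest change: keep every sample and simply attach importance weights.
    samples, weights = x, w
else:
    # One possible augment-and-filter scheme (a sketch, not necessarily the
    # paper's exact algorithm): thin the old set by rejection against the
    # empirical weight bound, then top back up to N with fresh draws from q.
    keep = rng.random(N) < w_raw / w_raw.max()
    kept = x[keep]
    fresh = q.rvs(size=N - kept.size, random_state=rng)
    samples = np.concatenate([kept, fresh])
    weights = np.full(N, 1.0 / N)
```

Either branch leaves a set of 10,000 samples (weighted or unweighted) representing the updated distribution, so the buckling-strength Monte Carlo study continues without discarding the prior simulation effort.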
The results of the sample updating are shown in Table 5 where, in the first three updates (, , and ), the distribution changes form and samples must be added/filtered in each case. In the final extension, the best-fit distribution does not change and importance sampling reweighting can be used with .