1 Introduction
Uncertainty quantification (UQ) is the science of quantitatively characterizing and, where possible, reducing uncertainties in the computational evaluation of engineering and mathematical systems. It is used to determine how likely certain outcomes are when some aspects of the system are not exactly known. Practically speaking, UQ plays an increasingly important role in performance prediction, reliability analysis, risk evaluation, and decision making. Uncertainty is broadly classified into two categories DerKiureghian2009 : epistemic, resulting from a lack of complete knowledge and modeling approximations, and aleatory, resulting from inherent randomness. It is widely accepted that probability theory provides an appropriate framework for the treatment of aleatory uncertainty, although it remains open to debate which mathematical treatment is most appropriate for epistemic uncertainty.
It is often considered preferable to view all uncertainty probabilistically, given the well-understood and intuitive nature of probability theory. This desire has given rise to the field of so-called imprecise probabilities, wherein aleatory uncertainties are modeled using standard probability theory and epistemic uncertainties provide a level of "imprecision." Despite efforts to develop a unified theory of imprecise probability Walley1991 ; Walley2000 , there remain numerous approaches to model this imprecision that include the use of random sets Molchanov2005 ; Fetz2004 ; fetz2016 , intervals and probability boxes Moore1979 ; Weichselberger2000 ; ferson2002 ; ferson2004 ; schobi2017 , Bayesian raftery1995 ; sankararaman2013 and frequentist walley1982 ; cattaneo2017 methods, and combinations of these theories dubois1991 ; dubois2005 , among many others (e.g. Dubois2012 ). Additionally, Dempster-Shafer theory Dempster1967 ; Shafer1976 and fuzzy set theory Zadeh1965 aim to relax the constraints on probability measures to account for this imprecision. For the interested reader, an extensive review of many of these approaches for engineering applications can be found in beer2013imprecise .
In this work, we apply a multimodel Bayesian probabilistic approach, extended from zhang2018 , to quantify and propagate combined aleatory and epistemic uncertainty. Specifically, epistemic uncertainties manifest as uncertainties in the form of the probability models characterizing a dataset (referred to as model-form or structural uncertainties draper1995 ) and uncertainties in the parameters of the probability models (referred to as parameter uncertainties). Both model-form and parameter uncertainties are quantified from the given data using Bayesian inference. The result of this multimodel approach is a set of candidate probability models (each with an associated probability) and the joint parameter probability densities for each model, providing a nearly comprehensive description of the uncertainties in the system rooted in probabilities (or, more precisely, probabilities of probabilities).
The imprecise probabilities quantified using the proposed approach are propagated through a model of a physical system using a Monte Carlo approach with importance sampling reweighting for simultaneous propagation of the full set of probability models zhang2018 . This procedure is critical in reducing the computational expense of Monte Carlo-based imprecise probability propagation because it reduces a multi-loop Monte Carlo to a single-loop Monte Carlo.
Given that the approach employed herein is fully Bayesian in its construction and the datasets used for inference are necessarily small, prior probabilities are likely to influence the quantified uncertainties in important ways. The primary objective of this work is to improve our understanding of the influence of prior probabilities for both model-form and parameter uncertainties on the quantified and, ultimately, the propagated uncertainties. We employ informative and noninformative priors for both model-form and parameter uncertainties under realistic data availability constraints. More specifically, priors are formulated based either on assumed ignorance (noninformative) or on historical data available in the literature (informative) that may or may not be entirely appropriate for the present analysis; in other words, the informative prior may be incorrect in seemingly subtle ways. Combining these priors with the reality of small datasets for Bayesian inference, an example of a simple plate buckling problem with uncertainty in material properties is used to illustrate how prior probabilities come to have a strong influence (good or bad) on multimodel uncertainty quantification and propagation from small datasets. Moreover, the effect of the prior is studied to observe, for this application, the rate of convergence of imprecise probabilities to the "true" probabilities with increasing dataset size. Even in the large-data case, it is shown that seemingly rational (but ultimately incorrect) priors can create biases that preclude convergence to the true probabilities.
The paper is structured as follows. Sections 2 and 3 provide the basic theory for multimodel Bayesian uncertainty quantification for model-form uncertainties (Section 2) and parameter uncertainties (Section 3). Section 4 reviews a Monte Carlo-based method for multimodel uncertainty propagation proposed in a recent work by the authors zhang2018 , with some minor modifications. Section 5 discusses the formulation of model and parameter prior probabilities. The influence of these prior probabilities on multimodel uncertainty quantification and propagation is then studied in the context of a simplified plate buckling problem in Section 6, where priors are considered to be either noninformative or rooted in historical data from the literature. Several such priors are considered for both model-form and parameter uncertainties, and their influence is studied systematically. Finally, some concluding remarks are provided in Section 7.
2 Bayesian multimodel inference and model-form uncertainty
Among the most important problems in computational science and engineering are the quantification of model-form uncertainty and its use for model selection, with widespread recognition of these challenges dating back more than 30 years draper1987 ; dijkstra1988 . Probabilistic model selection has taken considerable strides forward with the development of approaches based on Bayes' theory and information theory. Bayesian approaches involving the notion of posterior model probabilities follow the work of Raftery raftery1995 and have been revived in the more recent work of Beck beck2004 ; beck2010 ; cheung2010 and Oden farrell2015 ; oden2016 ; prudencio2015 . Meanwhile, the information-theoretic approach is derived from the work of Akaike akaike1974new ; akaike1976 and its further generalizations schwarz1978 ; hurvich1989regression ; konishi2008 .

The issue of model selection is fundamental to the quantification of input uncertainties for physics-based calculations given limited data. In such cases, multiple probability models may reasonably fit the data. Generally, both the Bayesian and information-theoretic model selection methods are employed to select a single "best" model based on the given data and a set of candidate models. That model is then the sole model used for uncertainty propagation, without any further consideration for the assessment and propagation of model-form uncertainty draper1987 ; draper1995 . This approach is known to potentially misrepresent the true uncertainties in statistical quantities of interest (typically underestimating them) draper1995 , while also implicitly asserting that a "true" model exists, which is counter to the definition of a model Chatfield1995 . In certain cases, model averaging is performed (see hoeting1999bayesian and the associated commentary) and, while this may be preferable, it still ignores much of the true model-form uncertainty by propagating only averaged quantities rather than their full probability structure. Moreover, such selection generally requires very large datasets. Given the scarcity of data, it is often impossible to identify a unique best model, so we need to quantify model-form uncertainty and retain multiple candidate models and their associated probabilities, an approach referred to as multimodel inference burnham2004multimodel .
The previous work of the authors zhang2018 presented an information-theoretic approach for multimodel inference to quantify and propagate these model-form uncertainties. The present work generalizes that approach in a fully Bayesian framework.
2.1 Bayesian multimodel inference
In this work, we consider the specific case of probability model selection from sparse data. In all subsequent discussion, a model M refers to a parametric probability model for a random variable X (typically expressed in the form of a probability density function p(x | θ)) having parameters θ. Given a collection of candidate models \mathcal{M} = {M_1, …, M_N} with model parameters θ_i, i = 1, …, N, our objective in this section is to assess the "goodness-of-fit" of each model given a dataset d = {x_1, …, x_n} of independent observations of X and, ultimately, to infer the probability that each model is the "best" model for the data.

In the Bayesian setting, selection between two models M_i and M_j given data d is often performed by estimating the ratio of posterior odds as
(1)   \frac{p(M_i \mid d)}{p(M_j \mid d)} = B_{ij} \, \frac{\pi(M_i)}{\pi(M_j)}
where the Bayes factor B_{ij} = m(d \mid M_i) / m(d \mid M_j) is defined as the ratio of the evidences of models M_i and M_j, and the prior odds π(M_i)/π(M_j) is the ratio of the model priors of M_i and M_j. If the posterior odds are greater than one, model M_i is selected; if the posterior odds are less than one, model M_j is selected.

Intuitively, the Bayes factor can be generalized for the comparison of multiple candidate models. Consider the aforementioned collection of parametric models \mathcal{M} = {M_1, …, M_N}, with each model M_i having an associated prior probability π_i = π(M_i), with \sum_{i=1}^{N} π_i = 1. Bayes' rule relates posterior model probabilities to prior model probabilities via the formula
(2)   \hat{\pi}_i = p(M_i \mid d) = \frac{m(d \mid M_i)\, \pi_i}{\sum_{j=1}^{N} m(d \mid M_j)\, \pi_j}
having \sum_{i=1}^{N} \hat{\pi}_i = 1, and where
(3)   m(d \mid M_i) = \int p(d \mid \theta_i, M_i) \, p(\theta_i \mid M_i) \, d\theta_i
is the marginal likelihood, or evidence, of model M_i. Typically, the model with the highest probability is deemed the most plausible in the set for the given data d. In the multimodel inference context, rather than selecting the model with the highest probability, the models are ranked according to their probabilities given by Eq. (2), and all models with non-negligible probability are retained.
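As a concrete illustration, Eq. (2) amounts to a weighted normalization of the model evidences. A minimal sketch in Python (the evidence values below are hypothetical placeholders, not results from the paper):

```python
import numpy as np

def posterior_model_probs(evidences, priors):
    """Posterior model probabilities via Bayes' rule, Eq. (2):
    pi_hat_i = m(d|M_i) * pi_i / sum_j m(d|M_j) * pi_j."""
    w = np.asarray(evidences, dtype=float) * np.asarray(priors, dtype=float)
    return w / w.sum()

# Three hypothetical candidate models with uniform model priors
probs = posterior_model_probs([2.0e-4, 1.0e-4, 1.0e-4], [1/3, 1/3, 1/3])
```

Because the priors are uniform here, the posterior probabilities reduce to the normalized evidences, i.e. [0.5, 0.25, 0.25] for these values.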
In Bayesian parameter estimation, the evidence is simply a normalization constant that does not need to be evaluated explicitly when the posterior is sampled using Markov chain Monte Carlo (MCMC). However, as is evident from Eq. (2), the evidence is critical in Bayesian multimodel inference and consequently needs to be calculated with caution. The following section discusses evidence calculation.

2.2 Bayesian evidence calculation
The evidence in Eq. (3) can be computed in a number of different ways. In rare cases, the integral can be evaluated analytically. Usually, approximate or statistically exact (i.e. Monte Carlo) methods are necessary.
One efficient approximation applies Laplace's method konishi2008 to approximate the evidence as
(4)   m(d \mid M_i) \approx (2\pi)^{k_i/2} \, |\Sigma_i|^{1/2} \, p(d \mid \hat{\theta}_i, M_i) \, p(\hat{\theta}_i \mid M_i)
Taking the logarithm of this expression and multiplying by -2, we obtain
(5)   -2 \log m(d \mid M_i) \approx -2 \log p(d \mid \hat{\theta}_i, M_i) - k_i \log 2\pi - \log |\Sigma_i| - 2 \log p(\hat{\theta}_i \mid M_i)
where k_i is the dimension of the parameter vector θ_i, \hat{\theta}_i is the maximum likelihood estimate, and Σ_i is the inverse Hessian of the negative log-likelihood at \hat{\theta}_i (the inverse of the observed Fisher information matrix). Noting that Σ_i scales as n^{-1}, so that -\log|\Sigma_i| = k_i \log n + O(1), and ignoring the terms in Eq. (5) that remain bounded with respect to the large sample size n yields the Bayesian Information Criterion (BIC) schwarz1978

(6)   \mathrm{BIC}_i = -2 \log p(d \mid \hat{\theta}_i, M_i) + k_i \log n
where n is the dataset size. This quantity can be used to construct an asymptotic approximation to the Bayes factor, namely B_{ij} \approx \exp[-(\mathrm{BIC}_i - \mathrm{BIC}_j)/2] raftery1995 . Combined with the model priors π_i, the posterior model probabilities from Eq. (2) can be expressed as
(7)   \hat{\pi}_i \approx \frac{\exp(-\Delta\mathrm{BIC}_i/2)\, \pi_i}{\sum_{j=1}^{N} \exp(-\Delta\mathrm{BIC}_j/2)\, \pi_j}
where \Delta\mathrm{BIC}_i = \mathrm{BIC}_i - \min_j \mathrm{BIC}_j. Assigning uniform prior model probabilities π_i = 1/N to the set yields what are referred to as the BIC model weights. In fact, Eq. (7) can be considered to define generalized BIC model weights for arbitrary prior model probabilities. Notice also that Eq. (6) may be thought of as an implicit approximation to the evidence under a noninformative parameter prior (the Jeffreys parameter prior), even though it does not explicitly depend on a parameter prior.
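The generalized BIC model weights of Eqs. (6)-(7) can be sketched as follows; the maximized log-likelihoods and parameter counts are hypothetical, and subtracting the minimum BIC before exponentiating is a standard numerical-stability step that cancels in the normalization:

```python
import numpy as np

def bic(max_log_like, k, n):
    # BIC = -2 log L(theta_hat) + k log n, Eq. (6)
    return -2.0 * max_log_like + k * np.log(n)

def generalized_bic_weights(bics, priors):
    """Generalized BIC model weights, Eq. (7)."""
    bics = np.asarray(bics, dtype=float)
    delta = bics - bics.min()                  # subtract min for stability
    w = np.exp(-0.5 * delta) * np.asarray(priors, dtype=float)
    return w / w.sum()

# Hypothetical values: three candidate models fit to n = 25 observations
bics = [bic(-52.1, 2, 25), bic(-53.4, 2, 25), bic(-58.0, 3, 25)]
weights = generalized_bic_weights(bics, priors=[1/3, 1/3, 1/3])  # BIC weights
```

With uniform priors, as here, these are exactly the BIC model weights; any other prior vector simply re-weights the same exponential terms.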
The information-theoretic approach to multimodel selection (introduced in burnham2004multimodel and employed in the authors' previous work zhang2018 ) can be shown to be a special case of the Bayesian evidence-based multimodel selection used herein. Akaike akaike1974new showed that the maximized log-likelihood is a biased estimator of the expected Kullback-Leibler (KL) information and that the bias is approximately equal to the number of model parameters k_i. Hence, the Akaike Information Criterion (AIC) is defined
(8)   \mathrm{AIC}_i = -2 \log p(d \mid \hat{\theta}_i, M_i) + 2 k_i
as an approximation of the KL information. By rescaling the AIC as
(9)   \Delta_i = \mathrm{AIC}_i - \mathrm{AIC}_{\min}
the marginal likelihood of model M_i given the data can be expressed as m(d \mid M_i) \propto \exp(-\Delta_i/2) akaike1981 . By normalizing these likelihoods to sum to one, they are treated as model probabilities (as in Eq. (7)) with
(10)   \hat{\pi}_i = \frac{\exp(-\Delta_i/2)}{\sum_{j=1}^{N} \exp(-\Delta_j/2)}
As shown by burnham2004multimodel , Eq. (10) is in fact a special case of Eq. (7) in which the prior model probabilities take the form
(11)   \pi_i = \frac{\exp\left(\tfrac{1}{2} k_i \log n - k_i\right)}{\sum_{j=1}^{N} \exp\left(\tfrac{1}{2} k_j \log n - k_j\right)}
These priors are referred to as "savvy" (shrewdly informed) priors because they depend on the dataset size n and the number of model parameters k_i.
The BIC- and AIC-based results are important because they directly illustrate the influence of priors in the asymptotic case. While the model probabilities in Eq. (2) are general, they can be approximated (in large-data cases) by Eq. (7), and the AIC-derived model probabilities are an instance of this approximation under a particular choice of prior information.
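This equivalence can be checked numerically: plugging savvy priors of the form in Eq. (11) into the generalized BIC weights of Eq. (7) reproduces the AIC weights of Eq. (10) exactly, because the (k_i/2) log n terms cancel. A sketch with hypothetical log-likelihoods:

```python
import numpy as np

log_L = np.array([-52.1, -53.4, -58.0])  # maximized log-likelihoods (hypothetical)
k = np.array([2.0, 2.0, 3.0])            # numbers of model parameters
n = 25                                   # dataset size

bic = -2.0 * log_L + k * np.log(n)       # Eq. (6)
aic = -2.0 * log_L + 2.0 * k             # Eq. (8)

# Savvy priors, Eq. (11)
savvy = np.exp(0.5 * k * np.log(n) - k)
savvy /= savvy.sum()

# Generalized BIC weights (Eq. (7)) under the savvy priors
w_bic = np.exp(-0.5 * (bic - bic.min())) * savvy
w_bic /= w_bic.sum()

# AIC weights, Eq. (10)
w_aic = np.exp(-0.5 * (aic - aic.min()))
w_aic /= w_aic.sum()
```

Algebraically, exp(-BIC_i/2) times the savvy prior is proportional to exp(log L_i - k_i) = exp(-AIC_i/2), so the two weight vectors agree to machine precision.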
Because both the BIC and AIC are asymptotic quantities that require large dataset sizes, they are of limited practical use here. Although a small-data correction of the AIC, denoted AICc, has been derived hurvich1989regression ; hurvich1995model and used in our previous work zhang2018 , it again implies a certain prior form, and our objective here is to investigate the effect of the prior. Consequently, we must rely on estimators for Eq. (3) that impose neither asymptotic conditions nor an assumed prior form. We favor the Monte Carlo-based statistical estimator given by
(12)   m(d \mid M_i) \approx \frac{1}{N_s} \sum_{s=1}^{N_s} p(d \mid \theta_i^{(s)}, M_i), \qquad \theta_i^{(s)} \sim p(\theta_i \mid M_i)
in which the samples θ_i^{(s)} are drawn from the parameter prior distribution and N_s is the number of samples. The computational cost of this Monte Carlo-based algorithm is moderate for the probability models used in this paper, and its efficiency can be improved with parallel computing. For complex or high-dimensional model evidence calculations, MCMC-based algorithms, including that of Chib and Jeliazkov chib2001 and nested sampling skilling2004nested , may be preferable, as discussed in the recent review literature bos2002 ; friel2012 ; zhao2016 .
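A minimal sketch of the estimator in Eq. (12) for a toy problem (Gaussian likelihood with known unit variance and a uniform parameter prior on the mean; all numbers are illustrative). The log-mean-exp averaging is a numerical-stability choice, not part of the estimator itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_evidence(data, log_like, prior_sampler, n_samples=20000):
    """Monte Carlo evidence estimate, Eq. (12):
    m(d|M) ~= (1/N_s) sum_s p(d | theta^(s)), theta^(s) drawn from the prior."""
    thetas = prior_sampler(n_samples)
    log_p = np.array([log_like(data, th) for th in thetas])
    shift = log_p.max()                          # log-mean-exp for stability
    return np.exp(shift) * np.mean(np.exp(log_p - shift))

data = np.array([0.1, -0.3, 0.4, 0.2])

def log_like(d, mu):                             # N(mu, 1) likelihood
    return -0.5 * np.sum((d - mu) ** 2) - 0.5 * len(d) * np.log(2.0 * np.pi)

# Uniform prior on mu over [-5, 5]; the analytic evidence here is about 2.8e-3
evidence = mc_evidence(data, log_like, lambda n: rng.uniform(-5.0, 5.0, n))
```

Note that when the prior is diffuse relative to the likelihood, as here, most prior samples contribute almost nothing, which is exactly why MCMC-based evidence estimators become preferable in higher dimensions.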
3 Bayesian model parameter estimation
The multimodel inference process discussed in the previous section identifies a set of candidate model forms and their associated probabilities. For each of these models there are, of course, additional uncertainties associated with the model parameters. These uncertainties are quantified using classical Bayesian inference, applying Bayes' rule as
(13)   p(\theta_i \mid d, M_i) = \frac{p(d \mid \theta_i, M_i)\, p(\theta_i \mid M_i)}{m(d \mid M_i)}
where, again, p(d \mid \theta_i, M_i) is the likelihood function and p(\theta_i \mid M_i) is the prior probability density. The evidence m(d \mid M_i) (Eq. (3)) serves only as a normalizing constant in this case. Therefore, unlike in the model selection process, it does not need to be evaluated explicitly, as the posterior can be estimated from samples using MCMC.
The simplest and most commonly used MCMC algorithms are Metropolis-Hastings (MH) hastings1970 and Gibbs sampling geman1984 . In this work, we use an MH-based MCMC algorithm, the affine-invariant ensemble sampler proposed by Goodman and Weare goodman2010ensemble and implemented in the emcee software package foreman2013emcee . The main advantage of this algorithm is that it leverages an ensemble of chains to adapt the proposal density through an implicit affine transformation. This greatly improves efficiency for anisotropic and degenerate densities (increasing the acceptance rate while maintaining sample quality) and significantly reduces the correlation length of the Markov chains, yielding independent samples more rapidly. An added benefit is that the method is largely "self-tuning," requiring only one or two tuning parameters compared with the many tuning parameters of most MH-based algorithms. For brevity, the reader is referred to goodman2010ensemble ; foreman2013emcee for algorithm details.
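For readers who want the flavor of the algorithm without the package, the core "stretch move" of the Goodman and Weare sampler can be sketched in a few lines. This is a toy serial variant for illustration only; emcee's production implementation (with ensemble splitting for parallelism) should be used in practice:

```python
import numpy as np

def stretch_sampler(log_prob, x0, n_steps, a=2.0, seed=0):
    """Toy affine-invariant ensemble sampler using the Goodman-Weare
    stretch move. x0 has shape (n_walkers, dim)."""
    rng = np.random.default_rng(seed)
    walkers = np.array(x0, dtype=float)
    n_walkers, dim = walkers.shape
    logp = np.array([log_prob(w) for w in walkers])
    chain = np.empty((n_steps, n_walkers, dim))
    for t in range(n_steps):
        for j in range(n_walkers):
            k = (j + rng.integers(1, n_walkers)) % n_walkers  # any other walker
            z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a     # z ~ g(z) prop. 1/sqrt(z)
            y = walkers[k] + z * (walkers[j] - walkers[k])    # stretch proposal
            logp_y = log_prob(y)
            # acceptance probability min(1, z^(dim-1) p(y)/p(x_j))
            if np.log(rng.random()) < (dim - 1) * np.log(z) + logp_y - logp[j]:
                walkers[j], logp[j] = y, logp_y
        chain[t] = walkers
    return chain

# Smoke test: sample a 2-d standard normal
x0 = np.random.default_rng(1).normal(size=(20, 2))
chain = stretch_sampler(lambda x: -0.5 * float(x @ x), x0, n_steps=600)
samples = chain[200:].reshape(-1, 2)   # discard burn-in
```

The only tuning parameter is the stretch scale a (emcee's default is also 2), which is the "self-tuning" property noted above.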
Although the parameter estimation performed here is conventional, an important distinction of the multimodel parameter estimation process used herein is that we retain, and propagate, the full joint parameter density. Conventional methods typically select a single maximum likelihood parameter value and retain only the corresponding distribution for uncertainty propagation, which has the effect of ignoring parametric uncertainty. In combination with the multimodel selection, the result of our UQ process is therefore a set of probability models M_i (with associated probabilities \hat{\pi}_i), each of which has an associated joint parameter pdf p(\theta_i \mid d, M_i). A method for propagating this complete uncertainty description has been previously proposed by the authors zhang2018 and is reviewed in the following section.
4 Efficient multimodel uncertainty propagation
Given a set of probability models M_i with posterior model probabilities \hat{\pi}_i and joint posterior parameter densities p(\theta_i \mid d, M_i), uncertainties associated with the random variable X are propagated using a Monte Carlo approach that employs importance sampling reweighting to reduce a nested Monte Carlo analysis to a single Monte Carlo analysis. More specifically, consider that X is now the input to some stochastic system Y = g(X). Given uncertainties in the form of the probability model of X (represented by the posterior model probabilities \hat{\pi}_i) and uncertainty in the parameters of each candidate model (described by the joint posterior parameter densities p(\theta_i \mid d, M_i)), we aim to quantify uncertainties in the response quantity Y.
Conventional approaches to solving this type of problem involving uncertain probability distributions require nested Monte Carlo simulations in which the probability model space is sampled first. That is, N_p probability models are sampled from the multimodel set according to their associated model probability masses \hat{\pi}_i. For each model-form sample, the parameter vector is randomly sampled from the joint parameter pdf to obtain a sample pdf p_k(x) (i.e., a realization of a specific model form and its associated parameters). The set of N_p sample pdfs serves as a finite-dimensional approximation of the total uncertainty (both model-form and parametric) that can be propagated by Monte Carlo simulation. This is achieved by drawing N samples from each of the N_p distributions and evaluating the model N × N_p times. Clearly this is a highly inefficient process and is intractable for problems of even moderate computational expense.

The method proposed in zhang2018 reduces this expense considerably, to a single-loop Monte Carlo, by employing importance sampling reweighting (also used in fetz2016 for the propagation of random sets). First, an optimal sampling density q(x) is identified by minimizing the expected mean square difference between the sampling density and the ensemble of probability models. This corresponds to solving the following optimization problem under an isoperimetric constraint
(14)   q^*(x) = \underset{q}{\arg\min} \; \mathbb{E}_{\theta}\left[ S(q) \right]
       subject to   \int q(x)\, dx = 1
where the action functional S(q) is the total squared difference:
(15)   S(q) = \sum_{k=1}^{N_p} \int \left( p_k(x \mid \theta_k) - q(x) \right)^2 dx
and \mathbb{E}_{\theta}[\cdot] denotes the expectation with respect to the posterior probability of the model parameters θ. The constraint ensures that q(x) is a valid pdf. It is shown that the optimization problem in Eq. (14) has a closed-form solution given by the mixture model zhang2018

(16)   q^*(x) = \frac{1}{N_p} \sum_{k=1}^{N_p} \mathbb{E}_{\theta_k}\left[ p_k(x \mid \theta_k) \right]
It is straightforward to show that this solution generalizes as

(17)   q^*(x) = \sum_{i=1}^{N} \hat{\pi}_i \, \mathbb{E}_{\theta_i}\left[ p(x \mid \theta_i, M_i) \right]

where \hat{\pi}_i is the posterior model probability of model M_i.
Samples x_s, s = 1, …, N, are drawn from the optimal sampling density q^*(x), and the response of the system is evaluated at each sample point. The statistical response of the system is then reweighted according to each of the sample pdfs p_k(x) using importance sampling as

(18)   \mathbb{E}_{p_k}[g(X)] \approx \frac{1}{N} \sum_{s=1}^{N} g(x_s) \, \frac{p_k(x_s)}{q^*(x_s)}

The result is the simultaneous propagation of all probability models in the sampled set.
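The single-loop procedure can be sketched as follows, with a handful of hypothetical normal "sample pdfs" standing in for the model-form/parameter realizations and a cheap function standing in for an expensive model g(X):

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

# Hypothetical sample pdfs p_k(x): model-form/parameter realizations
params = [(34.8, 4.0), (35.2, 3.6), (33.9, 4.4)]

def q_pdf(x):
    """Optimal sampling density: equal-weight mixture of the sample pdfs."""
    return np.mean([norm_pdf(x, m, s) for m, s in params], axis=0)

# Draw ONCE from the mixture: pick a component, then sample it
idx = rng.integers(len(params), size=50000)
mu = np.array([params[i][0] for i in idx])
sig = np.array([params[i][1] for i in idx])
x = rng.normal(mu, sig)

g = x ** 2        # stand-in for an expensive model evaluation g(X)

# Importance sampling reweighting: the SAME g values yield the mean of
# g(X) under every sample pdf simultaneously
means = [np.mean(g * norm_pdf(x, m, s) / q_pdf(x)) for m, s in params]
```

Each reweighted mean should recover E[X^2] = mu^2 + sigma^2 under the corresponding component, even though the (expensive) model was evaluated only once per sample rather than once per sample pdf.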
It is shown that this approach is easily updated to accommodate new information from Bayesian inference as additional data are collected. This will typically reduce the uncertainty associated with the model form and parameters, but comes at the cost of a loss of optimality in the importance sampling density. If the change in the optimal sampling density is considerable, the effect is an increase in the statistical variance, or potential instability, of the importance sampling estimate. This can be addressed by resampling from the new optimal sampling density, but doing so is computationally prohibitive. In a parallel work Jiaxin_Shields_corrected , a method is proposed to efficiently accommodate a measure change in Monte Carlo simulation that minimizes the impact on the sample set. In other words, it retains as many samples as possible from the original Monte Carlo set drawn from the density q(x) and adds a minimal number of samples from a "correction" density such that the combined set follows the desired new density while keeping the sample size constant. The method is utilized herein to maintain efficiency as data are added but, because it is not essential to the objective of this work (investigation of the effect of priors), the details are not included. The interested reader is referred to Jiaxin_Shields_corrected .

5 Formulating model and parameter priors
For a given set of models, the effectiveness of the Bayesian method depends firmly on the specification of the prior model probabilities π_i and the parameter priors p(\theta_i \mid M_i). Reasonable choices of prior distribution have only minor effects on posterior inference when the parameters are well identified and the dataset is large. However, when datasets are small and/or prior data are not entirely appropriate, the specification of prior probabilities becomes very important. In this section, we briefly review approaches for formulating noninformative and data-driven informative priors for multimodel inference.
5.1 Prior model probabilities
A popular and simple choice for the prior model probability π_i is the uniform prior
(19)   \pi_i = \frac{1}{N}, \qquad i = 1, \ldots, N
This prior is noninformative in the sense that it favors all models equally. Under this prior, the posterior model probability equals the ratio of the model evidence to the cumulative evidence,
(20)   \hat{\pi}_i = \frac{m(d \mid M_i)}{\sum_{j=1}^{N} m(d \mid M_j)}
and, as mentioned, these probabilities asymptotically correspond to the BIC model weights. However, the apparent noninformativeness of Eq. (20) can be deceptive, since the prior is only uniform in probability and will typically not be uniform over the model characteristics. Hence, in settings where several models are very similar and only a few are different, Eq. (20) may bias the posterior model probability away from the accurate models chipman2001practical .
Burnham and Anderson burnham2004multimodel make a compelling case that model prior probabilities should depend on the dataset size n and the model complexity (i.e., the number of model parameters, k_i). In other words, small datasets should have priors that favor less complex (lower-dimensional) models to avoid overfitting. This is a major motivation for the use of the AIC as a model selection criterion, given that it implies the use of savvy priors as discussed in Section 2.2. However, this effect is expected to be of minimal importance here because all of the considered probability models have comparable complexity.
Finally, model prior probabilities in realworld applications are often selected according to subjective preference, which may result from historical data, the modeler’s experience, or solicited expert opinion. This is especially important as it will be shown that strong prior beliefs can greatly influence posterior model probabilities leading to very accurate (if the priors are correct) or inaccurate (if the priors are incorrect) assessments of uncertainty.
In this work, we consider each of these respective prior model probabilities and aim to understand their influence on uncertainty quantification and propagation.
5.2 Parameter prior probabilities
Prior probabilities for model parameters also play an important role in multimodel uncertainty quantification and propagation. Here, we broadly distinguish between so-called noninformative and informative priors and discuss how these priors can be constructed under conditions of ignorance (no prior information available), from previously existing (often historical) data, and under subjective assumptions.
5.2.1 Noninformative priors
One of the most common noninformative priors is the uniform prior, which is flat, diffuse, and often described as "vague." It is worth noting that a diffuse or vague prior need not be uniform, and sometimes a diffuse prior can be more informative than the uniform prior berger1989 ; gelman2006 ; tenorio2017 . The uniform prior can be expressed as
(21)   p(\theta \mid M) = \frac{1}{V}, \qquad \theta \in \Theta_0
where the range of θ, denoted Θ_0, is a subset of the parameter space (Θ_0 ⊆ Θ) and V is its volume. This indicates that there is no a priori reason to favor any particular parameter value; we only know its range Θ_0. Thus, the posterior distribution in Eq. (13) is proportional to the likelihood,
(22)   p(\theta \mid d, M) \propto p(d \mid \theta, M)
If the range is specified as the full parameter space Θ, a flat prior becomes improper if
(23)   \int_{\Theta} p(\theta \mid M) \, d\theta = \infty
In this case the normalizing constant does not exist. If an improper prior is employed, one must verify that the resulting posterior is proper.
Another commonly used noninformative prior is the Jeffreys prior jeffreys1946 , which is defined to be proportional to the square root of the determinant of the Fisher information matrix
(24)   p(\theta \mid M) \propto \sqrt{\det I(\theta)}
The Fisher information is given as:
(25)   I(\theta) = -\mathbb{E}\left[ \frac{\partial^2 \log p(d \mid \theta, M)}{\partial \theta \, \partial \theta^{\mathsf{T}}} \right]
For certain models, the Jeffreys prior cannot be normalized and is therefore improper.
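As a small illustration of Eqs. (24)-(25), the Fisher information can be estimated by Monte Carlo as the second moment of the score function. For an exponential model p(x | λ) = λ exp(-λx), the analytic result is I(λ) = 1/λ², so the Jeffreys prior is p(λ) ∝ 1/λ (improper). A sketch, with the model choice being purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_info_mc(lam, n=200000):
    """Monte Carlo estimate of the Fisher information, Eq. (25), using the
    identity I(lam) = E[score^2], where the score is
    d/dlam log p(X|lam) = 1/lam - X for the exponential model."""
    x = rng.exponential(1.0 / lam, size=n)
    score = 1.0 / lam - x
    return np.mean(score ** 2)

lam = 2.0
info = fisher_info_mc(lam)             # analytically 1/lam^2 = 0.25
jeffreys_unnormalized = np.sqrt(info)  # proportional to 1/lam, Eq. (24)
```

Repeating this over a grid of λ values traces out the 1/λ shape of the Jeffreys prior numerically.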
In this work, we employ proper uniform priors as representative noninformative priors. While this admittedly does not account for various nuances that may arise from assuming different noninformative priors, the intention here is to compare the effects of a suitably representative noninformative prior on multimodel uncertainty quantification and propagation against the effects of various informative priors.
5.2.2 Informative priors
At the other extreme, an informative prior is one that yields a posterior that is not dominated by the likelihood; instead, the informative prior has an essential impact on the posterior distribution. This is especially true for inference from small datasets. The appropriate use of informative priors illustrates the power of the Bayesian approach: information gathered from previous studies, past experience, or expert opinion can be combined with new data in a natural way. We can therefore interpret an informative prior as the state of our subjective prior knowledge. In practice, however, this prior specification may be biased, because subjective knowledge is often difficult to specify precisely and historical data, experiments, and experience may not be entirely appropriate for the current problem. One objective of this study is to understand the influence of such imprecise and/or incorrect informative priors on multimodel uncertainty quantification and propagation.
In this work, we formulate data-driven informative priors by exploiting historical data, denoted d', as may be available in the literature. We specifically avoid formulating priors based on assumptions or intuition. In this sense, the historical data represent the existing state of knowledge as objectively as possible. Yet, as previously mentioned, these data may not be entirely appropriate for the problem at hand and therefore may or may not provide "good" priors.
The data-driven prior is quantified by applying Bayes' rule to the historical data d'. The resulting posterior then becomes the prior for the analysis using the currently observed data d. This initial Bayesian inference starts from a suitable noninformative prior, termed the "pre-prior." Within this framework, the currently observed data d are effectively treated as an extension of the historical data d'. If the historical dataset is relatively large, the resulting prior is referred to as strongly informative and dominates the pre-prior. If the historical dataset is small, the resulting prior is referred to as weakly informative and retains some influence of the noninformative pre-prior.
The approach used in this work is summarized in three stages as follows:

Stage 1: Noninformative pre-prior. Noninformative pre-priors can be developed in a number of different ways. When the likelihood function is given, one can derive the noninformative prior from Jeffreys' rule or simply use a flat prior instead.

Stage 2: Pre-Bayesian inference. A pre-Bayesian inference step is employed to identify the posterior distribution based on the historical data d' combined with the given noninformative pre-prior and the specified model M_i,

(26)   p(\theta_i \mid d', M_i) = \frac{p(d' \mid \theta_i, M_i) \, p(\theta_i \mid M_i)}{m(d' \mid M_i)}

This posterior distribution is taken as the prior distribution for the currently observed data d.

Stage 3: Nonparametric estimate from posterior samples. Eq. (26) is typically solved using MCMC, so the data-driven prior is not available in closed form for Bayesian updating with the new data d. A nonparametric kernel density estimate is therefore used to approximate the unknown prior probability density function from the MCMC samples.
For a multivariate density function of the parameter vector θ with dimension k, the kernel density estimate from a sample set of size n_s given model M_i takes the product-kernel form scott2015multivariate

(27)   \hat{p}(\theta \mid M_i) = \frac{1}{n_s} \sum_{j=1}^{n_s} \prod_{l=1}^{k} \frac{1}{h_l} K\!\left( \frac{\theta_l - \theta_l^{(j)}}{h_l} \right)

where \theta_l^{(j)} is the j-th sample in the l-th dimension of θ given model M_i, and h_l is the corresponding bandwidth. K(\cdot) is a chosen Gaussian kernel given by

(28)   K(u) = \frac{1}{\sqrt{2\pi}} \, e^{-u^2/2}

The kernel bandwidths are determined by minimizing the asymptotic mean integrated square error (AMISE) silverman1986density such that, for the Gaussian kernel, the optimal bandwidth in dimension l is

(29)   h_l = \left( \frac{4}{(k+2)\, n_s} \right)^{1/(k+4)} \sigma_l

where \sigma_l is the standard deviation of the samples in dimension l. The kernel density estimate is then employed as the informative prior for Bayesian inference using the observed data d.
6 Application: plate buckling strength problem
Uncertainty in the material and geometric properties of ship structural components can significantly impact the performance, reliability, and safety of the structural system ClassNK . In this work, we apply the proposed methodology to quantify and propagate the uncertainty in material properties for the buckling strength of a simply supported rectangular plate under uniaxial compression. An analytical formulation for the normalized buckling strength of a pristine plate was first proposed by Faulkner faulkner1973
(30)   \phi = \frac{\sigma_u}{\sigma_y} = \begin{cases} \dfrac{2}{\lambda} - \dfrac{1}{\lambda^2}, & \lambda > 1 \\ 1, & \lambda \le 1 \end{cases}
where σ_u is the ultimate stress at failure, σ_y is the yield stress, and λ is the slenderness of the plate with width b, thickness t, and elastic modulus E, given by
(31)   \lambda = \frac{b}{t} \sqrt{\frac{\sigma_y}{E}}
Eq. (30) was further modified by Carlsen carlsen1977 to study the effect of residual stresses and non-dimensional initial deflections δ_0 associated with welding
(32)   \phi = \left( \frac{2.1}{\lambda} - \frac{0.9}{\lambda^2} \right) \left( 1 - \frac{0.75\, \delta_0}{\lambda} \right) \left( 1 - \frac{2 \eta t}{b} \right)
where η t is the width of the zone of tension residual stress.
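The strength relations can be sketched directly in code, evaluated at the nominal design values listed in Table 1 (lengths in inches, stresses in ksi). The precise grouping of terms follows our reading of faulkner1973 and carlsen1977 and should be checked against those references:

```python
import numpy as np

def slenderness(b, t, sigma_y, E):
    """Plate slenderness, Eq. (31): lambda = (b/t) sqrt(sigma_y / E)."""
    return (b / t) * np.sqrt(sigma_y / E)

def phi_faulkner(lam):
    """Faulkner's normalized strength of a pristine plate, Eq. (30)."""
    return np.where(lam > 1.0, 2.0 / lam - 1.0 / lam ** 2, 1.0)

def phi_carlsen(b, t, sigma_y, E, delta0, eta):
    """Carlsen's modification, Eq. (32), with initial-deflection and
    residual-stress knockdown factors."""
    lam = slenderness(b, t, sigma_y, E)
    return ((2.1 / lam - 0.9 / lam ** 2)
            * (1.0 - 0.75 * delta0 / lam)
            * (1.0 - 2.0 * eta * t / b))

# Nominal design values from Table 1
lam = slenderness(b=36.0, t=0.75, sigma_y=34.0, E=29000.0)
phi = phi_carlsen(b=36.0, t=0.75, sigma_y=34.0, E=29000.0,
                  delta0=0.35, eta=5.25)
```

In the uncertainty propagation, these functions would simply be evaluated at each Monte Carlo sample of the six input variables drawn from the density in Eq. (17).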
The design buckling strength is based on the nominal values for the six variables in Eq. (32) provided in Table 1. However, the actual values of these variables often differ from the design values due to uncertainties in the material properties and "as built" geometry, yielding uncertainty in the buckling strength. We are therefore interested in investigating the effect of the six uncertain variables shown in Table 1 on the buckling strength of simply supported mild steel plates. Emphasis is placed on assessing the influence of uncertainty in the yield strength, since it is the most sensitive variable identified by global sensitivity analysis (see Table 1) and for clarity of demonstration.
Variable | Physical meaning | Nominal value | Mean | COV | Global sensitivity
b | width (in) | 36 | 0.992 × 36 | 0.028 | 0.017
t | thickness (in) | 0.75 | 1.05 × 0.75 | 0.044 | 0.045
σ_y | yield strength (ksi) | 34 | 1.023 × 34 | 0.116 | 0.482
E | Young's modulus (ksi) | 29000 | 0.987 × 29000 | 0.076 | 0.194
δ_0 | initial deflection | 0.35 | 1.0 × 0.35 | 0.05 | 0.043
η | residual stress | 5.25 | 1.0 × 5.25 | 0.07 | 0.233
6.1 Description of historical data
The work of Hess et al. hess2002 presented a review of uncertainties in material and geometric properties of mild steel plates for shipbuilding applications. They conducted a statistical analysis of data compiled from tests and measurements sponsored by the Ship Structure Committee (SSC) mansour1984 ; atua1996 , as part of an effort to establish a database of marine steel properties, and from tests and measurements performed by the Naval Surface Warfare Center, Carderock Division (NSWCCD). These past sources of yield strength data are important because they provide a valuable source of prior information. However, it remains difficult to represent the uncertainties since the measured data are scarce; quantification of these uncertainties and variations is necessary to determine the probabilistic characteristics of the random variables. In this work, we make use of the historical experimental data to predict the probabilistic characteristics of the yield strength of mild steel. The sources of the material property data are a series of historical reports, SSC-352 kufman1990 , SSC-142 gabriel1962 , and SSC-145 boulger1962 , which include material from four classes of structural steels, summarized as follows:

ABSA: plates with thickness not exceeding 1/2 inch, and all shapes

ABSB: plates with thickness over 1/2 inch but not exceeding 1 inch

ABSC: plates with thickness over 1 inch

ASTMA7: historical conventional structural steel alloy, since replaced by ASTMA36
The three ABS steels are typical shipbuilding and marine steels; they vary somewhat in chemical composition but possess nominally the same design properties (most notably the design yield strength), while the ASTMA7 is a historical carbon steel with a design yield strength in a similar range. The statistical analysis of these data is reproduced from Hess et al. hess2002 in Table 2. These data are useful for our purposes because they are representative of the type of historical data (these tests date back to 1948) that may be available for assigning prior distributions in Bayesian inference, but they are not truly representative of what may be expected from modern materials. Thus, the statistical analysis of the four materials provided by hess2002 gives us different priors from which to initiate our investigation.
Table 2: Statistical analysis of historical yield strength data (ksi), reproduced from Hess et al. hess2002.

Steel type  Min   Max   Mean    COV    Distribution  # of tests  Comments
ABSA        31.9  39.6  36.091  0.059  Lognormal     33          -
ABSB        27.6  46.8  34.782  0.116  Lognormal     79          Informative and correct
ABSC        30.9  41.5  33.831  0.081  Lognormal     13          -
ASTMA7      28.6  49.4  38.197  0.108  Normal        58          Informative but incorrect
The application of interest here is a ship structural plate with thickness 0.75 inch (Table 1). It therefore belongs to the ABSB material class, and we assume that the “true” model for the ABSB material is that given in Table 2. Note that, in reality, this is not the true model for ABSB material, but for our purposes it provides a baseline from which we have an informative and correct prior. The ABSA and ABSC materials are similar to the “true” ABSB material and their datasets are smaller, so they are considered to provide weakly informative but technically incorrect priors. Finally, the ASTMA7 material is considerably different; given that it is a comparatively large dataset, we consider it to give an informative but incorrect prior. Note that, under practical conditions of limited data, an analyst may consider any one of these datasets to be “close enough” to define a prior for UQ (justifiably or not). Our objective is to study the influence of using these different priors in the context of multimodel Bayesian UQ.
Figure 1 shows histograms of the material data for each of the four classes: ABSA, ABSB, ABSC and ASTMA7.
The ABSB material data collected from the technical report SSC142 gabriel1962 have mean 34.782 ksi and coefficient of variation 0.116, and are assumed to follow a lognormal distribution. Again, we assume this to be the “true” model and, for our investigation, all “data” are synthetically generated from this lognormal distribution. The initial 10 yield strength values are shown in Figure 2.
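As a minimal sketch of the synthetic data generation (using the ABSB statistics of Table 2 and an arbitrary seed, not the paper's actual realizations), the mean and COV are first converted to the parameters of the underlying normal distribution:

```python
import numpy as np

# ABSB yield strength statistics from Table 2: mean 34.782 ksi, COV 0.116
mean, cov = 34.782, 0.116

# Convert (mean, COV) to the parameters of the underlying normal distribution
sigma2 = np.log(1.0 + cov**2)          # variance of log(X)
mu = np.log(mean) - 0.5 * sigma2       # mean of log(X)

# Draw the initial 10 synthetic yield strength values (seed is arbitrary)
rng = np.random.default_rng(2018)
data = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=10)
```

By construction, the lognormal distribution defined by (mu, sigma2) reproduces the target mean and COV exactly in the large-sample limit.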
Given these 10 values and the prior data, we contend that a single probability model form cannot be precisely identified. We therefore select seven candidate probability models: Gamma, Inverse Gaussian, Logistic, Loglogistic, Lognormal, Normal and Weibull. For each of these models, prior parameter densities are derived from each dataset in Figure 1 as described in Section 5.2.2, and Bayesian inference is performed in the following sections.
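For orientation, the seven candidate families map onto standard scipy.stats distributions (scipy's fisk is the log-logistic). A simple maximum-likelihood point fit of each family, shown below with placeholder data, is only an illustration; the paper infers full posterior parameter densities instead:

```python
import numpy as np
from scipy import stats

# The seven candidate families as scipy.stats distributions
# (scipy's 'fisk' is the log-logistic distribution)
candidates = {
    "Gamma": stats.gamma,
    "Inverse Gaussian": stats.invgauss,
    "Logistic": stats.logistic,
    "Loglogistic": stats.fisk,
    "Lognormal": stats.lognorm,
    "Normal": stats.norm,
    "Weibull": stats.weibull_min,
}

# Placeholder data standing in for the 10 yield strength values
rng = np.random.default_rng(0)
data = rng.lognormal(np.log(34.782), 0.116, size=10)

# Maximum-likelihood point fits and their log-likelihoods (illustrative only)
fits = {name: dist.fit(data) for name, dist in candidates.items()}
logliks = {name: candidates[name].logpdf(data, *params).sum()
           for name, params in fits.items()}
```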
6.2 Influence of datadriven priors on uncertainty quantification
In multimodel Bayesian UQ, there are two stages of inference, related to modelform uncertainty (Section 2) and model parameter uncertainty (Section 3). There is an interesting interplay between these two stages of inference, as suggested by Eqs. (2)–(3) and the flowchart in Figure 3.
In the first stage, multiple candidate models are considered (i.e. the seven models listed above) and some assumptions are made regarding their prior model probabilities, perhaps informed by expert opinion. As data are collected, the model probabilities are updated using Bayes’ rule (Eqs. (2)–(3)). These updated probabilities are, however, influenced by the selection of the parameter prior in Eq. (3) and, as we will see, this can play an important role in model selection. In the second stage, the model parameter distributions are inferred from the data for each model form; these are obviously strongly dependent on the prior parameter densities. The two inference processes combine to provide the posterior information used to quantify uncertainty in the parameter of interest (here, the yield strength). The forthcoming Sections 6.2.1 and 6.2.2 aim to answer the question: what influence do prior assumptions (in model form and model parameters) have on the accuracy and convergence of posterior probabilities?
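The first-stage update combines prior model probabilities with model evidences. A numerically stable sketch in log space, with hypothetical evidence values standing in for the actual computations:

```python
import numpy as np

def posterior_model_probs(log_evidence, prior):
    """First-stage Bayes' rule over model forms:
    pi(M_k | d) is proportional to p(d | M_k) * pi(M_k)."""
    log_post = np.log(prior) + log_evidence
    log_post -= log_post.max()        # stabilize before exponentiating
    w = np.exp(log_post)
    return w / w.sum()

# Hypothetical log-evidences for the seven candidate models
log_ev = np.array([-21.3, -21.1, -22.0, -21.8, -21.0, -21.4, -22.5])
uniform = np.full(7, 1.0 / 7.0)
probs = posterior_model_probs(log_ev, uniform)
```

Working in log space matters here because evidences of even moderately sized datasets underflow double precision when exponentiated directly.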
6.2.1 Effect of priors on modelform uncertainty
In Bayesian model selection, it is common to assume equal prior model probabilities (here, 1/7 for each of the seven candidates) chipman2001practical . In certain instances, subjective nonequal probabilities may be assigned. In fact, for our problem the existing literature suggests a “preferred” distribution for the yield strength (Hess et al. hess2002 suggest a lognormal distribution). With this information, we assign a prior model probability of 0.9 to the lognormal model and equal weight (0.0167) to the other models. Because there is a strong belief in the correct prior model, we refer to this as the “strong correct” prior. This strong correct prior will be compared against the uniform prior of equal probabilities as well as a “strong incorrect” prior wherein there is strong belief in the incorrect loglogistic model, such that it has prior probability 0.9 and all other prior probabilities are equal. The three model prior cases we consider are summarized in Table 3.
Table 3: Prior model probabilities for the three cases considered.

Model             Uniform  “Strong Correct”  “Strong Incorrect”
Gamma             1/7      0.0167            0.0167
Inverse Gaussian  1/7      0.0167            0.0167
Logistic          1/7      0.0167            0.0167
Loglogistic       1/7      0.0167            0.9
Lognormal         1/7      0.9               0.0167
Normal            1/7      0.0167            0.0167
Weibull           1/7      0.0167            0.0167
As data are collected, posterior model probabilities are updated according to Eqs. (2)–(3). These probabilities depend on the parameter prior assumption and, as a result, they differ based on the historical data used to derive the prior. In the small data case, it can be quite difficult to draw any meaningful conclusions regarding model probabilities, as evidenced by Table 4, which gives the posterior model probabilities from 10 yield stress data for each of the parameter priors given equal prior model probabilities. Note that in these cases, the posterior is simply equal to the (normalized) model evidence.
Table 4: Posterior model probabilities from 10 yield stress data under each parameter prior (equal prior model probabilities).

Distribution      AIC    Noninformative  ABSA   ABSB   ABSC   ASTMA7
Gamma             0.168  0.167           0.159  0.157  0.170  0.166
Inverse Gaussian  0.172  0.184           0.142  0.150  0.132  0.191
Logistic          0.119  0.115           0.161  0.118  0.064  0.136
Loglogistic       0.128  0.125           0.182  0.096  0.063  0.163
Lognormal         0.167  0.162           0.184  0.140  0.182  0.176
Normal            0.154  0.149           0.147  0.178  0.189  0.130
Weibull           0.091  0.098           0.024  0.160  0.201  0.037
This is a classic small data case where a precise “best” model is impossible to identify. Moreover, as suggested by the definition of model evidence, these posterior model probabilities are strongly dependent on the parameter prior with considerable differences across different priors.
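As the definition of the model evidence suggests, p(d | M) averages the likelihood over the parameter prior, which is what couples the posterior model probabilities to the prior choice. A crude Monte Carlo sketch of this average (not the evidence computation used in the paper, and with a hypothetical normal-model prior) makes the dependence explicit:

```python
import numpy as np
from scipy import stats

def log_evidence_mc(data, loglike, prior_sampler, n=5000, seed=1):
    """Crude Monte Carlo estimate of log p(d | M) = log E_prior[p(d | theta)]:
    average the likelihood over draws from the parameter prior, using a
    log-sum-exp trick for numerical stability."""
    rng = np.random.default_rng(seed)
    thetas = prior_sampler(rng, n)
    ll = np.array([loglike(data, th) for th in thetas])
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))

# Illustration: a normal model with a hypothetical parameter prior
data = np.array([33.1, 35.4, 34.0, 36.2, 32.8])
loglike = lambda d, th: stats.norm.logpdf(d, loc=th[0], scale=th[1]).sum()
prior_sampler = lambda rng, n: np.column_stack(
    [rng.normal(35.0, 5.0, n),        # prior on the location
     rng.uniform(0.5, 10.0, n)])      # prior on the scale
log_ev = log_evidence_mc(data, loglike, prior_sampler)
```

A wider or mislocated prior lowers the estimated evidence for the same data, which is precisely the sensitivity seen across the columns of Table 4.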
We are also interested in the convergence of the modelform uncertainty as a function of the amount of data collected. The discussion of Table 4 highlighted how very small datasets lead to large modelform uncertainties, with further uncertainty introduced by the selection of the parameter prior. But how much data is necessary to reduce this uncertainty, and how does the performance change given different parameter priors? Figure 4 shows the posterior model probabilities as a function of dataset size for different parameter priors (given equal prior model probabilities).
For comparison, Figure 5 shows the posterior model probabilities using AIC model selection (i.e. using savvy prior probabilities).
Notice that the AIC, noninformative, and ABSB priors show nearly identical trends with added data. Although they fail to identify a unique model (they essentially identify the lognormal and the inverse Gaussian with equal probability), we will see that they are in fact among the “best” priors in terms of convergence toward the true probability model. Of particular interest is the fact that the ABSA parameter prior converges toward the incorrect Gamma model and effectively discounts the lognormal model entirely.
Use of informative model prior probabilities can change this convergence behavior considerably. Figures 6 and 7 show the posterior model probabilities with dataset size for each of the seven models given the strong correct prior (Figure 6) and the strong incorrect prior (Figure 7) for each of the considered parameter priors.
For the strong correct model prior, four of the five cases show convergence toward the true lognormal model, from which the data are drawn, as the dataset grows large. Even with the strong incorrect model prior probabilities, the multimodel inference eventually suppresses the incorrect loglogistic model and identifies the correct lognormal form in these cases, indicating a degree of robustness for these parameter priors. Note also that the ABSB parameter prior with the strong incorrect model prior yields essentially equally probable lognormal and inverse Gaussian models because, under this prior, the two distributions are nearly identical in shape and the inference cannot discern between them.
In both cases, the ABSA parameter prior causes the inference to converge to the wrong Gamma model form even when 10,000 yield stress values are collected. The reason for this will be explored later but this points to the important conclusion that if the parameter prior is not wisely chosen, it may not be possible to infer even the correct model form for the data. This can have significant practical implications for uncertainty quantification and propagation.
6.2.2 Effect of parameter prior on parameter uncertainty
For each model form, the selection of the parameter prior significantly impacts the convergence of the posterior. Here, we focus on the lognormal distribution as a representative case; similar results were observed for the other probability models.
Data are generated according to the “true” lognormal distribution and Bayesian inference is conducted to infer the parameters of the lognormal model using each of the five considered parameter priors. Table 5 shows the joint parameter pdfs for “small” datasets (up to 100 data) along with the true parameter values.
[Table 5 – joint parameter pdfs of the lognormal model under the noninformative, ABSA (33), ABSB (79), ABSC (13), and ASTMA7 (58) parameter priors, shown for the prior itself and for datasets of 10, 25, 50, and 100 points. Graphical content not reproduced.]
Notice that the ABSA and ASTMA7 priors yield the most rapid convergence, but neither their priors nor their posteriors include the true parameter values. Indeed, these models are unable to infer the correct distribution despite the prior belief that they would serve as reasonable priors. The noninformative and ABSC priors provide relatively similar levels of convergence and include the true model; this is because they are sufficiently weak in the amount of incorrect information they provide. Lastly, as expected, the correct ABSB prior exhibits the best convergence to the true model.
Table 6 provides similar plots for “large” datasets (500 or more data).
[Table 6 – joint parameter pdfs of the lognormal model under the same five parameter priors for datasets of 500, 1000, 5000, and 10000 points. Graphical content not reproduced.]
We see that the models with ABSA and ASTMA7 priors continue to narrow and move slowly toward the correct parameters. However, even after 10,000 data are collected, their joint densities still do not include the correct parameters with any significant probability. The noninformative, ABSB, and ABSC priors, meanwhile, continue to converge correctly at similar rates (there is very little improvement from using the ABSB prior for large datasets).
An alternative way to look at this is to populate a set of possible distributions by Monte Carlo sampling from the joint parameter densities. This is useful both for illustration and because, as shown in the following section, it is how we propagate the total uncertainty. Table 7 shows how these distributions change with dataset size for the noninformative, ABSA, and ABSB parameter priors.
[Table 7 – sets of lognormal distributions sampled from the joint parameter densities under the noninformative, ABSA (33), and ABSB (79) priors, for datasets of 10, 25, 50, and 100 points. Graphical content not reproduced.]
Notice that the bands of distributions for the noninformative and ABSB priors include the true distribution (bold), while the ABSA band does not. Moreover, the band of distributions from the ABSA prior is significantly narrower than those from both the noninformative and ABSB priors, which implies strong confidence in the inferred distribution. In other words, this prior gives false confidence in the wrong set of models. This may have major implications for uncertainty propagation, which is taken up in Section 6.3.
6.2.3 Effect of priors on total uncertainty
The total uncertainty in the yield strength is represented by Monte Carlo sampling from the candidate distributions as described in Section 4. For each sample (pdf), a probability model is randomly selected according to the posterior model probabilities; the parameters of this model are then randomly sampled from its posterior joint pdf. The result is a Monte Carlo set of distributions, as shown in Figure 8. This particular example shows a set of 5000 distributions given equal prior model probabilities with noninformative parameter priors for datasets of 10, 100, and 1000 points.
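The two-level sampling just described — pick a model form by its posterior probability, then draw its parameters — can be sketched as follows, where the posterior probabilities and the Gaussian approximations of the posterior parameter densities are hypothetical stand-ins for the actual inference output:

```python
import numpy as np

def sample_distribution_set(model_probs, param_samplers, n=5000, seed=0):
    """Draw n (model form, parameters) pairs: first a model according to its
    posterior probability, then parameters from that model's posterior."""
    rng = np.random.default_rng(seed)
    names = list(model_probs)
    p = np.array([model_probs[k] for k in names])
    picks = rng.choice(len(names), size=n, p=p / p.sum())
    return [(names[i], param_samplers[names[i]](rng)) for i in picks]

# Hypothetical posterior model probabilities and Gaussian approximations of
# the posterior parameter densities (illustrative values only)
model_probs = {"Lognormal": 0.55, "Inverse Gaussian": 0.35, "Gamma": 0.10}
param_samplers = {
    "Lognormal": lambda rng: rng.multivariate_normal(
        [3.54, 0.115], np.diag([1e-3, 1e-4])),
    "Inverse Gaussian": lambda rng: rng.multivariate_normal(
        [0.013, 2650.0], np.diag([1e-6, 1e2])),
    "Gamma": lambda rng: rng.multivariate_normal(
        [75.0, 0.46], np.diag([4.0, 1e-4])),
}
dist_set = sample_distribution_set(model_probs, param_samplers, n=5000)
```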
To measure the degree of uncertainty in a given model set, we compute the average mean square distance between the N = 5000 models in the set and the true lognormal density, given by

d̄ = (1/N) Σᵢ₌₁ᴺ ∫ [fᵢ(x) − f_T(x)]² dx    (33)

where fᵢ(x) are the distributions in the set and f_T(x) is the lognormal pdf of the true model. This distance is plotted as a function of dataset size in Figure 9 for each parameter prior and for the three cases of model prior probabilities.
When the dataset size is small, there is a major benefit to using the correct ABSB prior: the set of distributions is comparatively close to the true distribution. All other priors start by poorly representing the true distribution (relatively large average mean square distance), but the noninformative and ABSC priors come to be almost as good as the ABSB prior after data are collected. The ABSA and ASTMA7 priors, on the other hand, cannot achieve the same level of accuracy as the other priors, even for very large datasets. The result is that the sets of distributions generated from the ABSA and ASTMA7 priors can retain residual errors that effectively result in the identification and propagation of incorrect probability models (as evidenced again by the lognormal distributions from the ABSA prior in Table 7, which do not include the true model).
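Assuming each member of the model set is available as a pdf evaluated on a common grid, the distance of Eq. (33) can be approximated by quadrature. A sketch with synthetic lognormal members standing in for actual posterior draws:

```python
import numpy as np
from scipy import stats

def avg_mean_square_distance(pdf_set, true_pdf, x):
    """Average integrated squared distance (Eq. (33)) between each pdf in the
    set and the true density, approximated on the uniform grid x."""
    dx = x[1] - x[0]
    f_true = true_pdf(x)
    return float(np.mean([((f(x) - f_true) ** 2).sum() * dx for f in pdf_set]))

# Grid over the yield strength range and a "true" lognormal density
# (parameter values here are illustrative)
x = np.linspace(15.0, 60.0, 2000)
true_pdf = stats.lognorm(s=0.115, scale=34.6).pdf

# Synthetic model set: lognormals with perturbed parameters standing in for
# posterior samples of (model, parameters)
rng = np.random.default_rng(3)
pdf_set = [stats.lognorm(s=abs(rng.normal(0.115, 0.02)),
                         scale=rng.normal(34.6, 1.0)).pdf
           for _ in range(200)]
d_bar = avg_mean_square_distance(pdf_set, true_pdf, x)
```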
6.3 Influence of datadriven priors on uncertainty propagation
The results of the multimodel inference process are used to identify a set of probability models (e.g. Figure 8) that can be propagated through a physicsbased model using the method in Section 4 zhang2018 . Given the sensitivity of the posterior probabilities to the selection of the prior, this raises the question: what impact do prior assumptions have on response quantities from the model? If the prior yields rapid convergence, it stands to reason that we should expect rapid convergence (i.e. small uncertainty) in response quantities. But how much of an improvement can be gained through good prior selection, and how poor are the results if a bad prior is selected? The results of the previous section imply that a poor prior can yield not just large uncertainties, but an incorrect probabilistic response. We explore these issues in the context of our plate buckling problem in this section.
Total uncertainty is propagated using the IS reweighting method proposed in zhang2018 and reviewed in Section 4. For illustration, consider quantification based on the ABSB parameter prior with equal prior model probabilities. Table 8 shows the propagation results for uncertainties quantified from datasets of different sizes.
[Table 8 – propagation results for datasets of 10, 25, 50, 500, and 5000 points; columns show the optimal sampling density (OSD), the buckling strength CDFs, and the CDFs of the mean buckling strength and probability of failure. Graphical content not reproduced.]
The left column shows the set of 5000 probability densities identified from MC sampling of the quantified uncertainties in model form and parameters, along with the optimal sampling density for propagation. The second column shows the set of CDFs for the buckling strength along with the true buckling strength CDF obtained by propagating the true lognormal model. Notice that the true CDF is fully encapsulated within the set of propagated distributions. The final columns show CDFs for the mean buckling strength and the probability of failure. The bold line gives the overall CDF considering all model forms (and their probabilities), while the colored CDFs are conditional CDFs for each plausible model form. Also shown in these figures are the true mean buckling strength and probability of failure which, in all cases, fall within the range of the CDFs.
As expected, the uncertainty diminishes with increased dataset size. More specifically, the bands of distributions in the input pdfs and output CDFs narrow toward the correct distributions, and the ranges of the CDFs for the mean buckling strength and the probability of failure narrow toward the true values as the dataset size increases.
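The reuse of model evaluations implied by the IS reweighting of Section 4 can be sketched generically: run the model once under a sampling density q, then reweight the same runs to each candidate input pdf. The function names, densities, and the quadratic stand-in for the buckling model below are illustrative, not the paper's implementation:

```python
import numpy as np
from scipy import stats

def reweighted_mean(x, g, q_pdf, p_pdf):
    """Self-normalized importance-sampling estimate of E_p[g(X)], reusing
    model evaluations g computed once from samples x ~ q."""
    w = p_pdf(x) / q_pdf(x)
    w /= w.sum()
    return float(np.sum(w * g))

# One set of "model runs" under a broad sampling density q (illustrative)
rng = np.random.default_rng(7)
q = stats.norm(34.8, 6.0)
x = q.rvs(size=100000, random_state=rng)
g = x**2                    # stand-in for the buckling strength model

# The same runs reweighted to two different candidate input pdfs
est_normal = reweighted_mean(x, g, q.pdf, stats.norm(34.8, 4.0).pdf)
est_lognorm = reweighted_mean(x, g, q.pdf,
                              stats.lognorm(s=0.115, scale=34.6).pdf)
```

The key point is that the expensive model evaluations g are computed once, while the reweighting to each candidate pdf in the Monte Carlo set is essentially free.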
These trends are clear and illustrate the method’s performance when a good prior (ABSB) is selected. But do these trends hold for other priors? To assess the effect of different priors, we quantify the convergence of the mean buckling strength, the variance of the buckling strength, and the probability of failure using two different metrics. The first is a simple quantile confidence metric, which defines the 95% confidence range for a statistic s given data d by

R_s(d) = [F_s⁻¹(0.025 | d), F_s⁻¹(0.975 | d)]    (34)

where F_s is the CDF of s obtained from the simulation. The second is an accuracy metric, the “area validation metric” ferson2008 ; roy2011 , which measures the difference in area between the CDF of s and its true value s* given data d:

A_s(d) = ∫ |F_s(x | d) − H(x − s*)| dx    (35)

where H(·) is the unit step function centered at the true value s*. The confidence and accuracy metrics for the mean, variance, and probability of failure are denoted accordingly.
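Both metrics operate on the empirical CDF of a statistic computed over the model set. Assuming a vector of sampled statistic values (the samples below are hypothetical), they can be sketched as:

```python
import numpy as np

def confidence_range(samples):
    """95% confidence range R_s of Eq. (34) from sampled statistic values."""
    return np.quantile(samples, [0.025, 0.975])

def area_metric(samples, true_value):
    """Area validation metric of Eq. (35). For an empirical CDF, the area
    between the CDF and the unit step at the true value reduces to the mean
    absolute deviation E|S - s*|."""
    return float(np.abs(np.asarray(samples, dtype=float) - true_value).mean())

# Hypothetical sampled values of the mean buckling strength from the model set
rng = np.random.default_rng(11)
samples = rng.normal(24.0, 0.5, size=5000)
lo, hi = confidence_range(samples)
acc = area_metric(samples, 24.1)
```

The reduction in area_metric follows by splitting the integral at s*: the area below s* is E[(s* − S)⁺] and the area above is E[(S − s*)⁺], which sum to E|S − s*|.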
Let us begin by investigating the effect of the prior model probabilities while retaining the correct ABSB parameter prior. Figure 10 shows convergence of the confidence metric (Eq. (34)) and the area accuracy metric (Eq. (35)).
These figures show that the strong correct model prior probabilities provide significant improvements in both confidence and accuracy for the mean and variance of the buckling strength for small datasets; the improvement diminishes as the dataset size increases. For the probability of failure, on the other hand, the prior model probabilities have a relatively modest effect on convergence.
Next, consider the effect of the parameter priors. Here, we employ equal prior model probabilities and vary the parameter prior. Figure 11 shows convergence of the confidence and area metrics for the mean buckling strength, the variance of the buckling strength, and the probability of failure with dataset size.
As expected, the ABSB prior shows consistently good performance in terms of both confidence and accuracy (it is a good prior). In fact, most of the other priors show reasonable performance as well, and all converge in confidence at approximately the same rate. The problem lies in the accuracy convergence of the mean buckling strength and the probability of failure using the ASTMA7 and ABSA priors. Recall that these priors did not accurately quantify the input uncertainty; consequently, the accuracy of the response statistics is slow to converge.
This poses a significant problem because Figures 11(a)–(c) suggest a high level of confidence regardless of the prior, while Figures 11(d)–(f) suggest that accuracy depends on the prior. The result in these cases is high confidence in inaccurate statistics. This is more clearly illustrated in Figure 12, which shows CDFs for the mean buckling strength and the probability of failure for different priors given 10,000 data with equal prior model probabilities.
In the mean value, the CDF for the ASTMA7 prior is narrow but does not intersect the true value: its confidence metric is small, suggesting 95% probability that the mean lies in a narrow range, but it is inaccurate because that range does not contain the true value. Similarly, the ASTMA7 and ABSA priors yield high confidence in incorrect estimates of the probability of failure: their 95% ranges are narrow but exclude the true value. The result using these priors, even for large datasets, is high confidence in the wrong answer.
6.4 Discussion
The objective of imprecise probabilities in general, and of the Bayesian multimodel UQ and propagation method proposed here in particular, is to provide a nearcomplete picture of both epistemic and aleatory uncertainty in computational modeling. While the presented methodology is robust under noninformative priors, the work here has shown that it is not immune to biases introduced by improperly informed priors. Even with only slight misinformation in the prior (e.g. materials data from similar but not identical materials), the Bayesian approach can produce erroneous results that propagate through the computational modeling process, yielding incorrect predictions of system performance and probability bounds that do not include the true response. Noninformative priors, while robust in bounding the real uncertainties, may be unnecessarily wide compared to properly informed priors when datasets are small (for large datasets there is little benefit to informative priors). Consequently, it is the modeler’s responsibility to judge the relative “safety” of using noninformative priors against the risks and benefits of using informative priors.
7 Conclusion
In this work, the multimodel uncertainty quantification and propagation method previously proposed by the authors zhang2018 is recast in a fully Bayesian framework. This provides additional robustness in terms of quantifying uncertainties associated with probability model form in particular. Within this Bayesian framework, we are primarily interested in understanding the influence of prior probabilities in both probability modelform and probability model parameters on multimodel UQ and the propagation of these uncertainties.
The paper deals primarily with the case where uncertainties are quantified from small datasets, which necessitates a multimodel approach and makes prior probabilities important. Through an example considering the analytical buckling analysis of a simply supported plate, we systematically explore the effect of various prior modelform and model parameter probabilities on multimodel uncertainty quantification and propagation. With regard to modelform uncertainties, it is shown that assumptions about prior probabilities have a significant influence on quantified uncertainties when datasets are small but incorrect prior probabilities can be overcome by large datasets if the parameter priors are appropriate. With regard to model parameter priors, it is shown that priors derived from historical datasets of varying suitability to the present analysis have a clear influence on uncertainties quantified from small datasets. Moreover, parameter priors derived from historical datasets that are similar to the presently collected data (but nonetheless different) can introduce biases in the multimodel inference that persist even as very large datasets are collected.
The combined effects of modelform and model parameter priors on uncertainty propagation are then investigated. Again, it is shown that uncertainties in response quantities depend strongly on both priors and biases introduced by incorrect priors persist yielding inaccurate probabilistic response quantities even in the large data limit.
8 Acknowledgements
The work presented herein has been supported by the Office of Naval Research under Award Number N000141612582 with Dr. Paul Hess as program officer.
References
 (1) A. Der Kiureghian, O. Ditlevsen, Aleatory or epistemic? Does it matter?, Structural Safety 31 (2) (2009) 105–112.
 (2) P. Walley, Statistical reasoning with imprecise probabilities, Vol. 42, Peter Walley, 1991.
 (3) P. Walley, Towards a unified theory of imprecise probability, International Journal of Approximate Reasoning 24 (23) (2000) 125–148.
 (4) I. Molchanov, Theory of Random Sets, Vol. 53, SpringerVerlag, London, 2005.
 (5) T. Fetz, M. Oberguggenberger, Propagation of uncertainty through multivariate functions in the framework of sets of probability measures, Reliability Engineering & System Safety 85 (1–3) (2004) 73–87.
 (6) T. Fetz, M. Oberguggenberger, Imprecise random variables, random sets, and monte carlo simulation, International Journal of Approximate Reasoning 78 (2016) 252–264.
 (7) R. E. Moore, Methods and applications of interval analysis, Vol. 2, SIAM, 1979.
 (8) K. Weichselberger, The theory of intervalprobability as a unifying concept for uncertainty, International Journal of Approximate Reasoning 24 (2) (2000) 149–170.
 (9) S. Ferson, V. Kreinovich, L. Ginzburg, D. S. Myers, K. Sentz, Constructing probability boxes and DempsterShafer structures, Vol. 835, Sandia National Laboratories Albuquerque, 2002.
 (10) S. Ferson, J. G. Hajagos, Arithmetic with uncertain numbers: rigorous and (often) best possible answers, Reliability Engineering & System Safety 85 (1) (2004) 135–152.
 (11) R. Schöbi, B. Sudret, Uncertainty propagation of pboxes using sparse polynomial chaos expansions, Journal of Computational Physics 339 (2017) 307–327.
 (12) A. E. Raftery, Bayesian model selection in social research, Sociological methodology (1995) 111–163.
 (13) S. Sankararaman, S. Mahadevan, Distribution type uncertainty due to sparse and imprecise data, Mechanical Systems and Signal Processing 37 (1) (2013) 182–198.
 (14) P. Walley, T. L. Fine, Towards a frequentist theory of upper and lower probability, The Annals of Statistics (1982) 741–761.
 (15) M. E. Cattaneo, Empirical interpretation of imprecise probabilities, in: Proceedings of the Tenth International Symposium on Imprecise Probability: Theories and Applications, 2017, pp. 61–72.
 (16) D. Dubois, H. Prade, Random sets and fuzzy interval analysis, Fuzzy Sets and Systems 42 (1) (1991) 87–101.
 (17) D. Dubois, H. Prade, Intervalvalued fuzzy sets, possibility theory and imprecise probability., in: EUSFLAT Conf., 2005, pp. 314–319.
 (18) D. Dubois, H. Prade, Possibility theory: an approach to computerized processing of uncertainty, Springer Science & Business Media, 2012.
 (19) A. P. Dempster, Upper and lower probabilities induced by a multivalued mapping, The annals of mathematical statistics (1967) 325–339.
 (20) G. Shafer, A mathematical theory of evidence, Vol. 1, Princeton university press Princeton, 1976.
 (21) L. A. Zadeh, Fuzzy sets, Information and Control 8 (3) (1965) 338–353.
 (22) M. Beer, S. Ferson, V. Kreinovich, Imprecise probabilities in engineering analyses, Mechanical systems and signal processing 37 (1) (2013) 4–29.
 (23) J. Zhang, M. D. Shields, On the quantification and efficient propagation of imprecise probabilities resulting from small datasets, Mechanical Systems and Signal Processing 98 (2018) 465–483.
 (24) D. Draper, Assessment and propagation of model uncertainty, Journal of the Royal Statistical Society. Series B (Methodological) (1995) 45–97.
 (25) D. Draper, J. S. Hodges, E. E. Leamer, C. N. Morris, D. B. Rubin, A research agenda for assessment and propagation of model uncertainty.
 (26) T. K. Dijkstra, On Model Uncertainty and Its Statistical Implications: Proceedings of a Workshop, Held in Groningen, The Netherlands, September 25–26, 1986, Vol. 307, Springer Science & Business Media, 1988.
 (27) J. L. Beck, K.V. Yuen, Model selection using response measurements: Bayesian probabilistic approach, Journal of Engineering Mechanics 130 (2) (2004) 192–203.
 (28) J. L. Beck, Bayesian system identification based on probability logic, Structural Control and Health Monitoring 17 (7) (2010) 825–847.
 (29) S. H. Cheung, J. L. Beck, Calculation of posterior probabilities for bayesian model class assessment and averaging from posterior samples based on dynamic system data, ComputerAided Civil and Infrastructure Engineering 25 (5) (2010) 304–321.
 (30) K. Farrell, J. T. Oden, D. Faghihi, A bayesian framework for adaptive selection, calibration, and validation of coarsegrained models of atomistic systems, Journal of Computational Physics 295 (2015) 189–208.
 (31) J. T. Oden, E. A. Lima, R. C. Almeida, Y. Feng, M. N. Rylander, D. Fuentes, D. Faghihi, M. M. Rahman, M. DeWitt, M. Gadde, et al., Toward predictive multiscale modeling of vascular tumor growth, Archives of Computational Methods in Engineering 23 (4) (2016) 735–779.
 (32) E. Prudencio, P. Bauman, D. Faghihi, K. RaviChandar, J. Oden, A computational framework for dynamic datadriven material damage control, based on bayesian inference and model selection, International Journal for Numerical Methods in Engineering 102 (34) (2015) 379–403.
 (33) H. Akaike, A new look at the statistical model identification, IEEE transactions on automatic control 19 (6) (1974) 716–723.
 (34) H. Akaike, Canonical correlation analysis of time series and the use of an information criterion, Mathematics in Science and Engineering 126 (1976) 27–96.
 (35) G. Schwarz, et al., Estimating the dimension of a model, The annals of statistics 6 (2) (1978) 461–464.
 (36) C. M. Hurvich, C.L. Tsai, Regression and time series model selection in small samples, Biometrika 76 (2) (1989) 297–307.
 (37) S. Konishi, G. Kitagawa, Information criteria and statistical modeling, Springer Science & Business Media, 2008.
 (38) C. Chatfield, Model uncertainty, data mining and statistical inference, Journal of the Royal Statistical Society. Series A (Statistics in Society) 158 (3) (1995) 419–466.
 (39) J. A. Hoeting, D. Madigan, A. E. Raftery, C. T. Volinsky, Bayesian model averaging: a tutorial, Statistical science (1999) 382–401.
 (40) K. P. Burnham, D. R. Anderson, Multimodel inference: understanding AIC and BIC in model selection, Sociological Methods & Research 33 (2) (2004) 261–304.
 (41) H. Akaike, Likelihood of a model and information criteria, Journal of econometrics 16 (1) (1981) 3–14.
 (42) C. M. Hurvich, C.L. Tsai, Model selection for extended quasilikelihood models in small samples, Biometrics (1995) 1077–1084.
 (43) S. Chib, I. Jeliazkov, Marginal likelihood from the Metropolis–Hastings output, Journal of the American Statistical Association 96 (453) (2001) 270–281.
 (44) J. Skilling, Nested sampling, in: AIP Conference Proceedings, Vol. 735, AIP, 2004, pp. 395–405.
 (45) C. S. Bos, A comparison of marginal likelihood computation methods, in: Compstat, Springer, 2002, pp. 111–116.
 (46) N. Friel, J. Wyse, Estimating the evidence–a review, Statistica Neerlandica 66 (3) (2012) 288–308.
 (47) Z. Zhao, T. A. Severini, Integrated likelihood computation methods, Computational Statistics (2016) 1–33.
 (48) W. K. Hastings, Monte Carlo sampling methods using Markov chains and their applications, Biometrika 57 (1) (1970) 97–109.
 (49) S. Geman, D. Geman, Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Transactions on Pattern Analysis and Machine Intelligence (6) (1984) 721–741.
 (50) J. Goodman, J. Weare, Ensemble samplers with affine invariance, Communications in applied mathematics and computational science 5 (1) (2010) 65–80.
 (51) D. Foreman-Mackey, D. W. Hogg, D. Lang, J. Goodman, emcee: The MCMC hammer, Publications of the Astronomical Society of the Pacific 125 (925) (2013) 306.
 (52) J. Zhang, M. Shields, Probability measure changes in monte carlo uncertainty propagation, To be submitted.
 (53) H. Chipman, E. I. George, R. E. McCulloch, M. Clyde, D. P. Foster, R. A. Stine, The practical implementation of Bayesian model selection, Lecture Notes-Monograph Series (2001) 65–134.
 (54) J. O. Berger, J. M. Bernardo, Estimating a product of means: Bayesian analysis with reference priors, Journal of the American Statistical Association 84 (405) (1989) 200–207.
 (55) A. Gelman, et al., Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper), Bayesian Analysis 1 (3) (2006) 515–534.
 (56) L. Tenorio, An Introduction to Data Analysis and Uncertainty Quantification for Inverse Problems, Vol. 3, SIAM, 2017.
 (57) H. Jeffreys, An invariant form for the prior probability in estimation problems, in: Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, Vol. 186, The Royal Society, 1946, pp. 453–461.
 (58) D. W. Scott, Multivariate density estimation: theory, practice, and visualization, John Wiley & Sons, 2015.
 (59) B. W. Silverman, Density estimation for statistics and data analysis, Vol. 26, CRC press, 1986.
 (60) ClassNK, Investigation report on structural safety of large container ships (2014).
 (61) D. Faulkner, A review of effective plating to be used in the analysis of stiffened plating in bending and compression, Tech. rep. (1973).
 (62) C. A. Carlsen, Simplified collapse analysis of stiffened plates, Norwegian Maritime Research 5 (4).
 (63) P. E. Hess, D. Bruchman, I. A. Assakkaf, B. M. Ayyub, Uncertainties in material and geometric strength and load variables, Naval Engineers Journal 114 (2) (2002) 139–166.
 (64) C. G. Soares, Uncertainty modelling in plate buckling, Structural Safety 5 (1) (1988) 17–34.
 (65) A. Mansour, H. Jan, C. Zigelman, Y. Chen, S. Harding, Implementation of reliability methods to marine structures, Transactions – Society of Naval Architects and Marine Engineers 92 (1984) 353–382.
 (66) K. Atua, I. Assakkaf, B. M. Ayyub, Statistical characteristics of strength and load random variables of ship structures, in: Probabilistic Mechanics and Structural Reliability, Proceeding of the Seventh Specialty Conference, Worcester Polytechnic Institute, Worcester, Massachusetts, 1996.
 (67) J. Kufman, M. Prager, Marine structural steel toughness data bank. Volume 14, Tech. rep., DTIC Document (1990).
 (68) J. Gabriel, E. Imbembo, Investigation of the notch-toughness properties of ABS ship plate steels, Tech. rep., Ship Structure Committee, Washington, DC (1962).
 (69) I. Boulger, W. Hansen, The effect of metallurgical variables in ship-plate steels on the transition temperatures in the drop-weight and Charpy V-notch tests.
 (70) S. Ferson, W. L. Oberkampf, L. Ginzburg, Model validation and predictive capability for the thermal challenge problem, Computer Methods in Applied Mechanics and Engineering 197 (29) (2008) 2408–2430.
 (71) C. J. Roy, W. L. Oberkampf, A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing, Computer Methods in Applied Mechanics and Engineering 200 (25) (2011) 2131–2144.