The use of computer simulations is becoming increasingly important in broad areas of science (Lin et al., 2010; Wong et al., 2017). High-fidelity mathematical models allow analysts to perform virtual (or in silico) experiments on complex natural or societal phenomena of interest (see Smith et al., 2009, among others). Predictions are often used to support policy-making. However, the level of sophistication of the models is often too high for analytical solutions to be available. In these cases, the only way to obtain a quantitative solution may be to encode the complex mathematical equations in computer software, so that the input-output mapping remains a black box to the analyst. It then becomes important to carefully design and execute the computer experiment. The design and analysis of computer experiments (DACE) entered the statistical literature with the seminal work of Sacks et al. (1989) (see also the monographs of Santner et al. (2003); Kleijnen (2008)). Since then, researchers have studied the creation of space-filling designs (Pronzato and Müller, 2012; He, 2017), the calibration of computer codes with real data (Tuo and Wu, 2015), their emulation (Marrel et al., 2012), the quantification of uncertainty in their output (Oakley and O’Hagan, 2002; Ghanem et al., 2016) and their sensitivity analysis (Oakley and O’Hagan, 2004; Borgonovo et al., 2014). These areas are intertwined. A given design may allow, for instance, not only uncertainty quantification, but also the creation of an emulator and a sensitivity analysis.
Probabilistic (or global) sensitivity measures are an indispensable complement to uncertainty quantification, as they highlight which areas should be given priority when planning data collection or further modelling efforts. International agencies such as the US Environmental Protection Agency (U.S. Environmental Protection Agency, 2009), the British National Institute for Health and Care Excellence (NICE, 2013) and the European Commission (2009) have issued guidelines recommending probabilistic sensitivity analysis methods as the gold standard for ensuring reliability and transparency when using the output of a computer code for decision-making under uncertainty. Over the years, several probabilistic sensitivity measures have been proposed. Different measures enjoy alternative properties, making them preferable in different contexts and for different purposes. We recall regression-based (Helton and Sallaberry, 2009), variance-based (Saltelli and Tarantola, 2002; Jiménez Rugama and Gilquin, 2018) and moment-independent measures (Borgonovo et al., 2014), all of which offer alternative ways to quantify the degree of statistical dependence between the simulator inputs and the output. A transversal issue in realistic applications is that analytical expressions of these measures are unavailable and analysts must resort to estimation. This, however, is a challenging task, especially for simulators with a high number of inputs (the curse of dimensionality) or with long running times (high computational cost).
Recent results (Strong et al., 2012; Strong and Oakley, 2013) establish the one-sample (or given-data) approach as an attractive design, which allows analysts to estimate global sensitivity measures from a single probabilistic sensitivity analysis sample, i.e., a sample generated for propagating uncertainty in the simulator. Thus, a one-sample approach has a nominal cost equal to the sample size and independent of the number of inputs, a feature that potentially reduces the impact of the curse of dimensionality. Related works such as Strong et al. (2012); Strong and Oakley (2013); Plischke et al. (2013); Strong et al. (2014, 2015) and Borgonovo et al. (2016) provide advances on theoretical and numerical aspects of the methodology, while at the same time evidencing some open research questions. One-sample estimation procedures are closely related to scatter-plot smoothing, where partitioning of the covariate space plays a central role (Hastie and Tibshirani, 1990). Strong and Oakley (2013) show that the choice of partition size affects estimation, especially when the sample size is small (see Fig. 1 of Strong and Oakley, 2013, p. 759). In the literature, some heuristics for determining a partition selection strategy that is optimal in some sense have been proposed, but finding a universally valid heuristic seems out of reach (see Appendix A.1 for numerical experiments illustrating this issue). Moreover, the literature is concerned with point estimators, and uncertainty regarding the estimated value of a sensitivity measure is not an intrinsic part of the analysis. Because most one-sample estimators are consistent (in the frequentist sense), an accurate estimation of the error is often overlooked. However, especially at small sample sizes, it is essential for transparency that interval estimates become part of result communication (see Janon et al. (2014b), among others).
We propose to enrich the one-sample design through the use of Bayesian non-parametric (BNP) methods, aiming to reduce and even eliminate the partition selection problem, while making uncertainty in the estimates a natural ingredient. First, we extend the partition approach, using Bayesian non-parametric models to augment the output sample within each partition set, by adequately generating additional synthetic data according to two alternative schemes. The Bayesian intuition supporting these designs can be interpreted as setting a prior on the conditional distribution of the output, given that the input realization falls within a given set of the partition. We build estimators based on this intuition for variance-based, density (pdf)-based and cumulative distribution function (cdf)-based global sensitivity measures. We compare the results with given-data estimators currently in use at low sample sizes, through numerical experiments. The results show that our estimators recover the correct ranking of the inputs, while providing an appropriate quantification of the estimation uncertainty. However, the results may be strongly influenced by the partition choice. Therefore, we investigate two additional classes of estimators based on Bayesian non-parametric joint and conditional density estimation methods. These estimators eliminate the partition selection problem and, at the same time, enable error quantification. Finally, we discuss the application of all the new estimators to the global sensitivity analysis of the benchmark computer code known as LevelE (Saltelli and Tarantola, 2002). Results show that the estimators correctly identify the key drivers of uncertainty at sample sizes lower than the ones used in previous literature.
Additionally, the analyst obtains a quantification of the uncertainty in the estimates in the form of a posterior distribution, which can be used to determine whether the available sample is large enough to draw robust conclusions about the ranking of the simulator inputs.
The remainder of the paper is organized as follows. We begin in Section 2 by introducing the framework of global sensitivity analysis and the one-sample estimation approach for probabilistic sensitivity measures. Section 3 combines Bayesian non-parametric methods and the one-sample approach to create two new partition-dependent estimators. Section 4 derives two classes of Bayesian partition-free estimators by adopting a Bayesian non-parametric density estimation approach. Section 5 presents numerical results for the LevelE code. Section 6 offers discussion and conclusions. The appendices illustrate the algorithms of the proposed estimators and discuss additional implementation details.
2 Probabilistic sensitivity analysis of computer experiments
Formally, the sensitivity analysis framework considers a multivariate mapping g with input space X and output space Y, denoted as y = g(x) + ε in its most general form. In the DACE set-up, g represents a set of operations performed by a computer code which processes a set of inputs, resulting in a set of outputs of interest. The term ε represents a zero-mean error term, which is present when the simulator response is stochastic. For simplicity, we focus on deterministic univariate responses, with ε ≡ 0 and y = g(x). When information is not sufficient to fix the values of the inputs, it is common to “assume to have information about the factors’ probability distribution, either joint or marginal, with or without correlation, and that this knowledge comes from measurements, estimates, expert opinion…” (Saltelli and Tarantola, 2002, p. 704). We denote the input probability space by (X, B(X), P_X), where P_X represents the joint probability measure of the inputs, assumed known. Similarly, (Y, B(Y), P_Y) denotes the output probability space, where P_Y represents the distribution of the output induced by P_X through g.
It has recently been shown that several probabilistic sensitivity measures frequently used in practice can be expressed as expectations of measures of discrepancy between the conditional output distribution P_{Y|X_i} and the marginal output distribution P_Y. In particular, we focus on probabilistic sensitivity measures of the form:

ξ_i = E[ d(P_Y, P_{Y|X_i}) ],    (1)

where the expectation is calculated with respect to the marginal distribution of X_i and d is a pre-metric on the space of probability measures over the output space; ξ_i is called the probabilistic sensitivity measure of X_i with inner operator d (Borgonovo et al., 2014).
Table 1 reports three probabilistic sensitivity measures encompassed by this construction, namely, the variance-based sensitivity measure (S_i), the density-based δ-importance measure (δ_i) and the cdf-based Kolmogorov-Smirnov importance measure (β_i^KS) (Pearson, 1905; Saltelli and Tarantola, 2002; Oakley and O’Hagan, 2004). The quantity S_i represents the expected fractional reduction in the simulator output variance attained by fixing X_i, and it is equal to the popular first-order variance-based sensitivity index (Iman and Hora, 1990; Sobol’, 1993) when the simulator inputs are mutually independent (Saltelli and Tarantola, 2002; Liu and Owen, 2006).
Most global sensitivity measures are used by analysts to quantify the strength of the statistical dependence of the output on the inputs. It has been noted that first-order variance-based sensitivity measures do not possess the nullity-implies-independence property, known also as Rényi’s postulate D for measures of statistical dependence (Rényi, 1959). In simple terms, a null value of S_i does not imply that the output is independent of X_i. This has led to the introduction and study of sensitivity measures that satisfy this postulate, and the class of distribution-based (or moment-independent) sensitivity measures is attracting increasing attention in the literature (see e.g. Gamboa et al., 2018; Da Veiga, 2015; Rahman, 2014). As representatives, we focus on the sensitivity measures δ_i and β_i^KS (Table 1), which quantify the expected separation between P_Y and P_{Y|X_i} through the L1-norm between densities and the Kolmogorov-Smirnov distance between cumulative distribution functions, respectively. Besides complying with Rényi’s postulate D, these probabilistic sensitivity measures are also invariant to monotonic transformations, a property that accelerates convergence in numerical estimation (for further details see e.g. Borgonovo et al., 2014).
As mentioned in the introduction, analytical expressions for these and other popular sensitivity measures are not available in most realistic applications, and their estimation is a prolific subject of research. The next section discusses some relevant aspects of numerical estimation.
2.1 One-sample approach to the estimation of probabilistic sensitivity measures
The estimation of global sensitivity measures is a challenging task, and the availability of efficient designs is crucial in realistic applications. The number of simulator evaluations necessary to estimate sensitivity measures encompassed by Eq. (1) for a simulator with k inputs, using a brute-force double-loop approach, would be of the order of n²k simulator runs, where n denotes the sample size required for Monte Carlo uncertainty quantification. The design rapidly becomes infeasible: even for moderate values of n and k, such a number of simulator runs would require a prohibitive computational effort for most complex computer codes used in practice. However, notable advances in the literature have contributed to abating this computational burden (see Tissot and Prieur, 2015; Janon et al., 2014a, for reviews). Saltelli (2002), for instance, achieved the estimation of variance-based sensitivity measures at a cost of n(k+2) simulator runs, while the method of Saltelli et al. (1999) achieves a cost of order nk.
Recently, efforts have been made towards an estimation cost independent of the number of simulator inputs k. Strong and Oakley (2013), Strong et al. (2014) and Strong et al. (2015) show that value-of-information measures can be estimated from a single probabilistic sensitivity analysis sample, i.e., from the Monte Carlo sample generated for uncertainty quantification, thus lowering the computational cost to n simulator runs. Röhlig et al. (2009) and Strong et al. (2012) obtain similar results for first-order variance-based sensitivity measures, and Plischke et al. (2013) extend the intuition to density-based measures. The approaches proposed in these works receive the common name of one-sample or given-data estimation methods.
One-sample methods can be seen as generalizations of the intuition developed for estimating the correlation ratio (Pearson, 1905). If X_i is a discrete random variable, then an input-output sample of (sufficiently large) size n contains repeated observations of the output for each fixed value of X_i, while the other inputs remain random. This allows the estimation of the sensitivity measure directly from a sample of size n. For a continuous X_i, a similar result may be achieved by partitioning the support of X_i into M bins. The point condition is then replaced by the condition that X_i falls within the m-th bin. Then, for any sensitivity measure encompassed by Eq. (1), a one-sample estimator is given by (Borgonovo et al., 2016):
where the inner term may be any estimator of the discrepancy d. Note that, by using equiprobable partition sets, the weight of each bin reduces to 1/M. In practice, this partition probability is estimated by the sample proportion n_m/n, where n_m denotes the number of realizations for which the i-th input falls within the m-th partition set of its support. Borgonovo et al. (2016) (Theorem 2) show that, under mild conditions on the inner operator d, a consistent version of the estimator in Eq. (2) can be obtained if the partition size M is chosen as a monotonically increasing function of the sample size n, such that M(n) → ∞ and M(n)/n → 0.
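To make the scheme of Eq. (2) concrete, the following is a minimal sketch of the generic partition-based given-data estimator, with equiprobable bins obtained by sorting the input sample. The function names, the toy simulator and the choice M = 30 are ours, purely for illustration:

```python
import numpy as np

def one_sample_estimate(x, y, M, inner):
    """Given-data estimator in the spirit of Eq. (2): split the support of
    one input into M roughly equiprobable bins via the ranks of x, and
    average the inner discrepancy statistic over the bins."""
    n = len(y)
    order = np.argsort(x)                 # ranks give equiprobable bins
    est = 0.0
    for idx in np.array_split(order, M):
        est += (len(idx) / n) * inner(y, y[idx])
    return est

def vb_inner(y_all, y_bin):
    """Plug-in inner statistic for the variance-based measure:
    (mean of Y in the bin minus overall mean)^2 over the overall variance."""
    return (y_bin.mean() - y_all.mean()) ** 2 / y_all.var()

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(size=(2, 5000))
y = x1 + 0.1 * x2                         # toy simulator: x1 dominates
s1 = one_sample_estimate(x1, y, M=30, inner=vb_inner)
s2 = one_sample_estimate(x2, y, M=30, inner=vb_inner)
```

Any of the inner statistics discussed in this section (variance-, pdf- or cdf-based) can be passed as `inner`, which is what makes the one-sample design reusable across sensitivity measures.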
Indeed, the most popular one-sample estimator of S_i relies on a plug-in estimator of the inner statistic, based on the output sample mean and variance, ȳ and s², respectively, to estimate the marginal mean and variance of the output. The within-cluster sample mean ȳ_m, with m = 1, …, M, is used to estimate the conditional mean of the output. The final expression (see e.g. Strong et al., 2012) takes the form of Eq. (2) with inner statistic:

(ȳ_m − ȳ)² / s².    (3)
The one-sample estimator for the δ-importance measure introduced by Plischke et al. (2013) takes the form of Eq. (2), with inner statistic equal to one half of the L1-distance between f̂ and f̂_m, which denote kernel-smoothed histograms of the full output vector and the within-cluster output vector, respectively. The authors propose a quadrature method for the numerical integration required by the L1-norm in the inner operator, but other solutions could be used, producing similar estimators. Because estimates of this type rely on the approximation or estimation of probability density functions, we refer to them as pdf-based estimators.
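A minimal sketch of a pdf-based estimator of this type follows, where, for brevity, we replace kernel smoothing and quadrature with plain histogram densities on a shared grid (our simplification, not the authors' implementation):

```python
import numpy as np

def delta_pdf(x, y, M=20, grid=256):
    """Histogram-based sketch of the delta importance measure:
    0.5 * E[ integral |f_Y - f_{Y|bin}| dy ], with both densities
    estimated on a common grid (kernel smoothing would be used in
    practice)."""
    edges = np.linspace(y.min(), y.max(), grid + 1)
    width = edges[1] - edges[0]
    f_all, _ = np.histogram(y, bins=edges, density=True)
    n, est = len(y), 0.0
    for idx in np.array_split(np.argsort(x), M):
        f_bin, _ = np.histogram(y[idx], bins=edges, density=True)
        est += (len(idx) / n) * 0.5 * np.abs(f_all - f_bin).sum() * width
    return est

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(size=(2, 20000))
y = x1.copy()                  # output depends on x1 only
d1 = delta_pdf(x1, y)          # large: y is fully determined by x1
d2 = delta_pdf(x2, y)          # should be near zero for an irrelevant input
```

The second call also illustrates the density-estimation bias discussed next: for an input independent of the output, histogram noise keeps the estimate strictly above its true value of zero at finite sample sizes.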
Plischke and Borgonovo (2017) observe that the kernel-smoothing methods commonly involved in the calculation of pdf-based estimators may induce bias, even at high sample sizes, for simulators with a sparse output. Therefore, they introduce alternative cdf-based estimators which rely on the properties of empirical cumulative distribution functions.
Scheffé’s theorem allows one to write the L1-distance between two probability density functions in terms of the associated probability measures, as ∫ |f_m − f| dy = 2 (P_m(B) − P(B)), where B is the set of output values for which f_m(y) > f(y). Since B can be written as a union of intervals, these probabilities can be calculated from the corresponding cumulative distribution functions. Thus, a cdf-based estimator of δ_i can be obtained as:
For further details on the estimation of these intervals, we refer to Plischke and Borgonovo (2017).
Since the Kolmogorov-Smirnov importance measure is itself a cdf-based sensitivity measure, the definition of a one-sample cdf-based estimator is straightforward:
where F̂ and F̂_m are the empirical cdfs of the full output sample and the within-cluster sample, respectively, and 1{·} denotes the indicator function, taking the value 1 if its argument is true and 0 otherwise.
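This cdf-based estimator can be sketched directly from empirical distribution functions; a minimal illustration assuming equiprobable bins (function and variable names are ours):

```python
import numpy as np

def beta_ks(x, y, M=20):
    """Sketch of the cdf-based Kolmogorov-Smirnov importance measure:
    weighted average over bins of sup_y |F_hat(y) - F_hat_m(y)|, with
    both empirical cdfs evaluated at the pooled sorted sample."""
    n = len(y)
    ys = np.sort(y)
    F_all = np.arange(1, n + 1) / n       # full-sample ecdf at sorted y
    est = 0.0
    for idx in np.array_split(np.argsort(x), M):
        # within-bin ecdf evaluated at the same pooled points
        F_bin = np.searchsorted(np.sort(y[idx]), ys, side="right") / len(idx)
        est += (len(idx) / n) * np.max(np.abs(F_all - F_bin))
    return est

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(size=(2, 10000))
y = x1 ** 2                    # monotone in x1, independent of x2
b1 = beta_ks(x1, y)            # large: strong dependence on x1
b2 = beta_ks(x2, y)            # small: x2 is irrelevant
```

Because the Kolmogorov-Smirnov distance depends on the output only through ranks, the estimate for x1 here is unchanged if the monotone transformation y = x1² is replaced by y = x1, illustrating the invariance property mentioned earlier.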
Recalling that the expected value of a non-negative random variable can be calculated as the integral of its survival function, E[Y] = ∫ (1 − F(y)) dy, a cdf-based one-sample estimator of the variance-based sensitivity measure is given by:
Notice that, since the empirical distribution functions are piece-wise constant, the integral in the above expression reduces to a sum. Plischke and Borgonovo (2017) propose an efficient way to calculate this integral.
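The survival-function identity is easy to verify numerically; the following small sketch (our own illustration) shows how, for a piecewise-constant empirical cdf, the integral collapses to a finite sum over the order statistics:

```python
import numpy as np

def mean_from_ecdf(y):
    """Sample mean written as min(y) plus the integral of the empirical
    survival function over [min(y), max(y)]; since the ecdf is a step
    function, the integral is a finite sum over the order statistics."""
    ys = np.sort(np.asarray(y, dtype=float))
    n = len(ys)
    surv = 1.0 - np.arange(1, n) / n      # survival on [ys[k], ys[k+1])
    return ys[0] + np.sum(surv * np.diff(ys))

vals = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
m = mean_from_ecdf(vals)                  # matches vals.mean()
```

Applying the same identity to the within-bin empirical cdfs yields the conditional means needed by the estimator, without ever forming a density estimate.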
Most of the estimators found in the literature, including those mentioned above, are constructed either as deterministic approximations or as (frequentist) point estimators. Therefore, quantification of the estimation error (or interval estimation) requires additional manipulation. Finding asymptotic distributions of the estimators in order to provide approximate confidence intervals is not straightforward, except for the variance-based estimator, and even in this case, they are accurate only for high sample sizes. For instance, Gamboa et al. (2016); Janon et al. (2014a); Tissot and Prieur (2015) show that variance-based estimators, calculated with a pick-and-freeze design or a replicated Latin hypercube, are asymptotically normal, but similar results are not available for other sensitivity measures. An alternative, for non-deterministic sampling methods, is to replicate the estimation procedure in order to obtain a sample of estimates and corresponding sample-based confidence intervals. This, however, multiplies the number of simulator evaluations by the number of replicates and, as mentioned in the introduction, the computational cost of such an effort could be prohibitive for time-demanding realistic applications. The idea of replicates also excludes the use of quasi-random generators to create the probabilistic sensitivity analysis samples, as they are deterministic in nature. As a further alternative, bootstrap confidence intervals have been proposed in the literature in order to avoid the need for additional simulator runs (Plischke et al., 2013; Janon et al., 2014b), but these are not an integral part of the estimation process.
A second issue to consider when using partition-based one-sample methods is the sample-size bias induced by the partition. Quantities related to the marginal distribution of the output are estimated using the full sample size n, but those related to the within-bin distribution are estimated using the smaller sample size n_m. While a sample-size correction is implicit in the estimation of variances (see Eq. 3), the same is not true for the pdf and cdf estimates of equations (4) to (5). In other words, there is a different granularity when estimating the conditional and the unconditional distributions. In Section 3, we propose two partition-dependent Bayesian estimators which mitigate the sample-size bias, while providing a natural way to quantify the estimation error, allowing interval estimation.
Within the Bayesian paradigm, unknown objects are treated as random and assigned a prior probability measure which reflects the analyst’s uncertainty about their values. In this context, Oakley and O’Hagan (2004) treat the input-output mapping as unknown (at least before evaluation). Thus, they define a semi-parametric regression model with a Gaussian process prior, which allows posterior inference on variance-based sensitivity measures. In fact, it is possible to calculate posterior means for the conditional and unconditional variance of the output, either analytically or via numerical integration. The approach eliminates the need for a partition of the covariate space, thus solving the second issue mentioned above. However, the posterior distributions of the variance-based measures are not available analytically, finding a posteriori credibility intervals for estimation error quantification would be cumbersome, and this aspect is not treated in their paper. Furthermore, it is not clear how to extend the results to the estimation of other (pdf- or cdf-based) sensitivity measures. In Section 4, we present two alternative partition-free Bayesian models which allow interval estimation for these types of sensitivity measures as well. For illustrative purposes, we focus on estimation of the three measures in Table 1.
3 Bayesian non-parametric partition-dependent estimation
We propose to quantify the uncertainty about the fixed but unknown sensitivity measures, before (a priori) and after (a posteriori) the observation of a sample, within the Bayesian paradigm. The sensitivity measures play the role of parameters of interest, and they are linked to the data through functionals of the marginal and conditional distributions of the inputs and the output. In view of this, it seems sensible to induce a prior on the sensitivity measures by assigning a prior to the family of conditional probability measures. Notice that, since the input distribution P_X is assumed known, no additional prior specification is required for the marginal distribution of the inputs. For each input, the conditional law of the output is a family of probability measures indexed by the input value, so defining a prior probability on this space is, in principle, not a simple task. Furthermore, the relation between the inputs and the output is complex, making it difficult to conceive an adequate parametric prior. In other words, choosing a family of distributions characterized by a finite-dimensional parameter to express an expert’s uncertainty through some prior on that parameter would seem overly restrictive, if not unreasonable. It is known that an inadequate prior may lead to a troublesome posterior (Freedman, 1965) and hinder the properties of the proposed estimators. A natural alternative is to use a Bayesian non-parametric prior in order to ensure enough flexibility to capture complex data structures. Bayesian non-parametric methods are not restricted to a finite number of parameters to represent a distribution. Generally speaking, they rely on measure-valued stochastic processes to define priors on the space of probability measures of interest. The supports of such priors are wide, ideally covering the full range of all possible distributions (see e.g. Hjort et al., 2010, for an extensive discussion of Bayesian non-parametric priors, their properties and their use).
Our first proposal can be interpreted as a Bayesian refinement of the cdf-based estimators introduced in the previous section and, as such, relies on a partition of the input space. We assume that the conditional distribution of the output is identical for every input value within a given partition set. In practice, it is enough to assume that the conditional law can be well approximated in this way. Prior uncertainty is expressed through a prior on the collection of within-bin distributions. For simplicity, we assume that such distributions are independent and identically distributed (i.i.d.), so the problem becomes that of finding a prior which assigns probability to a large enough set of probability distributions supported on the output space. We focus our attention on the Dirichlet process (DP), first introduced by Ferguson (1973) and widely studied in the BNP literature (see e.g. Hjort et al., 2010, Chapter 2, for a discussion of its properties). We therefore define, for each input and partition set, the following Bayesian non-parametric model:
where DP(α, G₀) denotes a Dirichlet process with base measure G₀ and concentration parameter α. The Dirichlet process could be replaced by a more general stick-breaking process, achieving greater flexibility at a similar computational cost (see e.g. Ishwaran and James, 2001; Pitman and Yor, 1997; Lijoi et al., 2007). In this case, the algorithms and proposed estimators would maintain a similar structure, so we focus on the Dirichlet process, without loss of generality, in order to use a notation familiar to a wider audience. With regard to the hypothesis of independence between the within-bin distributions, it could be removed through the application of recent developments in BNP methods (see Wood et al. (2011); Teh et al. (2006); Teh and Jordan (2010); Camerlenghi et al. (2017) and Camerlenghi et al. (2019)). This, however, would lead to a complication of the estimation algorithms which goes beyond the scope of this paper.
Note that this Bayesian model is coherent, in the sense that it induces a unique prior over the unconditional distribution of the output whenever the partitions are equiprobable, that is, when each partition set has probability 1/M for all inputs and bins. In fact,
Then, by marginalizing, we obtain
because the integral does not depend on i or m. In other words, a priori, the prior for the marginal simulator output distribution is also a Dirichlet process. This statement alone, however, provides no information on the probabilistic dependence of the output on the inputs. Thus, it is not meaningful, by itself, for a sensitivity analysis.
The posterior for this model, given the simulator input-output realizations, can be written as follows:
Note that the posterior of the marginal distribution of the output can be obtained as:
which may depend both on i and m. However, the marginal coherence of the model still holds, at least asymptotically. Informally, for an equiprobable partition, n_m ≈ n/M when the sample size is sufficiently large, so the posterior parameters coincide across inputs and bins, and the weighted average of the within-bin empirical distributions reduces to the empirical distribution of the output based on the full set of n observations. Thus, asymptotically, the posterior of the marginal does not depend on i or m, and it coincides with the usual posterior corresponding to the DP prior on the marginal output distribution.
The sensitivity measures we aim to estimate are functionals of the conditional and marginal distributions. The posterior means in Eqs. (11) and (13), respectively, may be proposed as Bayesian estimators of these distributions. Thus, a Bayesian point estimator of the sensitivity measure may be given by:
Unfortunately, the direct calculation of this posterior mean is impractical. Moreover, our purpose is to provide interval estimation, so as to quantify the uncertainty associated with point estimates. A way out is to sample observations (i.e., predicted realizations of the output) from the posterior distributions, in order to enrich the sample. More specifically, we have a vector of n observations from the original simulator used to estimate the marginal output distribution, but only n_m of these fall within the m-th bin and are therefore used to estimate the conditional distribution. Because n_m < n, the precision issue discussed in Section 2.1 emerges, causing a bias in the empirical estimation of the sensitivity measure. By re-sampling, we can enlarge both vectors, making them of the same size and, potentially, arbitrarily large. Our proposal here is simply to sample observations from the posterior mean distribution, thus obtaining two vectors of equal size. The intuition underlying this corresponds to the non-parametric Bayesian bootstrap (Bb) (Hjort, 1985, 1991). In our case, for each bin a synthetic sample is obtained from the posterior mean of the within-bin distribution. A value of the estimator in Eq. (2) can then be calculated through any of the methods discussed in Section 2.1, using the original sample to estimate all quantities related to the marginal distribution of the output and the extended vector to estimate all quantities related to the conditional distribution. Informally, the resulting weighted average can be seen as approximately simulated from the posterior distribution of the sensitivity measure. By repeating this procedure several times, we obtain a Bb sample of estimates. We propose the Monte Carlo average:
as a point estimator of the sensitivity measure. Approximate credibility intervals can be obtained from the empirical quantiles. Note that, because each synthetic observation is simulated from a single distribution, the posterior mean, the sampling process can be done in parallel and the method is computationally fast. However, the uncertainty is underestimated, because the additional variability captured by the posterior distribution in Eq. (10) is ignored.
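A sketch of the within-bin augmentation step under this scheme, assuming the posterior-mean distribution has the usual DP form, namely a mixture of the base measure G₀ (with weight α/(α + n_m)) and the within-bin empirical distribution; the function names and the standard-normal base measure are our illustrative choices:

```python
import numpy as np

def augment_posterior_mean(y_bin, R, alpha, g0_sample, rng):
    """Draw R synthetic outputs from the posterior-mean distribution of a
    DP(alpha, G0) model given the within-bin sample: each draw comes from
    G0 with probability alpha/(alpha + n_m) and is otherwise a resampled
    observed value (illustrative sketch)."""
    n_m = len(y_bin)
    from_base = rng.random(R) < alpha / (alpha + n_m)
    return np.where(from_base, g0_sample(R, rng), rng.choice(y_bin, size=R))

rng = np.random.default_rng(3)
y_bin = rng.normal(size=50)                # within-bin output sample
g0 = lambda R, rng: rng.normal(size=R)     # assumed base measure
aug = augment_posterior_mean(y_bin, R=500, alpha=1.0, g0_sample=g0, rng=rng)
```

Because every draw is independent given the posterior mean, the R draws per bin (and the bins themselves) can be generated in parallel, which is the computational advantage noted above.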
A more accurate alternative is to sample jointly from the Dirichlet process posterior distribution (10), instead of sampling each observation from the posterior mean. This can be done via the Pólya urn scheme (Pu) of Blackwell and MacQueen (1973). Specifically, the augmented sample is generated as a realization of the Pólya sequence:
Once again, the extended samples can be used to obtain a value of the estimator in Eq. (2) by any available method. The Monte Carlo average of a sample of estimates generated in this way is a point estimator with the same expectation as the Bayesian bootstrap estimator. However, its greater variability, which fully accounts for the uncertainty on the within-bin distributions, results in wider credibility intervals. The sampling procedure is now sequential, so the price for greater accuracy in uncertainty estimation is a slightly higher computational time.
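The Pólya urn augmentation can be sketched as a short sequential loop, where each new value is either a fresh draw from G₀ or a uniform pick among all values seen so far; again, the names and the base measure are illustrative assumptions:

```python
import numpy as np

def polya_urn_augment(y_bin, R, alpha, g0_draw, rng):
    """Sequential Blackwell-MacQueen urn: the j-th synthetic value is a
    fresh draw from G0 with probability alpha/(alpha + n_m + j) and is
    otherwise a uniform pick among all values seen so far (data plus
    previously generated values). Illustrative sketch."""
    values = list(np.asarray(y_bin, dtype=float))
    n_m = len(values)
    for j in range(R):
        if rng.random() < alpha / (alpha + n_m + j):
            values.append(float(g0_draw(rng)))     # new draw from G0
        else:
            values.append(values[rng.integers(len(values))])
    return np.array(values[n_m:])

rng = np.random.default_rng(4)
y_bin = rng.normal(size=40)            # within-bin output sample
g0 = lambda rng: rng.normal()          # assumed base measure
synthetic = polya_urn_augment(y_bin, R=300, alpha=1.0, g0_draw=g0, rng=rng)
```

Unlike the Bayesian-bootstrap scheme, the draws here are exchangeable but not independent, which is precisely what propagates the extra posterior variability into the credibility intervals.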
3.1 Simulation study
We illustrate the performance of the two classes of estimators proposed above via two toy examples for which the sensitivity measures can be calculated analytically (see Table 2). The first example is a 2-input simulator whose output follows a Beta distribution. The second example is a 21-input simulator whose inputs are correlated and fall into three groups of decreasing importance. Table 2 displays the analytical values of the sensitivity measures.
We are interested in small sample sizes, which make the estimation of global sensitivity measures challenging. The input data are generated via quasi-Monte Carlo. For each sample size, alternative choices of the partition size M are explored. The mass parameter α for the DP prior is kept fixed throughout. The base measure G₀ is chosen in correspondence with the support of the output: a Beta distribution for the first example and a Normal distribution for the second; the hyper-parameters are fixed through an empirical approach, based on the available sample. Note that this choice centres the prior for the within-bin distributions roughly around the marginal distribution of the output, thus favouring, a priori, independence between the inputs and the output, with a precision proportional to the number of observations in each partition set. In practical applications, prior information elicited from experts may be expressed through different choices of α and G₀.
We compare the Bayesian bootstrap and Pólya urn estimators to traditional point estimators for three global sensitivity measures. Results are reported in Figs. 1 and 2: the first row corresponds to the variance-based measure, the second to the density-based measure and the third to the cdf-based measure. Columns, from left to right, correspond to increasing sample sizes. Each graph is divided into three blocks displaying importance-measure estimates based on alternative choices of M. The dotted lines display the analytical values.
We first consider the left-most panel of Fig. 1(a). At the smallest sample size, the estimates vary notably with the partition size, being downward biased for small M and upward biased for large M. However, the bias is systematic, that is, it affects all estimates identically. Estimates are less affected by the partition choice as the sample size increases. Recall that in realistic applications, where an analyst would not know the true values of the sensitivity measures, the main interest is in the ordinal ranking of the inputs. In this example, the two inputs are equally important. However, looking at the point estimators alone, the analyst would rank one input as more important than the other for most combinations of sample and partition sizes. The credibility intervals display a large overlap that would prevent the analyst from ranking one input above the other: there is too much uncertainty in the estimates to draw such a conclusion. Notice also the underestimation of the uncertainty by the Bayesian bootstrap estimator. Rows (b) and (c) of Fig. 1 show a similar behaviour for the density-based and cdf-based sensitivity measures.
Fig. 2 shows the estimates for the 21-input simulator in (16). For clarity of display, instead of reporting seven sensitivity measures per group, we show numerical values for one representative of each input group. For variance-based sensitivity measures (Fig. 2(a)), all estimators correctly identify the most relevant variable for all sample and partition sizes considered. Regarding the density-based and cdf-based measures (Fig. 2(b) and Fig. 2(c), respectively), the most important input is identified in almost all combinations of sample sizes and partition selections. The ranking becomes unclear only for the most extreme partition choices, which would leave too few realizations per partition set to be reasonably chosen by the analyst. For the remaining groups of inputs, the overlapping error bands would not allow us to deem the second group more relevant than the third, with either estimation scheme. Thus, higher sample sizes would be needed for neatly ranking the second and third most important groups of simulator inputs.
Overall, Figs. 1 and 2 suggest that the proposed estimators allow the identification of the most important inputs, even at small sample sizes, and, most relevantly, they provide a measure of the uncertainty in the assessment. However, the results also display a strong dependence on the partition size . While i) as observed in Strong and Oakley (2013) (see their Fig. 1 on p. 759), the importance of selecting an optimal partition size diminishes as the sample size increases, and ii) a suboptimal partition selection has in most cases an identical impact on the sensitivity measures (i.e., the sensitivity measures of all inputs are simultaneously upward or downward biased), the analyst is still left with the question of what the optimal partition size is for a given sample. Unfortunately, there seems to be no universally optimal selection rule (see Appendix A.1). Clearly, the problem would be solved if partition-independent estimators were available. In the next section, we study two proposals of Bayesian estimators that avoid the partition choice problem.
4 Bayesian non-parametric partition-free estimation
In this section, we propose two classes of Bayesian partition-free estimators. The first is based on the use of an infinite mixture model to estimate the joint density of and . The second uses a Bayesian non-parametric regression model to estimate the conditional density of given .
4.1 Joint density-based estimation
The intuition is that all sensitivity measures under consideration can be recovered from the joint distribution of and . Therefore, in order to perform Bayesian inference on , it suffices to place a prior on the joint density . We propose to do so by means of a nonparametric mixture model (see, e.g., Ferguson, 1983; Lo, 1984). In other words, we consider to be defined as a mixture:
where is a parametric bivariate density and the mixing measure is a probability distribution over an appropriate space of parameters. The model is completed by assigning a non-parametric prior, , on . Most common choices of assign probability one to discrete distributions of the form
placing mass on locations . In the literature, particular attention has been paid to nonparametric priors admitting a stick-breaking construction (Sethuraman, 1994; Pitman, 1996), where the weights are defined as realizations of random variables satisfying
and independent of . Rich families of stick-breaking priors can be defined via different distributional assignments for the sequence (see, e.g., Ishwaran and James, 2001; Favaro et al., 2012). The main advantage over other types of construction is that the stick-breaking representation of the random weights allows for efficient simulation algorithms, especially in the context of nonparametric mixture models (Ishwaran and James, 2001; Papaspiliopoulos and Roberts, 2008; Kalli et al., 2011; Yau et al., 2011). However, the most popular stick-breaking prior remains the Dirichlet process, well known even outside the specialized community of Bayesian nonparametrics. For this reason, we will focus our analysis on DP mixtures, thus letting . Additionally, for simplicity, we choose to be a bivariate normal density, following the density estimation scheme of Escobar and West (1995). In this case, and, to simplify calculations, we select as a conjugate Normal inverse-Wishart distribution. Thus, the integral in (17) reduces to a sum and the joint density can be written as:
where the weights follow (19), with .
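To make the construction concrete, the stick-breaking weights of a DP can be simulated by truncating the infinite sequence at a finite number of components. The sketch below (function name and truncation level are illustrative choices, not part of the model specification) draws the stick-breaking variables as i.i.d. Beta(1, alpha) random variables, as appropriate for a DP with concentration parameter alpha:

```python
import numpy as np

def stick_breaking_weights(alpha, K, rng):
    """Truncated stick-breaking weights of a DP(alpha) prior.

    v_k ~ Beta(1, alpha) i.i.d.;  w_k = v_k * prod_{j<k} (1 - v_j).
    """
    v = rng.beta(1.0, alpha, size=K)
    # cumulative product of the remaining stick lengths, shifted by one
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
```

For moderate alpha the truncated weights sum to nearly one, so the discarded tail of the mixture is negligible in practice.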
Inference on this model is usually achieved via an MCMC scheme, resulting in a sample from the posterior distribution of given the . In the case of the DP mixture, the function DPdensity from the R package DPpackage provides an off-the-shelf solution. In practice, the MCMC scheme generates, at each iteration , values which, substituted into expression (20), produce a density function, . Analytical expressions for the marginal and conditional densities, and , as mixtures of normal distributions are readily available thanks to the choice of the Gaussian kernel. Clearly, it is also possible to evaluate the corresponding cumulative distribution functions. Thus, it is possible to compute the global sensitivity measures of interest, , , from their definitions (Table 1), obtaining a posterior sample of each. We denote the sample means by , and , respectively, proposing them as Bayesian point estimators. Approximate credibility intervals can be obtained from the empirical quantiles of the samples. The procedure is summarized in Section A.2.2 of Appendix A.2, to which we refer for further details.
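To fix ideas on how a posterior draw translates into a sensitivity measure, the sketch below (illustrative names; not the DPpackage implementation) computes the variance-based first-order index Var(E[Y|X])/Var(Y) from the parameters of a fitted bivariate Gaussian mixture, exploiting the closed-form component-wise conditional means and a Monte Carlo sample of X from its marginal:

```python
import numpy as np

def first_order_index_gmm(w, mu, cov, n_mc=100_000, seed=0):
    """Var(E[Y|X]) / Var(Y) for a bivariate Gaussian mixture over (X, Y):
    component k has weight w[k], mean mu[k] (2,), covariance cov[k] (2, 2)."""
    rng = np.random.default_rng(seed)
    w, mu, cov = (np.asarray(a, float) for a in (w, mu, cov))
    # total variance of Y from the mixture moments
    m_y = np.sum(w * mu[:, 1])
    var_y = np.sum(w * (cov[:, 1, 1] + mu[:, 1] ** 2)) - m_y ** 2
    # sample X from its marginal (a univariate Gaussian mixture)
    k = rng.choice(len(w), size=n_mc, p=w)
    x = rng.normal(mu[k, 0], np.sqrt(cov[k, 0, 0]))
    # posterior component probabilities w_k(x), computed stably in log space
    sx = cov[:, 0, 0]
    log_dens = (np.log(w[None, :])
                - 0.5 * (x[:, None] - mu[None, :, 0]) ** 2 / sx[None, :]
                - 0.5 * np.log(2.0 * np.pi * sx[None, :]))
    post = np.exp(log_dens - log_dens.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    # component-wise conditional means m_k(x), then E[Y|X=x]
    m_k = mu[None, :, 1] + (cov[None, :, 0, 1] / sx[None, :]) * (x[:, None] - mu[None, :, 0])
    e_y_given_x = np.sum(post * m_k, axis=1)
    return float(e_y_given_x.var() / var_y)
```

As a sanity check, for a single bivariate normal component with correlation 0.8 the index is 0.64, since the conditional expectation is then linear in x.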
It is important to observe that the known marginal distribution for does not, in general, coincide with the marginal distribution for derived from each . Thus, by using only the joint density to estimate the sensitivity measures, important information, standard in global sensitivity analysis, is wasted. In fact, inference for conditional densities based on the joint model is known to be approximate (see, e.g., Müller and Quintana, 2004). In the next section, we present an alternative estimation method which avoids this problem through a recent Bayesian approach to conditional density estimation.
4.2 Conditional density-based estimation
We now propose to use a Bayesian non-parametric regression model to do inference directly on the conditional density of , thus using all of the information contained in the to estimate the relationship between the variables and exploiting the knowledge of the marginal distribution of to obtain the marginal distribution of . The idea is to transform the non-parametric mixture of equation (20) into a mixture of conditional densities:
This time, a non-parametric prior, , is placed on the family of mixing distributions indexed by . By analogy with the DP mixture model of the previous section, a dependent DP mixture model, or DDP mixture (MacEachern, 1999, 2000), is obtained when follows a DP prior marginally for every , so that:
The random covariate-dependent weights follow the stick-breaking construction of Eq. (19), for i.i.d. random processes . In other words, for every . It has been proved that sufficient flexibility is achieved through models in which only the particles or the weights depend on the covariate (Barrientos et al., 2012), with the second option favoured due to its better predictive capabilities. Several proposals have been studied in the literature, focusing on alternative definitions of the random functional weights (e.g. Griffin and Steel, 2006; Dunson and Park, 2008; Dunson and Rodriguez, 2011).
The stick-breaking structure of the weights, which imposes a geometric decay, may be bypassed through an alternative construction allowing further flexibility:
The denominator of this expression is, again, an infinite mixture of parametric kernels, , this time with support . Each can be interpreted as the probability that a realization of comes from the -th regression component regardless of the value of , just as is the conditional probability given . Such a density regression model, where the weights in (23) follow the stick-breaking representation of (19) and the extended parameters are i.i.d. from some adequate base measure, , was proposed by Antoniano-Villalobos et al. (2014), to which we refer the reader for additional details on the role and choice of hyperparameters, as well as the algorithm used for inference.
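A minimal sketch of normalized covariate-dependent weights of this kind, assuming Gaussian kernels on the covariate (all names are illustrative, and the finite number of components stands in for the infinite mixture):

```python
import numpy as np

def covariate_dependent_weights(x, w, loc, scale):
    """Conditional weights w_k(x) proportional to w_k * N(x; loc_k, scale_k^2),
    normalized over the components k."""
    w, loc, scale = (np.asarray(a, float) for a in (w, loc, scale))
    kern = np.exp(-0.5 * ((x - loc) / scale) ** 2) / (scale * np.sqrt(2.0 * np.pi))
    num = w * kern
    return num / num.sum()
```

When x lies in the region dominated by one kernel, the corresponding regression component receives essentially all of the conditional mass, which is what makes the mixture covariate-adaptive.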
We adopt this construction to estimate the conditional density
as a mixture of linear regression models:
where is given by Eq. (23), with a DP prior. Once again, an MCMC approach is used to generate a sample, this time from the posterior distribution of . Each , together with the known marginal for , can be used to calculate (e.g. by numerical integration) a corresponding marginal for . As discussed in Section 4.1, this is all that is needed to compute the global sensitivity measures of interest, , and . These, again, allow point estimation, e.g. via the Monte Carlo averages, which we denote by , and , and interval estimation, via empirical quantiles. Section A.2.3 (Appendix A.2) summarizes the procedure and offers additional technical details.
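The numerical integration step can be sketched as follows: given an evaluator of the conditional density f(y|x) and the known marginal density of the input, the marginal of the output is obtained by quadrature over a grid covering the input's support (a sketch with trapezoidal integration; the function names are illustrative):

```python
import numpy as np

def trapezoid(f_vals, grid):
    """Trapezoidal quadrature of sampled values f_vals on grid points."""
    return float(np.sum((f_vals[1:] + f_vals[:-1]) * np.diff(grid)) / 2.0)

def marginal_y_density(y, cond_density, x_density, x_grid):
    """f_Y(y) = integral of f(y|x) * f_X(x) dx, approximated on x_grid."""
    return trapezoid(cond_density(y, x_grid) * x_density(x_grid), x_grid)
```

As a sanity check, with Y|X=x ~ N(x, 1) and X ~ N(0, 1), the marginal of Y is N(0, 2), and the quadrature recovers its density accurately on a sufficiently fine and wide grid.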
4.3 Simulation study
We examine the performance of the classes of partition-independent estimators proposed in Sections 4.1 and 4.2 via the input and input simulators introduced in Section 3.1. For both joint and conditional density-based estimation, we set a burn-in period of and a stored MCMC sample size of . Results are illustrated in Fig. 3.
Figure 3(ii) reports results for the input simulator. The Bayesian non-parametric joint estimators , and correctly recover the true values of the parameters and, as the sample size increases from to , the credibility intervals become narrower. At , there is no more overlap among the three groups of sensitivity measures, allowing the analyst to rank the inputs neatly. Regarding the Bayesian non-parametric conditional estimates, we observe that is correctly identified as the most relevant input at all sample sizes. The values , as well as , are overestimated by the estimators at . However, the bias is reduced as increases, and at the credibility intervals appear centred around the analytical values of the sensitivity measures. We also observe that, for both classes of estimates, the analytical value of the sensitivity measures falls within the credible intervals.
For this example, the joint Bayesian estimators seem to outperform their conditional counterparts. This is to be expected, because the joint Gaussian structure of the data is more easily recovered by the joint model in this case, so the loss due to ignoring the true distribution of has a lesser effect on the results. However, we can appreciate a reassuring improvement of the BNC estimates as the sample size increases. One may argue that, in a situation in which the true conditional distribution of given is unknown and possibly complex, estimation based on the conditional density model may be preferred, as more robust; the price to pay is that a larger sample size may be required, especially in high-dimensional situations. We then challenge these results with the input simulator in Eq. (15), in which the distributions are not normal.
Assume for the moment that the analyst does not know the true value of the sensitivity measures. In terms of ordinal ranking, Fig. 3(i) suggests that the two simulator inputs are equally important. The credibility intervals of and obtained with both the BNJ and BNC estimators overlap at all sample sizes and for all sensitivity measures, so that the analyst cannot deem one of them more important than the other. The performance of the two estimators is similar. However, note that the credible intervals of the joint model (BNJ) are wider than those of the conditional model, especially for variance-based sensitivity measures. As expected, for a non-normal distribution, the joint model suffers from the incorrect introduction of the marginal distribution for . We analyse this behaviour further when addressing the results for the LevelE case study.
5 Case study: LevelE simulator
| Simulator input | Distribution |
| --- | --- |
| Iodine leach rate | LU() |
| Neptunium chain leach rate | LU() |
| Iodine retention factor (1st layer) | LU() |
| Geosphere water velocity, 1st layer | U(100, 500) |
| Geosphere length, 1st layer | U(1, 5) |
| Factor to compute Neptunium retention coefficients, 1st layer | U(3, 30) |
| Water velocity in geosphere's 2nd layer | LU() |
| Length of geosphere's 2nd layer | U(50, 200) |
| Iodine retention factor (2nd layer) | U(1, 5) |
| Factor to compute Neptunium retention coefficients, 2nd layer | U(3, 30) |
| Stream flow rate | LU() |

U and LU stand for the uniform and log-uniform distributions, respectively.
In this section, we evaluate the performance of the proposed estimators through the LevelE simulator, a benchmark of sensitivity analysis. The LevelE code simulates the release of radiological dose from a nuclear waste disposal site to humans over geological eras. The code was developed in an international exercise launched by the Nuclear Energy Agency (NEA) in the mid-1980s (Nuclear Energy Agency, 1989). The goal of the exercise was the creation of a reference simulator for the prediction of flow and transport of radionuclides in actual geologic formations, against which to compare other simulators developed internationally to support the selection of radioactive waste management policies. Since then, LevelE has become the benchmark simulator of sensitivity analysis (Saltelli et al., 2000; Saltelli and Tarantola, 2002). During the international exercise, distributions for the uncertain simulator inputs were assessed (Table 3), and they have become the reference for analyses of this code. From a technical viewpoint, the LevelE code solves a set of nested partial differential equations that compute the released radiological dose in Sievert/year over a time range of to years. The detailed equations of the code are reported in Saltelli and Tarantola (2002).
Previous works have discussed the sensitivity analysis of this simulator using alternative sampling methods and sizes. For instance, Saltelli et al. (2000) employ simulator evaluations to obtain point estimates of the first- and total-order variance-based sensitivity indices. Saltelli and Tarantola (2002) employ simulator runs for the point estimation of first-order variance-based sensitivity indices, and a second experiment with runs for the point estimation of the first- and total-order sensitivity indices according to the design in Saltelli (2002) (no uncertainty in the estimates is provided). In Ratto et al. (2007), stable patterns for the estimation of variance-based sensitivity measures are obtained at a cost of about , after the input-output dataset has been used to train an emulator. In Castaings et al. (2012), designs based on substituted-columns and permuted-columns sampling are used, with convergence at about runs. Wei et al. (2014) propose a copula-based estimation method that reduces the cost to about runs for point estimates, with replicates for obtaining confidence intervals. Plischke and Borgonovo (2017) apply a given-data design for the point estimators , and using a sample of size up to , with estimates becoming stable for runs. Thus, a sample of size can be considered reflective of the state of the art for the identification of the key uncertainty drivers of LevelE.
The graphs in Fig. 4 report the Bayesian bootstrap and Pólya urn estimators, vis-à-vis the point estimators, for variance-based (graphs in row (a)), density-based (graphs in row (b)) and cdf-based (graphs in row (c)) sensitivity measures. The results show that already at the two most important simulator inputs are correctly identified. However, the estimates are sensitive to the partition size. Consider the right graph in row (a). The credibility intervals of the variance-based Pólya urn estimators with are completely overlapping. This signals that, had the analyst chosen such a partition size, the estimates would not have been meaningful. The separation becomes clearer at smaller partition sizes, with being possibly the optimal choice. Note that the estimates tend to be upward biased as the partition size increases, in agreement with our previous experiments and with previous literature findings.
We then come to the joint and conditional partition-independent Bayesian density estimators (Fig. 5).
The two graphs in row (a) display the estimates and credibility intervals for variance-based sensitivity measures (, ), the two graphs in row (b) for density-based sensitivity measures (, ) and the two graphs in row (c) for cdf-based sensitivity measures (, ). Figure 5 shows that the two key uncertainty drivers are correctly identified already at by , and , as the credibility intervals of the associated sensitivity measures separate from the credibility intervals of the remaining simulator inputs. The and estimators correctly identify the two most influential simulator inputs. However, fails to produce meaningful results for variance-based sensitivity measures at either sample size. This confirms the results of Section 4.3. The deviation from normality strongly affects the ability of BNJ to capture the conditional density of given , since much of the information contained in the data goes into the estimation of unnecessary components of the density mixture of the marginal distribution of . This reduces the estimation precision, leading to the wider credibility intervals.
Let us consider the perspective of an analyst interpreting the results overall. From the available data, the analyst is able to obtain alternative estimators for representatives of three categories of sensitivity measures, together with credibility intervals. With the exception of , the estimators communicate that uncertainty in the simulator response is mostly driven by two simulator inputs, with the remaining ones being of lower significance. Thus, the analyst can confidently report the key uncertainty drivers to the decision-maker even if the sample size is limited. At the same time, Figs. 4 and 5 communicate that the sample is not sufficient to rank the medium- and low-importance simulator inputs with confidence. If the decision-maker (modeler) wished sharper estimates of the sensitivity measures of these inputs, the analyst would need a larger sample size. This could be obtained either through additional runs of the original simulator or by fitting an emulator and, in case the fit is accurate, running the emulator instead of the original code.
6 Conclusions
This work has presented a fully Bayesian approach to the estimation of probabilistic sensitivity measures from a given sample. The proposed algorithms yield credibility intervals for the estimates without increasing the computational burden. We have studied four classes of estimators. The first two find their theoretical grounding in non-parametric Bayesian estimation based on the Dirichlet process. These estimators run in parallel with the one-sample frequentist estimators currently in use, quantify the uncertainty in the estimates and are computationally simple to implement. However, they leave the analyst with the problem of choosing the optimal partition. The conditional and unconditional non-parametric Bayesian estimators introduced next eliminate the partition selection problem, while still quantifying the uncertainty in the estimates. However, their numerical implementation needs to be carefully executed, as it requires a combination of numerical integration and MCMC. Algorithms are available, but their convergence might take longer than for the Bayesian bootstrap and Pólya urn estimators. How, then, should one proceed in a practical situation? The several numerical experiments performed by the authors (of which a subset was reported in the paper) show that the estimators succeed in identifying key uncertainty drivers at small sample sizes in most situations. A suggested approach would then be to first apply the Bayesian bootstrap and/or Pólya urn estimators to the available sample to compute an ensemble of sensitivity measures (e.g., ). If the sensitivity measure estimates and credibility intervals yield a clear picture of the simulator inputs' influence, the analysis can be considered satisfactory. However, the analyst ought to test this assertion by repeating the estimation at alternative partition sizes. If the results are strongly dependent on the partition size, the analyst can invest in the Bayesian non-parametric estimation.
If these estimators yield a clear picture of the simulator inputs' influence, the analysis is conclusive. Otherwise, a larger sample is needed and the analyst ought to plan for additional simulator runs.
While we have discussed three well-known global sensitivity measures, the paradigm presented here can be applied to the estimation of any global sensitivity measure, including, among others, the value of information and sensitivity measures based on any discrepancy between densities or cumulative distribution functions.
From a more general perspective, this work shows that combining recent advances in Bayesian non-parametric density estimation with probabilistic sensitivity analysis in DACE may lead to improvements in the estimation of global sensitivity measures. Research in Bayesian non-parametric density estimation is active in Statistics and Machine Learning, but the advances in this discipline are not directly known to the DACE community. This work represents a first systematic bridge between these two closely related areas of Statistics, and we hope it will foster further research on transferring findings in Bayesian non-parametric estimation to the field of computer experiments. At the same time, exposing Bayesian estimation to the demands coming from the probabilistic sensitivity analysis of realistic simulators may challenge the state of the art and stimulate further research in Bayesian non-parametric estimation.
Appendix A
A.1 Numerical experiments for the partition selection problem
The authors performed several numerical experiments on test cases. The results show the difficulty, and perhaps the impossibility, of finding a universally valid rule linking the partition size to the sample size . We report some of the results here.
Assume the analyst wants to find an “optimal” (in some sense) partition refining strategy, i.e., a relationship that produces the partition size that minimizes the estimation error at sample size for the pdf-based point estimators , and the cdf-based point estimator [Eqs. (3), (4) and (6)]. We focus on one estimator type for simplicity, and also because Borgonovo et al. (2016) propose a heuristic inspired by the Freedman–Diaconis histogram-partitioning rule (Freedman and Diaconis, 1981), in which .
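For reference, the classical Freedman–Diaconis rule chooses the histogram bin width h = 2·IQR·n^(−1/3), so the implied number of bins is the sample range divided by h. A sketch follows (this is the textbook histogram rule, not necessarily the exact heuristic of Borgonovo et al. (2016); the function name is illustrative):

```python
import numpy as np

def fd_partition_size(x):
    """Number of histogram bins implied by the Freedman-Diaconis rule:
    bin width h = 2 * IQR * n^(-1/3), bins = range / h (at least 1)."""
    x = np.asarray(x, float)
    n = x.size
    q75, q25 = np.percentile(x, [75, 25])
    h = 2.0 * (q75 - q25) * n ** (-1.0 / 3.0)
    return max(1, int(np.ceil((x.max() - x.min()) / h)))
```

Note that the implied partition size grows with the sample size, consistent with the intuition that finer partitions become affordable as more data accumulate.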
To evaluate the estimators’ performance at fixed values of and , we use the Root Mean Square Error (RMSE):
where is the number of bootstrap replicates and is the -th bootstrap replicate of .
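In code, the RMSE over bootstrap replicates can be computed as follows (a sketch; the variable names are illustrative):

```python
import numpy as np

def rmse(estimates, true_value):
    """Root mean square error of bootstrap-replicate estimates
    against the known true value of the sensitivity measure."""
    e = np.asarray(estimates, float)
    return float(np.sqrt(np.mean((e - true_value) ** 2)))
```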