Quantifying uncertainties on excursion sets under a Gaussian random field prior

by Dario Azzimonti et al.

We focus on the problem of estimating and quantifying uncertainties on the excursion set of a function under a limited evaluation budget. We adopt a Bayesian approach where the objective function is assumed to be a realization of a Gaussian random field. In this setting, the posterior distribution on the objective function gives rise to a posterior distribution on excursion sets. Several approaches exist to summarize the distribution of such sets based on random closed set theory. While the recently proposed Vorob'ev approach exploits analytical formulae, further notions of variability require Monte Carlo estimators relying on Gaussian random field conditional simulations. In the present work we propose a method to choose Monte Carlo simulation points and obtain quasi-realizations of the conditional field at fine designs through affine predictors. The points are chosen optimally in the sense that they minimize the posterior expected distance in measure between the excursion set and its reconstruction. The proposed method reduces the computational costs due to Monte Carlo simulations and enables the computation of quasi-realizations on fine designs in large dimensions. We apply this reconstruction approach to obtain realizations of an excursion set on a fine grid which allow us to give a new measure of uncertainty based on the distance transform of the excursion set. Finally we present a safety engineering test case where the simulation method is employed to compute a Monte Carlo estimate of a contour line.








1 Introduction

In a number of application fields where mathematical models are used to predict the behavior of some parametric system of interest, practitioners not only wish to get the response for a given set of inputs (forward problem) but are interested in recovering the set of input values leading to a prescribed value or range of values for the output (inverse problem). Such problems are especially common in cases where the response is a scalar quantifying the degree of danger or abnormality of a system, or equivalently is a score measuring some performance or pay-off. Examples include applications in reliability engineering, where the focus is often put on describing the set of parameter configurations leading to an unsafe design (mechanical engineering [16][4], nuclear criticality [9], etc.), but also in natural sciences, where conditions leading to dangerous phenomena in climatological [18] or geophysical [3] settings are of crucial interest.

In this paper we consider a setup where the forward model is a function f : X → R, defined on a compact domain X ⊂ R^d, and we are interested in the inverse problem of reconstructing the set Γ* = {x ∈ X : f(x) ∈ T}, where T denotes the range of values of interest for the output. Often the forward model is costly to evaluate and a systematic exploration of the input space X, e.g., on a fine grid, is out of reach, even in small dimensions. Therefore reconstructions of Γ* have to be performed based on a small number of evaluations, thus implying some uncertainty. Various methods are available to interpolate or approximate an objective function relying on a sample of pointwise evaluations, including polynomial approximations, splines, neural networks, and more. Here we focus on the so-called Gaussian Random Field modeling approach (also known as Gaussian Process modeling, [31]).

Gaussian Random Field (GRF) models have become very popular in engineering and further application areas to approximate, or predict, expensive-to-evaluate functions relying on a drastically limited number of observations (see, e.g., [25], [41], [30], [4], [33], [5]). In this framework we assume that f is a realization of a random field Z, which throughout the paper, unless otherwise noted, is assumed to be Gaussian with continuous paths almost surely. A major advantage of GRF models over deterministic approximation models is that, given a few observations of the function at points x_1, …, x_n ∈ X, they deliver a posterior probability distribution on functions, not only enabling predictions of the objective function at any point, but also a quantification of the associated uncertainties.
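The conditioning step behind this posterior distribution can be sketched in a few lines. The kernel, length-scale, jitter and test function below are illustrative assumptions (zero prior mean, Matérn 3/2 covariance), not the paper's settings:

```python
import numpy as np

def matern32(A, B, ell=0.3):
    """Matern 3/2 kernel on 1-d inputs (an assumed covariance choice)."""
    d = np.abs(A[:, None] - B[None, :]) * np.sqrt(3) / ell
    return (1.0 + d) * np.exp(-d)

def gp_posterior(X_obs, y_obs, X_new, kernel, jitter=1e-10):
    """Posterior mean m_n and covariance K_n of a zero-mean GRF
    conditioned on noiseless observations (jitter only for stability)."""
    K = kernel(X_obs, X_obs) + jitter * np.eye(len(X_obs))
    k_star = kernel(X_obs, X_new)
    alpha = np.linalg.solve(K, k_star)            # K^{-1} k(X_obs, X_new)
    mean = alpha.T @ y_obs                         # posterior mean m_n
    cov = kernel(X_new, X_new) - k_star.T @ alpha  # posterior covariance K_n
    return mean, cov

# toy 1-d example: condition on 6 evaluations of a hypothetical objective f
f = lambda x: np.sin(7 * x) + x
X_obs = np.linspace(0.05, 0.95, 6)
X_new = np.linspace(0.0, 1.0, 101)
m_n, K_n = gp_posterior(X_obs, f(X_obs), X_new, matern32)
```

At the evaluation points the posterior mean interpolates the data and the posterior variance collapses to (numerically) zero, which is what makes the excursion-set uncertainty concentrate away from the observations.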

The mean of the posterior field gives a plug-in estimate of the set Γ* (see, e.g., [30] and references therein); here, however, we focus on estimates based on conditional simulations. The idea of appealing to conditional simulations in the context of set estimation has already been introduced in various contexts (see, e.g., [26], [13], [6]). Instead of having a single estimate of the excursion set, as in most set estimation approaches (see, e.g., [15], [21], [32]), it is possible to get a distribution of sets. For example, Figure 1 shows some realizations of an excursion set obtained by simulating a Gaussian random field conditional on few observations of the function (black triangles). A natural question arising in practice is how to summarize this distribution by appealing to simple concepts, analogous to the notions of expectation and variance (or location and scatter) in the framework of random variables and vectors. For example, the notions of Vorob'ev expectation and Vorob'ev deviation have recently been revisited [10] in the context of excursion set estimation and uncertainty quantification with GRF models. In Sections 2 and 5 we review another random set expectation, the distance average expectation (see, e.g., [2]). This expectation provides a different uncertainty quantification estimate in the context of GRF modeling, the distance average variability. Since the distance average variability heavily relies on conditional simulations, to the best of our knowledge, it has not been used before as an uncertainty quantification technique.

Figure 1: Gaussian random field model based on few evaluations of a deterministic function. (a) True function, posterior GRF mean and true excursion set (horizontal lines at the threshold). (b) 3 realizations of the conditional GRF and the associated excursion set, obtained with simulations at all points of the fine design. (c) 3 quasi-realizations of the conditional GRF and the associated random set, generated by simulating at 30 optimally-chosen points (black diamonds) and predicting the field at the points of the fine design. Settings: Matérn covariance function; simulation points optimized with Algorithm B.

One of the key contributions of the present paper is a method to approximate conditional realizations of the random excursion set based on simulations of the underlying Gaussian random field at few points. By contrast, in the literature, Monte Carlo simulations of excursion sets are often obtained by simulating the underlying field at space filling designs, as shown in Figure 1(b). While this approach is straightforward to implement, it might be too cumbersome when fine designs are needed, especially in high dimensions. The proposed approach reduces the simulation costs by choosing few appropriate points where the field is simulated. The field's values are then approximated on the full design with a suitable affine predictor. We call a quasi-realization of the excursion set the excursion region of a simulation of the approximate field. Coming back to the example introduced in Figure 1, Figure 1(c) shows quasi-realizations of the excursion set based on simulations of the field at few optimized points, predicted at the fine design with the best linear unbiased predictor. Simulation points are chosen in an optimal way in the sense that they minimize a specific distance between the reconstructed random set and the true random set. With this method it is possible to obtain arbitrarily fine approximations of the excursion set realizations while retaining control on how close those approximations are to the true random set distribution.

The paper is divided into six sections. In Section 2 we introduce the framework and the fundamental definitions needed for our method. In Section 3 we give an explicit formula for the distance between the reconstructed random excursion set and the true random excursion set. In this section we also present a result on the consistency of the method when a dense sequence of points is considered as simulation points; the proofs are in Appendix A. Section 4 explains the computational aspects and introduces two algorithms to calculate the optimized points. In this section we also discuss the advantages and limitations of these algorithms. Section 5 presents the implementation of the distance average variability as an uncertainty quantification measure. We show that this uncertainty measure can be computed accurately with the use of quasi-realizations. In Section 6 we show how the simulation method allows us to compute estimates of the level sets in a two dimensional test case from nuclear safety engineering. The proposed method to generate accurate quasi-realizations of the excursion set from few simulations of the underlying field is pivotal in this test case as it allows us to operate on high resolution grids, thus obtaining good linear approximations of the level set curve. Another six-dimensional application is presented in Appendix B, where the distribution of the excursion volume is estimated with approximate conditional simulations generated using the proposed simulation method.

2 Preliminaries

In this section we recall two concepts from the theory of random closed sets. The first gives us the distance between the reconstructed set and the true random set, while the second leads to the definition of an uncertainty quantification measure for the excursion set estimate. See, e.g., [28], Chapter 2, for a detailed overview of the subject.

Throughout the paper, f : X → R is considered an unknown real-valued continuous objective function, where X is a compact subset of R^d. We model f with Z = (Z_x)_{x ∈ X}, a Gaussian random field with continuous paths, whose mean function and covariance kernel are denoted by m and K. The range of critical responses and the corresponding excursion set are denoted by T, a measurable element of the Borel σ-algebra of R, and Γ* = {x ∈ X : f(x) ∈ T} respectively. In most applications, T is a closed set of the form [t, ∞) for some t ∈ R. Here we solely need to assume that T is closed in R; however, we restrict ourselves to T = [t, ∞) for simplicity. Generalizations to unions of intervals are straightforward. The excursion set Γ* is closed in X because it is the pre-image of a closed set by a continuous function. Similarly, Γ = {x ∈ X : Z_x ∈ T} defines a random closed set.

2.1 Vorob’ev approach

A key element for the proposed simulation method is the notion of distance in measure. Let μ be a measure on the Borel σ-algebra of X and let A, B be Borel sets. Their distance in measure (with respect to μ) is defined as d_μ(A, B) = μ(A Δ B), where A Δ B = (A \ B) ∪ (B \ A) is the symmetric difference between A and B. Similarly, for two random closed sets Γ₁ and Γ₂, one can define a distance as follows.
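As a quick illustration, the distance in measure between two deterministic one-dimensional sets (hypothetical intervals, with μ the Lebesgue measure on [0, 1]) can be approximated on a grid:

```python
import numpy as np

# mu = Lebesgue measure on [0, 1], approximated by a regular grid
grid = np.linspace(0, 1, 1001)
A = grid >= 0.3                      # indicator of A = [0.3, 1]
B = (grid >= 0.4) & (grid <= 0.9)    # indicator of B = [0.4, 0.9]

sym_diff = A ^ B                     # indicator of the symmetric difference A Δ B
d_mu = sym_diff.mean()               # grid approximation of mu(A Δ B) = 0.2
```

The same grid discretization underlies the Monte Carlo estimators used later: for random sets one simply averages μ(Γ₁ Δ Γ₂) over realizations.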

Definition 1 (Expected distance in measure).

The expected distance in measure between two random closed sets Γ₁, Γ₂ with respect to a Borel measure μ is defined as

d_{μ,E}(Γ₁, Γ₂) = E[μ(Γ₁ Δ Γ₂)].
Several notions of expectation have been proposed for random closed sets; in particular, the Vorob'ev expectation is related to the expected distance in measure. Consider the coverage function of a random closed set Γ, defined as p_Γ : x ∈ X → P(x ∈ Γ). The Vorob'ev expectation of Γ is defined as a level set of its coverage function, Q_α = {x ∈ X : p_Γ(x) ≥ α} [42], where the level α is chosen so that μ(Q_β) ≤ E[μ(Γ)] ≤ μ(Q_α) for all β > α. It is a well-known fact [28] that, in the particular case μ(Q_α) = E[μ(Γ)], the Vorob'ev expectation minimizes the expected distance in measure to Γ among all measurable (deterministic) sets M such that μ(M) = E[μ(Γ)]. Figure 2(a) shows the Vorob'ev expectation computed for the excursion set of the GRF in the example of Figure 1. While the Vorob'ev expectation is used for its conceptual simplicity and its tractability, there exist other definitions of random closed set expectation and variability. In the following we review another notion of expectation for a random closed set: the distance average, and its related notion of variability.
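The Vorob'ev construction is easy to carry out from Monte Carlo realizations. The sketch below uses a synthetic ensemble of sets (not draws from a posterior GRF) and picks, among a grid of candidate levels, the one whose level set best matches the mean volume:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 200)

# synthetic ensemble standing in for conditional-simulation draws of the set
realizations = np.stack([
    np.sin(7 * grid + rng.normal(0, 0.3)) + rng.normal(0, 0.1) >= 0.5
    for _ in range(500)
])

p_cov = realizations.mean(axis=0)            # empirical coverage function p_Gamma
target = realizations.sum(axis=1).mean()     # E[mu(Gamma)] in grid-cell units

# choose the level alpha whose level set {p_Gamma >= alpha} best matches E[mu(Gamma)]
alphas = np.linspace(0.0, 1.0, 501)
volumes = np.array([(p_cov >= a).sum() for a in alphas])
alpha_star = alphas[np.argmin(np.abs(volumes - target))]
vorobev = p_cov >= alpha_star                # discrete Vorob'ev expectation
```

Note that only the coverage function and the mean volume are needed, which is why the Vorob'ev approach admits analytical formulae under a GRF posterior while other notions below require simulations.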

Figure 2: Realizations obtained from the GRF presented in Figure 1 and two random set expectations for this excursion set. (a) Excursion set realizations, coverage function (blue), selected level α (dashed blue), Vorob'ev expectation (red dashed line). (b) Distance function realizations, average distance function (blue) and distance average expectation (red dashed line), obtained with the Euclidean distance function.

2.2 Distance average approach

The distance function of a point x ∈ X to a set A is defined as the function d : (x, A) ∈ X × F' → d(x, A) that returns the distance between x and A, where F' is the space of all non-empty closed sets in X (see [28], pp. 179–180, for details). In general, such distance functions can take any value in R (see [2] and [28] for examples); however, here we restrict ourselves to non-negative distances. In what follows, we use the distance function d(x, A) = inf_{a ∈ A} ‖x − a‖, where ‖·‖ is the Euclidean distance in R^d.

Consider a random closed set Γ and assume that d(x, Γ) has finite expectation for all x ∈ X; the mean distance function is then d̄ : x ∈ X → E[d(x, Γ)]. Recall that it is possible to embed the space of Euclidean distance functions in L²(X). Let us further denote by ρ the L² metric, defined as ρ(u, v) = (∫_X (u(x) − v(x))² dμ(x))^{1/2}. The distance average of Γ [28] is defined as the set that has the closest distance function to d̄, with respect to the metric ρ.

Definition 2 (Distance average and distance average variability).

Let ū be the value of u ≥ 0 that minimizes the ρ-distance between the distance function of the level set {x ∈ X : d̄(x) ≤ u} and the mean distance function d̄ of Γ. If the minimum is achieved at several points, we take ū to be their infimum. The set

E_DA(Γ) = {x ∈ X : d̄(x) ≤ ū}

is called the distance average of Γ with respect to ρ. In addition, we define the distance average variability of Γ with respect to ρ as DAV(Γ) = E[ρ²(d(·, Γ), d̄)].

These notions will be at the heart of the application section, where a method is proposed for estimating discrete counterparts of the distance average and of the distance average variability relying on approximate GRF simulations. In general, the distance average and the distance average variability can be estimated only with Monte Carlo techniques, therefore we need to be able to generate realizations of Γ. By taking a standard matrix decomposition approach for GRF simulations, a straightforward way to obtain realizations of Γ is to simulate Z at a fine design G ⊂ X, e.g., a grid in moderate dimensions, with large |G|, and then to represent Γ with its discrete approximation on the design, {x ∈ G : Z_x ∈ T}. A drawback of this procedure, however, is that it may become impractical for a high resolution design, as the covariance matrix involved may rapidly become close to singular and also cumbersome, if not impossible, to store. Figure 2(b) shows the distance average computed with Monte Carlo simulations for the excursion set of the example in Figure 1. In this example the distance average expectation has a slightly larger Lebesgue measure than the Vorob'ev expectation. In general the two random set expectations yield different estimates, sometimes even resulting in a different number of connected components, as in the example introduced in Section 5.
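The whole distance-average pipeline can be sketched in one dimension with a synthetic ensemble of sets (random intervals standing in for excursion-set realizations; the brute-force distance computation and the grid of candidate thresholds are illustrative simplifications):

```python
import numpy as np

def dist_to_set(grid, ind):
    """Euclidean distance from each grid point to a discretized set (inf if empty)."""
    pts = grid[ind]
    if pts.size == 0:
        return np.full(grid.shape, np.inf)
    return np.abs(grid[:, None] - pts[None, :]).min(axis=1)

rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 400)
# synthetic excursion-set realizations: intervals of radius 0.2 with random center
reals = [np.abs(grid - rng.normal(0.5, 0.05)) <= 0.2 for _ in range(200)]

d_bar = np.mean([dist_to_set(grid, r) for r in reals], axis=0)  # mean distance function

def l2_gap(u):
    # discrete L2 distance between d(., {d_bar <= u}) and the mean distance function
    return np.sqrt(np.mean((dist_to_set(grid, d_bar <= u) - d_bar) ** 2))

us = np.linspace(0, d_bar.max(), 200)
u_star = us[np.argmin([l2_gap(u) for u in us])]
dist_avg = d_bar <= u_star          # discrete distance average
```

Each step consumes full realizations of the set, which is exactly why this expectation, unlike the Vorob'ev one, hinges on cheap conditional simulations.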

3 Main results

In this section we assume that f has been evaluated at n locations X_n = {x_1, …, x_n}, and thus we consider the GRF Z conditional on the values A_n = (f(x_1), …, f(x_n)). Following the notation for the moments of Z introduced in Section 2, we denote the mean and covariance kernel of Z conditional on A_n by m_n and K_n respectively. The proposed approach consists in replacing conditional GRF simulations at the finer design G with approximate simulations that rely on a smaller simulation design E_m = {e_1, …, e_m} ⊂ X, with m ≪ |G|. The quasi-realizations generated with this method can be used as a basis for quantifying uncertainties on Γ, for example with the distance average variability. Even though such an approach might seem somewhat heuristic at first, it is actually possible to control the effect of the approximation on the end result, as we show in this section.

3.1 A Monte-Carlo approach with predicted conditional simulations

We propose to replace Z by a simpler random field denoted by Z̃, whose simulations at any design should remain at an affordable cost. In particular, we aim at constructing Z̃ in such a way that the associated excursion set Γ̃ = {x ∈ X : Z̃_x ∈ T} is as close as possible to Γ in expected distance in measure.

Consider a set of m points in X, E_m = {e_1, …, e_m}, and denote by Z(E_m) = (Z_{e_1}, …, Z_{e_m})^T the random vector of values of Z at E_m. Conditional on A_n, this vector is multivariate Gaussian with mean m_n(E_m) and covariance matrix K_n(E_m, E_m). The essence of the proposed approach is to appeal to affine predictors of Z, i.e. to consider Z̃ of the form

Z̃_x = a(x) + b(x)^T Z(E_m),   x ∈ X,    (3)

where a : X → R is a trend function and b : X → R^m is a vector-valued function of deterministic weights. Note that usual kriging predictors are particular cases of Equation (3) with adequate choices of the functions a and b; see, for example, [14] for an extensive review. Re-interpolating conditional simulations by kriging is an idea that has already been proposed in different contexts, notably by [29] in the context of Bayesian uncertainty analysis for complex computer codes. However, while the problem of selecting the evaluation points x_1, …, x_n has been addressed in many works (see, e.g., [35, 25, 20, 30, 9] and references therein), to the best of our knowledge the derivation of optimal criteria for choosing the simulation points e_1, …, e_m has not been addressed until now, be it for excursion set estimation or for further purposes. The computational savings of the procedure are hinted at by the computational complexity of simulating the two fields: simulating Z at a design with r points with standard matrix decomposition approaches has a computational complexity of O(r³), while simulating Z̃ at the same design has a complexity of O(m³ + rm²), dominated by the decomposition of the m × m covariance matrix and the computation of the weights. Thus if m ≪ r, simulating Z̃ might bring substantial savings.

In Figure 3 we present an example of a workflow that outputs a quantification of uncertainty over the estimate of Γ* based on the proposed approach. In the following sections we provide an equivalent formulation of the expected distance in measure between Γ and Γ̃ introduced in Definition 1 and we provide methods to select optimal simulation points e_1, …, e_m.

Inputs:

  • Prior GRF Z;

  • Data A_n = (f(x_1), …, f(x_n));

  • Fine simulation design G.

Simulation step:

  • Compute the posterior GRF with mean m_n and covariance kernel K_n;

  • Obtain m simulation points E_m with Algorithm A (see Section 4.1) or Algorithm B (see Section 4.2);

  • Simulate Z at E_m.

Approximation step:

  • Predict the simulated values at G with the affine predictor Z̃ (see Section 3);

  • Obtain quasi-realizations of Γ.

Output: quasi-realizations of Γ; uncertainty quantification on Γ* (e.g. distance average variability; Sections 5, 6 and Appendix B).

Figure 3: Flow chart of proposed operations to quantify the posterior uncertainty on Γ*.

3.2 Expected distance in measure between Γ and Γ̃

In the next proposition we show an alternative formulation of the expected distance in measure between Γ and Γ̃ that exploits the assumptions on the field Z.

Proposition 3 (Distance in measure between Γ and Γ̃).

Under the previously introduced assumptions, Z and Z̃ are Gaussian random fields and Γ = {x ∈ X : Z_x ≥ t} and Γ̃ = {x ∈ X : Z̃_x ≥ t} are random closed sets.

a) Assume that T = [t, ∞) and that μ is a finite Borel measure on X; then we have

d_{μ,E}(Γ, Γ̃) = ∫_X [ P_n(Z_x ≥ t, Z̃_x < t) + P_n(Z_x < t, Z̃_x ≥ t) ] dμ(x),    (4)

where P_n denotes the conditional probability P( · | A_n).

b) Moreover, using the notation introduced in Section 3, the pair (Z_x, Z̃_x) is, conditionally on A_n, bivariate Gaussian with moments

m̃(x) = a(x) + b(x)^T m_n(E_m),   s̃²(x) = b(x)^T K_n(E_m, E_m) b(x),   c(x) = b(x)^T K_n(E_m, x),    (5)

and we get

P_n(Z_x ≥ t, Z̃_x < t) + P_n(Z_x < t, Z̃_x ≥ t) = Φ₂((m_n(x) − t, t − m̃(x)); Σ(x)) + Φ₂((t − m_n(x), m̃(x) − t); Σ(x)),    (6)

where Φ₂( · ; Σ(x)) is the cumulative distribution function of a centered bivariate Gaussian with covariance matrix

Σ(x) = ( K_n(x, x)  −c(x) ; −c(x)  s̃²(x) ).

c) Particular case: if b(x) is chosen as the vector of simple kriging weights, b(x) = K_n(E_m, E_m)^{−1} K_n(E_m, x), with a(x) = m_n(x) − b(x)^T m_n(E_m), then m̃(x) = m_n(x) and c(x) = s̃²(x), so that

P_n(Z_x ≥ t, Z̃_x < t) + P_n(Z_x < t, Z̃_x ≥ t) = Φ₂((m_n(x) − t, t − m_n(x)); Σ(x)) + Φ₂((t − m_n(x), m_n(x) − t); Σ(x)),    (7)

where s̃²(x) = K_n(x, E_m) K_n(E_m, E_m)^{−1} K_n(E_m, x).


Proof (of Proposition 3).

a) Interchanging integral and expectation by Tonelli's theorem, we get Equation (4).

b) Since the random field Z is assumed to be Gaussian, the pair (Z_x, Z̃_x) is also Gaussian conditionally on A_n, and proving the property boils down to calculating its conditional moments. We directly get E_n[Z̃_x] = a(x) + b(x)^T m_n(E_m) = m̃(x) and Var_n(Z̃_x) = b(x)^T K_n(E_m, E_m) b(x) = s̃²(x). Similarly, Cov_n(Z_x, Z̃_x) = b(x)^T K_n(E_m, x) = c(x). Equation (6) then follows by Gaussianity.

c) The expression in Equation (7) follows immediately by substituting the simple kriging weights into Equation (6). ∎
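The integrand of Equation (4) can be sanity-checked numerically. The sketch below uses assumed toy moments in the simple kriging case of part c) (common mean, Cov(Z_x, Z̃_x) = Var(Z̃_x) = s̃²) and plain Monte Carlo in place of the bivariate normal CDF:

```python
import numpy as np

rng = np.random.default_rng(3)
t, m = 0.5, 0.3      # threshold and common conditional mean (simple kriging: m~ = m_n)
s2 = 1.0             # conditional variance of Z_x (assumed value)

def rho_mc(s2_tilde, n=400_000):
    """MC estimate of P(Z_x >= t, Z~_x < t) + P(Z_x < t, Z~_x >= t)
    when Cov(Z_x, Z~_x) = Var(Z~_x) = s2_tilde (the simple kriging case)."""
    cov = np.array([[s2, s2_tilde], [s2_tilde, s2_tilde]])
    L = np.linalg.cholesky(cov)
    Z, Zt = m + L @ rng.standard_normal((2, n))
    return np.mean((Z >= t) != (Zt >= t))   # symmetric-difference event at x

rho_poor, rho_good = rho_mc(0.2), rho_mc(0.9)   # better predictor -> smaller rho
```

As expected from the proposition, the misclassification probability shrinks as s̃²(x) approaches K_n(x, x), i.e. as the predictor captures more of the conditional variance at x.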

Remark 1.

The Gaussian assumption on the random field Z in Proposition 3 can be relaxed: in part a) it suffices that the excursion sets of the fields are random closed sets, and in part b) it suffices that the field is Gaussian conditionally on A_n.

3.3 Convergence result

Let (e_i)_{i ≥ 1} be a given sequence of simulation points in X and set E_m = {e_1, …, e_m} for all m ≥ 1. Assume that Z is, conditionally on A_n, a Gaussian random field with conditional mean m_n and conditional covariance kernel K_n. Let Z̃^m_x be the best predictor of Z_x given A_n and Z(E_m). In particular, Z̃^m_x is affine in Z(E_m). Denote by σ²_m(x) = Var_n(Z_x − Z̃^m_x) the conditional variance of the prediction error at x.

Proposition 4 (Approximation consistency).

Let Γ̃_m be the random excursion set associated to Z̃^m. Then, as m → ∞, d_{μ,E}(Γ, Γ̃_m) → 0 if and only if σ²_m → 0 μ-almost everywhere.

Corollary 5.

Assume that the covariance function of Z is continuous. a) If the sequence of simulation points is dense in X, then the approximation scheme is consistent (in the sense that d_{μ,E}(Γ, Γ̃_m) → 0 when m → ∞). b) Assuming further that the covariance function of Z has the NEB property [39], the density condition is also necessary.

The proof of Proposition 4 is in Appendix A.

4 Practicalities

In this section we use the results established in Section 3 to implement a method that selects appropriate simulation points e_1, …, e_m, for a fixed m. The conditional field is simulated at E_m and approximated at the required design with ordinary kriging predictors. We present two algorithms to find a set E_m that approximately minimizes the expected distance in measure between Γ and Γ̃. The algorithms were implemented in R with the packages KrigInv [12] and DiceKriging [33].

4.1 Algorithm A: minimizing the expected distance in measure

The first proposed algorithm (Algorithm A) is a sequential minimization of the expected distance in measure d_{μ,E}(Γ, Γ̃). We exploit the characterization in Equation (4) and we assume that the underlying field is Gaussian. Under these assumptions, an optimal set of simulation points E*_m is a minimizer of the problem

E*_m ∈ argmin_{(e_1, …, e_m) ∈ X^m} d_{μ,E}(Γ, Γ̃_{E_m}).    (8)
Several classic optimization techniques have already been employed to solve similar problems for optimal designs, for example simulated annealing [34], genetic algorithms [22], or treed optimization [20]. In our case such global approaches lead to an (m × d)-dimensional problem and, since we do not rely on analytical gradients, the full optimization would be very slow. Instead we follow a greedy heuristic approach as in [35], [9] and optimize the criterion sequentially: given k points previously optimized, the (k+1)-th point is chosen as the minimizer of e → d_{μ,E}(Γ, Γ̃_{E_k ∪ {e}}), where E_k = {e_1, …, e_k}. The points optimized in previous iterations are fixed as parameters and are not modified by the current optimization.

The parameters of the bivariate normal, m̃, s̃² and c, depend on the candidate set E_k ∪ {e} and therefore need to be updated each time the optimizer requires an evaluation of the criterion at a new point. These functions rely on the kriging equations, but recomputing the full kriging model each time is numerically cumbersome. Instead we exploit the sequential nature of the algorithm and use kriging update formulas [11] to compute the new value of the criterion each time a new point is analyzed.

Numerical evaluation of the expected distance in measure poses the issue of approximating both the integral and the bivariate normal distribution appearing in the criterion of Equation (8). The numerical approximation of the bivariate normal distribution is computed with the pbivnorm package, which relies on the fast Fortran implementation of the standard bivariate normal CDF introduced in [19]. The integral is approximated via a quasi-Monte Carlo method: the integrand is evaluated at points from a space filling sequence (Sobol', [7]) and then approximated with a sample mean of the values.

The criterion is optimized with the function genoud [27], a genetic algorithm with BFGS descents that finds the optimum by evaluating the criterion over a population of points spread in the domain of reference and by evolving the population in sequential generations to achieve a better fitness. Here, the gradients are numerically approximated.
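The greedy loop of Algorithm A can be caricatured in a few lines. The sketch below is a deliberately simplified stand-in, not the paper's implementation: it works with an unconditional zero-mean unit-variance one-dimensional field, replaces the genoud optimizer with a grid search, and replaces the pbivnorm-based criterion with plain Monte Carlo using common random numbers:

```python
import numpy as np

def matern32(A, B, ell=0.15):
    d = np.abs(np.asarray(A)[:, None] - np.asarray(B)[None, :]) * np.sqrt(3) / ell
    return (1.0 + d) * np.exp(-d)

rng = np.random.default_rng(5)
grid = np.linspace(0, 1, 80)   # integration design (the paper uses a Sobol' sequence)
t = 0.4                        # excursion threshold
U, V = rng.standard_normal((2, 1500))   # common random numbers across evaluations

def criterion(E):
    """MC estimate of the expected distance in measure between Gamma and its
    reconstruction from simulation points E (zero-mean, unit-variance field)."""
    if len(E) == 0:
        s2t = np.zeros(len(grid))
    else:
        K = matern32(E, E) + 1e-8 * np.eye(len(E))
        k = matern32(E, grid)
        s2t = np.clip(np.sum(k * np.linalg.solve(K, k), axis=0), 0.0, 1.0)
    # simple kriging case: Var(Z~_x) = Cov(Z_x, Z~_x) = s2t(x)
    Zt = s2t[:, None] * U + np.sqrt(s2t * (1.0 - s2t))[:, None] * V
    return np.mean((U[None, :] >= t) != (Zt >= t))

E, crits = [], [criterion([])]
for _ in range(4):             # greedy: one simulation point at a time
    cand = [c for c in grid if c not in E]
    best = min(cand, key=lambda c: criterion(E + [c]))
    E.append(best)
    crits.append(criterion(E))
```

Each greedy step freezes the previously selected points, exactly as described above; the criterion value drops as coverage of the domain by the predictor improves.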

4.2 Algorithm B: maximizing the integrand

The evaluation of the criterion in Equation (8) can become computationally expensive because it requires a high number of evaluations of the bivariate normal CDF in order to properly estimate the integral. This consideration led us to develop a second optimization algorithm.

We follow closely the reasoning used in [35] and [4] for the development of a heuristic method to obtain the minimizer of the integrated mean squared error by maximizing the mean squared error. The characterization of the expected distance in measure in Equation (4) is the integral of the sum of two probabilities. These are non-negative continuous functions of x, as the underlying Gaussian field is continuous. The integral, therefore, is large if the integrand takes large values. Moreover, Z̃ interpolates Z at the simulation points, hence the integrand is zero at the chosen simulation points. The two previous considerations lead to a natural variation of Algorithm A where the simulation points are chosen in order to maximize the integrand.

Algorithm B is based on a sequential maximization of the integrand. Given k points previously optimized, the (k+1)-th point is the maximizer of

e → P_n(Z_e ≥ t, Z̃_e < t) + P_n(Z_e < t, Z̃_e ≥ t),

where Z̃ is based on the current set E_k. The evaluation of the objective function in Algorithm B does not require numerical integration over X, thus it requires substantially fewer evaluations of the bivariate normal CDF.

The maximization of the objective function is performed with the L-BFGS-B algorithm [8], implemented in R in the function optim. The choice of starting points for the optimization is crucial for gradient descent algorithms. In our case the objective function to maximize is strongly related to p_Γ, the coverage function of Γ: points where the coverage function takes intermediate values are reasonable starting points because they are located in regions of high uncertainty for the excursion set, thus simulations around their locations are meaningful. Before starting the maximization, this function is evaluated at a fine space filling design and, at each sequential maximization, the starting point is drawn from a distribution proportional to the computed values.
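The contrast with Algorithm A is easy to see in a simplified sketch (same illustrative assumptions as before: one-dimensional unconditional zero-mean field, Monte Carlo with common random numbers instead of the bivariate normal CDF, and an argmax over a grid instead of L-BFGS-B):

```python
import numpy as np

def matern32(A, B, ell=0.15):
    d = np.abs(np.asarray(A)[:, None] - np.asarray(B)[None, :]) * np.sqrt(3) / ell
    return (1.0 + d) * np.exp(-d)

rng = np.random.default_rng(6)
grid = np.linspace(0, 1, 200)
t = 0.4
U, V = rng.standard_normal((2, 2000))   # common random numbers

def integrand(E):
    """MC estimate of rho_k(x) = P(Z_x >= t, Z~_x < t) + P(Z_x < t, Z~_x >= t)
    on the grid, for a zero-mean unit-variance field and simple kriging weights."""
    if len(E) == 0:
        s2t = np.zeros(len(grid))
    else:
        K = matern32(E, E) + 1e-8 * np.eye(len(E))
        k = matern32(E, grid)
        s2t = np.clip(np.sum(k * np.linalg.solve(K, k), axis=0), 0.0, 1.0)
    Zt = s2t[:, None] * U + np.sqrt(s2t * (1.0 - s2t))[:, None] * V
    return np.mean((U[None, :] >= t) != (Zt >= t), axis=1)

E = []
for _ in range(5):            # Algorithm-B flavour: maximize the integrand
    rho = integrand(E)
    E.append(float(grid[int(np.argmax(rho))]))

rho_final = integrand(E)
```

Only one pass over the grid per step is needed (no integration), and the integrand vanishes at already-selected points, so the maximizer naturally moves to the remaining regions of high uncertainty.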

4.3 Comparison with non optimized simulation points

In order to quantify the importance of optimizing the simulation points and to show the differences between the two algorithms, we first present a two-dimensional analytical example.

Consider the Branin-Hoo function (see [25]) multiplied by a factor −1 and normalized so that its domain becomes [0, 1]². We are interested in estimating the excursion set Γ* = {x ∈ [0, 1]² : f(x) ≥ t} from few evaluations of f. We consider a Gaussian random field Z with constant mean function and covariance kernel chosen as a tensor product Matérn kernel [38]. The covariance kernel parameters are estimated by Maximum Likelihood with the package DiceKriging [33]. By following the GRF modeling approach we assume that f is a realization of Z and we condition Z on the evaluations. The evaluation points are chosen with a maximin Latin Hypercube Sample (LHS) design [37] and the conditional mean and covariance are computed with ordinary kriging equations.

Discrete quasi-realizations of the random set Γ on a fine grid can be obtained by selecting a few optimized simulation points and by interpolating the simulations at those locations on the fine grid. The expected distance in measure is a good indicator of how close the reconstructed set realizations are to the actual realizations. Here we compare the expected distance in measure obtained with optimization Algorithms A and B and with two space filling designs, namely a maximin Latin Hypercube Sample [37] and points from the Sobol' sequence [7].

Figure 4: Expected distance in measure for different choices of simulation points

Figure 4 shows the expected distance in measure as a function of the number of simulation points. The values were computed only at the dotted points for Algorithms A and B and at each integer for the space filling designs. The optimized designs always achieve a smaller expected distance in measure, but the advantage of accurately optimizing the choice of points decreases as the number of points increases, showing that the designs tend to become equivalent as the space is filled with points. This effect, linked to the low dimensionality of our example, reduces the advantage of optimizing the points; however, in higher dimensions a much larger number of points is required to fill the space, hence optimizing the points becomes more advantageous, as shown in Appendix B. Algorithms A and B show almost identical results for more than 100 simulation points. Even though this effect might be magnified by the low dimension of the example, it is clear that in most situations Algorithm B is preferable to Algorithm A as it achieves similar precision while remaining significantly less computationally expensive, as shown in Figure 6.

5 Application: a new variability measure using the distance transform

In this section we deal with the notions of distance average and distance average variability introduced in Section 2; more specifically, we present an application where the interpolated simulations are used to efficiently compute the distance average variability.

Let us recall that, given N realizations Γ^(1), …, Γ^(N) of the random closed set Γ, we can compute the empirical estimator of the distance average

{x ∈ X : d̄_N(x) ≤ ū_N},    (9)

where d̄_N(x) = (1/N) Σ_{i=1}^N d(x, Γ^(i)) is the empirical mean distance function and ū_N is the threshold level for d̄_N, chosen in a similar fashion as in Definition 2; see [2] for more detail. The variability of this estimate is measured with the distance average variability, which, in the empirical case, is defined as

(1/N) Σ_{i=1}^N ρ²(d(·, Γ^(i)), d̄_N),    (10)

where ρ is the L² distance.

The distance average variability is a measure of uncertainty for the excursion set under the postulated GRF model; this value is high when the distance functions associated with the realizations vary strongly, which implies that the distance average estimate of the excursion set is uncertain. This uncertainty quantification method necessitates conditional simulations of the field on a fine grid to obtain a pointwise estimate. Our simulation method generates quasi-realizations in a rather inexpensive fashion even on high resolution grids, thus making the computation of this uncertainty measure possible.

We consider here the two dimensional example presented in Section 4 and we show that by selecting few well-chosen simulation points E_m, with m ≪ |G|, and interpolating the results on G, it is possible to achieve estimates very similar to those from full design simulations. The design G considered for both the full simulations and the interpolated simulations is a regular grid. The grid design allows us to compute numerically the distance transform, the discrete approximation of the distance average, with an adaptation for R of the fast distance transform algorithm implemented in [17]. The precision of the estimate is evaluated with the distance transform variability, an approximation on the grid of the distance average variability, Equation (10).
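The grid computation can be sketched as follows; the brute-force distance transform below is a stand-in for the fast algorithm of [17], and the excursion-set realizations (randomly perturbed discs) are synthetic, not conditional simulations:

```python
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance transform on a regular grid
    (O(n^2) per image; a stand-in for the fast algorithm of [17])."""
    ii, jj = np.nonzero(mask)
    if ii.size == 0:
        return np.full(mask.shape, np.inf)
    gi, gj = np.indices(mask.shape)
    d2 = (gi[..., None] - ii) ** 2 + (gj[..., None] - jj) ** 2
    return np.sqrt(d2.min(axis=-1))

rng = np.random.default_rng(4)
gi, gj = np.indices((40, 40))
# synthetic excursion-set realizations: discs with a randomly perturbed center
masks = [(gi - ci) ** 2 + (gj - cj) ** 2 <= 8 ** 2
         for ci, cj in rng.normal(20, 2, size=(50, 2))]

dts = np.stack([distance_transform(m) for m in masks])
d_mean = dts.mean(axis=0)                  # discrete mean distance function
dav = np.mean((dts - d_mean) ** 2)         # distance-transform variability
```

The variability value aggregates, over the grid, how much the individual distance transforms deviate from their mean, mirroring Equation (10) in discrete form.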

The value of the distance transform variability is estimated with quasi-realizations of Γ obtained from simulations at few points. The conditional Gaussian random field is first simulated at a design containing few optimized points, namely 10, 20, 50, 75, 100, 120, 150 or 175 points, and then the results are interpolated on the grid with the affine predictor Z̃. Three methods to obtain simulation points are compared: Algorithms A and B presented in the previous section, and a maximin LHS design. The simulations obtained with points from each of the three methods are interpolated on the grid with the same technique. In particular, the ordinary kriging weights are first computed at each grid point and then used to obtain the value of the interpolated field from the simulated values. This procedure is numerically fast as it only requires algebraic operations.

For comparison a benchmark estimate of is obtained from realizations of stemming from conditional Gaussian simulations on the same grid of size .

Both experiments are repeated 100 times, thus obtaining an empirical distribution of , with , and of for each .

Figure 5: Comparison of the distributions of the simulated for different methods (from left to right: Algorithm A, Algorithm B and maximin LHS); the dashed horizontal line marks the median value of the benchmark () distribution.

Figure 5 shows a comparison of the distributions of obtained with full grid simulations and the distributions obtained with the interpolation over the grid of a few simulations.

The distributions of obtained from quasi-realizations all approximate the benchmark distribution well with as few as simulation points, independently of how the simulation points are selected. This effect might be enhanced by the low dimension of the example; nonetheless it suggests substantial savings in simulation costs.

Figure 6: Total CPU time to obtain all realizations of . Full grid simulations only include simulation time (dot-dashed horizontal line), while both algorithms include simulation point optimization (dashed lines) and simulation and interpolation times (solid lines).

The optimized designs (Algorithms A and B) achieve better approximations with fewer points than the maximin LHS design. In particular, the maximin LHS design suffers from high variability, while the optimized points converge quickly to a good approximation of the benchmark distribution. Interpolation of simulations at points optimized with Algorithm A results in a relative error of the median estimate, with respect to the benchmark, of around .

Algorithm B is less precise than Algorithm A for very small values of . This behavior could be influenced by the dependency of the first simulation point on the starting point of the optimization procedure. In general, the choice between Algorithm A and Algorithm B is a trade-off between computational speed and precision. For low-dimensional problems or, more generally, if only a small number of simulation points is needed, Algorithm A can be employed at acceptable computational cost. As the dimensionality increases, however, more points are needed to correctly approximate full design simulations, and Algorithm B then obtains results similar to A at a much lower computational cost. Both algorithms behave similarly when estimating this variability measure with , thus confirming that the reconstructed sets obtained from simulations at points optimizing either criterion are very similar, as already hinted at by the results on the distance in measure shown in the previous section. In most practical situations Algorithm B yields the better trade-off between computational speed and precision, provided that enough simulation points are chosen.

Figure 6 shows the total CPU time for all the simulations in the experiment for Algorithm A, Algorithm B and the full grid simulations, computed on the cluster of the University of Bern with Intel Xeon E5649 2.53GHz CPUs with 4GB RAM. The CPU times for Algorithms A and B also include the time required to optimize the simulation points. Both interpolation algorithms require less total CPU time than full grid simulations to obtain good approximations of the benchmark distribution (). If parallel computing is available, wall-clock time could be reduced significantly by parallelizing operations; in particular, the full grid simulation can be parallelized quite easily, while the optimization of the simulation points could be much harder to parallelize.

6 Test case: Estimating the length of a critical level set in a nuclear safety application

In this section we focus on a nuclear safety test case and show that our method for generating quasi-realizations can be used to obtain estimates of a level set on high-resolution grids.

The problem at hand is a nuclear criticality safety assessment. In a system involving nuclear material it is important to control the chain reaction that may be produced by neutrons, which are both the initiators and the product of the reaction. If neutrons are overproduced, the radioactive material is not safe for storage or transportation. Thus, the criticality safety of a system is often evaluated with the neutron multiplication factor (effective or ), which returns the number of neutrons produced by a collision with one neutron. This number is usually estimated with a costly simulator. If the multiplication factor is larger than one, the chain reaction is unstable; otherwise it is safe. In our case we consider a storage facility of plutonium powder, whose neutron multiplication factor is modeled by two parameters: the mass of plutonium () and the logarithm of the concentration of plutonium (). The excursion set of interest is the set of safe input parameters , where is the safety threshold, fixed here at . This test case was also presented in [9] to illustrate batch-sequential SUR strategies. The parameter space is transformed into the unit square .

(a) Contour line realizations obtained with full design simulations. The function was evaluated at the points denoted by triangles.
(b) Contour line realizations obtained from simulations at simulation points (diamonds) and predicted on the full design grid.
Figure 7: Realizations : full design simulations (a) and reinterpolated simulations at locations .

The set of interest is with , the level set of . We are interested in estimating this one-dimensional curve in . Since we only have a few evaluations of the function at points , shown in Figure 7, a direct estimate of is not accurate. We rely instead on a random field model with a Gaussian prior distribution with constant mean and a tensor product Matérn () covariance kernel. The parameters of the covariance kernel are estimated by maximum likelihood with the package DiceKriging [33]. From the posterior distribution of , conditioned on the evaluations of at , it is possible to estimate . A plug-in estimate of could be generated from the posterior mean ; however, this procedure alone does not provide a quantification of the uncertainties. Instead, from the posterior field we generate several realizations of . This procedure requires simulations of the posterior field on fine grids; however, even in a two-dimensional parameter space, the procedure is computationally burdensome. In fact, while a discretization on a grid delivers a low-quality approximation, simulations of the field at such grids are already expensive to compute. For this reason we choose to simulate the field at appropriate simulation points and to predict the full simulations with the linear interpolator introduced in (3).
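For completeness, a hedged sketch of how conditional (posterior) realizations of a Gaussian process can be drawn at a set of locations via a Cholesky factor of the posterior covariance. The kernel, nugget and jitter below are illustrative assumptions, not the paper's DiceKriging setup:

```python
import numpy as np

def stat_kernel(a, b, length=0.3):
    """Illustrative stationary kernel (squared exponential); the paper's
    model uses a tensor product Matern covariance instead."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def posterior_sims(X_obs, y_obs, X_new, kernel, n_sims, nugget=1e-8, seed=0):
    """Draw conditional realizations of a centred GP at X_new, given
    observations y_obs at X_obs, from the posterior mean and a Cholesky
    factor of the posterior covariance."""
    K_oo = kernel(X_obs, X_obs) + nugget * np.eye(len(X_obs))
    K_on = kernel(X_obs, X_new)
    sol = np.linalg.solve(K_oo, K_on)                  # K_oo^{-1} K_on
    mean = sol.T @ y_obs
    cov = kernel(X_new, X_new) - K_on.T @ sol
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(X_new)))
    z = np.random.default_rng(seed).normal(size=(len(X_new), n_sims))
    return mean[:, None] + L @ z                       # shape (n_new, n_sims)
```

The cost of the Cholesky factorization grows cubically with the number of prediction locations, which is exactly why direct simulation on fine grids becomes burdensome and reinterpolation from few simulation points pays off.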

Figure 7 shows a few realizations of discretized on a grid , obtained with simulations of the field at all points of the design (Figure 7(a)) and with simulations at simulation points chosen with Algorithm B (Figure 7(b)). The two sets of curves seem to share similar properties. The expected distance in measure between and , as introduced in Definition 1, could be used here to quantify this similarity; however, we propose instead to use the arc length of each curve, defined below, as it is easier to interpret in our application.

Figure 8: Distributions of the statistic computed over experiments for the Kolmogorov-Smirnov test with null hypothesis . Simulation points chosen with Algorithm B or with a maximin LHS design.

Consider a regular grid . For each realization, we select the points , where is small. contains all the points of the discrete design whose response is close to the target. We order the points in so that they are the vertices of a piecewise linear curve approximating . We approximate the arc length of the curve by the sum of the segments' lengths: . By computing the length of each realization we obtain a Monte Carlo estimate of the distribution of the arc length. We can now compare the distribution of arc lengths obtained from reconstructed realizations simulated at a few locations with the distribution obtained from simulations on the full grid, in order to select the number of simulation points that leads to quasi-realizations of whose length is indistinguishable from that of the full grid realizations.
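The construction above can be sketched as follows. The greedy nearest-neighbour ordering is one simple way to turn the selected points into a polyline and is an assumption on our part, as the text does not specify the ordering rule:

```python
import numpy as np

def arc_length_estimate(points, values, level, eps):
    """Arc-length estimate in the spirit of the text: keep grid points
    whose response is within `eps` of `level`, order them greedily by
    nearest neighbour into a polyline, and sum the segment lengths."""
    sel = points[np.abs(values - level) < eps]
    if len(sel) < 2:
        return 0.0
    # start from the lexicographically smallest point, then repeatedly
    # hop to the nearest unvisited point
    order = [int(np.lexsort(sel.T[::-1])[0])]
    remaining = set(range(len(sel))) - {order[0]}
    while remaining:
        last = sel[order[-1]]
        nxt = min(remaining, key=lambda i: np.sum((sel[i] - last) ** 2))
        order.append(nxt)
        remaining.remove(nxt)
    path = sel[order]
    return np.sqrt((np.diff(path, axis=0) ** 2).sum(axis=1)).sum()
```

On a field whose level set is a straight vertical line across the unit square, the estimate recovers an arc length of one.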

Let us define the random variables and as the arc length of the random set generated with full design simulations ( grid) and the arc length of the random set generated with simulations at points, respectively. We compare the distributions of and with Kolmogorov-Smirnov tests for several values of . The null hypothesis is . The distributions are approximated with simulations, either at the full grid design or at the selected points. For each , 100 repetitions of the experiment were computed, thus obtaining a distribution for the Kolmogorov-Smirnov statistic. Figure 8 shows the value of the statistic for each , where the simulation points are obtained either with Algorithm B or with a maximin LHS design. For optimized points, the statistic is below the critical value in at least of the experiments; thus it is not possible to distinguish the two length distributions at a significance level of . If the simulation points are instead chosen with a maximin LHS design, the statistic is below the critical value in at least of the experiments with simulation points, as also shown in Figure 8. This result again shows the importance of choosing optimized simulation points. The approximation of with leads to substantial savings in computational time: simulating the field at the full grid design ( points) takes seconds, while the total time for finding appropriate simulation points (with Algorithm B), simulating the field at these locations and reinterpolating the field at the full design is seconds (average over 100 experiments).
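The comparison of two arc-length distributions can be sketched with SciPy's two-sample Kolmogorov-Smirnov test. The samples below are Gaussian stand-ins, not the paper's simulated lengths, and the critical value uses the standard large-sample approximation at significance level 0.05 (an assumption, as the paper's level is not reproduced here):

```python
import numpy as np
from scipy.stats import ks_2samp

# Stand-in samples for the two arc-length distributions (illustrative).
rng = np.random.default_rng(0)
lengths_full = rng.normal(5.0, 0.3, size=6000)   # full-grid realizations
lengths_reco = rng.normal(5.0, 0.3, size=6000)   # reconstructed realizations

stat, pval = ks_2samp(lengths_full, lengths_reco)

# Large-sample two-sample critical value c(alpha) * sqrt((n + m) / (n * m)),
# here with equal sample sizes and alpha = 0.05 (c(0.05) ~ 1.36).
n = len(lengths_full)
critical = 1.36 * np.sqrt(2.0 / n)
reject = stat > critical
```

When `reject` is false, the two length distributions cannot be distinguished at the chosen level, which is the criterion used in the text to decide how many simulation points suffice.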

The expected distance in measure introduced in Section 2.1 could also be used here to quantify how far the quasi-realizations are from the full grid realizations.

7 Conclusions

In the context of excursion set estimation, simulating a conditional random field to obtain realizations of a related excursion set can be useful in many practical situations. Often, however, the random field needs to be simulated at a fine design to obtain meaningful realizations of the excursion set. Even in moderate dimensions it is often impractical to simulate at such fine designs, thus rendering good approximations hard to achieve.

In this paper we introduced a new method to simulate quasi-realizations of a conditional Gaussian random field that mitigates this problem. While the approach of predicting realizations of the field from simulations at a few locations has already been introduced in the literature, this is the first attempt to define optimal simulation points based on a specific distance between random closed sets: the expected distance in measure. We showed on several examples that the quasi-realizations method reduces the computational cost due to conditional simulations of the field; however, it does so by relying on an approximation. In particular, the optimality of the random set quasi-realizations with respect to the expected distance in measure does not necessarily guarantee that other properties of the set are correctly reproduced.

The quasi-realizations approach allowed us to study an uncertainty measure that, to the best of our knowledge, was not previously used in practice: the distance average variability. The estimation of the distance average variability is appealing when realizations of the excursion set on fine grids are computationally cheap. We showed on a two-dimensional test function that it is possible to reduce computational costs by at least one order of magnitude, thus making this technique practical. In general, the quasi-realizations approach could speed up distance average based methods such as those of [23] and [24].

We presented a test case in safety engineering where we estimated the distribution of the arc length of a level set in a two-dimensional parameter space. The level set was approximated by a piecewise linear curve whose resolution depends on the simulation design. A Monte Carlo technique based on realizations of the excursion set obtained with full design simulations is computationally too expensive at high resolutions. Reconstructed simulations, obtained by simulating the field at few well-chosen points and reinterpolating on a fine design, made this application possible. In particular, we showed that the distribution of the arc length obtained with full design simulations at a rough design, a grid , was not significantly different from the distribution obtained from reconstructed sets with simulations at well-chosen points, thus opening the way for estimates on higher-resolution grids.

Conditional realizations of the excursion set can also be used to estimate the volume of excursion; in the appendix we show how to handle this problem with Monte Carlo simulations at fine designs.

We presented two algorithms to compute optimal simulation points. While the heuristic Algorithm B is appealing for its computational cost and precision, there are a number of extensions that could lead to even more savings in computational time. For example, the optimization of the points in this work was carried out with generic black-box optimizers, but it would be possible to achieve appreciable reductions in optimization time with methods based on analytical gradients.

Appendix A Proof of Proposition 4

Let us first assume that -almost everywhere. The expected distance in measure can be rewritten, according to Equation (4), as . Since is a finite measure on  and , it is sufficient by the dominated convergence theorem to prove that -almost everywhere.

Pick any such that and . Then, for any ,

With , it follows that


Since -almost everywhere and wherever , Equation (11) proves the sufficiency part of Proposition 4.

Conversely, assume that when , or equivalently that  converges to zero in . Then  also converges to zero in measure:

For any , consider the following sets:

Then we have the following technical result.

Lemma 6.

For all and , there exists (that does not depend on , or ) such that , and therefore .

Using Lemma 6, for any  and , we have

In other words, converges to zero in measure on . As a consequence, since this is a decreasing sequence, converges to zero -almost everywhere on , and therefore -almost everywhere on . Convergence also trivially holds where .

a.1 Proof of Lemma 6

Let , , and set . Because , and in particular . Assume, without loss of generality, that and . Recall that where is the event defined by

Recall also that, since  is a Gaussian process, and are independent Gaussian variables under , with .

Let us first assume that . As a consequence, and therefore

For any , the following inclusions hold: