Optimal fidelity multi-level Monte Carlo for quantification of uncertainty in simulations of cloud cavitation collapse

05/11/2017 · by Jonas Šukys, et al. · ETH Zurich, Eawag

We quantify uncertainties in the location and magnitude of extreme pressure spots revealed by large scale multi-phase flow simulations of cloud cavitation collapse. We examine clouds containing 500 cavities and quantify uncertainties related to their initial spatial arrangement. The resulting 2000-dimensional space is sampled using a non-intrusive and computationally efficient Multi-Level Monte Carlo (MLMC) methodology. We introduce novel optimal control variate coefficients to enhance the variance reduction in MLMC. The proposed optimal fidelity MLMC leads to more than two orders of magnitude speedup when compared to standard Monte Carlo methods. We identify large uncertainties in the location and magnitude of the peak pressure pulse and present its statistical correlations and joint probability density functions with the geometrical characteristics of the cloud. Characteristic properties of the spatial cloud structure are identified as potential causes of significant uncertainties in exerted collapse pressures.


1 Introduction

Cloud cavitation collapse pertains to the inception of multiple gas cavities in a liquid, and their subsequent rapid collapse driven by an increase of the ambient pressure. Shock waves emanate from the cavities with pressure peaks up to two orders of magnitude larger than the ambient pressure [46, 12, 7]. When such shock waves impact on solid walls, they may cause material erosion, considerably reducing the performance and longevity of turbo-machinery and fuel injection engines [65, 38]. On the other hand, the destructive potential of cavitation can be harnessed for non-invasive biomedical applications [17, 8] and efficient travel in military sub-surface applications [5]. Prevalent configurations of cavitation often involve clouds of hundreds or thousands of bubbles [7]. The cloud collapse entails a location-dependent, often non-spherical, collapse of each cavity that progresses from the surface to the center of the cloud. Pressure waves emanating from collapsing cavities located near the surface of the cloud act as amplifiers to the subsequent collapses at the center of the cloud. The interaction of these pressure waves increases the destructive potential as compared to the single bubble case. Cavitation, in particular as it occurs in realistic conditions, presents a formidable challenge to experimental [6, 10, 79] and computational [1, 72] studies due to its geometric complexity and multitude of spatio-temporal scales. Blake et al. [4] studied the single bubble asymmetric collapse using a boundary integral method. Johnsen and Colonius [34] investigated the potential damage of single collapsing bubbles in both spherical and asymmetric regimes for a range of pulse peak pressures in shock-induced collapse. Lauer et al. [37] studied collapses of arrays of cavities under shock waves using the sharp interface technique of Hu et al. [31]. Recent numerical investigations of cloud cavitation involved a cluster of 125 vapor bubbles inside a pressurized liquid at 40 bar [18, 1], and a cluster of 2500 gas bubbles with an ambient liquid pressure of 100 bar [61]. Large scale numerical simulations of cloud cavitation collapse considered clouds containing 50’000 bubbles [28]. Visualizations of such a collapsing cloud and the resulting focused pressure peak at the center are reproduced in Figure 1. However, the computational demands of these simulations do not allow for further parametric studies.

Figure 1: Surface view of a collapsing cloud containing 50’000 gas cavities (left) and pressure peak at the center of the cloud (right). Outer bubbles evolve into cap-like shapes, forming an array of inwards directed velocity micro-jets. The resulting focused pressure peak amplifications of two orders of magnitude are exerted at the cloud center [28].

A challenge in modeling and quantifying cloud cavitation collapse is the dependence of critical Quantities of Interest (QoIs), such as peak pressure or collapse time, on a particular (random) cloud configuration [72] (see also Figure 2).

Figure 2: Evolution of the normalized peak pressure versus normalized time in free-field collapse of two spherical clouds containing 1’000 bubbles. Two samples are drawn from the same uniform distribution (for the positions of the cavities) and log-normal distribution (for the radii of the cavities).

The systematic study of such dependencies can be addressed through an Uncertainty Quantification (UQ) framework, recently applied in [16, 5]. In [42, 43, 44], a mathematical framework was provided for uncertain solutions of hyperbolic equations. Popular probability space discretization methods include generalized Polynomial Chaos (gPC) techniques (see [13, 39, 74, 58, 75, 24] and references therein). An alternative class of methods for quantifying uncertainty in PDEs are the stochastic collocation methods [78, 40, 77]. However, the lack of regularity of the solution with respect to the stochastic variables impedes the efficient performance of both the stochastic Galerkin and the stochastic collocation methods, in particular for high-dimensional parameter spaces. Here, we propose the development and implementation of non-intrusive Monte Carlo (MC) methods for UQ of cloud cavitation collapse. In MC methods, the governing equations are solved for a sequence of randomly generated samples, which are combined to ascertain statistical information. However, the robustness of MC methods with respect to solution regularity comes at the price of a low error convergence rate with respect to the number of samples. Drawbacks of the aforementioned numerical uncertainty quantification methods inspired the development of various multi-fidelity methods, such as Multi-Level Monte Carlo (MLMC) [21] and multi-fidelity recursive co-kriging with Gaussian–Markov random fields [56]. Further developments include multi-fidelity Gaussian Process Regression (GPR) based on co-kriging [51], and purely data-driven algorithms for linear equations using Gaussian process priors to completely circumvent the use of numerical discretization schemes [60]. MLMC methods were introduced by Heinrich for numerical quadrature [29], then pioneered by Giles for Itô SPDEs [21], and have lately been applied to various stochastic PDEs [3, 14, 23, 49]. The MLMC algorithm was also extended to hyperbolic conservation laws and to massively parallel simulations of the random multi-dimensional Euler, magneto-hydrodynamics (MHD) and shallow water equations [42, 43, 44, 45, 70, 67]. Subsequent MLMC improvements include Bayesian inference for fusing knowledge on empirical statistical estimates and deterministic convergence rates [15], an alternative multi-fidelity setting where sample recycling with optimal control variate coefficients is employed [54, 55], and multi-level Markov Chain Monte Carlo posterior sampling in Bayesian inference problems [19, 20].

The ensemble uncertainty of the clouds is parametrized by means of probability distributions of cavity radii, positions and initial internal pressures. Our goal is to perform simulations of cloud cavitation collapse with an unprecedented number of interacting cavities and full UQ on QoIs (peak pressure magnitudes, locations and cloud collapse times) in terms of means, variances, confidence intervals and even probability density functions. To the best of our knowledge, such extensive UQ analysis of (even small) clouds has not been accomplished before. We propose an improved non-intrusive MLMC method with novel optimal control variate coefficients, coupled with a state-of-the-art numerical solver for cloud cavitation collapse simulations, and provide robust statistical estimates of relevant quantities of interest, emphasizing the importance of UQ in such problems.

The paper is structured as follows: section 2 introduces the governing multi-phase equations and the finite volume method used for their solution. It also presents the optimal control variate MLMC method for the statistical sampling of QoIs. Numerical experiments quantifying such uncertainties and identifying their relations to the geometric properties of the cloud by means of joint probability density function estimates are reported in section 3. We summarize our findings in section 4.

2 Computational methods

The governing system of equations for multi-phase flows is discretized using a Finite Volume Method (FVM) that is efficiently implemented so as to take advantage of supercomputer architectures. The sampling needed to estimate statistical properties from an ensemble of evolved QoIs is performed by a novel Optimal Fidelity MLMC (OF-MLMC) method. We introduce a Python implementation, PyMLMC [69], of OF-MLMC and its embedding in an open source uncertainty quantification framework [26].

2.1 Governing equations

The dynamics of cavitating and collapsing clouds of bubbles are governed by the compressibility of the flow, with viscous dissipation and capillary effects taking place on time scales that are orders of magnitude slower. Hence, we describe cloud cavitation collapse through the Euler equations for inviscid, compressible, multi-phase flows. The system of equations, derived from the Baer-Nunziato model [2], describes the evolution of density, momentum and total energy of the multi-phase flow in the domain $D$ as [50, 57]:

$$\frac{\partial (\alpha_1 \rho_1)}{\partial t} + \nabla \cdot (\alpha_1 \rho_1 \boldsymbol{u}) = 0, \tag{1}$$
$$\frac{\partial (\alpha_2 \rho_2)}{\partial t} + \nabla \cdot (\alpha_2 \rho_2 \boldsymbol{u}) = 0, \tag{2}$$
$$\frac{\partial (\rho \boldsymbol{u})}{\partial t} + \nabla \cdot (\rho \boldsymbol{u} \otimes \boldsymbol{u} + p \boldsymbol{I}) = 0, \tag{3}$$
$$\frac{\partial E}{\partial t} + \nabla \cdot \big((E + p) \boldsymbol{u}\big) = 0, \tag{4}$$
$$\frac{\partial \alpha_2}{\partial t} + \boldsymbol{u} \cdot \nabla \alpha_2 = K \, \nabla \cdot \boldsymbol{u}, \tag{5}$$

where

$$K = \frac{\alpha_1 \alpha_2 \, (\rho_1 c_1^2 - \rho_2 c_2^2)}{\alpha_2 \rho_1 c_1^2 + \alpha_1 \rho_2 c_2^2}. \tag{6}$$

All quantities, unless otherwise stated, depend on the spatial variable $\boldsymbol{x}$ and the time variable $t$. This system comprises two mass conservation equations, one for each phase, conservation equations for momentum and total energy in single- or mixture-fluid formulation, as well as a transport equation for the volume fraction of one of the two phases with a source/sink term on the right-hand side. In (1)–(5), $\boldsymbol{u}$ denotes the velocity, $p$ the pressure, $\boldsymbol{I}$ the identity tensor, $\rho$ the (mixture) density, and $E$ the (mixture) total energy $E = \rho e + \frac{1}{2}\rho|\boldsymbol{u}|^2$, where $e$ is the (mixture) specific internal energy. Moreover, $\rho_k$, $\alpha_k$ and $c_k$ with $k = 1, 2$ are the density, volume fraction and speed of sound of the two phases. For the mixture quantities, the following additional relations hold: $\alpha_1 + \alpha_2 = 1$, $\rho = \alpha_1 \rho_1 + \alpha_2 \rho_2$, and $\rho e = \alpha_1 \rho_1 e_1 + \alpha_2 \rho_2 e_2$. We do not account for mass transfer between different phases (evaporation or condensation), so that the above equations describe a multi-component flow. The source term in (5) for homogeneous mixtures [35] describes the reduction of the gas volume fraction in a mixture of gas and liquid when a compression wave travels across the mixing region, and vice versa for an expansion wave. For a more detailed analysis of the positive influence of this term on the accuracy of the model equations, we refer to [61].

The equation system is closed by an appropriate equation of state for each of the phases. We employ the stiffened equation of state (see [41] for a review) to capture liquids and gases. This enables a simple, analytic approximation to arbitrary fluids and is expressed by

$$\rho e = \frac{p + \gamma p_c}{\gamma - 1}, \tag{7}$$

where isobaric closure is assumed [57]. The parameters $\gamma$ and $p_c$ depend on the material. For $p_c = 0$, the equation of state for ideal gases is recovered. For the simulations in this manuscript, material-specific values of $\gamma$ and $p_c$ are used for water and for air.
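As an illustration, the following minimal Python sketch evaluates the stiffened-gas closure (7) and its inverse; the numerical values of $\gamma$ and $p_c$ below are generic placeholders for water and air, not necessarily the ones used in this manuscript.

```python
import numpy as np

def pressure(rho_e, gamma, p_c):
    """Stiffened-gas EOS (7): p = (gamma - 1) * rho*e - gamma * p_c."""
    return (gamma - 1.0) * rho_e - gamma * p_c

def internal_energy_density(p, gamma, p_c):
    """Inverse relation: rho*e = (p + gamma * p_c) / (gamma - 1)."""
    return (p + gamma * p_c) / (gamma - 1.0)

# Placeholder material parameters (assumed, for illustration only).
gamma_air, pc_air = 1.4, 0.0          # p_c = 0 recovers the ideal gas law
gamma_water, pc_water = 6.12, 3.43e8  # example stiffened-gas values [Pa]

p = 101325.0                          # ambient pressure [Pa]
rho_e = internal_energy_density(p, gamma_water, pc_water)
assert np.isclose(pressure(rho_e, gamma_water, pc_water), p)
```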

2.2 Numerical method

The governing system (1)–(5) can be recast into the quasi–conservative form

$$\frac{\partial \boldsymbol{U}}{\partial t} + \frac{\partial \boldsymbol{F}(\boldsymbol{U})}{\partial x} + \frac{\partial \boldsymbol{G}(\boldsymbol{U})}{\partial y} + \frac{\partial \boldsymbol{H}(\boldsymbol{U})}{\partial z} = \boldsymbol{S}(\boldsymbol{U}), \tag{8}$$

where $\boldsymbol{U} = (\alpha_1\rho_1, \alpha_2\rho_2, \rho\boldsymbol{u}, E, \alpha_2)^{\mathsf{T}}$ is the vector of conserved (except for $\alpha_2$, which has a non-zero source term) variables and $\boldsymbol{F}$, $\boldsymbol{G}$, $\boldsymbol{H}$ are vectors of flux functions. The source term $\boldsymbol{S}$ has all elements equal to zero except the last one, which appears due to rewriting (5) in conservative form [33] and incorporating the present source term. For the system (8), an initial condition over the entire domain as well as boundary conditions at $\partial D$ for all times need to be provided to complete the full specification of the multi-phase flow problem. The method of lines is applied to obtain a semi-discrete representation of (8), where space continuous operators are approximated using the FVM for uniform structured grids. The approach yields a system of ordinary differential equations

$$\frac{\mathrm{d}\boldsymbol{V}}{\mathrm{d}t} = \mathcal{L}(\boldsymbol{V}), \tag{9}$$

where $\boldsymbol{V}$ is a vector of cell average values and $\mathcal{L}$ is a discrete operator that approximates the convective fluxes and the given sources in the governing system. The temporal discretization of (9) is obtained by an explicit third-order low-storage Runge-Kutta scheme [76]. The computation of the numerical fluxes is based on a Godunov-type scheme using the approximate HLLC Riemann solver, originally introduced for single-phase flow problems in [73]. The Riemann initial states are determined by a shock capturing third- or fifth-order accurate WENO reconstruction (see [32]). Following [33], the reconstruction is carried out using primitive variables, and the HLLC Riemann solver is adapted to (5) to prevent oscillations at the interface. The solution is advanced with a time-step size that satisfies the Courant-Friedrichs-Lewy (CFL) condition. For the coefficient weights in the Runge-Kutta stages, the values suggested in [25] are used, resulting in a total variation diminishing scheme.
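As an illustration of the time integration, the sketch below advances the semi-discrete system (9) with a three-stage low-storage (2N-register) Runge-Kutta scheme; the Williamson coefficients used here are the classical third-order choice and stand in for the TVD weights of [25], and `rhs` is a hypothetical callback assembling the WENO/HLLC fluxes and sources.

```python
import numpy as np

# Classical third-order low-storage (Williamson 2N) coefficients; these are
# illustrative stand-ins for the TVD weights referenced in [25].
A = (0.0, -5.0 / 9.0, -153.0 / 128.0)
B = (1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0)

def rk3_step(V, rhs, dt):
    """Advance cell averages V by one step of size dt, where rhs(V) is a
    (hypothetical) discrete operator L(V): convective fluxes and sources."""
    S = np.zeros_like(V)          # single extra register (low storage)
    for a, b in zip(A, B):
        S = a * S + dt * rhs(V)   # accumulate the stage residual
        V = V + b * S             # stage-wise solution update
    return V
```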

2.3 Cubism-MPCF

The FVM used for the spatio-temporal numerical discretization of the non-linear system of conservation laws in (8) is implemented in the open source software Cubism-MPCF [64, 27, 28, 61]. The applied scheme entails three computational kernels: the computation of the CFL-limited time-step size based on a global reduction, the evaluation of the approximate Riemann problem corresponding to the evaluation of the right-hand side in (9) for each time step, and the appropriate Runge-Kutta update steps. The solver is parallelized with a hybrid paradigm using the MPI and OpenMP programming models. The software is split into three abstraction layers: cluster, node, and core [27]. The realization of the Cartesian domain decomposition and the inter-rank MPI communication is accomplished on the cluster layer. On the node layer, the thread level parallelism exploits the OpenMP standard using dynamic work scheduling. Spatial blocking techniques are used to increase the locality of the data, with intra-rank ghost cells obtained from loading fractions of the surrounding blocks, and inter-rank ghost cells obtained from a global buffer provided by the cluster layer. On the core layer, the actual computational kernels are executed, exploiting data level parallelism and instruction level parallelism, which are enabled by the conversion between the array-of-structures and structure-of-arrays layouts. For the simulations reported here, the main parts of the computations were executed in mixed precision arithmetic. More details on the software design regarding the parallelization and optimization strategies used in Cubism-MPCF can be found in [27, 28, 64, 61]. Cubism-MPCF has demonstrated state-of-the-art performance in terms of floating point operations, memory traffic and storage, exhibiting almost perfect overlap of communication and computation [27, 61]. The software has been optimized to take advantage of the IBM BlueGene/Q (BGQ) and Cray XC30 platforms to simulate cavitation collapse dynamics using up to 13 trillion computational cells, with very efficient strong and weak scaling up to the full size of the MIRA (Argonne National Laboratory) and Piz Daint (Swiss National Supercomputing Center) supercomputers [64, 28, 61].

2.4 Multi-Level Monte Carlo (MLMC) method

In this section, we introduce the MLMC framework for UQ. We also present a novel and improved numerical sampling method for approximating the statistical QoI.

This study is grounded on the theoretical probabilistic framework for non-linear systems of conservation laws introduced in [43, 68]. Uncertainties in the system of conservation laws (8), such as uncertain initial data $\boldsymbol{U}_0$ at time $t = 0$ for the vector of conserved quantities $\boldsymbol{U}$, are modeled as random fields [43, 68]. They depend on the spatial and temporal variables $\boldsymbol{x}$ and $t$, as well as on the stochastic parameter $\omega$, which represents variability in the cloud configuration. For instance, for uncertain initial data, we assume

$$\boldsymbol{U}_0 = \boldsymbol{U}_0(\boldsymbol{x}, \omega), \qquad \boldsymbol{x} \in D, \; \omega \in \Omega, \tag{10}$$

i.e., at every spatial coordinate $\boldsymbol{x}$, the initial data $\boldsymbol{U}_0(\boldsymbol{x}, \cdot)$ is a random vector containing one value for each equation in (8). We further assume that $\boldsymbol{U}_0$ is such that at every spatial point the statistical moments such as expectation and variance exist and are defined by

$$\mathbb{E}[\boldsymbol{U}_0](\boldsymbol{x}) = \int_\Omega \boldsymbol{U}_0(\boldsymbol{x}, \omega) \, \mathrm{d}\mathbb{P}(\omega) \tag{11}$$

and

$$\mathbb{V}[\boldsymbol{U}_0](\boldsymbol{x}) = \mathbb{E}\big[(\boldsymbol{U}_0 - \mathbb{E}[\boldsymbol{U}_0])^2\big](\boldsymbol{x}). \tag{12}$$

Such uncertainties, for instance in the initial data $\boldsymbol{U}_0$, are propagated according to the dynamics governed by (8). Hence, the resulting evolved solution $\boldsymbol{U}(\cdot, t, \omega)$ for $t > 0$ is also a random field, called a random entropy solution; see [43, 68, 42] for the precise formulation and details.

2.4.1 The classical MLMC

The statistical moments of the QoIs $q$, such as the expectation $\mathbb{E}[q]$, are obtained through sampling by the MLMC methodology. Multi-level methods employ a hierarchy of spatial discretizations of the computational domain $D$, or, equivalently, a hierarchy of numerical deterministic solvers as described in subsection 2.2, ordered by increasing “level of accuracy” $\ell = 0, 1, \ldots, L$. On each such discretization level $\ell$, and for a given statistical realization (a “sample”) $\omega_i$, the numerical approximation of the QoI using the applied FVM will be denoted by $q_\ell(\omega_i)$.

The classical MLMC estimator [21] provides accurate and efficient estimates for statistical moments of $q_L$ in terms of a telescoping sum of numerical approximations over all levels. In particular, an approximation of $\mathbb{E}[q]$ is constructed from the telescoping sum

$$\mathbb{E}[q_L] = \mathbb{E}[q_0] + \sum_{\ell=1}^{L} \mathbb{E}[q_\ell - q_{\ell-1}] \tag{13}$$

by additionally approximating all exact mathematical expectations using Monte Carlo sampling with a level-dependent number of samples $M_\ell$ for the QoIs $q_\ell$, leading to

$$\mathbb{E}^{\mathrm{MLMC}}[q_L] = \mathbb{E}_{M_0}[q_0] + \sum_{\ell=1}^{L} \mathbb{E}_{M_\ell}[q_\ell - q_{\ell-1}], \qquad \mathbb{E}_M[q] := \frac{1}{M} \sum_{i=1}^{M} q(\omega_i). \tag{14}$$

We note that each sample $q_\ell(\omega_i)$ is obtained by solving the governing system (8) using the FVM method from subsection 2.2 with the discretization (number of mesh cells and time steps) corresponding to level $\ell$. Assuming that samples for $q_0$ and for the differences $q_\ell - q_{\ell-1}$ are drawn independently (however, the sample pairs $q_\ell(\omega_i)$ and $q_{\ell-1}(\omega_i)$ within a difference are statistically dependent), the statistical mean square error of the (here assumed to be unbiased) standard MLMC estimator is given by the sum of sample-reduced variances of differences between every two consecutive levels,

$$\varepsilon^2 = \mathbb{V}\big[\mathbb{E}^{\mathrm{MLMC}}[q_L]\big] = \sum_{\ell=0}^{L} \frac{\mathbb{V}[q_\ell - q_{\ell-1}]}{M_\ell}, \qquad q_{-1} := 0. \tag{15}$$

Assuming that $\mathbb{V}[q_\ell] \approx \mathbb{V}[q_{\ell-1}]$, the MLMC sampling error in (15) can be approximated in terms of the correlation coefficients $\rho_\ell = \mathrm{corr}(q_\ell, q_{\ell-1})$ of every two consecutive levels, i.e.,

$$\mathbb{V}[q_\ell - q_{\ell-1}] \approx 2\,(1 - \rho_\ell)\,\mathbb{V}[q_\ell]. \tag{16}$$

Note that strongly correlated QoIs on two consecutive levels lead to a significant reduction in the required number of samples $M_\ell$ on levels $\ell > 0$. The optimal number of samples $M_\ell$ for each level can be obtained using empirical or approximate asymptotic estimates of $\mathbb{V}[q_\ell - q_{\ell-1}]$ and $W_\ell$ by minimizing the total computational work for a prescribed error tolerance $\varepsilon_{\mathrm{tol}}$ such that $\varepsilon \leq \varepsilon_{\mathrm{tol}}$ in (15), as described in [21]. Here, $W_\ell$ denotes the amount of computational work needed to compute a single sample (statistical realization) on a given resolution level $\ell$. For levels $\ell > 0$, $W_\ell$ denotes the amount of computational work needed to compute a pair of such samples on resolution levels $\ell$ and $\ell - 1$. The number of samples $M_\ell$ was shown to decrease exponentially with increasing level $\ell$, and hence such reduction directly translates into large computational savings over single-level MC sampling methods, as reported in [42, 43, 44, 70, 67].
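A minimal sketch of the classical estimator (14) is given below; `solver(level, seed)` is a hypothetical stand-in for an FVM solve of (8) returning the QoI at the given resolution level, with the same seed coupling the fine and coarse members of each difference.

```python
import numpy as np

def mlmc_expectation(solver, M):
    """Classical MLMC estimator (14); M[l] is the number of samples on level l."""
    estimate = 0.0
    for level, M_l in enumerate(M):
        diffs = np.empty(M_l)
        for i in range(M_l):
            seed = 10**6 * level + i       # independent samples across levels
            q_fine = solver(level, seed)   # q_l(omega_i)
            q_coarse = solver(level - 1, seed) if level > 0 else 0.0
            diffs[i] = q_fine - q_coarse   # coupled difference q_l - q_{l-1}
        estimate += diffs.mean()           # E_{M_l}[q_l - q_{l-1}]
    return estimate
```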

2.4.2 Optimal fidelity MLMC: control variates for two-level Monte Carlo

We present a novel method for reducing the variance and further increasing the efficiency of the classical MLMC method. The backbone of MLMC is the hierarchical variance reduction technique. Assuming only two levels, a coarser level $\ell - 1$ and a finer level $\ell$, statistical moments at level $\ell$ use simulations from the coarser discretization level $\ell - 1$ as a control variate with “known” $\mathbb{E}[q_{\ell-1}]$ and a predetermined coupling coefficient $\alpha$. The coupled statistical QoI is given by:

$$\tilde{q}_\ell = q_\ell - \alpha \big(q_{\ell-1} - \mathbb{E}[q_{\ell-1}]\big). \tag{17}$$

The variance reduction that is achieved in (17) by replacing $q_\ell$ with $\tilde{q}_\ell$ depends on the correlation between $q_\ell$ and $q_{\ell-1}$,

$$\mathbb{V}[\tilde{q}_\ell] = \mathbb{V}[q_\ell] - 2\alpha \, \mathrm{cov}(q_\ell, q_{\ell-1}) + \alpha^2 \, \mathbb{V}[q_{\ell-1}]. \tag{18}$$

The expectation $\mathbb{E}[q_{\ell-1}]$ is considered “known” since it can be approximated by the sampling estimator $\mathbb{E}_{M_{\ell-1}}[q_{\ell-1}]$, which is computationally affordable due to the lower resolution of the solver on the coarser level $\ell - 1$. Statistical estimators using $M_\ell$ samples are applied to $\tilde{q}_\ell$ instead of $q_\ell$, leading to the building block of the MLMC estimator:

$$\mathbb{E}_{M_\ell}[\tilde{q}_\ell] = \mathbb{E}_{M_\ell}[q_\ell - \alpha q_{\ell-1}] + \alpha \, \mathbb{E}_{M_{\ell-1}}[q_{\ell-1}]. \tag{19}$$

In standard MLMC, the coefficient $\alpha$ in (17)–(19) is set to unity [21, 22]. This constraint limits the efficiency of the variance reduction technique. In particular, assuming that the variances at both levels are comparable, i.e. $\mathbb{V}[q_\ell] \approx \mathbb{V}[q_{\ell-1}] =: V$, the standard MLMC estimator fails to reduce the variance if the correlation coefficient $\rho$ drops below the threshold of $1/2$,

$$\mathbb{V}[q_\ell - q_{\ell-1}] = 2\,(1 - \rho)\,V \geq V \quad \Longleftrightarrow \quad \rho \leq \tfrac{1}{2}. \tag{20}$$

Hence, in standard MLMC, such moderate level correlations would not provide any variance reduction; on the contrary, the variance would be increased. To avoid this, the optimal $\alpha$ minimizing the variance of $\tilde{q}_\ell$ as in (18) can be used:

$$\alpha^* = \frac{\mathrm{cov}(q_\ell, q_{\ell-1})}{\mathbb{V}[q_{\ell-1}]}. \tag{21}$$

A consequence of (21) is that the predetermined choice of $\alpha = 1$ in (17) is optimal only under very restrictive conditions: perfectly correlated levels with correlation coefficient $\rho = 1$ and the assumption that coarser level estimates are already available (hence no computation is needed, i.e., $W_{\ell-1} = 0$). Note that for the optimal $\alpha^*$ as in (21), the variance is always reduced in (18), even for $\rho \leq 1/2$,

$$\mathbb{V}[\tilde{q}_\ell] = (1 - \rho^2)\,\mathbb{V}[q_\ell] \leq \mathbb{V}[q_\ell]. \tag{22}$$

For $W_{\ell-1} > 0$, it is necessary to obtain an estimate for $\mathbb{E}[q_{\ell-1}]$ in (17) as well. In such a case, using the independence of the estimators $\mathbb{E}_{M_\ell}[q_\ell - \alpha q_{\ell-1}]$ and $\mathbb{E}_{M_{\ell-1}}[q_{\ell-1}]$ and the central limit theorem, the variance of the two-level estimator as in (19) is given by

$$\mathbb{V}\big[\mathbb{E}_{M_\ell}[\tilde{q}_\ell]\big] = \frac{\mathbb{V}[q_\ell - \alpha q_{\ell-1}]}{M_\ell} + \alpha^2 \, \frac{\mathbb{V}[q_{\ell-1}]}{M_{\ell-1}}. \tag{23}$$

Given a constraint on the available computational budget $B$, the number of available samples $M_\ell$ on each level depends on the required computational cost $W_\ell$ at that level, i.e.,

$$M_\ell = \frac{B}{W_\ell}. \tag{24}$$

Here, we have not yet considered the non-uniform distribution of the computational budget across the levels $\ell$ and $\ell - 1$. Such a distribution will be addressed in terms of the optimal number of samples for each level in the forthcoming subsubsection 2.4.3. Plugging (24) into (23) and multiplying by the computational budget $B$, the total computational cost of variance reduction in $\tilde{q}_\ell$ is given by

$$W(\alpha) = W_\ell \, \mathbb{V}[q_\ell - \alpha q_{\ell-1}] + W_{\ell-1} \, \alpha^2 \, \mathbb{V}[q_{\ell-1}], \tag{25}$$

where the variances of $q_\ell - \alpha q_{\ell-1}$ and $q_{\ell-1}$ are weighted by the corresponding computational costs $W_\ell$ and $W_{\ell-1}$, respectively. In order to find the optimal $\alpha$, the computational variance reduction cost in (25) is minimized instead of $\mathbb{V}[\tilde{q}_\ell]$ in (18). The resulting optimal coefficient is given by

$$\alpha^* = \frac{\mathrm{cov}(q_\ell, q_{\ell-1})}{\mathbb{V}[q_{\ell-1}]\,\big(1 + W_{\ell-1}/W_\ell\big)}. \tag{26}$$

We note that (26) reduces to the standard control variate coefficient (21) for $W_{\ell-1}/W_\ell \to 0$.
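The following sketch estimates the cost-aware coefficient (26) from coupled sample pairs; `q_f` and `q_c` denote samples of $q_\ell$ and $q_{\ell-1}$, and `W_f`, `W_c` their per-sample costs.

```python
import numpy as np

def optimal_alpha(q_f, q_c, W_f, W_c):
    """Cost-aware optimal control variate coefficient, equation (26)."""
    cov = np.cov(q_f, q_c)[0, 1]   # empirical cov(q_l, q_{l-1})
    var_c = np.var(q_c, ddof=1)    # empirical V[q_{l-1}]
    return cov / (var_c * (1.0 + W_c / W_f))

# As W_c / W_f -> 0, this recovers the classical coefficient (21).
```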

2.4.3 Optimal fidelity MLMC: control variates for multi-level Monte Carlo

We generalize the concept of optimal control variate coefficients from subsubsection 2.4.2 to an arbitrary number of levels in the MLMC estimator. In particular, control variate contributions from multiple levels can be used,

$$\tilde{q}_\ell = \alpha_\ell q_\ell - \alpha_{\ell-1} q_{\ell-1}, \qquad \ell = 0, \ldots, L, \quad q_{-1} := 0, \; \alpha_{-1} := 0, \tag{27}$$

allowing for a generalization of the telescoping sum in (13) to the control variate setting introduced in (17), where the coefficient for the finest level is fixed to $\alpha_L = 1$:

$$\mathbb{E}[q_L] = \sum_{\ell=0}^{L} \mathbb{E}\big[\alpha_\ell q_\ell - \alpha_{\ell-1} q_{\ell-1}\big]. \tag{28}$$

Using the mutual statistical independence of the samples for the differences $\tilde{q}_\ell$ together with the central limit theorem, analogously as in subsubsection 2.4.2, the total computational variance reduction cost, generalizing (25) over all levels as in (27), is given by

$$W(\alpha_0, \ldots, \alpha_{L-1}) = \sum_{\ell=0}^{L} W_\ell \, \mathbb{V}\big[\alpha_\ell q_\ell - \alpha_{\ell-1} q_{\ell-1}\big]. \tag{29}$$

Minimization of $W$ pertains to solving a linear system of equations,

$$\frac{\partial W}{\partial \alpha_\ell} = 0, \qquad \ell = 0, \ldots, L-1. \tag{30}$$

Denoting $V_\ell = \mathbb{V}[q_\ell]$ and $C_\ell = \mathrm{cov}(q_\ell, q_{\ell-1})$, and using $\alpha_{-1} = 0$ and $\alpha_L = 1$, system (30) can be written in the form of a tridiagonal matrix system

$$-W_\ell C_\ell \, \alpha_{\ell-1} + (W_\ell + W_{\ell+1}) V_\ell \, \alpha_\ell - W_{\ell+1} C_{\ell+1} \, \alpha_{\ell+1} = 0, \qquad \ell = 0, \ldots, L-1, \tag{31}$$

where the term containing the fixed $\alpha_L = 1$ in the equation for $\ell = L-1$ is moved to the right-hand side.

For the special case of two levels with $L = 1$, the solution of (31) reduces to (26). Generalizing (14), the unbiased Optimal Fidelity Multi-Level Monte Carlo (OF-MLMC) estimator is then given by

$$\mathbb{E}^{\mathrm{OF\text{-}MLMC}}[q_L] = \sum_{\ell=0}^{L} \mathbb{E}_{M_\ell}\big[\alpha_\ell q_\ell - \alpha_{\ell-1} q_{\ell-1}\big], \tag{32}$$

with the optimal control variate coefficients $\alpha_\ell$ given by (31). We assume that samples for the individual differences are drawn independently (however, $q_\ell(\omega_i)$ and $q_{\ell-1}(\omega_i)$ within a difference are statistically dependent). The statistical mean square error of the OF-MLMC estimator is given by the sum of sample-reduced variances of the $\alpha$-weighted differences between every two consecutive levels,

$$\varepsilon^2 = \sum_{\ell=0}^{L} \frac{\mathbb{V}\big[\alpha_\ell q_\ell - \alpha_{\ell-1} q_{\ell-1}\big]}{M_\ell}. \tag{33}$$

Given the computational costs $W_\ell$ of evaluating a single approximation (or pair of approximations) for each level and a desired tolerance $\varepsilon_{\mathrm{tol}}$, the total computational cost of OF-MLMC can be minimized under the constraint $\varepsilon \leq \varepsilon_{\mathrm{tol}}$ by choosing the optimal number of MC samples on each level according to [48], yielding

$$M_\ell = \varepsilon_{\mathrm{tol}}^{-2} \, \sqrt{\frac{\tilde{V}_\ell}{W_\ell}} \; \sum_{k=0}^{L} \sqrt{\tilde{V}_k W_k}, \qquad \tilde{V}_\ell := \mathbb{V}\big[\alpha_\ell q_\ell - \alpha_{\ell-1} q_{\ell-1}\big]. \tag{34}$$

Alternatively, given an available total computational budget $B$ instead of a desired tolerance, the OF-MLMC error is minimized by choosing the optimal $M_\ell$ according to

$$M_\ell = B \, \frac{\sqrt{\tilde{V}_\ell / W_\ell}}{\sum_{k=0}^{L} \sqrt{\tilde{V}_k W_k}}. \tag{35}$$
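The two formulas above translate into a few lines of NumPy/SciPy; the sketch below solves the tridiagonal system (31) for the coefficients and evaluates the budget-constrained allocation (35). The array conventions (`V[l]`, `C[l]` with `C[0]` unused, `W[l]`) are assumptions of this sketch.

```python
import numpy as np
from scipy.linalg import solve_banded

def optimal_coefficients(V, C, W):
    """Solve the tridiagonal system (31) for alpha_0, ..., alpha_{L-1}.

    V[l] = V[q_l]; C[l] = cov(q_l, q_{l-1}) (C[0] is unused); W[l] = cost per
    sample (pair) on level l. The fixed alpha_L = 1 enters the right-hand
    side of the last equation."""
    L = len(V) - 1
    ab = np.zeros((3, L))                        # banded storage for solve_banded
    rhs = np.zeros(L)
    for i in range(L):
        ab[1, i] = (W[i] + W[i + 1]) * V[i]      # diagonal entry
        if i >= 1:
            ab[2, i - 1] = -W[i] * C[i]          # sub-diagonal entry a[i, i-1]
        if i <= L - 2:
            ab[0, i + 1] = -W[i + 1] * C[i + 1]  # super-diagonal entry a[i, i+1]
    rhs[L - 1] = W[L] * C[L]
    return np.append(solve_banded((1, 1), ab, rhs), 1.0)

def optimal_samples(V_diff, W, budget):
    """Budget-constrained sample allocation (35): M_l ~ sqrt(V_diff[l]/W[l])."""
    V_diff, W = np.asarray(V_diff, float), np.asarray(W, float)
    M = budget * np.sqrt(V_diff / W) / np.sum(np.sqrt(V_diff * W))
    return np.maximum(1, np.floor(M)).astype(int)
```

For two levels ($L = 1$), `optimal_coefficients` returns exactly the closed form (26), which serves as a consistency check of the reconstruction.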

2.4.4 Adaptive optimal fidelity MLMC algorithm

The OF-MLMC algorithm proceeds iteratively, with each iteration improving the accuracy of the estimated statistics for the quantities of interest, such as the expectation $\mathbb{E}[q]$. Each iteration also improves the accuracy of the auxiliary parameters, such as $V_\ell$, $C_\ell$, $\tilde{V}_\ell$, $W_\ell$, and the optimality of the number of samples $M_\ell$ for each level.
A single iteration of the algorithm consists of the following 8 steps.

  1. Warm-up: Begin with a level-dependent number of warm-up samples,

    $$M_\ell = \lceil M_0 \, W_0 / W_\ell \rceil. \tag{36}$$

    The choice of $M_\ell$ as in (36) prevents efficiency deterioration of the final OF-MLMC estimator by ensuring that the total computational budget for the warm-up iteration remains bounded; at the same time, it prescribes sufficiently many samples on the coarser levels, where $M_\ell$ is expected to be large. In our particular application, the computational work for each level scales as $W_\ell \propto 2^{4\ell}$ (a mesh refinement factor of two in three spatial dimensions and in time), and hence the number of warm-up samples is given by $M_\ell = \lceil M_0 \, 2^{-4\ell} \rceil$ for $\ell = 0, \ldots, L$. We note that a constant (level-independent) number of warm-up samples can be very inefficient [48, 15].

  2. Solver: Evaluate the approximations $q_\ell(\omega_i)$ for each level $\ell$ and sample $i = 1, \ldots, M_\ell$.

  3. Indicators: Using $q_\ell(\omega_i)$, estimate $V_\ell$, $C_\ell$, and $W_\ell$ for $\ell = 0, \ldots, L$. Optionally, empirical estimates of $\mathbb{V}[q_\ell - q_{\ell-1}]$ could be used within a Bayesian inference framework to fit an exponential decay model for these variances w.r.t. the levels $\ell$. Assuming Gaussian errors, this reduces to a least-squares line fit to the natural logarithm of the indicators.

  4. Coefficients: Compute the control variate coefficients $\alpha_\ell$ from the estimated $V_\ell$, $C_\ell$ and $W_\ell$ using (31).

  5. Errors: Using $\alpha_\ell$ and the estimated (co)variances, compute the $\alpha$-weighted variances $\tilde{V}_\ell$ and the total sampling error $\varepsilon$ as in (33).

  6. Estimator: If the required tolerance is reached, i.e., $\varepsilon \leq \varepsilon_{\mathrm{tol}}$, or if the prescribed computational budget $B$ is spent, then evaluate the OF-MLMC estimator (32) and terminate the algorithm. Otherwise, proceed to the optimization step.

  7. Optimization: Compute the optimal required number of samples $M_\ell^*$ from $\tilde{V}_\ell$ and $W_\ell$ using either (34) or (35), respectively.
    Remark. If we obtain $M_\ell^* < M_\ell$ for some level $\ell$, we keep the already computed samples, i.e., we set $M_\ell^* = M_\ell$. In order to adjust for such a constraint in the optimization problem, we subtract the corresponding sampling error from the required tolerance $\varepsilon_{\mathrm{tol}}$, or subtract the corresponding amount of computational budget from $B$, respectively. Afterwards, the numbers of samples for the remaining levels (where $M_\ell^* = M_\ell$ was not enforced) are re-optimized according to either (34) or (35), respectively. We repeat this procedure until $M_\ell^* \geq M_\ell$ is satisfied for all levels $\ell$; a sketch of this re-optimization is given after the list.

  8. Iterations: Go back to step (2) and continue the algorithm with the updated number of samples $M_\ell$.

If the empirical estimates in steps (3)–(5) of the above adaptive OF-MLMC algorithm are sufficiently accurate, it will terminate after two iterations: the initial warm-up iteration and one additional iteration with already optimal $M_\ell$ and $\alpha_\ell$. A more detailed discussion of special cases within the OF-MLMC algorithm, alternative approaches, related work and possible extensions is provided in Appendix A.
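A sketch of the constrained re-optimization in the remark of step (7) is given below, for the budget-constrained variant (35): levels whose optimum falls below the already computed sample count are frozen, their cost is removed from the budget, and the remaining levels are re-optimized.

```python
import numpy as np

def constrained_samples(V_diff, W, budget, M_done):
    """Re-optimize sample numbers subject to M_l >= M_done[l] (already computed)."""
    V_diff, W = np.asarray(V_diff, float), np.asarray(W, float)
    M = np.asarray(M_done, float).copy()
    free = np.ones(len(W), dtype=bool)                    # levels still optimized
    while True:
        remaining = budget - np.sum(M[~free] * W[~free])  # budget left for free levels
        M_opt = (remaining * np.sqrt(V_diff[free] / W[free])
                 / np.sum(np.sqrt(V_diff[free] * W[free])))
        frozen = M_opt < M[free]                          # optimum below computed count
        if not frozen.any():
            M[free] = np.ceil(M_opt)
            return M.astype(int)
        free[np.where(free)[0][frozen]] = False           # keep those M_l, re-optimize rest
```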

2.5 PyMLMC

The OF-MLMC algorithm is distributed as an open source library, PyMLMC [69]. A diagram of the software components is shown in Figure 3. PyMLMC provides a modular framework for the native execution of deterministic solvers in their respective “sandbox” directories. This allows maximum flexibility for programming languages, distributed and/or shared memory architectures, and support for many-core accelerators. Due to the lack of communication among such sandboxed executions for each sample, the load balancing across samples can be relayed to the submission system (e.g. Slurm, LSF, LoadLeveler, Cobalt) of the compute cluster. Nested (across and within samples) parallelism is used, where a few large parallel jobs are submitted for the fine levels, and, in parallel, many small (possibly even serial) jobs are submitted for the coarse levels. To increase the efficiency and reduce the stress on submission systems, job batching (submitting a single job that computes multiple samples subsequently), job merging (submitting a single job that computes multiple samples concurrently), or a combination of both is implemented. Once all samples (or at least some of them) are computed, statistical estimators are constructed as a post-processing step using the NumPy and SciPy libraries. The “sandboxing” framework enables any additionally required statistical estimates or QoIs to be evaluated at a later stage without the need to re-execute any of the computationally expensive sampling runs. The amount of required disk space in multi-level methods scales linearly w.r.t. the amount of required computational budget. In particular, the total required disk space for all samples on all levels is of the same order as that of a single statistical estimator on the full three-dimensional domain. Hence, it is highly advantageous to keep all computed samples for the aforementioned post-processing flexibility.

Figure 3: Scheme of the OF-MLMC implementation library PyMLMC.

In the present work (see section 3), we verified the efficiency of the (nested) parallelization of OF-MLMC coupled with the Cubism-MPCF solver on the entire MIRA supercomputer (Argonne National Laboratory), consisting of half a million cores. We note that large (exa-)scale simulations on massively parallel computing platforms are subject to processor failures at run-time [11]. Exploiting the natural fault tolerance in OF-MLMC-FVM due to independent sampling, a Fault Tolerant MLMC (FT-MLMC) method was implemented in [53] and was shown to perform in agreement with theoretical analysis in the presence of simulated, compound Poisson distributed, random hard failures of compute cores. Such FT-mechanisms are also available in PyMLMC, and have successfully excluded one corrupted sample on the coarsest level in the simulations reported in section 3.

3 Numerical simulations and analysis

The initialization of the cavities employs experimental findings indicating a log-normal distribution for their radii [47], whereas the position vectors are generated according to a uniform distribution, as there is no prior knowledge.

3.1 Spherical cloud of 500 gas cavities with log-normally distributed radii

For a cubic domain, we consider a cloud of 500 bubbles located at the center of the domain. The log-normal distribution for the radii of the cavities is clipped so as to contain bubbles only within a prescribed range of radii. Both the resulting cloud gas volume content (w.r.t. the volume of the sphere with the cloud radius) and the resulting cloud interaction parameter $\beta$, defined in terms of the cloud gas volume fraction, the cloud radius and the average cavity radius (refer to [9] for a derivation), depend on a statistical realization of the random cloud. An illustration of the cloud geometry is shown in Figure 4.

The cloud is initially at pressure equilibrium with the surrounding water phase. Throughout the entire domain, constant initial densities are prescribed for the gas and the liquid phases. Starting a short distance away from the surface of the cloud, there is a smooth pressure increase towards the prescribed ambient pressure, following the setup proposed in [71] and resulting in a large pressure ratio between the ambient liquid and the cavities. At the boundaries, non-reflecting, characteristic-based boundary conditions are applied, together with a penalty term for the prescribed far-field pressure [59]. A single statistical realization (sample) of the pressure field and cavity interfaces across a two-dimensional slice through the cloud center, computed using Cubism-MPCF, is depicted in Figure 4. Cavities are observed to collapse inwards, emitting pressure waves that focus near the center of the cloud.

Figure 4: Single statistical realization (sample) of the pressure field and cavity interfaces in a two-dimensional slice through the cloud center at the initial (left) and at the collapse (right) stages. Cavities collapse inwards, emitting pressure waves that focus near the center of the cloud. There is a two orders of magnitude difference in pressures between the initial and collapse stages.

We consider four levels of spatial and temporal resolutions: the coarsest mesh, two intermediate meshes of successively refined resolution, and the finest mesh. The time-step size decreases according to a prescribed CFL condition, so that the number of time steps grows correspondingly from the coarsest to the finest level.

We note that the number of uncertainty sources in this simulation is very large: for each realization of a cloud, random three-dimensional spatial coordinates together with a random positive radius are needed for each of the 500 cavities, comprising in total 2’000 independent sources of uncertainty.

For each statistical sample of a collapsing cloud configuration and on each resolution level, simulations were performed over a fixed horizon of physical time. Depending on the random configuration of the cloud, the main collapse occurred at an intermediate time within this horizon, followed by rebound and relaxation stages. The obtained results are discussed in the following sections.

3.2 Performance of the OF-MLMC

We quantify the computational gains of the OF-MLMC method by comparing it to standard MLMC and plain MC sampling methods. The chosen quantity of interest $q$ for (28) is the pressure as sampled by a sensor placed at the center of the cloud and emulated as

$$p_s(t, \omega) = \frac{1}{|B|} \int_{B} p(\boldsymbol{x}, t, \omega) \, \mathrm{d}\boldsymbol{x}, \tag{37}$$

where the pressure is averaged over a sphere $B$ around the center of the cloud, located at $\boldsymbol{x}_c$ with radius $r$,

$$B = \{\boldsymbol{x} \in D : |\boldsymbol{x} - \boldsymbol{x}_c| \leq r\}. \tag{38}$$
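A discrete analogue of the sensor (37)–(38) can be emulated directly on the cell-averaged pressure field; in the sketch below, `p` is a hypothetical 3-D array of cell averages on a uniform mesh with spacing `h`.

```python
import numpy as np

def sensor_pressure(p, x_c, r, h):
    """Average the pressure over the discrete ball of radius r centered at
    x_c, a discrete analogue of the sphere average (37)-(38)."""
    axes = [h * (np.arange(n) + 0.5) for n in p.shape]  # cell-center coordinates
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    ball = ((X - x_c[0])**2 + (Y - x_c[1])**2 + (Z - x_c[2])**2) <= r**2
    return p[ball].mean()
```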

For this particular choice of the QoI $q$, the estimated correlations $\rho_\ell$ between levels, implicitly used in (31), and the resulting optimal control variate coefficients $\alpha_\ell$ from (31) are depicted in Figure 5. Due to relatively high correlations between resolution levels, the optimal control variate coefficients exhibit only moderate deviations from unity, with the largest being at level 1 with a deviation of 30%.

Figure 5: Estimated correlations between levels (left) and the resulting optimal control variate coefficients (right) as in (31). In left plot, circles depict measurements of correlations with the associated error bars, and solid line depicts the resulting inferred values. Relatively high correlations and moderate deviations (up to 30%) in optimal control variate coefficients are observed.

Estimated variances of the level differences, required in (31), and sampling errors for each level, computed as in (33), are depicted in Figure 6. The variances of the differences decrease on finer levels of resolution, which requires a smaller number of MC samples in order to reduce the statistical error on finer resolution levels, where sampling entails computationally very expensive numerical simulations. Measurements of the variances of the differences are plotted as circles, with the associated error bars estimated from the variance of the estimator and the number of samples used in the warm-up stage. These measurements, together with the Gaussian error model implied by the error bars, are used within a Bayesian inference framework to fit an exponential decay model for the variances w.r.t. the “difference” levels $\ell$. The solid line depicts the resulting inferred values, which are later used in (31), whereas the dashed line depicts the adjusted values from (33), where the optimal control variate coefficients are applied. The resulting statistical errors at both MLMC iterations (warm-up and final) decrease w.r.t. the increasing resolution level. At the final MLMC iteration, the errors are significantly decreased on all levels when compared to the warm-up iteration, leading to a small total statistical error estimate.

Figure 6: Estimated variances of the differences (left) required in (31) and the resulting estimated sampling errors (right) defined in (33) on each level. In the left plot, circles depict measurements with the associated error bars, the solid line depicts the resulting inferred values, and the dashed line depicts the adjusted values from (33), where the optimal control variate coefficients are applied. In the right plot, the brown line indicates the estimated errors on each level after the initial MLMC warm-up iteration, whereas the orange line and the corresponding region indicate the estimated error improvements after the final MLMC iteration, which significantly reduces the total statistical error.

For the optimization of the sample numbers on each level, a prescribed budget of 16 million core hours was used and the optimal number of samples was determined by (35). The estimated numbers of samples for the warm-up and final iterations of the OF-MLMC algorithm are depicted in Figure 7, together with the resulting estimated computational budget on each level. We observe that a significantly larger number of samples is required on the coarser levels of resolution owing to a strong reduction in the level difference variances, which are highest at the coarsest resolution levels. However, the required computational budget is comparable across all levels; such a multi-level approach achieves a significant (more than two orders of magnitude) reduction in the statistical error (i.e., in the variance of the statistical estimators), while at the same time keeping the deterministic error (bias) small, as it is controlled by the resolution of the finest level.

Figure 7: Estimated number of samples for the warm-up and final iterations of the OF-MLMC algorithm (left) and the resulting estimated computational budget (right) on each level. Brown lines and the corresponding regions indicate the number of samples and the computational budget during the initial MLMC warm-up iteration, whereas green lines indicate the totals of both quantities at the final MLMC iteration, with green regions indicating the differences between these two iterations. We observe that despite thousands of samples being required on the coarser levels of resolution, the corresponding required computational budget is comparable among all levels.

In Table 1, a comparison of the MC, MLMC, and OF-MLMC methods and the estimated computational speedups is provided for a fixed target statistical error. Note that the number of samples $M^{\mathrm{MC}}$ at the same resolution as the finest level in (OF-)MLMC and the resulting computational budget $B^{\mathrm{MC}}$ required for plain MC simulations are estimated by

$$M^{\mathrm{MC}} = \varepsilon^{-2} \, \mathbb{V}[q_L], \qquad B^{\mathrm{MC}} = M^{\mathrm{MC}} \, W_L. \tag{39}$$

OF-MLMC is estimated to be more than two orders of magnitude faster than the plain MC method, and more than three times faster than the standard MLMC method without optimized control variate coefficients. The overall computational budget of OF-MLMC was only approximately 8 times larger than that of a single FVM solve on the finest resolution level.

Method   | Number of samples per level | Budget           | Speedup
OF-MLMC  | 6400, 384, 40, 2            | 16.6 M CPU hours | 176.8
MLMC     | 4352, 258, 32, 3            | 50 M CPU hours   | 50.6
MC       | 2 million                   | 2 B CPU hours    | -

Table 1: Comparison of MC, MLMC, and OF-MLMC methods and estimated computational speedups over standard MC. OF-MLMC is estimated to be more than two orders of magnitude faster than plain MC, and more than three times faster than standard MLMC.

3.3 Statistics for temporal quantities of interest

Multi-level Monte Carlo statistical estimates are depicted in Figure 8, Figure 9 and Figure 10. The statistical spread of the maximum (in the physical domain) pressure is especially wide at its peak (in time) value, implying a large uncertainty in the damage potential of the cavitation collapse. To the best of our knowledge, such full Probability Density Functions (PDFs) are reported here for the first time when using the MLMC methodology for non-linear systems of conservation laws. To obtain such estimates, level-dependent kernel density estimators were used, with the maximum bandwidth determined using the well-known “solve-the-equation” method [66, 63]. Such empirical PDFs are significantly more valuable in engineering than less informative mean/median and deviation/percentile estimators, especially for the bi-modal probability distributions often encountered in such non-linear systems of conservation laws due to the presence of shocks [43].
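The sketch below illustrates the underlying idea of a telescoping multi-level density estimate: per-level kernel density estimators of the fine and coarse samples are combined in the same fashion as (14). Silverman's rule is used as a stand-in for the “solve-the-equation” bandwidth of [66, 63], which is not available in SciPy, so this is not the exact estimator employed here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def mlmc_pdf(fine, coarse, grid):
    """Telescoping density estimate: kde(q_0) + sum_l (kde(q_l) - kde(q_{l-1})).

    fine[l], coarse[l]: coupled samples of q_l and q_{l-1} on level l;
    coarse[0] is ignored (q_{-1} := 0)."""
    pdf = gaussian_kde(fine[0], bw_method="silverman")(grid)
    for q_f, q_c in zip(fine[1:], coarse[1:]):
        pdf += gaussian_kde(q_f, bw_method="silverman")(grid)
        pdf -= gaussian_kde(q_c, bw_method="silverman")(grid)
    return np.maximum(pdf, 0.0)   # clip negative telescoping artifacts
```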

Figure 8: Uncertainties in the cloud gas volume (mean values with 50% and 90% confidence intervals, left) and global maximum pressure (scaled empirical histogram, right) within the cloud during the collapse. Since all initial cloud configurations contain the same number of equally-sized cavities, very low uncertainties are observed in the evolution of the total gas volume. However, the statistical spread of the peak pressure is especially wide at its maximum value, implying a large uncertainty in the damage potential of the cavitation collapse.

In Figure 8, uncertainties in the cloud gas volume (represented by gas fraction sensor #5, located at the center with a radius large enough to contain the entire cloud) and in the global maximum pressure within the cloud are measured during the entire collapse. As all initial cloud configurations contain the same number of equally-sized cavities, very low uncertainties are observed in the evolution of the total cloud gas volume. However, the statistical spread of the peak pressure is especially wide at its maximum value, indicating a strong necessity for uncertainty quantification in such complex multi-phase flows.

Figure 9: Uncertainties (mean values with 50% and 90% confidence intervals) in the gas volume (left) and pressure (right) sensors at the center of the cloud during the collapse. Notice that the statistical spread of the peak sensor pressure is especially wide at its maximum value, and note the post-collapse increase in the gas fraction during the rebound stage.

In Figure 9, uncertainties are measured in the gas volume fraction sensor and the pressure sensor at the center of the cloud, as defined in (37). As with the global peak pressure, the statistical spread of the peak sensor pressure is especially wide at its maximum value. The post-collapse increase in the gas fraction indicates the presence of a rebound stage. During this stage, the post-collapse gas fraction consistently (in terms of confidence intervals) exceeds pre-collapse values, indicating the presence of induced outgoing rarefaction waves.

Figure 10: Uncertainties (probability density functions) in the maximum speed of sound (left) and peak pressure location (right) during the collapse. Dashed line indicates the initial location of the cloud surface.

In Figure 10, uncertainties in the maximum speed of sound and in the distance of the peak pressure location from the center of the cloud are measured during the entire collapse. The uncertainty in the maximum speed of sound is a direct consequence of the large uncertainties in the global peak pressure. On the contrary, the uncertainty in the distance of the peak pressure location from the cloud center is much smaller, i.e., the temporal-spatial profile of the pressure wave evolution as it travels from the surface of the cloud towards the center has a much lower uncertainty (when compared to the large observed uncertainties in the global maximum pressure estimates).

3.4 Statistics for spatial quantities of interest

In this section, we plot the statistical estimates of QoIs extracted along one-dimensional lines that cut straight through the center of the cloud in the three-dimensional computational domain. We note that radial symmetry is assumed in the statistical distribution of cavities within the cloud, and hence such one-dimensional statistical estimates through the center of the cloud are sufficient to represent the global uncertainty in the entire three-dimensional domain. The objective of the extracted one-dimensional line plots is to provide better insight into the uncertainty structures at the center of the cloud by gathering all statistical estimates in a single plot. The line is cut at a specific physical simulation time, when the peak pressure is observed, and hence this time is slightly different for every sample. In order to reduce volatility in the global maximum pressure measurements, and hence in the choice of the collapse time, we smoothen the observed peak pressure measurements with a Gaussian kernel by means of the fast Fourier transform. Statistical estimates for such extracted one-dimensional lines for the pressure at different stages of the collapse process are provided in Figure 11.

Figure 11: MLMC-estimated uncertainties (PDFs) of the extracted pressure line at the pre-collapse time (left) and at the time of the largest peak pressure (right). The resulting uncertainty in the encountered pressures increases significantly at the final collapse stage, with the largest spreads observed near the epicenter of the cloud cavitation collapse.

The uncertainties in the pressure extracted along a line through the cloud center (with the two transverse coordinates fixed) are estimated using the MLMC methodology at the pre-collapse time and at the time of the largest peak pressure. The time of the largest peak pressure depends on the initial cloud configuration and hence is a random variable, varying in each statistical realization. We observe that the resulting uncertainty in the encountered pressures increases significantly at the final collapse stage, and the largest spreads are observed near the epicenter of the cloud cavitation collapse, where the damage potential is the highest.

3.5 Analysis of linear and non-linear dependencies

Statistical estimates reported in the previous sections indicate that even though the initial cloud setup is very similar for all realizations, i.e., equal count of cavity bubbles, identical cloud radius and cavity radii ranges (which resulted in very small uncertainties for the cloud volume reported in Figure 8), and equal initial gas and liquid pressures, the resulting peak pressure uncertainty is very large, as seen in Figure 9 and Figure 11.

Hence, it is only the actual configuration (or distribution) of the cavities inside the cloud that can have such an amplifying (or attenuating) effect on the peak pressures. The main scope of this section is to investigate various quantities of interest that could potentially explain the cause of such non-linear effects. The set of selected candidate metrics for the cloud configuration includes the skewness (asymmetry) of the initial spatial cavity distribution inside the cloud, the cloud interaction parameter $\beta$, and the distance (from the center of the cloud) of the central cavity (i.e. the cavity closest to the center). Cloud skewness is a measure of the asymmetry of the cloud and is estimated by the statistical centered third moment of the distribution of cavity locations along each of the three spatial axes. All quantities from this set of candidate metrics are tested for linear statistical correlations with the observed values of the peak pressure, the peak pressure distance from the center of the cloud, the peak pressure at the sensor at the center of the domain, and the collapse time (when the largest peak pressure occurs). In addition, we have also tested for statistical correlations among the QoIs themselves, such as the peak pressure location and the location of the center-most cavity in the cloud. The results are provided as a Hinton diagram in Figure 12.
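The correlation analysis behind Figure 12 amounts to evaluating pair-wise Pearson coefficients across the Monte Carlo ensemble; a minimal sketch is given below, where `metrics` and `qois` are hypothetical dictionaries mapping names to per-sample arrays.

```python
import numpy as np

def correlation_table(metrics, qois):
    """Pair-wise Pearson correlations (values in [-1, 1]) between candidate
    cloud metrics and observed QoIs across all samples."""
    R = np.zeros((len(metrics), len(qois)))
    for i, m in enumerate(metrics.values()):
        for j, q in enumerate(qois.values()):
            R[i, j] = np.corrcoef(m, q)[0, 1]
    return R
```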

Figure 12: Hinton diagram indicating pair-wise linear statistical correlations between candidate metrics and selected quantities of interest. White squares indicate positive and black squares indicate negative correlations. The size of the square indicates the magnitude of the correlation in the interval $[-1, 1]$. Cavities within the entire cloud and only in the core of the cloud are considered in the left and right plots, respectively.

We observe multiple significant direct and inverse linear statistical correlations between candidate metrics and QoIs:

  • mild inverse correlation between cloud skewness and pressure sensor readings, mainly a consequence of the central sensor placement within the cloud;

  • strong correlation between the location of central cavity and location of peak pressure (w.r.t. cloud center), confirming prior observations in [62] that peak pressures in the cloud are observed within cavities that are near the center of the cloud;

  • strong inverse correlations between peak pressure location and peak pressure magnitude, indicating that highest peak pressures are most probable near the center of a cloud;

  • moderate correlation between $\beta$ and the collapse time, since large $\beta$ values can be a consequence of a large gas fraction in the cloud.

Despite numerous correlations explaining the statistical spread of observed pressures, the influence of the cloud interaction parameter $\beta$ remains undetermined. To this end, we consider the cloud skewness and $\beta$ only for the core of the cloud. We have identified the core of the cloud to be the region around the center of the cloud where the uncertainties in the peak pressure are largest, resulting in a spherical core whose radius is based on the results in Figure 11. In this case, correlations involving the respective metrics, such as the cloud skewness and $\beta$ for the core of the cloud, are observed to be more significant:

  • mild direct correlation between $\beta$ and the pressure sensor readings, indicating a stronger collapse for clouds with a higher cloud interaction parameter due to stronger pressure amplification;

  • mild inverse correlation between the cloud skewness and $\beta$.

Overall, such insight into statistical correlations provides a very informative description of the inter-dependencies between cloud properties and observed collapse pressures, suggesting direct causal relations w.r.t. cloud non-skewness, the interaction parameter $\beta$, and the proximity of the central cavity to the cloud center.

Due to the non-linear nature of the multi-phase equations and the collapse process, non-linear dependencies among the candidate metrics and QoIs could be present, which might be inaccurately captured by the estimates of linear statistical correlations in Figure 12. We tested this hypothesis by estimating the full joint probability distribution for the pairs of significantly correlated candidate metrics and pressure behavior observations. In Figure 13 and Figure 14, we provide results for a selected subset of the tested correlation pairs, where the strongest and most relevant correlations were observed.

Figure 13: Joint PDFs of the cloud interaction parameter $\beta$ with the collapse time (left) and the resulting pressure sensor reading (right). Higher values of the cloud interaction parameter are more likely to cause larger and more delayed collapse pressures.
Figure 14: Joint PDFs of the location of the central cavity and the location of the peak pressure (left), and of the location of the peak pressure and the peak pressure magnitude (right). The location of the central cavity correlates strongly with the peak pressure location, which itself exhibits a strong inverse correlation with the magnitude of the collapse pressures, explaining the wide confidence intervals observed in Figure 8 and Figure 11.

Joint probability distributions are consistent with linear correlation estimates obtained in Figure 12, and additionally provide valuable insight into non-linear dependencies among QoIs. Obtained results provide a good global overview of causal links between cloud structure and collapse pressures and motivate further analysis to determine the complex mechanisms governing the dynamics of such large and complex cloud cavitation phenomena. We would also like to refer to an ongoing extensive (deterministic) parameter study [62] which investigates such causal links for a wider range of cloud sizes, cavity counts and cloud interaction parameter values.

4 Summary and conclusions

We have presented a non-intrusive multi-level Monte Carlo methodology for uncertainty quantification in multi-phase cloud cavitation collapse flows, together with novel optimal control variate coefficients which maintain the efficiency of the algorithm even for weak correlations among resolution levels and deliver significant variance reduction improvements. We have conducted numerical experiments for cavitating clouds containing 500 cavities, which are randomly (uniformly) distributed within the specified cloud radius, with the radii of the cavities following a log-normal distribution. The results of these numerical experiments have revealed significant uncertainties in the magnitude of the peak pressure pulse, emphasizing the relevance of uncertainty quantification in cavitating flows. Furthermore, statistical correlation and joint probability density function estimates have revealed potential underlying causes of this phenomenon. In particular, spatial arrangement characteristics of the cloud and its core, such as the skewness, the cloud interaction parameter $\beta$, and the position of the central cavity, have been observed to have a significant influence on the resulting pressure amplification intensities during the collapse process.

All numerical experiments were performed by coupling the open source PyMLMC framework with Cubism-MPCF, a high performance peta-scale finite volume solver. The evolution of collapsing clouds has been simulated by explicit time stepping subject to a CFL stability constraint on a hierarchy of uniform, structured spatial meshes. Efficient multi-level Monte Carlo sampling has been observed to deliver more than two orders of magnitude in estimated computational speedup when compared to standard Monte Carlo methods, with an additional estimated factor-of-3 speedup due to the optimal control variate coefficients. In the present work, we have observed efficient scaling of the proposed hybrid OF-MLMC-FVM method up to the entire MIRA supercomputer, consisting of half a million cores. Considering that fault-tolerance mitigation mechanisms are implemented in PyMLMC and have been successfully used, we expect it to scale linearly and be suitable for the exa-scale era of numerical computing.

The proposed OF-MLMC-FVM can deal with a very large number of sources of uncertainty. In the problems presented here, 2’000 sources of uncertainty are needed to fully describe the random initial configuration of the collapsing cloud. To the best of our knowledge, currently no other methods (particularly deterministic methods such as quasi Monte Carlo, stochastic Galerkin, stochastic collocation, PGD, ANOVA, or stochastic FVM) are able to handle this many sources of uncertainty, in particular for non-linear problems with solutions which exhibit path-wise low regularity and possibly non-smooth dependence on random input fields. Furthermore, the proposed methodology is well suited for multi-level extensions of Markov Chain Monte Carlo methods for Bayesian inversion [30, 36].

The present multi-level methodology is a powerful general purpose technique for quantifying uncertainty in complex flows governed by hyperbolic systems of non-linear conservation laws such as cloud cavitation flow problems.

Appendix A Discussion

Here, we wish to comment on possible shortcomings of the adaptive OF-MLMC algorithm described in subsubsection 2.4.4 as it progresses. These remarks are meant to assist the application of OF-MLMC and are not directly relevant for the numerical results presented in section 3.

A.1 Number of “warm-up” samples on the finest resolution level

Notice that in order to have empirical estimates of $V_\ell$, $C_\ell$, and $W_\ell$, at least two samples would be required on each level $\ell$. Enforcing $M_L \geq 2$ might be very inefficient in cases when only one sample is actually needed, since in the presently considered applications the most computationally expensive samples are those at the finest level $L$. To avoid this, initially only one “warm-up” sample on the finest mesh level could be computed, i.e., $M_L = 1$. For the subsequent optimization steps of each $M_\ell$, the variance of the level difference for level $L$ is inferred from available measurements on lower resolution levels using Bayesian inference, as described in step (3) of the OF-MLMC algorithm. If more than one sample is actually required on the finest resolution level, optimization step (7) of the adaptive OF-MLMC algorithm above will adjust $M_L$ to the correct optimal value, and an additional empirical estimate will be available for even more accurate inference.

A.2 Iterative control of indicator errors

The final OF-MLMC error could be underestimated by the error indicator $\tilde{\varepsilon}$ and could actually be larger than the prescribed tolerance $\varepsilon$, since we only ensure that the estimated total error satisfies $\tilde{\varepsilon} \leq \varepsilon$. Since $\tilde{\varepsilon}$ is based on randomized statistical estimators, it is itself random and has a spread around its mean. We note that the probability of the resulting MLMC error being larger than the estimated error (which is forced to be smaller than the prescribed tolerance $\varepsilon$) can be reduced by sufficiently increasing the number of samples $M_\ell$ according to estimates of the fourth centered moment (kurtosis) of the level differences. Such estimates provide variance estimates for the empirical variance estimators $\mathcal{V}_\ell$. Then, by increasing $\mathcal{V}_\ell$ by several standard deviations of its estimator, the required percentile of the confidence interval of $\tilde{\varepsilon}$ can be reduced below the prescribed tolerance $\varepsilon$, this way ensuring with high probability that the true MLMC error does not exceed the prescribed tolerance. A continuation MLMC method incorporating similar techniques for the estimation and control of error confidence intervals was already proposed in [15]. However, updating the estimates after each computed sample could be very inefficient for large HPC applications, since such incremental techniques require heavy synchronization and would make efficient load balancing on distributed many-core systems very challenging.
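A minimal sketch of this inflation step is given below, assuming the standard asymptotic formula for the variance of the sample variance; the function name and the choice of the inflation factor c are illustrative assumptions, and the continuation MLMC method of [15] implements a more elaborate version of this idea.

```python
import numpy as np

def inflated_variance(samples, c=2.0):
    """Conservative variance estimate for one level difference.

    Sketch of the idea in A.2 (not the method of [15]): estimate the
    variance of the empirical variance estimator from the fourth
    centered moment (kurtosis), then inflate the variance estimate by
    c standard deviations so that the MLMC error estimate holds with
    higher probability.
    """
    x = np.asarray(samples, dtype=float)
    M = x.size
    s2 = x.var(ddof=1)                          # empirical variance
    mu4 = np.mean((x - x.mean()) ** 4)          # fourth centered moment
    # standard asymptotic variance of the sample variance estimator
    var_s2 = (mu4 - (M - 3.0) / (M - 1.0) * s2 ** 2) / M
    return s2 + c * np.sqrt(max(var_s2, 0.0))
```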

A.3 Sample “recycling”

Alternative optimal coefficients for each level in the MLMC estimator were suggested in [54, 55], where a multi-fidelity Monte Carlo method is described. There, some statistical realizations (samples) are re-used, i.e., the same result enters the estimators of two consecutive levels, each contributing to a separate difference in the telescoping MLMC estimator (32). Such a “recycling” strategy requires less sampling; however, the complexity of the error analysis is greatly increased due to additional statistical dependencies, which are absent in the OF-MLMC method. On the other hand, for “recycled” sampling, the resulting error minimization problem is separable in terms of the coefficients and the numbers of samples $M_\ell$, and hence no linear system (31) needs to be solved [54]. The linear system (31) is, however, very small and sparse, and needs to be solved only a few times; hence it is not a bottleneck of this algorithm.
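For intuition on why solving such a system is cheap, the sketch below solves the textbook normal equations $C\alpha = c$ for optimal multiple control variate coefficients; the OF-MLMC system (31) is an analogous small linear system (its exact entries are given in the paper), and the covariance numbers here are purely hypothetical.

```python
import numpy as np

def optimal_cv_coefficients(C, c):
    """Optimal coefficients for multiple control variates.

    Generic textbook version: minimizing the variance of
    X - sum_k alpha_k * (Y_k - E[Y_k]) leads to the normal equations
    C alpha = c, with C the covariance matrix of the controls Y_k and
    c the vector of covariances Cov(X, Y_k). With only L+1 unknowns,
    a dense direct solve is entirely sufficient.
    """
    return np.linalg.solve(C, c)

# hypothetical 3-level covariance data (would come from samples)
C = np.array([[2.0, 0.8, 0.1],
              [0.8, 1.5, 0.6],
              [0.1, 0.6, 1.0]])
c = np.array([1.2, 0.9, 0.4])
alpha = optimal_cv_coefficients(C, c)   # solved in microseconds
```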

A.4 Truly optimal number of MC samples on each resolution level

A discrete optimization problem could be formulated that avoids the round-off operations in (34) or (35) and provides truly optimal integer numbers of samples $M_\ell$, as suggested in [52]. We note, however, that such round-offs do not influence the efficiency of the method on the coarser levels, where the number of samples is large. Furthermore, round-off inefficiencies are most probably overshadowed by the approximate estimators used for the variances $\mathcal{V}_\ell$.
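To illustrate where the round-off enters, the sketch below implements the classical MLMC sample allocation formula of [21] with a ceiling round-off; the OF-MLMC formulas (34) and (35) differ in detail (they additionally involve the optimal coefficients), but the integer rounding behaves in the same way. The variances and costs are hypothetical.

```python
import math

def sample_allocation(V, W, eps):
    """Classical MLMC sample allocation with round-off (cf. [21]):

        M_l = ceil( eps^(-2) * sqrt(V_l / W_l) * sum_k sqrt(V_k * W_k) ),

    where V_l are level-difference variances and W_l per-sample costs.
    The ceil() is the round-off discussed in A.4; it matters only on
    the fine levels, where M_l is small.
    """
    s = sum(math.sqrt(v * w) for v, w in zip(V, W))
    return [math.ceil(s / eps**2 * math.sqrt(v / w)) for v, w in zip(V, W)]

# hypothetical per-level variances and costs: many cheap coarse samples,
# a handful of expensive fine ones
V = [1.0e-2, 2.5e-3, 6.0e-4]
W = [1.0, 8.0, 64.0]
print(sample_allocation(V, W, eps=1.0e-2))   # e.g. [438, 78, 14]
```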

Acknowledgments

The authors acknowledge support from the following organizations: the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, for awarding computer time under the project CloudPredict; the Argonne Leadership Computing Facility, a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357, for providing access and support for the MIRA, CETUS, and COOLEY systems; the Partnership for Advanced Computing in Europe (PRACE), projects PRA091 and Pra09_2376, together with the Jülich and CINECA Supercomputing Centers; and the Swiss National Supercomputing Center (CSCS), for a computational resources grant under project ID s500. JŠ would like to thank Stephen Wu for his contribution and fruitful discussions on the optimal variance reduction techniques in multi-level Monte Carlo estimators, and Timothy J. Barth for hosting him as a scientific visitor at NASA Ames Research Center (California, United States) and for collaborations on robust kernel density estimators.

References

  • [1] N. A. Adams and S. J. Schmidt, Shocks in cavitating flows, in Bubble Dynamics and Shock Waves, C. F. Delale, ed., vol. 8 of Shock Wave Science and Technology Reference Library, Springer Berlin Heidelberg, 2013, pp. 235–256.
  • [2] M. R. Baer and J. W. Nunziato, A two-phase mixture theory for the deflagration-to-detonation transition (DDT) in reactive granular materials, International Journal of Multiphase Flow, 12 (1986), pp. 861–889.
  • [3] A. Barth, C. Schwab, and N. Zollinger, Multi-level MC method for elliptic PDEs with stochastic coefficients, Numerische Mathematik, 119 (2011), pp. 123–161.
  • [4] J. R. Blake, M. C. Hooton, P. B. Robinson, and R. P. Tong, Collapsing cavities, toroidal bubbles and jet impact, Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 355 (1997), pp. 537–550.
  • [5] L. Bonfiglio, P. Perdikaris, S. Brizzolara, and G. E. Karniadakis, A multi-fidelity framework for investigating the performance of super-cavitating hydrofoils under uncertain flow conditions, AIAA Non-Deterministic Approaches, (2017).
  • [6] N. Bremond, M. Arora, C.-D. Ohl, and D. Lohse, Controlled multibubble surface cavitation, Physical Review Letters, 96 (2006).
  • [7] C. E. Brennen, Cavitation and bubble dynamics, Oxford University Press, USA, 1995.
  • [8] C. E. Brennen, Cavitation in medicine, Interface Focus, 5 (2015), p. 20150022.
  • [9] C. E. Brennen, G. Reisman, and Y.-C. Wang, Shock waves in cloud cavitation, in Twenty-First Symposium on Naval Hydrodynamics, 1997.
  • [10] E. A. Brujan, T. Ikeda, and Y. Matsumoto, Shock wave emission from a cloud of bubbles, Soft Matter, 8 (2012), p. 5777.
  • [11] F. Cappello, Fault tolerance in petascale/exascale systems: Current knowledge, challenges and research opportunities, International Journal of High Performance Computer Applications, 23 (2009), pp. 212–226.
  • [12] G. L. Chahine, Pressures generated by a bubble cloud collapse, Chemical Engineering Communications, 28 (1984), pp. 355–367.
  • [13] Q. Y. Chen, D. Gottlieb, and J. S. Hesthaven, Uncertainty analysis for steady flow in a dual throat nozzle, Journal of Computational Physics, 204 (2005), pp. 378–398.
  • [14] K. A. Cliffe, M. B. Giles, R. Scheichl, and A. L. Teckentrup, Multilevel Monte Carlo methods and applications to elliptic PDEs with random coefficients, Computing and Visualization in Science, 14 (2011), pp. 3–15.
  • [15] N. Collier, A.-L. Haji-Ali, F. Nobile, E. von Schwerin, and R. Tempone, A continuation multilevel Monte Carlo algorithm, BIT Numerical Mathematics, 55 (2015), pp. 399–432.
  • [16] P. M. Congedo, E. Goncalves, and M. G. Rodio, About the uncertainty quantification of turbulence and cavitation models in cavitating flows simulations, European Journal of Mechanics - B/Fluids, 53 (2015), pp. 190 – 204.
  • [17] C. C. Coussios and R. A. Roy, Applications of acoustics and cavitation to noninvasive therapy and drug delivery, Annual Review of Fluid Mechanics, 40 (2008), pp. 395–420.
  • [18] D. P. Schmidt et al., Assessment of the prediction capabilities of a homogeneous cavitation model for the collapse characteristics of a vapour-bubble cloud, in WIMRC 3rd International Cavitation Forum, Coventry, U.K., 2011.
  • [19] T. J. Dodwell, C. Ketelsen, R. Scheichl, and A. L. Teckentrup, A hierarchical multilevel Markov Chain Monte Carlo algorithm with applications to uncertainty quantification in subsurface flow, SIAM/ASA Journal on Uncertainty Quantification, 3 (2015), pp. 1075–1108.
  • [20] D. Elsakout, M. Christie, and G. Lord, Multilevel Markov Chain Monte Carlo (MLMCMC) for uncertainty quantification, in Society of Petroleum Engineers - SPE North Africa Technical Conference and Exhibition 2015, NATC 2015, 2015.
  • [21] M. B. Giles, Multi-level Monte Carlo path simulation, Oper. Res., 56 (2008), pp. 607–617.
  • [22] M. B. Giles, Multilevel Monte Carlo methods, Acta Numerica, (2015).
  • [23] M. B. Giles and C. Reisinger, Stochastic finite differences and multilevel Monte Carlo for a class of SPDEs in finance, Preprint, Oxford-Man Institute of Quantitative Finance and Mathematical Institute, University of Oxford, (2011).
  • [24] D. Gottlieb and D. Xiu, Galerkin method for wave equations with uncertain coefficients, Commun. Comput. Phys., 3 (2008), pp. 505–518.
  • [25] S. Gottlieb and C.-W. Shu, Total variation diminishing Runge-Kutta schemes, Mathematics of Computation of the American Mathematical Society, 67 (1998), pp. 73–85.
  • [26] P. E. Hadjidoukas, P. Angelikopoulos, C. Papadimitriou, and P. Koumoutsakos, Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models, Journal of Computational Physics, 284 (2015), pp. 1–21.
  • [27] P. E. Hadjidoukas, D. Rossinelli, B. Hejazialhosseini, and P. Koumoutsakos, From 11 to 14.4 PFLOPs: Performance optimization for finite volume flow solver, in Proceedings of the 3rd International Conference on Exascale Applications and Software, 2015.
  • [28] P. E. Hadjidoukas, D. Rossinelli, F. Wermelinger, J. Sukys, U. Rasthofer, C. Conti, B. Hejazialhosseini, and P. Koumoutsakos, High throughput simulations of two-phase flows on Blue Gene/Q, in Parallel Computing: On the Road to Exascale, Proceedings of the International Conference on Parallel Computing, ParCo 2015, 1-4 September 2015, Edinburgh, Scotland, UK, 2015, pp. 767–776.
  • [29] S. Heinrich, Multilevel Monte Carlo methods, in Large-Scale Scientific Computing, Third International Conference LSSC 2001, Sozopol, Bulgaria, Lecture Notes in Computer Science 2170, Springer, 2001, pp. 58–67.
  • [30] V. H. Hoang, C. Schwab, and A. M. Stuart, Sparse MCMC gPC finite element methods for Bayesian inverse problems, Inverse Problems, (2014).
  • [31] X. Y. Hu, B. C. Khoo, N. A. Adams, and F. L. Huang, A conservative interface method for compressible flows, Journal of Computational Physics, 219 (2006), pp. 553–578.
  • [32] G.-S. Jiang and C.-W. Shu, Efficient implementation of weighted ENO schemes, Journal of computational physics, 126 (1996), pp. 202–228.
  • [33] E. Johnsen and T. Colonius, Implementation of WENO schemes in compressible multicomponent flow problems, Journal of Computational Physics, 219 (2006), pp. 715 – 732.
  • [34] E. Johnsen and T. Colonius, Numerical simulations of non-spherical bubble collapse, Journal of Fluid Mechanics, 629 (2009), pp. 231–262.
  • [35] A. K. Kapila, R. Menikoff, J. B. Bdzil, S. F. Son, and D. S. Stewart, Two-phase modeling of deflagration-to-detonation transition in granular materials: Reduced equations, Physics of Fluids, 13 (2001), pp. 3002–3024.
  • [36] C. Ketelsen, R. Scheichl, and A. L. Teckentrup, A hierarchical multilevel Markov Chain Monte Carlo algorithm with applications to uncertainty quantification in subsurface flow, arXiv:1303.7343, (2013).
  • [37] E. Lauer, X. Y. Hu, S. Hickel, and N. A. Adams, Numerical investigation of collapsing cavity arrays, Physics of Fluids, 24 (2012), p. 052104.
  • [38] S. Li, Tiny bubbles challenge giant turbines: Three gorges puzzle, Interface Focus, 5 (2015), p. 20150020.
  • [39] G. Lin, C. H. Su, and G. E. Karniadakis, The stochastic piston problem, Proc. Natl. Acad. Sci. USA, 101 (2004), pp. 15840–15845.
  • [40] X. Ma and N. Zabaras, An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations, J. Comp. Phys., 228 (2009), pp. 3084–3113.
  • [41] R. Menikoff and B. J. Plohr, The Riemann problem for fluid flow of real materials, Reviews of Modern Physics, 61 (1989), pp. 75–130.
  • [42] S. Mishra and C. Schwab, Sparse tensor multi-level Monte Carlo finite volume methods for hyperbolic conservation laws with random initial data, Math. Comp., 280 (2012), pp. 1979–2018.
  • [43] S. Mishra, C. Schwab, and J. Šukys, Multi-level Monte Carlo finite volume methods for nonlinear systems of conservation laws in multi-dimensions, Journal of Computational Physics, 231 (2012), pp. 3365–3388.
  • [44] S. Mishra, C. Schwab, and J. Šukys, Multi-level Monte Carlo finite volume methods for shallow water equations with uncertain topography in multi-dimensions, SIAM J. Sci. Comput., 24 (2012), pp. B761–B784.
  • [45] S. Mishra, C. Schwab, and J. Šukys, Multi-level Monte Carlo finite volume methods for uncertainty quantification of acoustic wave propagation in random heterogeneous layered medium, Journal of Computational Physics, 312 (2016), pp. 192–217.
  • [46] K. A. Mørch, On the collapse of cavity clusters in flow cavitation, in Cavitation and Inhomogeneities in Underwater Acoustics, W. Lauterborn, ed., Springer, 1980, pp. 95–100.
  • [47] K. A. Mørch, Energy considerations on the collapse of cavity clusters, Applied Scientific Research, 38 (1982), pp. 313–321.
  • [48] F. Müller, Stochastic methods for uncertainty quantification in subsurface flow and transport problems, Dissertation ETH No. 21724, 2014.
  • [49] F. Müller, P. Jenny, and D. W. Meyer, Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media, Journal of Computational Physics, 250 (2013), pp. 685–702.
  • [50] A. Murrone and H. Guillard, A five equation reduced model for compressible two phase flow problems, Journal of Computational Physics, 202 (2005), pp. 664–698.
  • [51] L. Parussini, D. Venturi, P. Perdikaris, and G. E. Karniadakis, Multi-fidelity Gaussian process regression for prediction of random fields, Journal of Computational Physics, 336 (2017), pp. 36–50.
  • [52] S. Pauli, Optimal number of multilevel Monte Carlo levels and their fault tolerant application, Dissertation ETH, 2014.
  • [53] S. Pauli, M. Kohler, and P. Arbenz, A fault tolerant implementation of multi-level Monte Carlo methods, Parallel Computing – Accelerating Computational Science and Engineering (CSE), Advances in Parallel Computing, IOS Press, 23 (2014), pp. 471–480.
  • [54] B. Peherstorfer, K. Willcox, and M. Gunzburger, Optimal model management for multifidelity Monte Carlo estimation, ACDL Technical Report TR15-2, (2015).
  • [55] B. Peherstorfer, K. Willcox, and M. Gunzburger, Survey of multifidelity methods in uncertainty propagation, inference, and optimization, Preprint, (2016), pp. 1–57.
  • [56] P. Perdikaris, D. Venturi, J. O. Royset, and G. E. Karniadakis, Multi-fidelity modelling via recursive co-kriging and Gaussian–Markov random fields, Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 471 (2015).
  • [57] G. Perigaud and R. Saurel, A compressible flow model with capillary effects, Journal of Computational Physics, 209 (2005), pp. 139–178.
  • [58] G. Poette, B. Després, and D. Lucor, Uncertainty quantification for systems of conservation laws, J. Comput. Phys., 228 (2009), pp. 2443–2467.
  • [59] T. J. Poinsot and S. K. Lele, Boundary conditions for direct simulations of compressible viscous flows, Journal of Computational Physics, 101 (1992), pp. 104–129.
  • [60] M. Raissi, P. Perdikaris, and G. E. Karniadakis, Inferring solutions of differential equations using noisy multi-fidelity data, Journal of Computational Physics, 335 (2017), pp. 736–746.
  • [61] U. Rasthofer, F. Wermelinger, P. Hadjidoukas, and P. Koumoutsakos, Large scale simulation of cloud cavitation collapse, in Proceedings of The International Conference on Computational Science (accepted), (2017).
  • [62] U. Rasthofer, F. Wermelinger, J. Šukys, P. Hadjidoukas, and P. Koumoutsakos, Parameter study in cloud cavitation collapse, Manuscript in progress, (2017).
  • [63] V. C. Raykar and R. Duraiswami, Fast optimal bandwidth selection for kernel density estimation, in Proceedings of the Sixth SIAM International Conference on Data Mining, Bethesda, 2006, pp. 524–528.
  • [64] D. Rossinelli, B. Hejazialhosseini, P. E. Hadjidoukas, C. Bekas, A. Curioni, A. Bertsch, S. Futral, S. J. Schmidt, N. A. Adams, and P. Koumoutsakos, 11 PFLOP/s simulations of cloud cavitation collapse, in Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, SC ’13, New York, NY, USA, 2013, ACM, pp. 3:1–3:13.
  • [65] D. P. Schmidt and M. L. Corradini, The internal flow of diesel fuel injector nozzles: A review, International Journal of Engine Research, 2 (2001), pp. 1–22.
  • [66] S. J. Sheather and M. C. Jones, A reliable data-based bandwidth selection method for kernel density estimation, Journal of the Royal Statistical Society. Series B (Methodological), 53 (1991), pp. 683–690.
  • [67] J. Šukys, Adaptive load balancing for massively parallel multi-level Monte Carlo solvers, PPAM 2013, Part I, LNCS. Springer, Berlin Heidelberg, 8384 (2014), pp. 47–56.
  • [68] J. Šukys, Robust multi-level Monte Carlo Finite Volume methods for systems of hyperbolic conservation laws with random input data, Dissertation ETH No. 21990, 2014.
  • [69] J. Šukys, PyMLMC, Available from: https://github.com/cselab/PyMLMC, (2017).
  • [70] J. Šukys, S. Mishra, and C. Schwab, Static load balancing for multi-level Monte Carlo finite volume solvers, PPAM 2011, Part I, LNCS. Springer, Heidelberg, 7203 (2012), pp. 245–254.
  • [71] A. Tiwari, J. B. Freund, and C. Pantano, A diffuse interface model with immiscibility preservation, Journal of Computational Physics, 252 (2013), pp. 290–309.
  • [72] A. Tiwari, C. Pantano, and J. B. Freund, Growth-and-collapse dynamics of small bubble clusters near a wall, Journal of Fluid Mechanics, 775 (2015), pp. 1–23.
  • [73] E. F. Toro, M. Spruce, and W. Speares, Restoration of the contact surface in the HLL-Riemann solver, Shock Waves, 4 (1994), pp. 25–34.
  • [74] J. Tryoen, O. L. Maitre, M. Ndjinga, and A. Ern, Intrusive projection methods with upwinding for uncertain non-linear hyperbolic systems, J. Comput. Phys., 229 (2010), pp. 6485–6511.