Advances in Bayesian Probabilistic Modeling for Industrial Applications

03/26/2020 · by Sayan Ghosh, et al. · General Electric

Industrial applications frequently pose a notorious challenge for state-of-the-art methods in the contexts of optimization, design of experiments, and modeling of unknown physical responses. This problem is aggravated by the limited availability of clean data, uncertainty in available physics-based models, and the additional logistic and computational expense associated with experiments. In such a scenario, Bayesian methods have played an impactful role in alleviating the aforementioned obstacles by quantifying uncertainty of different types under limited resources. These methods, usually deployed as a framework, allow decision makers to make informed choices under uncertainty while incorporating information on the fly, usually in the form of data from multiple sources, in a manner consistent with physical intuition about the problem. This is a major advantage that Bayesian methods bring to fruition, especially in the industrial context. This paper is a compendium of the Bayesian modeling methodology that is being consistently developed at GE Research. The methodology, called GE's Bayesian Hybrid Modeling (GEBHM), is a probabilistic modeling method, based on the Kennedy and O'Hagan framework, that has been continuously scaled up and industrialized over several years. In this work, we explain the various advancements in GEBHM's methods and demonstrate their impact on several challenging industrial problems.


1 Introduction

Reliability of industrial applications necessitates physical experiments on complex engineering systems. In conjunction with their cost, measurements generated from these complex physical tests can be limited, or sometimes even infeasible, in practice due to lack of accessibility and repeatability. Advances in numerical algorithms and the availability of high-performance computing platforms, on the other hand, facilitate the solution of complex engineering systems at lower cost, in general, than physical experiments. For credible numerical simulations, computer models should be verified, validated, and calibrated. Numerical simulations can only be considered predictive once uncertainties due to the model and input parameters are quantified [29]. In an abstract sense, both physical experiments and predictive computer models can be viewed as data generation mechanisms. As the complexity of industrial applications increases, a unified framework linking experimental measurements and numerical simulations is becoming more critical than ever.

To this end, the Bayesian paradigm provides a rich probabilistic approach to combine information from physical experiments and computer simulations in a way that permits engineers to continuously update their prior belief on the process under consideration [29]. In the Bayesian framework, a probabilistic meta-model can be constructed utilizing data from physical experiments as well as low- and high-fidelity computer simulations. Optimal design of computer experiments (DoE) is performed to guide the construction of such meta-models; DoE algorithms seek to balance the cost and accuracy of the computer simulations [10, 21]. Industrial-scale computer simulations often have large input dimensions. The input space may include control variables that are set to control the desired process and physical parameters reflecting the media of operation [38]. Therefore, sensitivity analysis can estimate the relative influence of each input parameter on the variability of the prediction. The most influential parameters are then calibrated under uncertainty for an accurate representation of important aspects of the physical process [15]. Once calibrated, uncertainties due to modeling errors and data are propagated to predict the quantities of interest with confidence bounds.

For the Bayesian framework, building surrogate models utilizing both high- and low-fidelity simulations as well as experimental data is often based on Gaussian processes (GPs), which have become an indispensable tool in the Bayesian paradigm [17]. In this direction, GP surrogates have been applied to various engineering problems [7, 52], such as optimization of composite laminates [58], structural optimization [13], combustion synthesis of composite materials [39], design applications [56, 4], reliability analysis [16], and additive manufacturing [49], to name a few.

The Kennedy and O’Hagan approach [17] can be viewed as one of the well-established unified frameworks for Bayesian calibration and meta-modeling utilizing both a simulation model and observed data. A computer implementation of this approach (named gpmsa) was released by Los Alamos National Lab [14, 8]. Deploying gpmsa on challenging industrial problems is not a straightforward task, as many technical challenges exist, such as the curse of dimensionality, model identifiability, transient problems with multiple outputs, lack of data and extrapolation, multiple sets of experimental data, etc. To overcome these challenges, we at GE Research have built the GE Bayesian Hybrid Modeling (GEBHM) framework (shown in Fig. 1) to carry out Bayesian analysis for industrial problems. The core components of GEBHM are based on gpmsa, but with several significant enhancements required for robust deployment on many industrial problems.

As presented in Fig. 1, the distinguishing features of GEBHM are: (1) a fully Bayesian approach; (2) an explicitly formulated model discrepancy; (3) building the GP surrogates during the calibration and updating process; (4) calibration of dynamic models with multiple outputs; (5) forward uncertainty quantification and propagation; (6) model validation; (7) real-time visualization of the calibration process and outcomes; (8) scaling to high-dimensional problems (i.e., hundreds of parameters); and (9) parallelization on high-performance computing platforms. These capabilities of GEBHM have been successfully demonstrated on several challenging engineering applications, such as multi-stage engine blade-row models, transient model calibration, engine system-level thermal model calibration and validation, combined cycle plant (gas turbine and steam turbine) control model calibration, combustor global sensitivity, predictive emissions modeling and optimization, airfoil cooling system uncertainty quantification and model validation, engine cycle deck performance data matching, Airframe Digital Twin, and more.

In this work, we present the GEBHM framework and demonstrate its capability in tackling many industrial applications. In particular, Section 2 briefly reviews the theoretical background of the framework and an application to a structural dynamics problem. In Section 3, we show GEBHM's applicability to transient problems, robust design, sensitivity analysis and portable GPs, multi-source legacy data modeling, and parallel MCMC. In Section 4, we discuss some of our future enhancements to GEBHM.

Figure 1: GE’s Bayesian Hybrid Modeling (GEBHM) framework.

2 GE’s Bayesian Hybrid Modeling (GEBHM)

Before delving into the advancements, we first discuss the Kennedy and O’Hagan theoretical framework for data assimilation. In this framework, let y(x) denote the outputs from either a physical process or a high-fidelity simulator, and let x represent a design vector of dimension p. The training output y(x_i), at training input point x_i, can be expressed as:

y(x_i) = η(x_i, θ) + δ(x_i) + ε_i,    i = 1, …, n,    (1)

where n is the number of training data points, η(·, ·) is the simulation model, θ are the true values of the calibrated parameters, δ(·) is the discrepancy model between the calibrated model and the experimentally observed data, and ε_i is the noise in the observed data. Generally, the data from the simulation model are available on a set of design and calibration parameter combinations (x_j, t_j), for j = 1, …, m, which may not be at the same settings as the experimentally observed data. As proposed by Kennedy and O’Hagan [17], and as described by Higdon et al. [14], the simulation output and the model discrepancy are modeled as GP models. The simulator model η(·, ·) is approximated as a GP model using known simulator output at design locations with a zero mean and covariance matrix given by,

Σ_η(z_i, z_j) = (1/λ_z) exp( − Σ_k β_k (z_{i,k} − z_{j,k})² ) + (1/λ_s) δ_{ij},    (2)

where z = (x, t) is the combined vector of design variables x and calibration parameters t, the parameters λ_z and λ_s characterize the marginal data variance captured by the model and by the residuals, respectively, and β_k characterizes the strength of dependence of the outputs on the design variables. The squared exponential function ensures that the GP model is smooth and infinitely differentiable. Similarly, the experimental observation is modeled as a GP with zero mean and covariance matrix given by,

Σ_y(z̃_i, z̃_j) = (1/λ_z) exp( − Σ_k β_k (z̃_{i,k} − z̃_{j,k})² ) + (1/λ_y) δ_{ij},    (3)

where z̃ = (x̃, θ) is the combined vector of observed design variables of the experiments and the calibration parameters, with θ taking the prior values of the calibration parameters; the parameters λ_z, β_k, and λ_y play the corresponding roles for the observation GP model. To represent the correlation between the simulator output and the experimental observations, the cross-covariance matrix is given as

Σ_{η,y}(z_i, z̃_j) = (1/λ_z) exp( − Σ_k β_k (z_{i,k} − z̃_{j,k})² ).    (4)

The discrepancy term δ(x) is modeled as a zero-mean GP with covariance

Σ_δ(x_i, x_j) = (1/λ_δ) exp( − Σ_k β_k^δ (x_{i,k} − x_{j,k})² ),    (5)

where λ_δ and β_k^δ are the parameters associated with the discrepancy GP model.

Instead of operating on the raw data from the two sources, namely a) experiments and b) simulations, we scale the observations to a standard normal using the mean and standard deviation of the simulation data. This is done in order to remain consistent with the zero-mean function of the GP-based models for the simulation and discrepancy terms. Another important feature of the data-processing phase is the transformation of the normalized experimental observations into two parts: a) an underlying-function part and b) a discrepancy part. The transformed simulation data are denoted by η̂, the underlying-function part of the transformed experimental data by ŷ_η, and the discrepancy part by ŷ_δ. This is done using a linear basis transformation, the details of which are more involved and outside the scope of the discussion here; interested readers are referred to Section 3.2 of [8], where the aforementioned data preprocessing steps are discussed exhaustively. Intuitively, the transformed simulation data and the discrepancy part, both of which are assumed to be samples from multivariate zero-mean Gaussian distributions, serve as the training data for the simulation and the discrepancy GP models, respectively.
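The scaling step above amounts to a simple standardization of both data sources by the simulation mean and standard deviation. A minimal sketch follows; the function and variable names are illustrative assumptions, not GEBHM's API, and the basis transformation into underlying-function and discrepancy parts is omitted.

```python
import numpy as np

def standardize_outputs(y_sim, y_obs):
    """Scale simulation and observed outputs by the simulation mean and standard
    deviation, consistent with the zero-mean GP assumption described above.
    Illustrative sketch only."""
    mu, sd = y_sim.mean(axis=0), y_sim.std(axis=0)
    sd = np.where(sd > 0, sd, 1.0)          # guard against constant outputs
    return (y_sim - mu) / sd, (y_obs - mu) / sd, (mu, sd)

# Toy usage with synthetic simulation and experimental outputs
rng = np.random.default_rng(0)
eta_hat, y_scaled, scaler = standardize_outputs(rng.normal(5.0, 2.0, size=(30, 1)),
                                                rng.normal(5.2, 2.1, size=(24, 1)))
```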

The likelihood of the combined data is then given as follows:

L(D | λ_z, λ_s, λ_y, λ_δ, β, β^δ, θ) ∝ |Σ_D|^(−1/2) exp( −½ Dᵀ Σ_D^(−1) D ),    (6)

where D is the combined vector of transformed simulation and observed data and Σ_D is the joint covariance matrix assembled from Eqs. (2)–(5). For multiple outputs, additional processing, such as singular value decomposition (SVD) and support kernels for computing Σ_D, is required, for which readers can refer to [14]. The posterior distribution of all the hyper-parameters is given by

π(λ_z, λ_s, λ_y, λ_δ, β, β^δ, θ | D) ∝ L(D | λ_z, λ_s, λ_y, λ_δ, β, β^δ, θ) π(λ_z) π(λ_s) π(λ_y) π(λ_δ) π(β) π(β^δ) π(θ),    (7)

where the product of prior densities on the right-hand side of the equation is the prior distribution of the parameters, and all the hyperparameters are assumed to be independent. The target posterior distribution is sampled using Metropolis-Hastings within Gibbs, with univariate proposal distributions for the MCMC posterior updates [53, 47, 17, 14]. The covariance matrices are updated with the current realizations of the hyperparameters at every MCMC step.

It is important to emphasize a fundamental principle of the mathematical formulation in the calibration framework, which is based on the so-called Kennedy-O'Hagan formulation [18, 14]. This formulation models the simulation model, η(·, ·), as a GP regression model using all the available data, i.e., the simulation data and the observation data. This is why the likelihood formulation in Eq. 6 has a cross-covariance between the observed data and the simulation data. This approach is different from other model calibration methods that directly calibrate the known physical parameters of the simulation model using the observed data.
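To make the structure of Eqs. (2)–(6) concrete, the following is a minimal sketch that assembles the joint covariance over simulation runs and observations and evaluates the Gaussian log-likelihood. It assumes the simple squared-exponential parameterization written above; all names and the toy usage are assumptions for illustration, not GEBHM's or gpmsa's implementation. In an MCMC loop, this log-likelihood would be combined with the hyperparameter priors of Eq. (7) and sampled with Metropolis-Hastings within Gibbs.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def sqexp(A, B, beta, lam):
    """Squared-exponential covariance: (1/lam) * exp(-sum_k beta_k (a_k - b_k)^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2 * beta).sum(axis=-1)
    return np.exp(-d2) / lam

def koh_loglik(eta_hat, y_hat, X_sim, T_sim, X_obs, theta,
               lam_z, lam_s, beta_z, lam_d, beta_d, lam_y):
    """Gaussian log-likelihood of the combined standardized data (Eq. 6 style).
    The simulator GP lives on z = (x, t); observations add a discrepancy GP on x
    plus observation noise, and share the cross-covariance of Eq. (4)."""
    Z_sim = np.hstack([X_sim, T_sim])
    Z_obs = np.hstack([X_obs, np.tile(theta, (len(X_obs), 1))])
    K_ss = sqexp(Z_sim, Z_sim, beta_z, lam_z) + np.eye(len(Z_sim)) / lam_s
    K_so = sqexp(Z_sim, Z_obs, beta_z, lam_z)                 # cross covariance
    K_oo = (sqexp(Z_obs, Z_obs, beta_z, lam_z)
            + sqexp(X_obs, X_obs, beta_d, lam_d)              # discrepancy GP
            + np.eye(len(X_obs)) / lam_y)                     # observation noise
    K = np.block([[K_ss, K_so], [K_so.T, K_oo]])
    d = np.concatenate([eta_hat, y_hat])
    c, low = cho_factor(K + 1e-8 * np.eye(len(K)))            # jitter for stability
    return -0.5 * d @ cho_solve((c, low), d) - np.log(np.diag(c)).sum()

# Toy usage: 20 simulation runs over (x, t) and 8 observations at a candidate theta
rng = np.random.default_rng(0)
X_sim, T_sim, X_obs = rng.uniform(size=(20, 2)), rng.uniform(size=(20, 1)), rng.uniform(size=(8, 2))
eta_hat, y_hat = rng.normal(size=20), rng.normal(size=8)
print(koh_loglik(eta_hat, y_hat, X_sim, T_sim, X_obs, theta=np.array([0.4]),
                 lam_z=1.0, lam_s=100.0, beta_z=np.ones(3), lam_d=5.0,
                 beta_d=np.ones(2), lam_y=50.0))
```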

For a test input point matrix X*, the predicted mean and standard deviation for the d-th output are given as,

μ_d(X*) = Σ(X*, X) Σ(X, X)^(−1) Y_d,    σ_d²(X*) = diag[ Σ(X*, X*) − Σ(X*, X) Σ(X, X)^(−1) Σ(X, X*) ],    (8)

where X is the observed data matrix and Y_d is the vector of observed outputs used for training the model.

Application to a Structural Dynamics Problem

This section presents a new demonstration of the GEBHM calibration capability. In particular, we consider a physical experiment to model the underplatform dampers used widely in turbine blades and investigate the damping effectiveness of different damper geometries over a wide range of contact preload and dynamic excitation levels. Fig. 2 illustrates the experimental setup, which consists of two rectangular beams placed adjacent to each other. The beams have a platform at roughly one third of the beam length from the fixed end. A vibration damper is placed between them such that it contacts the underside of the beam platforms. The damper is held in place by two threaded rods attached to a surrounding stationary frame. The damper can be loaded up against the platforms by pretensioning the rods, and this load is measured by load cells between the pretensioning nuts and the stationary frame. Dynamic excitation is provided to one of the blades using an electrodynamic shaker. Analytical simulation of the experiment is performed by generating a Craig-Bampton-type reduced-order model from the finite element model of the test setup shown in Fig. 2 and performing a nonlinear vibration analysis using a multi-harmonic balance hybrid frequency-time domain solver similar to that described by Poudou et al. [32]. This improves the solution times by several orders of magnitude over the traditional approach of using a full FEA model with a transient time-stepping analysis.

The problem has two design variables: 1) the static damper preload and 2) the dynamic excitation. Furthermore, there are two unknown calibration parameters: 1) the friction coefficient and 2) the background structural damping, which is applied as purely stiffness-proportional damping.

The friction coefficient and the background damping are the parameters being inferred. Both the test and the simulation provide the response as a function of excitation frequency. The frequency responses are parameterized by two quantities: the maximum response and the damping expressed as a Q-factor. The former captures the height of the response peaks while the latter captures the width of the frequency response function peaks. We make use of test data obtained at 24 designs and simulation data at 30 designs. Fig. 3 shows the posterior densities of the calibration parameters obtained using the GEBHM calibration framework, whereby GEBHM estimates the posterior distributions by fusing information from the two sources using the fully Bayesian MCMC discussed in Sec. 2.

Figure 2: Vibration test setup for structural dynamic problem
Figure 3: Calibration results for the two parameters Subfigure (a) coefficient of friction, and Subfigure (b) background damping.

The nonlinear vibration analysis is run at the test conditions using the mean values of the calibrated parameters. Fig. 4 shows the comparison of the calibrated model results and the test data at two different preloads (50 lbf and 100 lbf). For the 50 lbf case, the analytical responses obtained with the calibrated parameters match the test data well in terms of overall frequency response for three out of four dynamic excitation levels (0.05, 0.1, 0.5 lbf). For the 0.2 lbf excitation, the analytical response overpredicts the test data by 30%. The frequency at which the maximum response occurs is predicted accurately for all excitation levels. For the 100 lbf case, good matches on the maximum response are obtained for three out of four forcing levels. For the 0.5 lbf forcing level, the model overpredicts the maximum response by 37%. In all other cases, maximum response prediction errors are less than 5%. It is possible to draw samples from the distributions of the calibrated parameters and run the forced response analysis on these samples to generate confidence bounds on model predictions, if the computational cost does not become prohibitive. This is illustrated in Fig. 4(b), where it can be seen that most of the test data fall within the 95% model prediction bounds. Overall, with the calibrated parameters, a good match between data and model is observed over a wide range of preload and dynamic excitation levels.

Figure 4: Validation results for the two cases Subfigure (a) 50 lbf preload, and Subfigure (b) 100 lbf preload.

3 Advances and Applications

3.1 Transient Response

In this section, we incorporate the ability to handle multiple transient responses in the GEBHM framework. We present only a brief introduction to the capability here and refer the interested reader to [23] for more information on the theory and industrial applications. Similar to the original GEBHM framework, we build on Higdon et al.'s [14] implementation of the transient calibration capability. Instead of dealing with the outputs directly, as in the original GEBHM implementation, we first perform a singular value decomposition (SVD) of the outputs to identify principal components and reduce the overall dimensionality associated with large transient variables, as follows:

η(x, θ) = U S Vᵀ,    (9)

where the simulation output matrix η(x, θ) has n_t rows (the number of time points in the transient outputs) and m columns (the number of simulations), x and θ are the design variables and calibration parameters of the simulation model, U and V are the left and right singular vectors, and S is the diagonal matrix of singular values. We then retain only a certain number of principal components (say p_η) to approximate the transient simulation model as,

η(x, θ) ≈ Σ_{i=1}^{p_η} u_i s_i v_iᵀ,    (10)

which can then be further linearized as,

η(x, θ) ≈ K_η w(x, θ),    (11)

where K_η is n_t × p_η and w(x, θ) is p_η × 1. Next, we build a GP model for each principal-component weight in w. Similar to the simulation output, the experimental output can also be linearized as,

y(x) ≈ K_y w(x, θ),    (12)

where K_y is computed by interpolating K_η from the simulation time grid onto the observed time grid. This ensures generality of the overall method even when the simulation and observed data are measured at different points in time. The components of the observed data are modeled as GPs with covariance matrices as shown in Eq. 3.

In order to model the discrepancy between the simulation model and the observations for the transient outputs, we utilize a specified number of Gaussian basis functions for the linear model as,

δ(x) ≈ D v(x),    (13)

where the columns of D are the Gaussian basis functions and v(x) are the discrepancy weights. The discrepancy components are again modeled as GPs, similar to Eq. 5. Essentially, we have utilized linearization to represent the transient problem using principal components for the simulation model and Gaussian basis components for the discrepancy model in order to develop the GP models. Once these covariance functions are constructed, the hyperparameters can be inferred in a similar manner as described in the earlier sections. We have enhanced Los Alamos's original implementation to handle multiple transient outputs and have demonstrated good success in applying it to industrial applications, as shown in [23].
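As an illustration of the reduction in Eqs. (9)–(11), the sketch below builds the principal-component basis and weights from a matrix of synthetic transient outputs; the 99% energy threshold and all names are assumptions for the example, and the GP fitting of the weights is omitted.

```python
import numpy as np

# Illustrative sketch of the SVD-based output reduction: columns of Y are
# simulation runs, rows are time points (synthetic decaying transients).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)                  # n_t time points
Y = np.stack([np.exp(-3 * t) * a + b * t for a, b in rng.uniform(0.5, 2.0, (30, 2))], axis=1)

U, s, Vt = np.linalg.svd(Y - Y.mean(axis=1, keepdims=True), full_matrices=False)
p = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99)) + 1  # keep 99% of the energy
K_eta = U[:, :p] * s[:p]                        # n_t x p basis (K_eta in Eq. 11)
W = Vt[:p, :]                                    # p x m weights; one GP would be fit per row
Y_approx = Y.mean(axis=1, keepdims=True) + K_eta @ W
print(p, np.max(np.abs(Y - Y_approx)))           # small reconstruction error
```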

Figure 5: Schematic of the transient thermal model showing the location of the observed data

Application: Transient Response

We provide a brief description of one of the problems addressed in [23], a transient heat-transfer problem on a 1-D rod. Consider a rod, as shown in Fig. 5, with a heat flux applied at the left end and an initial temperature condition. Our aim here is to calibrate the heat flux and the initial temperature based on the temperatures monitored at five different locations along the length of the rod. Fig. 6(a) shows the calibrated posterior distributions of the heat flux and the initial temperature; we can clearly see that the GEBHM method identifies the strong correlation between the flux and the initial temperature.

Figure 6: (a) Histogram and scatter plot showing the posterior distributions of the two calibration parameters, and (b) plot showing the calibrated simulator model and discrepancy model for two locations (figures reproduced from [23]).

Figure 6(b) shows the transient temperature signals with the discrepancy correction. We have summarized here the multiple-transient calibration capability that is part of the overall GEBHM framework; interested readers are referred to [23].

3.2 Robust Gaussian Process

In many industrial problems, the input cannot be observed directly and the uncertainty around it is modeled with a Gaussian distribution, x* ~ N(μ_x, Σ_x), with mean μ_x and covariance Σ_x. In such a scenario, it is important to quantify the effect of the input uncertainty on the output, which is generally done by uncertainty propagation using a meta-model such as a GP. The resulting predictive distribution of the output y*, when the input is a random variable given by x* ~ N(μ_x, Σ_x), is obtained by marginalizing over the input distribution:

p(y* | D, μ_x, Σ_x) = ∫ p(y* | D, x*) p(x* | μ_x, Σ_x) dx*,    (14)
p(y* | D, x*) = N( μ(x*), σ²(x*) ),    (15)

where D is the training data, and μ(x*) and σ(x*) are the predicted mean and standard deviation of the output.

Traditionally, this marginalization is estimated by a Monte Carlo sampling approach. Sampling from a GP is fast; however, in a scenario like uncertainty-based design optimization, the double-loop process (an optimization loop and a sampling loop) can make the overall process expensive when the system has a large number of random input variables. The robust GP in GEBHM overcomes this challenge by collapsing the double-loop uncertainty-based design optimization problem into a single-loop problem. This is accomplished by exactly calculating the uncertainty measures m and v defined below. The robust GP can directly, with one function evaluation, estimate the first and second moments of the output while accounting for the effect of the input uncertainty. An example of this distinction is shown in Fig. 7. Note that μ(x*) and σ(x*) are probabilistic GP model predictions and cover model uncertainty only, not the added uncertainty due to uncertain inputs.

Figure 7: Example of uncertainty prediction using a standard GP and a robust GP (figure reproduced from [35]).

Assuming that x* is a normally distributed random variable, GEBHM can be used to obtain exact analytical expressions for the mean, m, and variance, v, of the marginalized predictive distribution of the output. This enables the formulation of single-loop uncertainty-based optimization problems, where the objective is some function of m and/or v.

The method for computing m and v implemented in GEBHM is based on work developed by Quinonero-Candela et al. [34, 33, 12]. Using the law of iterated expectations and the law of total variance:

m(μ_x, Σ_x) = E_{x*}[ μ(x*) ],    (16)
v(μ_x, Σ_x) = E_{x*}[ σ²(x*) ] + Var_{x*}[ μ(x*) ],    (17)

where E_{x*} and Var_{x*} denote the expectation and variance with respect to x*, respectively. When Gaussian kernels are used in the GP, the expressions for μ(x*) are Gaussian functions of x* and the expressions for σ²(x*) are products of Gaussian functions of x*. Therefore, the integrands involved in determining m and v are products of Gaussian functions, which allows for an analytical calculation. The analytical form of the mean is given as:

m(μ_x, Σ_x) = qᵀ β,  with β = Σ(X, X)^(−1) y,    (18)

where Σ(X, X) is the training covariance matrix of Eq. 2 and y are the training outputs. Elements of the vector q are given by,

q_i = (1/λ_z) |Λ^(−1) Σ_x + I|^(−1/2) exp( −½ (x_i − μ_x)ᵀ (Σ_x + Λ)^(−1) (x_i − μ_x) ),    (19)

where Λ = diag(1/(2β_1), …, 1/(2β_D)) collects the kernel length scales and I is a D-by-D identity matrix. The variance is given by,

v(μ_x, Σ_x) = Σ(x*, x*) − tr[ ( Σ(X, X)^(−1) − β βᵀ ) Q ] − m²,    (20)

where Σ(x*, x*) is the covariance constructed using Eq. 2 for the new design point x*, tr(·) is the trace operator, and Q is an N-by-N matrix with elements given by the following:

Q_{ij} = Σ(x_i, μ_x) Σ(x_j, μ_x) |2 Λ^(−1) Σ_x + I|^(−1/2) exp( (x̄_{ij} − μ_x)ᵀ (Σ_x + ½Λ)^(−1) Σ_x Λ^(−1) (x̄_{ij} − μ_x) ),    (21)

where we define x̄_{ij} = (x_i + x_j)/2.
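The sketch below evaluates the exact mean of Eqs. (18)–(19) for a standard RBF kernel parameterized by length scales, and checks it against Monte Carlo sampling; the kernel parameterization, names, and toy data are assumptions for illustration and do not reproduce GEBHM's robust GP implementation.

```python
import numpy as np

def rbf(A, B, ell, sf2):
    """RBF kernel sf2 * exp(-0.5 * sum_k ((a_k - b_k)/ell_k)^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) / ell) ** 2
    return sf2 * np.exp(-0.5 * d2.sum(-1))

def robust_mean(X, y, ell, sf2, sn2, mu_x, Sigma_x):
    """Exact E_{x*}[mu(x*)] for x* ~ N(mu_x, Sigma_x) with an RBF kernel
    (Girard/Quinonero-Candela style); a sketch, not GEBHM's implementation."""
    K = rbf(X, X, ell, sf2) + sn2 * np.eye(len(X))
    beta = np.linalg.solve(K, y)
    Lam = np.diag(ell ** 2)
    S = Sigma_x + Lam
    diff = X - mu_x
    q = sf2 / np.sqrt(np.linalg.det(Sigma_x @ np.linalg.inv(Lam) + np.eye(len(ell)))) \
        * np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S), diff))
    return q @ beta

# Monte-Carlo check of the analytic mean on a toy 1-D problem
rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, (25, 1)); y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=25)
ell, sf2, sn2 = np.array([1.0]), 1.0, 0.05 ** 2
mu_x, Sigma_x = np.array([0.5]), np.array([[0.2 ** 2]])
K = rbf(X, X, ell, sf2) + sn2 * np.eye(25)
xs = rng.multivariate_normal(mu_x, Sigma_x, 20000)
mc = np.mean(rbf(xs, X, ell, sf2) @ np.linalg.solve(K, y))
print(robust_mean(X, y, ell, sf2, sn2, mu_x, Sigma_x), mc)   # should agree closely
```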

Application: Robust Gaussian Process

An application presented here is a customized subset of the proposed NASA UQ Challenge [5]. The optimization problem is to minimize the instability with a constraint on the magnitude of the instability uncertainty. The instability is a function of four Gaussian random variables, and the design variables are their respective mean values. For the optimization, a genetic algorithm (GA) was used with a population of 100, set to run for 10 generations. At each optimization step, the single-loop approach calculates m and v directly from a robust GP with a single function evaluation. In comparison, with the double-loop approach, the uncertainty metrics are calculated using Monte Carlo samples from a standard GP. The time histories of the GA iterations for the double-loop and single-loop approaches are provided in Fig. 8. The robust optimum was found between generations 3 and 4 for both approaches. The single-loop approach takes 336 function calls, while the double-loop approach reaches the minimum instability value after 347,000 function calls; the double-loop approach thus costs about 1000 times more than the single-loop approach to find the robust optimum. For more details on the theory and application of the robust GP, please refer to [35, 11].

Figure 8: Comparison of generation history of Genetic Algorithm for robust design optimization problem between double-loop approach and single-loop approach (figures reproduced from [35])

3.3 Bayesian Sobol Sensitivity Analysis

Design and analysis of complex engineering systems often requires a thorough understanding of the relative importance of the various inputs. In this work, the model of an engineering system could be a physical experiment and/or a computer simulation which can be evaluated at a specific set of input conditions. Such models are known as expensive black-box functions. Additional challenges usually associated with such models are a high-dimensional input space and a lack of gradient information. The inputs to the model can include design variable(s) and unknown parametric constant(s) of the model. Knowledge of the relative sensitivity of the inputs is critical for designers making decisions under uncertainty, as it helps in expediting design exploration, identifying the set of critical variables, and making simulation decisions under limited data.

Variance-based methods, referred to here as Bayesian Sobol sensitivity analysis (BSSA), are flexible enough to obtain sensitivity information for high-dimensional problems with limited data under uncertainty. BSSA investigates how the uncertainty in the black-box model of a system can be apportioned among the different uncertain inputs and their interactions. In addition, the Sobol sensitivities allow one to infer the sensitivity of the underlying physical response to higher-order interactions. All the main-order and higher-order effects are computed using analytical expressions.

We assume that the inputs are uncorrelated and have the following distributions:

w_i ~ U(0, 1),  i = 1, …, d,    (22)

where d is the dimensionality of the input space and U(0, 1) is a standard uniform distribution. Since we do not have full access to the true response, we resort to using a probabilistic representation of it as described in Eq. (1). Denoting the posterior predictive mean of the predictive simulator by f(x, t), where x represents the design variable(s) and t denotes the calibration parameter(s), we make use of the high-dimensional model representation (HDMR) [40] to represent f as follows:

f(w) = f_0 + Σ_i f_i(w_i) + Σ_{i<j} f_{ij}(w_i, w_j) + … + f_{1…d}(w_1, …, w_d),    (23)

where,

f_0 = E[f(w)],  f_i(w_i) = E[f(w) | w_i] − f_0,  f_{ij}(w_i, w_j) = E[f(w) | w_i, w_j] − f_i(w_i) − f_j(w_j) − f_0.    (24)

To avoid confusion between design variables and calibration parameters, we concatenate x and t to represent the full set of variables as w. This is consistent with the formulation, since BSSA follows the same analytic formulation for both sets of variables. The terms denoted by f_u in Eq. (23) are known as effect functions and are orthogonal under the assumption that the distribution over the input space is uniform and that the true response is square-integrable [40]. This property simplifies the derivation and computation of the BSSA indices for the scenario with uncorrelated inputs:

V = Var[f(w)],    (25)
V_i = Var_{w_i}[ E[f(w) | w_i] ],    (26)
S_i = V_i / V.    (27)

Analytical tractability of the Gaussian form of f, derived in [28, 43], allows one to further represent the BSSA indices as follows:

S_i = E*[V_i] / E*[V],    (28)

and the two-way interactions as:

S_{ij} = E*[ Var_{w_i, w_j}[ E[f(w) | w_i, w_j] ] − V_i − V_j ] / E*[V],    (29)

where E* denotes the expectation with respect to the probabilistic surrogate model. Similarly, we can define the BSSA for higher-order interactions indexed by a variable subset u as:

S_u = E*[V_u] / E*[V].    (30)
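GEBHM evaluates these indices analytically, as noted above. For intuition only, the sketch below estimates first-order Sobol indices of a surrogate's posterior mean by plain Monte Carlo with a Saltelli-style estimator; the test function, sample size, and names are arbitrary stand-ins.

```python
import numpy as np

def sobol_main_indices(f, d, n=20000, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices of f
    over independent U(0,1) inputs (illustration only; GEBHM uses analytic forms)."""
    rng = np.random.default_rng(seed)
    A, B = rng.uniform(size=(n, d)), rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    V = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy(); ABi[:, i] = B[:, i]          # replace column i of A with B's
        S[i] = np.mean(fB * (f(ABi) - fA)) / V       # first-order index estimator
    return S

# Example: the surrogate posterior mean stands in for f (here an analytic test function)
g = lambda W: np.sin(2 * np.pi * W[:, 0]) + 0.5 * W[:, 1] ** 2 + 0.1 * W[:, 2]
print(sobol_main_indices(g, d=3))
```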

Application: Bayesian Sobol Sensitivity Analysis

The structural dynamics problem discussed in the previous section is used as a test-bed to demonstrate the BSSA capability of GEBHM. The BSSA obtained with respect to the design variables and the calibration parameters are shown in Fig. 9.

Figure 9: The Bayesian Sobol sensitivity analysis (BSSA) for the (a) deflection and (b) damping in the vibration problem.

3.4 Bayesian Sobol Sensitivity Analysis with Correlated Inputs

The work discussed in the previous sections assumes that the inputs are uncorrelated. In that case, the HDMR has orthogonal effect functions, and one may compute each Sobol index independently. The presence of correlated inputs mandates sampling from the joint distribution of the inputs, which increases the computational cost tremendously. At the same time, ignoring correlations can have a detrimental effect on the inferred BSSA indices, as has been highlighted in [37].

Saltelli et al. [37] and other authors have described how sensitivity can be carried over between inputs due to correlation and stress the need to be careful when drawing conclusions from Sobol indices when inputs are correlated. The BSSA approach developed by Xu et al. [55] splits the contribution of an individual input to the uncertainty of the model output into two components: correlated and uncorrelated. This approach was carried forward by Li et al. [24] with a relaxed HDMR concept to compute the variance contribution due to correlation and problem structure for non-linear models. Chastaing et al. [3] developed a hybrid method for decomposing the variance by utilizing the hierarchical orthogonality of component functions and the Gram–Schmidt method. A hybrid method using polynomial chaos expansions and the copula method was proposed in [48].

We proceed with the framework proposed by Li and Rabitz [24, 25], which enforces orthogonality only on the lower-order component functions, a technique commonly known as hierarchical orthogonality. In other words, f_{ij} is only required to be orthogonal to f_i and f_j, and not to any other component functions. Following the above approach, the output variance can be decomposed as follows:

V = Var[f(w)] = Σ_u ( V_u^s + V_u^c ).

Here, u and v are multi-indices that denote the different interaction terms, and V_u^s and V_u^c are the structural and correlative variance contributions. The total index is then the sum of the structural and correlative contributions:

S_u = S_u^s + S_u^c,    (31)
S_u^s = V_u^s / V,  S_u^c = V_u^c / V,    (32)

where V_u^s = Var[f_u(w_u)] and V_u^c = Cov[f_u(w_u), Σ_{v≠u} f_v(w_v)]. Note that the structural and correlative component functions cannot be estimated independently of each other in the presence of input correlation, and more advanced methods are needed to infer the BSSA indices for additive models.

The use of GEBHM's BSSA with uncorrelated and correlated inputs has been successfully demonstrated on several industrial applications [41, 42, 22]. The usefulness of this framework is best highlighted in problems with reasonably high dimensionality and limited observed or simulation data under uncertainty, while being fully integrated with GEBHM's model calibration framework. We demonstrate the approach on an industrial application in the following section.

Application: Bayesian Sobol Sensitivity Analysis with Correlated Inputs

To highlight the impact of the correlated BSSA capability of GEBHM, we apply the framework to an engine vibration problem. The problem has one output and six design variables, and correlation exists between two of the inputs. The impact of including the correlative term on the overall ordering of the BSSA is highlighted in Fig. 10 (a) and (b). The structural BSSA alone is unable to capture the effect of the correlation between the two inputs; however, the final BSSA, with the correlative part included, reorders some of the interaction effects involving these two inputs.

Figure 10: The total (a) and the structural (b) Bayesian Sobol sensitivity analysis (BSSA) of the vibration problem

3.5 Portable Gaussian Process

It is often the case that a small number of effects, as identified in Sec. 3.3, explain the majority of the GEBHM metamodel's predictive trends. In addition, we have encountered a number of applications where the storage footprint and computational cost associated with computing predictions with the full GEBHM model make it infeasible to apply. In this section, we turn our attention to a method for approximating the posterior mean of the GEBHM model by leveraging the observation that it may be approximated by a combination of a relatively small number of effect functions; each of these effect functions is in turn approximated by a generalized linear model. Doing so eliminates the undesirable nonparametric characteristics of the GEBHM model while incurring a negligible penalty in predictive performance. Due to the lightweight nature of this model, we refer to it as a portable Gaussian process or portable Bayesian Hybrid Model (pBHM) [19]. The remainder of this section describes the methodology by which one constructs a pBHM.

The approach takes as input a trained GP model from GEBHM and the set X over which one expects to make predictions, equipped with a density p(x). We consider a scalar posterior mean f̄(x) of the GP model, recognizing that one may create a multi-output model by repeating the process outlined in this section for the other output dimensions. The output of the approach is a parametric approximation of f̄(x) that may be used for prediction over X.

Carrying out the sensitivity analysis of Sec. 3.3, we compute the Sobol indices of the HDMR of Eq. (23) and identify those whose magnitudes are greatest; let the set of retained Sobol indices be represented by {S_u}, where u is the set of dimensions associated with an index. We retain the effect functions corresponding to the largest Sobol indices, picking the number of terms such that the sum of the associated indices exceeds some threshold (typically 99%), ensuring that this truncated series captures the majority of the variation of f̄ over X. Note that if the GP kernel function is of the form in Eq. 2 and p(x) factorizes into uniform or Gaussian marginals in each of the input dimensions, then the Sobol indices may be computed analytically.

As shown in Sec. 3.3, if the kernel is parametric with an exponentiated quadratic form [e.g., as in Eq. (5)] and p(x) is either uniform or Gaussian, then the effect functions may be computed analytically. We now consider the task of fitting a parametric model to a given effect function f_u. This task is conducted independently for each u contained in the retained set and may therefore be trivially parallelized. We sample points from some experimental design over the input space in the dimensions contained in u. This may come from a space-filling design (e.g., a Latin hypercube) or simply by sampling from the marginal p(x_u). We collect the inputs and corresponding outputs under the effect function as a training dataset D_u. We seek to learn a generalized linear model

f̂_u(x_u) = φ(x_u)ᵀ c,    (33)

where φ(·) contains a set of basis functions and c the corresponding coefficients. We have used polynomial basis functions as well as splines in previous work [19]. We determine c by minimizing a mean squared error loss on D_u with an additional penalty to induce sparsity in c; the magnitude of the regularization hyperparameter is tuned by cross-validation. Note again that it is comparably cheap to generate data from the effect function to be approximated, so cross-validation is not subject to troubles arising from data scarcity.
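A minimal sketch of this fitting step follows: sample a (here hypothetical) main-effect function densely, expand it in a polynomial basis, and fit the coefficients with an L1 penalty tuned by cross-validation. The scikit-learn-based implementation is an assumption for illustration, not the pBHM code of [19].

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import PolynomialFeatures

# Sketch of fitting a sparse polynomial to one main-effect function f_i.
# The effect function here is a stand-in; in pBHM it comes analytically from the GP.
effect = lambda x: np.sin(3 * x) * np.exp(-x)        # hypothetical effect function

x = np.random.default_rng(3).uniform(0.0, 2.0, 400)  # cheap to sample densely
y = effect(x)

Phi = PolynomialFeatures(degree=9, include_bias=True).fit_transform(x[:, None])
model = LassoCV(cv=5, max_iter=50000).fit(Phi, y)    # L1 penalty tuned by cross-validation
print("nonzero coefficients:", np.sum(model.coef_ != 0))
print("max abs error:", np.max(np.abs(model.predict(Phi) - y)))
```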

Application: Portable Gaussian Process

Here, we demonstrate the application of portable Gaussian process or pBHM to model crack growth for an airframe component. We focus on constructing a surrogate model for a stress intensity factor (SIF), a key quantity driving crack growth, as a function of load and geometry. Because the surrogate will be used in a many-query setting, quick predictions are highly desired.

A GP model was trained using GEBHM on data generated via finite element simulations, parameterized by the load and geometry inputs. A sensitivity analysis using GEBHM reveals that third- and higher-order interactions are negligible. We consider several parametric fits for the effect functions, eventually choosing spline fits. Fig. 11 shows the fit obtained for a representative main effect function.

Figure 11: pBHM SIF modeling: Comparison of the effect function for the SIF predicted by the source BHM model and several candidate parametric approximations. From [19].

The resulting spline model is tested on 360 held-out validation data points. The percent errors of the original GP model from GEBHM and of the spline version are shown in Fig. 12. The mean absolute error is 1.9% for the GP and 2.9% for the spline model. The root-mean-square error is 5.2% for the GP and 11.9% for the spline model. In exchange for a modest decrease in accuracy, the spline model enjoys a 50x speedup over the GP model.

Figure 12: pBHM crack growth modeling: Comparison of the effect function for the stress intensity factor predicted by the source BHM model and several candidate parametric approximations used by the pBHM model. From [19].

3.6 Multi-Source Legacy Data Modeling

For a new industrial design, typically only sparse data are available due to the cost and time associated with high-fidelity analyses and experimentation. For such complex systems, sparse data may not be sufficient to build an accurate model of design performance. Typically, a multi-fidelity approach is sought, in which one generates relatively abundant data from a cheaper lower-fidelity model and integrates it with the sparse high-fidelity data. However, in an industrial setting, high-fidelity data are generally available from legacy designs which are not exactly the same as the new design but belong to a similar family. For this case, we extend the capability to handle data from multiple legacy sources to improve the predictive capability for the new system. This enhancement can be applied even if low-fidelity data are available from the legacy systems.

In multi-source legacy modeling with GEBHM, a separate model is built for each legacy data source by combining the legacy data with the new system data. For the k-th legacy data source, the legacy model is built by treating the data from that legacy source as simulation data and the new system data as experimental or high-fidelity data. It is assumed that the input variables (design, operational, etc.) and the output variables (performance, cost, etc.) are the same for the new system and all the legacy systems. When a legacy data source has more variables than the new system, the extra variables in the legacy data can be treated as calibration parameters.

If K legacy models are generated, then the multi-source prediction at a new location x* is evaluated as

ŷ(x*) = Σ_{k=1}^{K} V_k(x*) ŷ_k(x*),    (34)

where ŷ_k is the prediction of the k-th legacy model and V_k is its model validity, a function of the input variables such that Σ_k V_k(x) = 1 for any value of x.

The model validity associated with a legacy model signifies the relative confidence in that legacy model at a given design location x. There are multiple ways to evaluate model validity, such as expert opinion, Bayes' factors, the relative age of the legacy data, etc. In GEBHM, we use two statistical metrics to evaluate model validity: likelihood-based model validity and uncertainty-based model validity. In the likelihood-based model validity, the validity of a legacy model at an input setting where data for the new design are available is proportional to the probability of the legacy model predicting the output or performance of the new design. For example, the likelihood of the k-th legacy model at a location x_i, for which the output of the new design is known to be y_i, is given as

L_k(x_i) = N( y_i; μ_k(x_i), σ_k(x_i) ),    (35)

where μ_k(x_i) and σ_k(x_i) are the predictive mean and standard deviation of the legacy model at x_i. In the uncertainty-based model validity, the model validity at x_i is inversely proportional to the predictive uncertainty as

U_k(x_i) ∝ 1 / σ_k(x_i).    (36)

The overall model validity of the k-th legacy model at a given design point is then given as

V_k(x_i) = c L_k(x_i) U_k(x_i),    (37)

where c is a proportionality factor that is constant across all legacy models at a given x_i. GP models of the model validity, V_k(x), are then built for each legacy model using the overall model validity evaluated at the design points at which new system data are available. For more details, please refer to [9].
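A minimal sketch of the validity-weighted combination described in Eqs. (34)–(37), with hypothetical legacy-model predictions; the specific way the two validity metrics are combined and normalized here is an illustrative assumption.

```python
import numpy as np

def combine_legacy_predictions(means, stds, likelihoods):
    """Validity-weighted multi-source prediction in the spirit of Eqs. (34)-(37):
    weights proportional to (likelihood-based validity) x (1 / predictive std),
    normalized to sum to one at each design point.  Illustrative sketch only."""
    means, stds, likelihoods = map(np.asarray, (means, stds, likelihoods))
    validity = likelihoods / stds              # combine the two validity metrics
    weights = validity / validity.sum(axis=0)  # normalize across legacy models
    return (weights * means).sum(axis=0), weights

# Three hypothetical legacy models evaluated at two new design points
mu  = [[0.90, 1.10], [0.95, 1.30], [1.20, 1.15]]
sd  = [[0.05, 0.20], [0.10, 0.10], [0.30, 0.05]]
lik = [[0.80, 0.30], [0.60, 0.70], [0.20, 0.90]]
pred, w = combine_legacy_predictions(mu, sd, lik)
print(pred, w.sum(axis=0))   # weights sum to 1 at each design point
```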

Application: Multi-Source Legacy Data Modeling

In one of our applications, we have used multi-source legacy data modeling to predict pump efficiency for a new pump design. These pumps are required for heavy-duty service in refinery, petro-chemical, and other industrial settings, pumping fluids under a wide range of pressures and particularly at high temperatures. The goal is to build a predictive model of the maximum efficiency of the new pump design as a function of nine input parameters, such as blade angle, eye diameter, hub diameter, pump speed, number of impeller blades, etc. For the new pump design, we have performance data at a limited number of input-parameter settings. We also have data available from five legacy pumps, with the number of data points varying across sources. We used all these data to build a multi-source legacy data model for the new design and carried out cross-validation to demonstrate the improvement in accuracy of multi-source legacy data modeling over the other models. Fig. 13 shows the cross-validation error against the standard error of cross-validation for the different models. "New-Sys" is the model built on the new system data only, "Legacy-k" (for k = 1, …, 5) are the models based on multi-fidelity modeling combining the k-th pump legacy data with the new system data, and "MS-Legacy" is the model based on multi-source legacy modeling. As seen, multi-fidelity modeling using data from legacy pump 3 or pump 4 improves the model prediction when compared to the model built on new system data alone. However, multi-source modeling using all the legacy data improves the model significantly.

Figure 13: Cross validation error vs standard error for legacy data model compared with other models for pump maximum efficiency

3.7 Parallelizing GEBHM’s MCMC

While a GP does not assume a functional form for the problem, it is defined by a set of parameters, called hyperparameters, that need to be learned from the data. The hyperparameters define characteristics of the objective function, such as smoothness, magnitude, periodicity, etc. Accurately estimating these hyperparameters is a key ingredient in developing a reliable and generalizable surrogate model. Markov chain Monte Carlo (MCMC) is a ubiquitously used Bayesian method for estimating these hyperparameters. GEBHM is very effective on problems of small and medium size, typically with fewer than 1000 training points. However, traditional GP models do not scale well with respect to the dimension and size of the dataset, and for some challenging industry applications the predictive capability of the GP is required but each second spent training the GP costs thousands of dollars. In this work, we apply a scalable MCMC-based methodology enabling the modeling of large-scale industry problems. To this end, we extend and implement in GEBHM an adaptive sequential Monte Carlo (ASMC) methodology for training the GP. This implementation saves computational time (especially for large-scale problems) while not sacrificing predictability relative to the current MCMC implementation. For more details on the theoretical workings of the method, we refer interested readers to [30, 2]. We demonstrate the effectiveness and accuracy of GEBHM with ASMC on a two-objective challenging industry application below.
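For intuition, the sketch below implements a generic likelihood-tempered SMC sampler whose per-particle moves parallelize naturally; it is not GEBHM's ASMC (see [30, 2]), and the toy likelihood, prior, schedule, and tuning constants are assumptions for the example.

```python
import numpy as np

def tempered_smc(loglik, logprior, sample_prior, n_particles=64, n_steps=10, seed=0):
    """Likelihood-tempered SMC: targets prior(theta) * lik(theta)^beta along an
    increasing schedule beta: 0 -> 1, with resampling and a random-walk MH
    rejuvenation move.  The per-particle work parallelizes naturally.
    Generic sketch, not GEBHM's adaptive SMC."""
    rng = np.random.default_rng(seed)
    theta = sample_prior(n_particles, rng)                   # (P, D) particles from the prior
    ll = np.array([loglik(t) for t in theta])
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        w = np.exp((b1 - b0) * (ll - ll.max()))              # incremental importance weights
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        theta, ll = theta[idx], ll[idx]                      # multinomial resampling
        step = 0.2 * theta.std(axis=0) + 1e-6                # adapt proposal to particle spread
        prop = theta + rng.normal(scale=step, size=theta.shape)
        ll_prop = np.array([loglik(t) for t in prop])
        log_alpha = b1 * (ll_prop - ll) + np.array([logprior(p) for p in prop]) \
                    - np.array([logprior(t) for t in theta])
        accept = np.log(rng.uniform(size=n_particles)) < log_alpha
        theta[accept], ll[accept] = prop[accept], ll_prop[accept]
    return theta

# Toy example: Gaussian "likelihood" over two hyperparameters, standard-normal prior
loglik = lambda t: -0.5 * np.sum((t - np.array([0.5, -1.0])) ** 2 / 0.3 ** 2)
logprior = lambda t: -0.5 * np.sum(t ** 2)
samples = tempered_smc(loglik, logprior, lambda n, r: r.normal(0.0, 1.0, (n, 2)))
print(samples.mean(axis=0), samples.std(axis=0))
```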

Application: Parallelizing GEBHM’s MCMC

The problem is a GE combustion test data problem, where we aim to build a GP model to predict two emission quantities as functions of three measured temperatures, three fuel-flow parameters, and the air flow. Thus, nine hyperparameters need to be estimated for each individual objective, and the total number of hyperparameters for the GP model is 19. The MCMC settings for this problem are 10000 steps, with 1000 steps during the initialization.

Figure 14: Subfigure (a) Number of particles on a workstation (red dots) are 6, 12, 30, and 60, and Subfigure (b) number of particles on the HPC (red dots) are 24, 48, 120, 240, and 480, across the time axis, respectively for the first objective of the combustion problem. From [30]
Figure 15: Subfigure (a) Number of particles on a workstation (red dots) are 6, 12, 30, and 60, and Subfigure (b) number of particles on the HPC (red dots) are 24, 48, 120, 240, and 480, across the time axis, respectively for the second objective of the combustion problem. From [30]

The significance of the results in Fig. 14 and Fig. 15 is noticeable for both objectives. Not only does the scaled-up HPC ASMC do well in terms of computational time, but it also improves the predictive accuracy of the GP model compared to GEBHM's MCMC. The number of steps per ASMC particle is one for the workstation runs and five for the HPC runs, and the number of sequential distributions has been fixed to 10 for both computing environments.

This provides a holistic overview of ASMC's performance on different types of problems. Problems of varying input dimensionality, varying numbers of objectives, and different training data sizes have been treated with the extended ASMC and compared to the current MCMC implementation. ASMC does equally well or better than MCMC in terms of predictive accuracy, while cutting the time by half or more for most of the challenging problems.

4 Conclusion and future direction

We discussed GE’s Bayesian probabilistic modeling framework, GEBHM, and various advanced methods developed for industrial applications. Although we only discussed the core capabilities of GEBHM in this work, these capabilities have also enabled development on other frontiers such as intelligent/adaptive design optimization [21, 26, 20]. The calibration and sensitivity analysis capabilities of GEBHM were demonstrated on a new, challenging industrial structural dynamics problem. A summary of the numerous enhancements to the GEBHM framework is provided in Table 1. With the increasing complexity of industrial applications and the demand for further improvement in probabilistic modeling methods, some future directions of enhancement are as follows:

Enhancement | Capability enabled | Current deficiency | Demonstrated application
Transient problem | Handling of temporal responses | None | 1-D rod heat transfer
Robust Gaussian process | Handling input uncertainty | Handles only Gaussian noise on inputs | Airplane flight controller (NASA UQ challenge 2014)
Bayesian sensitivity with correlated inputs | Handling sensitivity for correlated inputs | Handles only linear correlations | Structural vibration problem
Portable Gaussian process | Parametric representation of GEBHM meta-modeling | Limited predictive uncertainty representation | Crack growth problem
Multi-source legacy modeling | Integrating multiple sources of data | Input dimensionality needs to be the same for all sources | Pump design problem
Parallelizing GEBHM's MCMC | Enabling GEBHM to handle large-scale industrial problems | Slower than deterministic approaches | Combustion test problem
Table 1: Enhancements to GEBHM's core capabilities.

Preconditioned Conjugate Gradient Method

The demanding computational cost and memory requirements constitute a significant challenge for training and prediction with GPs on large datasets. The complexity stems from the requirement of multiple solutions of a large-scale system of linear equations involving the kernel matrix. Inspired by the concept of divide-and-conquer, we will develop a scalable GP algorithm based on domain decomposition methods. The proposed algorithm will partition the hard-to-solve problem into smaller, easy-to-solve ones which can be tackled concurrently. The solutions of the local problems will be utilized to construct a scalable preconditioner for the preconditioned conjugate gradient method (PCGM) to tackle the global problem. For complex industrial applications, a multi-level and multi-fidelity preconditioner will be essential for cost-effective performance and robustness of the algorithm [44, 45, 46].
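A minimal sketch of the idea, assuming a dense RBF kernel matrix and a block-Jacobi preconditioner built from subdomains of nearby points; the multi-level, multi-fidelity preconditioner envisioned above is not implemented here, and all names are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, (1200, 1))
K = np.exp(-0.5 * (X - X.T) ** 2) + 1e-2 * np.eye(len(X))   # RBF kernel + noise term
y = np.sin(X[:, 0])

# Block-Jacobi preconditioner: invert the kernel restricted to each subdomain
order = np.argsort(X[:, 0])                 # group nearby points into blocks
blocks = np.array_split(order, 8)
inv_blocks = [np.linalg.inv(K[np.ix_(b, b)]) for b in blocks]

def apply_prec(r):
    """Apply the block-diagonal inverse as a preconditioner."""
    z = np.empty_like(r)
    for b, Binv in zip(blocks, inv_blocks):
        z[b] = Binv @ r[b]
    return z

M = LinearOperator(K.shape, matvec=apply_prec)
alpha, info = cg(K, y, M=M, maxiter=500)    # preconditioned conjugate gradient solve
print(info, np.linalg.norm(K @ alpha - y))  # info == 0 indicates convergence
```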

Scalable Gaussian process

With the rapid development of high-performance computing, big data, and the Internet of Things, it has become necessary to scale up GPs for modeling large datasets. Various frameworks have been proposed to scale up GPs, such as sparse Gaussian processes for global approximation and distributed Gaussian processes for local approximation. Most existing schemes focus on speeding up training while preserving the overall prediction accuracy. However, certain cases require special treatment. For example, the dataset could come from a non-uniform distribution in which the data are scattered as clusters across the input space. For modeling large datasets with non-uniform distributions, scalable schemes might overlook the clusters with relatively little data, which deteriorates the predictive performance in the overlooked areas. It is possible to fix this issue with intelligent selection of a subset for GP development with reasonable coverage of the whole dataset [57]. The intelligent selection should take into account both the input variables and the output values; a small subset is expected to be adequate when the large dataset contains more than enough data for model development. One simple realization of such a selection is sketched below.
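The following sketch, assumed purely for illustration, clusters the scaled (input, output) pairs and keeps the point nearest each centroid so that sparse regions of a non-uniform dataset remain represented; it is one possible realization of the selection idea, not a specific published scheme.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_subset(X, y, n_subset=200, seed=0):
    """Pick a representative training subset by clustering in the joint
    (scaled input, scaled output) space and keeping the point nearest each
    centroid.  Illustrative sketch of 'intelligent selection'."""
    Z = np.column_stack([X, y[:, None]])
    Z = (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-12)
    km = KMeans(n_clusters=n_subset, n_init=4, random_state=seed).fit(Z)
    idx = np.array([np.argmin(np.linalg.norm(Z - c, axis=1)) for c in km.cluster_centers_])
    return np.unique(idx)

# Non-uniform toy data: a dense cluster plus a sparse one
rng = np.random.default_rng(5)
X = np.concatenate([rng.normal(0, 0.2, (5000, 2)), rng.normal(4, 0.5, (100, 2))])
y = np.sin(X[:, 0]) + X[:, 1]
keep = select_subset(X, y, n_subset=150)
print(len(keep), (X[keep, 0] > 2).sum())   # the sparse cluster is still represented
```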

Deep Gaussian processes

Gaussian process models have remarkable data efficiency properties due to the very narrow, yet reasonable, prior beliefs that they place on the function being learned. A core element of this prior is the GP kernel, which encodes beliefs about the regularity and differentiability of the function being modeled. However, traditional kernels are often selected out of convenience rather than out of the belief that they accurately reflect one's prior knowledge. Many systems are known to violate familiar assumptions such as stationarity, homoscedasticity, and Gaussian marginal densities. One way of overcoming these limitations is by learning new representations of the data being modeled to enter as inputs to the Gaussian process, simultaneously with training the GP itself. In this spirit, recent model architectures, including deep kernel learning [54] and deep Gaussian processes [6], have been shown to improve predictive performance and are becoming increasingly accessible due to the advent of advanced approximate inference techniques [51, 50, 36], software platforms [1, 31, 27], and modern computing hardware. We see considerable opportunity as the community continues to experiment with combining these techniques with traditional modeling approaches.

Modeling multiple datasets

Up to this point, our discussion has focused mainly on modeling a single input-output relationship. However, there are many general classes of problems within which small perturbations alter the details of the input-output relationship being modeled. One could imagine that similar problems previously solved might be used to inform the model for the problem at hand. We are investigating the use of compositional models in which task labels are embedded as latent variables in a multi-dimensional Euclidean space; these latent variables augment the input space, allowing one to simultaneously learn a single model over the augmented input space while learning an embedding of tasks that enables a parsimonious functional representation. We recognize that while one may assemble very large datasets by combining tasks, the complexity of the input space grows at the same time. Thus, there remains a strong case for incorporating uncertainty associated with both the learned embedding and the function relating the augmented input space to the outputs. By following the Bayesian modeling groundwork laid out in the previous sections, we anticipate that one will be able to learn credible models that can be trained with even fewer data than what the BHM framework enables today.

Acknowledgments

The authors acknowledge Dr. Ankur Srivastava, Dr. Felipe Viana, Dr. Arun K. Subramaniyan, Dr. Issac Asher, Dr. You Ling, Dr. Kevin M. Ryan, and Dr. Jesper Kristensen for their critical contributions to the development and enhancement of GEBHM.

References

  • [1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. (2016) Tensorflow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283. Cited by: §4.
  • [2] I. Bilionis, B. A. Drewniak, and E. M. Constantinescu (2015) Crop physiology calibration in the clm. Geoscientific Model Development 8 (4), pp. 1071–1083. Cited by: §3.7.
  • [3] G. Chastaing, F. Gamboa, and C. Prieur (2015) Generalized sobol sensitivity indices for dependent variables: numerical methods. Journal of Statistical Computation and Simulation 85 (7), pp. 1306–1333. Cited by: §3.4.
  • [4] A. Chaudhuri, R. T. Haftka, P. Ifju, K. Chang, C. Tyler, and T. Schmitz (2015) Experimental flapping wing optimization and uncertainty quantification using limited samples. Structural and Multidisciplinary Optimization 51 (4), pp. 957–970. Cited by: §1.
  • [5] L.G. Crespo, S.P. Kenny, and D.P. Giesy (2014-05) The nasa langley multidisiplinary uncertainty quantification challenge. ASME V and V Symposium. Cited by: §3.2.
  • [6] A. Damianou and N. Lawrence (2013) Deep gaussian processes. In Artificial Intelligence and Statistics, pp. 207–215. Cited by: §4.
  • [7] A. I. Forrester and A. J. Keane (2009) Recent advances in surrogate-based optimization. Progress in aerospace sciences 45 (1-3), pp. 50–79. Cited by: §1.
  • [8] J. Gattiker, K. Myers, B. Williams, D. Higdon, M. Carzolio, and A. Hoegh (2016) Gaussian process-based sensitivity analysis and bayesian model calibration with gpmsa. Handbook of uncertainty quantification, pp. 1–41. Cited by: §1, §2.
  • [9] S. Ghosh, I. Asher, J. Kristensen, Y. Ling, K. Ryan, and L. Wang (2018) Bayesian multi-source modeling with legacy data. In 2018 AIAA Non-Deterministic Approaches Conference, pp. 1663. Cited by: §3.6.
  • [10] S. Ghosh, J. Kristensen, Y. Zhang, W. Subber, and L. Wang (2019) A strategy for adaptive sampling of multi-fidelity gaussian process to reduce predictive uncertainty. arXiv preprint arXiv:1907.11739. Cited by: §1.
  • [11] S. Ghosh, P. Pandita, W. Subber, Y. Zhang, and L. Wang (2020) Efficient bayesian inverse method using robust gaussian processes for design under uncertainty. In AIAA Scitech 2020 Forum, pp. 1877. Cited by: §3.2.
  • [12] A. Girard (2004-10) Approximate methods for propagation of uncertainty with gaussian process models. PhD Thesis, University of Glasgow. Cited by: §3.2.
  • [13] P. Hao, B. Wang, and G. Li (2012) Surrogate-based optimum design for stiffened shells with adaptive sampling. AIAA journal 50 (11), pp. 2389–2407. Cited by: §1.
  • [14] D. Higdon, C. Nakhleh, and B. Williams (2008) A bayesian calibration approach to the thermal problem. Computer Methods in Applied Mechanics and Engineering 197. Cited by: §1, §2, §2, §2, §3.1.
  • [15] M. C. Hill (2000) Methods and guidelines for effective model calibration. In Building Partnerships, pp. 1–10. Cited by: §1.
  • [16] Z. Hu and S. Mahadevan (2016) A single-loop kriging surrogate modeling for time-dependent reliability analysis. Journal of Mechanical Design 138 (6), pp. 061406. Cited by: §1.
  • [17] M. Kennedy and A. O’Hagan (2001) Bayesian calibration of computer models (with discussion). Journal of the Royal Statistical Society (Series B) 68. Cited by: §1, §1, §2, §2.
  • [18] M. C. Kennedy and A. O’Hagan (2001) Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63 (3), pp. 425–464. Cited by: §2.
  • [19] J. Kristensen, I. Asher, and L. Wang (2018) Polynomial representation of the gaussian process. In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Cited by: Figure 11, Figure 12, §3.5, §3.5.
  • [20] J. Kristensen, Y. Ling, I. Asher, and L. Wang (2016) Expected-improvement-based methods for adaptive sampling in multi-objective optimization problems. In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. V02BT03A024–V02BT03A024. Cited by: §4.
  • [21] J. Kristensen, W. Subber, Y. Zhang, S. Ghosh, N. C. Kumar, G. Khan, and L. Wang (2019) Industrial applications of intelligent adaptive sampling methods for multi-objective optimization. In Design Engineering and Manufacturing, Cited by: §1, §4.
  • [22] N. C. Kumar, A. K. Subramaniyan, L. Wang, and G. Wiggs (2013) Calibrating transient models with multiple responses using Bayesian inverse techniques. In ASME Turbo Expo 2013: Turbine Technical Conference and Exposition, Cited by: §3.4.
  • [23] N. C. Kumar, A. K. Subramaniyan, L. Wang, and G. Wiggs (2013) Calibrating transient models with multiple responses using Bayesian inverse techniques. In ASME Turbo Expo 2013: Turbine Technical Conference and Exposition, Cited by: Figure 6, §3.1, §3.1, §3.1, §3.1.
  • [24] G. Li, H. Rabitz, P. E. Yelvington, O. O. Oluwole, F. Bacon, C. E. Kolb, and J. Schoendorf (2010) Global sensitivity analysis for systems with independent and/or correlated inputs. The Journal of Physical Chemistry A 114 (19), pp. 6022–6032. Cited by: §3.4, §3.4.
  • [25] G. Li and H. Rabitz (2012) General formulation of HDMR component functions with independent and correlated variables. Journal of Mathematical Chemistry 50 (1), pp. 99–130. Cited by: §3.4.
  • [26] Y. Ling, S. Ghosh, J. Kristensen, I. Asher, K. Ryan, and L. Wang (2018) Intelligent sampling framework for multi-objective optimization in high dimensional design space. In 59th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, AIAA SciTech, Kissimmee, Florida. Cited by: §4.
  • [27] A. G. d. G. Matthews, M. van der Wilk, T. Nickson, K. Fujii, A. Boukouvalas, P. León-Villagrá, Z. Ghahramani, and J. Hensman (2017) GPflow: a Gaussian process library using TensorFlow. The Journal of Machine Learning Research 18 (1), pp. 1299–1304. Cited by: §4.
  • [28] J. E. Oakley and A. O’Hagan (2004) Probabilistic sensitivity analysis of complex models: a Bayesian approach. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 66 (3), pp. 751–769. Cited by: §3.3.
  • [29] J. T. Oden, R. Moser, and O. Ghattas (2010) Computer predictions with quantified uncertainty, part II. SIAM News 43 (10), pp. 1–4. Cited by: §1, §1.
  • [30] P. Pandita, J. Kristensen, and L. Wang (2019) Towards scalable Gaussian process modeling. arXiv preprint arXiv:1907.11313. Cited by: Figure 14, Figure 15, §3.7.
  • [31] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch. Cited by: §4.
  • [32] O. Poudou and C. Pierre (2003) Hybrid frequency-time domain methods for the analysis of complex structural systems with dry friction damping. In 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, pp. 1411. Cited by: §2.
  • [33] J. Quinonero-Candela, A. Girard, J. Larsen, and C. E. Rasmussen (2003-04) Propagation of uncertainty in Bayesian kernel models – application to multiple-step ahead forecasting. In International Conference on Acoustics, Speech and Signal Processing. Cited by: §3.2.
  • [34] J. Quinonero-Candela, A. Girard, and C. E. Rasmussen (2003-10) Prediction at an uncertain input for Gaussian processes and relevance vector machines – application to multiple-step ahead time-series forecasting. Technical report, Technical University of Denmark. Cited by: §3.2.
  • [35] K. M. Ryan, J. Kristensen, Y. Ling, S. Ghosh, I. Asher, and L. Wang (2018) A Gaussian process modeling approach for fast robust design with uncertain inputs. In ASME Turbo Expo 2018: Turbomachinery Technical Conference and Exposition, Cited by: Figure 7, Figure 8, §3.2.
  • [36] H. Salimbeni and M. Deisenroth (2017) Doubly stochastic variational inference for deep gaussian processes. In Advances in Neural Information Processing Systems, pp. 4588–4599. Cited by: §4.
  • [37] A. Saltelli and S. Tarantola (2002) On the relative importance of input factors in mathematical models: safety assessment for nuclear waste disposal. Journal of the American Statistical Association 97 (459), pp. 702–709. Cited by: §3.4, §3.4.
  • [38] T. J. Santner, B. J. Williams, and W. Notz (2003) The design and analysis of computer experiments. Vol. 1, Springer. Cited by: §1.
  • [39] M. Shabouei, W. Subber, C. W. Williams, K. Matouš, and J. M. Powers (2019) Chemo-thermal model and Gaussian process emulator for combustion synthesis of Ni/Al composites. Combustion and Flame 207, pp. 153–170. Cited by: §1.
  • [40] I. M. Sobol (2001) Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation 55 (1-3), pp. 271–280. Cited by: §3.3, §3.3.
  • [41] A. Srivastava, A. K. Subramaniyan, and L. Wang (2014) Hybrid Bayesian solution to NASA Langley Research Center multidisciplinary uncertainty quantification challenge. Journal of Aerospace Information Systems 12 (1), pp. 114–139. Cited by: §3.4.
  • [42] A. Srivastava, A. K. Subramaniyan, and L. Wang (2015) Variance based global sensitivity analysis for uncorrelated and correlated inputs with Gaussian processes. In ASME Turbo Expo 2015: Turbine Technical Conference and Exposition, Cited by: §3.4.
  • [43] A. Srivastava, A. K. Subramaniyan, and L. Wang (2017) Analytical global sensitivity analysis with Gaussian processes. AI EDAM 31 (3), pp. 235–250. Cited by: §3.3.
  • [44] W. Subber and S. Loisel (2014) Schwarz preconditioners for stochastic elliptic PDEs. Computer Methods in Applied Mechanics and Engineering 272, pp. 34–57. Cited by: §4.
  • [45] W. Subber and A. Sarkar (2013) Dual-primal domain decomposition method for uncertainty quantification. Computer Methods in Applied Mechanics and Engineering 266, pp. 112–124. Cited by: §4.
  • [46] W. Subber and A. Sarkar (2014) A domain decomposition method of stochastic PDEs: an iterative solution techniques using a two-level scalable preconditioner. Journal of Computational Physics 257, pp. 298–317. Cited by: §4.
  • [47] A. K. Subramaniyan, N. C. Kumar, L. Wang, D. Beeson, and G. Wiggs (2012-03) Enhancing high-dimensional physics models for accurate predictions with Bayesian calibration. Propulsion-Safety and Affordable Readiness Conference. Cited by: §2.
  • [48] B. Sudret (2008) Global sensitivity analysis using polynomial chaos expansions. Reliability Engineering & System Safety 93 (7), pp. 964–979. Cited by: §3.4.
  • [49] G. Tapia, A. Elwany, and H. Sang (2016) Prediction of porosity in metal-based additive manufacturing using spatial Gaussian process models. Additive Manufacturing 12, pp. 282–290. Cited by: §1.
  • [50] M. Titsias and M. Lázaro-Gredilla (2014) Doubly stochastic variational Bayes for non-conjugate inference. In International Conference on Machine Learning, pp. 1971–1979. Cited by: §4.
  • [51] M. Titsias (2009) Variational learning of inducing variables in sparse Gaussian processes. In Artificial Intelligence and Statistics, pp. 567–574. Cited by: §4.
  • [52] G. G. Wang and S. Shan (2007) Review of metamodeling techniques in support of engineering design optimization. Journal of Mechanical Design 129 (4), pp. 370–380. Cited by: §1.
  • [53] L. Wang, X. Fang, A. Subramaniyan, G. Jothiprasad, M. Gardner, A. Kale, S. Akkaram, D. Beeson, G. Wiggs, and J. Nelson (2011-06) Challenges in uncertainty, calibration, validation and predictability of engineering analysis models. Proceedings of ASME Turbo Expo 2011. Cited by: §2.
  • [54] A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing (2016) Deep kernel learning. In Artificial Intelligence and Statistics, pp. 370–378. Cited by: §4.
  • [55] C. Xu and G. Z. Gertner (2008) Uncertainty and sensitivity analysis for models with correlated parameters. Reliability Engineering & System Safety 93 (10), pp. 1563–1573. Cited by: §3.4.
  • [56] S. Zhang, P. Zhu, and W. Chen (2013) Crashworthiness-based lightweight design problem via new robust design method considering two sources of uncertainties. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 227 (7), pp. 1381–1391. Cited by: §1.
  • [57] Y. Zhang, J. Kristensen, W. Subber, S. Ghosh, G. Khan, and L. Wang (2020) Remarks for scaling up a general Gaussian process to model large dataset with sub-models. In AIAA Scitech 2020 Forum, pp. 0678. Cited by: §4.
  • [58] Y. Zhang, J. Meeker, J. Schutte, N. Kim, and R. Haftka (2016) On approaches to combine experimental strength and simulation with application to open-hole-tension configuration. In Proceedings of the American Society for Composites: Thirty-First Technical Conference, Cited by: §1.