Conditional deep surrogate models for stochastic, high-dimensional, and multi-fidelity systems

01/15/2019 ∙ by Yibo Yang, et al. ∙ 0

We present a probabilistic deep learning methodology that enables the construction of predictive data-driven surrogates for stochastic systems. Leveraging recent advances in variational inference with implicit distributions, we put forth a statistical inference framework that enables the end-to-end training of surrogate models on paired input-output observations that may be stochastic in nature, originate from different information sources of variable fidelity, or be corrupted by complex noise processes. The resulting surrogates can accommodate high-dimensional inputs and outputs and are able to return predictions with quantified uncertainty. The effectiveness of our approach is demonstrated through a series of canonical studies, including the regression of noisy data, multi-fidelity modeling of stochastic processes, and uncertainty propagation in high-dimensional dynamical systems.


1 Introduction

The analysis of complex systems can often be significantly expedited through the use of surrogate models that aim to minimize the need for repeatedly evaluating the true data-generating process, be it a costly experimental assay or a large-scale computational model. The task of building surrogate models in essence defines a supervised learning problem in which one aims to distill the governing relation between system inputs and outputs. A successful completion of this task yields a simple and cheap mechanism for predicting the system's response to a new, previously unobserved input, which can subsequently be used to accelerate downstream tasks such as optimization loops, uncertainty quantification, and sensitivity analysis studies.

Fueled by recent developments in data analytics and machine learning, data-driven approaches to building surrogate models have been gaining great popularity among diverse scientific disciplines. We now have a collection of techniques that have enabled progress across a wide spectrum of applications, including design optimization Forrester et al. (2007); Robinson et al. (2008); Alexandrov et al. (2001), the design of materials Sun et al. (2010, 2011) and supply chains Celik et al. (2010), model calibration Perdikaris and Karniadakis (2016); Perdikaris (2015); Perdikaris and Karniadakis (2015), and uncertainty quantification Eldred and Burkardt (2009); Ng and Eldred (2012); Padron et al. (2014); Biehler et al. (2015); Peherstorfer et al. (2016a, b, c); Narayan et al. (2014); Zhu et al. (2014); Bilionis et al. (2013); Parussini et al. (2017); Perdikaris et al. (2016). Such approaches are built on the premise of treating the true data-generating process as a black-box, and try to construct parametric surrogates of some form directly from observed input-output pairs. Except perhaps for Gaussian process regression models Rasmussen (2004), which rely on a probabilistic formulation for quantifying predictive uncertainty, most existing approaches use theoretical error bounds to assess the accuracy of the surrogate model predictions and are formulated under rather limiting assumptions on the form of the input-output map (e.g., linear, or smooth and nonlinear). Despite the growing popularity of data-driven surrogate models, a key challenge that still remains open pertains to cases where the inputs and outputs are high-dimensional objects with multiple modalities: vectors with hundreds/thousands of entries, images with thousands/millions of pixels, graphs with thousands of nodes, or even continuous functions and vector fields. Even less well understood is how to build surrogate models for stochastic systems, and how to retain predictive robustness in cases where the observed data is corrupted by complex noise processes.

In this work we aim to formulate, implement, and study novel probabilistic surrogate models in the context of probabilistic data fusion and multi-fidelity modeling of stochastic systems. Unlike existing approaches to surrogate and multi-fidelity modeling, the proposed methods scale well with respect to the dimension of the input and output data, as well as the total number of training data points. The resulting generative models provide enhanced capabilities in learning arbitrarily complex conditional distributions and cross-correlations between different data sources, and can accommodate data that may be corrupted by correlated and non-Gaussian noise processes. To achieve these goals, we put forth a regularized adversarial inference framework that goes beyond Gaussian and mean field approximations; it has the capacity to seamlessly model complex statistical and functional dependencies in the data, remains robust with respect to non-Gaussian measurement noise, discovers nonlinear low-dimensional embeddings through the use of latent variables, and is applicable across a wide range of supervised learning tasks.

This paper is structured as follows. In section 2.1 we provide a comprehensive description of conditional generative models and recent advances in variational inference that have enabled their scalable training. In section 2.2, we review recent findings that pinpoint the limitations of mean-field variational inference approaches and motivate the use of implicit parametrizations and adversarial learning schemes. Sections 2.2.1-2.2.4 provide a comprehensive discussion of how such schemes can be trained on paired input-output observations to yield effective approximations of the conditional density p(y|x). In section 3 we test the effectiveness of our approach on a series of canonical studies, including the regression of noisy data, multi-fidelity modeling of stochastic processes, and uncertainty propagation in high-dimensional dynamical systems. Finally, section 4 summarizes our concluding remarks, while in Appendix A we provide a comprehensive collection of systematic studies that aim to elucidate the robustness of the proposed algorithms with respect to different parameter settings. All data and code accompanying this manuscript will be made available at https://github.com/PredictiveIntelligenceLab/CADGMs.

2 Methods

The focal point of this work is formulating, learning, and exploiting general probabilistic models of the form p(y|x). Conditional probability models of this type aim to capture the statistical dependence between realizations of deterministic or stochastic input/output pairs (x, y), and encapsulate a broad class of problems generally referred to as supervised learning problems. Take for example the setting in which we would like to characterize the properties of a material using molecular dynamics simulations. There, x corresponds to a thermodynamically valid configuration of all the particles in the system, p(x) is the Boltzmann distribution, and y is a collection of correlated quantities of interest that characterize the macroscopic behavior of the system (e.g., Young's modulus, ion mobility, optical spectrum properties, etc.). Given some realizations (x_i, y_i), i = 1, ..., N, our goal is to learn a conditional probability model p(y|x) that not only allows us to accurately predict y for a new x (e.g., by estimating the expectation E[y|x]), but, more importantly, characterizes the complete statistical dependence of y on x, thus allowing us to quantify the uncertainty associated with our predictions. As the dimensionality of x and y is typically very large, this defines a challenging high-dimensional regression problem. Coming to our rescue is our ability to extract meaningful and robust representations of the original data that exploit its structure through the use of latent variables.

2.1 Variational inference for conditional deep generative models

The introduction of latent variables z allows us to express p(y|x) as an infinite mixture model,

p(y|x) = ∫ p(y|x, z) p(z) dz,   (1)

where p(z) is a prior distribution on the latent variables. Essentially, this construction postulates that every output y in the observed physical space is generated by a transformation of the inputs x and a set of latent variables z ~ p(z), i.e. y = f_θ(x, z), where f_θ is a parametrized nonlinear transformation (see figure 1). This construction generalizes the classical observation model used in regression, namely y = f(x) + ε, which can be viewed as a simplified case corresponding to an additive noise model.
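To make this construction concrete, the following minimal sketch draws samples y = f_θ(x, z) with z ~ N(0, 1). Here f_theta is a hypothetical closed-form generator standing in for a trained deep network; pushing prior samples of z through it implicitly defines the conditional density p(y|x), including input-dependent noise that the classical additive model cannot capture.

```python
import math
import random

random.seed(0)

def f_theta(x, z):
    # Hypothetical generator: a deterministic nonlinear map of (x, z).
    # With z ~ N(0, 1), this induces a conditional density p(y|x) whose
    # spread grows with |x| -- an input-dependent noise model.
    return math.sin(2.0 * x) + 0.1 * (1.0 + abs(x)) * z

def sample_conditional(x, n=5000):
    # Draw y ~ p(y|x) by pushing prior samples z ~ p(z) through the generator.
    return [f_theta(x, random.gauss(0.0, 1.0)) for _ in range(n)]

ys = sample_conditional(0.5)
mean = sum(ys) / len(ys)  # approximates E[y | x = 0.5] = sin(1)
```

Note that no analytical form of p(y|x) is ever written down; the density is defined implicitly by the sampling procedure, which is exactly what makes such models "implicit distributions".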

Equation 1 resembles a mixture model: for every possible value of z, we add another conditional distribution p(y|x, z) to p(y|x), weighted by its probability p(z). It is then interesting to ask what the latent variables are, given an input/output pair (x, y). Namely, from a Bayesian standpoint, we would like to know the posterior distribution p(z|x, y). However, in general, the relationship between y and z can be highly non-linear, and both the dimensionality of our observations y and the dimensionality of the latent variables z can be quite large. Since both the marginal distribution p(y|x) and the posterior p(z|x, y) require evaluating the integral in equation 1, they are intractable.

Figure 1: Building probabilistic surrogates using conditional generative models: We assume that each observed data pair (x, y) in the physical space is generated by a deterministic nonlinear transformation of the inputs x and a set of latent variables z, i.e. y = f_θ(x, z). This construction generalizes the classical observation model used in regression, namely y = f(x) + ε, which can be viewed as a simplified case corresponding to an additive noise model.

The seminal work of Kingma and Welling Kingma and Welling (2013) introduced an effective framework for approximating the true underlying conditional p(y|x) with a parametrized model p_θ(y|x, z) that depends on a set of parameters θ. Specifically, they introduced a parametrized approximating distribution q_φ(z|x, y) for the true intractable posterior p(z|x, y), and derived a computable variational objective for estimating the model parameters using stochastic optimization Kingma and Welling (2013). This objective, often referred to as the evidence lower bound (ELBO), provides a tractable lower bound to the marginal likelihood of the model, and takes the form Sohn et al. (2015)

log p(y|x) ≥ E_{q_φ(z|x,y)}[log p_θ(y|x, z)] − KL(q_φ(z|x, y) || p(z)),   (2)

where KL(· || ·) denotes the Kullback-Leibler divergence between the approximate posterior q_φ(z|x, y) and the prior p(z) over the latent variables Kingma and Welling (2013); Sohn et al. (2015). Due to the resemblance of this approach to neural network auto-encoders Vincent et al. (2008, 2010), the model proposed by Kingma and Welling has been coined the variational auto-encoder, and the resulting approximate distributions q_φ(z|x, y) and p_θ(y|x, z) are usually referred to as the encoder and decoder distributions, respectively.
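A minimal numerical sketch of evaluating the ELBO of equation 2 for a single data pair is given below. It assumes a Gaussian encoder with diagonal covariance (so the KL term is available in closed form) and a toy one-dimensional Gaussian decoder; the function names and the specific toy likelihood are illustrative assumptions, not the paper's architecture.

```python
import math
import random

random.seed(1)

def elbo(y, x, enc_mean, enc_logvar, decoder_loglik, n_mc=256):
    """One-sample ELBO estimate: E_q[log p(y|x,z)] - KL(q(z|x,y) || N(0,1)).

    enc_mean/enc_logvar: outputs of a (hypothetical) encoder for the pair (x, y).
    decoder_loglik(y, x, z): log p(y|x,z) under the decoder model.
    """
    # Closed-form KL between a diagonal Gaussian posterior and a standard
    # normal prior: 0.5 * (mu^2 + var - logvar - 1), per latent dimension.
    kl = 0.5 * (enc_mean ** 2 + math.exp(enc_logvar) - enc_logvar - 1.0)
    # Monte Carlo estimate of the expected reconstruction log-likelihood,
    # using the reparametrization z = mu + sigma * eps, eps ~ N(0, 1).
    sigma = math.exp(0.5 * enc_logvar)
    rec = sum(decoder_loglik(y, x, enc_mean + sigma * random.gauss(0.0, 1.0))
              for _ in range(n_mc)) / n_mc
    return rec - kl

# Toy Gaussian decoder: y | x, z ~ N(x + z, 0.1^2)
def loglik(y, x, z):
    return -0.5 * math.log(2 * math.pi * 0.01) - (y - (x + z)) ** 2 / (2 * 0.01)

bound = elbo(y=1.2, x=1.0, enc_mean=0.2,
             enc_logvar=math.log(0.05), decoder_loglik=loglik)
```

Since the bound never exceeds the true log-evidence log p(y|x), maximizing it over the encoder and decoder parameters simultaneously tightens the approximation to the posterior and improves the fit to the data.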

In a short period of time, this line of work has sparked great interest, and has led to remarkable results in very diverse applications – ranging from the design optimization of light emitting diodes Gómez-Bombarelli (2016), to the design of new molecules Gómez-Bombarelli et al. (2018), to the calibration of cosmological surveys Ravanbakhsh et al. (2017), to RNA sequencing Lopez et al. (2017), to analyzing cancer gene expressions Way and Greene (2017) – all involving the approximation of very high-dimensional probability densities. It has also led to many fundamental studies that aim to further elucidate the capabilities and limitations of such models Bousquet et al. (2017); Pu et al. (2017); Rosca et al. (2018); Zheng et al. (2018); Kingma et al. (2016); Rezende and Mohamed (2015), enhance the interpretability of their results Higgins et al. (2016); Zhao et al. (2017); Chen et al. (2018), as well as establish formal connections with well studied topics in mathematics and statistics, including importance sampling Burda et al. (2015); Klys et al. (2018) and optimal transport Genevay et al. (2017); Villani (2008); El Moselhy and Marzouk (2012).

In the original work of Kingma and Welling Kingma and Welling (2013), the encoder and decoder distributions, q_φ(z|x, y) and p_θ(y|x, z), respectively, were both assumed to be Gaussian with a mean and a diagonal covariance parametrized by feed-forward neural networks. Although this facilitates a straightforward evaluation of the lower bound in equation 2, it can result in a poor approximation of the true posterior when the latter is non-Gaussian and/or multi-modal, as well as a poor reconstruction of the observed data Rezende and Mohamed (2015). To this end, several methods have been proposed to overcome these limitations, including more expressive likelihood models van den Oord et al. (2016), more flexible variational approximations Rezende and Mohamed (2015); Kingma et al. (2016); Burda et al. (2015), as well as reformulations that aim to make the variational bound of equation 2 tighter Liu and Wang (2016); Mescheder et al. (2017); Makhzani et al. (2015); Tolstikhin et al. (2017); Pu et al. (2017). Overall, we must underline that such variational inference techniques trade the rigorous asymptotic convergence guarantees enjoyed by sampling-based methods like Markov Chain Monte Carlo for enhanced computational efficiency and performance, although new unifying ideas aim to bridge the gap between the two formulations Titsias (2017); Blei et al. (2017). This trade-off becomes critical in tackling realistic large-scale problems, but it mandates careful validation of these tools to systematically assess their performance.

In the next section we will revisit recent ideas in adversarial learning that enable us to overcome the limitations of classical mean field approximations Wainwright et al. (2008); Blei et al. (2017), and allow us to perform variational inference with arbitrarily flexible approximating distributions. These developments are unifying two of the most pioneering contributions in modern machine learning, namely variational auto-encoders and generative adversarial networks Mescheder et al. (2017); Pu et al. (2017); Goodfellow et al. (2014). Then, we will show how these techniques can be adapted to form the foundations of the proposed work, namely probabilistic data fusion and multi-fidelity modeling, and demonstrate how these tools can be used to accelerate the computational modeling of complex systems.

2.2 Adversarial learning with implicit distributions

The recent works of Pu et al. Pu et al. (2017) and Rosca et al. Rosca et al. (2018) revealed that some drawbacks of the original formulation of Kingma and Welling Kingma and Welling (2013) can be attributed to the form of the variational objective in equation 2. Specifically, they showed that maximizing the ELBO also minimizes an upper bound on the discrepancy between q_φ(z), the marginal (aggregate) posterior over the latent variables, and the prior p(z). By bringing q_φ(z) closer to p(z), the model distribution is brought closer to the marginal reconstruction distribution. Variational inference models learn to sample by maximizing reconstruction quality – via the likelihood term – and reducing the gap between samples and reconstructions – via the KL term in equation 2. Failure to match q_φ(z) and p(z) results in regions in latent space that have high mass under p(z) but not under q_φ(z). This means that prior samples passed through the decoder to obtain a model sample are likely to be far in latent space from the inputs the decoder saw during training. It is this distribution mismatch that results in poor generalization performance from the decoder, and hence bad model samples.

Additional findings Li et al. (2018) suggest that these shortcomings can be overcome by introducing a new variational objective that aims to match the joint distribution of the generative model, p_θ(x, y), with the joint empirical distribution of the observed data, q(x, y). Matching the joint distributions implies that the respective marginal and conditional distributions are also encouraged to match. Here, we argue that matching the joint distribution of the generated data with the joint distribution of the observed data by minimizing the reverse Kullback-Leibler divergence KL(p_θ(x, y) || q(x, y)) is a promising approach for training the conditional generative model presented in equation 1. To this end, the reverse Kullback-Leibler divergence reads as

KL(p_θ(x, y) || q(x, y)) = −H(p_θ(x, y)) − E_{p_θ(x,y)}[log q(x, y)],   (3)

where H(p_θ(x, y)) denotes the entropy of the conditional generative model. The second term can be further decomposed as

E_{p_θ(x,y)}[log q(x, y)] = ∫_{S_p ∩ S_q} p_θ(x, y) log q(x, y) dx dy + ∫_{S_p ∩ S_q^c} p_θ(x, y) log q(x, y) dx dy,   (4)

where S_p and S_q denote the support of the distributions p_θ(x, y) and q(x, y), respectively, while S_q^c denotes the complement of S_q. Notice that by minimizing the Kullback-Leibler divergence in equation 3 we introduce a mechanism that balances the effect of two competing objectives. Specifically, maximization of the entropy term encourages p_θ(x, y) to spread over its support set as widely as possible, while the second integral term in equation 4 introduces a strong (negative) penalty when the supports of p_θ(x, y) and q(x, y) do not overlap. Hence, the support of p_θ(x, y) is encouraged to spread only up to the point that S_p ⊆ S_q, implying that the model only places mass where data is observed. When S_p is strictly contained in S_q, the pathological issue of “mode-collapse” (commonly encountered in the training of generative adversarial networks Goodfellow et al. (2014)) is manifested Salimans et al. (2016). A visual sketch of this argument is illustrated in figure 2.

The issue of mode collapse will also be present if one seeks to directly minimize the reverse Kullback-Leibler objective in equation 3, as this provides no control over the relative importance of the two terms on its right hand side. As discussed in Li et al. (2018), we may rather minimize −λ H(p_θ(x, y)) − E_{p_θ(x,y)}[log q(x, y)], with λ ≥ 0 chosen to control how much emphasis is placed on mitigating mode collapse. It is then clear that the entropic regularization introduced by λ provides an effective mechanism for controlling and mitigating the effect of mode collapse, and, therefore, potentially enhancing the robustness of adversarial inference procedures for learning p(y|x).
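The asymmetry of the reverse KL divergence with respect to support mismatch can be illustrated with a hedged toy example on discrete distributions. Placing model mass outside the data support is infinitely penalized, whereas collapsing onto a strict subset of the data modes incurs only a finite penalty and shrinks the model entropy, which is precisely why an explicit entropy term is needed:

```python
import math

def entropy(p):
    # Shannon entropy H(p) of a discrete distribution.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def reverse_kl(p, q):
    # KL(p || q) = -H(p) - E_p[log q], the discrete analogue of equation 3.
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0:
            if qi == 0:
                return float("inf")  # p puts mass outside the support of q
            total += pi * math.log(pi / qi)
    return total

q_data = [0.25, 0.25, 0.25, 0.25]   # "data" distribution over four modes
p_collapsed = [1.0, 0.0, 0.0, 0.0]  # model that covers only one data mode
q_narrow = [0.5, 0.5, 0.0, 0.0]     # a target whose support excludes modes 3, 4

# Mode collapse is only finitely penalized by the reverse KL...
kl_collapsed = reverse_kl(p_collapsed, q_data)  # = log 4, finite
# ...whereas spilling mass outside the target support is infinitely penalized:
kl_outside = reverse_kl(q_data, q_narrow)       # = infinity
```

The collapsed model also has zero entropy, so adding the −λH(p) term to the objective directly discourages this failure mode while leaving the infinite out-of-support penalty intact.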

Figure 2: Joint distribution matching: Schematic illustration of the proposed inference objective for joint distribution matching via minimization of the reverse KL-divergence. Penalizing a lower bound of the generative model entropy provides a mechanism for mitigating the pathology of mode collapse in training adversarial generative models.

Minimization of equation 3 with respect to the generative model parameters θ presents two fundamental difficulties. First, the evaluation of both distributions p_θ(x, y) and q(x, y) typically involves intractable integrals in high dimensions, and we may only have samples drawn from the two distributions, not their explicit analytical forms. Second, the differential entropy term is intractable as p_θ(x, y) is not known a priori. In the next sections we revisit the unsupervised formulation put forth in Li et al. (2018) and derive a tractable inference procedure for learning p(y|x) from scattered observation pairs (x_i, y_i), i = 1, ..., N.

2.2.1 Density ratio estimation by probabilistic classification

By definition, the computation of the reverse Kullback-Leibler divergence in equation 3 involves computing an expectation over a log-density ratio, i.e. E_{p_θ(x,y)}[log(p_θ(x, y)/q(x, y))]. In general, given samples from two distributions, we can approximate their density ratio by constructing a binary classifier that distinguishes between samples from the two distributions. To this end, we assume that data points drawn from q(x, y) are assigned a label d = 1, while samples drawn from p_θ(x, y) are assigned a label d = 0. Consequently, we can write these probabilities in a conditional form, namely q(x, y) = p(x, y | d = 1) and p_θ(x, y) = p(x, y | d = 0), where p(d = 1 | x, y) and p(d = 0 | x, y) are the class probabilities predicted by a binary classifier D_ψ(x, y). Using Bayes' rule, and assuming equal class priors, it is then straightforward to show that the density ratio of q(x, y) and p_θ(x, y) can be computed as

q(x, y) / p_θ(x, y) = p(d = 1 | x, y) / p(d = 0 | x, y) = D_ψ(x, y) / (1 − D_ψ(x, y)).   (5)

This simple procedure suggests that we can harness the power of deep neural network classifiers to obtain accurate estimates of the reverse Kullback-Leibler divergence in equation 3 directly from data, and without the need to assume any specific parametrization for the generative model distribution p_θ(x, y).
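A minimal sketch of this probabilistic-classification recipe is shown below on a hypothetical one-dimensional stand-in: two Gaussian densities play the roles of the "data" and "model" distributions, and a logistic classifier trained by plain gradient ascent recovers their density ratio via D/(1 − D). For two unit-variance Gaussians the true log-ratio is linear in x, so a linear logit suffices; a deep network classifier plays the same role in higher dimensions.

```python
import math
import random

random.seed(0)

# Samples from the "data" distribution q = N(1, 1), labeled d = 1,
# and from the "model" distribution p = N(0, 1), labeled d = 0.
# True log density ratio: log(q/p) = x - 0.5 (linear in x).
q_samples = [random.gauss(1.0, 1.0) for _ in range(2000)]
p_samples = [random.gauss(0.0, 1.0) for _ in range(2000)]

w, b = 0.0, 0.0  # classifier logit T(x) = w*x + b, with D(x) = sigmoid(T(x))
lr = 0.5
n = len(q_samples) + len(p_samples)
for _ in range(300):
    gw = gb = 0.0
    for x in q_samples:                       # label d = 1
        d_hat = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (1.0 - d_hat) * x
        gb += (1.0 - d_hat)
    for x in p_samples:                       # label d = 0
        d_hat = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (0.0 - d_hat) * x
        gb += (0.0 - d_hat)
    w += lr * gw / n                          # gradient ascent on the
    b += lr * gb / n                          # binary log-likelihood

def ratio(x):
    # Density ratio estimate of equation 5: q(x)/p(x) = D/(1 - D) = exp(T(x)).
    return math.exp(w * x + b)
```

After training, (w, b) should lie near the optimal values (1, −0.5), so ratio(x) tracks the true density ratio without either density ever being evaluated explicitly.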

2.2.2 Entropic regularization bound

Here we follow the derivation of Li et al. Li et al. (2018) to construct a computable lower bound for the entropy H(p_θ(x, y)). To this end, we start by considering random variables (x, y, z) under the joint distribution

p_θ(x, y, z) = q(x) p(z) δ(y − f_θ(x, z)),

where q(x) denotes the distribution of the inputs, p(z) the prior over the latent variables, and δ is the Dirac delta function. The mutual information between y and z, conditioned on x, satisfies the information theoretic identity

I(y; z | x) = H(y | x) − H(y | x, z) = H(z | x) − H(z | x, y),

where H(y | x), H(z | x) are the marginal entropies and H(y | x, z), H(z | x, y) are the conditional entropies Akaike (1998). Since in our setup x is drawn independently of z, and samples of y are generated by the deterministic function f_θ(x, z), it follows that H(z | x) = H(z) and H(y | x, z) = 0. We therefore have

H(y | x) = H(z) − H(z | x, y),   (6)

where H(z) does not depend on the generative model parameters θ.

Now consider a general variational distribution q_φ(z | x, y) parametrized by a set of parameters φ. Then, by the non-negativity of the Kullback-Leibler divergence,

H(z | x, y) = −E_{p_θ(x,y,z)}[log p_θ(z | x, y)] ≤ −E_{p_θ(x,y,z)}[log q_φ(z | x, y)].   (7)

Viewing z as a set of latent variables, q_φ(z | x, y) is a variational approximation to the true intractable posterior over the latent variables, p_θ(z | x, y). Therefore, if q_φ(z | x, y) is introduced as an auxiliary inference model associated with the generative model p_θ(x, y, z), then we can use equations 6 and 7 to bound the entropy term in equation 3 as

H(p_θ(x, y)) ≥ H(z) + E_{q(x)p(z)}[log q_φ(z | x, f_θ(x, z))] + const.,   (8)

where the constant terms do not depend on θ or φ. Note that the inference model q_φ(z | x, y) plays the role of a variational approximation to the true posterior over the latent variables, and appears naturally through information theoretic arguments in the derivation of the lower bound.

2.2.3 Adversarial training objective

By leveraging the density ratio estimation procedure described in section 2.2.1 and the entropy bound derived in section 2.2.2, we can derive the following loss functions for minimizing the reverse Kullback-Leibler divergence with entropy regularization:

L_D(ψ) = −E_{q(x,y)}[log σ(T_ψ(x, y))] − E_{q(x)p(z)}[log(1 − σ(T_ψ(x, f_θ(x, z))))],   (9)

R(θ) = E_{q(x,y)p(z)}[ ||y − f_θ(x, z)||² ],   (10)

L_G(θ, φ) = E_{q(x)p(z)}[ −T_ψ(x, f_θ(x, z)) − λ log q_φ(z | x, f_θ(x, z)) ] + β R(θ),   (11)

where σ(·) is the logistic sigmoid function. For supervised learning tasks, the additional penalty term of equation 10, controlled by the parameter β, encourages a closer fit to the observed individual data points. Notice how the binary cross-entropy objective of equation 9 aims to progressively improve the ability of the classifier T_ψ to discriminate between “fake” samples obtained from the generative model p_θ(x, y) and “true” samples originating from the observed data distribution q(x, y). Simultaneously, the objective of equation 11 aims at improving the ability of the generator to produce increasingly realistic samples that can “fool” the discriminator. Moreover, the encoder q_φ(z | x, y) not only serves as an entropic regularization term that allows us to stabilize model training and mitigate the pathology of mode collapse, but also provides a variational approximation to the true posterior over the latent variables. The way it naturally appears in the objective of equation 11 also encourages the cycle-consistency of the latent variables z; a process that is known to result in disentangled and interpretable low-dimensional representations of the observed data Friedman et al. (2001).

In theory, the optimal set of parameters (θ, φ, ψ) corresponds to the Nash equilibrium of the two-player game defined by the loss functions in equations 9 and 11, for which one can show that the exact model distribution and the exact posterior over the latent variables can be recovered Goodfellow et al. (2014); Pu et al. (2017). In practice, although there is no guarantee that this optimal solution can be attained, the generative model can be trained by alternating between optimizing the two objectives in equations 9 and 11 using stochastic gradient descent as

ψ^{t+1} = ψ^t − η ∇_ψ L_D(ψ^t),   (12)

(θ^{t+1}, φ^{t+1}) = (θ^t, φ^t) − η ∇_{θ,φ} L_G(θ^t, φ^t),   (13)

where η denotes the learning rate.
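The mechanics of such alternating updates can be sketched as follows. This is a hedged toy, not the paper's training loop: the adversarial losses are replaced by simple quadratic stand-ins so the example is self-contained, and finite-difference gradients stand in for backpropagation. Only the alternation schedule (several discriminator updates per generator/encoder update) mirrors the scheme above.

```python
def numeric_grad(loss, params, eps=1e-5):
    # Central-difference gradient of a scalar loss w.r.t. a parameter list
    # (a stand-in for backpropagation in this toy example).
    grads = []
    for i in range(len(params)):
        hi = list(params); hi[i] += eps
        lo = list(params); lo[i] -= eps
        grads.append((loss(hi) - loss(lo)) / (2.0 * eps))
    return grads

def alternating_sgd(loss_D, loss_G, psi, theta_phi, lr=0.1, steps=200, k_disc=2):
    # Alternating scheme: k_disc discriminator updates per generator update.
    for _ in range(steps):
        for _ in range(k_disc):
            g = numeric_grad(lambda p: loss_D(p, theta_phi), psi)
            psi = [v - lr * gi for v, gi in zip(psi, g)]
        g = numeric_grad(lambda t: loss_G(psi, t), theta_phi)
        theta_phi = [v - lr * gi for v, gi in zip(theta_phi, g)]
    return psi, theta_phi

# Toy losses: the "discriminator" tracks the generator's parameter, while the
# "generator" is pulled towards 1.0 and towards agreement with the discriminator.
toy_D = lambda psi, th: (psi[0] - th[0]) ** 2
toy_G = lambda psi, th: (th[0] - 1.0) ** 2 + 0.1 * (th[0] - psi[0]) ** 2

psi_opt, th_opt = alternating_sgd(toy_D, toy_G, [0.0], [0.0])
```

In the real adversarial setting the two losses are in genuine competition, so this clean convergence is not guaranteed; hence the emphasis in the text on monitoring training and on entropic regularization.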

2.2.4 Predictive distribution

Once the model is trained, we can characterize the statistics of the outputs by sampling latent variables z_i from the prior p(z) and passing them through the generator to yield conditional samples that are distributed according to the predictive model distribution p_θ(y|x). Note that although the explicit form of this distribution is not known, we can efficiently compute any of its moments via Monte Carlo sampling. The cost of this prediction step is negligible compared to the cost of training the model, as it only involves a single forward pass through the generator function f_θ(x, z). Typically, we compute the mean and variance of the predictive distribution at a new test point x* as

μ(x*) = (1/N_s) Σ_{i=1}^{N_s} f_θ(x*, z_i),   (14)

σ²(x*) = (1/N_s) Σ_{i=1}^{N_s} [f_θ(x*, z_i) − μ(x*)]²,   (15)

where z_i ~ p(z), i = 1, ..., N_s, and N_s corresponds to the total number of Monte Carlo samples.
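This prediction step amounts to a few lines of code. In the sketch below, f_theta is a hypothetical trained generator (a linear map standing in for a deep network, so the exact predictive moments are known and checkable); predict implements the Monte Carlo estimators of equations 14-15:

```python
import math
import random

random.seed(3)

def f_theta(x, z):
    # Hypothetical trained generator (a stand-in for a deep network).
    # With z ~ N(0, 1), the exact predictive moments are mu = x, var = 0.25.
    return x + 0.5 * z

def predict(x_star, n_samples=10000):
    # Monte Carlo estimates of the predictive mean and variance:
    # draw z_i ~ p(z) = N(0, 1), push through the generator, take sample moments.
    ys = [f_theta(x_star, random.gauss(0.0, 1.0)) for _ in range(n_samples)]
    mu = sum(ys) / n_samples
    var = sum((y - mu) ** 2 for y in ys) / n_samples
    return mu, var

mu, var = predict(2.0)
```

Higher moments, quantiles, or full histograms of p_θ(y|x*) can be estimated from the same batch of samples at no extra generator cost.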

3 Results

Here we present a diverse collection of demonstrations to showcase the broad applicability of the proposed methods. Moreover, in Appendix A we provide a comprehensive collection of systematic studies that aim to elucidate the robustness of the proposed algorithms with respect to different parameter settings. In all examples presented in this section we have trained the models for 20,000 stochastic gradient descent steps using the Adam optimizer Kingma and Ba (2014) with a fixed learning rate, while fixing a one-to-five ratio for the discriminator versus generator updates. Unless stated otherwise, we have also fixed the values of the entropic regularization parameter λ and the residual penalty parameter β. The proposed algorithms were implemented in Tensorflow v1.10 Abadi et al. (2016), and computations were performed in single precision arithmetic on a single NVIDIA Tesla P100 GPU card. All data and code accompanying this manuscript will be made available at https://github.com/PredictiveIntelligenceLab/CADGMs.

3.1 Regression of noisy data

We begin our presentation with an example in which the observed data is generated by a deterministic process but the observations are stochastically perturbed by random noise. Specifically, we consider the following three distinct cases:

  1. Gaussian homoscedastic noise:

    (16)

    where the observations are corrupted by additive, uncorrelated, zero-mean Gaussian noise of constant variance.

  2. Gaussian heteroscedastic noise:

    (17)

    where the observations are corrupted by additive, zero-mean Gaussian noise whose variance depends on the input.

  3. Non-additive, non-Gaussian noise:

    (18)

    where the noise enters the observation model in a non-additive, non-Gaussian manner.

In all cases, we assume access to N training pairs (x_i, y_i), randomly sampled in the input interval according to the empirical data distribution q(x, y). Then, our goal is to approximate the conditional distribution p(y|x) using a generative model y = f_θ(x, z), z ~ p(z), that combines the original inputs x and a set of latent variables z to predict the outputs y.
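The three noise regimes can be emulated with a short data-generation sketch. The noise-free trend f and the specific noise magnitudes below are hypothetical placeholders (the paper's exact test functions in equations 16-18 are not reproduced here); only the qualitative character of each case matters:

```python
import math
import random

random.seed(4)

def f(x):
    # Hypothetical noise-free trend; a placeholder for the paper's test function.
    return x * math.sin(2.0 * math.pi * x)

def make_data(case, n=200):
    data = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        if case == "homoscedastic":
            # Case 1: constant-variance additive Gaussian noise.
            y = f(x) + random.gauss(0.0, 0.1)
        elif case == "heteroscedastic":
            # Case 2: additive Gaussian noise whose amplitude grows with |x|.
            y = f(x) + 0.3 * abs(x) * random.gauss(0.0, 1.0)
        else:
            # Case 3: non-additive, non-Gaussian (multiplicative log-normal) noise.
            y = f(x) * math.exp(0.2 * random.gauss(0.0, 1.0))
        data.append((x, y))
    return data
```

Only the third case breaks the classical y = f(x) + ε assumption outright, which is why models hard-wired to additive Gaussian likelihoods struggle there while the implicit generative model does not.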

As described in section 2, the outputs y are generated by pushing the inputs x and the latent variables z through a deterministic generator function f_θ(x, z), typically parametrized by a deep neural network. Moreover, a discriminator network T_ψ(x, y) is used to estimate, and thereby enable the minimization of, the reverse KL-divergence between the generative model distribution p_θ(x, y) and the empirical data distribution q(x, y). Finally, we introduce an auxiliary inference network to model the approximate posterior distribution over the latent variables, namely q_φ(z|x, y), that encodes the observed data into the latent space using a deterministic mapping, also modeled using a deep neural network.

The proposed conditional generative model is constructed using fully connected feed-forward architectures for the encoder and generator networks with 3 hidden layers and 100 neurons per layer, while the discriminator architecture has 2 hidden layers with 100 neurons per layer. All activations use a hyperbolic tangent non-linearity, and we have not employed any additional modifications such as L2 regularization, dropout, or batch-normalization Goodfellow et al. (2016). During model training, in each epoch we update the discriminator twice and the encoder and generator once, using stochastic gradient updates with the Adam optimizer Kingma and Ba (2014) on the entire data batch. Finally, we keep the entropic regularization penalty parameter λ fixed.

Figure 3 summarizes our results for all cases obtained using:

  1. The proposed conditional generative model described above.

  2. A simple Gaussian process model with a Gaussian likelihood and a squared exponential covariance function trained using exact inference Rasmussen (2004).

  3. A Bayesian neural network having the same architecture as the generator network described above and trained using mean-field stochastic variational inference Neal (2012).

We observe that the proposed conditional generative model returns robust predictions with sensible uncertainty estimates for all cases. On the other hand, the basic Gaussian process and Bayesian neural network models perform equally well for the simple uncorrelated noise case, but suffer from over-fitting and fail to return reasonable uncertainty estimates for the more complex heteroscedastic and non-additive cases. These predictions could in principle be improved with the use of more elaborate priors, likelihoods and inference procedures, however such remedies often hamper the practical applicability of these methods. In contrast, the proposed conditional generative model appears to be robust across these inherently different cases without requiring any modifications or specific assumptions regarding the nature of the noise process.

Figure 3: Regression of noisy data: Training data (black crosses) and the exact noise-free solution (blue solid line) versus the predictive mean (red dashed line) and two standard deviations (orange shaded region) obtained by: (a) the proposed conditional generative model, (b) a Gaussian process regression model, and (c) a Bayesian neural network. Top row panels: Gaussian homoscedastic noise. Middle row panels: Gaussian heteroscedastic noise. Bottom row panels: Non-additive, non-Gaussian noise.

3.2 Multi-fidelity modeling of stochastic processes

In this section we demonstrate how the proposed methodology can be adapted to accommodate the setting of supervised learning from data of variable fidelity. Be it a synthesis of expensive experiments and simplified analytical models, multi-scale/multi-resolution computational models, or historical data and expert opinion, the concept of multi-fidelity modeling lends itself to enabling effective pathways for accelerating the analysis of systems that are prohibitively expensive to evaluate. As discussed in section 1, these methods have been successful in a wide spectrum of applications including design optimization Forrester et al. (2007); Robinson et al. (2008); Alexandrov et al. (2001); Sun et al. (2010, 2011), model calibration Perdikaris and Karniadakis (2016); Perdikaris (2015); Perdikaris and Karniadakis (2015), and uncertainty quantification Eldred and Burkardt (2009); Ng and Eldred (2012); Padron et al. (2014); Biehler et al. (2015); Peherstorfer et al. (2016a, b, c); Narayan et al. (2014); Zhu et al. (2014); Bilionis et al. (2013); Parussini et al. (2017); Perdikaris et al. (2016).

Except perhaps for Gaussian process regression models, most existing approaches to multi-fidelity modeling try to construct deterministic surrogates of some parametric form, and use theoretical error bounds to quantify the accuracy of the surrogate model predictions. For instance, a multi-fidelity problem can be formulated by considering the output of our highest fidelity information source together with the predictions of lower fidelity models, viewed as functions of a vector of space-time coordinates and a vector of uncertain parameters. Despite their growing popularity, the applicability of multi-fidelity modeling techniques is typically limited to systems that are governed by deterministic input-output relations. To the best of our knowledge, this is the first attempt at applying the concept of multi-fidelity modeling to expedite the statistical characterization of correlated stochastic processes.

Without loss of generality, and to keep our presentation clear, we will focus on a setting involving two correlated stochastic processes. Intuitively, one can think of the following example scenario. We want to characterize the statistics of a random quantity of interest (e.g., velocity fluctuations of a turbulent flow near a wall boundary) by recording its value at a finite set of locations and for a finite number of random realizations. However, these recordings may be hard/expensive to obtain as they may require a set of sophisticated and well calibrated sensors, or a set of fully resolved computational simulations. At the same time, it might be easier to obtain more measurements either by probing the same quantity of interest using a set of cheaper/uncalibrated sensors (or simplified/coarser computational models), or by probing an auxiliary quantity of interest that is statistically correlated to our target variable but is much easier to record (e.g., sensor measurements of pressure on the wall boundary). Then our goal is to synthesize these measurements and construct a predictive model that can fully characterize the statistics of the target stochastic process.

More formally, we assume that we have access to a number of high-fidelity input-output pairs corresponding to a finite number of realizations of the target stochastic process, measured at a handful of input locations using high-fidelity sensors. Moreover, we also have access to low-fidelity input-output pairs corresponding to a finite number of realizations of either the target stochastic process or an auxiliary process that is statistically correlated with the target, albeit probed for a much larger collection of inputs. Our goal is then to learn the conditional distribution of the high-fidelity output given the input, using a latent-variable generative model that maps input locations and latent samples to output samples.

We will illustrate this work-flow using a synthetic example involving data generated from two correlated Gaussian processes in one input dimension

(19)

with mean and covariance functions given by

(21)
(22)
(23)
(24)
(25)

Here, the low- and high-fidelity processes employ two different sets of hyper-parameters of a squared exponential kernel

k(x, x'; \theta) = \sigma^{2} \exp\left( -\frac{(x - x')^{2}}{2\ell^{2}} \right), \quad \theta = (\sigma^{2}, \ell). \quad (26)

Moreover, the model includes a parameter that controls the degree to which the two stochastic processes exhibit linear correlations Kennedy and O’Hagan (2000); Perdikaris et al. (2017). In this example we generated a training data-set consisting of 50 realizations of the low- and high-fidelity processes, recorded using sets of sensors fixed at a finite number of input locations (see figure 4(a)).
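To make the data-generating process concrete, the following pure-Python sketch draws paired realizations of two linearly correlated Gaussian processes using the auto-regressive structure of Kennedy and O’Hagan, in which the high-fidelity process is a scaled copy of the low-fidelity process plus an independent Gaussian process discrepancy. The kernel hyper-parameters and correlation strength below are illustrative placeholders, not the values used in our experiments.

```python
import math
import random

def sq_exp(x1, x2, var, ell):
    # Squared exponential kernel (equation 26).
    return var * math.exp(-(x1 - x2) ** 2 / (2.0 * ell ** 2))

def cholesky(a):
    # Plain Cholesky factorization of a symmetric positive-definite matrix.
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] - s)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

def sample_gp(xs, var, ell, rng, jitter=1e-6):
    # Draw one realization of a zero-mean GP at locations xs via L @ z.
    n = len(xs)
    k = [[sq_exp(xs[i], xs[j], var, ell) + (jitter if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    l = cholesky(k)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(l[i][m] * z[m] for m in range(i + 1)) for i in range(n)]

def sample_pair(xs, rho=0.9, rng=None):
    # f_hi = rho * f_lo + delta, with delta an independent discrepancy GP.
    rng = rng or random.Random(0)
    f_lo = sample_gp(xs, var=1.0, ell=0.2, rng=rng)
    delta = sample_gp(xs, var=0.05, ell=0.2, rng=rng)
    f_hi = [rho * a + b for a, b in zip(f_lo, delta)]
    return f_lo, f_hi

xs = [i / 24.0 for i in range(25)]
f_lo, f_hi = sample_pair(xs)
```

Repeating `sample_pair` yields the kind of paired low-/high-fidelity realizations used to train the conditional generative model.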

We employ a conditional generative model constructed using simple feed-forward neural networks with 3 hidden layers and 100 neurons per layer for both the generator and the encoder, and 2 hidden layers with 100 neurons per layer for the discriminator. The activation function in all cases is chosen to be a hyperbolic tangent non-linearity. Moreover, we have chosen a one-dimensional latent space with a standard normal prior. Model training is performed using the Adam optimizer Kingma and Ba (2014) with the same learning rate for all the networks. For each stochastic gradient descent iteration, we update the discriminator once and the generator 5 times, while the entropic regularization penalty parameter is held fixed. Notice that during model training the algorithm only requires joint observations of the low- and high-fidelity outputs at a fixed set of input locations (see figure 4(a)). However, to predict at a new test point one first needs to sample the latent variables from their prior, and then use the generative model to produce output samples.
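At prediction time this procedure reduces to ancestral sampling: draw latent variables from the prior and push them through the trained generator, then summarize the resulting samples. The sketch below illustrates this with a hypothetical stand-in generator (a sine mean plus latent-driven noise); in practice the generator is the trained neural network.

```python
import math
import random
import statistics

def predict(generator, x, n_samples=1000, rng=None):
    # Ancestral sampling: z ~ N(0, 1), then y = generator(x, z).
    rng = rng or random.Random(0)
    ys = [generator(x, rng.gauss(0.0, 1.0)) for _ in range(n_samples)]
    mu = statistics.fmean(ys)
    band = 2.0 * statistics.stdev(ys)  # two-standard-deviation band
    return mu, band

# Hypothetical stand-in for a trained generator network.
toy_generator = lambda x, z: math.sin(x) + 0.1 * z

mu, band = predict(toy_generator, 1.0)
```

The returned mean and two-standard-deviation band are exactly the summaries plotted in figures 4(b) and 5(a).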

The results of this experiment are summarized in figures 4(b) and 5. Specifically, in figure 4(b) we observe a qualitative agreement between the second-order sufficient statistics of the predicted and the exact high-fidelity processes. The effectiveness of the multi-fidelity approach becomes evident when we compare our results against a single-fidelity conditional generative model trained only on the high-fidelity data. The result of this experiment is presented in figure 5(a), where it is clear that the generative model fails to correctly capture the target stochastic process. To make this comparison quantitative, we have estimated the forward and reverse Kullback-Leibler divergences for a collection of one-dimensional marginal distributions corresponding to different spatial locations in the input domain. To this end, we have employed a Gaussian approximation for the predicted marginal densities of the generative model and compared them against the exact Gaussian marginal densities of the target high-fidelity process, using the analytical expression for the KL-divergence between two univariate Gaussian distributions \mathcal{N}(\mu_1, \sigma_1^2) and \mathcal{N}(\mu_2, \sigma_2^2),

D_{KL}\left( \mathcal{N}(\mu_1, \sigma_1^2) \,\|\, \mathcal{N}(\mu_2, \sigma_2^2) \right) = \log\frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} - \frac{1}{2}. \quad (27)
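The closed-form divergence of equation 27 is straightforward to implement; a minimal sketch:

```python
import math

def kl_gauss(mu1, sigma1, mu2, sigma2):
    # Closed-form KL-divergence between two univariate Gaussians (equation 27).
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2.0 * sigma2 ** 2)
            - 0.5)
```

Note that the divergence is asymmetric in its arguments, which is why both the forward and reverse directions are reported in figure 5(b).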

The result of this comparison is shown in figure 5(b) for both the single- and multi-fidelity cases. Clearly, the appropriate utilization of the low-fidelity data results in significant accuracy gains for the multi-fidelity case, while the single-fidelity model is not able to generalize well and suffers from large KL-divergence errors at all locations away from the training data.

Figure 4: Multi-fidelity modeling of stochastic processes: (a) Sample realizations of the low- and high-fidelity stochastic processes (red and blue lines, respectively) along with the sensor measurements used to train the generative model (black and green crosses, respectively). (b) Predicted mean (red dashed line) and two standard deviations (yellow band) for the high-fidelity stochastic process versus the exact solution (blue solid line and green band, respectively).
Figure 5: Multi-fidelity modeling of stochastic processes: (a) Predicted mean (red dashed line) and two standard deviations (yellow band) of a single-fidelity conditional generative model versus the exact solution (blue solid line and green band, respectively). (b) Comparison of the KL-divergence and Reverse-KL-divergence between the exact marginal densities and the predictions of the single- and multi-fidelity conditional generative models.

3.3 Uncertainty propagation in high-dimensional dynamical systems

In this section we aim to demonstrate how the proposed inference framework can leverage modern deep learning techniques to tackle high-dimensional uncertainty propagation problems involving complex dynamical systems. To this end, we will consider the temporal evolution of the non-linear time-dependent Burgers equation in one spatial dimension, subject to random initial conditions. The equation and boundary conditions read as

\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2}, \qquad u = 0 \text{ on the domain boundary}, \quad (28)

where \nu denotes the viscosity parameter Burgers (1948). We will evolve the system starting from a random initial condition generated by a conditional Gaussian process Rasmussen (2004) that constrains the initial sample paths to satisfy zero Dirichlet boundary conditions, i.e.,

u(x, 0) \sim \mathcal{N}\left( K_{*b} K_{bb}^{-1} \mathbf{y}_{b},\; K_{**} - K_{*b} K_{bb}^{-1} K_{b*} \right), \quad (29)

where \mathbf{x}_{b} and \mathbf{y}_{b} are column vectors corresponding to locations and zero data near the domain boundaries, and the covariance matrices K are constructed by evaluating the squared exponential kernel (see equation 26) with fixed variance and length-scale hyper-parameters at the corresponding inputs (see figure 6). The resulting solution to this problem is a continuous spatio-temporal random field u(t, x) whose statistical description defines a non-trivial infinite-dimensional uncertainty propagation problem. As we will describe below, we leverage the capabilities of convolutional neural networks in order to construct a scalable surrogate model that is capable of providing a complete statistical characterization of this random field for any time t and for any finite collection of spatial locations.

Figure 6: Uncertainty propagation in high-dimensional dynamical systems: 100 representative samples of a conditional Gaussian process used as initial conditions for the Burgers equation.

We generate a data-set consisting of 100 sample realizations of the system using a high-fidelity Fourier spectral method Kassam and Trefethen (2005) on a regular spatial grid consisting of 128 points and 256 time-steps. Our goal here is to use a subset of this data to train a deep generative model for approximating the conditional density of the solution given time, where the output vector corresponds to the collocation of the continuous field at the 128 spatial grid-points for a given temporal snapshot. We use data from 64 randomly selected temporal snapshots to train the generative model, and the rest is used for validating our results.
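The training data are generated with a fourth-order Fourier spectral method Kassam and Trefethen (2005); as a much simpler illustration of the dynamics in equation 28, the sketch below advances the viscous Burgers equation with an explicit Euler step, central finite differences, and zero Dirichlet boundaries. The grid, time-step, and viscosity values are illustrative, not those used to produce our data-set.

```python
import math

def burgers_step(u, dx, dt, nu):
    # One explicit Euler step of u_t + u * u_x = nu * u_xx, with u = 0
    # enforced at both boundaries (zero Dirichlet conditions).
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        ux = (u[i + 1] - u[i - 1]) / (2.0 * dx)          # central first derivative
        uxx = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx ** 2  # central second derivative
        new[i] = u[i] + dt * (-u[i] * ux + nu * uxx)
    new[0] = new[-1] = 0.0
    return new

nx, nu_visc = 129, 0.05
dx = 1.0 / (nx - 1)
dt = 1e-4  # satisfies the explicit stability limit dt < dx**2 / (2 * nu)
u = [math.sin(math.pi * i * dx) for i in range(nx)]  # one sample initial condition
for _ in range(1000):
    u = burgers_step(u, dx, dt, nu_visc)
```

Repeating this integration for many random initial conditions drawn as in equation 29 would yield a data-set of spatio-temporal realizations analogous to the one used here.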

To exploit the gridded structure of the data we employ one-dimensional convolutional neural networks Krizhevsky et al. (2012), which allow us to construct a multi-resolution representation of the data that can capture local spatial correlations LeCun et al. (2015); Mallat (2016). To this end, the generator network is constructed using 5 transposed convolution layers, each with a kernel size of 4, a stride of 2, and a hyperbolic tangent activation function in all layers except the last. For the encoder we use 5 convolutional layers, each with a kernel size of 5 and a stride of 2, followed by a batch normalization layer Ioffe and Szegedy (2015) and a hyperbolic tangent activation. The last layer of the encoder is a fully connected layer that returns outputs with the same dimension as the latent variable. Here, we choose the latent space dimension to be 32, with an isotropic normal prior. Finally, for the discriminator we use 4 convolution layers, each with a kernel size of 5, a stride of 2, and a hyperbolic tangent activation function in all layers except the last. The last layer of the discriminator is a fully connected layer that converts the final output into scalar class-probability predictions aiming to correctly distinguish between real and generated samples in the 128-dimensional output space.

Notice that the time variable is treated as a continuous label corresponding to each temporal snapshot, and it is incorporated in our work-flow as follows. For the discriminator and the encoder, we broadcast time as a vector having the same size as the data and treat it as an additional input channel. For the decoder, we broadcast time as a vector having the same size as the latent variable and concatenate the two together. We use the Adam optimizer Kingma and Ba (2014) with the same learning rate for all the networks. For each epoch, we perform one discriminator update and one generator update. Finally, we fix the entropic regularization and data-fit penalty parameters (see equation 11).
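The time-conditioning described above can be sketched with plain Python lists: for the discriminator and encoder the scalar time stamp is tiled to the spatial resolution and stacked as an extra input channel, while for the decoder it is tiled to the latent dimension and concatenated with the latent code. Shapes below mirror our setting (128 grid points, a 32-dimensional latent space) but the helper names are illustrative.

```python
def with_time_channel(u, t):
    # Discriminator/encoder input: stack a broadcast time channel onto the field.
    # u is a length-128 snapshot; the result has shape (2, 128).
    return [u, [t] * len(u)]

def latent_with_time(z, t):
    # Decoder input: broadcast time to the latent dimension and concatenate.
    # z has length 32; the result has length 64.
    return z + [t] * len(z)

snapshot = [0.0] * 128
z = [0.0] * 32
x_disc = with_time_channel(snapshot, 0.25)
x_gen = latent_with_time(z, 0.25)
```

In a deep learning framework the same broadcasting would be performed batch-wise on tensors, but the shapes are as shown.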

Figure 7 provides a visual comparison between reference trajectory samples obtained by high-fidelity simulations of equation 28 and trajectories generated by sampling the trained conditional generative model. A more detailed comparison is provided in figure 8 in terms of one-dimensional slices taken at four distinct time instances that were not used during model training. In both figures we observe a very good qualitative agreement between the reference and the predicted solutions, indicating that the conditional generative model is able to correctly capture the statistical structure of the system. These results are indicative of the ability of the proposed method to approximate a non-trivial 128-dimensional distribution using only scattered measurements from 100 sample realizations of the system.

Figure 7: Uncertainty propagation in high-dimensional dynamical systems: (a) Exact sample trajectories of the Burgers equation. (b) Samples generated by the conditional generative model. The comparison corresponds to 16 different temporal snapshots and depicts 10 samples per snapshot. Each sample is a 128-dimensional vector.
Figure 8: Uncertainty propagation in high-dimensional dynamical systems: Mean (solid blue line) and two standard deviations (green shaded region) of reference simulated trajectories of the Burgers equation versus the predictions of the conditional generative model (red dashed line and yellow shaded region, respectively). Results are reported for four temporal instances that were not used during model training.

4 Discussion

We have presented a statistical inference framework for constructing scalable surrogate models for stochastic, high-dimensional, and multi-fidelity systems. Leveraging recent advances in deep learning and stochastic variational inference, the proposed regularized inference procedure goes beyond mean-field and Gaussian approximations: it can accommodate implicit models that are capable of approximating arbitrarily complex distributions, and it is able to mitigate the mode collapse that often hampers the performance of adversarial generative models. These elements enable the construction of conditional deep generative models that can be effectively trained on scattered and noisy input-output observations, and provide accurate predictions and robust uncertainty estimates. The latter not only serves as a measure for a-posteriori error estimation, but is also a key enabler of downstream tasks such as active learning Cohn et al. (1996) and Bayesian optimization Shahriari et al. (2016). Moreover, the use of latent variables adds flexibility in learning from data-sets that may be corrupted by complex noise processes, and offers a general platform for nonlinear dimensionality reduction. Taken together, these developments aspire to provide a new set of probabilistic tools for expediting the analysis of stochastic systems, as well as to act as a unifying glue between experimental assays and computational modeling.

Our goal for this work is to present a new viewpoint on building surrogate models with a particular emphasis on the methodological foundations of the proposed algorithms. To this end, we confined the presentation to a diverse collection of canonical studies that were designed to highlight the broad applicability of the proposed tools, as well as to provide a test bed for systematic studies that elucidate their practical performance. In the process of gaining a deeper understanding of their advantages and limitations, future studies will focus on realistic large-scale applications in computational mechanics and beyond.

Acknowledgements

This work received support from the US Department of Energy under the Advanced Scientific Computing Research program (grant DE-SC0019116) and the Defense Advanced Research Projects Agency under the Physics of Artificial Intelligence program.

References

  • Forrester et al. (2007) A. I. Forrester, A. Sóbester, A. J. Keane, Multi-fidelity optimization via surrogate modelling, in: Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, volume 463, The Royal Society, pp. 3251–3269.
  • Robinson et al. (2008) T. Robinson, M. Eldred, K. Willcox, R. Haimes, Surrogate-based optimization using multifidelity models with variable parameterization and corrected space mapping, AIAA Journal 46 (2008) 2814–2822.
  • Alexandrov et al. (2001) N. M. Alexandrov, R. M. Lewis, C. R. Gumbert, L. L. Green, P. A. Newman, Approximation and model management in aerodynamic optimization with variable-fidelity models, Journal of Aircraft 38 (2001) 1093–1101.
  • Sun et al. (2010) G. Sun, G. Li, M. Stone, Q. Li, A two-stage multi-fidelity optimization procedure for honeycomb-type cellular materials, Computational Materials Science 49 (2010) 500–511.
  • Sun et al. (2011) G. Sun, G. Li, S. Zhou, W. Xu, X. Yang, Q. Li, Multi-fidelity optimization for sheet metal forming process, Structural and Multidisciplinary Optimization 44 (2011) 111–124.
  • Celik et al. (2010) N. Celik, S. Lee, K. Vasudevan, Y.-J. Son, Dddas-based multi-fidelity simulation framework for supply chain systems, IIE Transactions 42 (2010) 325–341.
  • Perdikaris and Karniadakis (2016) P. Perdikaris, G. E. Karniadakis, Model inversion via multi-fidelity Bayesian optimization: a new paradigm for parameter estimation in haemodynamics, and beyond, Journal of The Royal Society Interface 13 (2016) 20151107.
  • Perdikaris (2015) P. Perdikaris, Data-Driven Parallel Scientific Computing: Multi-Fidelity Information Fusion Algorithms and Applications to Physical and Biological Systems, Ph.D. thesis, Brown University, 2015.
  • Perdikaris and Karniadakis (2015) P. Perdikaris, G. Karniadakis, Calibration of blood flow in simulations via multi-fidelity Bayesian optimization, in: APS Meeting Abstracts.
  • Eldred and Burkardt (2009) M. Eldred, J. Burkardt, Comparison of non-intrusive polynomial chaos and stochastic collocation methods for uncertainty quantification, in: 47th AIAA Aerospace Sciences Meeting including The New Horizons Forum and Aerospace Exposition, p. 976.
  • Ng and Eldred (2012) L. W.-T. Ng, M. Eldred, Multifidelity uncertainty quantification using non-intrusive polynomial chaos and stochastic collocation, in: 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference 20th AIAA/ASME/AHS Adaptive Structures Conference 14th AIAA, p. 1852.
  • Padron et al. (2014) A. S. Padron, J. J. Alonso, F. Palacios, M. F. Barone, M. S. Eldred, Multi-fidelity uncertainty quantification: application to a vertical axis wind turbine under an extreme gust, in: 15th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, p. 3013.
  • Biehler et al. (2015) J. Biehler, M. W. Gee, W. A. Wall, Towards efficient uncertainty quantification in complex and large-scale biomechanical problems based on a Bayesian multi-fidelity scheme, Biomechanics and modeling in mechanobiology 14 (2015) 489–513.
  • Peherstorfer et al. (2016a) B. Peherstorfer, K. Willcox, M. Gunzburger, Optimal model management for multifidelity Monte Carlo estimation, SIAM Journal on Scientific Computing 38 (2016a) A3163–A3194.
  • Peherstorfer et al. (2016b) B. Peherstorfer, T. Cui, Y. Marzouk, K. Willcox, Multifidelity importance sampling, Computer Methods in Applied Mechanics and Engineering 300 (2016b) 490–509.
  • Peherstorfer et al. (2016c) B. Peherstorfer, K. Willcox, M. Gunzburger, Survey of multifidelity methods in uncertainty propagation, inference, and optimization, Preprint (2016c) 1–57.
  • Narayan et al. (2014) A. Narayan, C. Gittelson, D. Xiu, A stochastic collocation algorithm with multifidelity models, SIAM Journal on Scientific Computing 36 (2014) A495–A521.
  • Zhu et al. (2014) X. Zhu, A. Narayan, D. Xiu, Computational aspects of stochastic collocation with multifidelity models, SIAM/ASA Journal on Uncertainty Quantification 2 (2014) 444–463.
  • Bilionis et al. (2013) I. Bilionis, N. Zabaras, B. A. Konomi, G. Lin, Multi-output separable Gaussian process: Towards an efficient, fully Bayesian paradigm for uncertainty quantification, Journal of Computational Physics 241 (2013) 212–239.
  • Parussini et al. (2017) L. Parussini, D. Venturi, P. Perdikaris, G. Karniadakis, Multi-fidelity Gaussian process regression for prediction of random fields, J. Comput. Phys. 336 (2017) 36 – 50.
  • Perdikaris et al. (2016) P. Perdikaris, D. Venturi, G. E. Karniadakis, Multifidelity information fusion algorithms for high-dimensional systems and massive data sets, SIAM J. Sci. Comput. 38 (2016) B521–B538.
  • Rasmussen (2004) C. E. Rasmussen, Gaussian processes in machine learning, in: Advanced lectures on machine learning, Springer, 2004, pp. 63–71.
  • Kingma and Welling (2013) D. P. Kingma, M. Welling, Auto-encoding variational Bayes, arXiv preprint arXiv:1312.6114 (2013).
  • Sohn et al. (2015) K. Sohn, H. Lee, X. Yan, Learning structured output representation using deep conditional generative models, in: Advances in Neural Information Processing Systems, pp. 3483–3491.
  • Vincent et al. (2008) P. Vincent, H. Larochelle, Y. Bengio, P.-A. Manzagol, Extracting and composing robust features with denoising autoencoders, in: Proceedings of the 25th international conference on Machine learning, ACM, pp. 1096–1103.
  • Vincent et al. (2010) P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P.-A. Manzagol, Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion, Journal of Machine Learning Research 11 (2010) 3371–3408.
  • Gómez-Bombarelli (2016) R. Gómez-Bombarelli, et al., Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach, Nature Materials 15 (2016) 1120–1127.
  • Gómez-Bombarelli et al. (2018) R. Gómez-Bombarelli, J. N. Wei, D. Duvenaud, J. M. Hernández-Lobato, B. Sánchez-Lengeling, D. Sheberla, J. Aguilera-Iparraguirre, T. D. Hirzel, R. P. Adams, A. Aspuru-Guzik, Automatic chemical design using a data-driven continuous representation of molecules, ACS Central Science 4 (2018) 268–276.
  • Ravanbakhsh et al. (2017) S. Ravanbakhsh, F. Lanusse, R. Mandelbaum, J. G. Schneider, B. Poczos, Enabling dark energy science with deep generative models of galaxy images., in: AAAI, pp. 1488–1494.
  • Lopez et al. (2017) R. Lopez, J. Regier, M. Cole, M. Jordan, N. Yosef, A deep generative model for single-cell RNA sequencing with application to detecting differentially expressed genes, arXiv preprint arXiv:1710.05086 (2017).
  • Way and Greene (2017) G. P. Way, C. S. Greene, Extracting a biologically relevant latent space from cancer transcriptomes with variational autoencoders, bioRxiv (2017) 174474.
  • Bousquet et al. (2017) O. Bousquet, S. Gelly, I. Tolstikhin, C.-J. Simon-Gabriel, B. Schoelkopf, From optimal transport to generative modeling: the VEGAN cookbook, arXiv preprint arXiv:1705.07642 (2017).
  • Pu et al. (2017) Y. Pu, L. Chen, S. Dai, W. Wang, C. Li, L. Carin, Symmetric variational autoencoder and connections to adversarial learning, arXiv preprint arXiv:1709.01846 (2017).
  • Rosca et al. (2018) M. Rosca, B. Lakshminarayanan, S. Mohamed, Distribution matching in variational inference, arXiv preprint arXiv:1802.06847 (2018).
  • Zheng et al. (2018) H. Zheng, J. Yao, Y. Zhang, I. W. Tsang, Degeneration in VAE: in the light of fisher information loss, arXiv preprint arXiv:1802.06677 (2018).
  • Kingma et al. (2016) D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, M. Welling, Improved variational inference with inverse autoregressive flow, in: Advances in Neural Information Processing Systems, pp. 4743–4751.
  • Rezende and Mohamed (2015) D. J. Rezende, S. Mohamed, Variational inference with normalizing flows, arXiv preprint arXiv:1505.05770 (2015).
  • Higgins et al. (2016) I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, A. Lerchner, beta-VAE: Learning basic visual concepts with a constrained variational framework (2016).
  • Zhao et al. (2017) S. Zhao, J. Song, S. Ermon, InfoVAE: Information maximizing variational autoencoders, arXiv preprint arXiv:1706.02262 (2017).
  • Chen et al. (2018) T. Q. Chen, X. Li, R. Grosse, D. Duvenaud, Isolating sources of disentanglement in variational autoencoders, arXiv preprint arXiv:1802.04942 (2018).
  • Burda et al. (2015) Y. Burda, R. Grosse, R. Salakhutdinov, Importance weighted autoencoders, arXiv preprint arXiv:1509.00519 (2015).
  • Klys et al. (2018) J. Klys, J. Bettencourt, D. Duvenaud, Joint importance sampling for variational inference (2018).
  • Genevay et al. (2017) A. Genevay, G. Peyré, M. Cuturi, GAN and VAE from an optimal transport point of view, arXiv preprint arXiv:1706.01807 (2017).
  • Villani (2008) C. Villani, Optimal transport: old and new, volume 338, Springer Science & Business Media, 2008.
  • El Moselhy and Marzouk (2012) T. A. El Moselhy, Y. M. Marzouk, Bayesian inference with optimal maps, Journal of Computational Physics 231 (2012) 7815–7850.
  • van den Oord et al. (2016) A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves, et al., Conditional image generation with PixelCNN decoders, in: Advances in Neural Information Processing Systems, pp. 4790–4798.
  • Liu and Wang (2016) Q. Liu, D. Wang, Stein variational gradient descent: A general purpose Bayesian inference algorithm, in: Advances in Neural Information Processing Systems, pp. 2378–2386.
  • Mescheder et al. (2017) L. Mescheder, S. Nowozin, A. Geiger, Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks, arXiv preprint arXiv:1701.04722 (2017).
  • Makhzani et al. (2015) A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, B. Frey, Adversarial autoencoders, arXiv preprint arXiv:1511.05644 (2015).
  • Tolstikhin et al. (2017) I. Tolstikhin, O. Bousquet, S. Gelly, B. Schoelkopf, Wasserstein auto-encoders, arXiv preprint arXiv:1711.01558 (2017).
  • Titsias (2017) M. K. Titsias, Learning model reparametrizations: Implicit variational inference by fitting mcmc distributions, arXiv preprint arXiv:1708.01529 (2017).
  • Blei et al. (2017) D. M. Blei, A. Kucukelbir, J. D. McAuliffe, Variational inference: A review for statisticians, Journal of the American Statistical Association 112 (2017) 859–877.
  • Wainwright et al. (2008) M. J. Wainwright, M. I. Jordan, et al., Graphical models, exponential families, and variational inference, Foundations and Trends® in Machine Learning 1 (2008) 1–305.
  • Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Advances in neural information processing systems, pp. 2672–2680.
  • Li et al. (2018) C. Li, J. Li, G. Wang, L. Carin, Learning to sample with adversarially learned likelihood-ratio (2018).
  • Salimans et al. (2016) T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, Improved techniques for training GANs, in: Advances in Neural Information Processing Systems, pp. 2234–2242.
  • Akaike (1998) H. Akaike, Information theory and an extension of the maximum likelihood principle, in: Selected papers of hirotugu akaike, Springer, 1998, pp. 199–213.
  • Friedman et al. (2001) J. Friedman, T. Hastie, R. Tibshirani, The elements of statistical learning, volume 1, Springer series in statistics New York, NY, USA:, 2001.
  • Kingma and Ba (2014) D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014).
  • Abadi et al. (2016) M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., Tensorflow: A system for large-scale machine learning., in: OSDI, volume 16, pp. 265–283.
  • Goodfellow et al. (2016) I. Goodfellow, Y. Bengio, A. Courville, Y. Bengio, Deep learning, volume 1, MIT press Cambridge, 2016.
  • Neal (2012) R. M. Neal, Bayesian learning for neural networks, volume 118, Springer Science & Business Media, 2012.
  • Kennedy and O’Hagan (2000) M. C. Kennedy, A. O’Hagan, Predicting the output from a complex computer code when fast approximations are available, Biometrika 87 (2000) 1–13.
  • Perdikaris et al. (2017) P. Perdikaris, M. Raissi, A. Damianou, N. Lawrence, G. Karniadakis, Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling, in: Proc. R. Soc. A, volume 473, The Royal Society, p. 20160751.
  • Burgers (1948) J. M. Burgers, A mathematical model illustrating the theory of turbulence, in: Advances in applied mechanics, volume 1, Elsevier, 1948, pp. 171–199.
  • Kassam and Trefethen (2005) A.-K. Kassam, L. N. Trefethen, Fourth-order time-stepping for stiff PDEs, SIAM Journal on Scientific Computing 26 (2005) 1214–1233.
  • Krizhevsky et al. (2012) A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, in: Advances in neural information processing systems, pp. 1097–1105.
  • LeCun et al. (2015) Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (2015) 436–444.
  • Mallat (2016) S. Mallat, Understanding deep convolutional networks, Phil. Trans. R. Soc. A 374 (2016) 20150203.
  • Ioffe and Szegedy (2015) S. Ioffe, C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, arXiv preprint arXiv:1502.03167 (2015).
  • Cohn et al. (1996) D. A. Cohn, Z. Ghahramani, M. I. Jordan, Active learning with statistical models, Journal of artificial intelligence research (1996).
  • Shahriari et al. (2016) B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, N. De Freitas, Taking the human out of the loop: A review of Bayesian optimization, Proceedings of the IEEE 104 (2016) 148–175.
  • Glorot and Bengio (2010) X. Glorot, Y. Bengio, Understanding the difficulty of training deep feedforward neural networks, in: Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256.
  • Yang and Perdikaris (2018) Y. Yang, P. Perdikaris, Adversarial uncertainty quantification in physics-informed neural networks, arXiv preprint arXiv:1811.04026 (2018).

Appendix A Sensitivity studies

Here we provide results from a series of comprehensive systematic studies that aim to quantify the sensitivity of the resulting predictions to:

  1. the entropic regularization penalty parameter,

  2. the generator, discriminator and encoder neural network architectures,

  3. the adversarial training procedure.

To this end, we consider a simple benchmark corresponding to the approximation of a one-dimensional Gaussian process whose mean is the high-fidelity mean function defined in equation 22 and whose covariance is a squared exponential kernel, as defined in equation 26. Figure 9(a) shows representative samples generated by this reference stochastic process. In all cases we have employed simple feed-forward neural network architectures as described below. The comparison metric used in all sensitivity studies is the average discrepancy between the predicted and the exact one-dimensional marginal densities, as measured by the reverse Kullback-Leibler divergence

\mathbb{E}_{x \sim p(x)}\left[ D_{KL}\left( p_{\theta}(y|x) \,\|\, p(y|x) \right) \right], \quad (30)

where p_{\theta}(y|x) is the conditional distribution predicted by the generative model, p(y|x) is the conditional distribution of the exact solution, and p(x) is the distribution of uniformly sampled test locations. For a given test location x, we facilitate a tractable computation of the reverse KL-divergence using equation 27 by performing a Gaussian approximation of p_{\theta}(y|x), while, by definition, p(y|x) is a known uni-variate Gaussian density.
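The metric in equation 30 can be sketched as follows: draw samples from the generative model at each test location, moment-match a Gaussian, and apply the closed-form divergence of equation 27. The model sampler and exact-moment function below are hypothetical stand-ins for the trained model and the reference process.

```python
import math
import random
import statistics

def kl_gauss(mu1, sigma1, mu2, sigma2):
    # Closed-form KL-divergence between two univariate Gaussians (equation 27).
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2.0 * sigma2 ** 2) - 0.5)

def avg_reverse_kl(model_sampler, exact_moments, xs, n_samples=2000, rng=None):
    # Average over test locations of KL(model || exact), using a Gaussian
    # moment-matching approximation of the model's predicted marginal.
    rng = rng or random.Random(0)
    total = 0.0
    for x in xs:
        ys = [model_sampler(x, rng) for _ in range(n_samples)]
        mu_hat, sigma_hat = statistics.fmean(ys), statistics.stdev(ys)
        mu, sigma = exact_moments(x)
        total += kl_gauss(mu_hat, sigma_hat, mu, sigma)
    return total / len(xs)

# Hypothetical stand-ins: a perfect model should score near zero.
exact = lambda x: (math.sin(x), 0.3)
sampler = lambda x, rng: rng.gauss(math.sin(x), 0.3)

score = avg_reverse_kl(sampler, exact, [i / 9.0 for i in range(10)])
```

A well-trained model yields a score near zero, while mode collapse inflates the divergence by underestimating the predicted standard deviation.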

Figure 9: Sensitivity studies on approximating a one-dimensional stochastic process: (a) Representative samples generated by the reference stochastic process, along with the observed data used for model training. (b) Representative samples generated by the trained conditional generative model. Blue lines are the exact reference samples, red crosses are the training data, and red lines are the generated samples.

a.1 Sensitivity with respect to the entropic regularization penalty parameter

In this study we aim to quantify the sensitivity of our predictions with respect to the entropic regularization penalty parameter in equation 11. To this end, we have fixed the architecture of the generator and encoder neural networks to include 3 hidden layers with 100 neurons each, and the discriminator neural network to include 2 hidden layers with 100 neurons each. In all cases, we have used a hyperbolic tangent non-linearity and a normal Xavier initialization (Glorot and Bengio, 2010). For each iteration, we perform three discriminator updates and one generator update, using a fixed mini-batch size per stochastic gradient update.

In table 1 we report the reverse KL-divergence between the predicted data and the ground truth for penalty values of 1.0, 1.2, 1.5, 1.8, 2.0, and 5.0. Recall that for a penalty of 1.0 our model has a direct correspondence with generative adversarial networks Li et al. (2018), while for larger values we obtain a regularized adversarial model that introduces flexibility in mitigating the issue of mode collapse. A manifestation of this pathology is evident in figure 10(a), in which the model with a penalty of 1.0 collapses to a degenerate solution that severely underestimates the diversity observed in the true stochastic process samples, despite the fact that the model training dynamics seem to converge to a stable solution (see figure 10(b)). This is also confirmed by the computed average discrepancy in KL-divergence, which is roughly an order of magnitude larger compared to the regularized models with larger penalty values. We also observe that model predictions remain robust for all penalty values greater than one, while our best results are typically obtained for the moderate value used throughout this paper (see figure 9(b) for representative samples generated by the corresponding conditional generative model).

Penalty 1.0 1.2 1.5 1.8 2.0 5.0
Reverse-KL 2.8e+00 2.2e-01 2.2e-01 4.2e-01 5.4e-01 3.4e-01
Table 1: Sensitivity with respect to the entropic regularization penalty parameter: Average reverse KL-divergence between the predicted and the ground truth one-dimensional marginals for different values of the entropic regularization penalty in equation 11.
Figure 10: Sensitivity with respect to the entropic regularization penalty parameter: (a) Manifestation of mode collapse for a penalty value of 1.0. Blue lines are exact samples from the reference stochastic process, red lines are samples produced by the conditional generative model. (b) Generator and discriminator loss values as a function of the number of training iterations.

a.2 Sensitivity with respect to the neural network architecture

In this study we aim to quantify the sensitivity of our predictions with respect to the architecture of the neural networks that parametrize the generator, the discriminator, and the encoder. Here, we choose the number of layers for the discriminator to always be one less than the number of layers for the generator and the encoder (e.g., if the generator has two layers then the discriminator has one, etc.). In all cases, we fix the entropic regularization penalty, and we use a hyperbolic tangent non-linearity and a normal Xavier initialization (Glorot and Bengio, 2010). In table 2 we report the computed average reverse KL-divergence between the predicted data and the ground truth for different feed-forward architectures, varying the number of neurons in each layer (20, 50, 100) and the number of layers for the generator and the encoder (2, 3, 4).

The results of this sensitivity study are summarized in table 2. Overall, we observe that model predictions remain robust for all neural network architectures considered.

layers L \ neurons N   20        50        100
2                      6.0e-01   6.0e-01   7.4e-01
3                      9.6e-02   3.3e-01   2.2e-01
4                      1.5e-01   4.0e-01   2.7e-01
Table 2: Sensitivity with respect to the neural network architecture: Average reverse KL-divergence between the predicted data and the ground truth for different feed-forward architectures for the generator, encoder, and discriminator. The total number of layers of the latter is always chosen to be one less than the number of layers of the generator.

A.3 Sensitivity with respect to the adversarial training procedure

As discussed in Yang and Perdikaris (2018), the adversarial training procedure plays a key role in the effectiveness of adversarial generative models, and it often requires careful tuning of the training dynamics to ensure robust model predictions. Here we test the sensitivity of the proposed conditional generative model with respect to the relative frequency at which the generator and discriminator networks are updated during model training. To this end, we fix the entropic regularization penalty to λ = 1.5, keep the neural network architecture the same as the one described in section A.1, and vary the number of training steps for the generator, n_g, and for the discriminator, n_d, within each stochastic gradient descent iteration.
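The alternating update schedule can be sketched as follows. Here `step_d` and `step_g` are placeholder callables standing in for one discriminator or generator SGD update, and the parameter names `n_d` and `n_g` are labels introduced for illustration, denoting the relative update frequencies varied in this study:

```python
def adversarial_train(step_d, step_g, n_iters, n_d=5, n_g=1):
    """Schematic alternating update loop: within each outer stochastic
    gradient descent iteration, the discriminator takes n_d update steps
    and the generator takes n_g update steps."""
    history = {"d": [], "g": []}
    for _ in range(n_iters):
        for _ in range(n_d):
            history["d"].append(step_d())
        for _ in range(n_g):
            history["g"].append(step_g())
    return history

# Toy usage with dummy update callables that just return a step counter
losses = iter(range(1000))
hist = adversarial_train(lambda: next(losses), lambda: next(losses),
                         n_iters=10, n_d=5, n_g=1)
print(len(hist["d"]), len(hist["g"]))  # 50 10
```

In a real implementation each callable would draw a fresh mini-batch, evaluate the corresponding loss, and apply one optimizer step to its network's parameters.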

The results of this study are presented in table 3, where we report the average reverse KL-divergence between the predicted data and the ground truth. These results reveal the high sensitivity of the training dynamics to the interplay between the generator and discriminator networks, and pinpoint the well-known peculiarity of adversarial inference procedures, which require careful tuning of the generator and discriminator update frequencies, n_g and n_d, to achieve stable performance in practice. Overall, we observe that a one-to-three or one-to-five ratio of generator to discriminator updates is the setting that typically works best in practice, although we must underline that this also depends on the capacity of the underlying neural network architectures, as discussed in Yang and Perdikaris (2018).

Finally, figure 11 depicts the convergence of the training algorithm for a representative choice of n_g and n_d. According to Goodfellow et al. (2014), the theoretical optimal value of the discriminator loss is 2 log 2 = log 4 ≈ 1.386. As shown in figure 11, the losses oscillate at the very beginning of training and quickly converge to the optimal value after approximately 2,000 iterations.
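This optimum follows from the standard equilibrium argument: when the generator matches the data distribution, the optimal discriminator outputs 1/2 everywhere, so the discriminator loss evaluates to

```latex
% At equilibrium p_g = p_{data}, hence D^*(x) = 1/2 for all x, and
\begin{aligned}
\mathcal{L}_D
  &= -\,\mathbb{E}_{x \sim p_{\text{data}}}\big[\log D^*(x)\big]
     -\,\mathbb{E}_{x \sim p_g}\big[\log\big(1 - D^*(x)\big)\big] \\
  &= -\log\tfrac{1}{2} - \log\tfrac{1}{2}
   = 2\log 2 = \log 4 \approx 1.386.
\end{aligned}
```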

n_g \ n_d   1         3         5
1           9.2e-01   2.2e-01   2.4e-01
3           1.2e+00   8.5e-01   9.4e-01
5           4.3e+00   8.9e-01   5.9e+00
Table 3: Sensitivity with respect to the adversarial training procedure: Average reverse KL-divergence between the predicted data and the ground truth for different numbers of relative updates of the generator (n_g, rows) and the discriminator (n_d, columns) in each stochastic gradient descent iteration.
Figure 11: Optimal convergence of the discriminator loss: Convergence of the training algorithm for a representative choice of n_g and n_d. The red line depicts the generator loss, the blue line is the discriminator loss, and the black dashed line is the theoretical optimal loss of the discriminator.