1 Introduction
Complex physical systems are usually characterized by PDE-governed processes with multiscale physics. Resolving all the scales of multiscale processes in simulations is still infeasible for many real applications. In practice, simulating those multiscale physical systems often involves closures to model unresolved processes. However, these closure models also account for major sources of uncertainty in simulation results, partly because they neglect high-order statistics of the true physical process. With the advancement of high-performance computing in recent decades, high-fidelity simulations of complex multiscale processes, such as fully resolved turbulent flows, are becoming available for a growing number of scenarios. More recently, the rapid growth of machine learning capabilities has made it possible to leverage existing high-fidelity simulation datasets to build models that emulate complex multiscale processes (e.g., turbulence or Earth's climate).
Several machine-learning-based approaches have been proposed for improving the simulation of complex PDE-governed systems, e.g., turbulence modeling wang17physicsinformed; wu2018data; ling2016reynolds. Machine-learning-based approaches to other complex systems (e.g., weather and climate modeling) are an area of increasing interest brenowitz2018prognostic; rasp2018deep; gagne2014machine; dueben2018challenges; schneider2017earth. In addition to building closure models, machine learning techniques have been applied to various other science and engineering problems that require advanced analyses of model output, e.g., classification, detection, and segmentation of patterns liu2016application; racah2017extremeweather; prabhat2017climatenet. Although machine learning techniques have achieved remarkable successes in applications such as image recognition lecun2015deep, the science and engineering communities gradually realized that, for many scientific problems, it is important to inform machine learning with physics instead of relying merely on data-driven discovery. Ling et al. ling2016reynolds proposed a tensorial neural network to build an objective functional mapping of tensors by embedding a tensor expansion formula into their neural network. Thomas et al. thomas2018tensor built a more general tensor network that ensures the important property of equivariance by introducing spherical harmonics as filters. Wu et al. wu2018data demonstrated that important invariances (e.g., Galilean invariance) can be preserved by using only invariants as the inputs and outputs of machine learning models. Other researchers incorporated known physics into machine learning techniques as additional constraints. For instance, Karpatne et al. karpatne2017theory incorporated known physical knowledge by adding a penalty term to the machine learning loss function. Raissi et al. raissi2019physics; raissi2018hidden proposed physics-informed neural networks that enforce the structure of the governing equations. Another promising direction is to explore the relations between machine learning techniques and traditional frameworks for modeling physics, e.g., the analogy between the LSTM network and the Mori–Zwanzig formalism, for which preliminary success has been demonstrated by Ma et al. ma2018model. In addition, Lusch et al. lusch2018deep demonstrated that Koopman theory can be made practical by using an autoencoder to identify a linear embedding of nonlinear dynamics.
Recently, a novel architecture known as generative adversarial networks (GANs) has been proposed to generate data that mimic certain properties of images or behaviors of a given system. Specifically, GANs consist of a discriminator that learns properties from training samples and a generator that generates samples mimicking the learned properties. Recent research has shown how GANs can be used to generate new solutions of PDE-governed systems by training on existing datasets. For example, King et al. king2017creating; king2018deep used GANs to create turbulent flow realizations and showed that the generated realizations capture several statistical constraints of turbulent flows, e.g., Kolmogorov's law and the small-scale intermittency of turbulence. Moreover, GANs have been utilized to extract information from high-fidelity simulations of other physical systems, e.g., cosmology, which involves N-body simulations mustafa2017creating. Although standard GANs can capture the true distribution of the training data when the global minimum of the loss function is achieved goodfellow2014generative; bousmalis2017unsupervised, it is well known that GANs can be difficult to train and may converge to a local minimum as the complexity of the true data distribution increases. Therefore, it is unlikely that standard GANs can capture all the statistics of the solutions of a complex PDE-governed system, indicating that the trained network would be unable to reproduce some important physical and statistical properties of the system. In addition, there are other challenges associated with GANs, e.g., the stability of training radford2015unsupervised, the noise in generated samples arjovsky2017wasserstein, and the assessment of sample quality salimans2016improved. To improve GAN performance for physical problems, Xie et al. xie2018tempogan incorporated temporal coherence into GANs to generate super-resolution realizations of turbulent flows. Yang et al. yang2018physics encoded the governing physical laws, in the form of stochastic differential equations, into the architecture of GANs. Stinis et al. stinis2018enforcing augmented the inputs of the discriminator with residuals and introduced noise into the training data, as inspired by dynamical systems. Considering the great potential of GANs for physical systems and the limitations of standard GANs, it is worthwhile to investigate a general approach that improves the performance of GANs in emulating physical systems by introducing proper regularization. Inspired by the work of Karpatne et al. karpatne2017theory, we envision embedding both physical constraints (e.g., conservation laws) and statistical constraints (e.g., statistics of the data distribution) into the generator, in order to enhance the robustness and generalization capability of GAN-based physical system emulators. This work focuses on investigating the advantage of incorporating statistical constraints; the benefits of physical constraints are presented and discussed in a separate work of the authors yang2019enforcing.
In this work, we present a novel approach to improve GAN performance by better preserving high-order statistics when using GANs to build an emulator of complex PDE-governed systems. We test the statistical constraint on three relatively simple canonical systems and discuss extensions applicable to more complex systems, with the ultimate goal of leveraging existing high-fidelity simulation output to emulate unresolved processes and provide reliable and accurate alternatives to closure models.
2 Methodology
2.1 Generative adversarial networks
Generative adversarial networks (GANs) were introduced in goodfellow2014generative as a new technique for training machine learning models, originally in the context of computer image synthesis. The original formulation trains models in a purely unsupervised way; yet since their introduction, many modifications have been proposed in the literature that make use of available labeled data in a semi-supervised way. The goal of GANs, as initially proposed, is to train a function that samples from the complicated, analytically unknown distribution of the data that the algorithm sees.
The main innovation of GANs is the formulation of the training procedure as a zero-sum game between two networks, a generator G and a discriminator D. In the standard setting, the generator receives as input an unstructured random noise vector z drawn from a simple distribution p_z (such as a uniform or Gaussian distribution), passes it through a succession of deconvolutional layers and nonlinear transforms in a deterministic way, and outputs a sample x = G(z). The role of the discriminator D is to act as a classifier, deciding whether a sample x it receives is real or generated by G. At optimality (Nash equilibrium in the game between G and D), the generator is provably able to produce "fake" samples that are implicitly drawn from the (unknown) data distribution that G seeks to emulate. In this paper, G in Fig. 1a and D in Fig. 1c are a deep deconvolutional neural network and a convolutional neural network, described by the weight vectors \theta_g and \theta_d, respectively. The architectural details follow the structure proposed in dcgan_2015. We train the model in the standard way of alternating between the two optimization problems:

\max_{\theta_d} \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x; \theta_d)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z; \theta_g); \theta_d))]    (1)

\min_{\theta_g} \; \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z; \theta_g); \theta_d))]    (2)
where \mathbb{E} denotes the expectation and p_{\text{data}} represents the distribution of the training samples. The training procedure corresponds to a minimax two-player game:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]    (3)
where the ideal objective is to achieve the Nash equilibrium, i.e., the discriminator cannot distinguish the generated samples from the training samples. However, the global optimum is usually difficult to achieve with standard GANs. Therefore, it is unlikely that standard GANs can identify and preserve all the statistics of the training samples embedded in p_{\text{data}}. Those missing statistics are often important to PDE-governed systems; e.g., the second-order moments of the instantaneous velocity correspond to the Reynolds stress in turbulence modeling. Therefore, we present an approach to better preserve these statistics when using GANs to emulate physical systems.
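As a minimal sketch of the alternating objectives in Eqs. (1)–(2) (assuming the discriminator outputs probabilities in (0, 1); the function names are ours), the two objectives can be written as losses to minimize:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Negative of the objective in Eq. (1): the discriminator maximizes
    E[log D(x)] + E[log(1 - D(G(z)))], so we minimize its negation."""
    return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def generator_loss(d_fake):
    """Eq. (2): the generator minimizes E[log(1 - D(G(z)))].
    (In practice the non-saturating form -E[log D(G(z))] is often preferred.)"""
    return np.mean(np.log(1.0 - d_fake))

# An undecided discriminator (D = 0.5 everywhere) gives the equilibrium values
d = np.full(8, 0.5)
print(discriminator_loss(d, d))   # 2 log 2
print(generator_loss(d))          # -log 2
```

At the Nash equilibrium the discriminator output is 0.5 for both real and generated samples, which recovers the well-known equilibrium value 2 log 2 of the discriminator objective.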
2.2 Constrained generative adversarial networks
The standard formulation above is purely data-driven, i.e., it utilizes no outside knowledge of the problem at hand beyond what is encoded implicitly in the data used to train the model. As we argued above, constraining the model solution space to physically feasible regions may yield certain benefits in terms of increased solution quality, decreased training time, and improved data efficiency. Constraints can be imposed in a variety of ways, among which:

Hard constraints on network architecture. Convolutional networks are by construction translation-invariant and ensure spatial locality of information propagation. It has long been recognized that convolutional filters can be interpreted as discretized differential operators in finite-difference form lu2018beyond; latnet_2017; long2018pde. This fact has recently been exploited to train neural networks as surrogate models of differential equations latnet_2017 and to discover PDEs from data long2018pde. Such techniques may be used to constrain the space of feasible solutions of GANs by construction.

Explicit constraints on optimization. Domain knowledge can be incorporated in a straightforward way through additional penalty terms added to the optimization loss function of GANs. This is in effect a form of regularization that can model many relevant constraints, including conservation laws and high-order statistics.
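As an illustration of the first mechanism above, the classic 5-point finite-difference Laplacian is exactly a 3×3 convolution kernel; a minimal sketch (with a naive loop-based convolution for clarity):

```python
import numpy as np

# 3x3 convolution kernel = 5-point finite-difference Laplacian (unit grid spacing)
laplacian_kernel = np.array([[0.,  1., 0.],
                             [1., -4., 1.],
                             [0.,  1., 0.]])

def conv2d_valid(f, k):
    """Plain 'valid'-mode 2D correlation (the kernel here is symmetric,
    so correlation and convolution coincide)."""
    n = f.shape[0] - 2
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = np.sum(f[i:i+3, j:j+3] * k)
    return out

# For f(x, y) = x^2, the discrete Laplacian is exactly 2 in the interior
x = np.arange(6, dtype=float)
f = (x[:, None] ** 2) * np.ones((1, 6))
print(conv2d_valid(f, laplacian_kernel))  # all entries equal 2
```

This correspondence is what allows convolutional architectures to encode differential-operator structure by construction.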
In this work, we focus on the second type of constraint discussed above and propose a constrained loss function for the generator as follows:

\min_{\theta_g} \; \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z; \theta_g)))] + \lambda \, d\big(\Sigma(p_{\text{data}}), \Sigma(p_g)\big)    (4)

where \Sigma(p) denotes the covariance structure of a given distribution p (with p_g the distribution of the generated samples), \lambda denotes the coefficient of the penalty term, and d(\cdot, \cdot) represents a distance measure between two covariance structures. The distance can be measured in Euclidean space by using the Frobenius norm:

d_F(\Sigma_1, \Sigma_2) = \| \Sigma_1 - \Sigma_2 \|_F    (5)
or by using the symmetrized Kullback–Leibler divergence weickert2005visualization, i.e.,

d_{KL}(\Sigma_1, \Sigma_2) = \tfrac{1}{2} \, \mathrm{tr}\big( \Sigma_1^{-1} \Sigma_2 + \Sigma_2^{-1} \Sigma_1 - 2I \big)    (6)

where \mathrm{tr}(\cdot) denotes the trace of a matrix and I represents the identity matrix. It should be noted that the covariance structure must be positive semi-definite and is thus defined on a low-dimensional manifold within the Euclidean space. Therefore, an alternative is to use the Riemannian distance

d_R(\Sigma_1, \Sigma_2) = \Big( \textstyle\sum_i \ln^2 \lambda_i(\Sigma_1^{-1} \Sigma_2) \Big)^{1/2},

where \lambda_i(\cdot) denotes the i-th eigenvalue of a matrix, or the Jensen–Bregman LogDet (JBLD) divergence cherian2011efficient:

d_{JBLD}(\Sigma_1, \Sigma_2) = \ln \det\!\Big( \frac{\Sigma_1 + \Sigma_2}{2} \Big) - \tfrac{1}{2} \ln \det( \Sigma_1 \Sigma_2 )    (7)
We compared these different choices of distance measure and concluded that the Frobenius norm offers the best compromise in terms of computational cost and stability when training constrained GANs. The proposed physics-informed GAN is implemented on the machine learning platform TensorFlow abadi2016tensorflow. We used a deep convolutional GAN (DCGAN) radford2015unsupervised for most of the results presented below, but it is still referred to as GAN hereafter for simplicity.
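The four candidate distances between covariance structures (Eqs. (5)–(7) and the Riemannian distance) can be sketched for symmetric positive-definite matrices as follows; the function names are ours:

```python
import numpy as np

def d_frobenius(S1, S2):
    """Eq. (5): Frobenius-norm distance."""
    return np.linalg.norm(S1 - S2, ord="fro")

def d_sym_kl(S1, S2):
    """Eq. (6): symmetrized KL divergence between zero-mean Gaussians."""
    n = S1.shape[0]
    return 0.5 * np.trace(np.linalg.solve(S1, S2)
                          + np.linalg.solve(S2, S1) - 2.0 * np.eye(n))

def d_riemannian(S1, S2):
    """Affine-invariant Riemannian distance on the SPD manifold."""
    lam = np.linalg.eigvals(np.linalg.solve(S1, S2)).real
    return np.sqrt(np.sum(np.log(lam) ** 2))

def d_jbld(S1, S2):
    """Eq. (7): Jensen-Bregman LogDet divergence (via stable log-determinants)."""
    _, logdet_mid = np.linalg.slogdet(0.5 * (S1 + S2))
    _, logdet_1 = np.linalg.slogdet(S1)
    _, logdet_2 = np.linalg.slogdet(S2)
    return logdet_mid - 0.5 * (logdet_1 + logdet_2)

# All four vanish when the two covariance structures coincide
S = np.array([[2.0, 0.5], [0.5, 1.0]])
for d in (d_frobenius, d_sym_kl, d_riemannian, d_jbld):
    print(d(S, S))  # 0.0 (up to round-off)
```

Note that only the Frobenius distance avoids matrix inversions and determinants, which is the computational advantage referred to above.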
In this work, the covariance structures of both the generated and the training data are estimated from samples. Specifically, \Sigma(p_g) is estimated from the generated samples at every iteration of training the network. Instead of implicitly estimating the probability distribution p_{\text{data}} of the training data as standard GANs do, the penalty term in Eq. 4 explicitly estimates the second-order moments of the distributions and thus better constrains the difference between the distributions of the training data and the generated samples. It should be noted that \Sigma(p_{\text{data}}) can also be estimated from the governing equation of the system, since the spatial correlation within the solutions of a PDE-governed system is related to the differential operators of the governing PDEs. More details about this alternative way of estimating the covariance structure of the training data can be found in a separate work carlos2019constructing.

Incorporating physical constraints (either explicitly or implicitly) into GANs may offer certain benefits, of which this paper highlights but several. We posit that appropriately imposing constraints may improve training stability and convergence properties via the regularization these constraints provide. Many hypotheses have been put forth about the source of the training instabilities observed in GANs (mode collapse in particular, as observed initially in goodfellow2014generative; dcgan_2015), including different ways in which to measure the distance between the generated and the real data distributions gulrajani2017improved, penalties on the gradients around real data points gan_convergence_stability_2017, regularizers on the spectral radius of the Jacobian of the gradient vector numerics_of_gans_2017, and dynamically choosing the metric to minimize in the objective function via an evolutionary strategy as in evolutionary_gan_2018.
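A sketch of the penalty in Eq. 4 with the covariance structures estimated from mini-batches of flattened fields (the batch layout, function names, and default coefficient are assumptions of this sketch):

```python
import numpy as np

def sample_covariance(batch):
    """Empirical covariance of flattened fields; batch has shape (n_samples, n_dims)."""
    centered = batch - batch.mean(axis=0, keepdims=True)
    return centered.T @ centered / (batch.shape[0] - 1)

def constrained_generator_loss(gan_loss, real_batch, fake_batch, lam=1.0):
    """Eq. 4: original generator loss plus lambda * || cov_real - cov_fake ||_F,
    with both covariances re-estimated from the current mini-batches."""
    penalty = np.linalg.norm(sample_covariance(real_batch)
                             - sample_covariance(fake_batch), ord="fro")
    return gan_loss + lam * penalty

# Identical batches -> the penalty vanishes and the loss is unchanged
rng = np.random.default_rng(0)
batch = rng.standard_normal((64, 16))
print(constrained_generator_loss(0.7, batch, batch))  # 0.7
```

In a deep-learning framework the same penalty would be written with differentiable tensor operations so that its gradient flows back into the generator weights.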
2.3 Lattice Boltzmann simulation of Rayleigh–Bénard convection
The physical system we investigate in this work is Rayleigh–Bénard convection (RBC). Such flows are ultimately driven by buoyancy differences and are encountered at all spatial scales, ranging from small engineering devices to Earth sciences and even astrophysical phenomena. The RBC setup is a classical prototype for this class of flows. It consists of a slab of fluid bounded by two horizontal surfaces (for simplicity, in analytic or numerical studies the two bounding surfaces are sometimes assumed to extend infinitely in the horizontal directions). It was observed early on Rayleigh1916 that, when the lower wall is heated (while the upper wall is cooled), the dynamics of the system is especially interesting: a myriad of flow regimes appear, such as stationary or oscillating convection rolls, convection cells of different shapes, as well as turbulent flow. Although the detailed dynamics is strongly dependent on the initial conditions, the selection of the different regimes is mostly dictated by the control parameters, which for the RBC problem are the Rayleigh number Ra = g\beta\Delta T H^3/(\nu\kappa) (the relative magnitude of the buoyancy velocity to the thermal velocity), the Prandtl number Pr = \nu/\kappa (the ratio of the viscous diffusivity \nu to the thermal diffusivity \kappa), and the horizontal-to-vertical aspect ratio of the domain \Gamma = W/H. In these definitions, \beta is the coefficient of thermal expansion, g is the gravitational acceleration, the spatial scales H and W are the height and width respectively, and \Delta T is the temperature difference between the lower and the upper walls.
The dataset we used for training a GAN to emulate the RBC problem consists of simulation output produced with a two-dimensional fluid dynamics model based on the LBM. The numerical scheme is described in Wang2013a (see Chirila2018 for a more detailed description). The Prandtl number was set to the value for air. For simplicity (and to facilitate comparison with some theoretical results), periodic boundary conditions (BCs) were used at the lateral (vertical) walls, with a relatively high aspect ratio, which minimizes unphysical artifacts due to the periodicity itself. The vertical resolution was the same as used, e.g., in VanderPoel2013 (based on the theoretical criteria from Shishkina2010, which provide for sufficient resolution within the near-wall boundary layers). The value of the pseudo Mach number (a model-specific parameter) was chosen small enough to keep the artificial compressibility errors low.
The model was initially run for 400 eddy-turnover times, of which the first 320 (necessary for spin-up from the rest state) were discarded. To verify that the data used for GAN training did in fact correspond to the final (turbulent) regime, this simulation was later extended to 5330 eddy turnovers. Two physical metrics of the flow (total kinetic energy and Nusselt number close to the lower wall) are shown in Figure 2. Both metrics had already stabilized at the start of the training window, showing that the data used for GAN training were already part of the final turbulent regime. To further support this hypothesis of statistical stationarity of the flow, the average value of the Nusselt number within the training window was found to be close to the values reported by other authors; for example, Johnston2009 obtained comparable values for a similar setup with a lower aspect ratio (which artificially constrains the flow, considering that they also used periodic BCs).
For an overview of the complexity of the physical patterns in the training dataset, we show in Figure 3 the temperature field at three times. It should be noted that the training data (and, as a result, also the GAN results) are still not completely physical, because we only treat the two-dimensional case. Real engineering applications are three-dimensional, displaying additional physical phenomena (such as vortex stretching) that are beyond the scope of the present study. Nonetheless, two-dimensional studies are still useful, as they share much of the physics with the three-dimensional case (and therefore constitute a good test for the present GAN training with constraints). A future version of this study will treat the full, three-dimensional case.
3 Results
In this work, the performances of the standard GAN and the statistical constrained GAN are compared on three different datasets. The first dataset is generated by sampling from a 2D Gaussian process. The second and third datasets are from lattice Boltzmann simulations of 2D Rayleigh–Bénard convection with different Rayleigh numbers and spatial resolutions.
First, the results show that the constrained GAN better emulates the statistics of the training data by incorporating the constraint on the covariance structure, indicating that the statistical constraint leads to better convergence toward the global minimum, where all statistics of the training data can be captured by GANs. It should be noted that similar performance can also be achieved by fine-tuning the training process of the standard GAN. However, tuning a GAN largely relies on the experience of users and may not even be feasible for complex systems. Therefore, the constrained GAN is more stable in terms of training, making it a more practical tool for emulating complex systems.
Second, we show that the constrained GAN can achieve even higher-quality results at a significantly lower computational cost (up to an 80% reduction in model training cost) compared with the unconstrained model. In effect, the addition of the statistical constraint reduces the space of allowable solutions, forcing the training procedure (here, a standard stochastic gradient descent-based method) to explore only this reduced solution space. One could argue that, even if the model were trained by randomly selecting parameter configurations from the allowable set of solutions, the fact that the feasible set is now smaller would allow for a faster exploration on average, resulting in a reduced training time. We recognize that this is of course an experimental result on an idealized system, but these results are in line with the benefits expected from regularizing machine learning models.
3.1 2D Gaussian Process
We first compare the performance of the standard GAN and the statistical constrained GAN by using samples from a 2D Gaussian process as the training data. A squared-exponential kernel is used to specify the covariance structure of the Gaussian process:

k(\mathbf{x}, \mathbf{x}') = \sigma^2 \exp\!\Big( -\frac{\| \mathbf{x} - \mathbf{x}' \|^2}{2 l^2} \Big)    (8)

The length scale l is chosen in proportion to L, where L denotes the side length of the square domain. The training dataset is obtained by sampling from the 2D Gaussian process; we acquired 10000 samples in total. These samples are provided to the discriminator of the GAN with the label of 1 (true), and the objective is to use GANs to generate samples that capture the statistics of the training samples. The covariance of the training data with respect to the center point is presented in Fig. 4a, in which a symmetrical pattern can be seen, and the magnitude of the covariance gradually decays from the center to the sides.
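Generating such a training set can be sketched as follows (the grid size, kernel amplitude, jitter, and the particular length-scale value below are assumptions of this sketch, not the settings used in the paper):

```python
import numpy as np

def se_kernel(X, ell, sigma2=1.0):
    """Squared-exponential kernel of Eq. (8) on a set of points X with shape (n, 2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return sigma2 * np.exp(-d2 / (2.0 * ell ** 2))

def sample_gp_fields(n_grid, ell, n_samples, seed=0):
    """Draw zero-mean GP realizations on an n_grid x n_grid grid over the unit square."""
    xs = np.linspace(0.0, 1.0, n_grid)
    X = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    K = se_kernel(X, ell) + 1e-6 * np.eye(len(X))   # jitter for a stable Cholesky
    L = np.linalg.cholesky(K)
    z = np.random.default_rng(seed).standard_normal((len(X), n_samples))
    return (L @ z).T.reshape(n_samples, n_grid, n_grid)

fields = sample_gp_fields(n_grid=16, ell=0.2, n_samples=100)
print(fields.shape)  # (100, 16, 16)
```

Each realization is one "true" sample fed to the discriminator; by construction, the ensemble covariance of these fields approaches the kernel in Eq. (8) as the number of samples grows.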
The comparison of the covariance of the generated samples demonstrates the superior performance of the statistical constrained GAN, as shown in Fig. 4. Specifically, it can be seen in Fig. 4b that the estimated covariance of the samples generated by the standard GAN shows a noticeably asymmetrical pattern. A more quantitative comparison of correlation profiles along the diagonals in Fig. 5 also confirms that the results from the constrained GAN better capture the symmetrical pattern of the correlation field of the training data. This indicates that the standard GAN converges to a local minimum and the statistics of the training data are not truthfully reproduced. On the contrary, the covariance of the samples generated by the statistical constrained GAN, shown in Fig. 4c, demonstrates a better agreement with the training data, indicating that the statistical constraint guides GANs to converge better toward the global minimum.
The main purpose of introducing the statistical constraint can be interpreted in two ways. First, the constraint term serves as a regularization and restricts the loss function to a lower-dimensional manifold, on which convergence to the global minimum is easier to achieve. Second, the constraint term contributes a non-zero gradient when the training process reaches a local minimum, and thus the optimization is unlikely to stay there. For these two purposes, it is not necessary to define a precise metric of the difference between the distributions of the training data and the generated samples. Instead, an approximate distance metric may work well enough as long as it vanishes when the two distributions are identical. As shown in Eq. 4, the distance between two covariance matrices is defined in Euclidean space by using the Frobenius norm. The main reason for using the Frobenius norm is its relatively low computational cost compared with other distance metrics, e.g., the K–L divergence and the Riemannian distance. After obtaining the generated samples, we evaluated different distance metrics to quantify the difference between the statistics of the training data and the generated samples. It can be seen in Fig. 6 that the statistical constraint term generally leads to smaller distance metrics when implemented in different GAN architectures. This confirms that the constrained GAN indeed converges better toward the global minimum, instead of merely reducing the specific distance metric adopted in the training.
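As a reference for the higher-order statistics examined next, the standard sample skewness and kurtosis estimators can be sketched as follows (the Gaussian benchmark values are 0 and 3; the particular normalization used in Fig. 7 is not reproduced here):

```python
import numpy as np

def skewness(x):
    """Third standardized moment; 0 for a Gaussian."""
    c = x - x.mean()
    return np.mean(c ** 3) / np.mean(c ** 2) ** 1.5

def kurtosis(x):
    """Fourth standardized moment; 3 for a Gaussian."""
    c = x - x.mean()
    return np.mean(c ** 4) / np.mean(c ** 2) ** 2

# A symmetric three-point sample has zero skewness and kurtosis 1.5
x = np.array([-1.0, 0.0, 1.0])
print(skewness(x), kurtosis(x))  # 0.0 1.5
```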
We also investigated high-order statistics to illustrate the advantage of the statistical constrained GAN. The skewness and the kurtosis of the generated samples in Fig. 7 show that these high-order statistics have a better agreement with the training data when using the statistical constrained GAN, indicating that high-order statistics are also better captured by introducing the covariance constraint. It should be noted that both the skewness and the kurtosis in Fig. 7 are normalized by the benchmark values of the multivariate Gaussian distribution, and thus the ideal values should be one. As shown in Fig. 7a, the larger skewness of the samples generated by the standard GAN indicates that the sample distribution is more asymmetrical, which is confirmed by the visualization of the covariance in Fig. 4.

3.2 Rayleigh–Bénard convection at a low Rayleigh number
We also studied the performance of the statistical constrained GAN using training data from simulations of Rayleigh–Bénard convection. The training data are obtained from a further simplified RBC system, compared to the one described in Section 2.3, in order to first investigate a less chaotic RBC system at a lower Rayleigh number. Specifically, the simulation is performed in a square domain. The top wall is at a low temperature, the bottom wall is at a high temperature, and periodic boundary conditions are applied at the two side walls. The training data correspond to 10000 snapshots of the instantaneous flow field. The turbulent kinetic energy (TKE) of the training data is presented in Fig. 8. It can be seen that the TKE is relatively low near both the top and the bottom walls. In addition, higher TKE can be observed within two horizontal stripe-shaped regions above and below the center region. By carefully tuning the learning rate, it can be seen in Fig. 8 that the TKE of the generated samples shows a good agreement with the training data when using either the standard GAN or the statistical constrained GAN. This is because the standard GAN is capable of preserving all the statistics of the training data if the global minimum is achieved in training. Therefore, with a proper choice of training parameters, the performances of the standard GAN and the statistical constrained GAN should be comparable to each other, which is confirmed in Fig. 8.
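The TKE field can be estimated from an ensemble of instantaneous velocity snapshots; a minimal 2D sketch (assuming snapshots are stacked along the first axis):

```python
import numpy as np

def tke_field(u, v):
    """Turbulent kinetic energy per grid point: 0.5 * (var(u') + var(v')),
    with fluctuations taken about the ensemble mean over snapshots."""
    return 0.5 * (u.var(axis=0) + v.var(axis=0))

# A steady (fluctuation-free) ensemble has zero TKE everywhere
u = np.ones((10, 4, 4))
v = np.zeros((10, 4, 4))
print(tke_field(u, v).max())  # 0.0
```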
However, optimal training is usually difficult to achieve, especially when training a GAN to emulate more complex physical systems. In order to illustrate the stability of training GANs, we adopt another learning rate and present the TKE of the generated samples in Fig. 9. It can be seen in Fig. 9b that the result of the standard GAN changes significantly, showing a noticeable difference from the training data. Although the higher-TKE regions generated by the standard GAN are still located above and below the horizontal center region, the regions with high TKE become less continuous and do not capture the pattern in Fig. 8. On the contrary, the result of the statistical constrained GAN in Fig. 9c demonstrates smaller changes. It should be noted that the noisier result in Fig. 9c, compared with the one in Fig. 8c, is mainly due to the larger learning rate, which introduces more noise during the training. According to the comparison shown in Fig. 9, the statistical constrained GAN is more stable with respect to changes of the training parameters.
To further illustrate the more stable training achieved by the statistical constrained GAN, we studied the training of GANs with different learning rates and different penalty coefficients of the statistical constraint term. The comparison in Fig. 10 demonstrates that better performance can be achieved by introducing the statistical constraint term. Specifically, the mean squared error (MSE) of the TKE field is plotted against the number of epochs. It can be seen in Fig. 10a that the performances at 100 training epochs are comparable for different values of the penalty coefficient \lambda. However, with the same number of epochs, the mean squared errors of the statistical constrained GAN (\lambda > 0) are generally smaller than those of the standard GAN (\lambda = 0). With the larger learning rate shown in Fig. 10b, it can be seen that the training of the standard GAN (\lambda = 0) is not able to converge, and the MSE is noticeably larger than the results from the statistical constrained GANs.

The comparison of the spectrum of TKE in Fig. 11 also confirms that the statistical constrained GAN generates samples that better capture the pattern of the training data. Although this improvement is marginal at the smaller learning rate, it can still be seen in Fig. 11a that the spectrum from the statistical constrained GAN has a better agreement with the training data at relatively small wave numbers, e.g., between 2 and 5. In the region of large wave numbers, both the standard GAN and the statistical constrained GAN overpredict the level of the energy spectrum, mainly because of the small-scale noise in the generated data shown in Fig. 8. If the larger, less suitable learning rate is chosen, it can be seen in Fig. 11b that the energy spectrum provided by the statistical constrained GAN is much better than the result of the standard GAN, whose energy spectrum disagrees with the training data across the whole range of wave numbers.
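A sketch of the kinetic-energy spectrum computation used in such comparisons (binning Fourier energy by the nearest integer wavenumber magnitude; normalization conventions vary, and this one is an assumption of the sketch):

```python
import numpy as np

def energy_spectrum(u, v):
    """Kinetic-energy spectrum E(k) of a 2D velocity snapshot on an n x n grid,
    binned by the nearest integer wavenumber magnitude."""
    n = u.shape[0]
    e_hat = 0.5 * (np.abs(np.fft.fft2(u)) ** 2
                   + np.abs(np.fft.fft2(v)) ** 2) / n ** 4
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers, FFT ordering
    kmag = np.rint(np.sqrt(k[:, None] ** 2 + k[None, :] ** 2)).astype(int)
    return np.bincount(kmag.ravel(), weights=e_hat.ravel())

# A uniform flow puts all of its kinetic energy (0.5 * |u|^2) in the k = 0 bin
E = energy_spectrum(np.ones((32, 32)), np.zeros((32, 32)))
print(E[0], E[1:].max())
```

Averaging such spectra over many generated snapshots gives curves comparable to those in Figs. 11 and 16.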
3.3 Rayleigh–Bénard convection at a high Rayleigh number
We further studied another dataset from a Rayleigh–Bénard convection simulation at a higher Rayleigh number. More details about the lattice Boltzmann simulation can be found in Section 2.3. As shown in Figs. 12 and 13, the velocity distributions at two typical points confirm that the constrained GAN better emulates the training data with the same number of training epochs. Specifically, the results at the center point of the training domain are presented in Fig. 12, and the results at the upper-right point (referred to as the corner point) along the diagonal of the training domain are presented in Fig. 13. It can be seen in Fig. 12 that the results of the standard GAN at 20 epochs are noticeably biased. Similarly biased results of the standard GAN at 20 epochs can also be observed at the corner point, as shown in Fig. 13. With 100 epochs, the results of the standard GAN demonstrate much better agreement with the training data. On the other hand, the distributions of velocity at these two points from the constrained GAN with only 20 epochs are much better than the results of the standard GAN with the same number of training epochs. In addition, the results from the constrained GAN are even comparable to the results from the standard GAN with 100 epochs, indicating that the mean velocity field and the TKE of the training data are reasonably captured by the constrained GAN at a lower computational cost. More quantitative comparisons between the distributions in Figs. 12 and 13 are presented in Table 1 by using the Wasserstein distance.
Table 1: Wasserstein distances between the velocity distributions of the generated samples and those of the training data, at the center point and the corner point.

                               Center   Center   Corner   Corner
  Standard GAN (20 epochs)       2455     2795     3287     1070
  Standard GAN (100 epochs)       167      491      200      265
  Constrained GAN (20 epochs)     463     1171      114      323
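For equal-size empirical 1D samples, the Wasserstein-1 distance reduces to a mean absolute difference of order statistics; a sketch (the scaling of the values reported in Table 1 is not reproduced here):

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two equal-size empirical 1D samples:
    mean absolute difference of the sorted samples (order statistics)."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

# Shifting a sample by 1 gives W1 = 1; identical sets (in any order) give 0
print(wasserstein_1d(np.array([0.0, 1.0]), np.array([1.0, 2.0])))  # 1.0
print(wasserstein_1d(np.array([3.0, 1.0]), np.array([1.0, 3.0])))  # 0.0
```

For unequal sample sizes, the general empirical form compares the two quantile functions instead.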
The comparison of the mean velocity magnitude in Fig. 14 shows that the standard GAN at 20 epochs noticeably overestimates the velocity magnitude across the whole domain, as shown in Fig. 14c, even though the circular flow pattern is qualitatively emulated. With 100 epochs, the mean velocity provided by the standard GAN becomes more similar to the training data in the large-scale structure. However, unphysical small-scale structures exist in Fig. 14d, indicating that the spatial correlation of the mean velocity is not well captured by the standard GAN. Compared to the results of the standard GAN, the result of the constrained GAN in Fig. 14b better emulates the training data with only 20 epochs. Although there is still some small-scale noise in Fig. 14b, the noise level is much lower than in the result of the standard GAN with 100 epochs.
The comparison of the TKE fields in Fig. 15 also demonstrates the better performance of the constrained GAN. It should be noted that the result in Fig. 15d from the standard GAN involves 100 epochs, while the result in Fig. 15b from the statistical constrained GAN involves only 20 epochs. Even with fewer training epochs, the result of the statistical constrained GAN shows better agreement with the training data. Specifically, the spatial pattern of the training data around the lower-left corner region is much better captured by the constrained GAN. Improvements can also be observed in other regions across the whole domain, as shown in Fig. 15. The results in Figs. 14 and 15 together demonstrate that the constrained GAN is a more practical tool for preserving important statistics (e.g., mean and second-order moments) when emulating complex systems.
We also studied the spectrum of TKE in Fig. 16. It can be seen that the result of the standard GAN at 20 epochs overestimates the energy spectrum across the whole range of wave numbers. With 100 training epochs, the result of the standard GAN better captures the energy spectrum at relatively small wave numbers (below 5), but still shows noticeable differences from the training data at large wave numbers (small-scale structures in space). In addition, the noise level increases with more training epochs when using the standard GAN, indicating that asymptotic improvement of performance may not be achieved simply by training longer. On the other hand, the result of the constrained GAN at 20 epochs successfully captures the energy spectrum of the training data for most wave numbers, with the only exception being very large wave numbers.
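The energy spectrum compared in Fig. 16 can be estimated by binning the 2-D Fourier energy density of a velocity field into integer wavenumber shells. The sketch below assumes a periodic, square field; the paper's exact windowing and normalization may differ:

```python
import numpy as np

def energy_spectrum(u, v):
    """Radially averaged kinetic-energy spectrum of a 2-D velocity field.

    u, v : square 2-D arrays (the two velocity components).
    Returns integer shell wavenumbers and E(k) summed over each shell.
    The normalization is chosen so that sum(E) equals the mean kinetic
    energy 0.5 * <u^2 + v^2> (Parseval's theorem).
    """
    n = u.shape[0]
    uh = np.fft.fft2(u)
    vh = np.fft.fft2(v)
    # Spectral energy density per mode.
    e = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2) / n**4
    # Integer wavenumber magnitude for each mode.
    k1 = np.fft.fftfreq(n) * n
    kmag = np.hypot(*np.meshgrid(k1, k1, indexing="ij"))
    shells = np.rint(kmag).astype(int)
    ek = np.bincount(shells.ravel(), weights=e.ravel())
    return np.arange(ek.size), ek
```

Plotting E(k) for generated and training fields on log axes yields a spectrum comparison like Fig. 16, where excess energy at large wave numbers corresponds to the small-scale noise discussed above.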
4 Conclusion
Recently, generative adversarial networks (GANs) have shown great potential for emulating and even predicting the solutions of PDE-governed physical systems by training on high-fidelity simulation data. However, GANs are known to be much more difficult to train than regular neural networks, preventing their application to more complex and realistic physical systems. In this work, we introduce a statistically constrained approach to improve the robustness of training GANs. Specifically, the difference between the covariance structures of the generated samples and of the training data is quantified and incorporated into the original loss function of the generator as a penalty term. The performance of the constrained GAN is evaluated against the standard GAN by studying a Gaussian random field and the Rayleigh–Bénard convection. The results demonstrate that the proposed statistical regularization leads to more robust training than the standard GAN. Even though results of similar quality can be achieved by carefully tuning the hyperparameters of the standard GAN, the constrained GAN significantly reduces (by up to 80%) the training cost of reaching a solution of similar or even better quality. With the growth of high-fidelity simulation databases of physical systems, this approach has great potential as an alternative to the explicit modeling of closures or parameterizations for unresolved physics. By better preserving high-order statistics of existing high-fidelity simulations, this work potentially leads to a promising class of applications of GANs in simulating chaotic physical systems, e.g., turbulence or Earth’s climate.
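As a rough illustration of the statistical constraint described above, the sketch below penalizes the mismatch between the sample covariance of a generated batch and a precomputed training-data covariance. The squared Frobenius norm used here is an assumption for illustration only; the paper's actual covariance-distance measure and penalty weighting may differ:

```python
import numpy as np

def covariance_penalty(generated, cov_train, weight=1.0):
    """Statistical-constraint penalty for the generator loss (illustrative).

    generated : (batch, n_points) array of flattened generated fields.
    cov_train : (n_points, n_points) covariance matrix of the training data.
    Returns a scalar that grows when the second-order statistics of the
    generated samples deviate from those of the training data.
    """
    cov_gen = np.cov(generated, rowvar=False)
    return weight * np.sum((cov_gen - cov_train) ** 2)

# Schematically, the constrained generator loss would then read:
# loss_G = adversarial_loss + covariance_penalty(generated_batch, cov_train, lam)
```

Because the penalty vanishes only when the generated covariance matches the target, it steers the generator toward the correct second-order statistics from the first epochs, consistent with the reduced training cost reported above.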
Acknowledgments
This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
References
 (1) J.-X. Wang, J.-L. Wu, H. Xiao, Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data, Physical Review Fluids 2 (3) (2017) 034603.
 (2) J.-L. Wu, H. Xiao, E. Paterson, Physics-informed machine learning approach for augmenting turbulence models: A comprehensive framework, Physical Review Fluids 3 (7) (2018) 074602.
 (3) J. Ling, A. Kurzawski, J. Templeton, Reynolds averaged turbulence modelling using deep neural networks with embedded invariance, Journal of Fluid Mechanics 807 (2016) 155–166.
 (4) N. D. Brenowitz, C. S. Bretherton, Prognostic validation of a neural network unified physics parameterization, Geophysical Research Letters 45 (12) (2018) 6289–6298.
 (5) S. Rasp, M. S. Pritchard, P. Gentine, Deep learning to represent subgrid processes in climate models, Proceedings of the National Academy of Sciences 115 (39) (2018) 9684–9689.
 (6) D. J. Gagne, A. McGovern, M. Xue, Machine learning enhancement of storm-scale ensemble probabilistic quantitative precipitation forecasts, Weather and Forecasting 29 (4) (2014) 1024–1043.
 (7) P. D. Dueben, P. Bauer, Challenges and design choices for global weather and climate models based on machine learning, Geoscientific Model Development 11 (10) (2018) 3999–4009.
 (8) T. Schneider, S. Lan, A. Stuart, J. Teixeira, Earth system modeling 2.0: A blueprint for models that learn from observations and targeted high-resolution simulations, Geophysical Research Letters 44 (24).
 (9) Y. Liu, E. Racah, J. Correa, A. Khosrowshahi, D. Lavers, K. Kunkel, M. Wehner, W. Collins, et al., Application of deep convolutional neural networks for detecting extreme weather in climate datasets, arXiv preprint arXiv:1605.01156.
 (10) E. Racah, C. Beckham, T. Maharaj, S. E. Kahou, M. Prabhat, C. Pal, ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events, in: Advances in Neural Information Processing Systems, 2017, pp. 3402–3413.
 (11) M. Prabhat, J. Biard, S. Ganguly, S. Ames, K. Kashinath, S. Kim, S. Kahou, T. Maharaj, C. Beckham, T. O’Brien, et al., ClimateNet: A machine learning dataset for climate science research, in: AGU Fall Meeting Abstracts, 2017.
 (12) Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436.
 (13) N. Thomas, T. Smidt, S. Kearnes, L. Yang, L. Li, K. Kohlhoff, P. Riley, Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds, arXiv preprint arXiv:1802.08219.
 (14) A. Karpatne, G. Atluri, J. H. Faghmous, M. Steinbach, A. Banerjee, A. Ganguly, S. Shekhar, N. Samatova, V. Kumar, Theory-guided data science: A new paradigm for scientific discovery from data, IEEE Transactions on Knowledge and Data Engineering 29 (10) (2017) 2318–2331.
 (15) M. Raissi, P. Perdikaris, G. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics 378 (2019) 686–707.
 (16) M. Raissi, A. Yazdani, G. E. Karniadakis, Hidden fluid mechanics: A Navier–Stokes informed deep learning framework for assimilating flow visualization data, arXiv preprint arXiv:1808.04327.
 (17) C. Ma, J. Wang, et al., Model reduction with memory and the machine learning of dynamical systems, arXiv preprint arXiv:1808.04258.
 (18) B. Lusch, J. N. Kutz, S. L. Brunton, Deep learning for universal linear embeddings of nonlinear dynamics, Nature Communications 9 (1) (2018) 4950.
 (19) R. King, P. Graf, M. Chertkov, From deep to physics-informed learning of turbulence: Diagnostics, Bulletin of the American Physical Society 62.
 (20) R. King, O. Hennigh, A. Mohan, M. Chertkov, From deep to physics-informed learning of turbulence: Diagnostics, Workshop on Modeling and Decision-Making in the Spatiotemporal Domain, NIPS 2018, arXiv preprint arXiv:1810.07785.
 (21) M. Mustafa, D. Bard, W. Bhimji, R. Al-Rfou, Z. Lukić, Creating virtual universes using generative adversarial networks, arXiv preprint arXiv:1706.02390.
 (22) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
 (23) K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, D. Krishnan, Unsupervised pixel-level domain adaptation with generative adversarial networks, in: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1, 2017, p. 7.
 (24) A. Radford, L. Metz, S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, arXiv preprint arXiv:1511.06434.
 (25) M. Arjovsky, S. Chintala, L. Bottou, Wasserstein GAN, arXiv preprint arXiv:1701.07875.
 (26) T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, Improved techniques for training GANs, in: Advances in Neural Information Processing Systems, 2016, pp. 2234–2242.
 (27) Y. Xie, E. Franz, M. Chu, N. Thuerey, tempoGAN: A temporally coherent, volumetric GAN for super-resolution fluid flow, arXiv preprint arXiv:1801.09710.
 (28) L. Yang, D. Zhang, G. E. Karniadakis, Physics-informed generative adversarial networks for stochastic differential equations, arXiv preprint arXiv:1811.02033.
 (29) P. Stinis, T. Hagge, A. M. Tartakovsky, E. Yeung, Enforcing constraints for interpolation and extrapolation in generative adversarial networks, arXiv preprint arXiv:1803.08182.
 (30) Y. Zeng, J.-L. Wu, H. Xiao, Enforcing physical constraints on generative adversarial networks with application to fluid flows, in preparation (2019).
 (31) A. Radford, L. Metz, S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, arXiv preprint arXiv:1511.06434.
 (32) Y. Lu, A. Zhong, Q. Li, B. Dong, Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations, in: J. Dy, A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning, Vol. 80 of Proceedings of Machine Learning Research, PMLR, Stockholmsmässan, Stockholm, Sweden, 2018, pp. 3282–3291.
 (33) O. Hennigh, Lat-Net: Compressing Lattice Boltzmann flow simulations using deep neural networks, arXiv preprint arXiv:1705.09036.
 (34) Z. Long, Y. Lu, X. Ma, B. Dong, PDE-Net: Learning PDEs from data, in: Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018.
 (35) J. Weickert, H. Hagen, Visualization and Processing of Tensor Fields, Springer Science & Business Media, 2005.
 (36) A. Cherian, S. Sra, A. Banerjee, N. Papanikolopoulos, Efficient similarity search for covariance matrices via the Jensen-Bregman LogDet divergence, in: 2011 IEEE International Conference on Computer Vision (ICCV), IEEE, 2011, pp. 2399–2406.
 (37) M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., TensorFlow: A system for large-scale machine learning, in: OSDI, Vol. 16, 2016, pp. 265–283.
 (38) C. Michelen, J.-L. Wu, H. Xiao, Constructing PDE-informed covariance for quantification of model uncertainties, in preparation (2019).
 (39) I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, A. C. Courville, Improved training of Wasserstein GANs, in: Advances in Neural Information Processing Systems, 2017, pp. 5767–5777.
 (40) N. Kodali, J. D. Abernethy, J. Hays, Z. Kira, How to train your DRAGAN, arXiv preprint arXiv:1705.07215.
 (41) L. M. Mescheder, S. Nowozin, A. Geiger, The numerics of GANs, arXiv preprint arXiv:1705.10461.
 (42) C. Wang, C. Xu, X. Yao, D. Tao, Evolutionary generative adversarial networks, arXiv preprint arXiv:1803.00657.
 (43) L. Rayleigh, LIX. On convection currents in a horizontal layer of fluid, when the higher temperature is on the under side, Philosophical Magazine Series 6 32 (192) (1916) 529–546. doi:10.1080/14786441608635602.
 (44) J. Wang, D. Wang, P. Lallemand, L.-S. Luo, Lattice Boltzmann simulations of thermal convective flows in two dimensions, Computers and Mathematics with Applications 65 (2) (2013) 262–286. doi:10.1016/j.camwa.2012.07.001.
 (45) D. B. Chirila, Towards Lattice Boltzmann models for climate sciences: The GeLB programming language with applications, Ph.D. thesis, University of Bremen (2018). URL https://elib.suub.uni-bremen.de/peid/D00106468.html
 (46) E. P. van der Poel, R. J. A. M. Stevens, D. Lohse, Comparison between two- and three-dimensional Rayleigh–Bénard convection, Journal of Fluid Mechanics 736 (2013) 177–194. doi:10.1017/jfm.2013.488.
 (47) O. Shishkina, R. J. A. M. Stevens, S. Grossmann, D. Lohse, Boundary layer structure in turbulent thermal convection and its consequences for the required numerical resolution, New Journal of Physics 12 (7) (2010) 075022. doi:10.1088/1367-2630/12/7/075022.
 (48) H. Johnston, C. R. Doering, A comparison of turbulent thermal convection between conditions of constant temperature and constant heat flux boundaries, Physical Review Letters 102 (6) (2009) 064501. doi:10.1103/PhysRevLett.102.064501.