Machine learning accelerated computational fluid dynamics

01/28/2021 · by Dmitrii Kochkov, et al. · Google

Numerical simulation of fluids plays an essential role in modeling many physical phenomena, such as weather, climate, aerodynamics and plasma physics. Fluids are well described by the Navier-Stokes equations, but solving these equations at scale remains daunting, limited by the computational cost of resolving the smallest spatiotemporal features. This leads to unfavorable trade-offs between accuracy and tractability. Here we use end-to-end deep learning to improve approximations inside computational fluid dynamics for modeling two-dimensional turbulent flows. For both direct numerical simulation of turbulence and large eddy simulation, our results are as accurate as baseline solvers with 8-10x finer resolution in each spatial dimension, resulting in 40-80x computational speedups. Our method remains stable during long simulations, and generalizes to forcing functions and Reynolds numbers outside of the flows on which it is trained, in contrast to black box machine learning approaches. Our approach exemplifies how scientific computing can leverage machine learning and hardware accelerators to improve simulations without sacrificing accuracy or generalization.


I Introduction

Simulation of complex physical systems described by nonlinear partial differential equations is central to engineering and physical science, with applications ranging from weather [44, 8] and climate [46, 40], engineering design of vehicles or engines [2], to wildfires [4] and plasma physics [51]. Despite a direct link between the equations of motion and the basic laws of physics, it is impossible to carry out direct numerical simulations at the scale required for these important problems. This fundamental issue has stymied progress in scientific computation for decades, and arises from the fact that an accurate simulation must resolve the smallest spatiotemporal scales. A paradigmatic example is turbulent fluid flow [42], underlying simulations of weather, climate, and aerodynamics. The size of the smallest eddy is tiny: for an airplane with chord length of 2 meters, the smallest length scale (the Kolmogorov scale) [21] is O(10^-6) m. Classical methods for computational fluid dynamics (CFD), such as finite differences, finite volumes, finite elements and pseudo-spectral methods, are only accurate if fields vary smoothly on the mesh, and hence meshes must resolve the smallest features to guarantee convergence. For a turbulent fluid flow, the requirement to resolve the smallest flow features implies a computational cost scaling like Re^3, where Re = UL/ν, with U and L the typical velocity and length scales and ν the kinematic viscosity. A tenfold increase in Re leads to a thousandfold increase in the computational cost. Consequently, direct numerical simulation (DNS) for, e.g., climate and weather is impossible. Instead, it is traditional to use smoothed versions of the Navier-Stokes equations [39, 38] that allow coarser grids while sacrificing accuracy, such as Reynolds Averaged Navier-Stokes (RANS) [14, 1] and Large-Eddy Simulation (LES) [49, 28]. For example, current state-of-the-art LES with mesh sizes of tens to hundreds of millions of grid points has been used in the design of internal combustion engines [35], gas turbine engines [54, 20], and turbo-machinery [3]. Despite promising progress in LES over the last two decades, there are severe limits to what can be accurately simulated. This is mainly due to the first-order dependence of LES on the sub-grid scale (SGS) model, especially for flows whose rate-controlling scale is unresolved [43].

Figure 1: Overview of our approach and results. (a) Accuracy versus computational cost with our baseline (direct simulation) and ML-accelerated (learned interpolation) solvers. The x-axis corresponds to pointwise accuracy, showing how long the simulation remains highly correlated with the ground truth, whereas the y-axis shows the computational time needed to carry out one simulation time-unit on a single TPU core. Each point is annotated by the size of the corresponding spatial grid; for details see the appendix. (b) Illustrative training and validation examples, showing the strong generalization capabilities of our model. (c) Structure of a single time step for our "learned interpolation" model, with a convolutional neural net controlling learned approximations inside the convection calculation of a standard numerical solver. u and v refer to the advected and advecting velocity components. For d spatial dimensions there are d² replicates of the convective flux module, corresponding to the flux of each velocity component in each spatial direction.

Here, we introduce a method for calculating the accurate time evolution of solutions to nonlinear partial differential equations, while using an order of magnitude coarser grid than is traditionally required for the same accuracy. This is a novel type of numerical solver that does not average unresolved degrees of freedom, but instead uses discrete equations that give pointwise accurate solutions on an unresolved grid. We discover these algorithms using machine learning, by replacing the components of traditional solvers most affected by the loss of resolution with learned alternatives. As shown in Fig. 1(a), for a two-dimensional direct numerical simulation of a turbulent flow, our algorithm maintains accuracy while using 10x coarser resolution in each dimension, resulting in a roughly 80-fold improvement in computational time with respect to an advanced numerical method of similar accuracy. The model learns how to interpolate local features of solutions and hence can accurately generalize to different flow conditions such as different forcings and even different Reynolds numbers [Fig. 1(b)]. We also apply the method to a high resolution LES simulation of a turbulent flow and show similar performance enhancements, maintaining pointwise accuracy on LES simulations using 8x coarser grids with a roughly 40-fold computational speedup. There has been a flurry of recent work using machine learning to improve turbulence modeling. One major family of approaches uses ML to fit closures to classical turbulence models based on agreement with high resolution direct numerical simulations (DNS) [31, 17, 36, 9].

While potentially more accurate than traditional turbulence models, these new models have not achieved reduced computational expense. Another major thrust uses "pure" ML, aiming to replace the entire Navier-Stokes simulation with approximations based on deep neural networks [24, 30, 11, 53, 33, 19]. A pure ML approach can be extremely efficient, avoiding the severe time-step constraints required for stability with traditional approaches. Because these models do not include the underlying physics, they often struggle to enforce constraints such as conservation of momentum and incompressibility. While these models often perform well on data from the training distribution, they tend to struggle with generalization; for example, they perform worse when exposed to novel forcing terms. A third approach, which we build upon in this work, uses ML to correct errors in cheap, under-resolved simulations [48, 52, 41]. These models borrow strength from the coarse-grained simulations. In this work we design algorithms that accurately solve the equations on coarser grids by replacing the components most affected by the resolution loss with better performing learned alternatives. We use data driven discretizations [5, 56] to interpolate differential operators onto a coarse mesh with high accuracy [Fig. 1(c)]. We train the solver inside a standard numerical method for solving the underlying PDEs as a differentiable program, with the neural networks and the numerical method written in a framework (JAX [15]) supporting reverse-mode automatic differentiation. This allows for end-to-end gradient based optimization of the entire algorithm, similar to prior work on density functional theory [29], molecular dynamics [47] and fluids [48, 52]. The methods we derive are equation specific, and require training a coarse resolution solver with high resolution ground truth simulations. Since the dynamics of a partial differential equation are local, the high resolution simulations can be carried out on a small domain. The models remain stable during long simulations and have robust and predictable generalization properties, with models trained on small domains producing accurate simulations on larger domains, with different forcing functions and even with different Reynolds numbers. Comparison to pure ML baselines shows that generalization arises from the physical constraints inherent in the formulation of the method.

II Background

II.1 Navier-Stokes

Incompressible fluids are modeled by the Navier-Stokes equations:

∂u/∂t = −∇ · (u ⊗ u) + ν∇²u − (1/ρ)∇p + f,    (1a)
∇ · u = 0,    (1b)

where u is the velocity field, f the external forcing, and ⊗ denotes a tensor product. The density ρ is a constant, and the pressure p is a Lagrange multiplier used to enforce (1b). The Reynolds number Re = UL/ν dictates the balance between the convection (first) and diffusion (second) terms on the right hand side of (1a). Higher Reynolds number flows dominated by convection are more complex and thus generally harder to model; flows are considered "turbulent" if Re ≫ 1. Direct numerical simulation (DNS) solves (1) directly, whereas large eddy simulation (LES) solves a spatially filtered version. In the equations of LES, u is replaced by a filtered velocity ū and a sub-grid term ∇ · τ is added to the right side of (1a), with the sub-grid stress τ defined as the difference between the filtered product ū ⊗ ū and the filter of u ⊗ u. Because τ is un-modeled, solving LES also requires a choice of closure model for τ as a function of ū. Numerical simulation of both DNS and LES further requires a discretization step to approximate the continuous equations on a grid. Traditional discretization methods (e.g., finite differences) converge to an exact solution as the grid spacing becomes small, with LES converging faster because it models a smoother quantity. Together, discretization and closure models are the two principal sources of error when simulating fluids on coarse grids [48, 18].
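The unclosed sub-grid stress can be made concrete with a small numerical sketch. This is a minimal 1D illustration with a periodic box filter; `box_filter`, the grid size, and the synthetic velocity slices are our own choices for illustration, not the paper's:

```python
import numpy as np

def box_filter(f, width=4):
    """Periodic moving-average filter over `width` cells (via FFT convolution)."""
    n = f.shape[-1]
    kernel_hat = np.fft.fft(np.ones(width) / width, n)
    return np.real(np.fft.ifft(np.fft.fft(f) * kernel_hat))

def subgrid_stress(u, v, width=4):
    """tau = filter(u * v) - filter(u) * filter(v): the term LES leaves unclosed,
    nonzero because filtering does not commute with products."""
    return box_filter(u * v, width) - box_filter(u, width) * box_filter(v, width)

# synthetic 1D velocity slices: fine-scale structure on top of a smooth flow
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.1 * np.sin(32 * x)
v = np.cos(x) + 0.1 * np.cos(32 * x)
tau = subgrid_stress(u, v)   # nonzero: this is what a closure model must supply
```

The residual `tau` vanishes only if the velocity has no content below the filter scale, which is exactly why coarse LES grids need a closure model.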

Figure 2: Learned interpolation achieves accuracy of direct simulation at higher resolution. (a) Evolution of predicted vorticity fields for reference (DS 2048 × 2048), learned (LI 64 × 64) and baseline (DS 64 × 64) solvers, starting from the same initial velocities. The yellow box traces the evolution of a single vortex. (b) Comparison of the vorticity correlation between predicted flows and the reference solution for our model and DNS solvers. (c) Energy spectrum E(k) scaled by k⁵, averaged between time-steps 10000 and 20000, when all solutions have decorrelated with the reference solution.

II.2 Learned solvers

Our principal aim is to accelerate DNS without compromising accuracy or generalization. To that end, we consider ML modeling approaches that enhance a standard CFD solver when run on inexpensive-to-simulate coarse grids. We expect that ML models can improve the accuracy of the numerical solver via effective super-resolution of missing details [5]. Because we want to train neural networks for approximation inside our solver, we wrote a new CFD code in JAX [15], which allows us to efficiently calculate gradients via automatic differentiation. Our CFD code is a standard implementation of a finite volume method on a regular staggered mesh, with first-order explicit time-stepping for convection, diffusion and forcing, and implicit treatment of pressure; for details see the appendix. The algorithm works as follows: in each time-step, the neural network generates a latent vector at each grid location based on the current velocity field, which is then used by the sub-components of the solver to account for local solution structure. Our neural networks are convolutional, which enforces translation invariance and allows them to be local in space. We then use components from standard numerical methods to enforce inductive biases corresponding to the physics of the Navier-Stokes equations, as illustrated by the light gray boxes in Fig. 1(c): the convective flux model improves the approximation of the discretized convection operator; the divergence operator enforces local conservation of momentum according to a finite volume method; the pressure projection enforces incompressibility; and the explicit time step operator forces the dynamics to be continuous in time, allowing for the incorporation of additional time varying forces. "DNS on a coarse grid" blurs the boundaries of traditional DNS and LES modeling, and thus invites a variety of data-driven approaches. In this work we focus on two types of ML components: learned interpolation and learned correction. Both focus on the convection term, the key term in (1) for turbulent flows.

II.2.1 Learned interpolation (LI)

In a finite volume method, u denotes a vector field of volume averages over unit cells, and the cell-averaged divergence can be calculated via Gauss' theorem by summing the surface flux over each face. This suggests that our only required approximation is calculating the convective flux u ⊗ u on each face, which requires interpolating u from where it is defined. Rather than using typical polynomial interpolation, which is suitable for interpolation without prior knowledge, here we use an approach that we call learned interpolation, based on data driven discretizations [5]. We use the outputs of the neural network to generate interpolation coefficients based on local features of the flow, similar to the fixed coefficients of polynomial interpolation. This allows us to incorporate two important priors: (1) the equation maintains the same symmetries and scaling properties (e.g., rescaling of coordinates) as the original equations, and (2) as the mesh spacing vanishes, the interpolation module retains polynomial accuracy so that our model performs well in the regime where traditional numerical methods excel.
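The constraint on learned interpolation coefficients can be sketched as follows. This is a simplified stand-in, showing only the weakest useful constraint (coefficients summing to one, which makes interpolation of constant fields exact); the paper's data driven discretizations enforce higher-order polynomial accuracy, and all names here are illustrative:

```python
import numpy as np

def constrained_coefficients(raw, base):
    """Project raw network outputs onto valid interpolation coefficients:
    base stencil plus a zero-mean perturbation, so coefficients sum to 1
    and constant fields are interpolated exactly."""
    perturbation = raw - raw.mean(axis=-1, keepdims=True)
    return base + perturbation

# 4-point centered stencil for linear interpolation to a face midpoint;
# `raw` stands in for per-face neural network outputs.
base = np.array([0.0, 0.5, 0.5, 0.0])
rng = np.random.default_rng(0)
raw = rng.normal(size=(10, 4))            # one coefficient set per face
coeffs = constrained_coefficients(raw, base)

stencil_values = np.full(4, 3.7)          # a constant field sampled on the stencil
interpolated = coeffs @ stencil_values    # 3.7 for every face, by construction
```

Setting the network output to zero recovers the base stencil, so the scheme falls back to standard numerical interpolation rather than to arbitrary behavior.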

II.2.2 Learned correction (LC)

An alternative approach, closer in spirit to LES modeling, is to simply model a residual correction to the discretized Navier-Stokes equations [(1)] on a coarse grid [52, 48]. Such an approach generalizes traditional closure models for LES, but in principle can also account for discretization error. We consider learned correction models of the form u_new = u* + LC(u*), where LC is a neural network and u* is the uncorrected velocity field produced by the numerical solver on the coarse grid. Modeling the residual is appropriate both from the perspective of a temporally discretized closure model, and pragmatically because the relative error between u* and the corrected state in a single time step is small. LC models have fewer inductive biases than LI models, but they are simpler to implement and potentially more flexible. We also explored LC models restricted to take the form of classical closure models (e.g., flow-dependent effective tensor viscosity models), but the restrictions hurt model performance and stability.
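The learned correction update is simple to sketch. The solver and network below are trivial stand-ins (toy dynamics and a single linear layer), intended only to show the u_new = u* + LC(u*) structure, not real CFD:

```python
import numpy as np

def solver_step(u, dt=0.01):
    """Stand-in for one coarse-grid step of the underlying numerical solver
    (toy advection-like update, not real CFD)."""
    return u + dt * np.gradient(u)

def learned_correction(u_star, weights):
    """Stand-in for the LC neural network: here a single linear layer."""
    return u_star @ weights

def lc_step(u, weights):
    """Learned-correction update: u_new = u* + LC(u*), where u* is the
    uncorrected output of the solver."""
    u_star = solver_step(u)
    return u_star + learned_correction(u_star, weights)

u0 = np.linspace(0.0, 1.0, 8)
w0 = np.zeros((8, 8))          # zero weights: the correction vanishes
corrected = lc_step(u0, w0)    # identical to the plain solver step
```

With zero weights the model reduces exactly to the baseline solver, which is the sense in which LC "borrows strength" from the coarse-grained simulation.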

II.2.3 Training

The training procedure tunes the machine learning components of the solver to minimize the discrepancy between an expensive high resolution simulation and a simulation produced by the model on a coarse grid. We accomplish this via supervised training, using a cumulative point-wise error between the predicted and ground truth velocities as the loss function. The ground truth trajectories are obtained from a high resolution simulation that is then coarsened to the simulation grid. Including the numerical solver in the training loss ensures fully "model consistent" training, where the model sees its own outputs as inputs [18, 36, 52], unlike typical a priori training where simulation is only performed offline. As an example, for the Kolmogorov flow simulations below with Reynolds number Re = 1000, our ground truth simulation had a resolution of 2048 cells along each spatial dimension. We subsample these ground truth trajectories along each dimension and in time to match the coarse simulation grid. For training we use trajectories of sequential time steps, each starting from a different random initial condition. To evaluate the model, we generate much longer trajectories (tens of thousands of time steps) to verify that models remain stable and produce plausible outputs. We unroll the model for 32 time steps when calculating the loss, which we find improves model performance for long time trajectories [52], in some cases using gradient checkpoints at each model step to reduce memory usage [22].
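The model consistent, unrolled training loss can be sketched in plain NumPy (forward pass only; the paper differentiates through the unrolled solver end-to-end in JAX). The toy `model_step` dynamics and all parameter values here are our own illustrative choices:

```python
import numpy as np

def model_step(u, theta):
    """One step of the hybrid solver; `theta` stands for the learned parameters
    (toy linear dynamics here, not real CFD)."""
    return u + theta * np.roll(u, 1)

def unrolled_loss(theta, u0, targets):
    """Cumulative pointwise MSE over an unrolled trajectory: the model is fed
    its own outputs at every step, so training is 'model consistent'."""
    u, loss = u0, 0.0
    for target in targets:                 # e.g. a 32-step unroll, as in the paper
        u = model_step(u, theta)
        loss += np.mean((u - target) ** 2)
    return loss / len(targets)

# fake ground truth generated by the same dynamics with theta = 0.1
u0 = np.ones(16)
traj, u = [], u0.copy()
for _ in range(32):
    u = model_step(u, 0.1)
    traj.append(u.copy())

best = unrolled_loss(0.1, u0, traj)    # zero at the generating parameters
worse = unrolled_loss(0.2, u0, traj)   # grows quickly away from them
```

Because errors compound over the unroll, the loss penalizes parameters that drift over long trajectories, which is precisely what single-step a priori training misses.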

Figure 3: Learned interpolation generalizes well to decaying turbulence. (a) Evolution of predicted vorticity fields as a function of time. (b) Vorticity correlation between predicted flows and the reference solution. (c) Energy spectrum E(k) scaled by k⁵, averaged between time-steps 2000 and 2500, when all solutions have decorrelated with the reference solution.

III Results

We take a utilitarian perspective on model evaluation: simulation methods are good insofar as they demonstrate accuracy, computational efficiency and generalizability. Accuracy and computational efficiency require the method to be faster than the DNS baseline while maintaining accuracy for long term predictions; generalization means that although the model is trained on specific flows, it must readily generalize to new simulation settings, including different forcings and different Reynolds numbers. In what follows, we compare the accuracy and generalizability of our method to both direct numerical simulation and several existing ML-based approaches for simulations of two-dimensional turbulent flow. In particular, we first consider Kolmogorov flow [16], a parametric family of forced turbulent flows obeying the Navier-Stokes equation [(1)], with forcing f = sin(4y)x̂ − 0.1u, where the second term is a velocity dependent drag preventing accumulation of energy at large scales [12]. Kolmogorov flow produces a statistically stationary turbulent flow, with flow complexity controlled by a single parameter, the Reynolds number Re.
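The Kolmogorov forcing can be evaluated directly on a grid. The wavenumber (4) and drag coefficient (0.1) below reflect our reading of the setup and should be treated as assumptions; the grid size is arbitrary:

```python
import numpy as np

def kolmogorov_forcing(u, v, y, wavenumber=4, drag=0.1):
    """f = sin(wavenumber * y) x_hat - drag * (u, v): a steady sinusoidal body
    force in x plus a linear drag that drains energy from the largest scales."""
    fx = np.sin(wavenumber * y) - drag * u
    fy = -drag * v
    return fx, fy

n = 64
# y coordinate broadcast across a periodic n x n grid on [0, 2*pi)
y = np.linspace(0, 2 * np.pi, n, endpoint=False)[:, None] * np.ones((1, n))
u = np.zeros((n, n))
v = np.zeros((n, n))
fx, fy = kolmogorov_forcing(u, v, y)   # at rest: pure sinusoidal force in x
```

At rest the drag vanishes, so the mean of fx over the periodic domain is zero: the forcing injects momentum at one scale without a net body force.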

III.1 Accelerating DNS

The accuracy of a direct numerical simulation quickly degrades once the grid resolution cannot capture the smallest details of the solution. In contrast, our ML-based approach strongly mitigates this effect. Figure 2 shows the results of training and evaluating our model on Kolmogorov flows at Reynolds number Re = 1000. All datasets were generated using high resolution DNS, followed by a coarsening step.

III.1.1 Accuracy

The scalar vorticity field ω = ∂v/∂x − ∂u/∂y is a convenient way to describe two-dimensional incompressible flows [12]. Accuracy can be quantified by correlating vorticity fields between the ground truth solution and the predicted state (in our case the Pearson correlation reduces to a cosine distance, because the flows considered here have zero mean velocity). Fig. 2 compares the learned interpolation model (64 × 64) to fully resolved DNS of Kolmogorov flow (2048 × 2048), using an initial condition that was not included in the training set. Strikingly, the learned discretization model matches the pointwise accuracy of DNS with a ~10× finer grid. The eventual loss of correlation with the reference solution is expected due to the chaotic nature of turbulent flows; this is marked by a vertical grey line in Fig. 2(b), indicating the first three Lyapunov times. Fig. 2(a) shows the time evolution of the vorticity field for three different models: the learned interpolation closely tracks the ground truth (2048 × 2048), greatly outperforming a baseline solver at the same resolution as the model (64 × 64). The learned interpolation model also produces a similar energy spectrum to DNS. With decreasing resolution, DNS cannot capture high frequency features, resulting in an energy spectrum that "tails off" for higher values of k. Fig. 2(c) compares the energy spectrum for learned interpolation and direct simulation at different resolutions at late times, after the solutions have decorrelated. The learned interpolation model accurately captures the energy distribution across the spectrum.
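The vorticity correlation metric is a one-liner: a Pearson correlation of the flattened fields, which for mean-zero flows coincides with cosine similarity. A sketch with synthetic stand-in fields (the random fields are illustrative, not simulation data):

```python
import numpy as np

def vorticity_correlation(w_true, w_pred):
    """Pearson correlation of two vorticity fields; for the mean-zero flows
    considered here this reduces to a cosine similarity."""
    a, b = w_true.ravel(), w_pred.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64))             # stand-in vorticity field
noise = rng.normal(size=(64, 64))
noisy_corr = vorticity_correlation(w, w + 0.5 * noise)   # strictly between 0 and 1
```

A perfect prediction scores 1, an anti-correlated one scores -1, and the metric is insensitive to a global rescaling of the predicted field.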

III.1.2 Computational efficiency

The ability to match DNS with a ~10× coarser grid makes the learned interpolation solver much faster. We benchmark our solver on a single core of Google's Cloud TPU v4, a hardware accelerator designed for accelerating machine learning models that is also suitable for many scientific computing use-cases [10, 55, 32]. The TPU is designed for high throughput vectorized operations, and extremely high throughput matrix-matrix multiplication in low precision (bfloat16). On sufficiently large grid sizes (256 × 256 and larger), our neural net makes good use of the matrix-multiplication unit, achieving 12.5x higher throughput in floating point operations per second than our baseline CFD solver. Thus, despite using 150 times more arithmetic operations, the ML solver is only about 12 times slower than the traditional solver at the same resolution. The gain in effective resolution in three dimensions (two space dimensions and time, due to the Courant condition) thus corresponds to a speedup of 10³/12 ≈ 80.
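The speedup estimate follows directly from the numbers above: a 10x effective resolution gain in two space dimensions plus time, divided by the ~12x per-step overhead of the ML solver at equal resolution:

```python
# Effective speedup from the figures quoted in this section.
upscale = 10          # effective resolution gain per dimension vs. DNS
dims_plus_time = 3    # 2 space dimensions + time, coupled via the Courant condition
ml_overhead = 12      # ML solver slowdown vs. baseline at equal resolution

# baseline must do upscale**3 more work per simulated time-unit; the ML solver
# pays a constant factor of ml_overhead, so the net gain is their ratio (~80x)
speedup = upscale ** dims_plus_time / ml_overhead
```

The same arithmetic explains why throughput on the matrix-multiplication unit matters: the 150x extra arithmetic only costs a 12x wall-clock factor.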

Figure 4: Learned interpolations can be scaled to simulate higher Reynolds numbers without retraining. (a) Evolution of predicted vorticity fields as a function of time for Kolmogorov flow at Re = 4000. (b) Vorticity correlation between predicted flows and the reference solution. (c) Energy spectrum E(k) scaled by k⁵, averaged between time-steps 6000 and 12000, when all solutions have decorrelated with the reference solution.

III.1.3 Generalization

In order to be useful, a learned model must accurately simulate flows outside of the training distribution. We expect our models to generalize well because they learn local operators: interpolated values and corrections at a given point depend only on the flow within a small neighborhood around it. As a result, these operators can be applied to any flow that features similar local structures to those seen during training. We consider three different types of generalization tests: (1) larger domain size, (2) unforced decaying turbulent flow, and (3) Kolmogorov flow at a larger Reynolds number. First, we test generalization to larger domain sizes with the same forcing. Our ML models have essentially the same performance as on the training domain, because they rely only upon local features of the flows (see Appendix E and Fig. 5). Second, we apply our model trained on Kolmogorov flow to decaying turbulence, by starting with a random initial condition with high wavenumber components, and letting the turbulence evolve in time without forcing. Over time, the small scales coalesce to form large scale structures, so that both the scale of the eddies and the Reynolds number vary. Figure 3 shows that a learned discretization model trained on Kolmogorov flows can match the accuracy of DNS running at 8-10× finer resolution. A standard numerical method at the same resolution as the learned discretization model is corrupted by numerical diffusion, degrading the energy spectrum as well as pointwise accuracy. Our final generalization test is harder: can the models generalize to higher Reynolds number where the flows are more complex? The universality of the turbulent cascade [44, 26, 27] implies that at the size of the smallest eddies (the Kolmogorov length scale), flows "look the same" regardless of Reynolds number when suitably rescaled. This suggests that we can apply a model trained at one Reynolds number to a flow at another Reynolds number by simply rescaling the model to match the new smallest length scale. To test this we construct a new dataset for a Kolmogorov flow with Re = 4000. The theory of two-dimensional turbulence [13] implies that the smallest eddy size decreases as Re^(-1/2), implying that the smallest eddies in this flow are half the size of those in the original flow with Re = 1000. We therefore can use the model trained at Re = 1000 by simply halving the grid spacing. Fig. 4(b) shows that with this scaling, our model achieves the accuracy of DNS running at 8-10× finer resolution. This degree of generalization is remarkable, given that we are now testing the model with a flow of substantially greater complexity. Fig. 4(a) visualizes the vorticity, showing that the higher complexity is captured correctly, as is further verified by the energy spectrum shown in Fig. 4(c).
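The rescaling rule is a one-line computation: with the smallest eddy size scaling as Re^(-1/2), moving from the training Reynolds number to a target one multiplies the required grid spacing by (Re_target/Re_train)^(-1/2):

```python
# Grid-spacing rescaling between Reynolds numbers, from the 2D-turbulence
# scaling of the smallest eddies: eddy size ~ Re^(-1/2).
def grid_spacing_ratio(re_train, re_target):
    """Factor by which to multiply the grid spacing of a model trained at
    `re_train` so its smallest resolved scales match a flow at `re_target`."""
    return (re_target / re_train) ** -0.5

ratio = grid_spacing_ratio(1000, 4000)   # 0.5: halve the grid spacing
```

Going from Re = 1000 to Re = 4000 gives a ratio of exactly 1/2, which is the "simply halving the grid spacing" prescription used above.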

III.2 Comparison to other ML models

Finally, we compare the performance of learned interpolation to alternative ML-based methods. We consider three popular ML methods: ResNet (RN) [23], Encoder-Processor-Decoder (EPD) [6, 45] architectures, and the learned correction (LC) model introduced earlier. These models all perform explicit time-stepping without any additional latent state beyond the velocity field, which allows them to be evaluated with arbitrary forcings and boundary conditions, and to use a time-step based on the CFL condition. By construction, these models are invariant to translation in space and time, and have similar runtime for inference (varying within a factor of two). To evaluate training consistency, each model is trained 9 times with different random initializations on the same Kolmogorov flow dataset described previously. Hyperparameters for each model were chosen as detailed in Appendix G, and the models are evaluated on the same generalization tasks. We compare their performance using several metrics: time until vorticity correlation falls below 0.95, to measure pointwise accuracy for the flow over short time windows; the absolute error of the energy spectrum scaled by k⁵, to measure statistical accuracy for the flow over long time windows; and the fraction of simulated velocity values that do not exceed the range of the training data, to measure stability. Fig. 5 compares results across all considered configurations. Overall, we find that learned interpolation (LI) performs best, although learned correction (LC) is not far behind. We were impressed by the performance of the LC model, despite its weaker inductive biases. The difference in effective resolution for pointwise accuracy (8× vs 10× upscaling) corresponds to about a factor of two in run-time. There are a few isolated exceptions where pure black box methods outperform the others, but not consistently. A particular strength of the learned interpolation and correction models is their consistent performance and generalization to other flows, as seen from the narrow spread of model performance for different random initializations and their consistent dominance over other models in the generalization tests. Note that even a modest effective coarse-graining in resolution still corresponds to a computational speed-up. In contrast, the black box ML methods exhibit high sensitivity to random initialization and do not generalize well, with much less consistent statistical accuracy and stability.
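The pointwise accuracy metric (time until vorticity correlation drops below 0.95) can be sketched as follows; the synthetic, smoothly decaying correlation curve is illustrative only:

```python
import numpy as np

def time_until_decorrelation(correlations, dt, threshold=0.95):
    """Time of the first dip below `threshold`; returns the full simulated
    duration if the correlation never drops that low."""
    below = np.flatnonzero(np.asarray(correlations) < threshold)
    return (below[0] if below.size else len(correlations)) * dt

corr = np.exp(-np.arange(100) / 40.0)   # synthetic decaying correlation curve
t95 = time_until_decorrelation(corr, dt=0.1)
```

Reporting the first crossing (rather than, say, an average correlation) makes the metric robust to the chaotic, essentially unpredictable behavior after decorrelation.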

III.3 Acceleration of LES

Up to this point we have illustrated our method on DNS of the Navier-Stokes equations. Our approach is quite general, however, and could be applied to any nonlinear partial differential equation. To demonstrate this, we apply the method to accelerate LES, the industry standard method for large scale simulations where DNS is not feasible. Here we treat a high resolution LES as the ground truth simulation, and train an interpolation model on a coarser grid for Kolmogorov flows at higher Reynolds number, using the Smagorinsky-Lilly SGS model [42]. Our training procedure follows the exact same approach we used for modeling DNS. Note in particular that we do not attempt to model the parameterized viscosity in the learned LES model, but rather let the learned interpolation account for it implicitly. Fig. 6 shows that learned interpolation for LES still achieves an effective 8× upscaling, corresponding to a roughly 40-fold speedup.

Figure 5: Learned discretizations outperform a wide range of baseline methods in terms of accuracy, stability and generalization. Each row within a subplot shows performance metrics for nine model replicates with the same architecture but different randomly initialized weights. The models are trained on forced turbulence, with larger domain, decaying and more turbulent flows as generalization tests. Vertical lines indicate the performance of non-learned baseline models at different resolutions (all baseline models are perfectly stable). The pointwise accuracy test measures error at the start of time integration, whereas the statistical accuracy and stability tests are both performed on simulations at time 50, after many thousands of time integration steps (twice the number of steps for the more turbulent flow). The points indicated by a left-pointing triangle in the statistical accuracy tests are clipped at a maximum error value.
Figure 6: Learned discretizations achieve the accuracy of LES simulations running at 8× finer resolution. (a) Evolution of predicted vorticity fields as a function of time. (b) Vorticity correlation between predicted flows and the reference solution. (c) Energy spectrum E(k) scaled by k⁵, averaged between time-steps 3800 and 4800, when all solutions have decorrelated with the reference solution.

IV Discussion

In this work we present a data driven numerical method that achieves the same accuracy as traditional finite difference/finite volume methods but with much coarser resolution. The method learns accurate local operators for convective fluxes and residual terms, and matches the accuracy of an advanced numerical solver running at 8-10× finer resolution, while performing the computation 40-80× faster. The method uses machine learning to interpolate better at a coarse scale, within the framework of traditional numerical discretizations. As such, the method inherently contains the scaling and symmetry properties of the original governing Navier-Stokes equations. For that reason, it generalizes much better than pure black-box machine learned methods, not only to different forcing functions but also to different parameter regimes (Reynolds numbers). What outlook do our results suggest for speeding up 3D turbulence? In general, the runtime for efficient ML augmented simulation of time-dependent PDEs should scale like

C_ML · (N/K)^(d+1),    (2)

where C_ML is the cost of ML inference per grid point, N is the number of grid points along each dimension of the resolved grid, d is the number of spatial dimensions, and K is the effective coarse graining factor; the corresponding baseline numerical method costs C_Δ · N^(d+1), with C_Δ its cost per grid point. Currently, C_ML/C_Δ ≈ 12, but we expect that much more efficient machine learning models are possible, e.g., by sharing work between time-steps with recurrent neural nets. We expect the gain in effective resolution discovered here to generalize to 3D and more complex problems. This suggests that speed-ups in the range of 10³-10⁴ may be possible for 3D simulations. Further speed-ups, as required to capture the full range of turbulent flows, will require either more efficient representations for flows (e.g., based on solution manifolds rather than a grid) or being satisfied with statistical rather than pointwise accuracy (e.g., as done in LES modeling). In summary, our approach expands the Pareto frontier of efficient simulation in computational fluid dynamics, as illustrated in Fig. 1(a). With ML accelerated CFD, users may either solve expensive simulations much faster, or increase accuracy without additional costs. To put these results in context, if applied to numerical weather prediction, increasing the duration of accurate predictions from 4 to 7 time-units would correspond to approximately 30 years of progress [7]. These improvements are possible due to the combined effect of two technologies still undergoing rapid improvements: modern deep learning models, which allow for accurate simulation with much more compact representations, and modern accelerator hardware, which allows for evaluating said models with a remarkably small increase in computational cost. We expect both trends to continue for the foreseeable future, and to eventually impact all areas of computationally limited science.

Acknowledgement

We thank John Platt and Rif A. Saurous for encouraging and supporting this work and for important conversations, and Yohai Bar-Sinai, Anton Geraschenko, Yi-fan Chen and Jiawei Zhuang for important conversations.

Appendix A Direct numerical simulation

Here we describe the details of the numerical solver that we use for data generation, model comparison and as the starting point of our machine learning models. Our solver uses a staggered-square mesh [37] in a finite volume formulation: the computational domain is broken into computational cells, with the velocity field placed on the cell edges and the pressure solved at the cell centers. Our choice of a real-space formulation of the Navier–Stokes equations, rather than a spectral method, is motivated by practical considerations: real-space simulations are much more versatile when dealing with boundary conditions and non-rectangular geometries. We now describe the implementation details of each component.
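A minimal sketch of such a staggered ("MAC") layout on a periodic domain may clarify the data structure: velocity components live on cell edges and pressure at cell centers, so a centered finite-volume divergence follows from edge fluxes. The class and field names below are ours, not the paper's implementation.

```python
import numpy as np

class StaggeredGrid:
    """Illustrative staggered-grid layout on a periodic square domain:
    u on right cell edges, v on top cell edges, p at cell centers.
    With periodic boundaries every field stores one value per cell;
    the staggering only changes where each value nominally lives."""

    def __init__(self, nx, ny, lx=2 * np.pi, ly=2 * np.pi):
        self.nx, self.ny = nx, ny
        self.dx, self.dy = lx / nx, ly / ny
        self.u = np.zeros((nx, ny))  # x-velocity on right edges
        self.v = np.zeros((nx, ny))  # y-velocity on top edges
        self.p = np.zeros((nx, ny))  # pressure at cell centers

    def divergence(self):
        """Finite-volume divergence: net edge flux out of each cell."""
        du = (self.u - np.roll(self.u, 1, axis=0)) / self.dx
        dv = (self.v - np.roll(self.v, 1, axis=1)) / self.dy
        return du + dv

grid = StaggeredGrid(64, 64)
```

Because fluxes are shared between neighboring cells, this divergence is discretely conservative: a uniform velocity field is exactly divergence-free.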

Convection and diffusion

We implement convection and diffusion operators based on finite-difference approximations. The Laplace operator in the diffusion term is approximated using a second-order central difference. The convection term is solved by advecting all velocity components simultaneously, using a high-order scheme based on the van Leer flux limiter [50]. For the results presented in the paper we used explicit time integration with an Euler discretization. This choice is motivated by performance considerations: for the simulation parameters used in the paper (high Reynolds number), implicit diffusion is not required for stable time-stepping and would be approximately twice as slow, due to the additional linear solves required for each velocity component. For diffusion-dominated simulations implicit diffusion would result in faster simulations.
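The two spatial operators described above can be sketched on a periodic grid as follows. The five-point Laplacian and explicit Euler step are standard; the limiter function is the classical van Leer formula, phi(r) = (r + |r|) / (1 + |r|). Function names and the periodic-boundary assumption are ours.

```python
import numpy as np

def laplacian(f, dx):
    """Second-order central-difference Laplacian, periodic boundaries."""
    return (
        np.roll(f, 1, 0) + np.roll(f, -1, 0)
        + np.roll(f, 1, 1) + np.roll(f, -1, 1)
        - 4.0 * f
    ) / dx ** 2

def van_leer_limiter(r):
    """Classical van Leer flux limiter; r is the ratio of consecutive
    solution gradients. Returns 0 near extrema (r <= 0), limiting the
    high-order flux to keep the scheme non-oscillatory."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def diffuse_euler(f, nu, dx, dt):
    """One explicit Euler step of the diffusion term df/dt = nu * lap(f)."""
    return f + dt * nu * laplacian(f, dx)
```

Because `np.roll` conserves the field sum, the explicit diffusion step preserves the mean of `f` to rounding error, as a conservative discretization should.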

Pressure

To account for pressure we use a projection method, where at each step we solve the corresponding Poisson equation. The solution is obtained using either a fast diagonalization approach with explicit matrix multiplication [34] or a real-valued fast Fourier transform (FFT). The former is well suited for small simulation domains as it achieves better accelerator utilization, while the FFT has the best asymptotic computational complexity and performs best in large simulations. For wall-clock evaluations we choose between fast diagonalization and the FFT by selecting whichever is fastest for a given grid.

Forcing and closure terms

We incorporate forcing terms together with the accelerations due to convective and diffusive processes. In an LES setting the baseline and ground-truth simulations additionally include a subgrid-scale model that is also treated explicitly. We use the Smagorinsky–Lilly model (Eq. (3)), where Δ is the grid spacing and C_s is the Smagorinsky coefficient:

ν_t = (C_s Δ)² √(2 S̄_ij S̄_ij),   where S̄_ij = ½ (∂_i ū_j + ∂_j ū_i)   (3)
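A minimal sketch of this eddy-viscosity computation on a periodic collocated grid, with strain rates from central differences. The default C_s = 0.2 and the function name are our illustrative choices here, not values taken from the simulations above.

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, c_s=0.2):
    """Smagorinsky-Lilly eddy viscosity nu_t = (c_s * dx)**2 * |S|,
    with |S| = sqrt(2 * S_ij * S_ij), on a periodic collocated grid.
    c_s = 0.2 is a common illustrative choice."""
    ddx = lambda f: (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
    ddy = lambda f: (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)
    s11 = ddx(u)                     # diagonal strain components
    s22 = ddy(v)
    s12 = 0.5 * (ddy(u) + ddx(v))    # symmetric off-diagonal component
    s_norm = np.sqrt(2.0 * (s11 ** 2 + s22 ** 2 + 2.0 * s12 ** 2))
    return (c_s * dx) ** 2 * s_norm
```

The resulting ν_t is non-negative and vanishes for uniform flow, so the subgrid term only dissipates energy where resolved gradients exist.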

Appendix B Datasets and simulation parameters

In the main text we introduced five datasets: two Kolmogorov flows at Re = 1000 and Re = 4000, a Kolmogorov flow on a larger domain, decaying turbulence, and an LES dataset. Dataset generation consisted of three steps: (1) burn-in simulation from a random initial condition; (2) simulation for a fixed duration using a high-resolution solver; (3) downsampling of the solution to a lower-resolution grid for training and evaluation. The burn-in stage is fully characterized by the burn-in time and the initialization wavenumber, which represent the discarded initial transient and the peak wavenumber of the log-normal distribution from which random initial conditions were sampled. The maximum amplitude of the initial velocity field was chosen to minimize the burn-in time while maintaining the standard deviation of the velocity field close to unity. Simulation parameters include the simulation resolution along each spatial dimension, the forcing and the Reynolds number. The resulting velocity trajectories are then downsampled to the saved resolution. Note that besides the spatial downsampling we also perform downsampling in time, to keep the Courant–Friedrichs–Lewy (CFL) factor fixed at a value standard for numerical simulations. Parameters specifying all five datasets are shown in Table A1. We varied the size of our datasets, using the shortest trajectories for decaying turbulence and substantially longer ones in the forced turbulence settings; such extended trajectories allow us to analyze the stability of models when performing long-term simulations. We note that when performing simulations at higher Reynolds numbers we used rescaled grids to maintain fixed time stepping. This choice potentially allows methods to specialize in discrete time advancement. When reporting results, spatial and time dimensions are scaled back to a fixed value to enable direct comparisons across experiments. For comparison of our models with numerical methods we use the corresponding simulation parameters while changing only the resolution of the underlying grid. As mentioned in Appendix A, when measuring the performance of all solvers at a given resolution we choose solver components to maximize efficiency.
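The combined spatial and temporal coarse-graining in steps (2)–(3) can be sketched as block-mean downsampling in space plus subsampling in time. The factor-k rule in time assumes the stable time step scales linearly with the grid spacing at fixed CFL number; function names are ours.

```python
import numpy as np

def downsample_space(field, k):
    """Average k x k blocks of an (nx, ny) field -> (nx//k, ny//k).
    Assumes k divides both dimensions."""
    nx, ny = field.shape
    return field.reshape(nx // k, k, ny // k, k).mean(axis=(1, 3))

def downsample_trajectory(traj, k):
    """Coarsen a trajectory of shape (nt, nx, ny) by a factor k in space
    and keep every k-th frame in time, so the saved trajectory has the
    same CFL factor as the high-resolution one."""
    return np.stack([downsample_space(f, k) for f in traj[::k]])
```

Block averaging commutes with the finite-volume interpretation of cell values, which is why a constant field survives coarse-graining exactly.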

Dataset Resolution Re Burn-in time
Kolmogorov
Kolmogorov
Kolmogorov
Decaying turbulence
LES
Table A1: Simulation parameters used for generation of the datasets in this manuscript. For resolution, the notation M → N indicates that the simulation was performed at resolution M and the result was downsampled to resolution N.

Appendix C Learned interpolations

To improve the numerical accuracy of the convective process our models use pseudo-linear models for interpolation [5, 56]. The process of interpolating a velocity component to the target location is broken into two steps:

  1. Computation of a local stencil around the interpolation target.

  2. Computation of a weighted sum over the stencil with coefficients α_i.

Rather than using a fixed set of interpolating coefficients (e.g., as done for typical polynomial interpolation), we choose α from the output of a neural network additionally constrained to satisfy Σ_i α_i = 1, which guarantees that the interpolation is at least first-order accurate. We do so by generating interpolation coefficients with an affine transformation α = B x + α₀ on the unconstrained outputs x of the neural network, where B is the null-space of the constraint matrix (a row of ones) of shape 16 × 15 and α₀ is an arbitrary valid set of coefficients (we use linear interpolation from the nearest source-term locations). This is a special case of the procedure for arbitrary polynomial accuracy constraints on finite-difference coefficients described in [5, 56]. In this work, we use a 4 × 4 patch centered over the top-right corner of each unit cell, which means we need 15 unconstrained neural network outputs to generate each set of interpolation coefficients.
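The constrained-coefficient construction can be sketched as follows for a 16-point stencil. The QR-based null-space basis and the particular α₀ (an equal-weight average of four central stencil entries, standing in for linear interpolation; the exact entries depend on a stencil ordering we assume) are our illustrative choices.

```python
import numpy as np

STENCIL = 16  # 4 x 4 interpolation patch

# Orthonormal basis B for the null space {x in R^16 : sum(x) = 0},
# obtained from a complete QR factorization of the ones vector: all
# columns of q beyond the first are orthogonal to ones, so each column
# of B sums to zero.
q, _ = np.linalg.qr(np.ones((STENCIL, 1)), mode="complete")
B = q[:, 1:]  # shape (16, 15)

# Any valid coefficient set works for alpha0; we average four central
# stencil entries as a stand-in for linear interpolation from the
# nearest source locations (entry indices are our assumption).
alpha0 = np.zeros(STENCIL)
alpha0[[5, 6, 9, 10]] = 0.25

def to_coefficients(x):
    """Map unconstrained network outputs x, shape (..., 15), to
    interpolation coefficients of shape (..., 16) summing to exactly 1."""
    return x @ B.T + alpha0
```

Because every column of B sums to zero, the constraint Σ_i α_i = 1 holds for any network output, so first-order accuracy is built into the parameterization rather than learned.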

Appendix D Neural network architectures

Figure A1: Modeling approaches and neural network architectures used in this paper. Rescaling of inputs and outputs from neural network sub-modules with fixed constants is not depicted. Dashed lines surround components that are repeatedly applied the indicated number of times.

All of the ML-based models used in this work are based on fully convolutional architectures. Fig. A1 depicts our three modeling approaches (pure ML, learned interpolation and learned correction) and three architectures for neural network sub-components (Basic ConvNet, Encoder-Process-Decoder and ResNet). Outputs and inputs for each neural network layer are linearly scaled such that appropriate values are of order unity. For our physics-augmented solvers (learned interpolation and learned correction), we used the basic ConvNet architecture, with 120 output channels (8 interpolations that need 15 inputs each) for learned interpolation and 2 output channels for learned correction. Our experiments found accuracy was slightly improved by using larger neural networks, but not sufficiently to justify the increased computational cost. For pure ML solvers, we used EPD and ResNet models that do not impose physical priors beyond time continuity of the governing equations. In both cases a neural network is used to predict the acceleration due to physical processes, which is then summed with the forcing function to compute the state at the next time step. Both EPD and ResNet models consist of a sequence of CNN blocks with skip-connections. The main difference is that the EPD model has an arbitrarily large hidden state (in our case, size 64), whereas the ResNet model has a fixed-size hidden state equal to 2, the number of velocity components.
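Since the flow domain is periodic, a fully convolutional network must wrap its receptive field around the boundary. The sketch below shows such periodic padding followed by a plain cross-correlation; this padding scheme, and all names and shapes, are our assumptions for illustration (a real implementation would use a DL framework's convolution with wrap padding), assuming square kernels of odd size.

```python
import numpy as np

def periodic_pad(x, pad):
    """Pad an (nx, ny, channels) array periodically in both spatial dims."""
    return np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="wrap")

def conv2d_periodic(x, kernel):
    """Valid cross-correlation after periodic padding.
    x: (nx, ny, c_in); kernel: (kh, kw, c_in, c_out), kh == kw, odd."""
    kh, kw, c_in, c_out = kernel.shape
    xp = periodic_pad(x, kh // 2)
    nx, ny = x.shape[:2]
    out = np.zeros((nx, ny, c_out))
    for i in range(kh):
        for j in range(kw):
            # Accumulate the contribution of kernel tap (i, j).
            out += np.einsum("xyc,co->xyo", xp[i:i + nx, j:j + ny], kernel[i, j])
    return out
```

For the learned-interpolation head described above, the final layer of such a network would produce 120 output channels per grid point.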

Appendix E Details of accuracy measurements

In the main text we presented accuracy results based on the correlation between the predicted flow configurations and the reference solution. We reach the same conclusions with other common choices, such as squared and absolute errors, as shown in Fig. A2, which compares learned discretizations to the DNS method on Kolmogorov flow at Re = 1000. As mentioned in the main text, we additionally evaluated our models on enlarged domains while keeping the flow complexity fixed. Because our models are local, this is the simplest generalization test. As shown in Fig. A3 (and Fig. 5 in the main text), the improvement for larger domains is identical to that found on the smaller domain.
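The vorticity-correlation metric can be sketched as follows: compute the scalar 2D vorticity of each field and take the Pearson correlation over the domain at each time step. Central differences and periodic boundaries are assumptions matching the solver description; function names are ours.

```python
import numpy as np

def vorticity(u, v, dx):
    """Scalar 2D vorticity dv/dx - du/dy via central differences
    on a periodic grid."""
    return ((np.roll(v, -1, 0) - np.roll(v, 1, 0))
            - (np.roll(u, -1, 1) - np.roll(u, 1, 1))) / (2 * dx)

def correlation(a, b):
    """Pearson correlation between two flattened fields."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A correlation of 1 indicates pointwise agreement up to an affine rescaling; tracking the time until it drops below a threshold gives the accuracy horizon used in the comparisons.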

Appendix F Details of overview figure

The Pareto frontier of model performance shown in Fig. 1(a) is based on accuracy extrapolated to a larger domain. Due to computational limits, we were unable to run the high-resolution simulations needed to measure accuracy directly on the largest grid, so the time until the vorticity correlation drops below 0.95 was instead taken from a smaller domain size, which as described in the preceding section matches performance on the larger domain. The learned interpolation model corresponds to the results depicted in Fig. 2, whereas the other learned models were retrained on the same training dataset for coarser or finer coarse-graining.

Figure A2: Comparison of ML + CFD model to DNS running at different resolutions using varying metrics. (a) Vorticity correlation as presented in the main text. (b) Mean absolute error. (c) Absolute error of the kinetic energy.
Figure A3: Comparison of ML + CFD model to DNS running at different resolutions when evaluated on a larger domain with characteristic length-scales matching those of the training data. (a) Visualization of the vorticity at different times. (b) Vorticity correlation as a function of time. (c) Scaled energy spectrum.

Appendix G Hyperparameter tuning

All models were trained using the Adam optimizer [25]. Our initial experiments showed that none of the models had a consistent dependence on the optimization parameters, and a single setting of the learning rate and momentum parameters worked well for all models. For each model we performed a hyperparameter sweep over the length of the training trajectory used to compute the loss and over model capacity (the size and number of hidden layers), as well as a sweep over different random initializations to assess the reliability and consistency of training.

References

  • [1] G. Alfonsi (2009-07) Reynolds-Averaged Navier–Stokes equations for turbulence modeling. Appl. Mech. Rev. 62 (4). Cited by: §I.
  • [2] J. Anderson (2009) Basic philosophy of CFD. In Computational Fluid Dynamics, pp. 3–14. Cited by: §I.
  • [3] C. P. Arroyo, P. Kholodov, M. Sanjosé, and S. Moreau (2019) CFD modeling of a realistic turbofan blade for noise prediction. part 1: aerodynamics. In Proceedings of Global Power and Propulsion Society, Cited by: §I.
  • [4] A. Bakhshaii and E. A. Johnson (2019) A review of a new generation of wildfire–atmosphere modeling. Canadian Journal of Forest Research 49 (6), pp. 565–574. Cited by: §I.
  • [5] Y. Bar-Sinai, S. Hoyer, J. Hickey, and M. P. Brenner (2019) Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116 (31), pp. 15344–15349. Cited by: Appendix C, §I, §II.2.1, §II.2.
  • [6] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. (2018) Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261. Cited by: §III.2.
  • [7] P. Bauer, A. Thorpe, and G. Brunet (2015-09) The quiet revolution of numerical weather prediction. Nature 525 (7567), pp. 47–55. External Links: Link Cited by: §IV.
  • [8] P. Bauer, A. Thorpe, and G. Brunet (2015) The quiet revolution of numerical weather prediction. Nature 525 (7567), pp. 47–55. Cited by: §I.
  • [9] A. Beck, D. Flad, and C. Munz (2019) Deep neural networks for data-driven les closure models. Journal of Computational Physics 398, pp. 108910. Cited by: §I.
  • [10] Z. Ben-Haim, V. Anisimov, A. Yonas, V. Gulshan, Y. Shafi, S. Hoyer, and S. Nevo (2019-10) Inundation modeling in data scarce regions. External Links: Link, 1910.05006 Cited by: §III.1.2.
  • [11] K. Bhattacharya, B. Hosseini, N. B. Kovachki, and A. M. Stuart (2020) Model reduction and neural networks for parametric pdes. arXiv preprint arXiv:2005.03180. Cited by: §I.
  • [12] G. Boffetta and R. E. Ecke (2012-01) Two-Dimensional turbulence. Annu. Rev. Fluid Mech. 44 (1), pp. 427–451. External Links: Link Cited by: §III.1.1, §III.
  • [13] G. Boffetta and R. E. Ecke (2012) Two-dimensional turbulence. Annual Review of Fluid Mechanics 44, pp. 427–451. Cited by: §III.1.3.
  • [14] J. Boussinesq (1877) Theorie de l’ecoulement tourbillant. Mémoires de l’Académie des Sciences 23, pp. 46–50. Cited by: §I.
  • [15] J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, and S. Wanderman-Milne (2018) JAX: composable transformations of Python+NumPy programs. URL http://github.com/google/jax. Cited by: §I, §II.2.
  • [16] G. J. Chandler and R. R. Kerswell (2013-05) Invariant recurrent solutions embedded in a turbulent two-dimensional kolmogorov flow. J. Fluid Mech. 722, pp. 554–595. External Links: Link Cited by: §III.
  • [17] K. Duraisamy, G. Iaccarino, and H. Xiao (2019) Turbulence modeling in the age of data. Annual Review of Fluid Mechanics 51, pp. 357–377. Cited by: §I.
  • [18] K. Duraisamy (2020-09) Machine learning-augmented reynolds-averaged and large eddy simulation models of turbulence. External Links: Link, 2009.10675 Cited by: §II.1, §II.2.3.
  • [19] N. B. Erichson, M. Muehlebach, and M. W. Mahoney (2019) Physics-informed autoencoders for lyapunov-stable fluid flow prediction. arXiv preprint arXiv:1905.10866. Cited by: §I.
  • [20] L. Esclapez, P. C. Ma, E. Mayhew, R. Xu, S. Stouffer, T. Lee, H. Wang, and M. Ihme (2017-07) Fuel effects on lean blow-out in a realistic gas turbine combustor. Combust. Flame 181, pp. 82–99. Cited by: §I.
  • [21] U. Frisch (1995) Turbulence: the legacy of A. N. Kolmogorov. Cambridge University Press. Cited by: §I.
  • [22] A. Griewank (1994-04) Achieving logarithmic growth of temporal and spatial complexity in reverse automatic differentiation. Optim. Methods Softw. 1 (1). External Links: Link Cited by: §II.2.3.
  • [23] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §III.2.
  • [24] B. Kim, V. C. Azevedo, N. Thuerey, T. Kim, M. Gross, and B. Solenthaler (2019) Deep fluids: a generative network for parameterized fluid simulations. In Computer Graphics Forum, Vol. 38, pp. 59–70. Cited by: §I.
  • [25] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: Appendix G.
  • [26] A. N. Kolmogorov (1941) The local structure of turbulence in incompressible viscous fluid for very large reynolds numbers. Cr Acad. Sci. URSS 30, pp. 301–305. Cited by: §III.1.3.
  • [27] R. H. Kraichnan and D. Montgomery (1980) Two-dimensional turbulence. Reports on Progress in Physics 43 (5), pp. 547. Cited by: §III.1.3.
  • [28] M. Lesieur and O. Metais (1996-01) New trends in Large-Eddy simulations of turbulence. Annu. Rev. Fluid Mech. 28 (1), pp. 45–82. Cited by: §I.
  • [29] L. Li, S. Hoyer, R. Pederson, R. Sun, E. D. Cubuk, P. Riley, and K. Burke (2020) Kohn-Sham equations as regularizer: building prior knowledge into machine-learned physics. Phys. Rev. Lett., pp. in press. External Links: Link Cited by: §I.
  • [30] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, and A. Anandkumar (2020) Neural operator: graph kernel network for partial differential equations. arXiv preprint arXiv:2003.03485. Cited by: §I.
  • [31] J. Ling, A. Kurzawski, and J. Templeton (2016-11) Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech. 807, pp. 155–166. External Links: Link Cited by: §I.
  • [32] T. Lu, Y. Chen, B. Hechtman, T. Wang, and J. Anderson (2020-02) Large-Scale discrete fourier transform on TPUs. External Links: Link, 2002.03260 Cited by: §III.1.2.
  • [33] B. Lusch, J. N. Kutz, and S. L. Brunton (2018) Deep learning for universal linear embeddings of nonlinear dynamics. Nature communications 9 (1), pp. 1–10. Cited by: §I.
  • [34] R. E. Lynch, J. R. Rice, and D. H. Thomas (1964-12) Direct solution of partial difference equations by tensor product methods. Numer. Math. 6 (1), pp. 185–199. External Links: Link Cited by: Appendix A.
  • [35] Q. Malé, G. Staffelbach, O. Vermorel, A. Misdariis, F. Ravet, and T. Poinsot (2019-08) Large eddy simulation of Pre-Chamber ignition in an internal combustion engine. Flow Turbul. Combust. 103 (2), pp. 465–483. Cited by: §I.
  • [36] R. Maulik, O. San, A. Rasheed, and P. Vedula (2019) Subgrid modelling for two-dimensional turbulence using neural networks. Journal of Fluid Mechanics 858, pp. 122–144. Cited by: §I, §II.2.3.
  • [37] J. M. McDonough (2007) Lectures in computational fluid dynamics of incompressible flow: mathematics, algorithms and implementations. Cited by: Appendix A.
  • [38] C. Meneveau and J. Katz (2000) Scale-invariance and turbulence models for large-eddy simulation. Annual Review of Fluid Mechanics 32 (1), pp. 1–32. Cited by: §I.
  • [39] R. D. Moser, S. W. Haering, and G. R. Yalla (2020) Statistical properties of subgrid-scale turbulence models. Annual Review of Fluid Mechanics 53. Cited by: §I.
  • [40] P. Neumann, P. Düben, P. Adamidis, P. Bauer, M. Brück, L. Kornblueh, D. Klocke, B. Stevens, N. Wedi, and J. Biercamp (2019) Assessing the scales in numerical weather and climate predictions: will exascale be the rescue?. Philosophical Transactions of the Royal Society A 377 (2142), pp. 20180148. Cited by: §I.
  • [41] J. Pathak, M. Mustafa, K. Kashinath, E. Motheau, T. Kurth, and M. Day (2020) Using machine learning to augment coarse-grid computational fluid dynamics simulations. arXiv preprint arXiv:2010.00072. Cited by: §I.
  • [42] S. B. Pope (2000) Turbulent flows. Cambridge University Press. Cited by: §I, §III.3.
  • [43] S. B. Pope (2004-01) Ten questions concerning the large-eddy simulation of turbulent flows. New Journal of Physics 6 (35). Cited by: §I.
  • [44] L. F. Richardson (2007) Weather prediction by numerical process. Cambridge university press. Cited by: §I, §III.1.3.
  • [45] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. W. Battaglia (2020) Learning to simulate complex physics with graph networks. arXiv preprint arXiv:2002.09405. Cited by: §III.2.
  • [46] T. Schneider, J. Teixeira, C. S. Bretherton, F. Brient, K. G. Pressel, C. Schär, and A. P. Siebesma (2017) Climate goals and computing the future of clouds. Nature Climate Change 7 (1), pp. 3–5. Cited by: §I.
  • [47] S. S. Schoenholz and E. D. Cubuk (2020) JAX, M.D.: a framework for differentiable physics. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020), pp. in press. External Links: Link Cited by: §I.
  • [48] J. Sirignano, J. F. MacArt, and J. B. Freund (2020) DPM: a deep learning pde augmentation method with application to large-eddy simulation. Journal of Computational Physics 423, pp. 109811. Cited by: §I, §II.1, §II.2.2.
  • [49] J. Smagorinsky (1963-03) General circulation experiments with the primitive equations: i. the basic experiment. Mon. Weather Rev. 91 (3), pp. 99–164. Cited by: §I.
  • [50] P. K. Sweby (1984) High resolution schemes using flux limiters for hyperbolic conservation laws. SIAM journal on numerical analysis 21 (5), pp. 995–1011. Cited by: Appendix A.
  • [51] W. M. Tang and V. Chan (2005) Advances and challenges in computational plasma science. Plasma physics and controlled fusion 47 (2), pp. R1. Cited by: §I.
  • [52] K. Um, P. Holl, R. Brand, N. Thuerey, et al. (2020) Solver-in-the-loop: learning from differentiable physics to interact with iterative pde-solvers. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020), pp. in press. External Links: Link Cited by: §I, §II.2.2, §II.2.3.
  • [53] R. Wang, K. Kashinath, M. Mustafa, A. Albert, and R. Yu (2020) Towards physics-informed deep learning for turbulent flow prediction. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1457–1466. Cited by: §I.
  • [54] P. Wolf, G. Staffelbach, L. Y. M. Gicquel, J. Müller, and T. Poinsot (2012-11) Acoustic and large eddy simulation studies of azimuthal modes in annular combustion chambers. Combust. Flame 159 (11), pp. 3398–3413. Cited by: §I.
  • [55] K. Yang, Y. Chen, G. Roumpos, C. Colby, and J. Anderson (2019-11) High performance monte carlo simulation of ising model on TPU clusters. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC ’19, New York, NY, USA, pp. 1–15. External Links: Link Cited by: §III.1.2.
  • [56] J. Zhuang, D. Kochkov, Y. Bar-Sinai, M. P. Brenner, and S. Hoyer (2020) Learned discretizations for passive scalar advection in a 2-d turbulent flow. arXiv preprint arXiv:2004.05477. Cited by: Appendix C, §I.