Unsupervised Deep Learning of Incompressible Fluid Dynamics

06/15/2020 · Nils Wandel, et al. · University of Bonn

Fast and stable fluid simulations are an essential prerequisite for applications ranging from computer-aided aerodynamic design of automobiles or airplanes to simulations of physical effects in CGI to research in meteorology. Recent differentiable fluid simulations allow gradient-based methods to optimize e.g. fluid control systems in an informed manner. Solving the partial differential equations that govern the dynamics of the underlying physical systems, however, is a challenging task and current numerical approximation schemes still come at high computational costs. In this work, we propose an unsupervised framework that allows powerful deep neural networks to learn the dynamics of incompressible fluids end-to-end on a grid-based representation. For this purpose, we introduce a loss function that penalizes residuals of the incompressible Navier Stokes equations. After training, the framework yields models that are capable of fast and differentiable fluid simulations and can handle various fluid phenomena such as the Magnus effect and Kármán vortex streets. Besides demonstrating its real-time capability on a GPU, we exploit our approach in a control optimization scenario.


1 Introduction

Simulating the behavior of fluid-like substances such as liquids (e.g. water, fuel) or gases (e.g. air) is of great importance for numerous applications in entertainment, advertisement, product design, urban planning and meteorology. Examples include the aerodynamic design of vehicles such as automobiles or airplanes, the accurate simulation of fluid phenomena to enrich scenes in computer games or movies, or weather simulations in meteorology. On top of simulating the behavior of fluids, differentiable fluid simulators allow gradients to be propagated throughout the simulation. This is a fundamental prerequisite for gradient-based methods to more efficiently perform e.g. control tasks in fluid domains [holl2020learning].

As a full simulation of all interacting molecules within a fluid domain is deemed computationally infeasible, simplifications are needed. Most fluids can be described very precisely by the Navier Stokes equations, a set of non-linear partial differential equations which builds the foundation of most fluid simulations and takes important properties such as viscosity or fluid density into account. Since these Navier Stokes equations do not possess known analytical solutions for most application scenarios, we have to rely on numerical approximation methods. However, such numerical methods face several challenging problems: to allow for large simulation timesteps, they must be stable and robust. In order to make use of GPUs, they need to be highly parallelizable. Often, large systems of equations have to be solved with iterative methods, and trade-offs between computational complexity and accuracy need to be found. On top of that, differentiable fluid simulators have to make sure that no computational steps hinder gradient propagation.

Traditional schemes for fluid simulation include Lagrangian and Eulerian approaches. While Lagrangian methods handle the fluid domain from the perspective of individual particles moving with the velocity field (e.g. smoothed particle hydrodynamics (SPH) [gingold1977smoothed]), their accuracy is achieved at the cost of simulating the motions of many individual particles. In contrast, Eulerian methods (e.g. [stam1999stable]) handle fluid fields on a fixed reference frame such as a grid (finite difference methods) or a mesh (finite element methods). However, the respective solvers, relying on preconditioned conjugate gradient approaches or iterative Jacobi or Gauss-Seidel methods, come at high computational costs. Recently, surrogate modeling with learning-based approaches has gained a lot of attention due to short inference times and good generalization capabilities.

To the best of our knowledge, we present the first approach to learn the full incompressible Navier Stokes equations end-to-end on an Eulerian, grid-based representation. For this purpose, we use deep convolutional neural networks to recurrently infer velocity and pressure fields directly from given boundary conditions and the velocity and pressure fields of a previous timestep. In order to train the neural networks, we introduce an unsupervised training framework that produces random boundary configurations and forces the network to find solutions to these boundary conditions using a physics-constrained loss function based on the residuals of the incompressible Navier Stokes equations. To achieve good performance and stability, we exploit the Helmholtz decomposition, allowing the network to output divergence-free vector fields, and introduce a pressure regularization for fluids with high Reynolds numbers. This end-to-end approach offers a simple, highly parallelizable, fully differentiable pipeline for computational fluid dynamics and allows making use of efficient machine learning libraries. The unsupervised framework enables the network to learn intriguing fluid phenomena such as the Magnus effect or Kármán vortex streets without the need for fluid simulation software to generate big ground truth datasets. We make code and data publicly available at release_upon_acceptance.

2 Related Work

In the literature, several different approaches can be found that aim to approximate the dynamics of PDEs in general and fluids in particular with more efficient, learning-based surrogate models.

Lagrangian methods such as smoothed particle hydrodynamics (SPH) [gingold1977smoothed] handle fluids from the perspective of many individual particles that move with the velocity field. Following this approach, learning-based methods using regression forests [Ladicky:2015], graph neural networks [Mrowca:2018; li2019learning] and continuous convolutions [Ummenhofer:2020] have been developed. In addition, Smooth Particle Networks (SP-Nets) [schenck2018spnets] allow for differentiable fluid simulations within the Lagrangian frame of reference. These Lagrangian methods are particularly suitable when a fluid domain exhibits large, dynamic surfaces (e.g. waves or droplets). However, to simulate the dynamics within a fluid domain, Eulerian methods such as finite differences or finite elements [Foster:1996], which treat the Navier Stokes equations in a fixed frame of reference (e.g. a grid), are usually better suited.

Classical Eulerian approaches such as finite difference methods on marker and cell (MAC) grids date back to the work of Harlow and Welch [harlow1965numerical]. The work of Stam [stam1999stable] is well suited for computer graphics and builds the foundation for many special effects in computer generated imagery, but is not accurate enough for most engineering applications as it suffers from numerical dissipation. Following this approach of Stam, Holl et al. [holl2020learning] recently implemented a differentiable fluid simulation on a MAC grid to train efficient fluid control algorithms. In the following, we distinguish two types of learning-based approaches in the Eulerian frame of reference. Continuous approaches learn to map domain coordinates (e.g. x, y, t) directly onto field values (e.g. velocity / pressure). In contrast, discrete approaches model PDEs on a grid using, for example, convolutional neural networks.

Continuous methods allow mesh-free solutions even when considering only a few points during training and work in complex, high-dimensional domains [Sirignano:2018; grohs2018proof; khoo2019solving], where traditional discretization techniques become impractical. Recent applications focused on flow through porous media [Zhu:2018; Zhu:2019; Tripathy:2018], fluid modeling [Yang:2016; raissi2018hidden], turbulence modeling [Geneva:2019; ling2016reynolds] and modeling of molecular dynamics [Schoeberl:2019]. Training is usually based on physics-constrained loss functions that penalize residuals of the underlying PDEs. Similar to our approach, Raissi et al. [Raissi:2019] used vector potentials to obtain continuous divergence-free velocity fields to approximate the incompressible Navier Stokes equations. Continuous methods can overcome the curse of dimensionality [grohs2018proof] and have been demonstrated to be capable of returning smooth, accurate results for high-dimensional PDEs. However, the learned results are domain-specific and thus cannot be used in interactive scenarios.

Discrete grid-based methods, on the other hand, aim to generalize to new domains without retraining. Among discrete, grid-based approaches, several methods focused on data-driven fluid modeling. Respective approaches include the use of CNNs to learn the Reynolds-averaged Navier Stokes equations (RANS) [thuerey2019deep] from training data generated with OpenFOAM [OpenFoamUserGuide], the use of a data-driven approach to continuously interpolate between training data examples of smoke and fluid simulations [kim2019deep], and the use of generative adversarial networks [Xie2018TempoGAN; Kim:2020] to generate temporally consistent smoke simulations at high resolutions. While such methods yield fast simulations, training needs ground truth data and may suffer from issues such as mode collapse hindering physical accuracy.

Physics-constrained loss functions, on the other hand, can be directly applied to the output of a network without the need for ground-truth data and thus reduce memory requirements while being less prone to overfitting. This way, Tompson et al. [tompson2017accelerating] used CNNs to solve the Poisson equation for a Helmholtz projection step to accelerate a pipeline inspired by Stam's approach [stam1999stable] for inviscid fluids. Furthermore, Geneva and Zabaras [geneva2020modeling] formulated a more general framework for physics-constrained loss functions on a square grid and presented results on Burgers' equation. In this work, we go a step further and aim to solve the full incompressible Navier Stokes equations end-to-end on a MAC grid using a U-Net [ronneberger2015u] and a vector potential for the velocity field.

3 Method

In this section, we provide a detailed description of our approach for unsupervised learning of incompressible fluid dynamics. For this purpose, we first motivate a physics-constrained loss function based on the incompressible Navier Stokes equations. Then, we introduce a pressure regularization term for very high Reynolds numbers and explain how we exploit the Helmholtz decomposition to obtain divergence-free velocity fields that ensure incompressibility within the fluid domain. Finally, we provide details of our discrete, differentiable spatio-temporal fluid representation and its use for deep learning based fluid modeling.

3.1 Physics-constrained Loss based on Incompressible Navier Stokes Equations

The goal is to simulate a fluid consisting of a velocity field v and a pressure field p within a domain Ω, given Dirichlet boundary conditions v_d on the boundary ∂Ω of the domain and initial conditions v_0 and p_0 for the velocity and pressure fields at the beginning of the simulation.

Thus, we aim to solve the incompressible Navier Stokes equations for v and p within Ω, which read as follows:

∇ · v = 0   (1)
ρ (∂v/∂t + (v · ∇) v) = μ Δv − ∇p + f   (2)

Here, ρ describes the fluid density and μ its viscosity. In our case, external forces f on the fluid (such as e.g. gravity) can be neglected, so we set f = 0.

Dirichlet boundary conditions mean that v must match v_d at the boundaries:

v = v_d  on ∂Ω   (3)

Using the residuals of these equations, we can formulate the following divergence and momentum loss terms on Ω as well as a boundary loss term on ∂Ω:

L_div = ‖∇ · v‖²_Ω   (4)
L_p = ‖ρ (∂v/∂t + (v · ∇) v) − μ Δv + ∇p‖²_Ω   (5)
L_b = ‖v − v_d‖²_∂Ω   (6)

Combining the described loss terms, we obtain the following loss function:

L = α L_div + β L_p + γ L_b   (7)

where α, β and γ are hyperparameters that weight the contributions of the different loss terms.
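
For illustration, the following minimal PyTorch sketch shows how such a physics-constrained loss could be assembled with finite differences on a simple collocated grid. It is an illustrative sketch only: the actual model operates on a staggered MAC grid (Section 3.4) and on the vector potential of Section 3.3, and the grid spacing, field shapes and default weights below are assumptions made for this example.

```python
import torch

def d_dx(f): return torch.gradient(f, dim=-1)[0]   # central differences, unit grid spacing
def d_dy(f): return torch.gradient(f, dim=-2)[0]
def laplace(f): return d_dx(d_dx(f)) + d_dy(d_dy(f))   # coarse Laplacian for the viscosity term

def physics_loss(v, v_old, p, s, v_d, rho=1.0, mu=0.1, dt=1.0,
                 alpha=1.0, beta=1.0, gamma=1.0):
    """Sketch of Eq. (7). v, v_old, v_d: (N, 2, H, W) velocity fields (x/y components),
    p: (N, 1, H, W) pressure, s: binary mask, 1 inside the fluid domain and 0 elsewhere."""
    vx, vy = v[:, 0:1], v[:, 1:2]
    # residual of the incompressibility constraint (1) -> divergence loss (4)
    div = d_dx(vx) + d_dy(vy)
    # residual of the momentum equation (2), here with an explicit time discretization -> loss (5)
    rx = rho * ((vx - v_old[:, 0:1]) / dt + vx * d_dx(vx) + vy * d_dy(vx)) - mu * laplace(vx) + d_dx(p)
    ry = rho * ((vy - v_old[:, 1:2]) / dt + vx * d_dx(vy) + vy * d_dy(vy)) - mu * laplace(vy) + d_dy(p)
    # boundary loss (6): match the prescribed Dirichlet velocities outside the fluid domain
    L_div = (s * div ** 2).mean()
    L_p = (s * (rx ** 2 + ry ** 2)).mean()
    L_b = ((1.0 - s) * ((v - v_d) ** 2).sum(dim=1, keepdim=True)).mean()
    return alpha * L_div + beta * L_p + gamma * L_b
```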

3.2 Pressure Regularization

In the special case of very high Reynolds numbers (see equation 15) and inviscid flows, training becomes unstable as viscous friction cannot dissipate enough energy out of the system anymore. This leads to unrealistic gradients for v and p. For such cases, we introduce an additional regularization term

L_reg = ‖p‖²_Ω   (8)

for the loss function (7) to stabilize training. The intuition behind this regularization term is that we want to penalize unrealistically high energies of the pressure field.

3.3 Helmholtz Decomposition

A common method to ensure incompressibility of a fluid is to project the flow field onto the divergence-free part of its Helmholtz decomposition. The Helmholtz theorem states that every vector field v can be decomposed into a curl-free part ∇q and a divergence-free part ∇ × a:

v = ∇q + ∇ × a   (9)

Note that ∇ × (∇q) = 0 and ∇ · (∇ × a) = 0.

The projection consists of solving the Poisson problem Δq = ∇ · v for q, followed by subtracting ∇q from the original flow field. However, solving the Poisson equation on arbitrary domains comes at high computational costs for classical methods and one has to rely e.g. on conjugate gradient methods to approximate its solution.
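
For comparison, this classical projection step can be sketched as follows. The Jacobi iteration below is only a conceptual stand-in (unit grid spacing, zero boundary values, simplified boundary handling) for the preconditioned conjugate gradient solvers used in practice; it is exactly this iterative solve that our approach avoids.

```python
import torch
import torch.nn.functional as F

def divergence(vx, vy):
    return torch.gradient(vx, dim=-1)[0] + torch.gradient(vy, dim=-2)[0]

def helmholtz_projection(vx, vy, iters=200):
    """Approximately solve laplace(q) = div(v) with Jacobi iterations, then subtract grad(q)."""
    b = divergence(vx, vy)
    q = torch.zeros_like(b)
    for _ in range(iters):
        qp = F.pad(q, (1, 1, 1, 1))                        # zero boundary values
        q = 0.25 * (qp[..., 2:, 1:-1] + qp[..., :-2, 1:-1]
                    + qp[..., 1:-1, 2:] + qp[..., 1:-1, :-2] - b)
    return vx - torch.gradient(q, dim=-1)[0], vy - torch.gradient(q, dim=-2)[0]
```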

Here, we propose a different approach and directly try to learn a vector potential a with v = ∇ × a. This ensures that the network outputs a divergence-free velocity field within the domain and allows omitting the divergence loss L_div. In this setting, the two remaining loss terms become:

L_p = ‖ρ (∂(∇ × a)/∂t + ((∇ × a) · ∇)(∇ × a)) − μ Δ(∇ × a) + ∇p‖²_Ω   (10)
L_b = ‖(∇ × a) − v_d‖²_∂Ω   (11)

In this work, we consider 2D fluid simulations, so only the z-component a_z of a is of interest, as v_z = 0 and all derivatives with respect to the z-axis are zero:

v = ∇ × a = (∂a_z/∂y, −∂a_z/∂x)   (12)
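
A short sketch of Eq. (12): taking the velocity as the discrete curl of a scalar potential yields a field whose discrete divergence vanishes by construction. The snippet below uses simple central differences for illustration; the actual implementation operates on the staggered MAC grid of Section 3.4.

```python
import torch

def curl_2d(a_z):
    """v = (d a_z / dy, -d a_z / dx), Eq. (12), with central differences and unit spacing."""
    v_x = torch.gradient(a_z, dim=-2)[0]
    v_y = -torch.gradient(a_z, dim=-1)[0]
    return v_x, v_y

a_z = torch.randn(1, 1, 64, 64)                  # an arbitrary potential
v_x, v_y = curl_2d(a_z)
div = torch.gradient(v_x, dim=-1)[0] + torch.gradient(v_y, dim=-2)[0]
print(div.abs().max())                           # zero up to floating-point error
```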

3.4 Discrete Spatio-temporal Fluid Representation

Marker and Cell (MAC) grid

To solve the Navier Stokes equations, we represent the relation between v and p in Eulerian coordinates on a 2D staggered marker and cell (MAC) grid (see Figure 1a). Therefore, we discretise time and space as follows:

t = n · dt,  x = i · dx,  y = j · dy,  with n, i, j ∈ ℕ   (13)

Obtaining gradient, divergence, Laplace and curl operations on this grid with finite differences is straightforward and can be efficiently implemented with convolutions. For detailed descriptions regarding the fully discretized loss functions, we refer to the supplementary material.
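
The following sketch illustrates this idea for first derivatives and the 2D Laplace operator expressed as small convolution kernels (unit grid spacing assumed); the kernels actually used on the staggered grid differ in the details and are given in the supplementary material.

```python
import torch
import torch.nn.functional as F

DX = torch.tensor([[[[-0.5, 0.0, 0.5]]]])                                # d/dx (central)
DY = torch.tensor([[[[-0.5], [0.0], [0.5]]]])                            # d/dy (central)
LAPLACE = torch.tensor([[[[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]]])  # 5-point Laplacian

def d_dx(f): return F.conv2d(f, DX.to(f), padding=(0, 1))
def d_dy(f): return F.conv2d(f, DY.to(f), padding=(1, 0))
def laplace(f): return F.conv2d(f, LAPLACE.to(f), padding=1)

p = torch.randn(1, 1, 100, 300)          # e.g. a pressure field at training resolution
print(d_dx(p).shape, laplace(p).shape)   # shapes are preserved, so the operators stack freely
```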

Explicit, Implicit, Implicit-Explicit (IMEX) Loss Formulations

The discretization of the time domain is needed to deal with the time derivative of the velocity field in L_p, which becomes:

∂v/∂t ≈ (v_{t+dt} − v_t) / dt   (14)

The goal is to take timesteps that are as large as possible while maintaining stable and accurate solutions. Stability and accuracy largely depend on the definition of v in the remaining terms of L_p. In the literature, choosing v = v_t is often referred to as an explicit integration scheme and frequently leads to unstable behavior. Choosing v = v_{t+dt} is usually associated with implicit integration schemes and gives stable solutions at the cost of numerical dissipation. Implicit-Explicit (IMEX) schemes, which set v = (v_t + v_{t+dt}) / 2, are a compromise between both methods and are considered to be more accurate but less stable than implicit methods. Especially when propagating gradients throughout the simulation (e.g. for control optimization tasks), training the model beforehand with the implicit definition of v returned more stable gradients.
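
In other words, the three schemes differ only in which velocity enters the advection and diffusion terms of L_p. A one-line sketch (the blending parameter theta is illustrative notation introduced here, not notation from the paper):

```python
def integration_velocity(v_t, v_t_dt, theta):
    """Velocity inserted into the advection/diffusion terms of the momentum residual:
    theta = 0 -> explicit (v_t), theta = 1 -> implicit (v_{t+dt}), theta = 0.5 -> IMEX."""
    return theta * v_t_dt + (1.0 - theta) * v_t
```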

3.5 Fluid Model

We represent the fluid dynamics by a recurrent model that maps the fluid state (a_t, p_t) at timestep t and the domain description to the fluid state (a_{t+dt}, p_{t+dt}) of the next timestep. Here, p_t describes the pressure field and a_t the vector potential of v_t. For t = 0, we can set a_0 = 0 and p_0 = 0. The domain geometry is described by a binary mask s that is 1 for the fluid domain and 0 everywhere else; for the boundary of the domain, we simply take the inverse of s, i.e. (1 − s). The Dirichlet boundary conditions are represented by a velocity field v_d that must be matched by v at the domain boundaries. Figure 1b shows a diagram of the fluid model. First, the inputs are used to derive a feature representation. These features are then fed into a U-Net [ronneberger2015u] with a reduced number of channels (the exact network configuration can be found in the supplementary material). The mean of the U-Net output is set to 0 in order to keep a and p well defined and prevent drifting offset values. Finally, the output is added to a_t and p_t to obtain a_{t+dt} and p_{t+dt}, respectively.
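
A structural sketch of one such recurrent step is given below. The small convolutional stack stands in for the actual U-Net and the feature construction is simplified, so this should be read as an outline of Figure 1b rather than the exact architecture.

```python
import torch
import torch.nn as nn

class FluidStep(nn.Module):
    """One step of the recurrent fluid model (cf. Figure 1b), with a stand-in network."""
    def __init__(self, hidden=32):
        super().__init__()
        # inputs: a_t (1), p_t (1), s (1), v_d (2) -> outputs: delta_a (1), delta_p (1)
        self.net = nn.Sequential(
            nn.Conv2d(5, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2, 3, padding=1))

    def forward(self, a, p, s, v_d):
        out = self.net(torch.cat([a, p, s, v_d], dim=1))
        out = out - out.mean(dim=(-2, -1), keepdim=True)  # zero mean: a and p are defined only up to an offset
        return a + out[:, 0:1], p + out[:, 1:2]           # residual update of potential and pressure

# cold start on a 100x300 grid
a = torch.zeros(1, 1, 100, 300); p = torch.zeros(1, 1, 100, 300)
s = torch.ones_like(a); v_d = torch.zeros(1, 2, 100, 300)
a, p = FluidStep()(a, p, s, v_d)
```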

(a) Layout of Staggered Marker and Cell (MAC) grid.
(b) Diagram of the fluid model
Figure 1: MAC grid and diagram of the fluid model

3.6 Training Procedure and Implementation Details

Training starts with initializing a pool of randomized domains and boundary conditions as well as initial conditions for the vector potential and pressure fields, which we both set to zero (a_0 = 0 and p_0 = 0). The resolution of our training domains was 100x300 grid cells, and example domains of the training pool are shown in the supplementary material.

At each training step, a random mini-batch is drawn from the pool and fed into the neural network, which is designed to predict the velocity (v_{t+dt}) and pressure (p_{t+dt}) fields of the next timestep. Based on the physics-constrained loss function (7), we update the weights of the network using the Adam optimizer [kingma2014adam]. At the end of each training step, the pool is updated by replacing the old vector potential and pressure fields with the newly generated ones (a_{t+dt}, p_{t+dt}).

From time to time, old environments of the training pool are replaced by new randomized environments and the vector potential as well as the pressure fields are reset to 0. This increases the variance of the training pool and helps the neural network to learn "cold starts" from 0-velocity and 0-pressure fields.
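
A condensed sketch of this training loop is shown below, reusing the FluidStep module and the physics_loss sketch from above. The pool size, batch size, reset interval and the trivial domain generator are illustrative placeholders for the randomized training domains described in the supplementary material.

```python
import random
import torch

def curl(a):   # v = (d a/dy, -d a/dx), see Eq. (12)
    return torch.cat([torch.gradient(a, dim=-2)[0], -torch.gradient(a, dim=-1)[0]], dim=1)

def make_random_domain(h=100, w=300):
    """Toy stand-in: one rectangular obstacle in a pipe with constant inflow."""
    s = torch.ones(1, 1, h, w); s[..., 40:60, 100:120] = 0
    v_d = torch.zeros(1, 2, h, w); v_d[:, 0] = 0.5
    return {"a": torch.zeros(1, 1, h, w), "p": torch.zeros(1, 1, h, w), "s": s, "v_d": v_d}

model = FluidStep()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
pool = [make_random_domain() for _ in range(32)]

for step in range(10000):
    batch = random.sample(pool, 4)
    a, p, s, v_d = (torch.cat([env[k] for env in batch]) for k in ("a", "p", "s", "v_d"))

    a_new, p_new = model(a, p, s, v_d)                         # predict the next fluid state
    loss = physics_loss(curl(a_new), curl(a), p_new, s, v_d)   # unsupervised loss, Eq. (7)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

    # write the new states back into the pool; occasionally reset an environment ("cold start")
    for env, a_i, p_i in zip(batch, a_new.detach().split(1), p_new.detach().split(1)):
        env["a"], env["p"] = a_i, p_i
    if step % 500 == 0:
        pool[random.randrange(len(pool))] = make_random_domain()
```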

Besides the fluid model described above, which we denote as a-Net in the following, we also trained an ablation model, v-Net, that directly learns to predict the velocity field without a vector potential. For the implementation of both models, we used the popular machine learning framework PyTorch and trained the models on an NVidia GeForce RTX 2080 Ti, where the training converged after about 1 day. For the a-Net, we chose a higher weight for the boundary loss term L_b than for L_p; the reason was the observation that errors in v at the boundaries can lead to unrealistic flows leaking through boundaries. For the ablation study (v-Net), we had to choose a very high weight for L_div to ensure incompressibility of the fluid, as otherwise unrealistic source and sink effects start to appear. For L_b, on the other hand, we used a very low weight, as the boundary conditions can be trivially learned by the v-Net. We used these parameter settings for all experiments.

4 Results

To evaluate the potential of our method, we demonstrate its capability to reproduce physical effects such as vortex streets and the Magnus effect. In addition, we provide an analysis of the generalization capability and demonstrate its real-time performance.

4.1 Qualitative Evaluation

Qualitative Analysis of Wake Dynamics

The Reynolds number is a dimensionless quantity closely related to qualitative effects in fluid dynamics such as the wake dynamics behind an obstacle. It is defined by:

Re = ρ ‖v‖ D / μ   (15)

Here, ρ is the fluid density, ‖v‖ is the fluid speed, D is the diameter of the obstacle, and μ is the viscosity. (We use the units of the grid.)
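
As a small worked example of Eq. (15) in grid units (the concrete values below are illustrative and merely chosen to land in the laminar-wake regime of Figure 2b):

```python
def reynolds_number(rho, speed, diameter, mu):
    """Re = rho * speed * diameter / mu, Eq. (15), in grid units."""
    return rho * speed * diameter / mu

print(reynolds_number(rho=1.0, speed=1.0, diameter=30.0, mu=1.0))  # -> 30.0
```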

We retrained models for different values of the viscosity μ and density ρ to compare the fluid behavior for a wide range of Reynolds numbers. Figure 2 shows that the trained models are able to predict the wake dynamics behind an obstacle in good accordance with qualitative expectations from fluid dynamics. As a rule of thumb, for very small Reynolds numbers (Re ≪ 1), the flow becomes time-reversible. This can be noticed in Figure 2a by the symmetry of the flow before and after the obstacle and the nearly constant pressure gradient within the pipe. For moderate Reynolds numbers, the flow is still laminar but a static wake is forming behind the obstacle (see Figure 2b). For larger Reynolds numbers, Kármán vortex streets start to appear (see Figure 2c). A Kármán vortex street consists of clockwise and counterclockwise spinning vortices that are generated at the obstacle and then start moving in a regularly oscillating pattern with the flow. For very large Reynolds numbers or inviscid flows, the flow field becomes turbulent, which can be recognized by the irregular patterns behind the obstacle in Figure 2d.

(a) Re = 0.6
(b) Re = 30
(c) Re = 600
(d) Re → ∞ (obtained with regularization on p)
Figure 2: After training, our models are able to show correct wake flow dynamics for a wide range of different Reynolds numbers. Streamlines indicate flow direction, linewidth indicates speed and colors represent the pressure field (blue: low pressure / yellow: high pressure).

Magnus effect

The Magnus effect appears when a flow interacts with a rotating body. It is widely known e.g. in sports such as soccer or tennis, where spin is used to deflect the path of a ball. The deflection stems from a low-pressure field where the surface of the object moves along the flow direction and a high-pressure field where the object surface moves against the flow. Figure 3a shows that our models are able to reproduce the Magnus effect around a rotating cylinder.

(a) Magnus effect on a clockwise-turning cylinder.
(b) Generalization to objects unseen during training.
Figure 3: Our models feature the Magnus effect and generalize to new flow domains.

Analysis of Generalization Capability

We also tested the network's capability to generalize to objects not seen during training. Figure 3b shows the network's capability to meet boundary conditions and return a plausible pressure field for a domain that was not contained in the pool of training domains.

Real-time capability

Since progressing the simulation by one timestep consists merely of a single pass through a convolutional neural network, it comes at comparably low computational costs and can easily be parallelized. This enables, for example, interactive real-time simulations. We implemented a demo that allows interacting with the fluid by moving obstacles, rotating spheres and changing the flow speed within a pipe (see video in the supplementary material). Our method runs at 250 timesteps per second on a 100x300 grid on an NVidia GeForce RTX 2080 Ti, consuming about 860 MB of GPU memory.
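
A rough way to reproduce such a throughput measurement with the FluidStep sketch from above (timings will of course depend on the actual U-Net and the hardware used):

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = FluidStep().to(device).eval()
a = torch.zeros(1, 1, 100, 300, device=device); p = torch.zeros_like(a)
s = torch.ones_like(a); v_d = torch.zeros(1, 2, 100, 300, device=device)

with torch.no_grad():
    n, t0 = 1000, time.time()
    for _ in range(n):
        a, p = model(a, p, s, v_d)          # one simulation timestep = one forward pass
    if device == "cuda":
        torch.cuda.synchronize()
    print(f"{n / (time.time() - t0):.0f} timesteps per second")
```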

4.2 Quantitative Evaluation

The quantitative comparison of different fluid solvers comes with several challenges, as their performance is highly dependent on factors like the geometry of the domain, fluid parameters such as viscosity or density, flow speed or the timestep of the integrator. As benchmarks for fluid simulations on MAC grids are not yet available, we built a simple toy domain on a 100x100 grid which simulates a flow around an obstacle within a pipe (similar to Figure 2). More details are provided in the supplementary material. We compared our method (a-Net) against the recently released open source differentiable fluid simulator PhiFlow [holl2020learning] and provide an ablation study (v-Net) to show the performance of our method with and without the use of the Helmholtz decomposition (see Table 1).

First, we compared the computational speed on a CPU and GPU by looking at the number of timesteps that the different methods could integrate per second. The a-Net as well as the v-Net were significantly faster than PhiFlow, as they do not rely on a conjugate gradient solver but instead use a convolutional neural network that has a fast inference time and can be easily parallelized on a GPU. To provide a fair comparison on L_div, we set the velocity field at the boundaries equal to v_d. This enables us to compute L_div for the a-Net architecture on the domain boundaries, which would otherwise have zero divergence everywhere. We noticed that PhiFlow yields significantly better performance on L_div for smaller timesteps (e.g. 2.5e-5 and 3e-7), however, this further increases its computational burden. Finally, we compared the momentum loss L_p. For both L_div and L_p, the a-Net architecture significantly outperformed the more naive v-Net approach.

Method         TpS on CPU   TpS on GPU   L_div     L_p
PhiFlow        7            -            6.2e-4    -
v-Net (ours)   82           311          8.66e-7   4.87e-5
a-Net (ours)   82           311          5.44e-7   1.56e-5
Table 1: Quantitative comparison of timesteps per second (TpS) on CPU / GPU as well as divergence loss L_div and momentum loss L_p for differentiable fluid solvers on a 100x100 grid for large integration timesteps.

4.3 Controlling Frequency of Kármán Vortex Street

Whereas the primary focus of this work is placed on learning incompressible fluid dynamics, we also provide a proof of concept to show the effectiveness of backpropagating gradients through time in order to obtain differentiable fluid simulations with the recurrent a-Net.

For this purpose, we considered an application scenario, where we aim at controlling the frequency of Kármán vortex streets by changing the flow speed within a pipe. To address this problem, we use an iterative approach where we perform the following major steps within each iteration until we reach convergence:

  1. We first simulate a flow at a certain speed (as specified in v_d) within a pipe using the a-Net and capture the y component v_y of the flow speed behind an obstacle over time.

  2. Subsequently, we compute the fast Fourier transform v̂_y of v_y and introduce a loss function that rewards strong signals at a given target frequency f̂:

     L_freq = −Σ_f exp(−(f − f̂)² / (2σ²)) |v̂_y(f)|   (16)

     We observed that projecting |v̂_y| onto a Gaussian with standard deviation σ centered around f̂ returned more stable gradients than directly setting L_freq = −|v̂_y(f̂)|.

  3. This is followed by the computation of the gradients ∂L_freq/∂v_d of the loss function with respect to v_d by backpropagating gradients through time.

  4. Finally, we optimize the flow speed by updating v_d with a gradient descent step (e.g. using the Adam optimizer [kingma2014adam]).

A more detailed description can be found in the supplementary material. We want to stress that differentiable fluid simulations are limited to scenarios with low Reynolds numbers and laminar flows, as in the presence of turbulence, chaotic behavior will lead to exploding gradients, which can be explained by the butterfly effect.
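
A compact sketch of this optimization loop is given below. Here, `make_pipe_domain` is a hypothetical helper producing the domain mask and a unit inflow field, `model` stands for the recurrent a-Net step (e.g. the FluidStep sketch from Section 3.5), the probe location, number of unrolled steps, target frequency and σ are illustrative, and the loss is one plausible instantiation of the Gaussian-weighted spectrum described above.

```python
import torch

flow_speed = torch.tensor(0.5, requires_grad=True)       # control variable
opt = torch.optim.Adam([flow_speed], lr=1e-2)
f_target, sigma = 0.05, 0.01                              # target shedding frequency (cycles / timestep)

for it in range(100):
    s, v_d_unit = make_pipe_domain()                      # hypothetical helper: mask + unit inflow field
    v_d = flow_speed * v_d_unit                           # boundary conditions depend on the control variable
    a = torch.zeros(1, 1, 100, 300); p = torch.zeros_like(a)
    probe = []
    for t in range(500):                                  # unroll the simulation (backprop through time)
        a, p = model(a, p, s, v_d)                        # recurrent a-Net step
        v_y = -torch.gradient(a, dim=-1)[0]               # y component of v = curl(a)
        probe.append(v_y[0, 0, 50, 200])                  # sample v_y behind the obstacle
    spectrum = torch.abs(torch.fft.rfft(torch.stack(probe)))
    freqs = torch.fft.rfftfreq(len(probe))
    weight = torch.exp(-(freqs - f_target) ** 2 / (2 * sigma ** 2))
    loss = -(weight * spectrum).sum()                     # reward spectral energy near f_target, cf. Eq. (16)
    opt.zero_grad(); loss.backward(); opt.step()
```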

5 Discussion and Outlook

In this work, we presented an unsupervised learning scheme for the incompressible Navier Stokes equations. We introduced a fluid model that uses a vector potential to output divergence-free velocity fields. Qualitative results of our trained fluid models are in good accordance with expectations from fluid dynamics for a wide range of Reynolds numbers and generalize to unknown fluid domains. Quantitative assessment showed superior performance for large integration timesteps compared to PhiFlow and to a naive approach that directly predicts the velocity field. Furthermore, our fluid model allows for significantly faster simulations than PhiFlow. We presented a real-time demo and showed how differentiability can be used for optimization tasks in a fluid control scenario.

In the future, this model could be extended to 3D convolutional neural networks and multi-phase domains. On top of Dirichlet boundary conditions, Neumann boundary conditions could be incorporated as well.

Broader Impact

Surrogate models that capture physical relationships are of great importance for simulation tasks and control systems with numerous industrial applications. In addition, the integration of such physics-based constraints into learning techniques allows gaining efficiency, avoiding the need for ground-truth data and reducing susceptibility to overfitting.

As the accuracy of deep surrogate models for grid-based solutions of the Navier Stokes equations further increases in the future, such fast and differentiable fluid simulations could be applied in a wide variety of research and engineering fields. Researchers could speed up meteorology simulations or investigate the diffusion of aerosols to study the spread of airborne infections such as Covid-19. Engineers could accelerate the development of aerodynamic proofs of concept in simulated wind tunnels and use differentiable physics for various optimization tasks in fields like flow control and reinforcement learning. In computer generated imagery, this method could speed up physics engines for games or movies to obtain more realistic smoke or water effects.

We do not believe that this work puts anyone at a disadvantage. However, if learning-based fluid simulators are used to optimize highly critical tasks, one should keep track of the residual losses L_p and L_b during inference time and validate the results with well-established classical approaches. As our method directly learns from a physics-constrained loss function, it is not affected by biases in the data.

This work was partially funded by the scholarship "x-rite".

References