Frequency-compensated PINNs for Fluid-dynamic Design Problems

by   Tongtao Zhang, et al.
Siemens AG

Incompressible fluid flow around a cylinder is one of the classical problems in fluid dynamics, with strong relevance to many real-world engineering problems, for example, the design of offshore structures or of a pin-fin heat exchanger. Learning a high-accuracy surrogate for this problem can therefore demonstrate the efficacy of a novel machine learning approach. In this work, we propose a physics-informed neural network (PINN) architecture for learning the relationship between simulation output and the underlying geometry and boundary conditions. In addition to using a physics-based regularization term, the proposed approach also exploits the underlying physics to learn a set of Fourier features, i.e., frequency and phase offset parameters, and then uses them for predicting flow velocity and pressure over the spatio-temporal domain. We demonstrate this approach by predicting simulation results over an out-of-range time interval and for novel design conditions. Our results show that incorporating Fourier features improves generalization performance over both the temporal domain and the design space.



1 Introduction

The current approach for designing complex devices and systems, such as aerodynamic surfaces and turbine components, typically involves an iterative interaction between design/operating space exploration and evaluation. However, high-fidelity fluid-dynamic simulations, which are necessary to evaluate the performance of design candidates under a variety of operating conditions, demand significant time and computational power. This limits the scope of the overall design optimization process and, as a consequence, may lead to sub-optimal design choices. Applying machine learning algorithms to develop a fast and accurate surrogate for predicting simulation outcomes has the potential to significantly accelerate design evaluations, thereby generating improved design choices.

In recent years, deep neural networks and representation learning Goodfellow et al. (2016) have improved significantly in accuracy and are widely used in many application domains, such as image recognition He et al. (2016), sequential decision making Silver et al. (2017), and language comprehension Devlin et al. (2018). These approaches, especially supervised learning, leverage datasets of inputs (e.g., pictures of people, historical data of a region's average weather, etc.) and outputs (e.g., the identity of the individuals in a picture, and a weather forecast, respectively) to learn their functional relationship using backpropagation. As inference in a neural network involves a single forward pass, this provides an excellent opportunity to learn good-quality, low-cost surrogates for exploring the design spaces associated with fluid-dynamic problems.

To enable generalization beyond the training set, learning approaches incorporate an appropriate inductive bias (Baxter, 2000; Haussler, 1988) and promote representations which are more plausible in some sense. This bias typically manifests itself via a set of assumptions, which in turn can guide a learning algorithm to pick one hypothesis over another. The success in predicting an outcome for previously unseen data then depends on how well the inductive bias captures the ground reality. Inductive bias can be introduced as a prior in a Bayesian model, or via the choice of computational graphs and regularization terms in a neural network. In problems wherein the laws of physics have a strong influence on the input-output relationship, generalization can be improved by leveraging the underlying physics to define the regularization term.

In this work, we incorporate a physics-based regularization term so that a learned surrogate conforms to the underlying physics governed by the Navier-Stokes equations. We develop a PINN framework to infer how the velocity and pressure fields associated with the flow of an incompressible fluid around a cylinder depend on the underlying geometry and boundary condition, in particular, the size of the cylinder and the inlet velocity, respectively. In addition, to capture the periodic nature of the solution, which is governed by the Strouhal number (Anderson and Wendt, 1995), the proposed approach learns a set of Fourier features and phase offset parameters as functions of the underlying geometry and boundary conditions. After the PINN has been trained, simulation results, i.e., velocity and pressure fields, for new design choices can be inferred in a fast, computationally inexpensive way. Thus, by enabling high-throughput evaluation of potential design candidates, the proposed approach provides a means to achieve better, more efficient design solutions. The key contributions of this work are as follows:


  • We introduce a Fourier Feature Mapping (FFM) subnetwork within a PINN framework to yield better predictions about fluid flow around a cylinder. The FFM subnetwork learns frequency and phase offset parameters as a function of cylinder shape and inlet velocity.

  • Subsequently, we use the learned surrogate to predict simulation outputs for novel design conditions and demonstrate the improvement in its generalization performance.

Related Work

ML-based Approaches for Fluid-dynamic Simulations:

The use of machine learning algorithms for fluid-dynamic problems has drawn significant attention over the last few years. Hennigh (2017); Wei et al. (2018) have shown that supervised learning, using large datasets of simulation results obtained from finite-element or finite-volume solvers, can build surrogates for predicting simulation results with high accuracy. Jiang et al. (2020); Nabian and Meidani (2020); Raissi et al. (2019); Wang et al. (2020) have demonstrated that ML-based approaches can predict simulation results in a mesh-free manner, and that incorporating physics-based regularization in these formulations improves the quality of results by a significant margin. In addition, supervised learning has also been used to guide the discretization process in a data-driven way (Bar-Sinai et al., 2019) or to learn efficient iterative solvers (Hsieh et al., 2019). On the other hand, alternative approaches Dwivedi et al. (2019); Lu et al. (2019); Nabian and Meidani (2019) based on self-supervised learning have also been proposed; they employ a neural network to approximate the solution of a differential equation and then use automatic differentiation to compute the loss function, which is a quantitative measure of how well the dynamics (represented via a differential equation) and the initial/boundary conditions are enforced. As these approaches use the physics itself to generate training data, this line of work completely avoids the computationally expensive process of generating simulation datasets. It has also been shown that such neural network based approximations for a class of quasilinear parabolic partial differential equations can converge to their true solutions with arbitrary accuracy (Sirignano and Spiliopoulos, 2018).

Frequency Bias in Neural Networks:

Frequency bias in neural networks is a relatively well-studied problem, with a body of work focusing on the relationship between the frequency components present in a function and the speed at which neural networks learn them. Rahaman et al. (2019) demonstrated that neural networks with ReLU activations favor functions with low-frequency components.

Eldan and Shamir (2016); Montufar et al. (2014) have shown that deeper architectures are needed for a neural network to learn high-frequency functions. Basri et al. (2020); Ronen et al. (2019) analyzed the learning dynamics of gradient descent and have shown that neural networks learn low-frequency functions much faster than high-frequency functions. On the other hand, since the seminal work by Rahimi and Recht (2008), multiple works have focused on accelerating learning algorithms by mapping the input to an appropriate feature space. A recent work (Tancik et al., 2020) has shown that using Fourier features as inputs to a multi-layer perceptron (MLP) can improve its capability to learn high-frequency functions. Another work (Sitzmann et al., 2020) has shown that periodic activation functions (e.g., sine) can provide an efficient tool for learning representations in a large class of problems.
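The random Fourier feature mapping of Tancik et al. (2020) is simple to sketch. In the snippet below, the feature count, input dimension, and frequency scale are illustrative choices, not values taken from any of the cited works:

```python
import numpy as np

def fourier_feature_mapping(x, B):
    """Map low-dimensional inputs x to [sin(2*pi*B x), cos(2*pi*B x)]."""
    proj = 2.0 * np.pi * x @ B.T                                   # (n, m) projections
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)   # (n, 2m) features

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(64, 2))   # random frequency matrix; scale sets the bandwidth
x = rng.uniform(size=(5, 2))               # 5 sample points in [0, 1]^2
features = fourier_feature_mapping(x, B)
print(features.shape)                      # (5, 128)
```

Increasing the standard deviation of B widens the band of frequencies the downstream MLP can represent easily, which is the knob Tancik et al. study.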

2 Frequency-compensated PINN for Fluid-dynamic Problems

2.1 Navier-Stokes Equations

In this work, we consider a classical fluid-dynamics problem, namely a cylinder in cross flow (Anderson and Wendt, 1995). In this flow configuration, an incompressible fluid passes around a cylinder. Letting u and v denote the horizontal and vertical components of the velocity field and p denote the pressure field, the dynamics can be expressed as:

\[
f_u := u_t + u u_x + v u_y + p_x - \nu\,(u_{xx} + u_{yy}) = 0, \qquad
f_v := v_t + u v_x + v v_y + p_y - \nu\,(v_{xx} + v_{yy}) = 0, \qquad
u_x + v_y = 0, \tag{1}
\]

where subscripts denote partial derivatives, ν denotes the kinematic viscosity of the fluid under consideration, and the residuals f_u and f_v describe the underlying partial differential equations. Moreover, letting D and U denote the cylinder's diameter and inlet velocity, respectively, the Reynolds number for this flow can be expressed as Re = UD/ν. For the kinematic viscosity assumed in this work, the vortex shedding frequency can be approximated via the Strouhal relation f ≈ St · U/D for the range of inlet velocities and cylinder diameters considered here (Sumer et al., 2006).
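Both quantities are direct to compute. A minimal sketch, where the numeric inputs and the Strouhal number St ≈ 0.2 (a typical value for the subcritical Reynolds range) are illustrative assumptions rather than the paper's settings:

```python
def reynolds_number(U, D, nu):
    """Re = U * D / nu for flow past a cylinder."""
    return U * D / nu

def shedding_frequency(U, D, St=0.2):
    """Approximate vortex-shedding frequency via the Strouhal relation f = St * U / D."""
    return St * U / D

# Illustrative values, not the paper's exact settings
Re = reynolds_number(U=1.0, D=0.5, nu=0.01)   # roughly 50
f = shedding_frequency(U=1.0, D=0.5)          # roughly 0.4
print(Re, f)
```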

2.2 PINN Architecture and the Fourier Component

Figure 1 illustrates the PINN architecture that we propose for learning a surrogate for the aforementioned problem. The inputs to the network are as follows: the spatial coordinates inside the domain (x, y), the temporal dimension (t), the inlet velocity (U) and the diameter of the cylinder (D); the PINN predicts the velocity (u, v) and pressure (p) of the flow.

Figure 1: A diagram demonstrating the proposed framework. Arrows with dotted lines denote the data flow with regard to Fourier Feature Mapping.

As mentioned in the previous subsection, the velocity and pressure exhibit periodic behavior along the temporal dimension. To capture this aspect, we use a set of Fourier features defined as

\[
g_i(t) = \sin(\omega_i t + \phi_i), \quad i = 1, \dots, k, \tag{2}
\]

and employ an MLP to learn the frequencies (ω_i) and phase shifts (φ_i) in the Fourier component as functions of U and D. These features are then fed into a second MLP subnetwork, along with the spatio-temporal coordinates (x, y, t) and design specifications (U, D). The outputs from this second subnetwork yield predictions of flow velocity and pressure. Letting û, v̂ and p̂ denote the predicted velocity components and pressure, respectively, we define the prediction error as:

\[
\mathcal{L}_{\mathrm{data}} = \frac{1}{N} \sum_{j=1}^{N} \left[ (\hat{u}_j - u_j)^2 + (\hat{v}_j - v_j)^2 + (\hat{p}_j - p_j)^2 \right], \tag{3}
\]


where N is the training dataset size. In addition, to enforce that the predicted values conform to the underlying physics governed by (1), we use the following regularization term

\[
\mathcal{L}_{\mathrm{PDE}} = \frac{1}{M} \sum_{m=1}^{M} \left[ f_u^2 + f_v^2 + (u_x + v_y)^2 \right]_m, \tag{4}
\]

evaluated at M collocation points sampled from the domain.


Finally, we define the following loss function for the PINN:

\[
\mathcal{L} = \mathcal{L}_{\mathrm{data}} + \lambda\, \mathcal{L}_{\mathrm{PDE}}, \tag{5}
\]


where the hyper-parameter λ maintains a balance between prediction accuracy and regularization.
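The two-subnetwork design described above can be sketched in PyTorch as follows. The layer counts, widths, and number of Fourier features here are illustrative placeholders, not the paper's exact settings:

```python
import torch
import torch.nn as nn

class FrequencyCompensatedPINN(nn.Module):
    """Sketch of the two-subnetwork architecture (sizes are illustrative)."""
    def __init__(self, n_freq=16, hidden=64):
        super().__init__()
        self.n_freq = n_freq
        # FFM subnetwork: (U, D) -> k frequencies omega and k phase shifts phi
        self.ffm = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * n_freq),
        )
        # Second subnetwork: (x, y, t, U, D, Fourier features) -> (u, v, p)
        self.mlp = nn.Sequential(
            nn.Linear(5 + n_freq, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, y, t, U, D):
        params = self.ffm(torch.cat([U, D], dim=-1))
        omega, phi = params[:, :self.n_freq], params[:, self.n_freq:]
        fourier = torch.sin(omega * t + phi)   # sin(omega_i t + phi_i), as in (2)
        return self.mlp(torch.cat([x, y, t, U, D, fourier], dim=-1))

# Usage sketch with a batch of 8 points; each input is a column vector
x, y, t, U, D = (torch.rand(8, 1) for _ in range(5))
out = FrequencyCompensatedPINN()(x, y, t, U, D)   # columns: u, v, p
print(out.shape)   # torch.Size([8, 3])
```

Because ω and φ are produced from (U, D) only, the periodic structure of the prediction is tied directly to the design variables, mirroring the Strouhal-number dependence described in Section 2.1.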

3 Experiment

To evaluate the performance of our proposed PINN framework, we train the network using simulation data corresponding to a handful of predefined geometry (D) and inlet velocity (U) combinations and use it to predict outputs for other combinations of geometry and inlet velocity values.

3.1 Dataset

We use FEniCS Alnæs et al. (2015); Logg et al. (2012), a finite-element solver, to create a dataset for the flow-around-a-cylinder problem, as illustrated in Figure 2. We use a rectangular channel, inside which an elliptical cylinder with a fixed horizontal diameter and a vertical diameter D is placed.

Figure 2: Geometry of the flow configuration. The pink area highlights the region of interest, which is aligned at the vertical middle and is located to the right of the cylinder after an offset from its center.

The cylinder is placed at the vertical middle point, at a fixed distance from the inlet on the left. The region of interest is a rectangular region to the right of the cylinder. We run the FEniCS simulation over a fixed time interval (with a fixed time resolution) and then sample the velocity and pressure data for this region of interest. In this work, we train and evaluate the PINN to predict the output within this region of interest (shown in pink in Figure 2).

In this work, we form the training set by running FEniCS simulations for nine combinations of U and D and then taking only the points from a sub-interval of the simulated time span, sampled at a fixed interval. We keep the results for two combinations of U and D as the validation set; although one of these combinations also appears in the training set, the two sets do not overlap in the time direction. The rest of the points constitute the test set. (For further details, such as the data point distribution among the geometry settings, please see the supplementary material.) If a geometry setting (i.e., a particular combination of U and D) appears in the training set, we name it “Seen”, otherwise “Unseen”.
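The split terminology used here and in Section 3.3 can be captured by a small hypothetical helper: a test point is “Seen” if its (U, D) combination appears in the training set, and “Covered” if its time stamp lies inside the training time span. The combinations and time bound below are placeholders, not the paper's values:

```python
# Illustrative training combinations and training-time upper bound (placeholders)
TRAIN_COMBOS = {(1.0, 0.4), (1.0, 0.5), (1.5, 0.4)}
TRAIN_T_MAX = 5.0

def split_label(U, D, t):
    """Classify a data point into the Seen/Unseen x Covered/Uncovered grid."""
    seen = "Seen" if (U, D) in TRAIN_COMBOS else "Unseen"
    covered = "Covered" if t <= TRAIN_T_MAX else "Uncovered"
    return f"{seen}/{covered}"

print(split_label(1.0, 0.4, 3.0))   # Seen/Covered
print(split_label(2.0, 0.6, 7.5))   # Unseen/Uncovered
```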

3.2 Parameters and Settings

For the Fourier Feature Mapping (FFM) subnetwork, we use fully-connected (FC) layers and apply a Tanh activation function after the dropout layer. The first half of the subnetwork's outputs serve as the frequencies ω in the Fourier component (2), while the rest serve as the phase shifts φ.

For the second MLP subnetwork, we set up a network consisting of FC layers; the activation function for each FC layer is also chosen as Tanh.

In each step, besides the minibatches from the training set, we also randomly sample points from the rectangular region and calculate the PDE loss (4) for those randomly sampled points so that the learned surrogate conforms to (1). The partial derivatives are implemented using the autograd toolbox from PyTorch Paszke et al. (2019). The random points are drawn from the spatio-temporal and design domains described above.
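A minimal sketch of how the PDE loss (4) can be assembled with PyTorch's autograd, assuming a model that maps (x, y, t, U, D) to the columns (u, v, p). The viscosity value and the toy surrogate at the bottom are illustrative assumptions:

```python
import torch

def pde_loss(model, x, y, t, U, D, nu=0.01):
    """Mean squared Navier-Stokes residuals at collocation points (nu is illustrative)."""
    for s in (x, y, t):
        s.requires_grad_(True)
    out = model(x, y, t, U, D)
    u, v, p = out[:, 0:1], out[:, 1:2], out[:, 2:3]

    def grad(f, wrt):
        # create_graph=True keeps the graph so second derivatives can be taken
        return torch.autograd.grad(f, wrt, grad_outputs=torch.ones_like(f),
                                   create_graph=True)[0]

    u_t, u_x, u_y = grad(u, t), grad(u, x), grad(u, y)
    v_t, v_x, v_y = grad(v, t), grad(v, x), grad(v, y)
    p_x, p_y = grad(p, x), grad(p, y)
    u_xx, u_yy = grad(u_x, x), grad(u_y, y)
    v_xx, v_yy = grad(v_x, x), grad(v_y, y)

    f_u = u_t + u * u_x + v * u_y + p_x - nu * (u_xx + u_yy)   # x-momentum residual
    f_v = v_t + u * v_x + v * v_y + p_y - nu * (v_xx + v_yy)   # y-momentum residual
    f_c = u_x + v_y                                            # continuity residual
    return (f_u ** 2 + f_v ** 2 + f_c ** 2).mean()

# Toy check with a smooth surrogate that depends on all spatio-temporal inputs
toy = lambda x, y, t, U, D: torch.cat([torch.sin(x + y + t)] * 3, dim=-1)
loss = pde_loss(toy, torch.rand(4, 1), torch.rand(4, 1), torch.rand(4, 1),
                torch.rand(4, 1), torch.rand(4, 1))
print(loss.item())
```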

We use the Adam optimizer Kingma and Ba (2014) to train the whole neural network. The default run is a fixed number of epochs with a minibatch size of 32768, and we adopt an early-stopping strategy if the validation loss does not decrease.
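The optimization procedure can be sketched as follows. The toy data, the placeholder collocation "residual", and the hyper-parameter values are illustrative, and for brevity the early-stopping rule below monitors the training loss rather than the validation loss used in the paper:

```python
import torch
import torch.nn as nn

def train(model, data, colloc, residual_fn, lam=1.0, lr=1e-3,
          epochs=200, patience=20):
    """Adam loop minimizing data MSE + lam * residual, with simple early stopping."""
    inputs, targets = data
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best, stale = float("inf"), 0
    for _ in range(epochs):
        opt.zero_grad()
        loss = (torch.mean((model(inputs) - targets) ** 2)
                + lam * residual_fn(model, colloc))        # as in (5)
        loss.backward()
        opt.step()
        if loss.item() < best - 1e-8:
            best, stale = loss.item(), 0
        else:
            stale += 1
            if stale >= patience:   # stop when the loss plateaus
                break
    return best

# Toy usage: fit y = 2x, with a placeholder "residual" that enforces the same
# relation at randomly sampled collocation points (a stand-in for the PDE loss)
torch.manual_seed(0)
net = nn.Linear(1, 1)
xs = torch.linspace(-1.0, 1.0, 64).unsqueeze(-1)
colloc = torch.rand(32, 1) * 2 - 1
best = train(net, (xs, 2 * xs), colloc,
             lambda m, c: torch.mean((m(c) - 2 * c) ** 2))
print(best)
```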

3.3 Results

We report the errors and illustrate visualization outputs from our proposed framework for different geometry and component settings. Table 1 includes the MSE loss values for all the geometry and component settings, which are analyzed throughout this subsection.

                 Seen/Covered   Unseen/Covered   Seen/Uncovered   Unseen/Uncovered
Full Component   1.0910         0.0269           0.0727           0.112
No-FFM           0.347          0.0973           0.163
Strong-Reg       0.0589         0.0453           0.0986
No-Reg/Overfit   0.0431         0.0649           0.126
Table 1: Mean square error values on the test sets under different experiment settings, discussed throughout Section 3.3. “Seen/Unseen” denotes whether the geometry setting appears in the training set (the data points in the training and test sets never overlap); “Covered/Uncovered” denotes whether the data is drawn from the training time span.
(a) Full Component, data sampled at a time stamp with a seen combination of U and D
(b) Full Component, data sampled at a time stamp with an unseen combination of U and D
Figure 3: Visualization output as discussed in Sections 3.3.1 and 3.3.2. Within each figure, the first row is the ground truth, the second row is the prediction and the third row is the error; other visualization figures have the same layout.

3.3.1 Prediction within training geometry settings (Seen)

Figure 2(a) demonstrates the “worst” prediction output (the one with the highest MSE among the instances) of our proposed framework when using a geometry setting from the training set. It is clear that, within the training geometry settings, our proposed PINN framework provides accurate predictions for inputs that do not appear at the training time stamps. The errors are barely noticeable in the visualization result and there is almost no phase shift.

3.3.2 Prediction outside training geometry settings (Unseen)

Figure 2(b) demonstrates the performance with geometry settings that are unseen in the training set. We select the best (lowest MSE) prediction result and demonstrate it in Figure 2(b). We can conclude that our framework generally works well in terms of recovering the data distribution in the sample space, or, in terms of visualization, the shapes and artifacts in the images.

3.3.3 Ablation Settings, Challenges and Discussion

(a) Prediction beyond the training time span
(b) No-FFM
(c) Strong-Reg
(d) No-Reg/Overfit
Figure 4: Visualization results discussed in Section 3.3.3
Beyond Time Span:

In Table 1, we notice that points outside the training time span (denoted as “Uncovered”) have lower performance, i.e., higher loss values. This is a major challenge for the proposed framework due to the lack of further guidance in the training set and training process: no sampled points are drawn from the extrapolated time span. However, as we illustrate in Figure 3(a), our proposed framework can still predict the flow velocity and pressure, albeit with a small phase shift.

Fourier Feature Mapping:

Figure 3(b) demonstrates the visualization of a framework without the FFM component, i.e., x, y, t, U, and D were directly fed into the second MLP. We use the same geometry setting and time stamp as in Figure 2(b). From the output, we can conclude that FFM provides crucial frequency and phase shift information to the output; without it, the framework is not able to handle the complex frequencies and phase shifts among different combinations of geometry settings.

Weighted Loss:

In most of the experiments, the loss weight hyper-parameter λ was set to a moderate value; Figure 3(c) shows that a more strongly weighted PDE loss inhibits the vorticity in the unseen settings. If we set λ = 0, meaning that we do not introduce PDE regularization into the framework, we encounter overfitting, as demonstrated in Figure 3(d).

Potential to Replace FEniCS-like Simulators:

When we conduct a FEniCS simulation for one geometry setting on a machine equipped with two Intel Xeon E5-2620 v4 CPUs and four Nvidia Titan Xp GPUs, the simulation over the full time interval takes a substantial amount of time. With our proposed framework, we only need a small number of time steps as anchor points from FEniCS, and in considerably less time (even without early stopping) we are able to acquire the same amount of data with comparable quality.

4 Conclusion

In this paper, we proposed a frequency-compensated PINN framework which can build high-accuracy surrogates for predicting simulation results in fluid-dynamic design problems. In particular, we introduced and leveraged a Fourier feature mapping subnetwork to capture the periodicity present in the flow velocity and pressure; to conform with the underlying physics governed by the Strouhal number, this subnetwork learns the Fourier components as functions of the inlet velocity and cylinder size. Our results show that these Fourier features improve generalization in the spatial and temporal domains as well as for novel geometry settings. Future work will explore how this framework can be further extended to address fluid-dynamic problems with more complex geometries and boundary conditions.


  • M. S. Alnæs, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M. E. Rognes, and G. N. Wells (2015) The FEniCS project version 1.5. Archive of Numerical Software 3 (100). External Links: Document Cited by: §3.1.
  • J. D. Anderson and J. Wendt (1995) Computational fluid dynamics. Vol. 206, Springer. Cited by: §1, §2.1.
  • Y. Bar-Sinai, S. Hoyer, J. Hickey, and M. P. Brenner (2019) Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116 (31), pp. 15344–15349. Cited by: §1.
  • R. Basri, M. Galun, A. Geifman, D. Jacobs, Y. Kasten, and S. Kritchman (2020) Frequency bias in neural networks for input of non-uniform density. External Links: 2003.04560 Cited by: §1.
  • J. Baxter (2000) A model of inductive bias learning. Journal of Artificial Intelligence Research 12, pp. 149–198. Cited by: §1.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1.
  • V. Dwivedi, N. Parashar, and B. Srinivasan (2019) Distributed physics informed neural network for data-efficient solution to partial differential equations. arXiv preprint arXiv:1907.08967. Cited by: §1.
  • R. Eldan and O. Shamir (2016) The power of depth for feedforward neural networks. In Conference on learning theory, pp. 907–940. Cited by: §1.
  • I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio (2016) Deep learning. Vol. 1, MIT press Cambridge. Cited by: §1.
  • D. Haussler (1988) Quantifying inductive bias: AI learning algorithms and Valiant’s learning framework. Artificial Intelligence 36 (2), pp. 177–221. Cited by: §1.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §1.
  • O. Hennigh (2017) Lat-net: compressing lattice boltzmann flow simulations using deep neural networks. arXiv preprint arXiv:1705.09036. Cited by: §1.
  • J. Hsieh, S. Zhao, S. Eismann, L. Mirabella, and S. Ermon (2019) Learning neural PDE solvers with convergence guarantees. In International Conference on Learning Representations, External Links: Link Cited by: §1.
  • C. M. Jiang, S. Esmaeilzadeh, K. Azizzadenesheli, K. Kashinath, M. Mustafa, H. A. Tchelepi, P. Marcus, A. Anandkumar, et al. (2020) MeshfreeFlowNet: a physics-constrained deep continuous space-time super-resolution framework. arXiv preprint arXiv:2005.01463. Cited by: §1.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §3.2.
  • A. Logg, K. Mardal, G. N. Wells, et al. (2012) Automated solution of differential equations by the finite element method. Springer. External Links: Document, ISBN 978-3-642-23098-1 Cited by: §3.1.
  • L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis (2019) DeepXDE: a deep learning library for solving differential equations. arXiv preprint arXiv:1907.04502. Cited by: §1.
  • G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio (2014) On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (Eds.), pp. 2924–2932. Cited by: §1.
  • M. A. Nabian and H. Meidani (2019) A deep learning solution approach for high-dimensional random differential equations. Probabilistic Engineering Mechanics 57, pp. 14–25. Cited by: §1.
  • M. A. Nabian and H. Meidani (2020) Physics-driven regularization of deep neural networks for enhanced engineering design and analysis. Journal of Computing and Information Science in Engineering 20 (1). Cited by: §1.
  • A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024–8035. External Links: Link Cited by: §3.2.
  • N. Rahaman, A. Baratin, D. Arpit, F. Draxler, M. Lin, F. Hamprecht, Y. Bengio, and A. Courville (2019) On the spectral bias of neural networks. K. Chaudhuri and R. Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 5301–5310. Cited by: §1.
  • A. Rahimi and B. Recht (2008) Random features for large-scale kernel machines. In Advances in neural information processing systems, pp. 1177–1184. Cited by: §1.
  • M. Raissi, P. Perdikaris, and G. E. Karniadakis (2019) Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378, pp. 686–707. Cited by: §1.
  • B. Ronen, D. Jacobs, Y. Kasten, and S. Kritchman (2019) The convergence rate of neural networks for learned functions of different frequencies. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), pp. 4761–4771. Cited by: §1.
  • D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al. (2017) Mastering the game of Go without human knowledge. Nature 550 (7676), pp. 354–359. Cited by: §1.
  • J. Sirignano and K. Spiliopoulos (2018) DGM: a deep learning algorithm for solving partial differential equations. Journal of computational physics 375, pp. 1339–1364. Cited by: §1.
  • V. Sitzmann, J. N. Martel, A. W. Bergman, D. B. Lindell, and G. Wetzstein (2020) Implicit neural representations with periodic activation functions. arXiv preprint arXiv:2006.09661. Cited by: §1.
  • B. M. Sumer et al. (2006) Hydrodynamics around cylindrical structures. Vol. 26, World Scientific. Cited by: §2.1.
  • M. Tancik, P. P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ramamoorthi, J. T. Barron, and R. Ng (2020) Fourier features let networks learn high frequency functions in low dimensional domains. NeurIPS. Cited by: §1.
  • R. Wang, K. Kashinath, M. Mustafa, A. Albert, and R. Yu (2020) Towards physics-informed deep learning for turbulent flow prediction. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1457–1466. Cited by: §1.
  • Q. Wei, I. Akrotirianakis, A. Dasgupta, and A. Chakraborty (2018) Learn to learn: application to topology optimization. Smart and Sustainable Manufacturing Systems 2 (1), pp. 250–260. Cited by: §1.

Appendix A Data Distribution

Table 2 demonstrates the distribution of the training, validation and test set. If a geometry setting appears in training set, we name it “Seen”, otherwise “Unseen”.

Train Validation Test
Table 2: Statistics and split of the training, validation and test sets. Due to internal mechanics of FEniCS, such as mesh grid settings, there are slight differences in the numbers of data points among different geometry settings.