1 Introduction
The current approach for designing complex devices and systems, such as aerodynamic surfaces and turbine components, typically involves an iterative interplay between design/operating-space exploration and evaluation. However, high-fidelity fluid-dynamic simulations, which are necessary to evaluate the performance of design candidates under a variety of operating conditions, demand significant time and computational power. This limits the scope of the overall design optimization process and, as a consequence, may lead to suboptimal design choices. Applying machine learning algorithms to develop a fast and accurate surrogate for predicting simulation outcomes has the potential to significantly accelerate design evaluations, thereby generating improved design choices.
In recent years, deep neural networks and representation learning (Goodfellow et al., 2016) have improved significantly in accuracy and are widely used in many application domains, such as image recognition (He et al., 2016), sequential decision making (Silver et al., 2017), and language comprehension (Devlin et al., 2018). These approaches, especially supervised learning, leverage datasets of inputs (e.g., pictures of people, historical records of a region's average weather, etc.) and outputs (e.g., the identity of the individuals in a picture, and a weather forecast, respectively) to learn their functional relationship using backpropagation. As inference in a neural network involves a single forward pass, this provides an excellent opportunity to learn good-quality, low-cost surrogates for exploring the design spaces associated with fluid-dynamic problems.
To enable generalization beyond the training set, learning approaches incorporate an appropriate inductive bias (Baxter, 2000; Haussler, 1988) and promote representations that are more plausible in some sense. This bias typically manifests itself via a set of assumptions, which in turn can guide a learning algorithm to pick one hypothesis over another. The success in predicting an outcome for previously unseen data then depends on how well the inductive bias captures the ground reality. Inductive bias can be introduced as a prior in a Bayesian model, or via the choice of computational graphs and regularization terms in a neural network. In problems wherein the laws of physics have a strong influence on the input-output relationship, generalization can be improved by leveraging the underlying physics to define the regularization term.
In this work, we incorporate a physics-based regularization term so that a learned surrogate conforms to the underlying physics governed by the Navier-Stokes equations. We develop a physics-informed neural network (PINN) framework to infer how the velocity and pressure fields associated with the flow of an incompressible fluid around a cylinder depend on the underlying geometry and boundary condition, in particular the size of the cylinder and the inlet velocity, respectively. In addition, to capture the periodic nature of the solution, which is governed by the Strouhal number (Anderson and Wendt, 1995), the proposed approach learns a set of Fourier features and a phase offset parameter as functions of the underlying geometry and boundary conditions. After the PINN has been trained, simulation results, i.e., velocity and pressure fields, for new design choices can be inferred in a fast, computationally inexpensive way. Thus, by enabling high-throughput evaluation of potential design candidates, the proposed approach provides a means to achieve better, more efficient design solutions. The key contributions of this work are as follows:


- We introduce a Fourier Feature Mapping (FFM) subnetwork within a PINN framework to yield better predictions of fluid flow around a cylinder. The FFM subnetwork learns frequency and phase offset parameters as a function of cylinder shape and inlet velocity.

- Subsequently, we use the learned surrogate to predict simulation outputs for novel design conditions and demonstrate the improvement in its generalization performance.
Related Work
ML-based Approaches for Fluid-dynamic Simulations:
The use of machine learning algorithms in fluid-dynamic problems has drawn significant attention over the last few years. Hennigh (2017); Wei et al. (2018) have shown that supervised learning, using large datasets of simulation results obtained from finite-element or finite-volume solvers, can build surrogates that predict simulation results with high accuracy. Jiang et al. (2020); Nabian and Meidani (2020); Raissi et al. (2019); Wang et al. (2020) have demonstrated that ML-based approaches can predict simulation results in a mesh-free manner, and that incorporating physics-based regularization in these formulations improves the quality of results by a significant margin. In addition, supervised learning has also been used to guide the discretization process in a data-driven way (Bar-Sinai et al., 2019) or to learn efficient iterative solvers (Hsieh et al., 2019). On the other hand, alternative approaches based on self-supervised learning have also been proposed (Dwivedi et al., 2019; Lu et al., 2019; Nabian and Meidani, 2019); they employ a neural network to approximate the solution of a differential equation and then use automatic differentiation to compute the loss function, which is a quantitative measure of how well the dynamics (represented via a differential equation) and the initial/boundary conditions are enforced. As these approaches use the physics itself to generate training data, this line of work completely avoids the computationally expensive process of generating simulation datasets. It has also been shown that such neural-network-based approximations for a class of quasilinear, parabolic partial differential equations can converge to their true solutions with arbitrary accuracy (Sirignano and Spiliopoulos, 2018).
Frequency Bias in Neural Networks:
Frequency bias in neural networks is a relatively well-studied problem, with a body of work focusing on the relationship between the frequency components present in a function and the speed at which neural networks learn them. Rahaman et al. (2019) have demonstrated that neural networks with ReLU activation favor functions with low frequency components. Eldan and Shamir (2016); Montufar et al. (2014) have shown that deeper architectures are needed for a neural network to learn high-frequency functions. Basri et al. (2020); Ronen et al. (2019) analyzed the learning dynamics of gradient descent and have shown that neural networks learn low-frequency functions much faster than high-frequency functions. On the other hand, since the seminal work by Rahimi and Recht (2008), multiple works have focused on accelerating learning algorithms by mapping the input to an appropriate feature space. A recent work (Tancik et al., 2020) has shown that using Fourier features as inputs to a multilayer perceptron (MLP) can improve its capability to learn high-frequency functions. Another work (Sitzmann et al., 2020) has shown that the use of periodic activation functions (e.g., sinusoids) can provide an efficient tool for learning representations in a large class of problems.
2 Frequency-compensated PINN for Fluid-dynamic Problems
2.1 Navier-Stokes Equations
In this work, we consider a classical fluid-dynamics problem, namely a cylinder in cross flow (Anderson and Wendt, 1995). In this flow configuration, an incompressible fluid passes around a cylinder. Then, by letting $u$ and $v$ denote the horizontal and vertical components of the velocity field and $p$ denote the pressure field, the dynamics can be expressed as:
$$
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = -\frac{\partial p}{\partial x} + \nu\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right), \quad
\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} = -\frac{\partial p}{\partial y} + \nu\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right), \quad
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0 \qquad (1)
$$
where $\nu$ denotes the kinematic viscosity of the fluid under consideration and the functional $f$ collects the residuals of the underlying partial differential equation. Moreover, by letting $D$ and $U$ denote the cylinder's diameter and inlet velocity, respectively, the Reynolds number for this flow can be expressed as $\mathrm{Re} = UD/\nu$. As we assume the kinematic viscosity to be fixed, the vortex shedding frequency can be approximated via the Strouhal relation $f_s \approx \mathrm{St}\,U/D$ for the range of inlet velocities and cylinder diameters considered in this work (Sumer et al., 2006).
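As a quick numerical illustration of these relations (the operating-point values below are hypothetical, not the settings used in the paper):

```python
def reynolds(U, D, nu):
    """Reynolds number Re = U * D / nu for flow past a cylinder."""
    return U * D / nu

def shedding_frequency(U, D, St=0.2):
    """Vortex shedding frequency from the Strouhal relation f = St * U / D.
    St ~ 0.2 is a typical value for circular cylinders at moderate Re."""
    return St * U / D

# Hypothetical operating point (not taken from the paper):
Re = reynolds(U=1.0, D=0.1, nu=1e-3)   # roughly 100
f = shedding_frequency(U=1.0, D=0.1)   # roughly 2.0 (in 1/time units)
print(Re, f)
```

Because both quantities are simple ratios of the design variables, they are cheap to evaluate when screening candidate $(U, D)$ combinations.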
2.2 PINN Architecture and the Fourier Component
Figure 1 illustrates the PINN architecture that we propose for learning a surrogate for the aforementioned problem. The inputs to the network are as follows: the spatial coordinates inside the domain ($x$, $y$), the temporal coordinate ($t$), the inlet velocity ($U$), and the diameter of the cylinder ($D$); the PINN predicts the velocity ($u$, $v$) and pressure ($p$) of the flow.
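A minimal PyTorch sketch of this input/output interface, including a learned frequency/phase head of the kind described in the remainder of this subsection, is given below. All layer sizes, the number of Fourier features, and the absence of dropout are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class FFMPinn(nn.Module):
    """Sketch: an FFM subnetwork predicts frequencies and phases from (U, D);
    the main MLP consumes the resulting Fourier features alongside (x, y, t, U, D)."""
    def __init__(self, n_freq=8, hidden=64):
        super().__init__()
        self.n_freq = n_freq
        self.ffm = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * n_freq),   # n_freq frequencies + n_freq phases
        )
        self.mlp = nn.Sequential(
            nn.Linear(5 + n_freq, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),            # predictions (u, v, p)
        )

    def forward(self, x, y, t, U, D):
        out = self.ffm(torch.stack([U, D], dim=-1))
        omega, phase = out[..., :self.n_freq], out[..., self.n_freq:]
        feats = torch.sin(omega * t.unsqueeze(-1) + phase)   # Fourier features in t
        h = torch.cat([torch.stack([x, y, t, U, D], dim=-1), feats], dim=-1)
        u, v, p = self.mlp(h).unbind(-1)
        return u, v, p

net = FFMPinn()
x = torch.rand(16); y = torch.rand(16); t = torch.rand(16)
U = torch.full((16,), 1.0); D = torch.full((16,), 0.1)
u, v, p = net(x, y, t, U, D)
print(u.shape)  # torch.Size([16])
```

Note that the frequency/phase head sees only $(U, D)$, so the learned periodicity is tied to the design variables rather than to individual space-time points.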
As mentioned in the previous subsection, the velocity and pressure exhibit periodic behavior along the temporal dimension. To capture this aspect, we use a set of Fourier features defined as
$$
\phi_k(t) = \sin(\omega_k t + \theta_k), \qquad k = 1, \dots, K, \qquad (2)
$$
and employ an MLP to learn the frequencies ($\omega_k$) and phase shifts ($\theta_k$) in the Fourier component as a function of $U$ and $D$. These features are then fed into a second MLP subnetwork, along with the spatio-temporal coordinates ($x$, $y$, $t$) and design specifications ($U$, $D$). The outputs from this second subnetwork yield predictions of the flow velocity and pressure. By letting $\hat{u}$, $\hat{v}$, and $\hat{p}$ denote the predicted velocity components and pressure, respectively, we define the prediction error as:
$$
\mathcal{L}_{\mathrm{data}} = \frac{1}{N} \sum_{i=1}^{N} \left[ (\hat{u}_i - u_i)^2 + (\hat{v}_i - v_i)^2 + (\hat{p}_i - p_i)^2 \right], \qquad (3)
$$
where $N$ is the training dataset size. In addition, to enforce that the predicted values conform to the underlying physics governed by (1), we use the following regularization term:
$$
\mathcal{L}_{\mathrm{PDE}} = \frac{1}{M} \sum_{j=1}^{M} \left[ r_u(x_j, y_j, t_j)^2 + r_v(x_j, y_j, t_j)^2 + r_c(x_j, y_j, t_j)^2 \right], \qquad (4)
$$

where $r_u$, $r_v$, and $r_c$ denote the residuals of the two momentum equations and the continuity equation in (1), evaluated at $M$ collocation points.
Finally, we define the following loss function for the PINN:
$$
\mathcal{L} = \mathcal{L}_{\mathrm{data}} + \lambda \, \mathcal{L}_{\mathrm{PDE}}, \qquad (5)
$$
where the hyperparameter $\lambda$ maintains a balance between prediction accuracy and regularization.
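A simplified sketch of how this loss could be assembled with PyTorch's autograd is shown below. Here `model` stands for any network mapping $(x, y, t, U, D)$ to the predicted $(u, v, p)$, and the viscosity value is an assumed placeholder rather than the paper's setting:

```python
import torch

nu = 1e-3  # kinematic viscosity (assumed placeholder value)

def pde_residuals(model, x, y, t, U, D):
    """Residuals of the incompressible Navier-Stokes equations,
    computed with autograd at the given collocation points."""
    x = x.clone().requires_grad_(True)
    y = y.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u, v, p = model(x, y, t, U, D)

    def grad(f, w):
        return torch.autograd.grad(f, w, grad_outputs=torch.ones_like(f),
                                   create_graph=True)[0]

    u_x, u_y, u_t = grad(u, x), grad(u, y), grad(u, t)
    v_x, v_y, v_t = grad(v, x), grad(v, y), grad(v, t)
    p_x, p_y = grad(p, x), grad(p, y)
    u_xx, u_yy = grad(u_x, x), grad(u_y, y)
    v_xx, v_yy = grad(v_x, x), grad(v_y, y)

    r_u = u_t + u * u_x + v * u_y + p_x - nu * (u_xx + u_yy)  # x-momentum
    r_v = v_t + u * v_x + v * v_y + p_y - nu * (v_xx + v_yy)  # y-momentum
    r_c = u_x + v_y                                           # continuity
    return r_u, r_v, r_c

def total_loss(model, batch, colloc, lam=1.0):
    """Data MSE plus lambda-weighted PDE loss at collocation points."""
    x, y, t, U, D, u, v, p = batch
    uh, vh, ph = model(x, y, t, U, D)
    data = ((uh - u)**2 + (vh - v)**2 + (ph - p)**2).mean()
    r_u, r_v, r_c = pde_residuals(model, *colloc)
    pde = (r_u**2 + r_v**2 + r_c**2).mean()
    return data + lam * pde
```

The `create_graph=True` flag keeps the derivative graph alive so that second derivatives, and ultimately gradients of the PDE loss with respect to the network weights, can be taken.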
3 Experiment
To evaluate the performance of our proposed PINN framework, we train the network using simulation data corresponding to a handful of predefined geometry ($D$) and inlet velocity ($U$) combinations and use it to predict outputs for other combinations of geometry and inlet velocity values.
3.1 Dataset
We use FEniCS (Alnæs et al., 2015; Logg et al., 2012), a finite-element solver, to create a dataset for the flow-around-a-cylinder problem, as illustrated in Figure 2. We have a rectangular channel of a given length and width, and an elliptical cylinder with a fixed horizontal diameter and a vertical diameter $D$ is placed inside the channel. The cylinder is placed at the vertical midpoint, at a fixed distance from the inlet on the left. The region of interest is a rectangular subregion of the channel. We run the FEniCS simulation over a time interval (with a fixed time resolution), and then sample the velocity and pressure data for this region of interest. In this work, we train and evaluate the PINN to predict the output within this region of interest (shown in pink in Figure 2).
In this work, we fix the remaining simulation parameters and form the training set by running FEniCS simulations for 9 combinations of ($U$, $D$), taking only the points from the training time interval at a fixed sampling interval. We also keep the results for two combinations of $U$ and $D$ as the validation set; although one combination appears in both the training and validation sets, the two sets do not overlap in the time direction. The rest of the points constitute the test set. With this, we obtain the training, validation, and test sets of instances.¹ If a geometry setting (i.e., a particular combination of $U$ and $D$) appears in the training set, we name it "Seen", otherwise "Unseen".

¹For further details, such as the data point distribution among the geometry settings, please see the supplementary materials.
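The Seen/Unseen labels defined above, together with the Covered/Uncovered time-span distinction used later in the evaluation, could be assigned with a helper of the following form (an illustrative sketch with hypothetical $(U, D)$ values, not the paper's code):

```python
def split_label(setting, t, train_settings, train_t_max):
    """Label a sample by whether its (U, D) setting appears in the training set
    ("Seen"/"Unseen") and whether its time stamp lies within the training time
    span ("Covered"/"Uncovered"). Illustrative helper only."""
    seen = "Seen" if setting in train_settings else "Unseen"
    covered = "Covered" if t <= train_t_max else "Uncovered"
    return f"{seen}/{covered}"

# Hypothetical (U, D) combinations and training horizon:
train_settings = {(1.0, 0.1), (1.5, 0.1)}
print(split_label((1.0, 0.1), t=2.0,
                  train_settings=train_settings, train_t_max=4.0))
# Seen/Covered
```

The four resulting categories correspond to the four columns reported in Table 1.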
3.2 Parameters and Settings
For the Fourier Feature Mapping (FFM) subnetwork, we use fully-connected (FC) layers with a fixed number of neurons on each layer and apply a Tanh activation function after the dropout layer. The final output size of the subnetwork is twice the number of Fourier features: the first half of the outputs serve as the frequencies in the Fourier component (2), while the rest serve as the phase shifts.
For the second MLP subnetwork, we set up a network consisting of FC layers with a fixed number of neurons on each layer. The activation function for each FC layer is also chosen as Tanh.
In each step, besides the mini-batches from the training set, we also randomly sample points from the rectangular region and calculate the PDE loss (4) for those randomly sampled points, so that the learned surrogate conforms to (1). The partial derivatives are implemented using the autograd toolbox from PyTorch (Paszke et al., 2019). The random points are drawn from the spatial and temporal domains of the simulation.
3.3 Results
We show the errors and illustrate visualization outputs from our proposed framework under different geometry and component settings. Table 1 reports the MSE loss values for all the geometry and component settings, which are analyzed throughout this subsection.
Table 1: MSE loss values ("–" marks entries not recovered from the source).

|                 | Seen/Covered | Unseen/Covered | Seen/Uncovered | Unseen/Uncovered |
|-----------------|--------------|----------------|----------------|------------------|
| Full Component  | 1.0910       | 0.0269         | 0.0727         | 0.112            |
| No-FFM          | 0.347        | 0.0973         | 0.163          | –                |
| Strong-Reg      | 0.0589       | 0.0453         | 0.0986         | –                |
| No-Reg/Overfit  | 0.0431       | 0.0649         | 0.126          | –                |
3.3.1 Prediction within training geometry settings (Seen)
Figure 2(a) shows the "worst" prediction output (the one with the highest MSE among the test instances) of our proposed framework for a geometry setting from the training set. It is clear that, within the training geometry settings, our proposed PINN framework provides accurate predictions for inputs that do not appear at the training time stamps. The errors are barely noticeable in the visualization and there is almost no phase shift.
3.3.2 Prediction outside training geometry settings (Unseen)
Figure 2(b) demonstrates the performance for geometry settings that are unseen in the training set. We select the best (lowest-MSE) prediction result and show it in Figure 2(b). We can conclude that our framework generally works well in terms of recovering the data distribution in the sample space, or, in terms of visualization, the shapes and artifacts in the images.
3.3.3 Ablation Settings, Challenges and Discussion
Beyond Time Span:
In Table 1, we notice that points beyond the training time span (denoted as "Uncovered") have lower performance, i.e., higher loss values. This is a major challenge for the proposed framework due to the lack of further guidance in the training set and training process: no sampled points are drawn from beyond the training time span. However, as we illustrate in Figure 3(a), our proposed framework can still predict the flow velocity and pressure, albeit with some small phase shift.
Fourier Feature Mapping:
Figure 3(b) shows the visualization of a framework with the FFM component removed, i.e., $x$, $y$, $t$, $U$, and $D$ were directly fed into the second MLP. We use the same geometry setting and time stamp as in Figure 2(b). From the output, we can conclude that FFM provides crucial frequency and phase shift information; without it, the framework is not able to handle the complex frequencies and phase shifts among different combinations of geometry settings.
Weighted Loss:
In most of the experiments, the loss weight parameter $\lambda$ was kept fixed, and Figure 3(c) shows that a more heavily weighted PDE loss inhibits the vorticity in the unseen settings. If we set $\lambda = 0$, meaning that we do not introduce PDE regularization into the framework, we encounter overfitting, as demonstrated in Figure 3(d).
Potential to Replace FEniCS-like Simulators:
When we conduct a FEniCS simulation for one geometry setting on a machine equipped with two Intel Xeon E5-2620 v4 CPUs and four Nvidia Titan Xp GPUs, simulating data over the full time interval takes a substantial amount of time. With our proposed framework, we only need a small number of time steps as anchor points from FEniCS, and within seconds (even without early stopping) we are able to acquire the same amount of data with comparable quality.
4 Conclusion
In this paper, we proposed a frequency-compensated PINN framework which can build a high-accuracy surrogate for predicting simulation results in fluid-dynamic design problems. In particular, we introduced and leveraged a Fourier feature mapping subnetwork to capture the periodicity present in the flow velocity and pressure; to conform with the underlying physics governed by the Strouhal number, this subnetwork learns the Fourier components as functions of inlet velocity and cylinder size. Our results show that these Fourier features improve generalization in the spatial and temporal domains as well as for novel geometry settings. Future work would explore how this framework can be further extended to address fluid-dynamic problems with more complex geometries and boundary conditions.
References
Alnæs et al. (2015). The FEniCS project version 1.5. Archive of Numerical Software 3 (100).
Anderson and Wendt (1995). Computational Fluid Dynamics. Vol. 206, Springer.
Bar-Sinai et al. (2019). Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116 (31), pp. 15344–15349.
Basri et al. (2020). Frequency bias in neural networks for input of non-uniform density. arXiv preprint arXiv:2003.04560.
Baxter (2000). A model of inductive bias learning. Journal of Artificial Intelligence Research 12, pp. 149–198.
Devlin et al. (2018). BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Dwivedi et al. (2019). Distributed physics informed neural network for data-efficient solution to partial differential equations. arXiv preprint arXiv:1907.08967.
Eldan and Shamir (2016). The power of depth for feedforward neural networks. In Conference on Learning Theory, pp. 907–940.
Goodfellow et al. (2016). Deep Learning. Vol. 1, MIT Press, Cambridge.
Haussler (1988). Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. Artificial Intelligence 36 (2), pp. 177–221.
He et al. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
Hennigh (2017). Lat-Net: compressing lattice Boltzmann flow simulations using deep neural networks. arXiv preprint arXiv:1705.09036.
Hsieh et al. (2019). Learning neural PDE solvers with convergence guarantees. In International Conference on Learning Representations.
Jiang et al. (2020). MeshfreeFlowNet: a physics-constrained deep continuous space-time super-resolution framework. arXiv preprint arXiv:2005.01463.
Kingma and Ba (2014). Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Logg et al. (2012). Automated Solution of Differential Equations by the Finite Element Method. Springer.
Lu et al. (2019). DeepXDE: a deep learning library for solving differential equations. arXiv preprint arXiv:1907.04502.
Montufar et al. (2014). On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems 27, pp. 2924–2932.
Nabian and Meidani (2019). A deep learning solution approach for high-dimensional random differential equations. Probabilistic Engineering Mechanics 57, pp. 14–25.
Nabian and Meidani (2020). Physics-driven regularization of deep neural networks for enhanced engineering design and analysis. Journal of Computing and Information Science in Engineering 20 (1).
Paszke et al. (2019). PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024–8035.
Rahaman et al. (2019). On the spectral bias of neural networks. Proceedings of Machine Learning Research, Vol. 97, pp. 5301–5310.
Rahimi and Recht (2008). Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pp. 1177–1184.
Raissi et al. (2019). Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378, pp. 686–707.
Ronen et al. (2019). The convergence rate of neural networks for learned functions of different frequencies. In Advances in Neural Information Processing Systems 32, pp. 4761–4771.
Silver et al. (2017). Mastering the game of Go without human knowledge. Nature 550 (7676), pp. 354–359.
Sirignano and Spiliopoulos (2018). DGM: a deep learning algorithm for solving partial differential equations. Journal of Computational Physics 375, pp. 1339–1364.
Sitzmann et al. (2020). Implicit neural representations with periodic activation functions. arXiv preprint arXiv:2006.09661.
Sumer et al. (2006). Hydrodynamics Around Cylindrical Structures. Vol. 26, World Scientific.
Tancik et al. (2020). Fourier features let networks learn high frequency functions in low dimensional domains. In NeurIPS.
Wang et al. (2020). Towards physics-informed deep learning for turbulent flow prediction. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1457–1466.
Wei et al. (2018). Learn to learn: application to topology optimization. Smart and Sustainable Manufacturing Systems 2 (1), pp. 250–260.
Appendix A Data Distribution
Table 2 shows the distribution of the training, validation, and test sets. If a geometry setting appears in the training set, we name it "Seen", otherwise "Unseen".
[Table 2: number of instances per ($U$, $D$) geometry setting across the Train, Validation, and Test splits, with a final row of column sums.]