Smart Adaptive Mesh Refinement with NEMoSys

Adaptive mesh refinement (AMR) offers a practical solution to reduce the computational cost and memory requirements of numerical simulations that use computational meshes. In this work, we introduce a novel smart methodology for adaptive mesh refinement. Smart adaptive refinement blends classical AMR with machine learning to address some of the known issues of conventional approaches. We provide an algorithm for smart adaptive refinement and introduce a modular, object-oriented structure for its implementation. We then present the procedures used to train a smart AMR model. The study continues with preliminary numerical studies indicating the feasibility of performing smart adaptive mesh refinement on a few demonstrative problems selected from the CFD domain. Finally, we conclude with a few comments about future work.


1 Nomenclature

$Re$ = Reynolds number
$lr$ = learning rate for the machine-learning network
$b_0$ = initial bias for the machine-learning network
$w_n$ = class weights for the output layer of the machine-learning network
$\overline{|\nabla \mathbf{u}|}$ = non-dimensional velocity gradient
$\bar{Q}$ = non-dimensional Q-criterion
$\bar{\omega}$ = non-dimensional vorticity
$\bar{\Delta}$ = non-dimensional modified $\Delta$ criterion

2 Introduction

Preliminary algorithms for adaptive mesh refinement (AMR) were originally developed for computational fluid dynamics applications [bergerPhDDissertationAdaptive1982], and AMR is now routinely used in many disciplines such as material modeling [vazAspectsDuctileFracture2001], combustion [pomraningModelingTurbulentCombustion2014], biophysics [oliveiraSimulationsCardiacElectrophysiology2016a], astrophysics [bryanENZOADAPTIVEMESH2014], and climate modeling [chenExtendingLegacyClimate2019], among many others. AMR offers a practical solution to reduce the computational cost and memory requirements of mesh-based numerical analyses by eliminating the need for uniform refinement. Several AMR algorithms have been developed, sharing a remarkable level of similarity. Efforts roughly split into two major categories: i) application-specific codes; and ii) library development. The latter poses significant challenges, as it must provide the flexibility to treat a diverse set of applications in a generic and reusable framework.

Development of software and algorithms for efficient, physics-independent adaptive mesh refinement dates back to the 1970s. Early attempts laid the groundwork for two-dimensional AMR of elliptic problems [careyMeshrefinementSchemeFinite1976], treatment of mid-edge (hanging) nodes with algebraic constraints [careyAnalysisFiniteElement1976], and penalty methods [careyPenaltyMethodsInterelement1982]. Later developments include the introduction of the bisection method [mitchellPLTMGSoftwarePackage1995] and extensions to three dimensions [plaza3DRefinementDerefinement2000]. More recent work has focused on leveraging rapidly evolving parallel computing infrastructure for AMR [kirkLibMeshLibraryParallel2006c]. Parallelism is usually achieved through domain decomposition and efficient data structures for MPI communication of the partitioned mesh data [kirkLibMeshLibraryParallel2006c].

Traditionally, local error indicators obtained from a computed solution drive automated adaptation. Several error indicators have been developed for specific refinement purposes. Heuristic criteria are commonly used in library development efforts to increase flexibility irrespective of the underlying physics of a simulation problem [kirkLibMeshLibraryParallel2006c]. Similarly, more rigorous recovery-based indicators [zienkiewiczSimpleErrorEstimator1987] are widely used and can likewise be regarded as physics-independent criteria. Other ad hoc methods include feature-based indicators, which are competitive at resolving a-priori known critical feature(s) of the simulation, e.g., local vorticity fields [kasmaiFeaturebasedAdaptiveMesh2011]. Local indicators can also be extracted from a-posteriori error estimates [verfurthPosterioriErrorEstimation1994a] that are closely linked to the residuals of the numerical operators and governing equations; regardless, a-priori knowledge about the continuous problem is still required. Finally, more precise indicators have been developed that take into account global measures of error. Evaluating such indicators may involve the additional computational effort of solving a related dual (adjoint) problem [bangerthAdaptiveFiniteElement2003].

Existing AMR algorithms suffer from two major deficiencies in the treatment of both steady-state and time-dependent (or otherwise evolving) problems: i) the need for a-priori knowledge about the behavior of the solution field, and ii) over-reliance on the practitioner's expertise for criteria selection. AMR libraries largely rely on the user to pick an accurate error indicator for the problem at hand; an improper selection may lead to an efficiency lower than that of simple uniform refinement [bangerthAdaptiveFiniteElement2003]. Further, a combination of indicators is often needed to treat complex problems. Similarly, the order of importance of the employed error indicators, or the thresholds required for refinement based on a given criterion, may evolve during transient analyses. Finally, derived quantities must be computed from the solution fields, increasing conventional AMR's computational cost.

The need for refinement can be triggered by criteria other than error estimates. For instance, detection and refinement of the mesh in the neighborhood of no-slip boundaries can yield better resolution of the flowfield. Currently, boundary refinement is performed manually based on user experience. Strofer et al. [strofer2018data] propose the use of convolutional neural networks (CNNs) to identify vortex types and domain boundaries, using steady-state flow solutions as training references. Their approach maps the mesh domain onto a 2D Cartesian grid and uses a CNN to predict the boundaries as well as the vortex types present near those boundaries.

In this work, we investigate the application of machine learning to AMR. We aim to address challenges associated with classical approaches, in particular the treatment of problems with no obvious optimal refinement strategy. In its simplest form, adaptive mesh refinement can be cast as a logistic regression problem: a classifier computes whether refinement is needed or not. We investigate several popular machine learning models as the decision-making apparatus for the smart AMR methodology. We first introduce the algorithm within the context of classical adaptive mesh refinement. We then introduce the object-oriented design as well as the overall structure of our smart AMR. An implementation of this work for robust, automated adaptive mesh refinement is provided in our open-source meshing software NEMoSys [nateshNEMoSysPlatformAdaptive2018]. Our study continues with a few preliminary studies demonstrating the feasibility and robustness of smart AMR; all problems are selected from the CFD domain. We then wrap up with a discussion of open issues, limitations, and plans for future extensions.

3 Smart AMR Algorithm

The algorithm we propose for smart AMR is summarized in Fig. 1. The algorithm involves three major steps: i) data generation, ii) model selection, and iii) training and parameter tuning. Smart refinement is agnostic to the underlying PDEs and only requires properly selected training data. For steady-state problems, training data may include reference solutions to problems of a similar nature to the target problem. Similarly, for a transient problem, a proper set of time snapshots of the solution field is sufficient.

Figure 1: Smart refinement classifier training algorithm

The next step is preprocessing. This step serves three purposes: i) data normalization, ii) computing error indicators, and iii) formatting data for supervised training. The normalization process ensures the smoothness of the training procedure. We further explain this step in Section 4.3.

For simplicity, the refinement classifier operates on a point cloud. Regardless of the selected discretization scheme, a solution field can be easily represented by a point cloud, e.g., by selecting a finite element solution's nodal values. In the current paper, we investigate structured meshes and, for simplicity, include the point cloud's spatial coordinates in the input feature vector. A point cloud provides additional advantages by eliminating combinatorial irregularities and other complexities of meshes.

For each element, our smart AMR classifier marks (labels) the cell for refinement or coarsening; thus, the inputs are the solution-field point cloud and the outputs are refinement marks. We investigate the applicability of a trained classifier to problems of a similar nature. In time-dependent problems, the classifier is trained on a few initial time snapshots. The possibility of such generalization is investigated in Sec. 5.
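To make this framing concrete, the sketch below (a minimal, hypothetical Python example using numpy; the array names and the threshold-based labeling rule are illustrative assumptions, not the exact NEMoSys pipeline) shows how one solution snapshot can be turned into a supervised training set of per-point features and binary refine/coarsen labels.

```python
import numpy as np

def build_training_set(points, features, error_indicator, refine_threshold):
    """Assemble a supervised training set from one solution snapshot.

    points          : (N, 3) array of point-cloud coordinates (e.g., nodal values)
    features        : (N, F) array of per-point solution features
    error_indicator : (N,) reference indicator used only to generate labels
    refine_threshold: scalar; points above it are marked for refinement
    """
    # Input feature vector: spatial coordinates plus solution features.
    X = np.hstack([points, features])
    # Binary labels: 1 = refine, 0 = coarsen (logistic-regression framing).
    y = (error_indicator > refine_threshold).astype(np.int32)
    return X, y
```

In practice, the reference labels may come from any trusted refinement procedure, e.g., a conventional value-based criterion evaluated on a fine reference solution, as done in the studies of Section 5.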

4 Implementation Aspects

Figure 2: Modular architecture of NEMoSys with integrated libraries (bottom), core libraries/central API (center), and user end points (top)

4.1 Object-oriented Design

To facilitate ease of use, we implement our smart refinement as an extension to our open-source meshing library NEMoSys (https://github.com/IllinoisRocstar/Nemosys). Figure 2 illustrates the modular and extensible architecture of NEMoSys. The package is implemented in object-oriented C++. Several open-source meshing packages are integrated into NEMoSys (shown at the bottom of Fig. 2). We use OpenCASCADE (https://github.com/tpaviot/oce) for I/O and B-Rep operations. Gmsh [geuzaineGmsh3DFinite2009a], Netgen [schoberlNETGENAdvancingFront1997c], and cfMesh (https://sourceforge.net/projects/cfmesh/) serve mesh generation tasks. We use MAdLib [compereMeshAdaptationFramework2010], Omega_h [vittitowScalableDeterministicStateoftheArt], and OpenFOAM [jasakOpenFOAMOpenSource2009] for h-adaptation. For mesh data storage and I/O, we use VTK as the central data structure and provide support for mesh data I/O in Exodus II, HDF, and CGNS formats. For local users, NEMoSys implements a command-line interface (CLI); a network interface is also under development for remote accessibility. All interface modules communicate with the core library through a central Python/C++ application programming interface (API) (shown at the center of Fig. 2).

NEMoSys is equipped with a pair of closely interacting geometry (GModel) and mesh (VtkDataSet) databases. The GeoMeshBase class serves as the core object binding mesh and geometry data. Derived objects (GmshGeoMesh, VtkGeoMesh, and ExoGeoMesh) are concrete implementations of GeoMeshBase for additional mesh formats. In NEMoSys, each offered service is controlled by a manager class derived from the SrvBase class. The services currently offered include i) mesh generation, ii) adaptive mesh refinement, iii) re-meshing, iv) quality control, v) format conversion, vi) solution data transfer, vii) solution verification, viii) simulation input generation, ix) partitioning, x) joining, and xi) nuclear mesh generation.

AMRFoam is our preliminary implementation of the smart AMR procedure. This class provides a dynamic mesh data structure and implements methods for the different steps of the refinement. AMRFoam takes a trained AMR classifier prepared according to the procedure described in Section 4.3. For each cell, AMRFoam evaluates the classifier to compute the probability of refinement/coarsening. The mesh adaptation is then performed by another method implemented by this class or delegated to the physics solver. The class also provides optional capabilities for solution data transfer and file I/O. For improved automation, AMRFoam also takes information such as the starting and ending time steps for refinement, the maximum allowable number of elements, the maximum permissible levels of refinement/coarsening, and the I/O frequency.
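For illustration only, the following Python sketch mimics the per-cell decision step described above. The function and parameter names are hypothetical; the actual AMRFoam class is a C++ object built on OpenFOAM's dynamic mesh machinery.

```python
import numpy as np

def mark_cells(model, cell_features, threshold=0.5, max_cells=None, current_cells=0):
    """Evaluate a trained classifier per cell and return refinement marks.

    model        : trained binary classifier exposing predict()
    cell_features: (N, F) array of per-cell input features
    Returns an (N,) int array of marks: 1 = refine, 0 = coarsen/keep.
    """
    p = np.asarray(model.predict(cell_features)).reshape(-1)  # refinement probabilities
    marks = (p >= threshold).astype(np.int8)
    # Respect the maximum allowable element count, if one is given.
    if max_cells is not None and current_cells + int(marks.sum()) > max_cells:
        marks[:] = 0
    return marks
```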

4.2 Feature Extraction

Smart AMR is best trained to perform refinement based on a set of characteristic features of the solution field. In the current work, we use vortices as the characteristic feature for refinement. A vortex structure in a flowfield has no univocal mathematical definition; it can be described as the rotating motion of fluid around a common centerline. We may be able to easily identify a vortex by visualizing a flowfield, but perceptions of where the vortex ends vary among analysts. To automate the adaptive mesh refinement process, identifying flow features without visual means and without a-priori knowledge of the flow solution is of utmost importance. Although a vortex is commonly perceived as a circular motion, shear and boundary layer vortex flows do not necessarily follow clear circular motions. Several criteria, such as the Q-criterion, the $\Delta$ criterion, and the $\lambda_2$ criterion, have been developed to visualize such vortex structures. Vorticity ($\omega$) is also used to identify turbulent structures in a flow. These criteria work well for a given flow problem, but because they represent vortex structures in localized, dimensioned values, their thresholds are not easily generalizable across multiple flow problems. As such, none of these criteria can be considered optimal for adaptive refinement.

Therefore, we look for non-dimensional criteria that capture turbulent flow structures across various flow problems without large variations in thresholds. Kamkar and Wissink [kamkar2009automated] have proposed a non-dimensional form of the Q-criterion and a modified, non-dimensional version of the $\Delta$ criterion. These criteria use the local velocity strain tensor and yield good results in capturing flow structures of importance with minimal variation in threshold values. Along with the non-dimensional Q-criterion and modified $\Delta$, we propose the normalized velocity gradient and vorticity, each normalized by its respective infinity norm. We use these criteria to train our proposed machine learning model implemented in AMRFoam. We believe that these quantities best represent several key features where refinement is most needed in a CFD problem. Figure 3 shows all selected quantities on a domain surrounding an airfoil at a 20° angle of attack.

Figure 3: Visualization of the non-dimensional quantities selected for automatic feature extraction

4.2.1 Velocity Gradient

The velocity gradient represents the localized change in velocity in all three flow directions. Velocity gradient values vary with flow velocity and cannot be used directly as a general-purpose refinement criterion. We therefore normalize the velocity gradient magnitude by its infinity norm over the entire domain,

$\overline{|\nabla \mathbf{u}|} = \dfrac{\sqrt{u_{i,j}\,u_{i,j}}}{\max_{\Omega}\sqrt{u_{i,j}\,u_{i,j}}}$   (1)

where $u_{i,j}$ are the components of the velocity gradient field and $\Omega$ denotes the whole computational domain. The normalized velocity gradient shows very narrow variation in the threshold for detecting vortex structures across several different flow problems.
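A minimal numpy sketch of this normalization, assuming the velocity gradient tensor is already available at every point (the array shape is an assumption):

```python
import numpy as np

def nondimensional_velocity_gradient(grad_u):
    """Normalized velocity-gradient magnitude, Eq. (1).

    grad_u: (N, 3, 3) array of the velocity-gradient tensor at N points.
    Returns an (N,) array scaled by the infinity norm over the whole domain.
    """
    mag = np.sqrt(np.einsum('nij,nij->n', grad_u, grad_u))  # Frobenius norm per point
    return mag / np.max(mag)                                # domain-wide infinity norm
```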

4.2.2 Q-Criterion

The Q-criterion is defined as a measure of the difference between the magnitudes of velocity rotation and velocity strain. Its dimensional value depends on the flow velocity and characteristic length scales,

$Q = \dfrac{1}{2}\left(\|\mathbf{\Omega}\|^{2} - \|\mathbf{S}\|^{2}\right)$   (2)

where $\mathbf{\Omega}$ is the anti-symmetric part of the velocity gradient and $\mathbf{S}$ is the symmetric part. Normalizing by the magnitude of the symmetric part of the velocity gradient gives the non-dimensional form of the Q-criterion,

$\bar{Q} = \dfrac{Q}{\|\mathbf{S}\|^{2}}$   (3)

The non-dimensional Q-criterion indicates the quality of rotation: larger areas with significant values of (3) indicate the presence of a stronger vortex, while smaller areas indicate a weaker vortex.
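The following numpy sketch evaluates the dimensional Q-criterion and its strain-normalized form from the velocity gradient tensor, following the definitions above (the small epsilon guarding against division by zero is our own addition):

```python
import numpy as np

def nondimensional_q(grad_u, eps=1e-30):
    """Q-criterion (Eq. 2) and its strain-normalized form (Eq. 3).

    grad_u: (N, 3, 3) velocity-gradient tensors.
    """
    S = 0.5 * (grad_u + np.transpose(grad_u, (0, 2, 1)))   # symmetric part (strain rate)
    W = 0.5 * (grad_u - np.transpose(grad_u, (0, 2, 1)))   # anti-symmetric part (rotation)
    S2 = np.einsum('nij,nij->n', S, S)                      # ||S||^2 per point
    W2 = np.einsum('nij,nij->n', W, W)                      # ||Omega||^2 per point
    Q = 0.5 * (W2 - S2)                                     # dimensional Q-criterion
    return Q / (S2 + eps)                                   # normalized by strain magnitude
```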

4.2.3 Modified $\Delta$

The modified $\Delta$ criterion is derived from the eigenvalues of the velocity gradient tensor, as described in [kamkar2009automated]. For a 3D case, the eigenvalues $\lambda$ of the velocity gradient $\nabla \mathbf{u}$ are the roots of

$\lambda^{3} + P\lambda^{2} + Q\lambda + R = 0$   (4)

where $P = -\mathrm{tr}(\nabla \mathbf{u})$, $Q = \tfrac{1}{2}\left[(\mathrm{tr}\,\nabla \mathbf{u})^{2} - \mathrm{tr}\!\left((\nabla \mathbf{u})^{2}\right)\right]$, and $R = -\det(\nabla \mathbf{u})$ are the invariants of the velocity gradient tensor. Equation (4) yields either three real roots or one real root together with a pair of complex-conjugate roots ($\lambda_{cr} \pm i\lambda_{ci}$).

Complex roots indicate that a local vortex motion exists, and its strength is indicated by the imaginary part $\lambda_{ci}$ of the complex root pair. Normalizing this strength indicator by the magnitude of the symmetric part of the velocity gradient, $\|\mathbf{S}\|$, gives the non-dimensional form of the modified $\Delta$ criterion,

$\bar{\Delta} = \dfrac{\lambda_{ci}}{\|\mathbf{S}\|}$   (5)
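A numpy sketch of this computation, using a general eigenvalue solve rather than the cubic characteristic polynomial directly (an implementation choice on our part, not necessarily how NEMoSys evaluates it):

```python
import numpy as np

def nondimensional_modified_delta(grad_u, eps=1e-30):
    """Swirl strength from the complex eigenvalues of grad(u), normalized by ||S||.

    grad_u: (N, 3, 3) velocity-gradient tensors; purely real spectra
    (no local swirling motion) yield zero.
    """
    # Imaginary part of the complex-conjugate eigenvalue pair (swirl strength).
    eigvals = np.linalg.eigvals(grad_u)            # (N, 3), possibly complex
    swirl = np.max(np.abs(eigvals.imag), axis=1)
    # Normalize by the magnitude of the symmetric (strain-rate) part.
    S = 0.5 * (grad_u + np.transpose(grad_u, (0, 2, 1)))
    S_mag = np.sqrt(np.einsum('nij,nij->n', S, S))
    return swirl / (S_mag + eps)
```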

4.2.4 Vorticity

Vorticity is defined as the curl of the velocity field. In its raw form, vorticity helps visualize the swirling motion occurring in a flow field, but the threshold for identifying such phenomena depends on local values of velocity and length scales. We therefore normalize the vorticity magnitude by its infinity norm over the domain. The non-dimensional vorticity is calculated as

$\bar{\omega} = \dfrac{|\nabla \times \mathbf{u}|}{\max_{\Omega}|\nabla \times \mathbf{u}|}$   (6)
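For completeness, a numpy sketch of the non-dimensional vorticity, extracting the curl components from the velocity gradient tensor (the index convention is stated in the comments):

```python
import numpy as np

def nondimensional_vorticity(grad_u):
    """Vorticity magnitude scaled by its domain-wide infinity norm, Eq. (6).

    grad_u[n, i, j] holds du_i/dx_j at point n.
    """
    wx = grad_u[:, 2, 1] - grad_u[:, 1, 2]   # curl components of the velocity field
    wy = grad_u[:, 0, 2] - grad_u[:, 2, 0]
    wz = grad_u[:, 1, 0] - grad_u[:, 0, 1]
    mag = np.sqrt(wx**2 + wy**2 + wz**2)
    return mag / np.max(mag)
```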

4.3 Classifier Architecture and Training

NEMoSys integrates with TensorFlow (https://www.tensorflow.org/) for low-level training and evaluation of smart AMR classifiers. In this study, we use two different classifiers: a fully connected artificial neural network (ANN) with four input neurons, and a convolutional neural network (CNN) whose input is a 5×5 pixel matrix with four channels.

Figure 4: Architecture for artificial neural network (ANN) classifier

For CFD problems, we consume one of two different sets of inputs for each cell: the normalized coordinates of the cell centers together with the normalized velocity gradient (1), or four different non-dimensional features representing the quantities discussed in Section 4.2. A general architecture for the neural network is shown in Figure 4. The input layer is connected to fully connected hidden layers. We use one output neuron connected to the hidden layers for binary classification, and $n$ output neurons for classification among $n$ total classes.
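A tf.keras sketch of this architecture is given below; the hidden-layer widths default to the 15/10 configuration used in Section 5.1, while the helper name and the optional output-bias handling are our own assumptions.

```python
import tensorflow as tf

def build_ann(n_inputs=4, hidden=(15, 10), n_classes=1, output_bias=None):
    """Fully connected refinement classifier (default layer sizes follow Section 5.1)."""
    bias_init = 'zeros' if output_bias is None else tf.keras.initializers.Constant(output_bias)
    inputs = tf.keras.Input(shape=(n_inputs,))
    x = inputs
    for width in hidden:
        x = tf.keras.layers.Dense(width, activation='relu')(x)
    # Sigmoid for binary refine/coarsen labels, softmax for multi-class labeling.
    activation = 'sigmoid' if n_classes == 1 else 'softmax'
    outputs = tf.keras.layers.Dense(n_classes, activation=activation,
                                    bias_initializer=bias_init)(x)
    return tf.keras.Model(inputs, outputs)
```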

Figure 5: Architecture for convolutional neural network (CNN) classifier

For the convolutional neural network, the input layer consists of four channels of a 5×5 pixel matrix. This matrix comprises a cell at the center and its 24 neighbors (two rings) arranged in spatially correlated positions; each channel represents one of the four non-dimensional quantities. The input layer is connected to two layers of convolution and max-pooling operations, followed by one hidden layer connected to the output neurons, as shown in Figure 5. The output neuron computes the probability of refinement. During model construction and training, we found that, for the problems studied, the combination of the ReLU activation function for the hidden layers and the sigmoid activation function for the output layer yields the best performance for binary classification problems. For multi-class prediction, we use the softmax activation function for the output layer.
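A corresponding tf.keras sketch of the CNN classifier follows; the filter counts and hidden-layer width are illustrative assumptions, while the 5×5×4 input, the two convolution/max-pooling stages, and the output activations follow the description above.

```python
import tensorflow as tf

def build_cnn(patch=5, channels=4, n_classes=1):
    """Convolutional refinement classifier: a 5x5 stencil of a cell and its
    two rings of neighbors, one channel per non-dimensional quantity."""
    inputs = tf.keras.Input(shape=(patch, patch, channels))
    x = tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
    x = tf.keras.layers.MaxPooling2D(2)(x)
    x = tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu')(x)
    x = tf.keras.layers.MaxPooling2D(2)(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(32, activation='relu')(x)          # hidden layer
    activation = 'sigmoid' if n_classes == 1 else 'softmax'
    outputs = tf.keras.layers.Dense(n_classes, activation=activation)(x)
    return tf.keras.Model(inputs, outputs)
```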

In adaptive refinement, the numbers of refinement and coarsening locations rarely balance; coarsening labels usually outnumber refinement labels by a large margin. This lack of balance warrants some adjustments in the architecture of the classifier to avoid potential overfitting problems. To restore balance, we use an initial bias and class weights. We calculate the initial bias as

$b_0 = \ln\!\left(\dfrac{N_r}{N_c}\right)$   (7)

where $N_r$ is the number of refinement labels and $N_c$ is the number of coarsening labels.

We use the initial bias for binary classification problems only. For multi-class classification, only class weights are used. Class weights are a means of telling the machine learning network to focus more on under-represented classes during training so that the model does not overfit on over-represented classes. For the class weights in the output layer we use

$w_n = \dfrac{N}{C\,N_n}$   (8)

where $N_n$ is the number of labels for class $n$, $N$ is the total number of labels, and $C$ is the number of classes. The training of our neural network involves finding a minimum of the loss function using a stochastic gradient descent procedure. To ensure that the model does not overshoot the global minimum, the learning rate ($lr$) is controlled using inverse time decay,

$lr = \dfrac{lr_0}{1 + d\,\dfrac{s}{s_d}}$   (9)

where $lr_0$ is the initial learning rate, $d$ is the decay rate, $s$ is the global step (number of training points used in one batch), and $s_d$ is the decay step (total number of batches). Once the training is successfully completed, the trained model is used by the AMRFoam class for smart refinement, as discussed in Section 4.1.
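The snippet below shows one way to realize Eqs. (7)-(9) with TensorFlow/Keras; the label counts and decay constants are made-up placeholders, not the values used in our studies.

```python
import numpy as np
import tensorflow as tf

# Example (made-up) label counts; in practice these come from the training data.
n_refine, n_coarsen = 1_000, 50_000

# Initial output bias for binary classification, Eq. (7).
initial_bias = np.log(n_refine / n_coarsen)

# Class weights, Eq. (8): up-weight the under-represented class.
counts = {0: n_coarsen, 1: n_refine}             # 0 = coarsen, 1 = refine
total = sum(counts.values())
class_weight = {c: total / (len(counts) * n) for c, n in counts.items()}

# Inverse time decay of the learning rate, Eq. (9).
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.5)
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)
```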

5 Preliminary Feasibility Studies

5.1 Pitz-Daily Problem

For the first demonstration case, we apply smart AMR to the canonical Pitz-Daily problem. Figure 6-(Top) shows the velocity field computed at a time step dominated by vortices formed in the wake of the step. The problem was set up and run with the pimpleFoam solver from OpenFOAM, and the CFD solver was augmented to use the AMRFoam class. The Reynolds number for the flow field was selected to be . Problem runtime was selected to be 0.1 seconds with a time step of second. We opted for vorticity as the dominant feature of this physics simulation, using insight obtained from the velocity gradient and vorticity fields for the first 100 time steps. Thus, as the input feature vector, we use coordinates scaled by the extents of the domain and the magnitude of the velocity gradient tensor scaled by its infinity norm over the entire domain, as given by Eq. (1).

Figure 6: Flow field velocity for the canonical Pitz-Daily problem (Top). Expected refined mesh for capturing vorticities (Center). On-the-fly smart adaptive refinement for the same time step calculated with 98% accuracy (Bottom)

An artificial neural network with four input neurons, two fully connected hidden layers with fifteen and ten neurons, and one output neuron was used as the classifier. The classifier was trained on data collected from the time step range 1-25 (up to second of runtime). The neural network training was performed on data obtained from a higher-density mesh (~79K cells) compared to the coarser mesh (~11.5K cells) shown in Fig. 6, on which the predictions were made. We used 30% of the training data for cross-validation of the classifier after each epoch of training. Training was performed until the loss function stopped improving for ten consecutive epochs.
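Continuing the hypothetical sketches above, the 30% validation split and the ten-epoch patience criterion map naturally onto standard Keras callbacks; X_train and y_train denote the assembled feature and label arrays, while build_ann, initial_bias, class_weight, and optimizer refer to the earlier sketches.

```python
import tensorflow as tf

model = build_ann(n_inputs=4, hidden=(15, 10), output_bias=initial_bias)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])

# Stop once the validation loss has not improved for ten consecutive epochs.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=10, restore_best_weights=True)

history = model.fit(X_train, y_train,
                    validation_split=0.3,         # 30% held out for cross-validation
                    epochs=200, batch_size=1024,  # illustrative values
                    class_weight=class_weight,
                    callbacks=[early_stop])
```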

Figure 6-(Center) shows the expected refinement for second, computed using a conventional value-based refinement procedure. The mesh was refined to best resolve the vorticities. Figure 6-(Bottom) shows the smart refinement prediction for the same time step. The smart refinement achieved 98% accuracy compared to the conventional value-based refinement. Figure 7-(Left) shows the confusion matrix for the prediction, indicating that the classifier mispredicted 300 coarsening labels out of a total of 7172 labels and properly predicted all refinement labels. It should be noted that mispredicted coarsening labels slightly reduce the AMR efficiency, whereas the accuracy of the physics simulation is preserved. Figure 7-(Right) lists the precision, recall, and f1-scores for the predictions generated by the classifier.

Figure 7: Confusion matrix for the Pitz-Daily case (Left), and precision, recall, and f1-scores (Right)

5.2 Turbofan Engine Blades

As discussed in Section 3, a trained classifier can be applied to an ensemble of simulation problems with similar physical characteristics. We investigated the feasibility of such applications through an example. Figure 8-(Left) illustrates the initial mesh and flow field computed for a turbofan engine with a pair of fan blades and guiding vanes, an inlet at the left, an outlet at the right, and periodic boundary conditions at the top and the bottom. The Reynolds number of the flowfield was selected to be 600,000, and the problem was solved for a total runtime of second. We employed the same ANN classifier described in Section 5.1. In these problems, vorticities were predominantly observed in the wake of the blades and vanes, so a feature-based AMR with proper criteria for vortex identification, such as the non-dimensional Q-criterion, modified $\Delta$, and vorticity described in Section 4.2, can be employed. Similarly, the magnitude of the velocity gradient tensor itself can be considered a proper refinement criterion.

Figure 8: Flow field captured by an unrefined mesh (Left), and a mesh refined by smart refinement methodology (Right). Vorticities are better resolved by the refined mesh

5.2.1 Pure Classifier

The classifier used in this problem is similar to the one used in the Pitz-Daily problem. This classifier was trained using simulation data collected from a much simpler airfoil flow field simulation, shown in Fig. 3. The training problem is an airfoil at a 20° angle of attack with the Reynolds number of . As observed, the training problem shares similarities with the primal problem, but it is much simpler to set up and run. The neural network was trained with a total of 11.1M inputs (mesh cell center coordinates and the non-dimensional velocity gradient) collected from the time-evolving airfoil problem. As shown in Fig. 8-(Right), the smart refinement methodology was able to identify and resolve the vortex zones successfully.

5.2.2 Ensemble Classifier

To assess the degree of generalization this neural network can offer across different flow problems, we added the Pitz-Daily and T-Junction (Figure 9) flow solutions, along with the flow-over-airfoil solution, to the neural network training dataset. Training the classifier on this new dataset, we observed a drop in training accuracy from 98% to 89%. Upon further exploration, we identified the input coordinates as the root cause of the issue. We then replaced the mesh cell center coordinates with the three non-dimensional quantities (Q-criterion, vorticity, and modified $\Delta$) described in Section 4.2, so that the input layer consists of the non-dimensional velocity gradient, vorticity, modified $\Delta$, and Q-criterion.

Figure 9: Flowfield for the flow over an airfoil at 20° AOA (Left); Flowfield for the T-junction problem (Center); Flowfield for the Pitz-Daily problem (Right)

We re-trained the same neural network architecture with a new training dataset consisting of 2.6M input parameters and training labels, evenly sampled from different time snapshots of all three problems shown in Figure 9. Re-training the network improved the accuracy to 94%. This improvement indicates that using proper non-dimensional quantities derived from the flow is critical for the classifier's generalization. Figure 10 shows predictions made using the old and new neural network models on the turbine blade case. It is evident that the neural network trained on ensemble simulation data performs much better and targets specific regions for refinement, leading to improved computational efficiency.

Figure 10: Flowfield for turbine blade domain (Left). Predictions for refinement made by ANN with four inputs as mesh cell center coordinates and non-dimensional velocity gradient (Top-Right) vs. ANN with four inputs as four non-dimensional quantities described in section 4.2 (Bottom-Right)

5.2.3 Boundary Detection

Smart AMR can also be used to automate complex mesh optimization for CFD analyses, e.g., to predict the flowfield problem's boundaries. In this study, we use the CNN classifier shown in Figure 5 to label boundary layer cells where turbulent structures exist. We used the same four non-dimensional quantities described above as the different channels of the input layer. We trained this network on ensemble data generated from the simulation problems shown in Figure 9; this dataset contains 2.6M training inputs and training labels. The network achieved a training accuracy of 92.5% for predicting three different classes: i) internal cell refinement, ii) boundary layer cell refinement, and iii) coarsening. Figure 11 shows the trained classifier's predictions for the turbine blade mesh. The classifier was able to pick out the boundaries of the airfoils and the nearest boundary layer cells, which are candidates for refinement.

Figure 11: Boundaries predicted by trained convolutional neural network

6 Conclusion

In this work, we introduced the smart AMR methodology to tackle adaptive refinement for problems without optimal refinement criteria. We first discussed the algorithm used for the method and introduced our object-oriented implementation. The application of proper non-dimensional feature extraction criteria suitable for different classes of problems, and the training operations used, were discussed in detail. The feasibility of the method in performing adaptive mesh refinement was demonstrated on a few representative problems. Our preliminary studies indicated that smart refinement with a proper set of inputs and training is generalizable to problems of a similar nature. Finally, we presented an example of the application of smart AMR for flowfield boundary mesh optimization. Future work may include extension to physical phenomena other than flow problems and extension to three-dimensional cases.

Acknowledgments

This work has been supported by the U.S. Department of Energy Office of Science SBIR grant No. DE-SC0018077.

References