Modelling pressure-Hessian from local velocity gradients information in an incompressible turbulent flow field using deep neural networks

11/19/2019
by Nishant Parashar, et al.

The understanding of the dynamics of velocity gradients in turbulent flows is critical to understanding various non-linear turbulent processes. The pressure-Hessian and the viscous-Laplacian govern the evolution of the velocity gradients and are known to be non-local in nature. Over the years, several simplified dynamical models have been proposed that model the viscous-Laplacian and the pressure-Hessian primarily in terms of local velocity gradient information. These models can also serve as closure models for Lagrangian PDF methods. The recent fluid deformation closure model (RFDM) has been shown to retrieve excellent one-time statistics of the viscous process. However, the pressure-Hessian modelled by the RFDM has various physical limitations. In this work, we first demonstrate the limitations of the RFDM in estimating the pressure-Hessian. Further, we employ a tensor basis neural network (TBNN) to model the pressure-Hessian from the velocity gradient tensor itself. The neural network is trained on high-resolution data obtained from direct numerical simulation (DNS) of isotropic turbulence at a Reynolds number of 433 (JHU turbulence database, JHTD). The predictions made by the TBNN are tested against two different isotropic turbulence datasets at Reynolds numbers of 433 (JHTD) and 315 (UP Madrid turbulence database, UPMTD) and a channel flow dataset at a Reynolds number of 1000 (UT Texas and JHTD). The evaluation of the neural network output is made in terms of the alignment statistics of the predicted pressure-Hessian eigenvectors with the strain-rate eigenvectors for turbulent isotropic flow as well as channel flow. Our analysis of the predicted solution leads to the discovery of ten unique coefficients of the tensor basis of strain-rate and rotation-rate tensors, the linear combination over which accurately captures key alignment statistics of the pressure-Hessian tensor.




I Introduction

In a turbulent flow, various processes like energy cascade, intermittency and fluid element deformation are strongly related to the small-scale velocity gradient field. Various experimental, direct numerical simulation and simple dynamical model based studies have been performed to understand the dynamics of the velocity gradient tensor [luthi2005lagrangian, ashurst1987alignment, vieillefosse1982local, cantwell1992exact]. In continuation of these works, several other studies have been reported as well [ashurst1987alignment, ashurst1987pressure, girimaji1990diffusion, ohkitani1993eigenvalue, pumir1994numerical, girimaji1995modified, o2005relationship, chevillard2006lagrangian, da2008invariants, chevillard2011lagrangian, soria1994study, pirozzoli2004direct_b, suman2012velocity, wang2012flow, vaghefi2015local, danish2016influence, parashar2017, parasharJFM, parasharIJHFF]. The pressure-Hessian and the viscous tensor are the two important processes governing the evolution of the velocity gradient tensor. These processes are inherently non-local in nature and are unclosed from a mathematical viewpoint. chevillard2008 developed the recent fluid deformation closure model (RFDM) for modelling the viscous tensor and the pressure-Hessian. Although the RFD model robustly captures various one-time statistics of the viscous tensor, it has various inherent limitations in predicting the pressure-Hessian tensor (discussed in section III). Hence, in this paper, we focus on modelling the pressure-Hessian tensor using velocity gradient information. Recently, an improved model, the recent fluid deformation of Gaussian fields (RFDG) model, has been proposed by johnson2016. It is an improvement over the RFD model in terms of predicting various one-time statistics of the velocity gradient tensor. However, the authors [johnson2016] did not focus on any relevant statistics of the pressure-Hessian tensor. For this reason, in this work, all our comparisons will be made against the RFD model of chevillard2008.

In the recent past, machine learning has gained popularity in the turbulence research community. The earliest such contribution in the field of machine-learning-aided turbulence research was made by duraisamy2014, where the authors developed an intermittency-transport-based model for bypass transition using machine learning and inverse modelling. Since then, a large number of researchers have tried to model various turbulence processes using machine learning models [duraisamy2015, brendan2015, brendan2015b, parish2016, zhang2015, jack2017, duraisamy2019]. ling2016 employed a deep neural network to directly model the Reynolds stress anisotropy tensor using strain-rate and rotation-rate tensors. In doing so, they developed a novel tensor basis neural network (TBNN), which can be employed to map a given tensor from known input tensors. The TBNN has been shown to achieve superior performance by embedding tensor invariance properties in the network itself. Later, fang2018 used the TBNN for turbulent channel flow and compared their results against standard turbulence models. sotgiu2018 developed a new framework in conjunction with the TBNN for predicting turbulent heat fluxes. Further, geneva2019 developed a Bayesian tensor basis neural network for predicting the Reynolds stress anisotropy tensor.

As mentioned earlier, the recent fluid deformation closure model (RFDM) [chevillard2008] is considered to be the state-of-the-art model for pressure-Hessian calculation. However, the pressure-Hessian predicted by the RFD model shows nonphysical alignment tendencies with the strain-rate tensor (explained in section III). Any further improvement in the existing model may require a deeper understanding of the complex relationship between pressure-Hessian and velocity gradients. For this task, we employ deep learning, which can potentially decipher any functional relationship that exists between the quantities of interest. The tensor basis neural network (TBNN) developed by ling2016 has already been shown to map tensorial quantities robustly. In this work, we use high-resolution incompressible isotropic turbulence data from the Johns Hopkins University turbulence database, JHTD [JHUTD_1, JHUTD_2] (http://turbulence.pha.jhu.edu), to train a neural network model inspired by the TBNN. Further, we show that appropriate normalization of the input data and a few modifications to the network can lead to significant improvements in the alignment characteristics of the predicted output. The predictions made by the TBNN are compared against two different isotropic turbulence datasets that were not used for training the network: (i) Taylor Reynolds number of 433, JHTD [JHUTD_1, JHUTD_2], and (ii) isotropic turbulence at Taylor Reynolds number of 315 (UP Madrid database, https://torroja.dmt.upm.es/turbdata/Isotropic) [cardesa2017]. To demonstrate the generality of the predicted solution in terms of alignment statistics for other types of flows, we also test the trained model on channel flow data at a friction Reynolds number of 1000 (UT Austin and JHU turbulence database) [JHUTD_3]. Further evaluation of the neural network output helps us retrieve ten unique coefficients of the tensor basis of strain-rate and rotation-rate tensors, the linear combination over which can be used to predict the pressure-Hessian tensor robustly.

This paper is organized into six sections. In section II we present the governing equations. In section III, we explain the limitations of the RFD model. In section IV, we present the details of the tensor basis neural network architecture employed for this study. The analysis of the predicted solution from the TBNN is also presented in section IV. Further, in section V, we explain the modifications incorporated in the TBNN and compare its results against the state-of-the-art RFD model. Section VI concludes the paper with a brief summary.

II Governing Equations

The governing equations of an incompressible flow field comprise the continuity equation, the momentum equation and the state equation of a perfect gas:

(1)  ∂u_k/∂x_k = 0
(2)  ∂u_i/∂t + u_k ∂u_i/∂x_k = −(1/ρ) ∂p/∂x_i + ν ∂²u_i/(∂x_k ∂x_k)
(3)  p = ρRT

where u_i and x_i represent the velocity and position, respectively. Density, pressure and temperature are represented by ρ, p and T, while R denotes the gas constant and ν the kinematic viscosity. The velocity gradient tensor is defined as:

A_ij = ∂u_i/∂x_j

Taking the gradient of the momentum equation (2), the exact evolution equation of A_ij can be derived:

(4)  dA_ij/dt = −A_ik A_kj − P_ij + ν ∂²A_ij/(∂x_k ∂x_k)

where P_ij = (1/ρ) ∂²p/(∂x_i ∂x_j) and ν ∂²A_ij/(∂x_k ∂x_k) represent the pressure-Hessian and the viscous Laplacian governing the evolution of the velocity gradient tensor. The rate of change of A_ij following a fluid particle is represented using the substantial derivative: d/dt = ∂/∂t + u_k ∂/∂x_k.
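As a concrete illustration, the right-hand side of the evolution equation (4) can be assembled directly from the three 3x3 arrays involved. The helper below is a minimal sketch; the function name and the sample values of A are our own, not from the paper:

```python
import numpy as np

def vgt_rhs(A, P, visc_lap):
    """Right-hand side of the velocity gradient evolution equation (4):
    dA/dt = -A.A - P + nu * laplacian(A).

    A        : velocity gradient tensor (3x3)
    P        : pressure-Hessian (3x3)
    visc_lap : viscous Laplacian term nu * laplacian(A) (3x3)
    """
    return -A @ A - P + visc_lap

# Restricted Euler limit: pressure-Hessian and viscous term neglected.
A = np.array([[0.1,  0.3,  0.0],
              [-0.2, 0.0,  0.1],
              [0.0,  0.2, -0.1]])   # trace-free, as continuity requires
rhs = vgt_rhs(A, np.zeros((3, 3)), np.zeros((3, 3)))
```

In this restricted Euler limit the right-hand side reduces to −A², which is what the closure models discussed in the following sections add corrections to.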

III Limitations of the RFD model for pressure-Hessian calculation

The state-of-the-art model for pressure-Hessian calculation is the recent fluid deformation closure model (RFDM) developed by chevillard2008. The RFD pressure-Hessian (P^RFD) is expressed as:

(5)  P^RFD = − ( Tr(A²) / Tr(C⁻¹) ) C⁻¹

where C is the right Cauchy-Green tensor, modelled as C = e^(τ_η A) e^(τ_η Aᵀ) with τ_η the Kolmogorov time scale, and the symbol Tr(·) represents the trace of the tensor. The pressure-Hessian predicted by the RFD model has some inherent inconsistencies compared with the actual pressure-Hessian obtained from DNS. These limitations are listed below:
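The RFD estimate in equation (5) is straightforward to evaluate numerically. The sketch below uses a truncated Taylor series for the matrix exponential; the function names and the example tensor are illustrative, not from the paper:

```python
import numpy as np

def mexp(M, terms=30):
    """Matrix exponential via truncated Taylor series; adequate for the
    small arguments tau*A used by the RFD model."""
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def rfd_pressure_hessian(A, tau):
    """RFD pressure-Hessian, equation (5):
    C = exp(tau A) exp(tau A^T) is the modelled right Cauchy-Green
    tensor; P_RFD = -(Tr(A^2) / Tr(C^-1)) * C^-1."""
    C = mexp(tau * A) @ mexp(tau * A.T)
    C_inv = np.linalg.inv(C)
    return -(np.trace(A @ A) / np.trace(C_inv)) * C_inv

A = np.array([[0.5,  1.0,  0.0],
              [-0.3, -0.2, 0.4],
              [0.2,  0.1, -0.3]])   # a trace-free example gradient
P_rfd = rfd_pressure_hessian(A, 0.1)
```

Note that P_rfd is a scalar multiple of the symmetric positive-definite matrix C⁻¹, which is precisely the limitation discussed next.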

  1. P^RFD is always sign-definite: It is evident that C is a positive-definite matrix, as it is basically the product of a real invertible matrix (e^(τ_η A)) and its transpose. Since the inverse of a positive-definite matrix is also positive-definite, C⁻¹ is always positive-definite, and P^RFD is guaranteed to be either positive-definite or negative-definite, depending on the sign of Tr(A²). Therefore, the eigenvalues of P^RFD are either all negative or all positive. This behavior of P^RFD is nonphysical, since the governing equations do not impose any such restriction on the pressure-Hessian P to be either positive-definite or negative-definite. P is real symmetric by nature and hence will have at least one positive and one negative eigenvalue most of the time.

    Figure 1: Alignment of the pressure-Hessian eigenvectors with the strain-rate eigenvectors: panels (a,b,c) use the RFD model, panels (d,e,f) use DNS. The three eigenvectors correspond to the three eigenvalues of each tensor.
  2. In strain-dominated regions, the eigenvectors of P^RFD coincide with the strain-rate eigenvectors: As discussed above, P^RFD is always either negative-definite or positive-definite. Further, it is evident that if A is close to being symmetric (strain-dominant), the eigenvectors of C will be approximately parallel or perpendicular to the eigenvectors of the strain-rate tensor itself. Hence, in strain-dominated regions P^RFD is expected to show biased alignment towards the strain-rate eigenvectors, which is nonphysical. In order to verify this claim, we show the alignment of the eigenvectors of the modelled and the DNS pressure-Hessian with the strain-rate eigenvectors in Figure 1. In Figure 1(a,b,c) we show the PDF (probability density function) of the alignment of the P^RFD-eigenvectors with the strain-rate eigenvectors, and in Figure 1(d,e,f) we show the alignment of the DNS pressure-Hessian eigenvectors for comparison. It can be observed that, for a large percentage of particles, the P^RFD-eigenvectors are either parallel or perpendicular to the strain-rate eigenvectors (Figure 1(a,b,c)). On the other hand, the eigenvectors of the pressure-Hessian obtained from DNS show no such alignment tendencies (Figure 1(d,e,f)).
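The sign-definiteness behind limitation 1 can be checked numerically: any tensor of the Cauchy-Green form F Fᵀ is symmetric positive-definite, and so is its inverse. A minimal sketch, where the random F is our own stand-in for e^(τ_η A):

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 3))   # stand-in for exp(tau*A); invertible a.s.
C = F @ F.T                       # right Cauchy-Green form: symmetric PD
C_inv = np.linalg.inv(C)

eig_C = np.linalg.eigvalsh(C)
eig_C_inv = np.linalg.eigvalsh(C_inv)
# Both C and C^-1 have strictly positive eigenvalues. Hence any model of
# the form P = -(scalar) * C^-1 is sign-definite: its eigenvalues are
# either all positive or all negative, unlike the DNS pressure-Hessian.
```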

IV Using neural networks to model the pressure-Hessian

It is evident from the discussion in the previous section that the functional relationship between the pressure-Hessian and the local velocity gradient tensor, if any, is far too complex to be addressed by simple algebraic models. In general, the evolution of the pressure-Hessian of individual fluid particles is expected to be governed by a large spectrum of flow quantities, their higher derivatives and their evolutionary history as well. Nevertheless, in this work, we intend to explore the maximum potential of local velocity gradients to describe the pressure-Hessian accurately. For this purpose, we employ deep neural networks. Given a sufficiently large network and training data, neural networks can potentially decipher the functional relationship (if any) existing between the quantities of interest. With this motivation, we resort to neural networks to provide a better mapping between the pressure-Hessian and the velocity gradient tensor.

IV.1 Neural network architecture

In this work, we employ the tensor basis neural network (TBNN) developed by ling2016. This architecture has been shown to be robust for mapping tensors. The TBNN increases the representation power of the neural network by embedding knowledge of the tensor basis (T^(n)) and invariants (λ_i) in the network itself. The TBNN takes advantage of the Cayley-Hamilton theorem, which states that any tensor function derived from a given set of tensors alone can be expressed as a linear combination of the integrity basis [spencer1958] of the given tensors. The predictions made by the TBNN are basically a linear combination of the integrity basis (T^(n)) of the input tensors. Hence, the TBNN explores the full spectrum of all the mappings that any input tensor can offer, by enforcing the output of the network to be a linear combination of its integrity basis. Further, the TBNN has embedded rotational invariance, which ensures that the predictions made by the network are independent of the orientation of the coordinate system. If the input tensors are expressed in a rotated coordinate system, the predicted output will also get rotated accordingly. Hence, the TBNN predicts the same output tensor irrespective of the orientation of the coordinate system. Figure 2 presents a brief overview of the TBNN.

Figure 2: Schematic of the TBNN. W^(l) and b^(l) are the weight matrix and the bias vector of the l-th layer. Both W^(l) and b^(l) are learnable parameters of the neural network, which are optimized using the RMSprop optimizer [RMSprop].

For an incompressible flow field, pope1975 derived the ten trace-free integrity basis tensors (T^(n)) and five independent invariants (λ_i) of the strain-rate (S = (A + Aᵀ)/2) and rotation-rate (W = (A − Aᵀ)/2) tensors. These tensor basis elements and invariants are listed below:

(6)  T^(1) = S,  T^(2) = SW − WS,  T^(3) = S² − (1/3) Tr(S²) I,  T^(4) = W² − (1/3) Tr(W²) I,
     T^(5) = WS² − S²W,  T^(6) = W²S + SW² − (2/3) Tr(SW²) I,  T^(7) = WSW² − W²SW,
     T^(8) = SWS² − S²WS,  T^(9) = W²S² + S²W² − (2/3) Tr(S²W²) I,  T^(10) = WS²W² − W²S²W

(7)  λ_1 = Tr(S²),  λ_2 = Tr(W²),  λ_3 = Tr(S³),  λ_4 = Tr(W²S),  λ_5 = Tr(W²S²)

The symbol Tr(·) represents the trace of the tensor. A linear combination of these ten tensor basis elements (T^(n)) can represent any trace-free tensor that is directly derived from S and W. Since the exact expression for the trace of the pressure-Hessian is already known:

(8)  Tr(P) = −A_ik A_ki

these trace-free integrity bases (T^(n)) can readily be used to model the trace-free part of the pressure-Hessian using the TBNN. We use the symbol P′ to denote the trace-free part of P. To find the relevant mapping between the velocity gradient tensor and the pressure-Hessian, the ten coefficients (c^(n)) corresponding to the ten integrity basis tensors (T^(n)) need to be modelled. The five invariants (λ_i) of S and W form the primary input of the TBNN. The output of the last layer of the network yields the ten coefficients c^(n). A secondary input containing the ten tensor basis elements (called the tensor layer) is fed to the last layer of the network. Finally, a dot product between the coefficient layer and the tensor layer makes the final output of the network, which can be expressed as:

(9)  P′ = Σ_{n=1}^{10} c^(n) T^(n)

The cost function of the network can be expressed as:

(10)  J = (1/N) Σ_{m=1}^{N} ‖ P′_pred,m − P′_DNS,m ‖²_F

where N is the number of training examples required to train the TBNN, P′_pred and P′_DNS are the predicted and DNS trace-free pressure-Hessians, and ‖·‖_F represents the Frobenius norm.
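The integrity basis and invariants of equations (6)-(7), together with the dot product of equation (9), can be written compactly as follows. This is a sketch; the function names and the example gradient are ours:

```python
import numpy as np

def tensor_basis(A):
    """Ten trace-free integrity basis tensors and five invariants of the
    strain-rate S and rotation-rate W (equations 6 and 7)."""
    S = 0.5 * (A + A.T)
    W = 0.5 * (A - A.T)
    I = np.eye(3)
    S2, W2 = S @ S, W @ W
    T = [
        S,
        S @ W - W @ S,
        S2 - np.trace(S2) / 3 * I,
        W2 - np.trace(W2) / 3 * I,
        W @ S2 - S2 @ W,
        W2 @ S + S @ W2 - 2 * np.trace(S @ W2) / 3 * I,
        W @ S @ W2 - W2 @ S @ W,
        S @ W @ S2 - S2 @ W @ S,
        W2 @ S2 + S2 @ W2 - 2 * np.trace(S2 @ W2) / 3 * I,
        W @ S2 @ W2 - W2 @ S2 @ W,
    ]
    lam = [np.trace(S2), np.trace(W2), np.trace(S @ S2),
           np.trace(W2 @ S), np.trace(W2 @ S2)]
    return T, lam

def tbnn_output(coeffs, T):
    """Equation (9): linear combination of the basis with the ten
    coefficients produced by the network's last layer."""
    return sum(c * Tn for c, Tn in zip(coeffs, T))

A = np.array([[0.1,  0.5, -0.2],
              [0.3,  0.2,  0.1],
              [-0.1, 0.4, -0.3]])   # trace-free example gradient
T, lam = tensor_basis(A)
H_prime = tbnn_output(0.1 * np.ones(10), T)   # arbitrary coefficients
```

Every basis element is symmetric and trace-free, so any linear combination automatically inherits the symmetry and zero trace required of the modelled tensor.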

IV.2 Training of the neural network

The employed tensor basis neural network (TBNN) model is trained using data from an isotropic incompressible flow field at a Reynolds number of 433. This data is taken from the Johns Hopkins University turbulence database [JHUTD_1, JHUTD_2], available online at http://turbulence.pha.jhu.edu/. The open-source library Keras [chollet2015keras] with the TensorFlow backend is used for training the TBNN model. The velocity gradient tensor and pressure-Hessian information are extracted from the database at a particular time instant. A total of 262,144 unique data-points are extracted from the flow field. Out of these 262,144 data-points, 236,544 points are used for training the network, while the remaining 25,600 data-points are reserved for cross-validation of the predicted solution. The training data is randomly distributed into 924 mini-batches of 256 data-points each at the beginning of every epoch. Since one epoch is one complete pass through the training dataset over all mini-batches, one epoch accounts for 924 iterations of the training cycle. The velocity gradient tensor was non-dimensionalized with the mean value of the Frobenius norm over the whole sample of 262,144 data-points. No further normalization was used for the derived tensor basis (T^(n)) and invariants (λ_i).

Figure 3: Decay of cost function during training for TBNN. Mini-batch size=256, 1 epoch = 924 iterations of the optimizer.

A deep network with 11 hidden layers containing 50, 150, 150, 150, 150, 300, 300, 150, 150, 150 and 100 neurons in the consecutive hidden layers was found to yield the best performance of all the combinations that were tested. We use the Glorot normal initialization [glorot2010] for the weight matrices and the ReLU (rectified linear unit) non-linear activation function for the hidden layers. The RMSprop optimizer [RMSprop], with a learning rate of , was used to train the network. The training was stopped when the value of the cost function became stagnant. The minimum values of training cost and cross-validation cost recorded while training were - and - respectively. In Figure 3, we show the training and cross-validation cost as a function of the number of training epochs. The cross-validation cost did not show any significant rise during the training process. A low dropout rate of 10% was used to facilitate ensemble learning in the network. There was no gain in model performance with further increases in data size and network depth.
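The paper trains the network in Keras; the forward pass itself, though, is easy to state in plain NumPy. The sketch below uses toy layer sizes and random weights; all names and sizes here are illustrative, whereas the actual network has 11 hidden layers of 50 to 300 units:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def tbnn_forward(invariants, basis, weights, biases):
    """TBNN forward pass: the five invariants pass through dense ReLU
    layers; the last layer emits ten coefficients, which are dotted
    with the tensor layer (the ten basis tensors)."""
    h = np.asarray(invariants, dtype=float)
    for Wm, b in zip(weights[:-1], biases[:-1]):
        h = relu(Wm @ h + b)
    coeffs = weights[-1] @ h + biases[-1]
    return sum(c * T for c, T in zip(coeffs, basis)), coeffs

# Toy network: 5 invariants -> 50 -> 50 -> 10 coefficients.
sizes = [5, 50, 50, 10]
weights = [0.1 * rng.standard_normal((m, n))
           for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

# Symmetric stand-ins for the ten basis tensors.
basis = [0.5 * (B + B.T) for B in rng.standard_normal((10, 3, 3))]
out, coeffs = tbnn_forward([0.1, -0.2, 0.05, 0.0, 0.3],
                           basis, weights, biases)
```

Because the basis tensors are symmetric, the network output is symmetric by construction, independent of the learned weights.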

IV.3 Testing of the trained network

Figure 4: Alignment of the pressure-Hessian eigenvectors with the strain-rate eigenvectors: panels (a,b,c) use the TBNN prediction, panels (d,e,f) use DNS. (JHTD isotropic turbulence testing dataset, Reynolds number 433 [JHUTD_1, JHUTD_2])

The primary testing of the trained TBNN model was performed on a separate testing dataset (other than the training and validation data) of isotropic turbulence (JHTD [JHUTD_1, JHUTD_2]). The relative Frobenius-norm error of the pressure-Hessian obtained from the trained model on the testing dataset was found to be . On the same dataset, an error of was obtained by the RFDM model. Hence, in terms of element-wise mean squared error, the accuracy of the trained TBNN model is comparable to the existing RFDM model. However, element-wise comparison alone is not a sufficient metric for comparing tensorial quantities. We saw earlier (in Figure 1) that the RFD model fails to capture the alignment statistics with the strain-rate tensor. In Figure 4, we present the alignment of the pressure-Hessian eigenvectors predicted by the TBNN (Figure 4(a,b,c)) with the strain-rate eigenvectors, compared against that obtained from DNS (Figure 4(d,e,f)). We observe that although the alignment statistics (Figure 4(a,b,c)) have improved compared to the RFD model results (Figure 1(a,b,c)), the obtained statistics are still far off from those obtained from DNS.

V Modified neural network architecture

Figure 5: Decay of the cost function during training for the modified TBNN. Mini-batch size = 256, 1 epoch = 924 iterations of the optimizer.
Figure 6: Alignment of the pressure-Hessian eigenvectors obtained from the modified TBNN with the strain-rate eigenvectors. (JHTD isotropic turbulence testing dataset [JHUTD_1, JHUTD_2], Reynolds number 433)
Figure 7: Alignment of the pressure-Hessian eigenvectors obtained from the modified TBNN with the strain-rate eigenvectors. (UP Madrid isotropic turbulence testing dataset [cardesa2017], Reynolds number 315)

We have observed that the TBNN is unable to capture the alignment statistics of the pressure-Hessian tensor. This implies that assuming the pressure-Hessian to lie on the tensor basis of the strain-rate and rotation-rate tensors is not an appropriate modelling assumption. Constraining the network to obey tensor invariance properties restricts us to using only global normalization of the input tensors. However, the velocity gradients in a turbulent flow field are known to be highly intermittent. Hence, global normalization of the input tensors might not be an effective strategy for such highly intermittent tensors. The learning of important feature mappings by a neural network relies heavily on effective normalization strategies. At this juncture, we performed several experiments on the TBNN, choosing various normalization strategies that allow the TBNN to deviate from its tensor invariance characteristics. We found through trial and error that normalizing the tensor basis such that all its elements are scaled between [0, 1] yields a tremendous improvement in the network output. Two matrices, T_min^(n) and T_max^(n), holding the element-wise extremes of each basis tensor over the training sample, are used to scale the tensor basis:

(11)  (T_min^(n))_ij = min_m (T^(n)_m)_ij
(12)  (T_max^(n))_ij = max_m (T^(n)_m)_ij

where m runs over the training sample.

Using T_min^(n) and T_max^(n), the tensor basis can be appropriately scaled using the following relationship:

(13)  T̃^(n) = (T^(n) − T_min^(n)) ⊘ (T_max^(n) − T_min^(n))

where the symbol ⊘ represents Hadamard (element-wise) division between two tensors. With this normalization, the network loses most of the properties of the original TBNN. However, it leads to significant improvements in the alignment statistics of the predicted output.
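One plausible reading of this scaling is an element-wise min-max over the training sample. Since the exact scaling matrices are not reproduced in this text, the helpers below are an assumption, with stand-in random data:

```python
import numpy as np

def fit_minmax(basis_samples):
    """Element-wise minimum and maximum of each basis tensor over the
    training sample; stacked shape (N, 10, 3, 3)."""
    stacked = np.asarray(basis_samples)
    return stacked.min(axis=0), stacked.max(axis=0)

def scale_basis(T, T_min, T_max, eps=1e-12):
    """Equation (13): map every element into [0, 1] using Hadamard
    (element-wise) division; eps guards against constant elements."""
    return (T - T_min) / (T_max - T_min + eps)

rng = np.random.default_rng(2)
samples = rng.standard_normal((100, 10, 3, 3))  # stand-in basis sample
T_min, T_max = fit_minmax(samples)
scaled = scale_basis(samples[0], T_min, T_max)
```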

We employ the modified network with the same settings (viz. the number of hidden layers, neurons per layer, activation function, learning rate, etc.) as used with the original TBNN. In Figure 5, we show the learning curve obtained while training the modified TBNN. We use an early stopping criterion while training the network, stopping at the point when the validation-loss curve becomes almost flat (no further decline with increasing epochs).

V.1 Testing the modified TBNN for isotropic turbulent flow

In Figure 6, we show the alignment statistics obtained with the modified TBNN on the isotropic turbulence testing dataset (JHTD [JHUTD_1, JHUTD_2]). We observe that the modified TBNN predictions (Figure 6(a,b,c)) demonstrate excellent agreement with the alignment statistics obtained from DNS (Figure 6(d,e,f)). Although this testing dataset is extracted at grid locations different from the training dataset, it still has the same Reynolds number as the training dataset (433). To make a better judgement of the generalization of the learnt pressure-Hessian mapping, we scrutinize the performance of the trained model on an isotropic turbulence dataset at a different Reynolds number of 315. This dataset is extracted from the UP Madrid turbulence database [cardesa2017]. In Figure 7, we plot the alignment statistics obtained from the learnt modified TBNN model for this dataset (at a Reynolds number of 315). We find that similar statistics are retrieved at a Reynolds number of 315 as well (Figure 7). Hence, we can conclude that the trained modified TBNN has learnt key physical features that can be generalized to an isotropic turbulent flow independent of its Reynolds number.

V.2 Testing the modified TBNN for turbulent channel flow

Figure 8: Alignment of the pressure-Hessian eigenvectors predicted by the modified TBNN with the strain-rate eigenvectors. (UT Texas and JHTD channel flow dataset [JHUTD_3], friction Reynolds number of 1000.)

The modified TBNN was trained using an isotropic turbulent flow dataset. We saw in the previous section (V.1) that the network can learn key features of isotropic turbulent flows, which lead to accurate predictions of the pressure-Hessian, especially in terms of the alignment statistics with the strain-rate eigenvectors. We now take a step ahead and scrutinize the trained model for a different type of flow, viz. channel flow, to which the network was not exposed while training. The presence of solid walls in a channel flow leads to the generation of boundary layers near the walls. The pressure and velocity profiles in a boundary layer are very different from those observed in isotropic flow, which has no solid walls. Hence, we cannot expect our trained model to predict the pressure-Hessian for turbulent channel flow accurately. In fact, when we pass the velocity gradient information through the trained network, a very large relative Frobenius-norm error of 2.1838 is obtained on the predicted solution. However, the predicted output of the modified TBNN still retrieves accurate alignment statistics with the strain-rate eigenvectors, as shown in Figure 8. Hence, there does exist a relevant mapping between the pressure-Hessian and velocity gradients that can ensure correct alignment with the strain-rate eigenvectors. The network has been able to learn this key physical mapping, which is possibly independent of the type of flow and its Reynolds number (at least for isotropic and channel turbulent flows). As discussed in section IV, the evolution of the pressure-Hessian is expected to be governed by a large spectrum of flow quantities, their derivatives and evolution history. However, the major focus of this work has been to explore the maximum potential of local velocity gradients to describe the pressure-Hessian. We report that using only the local velocity gradient tensor, we can model the pressure-Hessian such that it at least aligns with the strain-rate eigenvectors appropriately.

V.3 Coefficients predicted by the modified network

In Figure 9, we show the scatter plot of the coefficients predicted by the modified TBNN. We observe that each of these ten coefficients (c^(n)) has negligible variance. The overall distribution can effectively be replaced by the mean value of each coefficient's distribution. Further, we find that by using the mean values of the coefficients, we retrieve the same statistics as obtained by passing the velocity gradient information through the modified TBNN.

Figure 9: Scatter plot of the ten coefficients predicted by the modified TBNN.

With this revelation, it is no longer required to use the trained network for pressure-Hessian estimation. Rather, we can just use a very simple process for pressure-Hessian prediction:

  1. Non-dimensionalize the velocity gradient tensor using the mean value of the Frobenius norm over the whole sample.

  2. Calculate the ten tensor basis and five independent invariants of strain-rate and rotation-rate tensors using equations 6 and 7.

  3. Normalize the tensor basis using the scaling matrices used for the trained network (details in Appendix A).

  4. Take a linear combination of the tensor basis using the mean value of the coefficients obtained from the trained network. This would yield the modelled normalized pressure-Hessian (refer Appendix A).

  5. Scale the predicted pressure-Hessian back to its original dimensional form, using the same scaling matrices that were used while training the network.

  6. Enforce the predicted solution to have the desired trace (the trace of the modelled pressure-Hessian is set equal to the exact trace of P given by equation 8).

The complete details of the step-by-step process for calculation of the modelled pressure-Hessian tensor are presented in Appendix A.
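The six steps above can be strung together as below. The mean coefficients and the scaling matrices are placeholders here (the actual values come from the trained network and are not reproduced in this text), and the inverse scaling in step 5 assumes a min-max form:

```python
import numpy as np

def tensor_basis(A):
    """Equation (6): the ten trace-free basis tensors of S and W."""
    S, W = 0.5 * (A + A.T), 0.5 * (A - A.T)
    I = np.eye(3)
    S2, W2 = S @ S, W @ W
    return [S, S @ W - W @ S,
            S2 - np.trace(S2) / 3 * I, W2 - np.trace(W2) / 3 * I,
            W @ S2 - S2 @ W,
            W2 @ S + S @ W2 - 2 * np.trace(S @ W2) / 3 * I,
            W @ S @ W2 - W2 @ S @ W, S @ W @ S2 - S2 @ W @ S,
            W2 @ S2 + S2 @ W2 - 2 * np.trace(S2 @ W2) / 3 * I,
            W @ S2 @ W2 - W2 @ S2 @ W]

def predict_pressure_hessian(A, A_norm, c_mean, T_min, T_max, H_min, H_max):
    """Appendix A, steps 1-6 (a sketch; all scaling inputs are
    hypothetical placeholders, not the paper's trained values)."""
    tr_target = -np.trace(A @ A)                          # exact trace, eq. (8)
    An = A / A_norm                                       # step 1
    T = tensor_basis(An)                                  # step 2
    Ts = [(Tn - T_min) / (T_max - T_min) for Tn in T]     # step 3
    Hn = sum(c * Tn for c, Tn in zip(c_mean, Ts))         # step 4
    H = Hn * (H_max - H_min) + H_min                      # step 5 (assumed form)
    return H + (tr_target - np.trace(H)) / 3 * np.eye(3)  # step 6

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
A -= np.trace(A) / 3 * np.eye(3)                          # enforce zero trace
c_mean = 0.1 * np.ones(10)                                # placeholder coefficients
T_min, T_max = -np.ones((3, 3)), np.ones((3, 3))          # placeholder scaling
H_min, H_max = -np.ones((3, 3)), np.ones((3, 3))
P = predict_pressure_hessian(A, 1.0, c_mean, T_min, T_max, H_min, H_max)
```

By construction, the trace-correction step guarantees Tr(P) = −A_ik A_ki regardless of the placeholder values.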

VI Conclusions

In this work, we first scrutinize the state-of-the-art RFD model [chevillard2006lagrangian] for pressure-Hessian prediction, in terms of its alignment statistics with the strain-rate eigenvectors. We report that the eigenvectors of the pressure-Hessian obtained from the RFD model are mostly either parallel or perpendicular to the strain-rate eigenvectors. To decipher a better functional mapping between the pressure-Hessian and the velocity gradient, we employ a tensor basis neural network (TBNN) architecture [ling2016]. The neural network is trained on high-resolution isotropic turbulence data at a Reynolds number of 433. With the help of the TBNN, the pressure-Hessian tensor is modelled in terms of the trace-free and symmetric tensor basis of the strain-rate and rotation-rate tensors. We report that the accuracy of the pressure-Hessian predicted by the TBNN is comparable to that obtained from the state-of-the-art RFD model. However, only a marginal improvement in the alignment statistics of the TBNN output is observed. Further, we report that by scaling the tensor basis of the strain-rate and rotation-rate tensors such that each element of the basis lies between [0, 1], the predicted output of the neural network yields excellent alignment statistics with the strain-rate tensor for isotropic turbulent flows at different Reynolds numbers. Further, we test the trained model on a turbulent channel flow dataset, to which the network was not exposed while training. We find that although there is significant error in element-wise comparison, the alignment statistics obtained with the strain-rate eigenvectors are in good agreement with the DNS results. With this finding, we come to the conclusion that there does exist a relevant physical mapping between the pressure-Hessian and velocity gradients which enforces their eigenvectors to align appropriately with each other. This mapping is found to be independent of the type of flow and its Reynolds number (at least for isotropic turbulence and channel flow).
The modified TBNN has been able to learn this key mapping by appropriately normalizing the tensor basis of strain-rate and rotation-rate tensors. Finally, we find that the distributions of the coefficients of the tensor basis obtained from the neural network have negligible variance. With this revelation, we have been able to identify ten unique coefficients of the tensor basis, the linear combination over which can be used to model the pressure-Hessian tensor directly.

References

Appendix A

We present a step-by-step process for the modelled pressure-Hessian calculation based on mean values of the coefficients derived from the modified TBNN.

  1. Non-dimensionalize the strain-rate and rotation-rate tensors:

     S ← S / ⟨‖A‖_F⟩,   W ← W / ⟨‖A‖_F⟩

     where ⟨·⟩ represents the mean over the whole sample.

  2. Find the ten tensor basis elements (T^(n)) and the five independent invariants (λ_i) using equations 6 and 7.

  3. Normalize the tensor basis (T^(n)):

     T̃^(n) = (T^(n) − T_min^(n)) ⊘ (T_max^(n) − T_min^(n))

     where the symbol ⊘ represents Hadamard (element-wise) division between two tensors, and T_min^(n) and T_max^(n) are the matrices used to scale the tensor basis.

  4. Take a linear combination of the tensor basis using the mean coefficient values:

     (14)  P̃′ = Σ_{n=1}^{10} ⟨c^(n)⟩ T̃^(n)

     where ⟨c^(n)⟩ are the mean coefficient values predicted by the modified TBNN.

  5. Scale the predicted pressure-Hessian back to its dimensional form:

     (15)  P′ = P̃′ ⊙ (H_max − H_min) + H_min

     where the symbol ⊙ represents the Hadamard product of two matrices, and H_min and H_max are the scaling matrices.

  6. Trace correction step:

     (16)  P = P′ + (1/3) [Tr(P) − Tr(P′)] I,   with Tr(P) = −A_ik A_ki

     where I is the identity matrix and Tr(·) represents the trace of the matrix.