I Introduction
In a turbulent flow, various processes like the energy cascade, intermittency and fluid-element deformation are strongly related to the small-scale velocity gradient field. Various studies based on experiments, direct numerical simulation (DNS) and simple dynamical models have been performed to understand the dynamics of the velocity gradient tensor [luthi2005lagrangian, ashurst1987alignment, vieillefosse1982local, cantwell1992exact]. Following these works, several other studies have been reported as well [ashurst1987alignment, ashurst1987pressure, girimaji1990diffusion, ohkitani1993eigenvalue, pumir1994numerical, girimaji1995modified, o2005relationship, chevillard2006lagrangian, da2008invariants, chevillard2011lagrangian, soria1994study, pirozzoli2004direct_b, suman2012velocity, wang2012flow, vaghefi2015local, danish2016influence, parashar2017, parasharJFM, parasharIJHFF]. The pressure-Hessian and the viscous tensor are two important processes governing the evolution of the velocity gradient tensor. These processes are inherently nonlocal in nature and are unclosed from a mathematical viewpoint. chevillard2008 developed the recent fluid deformation closure model (RFDM) for modelling the viscous tensor and the pressure-Hessian. Although the RFD model robustly captures various one-time statistics of the viscous tensor, it has inherent limitations in predicting the pressure-Hessian tensor (discussed in section III). Hence, in this paper, we focus on modelling the pressure-Hessian tensor using velocity gradient information. Recently, an improved model, the recent fluid deformation of Gaussian fields (RFDG) model, has been proposed by johnson2016. It improves over the RFD model in terms of predicting various one-time statistics of the velocity gradient tensor. However, the authors [johnson2016] did not focus on any relevant statistics of the pressure-Hessian tensor.
For this reason, all our comparisons in this work are made against the RFD model of chevillard2008.
In the recent past, machine learning has gained popularity in the turbulence research community. The earliest contribution to machine-learning-aided turbulence research was made by duraisamy2014, where the authors developed an intermittency-transport-based model for bypass transition using machine learning and inverse modelling. Since then, a large number of researchers have tried to model various turbulence processes using machine learning models [duraisamy2015, brendan2015, brendan2015b, parish2016, zhang2015, jack2017, duraisamy2019]. ling2016 employed a deep neural network to directly model the Reynolds stress anisotropy tensor using the strain-rate and rotation-rate tensors. In doing so, they developed a novel tensor basis neural network (TBNN), which can be employed to map a given tensor from known input tensors. The TBNN has been shown to achieve superior performance by embedding tensor invariance properties in the network itself. Later, fang2018 used the TBNN for turbulent channel flow and compared their results against standard turbulence models. sotgiu2018 developed a new framework in conjunction with the TBNN for predicting turbulent heat fluxes. Further, geneva2019 developed a Bayesian tensor basis neural network for predicting the Reynolds stress anisotropy tensor.

As mentioned earlier, the recent fluid deformation closure model (RFDM) [chevillard2008] is considered the state-of-the-art model for pressure-Hessian calculation. However, the pressure-Hessian predicted by the RFD model shows non-physical alignment tendencies with the strain-rate tensor (explained in section III
). Any further improvement over the existing model may require a deeper understanding of the complex relationship between the pressure-Hessian and the velocity gradients. For this task, we employ deep learning, which can potentially decipher any functional relationship that exists between the quantities of interest. The tensor basis neural network (TBNN) developed by ling2016 has already been shown to map tensorial quantities robustly. In this work, we use high-resolution incompressible isotropic turbulence data from the Johns Hopkins University turbulence database, JHTD [JHUTD_1, JHUTD_2] (http://turbulence.pha.jhu.edu), to train a neural network model inspired by the TBNN. Further, we show that appropriate normalization of the input data and a few modifications to the network lead to significant improvements in the alignment characteristics of the predicted output. The predictions made by the TBNN are compared against two different isotropic turbulence datasets that were not used for training the network: (i) isotropic turbulence at a Taylor Reynolds number of 433, JHTD [JHUTD_1, JHUTD_2], and (ii) isotropic turbulence at a Taylor Reynolds number of 315 (UP Madrid database, https://torroja.dmt.upm.es/turbdata/Isotropic) [cardesa2017]. To demonstrate the generality of the predicted solution in terms of alignment statistics for other types of flows, we also test the trained model on channel flow data at a friction Reynolds number of 1000 (UT Austin and JHU turbulence database) [JHUTD_3]. Further evaluation of the neural network output helps us retrieve ten unique coefficients of the tensor basis of the strain-rate and rotation-rate tensors, a linear combination over which can be used to predict the pressure-Hessian tensor robustly.

This paper is organized into six sections. In section II, we present the governing equations. In section III, we explain the limitations of the RFD model. In section IV, we present the details of the tensor basis neural network architecture employed for this study; the analysis of the predicted solution from the TBNN is also presented in section IV. In section V, we explain the modifications incorporated in the TBNN and compare its results against the state-of-the-art RFD model. Section VI concludes the paper with a brief summary.
II Governing Equations
The governing equations of an incompressible flow field comprise the continuity equation, the momentum equation and the equation of state of a perfect gas:

(1)   $\frac{\partial u_i}{\partial x_i} = 0$

(2)   $\frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \nu \frac{\partial^2 u_i}{\partial x_j \partial x_j}$

(3)   $p = \rho R T$

where $u_i$ and $x_i$ represent the velocity and position, respectively. Density, pressure and temperature are represented by $\rho$, $p$ and $T$, while $R$ denotes the gas constant. The velocity gradient tensor is defined as $A_{ij} = \partial u_i/\partial x_j$.
Taking the gradient of the momentum equation (2), the exact evolution equation of $A_{ij}$ can be derived:

(4)   $\frac{D A_{ij}}{D t} = -A_{ik}A_{kj} - P_{ij} + \nu_{ij}$

where $P_{ij} = \frac{1}{\rho}\frac{\partial^2 p}{\partial x_i \partial x_j}$ and $\nu_{ij} = \nu \frac{\partial^2 A_{ij}}{\partial x_k \partial x_k}$ represent the pressure-Hessian and the viscous Laplacian governing the evolution of the velocity gradient tensor. The rate of change of $A_{ij}$ following a fluid particle is represented using the substantial derivative $\frac{D}{Dt} = \frac{\partial}{\partial t} + u_k \frac{\partial}{\partial x_k}$.
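As an illustration of these definitions, the velocity gradient tensor can be assembled from a discrete velocity field by finite differences. The following minimal sketch (grid size, velocity field and differencing scheme are illustrative choices, not those of the JHTD data) checks that $A_{ii} = 0$ holds for a divergence-free field:

```python
import numpy as np

# Divergence-free (Taylor-Green-like) velocity field on a uniform grid.
N = 32
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.stack([np.sin(X) * np.cos(Y) * np.cos(Z),
              -np.cos(X) * np.sin(Y) * np.cos(Z),
              np.zeros_like(X)])

dx = x[1] - x[0]
# A has shape (3, 3, N, N, N): A[i, j] = du_i/dx_j by central differences
A = np.stack([[np.gradient(u[i], dx, axis=j) for j in range(3)]
              for i in range(3)])

div = np.einsum("ii...", A)   # continuity: vanishes away from grid edges
```

Away from the grid edges (where `np.gradient` falls back to one-sided differences) the trace cancels to machine precision.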
III Limitations of the RFD model for pressure-Hessian calculation
The state-of-the-art model for pressure-Hessian calculation is the recent fluid deformation closure model (RFDM) developed by chevillard2008. The RFD pressure-Hessian ($P^{RFD}$) is expressed as:

(5)   $P^{RFD}_{ij} = -\frac{\mathrm{Tr}(A^2)}{\mathrm{Tr}\left(C_{\tau_K}^{-1}\right)}\left(C_{\tau_K}^{-1}\right)_{ij}$

where $C_{\tau_K}$ is the right Cauchy-Green tensor modelled as $C_{\tau_K} = e^{\tau_K A} e^{\tau_K A^{T}}$ ($\tau_K$ being the Kolmogorov timescale) and the symbol $\mathrm{Tr}$ represents the trace of the tensor. The pressure-Hessian predicted by the RFD model has some inherent inconsistencies as compared to the actual pressure-Hessian obtained from DNS. These limitations are listed below:

$P^{RFD}$ is always either positive-definite or negative-definite: It is evident that $C_{\tau_K}$ is a positive-definite matrix, as it is basically a product of a real invertible matrix ($e^{\tau_K A}$) and its transpose. Since the inverse of a positive-definite matrix is also positive-definite, $C_{\tau_K}^{-1}$ is always positive-definite, and $P^{RFD}$ is guaranteed to be either positive-definite or negative-definite, depending on the sign of $\mathrm{Tr}(A^2)$. Therefore, the eigenvalues of $P^{RFD}$ are either all negative or all positive. This behavior is non-physical, since the governing equations do not impose any such restriction on the pressure-Hessian. The actual pressure-Hessian is real symmetric by nature and hence, most of the time, has at least one positive and one negative eigenvalue.

Figure 1: PDFs of the alignment of the eigenvectors of the RFD pressure-Hessian (a,b,c) and of the DNS pressure-Hessian (d,e,f) with the three strain-rate eigenvectors.
In strain-dominated regions, the eigenvectors of $P^{RFD}$ coincide with the strain-rate eigenvectors: As discussed above, $P^{RFD}$ is always either negative-definite or positive-definite. Further, it is evident that if $A$ is close to being symmetric (strain-dominant, $A \approx S$), the eigenvectors of $C_{\tau_K}^{-1}$ will be approximately parallel or perpendicular to the eigenvectors of $S$ itself. Hence, in strain-dominated regions, $P^{RFD}$ is expected to show a biased alignment towards the strain-rate eigenvectors, which is non-physical. To verify this claim, we show the alignment of the eigenvectors of $P^{RFD}$ and of the DNS pressure-Hessian with the strain-rate eigenvectors in Figure 1. In Figure 1(a,b,c) we show the PDF (probability density function) of the alignment of the eigenvectors of $P^{RFD}$ with the strain-rate eigenvectors, and in Figure 1(d,e,f) we show the alignment of the DNS pressure-Hessian eigenvectors with the strain-rate eigenvectors for comparison. It can be observed that for a large percentage of particles the $P^{RFD}$ eigenvectors are either parallel or perpendicular to the strain-rate eigenvectors (Figure 1(a,b,c)). On the other hand, the eigenvectors of the pressure-Hessian obtained from DNS show no such alignment tendencies.
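The definiteness argument above can be checked numerically. The following sketch assumes the RFD form of equation (5); the timescale value and the random trace-free velocity gradients are illustrative placeholders, not calibrated quantities. It confirms that the eigenvalues of $P^{RFD}$ always share one sign:

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (assumes M diagonalizable)."""
    w, V = np.linalg.eig(M)
    return ((V * np.exp(w)) @ np.linalg.inv(V)).real

def rfd_pressure_hessian(A, tau=0.1):
    """RFD closure sketch: P = -Tr(A@A)/Tr(Cinv) * Cinv, with
    C = expm(tau*A) @ expm(tau*A).T the modelled right Cauchy-Green tensor;
    tau stands in for the Kolmogorov timescale (placeholder value)."""
    E = expm(tau * A)
    Cinv = np.linalg.inv(E @ E.T)
    return -np.trace(A @ A) / np.trace(Cinv) * Cinv

rng = np.random.default_rng(1)
for _ in range(100):
    A = rng.normal(size=(3, 3))
    A -= np.trace(A) / 3 * np.eye(3)   # trace-free, as for incompressible flow
    eig = np.linalg.eigvalsh(rfd_pressure_hessian(A))
    # eigenvalues are all positive or all negative: the model is definite
    assert np.all(eig > 0) or np.all(eig < 0)
```

Since the inverse Cauchy-Green tensor is positive-definite, the sign of all three eigenvalues is set solely by the sign of $\mathrm{Tr}(A^2)$, which is the behavior criticized above.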
IV Using neural networks to model the pressure-Hessian
It is evident from the discussion in the previous section that the functional relationship between the pressure-Hessian and the local velocity gradient tensor, if any, is far too complex to be captured by simple algebraic models. In general, the evolution of the pressure-Hessian of individual fluid particles is expected to be governed by a large spectrum of flow quantities, their higher derivatives and their evolutionary history as well. Nevertheless, in this work, we intend to explore the maximum potential of the local velocity gradients to describe the pressure-Hessian accurately. For this purpose, we employ deep neural networks. Given a sufficiently large network and training data, neural networks can potentially decipher the functional relationship (if any) existing between the quantities of interest. With this motivation, we resort to neural networks to provide a better mapping between the pressure-Hessian and the velocity gradient tensor.
IV.1 Neural network architecture
In this work, we employ the tensor basis neural network (TBNN) developed by ling2016. This architecture has been shown to be robust for mapping tensors. The TBNN increases the representation power of the neural network by embedding knowledge of the tensor basis ($T^{(n)}$) and invariants ($\lambda_n$) in the network itself. The TBNN takes advantage of the Cayley-Hamilton theorem, which states that any tensor function derived from a given set of tensors alone can be expressed as a linear combination of the integrity basis [spencer1958] of the given tensors. The predictions made by the TBNN are basically a linear combination of the integrity basis ($T^{(n)}$) of the input tensors. Hence, the TBNN explores the full spectrum of all the mappings that any input tensor can offer, by enforcing the output of the network to be a linear combination of its integrity basis. Further, the TBNN has embedded rotational invariance, which ensures that the predictions made by the network are independent of the orientation of the coordinate system. If the input tensors are expressed in a rotated coordinate system, the predicted output gets rotated accordingly. Hence, the TBNN predicts the same output tensor irrespective of the orientation of the coordinate system. Figure 2 presents a brief overview of the TBNN architecture.
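To make the dot-product structure and the rotational equivariance concrete, here is a minimal, untrained forward-pass sketch in numpy. The layer widths, random weights and random inputs are illustrative only, not the trained architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class TBNNSketch:
    """Untrained TBNN-style forward pass: the invariants pass through dense
    layers to ten coefficients c_n; the output is sum_n c_n T_n (a dot
    product of the coefficient layer with the tensor layer)."""
    def __init__(self, widths=(5, 50, 50, 10)):
        self.W = [rng.normal(0.0, np.sqrt(2.0 / m), size=(m, n))
                  for m, n in zip(widths[:-1], widths[1:])]

    def __call__(self, invariants, basis):
        h = np.asarray(invariants)
        for W in self.W[:-1]:
            h = relu(h @ W)
        c = h @ self.W[-1]                        # ten coefficients c_n
        return np.einsum("n,nij->ij", c, basis)   # linear combination of T_n

# Rotating the basis tensors rotates the output, because the invariants
# (and hence the coefficients) are unchanged.
net = TBNNSketch()
lam = rng.normal(size=5)                          # stand-in invariants
T = rng.normal(size=(10, 3, 3))                   # stand-in basis tensors
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))      # random orthogonal matrix
P = net(lam, T)
P_rot = net(lam, np.einsum("ab,nbc,dc->nad", Q, T, Q))  # basis Q T Q^T
assert np.allclose(P_rot, Q @ P @ Q.T)
```

The equivariance holds exactly because the output is linear in the tensor layer while the coefficients depend only on rotation-invariant inputs.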
For an incompressible flow field, pope1975 derived the ten trace-free integrity basis tensors ($T^{(n)}$) and five independent invariants ($\lambda_n$) of the strain-rate ($S$) and rotation-rate ($W$) tensors. These basis tensors and invariants are listed below:

(6)   $T^{(1)} = S$, $T^{(2)} = SW - WS$, $T^{(3)} = S^2 - \frac{1}{3}\mathrm{Tr}(S^2)I$, $T^{(4)} = W^2 - \frac{1}{3}\mathrm{Tr}(W^2)I$, $T^{(5)} = WS^2 - S^2W$, $T^{(6)} = W^2S + SW^2 - \frac{2}{3}\mathrm{Tr}(SW^2)I$, $T^{(7)} = WSW^2 - W^2SW$, $T^{(8)} = SWS^2 - S^2WS$, $T^{(9)} = W^2S^2 + S^2W^2 - \frac{2}{3}\mathrm{Tr}(S^2W^2)I$, $T^{(10)} = WS^2W^2 - W^2S^2W$

(7)   $\lambda_1 = \mathrm{Tr}(S^2)$, $\lambda_2 = \mathrm{Tr}(W^2)$, $\lambda_3 = \mathrm{Tr}(S^3)$, $\lambda_4 = \mathrm{Tr}(W^2S)$, $\lambda_5 = \mathrm{Tr}(W^2S^2)$

The symbol $\mathrm{Tr}$ represents the trace of the tensor. A linear combination of these ten basis tensors ($T^{(n)}$) can represent any trace-free tensor that is directly derived from $S$ and $W$. Since the exact expression for the trace of the pressure-Hessian is already known:

(8)   $P_{ii} = -A_{mn}A_{nm}$

these trace-free integrity basis tensors ($T^{(n)}$) can be readily used to model the trace-free part of the pressure-Hessian using the TBNN. We use the symbol $\tilde{P}$ to denote the trace-free part of $P$. To find the relevant mapping between the velocity gradient tensor and the pressure-Hessian, the ten coefficients ($c^{(n)}$) corresponding to the ten integrity basis tensors ($T^{(n)}$) need to be modelled. The five invariants ($\lambda_1,\ldots,\lambda_5$) of $S$ and $W$ form the primary input of the TBNN. The output of the last layer of the network yields the ten coefficients $c^{(n)}$. A secondary input containing the ten basis tensors (called the tensor layer) is fed to the last layer of the network. Finally, a dot product between the coefficient layer and the tensor layer makes the final output of the network, which can be expressed as:
(9)   $\tilde{P}^{TBNN} = \sum_{n=1}^{10} c^{(n)} T^{(n)}$

The cost function of the network can be expressed as:

(10)   $J = \frac{1}{N} \sum_{k=1}^{N} \left\| \tilde{P}^{TBNN}_k - \tilde{P}^{DNS}_k \right\|_F^2$

where $N$ is the number of training examples used to train the TBNN and the symbol $\|\cdot\|_F$ represents the Frobenius norm.
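Equations (6)-(7) can be transcribed directly into code. The following numpy sketch builds the ten basis tensors and five invariants and checks that each basis tensor is symmetric and trace-free (the random test tensor is for illustration only):

```python
import numpy as np

def pope_basis(S, W):
    """Ten trace-free integrity-basis tensors and five invariants of (S, W),
    following equations (6)-(7)."""
    I = np.eye(3)
    S2, W2 = S @ S, W @ W
    T = np.stack([
        S,
        S @ W - W @ S,
        S2 - np.trace(S2) / 3 * I,
        W2 - np.trace(W2) / 3 * I,
        W @ S2 - S2 @ W,
        W2 @ S + S @ W2 - 2.0 / 3 * np.trace(S @ W2) * I,
        W @ S @ W2 - W2 @ S @ W,
        S @ W @ S2 - S2 @ W @ S,
        W2 @ S2 + S2 @ W2 - 2.0 / 3 * np.trace(S2 @ W2) * I,
        W @ S2 @ W2 - W2 @ S2 @ W,
    ])
    lam = np.array([np.trace(S2), np.trace(W2), np.trace(S2 @ S),
                    np.trace(W2 @ S), np.trace(W2 @ S2)])
    return T, lam

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
A -= np.trace(A) / 3 * np.eye(3)          # trace-free velocity gradient
S, W = (A + A.T) / 2, (A - A.T) / 2
T, lam = pope_basis(S, W)
# every basis tensor is symmetric and trace-free, as required for P-tilde
assert np.allclose(T, np.transpose(T, (0, 2, 1)))
assert np.allclose(np.einsum("nii->n", T), 0.0)
```

Because every $T^{(n)}$ is symmetric and trace-free, any linear combination of them is automatically a valid candidate for the trace-free part of the pressure-Hessian.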
IV.2 Training of the neural network
The employed tensor basis neural network (TBNN) model is trained using data from an isotropic incompressible flow field at a Taylor Reynolds number of 433. This data is taken from the Johns Hopkins University turbulence database [JHUTD_1, JHUTD_2], available online at http://turbulence.pha.jhu.edu/
. The open-source library Keras [chollet2015keras] with the TensorFlow backend is used for training the TBNN model. The velocity gradient tensor and pressure-Hessian information are extracted from the database at a particular time instant. A total of 262,144 unique data points are extracted from the flow field. Out of these, 236,544 points are used for training the network, while the remaining 25,600 data points are reserved for cross-validation of the predicted solution. The training data is randomly distributed into 924 mini-batches of 256 data points each at the beginning of every epoch. One epoch is one complete pass through the training dataset over all mini-batches; hence, one epoch accounts for 924 iterations of the training cycle. The velocity gradient tensor was nondimensionalized with the mean value of the Frobenius norm over the whole sample of 262,144 data points. No further normalization was used for the derived tensor basis ($T^{(n)}$) and invariants ($\lambda_n$).

A deep network with 11 hidden layers and a combination of 50, 150, 150, 150, 150, 300, 300, 150, 150, 150 and 100 neurons in the consecutive hidden layers was found to yield the best performance of all the combinations that were tested. We use the Glorot normal initialization [glorot2010] for the weight matrices and the ReLU (rectified linear unit) nonlinear activation function for the hidden layers. The RMSprop optimizer [RMSprop] was used to train the network. The training was stopped when the value of the cost function became stagnant. In Figure 3, we show the training and cross-validation cost as a function of the number of training epochs; the minimum values of both costs recorded while training can be read off the figure. The cross-validation cost did not show any significant rise during the training process. A low dropout rate of 10% was used to facilitate ensemble learning in the network. There was no gain in model performance with further increase in data size or network depth.

IV.3 Testing of the trained network
The primary testing of the trained TBNN model was performed on a separate testing dataset (distinct from the training and validation data) of isotropic turbulence (JHTD [JHUTD_1, JHUTD_2]). The relative Frobenius-norm errors of the pressure-Hessian obtained from the trained TBNN model and from the RFD model on this testing dataset were found to be comparable. Hence, in terms of element-wise mean-squared-error comparison, the accuracy of the trained TBNN model is comparable to that of the existing RFD model. However, element-wise comparison alone is not a sufficient metric for comparing tensorial quantities. We have already seen (Figure 1) that the RFD model fails to capture the alignment statistics with the strain-rate tensor. In Figure 4, we present the alignment of the pressure-Hessian eigenvectors predicted by the TBNN (Figure 4(a,b,c)) with the strain-rate eigenvectors, compared against that obtained from DNS (Figure 4(d,e,f)). We observe that although the alignment statistics (Figure 4(a,b,c)) have improved compared to the RFD model results (Figure 1(a,b,c)), the obtained statistics are still far off from those obtained from DNS.
V Modified neural network architecture
We have observed that the TBNN is unable to capture the alignment statistics of the pressure-Hessian tensor. This implies that assuming the pressure-Hessian to lie on the tensor basis of the strain-rate and rotation-rate tensors is not an appropriate modelling assumption. Constraining the network to obey tensor invariance properties restricts us to using only global normalization of the input tensors. However, the velocity gradients in a turbulent flow field are known to be highly intermittent, and global normalization might not be an effective strategy for such highly intermittent tensors. The learning of important feature mappings by a neural network relies heavily on effective normalization strategies. At this juncture, we performed several experiments on the TBNN, choosing various normalization strategies that allow the TBNN to deviate from its tensor invariance characteristics. We found through trial and error that normalizing the tensor basis such that all its elements are scaled between [0, 1] yields a tremendous improvement in the network output. Two matrices, $B^{min,(n)}$ and $B^{max,(n)}$, are used to scale each basis tensor:
(11)   $B^{max,(n)}_{ij} = \max_{k} \; T^{(n)}_{ij}(k)$

(12)   $B^{min,(n)}_{ij} = \min_{k} \; T^{(n)}_{ij}(k)$

where the maximum and minimum are taken element-wise over all training samples $k$. Using $B^{max,(n)}$ and $B^{min,(n)}$, the tensor basis can be appropriately scaled using the following relationship:

(13)   $\hat{T}^{(n)} = \left(T^{(n)} - B^{min,(n)}\right) \oslash \left(B^{max,(n)} - B^{min,(n)}\right)$

where the symbol $\oslash$ represents the Hadamard (element-wise) division between two tensors. With this normalization, the network loses most of the properties of the original TBNN. However, it leads to significant improvements in the alignment statistics of the predicted output.
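The min-max scaling of the tensor basis can be sketched as follows (the random training array is a stand-in for the actual basis tensors computed from the JHTD sample):

```python
import numpy as np

def fit_basis_scaling(T_train):
    """T_train: array (N, 10, 3, 3) of basis tensors over training samples.
    Returns the element-wise min and max matrices (the two scaling sets)."""
    return T_train.min(axis=0), T_train.max(axis=0)

def scale_basis(T, Bmin, Bmax):
    # Hadamard (element-wise) division maps every element into [0, 1]
    # on the training data
    return (T - Bmin) / (Bmax - Bmin)

T_train = np.random.default_rng(2).normal(size=(1000, 10, 3, 3))
Bmin, Bmax = fit_basis_scaling(T_train)
T_scaled = scale_basis(T_train, Bmin, Bmax)
```

Note that on unseen data the scaled elements can fall slightly outside [0, 1]; the bounds hold exactly only on the sample used to fit `Bmin` and `Bmax`.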
We employ the modified network with the same settings (viz. the number of hidden layers, neurons per layer, activation function, learning rate, etc.) as used with the original TBNN. In Figure 5, we show the learning curve obtained while training the modified TBNN. We use an early-stopping criterion, terminating the training at the point when the validation-loss curve becomes almost flat (no further decline with increasing epochs).

V.1 Testing the modified TBNN for isotropic turbulence
In Figure 6, we show the alignment statistics obtained with the modified TBNN on the isotropic turbulence testing dataset (JHTD [JHUTD_1, JHUTD_2]). The modified TBNN predictions (Figure 6(a,b,c)) are in excellent agreement with the alignment statistics obtained from DNS (Figure 6(d,e,f)). Although this testing dataset is extracted at grid locations different from those of the training dataset, it still has the same Taylor Reynolds number as the training dataset (433). To make a better judgement of the generalization of the learnt pressure-Hessian mapping, we scrutinize the performance of the trained model on an isotropic turbulence dataset at a different Taylor Reynolds number of 315, extracted from the UP Madrid turbulence database [cardesa2017]. In Figure 7, we plot the alignment statistics obtained from the trained modified TBNN for this dataset. We find that similar statistics are retrieved at a Reynolds number of 315 as well (Figure 7). Hence, we can conclude that the trained modified TBNN has learnt key physical features that generalize to isotropic turbulent flows independent of the Reynolds number.
V.2 Testing the modified TBNN for turbulent channel flow
The modified TBNN was trained using an isotropic turbulent flow dataset. We saw in section V.1 that the network can learn key features of isotropic turbulent flows, which lead to accurate predictions of the pressure-Hessian, especially in terms of the alignment statistics with the strain-rate eigenvectors. We now take a step further and scrutinize the trained model on a different type of flow, viz. channel flow, to which the network was not exposed during training. The presence of solid walls in a channel flow leads to the generation of boundary layers near the walls. The pressure and velocity profiles in a boundary layer are very different from those observed in isotropic flow, which has no solid walls. Hence, we cannot expect our trained model to predict the pressure-Hessian for turbulent channel flow accurately. In fact, when we pass the velocity gradient information through the trained network, a very large relative Frobenius-norm error of 2.1838 is obtained on the predicted solution. However, the predicted output of the modified TBNN still retrieves accurate alignment statistics with the strain-rate eigenvectors, as shown in Figure 8. Hence, there does exist a relevant mapping between the pressure-Hessian and the velocity gradients that ensures correct alignment with the strain-rate eigenvectors. The network has been able to learn this key physical mapping, which is possibly independent of the type of flow and its Reynolds number (at least for isotropic and channel turbulent flows). As discussed in section IV, the evolution of the pressure-Hessian is expected to be governed by a large spectrum of flow quantities, their derivatives and their evolution history. However, the major focus of this work has been to explore the maximum potential of the local velocity gradients to describe the pressure-Hessian. We report that using only the local velocity gradient tensor, we can model the pressure-Hessian such that it at least aligns appropriately with the strain-rate eigenvectors.
V.3 Coefficients predicted by the modified network
In Figure 9, we show the scatter plot of the coefficients predicted by the modified TBNN. We observe that each of these ten coefficients ($c^{(n)}$) has negligible variance. The overall distribution can effectively be replaced by the mean value of the distribution of each coefficient. Further, we find that by using the mean values of the coefficients, we retrieve the same statistics as obtained by passing the velocity gradient information through the modified TBNN.
With this revelation, it is no longer required to use the trained network for pressureHessian estimation. Rather, we can just use a very simple process for pressureHessian prediction:

Nondimensionalize the velocity gradient tensor using the mean value of the Frobenius norm over the whole sample.

Normalize the tensor basis using the scaling matrices used for the trained network (details in Appendix A).

Take a linear combination of the tensor basis using the mean values of the coefficients obtained from the trained network. This yields the modelled normalized pressure-Hessian (refer to Appendix A).

Scale the predicted pressure-Hessian back to its original dimensional form, using the same scaling matrices that were used while training the network.

Enforce the predicted solution to have the desired trace (the trace of the modelled pressure-Hessian must equal the exact trace $P_{ii} = -A_{mn}A_{nm}$ of equation (8)).
The complete details of the step-by-step process for the calculation of the modelled pressure-Hessian tensor are presented in Appendix A.
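The steps above can be combined into a single routine. In the sketch below the mean coefficients, the scaling matrices and the normalization constant are random or unit placeholders for the trained values of Appendix A, and the dimensional back-scaling of step 4 is omitted for brevity; only the structure of the procedure is illustrated:

```python
import numpy as np

def pope_basis(S, W):
    I = np.eye(3)
    S2, W2 = S @ S, W @ W
    return np.stack([
        S, S @ W - W @ S,
        S2 - np.trace(S2) / 3 * I, W2 - np.trace(W2) / 3 * I,
        W @ S2 - S2 @ W,
        W2 @ S + S @ W2 - 2.0 / 3 * np.trace(S @ W2) * I,
        W @ S @ W2 - W2 @ S @ W, S @ W @ S2 - S2 @ W @ S,
        W2 @ S2 + S2 @ W2 - 2.0 / 3 * np.trace(S2 @ W2) * I,
        W @ S2 @ W2 - W2 @ S2 @ W])

def model_pressure_hessian(A, c_mean, Bmin, Bmax, norm=1.0):
    """Steps 1-3 and 5; c_mean, Bmin, Bmax and norm are placeholders for
    the trained constants (dimensional back-scaling, step 4, omitted)."""
    A = A / norm                                  # 1. nondimensionalize
    S, W = (A + A.T) / 2, (A - A.T) / 2
    T = pope_basis(S, W)                          # 2. basis tensors
    T_hat = (T - Bmin) / (Bmax - Bmin)            # 3. scale basis to [0, 1]
    P = np.einsum("n,nij->ij", c_mean, T_hat)     #    linear combination
    P = (P + P.T) / 2                             # keep the symmetric part
    P += (-np.trace(A @ A) - np.trace(P)) / 3 * np.eye(3)  # 5. enforce trace
    return P

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3))
A -= np.trace(A) / 3 * np.eye(3)
c_mean = rng.normal(size=10)                      # placeholder mean c_n
Bmin, Bmax = -np.ones((10, 3, 3)), np.ones((10, 3, 3))
P = model_pressure_hessian(A, c_mean, Bmin, Bmax)
```

With the actual trained constants substituted in, this routine reproduces the modified-TBNN prediction without evaluating the network itself.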
VI Conclusions
In this work, we first scrutinize the state-of-the-art RFD model [chevillard2006lagrangian] for pressure-Hessian prediction in terms of its alignment statistics with the strain-rate eigenvectors. We report that the eigenvectors of the pressure-Hessian obtained from the RFD model are mostly either parallel or perpendicular to the strain-rate eigenvectors. To decipher a better functional mapping between the pressure-Hessian and the velocity gradients, we employ a tensor basis neural network (TBNN) architecture [ling2016]. The neural network is trained on high-resolution isotropic turbulence data at a Taylor Reynolds number of 433. With the help of the TBNN, the pressure-Hessian tensor is modelled in terms of the trace-free and symmetric tensor basis of the strain-rate and rotation-rate tensors. We report that the accuracy of the pressure-Hessian predicted by the TBNN is comparable to that obtained from the state-of-the-art RFD model. However, only a marginal improvement in the alignment statistics of the TBNN output is observed. Further, we report that by scaling the tensor basis of the strain-rate and rotation-rate tensors such that each element of the basis lies between [0, 1], the predicted output of the neural network yields excellent alignment statistics with the strain-rate tensor for isotropic turbulent flows at different Reynolds numbers. Further, we test the trained model on a turbulent channel flow dataset, to which the network was not exposed during training. We find that although there is a significant error in the element-wise comparison, the statistics of alignment with the strain-rate eigenvectors are in good agreement with the DNS results. With this finding, we conclude that there does exist a relevant physical mapping between the pressure-Hessian and the velocity gradients which enforces their eigenvectors to align appropriately with each other. This mapping is found to be independent of the type of flow and its Reynolds number (at least for isotropic turbulence and channel flow).
The modified TBNN has been able to learn this key mapping by appropriately normalizing the tensor basis of strainrate and rotationrate tensors. Finally, we find that the distribution of the coefficients of the tensor basis obtained from the neural network has negligible variance. With this revelation, we have been able to identify ten unique coefficients of the tensor basis, the linear combination over which can be used to model the pressureHessian tensor directly.
References
Appendix A
We present a step-by-step process for the modelled pressure-Hessian calculation based on the mean values of the coefficients derived from the modified TBNN.

Nondimensionalize the strain-rate and rotation-rate tensors with the sample-mean Frobenius norm of the velocity gradient tensor, where $\langle \cdot \rangle$ represents the mean over the whole sample.

Find the ten tensor basis elements ($T^{(n)}$) and the five independent invariants ($\lambda_n$) from equations (6) and (7).

Normalize the tensor basis ($T^{(n)}$) as in equation (13), where the symbol $\oslash$ represents the Hadamard division between two tensors, and $B^{max,(n)}$ and $B^{min,(n)}$ are the matrices used to scale the tensor basis:

Take a linear combination of the tensor basis using the mean coefficient values

(14)   $\hat{P} = \sum_{n=1}^{10} \bar{c}^{(n)} \hat{T}^{(n)}$

where $\bar{c}^{(n)}$ are the mean coefficient values predicted by the modified TBNN:

Scale the predicted pressure-Hessian back to its dimensional form
(15)   where the symbol $\odot$ represents the Hadamard product of two matrices, and the two scaling matrices are the same as those used while training the network: