Neural-network acceleration of projection-based model-order-reduction for finite plasticity: Application to RVEs

09/16/2021
by S. Vijayaraghavan et al.
University of Luxembourg

Compared to conventional projection-based model-order-reduction, its neural-network acceleration has the advantage that the online simulations are equation-free, meaning that no system of equations needs to be solved iteratively. Consequently, no stiffness matrix needs to be constructed and the stress update needs to be computed only once per increment. In this contribution, a recurrent neural network is developed to accelerate a projection-based model-order-reduction of the elastoplastic mechanical behaviour of an RVE. In contrast to a neural network that merely emulates the relation between the macroscopic deformation (path) and the macroscopic stress, the neural network acceleration of projection-based model-order-reduction preserves all microstructural information, at the price of computing this information once per increment.


1 Introduction

Relatively recently, artificial neural networks (ANNs) have been investigated to emulate the relation between the macroscale deformation (path) and the macroscale stress in nested multiscale approaches Ghavamian and Simone (2019); Settgast and others (2020); Unger and Könke (2008); Fritzen et al. (2019); Logarzo and others (2021). Although such ANN-emulations are rapid, all microstructural information is in principle lost (some microstructural information can be included in the ANN emulator Wu and Noels). In order to preserve all microstructural information, ANNs can be combined with projection-based model-order-reduction (MOR) Koronaki and others (2019); Swischuk and others (2019).

Projection-based MOR is an a posteriori method; it utilizes the solutions of training simulations as global basis functions to interpolate the kinematic variables. It either uses a representative set of orthonormalized training solutions directly as the global basis (i.e. the method of 'reduced basis', Prud'homme and others (2002); Quarteroni and others (2011)), or applies singular value decomposition to the training solutions and uses the basis vectors associated with the largest singular values as global basis functions (i.e. the method of 'Proper Orthogonal Decomposition', POD, Kerfriden and others (2013, 2011); Meyer and Matthies (2003); Carlberg and others (2011)).

The global basis in projection-based MOR interpolates the kinematic variables to reduce the number of degrees of freedom in the online simulations. In this contribution, ANNs are used to emulate the values of these remaining degrees of freedom: the coefficients of the global basis functions. This eliminates the need to construct stiffness matrices, since the iterative process to solve for the basis coefficients is avoided. The only quantity that still needs to be computed once per increment is the stress update in each quadrature point (i.e. the plastic variables in the case of elastoplasticity).

The aim of this work is to formulate an ANN-accelerated POD-based MOR for finite plasticity under cyclic and random loading, applied to a representative volume element (RVE). The use of projection-based MOR for non-elliptical problems (such as those governed by elastoplasticity) requires a large number of global basis functions to achieve an acceptable accuracy (we use 100 basis functions). As ANNs avoid the computation of the basis coefficients, many basis functions can be used and hence, ANN-accelerated projection-based MOR may be considered particularly useful in the context of non-elliptical problems.

Since elastoplasticity includes both reversible and irreversible physics, a suitable ANN must be able to account for the deformation path. Since Mozaffar and others (2019); Gorji and others (2020); Wu and others (2020); Ghavamian and Simone (2019) have shown that the hidden variables in recurrent neural networks (RNNs) are able to account for this (in the context of conventional finite element simulations to compute inelastic responses), these types of ANNs are also used in the current contribution.

The remainder of this paper is organized as follows: in the next section, the direct numerical simulations are concisely discussed. Section 3 describes a conventional POD-based MOR followed by the description of the neural network architecture in section 4. Results are discussed in section 5, where the predictions of the RNNs are compared with those of the direct numerical simulation (DNS) and the conventional POD-based MOR. A short conclusion is presented in section 6.

2 Direct Numerical Simulations

The plane strain simulations employ bilinear quadrilateral (four-node) finite elements with four Gauss quadrature points. An F-bar method is utilized to alleviate locking due to the incompressibility of the plastic deformation. Within this framework, the volume change of the deformation gradient tensor at a quadrature point is replaced with the volume change at the center of the element. The resulting deformation gradient tensor, $\mathbf{F}$, is multiplicatively decomposed into an elastic (subscript $e$) and a plastic (subscript $p$) deformation gradient tensor: $\mathbf{F} = \mathbf{F}_e \cdot \mathbf{F}_p$.

The following strain energy density is employed:

(1)

where $E$ and $\nu$ denote Young's modulus and Poisson's ratio, respectively. Furthermore, $J_e = \det(\mathbf{F}_e)$ and $\mathbf{C}_e = \mathbf{F}_e^T \cdot \mathbf{F}_e$, where superscript $T$ denotes the transpose. Differentiating the strain energy with respect to $\mathbf{C}_e$ gives the Piola-Kirchhoff stress tensor, $\mathbf{S} = 2\,\partial\psi/\partial\mathbf{C}_e$, which is related to the Mandel stress, $\mathbf{M}$, as $\mathbf{M} = \mathbf{C}_e \cdot \mathbf{S}$.

The employed yield function, $\phi$, reads:

(2)

where material parameters $\sigma_{y0}$, $H$ and $m$ denote the initial yield stress, the hardening modulus and an exponential hardening parameter, respectively. Furthermore, $\mathbf{M}^d$ denotes the deviatoric Mandel stress and $\bar{\varepsilon}_p$ the plastic variable. The following associated flow rule is employed:

(3)

The Karush-Kuhn-Tucker conditions close the constitutive model:

$\dot{\lambda} \geq 0, \qquad \phi \leq 0, \qquad \dot{\lambda}\,\phi = 0$     (4)

where $\dot{\lambda}$ denotes the plastic multiplier.
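Equations (1)-(4) define the constitutive model that is evaluated at every quadrature point once per increment (the stress update). As an illustration of what such an update involves, a minimal small-strain radial-return sketch for von Mises plasticity with linear hardening is given below; this is an assumption for illustration only, not the finite-strain, F-bar-based update used in this work, and all numerical parameter values are placeholders.

```python
import numpy as np

def radial_return(eps, eps_p_old, ep_bar_old, E=200e3, nu=0.3, sig_y0=250.0, H=1e3):
    """Illustrative small-strain J2 return mapping with linear hardening (placeholder values).
    eps: total strain (3x3), eps_p_old: previous plastic strain, ep_bar_old: previous
    equivalent plastic strain. Returns stress, updated plastic strain and plastic variable."""
    G = E / (2.0 * (1.0 + nu))                        # shear modulus
    K = E / (3.0 * (1.0 - 2.0 * nu))                  # bulk modulus
    eps_e = eps - eps_p_old                           # trial elastic strain
    eps_e_dev = eps_e - np.trace(eps_e) / 3.0 * np.eye(3)
    s_trial = 2.0 * G * eps_e_dev                     # trial deviatoric stress
    q_trial = np.sqrt(1.5 * np.tensordot(s_trial, s_trial))
    phi = q_trial - (sig_y0 + H * ep_bar_old)         # trial value of the yield function
    dgamma = 0.0 if phi <= 0.0 else phi / (3.0 * G + H)   # plastic multiplier (radial return)
    n = s_trial / q_trial if q_trial > 0.0 else np.zeros((3, 3))
    s = s_trial - 2.0 * G * dgamma * 1.5 * n          # return to the yield surface
    sigma = s + K * np.trace(eps_e) * np.eye(3)       # add volumetric part
    eps_p = eps_p_old + dgamma * 1.5 * n              # associated flow direction
    return sigma, eps_p, ep_bar_old + dgamma
```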

A periodic mesh is employed in the simulations, where the constraints due to the periodic boundary conditions are imposed using Lagrange multipliers. Dirichlet boundary conditions are used for the four corner nodes, where the displacement values are dictated by the right stretch tensor of the macroscale deformation, $\bar{\mathbf{U}}$ (which is symmetric). This results in the following system of linear equations, which must be constructed and solved for each iteration of each increment:

(5)

where column $\underline{u}$ collects the displacement components of all nodes at an intermediate solution, column $\underline{\varepsilon}_p$ the plastic variables in all quadrature points at that intermediate solution, column $\underline{\lambda}$ the Lagrange multipliers, column $\underline{c}$ the constraints due to the periodic boundary conditions, column $\underline{f}_{reac}$ the components of the reaction forces, column $\underline{f}_{int}$ the components of the internal forces and matrix $\mathbf{K}$ the derivatives of the internal force components with respect to the displacement components. $\delta\underline{u}$ and $\delta\underline{\lambda}$ together denote the correction to the intermediate solution given by $\underline{u}$ and $\underline{\lambda}$.
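To illustrate the structure described above, a Newton iteration with periodic constraints enforced by Lagrange multipliers typically takes a saddle-point form of the following kind (a sketch in our own notation, with $\mathbf{C}$ a hypothetical constraint matrix such that the periodicity constraints read $\mathbf{C}\,\underline{u} = \underline{c}$):

```latex
\begin{bmatrix}
\mathbf{K} & \mathbf{C}^{T} \\
\mathbf{C} & \mathbf{0}
\end{bmatrix}
\begin{bmatrix}
\delta\underline{u} \\ \delta\underline{\lambda}
\end{bmatrix}
=
-\begin{bmatrix}
\underline{f}_{int} + \mathbf{C}^{T}\underline{\lambda} - \underline{f}_{reac} \\
\mathbf{C}\,\underline{u} - \underline{c}
\end{bmatrix}
```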

3 POD-based model order reduction

Projection-based MOR interpolates all kinematic variables, $\underline{u}$, using $k$ global basis functions according to:

$\underline{u} \approx \boldsymbol{\Phi}\,\underline{a} = \sum_{i=1}^{k} a_i\,\underline{\phi}_i$     (6)

where $\underline{\phi}_i$, of length $n$ (the number of degrees of freedom), denotes the $i$-th basis function and scalar $a_i$ denotes its associated weight, which is to be computed online. $\boldsymbol{\Phi}$ and $\underline{a}$ collect all basis functions and their associated weights in an $n \times k$ matrix and a column of length $k$, respectively.

In the POD method, the global basis functions $\underline{\phi}_i$ are the orthonormal left-singular vectors corresponding to the $k$ largest singular values of an $n \times m$ matrix, $\mathbf{X}$, storing the $m$ training solutions. Alternatively, one can apply eigenvalue decomposition to the product $\mathbf{X}^T\mathbf{X}$ (of size $m \times m$), resulting in an $m \times m$ matrix $\mathbf{V}$ of which each column is an eigenvector of $\mathbf{X}^T\mathbf{X}$. The left-singular vectors of $\mathbf{X}$ (associated with the largest singular values) are then obtained from the columns of the product $\mathbf{X}\mathbf{V}$ (after normalization), of which the first $k$ columns are utilized as the global basis functions.

In RVE simulations with periodic boundary conditions, the global basis functions are not directly extracted from the training solutions. Rather, the displacements are additively decomposed into a homogeneous contribution, $\underline{u}_h$, and a fluctuating contribution, $\underline{u}_f$:

$\underline{u} = \underline{u}_h + \underline{u}_f$     (7)

Because the right stretch tensor of the macroscale deformation, $\bar{\mathbf{U}}$, is the input for RVE simulations, $\underline{u}_h$ can straightforwardly be computed. The global basis functions are thus only used to interpolate the fluctuating part of the kinematic variables, yielding the following expression:

$\underline{u} \approx \mathbf{H}\,\underline{U} + \boldsymbol{\Phi}\,\underline{a}$     (8)

where column $\underline{U}$, of length 3, is completely dictated by the known components of $\bar{\mathbf{U}}$ (three components in 2D simulations) and matrix $\mathbf{H}$ (of size $n \times 3$ in 2D) homogeneously interpolates the displacement components.

According to Eqs. (7) and (8), the homogeneous deformations are first subtracted from the training solutions, and SVD is applied to the resultant (i.e. fluctuating) displacement field.
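Continuing the illustrative sketch above (with a hypothetical matrix H of size $n \times 3$ built from the mesh coordinates, and U_macro storing the three in-plane components of $\bar{\mathbf{U}}$ per snapshot), the fluctuation basis would be obtained as:

```python
import numpy as np

def fluctuation_basis(X, H, U_macro, k):
    """X: n x m snapshot matrix, H: n x 3 homogeneous interpolation matrix,
    U_macro: 3 x m macroscale stretch components per snapshot, k: basis size."""
    X_fluct = X - H @ U_macro                         # subtract homogeneous contribution per snapshot
    U, s, _ = np.linalg.svd(X_fluct, full_matrices=False)
    return U[:, :k]                                   # periodic fluctuation basis
```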

The decomposition produces basis functions that are themselves periodic and hence, periodicity does not need to be actively enforced in the online simulations. Consequently, the system of linear equations that needs to be solved in terms of update $\delta\underline{a}$ in the online simulations reads:

(9)

where update $\delta\underline{U}$ is known.
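To make the conventional online stage concrete, a schematic reduced Newton step per increment could look as follows (an illustrative sketch, not the authors' code; assemble_internal_forces and assemble_stiffness are hypothetical routines that evaluate the full-order residual and tangent at the reconstructed displacement field):

```python
import numpy as np

def reduced_newton_increment(a, dU, Phi, H, u_prev,
                             assemble_internal_forces, assemble_stiffness,
                             tol=1e-8, max_iter=25):
    """One load increment of conventional POD-based MOR (illustrative sketch).
    a: current basis coefficients, dU: known homogeneous update (length 3),
    Phi: n x k fluctuation basis, H: n x 3 homogeneous interpolation matrix."""
    u = u_prev + H @ dU + Phi @ a                 # reconstruct the full displacement field
    for _ in range(max_iter):
        f_int = assemble_internal_forces(u)       # stress update in every quadrature point
        r = Phi.T @ f_int                         # reduced residual
        if np.linalg.norm(r) < tol:
            break
        K = assemble_stiffness(u)                 # full tangent, projected onto the basis
        da = np.linalg.solve(Phi.T @ K @ Phi, -r)
        a = a + da
        u = u + Phi @ da
    return a, u
```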

4 ANN-acceleration

In this section, we discuss the RNN that rapidly emulates the basis coefficients in $\underline{a}$ for each increment (given the macroscale deformation $\bar{\mathbf{U}}$). This circumvents the iterative process (Eq. (9)) necessary for conventional POD-based MOR.
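For contrast with the reduced Newton sketch of the previous section, the RNN-accelerated online step per increment is equation-free. Again as an illustrative sketch only: rnn and h denote a hypothetical trained network and its hidden state, and no stiffness matrix or iteration appears.

```python
def rnn_accelerated_increment(U_macro, h, rnn, Phi, H, assemble_internal_forces):
    """Equation-free online step: no stiffness matrix, no iterations (illustrative sketch).
    U_macro: macroscale stretch components for this increment, h: RNN hidden state."""
    a, h = rnn(U_macro, h)                        # emulate the basis coefficients directly
    u = H @ U_macro + Phi @ a                     # reconstruct the microscale displacement field
    f_int = assemble_internal_forces(u)           # single stress update per quadrature point
    return u, f_int, h
```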

4.1 Network architecture

A neural network is a combination of numerous neurons. Each neuron receives several input values ($x_1$ to $x_n$ in Fig. 1) and outputs a single value ($y$), obtained by weighing the input values (with weights $w_i$), adding a bias ($b$) and inserting the result in an activation function ($f$): $y = f\left(\sum_i w_i x_i + b\right)$. A collection of neurons is grouped together to form a layer. In a deep neural network, several layers of neurons are placed one after another (see Fig. 2).

Figure 1: A single artificial neuron at layer $l$. The outputs of the previous layer ($l-1$) are the inputs of the current layer ($l$).

The best known ANNs are feed-forward deep neural networks (see Fig. 2). Feed-forward networks establish a unique relationship between input and output data and cannot handle sequential information (i.e. an incremental sequence), as required for path-dependent models such as elastoplasticity Rocha and others (2020).

Figure 2: A feed-forward neural network with two hidden layers of five neurons each, two neurons in the input layer and three neurons in the output layer.

On the other hand, recurrent neural networks (RNNs) have the intrinsic feature of handling sequential data. RNNs (Fig. 3) employ hidden state variables ('$h$') as memory elements to store and pass on information from the past to the upcoming part of the sequence. Therefore, the predictions of an RNN at time step '$t$' are based on the input at time step '$t$' and the values of the hidden variables at the beginning of that time step ('$h_{t-1}$'). Hidden variables '$h$' can be considered as the RNN's way to quantify the past, in analogy to history variables in the DNS and the conventional POD-based MOR.

Traditional RNNs suffer from the problem of vanishing gradients when handling long sequential data. This is because the gradient contributions associated with the beginning of the sequence are obtained by multiplying many derivatives from later in the sequence; since these derivatives often take small values, the repeated multiplication yields significantly smaller values, explaining the term vanishing gradients. To overcome this problem, we use an RNN with a Gated Recurrent Unit (GRU).

The GRU enables control over the flow of information through the hidden variables '$h$'. It uses an update gate and a reset gate to determine the amount of information to be passed on and to be retained by the hidden variable. The gated structure of a GRU also controls the flow of gradients during learning, such that the parameter update value does not vanish.
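For reference, a standard GRU update (as commonly defined in the literature; conventions for the gates vary slightly between libraries, and the equations are not spelled out in this paper) reads, with update gate $z_t$, reset gate $r_t$, input $x_t$ and hidden state $h_t$:

```latex
z_t = \sigma\!\left(W_z x_t + U_z h_{t-1} + b_z\right), \qquad
r_t = \sigma\!\left(W_r x_t + U_r h_{t-1} + b_r\right),

\tilde{h}_t = \tanh\!\left(W_h x_t + U_h\,(r_t \odot h_{t-1}) + b_h\right), \qquad
h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t ,
```

where $\sigma$ denotes the logistic sigmoid and $\odot$ the element-wise product.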

The RNN architecture used in this contribution is shown in Fig. 3. Adding feed-forward neural networks (here denoted $FNN_{in}$ and $FNN_{out}$) at the input and at the output of the GRU increases the accuracy of the RNN for complex problems Wu and others (2020).

Figure 3: Neural network architecture used in this contribution. The red dashed box indicates the GRU.
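A sketch of this architecture in PyTorch is given below, using the hyperparameters reported in section 5.2 (2 macroscale deformation components as input, 100 basis coefficients as output, a GRU with 1600 hidden variables and one hidden layer per feed-forward network); the assignment of the 100 and 1600 neurons to the two feed-forward networks is our assumption, not stated explicitly in the text.

```python
import torch

class RnnPod(torch.nn.Module):
    """Illustrative sketch of the FNN-GRU-FNN architecture of Fig. 3 (not the authors' code)."""

    def __init__(self, n_in=2, n_hidden_gru=1600, n_out=100):
        super().__init__()
        self.fnn_in = torch.nn.Sequential(           # feed-forward network before the GRU
            torch.nn.Linear(n_in, 100), torch.nn.LeakyReLU(),
            torch.nn.Linear(100, n_hidden_gru),
        )
        self.gru = torch.nn.GRU(n_hidden_gru, n_hidden_gru, num_layers=1, batch_first=True)
        self.fnn_out = torch.nn.Sequential(          # feed-forward network after the GRU
            torch.nn.Linear(n_hidden_gru, 1600), torch.nn.LeakyReLU(),
            torch.nn.Linear(1600, n_out),
        )

    def forward(self, x, h):
        # x: (batch, increments, n_in), h: (1, batch, n_hidden_gru)
        z = self.fnn_in(x)
        z, h = self.gru(z, h)
        return self.fnn_out(z), h
```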

4.2 Learning phase of the recurrent neural network

We employ a supervised learning strategy in which the entire data set is passed through the RNN numerous times; every pass through the data is referred to as an epoch. The learning stage, i.e. the identification of the weights and biases, is an iterative process in which the loss function (Eq. (10)) is minimized to increase the network's accuracy.

(10)

where $n_t$ denotes the number of training solutions, superscript $RNN$ refers to the basis coefficients predicted by the RNN, and $\|\cdot\|_2$ denotes the $L_2$-norm.

An epoch has a forward propagation stage in which the information is passed from the input layer to the output layer. At the beginning of learning, the parameters such as weights and biases are initialized randomly.

In the backward propagation stage, the loss function's gradients with respect to the RNN’s parameters are calculated starting at the output layer and ending at the input layer. A gradient descent algorithm is used to update the RNN's parameters. The procedure is followed for a number of epochs until the desired convergence is obtained.

The data is fed to the network in multiple batches in order to speed up the training process. Each batch consists of a sequence of input-output pairs extracted from the training solutions at every load increment. The sequence length is equal for all batches. In the current contribution, the data in each batch corresponds to the solutions of a single training simulation. There are 1000 load increments per simulation and therefore each batch is of length 1000. The hidden variables of the GRU are initialized as -1, as recommended in Wu and others (2020).
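A schematic training loop under these settings, using the RnnPod sketch from section 4.1, could look as follows (illustrative only; data loading, the mini-batch switching every 50 epochs and validation are omitted, and the mean-squared error below is only a stand-in for the loss of Eq. (10)):

```python
import torch

model = RnnPod()                                             # architecture sketched in section 4.1
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # learning rate from section 5.2
loss_fn = torch.nn.MSELoss()                                 # stand-in for the loss of Eq. (10)

# x_batch: (n_sims, 1000, 2) macroscale inputs, a_batch: (n_sims, 1000, 100) POD coefficients
def train_epoch(x_batch, a_batch):
    h0 = torch.full((1, x_batch.shape[0], 1600), -1.0)       # hidden variables initialized to -1
    optimizer.zero_grad()
    a_pred, _ = model(x_batch, h0)
    loss = loss_fn(a_pred, a_batch)
    loss.backward()                                          # backward propagation through the sequence
    optimizer.step()                                         # ADAM parameter update
    return loss.item()
```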

5 Results

5.1 Model setup and data collection

The discretized RVE is portrayed in Fig. 4 and is subjected to cyclic and random loading. The material parameters for the matrix (Young's modulus, Poisson's ratio, initial yield stress, hardening modulus and hardening exponent) are fixed. For the particles, the elastic material parameters are chosen such that the particles are substantially stiffer than the matrix, while the initial yield stress is chosen such that the particles behave purely elastically. Because the matrix deforms mostly plastically and plastic deformation is isochoric, and because the particles deform only minimally relative to the matrix (due to the ratio of Young's moduli), we only consider the application of isochoric macroscale deformations (i.e. $\det(\bar{\mathbf{U}}) = 1$), governed by bounds on the components of $\bar{\mathbf{U}}$. This means that it is sufficient to only consider components $\bar{U}_{11}$ and $\bar{U}_{12}$ as input variables of the RNN (as $\bar{U}_{11}$, $\bar{U}_{12}$ and the isochoric constraint dictate the value of $\bar{U}_{22}$).

Figure 4: The discretized RVE with particles.

A single RNN is simultaneously trained to emulate the basis coefficients for both cyclic and random loading. 350 cyclic loading training simulations (+10 validation simulations) and 10,000 random loading simulations (+100 validation simulations) are performed to determine the basis functions and to train the RNN. Random loading is not simulated because such loading is expected in nested multiscale simulations per se. Instead, it is considered to enhance the training, because in true multiscale simulations with cyclic loading the loading path of each RVE will differ slightly from cycle to cycle Wu and others (2020). The loading paths of the cyclic training simulations are presented in the left diagram of Fig. 5. Each path involves a loading stage and an unloading stage, each consisting of 500 increments. In the random loading simulations (one loading path is shown on the right in Fig. 5), the loading direction is randomly selected for each of the 1000 increments and the loading step is fixed.

Figure 5: Left: each red line presents the load path of a cyclic training simulation. Right: load path of a single training simulation for random loading. The bounds on the components of $\bar{\mathbf{U}}$ are presented by blue lines.
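A sketch of how such a random, isochoric loading path could be generated is given below; the step size, the bounds and the handling of steps that leave the admissible region are hypothetical, and only the overall structure (random direction per increment, fixed step, 1000 increments, $\bar{U}_{22}$ following from the isochoric constraint) follows the description above.

```python
import numpy as np

def random_isochoric_path(n_increments=1000, step=1e-3, bound=0.05, rng=None):
    """Generate a random loading path in (U11, U12); U22 follows from det(U) = 1."""
    rng = np.random.default_rng(rng)
    U11, U12 = np.ones(n_increments + 1), np.zeros(n_increments + 1)
    for i in range(n_increments):
        angle = rng.uniform(0.0, 2.0 * np.pi)                 # random loading direction
        dU11, dU12 = step * np.cos(angle), step * np.sin(angle)
        # reverse steps that would leave the (hypothetical) admissible bounds
        if abs(U11[i] + dU11 - 1.0) > bound or abs(U12[i] + dU12) > bound:
            dU11, dU12 = -dU11, -dU12
        U11[i + 1], U12[i + 1] = U11[i] + dU11, U12[i] + dU12
    U22 = (1.0 + U12**2) / U11                                # isochoric: U11*U22 - U12^2 = 1
    return U11, U12, U22
```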

The RNN's output variables are the coefficients of the POD basis, $\underline{a}$. In order to obtain the input-output pairs for training the RNN, the POD problem described in section 3 is solved for the same loading paths used for the DNS. In this contribution, 100 POD basis functions are used, whose coefficients are extracted at every load increment of each training simulation. Since elastoplasticity yields non-ellipticity, a large number of basis functions is required to obtain an acceptable accuracy.

5.2 RNN predictions

The learning stage of a neural network (i.e. the algorithm to minimise the loss function in order to identify the network's weights and biases) requires the selection of several hyperparameters. The selection of these hyperparameters affects the speed of the learning stage and the accuracy of the resulting network. In this contribution, the learning rate is set to 0.001 and the mini-batch size to 1000. The remaining parameters of the ADAM optimizer ($\beta_1$, $\beta_2$ and $\epsilon$) are kept at fixed values.

Different combinations of the numbers of layers, neurons and hidden variables are investigated with respect to the convergence of the loss function. Fig. 6 shows that the number of hidden variables of the GRU and the number of hidden layers of the feed-forward networks influence the RNN's accuracy the most.

Figure 6: The loss function after training for 60,000 epochs for three neural network parameters: (i) the number of hidden layers in the feed-forward networks, (ii) the number of neurons in their hidden layers and (iii) the number of hidden variables '$h$' of the GRU. The size of each circle corresponds to the value of the loss function (i.e. a large circle corresponds to a large value of the loss function). Left: loss function values for training data. Right: loss function values for validation data.

Therefore, in this contribution the number of hidden layers in the feed-forward networks and the number of GRU layers are set to 1. The number of neurons in the hidden layer of one of the feed-forward networks is set to 100. Both feed-forward networks use the 'Leaky ReLU' activation function. The number of hidden variables of the GRU and the number of neurons in the hidden layer of the other feed-forward network are set to 1600.

The RNN is trained for a total of 450,000 epochs, after which the loss function no longer decreases for either the training or the validation data. Each epoch only processes a fraction of the whole training data in a mini-batch; the mini-batch is therefore switched every 50 epochs such that all training simulations are included. After training, the loss function (Eq. (10)) is reduced to a small converged value.

Figure 7: Cyclic loading validation simulations: Some RNN predictions (crosses) and the actual values (lines). The colors distinguish the four validation simulations.

Coefficients for some POD basis functions predicted by the RNN are presented in Figs. 7 (cyclic loading) and 8 (random loading), together with the exact coefficients. It is clearly visible that the RNN's accuracy for cyclic loading is higher than for random loading, although the accuracy for random loading is still acceptable in our opinion. Accordingly, the average error calculated using Eq. (10) is smaller for the 10 cyclic loading validation simulations than for the 100 random loading validation simulations.

Figure 8: Random loading validation simulations: Some RNN predictions (crosses) and the actual values (lines). The colors distinguish two validation simulations.

5.3 Mechanical predictions

In this subsection, we compare the results of RNN-accelerated MOR with those of the conventional MOR and the DNS. We start with Fig. 9 in which the components of the homogenized Piola-Kirchhoff stress are presented for one of the cyclic loading validation simulations. We can see that the POD results match those of the DNS fairly accurately (albeit not perfectly), indicating that the number of 100 basis functions is sufficiently large. The results of the RNN-accelerated MOR also match those of the DNS and those of the MOR fairly accurately. Clearly, some differences are present, but the results indicate that the errors introduced by the RNN hardly influence the predicted macroscale stress.

Figure 9: Components of the macroscale Piola-Kirchhoff stress as functions of the deformation for a cyclic loading validation simulation predicted by the DNS (black solid), by the conventional MOR (blue dashed), and by the RNN-accelerated MOR (red dotted).

As the appeal of RNN-accelerated MOR is the preservation of detailed information (microstructural information in case of RVEs), we also compare the plastic variables predicted by the different approaches. Fig. 10 shows that the difference in the plastic variables predicted by the DNS and predicted by the MOR is not negligible, although also not completely unacceptable. In turn, the difference in the plastic variables predicted by the conventional MOR and predicted by the RNN-accelerated MOR is substantially smaller.

Figure 10: The plastic variable ($\bar{\varepsilon}_p$) computed by the three methods for one of the cyclic loading validation simulations. Top-left: the DNS results; top-right: the difference between the POD results and the DNS results; bottom-left: the difference between the RNN-POD results and the POD results; bottom-right: the difference between the RNN-POD results and the DNS results.

We continue with results for random loading scenarios. In Fig. 11, the components of the Piola-Kirchhoff stress are again presented, but now for one of the random loading validation simulations and as a function of the increment number (instead of the deformation). Fig. 12, in turn, shows the plastic variables predicted by the different approaches. Comparing Fig. 9 with Fig. 11 and Fig. 10 with Fig. 12, it can be concluded that the RNN-accelerated MOR is more accurate for cyclic loading than for random loading. The results can nevertheless be argued to be sufficiently accurate, because the random loading simulations were only considered to train the RNN effectively; in practice, the RNN-accelerated MOR is intended for loading cases arising in multiscale simulations, which are closer to the cyclic loading simulations.

Figure 11: Components of the macroscale Piola-Kirchhoff stress as functions of the increment number for a random loading validation simulation predicted by the DNS (black solid), by the conventional MOR (blue dashed), and by the RNN-accelerated MOR (red dotted).
Figure 12: The plastic variable ($\bar{\varepsilon}_p$) computed by the three methods for one of the random loading validation simulations. Top-left: the DNS results; top-right: the difference between the POD results and the DNS results; bottom-left: the difference between the RNN-POD results and the POD results; bottom-right: the difference between the RNN-POD results and the DNS results.

The computational times required to prepare the training data and to run the validation simulations are summarized in Tables 1 and 2. The training of the RNN was performed on an HPC cluster, using a GPU with 32 GB of memory, for 7 days. Although the data preparation and the training of the RNN required a total of approximately two weeks of computational time, the RNN-accelerated MOR is approximately 100 times as fast as the DNS and 22 times as fast as the conventional MOR in case of random loading. For cyclic loading, on the other hand, the RNN-accelerated MOR is only 13 times as fast as the DNS, whilst the conventional MOR is hardly faster than the DNS.

The difference in time savings of the RNN-accelerated MOR for cyclic and random loading arises because the DNS and the conventional MOR require more iterations for random loading than for cyclic loading. The reasons are that (1) the loading paths for cyclic loading are substantially shorter than those for random loading, whilst the same number of increments is employed, and (2) at the start of each increment of a cyclic loading simulation (i.e. DNS and conventional MOR), the previous plastic state is assumed in order to speed up the cyclic loading simulations.

Data preparation    POD             RNN-POD
Cyclic loading      350 × 1 hr      350 × 1 hr + 350 × 0.75 hr
Random loading      10000 × 7 hr    10000 × 7 hr + 10000 × 1.5 hr
Table 1: Computational time for data preparation
Online stage DNS POD RNN-POD
Cyclic loading 55 min 50 min 4 min
Random loading 7 hr 1.5 hr 4 min
Table 2: Computational time for validation simulations

6 Conclusion

In this contribution, a recurrent neural network (RNN) is used to emulate the basis function coefficients of projection-based model-order-reduction (MOR) for a representative volume element described by finite plasticity, subjected to cyclic loading and random loading. The RNN is simultaneously trained for cyclic loading and random loading. We have used an RNN because elastoplasticity is history-dependent and, in analogy to the plastic variables in elastoplasticity, an RNN uses hidden variables to quantify its history.

Our results have shown that the RNN-accelerated MOR yields speed-ups between factors of 13 and 100 relative to the direct numerical simulations (and between factors of 13 and 22 relative to conventional MOR). The accuracy is similar to that of conventional MOR, whose error relative to the direct numerical simulations is not entirely negligible. Nevertheless, with speed-ups of up to a factor of 100, the RNN-acceleration of MOR seems to make MOR an interesting possibility for finite plasticity, and perhaps also for other non-elliptical problems.

Acknowledgement

S. Vijayaraghavan, L.A.A. Beex and S.P.A. Bordas gratefully acknowledge the financial support of the Fonds National de la Recherche Luxembourg (FNR) grant INTER/FNRS/15/11019432/EnLightenIt/Bordas. This project has received funding from the H2020-EU.1.2.1.-FET Open Programme project MOAMMM under grant No 862015 and the EU's H2020 project DRIVEN under grant No 811099. Computational resources have been provided by the supercomputing facilities CÉCI funded by FRS-FNRS, Belgium and by the HPC of the University of Luxembourg Varrette and others (2014).


References

  • [1] K. Carlberg et al. (2011). Int J Numer Methods Eng 86(2), pp. 155–181.
  • [2] F. Fritzen, M. Fernández and F. Larsson (2019). 6, pp. 75.
  • [3] F. Ghavamian and A. Simone (2019). 357, pp. 112594.
  • [4] M. B. Gorji et al. (2020). 143, pp. 103972.
  • [5] P. Kerfriden et al. (2011). Comput Method Appl M 200(5-8), pp. 850–866.
  • [6] P. Kerfriden et al. (2013). Comput Method Appl M 256, pp. 169–188.
  • [7] E.D. Koronaki et al. (2019). 121, pp. 148–157.
  • [8] H. J. Logarzo et al. (2021). 373, pp. 113482.
  • [9] M. Meyer and H. G. Matthies (2003). Comput. Mech. 31(1-2), pp. 179–191.
  • [10] M. Mozaffar et al. (2019). 116(52), pp. 26414–26420.
  • [11] C. Prud'homme et al. (2002). J. Fluids Eng. 124, pp. 70.
  • [12] A. Quarteroni et al. (2011). J. Math. Ind. 1.
  • [13] I.B.C.M. Rocha et al. (2020). Eur J Mech A Solids 82, pp. 103995.
  • [14] C. Settgast et al. (2020). 126, pp. 102624.
  • [15] R. Swischuk et al. (2019). 179, pp. 704–717.
  • [16] J. F. Unger and C. Könke (2008). 86(21), pp. 1994–2003.
  • [17] S. Varrette et al. (2014). Bologna, Italy, pp. 959–967.
  • [18] L. Wu and L. Noels.
  • [19] L. Wu et al. (2020). Comput Method Appl M 369, pp. 113234.