Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems

A classical reduced order model for dynamical problems involves spatial reduction of the problem size. However, temporal reduction accompanying the spatial reduction can further reduce the problem size without losing much accuracy, which results in a considerably greater speed-up than spatial reduction alone. Recently, a novel space-time reduced order model for dynamical problems was developed, showing an order of a hundred speed-up with a relative error of less than 0.1% for small-scale problems. However, for the method to be applicable to a large-scale problem, an efficient space-time reduced basis construction algorithm needs to be developed. We present an incremental space-time reduced basis construction algorithm. The incremental algorithm is fully parallel and scalable. Additionally, the block structure in the space-time reduced basis is exploited, which avoids constructing the space-time reduced basis explicitly. These novel techniques are applied to a large-scale particle transport simulation with millions and billions of degrees of freedom. The numerical example shows that the algorithm is scalable and practical, and that it achieves a tremendous speed-up while maintaining good accuracy. Finally, error bounds for space-only and space-time reduced order models are derived.


1 Introduction

Many computational models for physics simulations are formulated as linear dynamical systems. Examples of linear dynamical systems include the computational model for signal propagation and interference in electric circuits, storm surge prediction models before an advancing hurricane, vibration analysis in large structures, thermal analysis in various media, neuro-transmission models in the nervous system, various computational models for micro-electro-mechanical systems, and various particle transport simulations. Depending on the complexity of the geometry and the desired fidelity level, these problems easily become large-scale. For example, the Boltzmann Transport Equation (BTE) has seven independent variables, i.e., three spatial variables, two directional variables, one energy variable, and one time variable. It is not hard to see that the BTE can easily lead to a high dimensional discretized problem. Additionally, complex geometry (e.g., reactors with thousands of pins and shields) can lead to a large-scale problem. As an example, a problem with 20 angular directions, a cubic spatial domain of 100 x 100 x 100 elements, 16 energy groups, and 100 time steps leads to 32 billion unknowns. This large scale hinders fast forward solves and prevents multi-query settings, such as uncertainty quantification, design optimization, and parameter studies, from being tractable. Therefore, developing a Reduced Order Model (ROM) that accelerates the solution process without losing much accuracy is essential.

There are several model order reduction approaches available for linear dynamical systems: (i) Balanced truncation [29, 28] from the control theory community is the most famous one. It has explicit error bounds and guarantees stability. However, it requires the solution of two Lyapunov equations to construct bases, which is a formidable task in large-scale problems. (ii) The moment-matching methods [4, 19] provide a computationally efficient framework using Krylov subspace techniques in an iterative fashion where only matrix-vector multiplications are required. The optimal tangential interpolation for nonparametric systems [19] is also available. The most crucial part of the moment-matching methods is the location of the samples where moments are matched. Also, they are not data-driven approaches, meaning that no data are used to construct the ROM. (iii) Proper Generalized Decomposition (PGD) [2] was first developed as a numerical method for solving boundary value problems and later extended to dynamical problems [3]. The main assumption of the method is a separated solution representation in space and time, which enables an efficient solution procedure. Therefore, it is considered a model reduction technique. However, PGD is not a data-driven approach either.

Often there are many data available either from experiments or high-fidelity simulations. Those data contain valuable information about the system of interest. Therefore, data-driven methods can maximize the usage of the existing data and enable the construction of an optimal ROM. Dynamic Mode Decomposition (DMD) is a data-driven approach that generates reduced modes that embed an intrinsic temporal behavior. It was first developed by Peter Schmid in [34] and popularized by many other scientists. For more detailed information about DMD, we refer to the preprint [37]. As another data-driven approach, Proper Orthogonal Decomposition (POD) [6] gives the data-driven optimal basis through the method of snapshots. However, most POD-based ROMs for linear dynamical systems apply spatial projection only; the temporal complexity remains proportional to the temporal discretization of the high-fidelity model. In order to achieve an optimal reduction, a space–time ROM needs to be built, where both spatial and temporal projections are applied. In the literature, some space–time ROMs are available [14, 38, 41, 40], but they have only been applied to small-scale problems.

Recently, several ROM techniques have been applied to various types of transport equations. Starting with Wols's work that used the POD to simulate the dynamics of an accelerator driven system (ADS) in 2010 [39], the publications on ROMs in transport problems have increased in number. For example, a POD-based ROM for eigenvalue problems to calculate dominant eigenvalues in reactor physics applications was developed by Buchan, et al. in [10]. The corresponding Boltzmann transport equation was recast into its diffusion form, where some of the dimensions were eliminated. Sartori, et al. in [33] also applied the POD-based ROM to the diffusion form and compared it with a modal method, showing that the POD was superior to the modal method. Reed and Robert in [32] also used POD to expand energy, replacing the traditional Discrete Legendre Polynomials (DLP) or modified DLP. They showed that a small number of POD energy modes could capture many-group fidelity. Buchan, et al. in [11] also developed a POD-based reduced order model to efficiently resolve the angular dimension of the steady-state, mono-energetic Boltzmann transport equation. Behne, Ragusa, and Morel [5] applied a POD-based reduced order model to accelerate steady-state radiation multi-group energy transport problems. A Petrov–Galerkin projection was used to close the reduced system. Coale and Anistratov in [15] replace a high-order (HO) system with a POD-based ROM in the high-order low-order (HOLO) approach for Thermal Radiative Transfer (TRT) problems.

There have been some interesting DMD works for transport problems. McClarren and Haut in [27] used DMD to estimate the slowly decaying modes from Richardson iteration and remove them from the solution. Hardy, Morel, and Cory [20] also explored DMD to accelerate the kinetics of subcritical metal systems without losing much accuracy. It was applied to a three-group diffusion model in a bare homogeneous fissioning sphere. Star, et al. in [36] used the DMD method to identify a non-intrusive POD-based ROM for the unsteady convection-diffusion scalar transport equation; their approach applied the DMD method in reduced coordinate systems.

Several papers use the PGD for transport problems. For example, Prince and Ragusa [30] applied the Proper Generalized Decomposition (PGD) to steady-state mono-energetic neutron transport equations where the angular flux was sought as a finite sum of separable one-dimensional functions. However, the PGD was found to be ineffective for pure absorption problems because a large number of terms was required in the separated representation. Prince and Ragusa [31] also applied the PGD to the uncertainty quantification process in neutron diffusion–reaction problems. Dominesey and Ji used the PGD method to separate space and angle in [16] and to separate space and energy in [17].

However, all these model order reduction techniques for transport equations apply only spatial projection, ignoring the potential reduction in the temporal dimension. In this paper, a Space–Time Reduced Order Model (ST-ROM) is developed and applied to large-scale linear dynamical problems. Our ST-ROM achieves complexity reduction in both the space and time dimensions, which enables a much greater maximal speed-up with good accuracy. It is amenable to any time integrator. It follows the framework initially published in [14], but makes a new contribution by discovering a block structure in the space–time basis that enables an efficient implementation of the ST-ROM. The block structure allows us to avoid building the space–time basis explicitly. It enables the construction of the space–time reduced operators at a small additional cost over the space-only reduced operators. In turn, this allows us to apply the space–time ROM to a large-scale linear dynamical problem.

The paper is organized in the following way: Section 1.1 introduces useful mathematical notation that will be used throughout the paper. Section 2 describes a parametric linear dynamical system and how to solve the high-fidelity model in a classical time marching fashion. The full-order space–time formulation is also presented in Section 2, to be reduced to form our space–time ROM in Section 3.2. Section 3 introduces both spatial and spatiotemporal ROMs. The basis generation is described in Section 4, where the traditional POD and incremental POD are explained in Sections 4.1 and 4.2, respectively. Section 4.3 reveals a block structure of the space–time reduced basis and derives each space–time reduced operator in terms of the blocks. We apply our space–time ROM to a large-scale linear dynamical problem, i.e., a neutron transport simulation solving the BTE. Section 6 explains the discretization of the Boltzmann transport equation, using a multigroup energy discretization, associated Legendre polynomials for surface harmonics, a simple corner balance discretization for space, and the discrete ordinates method for direction. Finally, we present our numerical results in Section 7 and conclude the paper with a summary and future work in Section 8.

1.1 Notations

We review some of the notation used throughout the paper. An $\ell^2$ norm is denoted as $\|\cdot\|$. For matrices $A\in\mathbb{R}^{m\times n}$ and $B\in\mathbb{R}^{p\times q}$, the Kronecker (or tensor) product of $A$ and $B$ is the $mp\times nq$ matrix denoted by

  $A\otimes B := \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix},$

where $a_{ij}$ denotes the $(i,j)$th entry of $A$. Kronecker products have many interesting properties. We list here the ones relevant to our discussion (a numerical check follows the list):

  • If $A$ and $B$ are nonsingular, then $A\otimes B$ is nonsingular with $(A\otimes B)^{-1} = A^{-1}\otimes B^{-1}$,

  • $(A\otimes B)^{T} = A^{T}\otimes B^{T}$,

  • Given matrices $A$, $B$, $C$, and $D$, $(A\otimes B)(C\otimes D) = AC\otimes BD$, as long as both sides of the equation make sense,

  • $(A\otimes B)\otimes C = A\otimes (B\otimes C)$, and

  • $(A + B)\otimes C = A\otimes C + B\otimes C$.
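These identities are easy to verify numerically. The following NumPy snippet is a minimal sanity check of the first three properties; the matrix names and sizes are illustrative only, not from the paper:

# Minimal NumPy check of the Kronecker-product identities above.
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))  # square, (almost surely) nonsingular
C, D = rng.standard_normal((3, 4)), rng.standard_normal((2, 5))

# (A (x) B)^{-1} = A^{-1} (x) B^{-1} for nonsingular A, B
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
# (A (x) B)^T = A^T (x) B^T
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
# (A (x) B)(C (x) D) = AC (x) BD
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))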

2 Linear dynamical systems

Parametric continuous dynamical systems that are linear in state are considered:

  $\dot{x}(t;\mu) = A(\mu)\,x(t;\mu) + B(\mu)\,u(t), \qquad x(0;\mu) = x_0(\mu),$  (1)
  $y(t;\mu) = C(\mu)\,x(t;\mu),$  (2)

where $\mu\in\mathcal{D}\subseteq\mathbb{R}^{n_\mu}$ denotes a parameter vector, $x(t;\mu)\in\mathbb{R}^{N_s}$ denotes a time dependent state variable function, $x_0(\mu)\in\mathbb{R}^{N_s}$ denotes an initial state, $u(t)\in\mathbb{R}^{n_i}$ denotes a time dependent input variable function, and $y(t;\mu)\in\mathbb{R}^{n_o}$ denotes a time dependent output variable function. The system operators, i.e., $A(\mu)\in\mathbb{R}^{N_s\times N_s}$, $B(\mu)\in\mathbb{R}^{N_s\times n_i}$, and $C(\mu)\in\mathbb{R}^{n_o\times N_s}$, are real valued matrices, independent of the state variables. We assume that the dynamical system above is stable, i.e., the eigenvalues of $A(\mu)$ have strictly negative real parts.

Our methodology works for any time integrator, but for illustration purposes, we apply a backward Euler time integrator to Eq. (1). At the $n$th time step, the following system of equations is solved:

  $(I - \Delta t_n A)\,x_n = x_{n-1} + \Delta t_n B\,u_n,$  (3)

where $I\in\mathbb{R}^{N_s\times N_s}$ denotes an identity matrix, $\Delta t_n := t_n - t_{n-1}$ denotes the $n$th time step size with $t_0 = 0$, and $x_n$ and $u_n$ denote state and input vectors at the $n$th time step, $n = 1,\dots,N_t$, respectively. A Full Order Model (FOM) solves Eq. (3) every time step. The spatial dimension, $N_s$, and the temporal dimension, $N_t$, can be very large, which leads to a large-scale problem. We introduce how to reduce the high dimensionality in Section 3.

The single time step formulation in Eq. (3) can be equivalently re-written in the following discretized space-time formulation:

  $\mathcal{A}\,x^{st} = \mathcal{B}\,u^{st} + x_0^{st},$  (4)

where the space–time system matrix, $\mathcal{A}\in\mathbb{R}^{N_sN_t\times N_sN_t}$, the space–time state vector, $x^{st}\in\mathbb{R}^{N_sN_t}$, the space–time input vector, $u^{st}\in\mathbb{R}^{n_iN_t}$, and the space–time initial vector, $x_0^{st}\in\mathbb{R}^{N_sN_t}$, are defined respectively as

  $\mathcal{A} := \begin{bmatrix} I - \Delta t_1 A & & & \\ -I & I - \Delta t_2 A & & \\ & \ddots & \ddots & \\ & & -I & I - \Delta t_{N_t} A \end{bmatrix}, \qquad x^{st} := \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{N_t} \end{bmatrix},$  (5)

  $\mathcal{B} := \begin{bmatrix} \Delta t_1 B & & \\ & \ddots & \\ & & \Delta t_{N_t} B \end{bmatrix}, \qquad u^{st} := \begin{bmatrix} u_1 \\ \vdots \\ u_{N_t} \end{bmatrix}, \qquad x_0^{st} := \begin{bmatrix} x_0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$  (6)

The lower block-triangular matrix structure of $\mathcal{A}$ comes from the backward Euler time integration scheme. Other time integrators give other sparse block structures. No one would solve this space–time system directly because the specific block structure of $\mathcal{A}$ lets one solve the system in a time-marching fashion. However, if the space–time formulation in Eq. (4) can be reduced and solved efficiently, then one might be interested in solving the reduced space–time system as a whole. Section 3.2 shows such a reduction is possible.
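A small sketch can make the equivalence concrete. The snippet below (our own variable names, with a random stable test matrix and a uniform step size) assembles the space-time system (4)-(6) with SciPy sparse blocks and verifies that one global solve reproduces backward-Euler time marching:

# Sketch: assemble the space-time system (4)-(6) for a small test problem
# and check it against the time-marching solution of Eq. (3).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

Ns, Nt, dt = 4, 10, 0.01
rng = np.random.default_rng(1)
A = -np.eye(Ns) + 0.1 * rng.standard_normal((Ns, Ns))   # stable test matrix
B = rng.standard_normal((Ns, 1))
x0 = rng.standard_normal(Ns)
u = np.ones((Nt, 1))                                    # inputs u_1, ..., u_Nt

M = np.eye(Ns) - dt * A                                 # I - dt*A, cf. Eq. (3)
# Space-time matrix (5): M blocks on the diagonal, -I on the subdiagonal
cal_A = sp.block_diag([sp.csr_matrix(M)] * Nt) \
      + sp.kron(sp.eye(Nt, k=-1), -sp.eye(Ns))
rhs = np.concatenate([dt * (B @ u[n]) for n in range(Nt)])
rhs[:Ns] += x0                                          # space-time initial vector (6)

x_st = spsolve(cal_A.tocsr(), rhs)                      # one global solve, Eq. (4)

x, X = x0.copy(), []                                    # backward Euler marching
for n in range(Nt):
    x = np.linalg.solve(M, x + dt * (B @ u[n]))
    X.append(x)
assert np.allclose(x_st, np.concatenate(X))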

3 Reduced order models

We consider a projection-based reduced order model for linear dynamical systems. Section 3.1 shows a typical spatial reduced order model. A space–time reduced order model is described in Section 3.2.

3.1 Spatial reduced order models

A projection-based spatial reduced order model approximates the state variables as a linear combination of a small number of spatial basis vectors, $\phi_i^s\in\mathbb{R}^{N_s}$, $i = 1,\dots,n_s$, with $n_s \ll N_s$, i.e.,

  $x(t;\mu) \approx \tilde{x}(t;\mu) := x_{ref}(\mu) + \Phi^s\,\hat{x}(t;\mu),$  (7)

where the spatial basis, $\Phi^s\in\mathbb{R}^{N_s\times n_s}$ with $\Phi^{sT}\Phi^s = I$, is defined as

  $\Phi^s := [\,\phi_1^s,\ \dots,\ \phi_{n_s}^s\,],$  (8)

a reference state is denoted as $x_{ref}(\mu)\in\mathbb{R}^{N_s}$, and a time-dependent reduced coordinate vector function is defined as $\hat{x}(t;\mu)\in\mathbb{R}^{n_s}$. Substituting (7) into (1) gives an over-determined system of equations:

  $\Phi^s\,\dot{\hat{x}} = A\,(x_{ref} + \Phi^s\hat{x}) + B\,u,$  (9)

which can be closed, for example, by Galerkin projection, i.e., left-multiplying both sides of (9) and the initial condition in (1) by $\Phi^{sT}$, giving the reduced system of equations and initial conditions:

  $\dot{\hat{x}} = \hat{A}\,\hat{x} + \hat{B}\,u + \hat{a}_{ref}, \qquad \hat{x}(0) = \hat{x}_0,$  (10)

where $\hat{A} := \Phi^{sT}A\Phi^s$ denotes a reduced system matrix, $\hat{B} := \Phi^{sT}B$ denotes a reduced input matrix, $\hat{a}_{ref} := \Phi^{sT}A\,x_{ref}$ denotes a reduced reference state vector, and $\hat{x}_0 := \Phi^{sT}(x_0 - x_{ref})$ denotes a reduced initial condition. Once $\Phi^s$ and $x_{ref}$ are known, the reduced operators, $\hat{A}$, $\hat{B}$, $\hat{a}_{ref}$, and $\hat{x}_0$, can be pre-computed. With the pre-computed operators, the system (10) can be solved fairly quickly, for example, by applying the backward Euler time integrator:

  $(I - \Delta t_n\hat{A})\,\hat{x}_n = \hat{x}_{n-1} + \Delta t_n(\hat{B}\,u_n + \hat{a}_{ref}),$  (11)

where $\hat{x}_n := \hat{x}(t_n)$. Then the output vector, $y_n$, can be computed as

  $y_n \approx C\,x_{ref} + \hat{C}\,\hat{x}_n,$  (12)

where $\hat{C} := C\,\Phi^s$ denotes a reduced output matrix. The usual choices for $x_{ref}$ include $x_0$, zero, and some kind of average state. Note that if $x_0$ is used as $x_{ref}$, then $\hat{x}_0 = 0$, independent of $\mu$, which is convenient. The POD for generating the spatial basis is described in Section 4.
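A minimal NumPy sketch of this offline/online split follows. It assumes an orthonormal basis Phi_s is already available (its construction is Section 4); all function and variable names are our own:

# Sketch of the spatial Galerkin ROM (7)-(12).
import numpy as np

def build_spatial_rom(A, B, C, Phi_s, x_ref):
    """Offline stage: precompute the reduced operators of Eqs. (10) and (12)."""
    A_r = Phi_s.T @ A @ Phi_s           # reduced system matrix
    B_r = Phi_s.T @ B                   # reduced input matrix
    a_ref = Phi_s.T @ (A @ x_ref)       # reduced reference-state vector
    C_r = C @ Phi_s                     # reduced output matrix
    return A_r, B_r, a_ref, C_r

def march_rom(A_r, B_r, a_ref, Phi_s, x0, x_ref, u, dt):
    """Online stage: backward Euler on the reduced system, Eq. (11)."""
    ns = A_r.shape[0]
    xh = Phi_s.T @ (x0 - x_ref)         # reduced initial condition
    M = np.eye(ns) - dt * A_r
    traj = []
    for un in u:                        # u: sequence of input vectors
        xh = np.linalg.solve(M, xh + dt * (B_r @ un + a_ref))
        traj.append(x_ref + Phi_s @ xh)  # lift back to the full space, Eq. (7)
    return np.array(traj)

Every solve in the online stage involves only an ns-by-ns matrix, independent of the full spatial dimension.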

3.2 Space–time reduced order models

The space–time formulation, (4), can be reduced by approximating the space–time state variables as a linear combination of a small number of space–time basis vectors, $\phi_{i(k)}^{st}\in\mathbb{R}^{N_sN_t}$, with $n_sn_t \ll N_sN_t$, i.e.,

  $x^{st} \approx \tilde{x}^{st} := \Phi^{st}\,\hat{x}^{st},$  (13)

where the space–time basis, $\Phi^{st}\in\mathbb{R}^{N_sN_t\times n_sn_t}$, is defined as

  $\Phi^{st} := [\,\phi_{1(1)}^{st},\ \dots,\ \phi_{1(n_t)}^{st},\ \dots,\ \phi_{n_s(1)}^{st},\ \dots,\ \phi_{n_s(n_t)}^{st}\,],$  (14)

where $i\in\{1,\dots,n_s\}$ indexes the spatial modes and $k\in\{1,\dots,n_t\}$ indexes the temporal modes. The space–time reduced coordinate vector is denoted as $\hat{x}^{st}\in\mathbb{R}^{n_sn_t}$. Substituting (13) into (4) gives an over-determined system of equations:

  $\mathcal{A}\,\Phi^{st}\hat{x}^{st} = \mathcal{B}\,u^{st} + x_0^{st},$  (15)

which can be closed, for example, by Galerkin projection, i.e., left-multiplying both sides of (15) by $\Phi^{stT}$, giving the reduced system of equations:

  $\hat{\mathcal{A}}\,\hat{x}^{st} = \hat{b}^{st} + \hat{x}_0^{st},$  (16)

where $\hat{\mathcal{A}} := \Phi^{stT}\mathcal{A}\Phi^{st}$ denotes a reduced space–time system matrix, $\hat{b}^{st} := \Phi^{stT}\mathcal{B}u^{st}$ denotes a reduced space–time input vector, and $\hat{x}_0^{st} := \Phi^{stT}x_0^{st}$ denotes a reduced space–time initial state vector. Once $\Phi^{st}$ is known, the space–time reduced operators, $\hat{\mathcal{A}}$, $\hat{b}^{st}$, and $\hat{x}_0^{st}$, can be pre-computed, but their computation involves a number of operations in $\mathcal{O}(N_sN_tn_sn_t)$, which can be large. Sec. 4.3 explores a block structure of $\Phi^{st}$ that shows an efficient way of constructing the reduced space–time operators without explicitly forming the full-size space–time operators, such as $\mathcal{A}$, $\mathcal{B}$, $x_0^{st}$, and $\Phi^{st}$.
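For problems small enough to form $\Phi^{st}$ explicitly, the naive Galerkin reduction of (15)-(16) is a few lines of NumPy; the sketch below (our own names, with cal_A and rhs as in Eq. (4)) is useful as a reference implementation against which the efficient block construction of Section 4.3 can be verified:

# Naive space-time Galerkin reduction (15)-(16); only feasible for small problems.
import numpy as np

def reduce_space_time(cal_A, rhs, Phi_st):
    A_hat = Phi_st.T @ (cal_A @ Phi_st)     # reduced space-time matrix; O(Ns*Nt*ns*nt) work
    b_hat = Phi_st.T @ rhs                  # reduced space-time right-hand side
    xh_st = np.linalg.solve(A_hat, b_hat)   # tiny (ns*nt)-by-(ns*nt) solve
    return Phi_st @ xh_st                   # approximate space-time state, Eq. (13)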

4 Basis generation

4.1 Proper orthogonal decomposition

Figure 1: Illustration of the spatial and temporal bases construction using SVD, with $n_\mu = 3$ parameter samples. The right singular vector, $v_i$, describes three different temporal behaviors of a left singular basis vector $u_i$, i.e., three different temporal behaviors of a spatial mode. Each temporal behavior is denoted as $q_{i,1}$, $q_{i,2}$, and $q_{i,3}$.

We follow the method of snapshots first introduced by Sirovich [35]. Let $\{\mu_1,\dots,\mu_{n_\mu}\}$ be a set of parameter samples where we run full order model simulations. Let $X_k := [\,x_1^{(k)},\ \dots,\ x_{N_t}^{(k)}\,]\in\mathbb{R}^{N_s\times N_t}$, $k = 1,\dots,n_\mu$, be a full order model state solution matrix for a sample parameter value, $\mu_k$. Then a snapshot matrix, $X\in\mathbb{R}^{N_s\times n_\mu N_t}$, is defined by concatenating all the state solution matrices, i.e.,

  $X := [\,X_1,\ X_2,\ \dots,\ X_{n_\mu}\,].$  (17)

The spatial basis from POD is an optimally compressed representation of $X$ in the sense that it minimizes the difference between the original snapshot matrix and its projection onto the subspace spanned by the basis, $\Phi$:

  $\Phi^s = \underset{\Phi\in\mathbb{R}^{N_s\times n_s},\ \Phi^T\Phi = I}{\arg\min}\ \|X - \Phi\Phi^TX\|_F^2,$  (18)

where $\|\cdot\|_F$ denotes the Frobenius norm. The solution of POD can be obtained by setting $\Phi^s = U(:,1\!:\!n_s)$, in MATLAB notation, where $U$ is the left singular matrix of the following thin Singular Value Decomposition (SVD) with $r = \mathrm{rank}(X)$:

  $X = U\Sigma V^T$  (19)
  $\phantom{X} = \sum_{i=1}^{r}\sigma_i\,u_i\,v_i^T,$  (20)

where $U\in\mathbb{R}^{N_s\times r}$ and $V\in\mathbb{R}^{n_\mu N_t\times r}$ are orthogonal matrices and $\Sigma\in\mathbb{R}^{r\times r}$ is a diagonal matrix with singular values on its diagonal. The equivalent summation form is written in (20), where $\sigma_i$ is the $i$th singular value, and $u_i$ and $v_i$ are the $i$th left and right singular vectors, respectively. Note that $v_i$ describes the temporal behavior of $u_i$ for each parameter sample. For example, Figure 1 illustrates the case of $n_\mu = 3$, where the three segments of $v_i$ describe three different temporal behaviors of a specific spatial basis vector, i.e., $u_i$. For general $n_\mu$, we note that $v_i$ describes $n_\mu$ different temporal behaviors of the $i$th spatial basis vector, $u_i$. We set $Q_i := [\,q_{i,1},\ \dots,\ q_{i,n_\mu}\,]\in\mathbb{R}^{N_t\times n_\mu}$ to be the $i$th temporal snapshot matrix, where $q_{i,k}\in\mathbb{R}^{N_t}$ denotes the segment of $\sigma_iv_i$ associated with the $k$th parameter sample, for $k = 1,\dots,n_\mu$. We apply SVD on $Q_i$:

  $Q_i = U_i^t\,\Sigma_i^t\,V_i^{tT}.$  (21)

Then, the temporal basis for the $i$th spatial basis vector can be set as $\Phi_i^t = U_i^t(:,1\!:\!n_t)$ in MATLAB notation. Finally, a space–time basis vector, $\phi_{i(k)}^{st}$, in (14) can be constructed as

  $\phi_{i(k)}^{st} := \phi_{i,k}^t\otimes\phi_i^s,$  (22)

where $\phi_i^s$ denotes the $i$th spatial basis vector and $\phi_{i,k}^t$ denotes the $k$th temporal basis vector that describes a temporal behavior of $\phi_i^s$. The computational cost of the SVD for the snapshot matrix, $X$, assuming $N_s \gg n_\mu N_t$, is $\mathcal{O}(N_sn_\mu^2N_t^2)$, and the computational cost of the SVD for the temporal snapshot matrices, $Q_i$, $i = 1,\dots,n_s$, is $\mathcal{O}(N_tn_\mu^2)$ each. For a large-scale problem, this may be a formidable task. Thus, we use an incremental SVD, where a rank-one update of the existing SVD is achieved in a much more memory-efficient way than the thin SVD in Eq. (19). The incremental SVD procedure is explained in Section 4.2.
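The construction of Eqs. (17)-(22) can be sketched in a few lines of NumPy. The function below uses our own names and assumes the snapshot columns are ordered parameter-by-parameter, as in Eq. (17):

# Sketch of the space-time POD construction of Section 4.1.
import numpy as np

def space_time_pod(X, Nt, n_mu, ns, nt):
    """X: Ns x (n_mu*Nt) snapshot matrix; returns Phi_s and per-mode temporal bases."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # thin SVD, Eq. (19)
    Phi_s = U[:, :ns]                                  # spatial basis, Eq. (18)
    temporal_bases = []
    for i in range(ns):
        # i-th temporal snapshot matrix Q_i, Nt x n_mu: one column per
        # parameter sample, taken from the scaled right singular vector.
        Q_i = (s[i] * Vt[i]).reshape(n_mu, Nt).T
        Ut, _, _ = np.linalg.svd(Q_i, full_matrices=False)
        temporal_bases.append(Ut[:, :nt])              # temporal basis, Eq. (21)
    return Phi_s, temporal_bases

# A space-time basis vector, Eq. (22), if ever needed explicitly:
# phi_st = np.kron(temporal_bases[i][:, k], Phi_s[:, i])

Note that nt must not exceed min(Nt, n_mu), the maximal rank of each Q_i, and that Section 4.3 shows why the explicit Kronecker product in the last comment should be avoided in practice.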

POD is related to principal component analysis in statistics [22] and the Karhunen–Loève expansion [26] in stochastic analysis. Since the objective function in (18) does not change even when $\Phi$ is post-multiplied by an arbitrary orthogonal matrix, the POD procedure seeks the optimal $n_s$-dimensional subspace that captures the snapshots in the least-squares sense. For more details on POD, we refer to [6, 21, 23].

4.2 Incremental space–time reduced basis

An incremental SVD is an efficient way of updating the existing singular value decomposition when a new snapshot vector, i.e., a column vector, is added. For a time dependent problem, we start with the first time step solution with the first parameter vector, i.e., $x_1^{(1)}$. If its norm is big enough (i.e., $\|x_1^{(1)}\| > \epsilon$), then we set the first singular value $\sigma_1 = \|x_1^{(1)}\|$, the first left singular vector to be the normalized first snapshot vector, i.e., $u_1 = x_1^{(1)}/\|x_1^{(1)}\|$, and the right singular vector to be $v_1 = 1$. Otherwise, we set them empty, i.e., $\Sigma = [\,]$, $U = [\,]$, and $V = [\,]$. This initializing process is described in Algorithm 1. We pass $k$ to the initializingIncrementalSVD function as an input argument to indicate that the $k$th snapshot vector is being handled. Also, the rank of the current SVD is denoted as $r$. In general, $r \le k$ because a snapshot vector will not be included if it is too small (i.e., Line 1 in Algorithm 1), if it is linearly dependent on the existing basis (i.e., Lines 9 and 13 in Algorithm 2), or if it generates a small singular value (i.e., Line 18 in Algorithm 2).

Let's assume that we have the SVD from the previous snapshot vectors, i.e., $U\Sigma V^T$, whose rank is $r$. If a new snapshot vector, $x$ (e.g., the next time step solution with the first sample parameter value), needs to be added to the existing SVD, the following factorization can be used [9]:

  $d := U^Tx, \qquad p := \|x - Ud\|, \qquad j := (x - Ud)/p,$  (23)
  $[\,U\Sigma V^T\ \ x\,] = [\,U\ \ j\,]\begin{bmatrix}\Sigma & d \\ 0 & p\end{bmatrix}\begin{bmatrix}V & 0 \\ 0 & 1\end{bmatrix}^T,$  (24)

where $d$ denotes the reduced coordinates of $x$ projected onto the subspace spanned by $U$, $p$ denotes the norm of the difference between $x$ and its projection, and $j$ denotes a new orthogonal vector due to the incoming vector, $x$. Note that the left and right matrices of the factorization, i.e., $[\,U\ \ j\,]$ and $\begin{bmatrix}V & 0 \\ 0 & 1\end{bmatrix}$, are orthogonal matrices. Let $Y$ denote the middle matrix of the factorization, i.e.,

  $Y := \begin{bmatrix}\Sigma & d \\ 0 & p\end{bmatrix}.$  (25)

The matrix, $Y$, is almost diagonal except for $d$ in the upper right block, i.e., it is one-column-bordered diagonal. Its size, $(r+1)\times(r+1)$, is not in $\mathcal{O}(N_s)$. Thus, the SVD of $Y$ is computationally cheap, i.e., $\mathcal{O}(r^3)$. Let the SVD of $Y$ be

  $Y = U_Y\,\Sigma_Y\,V_Y^T,$  (26)

where $U_Y$ denotes the left singular matrix, $\Sigma_Y$ denotes the singular value matrix, and $V_Y$ denotes the right singular matrix of $Y$. Replacing $Y$ in Eq. (24) with (26) gives

  $[\,U\Sigma V^T\ \ x\,] = \big([\,U\ \ j\,]U_Y\big)\,\Sigma_Y\,\Big(\begin{bmatrix}V & 0 \\ 0 & 1\end{bmatrix}V_Y\Big)^T$  (27)
  $\phantom{[\,U\Sigma V^T\ \ x\,]} = U'\,\Sigma'\,V'^T,$  (28)

where $U' := [\,U\ \ j\,]U_Y$ denotes the updated left singular matrix, $\Sigma' := \Sigma_Y$ denotes the updated singular value matrix, and $V' := \begin{bmatrix}V & 0 \\ 0 & 1\end{bmatrix}V_Y$ denotes the updated right singular matrix. This updating algorithm is described in Algorithm 2.

Algorithm 2 also checks whether $x$ is numerically linearly dependent on the current left singular vectors. If $p < \epsilon$, then we consider it linearly dependent. Thus, we set $p = 0$ in $Y$, i.e., Line 10 of Algorithm 2. Then we only update the first $r$ components of the singular matrices in Line 14 of Algorithm 2. Otherwise, we follow the update form in Eq. (28) as in Line 16 of Algorithm 2.

Lines 18-20 in Algorithm 2 check whether the updated smallest singular value is small. If it is, we neglect that particular singular value and the corresponding components of the left and right singular matrices. This is because a small singular value causes a large error in the left and right singular matrices [18].

Although the orthogonality of the updated left singular matrix, $U'$, is guaranteed in infinite precision by the product of two orthogonal matrices in Line 14 or 16 of Algorithm 2, it is not guaranteed in finite precision. Thus, we heuristically check the orthogonality in Lines 21-24 of Algorithm 2 by checking the inner product of the first and last columns of $U'$. If orthogonality does not hold, then we orthogonalize the columns by QR factorization. Here $\epsilon_{mach}$ denotes the unit roundoff (e.g., eps in MATLAB).

The spatial basis can be set after all the incremental steps:

  $\Phi^s = U(:,1\!:\!n_s).$  (29)

If all the time step solutions are taken incrementally and sequentially from the different high-fidelity time dependent simulations, then the right singular matrix, $V$, holds the different temporal behaviors for each spatial basis vector. For example, $v_i$ describes $n_\mu$ different temporal behaviors of $u_i$. As in Section 4.1, the $i$th temporal snapshot matrix can be defined as

  $Q_i := [\,q_{i,1},\ \dots,\ q_{i,n_\mu}\,]\in\mathbb{R}^{N_t\times n_\mu},$  (30)

where $q_{i,k}\in\mathbb{R}^{N_t}$ denotes the segment of $\sigma_iv_i$ associated with the $k$th parameter sample, for $k = 1,\dots,n_\mu$. If we take the SVD of $Q_i$, then the temporal basis for the $i$th spatial basis vector can be set as

  $\Phi_i^t = U_i^t(:,1\!:\!n_t).$  (31)

[$\Sigma$, $U$, $V$] = initializingIncrementalSVD($x$, $\epsilon$, $k$)
Input: $x\in\mathbb{R}^{N_s}$, $\epsilon\in\mathbb{R}$, $k\in\mathbb{N}$
Output: $\Sigma$, $U$, $V$

1:  if $\|x\| > \epsilon$ then
2:     $\Sigma = [\,\|x\|\,]$, $U = [\,x/\|x\|\,]$, and $V = [\,1\,]$
3:  else
4:     $\Sigma = [\,]$, $U = [\,]$, and $V = [\,]$
5:  end if
Algorithm 1 Initializing incremental SVD

[$\Sigma$, $U$, $V$] = incrementalSVD($x$, $\Sigma$, $U$, $V$, $\epsilon$, $\epsilon_{sv}$, $k$)
Input: $x\in\mathbb{R}^{N_s}$, $\Sigma$, $U$, $V$, $\epsilon$, $\epsilon_{sv}$, $k$
Output: $\Sigma$, $U$, $V$

1:  if $k = 1$ or $r = 0$ then
2:     [$\Sigma$, $U$, $V$] = initializingIncrementalSVD($x$, $\epsilon$, $k$), i.e., apply Algorithm 1
3:     return
4:  end if
5:  $d = U^Tx$
6:  $e = x - Ud$
7:  $p = \|e\|$
8:  $j = e/p$
9:  if $p < \epsilon$ then
10:     $p = 0$
11:  end if
12:  $Y = \begin{bmatrix}\Sigma & d \\ 0 & p\end{bmatrix}$, $[U_Y, \Sigma_Y, V_Y] = \mathrm{SVD}(Y)$   {SVD update}
13:  if $p = 0$ then
14:     $U = U\,U_Y(1\!:\!r,1\!:\!r)$,  $\Sigma = \Sigma_Y(1\!:\!r,1\!:\!r)$, and $V = \begin{bmatrix}V & 0 \\ 0 & 1\end{bmatrix}V_Y(:,1\!:\!r)$
15:  else
16:     $U = [\,U\ \ j\,]U_Y$,  $\Sigma = \Sigma_Y$,  and $V = \begin{bmatrix}V & 0 \\ 0 & 1\end{bmatrix}V_Y$
17:  end if   {Neglect small singular values: truncation}
18:  if $\Sigma(r,r) < \epsilon_{sv}$ then
19:     $U = U(:,1\!:\!r\!-\!1)$,  $\Sigma = \Sigma(1\!:\!r\!-\!1,1\!:\!r\!-\!1)$,  $V = V(:,1\!:\!r\!-\!1)$
20:  end if   {Orthogonalize if necessary}
21:  if $|U(:,1)^TU(:,r)| > \min(\epsilon,\ \epsilon_{mach}N_s)$ then
22:     $[Q, R] = \mathrm{qr}(U)$
23:     $U = Q$
24:  end if
Algorithm 2 Incremental SVD
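For reference, a compact Python transcription of Algorithms 1-2 (Brand's rank-one SVD update [9]) follows; the tolerances, names, and the exact re-orthogonalization trigger are our own conventions:

# Sketch: incremental SVD update following Algorithms 1-2.
import numpy as np

def incremental_svd(U, S, V, x, eps, eps_sv):
    """Add snapshot x to the SVD U*diag(S)*V^T, truncating small directions."""
    if U is None or U.shape[1] == 0:                # Algorithm 1: initialize
        nrm = np.linalg.norm(x)
        if nrm > eps:
            return x[:, None] / nrm, np.array([nrm]), np.ones((1, 1))
        return U, S, V
    d = U.T @ x                                     # projection coefficients
    e = x - U @ d
    p = np.linalg.norm(e)                           # distance to span(U)
    if p < eps:                                     # linearly dependent snapshot
        p = 0.0
    r = S.size
    Y = np.zeros((r + 1, r + 1))                    # bordered matrix, Eq. (25)
    Y[:r, :r] = np.diag(S)
    Y[:r, r] = d
    Y[r, r] = p
    Uy, Sy, Vyt = np.linalg.svd(Y)                  # cheap O(r^3) SVD, Eq. (26)
    j = e / p if p > 0 else np.zeros_like(x)
    U_new = np.hstack([U, j[:, None]]) @ Uy         # Eqs. (27)-(28)
    V_new = np.block([[V, np.zeros((V.shape[0], 1))],
                      [np.zeros((1, V.shape[1])), np.ones((1, 1))]]) @ Vyt.T
    if p == 0:                                      # keep rank r (Line 14)
        U_new, Sy, V_new = U_new[:, :r], Sy[:r], V_new[:, :r]
    if Sy[-1] < eps_sv:                             # drop tiny singular values (Lines 18-20)
        U_new, Sy, V_new = U_new[:, :-1], Sy[:-1], V_new[:, :-1]
    if abs(U_new[:, 0] @ U_new[:, -1]) > eps:       # re-orthogonalize if drifted (Lines 21-24)
        U_new, _ = np.linalg.qr(U_new)
    return U_new, Sy, V_new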

4.3 Space–time reduced basis in block structure

Forming the space–time basis in Eq. (14) through the Kronecker product in Eq. (22) requires $N_sN_tn_sn_t$ multiplications. This is troublesome, not only because it is computationally costly, but also because it requires too much memory. Fortunately, a block structure of the space–time basis in (14) is available:

  $\Phi^{st} = \begin{bmatrix}\Phi^sT^1 \\ \Phi^sT^2 \\ \vdots \\ \Phi^sT^{N_t}\end{bmatrix},$  (32)

where the $n$th time step temporal basis matrix, $T^n\in\mathbb{R}^{n_s\times n_sn_t}$, is defined as the block-diagonal matrix

  $T^n := \begin{bmatrix}(t_1^n)^T & & \\ & \ddots & \\ & & (t_{n_s}^n)^T\end{bmatrix}, \qquad t_i^n := \begin{bmatrix}(\phi_{i,1}^t)_n \\ \vdots \\ (\phi_{i,n_t}^t)_n\end{bmatrix},$  (33)

where $(\phi_{i,k}^t)_n$ denotes the $n$th element of $\phi_{i,k}^t$. Thanks to this block structure, the space–time reduced order operators, such as $\hat{\mathcal{A}}$, $\hat{b}^{st}$, and $\hat{x}_0^{st}$, can be formed without explicitly forming $\Phi^{st}$. For example, the reduced space–time system matrix, $\hat{\mathcal{A}}$, can be computed, using the block structures, as

  $\hat{\mathcal{A}} = \Phi^{stT}\mathcal{A}\Phi^{st} = \sum_{n=1}^{N_t}(T^n)^T\hat{\mathcal{A}}^{(n,n)}T^n - \sum_{n=2}^{N_t}(T^n)^TT^{n-1},$  (34)

where the $(n,n)$th block matrix, $\hat{\mathcal{A}}^{(n,n)}\in\mathbb{R}^{n_s\times n_s}$, is defined as

  $\hat{\mathcal{A}}^{(n,n)} := \Phi^{sT}(I - \Delta t_nA)\Phi^s = I - \Delta t_n\hat{A}.$  (35)

Note that the computations of $(T^n)^T\hat{\mathcal{A}}^{(n,n)}T^n$ and $(T^n)^TT^{n-1}$ are cheap because of the block-diagonal structure of $T^n$: each $(i,j)$th block requires only $\mathcal{O}(n_t^2)$ scalar products. Additionally, $\hat{A} = \Phi^{sT}A\Phi^s$ is the reduced order system operator that is used for the spatial ROMs, e.g., see Eq. (10), and can be pre-computed. This implies that the construction of $\hat{\mathcal{A}}$ requires a computational cost that is only a bit larger than the one for the spatial ROM system matrix; the additional cost over the spatial ROM system matrix construction is $\mathcal{O}(N_tn_s^2n_t^2)$.
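A minimal NumPy sketch of Eqs. (33)-(35) follows. It assumes $\Phi^s$ has orthonormal columns, takes the precomputed spatial ROM operator A_r $= \hat{A}$ as input, and uses our own names (temporal_bases[i] is the $N_t\times n_t$ matrix $\Phi_i^t$ from Section 4.1):

# Sketch: reduced space-time system matrix, Eq. (34), without forming Phi_st.
import numpy as np

def temporal_block(temporal_bases, n):
    """The ns x (ns*nt) block-diagonal matrix T^n of Eq. (33)."""
    ns, nt = len(temporal_bases), temporal_bases[0].shape[1]
    T = np.zeros((ns, ns * nt))
    for i in range(ns):
        T[i, i * nt:(i + 1) * nt] = temporal_bases[i][n, :]  # n-th row of mode i
    return T

def reduced_space_time_matrix(A_r, temporal_bases, dts):
    ns, nt = len(temporal_bases), temporal_bases[0].shape[1]
    I = np.eye(ns)
    A_hat = np.zeros((ns * nt, ns * nt))
    T_prev = None
    for n, dt in enumerate(dts):                    # loop over time steps
        T_n = temporal_block(temporal_bases, n)
        A_hat += T_n.T @ (I - dt * A_r) @ T_n       # diagonal blocks, Eq. (35)
        if T_prev is not None:
            A_hat -= T_n.T @ T_prev                 # -I subdiagonal blocks
        T_prev = T_n
    return A_hat

The result can be checked against the naive Phi_st.T @ cal_A @ Phi_st of Section 3.2 on a small problem.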

Similarly, the reduced space–time input vector, $\hat{b}^{st}$, can be computed as

  $\hat{b}^{st} = \Phi^{stT}\mathcal{B}u^{st} = \sum_{n=1}^{N_t}(T^n)^T\hat{b}_n,$  (36)

where the $n$th block vector, $\hat{b}_n\in\mathbb{R}^{n_s}$, is given as

  $\hat{b}_n := \Delta t_n\,\hat{B}\,u_n.$  (37)

Note that $\hat{B} = \Phi^{sT}B$ is used for the spatial ROMs, e.g., see Eq. (10). Also, $\hat{B}u_n$ needs to be computed in the spatial ROMs. These can be pre-computed. The other operations are related to the row-wise scaling with the diagonal term, $\Delta t_n$, whose computational cost is $\mathcal{O}(N_tn_sn_t)$. If $u$ is constant throughout the whole time interval, i.e., $u_n = u$ for all $n$, then Eqs. (36)-(37) can be further reduced to

  $\hat{b}^{st} = \Big(\sum_{n=1}^{N_t}\Delta t_n(T^n)^T\Big)\hat{B}\,u,$  (38)

where one can compute the summation term first and then multiply it with the pre-computed term, $\hat{B}u$.
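A sketch of Eqs. (36)-(38), reusing the hypothetical temporal_block helper from the previous sketch and our own names (B_r $= \hat{B}$, u a sequence of input vectors):

# Sketch: reduced space-time input vector without forming Phi_st.
import numpy as np

def reduced_space_time_input(B_r, u, temporal_bases, dts):
    ns, nt = len(temporal_bases), temporal_bases[0].shape[1]
    b_hat = np.zeros(ns * nt)
    for n, dt in enumerate(dts):                    # Eqs. (36)-(37)
        T_n = temporal_block(temporal_bases, n)
        b_hat += dt * (T_n.T @ (B_r @ u[n]))
    return b_hat

def reduced_space_time_input_const(B_r, u, temporal_bases, dts):
    """Constant input u_n = u, Eq. (38): sum the scaled temporal blocks first,
    then apply the single precomputed vector B_r @ u."""
    T_sum = sum(dt * temporal_block(temporal_bases, n).T
                for n, dt in enumerate(dts))
    return T_sum @ (B_r @ u)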