1 Introduction
Many computational models for physics simulations are formulated as linear dynamical systems. Examples of linear dynamical systems include computational models for signal propagation and interference in electric circuits, storm surge prediction models ahead of an advancing hurricane, vibration analysis in large structures, thermal analysis in various media, neuro-transmission models in the nervous system, various computational models for micro-electro-mechanical systems, and various particle transport simulations. Depending on the complexity of the geometry and the desired fidelity level, these problems easily become large-scale. For example, the Boltzmann Transport Equation (BTE) has seven independent variables, i.e., three spatial variables, two directional variables, one energy variable, and one time variable, so the BTE can easily lead to a high-dimensional discretized problem. Additionally, complex geometry (e.g., reactors with thousands of pins and shields) can lead to a large-scale problem. As an example, a problem with 20 angular directions, a cubic spatial domain of 100 x 100 x 100 elements, 16 energy groups, and 100 time steps leads to 32 billion unknowns. The large scale hinders a fast forward solve and makes multi-query settings, such as uncertainty quantification, design optimization, and parameter studies, intractable. Therefore, developing a Reduced Order Model (ROM) that accelerates the solution process without losing much accuracy is essential.
There are several model order reduction approaches available for linear dynamical systems: (i) Balanced truncation [29, 28] in the control theory community is the most famous one. It has explicit error bounds and guarantees stability. However, it requires the solution of two Lyapunov equations to construct bases, which is a formidable task in large-scale problems. (ii) The moment-matching methods provide a computationally efficient framework using Krylov subspace techniques in an iterative fashion, where only matrix-vector multiplications are required. Within this family, the optimal tangential interpolation for nonparametric systems is achieved by the iterative rational Krylov algorithm (IRKA).
Often there are many data available, either from experiments or from high-fidelity simulations. Those data contain valuable information about the system of interest. Therefore, data-driven methods can maximize the usage of the existing data and enable the construction of an optimal ROM. Dynamic Mode Decomposition (DMD) is a data-driven approach that generates reduced modes that embed an intrinsic temporal behavior. It was first developed by Peter Schmid in [34] and popularized by many other scientists. For more detailed information about DMD, we refer to [37]. As another data-driven approach, Proper Orthogonal Decomposition (POD) [6] gives a data-driven optimal basis through the method of snapshots. However, most POD-based ROMs for linear dynamical systems apply a spatial projection only; the temporal complexity is still proportional to the temporal discretization of the high-fidelity model. In order to achieve an optimal reduction, a space–time ROM needs to be built, where both spatial and temporal projections are applied. In the literature, some space–time ROMs are available [14, 38, 41, 40], but they have only been applied to small-scale problems.
Recently, several ROM techniques have been applied to various types of transport equations. Starting with Wols's work, which used POD to simulate the dynamics of an accelerator driven system (ADS) in 2010 [39], the number of publications on ROMs for transport problems has increased. For example, a POD-based ROM for eigenvalue problems to calculate dominant eigenvalues in reactor physics applications was developed by Buchan et al.
There have been some interesting DMD works for transport problems. McClarren and Haut in [27] used DMD to estimate the slowly decaying modes from Richardson iteration and remove them from the solution. Related DMD work for transport problems was done by Hardy, Morel, and Cory.
Several papers use the Proper Generalized Decomposition (PGD) for transport problems. For example, Prince and Ragusa [30] applied the PGD to steady-state mono-energetic neutron transport equations, where the angular flux was sought as a finite sum of separable one-dimensional functions. However, the PGD was found to be ineffective for pure absorption problems because a large number of terms was required in the separated representation. Prince and Ragusa [31] also applied the PGD to the uncertainty quantification process in neutron diffusion–reaction problems. Dominesey and Ji used the PGD method to separate space and angle in [16] and to separate space and energy in [17].
However, all these model order reduction techniques for transport equations apply only spatial projection, ignoring the potential reduction in the temporal dimension. In this paper, a Space–Time Reduced Order Model (ST-ROM) is developed and applied to large-scale linear dynamical problems. Our ST-ROM achieves complexity reduction in both the space and time dimensions, which enables a much greater maximal speed-up without a significant loss of accuracy. It is amenable to any time integrator. It follows the framework initially published in [14], but makes a new contribution by discovering a block structure in the space–time basis that enables an efficient implementation of the ST-ROM. The block structure allows us to avoid building the space–time basis explicitly. It enables the construction of the space–time reduced operators at a small additional cost over the space-only reduced operators. In turn, this allows us to apply the space–time ROM to a large-scale linear dynamical problem.
The paper is organized in the following way: Section 1.1 introduces useful mathematical notations that are used throughout the paper. Section 2 describes a parametric linear dynamical system and how to solve the high-fidelity model in a classical time-marching fashion. The full-order space–time formulation is also presented in Section 2; it is reduced to form our space–time ROM in Section 3.2. Section 3 introduces both spatial and space–time ROMs. The basis generation is described in Section 4, where the traditional POD and the incremental POD are explained in Sections 4.1 and 4.2, respectively. Section 4.3 reveals a block structure of the space–time reduced basis and derives each space–time reduced operator in terms of the blocks. We apply our space–time ROM to a large-scale linear dynamical problem, i.e., a neutron transport simulation that solves the BTE. Section 6 explains the discretization of the Boltzmann transport equation, using the multigroup energy discretization, associated Legendre polynomials for surface harmonics, the simple corner balance discretization for space, and the discrete ordinates method for direction. Finally, we present our numerical results in Section 7 and conclude the paper with a summary and future work in Section 8.
1.1 Notations
We review some of the notation used throughout the paper. A norm is denoted as $\|\cdot\|$. For matrices $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$, the Kronecker (or tensor) product of $A$ and $B$ is the $mp \times nq$ matrix denoted by

$A \otimes B := \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix},$

where $a_{ij}$ denotes the $(i,j)$-th element of $A$. Kronecker products have many interesting properties. We list here the ones relevant to our discussion:

- If $A$ and $B$ are nonsingular, then $A \otimes B$ is nonsingular with $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$,
- $(A \otimes B)^T = A^T \otimes B^T$,
- Given matrices $A$, $B$, $C$, and $D$, $(A \otimes B)(C \otimes D) = AC \otimes BD$, as long as both sides of the equation make sense,
- $(A \otimes B) \otimes C = A \otimes (B \otimes C)$, and
- $(A + B) \otimes C = A \otimes C + B \otimes C$.
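As a quick sanity check, these identities can be verified numerically. The following NumPy sketch (ours, not from the paper) exercises each property on small random matrices.

```python
# Numerical sanity check of the Kronecker-product identities listed above.
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
C, D = rng.standard_normal((3, 2)), rng.standard_normal((4, 5))

# (A (x) B)^{-1} = A^{-1} (x) B^{-1} for nonsingular A, B
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
# (A (x) B)^T = A^T (x) B^T
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
# Mixed product: (A (x) B)(C (x) D) = (AC) (x) (BD)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
# Associativity: (A (x) B) (x) C = A (x) (B (x) C)
assert np.allclose(np.kron(np.kron(A, B), C), np.kron(A, np.kron(B, C)))
# Distributivity: (A + A') (x) B = A (x) B + A' (x) B
A2 = rng.standard_normal((3, 3))
assert np.allclose(np.kron(A + A2, B), np.kron(A, B) + np.kron(A2, B))
```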
2 Linear dynamical systems
Parametric continuous dynamical systems that are linear in the state are considered:

(1) $\dot{\mathbf{x}}(t; \boldsymbol{\mu}) = \mathbf{A}(\boldsymbol{\mu})\mathbf{x}(t; \boldsymbol{\mu}) + \mathbf{B}(\boldsymbol{\mu})\mathbf{u}(t), \qquad \mathbf{x}(0; \boldsymbol{\mu}) = \mathbf{x}_0(\boldsymbol{\mu}),$

(2) $\mathbf{y}(t; \boldsymbol{\mu}) = \mathbf{C}(\boldsymbol{\mu})\mathbf{x}(t; \boldsymbol{\mu}),$
where $\boldsymbol{\mu} \in \mathcal{D} \subseteq \mathbb{R}^{n_\mu}$ denotes a parameter vector, $\mathbf{x}(t; \boldsymbol{\mu}) \in \mathbb{R}^{N_s}$ denotes a time dependent state variable function, $\mathbf{x}_0(\boldsymbol{\mu}) \in \mathbb{R}^{N_s}$ denotes an initial state, $\mathbf{u}(t) \in \mathbb{R}^{N_i}$ denotes a time dependent input variable function, and $\mathbf{y}(t; \boldsymbol{\mu}) \in \mathbb{R}^{N_o}$ denotes a time dependent output variable function. The system operators, i.e., $\mathbf{A}(\boldsymbol{\mu}) \in \mathbb{R}^{N_s \times N_s}$, $\mathbf{B}(\boldsymbol{\mu}) \in \mathbb{R}^{N_s \times N_i}$, and $\mathbf{C}(\boldsymbol{\mu}) \in \mathbb{R}^{N_o \times N_s}$, are real valued matrices, independent of the state variables. We assume that the dynamical system above is stable, i.e., the eigenvalues of $\mathbf{A}(\boldsymbol{\mu})$ have strictly negative real parts.
Our methodology works for any time integrator, but for illustration purposes, we apply the backward Euler time integrator to Eq. (1). At the $k$-th time step, the following system of equations is solved:

(3) $(\mathbf{I} - \Delta t^{(k)}\mathbf{A})\mathbf{x}^{(k)} = \mathbf{x}^{(k-1)} + \Delta t^{(k)}\mathbf{B}\mathbf{u}^{(k)},$

where $\mathbf{I} \in \mathbb{R}^{N_s \times N_s}$ denotes an identity matrix, $\mathbf{x}^{(k)} := \mathbf{x}(t^{(k)}; \boldsymbol{\mu})$ denotes the state at the $k$-th time step, $\mathbf{u}^{(k)} := \mathbf{u}(t^{(k)})$, and $\Delta t^{(k)} := t^{(k)} - t^{(k-1)}$ denotes the $k$-th time step size, with $k \in \{1, \ldots, N_t\}$.
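For concreteness, a minimal sketch of this time-marching solve follows; the function and variable names are our own, and a fixed step size is assumed.

```python
# Backward Euler time marching for Eq. (1), following Eq. (3):
# (I - dt*A) x^k = x^{k-1} + dt*B u^k.
import numpy as np

def backward_euler(A, B, x0, u, dt, nt):
    """March nt steps with fixed step size dt; u(k) is the input at step k."""
    Ns = A.shape[0]
    I = np.eye(Ns)
    X = np.empty((Ns, nt + 1))
    X[:, 0] = x0
    for k in range(1, nt + 1):
        # One Ns x Ns linear solve per step; for constant dt the factorization
        # of (I - dt*A) could be computed once and reused.
        X[:, k] = np.linalg.solve(I - dt * A, X[:, k - 1] + dt * (B @ u(k)))
    return X
```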
The single time step formulation in Eq. (3) can be equivalently re-written in the following discretized space–time formulation:

(4) $\boldsymbol{\mathcal{A}}\mathbf{x}_{st} = \mathbf{b}_{st} + \mathbf{g}_{st},$

where the space–time system matrix, $\boldsymbol{\mathcal{A}} \in \mathbb{R}^{N_sN_t \times N_sN_t}$, the space–time state vector, $\mathbf{x}_{st} \in \mathbb{R}^{N_sN_t}$, the space–time input vector, $\mathbf{b}_{st} \in \mathbb{R}^{N_sN_t}$, and the space–time initial vector, $\mathbf{g}_{st} \in \mathbb{R}^{N_sN_t}$, are defined respectively as

(5) $\boldsymbol{\mathcal{A}} := \begin{bmatrix} \mathbf{I} - \Delta t^{(1)}\mathbf{A} & & & \\ -\mathbf{I} & \mathbf{I} - \Delta t^{(2)}\mathbf{A} & & \\ & \ddots & \ddots & \\ & & -\mathbf{I} & \mathbf{I} - \Delta t^{(N_t)}\mathbf{A} \end{bmatrix}, \qquad \mathbf{x}_{st} := \begin{bmatrix} \mathbf{x}^{(1)} \\ \mathbf{x}^{(2)} \\ \vdots \\ \mathbf{x}^{(N_t)} \end{bmatrix},$

(6) $\mathbf{b}_{st} := \begin{bmatrix} \Delta t^{(1)}\mathbf{B}\mathbf{u}^{(1)} \\ \Delta t^{(2)}\mathbf{B}\mathbf{u}^{(2)} \\ \vdots \\ \Delta t^{(N_t)}\mathbf{B}\mathbf{u}^{(N_t)} \end{bmatrix}, \qquad \mathbf{g}_{st} := \begin{bmatrix} \mathbf{x}_0 \\ \mathbf{0} \\ \vdots \\ \mathbf{0} \end{bmatrix}.$
The lower block-triangular structure of $\boldsymbol{\mathcal{A}}$ comes from the backward Euler time integration scheme. Other time integrators give other sparse block structures. One would not solve this space–time system directly, because the specific block structure of $\boldsymbol{\mathcal{A}}$ lets one solve the system in a time-marching fashion. However, if the space–time formulation in Eq. (4) can be reduced and solved efficiently, then one might be interested in solving the reduced space–time system as a whole. Section 3.2 shows that such a reduction is possible.
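The following sketch (assumed names; constant step size) assembles the sparse space–time system of Eqs. (4)-(6) with SciPy.

```python
# Assemble the sparse space-time system of Eqs. (4)-(6) for constant dt:
# block (k,k) is I - dt*A and block (k,k-1) is -I.
import numpy as np
import scipy.sparse as sp

def spacetime_system(A, B, x0, u, dt, nt):
    Ns = A.shape[0]
    diag = sp.csr_matrix(np.eye(Ns) - dt * A)        # repeated diagonal block
    Ast = sp.kron(sp.identity(nt), diag) \
        - sp.kron(sp.eye(nt, nt, k=-1), sp.identity(Ns))
    # Right-hand side: dt*B u^k in each block, plus x0 in the first block row.
    f = np.concatenate([dt * (B @ u(k)) for k in range(1, nt + 1)])
    f[:Ns] += x0
    return Ast.tocsr(), f
```

Solving this system, e.g., with scipy.sparse.linalg.spsolve, reproduces the time-marched trajectory stacked over time as in Eq. (5).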
3 Reduced order models
We consider a projection-based reduced order model for linear dynamical systems. Section 3.1 shows a typical spatial reduced order model. A space–time reduced order model is described in Section 3.2.
3.1 Spatial reduced order models
A projection-based spatial reduced order model approximates the state variables as a linear combination of a small number of spatial basis vectors, $\boldsymbol{\phi}^s_i \in \mathbb{R}^{N_s}$, $i \in \{1, \ldots, n_s\}$, with $n_s \ll N_s$, i.e.,

(7) $\mathbf{x}(t; \boldsymbol{\mu}) \approx \tilde{\mathbf{x}}(t; \boldsymbol{\mu}) := \mathbf{x}_{ref} + \boldsymbol{\Phi}_s\hat{\mathbf{x}}(t; \boldsymbol{\mu}),$

where the spatial basis, $\boldsymbol{\Phi}_s \in \mathbb{R}^{N_s \times n_s}$ with $\boldsymbol{\Phi}_s^T\boldsymbol{\Phi}_s = \mathbf{I}$, is defined as

(8) $\boldsymbol{\Phi}_s := \begin{bmatrix} \boldsymbol{\phi}^s_1 & \cdots & \boldsymbol{\phi}^s_{n_s} \end{bmatrix},$

a reference state is denoted as $\mathbf{x}_{ref} \in \mathbb{R}^{N_s}$, and a time-dependent reduced coordinate vector function is defined as $\hat{\mathbf{x}}(t; \boldsymbol{\mu}) \in \mathbb{R}^{n_s}$. Substituting (7) into (1) gives an over-determined system of equations:

(9) $\boldsymbol{\Phi}_s\dot{\hat{\mathbf{x}}} = \mathbf{A}(\mathbf{x}_{ref} + \boldsymbol{\Phi}_s\hat{\mathbf{x}}) + \mathbf{B}\mathbf{u},$

which can be closed, for example, by Galerkin projection, i.e., left-multiplying both sides of (9) and the initial condition in (1) by $\boldsymbol{\Phi}_s^T$, giving the reduced system of equations and initial conditions:

(10) $\dot{\hat{\mathbf{x}}} = \hat{\mathbf{A}}\hat{\mathbf{x}} + \hat{\mathbf{B}}\mathbf{u} + \hat{\mathbf{a}}_{ref}, \qquad \hat{\mathbf{x}}(0; \boldsymbol{\mu}) = \hat{\mathbf{x}}_0,$

where $\hat{\mathbf{A}} := \boldsymbol{\Phi}_s^T\mathbf{A}\boldsymbol{\Phi}_s \in \mathbb{R}^{n_s \times n_s}$ denotes a reduced system matrix, $\hat{\mathbf{B}} := \boldsymbol{\Phi}_s^T\mathbf{B} \in \mathbb{R}^{n_s \times N_i}$ denotes a reduced input matrix, $\hat{\mathbf{a}}_{ref} := \boldsymbol{\Phi}_s^T\mathbf{A}\mathbf{x}_{ref} \in \mathbb{R}^{n_s}$ denotes a reduced reference state vector, and $\hat{\mathbf{x}}_0 := \boldsymbol{\Phi}_s^T(\mathbf{x}_0 - \mathbf{x}_{ref}) \in \mathbb{R}^{n_s}$ denotes a reduced initial condition. Once $\boldsymbol{\Phi}_s$ and $\mathbf{x}_{ref}$ are known, the reduced operators, $\hat{\mathbf{A}}$, $\hat{\mathbf{B}}$, $\hat{\mathbf{a}}_{ref}$, and $\hat{\mathbf{x}}_0$, can be pre-computed. With the pre-computed operators, the system (10) can be solved fairly quickly, for example, by applying the backward Euler time integrator:

(11) $(\mathbf{I} - \Delta t^{(k)}\hat{\mathbf{A}})\hat{\mathbf{x}}^{(k)} = \hat{\mathbf{x}}^{(k-1)} + \Delta t^{(k)}(\hat{\mathbf{B}}\mathbf{u}^{(k)} + \hat{\mathbf{a}}_{ref}),$

where $\hat{\mathbf{x}}^{(k)} := \hat{\mathbf{x}}(t^{(k)}; \boldsymbol{\mu})$. Then the output vector, $\tilde{\mathbf{y}}$, can be computed as

(12) $\tilde{\mathbf{y}} = \mathbf{C}\mathbf{x}_{ref} + \hat{\mathbf{C}}\hat{\mathbf{x}},$

where $\hat{\mathbf{C}} := \mathbf{C}\boldsymbol{\Phi}_s \in \mathbb{R}^{N_o \times n_s}$ denotes a reduced output matrix. The usual choices for $\mathbf{x}_{ref}$ include $\mathbf{0}$, $\mathbf{x}_0$, and some kind of average quantity. Note that if $\mathbf{x}_0$ is used as $\mathbf{x}_{ref}$, then $\hat{\mathbf{x}}_0 = \mathbf{0}$, independent of $\boldsymbol{\mu}$, which is convenient. The POD for generating the spatial basis is described in Section 4.
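A minimal sketch of this spatial ROM pipeline follows, assuming $\mathbf{x}_{ref} = \mathbf{0}$ and our own function names: the reduced operators of Eq. (10) are pre-computed once, and each step of Eq. (11) is only an $n_s \times n_s$ solve.

```python
# Galerkin spatial ROM of Eqs. (10)-(12) with x_ref = 0:
# precompute the reduced operators once, then march the small system.
import numpy as np

def spatial_rom(A, B, C, Phi, x0, u, dt, nt):
    Ahat = Phi.T @ A @ Phi             # reduced system matrix (n_s x n_s)
    Bhat = Phi.T @ B                   # reduced input matrix
    Chat = C @ Phi                     # reduced output matrix
    xhat = Phi.T @ x0                  # reduced initial condition
    Ins = np.eye(Ahat.shape[0])
    outputs = []
    for k in range(1, nt + 1):
        # Backward Euler on the reduced system, Eq. (11): an n_s x n_s solve.
        xhat = np.linalg.solve(Ins - dt * Ahat, xhat + dt * (Bhat @ u(k)))
        outputs.append(Chat @ xhat)    # reduced output, Eq. (12)
    return np.column_stack(outputs)
```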
3.2 Space–time reduced order models
The space–time formulation, (4), can be reduced by approximating the space–time state variables as a linear combination of a small number of space–time basis vectors, $\boldsymbol{\phi}^{st}_i \in \mathbb{R}^{N_sN_t}$, $i \in \{1, \ldots, n_{st}\}$, with $n_{st} \ll N_sN_t$, i.e.,

(13) $\mathbf{x}_{st} \approx \tilde{\mathbf{x}}_{st} := \boldsymbol{\Phi}_{st}\hat{\mathbf{x}}_{st},$

where the space–time basis, $\boldsymbol{\Phi}_{st} \in \mathbb{R}^{N_sN_t \times n_{st}}$, is defined as

(14) $\boldsymbol{\Phi}_{st} := \begin{bmatrix} \boldsymbol{\phi}^{st}_1 & \cdots & \boldsymbol{\phi}^{st}_{n_{st}} \end{bmatrix},$

where $n_{st} := n_sn_t$, with $n_t \ll N_t$. The space–time reduced coordinate vector is denoted as $\hat{\mathbf{x}}_{st} \in \mathbb{R}^{n_{st}}$. Substituting (13) into (4) gives an over-determined system of equations:

(15) $\boldsymbol{\mathcal{A}}\boldsymbol{\Phi}_{st}\hat{\mathbf{x}}_{st} = \mathbf{b}_{st} + \mathbf{g}_{st},$

which can be closed, for example, by Galerkin projection, i.e., left-multiplying both sides of (15) by $\boldsymbol{\Phi}_{st}^T$, giving the reduced system of equations:

(16) $\hat{\boldsymbol{\mathcal{A}}}\hat{\mathbf{x}}_{st} = \hat{\mathbf{b}}_{st} + \hat{\mathbf{g}}_{st},$

where $\hat{\boldsymbol{\mathcal{A}}} := \boldsymbol{\Phi}_{st}^T\boldsymbol{\mathcal{A}}\boldsymbol{\Phi}_{st} \in \mathbb{R}^{n_{st} \times n_{st}}$ denotes a reduced space–time system matrix, $\hat{\mathbf{b}}_{st} := \boldsymbol{\Phi}_{st}^T\mathbf{b}_{st} \in \mathbb{R}^{n_{st}}$ denotes a reduced space–time input vector, and $\hat{\mathbf{g}}_{st} := \boldsymbol{\Phi}_{st}^T\mathbf{g}_{st} \in \mathbb{R}^{n_{st}}$ denotes a reduced space–time initial state vector. Once $\boldsymbol{\Phi}_{st}$ is known, the space–time reduced operators, $\hat{\boldsymbol{\mathcal{A}}}$, $\hat{\mathbf{b}}_{st}$, and $\hat{\mathbf{g}}_{st}$, can be pre-computed, but their computation involves a number of operations on the order of the space–time dimension $N_sN_t$, which can be large. Sec. 4.3 explores a block structure of $\boldsymbol{\Phi}_{st}$ that shows an efficient way of constructing the reduced space–time operators without explicitly forming the full-size space–time quantities, such as $\boldsymbol{\mathcal{A}}$, $\mathbf{b}_{st}$, $\mathbf{g}_{st}$, and $\boldsymbol{\Phi}_{st}$.
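For illustration, the naive construction of Eq. (16) by explicit projection looks as follows (assumed names); note that it touches all $N_sN_t$ rows of the space–time quantities, which is exactly the cost that the block structure of Section 4.3 avoids.

```python
# Naive space-time Galerkin ROM of Eq. (16) by explicit projection.
# Section 4.3 avoids ever forming Phi_st.
import numpy as np

def spacetime_rom_naive(Ast, b_st, g_st, Phi_st):
    Ahat = Phi_st.T @ (Ast @ Phi_st)     # reduced space-time system matrix
    rhs = Phi_st.T @ (b_st + g_st)       # reduced right-hand side
    xhat = np.linalg.solve(Ahat, rhs)    # one small n_st x n_st solve
    return Phi_st @ xhat                 # approximate space-time state, Eq. (13)
```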
4 Basis generation
4.1 Proper orthogonal decomposition
We follow the method of snapshots, first introduced by Sirovich [35]. Let $\{\boldsymbol{\mu}_1, \ldots, \boldsymbol{\mu}_{n_\mu}\}$ be a set of parameter samples at which we run full order model simulations. Let $\mathbf{X}_j := \begin{bmatrix} \mathbf{x}^{(1)}(\boldsymbol{\mu}_j) & \cdots & \mathbf{x}^{(N_t)}(\boldsymbol{\mu}_j) \end{bmatrix} \in \mathbb{R}^{N_s \times N_t}$, $j \in \{1, \ldots, n_\mu\}$, be a full order model state solution matrix for the sample parameter value, $\boldsymbol{\mu}_j$. Then a snapshot matrix, $\mathbf{X} \in \mathbb{R}^{N_s \times n_\mu N_t}$, is defined by concatenating all the state solution matrices, i.e.,

(17) $\mathbf{X} := \begin{bmatrix} \mathbf{X}_1 & \cdots & \mathbf{X}_{n_\mu} \end{bmatrix}.$

The spatial basis from POD is an optimally compressed representation of $\mathbf{X}$ in the sense that it minimizes the difference between the original snapshot matrix and its projection onto the subspace spanned by the basis, $\boldsymbol{\Phi}_s$:

(18) $\boldsymbol{\Phi}_s = \underset{\boldsymbol{\Phi} \in \mathbb{R}^{N_s \times n_s},\; \boldsymbol{\Phi}^T\boldsymbol{\Phi} = \mathbf{I}}{\arg\min} \left\| \mathbf{X} - \boldsymbol{\Phi}\boldsymbol{\Phi}^T\mathbf{X} \right\|_F^2,$

where $\|\cdot\|_F$ denotes the Frobenius norm. The solution of POD can be obtained by setting $\boldsymbol{\Phi}_s = \mathbf{U}(:, 1\!:\!n_s)$, in MATLAB notation, where $\mathbf{U}$ is the left singular matrix of the following thin Singular Value Decomposition (SVD) with $N_s \geq n_\mu N_t$:

(19) $\mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T$
(20) $\;\;\;\; = \sum_{i=1}^{n_\mu N_t} \sigma_i \mathbf{u}_i \mathbf{v}_i^T,$

where $\mathbf{U} \in \mathbb{R}^{N_s \times n_\mu N_t}$ and $\mathbf{V} \in \mathbb{R}^{n_\mu N_t \times n_\mu N_t}$ are orthogonal matrices and $\boldsymbol{\Sigma} \in \mathbb{R}^{n_\mu N_t \times n_\mu N_t}$ is a diagonal matrix with singular values on its diagonal. The equivalent summation form is written in (20), where $\sigma_i$ is the $i$-th singular value, and $\mathbf{u}_i$ and $\mathbf{v}_i$ are the $i$-th left and right singular vectors, respectively. Note that $\mathbf{v}_i$ describes $n_\mu$ different temporal behaviors of $\mathbf{u}_i$, one per parameter sample. For example, Figure 1 illustrates the case of $n_\mu = 3$, where the three length-$N_t$ segments of $\mathbf{v}_1$ describe three different temporal behaviors of a specific spatial basis vector, i.e., $\mathbf{u}_1$. For general $n_\mu$, we note that $\mathbf{v}_i$ describes $n_\mu$ different temporal behaviors of the $i$-th spatial basis vector, i.e., $\mathbf{u}_i$. We set $\mathbf{T}^i \in \mathbb{R}^{N_t \times n_\mu}$ to be the $i$-th temporal snapshot matrix, whose entries are $T^i_{kj} := v_{i,(j-1)N_t + k}$ for $k \in \{1, \ldots, N_t\}$ and $j \in \{1, \ldots, n_\mu\}$. We apply the SVD on $\mathbf{T}^i$:

(21) $\mathbf{T}^i = \mathbf{U}^i\boldsymbol{\Sigma}^i(\mathbf{V}^i)^T.$

Then, the temporal basis for the $i$-th spatial basis vector can be set as $\boldsymbol{\Phi}^i_t = \mathbf{U}^i(:, 1\!:\!n_t)$ in MATLAB notation. Finally, a space–time basis vector in (14) can be constructed as

(22) $\boldsymbol{\phi}^{st}_{(i-1)n_t + j} := \boldsymbol{\phi}^t_{ij} \otimes \boldsymbol{\phi}^s_i,$

where $\boldsymbol{\phi}^s_i$ denotes the $i$-th spatial basis vector and $\boldsymbol{\phi}^t_{ij}$ denotes the $j$-th temporal basis vector that describes a temporal behavior of $\boldsymbol{\phi}^s_i$. The computational cost of the SVD for the snapshot matrix, $\mathbf{X}$, assuming $N_s \geq n_\mu N_t$, is $\mathcal{O}(N_s n_\mu^2 N_t^2)$, and the computational cost of the SVDs for the temporal snapshot matrices, $\mathbf{T}^i$, $i \in \{1, \ldots, n_s\}$, is $\mathcal{O}(n_s N_t n_\mu^2)$ in total. For a large-scale problem, this may be a formidable task. Thus, we use an incremental SVD, where a rank-one update of the existing SVD is achieved in a much more memory-efficient way than the thin SVD in Eq. (19). The incremental SVD procedure is explained in Section 4.2.
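A compact sketch of this two-level POD procedure, with assumed names and the snapshot ordering of Eq. (17) (one $N_t$-column block per parameter sample), might look as follows.

```python
# Two-level POD of Eqs. (19)-(22): spatial modes from the global snapshot
# matrix, then one small SVD per spatial mode for its temporal modes.
# X is Ns x (n_mu*Nt), with one Nt-column block per parameter sample.
import numpy as np

def pod_space_time_basis(X, n_mu, Nt, ns, nt):
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    Phi_s = U[:, :ns]                       # spatial basis, truncated to n_s
    temporal_bases = []
    for i in range(ns):
        # i-th temporal snapshot matrix T^i (Nt x n_mu): each column is the
        # temporal behavior of mode i for one parameter sample.
        Ti = Vt[i, :].reshape(n_mu, Nt).T
        Ui, _, _ = np.linalg.svd(Ti, full_matrices=False)
        temporal_bases.append(Ui[:, :nt])   # temporal basis for mode i (nt <= n_mu)
    # Space-time basis columns would be kron(phi_t_ij, phi_s_i), Eq. (22).
    return Phi_s, temporal_bases
```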
POD is closely related to principal component analysis in statistics.
4.2 Incremental space–time reduced basis
An incremental SVD is an efficient way of updating an existing singular value decomposition when a new snapshot vector, i.e., a column vector, is added. For a time dependent problem, we start with the first time step solution for the first parameter vector, i.e., $\mathbf{x}^{(1)}(\boldsymbol{\mu}_1)$. If its norm is big enough (i.e., $\|\mathbf{x}^{(1)}\| > \epsilon$ for a given tolerance $\epsilon$), then we set the first singular value to $\sigma_1 = \|\mathbf{x}^{(1)}\|$, the first left singular vector to be the normalized first snapshot vector, i.e., $\mathbf{u}_1 = \mathbf{x}^{(1)}/\|\mathbf{x}^{(1)}\|$, and the first right singular vector to be $\mathbf{v}_1 = 1$. Otherwise, we set them empty, i.e., $\mathbf{U} = [\,]$, $\boldsymbol{\Sigma} = [\,]$, and $\mathbf{V} = [\,]$. This initializing process is described in Algorithm 1. We pass $k$ to the initializingIncrementalSVD function as an input argument to indicate that the $k$-th snapshot vector is being handled. Also, the rank of the current SVD is denoted as $r$. In general, $r \leq k$, because a snapshot vector will not be included if it is too small (i.e., Line 1 in Algorithm 1), if it is linearly dependent on the existing basis (i.e., Lines 9 and 13 in Algorithm 2), or if it generates a small singular value (i.e., Line 18 in Algorithm 2).
Let us assume that we have the SVD built from the previous snapshot vectors, i.e., $\mathbf{X}_k = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T$, whose rank is $r$. If a new snapshot vector, $\mathbf{x}$ (e.g., the next time step solution with the first sample parameter value, $\boldsymbol{\mu}_1$), needs to be added to the existing SVD, the following factorization can be used [9]:

(23) $\begin{bmatrix} \mathbf{X}_k & \mathbf{x} \end{bmatrix} = \begin{bmatrix} \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T & \mathbf{x} \end{bmatrix}$
(24) $\;\;\;\;\;\;\;\;\;\;\;\, = \begin{bmatrix} \mathbf{U} & \mathbf{j} \end{bmatrix} \begin{bmatrix} \boldsymbol{\Sigma} & \boldsymbol{\ell} \\ \mathbf{0} & p \end{bmatrix} \begin{bmatrix} \mathbf{V} & \mathbf{0} \\ \mathbf{0} & 1 \end{bmatrix}^T,$

where $\boldsymbol{\ell} := \mathbf{U}^T\mathbf{x} \in \mathbb{R}^r$ denotes a reduced coordinate of $\mathbf{x}$ projected onto the subspace spanned by $\mathbf{U}$, $p := \|\mathbf{x} - \mathbf{U}\boldsymbol{\ell}\|$ denotes the norm of the difference between $\mathbf{x}$ and the projected one, and $\mathbf{j} := (\mathbf{x} - \mathbf{U}\boldsymbol{\ell})/p$ denotes a new orthogonal vector due to the incoming vector, $\mathbf{x}$. Note that the left and right matrices of the factorization, i.e., $[\mathbf{U}\;\; \mathbf{j}]$ and $\begin{bmatrix} \mathbf{V} & \mathbf{0} \\ \mathbf{0} & 1 \end{bmatrix}$, are orthogonal matrices. Let $\mathbf{Y}$ denote the middle matrix of the factorization, i.e.,

(25) $\mathbf{Y} := \begin{bmatrix} \boldsymbol{\Sigma} & \boldsymbol{\ell} \\ \mathbf{0} & p \end{bmatrix}.$

The matrix, $\mathbf{Y}$, is almost diagonal except for $\boldsymbol{\ell}$ in the upper right block, i.e., it is a one-column-bordered diagonal matrix. Its size is $(r+1) \times (r+1)$, not in $\mathcal{O}(N_s)$. Thus, the SVD of $\mathbf{Y}$ is computationally cheap, i.e., $\mathcal{O}(r^3)$. Let the SVD of $\mathbf{Y}$ be

(26) $\mathbf{Y} = \mathbf{U}'\boldsymbol{\Sigma}'(\mathbf{V}')^T,$

where $\mathbf{U}'$ denotes the left singular matrix, $\boldsymbol{\Sigma}'$ denotes the singular value matrix, and $\mathbf{V}'$ denotes the right singular matrix of $\mathbf{Y}$. Replacing $\mathbf{Y}$ in Eq. (24) with (26) gives

(27) $\begin{bmatrix} \mathbf{X}_k & \mathbf{x} \end{bmatrix} = \left( \begin{bmatrix} \mathbf{U} & \mathbf{j} \end{bmatrix}\mathbf{U}' \right) \boldsymbol{\Sigma}' \left( \begin{bmatrix} \mathbf{V} & \mathbf{0} \\ \mathbf{0} & 1 \end{bmatrix}\mathbf{V}' \right)^T$
(28) $\;\;\;\;\;\;\;\;\;\;\;\, = \tilde{\mathbf{U}}\tilde{\boldsymbol{\Sigma}}\tilde{\mathbf{V}}^T,$

where $\tilde{\mathbf{U}} := [\mathbf{U}\;\; \mathbf{j}]\mathbf{U}'$ denotes the updated left singular matrix, $\tilde{\boldsymbol{\Sigma}} := \boldsymbol{\Sigma}'$ denotes the updated singular value matrix, and $\tilde{\mathbf{V}} := \begin{bmatrix} \mathbf{V} & \mathbf{0} \\ \mathbf{0} & 1 \end{bmatrix}\mathbf{V}'$ denotes the updated right singular matrix. This updating algorithm is described in Algorithm 2.
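A minimal sketch of one such rank-one update, with assumed names and without the truncation and re-orthogonalization safeguards of Algorithm 2, is given below.

```python
# One Brand-style rank-one SVD update, Eqs. (23)-(28).
import numpy as np

def incremental_svd_update(U, s, V, x, eps=1e-12):
    """U: Ns x r, s: length-r singular values, V: k x r, x: new snapshot."""
    l = U.T @ x                              # projection coefficients
    residual = x - U @ l
    p = np.linalg.norm(residual)             # distance of x from span(U)
    j = residual / p if p >= eps else np.zeros_like(x)
    r = s.size
    Y = np.zeros((r + 1, r + 1))             # one-column-bordered diagonal, Eq. (25)
    Y[:r, :r] = np.diag(s)
    Y[:r, r] = l
    Y[r, r] = p if p >= eps else 0.0
    Uy, s_new, Vyt = np.linalg.svd(Y)        # cheap (r+1)^3 SVD, Eq. (26)
    U_new = np.hstack([U, j[:, None]]) @ Uy  # rotate old factors, Eq. (28)
    V_new = np.block([[V, np.zeros((V.shape[0], 1))],
                      [np.zeros((1, r)), np.ones((1, 1))]]) @ Vyt.T
    return U_new, s_new, V_new
```

In the linearly dependent case ($p < \epsilon$), and when the updated spectrum ends with a tiny singular value, Algorithm 2 truncates the last components of the factors; that logic is omitted here for brevity.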
Algorithm 2 also checks whether $\mathbf{x}$ is numerically linearly dependent on the current left singular vectors. If $p < \epsilon$, then we consider it linearly dependent. Thus, we set $p = 0$ in $\mathbf{Y}$, i.e., Line 10 of Algorithm 2. Then we only update the first $r$ components of the singular matrices in Line 14 of Algorithm 2. Otherwise, we follow the update form in Eq. (28), as in Line 16 of Algorithm 2.
Lines 18-20 in Algorithm 2 check whether the updated smallest singular value is small. If it is, we neglect that particular singular value and the corresponding components in the left and right singular matrices. This is because a small singular value causes a large error in the left and right singular matrices [18].
Although the orthogonality of the updated left singular matrix, $\tilde{\mathbf{U}}$, is guaranteed in infinite precision because it is the product of two orthogonal matrices in Line 14 or 16 of Algorithm 2, it is not guaranteed in finite precision. Thus, we heuristically check the orthogonality in Lines 21-24 of Algorithm 2 and re-orthogonalize when it is violated.
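One plausible version of such a heuristic check (our own sketch; the tolerance and the choice to re-orthogonalize only the newest column are assumptions, not the paper's exact Lines 21-24) is:

```python
# Heuristic orthogonality repair: re-orthogonalize the newest left singular
# vector when finite-precision drift appears.
import numpy as np

def reorthogonalize_last(U, tol=1e-12):
    if U.shape[1] > 1 and np.abs(U[:, :-1].T @ U[:, -1]).max() > tol:
        for i in range(U.shape[1] - 1):      # modified Gram-Schmidt sweep
            U[:, -1] -= (U[:, i] @ U[:, -1]) * U[:, i]
        U[:, -1] /= np.linalg.norm(U[:, -1])
    return U
```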
The spatial basis can be set after all $n_\mu N_t$ incremental steps:

(29) $\boldsymbol{\Phi}_s = \mathbf{U}(:, 1\!:\!n_s).$

If all the time step solutions are taken incrementally and sequentially from the different high-fidelity time dependent simulations, then the right singular matrix, $\mathbf{V}$, holds $n_\mu$ different temporal behaviors for each spatial basis vector. For example, $\mathbf{v}_i$ describes $n_\mu$ different temporal behaviors of $\mathbf{u}_i$. As in Section 4.1, the $i$-th temporal snapshot matrix, $\mathbf{T}^i \in \mathbb{R}^{N_t \times n_\mu}$, can be defined as

(30) $T^i_{kj} := v_{i,(j-1)N_t + k},$

where $k \in \{1, \ldots, N_t\}$ and $j \in \{1, \ldots, n_\mu\}$. If we take the SVD of $\mathbf{T}^i$ as in Eq. (21), then the temporal basis for the $i$-th spatial basis vector can be set as

(31) $\boldsymbol{\Phi}^i_t = \mathbf{U}^i(:, 1\!:\!n_t).$
Algorithm 1: [$\mathbf{U}$, $\boldsymbol{\Sigma}$, $\mathbf{V}$] = initializingIncrementalSVD: initializes the incremental SVD from the current snapshot vector (see Section 4.2).

Algorithm 2: [$\mathbf{U}$, $\boldsymbol{\Sigma}$, $\mathbf{V}$] = incrementalSVD: updates the existing SVD factors with a new snapshot vector.
4.3 Space–time reduced basis in block structure
Forming the space–time basis in Eq. (14) through the Kronecker products in Eq. (22) requires $\mathcal{O}(N_sN_tn_sn_t)$ multiplications. This is troublesome, not only because it is computationally costly, but also because it requires too much memory. Fortunately, a block structure of the space–time basis in (14) is available:

(32) $\boldsymbol{\Phi}_{st} = \begin{bmatrix} \boldsymbol{\Phi}_s\mathbf{D}^{(1)} \\ \boldsymbol{\Phi}_s\mathbf{D}^{(2)} \\ \vdots \\ \boldsymbol{\Phi}_s\mathbf{D}^{(N_t)} \end{bmatrix},$

where the $k$-th time step temporal basis matrix, $\mathbf{D}^{(k)} \in \mathbb{R}^{n_s \times n_sn_t}$, is defined as the block-diagonal matrix

(33) $\mathbf{D}^{(k)} := \begin{bmatrix} (\mathbf{d}^{(k)}_1)^T & & \\ & \ddots & \\ & & (\mathbf{d}^{(k)}_{n_s})^T \end{bmatrix}, \qquad \mathbf{d}^{(k)}_i := \begin{bmatrix} \phi^t_{i1,k} \\ \vdots \\ \phi^t_{in_t,k} \end{bmatrix} \in \mathbb{R}^{n_t},$
where $\phi^t_{ij,k}$ denotes the $k$-th element of $\boldsymbol{\phi}^t_{ij}$. Thanks to this block structure, the space–time reduced order operators, such as $\hat{\boldsymbol{\mathcal{A}}}$, $\hat{\mathbf{b}}_{st}$, and $\hat{\mathbf{g}}_{st}$, can be formed without explicitly forming $\boldsymbol{\Phi}_{st}$. For example, the reduced space–time system matrix, $\hat{\boldsymbol{\mathcal{A}}}$, can be computed, using the block structures, as

(34) $\hat{\boldsymbol{\mathcal{A}}} = \boldsymbol{\Phi}_{st}^T\boldsymbol{\mathcal{A}}\boldsymbol{\Phi}_{st} = \sum_{k=1}^{N_t} (\mathbf{D}^{(k)})^T\hat{\mathbf{A}}^{(k)}\mathbf{D}^{(k)} - \sum_{k=2}^{N_t} (\mathbf{D}^{(k)})^T\mathbf{D}^{(k-1)},$

where the $k$-th block matrix, $\hat{\mathbf{A}}^{(k)} \in \mathbb{R}^{n_s \times n_s}$, $k \in \{1, \ldots, N_t\}$, is defined as

(35) $\hat{\mathbf{A}}^{(k)} := \boldsymbol{\Phi}_s^T(\mathbf{I} - \Delta t^{(k)}\mathbf{A})\boldsymbol{\Phi}_s = \mathbf{I} - \Delta t^{(k)}\hat{\mathbf{A}}.$

Note that the computations involving $(\mathbf{D}^{(k)})^T$ and $\mathbf{D}^{(k-1)}$ are trivial because the $\mathbf{D}^{(k)}$ are block-diagonal; for instance, $(\mathbf{D}^{(k)})^T\mathbf{D}^{(k-1)}$ is block diagonal with $n_t \times n_t$ blocks $\mathbf{d}^{(k)}_i(\mathbf{d}^{(k-1)}_i)^T$, requiring only $n_sn_t^2$ scalar products. Additionally, $\hat{\mathbf{A}} = \boldsymbol{\Phi}_s^T\mathbf{A}\boldsymbol{\Phi}_s$ is the reduced order system operator that is used for the spatial ROMs, e.g., see Eq. (10). This can be pre-computed. It implies that the construction of $\hat{\boldsymbol{\mathcal{A}}}$ requires a computational cost that is only a bit larger than the one for the spatial ROM system matrix. The additional cost over the spatial ROM system matrix construction is $\mathcal{O}(N_tn_s^2n_t^2)$.
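A sketch of this block-wise assembly, assuming the mode-major column ordering of Eq. (22), an orthonormal spatial basis, a constant step size, and our own helper names, is given below; $\boldsymbol{\Phi}_{st}$ is never formed.

```python
# Block-wise assembly of the reduced space-time matrix, Eqs. (32)-(35),
# without forming Phi_st.  temporal_bases[i] is the Nt x nt temporal basis
# of the i-th spatial mode; Ahat = Phi_s^T A Phi_s is precomputed.
import numpy as np
import scipy.sparse as sp

def Dk(temporal_bases, k):
    # k-th time step temporal basis matrix, Eq. (33): block diagonal with a
    # 1 x nt row of k-th temporal entries per spatial mode (n_s x n_s*n_t).
    return sp.block_diag([P[k, :][None, :] for P in temporal_bases]).toarray()

def reduced_spacetime_matrix(Ahat, temporal_bases, dt, Nt):
    ns, nt = Ahat.shape[0], temporal_bases[0].shape[1]
    Ast_hat = np.zeros((ns * nt, ns * nt))
    D_prev = None
    for k in range(Nt):
        D = Dk(temporal_bases, k)
        Ast_hat += D.T @ (np.eye(ns) - dt * Ahat) @ D   # diagonal blocks, Eq. (35)
        if D_prev is not None:
            Ast_hat -= D.T @ D_prev                     # subdiagonal -I blocks
        D_prev = D
    return Ast_hat
```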
Similarly, the reduced space–time input vector, $\hat{\mathbf{b}}_{st}$, can be computed as

(36) $\hat{\mathbf{b}}_{st} = \boldsymbol{\Phi}_{st}^T\mathbf{b}_{st} = \sum_{k=1}^{N_t} (\mathbf{D}^{(k)})^T\hat{\mathbf{b}}^{(k)},$

where the $k$-th block vector, $\hat{\mathbf{b}}^{(k)} \in \mathbb{R}^{n_s}$, $k \in \{1, \ldots, N_t\}$, is given as

(37) $\hat{\mathbf{b}}^{(k)} := \Delta t^{(k)}\hat{\mathbf{B}}\mathbf{u}^{(k)}.$

Note that $\hat{\mathbf{B}} = \boldsymbol{\Phi}_s^T\mathbf{B}$ is used for the spatial ROMs, e.g., see Eq. (10). Also, $\hat{\mathbf{B}}\mathbf{u}^{(k)}$ needs to be computed in the spatial ROMs. These can be pre-computed. The other operations are related to the row-wise scaling with the block-diagonal term, $(\mathbf{D}^{(k)})^T$, whose computational cost is $\mathcal{O}(n_sn_t)$ per time step. If $\mathbf{u}$ is constant throughout the whole time period, i.e., $\mathbf{u}^{(k)} = \mathbf{u}$ for all $k$, then Eq. (36) can be further reduced to

(38) $\hat{\mathbf{b}}_{st} = \left( \sum_{k=1}^{N_t} \Delta t^{(k)}(\mathbf{D}^{(k)})^T \right)\hat{\mathbf{B}}\mathbf{u},$

where one can compute the summation term first and then multiply it by the pre-computed term, $\hat{\mathbf{B}}\mathbf{u}$.
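Analogously, a sketch of Eqs. (36)-(37) with the same assumed helpers follows; the spatial-ROM product $\hat{\mathbf{B}}\mathbf{u}^{(k)}$ is reused and only scaled by the sparse blocks.

```python
# Reduced space-time input vector, Eqs. (36)-(37), reusing the spatial-ROM
# product Bhat @ u^k (Dk is the helper from the sketch above).
import numpy as np

def reduced_spacetime_input(Bhat, temporal_bases, u, dt, Nt):
    ns, nt = Bhat.shape[0], temporal_bases[0].shape[1]
    bst_hat = np.zeros(ns * nt)
    for k in range(1, Nt + 1):
        bk = dt * (Bhat @ u(k))                       # k-th block vector, Eq. (37)
        bst_hat += Dk(temporal_bases, k - 1).T @ bk   # cheap block-diagonal scaling
    return bst_hat
```

For a constant input $\mathbf{u}$, the sum of scaled $(\mathbf{D}^{(k)})^T$ terms in Eq. (38) can be accumulated once and applied to the single pre-computed vector $\hat{\mathbf{B}}\mathbf{u}$.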