## 1 Introduction

Simulating a quantum many-body system away from equilibrium is a challenging task. Although time-dependent physical observables can be computed from the solution of a time-dependent Schrödinger equation with a time-dependent Hamiltonian, such a brute-force approach is limited to small systems defined in a low-dimensional Hilbert space. A more practical approach is to focus on the Green’s function, which is a two-point correlator of the creation and annihilation field operators defined on the Keldysh contour Kadanoff and Baym (1962); Keldysh (1965). The equation of motion satisfied by the two-time Green’s function is a set of nonlinear integro-differential equations Lipavskỳ et al. (1986). Evolving the Green’s function numerically on a two-time grid is highly non-trivial, and the presence of the integral kernel in these equations makes both the memory requirement and the computational cost high if the long-time behavior of a physical observable is to be examined.

In this paper, we show how the long-term characteristics of physical observables can be analyzed and predicted using a model reduction technique – the dynamic mode decomposition (DMD) Kutz et al. (2016); Schmid (2010); Schmid et al. (2011); Tu et al. (2014). The DMD method is a practical data-driven model reduction method first proposed by Schmid Schmid (2010) to analyze the dynamics of nonlinear, high-dimensional systems. It extracts the spatial modes associated with temporal oscillations with distinct frequencies and growth/decay rates from a few samples of the trajectory. These spatial and temporal modes obtained from the DMD analysis of the dynamics within a limited time window can in turn be used to extrapolate and predict the dynamics on a much longer time scale.

One of the main advantages of DMD over other dimension reduction techniques such as principal component analysis (PCA) Jolliffe and Cadima (2016); Wold et al. (1987) and proper orthogonal decomposition (POD) Chatterjee (2000); Semeraro et al. (2012) is that DMD provides both the spatial and temporal modes at the same time. Furthermore, the spatial modes obtained from the DMD analysis are often more physical than the eigenvectors or singular vectors produced by PCA and POD.

To use DMD to predict long time behavior of certain physical observables (such as density) associated with the evolution of a many-body system out of equilibrium, we first solve the equation of motion satisfied by the two-time Green’s function, i.e. the Kadanoff-Baym equation within a small time window, and perform DMD analysis on the one-time physical observables that can be obtained from the Green’s function.

Our paper is organized as follows. In section 2, we describe the model problem we use to demonstrate the effectiveness of the DMD method and the equation of motion satisfied by the two-time Green’s function as well as one-time physical observable in both the equilibrium and non-equilibrium regimes. The mathematical foundation of the DMD analysis and the numerical procedure for performing such an analysis is presented in section 3. In section 4, we point out a potential problem of the DMD caused by an insufficient resolution in the spatial (or momentum) discretization of the state variable. We explain how this problem can be fixed by using a high-order DMD (HODMD) analysis which can be interpreted as time-delayed embedding of a nonlinear dynamical system.

The effectiveness of the DMD and HODMD procedures is reported and discussed in section 5. In particular, we demonstrate that, in the equilibrium limit when linear response analysis can be performed, the DMD modes obtained from a real-time, time-dependent Hartree-Fock (TD-HF) simulation match well the eigenvectors obtained from solving the Bethe-Salpeter equation in the Kohn-Sham basis. This agreement also appears to hold for weakly non-equilibrium dynamics driven by a low-intensity field.

As the intensity of the driving field increases, the linear response theory does not hold. To validate DMD and HODMD results, we compare the DMD and HODMD modes with spatial and temporal modes identified by performing a Fourier analysis of the observable trajectory and show their differences. To demonstrate that HODMD modes are more relevant and meaningful, we compare the extrapolated trajectories produced by HODMD and a modified Fourier scheme that tries to recover the decay rate by solving a nonlinear optimization problem. Our numerical results show that the HODMD extrapolation is much more accurate than the Fourier extrapolation, and the HODMD procedure is numerically more stable than the modified Fourier extrapolation scheme.

## 2 The model problem and the Keldysh formalism

In this work, we focus on the dynamics of a simple two-band system, which exemplifies a semiconductor driven by an external light field Perfetto et al. (2019). The Hamiltonian consists of a time-independent component that describes the many-body interaction as well as an external time-dependent component that describes the light-matter coupling.

The system Hamiltonian has the form

(2.1) |

where () is the band energy of the valence (conduction) band with momentum , is the on-site interaction between the two bands, and is the number of sites in the system. The energy dispersion is given by

with as the band gap.

The light-matter coupling within the dipole approximation is

(2.2) |

where is a time-dependent intensity of the field, and is the dipole matrix element. For simplicity we set . (2.1) together with (2.2) describes how electrons and holes interact with each other and with a classical light field.

Although all time-dependent physical observables can be obtained from the solution to the time-dependent Schrödinger equation

(2.3) | $i \frac{\partial}{\partial t} |\psi(t)\rangle = H(t)\, |\psi(t)\rangle,$

where $H(t)$ is the total Hamiltonian, including (2.1) and (2.2), and $|\psi(0)\rangle$ is the initial state of the wavefunction at $t = 0$, the many-body nature of (2.1) renders the full solution of (2.3) difficult. The computational complexity of the exact numerical solution grows exponentially with the system size.

Since in most cases we are interested only in single-particle physical observables, we apply the nonequilibrium Green’s function (NEGF) approach Kadanoff and Baym (1962) to map the dynamics of the many-body system to the two-time Green’s function , where and are annihilation and creation operators in the Heisenberg picture, is the time-ordering operator, and the expectation is evaluated along the Keldysh contour Keldysh (1965). Since the model problem we focus on in this work consists of two bands, and the Green’s function of interest is , we will drop the band indices below and simply denote the Green’s function by .

It follows from the many-body perturbation theory that satisfies the following equation of motion

(2.4) |

where is now a single-particle Hamiltonian that includes the contribution of a time-dependent driving field, and is the self-energy that describes the many-body interaction. Equation (2.4) is accompanied by an adjoint equation which describes the time evolution over . These equations are coupled nonlinear integro-differential equations that are collectively called the Kadanoff-Baym equations (KBE) Kadanoff and Baym (1962). They must be solved numerically. Once (2.4) and its adjoint equation are solved, the single-particle physical observables can be computed through the relation between the density matrix and the time-diagonal part of the lesser Green’s function, .

In the NEGF approach, the many-body interaction is captured by the self-energy term. Depending on the physical problem, a proper choice of self-energy is essential. In this work, we will use the Hartree-Fock (HF) and the second Born (2B) approximations of the self-energy, which can capture exciton physics and, in addition, carrier scattering, respectively Stefanucci and van Leeuwen (2013).

Due to the presence of the integral term in (2.4) and its adjoint equation, the numerical solution of these coupled integro-differential equations is nontrivial. Depending on the choice of the self-energy, the right-hand side of (2.4) may be a nonlinear function of the two-time Green’s function . As a result, each time-evolution step would require solving a nonlinear system of equations. The computational complexity of solving the two-time KBE scales cubically with the number of time steps in the worst case. This high complexity severely limits its application beyond the HF approximation.

However, because the physical observable we are interested in, e.g., , is often a function of only, it may be possible to use a data-driven model order reduction technique to characterize the spatial and long-time temporal features of the dynamics satisfied by the one-time physical observable from samples of the observables collected within a small time window. These samples are computed from the numerical solution of the two-time KBE.

In principle, a one-time physical observable satisfies a one-time equation of motion. For example, the single-particle density matrix $\rho(t)$ is the solution to a differential equation of the form

(2.5) | $\dfrac{d\rho(t)}{dt} = f(\rho(t), t),$

where $f$ may be a complicated nonlinear function of $\rho(t)$ and $t$ for which an explicit analytic form is unknown.

For the model problem we will focus on in this work, we can write down the equation of motion for explicitly. We start from the equation of motion

(2.6) |

where is the total Hamiltonian. The matrix element of the density matrix is defined as the expectation value . In general, Eq. (2.6) couples the density matrix to quantities involving higher particle numbers and is therefore not closed. We can close the equation of motion by taking the HF approximation of the interaction term and obtain

(2.7) |

where and are the occupation numbers of conduction and valence bands respectively. In the weak field limit, and . As a result, the off-diagonal matrix element of decouples from the diagonal terms to yield

(2.8) |

When the driving field is turned off or is weak, the solution of (2.8) can be expressed in terms of the eigenvalues and eigenvectors of the Hamiltonian

(2.9) |

This is the Bethe-Salpeter linear response Hamiltonian Attaccalite et al. (2011). We will use the eigenvalues and eigenvectors of BSE Hamiltonian (2.9) to validate the spatial and temporal features of obtained from a data driven reduced order model to be presented below.

## 3 Dynamic mode decomposition

In this section, we briefly describe the basic principles of dynamic mode decomposition and the numerical procedure we use to perform this decomposition.

Dynamic mode decomposition (DMD) is a data-driven dimension reduction technique that can be used to extract important spatial and temporal features of a nonlinear dynamical system with a large number of degrees of freedom Kutz et al. (2016); Mohan et al. (2018); Schmid (2010); Towne et al. (2018). Future states of the nonlinear system can be predicted based on the extracted modes and frequencies. Consider a dynamical system described by a nonlinear ordinary differential equation of the form

(3.1) | $\dfrac{dx(t)}{dt} = f(x(t), t),$

where $x(t) \in \mathbb{C}^{n}$ is a time-dependent state variable, and $f$ is a nonlinear function of $x(t)$ and time $t$. The goal of DMD is to identify a set of time-independent spatial modes $\phi_1$, $\phi_2$, …, and a set of frequencies $\omega_1$, $\omega_2$, …, so that $x(t)$ can be well approximated by

(3.2) | $x(t) \approx \displaystyle\sum_{j=1}^{r} b_j \phi_j e^{i \omega_j t},$

where the $b_j$’s are a set of coefficients, and $r$ is relatively small.

The general strategy for obtaining the dynamic modes and corresponding frequencies is to map the trajectory of the nonlinear dynamics to the state of an infinite-dimensional linear system that can easily be characterized via a spectral decomposition of the linear operator that defines such a linear system. This strategy follows from the Koopman theory Koopman (1931); Koopman and Neumann (1932); Takeishi et al. (2017) for reduced-order modeling Bagheri (2012, 2013); Mezić (2013); Rowley et al. (2009).

In practice, we do not have the trajectory before (3.1) is solved. Yet, our hope is that the most important modes $\phi_j$ and frequencies $\omega_j$ can be obtained by analyzing a small set of snapshots (or samples) of $x(t)$ that we can afford to compute.

Suppose $m$ snapshots of $x(t)$ are available at $t_k = k \Delta t$, $k = 1, 2, \ldots, m$, where $\Delta t$ is a small time interval. We denote these snapshots by $x_k = x(t_k)$.

It follows from Koopman’s theory that, in the limit of $m \rightarrow \infty$ and $\Delta t \rightarrow 0$, there exists an infinite-dimensional operator $\mathcal{K}$ such that

(3.3) |

where

(3.4) |

where () is a set of coefficients Arbabi and Mezić (2017).

When $m$ is finite, (3.3) may not hold. In particular, we may not find a closure defined by the last column of the matrix in (3.4). However, it is possible to construct a finite-dimensional approximation to $\mathcal{K}$, denoted by $A$, that minimizes the difference between the leading columns of the left- and right-hand sides of (3.3). To simplify notation, let us define

(3.5) | $\mathcal{L}(A) = \left\| Y - A X \right\|_F,$

where

(3.6) | $X = \begin{bmatrix} x_1 & x_2 & \cdots & x_{m-1} \end{bmatrix}, \qquad Y = \begin{bmatrix} x_2 & x_3 & \cdots & x_m \end{bmatrix}.$

It is easy to show that the minimizer of $\mathcal{L}(A)$ is

(3.7) | $A = Y X^{+},$

where $X^{+}$ denotes the Moore-Penrose pseudoinverse of $X$, which can be obtained from the singular value decomposition (SVD) Golub and Van Loan (2013) of $X$, i.e.,

(3.8) | $X = U \Sigma V^{\ast},$

where $U$ and $V$ have orthonormal columns, and $\Sigma$ is a diagonal matrix containing the singular values of $X$.

For many large-scale problems, the snapshots contained in $X$ may have a low rank $r$, i.e., the singular values on the diagonal of $\Sigma$ decay rapidly. In this case, important dynamic modes can be obtained by projecting $A$ onto the subspace spanned by the leading $r$ singular vectors of $X$. Take

(3.9) | $P_r = U_r U_r^{\ast},$

where $U_r$, $V_r$ consist of the leading $r$ columns of $U$, $V$, and $\Sigma_r$ is the leading $r \times r$ block of $\Sigma$; then $P_r$ projects onto an $r$-dimensional subspace. Substituting into (3.7) yields a rank-$r$ approximation of $A$, i.e.,

(3.10) | $\tilde{A} = U_r^{\ast} A U_r = U_r^{\ast} Y V_r \Sigma_r^{-1}.$

To propagate the original system (3.1), what remains to be solved is the eigenvalue problem

(3.11) | $\tilde{A} W = W \Lambda,$

where

(3.12) | $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_r)$

is composed of the eigenvalues, and the columns of $W$ give the corresponding eigenvectors. To obtain spectral modes in the original state space of $x(t)$, we perform the transformation

(3.13) | $\Phi = Y V_r \Sigma_r^{-1} W.$

The columns of $\Phi$ are called the DMD modes. Denote

(3.14) | $\omega_j = \dfrac{\ln \lambda_j}{i \Delta t}, \quad j = 1, 2, \ldots, r;$

then the dynamics of $x(t)$ can be expressed as

(3.15) | $x(t_k) \approx \Phi \Lambda^{k-1} b = \displaystyle\sum_{j=1}^{r} b_j \phi_j e^{i \omega_j (t_k - t_1)}.$

In this expression, the amplitude vector $b = (b_1, \ldots, b_r)^T$ is taken to be the projection of the initial value onto the DMD modes as

(3.16) | $b = \Phi^{+} x_1,$

or the least-squares fit of (3.15) on the sampled trajectories:

(3.17) | $b = \arg\min_{b} \displaystyle\sum_{k=1}^{m} \left\| x_k - \Phi \Lambda^{k-1} b \right\|_2^2,$

where $\| \cdot \|_2$ denotes the standard Euclidean norm of a vector. This completes the DMD procedure.
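The procedure above fits in a short NumPy sketch. The snippet below is a minimal illustration, not the authors' implementation: it builds the snapshot matrices, truncates the SVD at rank `r`, forms the projected operator, and recovers modes, continuous frequencies (with the $e^{i\omega t}$ convention used here), and least-squares amplitudes. The synthetic two-mode trajectory and all variable names are our own for testing.

```python
import numpy as np

def dmd(snapshots, dt, r):
    """Exact DMD of a snapshot matrix whose columns are x_1, ..., x_m.

    Returns modes Phi, continuous frequencies omega (convention
    x(t) ~ sum_j b_j phi_j exp(i omega_j t)), least-squares amplitudes b,
    and the discrete eigenvalues lam.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]        # data matrices (3.6)
    U, s, Vh = np.linalg.svd(X, full_matrices=False)  # SVD of X (3.8)
    Ur, Sr, Vr = U[:, :r], s[:r], Vh[:r, :].conj().T  # rank-r truncation
    Atil = (Ur.conj().T @ Y @ Vr) / Sr                # projected operator (3.10)
    lam, W = np.linalg.eig(Atil)                      # eigenpairs (3.11)
    Phi = (Y @ Vr / Sr) @ W                           # exact DMD modes (3.13)
    omega = np.log(lam) / (1j * dt)                   # frequencies (3.14)
    m = snapshots.shape[1]
    # least-squares amplitudes (3.17): fit Phi @ diag(lam**k) @ b to x_{k+1}
    A = np.vstack([Phi * lam**k for k in range(m)])
    b = np.linalg.lstsq(A, snapshots.T.reshape(-1), rcond=None)[0]
    return Phi, omega, b, lam

# synthetic trajectory made of two undamped oscillations (for testing only)
n, m, dt = 4, 50, 0.1
rng = np.random.default_rng(0)
phi1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
t = dt * np.arange(m)
snaps = np.outer(phi1, np.exp(1j * 2.0 * t)) + np.outer(phi2, np.exp(1j * 5.0 * t))

Phi, omega, b, lam = dmd(snaps, dt, r=2)
print(np.sort(omega.real))   # recovers the oscillation frequencies 2.0 and 5.0
```

For this noise-free rank-2 example, the extracted frequencies are exact to machine precision and the amplitude fit reproduces every snapshot; for KBE data, the truncation rank must be chosen from the singular-value decay, as discussed above.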

From the flow of DMD, it is straightforward to see that the major computational cost comes from the SVD (3.8), which is $\mathcal{O}(n m \min(n, m))$.

As mentioned in Section 1, DMD is an equation-free, data-driven method: there is no need to know the underlying dynamics function $f$ in (3.1). Based on the data from the first few time steps, it is possible to predict future states of the system. Moreover, the method focuses only on the $r$ principal dimensions instead of the overall $n$ dimensions of the state, which reduces the computational cost. As a result, the method can be of great value for many complicated nonlinear or high-dimensional dynamical systems.

## 4 Higher order dynamic mode decomposition

The number of spatial (momentum) and temporal modes in (3.15) is determined by the dimension of the projected Koopman operator defined in (3.10), which is projected from the approximate Koopman operator (3.7) that maps to . When the snapshots are discretized on a small number of spatial or momentum grid points, the dimension of , and consequently the dimension of may be too small to accommodate the number of spatial and temporal modes present in the true dynamics of .

This problem can possibly be resolved by using a finer spatial or momentum discretization scheme to increase the dimension of and . However, this would inevitably increase the cost for generating the time snapshots for performing the DMD analysis. It is not clear, a priori, how fine a spatial or momentum grid one needs to resolve all significant spatial and temporal modes in the true dynamics satisfied by .

Fortunately, this problem can be addressed by using the technique of time-delay embedding Broomhead and King (1986); Packard et al. (1980); Pan and Duraisamy (2020); Takens (1981) to construct a better approximation to the Koopman operator without increasing the number of spatial or momentum grid points in the snapshots.

The key observation used in time-delay embedding can be described as follows. Let us partition a snapshot discretized on a fine spatial or momentum grid as , where corresponds to a subset of defined on a (coarse) subset of grid points. If is the Koopman operator that maps to , i.e.

(4.1) |

where is partitioned conformally with the partition of , then it is easy to show that

(4.2) |

If the last term in (4.2) is negligibly small, we can represent a coarsely sampled state as a linear combination of time delayed states for . If this relationship holds for all , we can construct an augmented Koopman operator

(4.3) |

that maps to where , , , , and

(4.4) |

An approximation to can then be obtained by solving the least squares problem

(4.5) |

where the snapshot matrices and are defined as

where is the total number of sampled snapshots of .

When the augmented snapshot matrix is low rank, the solution to (4.5) can be approximated from the subspace defined by the singular vectors associated with its dominant singular values, using the same procedure described in the previous section. This modified procedure yields the higher order dynamic mode decomposition (HODMD) described in Le Clainche and Vega (2017).

When each column of the snapshot matrix consists of the concatenation of $d$ consecutive snapshots, each spatial HODMD mode is a vector of length $nd$. To reconstruct or extrapolate the trajectory of $x(t)$ by (3.15), we take the first $n$ elements of each spatial HODMD mode.

Because the number of rows in the snapshot matrices and used in HODMD can be much larger than those in a standard DMD, the computational cost of HODMD is generally higher. Furthermore, when is relatively small, columns of the snapshot matrix can become more linearly dependent. Although this problem can in principle be resolved by the truncated SVD performed in (3.9), sometimes it may be difficult to choose an optimal singular value cutoff threshold for truncation. To reduce the computational cost and the level of linear dependency among columns of , we can increase the temporal distance between the augmented snapshots in and . For example, we can define them as

(4.6) |

where $d$ and $s$ are integers denoting the number of stacked snapshots and the temporal stride between adjacent augmented columns, respectively. A HODMD method associated with the parameters $d$ and $s$ will be denoted by HODMD($d$,$s$).
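The construction of the delay-embedded snapshot matrices can be sketched as follows. This is one plausible indexing of the HODMD($d$,$s$) embedding, assuming the exact conventions of (4.6); the function name and the toy data are ours. The resulting matrices can be fed directly to the standard DMD procedure of section 3.

```python
import numpy as np

def delay_embed(snapshots, d, s=1):
    """Build HODMD(d, s) snapshot matrices: each augmented column stacks
    d consecutive snapshots, and adjacent augmented columns are s time
    steps apart. Yh is Xh advanced by one time step."""
    n, m = snapshots.shape
    starts = range(0, m - d, s)   # column j uses snapshots j, ..., j+d-1
    Xh = np.column_stack(
        [snapshots[:, j:j + d].reshape(-1, order="F") for j in starts])
    Yh = np.column_stack(
        [snapshots[:, j + 1:j + d + 1].reshape(-1, order="F") for j in starts])
    return Xh, Yh

snaps = np.arange(60, dtype=float).reshape(3, 20)   # toy data: 3 k-points, 20 steps
Xh, Yh = delay_embed(snaps, d=5, s=5)
print(Xh.shape)   # (15, 3): 3*5 stacked rows, columns starting at t = 0, 5, 10
```

Note how the stride `s > 1` both shrinks the number of columns (reducing cost) and spaces them further apart in time (reducing their linear dependence), as discussed above.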

## 5 Results and discussions

In this section, we give some examples of how to use DMD to extract spatial and temporal modes of the dynamics associated with the simple two-band system defined by (2.1) when it is driven by a time-dependent field through the light-matter interaction term (2.2). As we indicated in section 2, instead of solving the many-body problem directly, we use the NEGF formalism to compute a single-particle Green’s function by solving the KBE (2.4) and its adjoint equation. As this is a two-band system, if we take $N_k$ $k$-points and evolve the system for $N_t$ time steps, then the computed Green’s function forms an $N_k \times 2 \times 2 \times N_t \times N_t$ array. From the solutions, the density matrix is obtained from the time-diagonal part of the lesser Green’s function, which gives an $N_k \times 2 \times 2 \times N_t$ array. We use DMD to analyze and predict the dynamics of the density matrix with the second and third indices fixed at 1 and 2, respectively. The data can thus be seen as an $N_k$-by-$N_t$ matrix, and we denote the entries by

(5.1) |

for simplicity, where

(5.2) |

Each snapshot can be represented as

(5.3) |

where stands for the transpose of a matrix. The data matrices and are then constructed through (3.6).

We consider both the HF and second Born self-energies in (2.4). We also test DMD for different levels of field intensity . In all cases, we compare the DMD modes with spectral modes obtained from the Fourier analysis of the density trajectory.

In the weak intensity limit, it is known that we can perform a linear response analysis to obtain the spectral modes of the dynamics by solving the Casida equation or the Bethe-Salpeter equation (BSE) for two-particle neutral excitations. In this regime, we can compare the modes extracted by DMD with the BSE eigenvectors. On the other hand, when the field intensity is sufficiently large, linear response can no longer accurately capture the dynamics of the Green’s function, whereas DMD can still be performed because it is designed to analyze nonlinear dynamics.

### 5.1 KBE with Hartree-Fock self-energy approximation

When the self-energy term is chosen to be the HF approximation, which is static, the HF self-energy term can be absorbed into the single-particle Hamiltonian. As a result, the KBE reduces to the time-dependent Hartree-Fock equations. We solved this time-dependent problem by using a second-order Runge-Kutta integrator within the chosen time interval and with a fixed time step. Note that the time unit is defined as 1/(energy unit); we have not assigned specific units to either the time or the energy. Four $k$-points are sampled in the Brillouin zone. Therefore, each snapshot of the data matrices is a vector with 4 elements.

We took the first out of a total of snapshots to perform the DMD of . The nonzero singular values of the snapshot matrix are plotted in Figure 5.1.

We can clearly see that the first three singular values are orders of magnitude larger than the last singular value in this case. Consequently, we can approximate $\tilde{A}$ by using the three leading singular values and vectors. This approximation also yields three DMD modes obtained from (3.13), which we plot in the left panel of Figure 5.2. The frequencies associated with these DMD modes, which are obtained from the eigenvalues of the matrix $\tilde{A}$ defined in (3.10), are

(5.4) |

Due to the convention we used in (3.2), the real part of corresponds to the frequency of temporal oscillation, and the imaginary part represents the rate of exponential growth or decay of the oscillation in time. In this case, the imaginary part of should be zero. The small imaginary components in (5.4) are introduced by the numerical error in the approximate solution to the TDHF equation.
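The sign convention can be checked on a single damped mode. In the snippet below (a made-up example, not data from the paper), a trajectory $e^{(3.0\,i - 0.1)t}$ is sampled with step $\Delta t$; its one-step multiplier plays the role of a DMD eigenvalue $\lambda_j$, and the recovered $\omega$ has real part equal to the oscillation frequency and positive imaginary part equal to the decay rate, consistent with the convention in (3.2).

```python
import numpy as np

dt = 0.02
# one-step multiplier of the damped oscillation x(t) = exp((3.0i - 0.1) t);
# this is what a DMD eigenvalue lambda_j would be for such a mode
lam = np.exp((1j * 3.0 - 0.1) * dt)
omega = np.log(lam) / (1j * dt)   # continuous DMD frequency, eq. (3.14)
print(omega.real, omega.imag)     # oscillation frequency 3.0, decay rate 0.1
```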

It is well known that for TDHF, the absorption energy of the 2-band system and the corresponding exciton wavefunction can be obtained by performing a linear response analysis of the TDHF equation and solving the corresponding Bethe-Salpeter (or Casida) equation, which is an eigenvalue problem. The right panel of Figure 5.2 shows that the magnitudes of three eigenvectors of the BSE match well with those of the DMD modes shown in the left panel. The corresponding eigenvalues are

(5.5) |

which match well with the DMD frequencies listed in (5.4). The small difference between and is again due to the small numerical error present in the Runge-Kutta approximate solution of the TDHF equation. The excellent agreement suggests that the DMD modes are physical, and they properly describe the underlying exciton dynamics defined by the TDHF equation.

### 5.2 KBE with second Born approximation of the self-energy

In this example, the second Born approximation is used to construct the self-energy in the KBE (2.4). In addition, we assume that the system is driven by an instantaneous pulse with being the field amplitude. We sample the Brillouin zone with -points. For testing purposes, we solve the KBE for the time interval with time step . We experimented with pulse amplitudes , and energy intensity. In each case, we used the first out of total snapshots to perform the DMD analysis. The singular values of the snapshot matrix in (3.6) for the weak () and strong () pulses are plotted in Figure 5.3. We can clearly see that, in both cases, the leading singular values are orders of magnitude larger than other singular values. This is also the case for the generated from . Consequently, in all cases, we can approximate by the leading 11 singular values and vectors.

When the intensity of the pulse is small, we can perform a linear response analysis of by solving a BSE eigenvalue problem. Figure 5.4 shows that the eigenvectors of the BSE Hamiltonian (with Tamm-Dancoff approximation) associated with 11 smallest eigenvalues match well with the DMD modes computed from (3.13). The eigenvalues of the BSE Hamiltonian also match well with the real part of for .

### 5.3 Comparison with Fourier Analysis

In this section, we compare the DMD analysis with Fourier spectral analysis, which has traditionally been used to identify key features of a one-dimensional trajectory. In such an analysis, we perform a discrete Fourier transform (DFT) of a uniformly sampled trajectory to obtain

(5.6) |

Note that the term is included above to match some frequencies computed from DMD that are not within the same period. Clearly, the larger the magnitude of a Fourier coefficient, the more important the corresponding frequency is for describing the dynamics exhibited by the trajectory. If the trajectory can be characterized by a few frequencies, its spectrum will exhibit a few peaks.

In Figure 5.5, we plot the magnitude of obtained from performing a DFT of the polarizability defined as

(5.7) |

where denotes a $k$-point, and is a dipole matrix. In our two-band model with a constant dipole matrix element approximation, the polarization is simply the sum of the two off-diagonal elements of the density matrix. By analyzing the polarization in the linear response regime, we can obtain the exciton energy and wavefunctions.

The positions of the three peaks of in Figure 5.5 are

These frequencies match well with the real parts of the three DMD frequencies shown in (5.4). When the Fourier analysis is applied to the polarizability obtained from the numerical solution of the KBE with a second Born approximation to the self-energy, we observe a similar match between the peak positions of and the real part of obtained from (3.11) and (3.14) as long as the pulse intensity is relatively small. This can be clearly seen in Figure 5.6 in which the magnitude of is plotted for . The positions of the peaks match well with the real part of the DMD frequencies which are marked by vertical dotted lines.
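The peak-identification step can be sketched as follows. The polarization signal below is synthetic (two oscillation frequencies placed on the DFT grid to avoid leakage; all values are made up for illustration); the two largest spectral magnitudes then sit exactly at the two input frequencies.

```python
import numpy as np

# toy polarization trajectory with two on-grid oscillation frequencies
dt, m = 0.1, 1000
dw = 2 * np.pi / (m * dt)        # DFT angular-frequency resolution
w1, w2 = 64 * dw, 96 * dw        # place test frequencies on the DFT grid
t = dt * np.arange(m)
p = 1.0 * np.exp(1j * w1 * t) + 0.5 * np.exp(1j * w2 * t)

P = np.fft.fft(p)                            # spectrum of p(t)
w = 2 * np.pi * np.fft.fftfreq(m, d=dt)      # angular-frequency grid
order = np.argsort(np.abs(P))[::-1]          # bins sorted by peak height
peaks = np.sort(w[order[:2]])                # the two dominant peaks
print(peaks)                                 # close to [w1, w2]
```

For off-grid frequencies, spectral leakage spreads each peak over neighboring bins, which is one reason the DMD frequencies (not tied to the DFT grid) can differ from the DFT peak positions.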

In addition to frequencies, the DMD analysis also provides spatial (momentum) modes associated with different frequencies. Similar type of modes, which we will call Fourier modes, can be obtained from the Fourier analysis, although the procedure for finding these modes is a bit cumbersome.

Instead of performing a DFT to the polarizability, we can perform a set of DFTs to for each -point to obtain , where if we have DFT frequencies, and . The vector

(5.8) |

defines the th Fourier mode associated with the frequency . The th Fourier mode makes a significant contribution to if is sufficiently large for some .
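This per-$k$ construction of a Fourier mode, as in (5.8), can be sketched on synthetic data (a single made-up spatial profile over four $k$-points oscillating at one on-grid frequency; names and values are ours):

```python
import numpy as np

# toy k-resolved density trajectory: spatial profile phi over 4 k-points
# oscillating at a single on-grid frequency w0
dt, m = 0.1, 1000
t = dt * np.arange(m)
w0 = 32 * 2 * np.pi / (m * dt)
phi = np.array([1.0, 0.5 - 0.5j, -0.25, 0.1j])
rho = phi[:, None] * np.exp(1j * w0 * t)[None, :]

# DFT along time for every k-point; the column of the spectrum at a
# peak frequency is the Fourier mode associated with that frequency
R = np.fft.fft(rho, axis=1)
peak = int(np.argmax(np.abs(R).sum(axis=0)))
mode = R[:, peak] / m     # normalized Fourier mode
print(peak)               # bin 32, i.e. the frequency w0
```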

Figure 5.7 shows that the first four Fourier modes obtained from the solution of the KBE with a second Born self-energy approximation and driven by a pulse with intensity match well with the corresponding DMD modes after being scaled by a phase factor (i.e., like eigenvectors, DMD modes are unique up to a phase factor $e^{i\theta}$ for some phase angle $\theta$).

However, when the pulse intensity increases to and , not all frequencies identified from the Fourier analysis can be matched with those obtained from the DMD analysis. In fact, cannot be characterized by a few isolated peaks. Figure 5.8 shows that when , we can only match six frequencies obtained from the Fourier analysis with those obtained from DMD (marked by blue dotted lines). These matches are not perfect.

In Figure 5.9, we compare the four most significant modes obtained from DMD and DFT. The significance of each DFT mode can be quantified by the height of the peak associated with the frequency of that mode. The significance of each DMD mode can be measured by the magnitude of the corresponding expansion coefficient in the reconstruction or extrapolation (see the next section) of the trajectory as described by (3.15).

The frequencies of the four most significant DMD modes are

The real parts of these frequencies which describe the oscillatory behavior of in real time by our convention do not closely match (with the exception of ) the four most significant frequencies obtained from the DFT, which are

Note that the real part of matches well with . For these two matching frequencies, the corresponding DMD and Fourier modes also match well, as we can see in Figure 5.9. The other three DMD modes are different from the remaining three Fourier modes. In particular, the most significant DMD mode is not seen in the Fourier analysis. This DMD mode looks similar to both and . However, their values are clearly different at and .

### 5.4 Extrapolation of the density dynamics

The discrepancy between the frequencies identified by Fourier analysis and DMD analysis raises the question of which analysis is more useful or reliable. In the case of a low-intensity driving field, we can compare the spectral modes with the eigenvectors of the BSE problems. However, when the pulse intensity is high, we can no longer rely on the BSE, which is valid only in the linear response regime, to validate the spectral modes.

One way to assess the validity or quality of the spectral modes is to examine how well they can be used to reconstruct the sampled density trajectory and extrapolate the density dynamics outside of the sampling window.

For DMD analysis, we follow (3.15) to reconstruct and extrapolate the density trajectory at k-point as

(5.9) |

where are obtained either from the projection of initial density onto the DMD modes as (3.16), or from performing a linear least squares fit on the snapshots used to perform the DMD analysis as given in (3.17).

On the other hand, the Fourier analysis in the above subsection cannot be directly applied to perform the extrapolation, since the whole trajectory, rather than the sampled portion, is required there. In order to use only the sampled snapshots, we need to first pad them with zeros before taking the discrete Fourier transform, i.e., we construct

(5.10) |

and take the discrete Fourier transform of the polarization of these zero-padded trajectories.

This is equivalent to convolving the discrete Fourier transform of (5.7) with a sinc function. Such a convolution broadens the high peaks in and introduces artificial wiggles not present in as can be seen in Figure 5.10 where the discrete Fourier transforms of the full polarization trajectory and the truncated and zero-padded polarization are compared for models with driving field intensities and . We take and respectively for the two cases, in order to show the comparison clearly.

Once the frequency peaks of the broadened spectrum are identified, we use them to reconstruct an approximate trajectory as

(5.11) |

where is the DFT mode associated with as defined in (5.8), and are parameters to be determined, with being introduced to account for the exponential decay of the dynamics. We fit the model (5.11) to the sampled snapshots in the least-squares sense, and obtain the fitting coefficients and by solving a nonlinear least squares problem.
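The decay-rate fit can be illustrated in a simplified setting. The paper solves a general nonlinear least squares problem; in the sketch below we restrict to a single made-up mode whose frequency is already known (a hypothetical stand-in for one DFT peak), in which case the decay rate and amplitude enter linearly after taking logarithms and ordinary least squares suffices. All names and values are ours.

```python
import numpy as np

# synthetic sampled trajectory: one decaying oscillation with a known
# frequency (hypothetical stand-in for a single DFT peak of the data)
dt, m = 0.05, 400
t = dt * np.arange(m)
omega0, gamma0, c0 = 3.0, 0.25, 0.8
y = c0 * np.exp((1j * omega0 - gamma0) * t)

# with the frequency fixed, log|y(t)| = log(c) - gamma * t is linear in t,
# so an ordinary least-squares line fit recovers the decay parameters
slope, intercept = np.polyfit(t, np.log(np.abs(y)), 1)
gamma_fit, c_fit = -slope, np.exp(intercept)
print(gamma_fit, c_fit)   # recovers 0.25 and 0.8
```

With several overlapping modes the frequencies, amplitudes, and decay rates must be fitted jointly, which is the nonlinear (and, as reported below, numerically less stable) problem the modified Fourier scheme has to solve.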

To compare the reconstructed trajectories, we perform a renormalization so that

(5.12) |

where , and is the sampled data.

Figure 5.11 shows that, when , matches with much better than at . Similar results can be obtained for other -points. In both extrapolations, the number of sampled snapshots is , as shown by the shaded window.

To evaluate the quality of reconstruction/extrapolation quantitatively, we introduce a metric defined in terms of the cosine of the angles between the original KBE trajectory vector and the extrapolated trajectory vector at each -point, i.e.,

(5.13) |

where is either DMD or DFT, and denotes the standard Euclidean inner product of two complex vectors. It is clear that . If the reconstruction/extrapolation fits the original trajectory well, then should be close to for all . On the contrary, a small value of suggests a large deviation of from at the -point .
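A minimal sketch of this cosine metric for complex trajectory vectors is given below (the function name is ours; we assume the inner product in (5.13) is the standard complex Euclidean one with its magnitude taken, so the metric lies in [0, 1]):

```python
import numpy as np

def cosine_metric(y_ref, y_fit):
    """|<y_ref, y_fit>| / (||y_ref|| * ||y_fit||) for complex trajectory
    vectors: 1 means perfect directional agreement, 0 means orthogonal."""
    num = np.abs(np.vdot(y_ref, y_fit))   # vdot conjugates the first argument
    return num / (np.linalg.norm(y_ref) * np.linalg.norm(y_fit))

v = np.array([1.0 + 1.0j, 2.0, 3.0j])
print(cosine_metric(v, 2 * v))   # 1.0: identical direction, different scale
```

Note that, as discussed below, a value near 1 is necessary but not sufficient for a good extrapolation: two trajectories dominated by near-zero tails can score close to 1 while differing visibly inside the sampling window.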

Figure 5.12 shows that, when , is between and for all -points, indicating a good agreement between the extrapolated trajectory from DMD and the original trajectory. By contrast, is small for most -points, which means the extrapolation from DFT does not give good results. These are consistent with the results in Figure 5.11.

When the pulse intensity is increased to or , it is observed from Figure 5.13 that the values of the metric are significantly lower. In both cases, the metric is less than for all $k$-points, which suggests that the extrapolation fails to capture the features of the true trajectory from the sampled snapshots. Such failure can also be clearly seen in Figure 5.14, where we plot the magnitudes of the extrapolated trajectories at the zero $k$-point and compare them with that of the KBE solution. We remark that when , although the values of the metric are large for all $k$-points, there is still a noticeable difference between the extrapolated and true trajectories within the sampling window, as we can see in Figure 5.14. The reason that the value of the metric is close to 1.0 in this case is that the inner product is largely determined by the tails of these two trajectories, which are both nearly zero. In fact, having a value close to 1 is a necessary but not a sufficient condition for a good extrapolation. Nonetheless, in this case, the Fourier-based extrapolation appears to do a better job in capturing the general trend of the dynamics than the DMD-based approach.

We believe the reason DMD fails to capture the decay rate at these higher pulse intensities is that the stronger driving fields introduce more spectral degrees of freedom, which are not fully captured by the projected Koopman operator when it is constructed by simply mapping each snapshot vector to the next one. In other words, as explained in section 4, there is a discrepancy between the dimension of the projected Koopman operator, i.e., the dimension appearing in (3.15), and the intrinsic number of spectral components in the dynamics.
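For reference, the projected Koopman operator described above is the least-squares linear map that sends each snapshot to the next one. A minimal sketch with hypothetical names (a practical implementation would use the SVD-based exact-DMD algorithm rather than an explicit pseudoinverse):

```python
import numpy as np

def dmd_operator(G):
    """Least-squares linear map A such that A @ G[:, :-1] ~ G[:, 1:].

    G is a complex array whose j-th column is the snapshot at time t_j
    (here, the trajectory values at all k-points).  The rank of A is
    limited by the number of rows of G, which is why too few k-points
    can leave spectral components of the dynamics unresolved.
    """
    X, Y = G[:, :-1], G[:, 1:]
    return Y @ np.linalg.pinv(X)  # A = Y X^+
```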

As we indicated in section 4, there are two possible ways to address this problem. One is to increase the number of $k$-points. But it is not clear how many $k$-points are needed to ensure that the rank of the projected Koopman operator is sufficiently large. In fact, we have tried increasing the number of $k$-points. That does not appear to yield a significant improvement in extrapolation accuracy, but it incurs a significant increase in computational and memory cost, because the two-time Green's function we need to compute by solving the KBE in order to generate the one-time snapshots has more degrees of freedom.

The other remedy is to use the HODMD algorithm, which increases the dimension of the projected Koopman operator by augmenting each snapshot with several consecutive snapshots, as explained in section 4. These snapshots are concatenated into a single vector, and the DMD procedure is then applied to the augmented snapshot matrix (4.4).

Although it may be possible to estimate analytically the minimum number of time delays to be concatenated into a single column of the snapshot matrix Pan and Duraisamy (2020), we experimented with several values of the delay parameter $d$ numerically and found that, for the model problem tested at the higher pulse intensities, $d = 5$ is a good choice.

Furthermore, as we discussed in section 4, to reduce the computational cost of HODMD and the potential level of linear dependency among the column vectors in the snapshot matrix, we can increase the temporal distance between adjacent columns to yield matrices of the form (4.6). Both Figure 5.13 and Figure 5.14 show that by introducing time delays of 5 sampling intervals and increasing the temporal distance between adjacent columns to 5 sampling intervals, which yields the HODMD(5,5) scheme, we can obtain extrapolated trajectories that are nearly indistinguishable from the true trajectories (obtained by solving the KBE numerically) even at the increased driving field intensities.
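The delay augmentation with a column stride used by an HODMD(d, s)-type scheme can be sketched as follows (the function and variable names are our own; the column layout follows the description of (4.4) and (4.6) above):

```python
import numpy as np

def hodmd_snapshots(G, d=5, s=5):
    """Delay-augmented snapshot matrix for an HODMD(d, s)-type scheme.

    Each column stacks d consecutive snapshots of G (a time-delay
    embedding, as in (4.4)); adjacent columns are s time steps apart
    (as in (4.6)) to reduce linear dependency among the columns.
    G has one column per time sample.
    """
    n_k, m = G.shape
    cols = [np.concatenate([G[:, j + i] for i in range(d)])
            for j in range(0, m - d + 1, s)]
    return np.stack(cols, axis=1)  # shape (n_k * d, number of columns)
```

The ordinary DMD procedure is then applied to this augmented matrix in place of the raw snapshot matrix.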

Figures 5.15 and 5.16 show that, with a reduced number of sampled snapshots, the extrapolated trajectories produced by HODMD(5,5) still match the true trajectories closely at both driving field intensities. They are clearly better than the trajectories produced by the DFT-based extrapolation, which fails to accurately capture the decay rate of the dynamics. Moreover, the computational cost of the DFT-based extrapolation is much higher because a nonlinear least squares optimization problem must be solved to determine the coefficients in the extrapolation model, and the solution of this optimization problem depends sensitively on the initial guess of the coefficients.

In addition, it can be clearly seen from Figure 5.16 that, at the higher pulse intensity, HODMD fails to extrapolate the trajectory correctly when the number of snapshots is reduced too far. The failure is likely due to linear dependency among the columns of the snapshot matrix and a suboptimal singular value threshold used in the truncated SVD performed on this matrix.

To demonstrate that HODMD indeed captures the fast decay of the trajectory better than DMD, we list the DMD and HODMD(5,5) frequencies associated with the four most significant modes in Table 5.1 and Table 5.2 for the simulations at the two higher pulse intensities.

[Table 5.1: frequencies of the four most significant DMD and HODMD(5,5) modes; numerical entries not recoverable from the extracted text.]

[Table 5.2: frequencies of the four most significant DMD and HODMD(5,5) modes; numerical entries not recoverable from the extracted text.]

From these tables, we observe that, overall, the imaginary parts of the HODMD(5,5) frequencies are much larger than those of the DMD frequencies. By the convention we use in (5.9), a larger imaginary part corresponds to a more rapid decay of the corresponding DMD or HODMD mode.
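For concreteness, here is one common way to convert discrete-time DMD eigenvalues into continuous-time frequencies. The sign convention $e^{i\omega t}$ is an assumption on our part (the paper's convention (5.9) is not reproduced here), chosen so that a positive imaginary part corresponds to decay, consistent with the discussion above:

```python
import numpy as np

def dmd_frequencies(eigvals, dt):
    """Continuous-time frequencies omega_j from DMD eigenvalues lambda_j.

    Assumes each mode evolves as exp(i * omega_j * t) sampled with time
    step dt, so lambda_j = exp(i * omega_j * dt).  Under this (assumed)
    convention, a positive imaginary part of omega_j means the mode
    decays in time, and a larger imaginary part means faster decay.
    """
    return np.log(eigvals) / (1j * dt)
```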

One practical question one may ask is how many snapshots we need to collect in order to accurately extrapolate the full (long-time) trajectory. This question is difficult to answer a priori in general. For the model problem we examined, we experimented numerically with snapshot counts ranging from 21 to 500. For each count, we compute the root mean square error of the extrapolated trajectory, defined as
