1 Introduction
The discretization of partial differential equations (PDEs) by classical methods like finite elements, spectral methods, or finite volumes leads to dynamical models with very large state-space dimensions, typically of the order of millions of degrees of freedom, to obtain an accurate solution. Model order reduction (MOR)
[1] is an effective method for reducing the complexity of such models while capturing the essential features of the system state. Starting from the Truncated Balanced Realization, introduced by Moore [2] in 1981, several other reduction techniques have been developed and flourished during the last 40 years, including the Hankel-norm reduction [3], the proper orthogonal decomposition (POD) [4] and the Padé-via-Lanczos (PVL) algorithm [5]. More recently, there has been a focus on the physical interpretability of the reduced models. Failure to preserve structures, invariants, and intrinsic properties of the approximate model, besides raising questions about the validity of the reduced models, has been associated with instabilities and exponential error growth, independently of the theoretical accuracy of the reduced solution space. Stable reduced models have been recovered by enforcing constraints on the reduced dynamics obtained using standard reduction tools. Equality and inequality constraints have been considered to control the amplitude of the POD modes [6], the fluid temperature in a combustor [7], and the aerodynamic coefficients [8]. Other methods directly incorporate the quantity of interest into the reduced system, producing inf-sup stable [9], flux-preserving [10], and skew-symmetric
[11] conservative reduced dynamics. Even though great effort has been spent developing time integrators that preserve the symplectic flow underlying Hamiltonian systems, interest in geometric model order reduction arose more recently, with efforts to preserve Lagrangian structures [12].

The remainder of the paper is organized as follows. In Section 2, we present the structure characterizing the dynamics of Hamiltonian systems and the concept of symplectic transformations. In Section 3, we show that linear symplectic maps can be used to guarantee that the reduced models inherit the geometric formulation of the full dynamics. Different strategies to generate such maps are investigated in Section 4, with thoughts on optimality results and computational complexities. A novel approach deviating from the linearity of the projection map is briefly discussed in Section 5. Finally, we discuss applications of structure-preserving reduction techniques to two more general classes of problems in Section 6, and some concluding remarks are offered in Section 7.
2 Symplectic geometry and Hamiltonian systems
Let us first establish some definitions and properties concerning symplectic vector spaces.
Definition 2.1.
Let $V$ be a finite-dimensional real vector space and $\omega \colon V \times V \to \mathbb{R}$ a bilinear map. $\omega$ is called antisymmetric if
$\omega(u, v) = -\omega(v, u)$ for all $u, v \in V$.
It is nondegenerate if
$\omega(u, v) = 0$ for all $v \in V$ implies $u = 0$.
Definition 2.2.
Let $V$ be a finite-dimensional vector space with $\omega$ an antisymmetric bilinear form on $V$. The pair $(V, \omega)$ is a symplectic linear vector space if $\omega$ is nondegenerate. Moreover, $V$ has to be even-dimensional, say of dimension $2n$.
Since we are interested in structure-preserving transformations, preserving the structure means preserving the antisymmetric bilinear form, as stated in the following definition.
Definition 2.3.
Let $(V, \Omega)$ and $(W, \Pi)$ be two symplectic vector spaces. The differentiable map $\mu \colon W \to V$ is called a symplectic transformation (symplectomorphism) if
$\mu^{*}\Omega = \Pi$,
where $\mu^{*}\Omega$ is the pullback of $\Omega$ with $\mu$.
One of the essential properties of Euclidean spaces is that all Euclidean spaces of equal dimension are isomorphic. For symplectic vector spaces, a similar result holds, since two $2n$-dimensional symplectic vector spaces are symplectomorphic to one another. They are therefore fully characterized by their dimension, as a consequence of the following theorem.
Theorem 2.1 (Linear Darboux’ theorem [13]).
For any $2n$-dimensional symplectic vector space $(V, \omega)$, there exists a basis $\{e_i\}_{i=1}^{n} \cup \{f_i\}_{i=1}^{n}$ of $V$ such that
(1) $\omega(e_i, e_j) = \omega(f_i, f_j) = 0, \qquad \omega(e_i, f_j) = \delta_{ij}, \qquad 1 \le i, j \le n.$
The basis is called Darboux’ chart or canonical basis.
The proof of Theorem 2.1 is based on a procedure similar to the Gram-Schmidt process to generate the symplectic basis, known as symplectic Gram-Schmidt [14]. The canonical basis allows representing the symplectic form as
(2) $\omega(u, v) = u^{\top} J_{2n} v,$
where $u, v \in \mathbb{R}^{2n}$ are the expansion coefficients of the arguments with respect to the canonical basis and
(3) $J_{2n} = \begin{pmatrix} 0_n & I_n \\ -I_n & 0_n \end{pmatrix}$
is known as the Poisson tensor, with $0_n$
and $I_n$ denoting the zero and identity matrices of size $n$, respectively. As a direct result, the matrix representation of the symplectic form in the canonical basis is $J_{2n}$. More generally, using a noncanonical basis, the form reduces to $\omega(u, v) = u^{\top} J v$, with $J$ an invertible constant skew-symmetric matrix.

While symplectic vector spaces are helpful for the analysis of dynamical problems in Euclidean spaces and for defining geometric reduced-order models, the restriction to the Euclidean setting is not generally adequate. In particular, the abstraction of the phase spaces of classical mechanics over arbitrary manifolds requires the definition of more general symplectic manifolds. We refer the reader to [15] for a more comprehensive description of the topic. In this work, we limit ourselves to introducing a significant result regarding the evolution of the state of Hamiltonian systems.
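Since the canonical representation (2)-(3) underpins everything that follows, a short numerical check can help fix ideas. The sketch below (helper names are ours, not from the text) verifies the antisymmetry of the form and the identities $J_{2n}^{\top}J_{2n} = I_{2n}$ and $J_{2n}^2 = -I_{2n}$:

```python
import numpy as np

# Sketch: canonical symplectic form omega(u, v) = u^T J_{2n} v, eq. (2),
# and the Poisson tensor J_{2n} of eq. (3).

def poisson(n):
    """Canonical Poisson tensor J_{2n} = [[0_n, I_n], [-I_n, 0_n]]."""
    Z, I = np.zeros((n, n)), np.eye(n)
    return np.block([[Z, I], [-I, Z]])

def omega(u, v, J):
    """Symplectic form in the canonical basis."""
    return u @ J @ v

n = 3
J = poisson(n)
rng = np.random.default_rng(0)
u, v = rng.standard_normal(2 * n), rng.standard_normal(2 * n)

print(np.isclose(omega(u, v, J), -omega(v, u, J)))  # antisymmetry
print(np.allclose(J.T @ J, np.eye(2 * n)))          # J is orthogonal
print(np.allclose(J @ J, -np.eye(2 * n)))           # J^2 = -I
```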
Definition 2.4.
Let $(M, \omega)$ be a symplectic manifold and $H \colon M \to \mathbb{R}$ a smooth function. We refer to the unique vector field $X_H$, which satisfies
$\iota_{X_H}\omega = dH,$
as the Hamiltonian vector field related to $H$, where $\iota$ denotes the contraction operator and $d$ is the exterior derivative. The function $H$ is called the Hamiltonian of the vector field $X_H$.
Suppose $M$ is also compact; then $X_H$ is complete [16] and can be integrated, i.e., there exists an integral curve $y(t)$ of $X_H$, parametrized by the real variable $t$, that is the solution of
(4) $\dot{y}(t) = X_H(y(t)), \qquad y(0) = y_0.$
Equation (4) is referred to as Hamilton's equation of evolution or Hamiltonian system. Darboux' theorem, as a generalization of Theorem 2.1, states that two symplectic manifolds are only locally symplectomorphic. Using this result, the Hamiltonian vector field admits the local representation
(5) $X_H = \sum_{i=1}^{n} \left( \frac{\partial H}{\partial p_i} \frac{\partial}{\partial q_i} - \frac{\partial H}{\partial q_i} \frac{\partial}{\partial p_i} \right),$
with $(q_1, \dots, q_n, p_1, \dots, p_n)$ a local canonical basis, leading to the following representation of (4), expressed directly in terms of $H$.
Proposition 2.1.
Let $(V, \omega)$ be a $2n$-dimensional symplectic vector space and let $(q_1, \dots, q_n, p_1, \dots, p_n)$ be a canonical system of coordinates. Hamilton's equation is defined by
(6) $\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i},$
for $i = 1, \dots, n$, which is a first-order system in the $(q, p)$-space, or generalized phase-space.
Thus, if the state vector $y = (q_1, \dots, q_n, p_1, \dots, p_n)^{\top}$ is introduced, (6) takes the form
(7) $\dot{y}(t) = J_{2n} \nabla_y H(y(t)),$
where $\nabla_y H$ is the naive gradient of $H$. The flow of Hamilton's equation has some interesting properties.
Proposition 2.2.
Let $\phi_t$ be the flow of a Hamiltonian vector field $X_H$. Then $\phi_t$ is a symplectic transformation.
We rely on a geometric perspective of linear vector spaces to highlight the importance of Proposition 2.2. Given two coefficient vectors $u$ and $v$ in $\mathbb{R}^{2n}$, the symplectic form (2) can be interpreted as the sum of the oriented areas of the orthogonal projections of the parallelogram defined by the two vectors on the $(q_i, p_i)$ planes. Definition 2.3, in the case of a $2n$-dimensional symplectic vector space with canonical coordinates, is equivalent to stating that a map $\phi$ is a symplectic transformation if and only if its Jacobian $D_y\phi$ satisfies everywhere
(8) $(D_y\phi)^{\top} J_{2n} (D_y\phi) = J_{2n}.$
Property (8) can be used to show that a symplectic transformation preserves the bilinear form in the sense that [15]
(9) $\omega(D_y\phi\, u, D_y\phi\, v) = \omega(u, v), \qquad \text{for all } u, v \in \mathbb{R}^{2n}.$
Hence, the symplectic map represents a volume-preserving transformation. However, being symplectic is a more restrictive condition than being volume-preserving, as shown in the non-squeezing theorem [17].
We conclude this section by noting that if the Hamiltonian function does not depend explicitly on time, its value is conserved along the solution trajectory.
Proposition 2.3.
For Hamiltonian systems (7), the Hamiltonian function $H$ is a first integral.
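As an illustration of Propositions 2.2-2.3, the sketch below integrates Hamilton's equations for a single harmonic oscillator, $H(q, p) = (q^2 + p^2)/2$ (our choice of example), with the symplectic Euler method, a standard symplectic integrator. The energy error stays bounded for long times instead of drifting:

```python
import numpy as np

# Sketch: symplectic Euler for H(q, p) = (q^2 + p^2) / 2, so that
# dq/dt = p and dp/dt = -q. Example and step size are our choices.

def symplectic_euler(q, p, dt, n_steps):
    """Semi-implicit Euler: p is updated first, then q uses the new p."""
    traj = np.empty((n_steps + 1, 2))
    traj[0] = q, p
    for i in range(n_steps):
        p = p - dt * q          # dp/dt = -dH/dq
        q = q + dt * p          # dq/dt =  dH/dp, with the updated p
        traj[i + 1] = q, p
    return traj

traj = symplectic_euler(q=1.0, p=0.0, dt=0.01, n_steps=10_000)
H = 0.5 * (traj[:, 0] ** 2 + traj[:, 1] ** 2)
# the energy error oscillates but does not drift
print(np.max(np.abs(H - H[0])))
```

For this particular map the quadratic form $p^2 + q^2 - \Delta t\, pq$ is an exact invariant, which explains the bounded oscillation of the energy error.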
3 Symplectic Galerkin projection
The motivation for MOR is to reduce the computational complexity of dynamical systems in numerical simulations. In the context of structure-preserving projection-based reduction, two key ingredients are required to define a reduced model. First, we need a low-dimensional symplectic vector space that accurately represents the solution manifold of the original problem. Then, we have to define a projection operator to map the symplectic flow of the Hamiltonian system onto the reduced space, while preserving its geometric properties.
Let us assume there exists a canonical basis such that Hamilton's equation can be written in the canonical form
(10) $\dot{y}(t) = J_{2n} \nabla_y H(y(t)), \qquad y(0) = y_0,$
and denote the related symplectic vector space by $(\mathbb{R}^{2n}, \omega)$. Symplectic projection-based model order reduction adheres to the key idea of more general projection-based techniques [18] to approximate the dynamics in a low-dimensional symplectic subspace of dimension $2k$. In particular, we aim at $k \ll n$ to have a clear reduction, and therefore, significant gains in terms of computational efficiency. Let $A \in \mathbb{R}^{2n \times 2k}$ be a reduced basis for the approximate symplectic subspace and construct the linear map given by
(11) $y(t) \approx A z(t),$
where $A$
belongs to the set of symplectic matrices of dimension $2n \times 2k$, also known as the symplectic Stiefel manifold, defined by
$\mathrm{Sp}(2k, \mathbb{R}^{2n}) := \{ A \in \mathbb{R}^{2n \times 2k} \,:\, A^{\top} J_{2n} A = J_{2k} \}.$
Differential maps are often used to transfer structures from well-defined spaces to unknown manifolds. In this context, using the symplecticity of $A$, it is possible to show [22] that Definition 2.3 holds, with the right inverse of the reduction map represented by $A$, and that there exists a symplectic form on $\mathbb{R}^{2k}$ given by
(12) $\omega_r(u, v) = \omega(Au, Av) = u^{\top} J_{2k} v, \qquad u, v \in \mathbb{R}^{2k}.$
As a result, $(\mathbb{R}^{2k}, \omega_r)$ is a symplectic vector space. In the following, for the sake of notation, we use $\mathbb{R}^{2k}$ to indicate the reduced symplectic manifold paired with its bilinear form.
Given a symplectic matrix $A \in \mathrm{Sp}(2k, \mathbb{R}^{2n})$, its symplectic inverse is defined as
(13) $A^{+} := J_{2k}^{\top} A^{\top} J_{2n}.$
Even though different from the pseudo-inverse matrix $(A^{\top}A)^{-1}A^{\top}$, the symplectic inverse plays a similar role and, in the following proposition, we outline its main properties [21].
Proposition 3.1.
Suppose $A \in \mathrm{Sp}(2k, \mathbb{R}^{2n})$ is a symplectic matrix and $A^{+}$ is its symplectic inverse. Then

1. $A^{+} A = I_{2k}$.

2. $A^{+} J_{2n} = J_{2k} A^{\top}$.

3. $(A^{+})^{\top} \in \mathrm{Sp}(2k, \mathbb{R}^{2n})$.

4. If $A$ is orthogonal, then $A^{+} = A^{\top}$.
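These properties can be checked numerically in a few lines; the orthosymplectic basis $A$ below is built as a block-diagonal lift of an orthonormal $\Phi$ (our example, anticipating the constructions of Section 4):

```python
import numpy as np

# Sketch: numerical check of Proposition 3.1 for an orthosymplectic A.

def poisson(n):
    Z, I = np.zeros((n, n)), np.eye(n)
    return np.block([[Z, I], [-I, Z]])

def symplectic_inverse(A, n, k):
    """A^+ = J_{2k}^T A^T J_{2n}, equation (13)."""
    return poisson(k).T @ A.T @ poisson(n)

n, k = 4, 2
rng = np.random.default_rng(1)
Phi, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal columns
Z = np.zeros((n, k))
A = np.block([[Phi, Z], [Z, Phi]])                   # orthosymplectic

Ap = symplectic_inverse(A, n, k)
print(np.allclose(Ap @ A, np.eye(2 * k)))              # A^+ A = I_{2k}
print(np.allclose(Ap @ poisson(n), poisson(k) @ A.T))  # A^+ J = J_r A^T
print(np.allclose(Ap, A.T))                            # orthogonal => A^+ = A^T
```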
Using (12), the definition of $A^{+}$ and the symplectic Gram-Schmidt process, it is possible to construct a projection operator that, differently from the POD orthogonal projection [19], can be used to approximate (10) with a Hamiltonian system of reduced dimension $2k$, characterized by the Hamiltonian function
(14) $H_r(z) := H(Az).$
In particular, in the framework of Galerkin projection, using (11) in (10) yields
(15) $A\dot{z}(t) = J_{2n} \nabla_y H(Az(t)) + r(t),$
with $r(t)$
being the residual term. Utilizing the chain rule and the second property of $A^{+}$
in Proposition 3.1, the gradient of the Hamiltonian in (15) can be recast as $A^{+} J_{2n} \nabla_y H(Az) = J_{2k} A^{\top} \nabla_y H(Az) = J_{2k} \nabla_z H_r(z)$. By assuming that the projection residual is orthogonal, with respect to the symplectic bilinear form, to the space spanned by the columns of $A$, we recover
(16) $\dot{z}(t) = J_{2k} \nabla_z H_r(z(t)), \qquad z(0) = A^{+} y_0.$
System (16) is known as the symplectic Galerkin projection of (10) onto $\mathbb{R}^{2k}$. The preprocessing stage, consisting of the collection of all the computations required to assemble the basis, is known as the offline stage. The numerical solution of the low-dimensional problem (16) represents the online stage, and provides a fast approximation to the solution of the high-fidelity model (10) by means of (11). Even though the offline stage is possibly computationally expensive, this splitting is beneficial in a multi-query context, when multiple instances of (16) have to be solved, e.g. for parametric PDEs.
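The offline/online split can be sketched for a quadratic Hamiltonian $H(y) = \frac{1}{2} y^{\top} L y$, for which the reduced system (16) reads $\dot{z} = J_{2k} A^{\top} L A z$. All matrices below are synthetic stand-ins; the implicit midpoint rule serves as the symplectic integrator and conserves the quadratic reduced Hamiltonian up to round-off:

```python
import numpy as np

# Sketch: symplectic Galerkin reduction of H(y) = 0.5 * y^T L y.

def poisson(m):
    Z, I = np.zeros((m, m)), np.eye(m)
    return np.block([[Z, I], [-I, Z]])

n, k = 10, 2
rng = np.random.default_rng(2)
C = rng.standard_normal((n, n))
L = np.eye(2 * n) + 0.1 * np.block([[C @ C.T, np.zeros((n, n))],
                                    [np.zeros((n, n)), np.eye(n)]])

Phi, _ = np.linalg.qr(rng.standard_normal((n, k)))
A = np.block([[Phi, np.zeros((n, k))],
              [np.zeros((n, k)), Phi]])        # orthosymplectic basis

# offline stage: assemble the reduced operator once
Lr = A.T @ L @ A                               # Hessian of H_r(z) = H(Az)
M = poisson(k) @ Lr                            # reduced field dz/dt = M z

# online stage: implicit midpoint rule (symplectic; exact on quadratic
# invariants of linear systems, up to round-off)
dt, steps = 0.05, 400
z = np.zeros(2 * k); z[0] = 1.0
E0 = 0.5 * z @ Lr @ z
step_fwd = np.eye(2 * k) + 0.5 * dt * M
step_bwd = np.linalg.inv(np.eye(2 * k) - 0.5 * dt * M)
for _ in range(steps):
    z = step_bwd @ (step_fwd @ z)
print(abs(0.5 * z @ Lr @ z - E0))              # ~ machine precision
```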
Traditional projection-based reduction techniques do not guarantee stability, even if the high-dimensional problem admits a stable solution [20], often resulting in a blow-up of the system energy. On the contrary, by preserving the geometric structure of the problem, several stability results hold for the reduced Hamiltonian equation (16). In [22, Proposition 15, page A2625], the authors show that the error in the Hamiltonian is constant for all times. We detail two relevant results in the following, suggesting that structure and energy preservation are key for stability.
Theorem 3.1 (Boundedness result [21]).
Consider the Hamiltonian system (10), with Hamiltonian $H$ and initial condition $y_0 = Az_0$, with $A$ a symplectic basis. Let (16) be the reduced Hamiltonian system obtained as the symplectic Galerkin projection of (10) induced by $A$. If there exists a bounded neighborhood $S$ of $y_0$ in $\mathbb{R}^{2n}$ such that $H(y) > H(y_0)$, or $H(y) < H(y_0)$, for all $y$ on the boundary of $S$, then both the original system and the reduced system constructed by the symplectic projection are uniformly bounded for all $t$.
Theorem 3.2 (Lyapunov stability [21, 22]).
Consider the Hamiltonian system (10) with Hamiltonian $H$ and the reduced Hamiltonian system (16). Suppose that $y^{*}$ is a strict local minimum of $H$. Let $\mathcal{B}$ be an open ball around $y^{*}$ such that $H(y) > H(y^{*})$, for all $y \in \mathcal{B}\setminus\{y^{*}\}$, and $H(y) \ge \beta$, for all $y \in \partial\mathcal{B}$ and for some $\beta > H(y^{*})$, where $\partial\mathcal{B}$ is the boundary of $\mathcal{B}$. If there exists an open neighborhood of $y^{*}$ such that $y^{*} \in \operatorname{range}(A)$, then the reduced system (16) has a stable equilibrium point in $\mathcal{B}$.
For the time-discretization of (16), the use of a symplectic integrator [23] is crucial for preserving the symplectic structure at the discrete level. In particular, the discrete flow obtained using a symplectic integrator satisfies a discrete version of Proposition 2.2.
In the next section, we introduce different strategies to construct symplectic bases as results of optimization problems.
4 Proper symplectic decomposition
Let us consider the solution vectors $y(t_i) \in \mathbb{R}^{2n}$ (the so-called solution snapshots) obtained, for different time instances $t_i$, $i = 1, \dots, N$, by time discretization of (10) using a symplectic integrator. Define the snapshot matrix
(17) $S = [\, y(t_1) \;\; \cdots \;\; y(t_N) \,] \in \mathbb{R}^{2n \times N}$
as the matrix collecting the solution snapshots as columns. In the following, we consider different algorithms stemming from the historical method of snapshots [4], at the basis of the proper orthogonal decomposition (POD). To preserve the geometric structure of the original model, we focus on a similar optimization problem, the proper symplectic decomposition (PSD), which represents a data-driven basis generation procedure to extract a symplectic basis from $S$. It is based on the minimization of the projection error of $S$ on the reduced basis and results in the following optimization problem for the definition of the symplectic basis $A$:
(18) $\min_{A \in \mathrm{Sp}(2k, \mathbb{R}^{2n})} \| S - A A^{+} S \|_F,$
with $A^{+}$ the symplectic inverse of $A$ and $\|\cdot\|_F$ the Frobenius norm. Problem (18) is similar to the POD minimization, but with the feasibility set of rectangular orthogonal matrices, also known as the Stiefel manifold
$\mathrm{St}(2k, \mathbb{R}^{2n}) := \{ A \in \mathbb{R}^{2n \times 2k} \,:\, A^{\top} A = I_{2k} \},$
replaced by the symplectic Stiefel manifold. Recently, there has been great interest in optimization on symplectic manifolds, and a vast literature is available on the minimization of the least-squares distance from the symplectic Stiefel manifold. This problem has relevant implications in different physical applications, such as the study of optical systems [24] and the optimal control of quantum symplectic gates [25].
Unfortunately, with respect to the POD minimization, problem (18) is significantly more challenging for different reasons. The non-convexity of the feasibility set and the unboundedness of the solution norm preclude standard optimization techniques. Moreover, most of the attention in the literature is focused on the square case, which is not compatible with the reduction goal of MOR.
Despite the interest in the topic, an efficient optimal solution algorithm has yet to be found for the PSD.
Suboptimal solutions have been attained by focusing on the subset of orthosymplectic matrices, i.e.,
(19) $\mathrm{Sp}(2k, \mathbb{R}^{2n}) \cap \mathrm{St}(2k, \mathbb{R}^{2n}).$
In [21], while enforcing the additional orthogonality constraint in (18), the optimization problem is further simplified by assuming a specific structure for $A$. An efficient greedy method, not requiring any additional block structure of $A$, but only its orthogonality and symplecticity, has been introduced in [22]. More recently, in [26], the orthogonality requirement has been removed, and different solution methods for the PSD problem are explored. In the following, we briefly review the above-mentioned approaches.
4.1 SVD-based methods for orthonormal symplectic basis generation
In [21], several algorithms have been proposed to directly construct orthosymplectic bases. Exploiting the SVD decomposition of rearranged snapshot matrices, the idea is to search for optimal matrices in subsets of $\mathrm{Sp}(2k, \mathbb{R}^{2n}) \cap \mathrm{St}(2k, \mathbb{R}^{2n})$. Consider the more restrictive feasibility set
$\left\{ A = \begin{pmatrix} \Phi & 0 \\ 0 & \Phi \end{pmatrix} \,:\, \Phi \in \mathbb{R}^{n \times k} \right\}.$
Then $A \in \mathrm{Sp}(2k, \mathbb{R}^{2n})$ holds if and only if $\Phi^{\top}\Phi = I_k$, i.e. $\Phi \in \mathrm{St}(k, \mathbb{R}^{n})$. Moreover, we have that $A^{+} = A^{\top}$. The cost function in (18) becomes
(20) $\min_{\Phi \in \mathrm{St}(k, \mathbb{R}^{n})} \big\| [\, S_q \;\; S_p \,] - \Phi\Phi^{\top} [\, S_q \;\; S_p \,] \big\|_F,$
with $[\, S_q \;\; S_p \,] \in \mathbb{R}^{n \times 2N}$, where $S_q$ and $S_p$ are the generalized phase-space components of $S$. Thus, as a result of the Eckart–Young–Mirsky theorem, (20) admits a solution in terms of the singular-value decomposition of the data matrix $[\, S_q \;\; S_p \,]$. This algorithm, formally known as Cotangent Lift, owes its name to the interpretation of the solution to (20) as the cotangent lift of linear mappings, represented by $\Phi$ and $\Phi^{\top}$, between vector spaces of dimensions $n$ and $k$. Moreover, this approach constitutes the natural outlet, in the field of Hamiltonian systems, of the preliminary work of Lall et al. [12] on structure-preserving reduction of Lagrangian systems. However, there is no guarantee that the Cotangent Lift basis is close to the optimum of the original PSD functional.

A different strategy, known as Complex SVD decomposition, relies on the definition of the complex snapshot matrix $C = S_q + i S_p$, with $i$ being the imaginary unit. Let $U \in \mathbb{C}^{n \times k}$, with $U^{*}U = I_k$, be the unitary matrix solution to the following accessory problem:
(21) $\min_{U \in \mathbb{C}^{n \times k},\; U^{*}U = I_k} \| C - U U^{*} C \|_F.$
As for the Cotangent Lift algorithm, the solution to (21) is known to be given by the $k$ left-singular vectors of $C$
corresponding to its largest singular values. In terms of the real and imaginary parts of $U = \Phi + i\Psi$, the orthogonality constraint implies
(22) $\Phi^{\top}\Phi + \Psi^{\top}\Psi = I_k, \qquad \Phi^{\top}\Psi = \Psi^{\top}\Phi.$
Consider the orthosymplectic matrix, introduced in [21], given by
(23) $A = \begin{pmatrix} \Phi & -\Psi \\ \Psi & \Phi \end{pmatrix}.$
Using (22), it can be shown that such an $A$ is the optimal solution of the PSD problem in $\mathrm{Sp}(2k, \mathbb{R}^{2n}) \cap \mathrm{St}(2k, \mathbb{R}^{2n})$
that minimizes the projection error of the rotated snapshot matrix $[\, S \;\; J_{2n}^{\top} S \,]$, with $S$ given in (17). In [26], extending the result obtained in [27] for square matrices, it has been shown that (23) is a complete characterization of symplectic matrices with orthogonal columns, meaning that all orthosymplectic matrices admit a representation of the form (23), for a given pair $(\Phi, \Psi)$. In the same work, Haasdonk et al. showed that an orthosymplectic matrix that minimizes the projection error of the rotated snapshot matrix is also a minimizer of the projection error of the original snapshot matrix $S$, and vice versa. This has been achieved by using an equivalence argument based on the POD applied to the rotated snapshot matrix. Thus, combining these two results, the Complex SVD algorithm provides a minimizer of the PSD problem for orthosymplectic matrices.
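Both constructions of this section reduce to a few lines of linear algebra. In the sketch below the snapshot layout ($q$-components in the first $n$ rows, $p$-components in the last $n$) and the sign convention in the Complex SVD block follow (23); the data are synthetic:

```python
import numpy as np

# Sketch of the two SVD-based PSD basis generators.

def poisson(m):
    Z, I = np.zeros((m, m)), np.eye(m)
    return np.block([[Z, I], [-I, Z]])

def cotangent_lift(S, k):
    """Phi from the SVD of the stacked data matrix [S_q, S_p]."""
    n = S.shape[0] // 2
    Phi, _, _ = np.linalg.svd(np.hstack([S[:n], S[n:]]), full_matrices=False)
    Phi = Phi[:, :k]
    Z = np.zeros((n, k))
    return np.block([[Phi, Z], [Z, Phi]])

def complex_svd(S, k):
    """Real/imaginary parts of the left singular vectors of S_q + i S_p."""
    n = S.shape[0] // 2
    U, _, _ = np.linalg.svd(S[:n] + 1j * S[n:], full_matrices=False)
    Phi, Psi = U[:, :k].real, U[:, :k].imag
    return np.block([[Phi, -Psi], [Psi, Phi]])

n, k = 4, 2
rng = np.random.default_rng(3)
S = rng.standard_normal((2 * n, 15))          # synthetic snapshots

for A in (cotangent_lift(S, k), complex_svd(S, k)):
    # both bases are orthonormal and symplectic
    print(np.allclose(A.T @ A, np.eye(2 * k)),
          np.allclose(A.T @ poisson(n) @ A, poisson(k)))
```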
4.2 SVD-based methods for non-orthonormal symplectic basis generation
In the previous section, we showed that the basis provided by the Complex SVD method is not only near-optimal within the block structure of the Cotangent Lift, but is optimal for the PSD cost functional in the space of orthosymplectic matrices. The orthogonality of the resulting basis is beneficial [28], among others, for reducing the condition number associated with the fully discrete formulation of (16). A suboptimal solution to the PSD problem not requiring the orthogonality of the feasibility set is proposed in [21], as an improvement of the SVD-based generators of orthosymplectic bases using the Gappy POD [29], under the name of nonlinear programming approach (NLP). Let $\tilde{A} \in \mathbb{R}^{2n \times 2r}$ be a basis of dimension $2r$ generated using the Complex SVD method. The idea of the NLP is to construct a target basis $A \in \mathbb{R}^{2n \times 2k}$, with $k < r$, via the linear mapping
(24) $A = \tilde{A} C,$
with $C \in \mathbb{R}^{2r \times 2k}$. Using (24) in (18) results in a PSD optimization problem for the coefficient matrix $C$, of significantly smaller dimension ($4rk$ parameters) as compared to the original PSD problem ($4nk$ parameters) with $A$ unknown. However, no optimality results are available for the NLP method.
A different direction has been pursued in [26], based on the connection between the traditional SVD and Schur forms and matrix decompositions related to symplectic matrices, as stated in the following theorem.
Theorem 4.1 (SVD-like decomposition [30, Theorem 1, page 6]).
If $S \in \mathbb{R}^{2n \times m}$, then there exist an orthogonal matrix $V \in \mathbb{R}^{m \times m}$, a symplectic matrix $Q \in \mathrm{Sp}(2n, \mathbb{R}^{2n})$, and $D \in \mathbb{R}^{2n \times m}$ of the form
(25) $D = \begin{pmatrix} \Sigma_s & 0 & 0 & 0 \\ 0 & I_q & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & \Sigma_s & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$, with row blocks of sizes $p$, $q$, $n-p-q$, $p$, $q$, $n-p-q$ and column blocks of sizes $p$, $q$, $p$, $m-2p-q$,
with $\Sigma_s = \operatorname{diag}(\sigma_1, \dots, \sigma_p)$, $\sigma_i > 0$, such that
(26) $S = Q D V^{\top}.$
Moreover, $\operatorname{rank}(S) = 2p + q$ and $\sigma_1, \dots, \sigma_p$ are known as symplectic singular values.
Let us apply the SVD-like decomposition to the snapshot matrix (17), with $m = N$ the number of snapshots, and define its weighted symplectic singular values as
$w_i := \begin{cases} \sigma_i \sqrt{\| q_i \|_2^2 + \| q_{n+i} \|_2^2}, & i = 1, \dots, p, \\ \| q_i \|_2, & i = p+1, \dots, p+q, \end{cases}$
with $q_j$ being the $j$th column of $Q$ and $\|\cdot\|_2$ the Euclidean norm. The physical interpretation of the classical POD approach characterizes the POD reduced basis as the set of a given cardinality that captures most of the energy of the system. The energy retained in the reduced approximation is quantified as the sum of the squared singular values corresponding to the left singular vectors of the snapshot matrix representing the columns of the basis. A similar guiding principle is used in [26], where the energy of the system, i.e., the squared Frobenius norm of the snapshot matrix, is connected to the weighted symplectic singular values as
(27) $\| S \|_F^2 = \sum_{i=1}^{p+q} w_i^2.$
Let $\mathcal{I}_k$ be the set of indices corresponding to the $k$ largest energy contributors in (27),
(28) $\mathcal{I}_k := \underset{\mathcal{I} \subset \{1, \dots, p+q\},\; |\mathcal{I}| = k}{\arg\max} \; \sum_{i \in \mathcal{I}} w_i^2.$
Then, the PSD SVD-like decomposition defines a symplectic reduced basis by selecting the pairs of columns of the symplectic matrix $Q$ corresponding to the index set $\mathcal{I}_k$,
(29) $A = [\, q_i \;\; q_{n+i} \,]_{i \in \mathcal{I}_k}.$
Similarly to the POD, the reconstruction error of the snapshot matrix depends on the magnitude of the discarded weighted symplectic singular values as
(30) $\| S - A A^{+} S \|_F^2 = \sum_{i \in \{1, \dots, p+q\} \setminus \mathcal{I}_k} w_i^2.$
Even though there is no proof that the PSD SVD-like algorithm attains the global optimum in the sense of (18), some analysis and numerical investigations suggest that it provides superior results compared to orthonormal techniques [26].
4.3 Greedy approach to symplectic basis generation
The reduced basis (RB) methodology is motivated by, and applied within, the context of real-time and multi-query simulations of parametrized PDEs. In the framework of Hamiltonian systems, we consider the following parametric form of (10):
(31) $\dot{y}(t; \eta) = J_{2n} \nabla_y H(y(t; \eta); \eta), \qquad y(0; \eta) = y_0(\eta),$
with $\eta \in \Gamma \subset \mathbb{R}^{d}$, $\Gamma$ being a $d$-dimensional parameter space. Let $\mathcal{M}$ be the set of solutions to (31) defined as
$\mathcal{M} := \{ y(t; \eta) \,:\, t \in [0, T],\; \eta \in \Gamma \}.$
For the sake of simplicity, in the previous sections we have only considered the nonparametric case. The extension of SVD-based methods for basis generation to (31) is straightforward on paper, but it is often computationally problematic in practice as the number of snapshots increases. Similar to other SVD-based algorithms, the methods described in the previous sections require the computation of the solution to (31) corresponding to a properly chosen discrete set of parameters and time instances, defined a priori,
and constituting the sampling set. Random or structured strategies exist to define the parameter samples, such as Monte Carlo sampling, Latin hypercube sampling, and sparse grids [31], while the time instances are a subset of the time discretization, usually dictated by the integrator of choice. The set of snapshots corresponding to the sampling set must provide a “good” approximation of the solution manifold and should not miss relevant parts of the time-parameter domain.
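Of the sampling strategies just mentioned, Latin hypercube sampling is simple to implement; the sketch below (for the illustrative domain $\Gamma = [0, 1]^d$) places exactly one sample in each of the $M$ strata per coordinate direction:

```python
import numpy as np

# Sketch: Latin hypercube sampling of Gamma = [0, 1]^d.

def latin_hypercube(M, d, rng):
    H = np.empty((M, d))
    for j in range(d):
        # one point per stratum [i/M, (i+1)/M), in random stratum order
        H[:, j] = rng.permutation(M) + rng.random(M)
    return H / M

rng = np.random.default_rng(4)
Theta = latin_hypercube(M=8, d=2, rng=rng)
print(np.sort(np.floor(Theta * 8), axis=0))   # each column: 0, 1, ..., 7
```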
Once the sampling set has been fixed, the corresponding snapshot matrix ($S$, $[\, S_q \;\; S_p \,]$, or $C$, depending on the method of choice) is assembled, and its singular value decomposition is computed. Even though a certain amount of computational complexity is tolerated in the offline stage to obtain a significant speedup in the online stage, the evaluation of the high-fidelity solution for a large sampling set and the SVD of the corresponding snapshot matrix are often impractical or not even feasible.
Hence, an efficient alternative is an incremental procedure. The reduced basis, whose column space represents the approximating manifold, is improved iteratively by adding basis vectors as columns. The candidate basis vector is chosen as the maximizer of a much cheaper optimization problem. This summarizes the philosophy of the greedy strategy applied to RB methods
[33, 34], which requires two main ingredients: the definition of an error indicator and a process to add a candidate column vector to the basis.
Let $A$ be an orthonormal reduced basis produced after $k$ steps of the algorithm. In its idealized form, introduced in [32], the greedy algorithm uses the projection error
(32) $s^{*} = \underset{s \in \mathcal{S}}{\arg\max} \, \| s - A A^{\top} s \|_2$
to identify the snapshot $s^{*}$ that is worst approximated by the column space of $A$ over the entire sampling set $\mathcal{S}$. Let $\tilde{e}$ be the vector obtained by orthonormalizing $s^{*}$ with respect to $A$. Then the basis is updated as $A \leftarrow [\, A \;\; \tilde{e} \,]$. To avoid the accumulation of rounding errors, it is preferable to utilize backward stable orthogonalization processes, such as the modified Gram-Schmidt orthogonalization. The algorithm terminates when the basis reaches the desired dimension, or when the error (32) is below a certain tolerance. In this sense, the basis is hierarchical, because its column space contains the column spaces of all its previous iterations. This process is referred to as the strong greedy method. Even though introduced as a heuristic procedure, interesting results regarding algebraic and exponential convergence have been formulated in
[33, 34], requiring the orthogonality of the basis in the corresponding proofs. However, in this form, the scheme cannot be efficiently implemented: the error indicator (32) is expensive to calculate because it requires all the snapshots of the training set to be available, relieving the computation only of the cost required for the SVD.

An adjustment of the strong greedy algorithm, known as the weak greedy algorithm, assembles the snapshot matrix iteratively while expanding the approximating basis. The idea is to replace (32) with a surrogate indicator that does not demand the computation of the high-fidelity solution for the entire time-parameter domain.
In the case of elliptic PDEs, an a posteriori residual-based error indicator, requiring a computational cost polynomial in the dimension of the approximation space, has been introduced in [35]. The substantial computational savings allow the choice of a more refined, and therefore more representative, sampling set. One might also use a goal-oriented indicator as the driving selection in the greedy process to obtain similar computational benefits. In this direction, in the framework of structure-preserving model order reduction, [22] suggests the Hamiltonian as a proxy error indicator. Suppose $A \in \mathbb{R}^{2n \times 2k}$, with $k < n$, is a given orthosymplectic basis and consider
(33) $\Delta_H(\eta) := \big| H(y_0(\eta); \eta) - H(A A^{+} y_0(\eta); \eta) \big|.$
By [22, Proposition 15], the error in the Hamiltonian depends only on the initial condition and the symplectic reduced basis. Hence, the indicator (33) does not require integrating in time the full system (31) over the entire parameter set, but only over a small fraction of it, making the procedure fast. Hence, the parameter space can be explored first,
(34) $\eta^{*} = \underset{\eta \in \Theta_s}{\arg\max}\, \Delta_H(\eta),$
to identify the value of the parameter that maximizes the error in the Hamiltonian as a function of the initial condition. This step may fail if $\Delta_H(\eta) = 0$ for all $\eta \in \Theta_s$. Then (31) is temporally integrated at $\eta^{*}$ to collect the snapshot matrix $S_{\eta^{*}}$. Finally, the candidate basis vector is selected as the snapshot that maximizes the projection error
(35) $s^{*} = \underset{s \in S_{\eta^{*}}}{\arg\max}\, \| s - A A^{+} s \|_2.$
Standard orthogonalization techniques, such as QR methods, fail to preserve the symplectic structure [36]. In [22], the SR method [37], based on the symplectic Gram-Schmidt process, is employed to compute the additional basis vector that conforms to the geometric structure of the problem. To conclude the $k$th iteration of the algorithm, the basis is expanded in
$A \leftarrow [\, e_1 \;\cdots\; e_{k+1} \;\; J_{2n}^{\top} e_1 \;\cdots\; J_{2n}^{\top} e_{k+1} \,].$
We stress that, with this method, known as symplectic greedy RB, two vectors, $e_{k+1}$ and $J_{2n}^{\top} e_{k+1}$, are added to the symplectic basis at each iteration, because of the structure of orthosymplectic matrices. A different strategy, known as the PSD-Greedy algorithm and partially based on the PSD SVD-like decomposition, has been introduced in [38], with the feature of not using orthogonal techniques to compress the snapshot matrix. In [22], following the results given in [33], the exponential convergence of the symplectic strong greedy method has been proved.
Theorem 4.2 ([33, Theorem 20, page A2632]).
Let $\mathcal{M}$ be a compact subset of $\mathbb{R}^{2n}$. Assume that the Kolmogorov width of $\mathcal{M}$, defined as
$d_m(\mathcal{M}) := \inf_{\substack{V \subset \mathbb{R}^{2n} \\ \dim V = m}} \; \sup_{y \in \mathcal{M}} \; \inf_{v \in V} \| y - v \|_2,$
decays exponentially fast, namely $d_m(\mathcal{M}) \le c\, e^{-\alpha m}$ with $c, \alpha > 0$. Then there exist $C, \beta > 0$ such that the symplectic basis $A$ generated by the symplectic strong greedy algorithm provides exponential approximation properties,
(36) $\| y - A A^{+} y \|_2 \le C e^{-\beta k},$
for all $y \in \mathcal{M}$.
Theorem 4.2 holds only when the projection error is used as the error indicator instead of the error in the Hamiltonian. However, it has been observed for different symplectic parametric problems [22] that the symplectic method using the loss in the Hamiltonian converges at the same rate as (36). The orthogonality of the basis is used to prove the convergence of the greedy procedure. In the case of a non-orthonormal symplectic basis, supplementary assumptions are required to ensure the convergence of the algorithm.
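The greedy loop with pairwise basis expansion can be sketched as follows. The projection-error indicator (rather than the Hamiltonian-based one) is used, and the expansion step orthogonalizes the selected snapshot against both the current columns $e_i$ and $J_{2n}^{\top} e_i$, so that $A = [E \;\; J_{2n}^{\top} E]$ remains orthosymplectic; this is a simplified stand-in for the SR-based procedure of [22], not that algorithm verbatim:

```python
import numpy as np

# Sketch of a strong-greedy construction of an orthosymplectic basis.

def poisson(m):
    Z, I = np.zeros((m, m)), np.eye(m)
    return np.block([[Z, I], [-I, Z]])

def expand(E, s, Jt):
    """Orthogonalize s against [E, Jt E] and append it to E."""
    v = s.copy()
    for u in np.hstack([E, Jt @ E]).T:
        v -= (u @ v) * u                      # modified Gram-Schmidt
    return np.hstack([E, (v / np.linalg.norm(v))[:, None]])

n = 5
J = poisson(n)
Jt = J.T
rng = np.random.default_rng(5)
S = rng.standard_normal((2 * n, 12))          # synthetic snapshot set

E = np.zeros((2 * n, 0))
for _ in range(3):                            # three greedy iterations
    A = np.hstack([E, Jt @ E])
    errs = np.linalg.norm(S - A @ (A.T @ S), axis=0)   # projection error
    E = expand(E, S[:, np.argmax(errs)], Jt)

A = np.hstack([E, Jt @ E])                    # final basis, 2k = 6 columns
print(np.allclose(A.T @ A, np.eye(6)))
print(np.allclose(A.T @ J @ A, poisson(3)))
```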
5 Dynamical low-rank reduced basis methods for Hamiltonian systems
The Kolmogorov $m$-width of a compact set describes how well this set can be approximated by a linear subspace of a fixed dimension $m$. A problem (31) is informally defined as reducible if $d_m(\mathcal{M})$ decays sharply with $m$, implying the existence of a low-dimensional representation of $\mathcal{M}$. A slow decay limits the accuracy of any efficient projection-based reduction on linear subspaces, including all the methods discussed so far. For Hamiltonian problems, often characterized by the absence of physical dissipation due to the conservation of the Hamiltonian, we may have $d_m(\mathcal{M}) \sim m^{-1/2}$ in the case of discontinuous initial conditions [39] for wave-like problems.
Several techniques, either based on nonlinear transformations of the solution manifold into a reducible framework [40] or presented as online adaptive methods targeting solution manifolds at a fixed time [41], have been introduced to overcome the limitations of linear approximating spaces. In different ways, they all abandon the framework of symplectic vector spaces. Therefore, none of them guarantees conservation of the symplectic structure in the reduction process. Musharbash et al. [42] proposed a dynamically orthogonal (DO) discretization of stochastic wave PDEs with a symplectic structure. In the following, we outline the structure-preserving dynamical RB method for parametric Hamiltonian systems, proposed by Pagliantini [44] in the spirit of the geometric reduction introduced in [43]. In contrast with traditional methods that provide a global basis, fixed in time, the gist of a dynamic approach is to evolve a local-in-time basis to provide an accurate approximation of the solution to the parametric problem (31). The idea is to exploit the local low-rank nature of Hamiltonian dynamics in the parameter space.
From a geometric perspective, the approximate solution evolves according to naturally constrained dynamics, rather than weakly enforcing the required properties, such as orthogonality or symplecticity of the RB representation, via Lagrange multipliers. This result is achieved by viewing the flow of the reduced model as prescribed by a vector field that is everywhere tangent to the desired manifold.
Suppose we are interested in solving (31) for a set of $p$ vector-valued parameters $\{\eta_j\}_{j=1}^{p}$, sampled from $\Gamma$. Then the Hamiltonian system, evaluated at the sampled parameters, can be recast as a set of ODEs in the matrix unknown $R(t) \in \mathbb{R}^{2n \times p}$,
(37) $\dot{R}(t) = J_{2n} \nabla \mathcal{H}(R(t)), \qquad R(0) = R_0,$
where $\mathcal{H}$ is a vector-valued Hamiltonian function, the $j$th column of $R$ is such that $R_j(t) = y(t; \eta_j)$, and $R_0 = [\, y_0(\eta_1) \;\cdots\; y_0(\eta_p) \,]$. We consider an approximation of the solution to (37) of the form
(38) $R(t) \approx U(t) Z(t),$
where $U(t) \in \mathbb{R}^{2n \times 2k}$ is an orthosymplectic basis, and $Z(t) \in \mathbb{R}^{2k \times p}$ is such that its $j$th column collects the coefficients, with respect to the basis $U(t)$, of the approximation of $y(t; \eta_j)$. Despite being cast in the same framework as an RB approach, a stark difference between (38) and (11) lies in the time-dependency of the basis in (38).
Consider the manifold of matrices having at most rank $2k$, defined as
(39) $\mathcal{M}_{2k} := \{ R \in \mathbb{R}^{2n \times p} \,:\, R = UZ \},$
with the technical requirement
(40) $Z Z^{\top}$ invertible.
This represents a full-rank condition on $Z$ to ensure uniqueness of the representation (38) for a fixed basis. The tangent vector at $R \in \mathcal{M}_{2k}$ is given by $\dot{R} = \dot{U} Z + U \dot{Z}$, where $\dot{U}$ and $\dot{Z}$ correspond to the tangent directions for the time-dependent matrices $U$ and $Z$, respectively. Applying the orthogonality and symplecticity conditions on $U$, for all times $t$, results in
(41) $\dot{U}^{\top} U + U^{\top} \dot{U} = 0, \qquad \dot{U}^{\top} J_{2n} U + U^{\top} J_{2n} \dot{U} = 0,$
respectively. Using (41) and an additional gauge constraint to uniquely parametrize the tangent vectors by the displacements $\dot{U}$ and $\dot{Z}$, the tangent space of $\mathcal{M}_{2k}$ at $R$ can be characterized explicitly.
The reduced flow describing the evolution of the approximation is derived in [44] by projecting the full velocity field in (37) onto the tangent space of $\mathcal{M}_{2k}$ at $R$, i.e.,
(42) $\dot{R}(t) = \Pi_{T_{R}\mathcal{M}_{2k}} \big( J_{2n} \nabla \mathcal{H}(R(t)) \big).$
To preserve the geometric structure of the problem, the projection operator is a symplectomorphism (see Definition 2.3) for each realization of the parameter, in the sense given in the following proposition.
Proposition 5.1 ([44, Proposition 4.3, page 420]).
Let . Then, the map
(43)  
is a symplectic projection, in the sense that
where is the symplectic form associated with the parameter .
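As a small numerical sanity check of the two conditions in (41) at a fixed time, one can verify that a basis built from canonical coordinate directions is both orthogonal and symplectic (the names `U` and `J`, and the canonical choice of columns, are ours, purely for illustration):

```python
import numpy as np

def J(n):
    # Canonical symplectic matrix J_{2n} = [[0, I_n], [-I_n, 0]].
    I, Z = np.eye(n), np.zeros((n, n))
    return np.block([[Z, I], [-I, Z]])

N, k = 4, 2
# A simple orthosymplectic basis: the first k "position" directions and
# the first k "momentum" directions of R^{2N}, stacked as columns.
E = np.eye(2 * N)
U = np.hstack([E[:, :k], E[:, N:N + k]])  # shape (2N, 2k)

orth = np.allclose(U.T @ U, np.eye(2 * k))    # orthogonality condition
sympl = np.allclose(U.T @ J(N) @ U, J(k))     # symplecticity condition
```

Both conditions reduce the big symplectic form to the small one, which is exactly what makes the projected dynamics inherit the Hamiltonian structure.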
The optimality of the reduced dynamics, in the Frobenius norm, follows from (42), where the flow of is prescribed by the best low-rank approximation of the Hamiltonian velocity field onto the tangent space of the reduced manifold . Using (43) and (42), it is straightforward to derive the evolution equations for and :
(44) 
with .
The coefficients evolve according to a system of independent Hamiltonian equations, each in unknowns, corresponding to the symplectic Galerkin projection onto for each parameter instance in , similarly to the global symplectic RB method (16). In (44), however, the basis evolves in time according to a matrix equation in unknowns, affecting the projection.
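For a frozen basis, the coefficient equations form a Hamiltonian system, so a symplectic integrator such as the implicit midpoint rule conserves a quadratic reduced Hamiltonian up to roundoff. A minimal sketch, assuming a quadratic reduced Hamiltonian H_r(z) = ½ zᵀQz (the test problem, step size, and all names are our assumptions):

```python
import numpy as np

k = 2
rng = np.random.default_rng(3)
M = rng.standard_normal((2 * k, 2 * k))
Q = M.T @ M + np.eye(2 * k)  # symmetric positive definite

I2k = np.eye(2 * k)
J2k = np.block([[np.zeros((k, k)), np.eye(k)],
                [-np.eye(k), np.zeros((k, k))]])
A = J2k @ Q  # linear Hamiltonian field z' = J_{2k} Q z

def midpoint_step(z, h):
    # Implicit midpoint rule (symplectic); for a linear field it
    # amounts to one linear solve.
    return np.linalg.solve(I2k - 0.5 * h * A, (I2k + 0.5 * h * A) @ z)

z = rng.standard_normal(2 * k)
H0 = 0.5 * z @ Q @ z
for _ in range(200):
    z = midpoint_step(z, 0.05)
drift = abs(0.5 * z @ Q @ z - H0) / abs(H0)
```

The energy drift stays at roundoff level because the midpoint rule conserves quadratic first integrals exactly; in the full method the basis evolves simultaneously, which is what the intrinsic integrators discussed next must handle.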
A crucial property of the structure of is given in the following proposition.
Standard numerical integrators, applied to (44), do not preserve, at the time-discrete level, the property in Proposition 5.2, and the orthosymplectic structure is compromised after a single time step. In [44], two different intrinsic integrators, based on Lie group and tangent space techniques, have been investigated to preserve the orthosymplecticity of the basis. Both methods require the introduction of a local chart defined on the tangent space of the manifold at , with
and being the vector spaces of skew-symmetric and Hamiltonian real square matrices, respectively. In terms of differential manifolds, , together with the Lie bracket defined as the matrix commutator , with , forms the Lie algebra corresponding to the Lie group . The idea is to recast the basis equation in (44) as an evolution equation in the corresponding Lie algebra. The linearity of Lie algebras makes it possible to compute, via explicit Runge-Kutta methods, numerical solutions that remain in the Lie algebra. Finally, the Cayley transform is exploited to generate local coordinate charts and retraction/inverse retraction maps, used to recover the solution in the manifold of rectangular orthosymplectic matrices. In [45], the structure-preserving dynamical RB method has been paired with a rank-adaptive procedure, based on a residual error estimator, to dynamically update the dimension of the basis as well.
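The role of the Cayley transform as a map from the Lie algebra of skew-symmetric Hamiltonian matrices to the group of orthosymplectic matrices can be illustrated on a small square example (the block parametrization of `Omega` below, with `X` skew-symmetric and `Y` symmetric, is standard, but all names and sizes are our assumptions):

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
Jn = np.block([[np.zeros((n, n)), np.eye(n)],
               [-np.eye(n), np.zeros((n, n))]])

# An element of so(2n) ∩ sp(2n): Omega = [[X, Y], [-Y, X]]
# with X skew-symmetric and Y symmetric.
B = rng.standard_normal((n, n))
X = B - B.T
C = rng.standard_normal((n, n))
Y = C + C.T
Omega = np.block([[X, Y], [-Y, X]])

def cayley(W):
    # Cayley transform (I - W/2)^{-1}(I + W/2): maps the Lie algebra
    # into the Lie group; always defined for skew-symmetric W.
    I = np.eye(W.shape[0])
    return np.linalg.solve(I - 0.5 * W, I + 0.5 * W)

G = cayley(Omega)
is_orthogonal = np.allclose(G.T @ G, np.eye(2 * n))
is_symplectic = np.allclose(G.T @ Jn @ G, Jn)
```

A skew-symmetric argument yields an orthogonal image and a Hamiltonian argument yields a symplectic one, so their intersection lands in the orthosymplectic group, which is what the retraction exploits.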
6 Extensions to more general Hamiltonian problems
6.1 Dissipative Hamiltonian systems
Many areas of engineering require a more general framework than the one offered by classical Hamiltonian systems, namely the inclusion of energy-dissipating elements. While the principle of energy conservation is still used to describe the state dynamics, dissipative perturbations must be modelled and introduced in the Hamiltonian formulation (10). Dissipative Hamiltonian systems, with so-called Rayleigh-type dissipation, can be considered a special case of forced Hamiltonian systems, with the state , with , following the time evolution given by
(45) 
where is a velocity field, introducing dissipation, of the form
(46) 
We require to satisfy , , so that it represents a dissipative term, and therefore
(47) 
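For intuition, the dissipation inequality (47) can be checked numerically on a damped harmonic oscillator, the simplest Rayleigh-type example (the specific system, the damping coefficient `r`, and the use of RK4 are our choices for illustration, not part of the formulation):

```python
import numpy as np

# Damped harmonic oscillator as a dissipative Hamiltonian system:
#   q' = p,   p' = -q - r*p,   H(q, p) = 0.5*(q^2 + p^2),
# with Rayleigh damping r >= 0, so that dH/dt = -r*p^2 <= 0.
r, h = 0.5, 1e-3

def f(state):
    q, p = state
    return np.array([p, -q - r * p])

def rk4_step(s):
    # Classical RK4 (illustrative only, not structure-preserving).
    k1 = f(s)
    k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2)
    k4 = f(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.0, 0.0])
energies = [0.5 * float(s @ s)]
for _ in range(5000):
    s = rk4_step(s)
    energies.append(0.5 * float(s @ s))

monotone = all(e1 <= e0 + 1e-12
               for e0, e1 in zip(energies, energies[1:]))
```

The Hamiltonian decays monotonically along the trajectory, consistent with the sign condition imposed on the dissipative term.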
In terms of Rayleigh dissipation theory, there exists a symmetric positive semidefinite matrix such that and (47) reads
Several strategies have been proposed to generate stable reduced approximations of (45), based on Krylov subspaces or POD [46, 48]. In [47], without requiring the symplecticity of the reduced basis, the gradient of the Hamiltonian vector field is approximated using a projection matrix , i.e., , which results in a non-canonical symplectic reduced form. The stability of the reduced model is then achieved by preserving the passivity of the original formulation. A drawback of such an approach is that, while viable for non-dissipative formulations, it does not guarantee the same energy distribution of (45) between dissipative and null energy contributors. In the following, we show that the techniques based on symplectic geometry introduced in the previous sections can still be used in the dissipative framework described in (45), with limited modifications, to obtain consistent and structured reduced models. Let us consider an orthosymplectic basis and the reduced basis representation , with being the reduced coefficients of the representation and being the generalized phase coordinates of the reduced model. The basis can be represented as
(48) 
with being the blocks, whose indices are chosen to represent the interactions between the generalized phase coordinates of the two models, such that and . Following [49], the symplectic Galerkin projection of (45) reads
(49) 
with