I Introduction
The goal of this paper is to develop a machine learning framework that identifies the phase space structures governing phase space transport in data sets constructed from trajectories of Hamilton’s equations. Our focus in this paper will be on two degrees-of-freedom Hamiltonian systems.
The Hamiltonian function typically has the form of the sum of a kinetic energy, a function of the momentum variables, and a potential energy, a function of the position variables. For Hamilton’s equations in canonical form, each momentum variable is canonically conjugate to one position variable; hence there are an equal number of momentum and position variables. The space of both the momentum and position variables is referred to as the phase space, and the space of only the position variables is referred to as the configuration space. The number of degrees of freedom of a Hamiltonian system is the number of configuration space variables.
Hamilton’s equations describe dynamics in phase space. Nevertheless, the topography of the potential energy surface plays an important role in defining the reaction dynamics problem for a specific Hamiltonian system. In particular, potential wells are identified as reactants and products. Wells are separated by saddle points and the configuration space picture of reaction dynamics is that of a trajectory evolving between wells by “crossing” the saddle point.
Index-one saddle points on a potential energy surface (PES) are the “seeds” for the phase space structures from which the theory of reactive islands is constructed. Conley [1, 2] was the first to analyze the phase space geometry and dynamics near an index-one saddle in two degrees-of-freedom (DoF) Hamiltonian systems in his studies of the circular restricted three-body problem.
The Lyapunov subcenter theorem [3, 4] is fundamental for passing from an equilibrium point (the index-one saddle) to dynamical behavior. It states that for a range of energies above the energy of the index-one saddle (the exact range is not given in the theorem) there exists a hyperbolic periodic orbit having two-dimensional stable and unstable manifolds. In the three-dimensional energy surface, the stable and unstable manifolds have the geometry of cylinders (“tubes”), and these stable and unstable cylinders mediate transport across the region of the index-one saddle.
The cylindrical structure of the stable and unstable manifolds, together with their invariance, implies that trajectories starting within the tubes must remain in the tubes for all positive and negative time. All trajectories inside the stable cylinder approach the hyperbolic periodic orbit in positive time, pass through the region bounded by the periodic orbit, and then exit the region through the unstable cylinder. The stable cylinder (and the unstable cylinder) has two “branches” joined at the periodic orbit, one emanating to each “side” of the orbit.
In the context of escape from a potential well, escaping trajectories must lie in a branch of the stable cylinder. Trajectories starting in the branch of the stable cylinder lying in the potential well correspond to forward escape trajectories, i.e. trajectories that escape in forward time. Trajectories starting in the branch of the stable cylinder lying outside the potential well correspond to backward escape trajectories, i.e. trajectories that are captured in the well in forward time. After escape from/capture in the well, trajectories are guided by an unstable cylinder. This is how stable and unstable cylinders govern the dynamics in phase space.
We describe a lower dimensional technique for probing the geometry of the cylinders. In the potential well we construct a two-dimensional Poincaré section, i.e. a two-dimensional surface to which the Hamiltonian vector field is everywhere transverse. The Poincaré section is constructed in such a way that the stable (and unstable) cylinder intersects it in a topological circle. The region bounded by this topological circle is referred to as a “reactive island”, where ‘reactive’ refers to the occurrence of a chemical reaction, an analogue of the escape from a potential well considered here. The Poincaré map of the Poincaré section into itself associates to a point its first return to the Poincaré section under the flow generated by the Hamiltonian vector field. The inverse of this Poincaré map associates to a point its first return to the Poincaré section under negative time, i.e. the point “where it came from”.
We consider the preimages of this reactive island by considering its evolution backwards in time under the inverse of the Poincaré map. In this way one obtains a reactive island on the Poincaré section that returns to the original reactive island, and then escapes for positive time evolution. By repeating this construction we obtain a sequence of reactive islands, ordered in time, which sequentially map to each other before reacting. Possible pathological intersections of the cylinders with the Poincaré section can occur, and are discussed in [5]. A theory of reaction dynamics for two DoF systems based on the geometry of these stable and unstable cylinders was developed in the late 1980’s and 90’s in [6, 5, 7, 8, 9, 10, 11, 12], although some of the ideas appeared in earlier work [13, 14], and it goes by the name of “reactive island theory”.
Our goal is to consider a data set consisting of points on the energy surface that are labeled based on the evolution of the corresponding trajectory and “learn” which points lead to an escape from a potential well and which do not. We have chosen to use support vector machines (SVM), a class of supervised learning classification and regression algorithms [15, 16, 17], which was introduced to nonlinear dynamical systems by Ref. [18]. We employ the classification algorithms, which define a decision function by determining the boundary between different classes of data. To be precise, SVM identifies a subset of the data set, referred to as “support vectors”, to calculate the boundary for which the distance to data in both classes is locally maximal. In our case, the classes in the classification correspond to initial conditions leading to “qualitatively different” dynamical behaviour. The learned boundary between escaping and non-escaping trajectories in phase space consists of the invariant manifolds discussed above, and thereby SVM enables us to determine the geometrical structures in phase space governing reaction dynamics.

The reasons for choosing support vector machines (SVM) to identify the phase space structures are: (i) Nonlinear kernels in SVM provide the means to approximate curves, such as reactive islands, which form nonlinear boundaries between reactive and nonreactive trajectories. We note that methods for nonlinear clustering [19] and other kernel methods [20] are also candidates for developing similar approaches, but are beyond the scope of this study. (ii) We design our approach with extensions to higher-dimensional applications in mind. SVM is known to work well even with small amounts of data; therefore our approach is computationally better suited than existing methods for high-dimensional systems and systems where integrating trajectories is expensive. This makes SVM suitable for the reactive island theory of three DoF [21] and system-bath models of isomerization [22], which have been developed recently and support the case for a computationally efficient approach to phase space structures presented here.
The Hamiltonian system that we will use to benchmark our approach is the Hénon-Heiles system [23]. This is a two degrees-of-freedom Hamiltonian system that serves as a paradigm for understanding complex dynamics in a variety of settings. As a function of the energy, it can display dynamical behavior that numerically appears to range from “near integrable” to completely chaotic. Moreover, this system has three index-one saddles that define three distinct reaction channels having the geometric structure discussed above. The geometry of reaction dynamics for a similar system has been analyzed and discussed in Ref. [24].
This paper is outlined as follows. In Section II we describe the two degrees-of-freedom Hénon-Heiles Hamiltonian system and the nature of reaction dynamics in the context of this model. In Section III we describe the machine learning technique of support vector machines (SVM) and the approach of active learning. We also describe how trajectories are assembled into data sets and the use of Lagrangian descriptors as a trajectory diagnostic. In Section IV we describe our results, and in Section V we present our conclusions and the outlook for further research.

II Hénon-Heiles system and reaction dynamics
We use the Hénon-Heiles Hamiltonian with three index-one saddles in bottlenecks through which trajectories can escape. This escape can be interpreted as a reaction by crossing the potential energy barrier. This model system and its high dimensional analogs have been studied in great detail in nonlinear dynamical systems and statistical mechanics, and for developing molecular simulation algorithms [25, 26, 27, 28, 29]. In this study, we will define the three exits as follows: entry via the top index-one saddle corresponds to the formation of a molecular complex by combination of two atoms or molecules, while the left and right exits correspond to dissociation of the molecule into two different products with structural symmetry.
The Hamiltonian is given by
H(x, y, p_x, p_y) = \frac{1}{2}\left(p_x^2 + p_y^2\right) + \frac{1}{2}\left(x^2 + y^2\right) + x^2 y - \frac{1}{3} y^3,  (1)
where all the parameters are set to unity to be comparable with the known results in the literature [23, 30, 31]. The vector field is given by
\dot{x} = p_x, \quad \dot{y} = p_y, \quad \dot{p}_x = -x - 2xy, \quad \dot{p}_y = -y - x^2 + y^2,  (2)
and the equilibrium points are
(x, y, p_x, p_y) = (0, 0, 0, 0), \;\; (0, 1, 0, 0), \;\; \left(\pm\tfrac{\sqrt{3}}{2}, -\tfrac{1}{2}, 0, 0\right).  (3)
The eigenvalues of the linearized system (see Appendix A) at the equilibrium points show that the origin is a center × center equilibrium point and the three remaining equilibria are saddle × center equilibrium points.

We illustrate the dynamics given by the vector field (2) by showing in Fig. 1 four sample trajectories at the same energy, starting at the center × center equilibrium point with different momenta. In Fig. 1 (a,c), the initial momenta differ only in the second significant digit, and yet the escape from the potential well occurs via different bottlenecks and at different times. However, in Fig. 1 (b,d), a similar difference in the initial momenta only changes the escape time. This illustrates the challenge of sampling an ensemble of escape (transition) trajectories. Furthermore, the challenge in identifying different timescales of escape trajectories is due to the large variation in time to escape from the potential well when there is merely a small difference in initial conditions. We will next show the underlying phase space structure of such escape behavior and then show the use of trajectory data in identifying the structure. Finally, it is important to note that for the total energies that we consider the energy surface is unbounded. This implies that notions of recurrence from standard ergodic theory, such as Poincaré recurrence and ergodicity, do not apply, because escaping trajectories become unbounded and never return.
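The vector field (2) and the Hamiltonian (1) are straightforward to integrate numerically. The following sketch is a minimal illustration, not the code used in this study; the initial condition, momenta, and integration time are arbitrary choices. It integrates one trajectory with SciPy, checks that the vector field vanishes at one of the index-one saddles in Eqn. (3), and verifies energy conservation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def henon_heiles(t, s):
    """Vector field (2); the state is s = (x, y, p_x, p_y)."""
    x, y, px, py = s
    return [px, py, -x - 2.0 * x * y, -y - x**2 + y**2]

def energy(s):
    """Hamiltonian (1) evaluated on a state s = (x, y, p_x, p_y)."""
    x, y, px, py = s
    return 0.5 * (px**2 + py**2) + 0.5 * (x**2 + y**2) + x**2 * y - y**3 / 3.0

# the vector field vanishes at the index-one saddles, e.g. (sqrt(3)/2, -1/2, 0, 0)
saddle = [np.sqrt(3.0) / 2.0, -0.5, 0.0, 0.0]

# integrate a trajectory started at the centre equilibrium with small momenta
s0 = [0.0, 0.0, 0.1, 0.2]
sol = solve_ivp(henon_heiles, (0.0, 50.0), s0, rtol=1e-10, atol=1e-12)

# energy drift along the numerically integrated trajectory
drift = abs(energy(sol.y[:, -1]) - energy(s0))
```

For this initial condition the total energy (0.025) lies below the saddle energy, so the sample trajectory remains bounded; escape requires an energy above that of the saddle.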
The trajectory behavior shown in Fig. 1 is mediated by the stable manifolds (cylindrical geometry or tubes) of the hyperbolic periodic orbits associated with the index-one saddles. For the two degrees-of-freedom Hamiltonian system, the tube manifolds can be computed and visualized in their complete geometry, as shown in Fig. 2 (a) for one of the index-one saddles. The stable manifolds associated with the three saddles, projected on the configuration space, are shown in Fig. 2 (b) (in Appendix A, Fig. 9 shows all the tube manifolds in 3D). In this article, we compute the stable manifolds using the procedure described in Appendix A to obtain the segments which intersect a chosen section, shown in Fig. 2 (b). Given such a section of the three-dimensional energy surface, the first order reactive islands of escape are defined as the last intersection of the stable manifolds with the section. By last intersection, we mean that the trajectories in forward time intersect the section and then leave the potential well without returning to the section; this is also referred to as imminent escape.
III Support vector machines and active learning
The classification algorithms for SVM [15, 16, 17] construct a boundary between different classes of data. In our case, the classes correspond to areas of qualitatively different dynamics on the chosen section. By qualitatively different dynamics we mean either areas leading to imminent escape from the potential well over the corresponding saddle or the complementary area that does not lead to imminent escape. The exact boundaries between these areas are the reactive islands formed by the stable and unstable invariant manifolds of the hyperbolic periodic orbits discussed above. An SVM classification algorithm, also referred to as a support vector classifier (SVC), therefore approximates reactive islands in this setting.
Similarly to Ref. [32], we use the scikit-learn [33] implementation of SVM [34]. The implementation can be used with various kernels, of which the radial basis function kernel is best suited to approximate reactive islands in the Hénon-Heiles system, which are topological circles. With this kernel, a previously unseen data point \mathbf{x} is predicted to belong to a class using the decision function

f(\mathbf{x}) = \operatorname{sign}\left( \sum_i \alpha_i y_i \exp\left(-\gamma \|\mathbf{x} - \mathbf{x}_i\|^2\right) + b \right),  (4)
where \gamma controls the width of the Gaussian, the y_i are the class labels of the training data \mathbf{x}_i, and the \alpha_i are weights calculated by the algorithm, of which only a number are nonzero. These nonzero weights correspond to the training data called support vectors. The weights are calculated by the SVC such that the distance between the predicted boundary and the closest points of every class is maximised, as illustrated in Fig. 3.
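As a concrete check of the decision function (4), the sketch below evaluates the sum in Eqn. (4) by hand on hypothetical toy data (a circle standing in for a reactive island; the values of C and \gamma are arbitrary choices). It uses the support vectors, the products \alpha_i y_i, and the intercept b exposed by scikit-learn, and compares the result to the library's own decision function.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# hypothetical toy data: class 1 inside a circle, a stand-in for a reactive island
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (np.linalg.norm(X, axis=1) < 0.5).astype(int)

gamma = 2.0  # width of the Gaussian in Eqn (4); an arbitrary choice here
clf = SVC(kernel="rbf", C=10.0, gamma=gamma).fit(X, y)

# Eqn (4) by hand: sum_i alpha_i y_i exp(-gamma ||x - x_i||^2) + b, where
# dual_coef_ stores alpha_i y_i for the support vectors and intercept_ stores b
x_new = np.array([0.1, 0.2])
k = np.exp(-gamma * np.sum((clf.support_vectors_ - x_new) ** 2, axis=1))
manual = float(clf.dual_coef_[0] @ k + clf.intercept_[0])
```

The sign of `manual` gives the predicted class, matching `clf.decision_function`.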
The upper bound on the weights is a user-defined value C that controls the complexity of the decision boundary: a low value of C gives a smoother decision boundary, while a high value of C leads to higher accuracy on the training data. In this article, we first perform a search over a wide interval of C and \gamma values, as shown in Fig. 4. Then, a smaller interval for both parameters is chosen for each of the support vector classifier approaches. Cross-validation ensures that the trained model does not suffer from overfitting by splitting the training data into 5 folds, each of which is used in turn as a test set with the remaining four as the training set.
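The search over C and \gamma with 5-fold cross-validation can be sketched with scikit-learn's GridSearchCV; the toy data and the parameter intervals below are illustrative assumptions, not the ones used in this study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
# hypothetical toy data standing in for labelled trajectories on a section
X = rng.uniform(-1.0, 1.0, size=(300, 2))
y = (np.linalg.norm(X, axis=1) < 0.6).astype(int)

# coarse grid over C and gamma; cv=5 gives the 5-fold cross-validation described above
param_grid = {"C": [0.1, 1.0, 10.0, 100.0], "gamma": [0.1, 1.0, 10.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
best_score = search.best_score_
```

`search.best_params_` then suggests the smaller interval around which a refined search can be performed.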
While it is possible to apply SVM to a fixed training data set, such as a regular grid, the accuracy of the resulting decision boundary will be limited by the amount of training data and its spacing. A significantly less data-intensive approach is offered by active learning [35, 36], where the ‘learner’ biases its sampling based on information obtained from previous samples. To do this, we start SVM with a coarse grid of data points, iteratively add data points in the proximity of the decision boundary, and rerun SVM. This allows the algorithm to explore the intricate structures usually formed by invariant manifolds in systems describing chemical reaction dynamics. At every iteration we randomly add one data point near each of a number of randomly selected support vectors. The point is added using the multivariate normal distribution \mathcal{N}(\mathbf{x}_i, \sigma^2 I), where \mathbf{x}_i is the support vector and I the identity matrix.
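The active learning iteration can be sketched as follows, with a toy labelling oracle standing in for trajectory integration; the grid size, number of iterations, sampling width \sigma, number of points per support vector, and SVC parameters are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def label(points):
    # toy oracle standing in for trajectory integration: class 1 inside a circle
    return (np.linalg.norm(points, axis=1) < 0.5).astype(int)

# start from a coarse regular grid of data points on the section
g = np.linspace(-1.0, 1.0, 8)
X = np.array([[a, b] for a in g for b in g])
y = label(X)

sigma = 0.1  # assumed sampling width for N(x_i, sigma^2 I) around a support vector x_i
for _ in range(3):
    # train the classifier on the current data set
    clf = SVC(kernel="rbf", C=100.0, gamma=10.0).fit(X, y)
    # add labelled points drawn from a multivariate normal around each support vector
    new = np.vstack([rng.multivariate_normal(sv, sigma**2 * np.eye(2), size=10)
                     for sv in clf.support_vectors_])
    X, y = np.vstack([X, new]), np.concatenate([y, label(new)])

# rerun SVM on the enriched data set and test the boundary on a fine grid
clf = SVC(kernel="rbf", C=100.0, gamma=10.0).fit(X, y)
gt = np.linspace(-1.0, 1.0, 40)
Xt = np.array([[a, b] for a in gt for b in gt])
accuracy = clf.score(Xt, label(Xt))
```

Because new samples concentrate near the current decision boundary, the data set grows mostly where the classification is hardest.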
We would like to point out the importance of the precise problem formulation. Homoclinic and heteroclinic intersections of invariant manifolds lead to fractal structures, that is, a fractal boundary between classes of dynamics. There is no known way to resolve fractal structures with finite precision and a finite number of data points, so approximating the boundary using fixed-width Gaussians is bound to fail. In many systems it is possible to avoid these fractal structures by carefully selecting a surface of initial conditions and studying the dynamics under the corresponding return (Poincaré) map.
IV Results and discussion
In the figures below, the cyan dots denote the support vectors used by the classifier to learn the decision boundary and the black dashed line denotes the learned decision boundary. The true reactive islands are shown as red, green, and blue curves.
Fixed training data. We sample initial conditions on a two-dimensional section of the three-dimensional energy surface. In this study, we present the results for two sections, one at the location of the well and one near the bottleneck. We also show the results for a range of total energies to illustrate the approach for various imbalance (ratio of reactive to nonreactive trajectories) in the training data corresponding to different excess energies. We generate a grid of initial conditions in the section coordinates and sample the remaining momentum using the fixed energy condition. Then, we run trajectories with these initial conditions for a fixed prediction time horizon. We classify the escape trajectories as reactive through bottleneck 1, 2, or 3 if they cross a line beyond the corresponding saddle. If an initial condition does not satisfy any of the above conditions within the chosen time interval, we label it as nonreactive, denoted by 0. Thus, we obtain a multi-label training data set with two features, the section coordinates, to discover the distribution of reactive trajectories on the two-dimensional section. We use a smaller interval for the Gaussian radial basis function parameters C and \gamma to perform a grid search for high accuracy, along with 5-fold cross-validation during the training.
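Sampling the remaining momentum from the fixed energy condition can be sketched as follows; the section value and total energy below are placeholders, not the values used in the text. On a section y = y_0 with coordinates (x, p_x), the condition H = E determines p_y up to sign, and grid points outside the energetically accessible region are discarded.

```python
import numpy as np

def V(x, y):
    """Henon-Heiles potential energy, the position part of Eqn (1)."""
    return 0.5 * (x**2 + y**2) + x**2 * y - y**3 / 3.0

def py_on_section(x, px, y0, E):
    """Recover p_y > 0 from the fixed energy condition H = E on a section y = y0.
    Returns NaN where (x, p_x) is energetically inaccessible."""
    arg = 2.0 * (E - V(x, y0)) - px**2
    return np.sqrt(arg) if arg >= 0.0 else float("nan")

# placeholder values for the total energy and section location
E, y0 = 0.2, 0.0
py = py_on_section(0.1, 0.1, y0, E)
```

Each accessible grid point (x, p_x, y_0, p_y) then serves as an initial condition for a trajectory that is labelled by its escape channel.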
The decision boundaries as learned by the SVC trained using the fixed size training data set and the reactive islands obtained from the direct computation of the tube manifolds are compared for verification in Fig. 5. The learned decision boundaries track the true reactive islands to a high accuracy, and the classification accuracy is above 99% for all the energies considered here. In fact, for the low energy case, just above the energy of the saddle, the accuracy is similar to that of the highest energy case. These two energies represent two extreme training data sets, as the fraction of reactive trajectories increases with total energy. Thus, for the low energy case a uniform sampling is bound to give a small number of reactive trajectories, and vice versa for the high energy case. Even though this could be corrected using non-uniform sampling at low energy, we show that learning the reactive islands leads to high accuracy because they are the fundamental phase space structures underlying the reactive trajectories.
Active learning. In this approach, we developed an iterative method for active learning of the reactive islands. The steps in this iterative method are: (i) Begin the training step with a relatively small training data set that represents a coarse sampling of the two-dimensional section using a regular grid. (ii) This training data is then used to train a support vector classifier, which learns a coarse decision boundary. (iii) The method adds new training data around the support vectors used by the classifier in step (ii). The new data consists of 10 points generated around each support vector using a normal distribution and is added to the initial training data. (iv) The new data set is used in training the classifier, which then relearns the reactive islands. Steps (iii) and (iv) are repeated until a desired accuracy, for example 99%, is achieved. For this approach, we again perform a grid search over C and \gamma for the highest accuracy, along with 5-fold cross-validation during the training.
The decision boundaries as learned by the SVC using an active learning approach and the reactive islands obtained from the direct computation of the tube manifolds are compared for verification in Fig. 6.
Trajectory geometry enabled learning. In this approach, we use a positive scalar quantity encoding the geometry of a trajectory, called the Lagrangian descriptor (LD) [37, 38], to generate a new feature for training an SVC. Lagrangian descriptors have been shown to detect phase space structures that mediate transition dynamics in general nonlinear dynamical systems (see Ref. [39] and references therein). Furthermore, as a trajectory based diagnostic of the phase space, it can be computed on-the-fly with the trajectory. This gives it a twofold merit as a feature: (i) it encodes the geometry of the trajectory, thus incorporating the phase space perspective, and (ii) it can be computed efficiently along with trajectory generation.
We briefly describe the method of Lagrangian descriptors, which reveals regions with qualitatively distinct dynamical behavior by showing the intersection of the invariant manifolds with the two-dimensional section. Consider a general time-dependent dynamical system given by

\dot{\mathbf{x}} = \mathbf{v}(\mathbf{x}, t), \quad \mathbf{x} \in \mathbb{R}^n,  (5)

where the vector field \mathbf{v} is assumed to be sufficiently smooth both in space and time. The vector field can be prescribed by an analytical model or given from numerical simulations as a discrete spatiotemporal data set. For instance, the vector field could represent the velocity field of oceanic or atmospheric currents obtained from satellite measurements or from the numerical solution of geophysical models. For any initial condition \mathbf{x}_0 = \mathbf{x}(t_0), the system of first order nonlinear differential equations given in Eqn. (5) has a unique solution represented by the trajectory that starts from that initial point at time t_0.
In this study, we adopt the LD definition

M_p(\mathbf{x}_0, t_0, \tau) = \int_{t_0 - \tau}^{t_0 + \tau} \sum_{i=1}^{n} |v_i(\mathbf{x}(t), t)|^p \, dt,  (6)

where v_i is the i-th component of the vector field in Eqn. (5), and we use p = 1/2. We note that the integral can be split into its forward and backward time parts to detect the intersections of the stable and unstable manifolds separately. This relates to finding the escape and entry channels into the potential well. In this study, we keep the forward part of the integral, given by
M_p^{f}(\mathbf{x}_0, t_0, \tau) = \int_{t_0}^{t_0 + \tau} \sum_{i=1}^{n} |v_i(\mathbf{x}(t), t)|^p \, dt.  (7)
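The forward integral (7) with p = 1/2 can be accumulated along with the trajectory itself, as noted above. A sketch using the Hénon-Heiles vector field of Eqn. (2) follows; the initial condition and the integration time \tau are arbitrary choices for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def henon_heiles(t, s):
    """Vector field (2); the state is s = (x, y, p_x, p_y)."""
    x, y, px, py = s
    return [px, py, -x - 2.0 * x * y, -y - x**2 + y**2]

def forward_ld(s0, tau, p=0.5):
    """Forward Lagrangian descriptor, Eqn (7): append the running integral of
    sum_i |v_i|^p to the state and integrate everything together."""
    def aug(t, z):
        v = henon_heiles(t, z[:4])
        return [*v, float(np.sum(np.abs(v) ** p))]
    sol = solve_ivp(aug, (0.0, tau), [*s0, 0.0], rtol=1e-9, atol=1e-12)
    return sol.y[4, -1]

ld_short = forward_ld([0.0, 0.0, 0.1, 0.2], tau=5.0)
ld_long = forward_ld([0.0, 0.0, 0.1, 0.2], tau=10.0)
```

Since the integrand is non-negative, the descriptor grows monotonically with \tau; evaluating it on a grid of section points yields the contour maps discussed below.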
Although this definition of LD does not have an intuitive physical interpretation like that of the arclength definition [38], it allows for a rigorous proof that the “singular features” (non-differentiable points) in the LD contour map identify intersections with stable and unstable invariant manifolds [40]. Another important aspect of what is known in the LD literature as the p-(quasi)norm is that the degrees of freedom with relevance to escape/transition (reaction) dynamics can be decomposed and computed separately. This definition was used to show that the method can successfully detect NHIMs and their stable and unstable manifolds in the Hénon-Heiles Hamiltonian [31, 24]. For this system, where either fixed or variable integration time is used, it has also been shown that the LD scalar field attains a minimum (or maximum) value, along with a singularity, at the intersections of the stable and unstable manifolds, given by
\mathcal{W}^{s}(\mathbf{x}_0, t_0) = \operatorname{argmin} \, M_p^{f}(\mathbf{x}_0, t_0, \tau),  (8)

where \mathcal{W}^{s} denotes the stable manifolds calculated at time t_0 and argmin denotes the phase space coordinates on the two-dimensional section that minimize the scalar field M_p^{f} over the integration time \tau. Thus, the scalar field plotted as a contour map identifies the intersection of the stable manifold with a two-dimensional section. This ability of the LD contour map to partition trajectories with different phase space geometry is shown in Fig. 7(a-c), as the values of LD inside the reactive islands are close to constant.
We construct the training data with three features, the two section coordinates and the LD value, using a fixed size data set. Then, we implement an SVC as in the fixed training data approach, with a grid search over the Gaussian radial basis function parameters C and \gamma. In Fig. 8 (d-i), we show the reactive islands along with the predictions of an SVC trained using the trajectory geometry given by LD as a feature and a training data size of 10000 points. We note that when such a data set is used, the number of support vectors used by the model increases around the boundary. However, this approach encodes the geometry of a trajectory in phase space and hence leads to robust classification as the total energy is increased.
V Conclusions and outlook
On a transverse two-dimensional section of the energy surface, reactive islands are the intersections of stable and unstable invariant manifolds of hyperbolic periodic orbits. Thus, the one-dimensional boundaries of the reactive islands separate transition and non-transition trajectories. In this article, we presented three support vector classifier approaches for learning the reactive islands: fixed dataset training, active learning, and trajectory geometry enabled training. The advantages of our approach are as follows: (a) avoiding the need to compute hyperbolic periodic orbits and the associated invariant manifolds, in favour of finding the reactive islands directly on a surface of section as the boundary between classes of qualitatively different dynamics, (b) minimising the computational cost of trajectory calculations by sampling the section near a boundary learned from a coarser sampling, and (c) using trajectory geometry as a dynamical (phase space) feature in the training data, compressing the high dimensional trajectory into a smaller feature set. Inheriting low data requirements from SVM, our approach is designed to work well for systems where integrating trajectories is expensive and is expected to generalise well to systems with more than two degrees of freedom.
Our work intends to simplify the process of finding reactive islands, making it easier to generalise and more accessible to a wider scientific audience. Future work in this direction will involve the application to a model of a chemical reaction and to examples of the high dimensional phase space of a system-bath model.
VI Acknowledgments
We acknowledge the support of EPSRC Grant No. EP/P021123/1 and Office of Naval Research (Grant No. N000140110769).
VII Data Availability
The code that supports the findings of this study is openly available on the corresponding author’s GitHub repository. The data used in training the machine learning model are available from the corresponding author upon reasonable request.
References
 Conley [1969] C. C. Conley, “On the ultimate behavior of orbits with respect to an unstable critical point i. oscillating, asymptotic, and capture orbits,” Journal of Differential Equations 5, 136–158 (1969).
 Conley [1968] C. C. Conley, “Low energy transit orbits in the restricted threebody problems,” SIAM Journal on Applied Mathematics 16, 732–746 (1968).
 Moser [1958] J. Moser, “On the generalization of a theorem of A. Liapounoff,” Communications on Pure and Applied Mathematics 11, 257–271 (1958).
 Kelley [1967] A. Kelley, “On the liapounov subcenter manifold,” Journal of mathematical analysis and applications 18, 472–478 (1967).
 De Leon and Marston [1989] N. De Leon and C. C. Marston, “Order in chaos and the dynamics and kinetics of unimolecular conformational isomerization,” The Journal of Chemical Physics 91, 3405–3425 (1989).
 Marston and De Leon [1989] C. C. Marston and N. De Leon, “Reactive islands as essential mediators of unimolecular conformational isomerization: A dynamical study of 3‐phospholene,” The Journal of Chemical Physics 91, 3392–3404 (1989).
 Ozorio de Almeida et al. [1990] A. M. Ozorio de Almeida, N. de Leon, M. A. Mehta, and C. C. Marston, “Geometry and dynamics of stable and unstable cylinders in hamiltonian systems,” Phys. D 46, 265 (1990).
 De Leon, Mehta, and Topper [1991a] N. De Leon, M. A. Mehta, and R. Q. Topper, “Cylindrical manifolds in phase space as mediators of chemical reaction dynamics and kinetics. i. theory,” The Journal of Chemical Physics 94, 8310–8328 (1991a).
 De Leon, Mehta, and Topper [1991b] N. De Leon, M. A. Mehta, and R. Q. Topper, “Cylindrical manifolds in phase space as mediators of chemical reaction dynamics and kinetics. ii. numerical considerations and applications to models with two degrees of freedom,” The Journal of Chemical Physics 94, 8329–8341 (1991b).
 De Leon [1992] N. De Leon, “Cylindrical manifolds and reactive island kinetic theory in the time domain,” The Journal of Chemical Physics 96, 285–297 (1992).
 Mehta [1990] M. A. Mehta, Classical and quantum dynamics of phase space cylindrical manifolds, Ph.D. thesis, Yale University (1990).
 Fair, Wright, and Hutchinson [1995] J. R. Fair, K. R. Wright, and J. S. Hutchinson, “Impulsive energy transfer during unimolecular reaction via reactive cylinders in phase space,” The Journal of Physical Chemistry 99, 14707–14718 (1995).
 De Vogelaere and Boudart [1955] R. De Vogelaere and M. Boudart, “Contribution to the theory of fast reaction rates,” J. Chem. Phys. 23 (1955), 10.1063/1.1742248.
 Pollak and Child [1980] E. Pollak and M. S. Child, “Classical mechanics of a collinear exchange reaction: A direct evaluation of the reaction probability and product distribution,” J. Chem. Phys. 73 (1980), 10.1063/1.440720.
 Cortes and Vapnik [1995] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning 20, 273 (1995).
 Vapnik, Golowich, and Smola [1996] V. Vapnik, S. E. Golowich, and A. Smola, “Support vector method for function approximation, regression estimation, and signal processing,” in Advances in Neural Information Processing Systems 9 (MIT Press, 1996) pp. 281–287.
 Vapnik [2000] V. Vapnik, The Nature of Statistical Learning Theory, 2nd ed. (Springer, 2000).
 Mukherjee, Osuna, and Girosi [1997] S. Mukherjee, E. Osuna, and F. Girosi, “Nonlinear prediction of chaotic time series using support vector machines,” in Neural Networks for Signal Processing VII. Proceedings of the 1997 IEEE Signal Processing Society Workshop (1997) pp. 511–520.
 Wang and Lai [2016] C.D. Wang and J.H. Lai, “Nonlinear clustering: Methods and applications,” in Unsupervised Learning Algorithms, edited by M. E. Celebi and K. Aydin (Springer International Publishing, Cham, 2016) pp. 253–302.
 Schölkopf [2000] B. Schölkopf, “Statistical learning and kernel methods,” Tech. Rep. MSRTR200023 (2000).
 Krajňák, García-Garrido, and Wiggins [2021] V. Krajňák, V. J. García-Garrido, and S. Wiggins, “Reactive islands for three degrees-of-freedom Hamiltonian systems,” Physica D 425, 132976 (2021).
 Naik and Wiggins [2020] S. Naik and S. Wiggins, “Detecting reactive islands in a systembath model of isomerization,” Phys. Chem. Chem. Phys. 22, 17890–17912 (2020).
 Hénon and Heiles [1964] M. Hénon and C. Heiles, “The applicability of the third integral of motion: Some numerical experiments,” The Astronomical Journal 69, 73 (1964).
 Naik and Wiggins [2019] S. Naik and S. Wiggins, “Finding normally hyperbolic invariant manifolds in two and three degrees of freedom with Hénon-Heiles type potential,” Phys Rev E 100, 022204 (2019).
 Noid, Koszykowski, and Marcus [1977] D. W. Noid, M. L. Koszykowski, and R. A. Marcus, “A spectral analysis method of obtaining molecular spectra from classical trajectories,” The Journal of Chemical Physics 67, 404–408 (1977).
 Pullen and Edmonds [1981] R. A. Pullen and A. R. Edmonds, “Comparison of classical and quantal spectra for the Hénon-Heiles potential,” Journal of Physics A: Mathematical and General 14, L319–L327 (1981).
 Berdichevsky and Alberti [1991] V. L. Berdichevsky and M. V. Alberti, “Statistical mechanics of Hénon-Heiles oscillators,” Physical Review A 44, 858–865 (1991).
 Contopoulos and Harsoula [2005] G. Contopoulos and M. Harsoula, “Systems with Escapes,” Annals of the New York Academy of Sciences 1045, 139–167 (2005).
 Zhao and Du [2007] H. J. Zhao and M. L. Du, “Threshold law for escaping from the Hénon-Heiles system,” Physical Review E 76, 027201 (2007).
 Hénon [1974] M. Hénon, “Integrals of the Toda lattice,” Physical Review B 9, 1921–1923 (1974).
 Demian and Wiggins [2017] A. S. Demian and S. Wiggins, “Detection of periodic orbits in Hamiltonian systems using Lagrangian descriptors,” Int J Bifur Chaos 27, 1750225 (2017).
 Pozun et al. [2012] Z. D. Pozun, K. Hansen, D. Sheppard, M. Rupp, K.-R. Müller, and G. Henkelman, “Optimizing transition states via kernel-based machine learning,” J. Chem. Phys. 136, 174101 (2012).
 Pedregosa et al. [2011] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research 12, 2825–2830 (2011).
 Chang and Lin [2011] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology 2, 27 (2011).
 Settles [2009] B. Settles, “Active learning literature survey,” Computer Sciences Technical Report 1648 (University of Wisconsin–Madison, 2009).
 Kremer, Pedersen, and Igel [2014] J. Kremer, K. S. Pedersen, and C. Igel, “Active learning with support vector machines,” WIREs Data Mining Knowl. Discov. 4, 313 (2014).
 Jiménez Madrid and Mancho [2009] J. A. Jiménez Madrid and A. M. Mancho, “Distinguished trajectories in time dependent vector fields,” Chaos 19, 013111 (2009).
 Mancho et al. [2013] A. M. Mancho, S. Wiggins, J. Curbelo, and C. Mendoza, “Lagrangian descriptors: A method for revealing phase space structures of general time dependent dynamical systems,” Communications in Nonlinear Science and Numerical Simulation 18, 3530–3557 (2013).
 Agaoglou et al. [2020] M. Agaoglou, B. Aguilar-Sanjuan, V. J. García-Garrido, F. González-Montoya, M. Katsanikas, V. Krajňák, S. Naik, and S. Wiggins, Lagrangian Descriptors: Discovery and Quantification of Phase Space Structure and Transport (Zenodo, 2020), EPSRC Grant Number: EP/P021123/1.
 Lopesino et al. [2017] C. Lopesino, F. Balibrea-Iniesta, V. J. García-Garrido, S. Wiggins, and A. M. Mancho, “A theoretical framework for Lagrangian descriptors,” International Journal of Bifurcation and Chaos 27, 1730001 (2017).
 Koon et al. [2011] W. S. Koon, M. W. Lo, J. E. Marsden, and S. D. Ross, Dynamical Systems, the Three-Body Problem and Space Mission Design (Marsden Books, 2011) p. 327.
 Koon et al. [2000] W. S. Koon, M. W. Lo, J. E. Marsden, and S. D. Ross, “Heteroclinic connections between periodic orbits and resonance transitions in celestial mechanics,” Chaos: An Interdisciplinary Journal of Nonlinear Science 10, 427–469 (2000).
 Naik and Ross [2017] S. Naik and S. D. Ross, “Geometry of escaping dynamics in nonlinear ship motion,” Communications in Nonlinear Science and Numerical Simulation 47, 48–70 (2017).
 Parker and Chua [1989] T. S. Parker and L. O. Chua, Practical Numerical Algorithms for Chaotic Systems (Springer-Verlag, New York, NY, USA, 1989).
 Meyer, Hall, and Offin [2009] K. R. Meyer, G. R. Hall, and D. Offin, Introduction to Hamiltonian Dynamical Systems and the N-Body Problem, Applied Mathematical Sciences (Springer, 2009).
 Marsden and Ross [2006] J. E. Marsden and S. D. Ross, “New methods in celestial mechanics and mission design,” Bulletin of the American Mathematical Society 43, 43 – 73 (2006).
Appendix A Hyperbolic periodic orbit and invariant manifolds
A.1 The Linearized Hamiltonian System
The linearized equations of motion around an index-one saddle equilibrium point are obtained by expanding the Hamiltonian (1) about the equilibrium and keeping the quadratic terms. After a coordinate change that places the saddle at the origin, the quadratic Hamiltonian function gives the linear system at the equilibrium point
(9) $\dot{\bar{\mathbf{x}}} = A\,\bar{\mathbf{x}}$, where $A$ is the Jacobian matrix of the vector field evaluated at the saddle.
The linearized dynamics near the index-one saddle equilibrium points carry over to the full nonlinear system by Moser’s generalization of Lyapunov’s theorem.
Let us write the eigenvector as $v = (k_1, k_2, k_3, k_4)$; the eigenvalue problem then becomes $A v = \beta v$. This gives the component expressions
(10)
(11)
(12)
(13)
Fixing one component of $v$ and using Eqns. (10) and (11), the remaining components of the eigenvector are determined in terms of the eigenvalue $\beta$.
Thus, the eigenvectors corresponding to the eigenvalues $\pm\lambda$ and $\pm i\omega$ can be written as
(14)
respectively. Thus, the general solution of the linear system near the saddle equilibrium point is given by
(15) $\bar{\mathbf{x}}(t) = \alpha_1 e^{\lambda t} u_1 + \alpha_2 e^{-\lambda t} u_2 + 2\,\mathrm{Re}\left(\beta e^{i\omega t} w\right)$
with $\alpha_1, \alpha_2$ being real and $\beta$ being complex; here $u_1, u_2$ are the real eigenvectors of $\pm\lambda$ and $w$ is the complex eigenvector of $+i\omega$.
A.2 Computing the hyperbolic periodic orbit and associated tube manifolds at the index-one saddle
For discussing the geometry, we distinguish the equilibrium points by the sign of their $y$-coordinate, that is, the equilibrium with positive $y$-coordinate and the equilibria with negative $y$-coordinate.
Select an appropriate energy above the critical value — To compute the manifolds that act as the boundary between escaping and non-escaping trajectories, we select the total energy, $E$, above the energy of the index-one saddle, $E_s$, so that the excess energy $\Delta E = E - E_s$ is positive.
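For concreteness (an illustration with values of our own choosing, not prescribed by the paper): for the standard Hénon-Heiles potential $V(x,y) = \tfrac{1}{2}(x^2 + y^2) + x^2 y - \tfrac{1}{3}y^3$, the index-one saddle at $(x, y) = (0, 1)$ has energy $1/6$, and an energy above the critical value can be chosen as:

```python
def potential(x, y):
    """Standard Henon-Heiles potential (used here only as an illustration)."""
    return 0.5 * (x**2 + y**2) + x**2 * y - y**3 / 3.0

E_saddle = potential(0.0, 1.0)   # energy of the index-one saddle, equals 1/6
delta_E = 0.01                   # chosen excess energy (illustrative value)
E_total = E_saddle + delta_E     # total energy above the critical value
```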
Differential correction and numerical continuation — We present a procedure that computes the hyperbolic periodic orbit associated with an index-one saddle using a small “seed” initial condition obtained from the linearized equations of motion at the saddle, and then corrects the guess using the linearization about the trajectory obtained by evolving the initial guess under the full nonlinear equations. This is called differential correction (more details can be found in Chapter 4 of the book by Koon et al.; other applications of this method include celestial mechanics, ship dynamics, and models of dissociation and isomerization reactions [42, 43, 24, 22]).
Guess initial condition of the periodic orbit — The linearized equations of motion at an equilibrium point give the initial guess for the differential correction method. Let us select an equilibrium point. The linearization yields an eigenvalue problem $A v = \beta v$, where $A$ is the Jacobian matrix in Eqn. (9) evaluated at the equilibrium point, $\beta$ is the eigenvalue, and $v$ is the corresponding eigenvector. The idea is to use the complex eigenvalue $\pm i\omega$ and the corresponding eigenvector to obtain a guess for the initial condition on the periodic orbit and its period $T$, which should be close to $2\pi/\omega$ (by the generalization of Lyapunov’s theorem) and increase monotonically with the excess energy.
The initial condition for a periodic orbit of $x$-amplitude $A_x$ can be computed by setting $t = 0$ and $\alpha_1 = \alpha_2 = 0$ in Eqn. (15) and taking $\beta = -A_x/2$ (this choice is made to remove the factor of 2), where $A_x$ denotes a small amplitude in the general linear solution. Thus, using the eigenvector of the center projection we can write
(16)
where we consider, without loss of generality, the left index-one saddle equilibrium point.
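As an illustration (our own sketch, using the standard Hénon-Heiles vector field rather than the paper’s code), the seed can be obtained from the center eigenvector at the saddle $(0, 1)$, where the eigenvalues of the linearization are $\pm 1$ and $\pm i\sqrt{3}$:

```python
import numpy as np

def jacobian(x, y):
    """Jacobian of Hamilton's equations for the standard Henon-Heiles
    potential, with the state ordered as (x, y, p_x, p_y)."""
    return np.array([[0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0],
                     [-1.0 - 2.0 * y, -2.0 * x, 0.0, 0.0],
                     [-2.0 * x, -1.0 + 2.0 * y, 0.0, 0.0]])

A = jacobian(0.0, 1.0)                 # linearization at the saddle (0, 1)
eigvals, eigvecs = np.linalg.eig(A)
lam = eigvals.real.max()               # hyperbolic rate (= 1 at this saddle)
omega = eigvals.imag.max()             # center frequency (= sqrt(3) here)

# Center eigenvector of +i*omega; fix its arbitrary complex phase so the
# x-component is real, then displace by a small x-amplitude A_x
# (this realizes beta = -A_x/2 in the general linear solution of Eqn. (15))
w = eigvecs[:, np.argmax(eigvals.imag)]
w = w / w[0]
A_x = 1e-4
seed = np.array([0.0, 1.0, 0.0, 0.0]) - A_x * np.real(w)
T_guess = 2.0 * np.pi / omega          # period guess for the Lyapunov orbit
```

For this saddle the center direction is purely along $x$, so the seed is simply the saddle point displaced by $-A_x$ in $x$.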
Differential correction of the initial condition — In this step, we introduce a correction to one of the coordinates of the initial guess such that the numerically computed periodic orbit satisfies
(17) $\left\| \bar{\mathbf{x}}(T) - \bar{\mathbf{x}}(0) \right\| < \epsilon$
for some tolerance $\epsilon$. For the Hénon-Heiles Hamiltonian, we hold one coordinate of the initial guess constant and correct another, use a third coordinate for terminating the event-based integration, and use the remaining coordinate to test convergence of the periodic orbit. Let us denote the flow map of the differential equations with initial condition $\bar{\mathbf{x}}_0$ by $\phi(t; \bar{\mathbf{x}}_0)$. Thus, the displacement of the final state under a perturbation $\delta\bar{\mathbf{x}}_0$ of the initial condition and $\delta t$ of the integration time is given by
(18) $\delta\bar{\mathbf{x}}(t + \delta t) = \phi(t + \delta t;\, \bar{\mathbf{x}}_0 + \delta\bar{\mathbf{x}}_0) - \phi(t;\, \bar{\mathbf{x}}_0)$
with respect to the reference trajectory $\bar{\mathbf{x}}(t)$. Thus, measuring the displacement at $t_1 + \delta t_1$ and expanding into a Taylor series gives
(19) $\delta\bar{\mathbf{x}}(t_1 + \delta t_1) = \dfrac{\partial \phi(t_1; \bar{\mathbf{x}}_0)}{\partial \bar{\mathbf{x}}_0}\, \delta\bar{\mathbf{x}}_0 + \dot{\bar{\mathbf{x}}}(t_1)\, \delta t_1 + \text{higher-order terms}$
where the matrix in the first term on the right-hand side is the state transition matrix $\Phi(t_1, t_0)$ with $t_0 = 0$, which can be obtained by solving the variational equations numerically along with the trajectory [44]. Suppose we want to reach a desired endpoint; then we have
(20) 
which has an error that needs correction. This correction can be obtained to first order from the state transition matrix at $t_1$ and gives a new guess for the initial condition of the periodic orbit. The new initial condition is then evolved and corrected in an iterative procedure with “small” (first-order, or differential) corrections, which typically converges in a few steps. For the index-one saddle equilibrium points in the Hénon-Heiles Hamiltonian, we initialize the guess as
(21) 
and, using a numerical integrator, we obtain the orbit until the next half-period event to a high tolerance. This yields the state at the half-period point of the guess periodic orbit, at which we compute the state transition matrix. Using the entries of the state transition matrix, we derive the correction for one coordinate while holding another constant. Thus, the first-order correction is given by
(22)  
(23) 
where the coefficients are entries of the state transition matrix and the acceleration terms come from the equations of motion evaluated at the crossing. Thus, we obtain the first-order correction as
(24)  
(25) 
which is iterated until convergence to within some tolerance, since we want the final periodic orbit to be of the form
(26) 
This procedure yields an accurate initial condition for a periodic orbit of small amplitude, since our initial guess is based on the linearization at the equilibrium point.
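The loop above can be sketched as follows. This is our own minimal implementation for the saddle at $(0, 1)$ of the standard Hénon-Heiles system, not the authors’ code; the coordinate roles chosen here (hold $x_0$ fixed, correct $y_0$, terminate on the rising $p_x = 0$ crossing, test $p_y$) are an assumption for illustration and exploit the symmetry that a trajectory hitting $p_x = p_y = 0$ twice is periodic:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    """Henon-Heiles trajectory plus the flattened state transition matrix."""
    x, y, px, py = z[:4]
    A = np.array([[0, 0, 1, 0], [0, 0, 0, 1],
                  [-1 - 2*y, -2*x, 0, 0], [-2*x, -1 + 2*y, 0, 0]])
    dPhi = A @ z[4:].reshape(4, 4)       # variational equations
    return np.concatenate([[px, py, -x - 2*x*y, -y - x**2 + y**2],
                           dPhi.ravel()])

def event_px(t, z):                      # half-period event: rising p_x = 0
    return z[2]
event_px.terminal = True
event_px.direction = 1

def diff_correct(x0, y0, tol=1e-11, max_iter=25):
    """Hold x0 fixed; correct y0 until p_y vanishes at the half-period
    crossing, so the orbit retraces itself by time-reversal symmetry."""
    for _ in range(max_iter):
        z0 = np.concatenate([[x0, y0, 0.0, 0.0], np.eye(4).ravel()])
        sol = solve_ivp(rhs, (0.0, 50.0), z0, events=event_px,
                        rtol=1e-12, atol=1e-12)
        zf = sol.y_events[0][0]
        x1, y1, _, py1 = zf[:4]
        if abs(py1) < tol:
            return np.array([x0, y0, 0.0, 0.0]), 2.0 * sol.t_events[0][0]
        Phi = zf[4:].reshape(4, 4)
        pxdot = -x1 - 2*x1*y1            # accelerations at the crossing
        pydot = -y1 - x1**2 + y1**2
        # first-order correction, accounting for the shift of the event time
        y0 -= py1 / (Phi[3, 1] - (pydot / pxdot) * Phi[2, 1])
    raise RuntimeError("differential correction did not converge")

ic, T = diff_correct(0.05, 1.0)  # small Lyapunov orbit near the (0,1) saddle
```

The Newton denominator combines the state-transition-matrix entry $\partial p_y/\partial y_0$ with the event-time variation fixed by the $p_x = 0$ crossing, mirroring the acceleration terms described above.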
Numerical continuation to a periodic orbit at arbitrary energy — The procedure described above yields an accurate initial condition for a periodic orbit from a single initial guess. When the initial guess comes from the linear approximation near the equilibrium point, Eqn. (15), it has been observed numerically that this procedure succeeds only for small-amplitude periodic orbits. Such small amplitudes correspond to small excess energies, and to compute a periodic orbit of arbitrarily large amplitude we resort to numerical continuation, generating a family of orbits that reaches the appropriate total energy. This is done by using two nearby periodic orbits of small amplitude to obtain an initial guess for the next periodic orbit and performing differential correction on this guess. To this end, we proceed as follows. Suppose we have found two small nearby periodic-orbit initial conditions, corrected to within the tolerance using the differential correction procedure described above. We can generate a family of periodic orbits with successively increasing amplitudes in the following way. Let
(27) $\Delta = \bar{\mathbf{x}}_0^{(2)} - \bar{\mathbf{x}}_0^{(1)}$, where $\bar{\mathbf{x}}_0^{(1)}, \bar{\mathbf{x}}_0^{(2)}$ are the two corrected initial conditions.
A linear extrapolation to an initial guess of slightly larger amplitude, $\bar{\mathbf{x}}_0^{(3)}$, is given by
(28) $\bar{\mathbf{x}}_0^{(3)} = \bar{\mathbf{x}}_0^{(2)} + \Delta$
Thus, keeping the appropriate coordinate fixed, we can use differential correction on this initial condition to compute an accurate solution from the initial guess and repeat the process until we have a family of solutions. We keep track of the energy of each periodic orbit, and when we have two solutions whose energies bracket the appropriate energy, we combine bisection with differential correction applied to these two periodic orbits until we converge to the desired periodic orbit to within a specified tolerance. The result is a periodic orbit at the desired total energy, with some period $T$ and an initial condition $\bar{\mathbf{x}}_0$.
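In outline, the extrapolation step of Eqns. (27)-(28) is plain vector arithmetic (the numerical values below are illustrative, not from the paper):

```python
import numpy as np

def next_guess(X1, X2):
    """Linear extrapolation along the family: Delta = X2 - X1 (Eqn. 27),
    next guess X3 = X2 + Delta (Eqn. 28); X1, X2 are two differentially
    corrected initial conditions of neighboring periodic orbits."""
    return X2 + (X2 - X1)

X1 = np.array([0.05, 1.001, 0.0, 0.0])   # illustrative corrected ICs
X2 = np.array([0.06, 1.002, 0.0, 0.0])
X3 = next_guess(X1, X2)                  # guess at slightly larger amplitude
```

The guess `X3` is then handed back to the differential corrector, and the process repeats until the orbit energies bracket the target energy.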
Globalization of invariant manifolds — We find the local approximation to the unstable and stable manifolds of the periodic orbit from the eigenvectors of the monodromy matrix. Next, the local linear approximation of the unstable (or stable) manifold, in the form of a state vector, is integrated under the nonlinear equations of motion to produce an approximation of the unstable (or stable) manifold, such as those shown in Fig. 9. This procedure is known as globalization of the manifolds, and we proceed as follows.
First, the state transition matrix $\Phi(T, 0)$ along the periodic orbit with initial condition $\bar{\mathbf{x}}_0$ can be obtained numerically by integrating the variational equations together with the equations of motion from $t = 0$ to $t = T$. The matrix $M = \Phi(T, 0)$ is known as the monodromy matrix, and its eigenvalues can be computed numerically. The theory for Hamiltonian systems (see [45] for details) tells us that the four eigenvalues of $M$ are of the form
(29) $\lambda_1 > 1, \quad \lambda_2 = \dfrac{1}{\lambda_1}, \quad \lambda_3 = \lambda_4 = 1$
The eigenvector associated with the eigenvalue $\lambda_1$ is in the unstable direction, and the eigenvector associated with $\lambda_2$ is in the stable direction. Let $e^s(\bar{\mathbf{x}}_0)$ denote the stable eigenvector and $e^u(\bar{\mathbf{x}}_0)$ the unstable eigenvector, each normalized to unit length. We can compute the manifold by initializing along these eigenvectors as
(30) $X^s(\bar{\mathbf{x}}_0) = \bar{\mathbf{x}}_0 + \epsilon\, e^s(\bar{\mathbf{x}}_0)$
for the stable manifold at $\bar{\mathbf{x}}_0$ along the periodic orbit, and as
(31) $X^u(\bar{\mathbf{x}}_0) = \bar{\mathbf{x}}_0 + \epsilon\, e^u(\bar{\mathbf{x}}_0)$
for the unstable manifold at $\bar{\mathbf{x}}_0$. Here $\epsilon$ denotes the magnitude of the small displacement from $\bar{\mathbf{x}}_0$; it should be small enough to stay within the validity of the linear estimate, yet not so small that the time of flight becomes too large due to the asymptotic nature of the stable and unstable manifolds. Ref. [41] suggests typical values of $\epsilon$ corresponding to nondimensional position displacements of small magnitude. By numerically integrating forwards in time from the unstable seeds, using both $+\epsilon\, e^u$ and $-\epsilon\, e^u$ for the forward and backward branches respectively, we generate trajectories shadowing the two branches of the unstable manifold of the periodic orbit. Similarly, by integrating backwards in time from the stable seeds, using both $+\epsilon\, e^s$ and $-\epsilon\, e^s$, we generate trajectories shadowing the two branches of the stable manifold. For the manifold at another point $\bar{\mathbf{x}}(t)$ along the orbit, one can simply use the state transition matrix to transport the eigenvectors from $\bar{\mathbf{x}}_0$ to $\bar{\mathbf{x}}(t)$ as
(32) $e^{s}(\bar{\mathbf{x}}(t)) = \Phi(t, 0)\, e^{s}(\bar{\mathbf{x}}_0), \qquad e^{u}(\bar{\mathbf{x}}(t)) = \Phi(t, 0)\, e^{u}(\bar{\mathbf{x}}_0)$
It is to be noted that since the state transition matrix does not preserve the norm, the resulting vectors must be normalized. The globalized invariant manifolds associated with index-one saddles are known as Conley-McGehee tubes [46]. These tubes form the skeleton of the transition dynamics by acting as conduits through which the states inside them travel between potential wells.
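The eigenvalue structure (29) and the seeding of Eqns. (30)-(31) can be sketched as follows. As a stand-in for a numerically computed monodromy matrix, this illustration (an assumption of ours, not the paper’s computation) uses the matrix exponential of the Hénon-Heiles saddle-point linearization over one linear period:

```python
import numpy as np
from scipy.linalg import expm

# Stand-in monodromy matrix: linearization at the Henon-Heiles saddle (0, 1),
# state (x, y, p_x, p_y), propagated over one linear period T
A = np.array([[0, 0, 1, 0], [0, 0, 0, 1],
              [-3, 0, 0, 0], [0, 1, 0, 0]])
T = 2.0 * np.pi / np.sqrt(3.0)
M = expm(A * T)

vals, vecs = np.linalg.eig(M)
i_u = np.argmax(vals.real)           # lambda_1 > 1: unstable direction
i_s = np.argmin(np.abs(vals))        # lambda_2 = 1/lambda_1: stable direction
e_u = np.real(vecs[:, i_u]); e_u /= np.linalg.norm(e_u)
e_s = np.real(vecs[:, i_s]); e_s /= np.linalg.norm(e_s)

# Seed both branches of each manifold by a small displacement eps along the
# normalized eigenvectors; integrate unstable seeds forward and stable seeds
# backward in time to globalize the manifolds
x0 = np.array([0.0, 1.0, 0.0, 0.0])  # point standing in for the orbit point
eps = 1e-6
unstable_seeds = (x0 + eps * e_u, x0 - eps * e_u)
stable_seeds = (x0 + eps * e_s, x0 - eps * e_s)
```

The remaining pair of eigenvalues sits at unity, consistent with Eqn. (29), and the product of the hyperbolic pair is one, reflecting the symplectic nature of the monodromy matrix.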
The computation of the codimension-1 separatrix associated with the hyperbolic periodic orbit around an index-one saddle begins with the linearized equations of motion, obtained after a coordinate transformation that places the saddle equilibrium point at the origin and a Taylor expansion of the equations of motion. Keeping the first-order terms in this expansion, we obtain the eigenvalues and eigenvectors of the linearized system. The eigenvectors corresponding to the center direction provide the starting guess for computing the hyperbolic periodic orbits at small excess energy above the saddle’s energy. An iterative procedure applies small corrections to the starting guess, based on the terminal condition of the periodic orbit, until a desired tolerance is satisfied. This procedure is known as differential correction and generates hyperbolic periodic orbits for small excess energy. Next, numerical continuation is implemented to follow the small-energy (small-amplitude) periodic orbits out to high excess energies.