1 Introduction
Recent breakthroughs in computer vision, natural language processing, self-driving vehicles, medical diagnostics, brain-computer interfaces, robotics, particle physics, and the nanosciences, spurred on by data-driven methods such as machine learning (ML), deep learning (DL), and reinforcement learning (RL), have sparked interest in leveraging these methods' capabilities for approximation and extrapolation in computational and theoretical chemistry ^{1, 2, 3, 4}. In the context of chemical physics, the main focus has been on learning force field parameters from ab initio calculations ^{5, 2, 6, 7} and building accurate potential energy surfaces ^{8}. Only a few such works have been directed towards constructing time-dependent dividing surfaces using the dynamical systems viewpoint ^{9, 10, 11}. In this paper we use support vector machines (SVM) to develop a machine learning framework to discover the phase space structures that distinguish between distinct reaction pathways. The machine learning model is trained using trajectory data from Hamilton's equations, but lends itself for use in molecular dynamics simulations. The framework is specifically designed to require minimal a priori knowledge of the dynamics of a system. We benchmark our approach with a model Hamiltonian for the reaction of an ion and a molecule due to Chesnavich ^{12}. In particular, Chesnavich considers the reaction

CH4+ → CH3+ + H,  (1)
and a model consisting of two parts: a rigid, symmetric top representing the ion, and a mobile H atom. A detailed derivation of this model can be found in reference ^{13}. The reaction of an ion and a molecule is a topic of fundamental and continuing interest in chemistry, and the topic is reviewed in ^{14, 15, 16, 17, 18, 19}.
In recent years this model has seen applications to understanding the roaming reaction mechanism ^{20}. Roaming is a recently recognised mechanism for the dissociation of a molecule; in this model it corresponds to the dissociation of a hydrogen atom. In ^{21} it is asserted that a dissociating molecule should possess two essential characteristics in order for the reaction to be labelled as "roaming": the molecule should have competing dissociation channels, such as dissociation to molecular and radical products, and there should exist a long-range attraction between fragments of the molecule. In a recent review article ^{22}, Suits refines this definition of roaming further by stating that "A roaming reaction is one that yields products via reorientational motion in the long-range (Å) region of the potential." Thus, a method that can distinguish between roaming, direct dissociation, and other competing reaction pathways is of interest in understanding reaction mechanisms.
In ^{21, 23, 24, 25} phase space analyses of roaming were carried out that first determined invariant geometrical structures in phase space that formed the boundaries between the competing dissociation channels. In other words, the strategy was to determine the phase space structures that separated the trajectories into different classes of dynamical behavior. Our machine learning approach is different, and effectively an opposite approach. We start with trajectories (“the data”) and use support vector machines to determine the boundaries between initial conditions corresponding to different classes of trajectories. While it would appear that this is a more direct approach in the sense that we are dealing first, and directly, with trajectories, it will turn out that these boundaries between different classes of trajectories are actually invariant phase space structures of the same type observed in the earlier analyses of Chesnavich’s model.
Our goal is to train a machine learning model using a labeled data set consisting of initial conditions on a two dimensional section of the energy surface. The labels denote the distinct end products formed via distinct reaction pathways. We use support vector machines (SVM), a class of supervised learning classification and regression algorithms
^{26, 27, 28}, which have also been used for chaotic time series regression ^{29}. We employ the classification algorithm, which constructs a decision function with maximum margin to approximate the boundary between different classes of data. In our case, the classes correspond to initial conditions leading to qualitatively different dynamical behaviour. The learned boundary between trajectories leading to distinct end products is also verified using the dynamical systems theory of reactive islands ^{30, 31, 32, 33, 34, 35, 36, 37}, confirming that the SVM approach enables us to determine the geometrical structures in phase space governing distinct reaction pathways.

We choose support vector machines for the following reasons. (i) We aim to approximate nonlinear boundaries between classes of trajectories, for which nonlinear kernels in SVM are designed. (ii) SVM is known to provide good predictions even with small training data sets. Consequently, our approach is well-suited for systems where obtaining training data is expensive, either due to computational complexity or high dimension. This makes our SVM-based approach a computationally efficient gateway to the approximation of phase space structures in higher dimensional systems, such as the system-bath models of isomerization ^{38}. It also offers the possibility to capitalise on the recent extension of reactive island theory to systems with three degrees of freedom (DoF) ^{39}.
This paper is outlined as follows. In Section 2.1 we describe the Chesnavich model and some of its previous uses. In Section 2.2 we introduce the methodology of SVM and describe its use for classifying trajectories. In Section 2.2.1 we describe a data-efficient procedure to sample initial conditions of trajectories based on the idea of active learning. In Section 2.3 we describe four different classes of trajectories based on expected behaviour in the system. In Sections 2.3.1 and 2.3.2 we provide a more detailed discussion of how we define the trajectory classes of dissociation and association based on the properties and geometry of the potential energy surface. In Section 2.4 we discuss the choice of initial surfaces from which we sample initial conditions, requiring minimal a priori knowledge of the system, that guide our initial classification of trajectories. In Section 3.1 we discuss the active learning of trajectory classes. In Section 3.2 we discuss the phase space structures, known from previous work, that divide classes of trajectories. In Section 3.3 we discuss how our machine learning approach based on SVM recovers these phase space structures. In Section 3.4 we discuss some issues related to how SVM deals with the type of fractal structures arising from the intersection of stable and unstable manifolds. In Section 4 we describe our conclusions and some possible future directions of investigation.

2 Model and method
2.1 Chesnavich's CH4+ system
Chesnavich's CH4+ system is a phenomenological model with 2 degrees of freedom, introduced in ^{12, 13} to study the reaction CH4+ → CH3+ + H, representing reactions that involve the passage through multiple transition states. The system was also investigated in the context of roaming ^{21, 23, 25, 40}.

Chesnavich's CH4+ system is defined by the Hamiltonian
H(r, θ, p_r, p_θ) = p_r²/(2μ) + (p_θ²/2)(1/(μr²) + 1/I) + U(r, θ),  (2)

where (r, θ) are the polar coordinates describing the position of the mobile H atom with respect to the centre of mass of the rigid body CH3+, and (p_r, p_θ) are the respective canonically conjugate momenta. The parameter μ = m_CH3 m_H/(m_CH3 + m_H) is the reduced mass of the system (with the masses m_H and m_CH3 in u), and I (in u Å²) is the moment of inertia of CH3+.
A contour plot of U is shown in Fig. 3. For simplicity we consider the system at a fixed total energy E. We remark that we obtain similar results at other energies, and the system does not exhibit roaming at sufficiently high energies ^{25}.

In Fig. 1 we show a contour plot of the potential ^{12}, which is characterised by two potential wells separated by two areas of high potential a short distance away from the origin. With increasing r, the potential becomes increasingly independent of θ and converges to zero from below. Since the system is unbounded at the energies considered here, the ergodic hypothesis fails and Poincaré recurrence does not apply.
2.2 Support vector machine for classifying trajectories
The classification algorithms for support vector machines (SVM) ^{26, 27, 28} construct a nonlinear boundary between different classes by lifting the data into a high-dimensional space and constructing a hyperplane that separates the classes. In our case, the classes correspond to trajectories with qualitatively distinct dynamics, initialized on a two-dimensional section of the energy surface. For Chesnavich's model, by qualitatively distinct dynamics we mean trajectories that lead to direct dissociation, roaming, or isomerisation, or that are nonreactive (as shown in Fig. 3). As we will discuss in Section 3.2, the nonlinear boundaries that classify the initial conditions are obtained from the phase space analysis. To be precise, we will show that the boundaries between initial conditions are given by the intersection of stable and unstable invariant manifolds of unstable periodic orbits with the chosen section. Therefore, an SVM classification algorithm, also referred to as a support vector classifier (SVC), can approximate these manifolds given appropriate training data.

Following Pozun et al. ^{9}, we use the scikit-learn
^{41, 42} library's implementation of the support vector classifier. The implementation can be used with various kernels, of which the radial basis function kernel is best suited to approximate the nonlinear boundaries between the different classes of trajectories, which are topological circles as discussed later in Section 3.2. After the hyperparameters have been determined by solving an optimisation problem on the training data, a previously unseen initial condition x is predicted to belong to a class using the decision function

f(x) = sign( Σ_{i=1..N} α_i y_i exp(−γ |x − x_i|²) + b ),  (3)

where γ controls the width of the Gaussian, y_i are the class labels of the training data points x_i, α_i are weights calculated by the algorithm, of which only a small number are nonzero, b is a bias, and N is the number of initial conditions on the two-dimensional section. The nonzero weights correspond to the subset of the training data called support vectors. The weights are calculated by the SVC such that the distance between the predicted boundary and the closest points of every class is maximised, as illustrated in Fig. 2.
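As a sanity check, the decision function (3) can be evaluated by hand from the attributes scikit-learn exposes on a fitted classifier: dual_coef_ stores the products α_i y_i, support_vectors_ the points x_i, and intercept_ the bias b. The toy data set below is purely illustrative and stands in for the trajectory data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy two-class data: points inside/outside a circle (a stand-in for
# initial conditions leading to different trajectory classes).
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

gamma = 0.5
clf = SVC(kernel="rbf", gamma=gamma, C=10.0).fit(X, y)

def decision_function(x, clf, gamma):
    """Evaluate Eq. (3) by hand: sum_i alpha_i y_i exp(-gamma |x - x_i|^2) + b."""
    sq_dist = np.sum((clf.support_vectors_ - x) ** 2, axis=1)
    return np.dot(clf.dual_coef_[0], np.exp(-gamma * sq_dist)) + clf.intercept_[0]

x_test = np.array([0.1, -0.2])
manual = decision_function(x_test, clf, gamma)
builtin = clf.decision_function(x_test[None, :])[0]
assert np.isclose(manual, builtin)
```

The sign of the decision function determines the predicted class, and its magnitude measures the distance of a point from the learned boundary.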
The upper bound C on the weights α_i is a user-defined value that controls the complexity of the decision boundary: a low value of C gives a smoother decision boundary, while a high value of C leads to higher accuracy on the training data. In this article, we first perform a search over a wide interval of values of C and γ, as in ^{37}. Then a smaller interval for both parameters is chosen for each of the support vector classifier approaches. Cross-validation ensures that the trained model does not suffer from overfitting by splitting the training data into 5 folds, each of which is used in turn as a test set with the remaining four as the training set.
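The coarse-then-fine parameter search with five-fold cross-validation can be sketched with scikit-learn's GridSearchCV; the grids and the toy data here are illustrative choices, not the values used for the Chesnavich model:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Illustrative two-class data standing in for labelled initial conditions.
X = rng.uniform(-1.0, 1.0, size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

# Stage 1: coarse logarithmic search over C and gamma; cv=5 performs the
# five-fold cross-validation described in the text.
coarse = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": np.logspace(-2, 4, 7), "gamma": np.logspace(-2, 2, 5)},
    cv=5,
).fit(X, y)

# Stage 2: a narrower grid centred on the best coarse values.
C0, g0 = coarse.best_params_["C"], coarse.best_params_["gamma"]
fine = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": C0 * np.logspace(-1, 1, 5), "gamma": g0 * np.logspace(-1, 1, 5)},
    cv=5,
).fit(X, y)
print(fine.best_params_)
```

The best cross-validated estimator is then available as fine.best_estimator_ for subsequent prediction.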
2.2.1 Sampling procedure to generate training data
Sampling the energy surface using a regular grid on a two-dimensional surface involves a considerable amount of data from regions whose dynamics is fairly regular on a short timescale. Furthermore, the accuracy of the learned decision boundary is limited by the amount of training data and its spacing, and thus the learning worsens as the volume of the energy surface increases. A significantly less data-intensive approach is offered by active learning ^{43, 44}, where the "learner" biases its sampling based on information obtained from previous samples. Following the principles of active learning ^{43, 44}, the work of ^{9}, and our recent work on the Hénon-Heiles Hamiltonian ^{37}, it is more data-efficient to add sample points gradually based on the coarse boundary learned in the previous step. Briefly, we initialise the training of a support vector classifier with a coarse grid of data points, then iteratively add data points on the decision boundary learned in the previous step and rerun SVM with the expanded training data. In this work we start with a coarse regular grid (5×5 or 10×10) and subsequently introduce additional points, randomly selected using a uniform distribution, on the boundary between classes as predicted by SVM. This allows the algorithm to explore the intricate structures (due to "twists" and "turns" in the phase space) usually formed by the globalised invariant manifolds that mediate the evolution of a trajectory.
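A minimal sketch of this active learning loop follows, with two simplifications of the procedure described above: an analytic oracle stands in for trajectory integration, and candidates with the smallest absolute decision-function value stand in for uniform sampling on the extracted boundary contour:

```python
import numpy as np
from sklearn.svm import SVC

def true_class(P):
    """Oracle standing in for trajectory integration (hypothetical)."""
    return (P[:, 0] ** 2 + P[:, 1] ** 2 < 0.5).astype(int)

rng = np.random.default_rng(2)
# Step 1: coarse 5x5 regular grid of initial conditions.
g = np.linspace(-1.0, 1.0, 5)
X = np.array([(a, b) for a in g for b in g])
y = true_class(X)
clf = SVC(kernel="rbf", gamma=2.0, C=100.0).fit(X, y)

# Step 2: iteratively add 4 points near the current decision boundary
# (smallest |decision function| among random candidates) and retrain.
for _ in range(40):
    cand = rng.uniform(-1.0, 1.0, size=(500, 2))
    idx = np.argsort(np.abs(clf.decision_function(cand)))[:4]
    X = np.vstack([X, cand[idx]])
    y = np.append(y, true_class(cand[idx]))
    clf = SVC(kernel="rbf", gamma=2.0, C=100.0).fit(X, y)

test = rng.uniform(-1.0, 1.0, size=(2000, 2))
acc = np.mean(clf.predict(test) == true_class(test))
print(f"accuracy after active learning: {acc:.3f}")
```

Because new labels are only requested near the evolving boundary, the expensive oracle (in the paper, trajectory integration) is called far less often than a dense regular grid would require.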
We would like to point out the importance of the precise problem formulation. Homoclinic and heteroclinic intersections of invariant manifolds lead to fractal structures, that is, a fractal boundary between classes of dynamics. There is no known way to resolve fractal structures with finite precision and a finite number of data points, so approximating the boundary using fixed-width Gaussians is bound to fail. In many systems it is possible to avoid these fractal structures by carefully selecting a surface of initial conditions and studying the dynamics under the corresponding return (Poincaré) map.
2.3 Definition of trajectory classes
Trajectories in this system exhibit dynamical behaviour of four types: direct dissociation, isomerisation, nonreaction and roaming. Examples of trajectories of each type are shown in Fig. 3. These trajectories behave differently from each other because they are separated by phase space structures asymptotic to unstable periodic orbits that govern the dynamics in this system. A precise definition is based on dividing surfaces constructed using unstable periodic orbits, as dissociation and access to the wells are controlled by unstable periodic orbits ^{23}. The boundaries between the four types of trajectories are known to be fractal ^{25}.
To overcome the difficulties posed by fractal boundaries, we focus on learning and predicting imminent dissociation and imminent association of CH3+ and H. Given a two-dimensional surface of initial conditions on the energy surface, imminent dissociation/association is exhibited by those trajectories that lead to dissociation/association without returning to the initial surface. Based on these properties we distinguish the following classes of trajectories:
1. imminent dissociation and association: trajectories that enter a well in backward time and dissociate in forward time,
2. imminent association only: trajectories that enter a well in backward time and return to the initial surface(s) in forward time,
3. imminent dissociation only: trajectories that return to the initial surface(s) in backward time and dissociate in forward time,
4. other: trajectories that return to the initial surface(s) in backward and forward time.
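The labelling rule above can be summarised in a few lines, assuming each trajectory has already been integrated forward and backward in time until it hit one of the terminating events; the string labels for the outcomes are hypothetical names introduced only for this sketch:

```python
# Outcomes of integration (hypothetical labels): "well" = crossed the
# association threshold, "dissociated" = crossed the dissociation
# threshold, "returned" = came back to an initial surface.
def classify(backward: str, forward: str) -> int:
    """Map backward/forward-time outcomes to the four trajectory classes."""
    if backward == "well" and forward == "dissociated":
        return 1  # imminent dissociation and association
    if backward == "well" and forward == "returned":
        return 2  # imminent association only
    if backward == "returned" and forward == "dissociated":
        return 3  # imminent dissociation only
    return 4      # other

assert classify("well", "dissociated") == 1
assert classify("returned", "returned") == 4
```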
2.3.1 Dissociation threshold
The threshold for considering a trajectory dissociated is of dynamical origin. In systems with 2 degrees of freedom, it is an unstable periodic orbit that is due to a centrifugal barrier. It was proven to be present in all systems in which the potential tends to zero from below as r → ∞ ^{25}. Since the periodic orbit is unstable and rotationally symmetric (as required by the proof), all initial conditions with a larger r and p_r ≥ 0 diverge to infinity. Using the constructive proof, the orbit can be found as a rotationally invariant solution of the system (2), given an energy value E. In this case a solution is rotationally invariant if p_r = 0 and

ṗ_r = p_θ²/(μr³) − ∂U/∂r = 0,  (4)

where, without loss of generality, we evaluate the derivative at θ = 0. Following (4) and the energy condition

(p_θ²/2)(1/(μr²) + 1/I) + U(r, θ) = E,  (5)
we find, in agreement with existing results, the radius of the centrifugal barrier at each of the energies considered. We use these values as the threshold for dissociation.
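As an illustration of this construction, combining the invariance condition (4) with the energy condition (5) reduces the search for the rotationally invariant orbit to a one-dimensional root-finding problem in r. The sketch below uses a hypothetical ion-induced-dipole tail U(r) = −a/r⁴ and made-up parameter values in place of Chesnavich's full potential:

```python
# Hypothetical stand-ins for the parameters and the long-range potential
# (an ion-induced-dipole tail); the real model uses the full U(r, theta).
a, mu, I, E = 1.0, 0.94, 2.37, 0.1

def U(r):  return -a / r**4
def dU(r): return 4.0 * a / r**5

def F(r):
    # p_theta^2 from the invariance condition (4): p_theta^2 = mu r^3 U'(r),
    # substituted into the energy condition (5); F(r) = 0 locates the orbit.
    p_theta_sq = mu * r**3 * dU(r)
    return 0.5 * p_theta_sq * (1.0 / (mu * r**2) + 1.0 / I) + U(r) - E

# Bisection: F changes sign between the inner and outer bracket.
lo, hi = 1.0, 50.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if F(lo) * F(mid) <= 0:
        hi = mid
    else:
        lo = mid
r_orbit = 0.5 * (lo + hi)
print(f"centrifugal barrier radius: {r_orbit:.4f}")
assert abs(F(r_orbit)) < 1e-8
```

For this particular tail the condition is a quadratic in 1/r², so the bisection result can be checked against the closed-form root.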
2.3.2 Association threshold
Access to each potential well is governed by an unstable periodic orbit that admits some trajectories while repelling others. In this way the unstable periodic orbit can be viewed as the dynamical boundary of the potential well, and therefore as a correct (phase space) indicator of whether a bond is formed (entry into the potential well) or broken (escape from the potential well).
Once a trajectory enters a well, it will continue evolving in the direction of decreasing potential. In Chesnavich's system, this is predominantly captured by the dynamics in the radial degree of freedom r. While the precise moment of entry can only be determined using the aforementioned periodic orbit, it is possible to identify the turning points of trajectories solely from the Hamiltonian vector field.
At the energies considered, at every turning point in the radial direction (p_r = 0) below the dissociation threshold, and for every θ, the Hamiltonian vector field satisfies

ṗ_r = p_θ²/(μr³) − ∂U/∂r < 0,  (6)

where p_θ is given implicitly by the energy condition (5). Therefore all trajectories that are not monotonic in r evolve towards the bottom of a well. Hence we consider any trajectory that reaches the corresponding radius as being in the well at these energies.
2.4 Definition of initial surfaces
Following our goal to devise an approach that uses minimal a priori knowledge of the system, we do not fix a surface of section suited to investigating boundaries between the aforementioned classes. Instead, our approach searches multiple surfaces at the same time and identifies the surface on which SVM attains the highest accuracy. Since there may be features that significantly distort the boundaries between trajectory classes, or even cause them to be discontinuous, SVM will attain the highest accuracy on the surface on which the boundaries are simplest. As a measure of simplicity, we show the number of support vectors alongside the accuracy plots in Sec. 3.1.
We consider surfaces of the form r = const, evenly distributed between the association and dissociation thresholds from Sec. 2.3. Considering multiple surfaces comes at a low computational cost: every time we initialise a point on one of the surfaces, we find the intersections (if they exist) with all remaining surfaces during the forward and backward integration needed to classify the trajectory. We integrate each trajectory from its initial condition forward and backward in time until it reaches the association threshold, the dissociation threshold, the initial surface(s), or 100 time units.
Before training the SVM, we split the data set into training data and test data and calculate the accuracy (the ratio of correct predictions to the total number of predictions), as in Fig. 5, using unseen data. We remark that in the active learning approach most data points lie near the class boundaries. The calculated accuracy is therefore not comparable to the probability of correctly predicting a random point in the entire domain.
We remark that the choice of surfaces influences the definition of classes, as we terminate trajectories that return to any of the initial surfaces, regardless of which surface they started on. While the surfaces can be defined differently, classification based on imminent dissociation and association cannot be directly related to direct dissociation, roaming and isomerisation without knowledge of the invariant phase space structures. While the majority of the trajectories in each class can be related to a dynamical behaviour, trajectories near the boundaries are more difficult to interpret. As presented in Sec. 3.3, we nevertheless find good agreement between the class boundaries predicted by SVM at the energies considered and the invariant manifolds that separate different dynamical behaviour.
3 Results and discussion
3.1 Prediction using active learning
Under the assumption that integrating trajectories is more expensive than retraining the SVM, we propose starting with a coarse initial grid and adaptively adding new points, randomly sampled using a uniform distribution on the predicted boundary. As new points are added using successively more accurate approximations of the boundary between the classes, this approach falls under active learning ^{43, 44}. In this way it is possible to sample the surface more densely near boundaries between the classes and less densely elsewhere. By adding new points on the predicted boundary, SVM requires relatively few points to reach a qualitatively correct approximation of the class boundaries.
For the sake of shorter computational time we adaptively sample 4 points at every iteration when calculating the results shown in Fig. 8; we expect more accurate results when the model is retrained after every new sample point. We discard sample points outside of the energetically accessible area. The predicted boundary is obtained as a set of (unique) points that form the contours of an array of predicted class labels on a regular grid, using scikit-image's ^{45} find_contours function. If required, the sampling may be biased toward a particular class by duplicating the corresponding contour line or sampling each contour line separately.
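The boundary extraction step can be illustrated without scikit-image: the sketch below collects midpoints of neighbouring grid cells with differing predicted labels, a crude stand-in for find_contours, demonstrated on an analytic circular boundary rather than SVM predictions:

```python
import numpy as np

def boundary_points(xs, ys, predict):
    """Return midpoints of adjacent grid cells whose predicted labels differ."""
    XX, YY = np.meshgrid(xs, ys, indexing="ij")
    labels = predict(np.column_stack([XX.ravel(), YY.ravel()])).reshape(XX.shape)
    diff_x = labels[1:, :] != labels[:-1, :]   # label changes along x
    diff_y = labels[:, 1:] != labels[:, :-1]   # label changes along y
    pts = np.vstack([
        np.column_stack([0.5 * (XX[1:, :] + XX[:-1, :])[diff_x],
                         YY[1:, :][diff_x]]),
        np.column_stack([XX[:, 1:][diff_y],
                         0.5 * (YY[:, 1:] + YY[:, :-1])[diff_y]]),
    ])
    return pts

# Example: recover the boundary of the circle x^2 + y^2 = 0.5.
grid = np.linspace(-1.0, 1.0, 101)
pts = boundary_points(grid, grid,
                      lambda P: (P[:, 0]**2 + P[:, 1]**2 < 0.5).astype(int))
radii = np.hypot(pts[:, 0], pts[:, 1])
assert np.all(np.abs(radii - np.sqrt(0.5)) < 0.05)
```

In practice the predict argument would be the predict method of the trained classifier, and the returned points would be the pool from which new samples are drawn.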
Classifying points on multiple surfaces simultaneously leads to more accurate predictions and allows us to identify the most appropriate surface for further phase space analysis with minimal a priori knowledge of the system. Fig. 4 shows the class boundaries predicted by SVM on the surfaces for an initial 5×5 grid and 4 adaptively sampled points per iteration, until we reach a total of 200 points. At every iteration we add points on a different surface. Even though the initial data set is unbalanced (all but one of the initial 5×5 points correspond to class 1, imminent dissociation and association, or class 3, imminent dissociation only), the qualitatively correct structure of invariant manifolds is identified.
The most appropriate surface for a subsequent analysis is the surface with the simplest class boundaries, which is the one where SVM reaches the highest accuracy and requires the fewest support vectors as shown in Fig. 5. This agrees with the geometric interpretation of SVM.
Despite the relatively high prediction accuracy achieved for 200 points, it is evident from Fig. 5 that the results are not robust. Accuracy varies with the test/train split, which suffices for qualitative purposes but may be limiting for quantitative investigations. For 500 points, such as the example shown in Fig. 8, this fluctuation is reduced. We note that the prediction accuracy can be improved by optimising the choice of surfaces, the density of the initial grid and the number of points added at every iteration.
On the other hand, the noticeably lower accuracy and higher number of support vectors on the remaining surfaces correspond to boundaries that become increasingly distorted as the surfaces approach the centrifugal barrier. Increasing the number of sample points to 500 does not improve accuracy significantly. We remark that none of the sampled points in class 2 and class 4 corresponds to a trajectory that reaches the dissociation threshold; all of them return to one of the surfaces. Thus SVM attains a lower accuracy on these surfaces despite the classification being binary. Consequently, SVM not only provides the class boundaries and a decision function, but can also be used to evaluate how well-suited a given surface is for studying the class boundaries.
3.2 Phase space structures
Here we briefly discuss known phase space structures that are approximated by the class boundaries predicted by SVM. The class boundaries are formed by the intersection of the stable and unstable invariant manifolds of unstable periodic orbits with the surface. The invariant manifolds guide the trajectories from the potential well to dissociation or back to one of the potential wells ^{23, 21, 25}.
There are three important families of periodic orbits, and at every fixed energy each family contains two orbits related by symmetry:

- two inner periodic orbits (orange in Fig. 6) that delimit the two potential wells corresponding to bound isomers of CH4+,
- two outer orbits (one clockwise, one counterclockwise) due to a centrifugal barrier ^{25}, which delimit the dissociated states (red in Fig. 6),
- two middle orbits (one clockwise, one counterclockwise) in the flat region, crucial for the definition of roaming.
For every orbit with configuration space projection γ, the corresponding dividing surface (DS) is defined as the set of all points on the energy surface whose configuration space projection lies on γ. The inner DSs are known to be spheres, one corresponding to each periodic orbit, while the middle and outer DSs are known to be tori, each containing two orbits ^{46}. Because a trajectory can intersect a DS at a given point in two different directions, for example with p_r > 0 or p_r < 0 in the case of the outer DS, in all of the following when we refer to a DS we mean its outward half.
Directly dissociating trajectories evolve from a well to the dissociated states passing through an inner, the middle and the outer DS. Isomerising trajectories originate in a well, pass through the middle DS and end in a well without ever reaching the outer DS. Similarly nonreactive trajectories go from dissociated states to dissociated states via the middle DS without ever reaching an inner DS. Roaming is defined as the evolution of trajectories from either of the potential wells to the dissociated states while crossing the middle DS more than once ^{23}.
Being crossed at least once by trajectories of all of the above-mentioned types, the middle DS was used in previous works to investigate the dynamics and roaming in Chesnavich's CH4+ system ^{25, 40}. Fig. 7 shows the intersection of the middle DS with the unstable invariant manifolds of the inner orbits, which guide trajectories away from the inner DSs, and the stable invariant manifolds of the outer orbits, which convey trajectories to the outer DS and into dissociated states. We would like to point out that the invariant manifolds intersect this surface infinitely many times, forming a fractal structure similar to that of Smale's horseshoe map (^{47}, Sec. 16.2). We only show the first intersection, that is, the one closest along the manifold to the corresponding periodic orbit. In the dynamical systems literature, these intersections are known as the first reactive islands ^{30, 31, 34, 35, 49}. Every subsequent reactive island is an image of the first island under the return map for unstable manifolds, or a preimage under the return map for stable manifolds.
The first reactive islands allow us to identify the first and last intersections of the enclosed trajectories with the middle DS. This allows us to conclude whether a trajectory originates immediately from a well or not, and whether a trajectory is going to dissociate immediately or not. Consequently, we can distinguish between direct dissociation, roaming + isomerisation, roaming + nonreactivity, and other trajectories that remain in the region between the inner and outer DSs, as shown in Fig. 7.
Once the first reactive islands have been identified, their images under the return map can be calculated numerically. This approach to obtaining higher order reactive islands is preferable to directly using SVM because, in the case of a fractal structure, it may not be possible to separate the classes using hyperplanes.
3.3 Discovering phase space structures with SVM
As remarked in Sec. 2.4, the majority, but not all, of the initial conditions in each class can be related to a dynamical behaviour induced by the intersections of the stable and unstable invariant manifolds mentioned above. Initial conditions in class 1 lead predominantly to direct dissociation, those in class 2 largely correspond to roaming + isomerisation, class 3 to roaming + nonreactivity, and class 4 to other trajectories that remain between the inner and outer DSs.
The class boundaries identified by SVM ultimately approximate the invariant manifolds identified in previous works, which, as discussed in Sec. 3.2, mediate the reaction dynamics in this system. Fig. 8 illustrates the convergence of the SVM predictions and their agreement with the known invariant manifolds, showing the initial 5×5 grid and subsequently a total of 100, 200 and 500 sample points. Most notably, the intersection of invariant manifolds that causes roaming ^{25} is identified quickly. While SVM gives a clear indication of where roaming and isomerising trajectories are located, a precise distinction between roaming and isomerisation is not possible due to the fractal boundary separating them.
3.4 Comment on fractal structures
For some systems, higher order reactive islands can be determined in the same fashion as the first ones. As a consequence of heteroclinic intersections, however, higher order reactive islands in this system have a fractal structure, which is not suited to SVM for the following reasons:

- Depending on sampling density, a finite amount of data resolves fractal structures only up to a limited resolution.
- The heteroclinic intersections cause parts of a reactive island to be deformed into narrowing bands. The radial basis functions underlying our SVM approach have a fixed radius, which is not suitable for approximating structures with varying "widths". While the approach performs well when the boundaries/manifolds between two classes are sufficiently far apart, the proximity of multiple boundaries/manifolds leads to inaccuracies.
4 Conclusions and Outlook
In this paper we have shown how to develop a machine learning approach, based on support vector machines (SVM), to discover the phase space structures that distinguish between distinct reaction pathways. The approach uses minimal a priori knowledge of the dynamics of the system. Given the definitions of the classes of trajectories, we derive association and dissociation thresholds from properties of the potential energy surface. Our approach then classifies initial conditions on multiple surfaces using an active learning procedure based on SVM and selects the most appropriate surface for studying the identified class boundaries. We show that the class boundaries approximate known phase space structures that have been computed in previous works on Chesnavich's model as the structures governing the dynamics of the model. The advantage of our SVM-based approach is that it requires calculating significantly fewer trajectories. This suggests that our approach may be successful in higher dimensional models of reaction dynamics, and this will be a focus of future work. In particular, the work on reactive islands in three degrees-of-freedom Hamiltonian systems described in ^{39} could serve as a benchmark.
Acknowledgments
We acknowledge the support of EPSRC Grant no. EP/P021123/1 and Office of Naval Research (Grant No. N000141712220).
References
 Rupp et al. 2018 Rupp, M.; von Lilienfeld, O. A.; Burke, K. The Journal of Chemical Physics 2018, 148, 241401.
 Unke et al. 2020 Unke, O. T.; Koner, D.; Patra, S.; Käser, S.; Meuwly, M. Machine Learning: Science and Technology 2020, 1, 013001.
 Meuwly 2021 Meuwly, M. arXiv:2101.03530 [physics], 2021.
 Westermayr et al. 2021 Westermayr, J.; Gastegger, M.; Schütt, K. T.; Maurer, R. J. The Journal of Chemical Physics 2021, 154, 230903.
 Li et al. 2017 Li, Y.; Li, H.; Pickard, F. C.; Narayanan, B.; Sen, F. G.; Chan, M. K. Y.; Sankaranarayanan, S. K. R. S.; Brooks, B. R.; Roux, B. Journal of Chemical Theory and Computation 2017, 13, 4492–4503.
 Qiao et al. 2020 Qiao, Z.; Welborn, M.; Anandkumar, A.; Manby, F. R.; Miller, T. F. The Journal of Chemical Physics 2020, 153, 124111.
 Käser et al. 2020 Käser, S.; Koner, D.; Christensen, A. S.; von Lilienfeld, O. A.; Meuwly, M. The Journal of Physical Chemistry A 2020, 124, 8853–8865.
 Koner and Meuwly 2020 Koner, D.; Meuwly, M. Journal of Chemical Theory and Computation 2020, 16, 5474–5484.
 Pozun et al. 2012 Pozun, Z. D.; Hansen, K.; Sheppard, D.; Rupp, M.; Müller, K.-R.; Henkelman, G. J. Chem. Phys. 2012, 136, 174101.
 Carpenter et al. 2018 Carpenter, B. K.; Ezra, G. S.; Farantos, S. C.; Kramer, Z. C.; Wiggins, S. The Journal of Physical Chemistry B 2018, 122, 3230–3241.
 Schraft et al. 2018 Schraft, P.; Junginger, A.; Feldmaier, M.; Bardakcioglu, R.; Main, J.; Wunner, G.; Hernandez, R. Physical Review E 2018, 97, 042309.
 Chesnavich 1986 Chesnavich, W. J. J. Chem. Phys. 1986, 84, 2615–2619.
 Ezra and Wiggins 2019 Ezra, G. S.; Wiggins, S. Int. J. Bifurcation Chaos 2019, 29, 1950025.
 Futrell and Tiernan 1968 Futrell, J. H.; Tiernan, T. O. Science 1968, 162, 415–422.
 Stevenson 1957 Stevenson, D. P. The Journal of Physical Chemistry 1957, 61, 1453–1456.
 Ferguson 1975 Ferguson, E. E. Annu. Rev. Phys. Chem. 1975, 26, 17–38.
 Chesnavich and Bowers 1982 Chesnavich, W. J.; Bowers, M. T. Theory of Ion-Neutral Interactions: Application of Transition State Theory Concepts to Both Collisional and Reactive Properties of Simple Systems; Pergamon Press, 1982.
 Franklin 2012 Franklin, J. Ion-molecule reactions; Springer Science & Business Media, 2012; Vol. 1.
 Meyer and Wester 2017 Meyer, J.; Wester, R. Annu. Rev. Phys. Chem. 2017, 68, 333–353.
 Bowman and Suits 2011 Bowman, J. M.; Suits, A. G. Phys. Today 2011, 64, 33.
 Mauguière et al. 2014 Mauguière, F. A. L.; Collins, P.; Ezra, G. S.; Farantos, S. C.; Wiggins, S. Chem. Phys. Lett. 2014, 592, 282–287.
 Suits 2020 Suits, A. G. Annual Review of Physical Chemistry 2020, 71, 77–100.
 Mauguière et al. 2014 Mauguière, F. A. L.; Collins, P.; Ezra, G. S.; Farantos, S. C.; Wiggins, S. J. Chem. Phys. 2014, 140, 134112.
 Mauguière et al. 2017 Mauguière, F. A. L.; Collins, P.; Kramer, Z. C.; Carpenter, B. K.; Ezra, G. S.; Farantos, S. C.; Wiggins, S. Annu. Rev. Phys. Chem. 2017, 68.
 Krajňák and Waalkens 2018 Krajňák, V.; Waalkens, H. J. Math. Chem. 2018, 1–38.
 Cortes and Vapnik 1995 Cortes, C.; Vapnik, V. Machine learning 1995, 20, 273.
 Vapnik et al. 1996 Vapnik, V.; Golowich, S. E.; Smola, A. Support Vector Method for Function Approximation, Regression Estimation, and Signal Processing. Advances in Neural Information Processing Systems 9. 1996; pp 281–287.
 Vapnik 2000 Vapnik, V. The Nature of Statistical Learning Theory, 2nd ed.; Springer, 2000.
 Mukherjee et al. 1997 Mukherjee, S.; Osuna, E.; Girosi, F. Nonlinear prediction of chaotic time series using support vector machines. Neural Networks for Signal Processing VII. Proceedings of the 1997 IEEE Signal Processing Society Workshop. 1997; pp 511–520.
 De Vogelaere and Boudart 1955 De Vogelaere, R.; Boudart, M. J. Chem. Phys. 1955, 23, 1236.
 Pollak and Child 1980 Pollak, E.; Child, M. S. J. Chem. Phys. 1980, 73.
 De Leon and Berne 1981 De Leon, N.; Berne, B. J. The Journal of Chemical Physics 1981, 75, 3495–3510.
 Berne et al. 1982 Berne, B. J.; De Leon, N.; Rosenberg, R. O. The Journal of Physical Chemistry 1982, 86, 2166–2177.
 De Leon and Marston 1989 De Leon, N.; Marston, C. C. The Journal of Chemical Physics 1989, 91, 3405–3425.
 Marston and De Leon 1989 Marston, C. C.; De Leon, N. The Journal of Chemical Physics 1989, 91, 3392–3404.
 Ozorio de Almeida et al. 1990 Ozorio de Almeida, A. M.; De Leon, N.; Mehta, M. A.; Marston, C. C. Physica D: Nonlinear Phenomena 1990, 46, 265 – 285.
 Naik et al. 2021 Naik, S.; Krajňák, V.; Wiggins, S. Support vector machines for learning reactive islands. 2021; https://arxiv.org/abs/2107.08429.
 Naik and Wiggins 2020 Naik, S.; Wiggins, S. Phys. Chem. Chem. Phys. 2020, 22, 17890–17912.
 Krajňák et al. 2021 Krajňák, V.; García-Garrido, V. J.; Wiggins, S. Physica D 2021, 425, 132976.
 Krajňák and Wiggins 2018 Krajňák, V.; Wiggins, S. J. Chem. Phys. 2018, 149, 094109.
 Pedregosa et al. 2011 Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; Duchesnay, E. Journal of Machine Learning Research 2011, 12, 2825–2830.
 Chang and Lin 2011 Chang, C.-C.; Lin, C.-J. ACM Transactions on Intelligent Systems and Technology 2011, 2, 27.
 Settles 2009 Settles, B. Active Learning Literature Survey; Computer Sciences Technical Report 1648, 2009.
 Kremer et al. 2014 Kremer, J.; Pedersen, K. S.; Igel, C. WIREs Data Mining Knowl. Discov. 2014, 4, 313.
 van der Walt et al. 2014 van der Walt, S.; Schönberger, J. L.; Nunez-Iglesias, J.; Boulogne, F.; Warner, J. D.; Yager, N.; Gouillart, E.; Yu, T.; the scikit-image contributors, PeerJ 2014, 2, e453.
 Mauguière et al. 2016 Mauguière, F. A. L.; Collins, P.; Kramer, Z. C.; Carpenter, B. K.; Ezra, G. S.; Farantos, S. C.; Wiggins, S. J. Chem. Phys. 2016, 144.
 Hirsch et al. 2004 Hirsch, M. W.; Smale, S.; Devaney, R. L. Differential Equations, Dynamical Systems, and an Introduction to Chaos; Elsevier, 2004.
 Ref. 47, Sec. 16.2.
 Ozorio de Almeida et al. 1990 Ozorio de Almeida, A. M.; de Leon, N.; Mehta, M. A.; Marston, C. C. Phys. D 1990, 46, 265.