Machine Learning for Molecular Dynamics on Long Timescales

12/18/2018 · by Frank Noé, et al. · Freie Universität Berlin

Molecular Dynamics (MD) simulation is widely used to analyze the properties of molecules and materials. Most practical applications, such as comparison with experimental measurements, designing drug molecules, or optimizing materials, rely on statistical quantities, which may be prohibitively expensive to compute from direct long-time MD simulations. Classical Machine Learning (ML) techniques have already had a profound impact on the field, especially for learning low-dimensional models of the long-time dynamics and for devising more efficient sampling schemes for computing long-time statistics. Novel ML methods have the potential to revolutionize long-timescale MD and to obtain interpretable models. ML concepts such as statistical estimator theory, end-to-end learning, representation learning and active learning are highly interesting for the MD researcher and will help to develop new solutions to hard MD problems. With the aim of better connecting the MD and ML research areas and spawning new research on this interface, we define the learning problems in long-timescale MD, present successful approaches and outline some of the unsolved ML problems in this application field.


I Introduction

Molecular dynamics (MD) simulation is a widely used method of computational physics and chemistry to compute properties of molecules and materials. Examples include simulating how a drug molecule binds to and inhibits a protein, or how a battery material conducts ions. Despite its high computational cost, researchers use MD in order to get a principled understanding of how the composition and the microscopic structure of a molecular system translate into such macroscopic properties. In addition to scientific knowledge, this understanding can be used for designing molecular systems with better properties, such as drug molecules or enhanced materials.

MD has many practical problems, but at least four of them can be considered fundamental, in the sense that none of them is trivial for a practically relevant MD simulation, and there is extensive research on all of them. We refer to these four fundamental MD problems as SAME (Sampling, Analysis, Model, Experiment):

  1. Sampling: To compute expectation values via MD simulations the simulation time needs to significantly exceed the slowest equilibration process in the molecular system. For most nontrivial molecules and materials, the presence of rare events and the sheer cost per MD time step make sufficient direct sampling unfeasible.

  2. Analysis: If enough statistics can be collected, we face huge amounts of simulation data (e.g., millions of time steps, each having 100,000s of dimensions). How can we analyze such data and obtain comprehensive and comprehensible models of the most relevant states, structures and events sampled in the data?

  3. Model: MD simulations employ an empirical model of the molecular system studied. As the simulation computes forces from an energy model, this model is often referred to as an MD force field. MD energy models are built from molecular components fitted to quantum mechanical and experimental data. The accuracy of such a model is limited by the accuracy of the data used and the errors involved in transferring the training data, usually obtained for small molecules, to the often larger molecules simulated.

  4. Experiment: Experiments and simulations cannot access the same observables. While in MD simulation, the positions and velocities of all particles are available at all times, experiments usually probe complex functions of the positions and velocities, such as emission or absorption spectra of certain types of radiation. Computing these functions from first principles often requires the solution of a quantum-mechanical calculation with an accuracy that is unfeasible for a large molecular system. The last problem thus consists of finding good approximations to compute how an experiment would “see” a given MD state.

Machine Learning (ML) has the potential to tackle these problems, and has already had profound impact on alleviating them. Here I will focus on the analysis problem and its direct connections to the sampling problem specifically for the case of long-time MD where these problems are most difficult and interesting. I believe that the solution of these problems lies on the interface between Chemical Physics and ML, and will therefore describe these problems in a language that should be understandable to audiences from both fields.

Let me briefly link to MD problems and associated ML approaches not covered by this chapter. The present description focuses on low-dimensional models of long-time MD, and these can directly be employed to attack the sampling problem. The direct effect of these models is that short MD simulations that individually do not sample all metastable states can be integrated, such that an effective sampling timescale is reached that is much longer than the individual trajectory length, and on the order of the total simulation time (PrinzEtAl_JCP10_MSM1, ). The sampling efficiency can be further improved by adaptively selecting the starting points of MD simulations based on the long-time MD model, and iterating this process (hinrichs-pande:jcp:2007:adaptive-sampling, ; WojtasRouxBerneche_JCTC13_AdaptiveUmbrellaSampling, ; PretoClementi_PCCP14_AdaptiveSampling, ; DoerrDeFabritiis_JCTC14_OnTheFly, ; ZimmermanBowman_JCTC15_FAST, ; DoerrEtAl_JCTC16_HTMD, ; PlattnerEtAl_NatChem17_BarBar, ). This approach is called "adaptive sampling" in the MD community and is an active learning approach in ML language. Using this approach, timescales beyond seconds have been reached, and protein-protein association and dissociation has recently been sampled for the first time with atomistic resolution (PlattnerEtAl_NatChem17_BarBar, ).

A well-established approach to speed up rare events in MD is to employ so-called enhanced sampling methods that change the thermodynamic conditions (temperature, adding bias potentials, etc.) (Torrie_JCompPhys23_187, ; Grubmueller_PhysRevE52_2893, ; Hansmann1997, ; LaioParrinello_PNAS99_12562, ; FukunishiEtAl_JCP02_HamiltonianREMD, ), and to subsequently reweight to the unbiased target ensemble (FerrenbergSwendsen_PRL89_WHAM, ; BartelsKarplus_JCC97_WHAM-ML, ; Bartels_CPL00_MBAR, ; GallicchioLevy_JPCB05_TWHAM, ; ShirtsChodera_JCP08_MBAR, ; MeyWuNoe_xTRAM, ). Recently, ML methods have been used to adaptively learn optimal biasing functions in such approaches (ValssonParrinello_PRL14_VariationalMeta, ; RibeiroTiwary_JCP18_RAVE, ). A conceptually different approach to sampling is the Boltzmann Generator (NoeWu_18_BoltzmannGenerators, ), a directed generative network to directly draw statistically independent samples from equilibrium distributions. While these approaches are usually limited to computing stationary properties, ML-based MD analysis models have recently been integrated with enhanced sampling methods in order to also compute unbiased dynamical properties (WuMeyRostaNoe_JCP14_dTRAM, ; WuNoe_MMS14_TRAM1, ; RostaHummer_DHAM, ; WuEtAL_PNAS16_TRAM, ). These methods can now also access all-atom protein dynamics beyond seconds timescales (PaulEtAl_PNAS17_Mdm2PMI, ).

ML methods that use MD trajectory data to obtain low-dimensional models of the long-time dynamics are extensively discussed here. Not discussed are manifold learning methods that purely use the data distribution, such as kernel PCA (SchoelkopfSmolaMueller_NeurComp98_kPCA, ), isomap (Tenenbaum_Science290_2319, ; DasEtAl_PNAS08_Isomap, ) or diffusion maps (CoifmanLafon_PNAS05_DiffusionMaps, ; RohrdanzClementi_JCP134_DiffMaps, ). Likewise, there is extensive research on geometric clustering methods – both on the ML and the MD application side – which only plays a minor role in the present discussion.

Learning an accurate MD model – the so-called force-field problem – is one of the most basic and important problems of MD simulation. While this problem has traditionally been addressed by relatively ad hoc parametrization methods, it is now becoming more and more a well-defined ML problem where universal function approximators (neural networks or kernel machines) are trained to reproduce quantum-mechanical potential energy surfaces with high accuracy (BehlerParrinello_PRL07_NeuralNetwork, ; RuppEtAl_PRL12_QML, ; BartokKondorCsanyi_PRB13_SOAP, ; SchuettEtAl_NatComm17_tensornetworksMD, ; Schuett_SchNet, ; BereauEtAl_JCP18_Physical, ). See other chapters in this book for more details. A related approach to scale to the next-higher length scale is the learning of coarse-grained MD models from all-atom MD data (WangEtAl_arxiv18_MLCoarseGraining, ; ZhangEtAl_JCP18_DeePCG, ; WangBombarelli_Autograin, ). These approaches have demonstrated that they can reach high accuracy, but employing the kernel machine or neural network to run MD simulations is still orders of magnitude slower than running a highly optimized MD code with an explicitly coded model. Achieving high accuracy while approximately matching the computational performance of commonly used MD codes is an important future aim.

Much less ML work has been done on the interpretation and integration of experimental data. MD models are typically parametrized by combining the matching of energies and forces from quantum-mechanical simulations with the matching of thermodynamic quantities measured by experiments, such as solvation free energies of small molecules. As yet, there is no rigorous ML method which learns MD models following this approach. Several ML methods have been proposed to integrate experimental data on the level of a model learned from MD simulation data (e.g., a Markov state model), typically by using information-theoretic principles such as maximum entropy or maximum caliber (HummerKoefinger_JCP15_Bayesian, ; OlssonEtAl_PNAS17_AugmentedMarkovModels, ; DixitDill_JCTRC18_MaxCalMSM, ). Finally, there is an emerging field of ML methods that predict experimental quantities, such as spectra, from chemical or molecular structures, which is an essential task that needs to be solved to perform data integration between simulation and experiment. An important stepping stone for improving our ability to predict experimental properties is the availability of training datasets in which chemical structures, geometric structures and experimental measurements under well-defined conditions are linked.

II Learning Problems for Long-time Molecular Dynamics

II.1 What would we like to compute?

The most basic quantitative aim of MD is to compute equilibrium expectations. When $\mathbf{x}$ is the state of a molecular system, such as the coordinates and velocities of the atoms of a protein in a periodic solvent box, the average value of an observable $O(\mathbf{x})$ is given by:

$$\mathbb{E}[O] = \int \mu(\mathbf{x})\, O(\mathbf{x})\, \mathrm{d}\mathbf{x} \qquad (1)$$

where $\mu(\mathbf{x})$ is the equilibrium distribution, i.e., the probability of finding the molecule in state $\mathbf{x}$ at equilibrium conditions. A common choice is the Boltzmann distribution in the canonical ensemble at temperature $T$:

$$\mu(\mathbf{x}) = Z^{-1} \exp\left(-\frac{U(\mathbf{x})}{k_B T}\right) \qquad (2)$$

where $U(\mathbf{x})$ is a potential energy, $Z$ is a normalization constant, and the constant $k_B T$ is the mean thermal energy per degree of freedom. The observable $O(\mathbf{x})$ can be chosen to compute, e.g., the probability of a protein to be folded at a certain temperature, or the probability for a protein and a drug molecule to be bound at a certain drug concentration, which relates to how much the drug inhibits the protein’s activity. Other equilibrium expectations, such as spectroscopic properties, do not directly translate to molecular function, but are useful to validate and calibrate simulation models.

Molecules are not static but change their state over time. Under equilibrium conditions, these dynamical changes are due to thermal fluctuations, leading to trajectories that are stochastic. Given the configuration $\mathbf{x}_t$ at time $t$, the probability of finding the molecule in configuration $\mathbf{x}_{t+\tau}$ at a later time $t+\tau$ can be expressed by the transition density $p_\tau$:

$$\mathbf{x}_{t+\tau} \sim p_\tau\left(\mathbf{x}_{t+\tau} \mid \mathbf{x}_t\right) \qquad (3)$$

Thus, a second class of relevant quantities is that of dynamical expectations:

$$\mathbb{E}[O] = \iint \mu(\mathbf{x}_t)\, p_\tau\left(\mathbf{x}_{t+\tau} \mid \mathbf{x}_t\right) O\left(\mathbf{x}_t, \mathbf{x}_{t+\tau}\right) \mathrm{d}\mathbf{x}_t\, \mathrm{d}\mathbf{x}_{t+\tau} \qquad (4)$$

As above, the observable $O$ determines which dynamical property we are interested in. With an appropriate choice we can measure the average time a protein takes to fold or unfold, or dynamical spectroscopic expectations such as fluorescence correlations or dynamical scattering spectra.
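As a concrete illustration of Eqs. (1) and (4), the following minimal Python sketch estimates both types of expectation by direct time averaging over an ergodically sampled trajectory; the trajectory array and the observable functions are hypothetical stand-ins for real MD data:

import numpy as np

# Minimal sketch, assuming `traj` is a (T, n) array of sampled states x_t
# from a long equilibrium trajectory. Under ergodic sampling, time averages
# converge to the expectations in Eqs. (1) and (4).

def stationary_expectation(traj, observable):
    # E[O] (Eq. 1) as a direct time average of O(x_t)
    return np.mean([observable(x) for x in traj])

def dynamical_expectation(traj, observable2, lag):
    # dynamical expectation (Eq. 4): average of O(x_t, x_{t+tau})
    # over all time-lagged pairs
    return np.mean([observable2(traj[t], traj[t + lag])
                    for t in range(len(traj) - lag)])

# toy example: second moment and time-lagged autocorrelation of a 1D signal
traj = np.random.randn(10000, 1)
print(stationary_expectation(traj, lambda x: x[0] ** 2))
print(dynamical_expectation(traj, lambda x, y: x[0] * y[0], lag=10))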

II.2 What is Molecular Dynamics?

MD simulation mimics the natural dynamics of molecules by time-propagating the state $\mathbf{x}$ of a molecular system, such as the coordinates and velocities of the atoms of a protein in a periodic solvent box. MD is a Markov process involving deterministic components, such as the gradient of a model potential, and stochastic components, e.g. from a thermostat. The specific choice of these components determines the transition density (3). Independent of these choices, a reasonable MD algorithm should be constructed such that it samples from $\mu(\mathbf{x})$ in the long run:

$$\mathbf{x}_t \sim \mu(\mathbf{x}) \quad \text{for } t \to \infty \qquad (5)$$

Thus, if a long enough MD trajectory can be generated, the expectation values (1) and (4) can be computed as direct averages. Unfortunately, this idea can only be implemented directly for very small and simple molecular systems. Most of the interesting molecular systems involve rare events, and as a result generating MD trajectories that are long enough to compute the expectation values (1) and (4) by direct averaging becomes unfeasible. For example, the currently fastest special-purpose supercomputer for MD, Anton II, can generate simulations on the order of 50 μs per day for a protein system (ShawEtAl_Anton2, ). The time for two strongly binding proteins to spontaneously dissociate can be over an hour, corresponding to a simulation time of a century for a single event (PlattnerEtAl_NatChem17_BarBar, ).

II.3 Learning Problems for long-time MD

Repeated sampling from $p_\tau\left(\mathbf{x}_{t+\tau} \mid \mathbf{x}_t\right)$ “simulates” the MD system in time steps of length $\tau$ and will, due to (5), result in configurations sampled from $\mu(\mathbf{x})$. Hence, knowing $p_\tau$ is sufficient to compute any stationary or dynamical expectation (1, 4). The primary ML problem for long-time MD is thus to learn a model of the probability distribution $p_\tau\left(\mathbf{x}_{t+\tau} \mid \mathbf{x}_t\right)$ from simulation data pairs $\left(\mathbf{x}_t, \mathbf{x}_{t+\tau}\right)$ that allows $p_\tau$ to be sampled efficiently. However, this problem is almost never addressed directly, because it is unnecessarily difficult. Configurations $\mathbf{x}$ live in a very high-dimensional space (typically $10^3$ to $10^6$ dimensions), the probability distributions $\mu(\mathbf{x})$ and $p_\tau\left(\mathbf{x}_{t+\tau} \mid \mathbf{x}_t\right)$ are multimodal and complex such that direct sampling is not tractable, and because of the exponential relationship between energies and probabilities (2), small mistakes in sampling will lead to completely unrealistic molecular structures.

Because of these difficulties, ML methods for long-time MD usually take the detour of finding a low-dimensional representation, often called latent space representation, $\mathbf{z} = \chi(\mathbf{x})$, using the encoder $\chi$, and learning the dynamics in that space.

A relatively recent but fundamental insight is that for many MD systems there exists a natural low-dimensional representation in which the stationary and dynamical properties can be represented exactly if we give up time resolution by choosing a large lag time $\tau$. Thus, for long-time MD the intractable problem to learn $p_\tau\left(\mathbf{x}_{t+\tau} \mid \mathbf{x}_t\right)$ can be broken down into three learning problems (LPs), out of which two are much less difficult, and the third one does not need to be solved in order to compute stationary or dynamical expectations (1, 4). These LPs will be treated in the remainder of the article:

  1. LP1: Learn propagator $\mathbf{P}(\tau)$ in representation $\mathbf{z}$. The simplest problem is to learn a model to propagate the latent state $\mathbf{z}_t$ in time for a given encoding $\chi$. This model is often linear, using the propagator matrix $\mathbf{P}(\tau)$, and hence shallow learning methods such as regression are used. In addition to obtaining an accurate model, it is desirable for $\mathbf{P}(\tau)$ to be compact and easily interpretable/readable for a human specialist.

  2. LP2: Learn encoding to representation $\mathbf{z} = \chi(\mathbf{x})$. Learning the generally nonlinear encoding $\chi$ is a harder problem. Both shallow methods (regression in kernel and feature spaces, clustering and likelihood maximization) as well as deep methods (neural networks) are used. LP1 and LP2 can be coupled to an end-to-end learning problem for $\chi$ and $\mathbf{P}(\tau)$. LP2 has only become a well-defined ML problem recently with the introduction of a variational approach that defines a meaningful loss function for LP2.

  3. LP3: Learn decoding $\mathbf{z} \rightarrow \mathbf{x}$ to configuration space. The most difficult problem is to decode the latent representation back to configuration space. Because configuration space is much higher dimensional than latent space, this is an inverse problem. The most faithful solution is to learn a generator representing the conditional probability distribution $p\left(\mathbf{x} \mid \mathbf{z}\right)$. This problem contains the hardest parts of the full learning problem for $p_\tau$, and addressing it is still in its infancy.

These learning problems lead to different building blocks that can be implemented by neural networks or linear methods and can be combined towards different architectures (Fig. 1).

Figure 1: Overview of network structures for learning Markovian dynamical models

III LP1: Learning Propagator in Feature Space

The simplest and most established learning problem is to learn a propagator $\mathbf{P}(\tau)$ for a given, fixed encoding $\chi$. Therefore we discuss this learning problem first, before defining what a “good” encoding is and how to find it. As will be discussed below, for most MD systems of interest, there exists an encoding to a spectral representation in which the dynamics is linear and low-dimensional. Although this spectral representation can often not be found exactly, it can usually be approximated well enough such that a linear dynamic model

$$\mathbb{E}\left[\mathbf{z}_{t+\tau}\right] = \mathbf{P}^\top(\tau)\, \mathbb{E}\left[\mathbf{z}_t\right] \qquad (6)$$

is an excellent approximation as well. $\mathbb{E}[\cdot]$ denotes an expectation value over time that accounts for stochasticity in the dynamics, and can be omitted for deterministic dynamical systems. For example, if $\mathbf{z}_t$ indicates which state the system is in at time $t$, $\mathbb{E}\left[\mathbf{z}_{t+\tau}\right]$ corresponds to a probability distribution over states.

Finding a linear model $\mathbf{P}(\tau)$ is a shallow, unsupervised learning problem that in many cases has an algebraic expression for the optimum. Having a linear propagator also has great advantages for the analysis of the dynamical system. The analyses that can be done depend on the type of the representation $\chi$ and the mathematical properties of $\mathbf{P}(\tau)$. If $\chi$ performs a one-hot encoding that indicates which “state” the system is in, then the pair $\left(\chi, \mathbf{P}(\tau)\right)$ is called a Markov state model (MSM (SchuetteFischerHuisingaDeuflhard_JCompPhys151_146, ; SwopePiteraSuits_JPCB108_6571, ; NoeHorenkeSchutteSmith_JCP07_Metastability, ; ChoderaEtAl_JCP07, ; BucheteHummer_JPCB08, ; PrinzEtAl_JCP10_MSM1, )), and $\mathbf{P}(\tau)$ is the transition matrix of a Markov chain whose elements $p_{ij}$ are nonnegative and can be interpreted as the conditional probabilities to be in state $j$ at time $t+\tau$ given that the system was in state $i$ at time $t$ (Sec. III.2 and III.3). For MSMs, the whole arsenal of Markov chain analysis algorithms is available, e.g. for computing limiting distributions, first passage times or the statistics of transition pathways (MetznerSchuetteVandenEijnden_TPT, ; NoeSchuetteReichWeikl_PNAS09_TPT, ). If the transition matrix additionally has a real-valued spectrum, which is associated with dynamics at thermodynamic equilibrium conditions (Sec. III.3), additional analyses are applicable, such as the computation of metastable (long-lived) sets of states by spectral clustering (SchuetteFischerHuisingaDeuflhard_JCompPhys151_146, ; DeuflhardWeber_LAA05_PCCA+, ; NoeHorenkeSchutteSmith_JCP07_Metastability, ).

A broader class of propagators arises from encodings that are partitions of unity, i.e. where $\sum_i \chi_i(\mathbf{x}) = 1$ and $\chi_i(\mathbf{x}) \ge 0$ for all $\mathbf{x}$ (KubeWeber_JCP07_CoarseGraining, ; MardtEtAl_VAMPnets, ). Such encodings correspond to a “soft clustering”, where every configuration can still be assigned to a state, but the assignment is no longer unique. The resulting propagators are typically no longer transition matrices whose elements can be guaranteed to be nonnegative, but they can still be used to propagate probability densities by means of Eq. (6), and if they have a unique eigenvalue of $1$, the corresponding eigenvector $\boldsymbol{\pi}$ still corresponds to the unique equilibrium distribution:

$$\mathbf{P}^\top(\tau)\, \boldsymbol{\pi} = \boldsymbol{\pi} \qquad (7)$$

For arbitrary functions $\chi$, we can still use $\mathbf{P}(\tau)$ to propagate state vectors according to Eq. (6), although these state vectors no longer have a probabilistic interpretation, but are simply coefficients that model the configuration in the representation’s basis. Owing to the Markovianity of the model, we can test how well the time-propagation of the model coincides with an estimate of the model at longer times by means of the Chapman-Kolmogorov equation:

$$\mathbf{P}(n\tau) \approx \mathbf{P}^n(\tau) \qquad (8)$$

In order to implement this equation, one has to decide which matrix norm should be used to compare the left and right hand side. A common choice is to compare the leading eigenvalues $\lambda_i(\tau)$. As these decay exponentially with time in a Markov process, it is common to transform them to relaxation rates or timescales by means of:

$$t_i = -\frac{\tau}{\ln \left|\lambda_i(\tau)\right|} \qquad (9)$$

A consequence of the Chapman-Kolmogorov equality is that these relaxation timescales are independent of the lag time $\tau$ at which $\mathbf{P}(\tau)$ is estimated (SwopePiteraSuits_JPCB108_6571, ). For real-valued eigenvalues, $t_i$ corresponds to an ordinary relaxation time of the corresponding dynamical process. If $\mathbf{P}(\tau)$ has complex-valued eigenvalues, $t_i$ is the decay time of the envelope of an oscillating process whose oscillation frequency depends on the phase of $\lambda_i$.
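The following hedged sketch (function names and the toy matrix are illustrative, not from the original) computes the implied timescales of Eq. (9) and a simple Frobenius-norm Chapman-Kolmogorov test of Eq. (8):

import numpy as np

def timescales(P, tau):
    # relaxation timescales t_i = -tau / ln|lambda_i| (Eq. 9),
    # excluding the stationary eigenvalue lambda_1 = 1
    evals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return -tau / np.log(evals[1:])

def ck_error(P_tau, P_ntau, n):
    # Frobenius-norm discrepancy between P(tau)^n and P(n tau) (Eq. 8)
    return np.linalg.norm(np.linalg.matrix_power(P_tau, n) - P_ntau)

# toy two-state transition matrix
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(timescales(P, tau=1.0))  # one finite relaxation timescale
print(ck_error(P, np.linalg.matrix_power(P, 5), 5))  # exactly Markovian: ~0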

III.1 Loss Function and Basic Statistics

We are given one or many MD simulation trajectories $\{\mathbf{x}_t\}$ and apply $\chi$ in order to map them to the representation $\{\mathbf{z}_t\}$, which defines the input to LP1. The basic learning problem is the parameter estimation problem, which consists of obtaining the optimal estimator $\hat{\mathbf{P}}(\tau)$ as follows:

  1. Define a loss function $L\left(\mathbf{P}(\tau); \{\mathbf{z}_t\}\right)$.

  2. Obtain the optimal estimator as $\hat{\mathbf{P}}(\tau) = \arg\min_{\mathbf{P}} L\left(\mathbf{P}; \{\mathbf{z}_t\}\right)$.

As most texts about molecular kinetics do not use the concept of a loss function, I would like to highlight the importance of a loss (or score) function from a ML point of view. The difference between fitting a training data set and ML is that ML aims at finding the estimator that performs best on an independent test data set. To this end we need to not only optimize the parameters (such as the matrix elements of $\mathbf{P}(\tau)$), but also hyper-parameters (such as the size of $\mathbf{P}(\tau)$), which requires the concept of a loss function. Another important learning problem is to estimate the uncertainties of the estimator $\hat{\mathbf{P}}(\tau)$.

To express the loss function and the optimal estimator of linear propagators $\mathbf{P}(\tau)$, we do not actually need the full trajectory $\{\mathbf{z}_t\}$, but only certain sufficient statistics that are usually more compact than $\{\mathbf{z}_t\}$ and thus may require less storage space and lead to faster algorithms. The most prominent statistics are the empirical means and covariance matrices:

$$\boldsymbol{\mu}_0 = \frac{1}{T} \sum_{t=1}^{T} \mathbf{z}_t \qquad (10)$$
$$\boldsymbol{\mu}_\tau = \frac{1}{T} \sum_{t=1}^{T} \mathbf{z}_{t+\tau} \qquad (11)$$
$$\mathbf{C}_{00} = \frac{1}{T} \sum_{t=1}^{T} \mathbf{z}_t \mathbf{z}_t^\top \qquad (12)$$
$$\mathbf{C}_{0\tau} = \frac{1}{T} \sum_{t=1}^{T} \mathbf{z}_t \mathbf{z}_{t+\tau}^\top \qquad (13)$$
$$\mathbf{C}_{\tau\tau} = \frac{1}{T} \sum_{t=1}^{T} \mathbf{z}_{t+\tau} \mathbf{z}_{t+\tau}^\top \qquad (14)$$

where the sums run over the $T$ available time-lagged pairs $\left(\mathbf{z}_t, \mathbf{z}_{t+\tau}\right)$.

A common modification to (12,14) is the so-called shrinkage estimator that is used in ridge or Tikhonov regularization (SchaeferStrimmer_05_Shrinkage, ). Since many algorithms involve the inversion of (12,14) which might be rank-deficient, these estimators are often modified by adding a second matrix which ensures full rank, e.g.:

$$\mathbf{C}_{00} \leftarrow \mathbf{C}_{00} + \eta \mathbf{I} \qquad (15)$$
$$\mathbf{C}_{\tau\tau} \leftarrow \mathbf{C}_{\tau\tau} + \eta \mathbf{I} \qquad (16)$$

where the small number $\eta$ is a regularization hyper-parameter.
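In code, the sufficient statistics (10-14) with optional ridge regularization (15, 16) can be gathered as follows; this is a sketch assuming Z is a (T+lag, m) array of encoded configurations z_t:

import numpy as np

def covariances(Z, lag, eta=0.0):
    # split trajectory into time-lagged pairs (z_t, z_{t+tau})
    Z0, Zt = Z[:-lag], Z[lag:]
    T = len(Z0)
    mu0, mut = Z0.mean(axis=0), Zt.mean(axis=0)  # means (Eqs. 10, 11)
    C00 = Z0.T @ Z0 / T   # instantaneous covariance (Eq. 12)
    C0t = Z0.T @ Zt / T   # time-lagged covariance (Eq. 13)
    Ctt = Zt.T @ Zt / T   # covariance at time t+tau (Eq. 14)
    if eta > 0.0:         # ridge / shrinkage regularization (Eqs. 15, 16)
        C00 = C00 + eta * np.eye(C00.shape[0])
        Ctt = Ctt + eta * np.eye(Ctt.shape[0])
    return mu0, mut, C00, C0t, Ctt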

III.2 Maximum Likelihood and Markov State Models

The concepts of maximum likelihood estimators and Markov State Models (MSMs) are naturally obtained by defining the following encoding:

$$\chi_i(\mathbf{x}) = \begin{cases} 1 & \mathbf{x} \in S_i \\ 0 & \text{otherwise} \end{cases} \qquad (17)$$

where $\left\{S_1, \dots, S_m\right\}$ is a partition of configuration space into $m$ discrete states, i.e. each point $\mathbf{x}$ is assigned to exactly one state $S_i$, indicated by the position of the $1$ in the encoding vector. In ML, (17) is called one-hot encoding. A consequence of (17) is that the covariance matrix (13) becomes:

$$\left[\mathbf{C}_{0\tau}\right]_{ij} = \frac{c_{ij}}{T}$$

where $c_{ij}$ counts the total number of transitions observed from state $i$ to state $j$. The covariance matrix (12) is a diagonal matrix with diagonal elements

$$\left[\mathbf{C}_{00}\right]_{ii} = \frac{c_i}{T}$$

where we use $c_i = \sum_j c_{ij}$ to count the total number of transitions starting in state $i$. With this encoding, a natural definition for the propagator is a transition matrix whose elements indicate the transition probability from any state $i$ to any state $j$ in a time step $\tau$:

$$p_{ij} = \mathbb{P}\left(\mathbf{x}_{t+\tau} \in S_j \mid \mathbf{x}_t \in S_i\right).$$

A natural optimality principle is then the maximum likelihood estimator (MLE): find the transition matrix $\mathbf{P}(\tau)$ that has the highest probability to produce the observed trajectory. The likelihood is given by:

$$L = \prod_{t} p_{s_t,\, s_{t+\tau}} \propto \prod_{i,j} p_{ij}^{c_{ij}} \qquad (18)$$

where $s_t$ denotes the state index at time $t$, and the last term collects equal transition events along the trajectory and discards the proportionality factor. Maximizing $L$ is equivalent to minimizing $-L$. However, as is common in likelihood formulations, we instead use the negative log-likelihood $-\ln L$ as a loss, which is minimal at the same $\hat{\mathbf{P}}(\tau)$ but avoids the product:

$$L_{\mathrm{ML}} = -\sum_{i,j} c_{ij} \ln p_{ij} \qquad (19)$$

The MLE can be easily found by minimizing (19) with the constraint $\sum_j p_{ij} = 1$ using the method of Lagrange multipliers. The result is intuitive: the maximum likelihood transition probability equals the corresponding fraction of transitions observed out of each state:

$$\hat{p}_{ij} = \frac{c_{ij}}{c_i}.$$

In matrix form we can express this estimator as

$$\hat{\mathbf{P}}(\tau) = \mathbf{C}_{00}^{-1}\, \mathbf{C}_{0\tau}, \qquad (20)$$

an expression that we will also find for other optimization principles. As we have a likelihood (18), we can also define priors and construct a full Bayesian estimator that not only provides the maximum likelihood result (20), but also posterior means and variances for estimating uncertainties. Efficient samplers are known that allow us to sample transition matrices directly from the distribution (18), and these samples can be used to compute uncertainties on quantities derived from $\mathbf{P}(\tau)$ (SinghalPande_JCP123_204909, ; Singhal_JCP07, ).

An important property of a transition matrix is its stationary distribution $\boldsymbol{\pi}$ (which we will assume to exist and be unique here), with elements $\pi_i = \mathbb{P}\left(\mathbf{x} \in S_i\right)$, that can be computed by solving the eigenvalue problem (7).
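The count-based MSM estimator is compact enough to state in full; the following sketch (helper names are illustrative) counts transitions, applies the maximum likelihood normalization (20) and extracts the stationary distribution from the eigenvalue problem (7):

import numpy as np

def count_matrix(dtraj, n_states, lag):
    # c_ij: number of observed transitions i -> j at the given lag time
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        C[i, j] += 1.0
    return C

def msm_mle(C):
    # maximum likelihood transition matrix p_ij = c_ij / c_i (Eq. 20)
    return C / C.sum(axis=1, keepdims=True)

def stationary_distribution(P):
    # eigenvector of P^T with eigenvalue 1, normalized to sum 1 (Eq. 7)
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return pi / pi.sum()

dtraj = np.random.randint(0, 3, size=10000)  # toy discrete trajectory
P = msm_mle(count_matrix(dtraj, n_states=3, lag=1))
print(stationary_distribution(P))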

III.3 MSMs with Detailed Balance

In thermodynamic equilibrium, i.e., when a molecular system is evolving purely as a result of thermal energy at a given thermodynamic condition and no external force is applied, the absolute probability of paths between any two end-points is symmetric. As a consequence, there exists no cycle in state space which carries net flux in either direction, and no net work can be extracted from the system, consistent with the second law of thermodynamics. We call this condition detailed balance and write it as:

$$\mu(\mathbf{x})\, p_\tau\left(\mathbf{y} \mid \mathbf{x}\right) = \mu(\mathbf{y})\, p_\tau\left(\mathbf{x} \mid \mathbf{y}\right) \qquad (21)$$

Integrating $\mathbf{x}$ and $\mathbf{y}$ over the sets $S_i$ and $S_j$ in this equation leads to detailed balance for MSMs:

$$\pi_i\, p_{ij} = \pi_j\, p_{ji} \qquad (22)$$

When the molecular system is simulated such that equation (21) holds, we also want to ensure that the estimator fulfills the constraint (22). Enforcing (22) in the estimator reduces the number of free parameters and thus improves the statistics. More importantly, propagators that fulfill (21) or (22) have a real-valued spectrum for which additional analyses can be made (see beginning of Sec. III).

The trivial estimator (20) does not fulfill (22), unless the count matrix is, by chance, a symmetric matrix. Maximum likelihood estimation with (22) as a constraint can be achieved by an iterative algorithm first developed in (Bowman_JCP09_Villin, ) and reformulated as Algorithm 1 in (TrendelkampSchroerEtAl_InPrep_revMSM, ). Enforcing (22) is only meaningful if there is a unique stationary distribution, which requires the transition matrix to define a fully connected graph. For this reason, graph algorithms are commonly used to find the largest connected set of states before estimating an MSM with detailed balance (Bowman_JCP09_Villin, ; PrinzEtAl_JCP10_MSM1, ; SchererEtAl_JCTC15_EMMA2, ).

  1. Initialize: $x_{ij} = c_{ij} + c_{ji}$

  2. Iterate until convergence: $x_{ij} \leftarrow \dfrac{c_{ij} + c_{ji}}{c_i / x_i + c_j / x_j}$ with $x_i = \sum_j x_{ij}$ and $c_i = \sum_j c_{ij}$; finally set $\hat{p}_{ij} = x_{ij} / x_i$.

Algorithm 1 Detailed balance with unknown $\boldsymbol{\pi}$ (Bowman_JCP09_Villin, ; TrendelkampSchroerEtAl_InPrep_revMSM, )
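A minimal sketch of a reversible estimator in the spirit of Algorithm 1 is given below; the specific fixed-point update is one common form of the self-consistent reversible MLE iteration and is stated here as an assumption — consult (Bowman_JCP09_Villin, ; TrendelkampSchroerEtAl_InPrep_revMSM, ) for the exact formulation and convergence criteria:

import numpy as np

def reversible_mle(C, n_iter=1000):
    # assumed fixed-point iteration for the reversible MLE (cf. Algorithm 1):
    # x_ij <- (c_ij + c_ji) / (c_i/x_i + c_j/x_j), then p_ij = x_ij / x_i
    c_i = C.sum(axis=1)
    X = C + C.T  # symmetric initialization of the unnormalized flux x_ij
    for _ in range(n_iter):
        x_i = X.sum(axis=1)
        denom = c_i[:, None] / x_i[:, None] + c_i[None, :] / x_i[None, :]
        X = (C + C.T) / denom  # stays symmetric, preserving reversibility
    x_i = X.sum(axis=1)
    return X / x_i[:, None], x_i / x_i.sum()  # transition matrix and pi

C = np.array([[90., 10., 0.], [8., 80., 12.], [0., 15., 85.]])
P, pi = reversible_mle(C)
flux = pi[:, None] * P
print(np.max(np.abs(flux - flux.T)))  # ~0: detailed balance (Eq. 22) holds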

When the equilibrium distribution $\boldsymbol{\pi}$ is known a priori or obtained from another estimator as in (WuMeyRostaNoe_JCP14_dTRAM, ; TrendelkampNoe_PRX15_RareEventKinetics, ; WuEtAL_PNAS16_TRAM, ), the maximum likelihood estimator can be obtained by the iterative Algorithm 2 developed in (TrendelkampSchroerEtAl_InPrep_revMSM, ):

  1. Initialize Lagrange parameters:

  2. Iterate until convergence:

Algorithm 2 Detailed balance with known $\boldsymbol{\pi}$ (TrendelkampSchroerEtAl_InPrep_revMSM, )

As for MSMs without detailed balance, methods have been developed to perform a full Bayesian analysis of MSMs with detailed balance. No method is known to sample independent transition matrices from the likelihood (18) subject to the detailed balance constraints (22), however efficient Markov Chain Monte Carlo methods have been developed and implemented to this end (Noe_JCP08_TSampling, ; BacalladoChoderaPande_JCP09_ReversibleT, ; ChoderaNoe_JCP09_MSMstatisticsII, ; MetznerNoeSchuette_Sampling, ; TrendelkampNoe_JCP13_EfficientSampler, ; TrendelkampSchroerEtAl_InPrep_revMSM, ; SchererEtAl_JCTC15_EMMA2, ).

III.4 Minimal Regression Error

We can understand equation (6) as a regression from $\mathbf{z}_t$ onto $\mathbf{z}_{t+\tau}$, where $\mathbf{P}(\tau)$ contains the unknown coefficients. The regression loss then directly minimizes the error in Eq. (6), and for a given dataset we can define matrices $\mathbf{Z}_0 = \left[\mathbf{z}_1, \dots, \mathbf{z}_T\right]^\top$ and $\mathbf{Z}_\tau = \left[\mathbf{z}_{1+\tau}, \dots, \mathbf{z}_{T+\tau}\right]^\top$, resulting in the loss function:

$$L_{\mathrm{LSQ}} = \left\| \mathbf{Z}_\tau - \mathbf{Z}_0\, \mathbf{P}(\tau) \right\|_F^2 \qquad (23)$$

where $\|\cdot\|_F$ indicates the Frobenius norm, i.e. the sum over all squared elements. The direct solution of the least squares regression problem in (23) is identical to the trivial MSM estimator (20). Thus, the estimator (20) is more general than for MSMs – it can be applied to any representation $\mathbf{z} = \chi(\mathbf{x})$. Dynamic Mode Decomposition (DMD) (SchmidSesterhenn_APS08_DMD, ; RowleyEtAl_JFM09_DMDSpectral, ; Schmid_JFM10_DMD, ; TuEtAl_JCD14_ExactDMD, ) and Extended Dynamic Mode Decomposition (EDMD) (WilliamsKevrekidisRowley_JNS15_EDMD, ) also use the minimal regression error, although they usually consider low-rank approximations of $\mathbf{P}(\tau)$.

In general, the individual dimensions of the encoding $\chi$ may not be orthogonal, and if not, the matrix $\mathbf{C}_{00}$ is not diagonal, but contains off-diagonal elements quantifying the correlation between different dimensions. When there is too much correlation between them, $\mathbf{C}_{00}$ may have some vanishing eigenvalues, i.e. it does not have full rank, causing it to be not invertible or only invertible with large numerical errors. A standard approach in least squares regression is then to apply ridge regularization (Eq. 15). Using (15) in the estimator (20) is called ridge regression.
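A hedged sketch of the resulting estimator (names are illustrative):

import numpy as np

def ridge_propagator(Z, lag, eta=1e-6):
    # P = (C00 + eta I)^{-1} C0t: ridge regression solution of Eq. (23),
    # reducing to the trivial estimator (20) for eta = 0
    Z0, Zt = Z[:-lag], Z[lag:]
    C00 = Z0.T @ Z0 / len(Z0) + eta * np.eye(Z.shape[1])
    C0t = Z0.T @ Zt / len(Z0)
    return np.linalg.solve(C00, C0t)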

III.5 Variational Approach for Dynamics with Detailed Balance (VAC)

Instead of using an optimality principle to estimate $\mathbf{P}(\tau)$ directly, we will now derive a variational principle for the eigenvalues and eigenvectors of $\mathbf{P}(\tau)$, from which we can then easily assemble $\mathbf{P}(\tau)$ itself. At first, this approach seems to be a complication compared to the likelihood or least squares approach, but it is key in making progress on LP2, because the variational principle for $\mathbf{P}(\tau)$ has a fundamental relation to the spectral properties of the transition dynamics in configuration space (3). It also turns out that the variational approach leads to a natural representation of configurations that we can optimize in end-to-end learning frameworks. We first define the balanced propagator:

$$\tilde{\mathbf{P}}(\tau) = \mathbf{C}_{00}^{-1/2}\, \mathbf{C}_{0\tau}\, \mathbf{C}_{\tau\tau}^{-1/2} \qquad (24)$$

In this section, we will assume that detailed balance holds with a unique stationary distribution, Eq. (21). In the statistical limit this means that $\mathbf{C}_{00} = \mathbf{C}_{\tau\tau}$ holds and $\mathbf{C}_{0\tau}$ is a symmetric matrix. Using these constraints, we find the stationary balanced propagator:

$$\tilde{\mathbf{P}}(\tau) = \mathbf{C}_{00}^{-1/2}\, \mathbf{C}_{0\tau}\, \mathbf{C}_{00}^{-1/2} \qquad (25)$$

where we have used Eq. (6). Due to the symmetry of $\mathbf{C}_{0\tau}$, $\tilde{\mathbf{P}}(\tau)$ is also symmetric, and we have the symmetric eigenvalue decomposition (EVD):

$$\tilde{\mathbf{P}}(\tau) = \mathbf{U}\, \boldsymbol{\Lambda}\, \mathbf{U}^\top \qquad (26)$$

with orthogonal eigenvector matrix $\mathbf{U} = \left[\mathbf{u}_1, \dots, \mathbf{u}_m\right]$ and eigenvalue matrix $\boldsymbol{\Lambda} = \mathrm{diag}\left(\lambda_1, \dots, \lambda_m\right)$, ordered as $\lambda_1 \ge \lambda_2 \ge \cdots$. This EVD is related to the EVD of $\mathbf{P}(\tau)$ via a basis transformation:

$$\mathbf{P}(\tau) = \mathbf{C}_{00}^{-1/2}\, \mathbf{U}\, \boldsymbol{\Lambda}\, \mathbf{U}^\top\, \mathbf{C}_{00}^{1/2} \qquad (27)$$

such that $\mathbf{V} = \mathbf{C}_{00}^{-1/2}\, \mathbf{U}$ are the eigenvectors of $\mathbf{P}(\tau)$, their inverse is given by $\mathbf{V}^{-1} = \mathbf{U}^\top\, \mathbf{C}_{00}^{1/2}$, and both propagators share the same eigenvalues. The above construction is simply a change of viewpoint: instead of optimizing the propagator $\mathbf{P}(\tau)$, we might as well optimize its eigenvalues and eigenvectors, and then assemble $\mathbf{P}(\tau)$ via Eq. (27).

Now we seek an optimality principle for eigenvectors and eigenvalues. For symmetric eigenvalue problems such as (26), we have the following variational principle: the dominant eigenvectors are the solution of the maximization problem:

$$\max_{\mathbf{u}_1, \dots, \mathbf{u}_k} \sum_{i=1}^{k} \frac{\mathbf{u}_i^\top\, \tilde{\mathbf{P}}(\tau)\, \mathbf{u}_i}{\mathbf{u}_i^\top \mathbf{u}_i} \quad \text{subject to } \mathbf{u}_i^\top \mathbf{u}_j = 0 \text{ for } i \neq j \qquad (28)$$

This means: we vary a set of vectors $\mathbf{u}_1, \dots, \mathbf{u}_k$, and when the so-called Rayleigh quotients on the right hand side are maximized, we have found the eigenvectors. In this limit, the argument of the Rayleigh quotient equals the sum of eigenvalues $\sum_{i=1}^{k} \lambda_i$. As the argument above can be made for every value of $k$ starting from $k = 1$, we have found each single eigenvalue and eigenvector at the end of the procedure (assuming no degeneracy). This variational principle becomes especially useful for LP2, because using the variational approach of conformation dynamics (VAC (NoeNueske_MMS13_VariationalApproach, ; NueskeEtAl_JCTC14_Variational, )), it can also be shown that the eigenvalues of $\mathbf{P}(\tau)$ are lower bounds to the true eigenvalues of the Markov dynamics in configuration space (Sec. IV.2).

Now we notice that this variational principle can also be understood as a direct correlation function of the data representation. We define the spectral representation as:

$$\boldsymbol{\psi}(\mathbf{x}) = \mathbf{U}^\top\, \mathbf{C}_{00}^{-1/2}\, \chi(\mathbf{x}) \qquad (29)$$

Inserting the estimators for $\mathbf{C}_{00}$ and $\mathbf{C}_{0\tau}$ (Eqs. 12, 13) into Eq. (28), we have:

$$\sum_{i=1}^{k} \lambda_i = \mathrm{tr}\left[\mathbf{C}_{0\tau}^{\boldsymbol{\psi}}\right] = \sum_{i=1}^{k} \widehat{\mathbb{E}}\left[\psi_i\left(\mathbf{x}_t\right) \psi_i\left(\mathbf{x}_{t+\tau}\right)\right]$$

where the superscript $\boldsymbol{\psi}$ denotes the covariance matrices computed in the spectral representation.

The same calculation as above can be performed with powers of the eigenvalues, e.g., $\lambda_i^2$. We therefore get a whole family of VAC optimization principles, but two choices are especially interesting: we define the VAC-1 loss, which is equivalent to the generalized matrix Rayleigh quotient employed in (McGibbonPande_JCP15_CrossValidation, ), as:

$$L_{\mathrm{VAC}\text{-}1} = -\sum_i \lambda_i \qquad (30)$$
$$= -\mathrm{tr}\left[\mathbf{C}_{00}^{-1/2}\, \mathbf{C}_{0\tau}\, \mathbf{C}_{00}^{-1/2}\right] \qquad (31)$$

The VAC-2 loss is the negative squared Frobenius norm, i.e. the negative sum of squared elements of the matrix:

$$L_{\mathrm{VAC}\text{-}2} = -\sum_i \lambda_i^2 \qquad (32)$$
$$= -\left\| \mathbf{C}_{00}^{-1/2}\, \mathbf{C}_{0\tau}\, \mathbf{C}_{00}^{-1/2} \right\|_F^2 \qquad (33)$$

This loss induces a natural spectral embedding where the variance along each dimension equals the squared eigenvalue and geometric distances in this space are related to kinetic distances (NoeClementi_JCTC15_KineticMap, ).
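The VAC scores can be evaluated directly from data with a few lines of linear algebra; the following sketch (helper names are illustrative) symmetrizes the covariances, as appropriate under detailed balance, and scores an encoding by Eqs. (30-33):

import numpy as np

def inv_sqrt(C, eps=1e-10):
    # C^{-1/2} via symmetric eigendecomposition, discarding near-zero modes
    s, U = np.linalg.eigh(C)
    s = np.where(s > eps, s, np.inf)
    return U @ np.diag(s ** -0.5) @ U.T

def vac_scores(Z, lag):
    Z0, Zt = Z[:-lag], Z[lag:]
    T = len(Z0)
    C00 = (Z0.T @ Z0 + Zt.T @ Zt) / (2 * T)  # symmetrized (cf. Algorithm 3)
    C0t = (Z0.T @ Zt + Zt.T @ Z0) / (2 * T)
    W = inv_sqrt(C00)
    lam = np.linalg.eigvalsh(W @ C0t @ W)  # eigenvalues of Eq. (25)
    return lam.sum(), (lam ** 2).sum()  # negative VAC-1 and VAC-2 losses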

III.6 General Variational Approach (VAMP)

The variational approach for Markov processes (VAMP) (WuNoe_VAMP, ) generalizes the above VAC approach to dynamics that do not obey detailed balance and may not even have an equilibrium distribution. We use the balanced propagator (24), which is now no longer symmetric. Without symmetry we cannot use the variational principle for eigenvalues, but there is a similar variational principle for singular values. We therefore use the singular value decomposition (SVD) of the balanced propagator:

$$\tilde{\mathbf{P}}(\tau) = \mathbf{U}\, \boldsymbol{\Sigma}\, \mathbf{V}^\top \qquad (34)$$

Again, this SVD is related to the SVD of $\mathbf{P}(\tau)$ via a basis transformation:

$$\mathbf{P}(\tau) = \mathbf{C}_{00}^{-1/2}\, \mathbf{U}\, \boldsymbol{\Sigma}\, \mathbf{V}^\top\, \mathbf{C}_{\tau\tau}^{1/2} \qquad (35)$$

with left and right singular vector matrices $\mathbf{U}$ and $\mathbf{V}$, and the diagonal matrix of singular values $\boldsymbol{\Sigma} = \mathrm{diag}\left(\sigma_1, \sigma_2, \dots\right)$. Using two sets of search vectors $\left\{\mathbf{u}_i\right\}$ and $\left\{\mathbf{v}_i\right\}$, we can follow the same line of derivation as above and obtain:

$$\max_{\left\{\mathbf{u}_i\right\}, \left\{\mathbf{v}_i\right\}} \sum_{i=1}^{k} \frac{\mathbf{u}_i^\top\, \tilde{\mathbf{P}}(\tau)\, \mathbf{v}_i}{\left\|\mathbf{u}_i\right\| \left\|\mathbf{v}_i\right\|} = \sum_{i=1}^{k} \sigma_i.$$

Now we define again a spectral representation. If we set $\mathbf{C}_{00} = \mathbf{C}_{\tau\tau}$ (equilibrium case) as above, we can define a single spectral representation; otherwise we need two sets of spectral coordinates:

$$\boldsymbol{\psi}_0(\mathbf{x}) = \mathbf{U}^\top\, \mathbf{C}_{00}^{-1/2}\, \chi(\mathbf{x}) \qquad (36)$$
$$\boldsymbol{\psi}_\tau(\mathbf{x}) = \mathbf{V}^\top\, \mathbf{C}_{\tau\tau}^{-1/2}\, \chi(\mathbf{x}) \qquad (37)$$

As in the above procedure, we can define a family of VAMP scores, where the VAMP-1 and VAMP-2 scores are of special interest:

$$L_{\mathrm{VAMP}\text{-}1} = -\sum_i \sigma_i \qquad (38)$$
$$= -\left\| \mathbf{C}_{00}^{-1/2}\, \mathbf{C}_{0\tau}\, \mathbf{C}_{\tau\tau}^{-1/2} \right\|_{\mathrm{nuc}} \qquad (39)$$

The VAMP-2 score is again related to an embedding where geometric distance corresponds to kinetic distance (PaulEtAl_VAMP_dimred, ):

$$L_{\mathrm{VAMP}\text{-}2} = -\sum_i \sigma_i^2 \qquad (40)$$
$$= -\left\| \mathbf{C}_{00}^{-1/2}\, \mathbf{C}_{0\tau}\, \mathbf{C}_{\tau\tau}^{-1/2} \right\|_F^2 \qquad (41)$$
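The corresponding VAMP scores replace the symmetric EVD by an SVD of the balanced propagator (24); a sketch that assumes the inv_sqrt() helper from the VAC sketch above:

import numpy as np

def vamp_scores(Z, lag):
    Z0, Zt = Z[:-lag], Z[lag:]
    T = len(Z0)
    C00, C0t, Ctt = Z0.T @ Z0 / T, Z0.T @ Zt / T, Zt.T @ Zt / T
    P_bal = inv_sqrt(C00) @ C0t @ inv_sqrt(Ctt)  # Eq. (24)
    sigma = np.linalg.svd(P_bal, compute_uv=False)
    return sigma.sum(), (sigma ** 2).sum()  # negative VAMP-1, VAMP-2 losses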

IV Spectral Representation and Variational Approach

Before turning to LP2, we will relate the spectral decompositions in the VAC and VAMP approaches described above to spectral representations of the transition density of the underlying Markov dynamics in configuration space. These two representations are connected by variational principles. Exploiting this principle leads to the result that a meaningful and feasible formulation of the long-time MD learning problem is to seek a spectral representation of the dynamics. This representation may be thought of as a set of collective variables (CVs) pertaining to the long-time MD, or slow CVs (NoeClementi_COSB17_SlowCVs, ).

IV.1 Spectral theory

We can express the transition density (3) as the action of the Markov propagator in continuous space, and by its spectral decomposition (SarichNoeSchuette_MMS09_MSMerror, ; WuNoe_VAMP, ):

$$p\left(\mathbf{x}_{t+\tau}\right) = \int p_\tau\left(\mathbf{x}_{t+\tau} \mid \mathbf{x}_t\right) p\left(\mathbf{x}_t\right) \mathrm{d}\mathbf{x}_t \qquad (42)$$

$$p_\tau\left(\mathbf{x}_{t+\tau} \mid \mathbf{x}_t\right) \approx \sum_{i=1}^{k} \sigma_i\, \phi_i\left(\mathbf{x}_t\right) \psi_i\left(\mathbf{x}_{t+\tau}\right) \qquad (43)$$

The spectral decomposition can be read as follows: The evolution of the probability density can be approximated as the superposition of basis functions $\psi_i$. A second set of functions, $\phi_i$, is required in order to compute the amplitudes of these functions.

In general, Eq. (43) is a singular value decomposition with left and right singular functions $\phi_i$, $\psi_i$ and true singular values $\sigma_i$ (WuNoe_VAMP, ). The approximation then is a low-rank decomposition in which the small singular values are discarded. For the special case that the dynamics are in equilibrium and satisfy detailed balance (21), Eq. (43) is an eigenvalue decomposition with the choices:

$$\sigma_i = \lambda_i(\tau), \qquad \phi_i(\mathbf{x}) = \psi_i(\mathbf{x})\, \mu(\mathbf{x}).$$

Hence Eq. (43) simplifies: we only need one set of functions, the eigenfunctions $\psi_i$. The true eigenvalues $\lambda_i(\tau)$ are real-valued and decay exponentially with the time step $\tau$ (hence Eq. 9). The characteristic decay rates are directly linked to experimental observables probing the processes associated with the corresponding eigenfunctions (BucheteHummer_JPCB08, ; NoeEtAl_PNAS11_Fingerprints, ). The approximation in Eq. (43) is due to truncating all terms whose decay times are shorter than the lag time $\tau$. This approximation improves exponentially with increasing $\tau$.

Spectral theory makes it clear why learning long-time MD via LP1-3 is significantly simpler than trying to model $p_\tau$ directly: for long time steps $\tau$, $p_\tau$ becomes intrinsically low-dimensional, and the problem is thus significantly simplified by learning to approximate the low-dimensional spectral representation for a given $\tau$.

IV.2 Variational principles

The spectral decomposition of the exact dynamics, Eq. (43), is the basis for the usefulness of the variational approaches described in Secs. III.5 and III.6. The missing connection is filled by the following two variational principles. The VAC variational principle (NoeNueske_MMS13_VariationalApproach, ) states that for dynamics obeying detailed balance (21), the eigenvalues $\lambda_i$ of a propagator matrix obtained via any encoding $\chi$ are, in the statistical limit, lower bounds of the true eigenvalues $\lambda_i^{\mathrm{true}}$. The VAMP variational principle is more general, as it does not require detailed balance (21), and applies to the singular values:

$$\sigma_i \le \sigma_i^{\mathrm{true}}.$$

Equality is only achieved for $\lambda_i = \lambda_i^{\mathrm{true}}$ when detailed balance holds, and for $\sigma_i = \sigma_i^{\mathrm{true}}$ when detailed balance does not hold. Specifically, the eigenvectors or the singular vectors of the propagator then approximate the individual eigenfunctions or singular functions (assuming no degeneracy).

As a direct consequence of the variational principles above, the loss function associated with a given embedding is, in the statistical limit, bounded from below by the negative sum of true eigenvalues or singular values:

$$L \ge -\sum_i \lambda_i^{\mathrm{true}} \quad \text{or} \quad L \ge -\sum_i \sigma_i^{\mathrm{true}},$$

and for the minimum possible loss, the encoding $\chi$ has identified the dominant eigenspace or singular space.

IV.3 Spectral representation learning

We have seen in Sec. III (LP1) that a propagator can be equivalently represented by its eigenspectrum or singular spectrum. We can thus define a spectral encoding that attempts to directly learn the encoding to the spectral representation:

$$\mathbf{x} \mapsto \boldsymbol{\psi}(\mathbf{x})$$

with the choices (29) or (36, 37), depending on whether the dynamics obey detailed balance or not. In these representations, the dynamics are linear. After encoding to this representation, the eigenvalues or singular values can be directly estimated from:

$$\hat{\lambda}_i = \widehat{\mathbb{E}}\left[\psi_i\left(\mathbf{x}_t\right) \psi_i\left(\mathbf{x}_{t+\tau}\right)\right] \qquad (44)$$
$$\hat{\sigma}_i = \widehat{\mathbb{E}}\left[\psi_{0,i}\left(\mathbf{x}_t\right) \psi_{\tau,i}\left(\mathbf{x}_{t+\tau}\right)\right] \qquad (45)$$

Based on these results, we can formulate the learning of the spectral representation, or variants of it, as the key approach to solve LP2.

V LP2: Learning Features and Representation

Above we have denoted by $\mathbf{x}$ the full MD system configuration and by $\mathbf{z}$ the latent-space representation in which linear propagators are used. We have seen that there is a special representation, the spectral representation $\boldsymbol{\psi}$. In general there may be a whole pipeline of transformations, e.g.

$$\mathbf{x} \;\longrightarrow\; \tilde{\mathbf{x}} \;\longrightarrow\; \mathbf{z} \;\longrightarrow\; \boldsymbol{\psi}$$

where the first step is a featurization from full configurations to features $\tilde{\mathbf{x}}$, e.g. the selection of solute coordinates or the transformation to internal coordinates such as distances or angles. On the latent space side we may have a handcrafted or a learned spectral representation. Instead of considering these transformations individually, we may construct a direct end-to-end learning framework that performs multiple transformation steps.

To simplify notation, we commit to the following convention: coordinates $\mathbf{x}$ are the input to the learning algorithm, whether these are full Cartesian coordinates of the MD system or coordinates already transformed by some featurization. $\mathbf{z}$ are coordinates in the latent space representation that is the output of LP2, $\mathbf{z} = \chi(\mathbf{x})$. We only explicitly distinguish between different stages within configuration or latent space (e.g. $\mathbf{z}$ vs $\boldsymbol{\psi}$) when this distinction is explicitly needed.

V.1 Suitable and unsuitable loss functions

We first ask: what is the correct formulation for LP2? More specifically: which of the loss functions introduced in LP1 above are compatible with LP2? Looking at the sequence of learning problems

$$\mathbf{x}_t \;\xrightarrow{\;\chi\;}\; \mathbf{z}_t \;\xrightarrow{\;\mathbf{P}(\tau)\;}\; \mathbf{z}_{t+\tau},$$

it is tempting to concatenate them into an end-to-end learning problem and try to solve it by minimizing any of the three losses defined for learning $\mathbf{P}(\tau)$ in Sec. III. However, if we make the encoding $\chi$ sufficiently flexible, we find that only one of the loss functions remains suitable for end-to-end learning, while the two others must be discarded as they have trivial and useless minima:

Likelihood loss: The theoretical minimum of the likelihood loss (19) is equal to $0$ and is achieved if $p_{ij} = 1$ for all transitions observed in the dataset. However, this minimum can be trivially achieved by learning a representation that assigns all microstates to a single state, e.g. the first state:

$$\chi(\mathbf{x}) = (1, 0, \dots, 0)^\top \quad \text{for all } \mathbf{x},$$

for which $p_{11} = 1$. Maximizing the transition matrix likelihood while varying the encoding $\chi$ is therefore meaningless.

Regression loss: A similar problem is encountered with the regression loss. The theoretical minimum of (23) is equal to $0$ and is achieved when $\mathbf{z}_{t+\tau} = \mathbf{P}^\top(\tau)\, \mathbf{z}_t$ holds exactly for all $t$. This can be trivially achieved by learning the uninformative constant representation:

$$\chi(\mathbf{x}) = \mathbf{const} \quad \text{for all } \mathbf{x},$$

combined with $\mathbf{P}(\tau) = \mathbf{Id}$. Minimizing the propagator least squares error while varying the encoding is therefore meaningless. See also the discussion in (OttoRowley_LinearlyRecurrentAutoencoder, ).

Variational loss: The variational loss (VAC or VAMP) does not have trivial minima. The reason is that, according to the variational principles (NoeNueske_MMS13_VariationalApproach, ; WuNoe_VAMP, ), the variational optimum coincides with the approximation of the dominant dynamical components. A trivial encoding such as a constant $\chi$ only identifies a single component and is therefore variationally suboptimal. The variational loss is thus the only choice amongst the losses described in LP1 that can be used to learn both $\chi$ and $\mathbf{P}(\tau)$ in an end-to-end fashion.

V.2 Feature selection

We first address the problem of learning the featurization $\mathbf{x} \rightarrow \tilde{\mathbf{x}}$. We can view this problem as a feature selection problem, i.e. we consider a large potential set of features and ask which of them leads to an optimal model of the long-time MD. In this view, learning the featurization is a model selection problem that can be solved by minimizing the validation loss.

We can solve this problem by employing the variational losses as follows: We compute the spectral representation $\boldsymbol{\psi}$ (or $\boldsymbol{\psi}_0$, $\boldsymbol{\psi}_\tau$) from the training set and then recompute the covariance matrices in this representation on the validation set; these matrices are diagonal in the training set but only approximately diagonal in the validation set. The VAC and VAMP validation scores can then be computed as in Eqs. (30, 32) or Eqs. (38, 40). In (SchererEtAl_VAMPselection, ) we perform VAMP-2 validation in order to select optimal features for describing protein folding, and find that a combination of backbone torsion angles and minimum distances between amino acids performs best.

V.3 Blind Source Separation and TICA

For a given featurization, a widely used linear learning method to obtain the spectral representation is an algorithm first introduced in (Molgedey_94, ) as a method for blind source separation that later became known as the time-lagged independent component analysis (TICA) method (HyvaerinenKarhunenOja_ICA_Book, ; PerezEtAl_JCP13_TICA, ; SchwantesPande_JCTC13_TICA, ), sketched in Algorithm 3. In (PerezEtAl_JCP13_TICA, ), it was shown that the TICA algorithm directly follows from the minimization of the VAC variational loss (31, 33) when approximating the Markov operator eigenfunctions by a linear combination of input features. As a consequence, TICA approximates the eigenvalues and eigenfunctions of Markov operators that obey detailed balance (21), and therefore approximates the slowest relaxation processes of the dynamics.

Algorithm 3 performs a symmetrized estimation of covariance matrices in order to guarantee that the eigenvalue spectrum is real. In most early formulations, one usually symmetrizes only $\mathbf{C}_{0\tau}$, while $\mathbf{C}_{00}$ computed by (12) is automatically symmetric. However, these formulations might lead to eigenvalues larger than 1, which do not correspond to any meaningful relaxation timescale in the present context – this problem is avoided by step 1 in Algorithm 3 (WuEtAl_JCP17_VariationalKoopman, ). Note that symmetrization of $\mathbf{C}_{0\tau}$ introduces an estimation bias if the data is non-stationary, e.g. because short MD trajectories are used that have not been started from the equilibrium distribution. To avoid this problem, please refer to Ref. (WuEtAl_JCP17_VariationalKoopman, ), which introduces the Koopman reweighting procedure to estimate symmetric covariance matrices without this bias, although at the price of an increased estimator variance.

Furthermore, the covariance matrices in step 1 of Algorithm 3 are computed after removing the mean. Removing the mean has the effect of removing the eigenvalue $\lambda = 1$ and the corresponding stationary eigenvector, hence all components returned by Algorithm 3 approximate dynamical relaxation processes with finite relaxation timescales estimated according to Eq. (9).

The TICA propagator can be directly computed as $\mathbf{P}(\tau) = \mathbf{C}_{00}^{-1}\, \mathbf{C}_{0\tau}$, and is a least-squares result in the sense of Sec. III.4. Various extensions of the TICA algorithm have been developed: Kernel formulations of TICA were first presented in machine learning (HarmelingEtAl_NeurComput03_KernelTDSEP, ) and later in other fields (SchwantesPande_JCTC15_kTICA, ; WilliamsEtAl_Arxiv14_KernelEDMD, ). An efficient way to solve TICA for multiple lag times simultaneously was introduced as TDSEP (ZieheMueller_ICANN98_TDSEP, ). Efficient computation of TICA for very large feature sets can be performed with a hierarchical decomposition (PerezNoe_JCTC16_hTICA, ) or a compressed sensing approach (Litzinger_JCTC18_CompressedSensing, ). TICA is closely related to the Dynamic Mode Decomposition (DMD) (SchmidSesterhenn_APS08_DMD, ; RowleyEtAl_JFM09_DMDSpectral, ; Schmid_JFM10_DMD, ; TuEtAl_JCD14_ExactDMD, ) and the Extended Dynamic Mode Decomposition (EDMD) algorithms (WilliamsKevrekidisRowley_JNS15_EDMD, ). DMD approximates the left eigenvectors (“modes”) instead of the Markov operator eigenfunctions described here. EDMD is algorithmically identical to VAC/TICA, but is in practice also used for dynamics that do not fulfill detailed balance (21), although this leads to complex-valued eigenfunctions.

  1. Compute symmetrized mean-free covariance matrices:

    $$\mathbf{C}_{00} = \frac{1}{2T} \sum_t \left[ \left(\mathbf{z}_t - \boldsymbol{\mu}\right)\left(\mathbf{z}_t - \boldsymbol{\mu}\right)^\top + \left(\mathbf{z}_{t+\tau} - \boldsymbol{\mu}\right)\left(\mathbf{z}_{t+\tau} - \boldsymbol{\mu}\right)^\top \right] + \eta \mathbf{I}$$
    $$\mathbf{C}_{0\tau} = \frac{1}{2T} \sum_t \left[ \left(\mathbf{z}_t - \boldsymbol{\mu}\right)\left(\mathbf{z}_{t+\tau} - \boldsymbol{\mu}\right)^\top + \left(\mathbf{z}_{t+\tau} - \boldsymbol{\mu}\right)\left(\mathbf{z}_t - \boldsymbol{\mu}\right)^\top \right]$$

    with means defined analogously as in (10-11), where $\eta$ is an optional ridge parameter.

  2. Compute the largest eigenvalues $\lambda_i$ and eigenvectors $\mathbf{v}_i$ of the generalized eigenvalue problem:

    $$\mathbf{C}_{0\tau}\, \mathbf{v}_i = \lambda_i\, \mathbf{C}_{00}\, \mathbf{v}_i$$

  3. Project to the spectral representation: $\psi_{i,t} = \mathbf{v}_i^\top \left(\mathbf{z}_t - \boldsymbol{\mu}\right)$ for all $i, t$.

  4. Return $\{\boldsymbol{\psi}_t\}$, $\{\lambda_i\}$.

Algorithm 3 TICA($\{\mathbf{z}_t\}$, $\tau$, $\eta$)
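A compact Python rendering of Algorithm 3, as a sketch under the reconstruction above (assumes inv_sqrt() from the VAC sketch):

import numpy as np

def tica(Z, lag, dim, eta=0.0):
    Z0, Zt = Z[:-lag], Z[lag:]
    mu = 0.5 * (Z0.mean(axis=0) + Zt.mean(axis=0))
    Z0, Zt = Z0 - mu, Zt - mu  # remove the mean (drops the stationary mode)
    T = len(Z0)
    C00 = (Z0.T @ Z0 + Zt.T @ Zt) / (2 * T) + eta * np.eye(Z.shape[1])
    C0t = (Z0.T @ Zt + Zt.T @ Z0) / (2 * T)  # symmetrized covariances
    W = inv_sqrt(C00)
    lam, U = np.linalg.eigh(W @ C0t @ W)  # symmetric EVD
    order = np.argsort(lam)[::-1][:dim]  # keep the slowest processes
    V = W @ U[:, order]  # eigenvectors of the propagator
    return (Z - mu) @ V, lam[order]  # spectral representation, eigenvalues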

V.4 TCCA / VAMP

When the dynamics do not satisfy detailed balance (21), e.g., because they are driven by an external force or field, the TICA algorithm is not meaningful, as it will not approximate the true spectral representation even in the limit of infinite data. If detailed balance holds for the dynamical equations, but the data is non-stationary, e.g. because short simulation trajectories started from a non-equilibrium distribution are used, the symmetrized covariance estimation in Algorithm 3 introduces a potentially large bias.

These problems can be avoided by going from TICA to the time-lagged canonical correlation analysis (TCCA, Algorithm 4), which is a direct implementation of the VAMP approach (WuNoe_VAMP, ), i.e. it results from minimizing the VAMP variational loss (39, 41) when approximating the Markov operator singular functions with a linear combination of features. The TCCA algorithm performs a canonical correlation analysis (CCA) applied to time series. TCCA returns two sets of features, approximating the left and right singular functions of the Markov operator, that can be interpreted as the optimal spectral representation to characterize the state of the system “before” and “after” the transition with time step $\tau$. For non-stationary dynamical systems, these representations are valid for particular points in time, $t$ and $t+\tau$ (KoltaiEtAl_Computation18_NonrevMSM, ).

VAMP/TCCA as a method to obtain a low-dimensional spectral representation of the long time MD is discussed in detail in (PaulEtAl_VAMP_dimred, ), where the algorithm is used to identify low-dimensional embeddings of driven dynamical systems, such as an ion channel in an external electrostatic potential.

  1. Compute covariance matrices $\mathbf{C}_{00}$, $\mathbf{C}_{0\tau}$, $\mathbf{C}_{\tau\tau}$ from $\{\mathbf{z}_t\}$, as in Eqs. (12-14) or Eqs. (15, 16).

  2. Perform the truncated SVD:

    $$\tilde{\mathbf{P}}(\tau) = \mathbf{C}_{00}^{-1/2}\, \mathbf{C}_{0\tau}\, \mathbf{C}_{\tau\tau}^{-1/2} \approx \mathbf{U}\, \boldsymbol{\Sigma}\, \mathbf{V}^\top$$

    where $\tilde{\mathbf{P}}(\tau)$ is the propagator for the balanced representations, $\boldsymbol{\Sigma}$ is a diagonal matrix of the first $k$ singular values that approximate the true singular values $\sigma_i$, and $\mathbf{U}$ and $\mathbf{V}$ consist of the corresponding left and right singular vectors respectively.

  3. Compute $\mathbf{U}' = \mathbf{C}_{00}^{-1/2}\, \mathbf{U}$, $\mathbf{V}' = \mathbf{C}_{\tau\tau}^{-1/2}\, \mathbf{V}$.

  4. Project to the spectral representation: $\boldsymbol{\psi}_{0,t} = \mathbf{U}'^\top \mathbf{z}_t$ and $\boldsymbol{\psi}_{\tau,t} = \mathbf{V}'^\top \mathbf{z}_t$ for all $t$.

  5. Return $\{\boldsymbol{\psi}_{0,t}\}$, $\{\boldsymbol{\psi}_{\tau,t}\}$, $\{\sigma_i\}$.

Algorithm 4 TCCA($\{\mathbf{z}_t\}$, $\tau$, $k$)
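And a corresponding sketch of Algorithm 4 (again assuming the inv_sqrt() helper from the VAC sketch):

import numpy as np

def tcca(Z, lag, dim):
    Z0, Zt = Z[:-lag], Z[lag:]
    T = len(Z0)
    C00, C0t, Ctt = Z0.T @ Z0 / T, Z0.T @ Zt / T, Zt.T @ Zt / T
    W0, Wt = inv_sqrt(C00), inv_sqrt(Ctt)
    U, sigma, Vh = np.linalg.svd(W0 @ C0t @ Wt)  # SVD of Eq. (24)
    U, sigma, V = U[:, :dim], sigma[:dim], Vh[:dim].T  # truncate to rank dim
    return Z @ (W0 @ U), Z @ (Wt @ V), sigma  # psi_0, psi_tau, sing. values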

V.5 MSMs based on geometric clustering

For the spectral representations found by TICA and TCCA, a propagator can be computed by means of Eq. (6); however, this propagator is harder to interpret than an MSM propagator whose elements correspond to transition probabilities between states. For this reason, TICA, TCCA and other dimension reduction algorithms are frequently used as a first step towards building an MSM (PerezEtAl_JCP13_TICA, ; SchwantesPande_JCTC13_TICA, ; PerezNoe_JCTC16_hTICA, ). Before TICA and TCCA were introduced into the MD field, MSMs were directly built upon manually constructed features such as distances or torsions, or in other metric spaces that define features only indirectly, such as the pairwise distance of aligned molecules (kabsch:acta-cryst:1976:rmsd, ; theobald:acta-cryst:2005:fast-rmsd, ) – see Ref. (HusicPande_JACS18_MSMReview, ) for an extensive discussion.

In this approach, the trajectories in feature space, $\{\tilde{\mathbf{x}}_t\}$, or in the spectral representation, $\{\boldsymbol{\psi}_t\}$, must be further transformed into a one-hot encoding (17) before the MSM can be estimated via one of the methods described in Sec. III. In other words, the configuration space must be divided into sets that are associated with the MSM states. Typically, this is done with clustering methods that group simulation data by means of geometric similarity. When MSMs were built on manually constructed feature spaces, research on suitable clustering methods was very active (SwopeEtAl_JPCB108_6582, ; NoeHorenkeSchutteSmith_JCP07_Metastability, ; ChoderaEtAl_JCP07, ; AltisStock_JCP07_dPCA, ; BucheteHummer_JPCB08, ; YaoEtAl_JCP09_Mapper, ; Bowman_JCP09_Villin, ; Keller_JCP2010_Clustering, ; JainStock_JCTC12_ProbablePathClustering, ; BowmanPandeNoe_MSMBook, ; SchererEtAl_JCTC15_EMMA2, ; SheongEtAl_JCTC15_APM, ; HusicPande_JCTC17_Ward, ). Since the introduction of TICA and TCCA, which identify a spectral representation that already approximates the leading eigenfunctions, the choice of the clustering method has become less critical, and simple methods such as $k$-means lead to robust results. The final step towards an easily interpretable MSM is coarse-graining of $\mathbf{P}(\tau)$ down to a few states (KubeWeber_JCP07_CoarseGraining, ; YaoHuang_JCP13_Nystrom, ; FackeldeyWeber_WIAS17_GenPCCA, ; GerberHorenko_PNAS17_Categorial, ; HummerSzabo_JPCB15_CoarseGraining, ; OrioliFaccioli_JCP16_CoarseMSM, ; NoeEtAl_PMMHMM_JCP13, ).

The geometric clustering step introduces a different learning problem and objective whose relationship to the original problem of approximating long-time MD is not clear. Therefore, geometric clustering must for now be regarded as a pragmatic approach to construct an MSM from a given embedding, but this approach departs from the avenue of a well-defined machine learning problem.

V.6 VAMPnets

VAMPnets (MardtEtAl_VAMPnets, ) were introduced to replace the complicated and error-prone approach of constructing MSMs by (i) searching for optimal features $\tilde{\mathbf{x}}$, (ii) combining them into a representation $\mathbf{z}$, e.g., via TICA, (iii) clustering it, (iv) estimating the transition matrix $\mathbf{P}(\tau)$, and (v) coarse-graining it, by a single end-to-end learning approach in which all of these steps are replaced by a deep neural network. This is possible because with the VAC and VAMP variational principles, loss functions are available that are suitable to train the sequence of learning problems 1 and 2 simultaneously. A similar architecture is used by EDMD with dictionary learning (LiEtAl_Chaos17_EDMD_DL, ), which avoids the collapse of the regression error to trivial encodings (Sec. V.1) by fixing some features that are not learnable.

VAMPnets contain two network lobes that transform the molecular configurations found at a time delay $\tau$ along the simulation trajectories (Fig. 2a). VAMPnets can be trained with any VAC or VAMP variational loss. In Ref. (MardtEtAl_VAMPnets, ), the VAMP-2 loss (41) was used, which is meaningful for both dynamics with and without detailed balance. When detailed balance (22) is enforced in the propagator obtained by (6), the loss function automatically becomes VAC-2. VAMPnets may use two distinct network lobes to encode the spectral representation of the left and right singular functions (which is important for non-stationary dynamics (KoltaiCiccottiSchuette_JCP16_MetastabilityNonstationary, ; KoltaiEtAl_Computation18_NonrevMSM, )), whereas for MD with a stationary distribution we generally use parameter sharing and have two identical lobes. For dynamics with detailed balance, the VAMPnet output then encodes the space of the dominant Markov operator eigenfunctions (Fig. 3b).

In order to obtain a propagator that can be interpreted as an MSM, (MardtEtAl_VAMPnets, ) chose to use a SoftMax layer as the output layer, thus transforming the spectral representation to a soft indicator function similar to spectral clustering methods such as PCCA+ (DeuflhardWeber_LAA05_PCCA+, ; RoeblitzWeber_AdvDataAnalClassif13_PCCA++, ). As a result, the propagator computed by Eq. (6) is almost a transition matrix. It is guaranteed to be a true transition matrix in the limit where the output layer performs a hard clustering, i.e. a one-hot encoding (17). Since this is not true in general, the VAMPnet propagator may still have negative elements, but these are usually very close to zero. The propagator is still valid for transporting probability distributions in time and can therefore be interpreted as an MSM between metastable states (Fig. 4d).

The results described in (MardtEtAl_VAMPnets, ) (see, e.g., Fig. 3,4) were competitive with and sometimes surpassed the state-of-the-art handcrafted MSM analysis pipeline. Given the rapid improvements of training efficiency and accuracy of deep neural networks seen in a broad range of disciplines, we expect end-to-end learning approaches such as VAMPnets to dominate the field eventually.
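To make the architecture concrete, here is a minimal VAMPnet-like training sketch in PyTorch. It is not the reference implementation of (MardtEtAl_VAMPnets, ); the network sizes, the single shared lobe, and the toy data are assumptions for illustration:

import torch
import torch.nn as nn

def vamp2_loss(chi0, chit, eps=1e-6):
    # negative VAMP-2 score (Eq. 41) of the lobe outputs for a batch of
    # time-lagged pairs (x_t, x_{t+tau})
    chi0 = chi0 - chi0.mean(0)
    chit = chit - chit.mean(0)
    T = chi0.shape[0]
    C00 = chi0.T @ chi0 / T + eps * torch.eye(chi0.shape[1])
    Ctt = chit.T @ chit / T + eps * torch.eye(chit.shape[1])
    C0t = chi0.T @ chit / T

    def inv_sqrt(C):
        s, U = torch.linalg.eigh(C)
        return U @ torch.diag(s.clamp_min(eps) ** -0.5) @ U.T

    P_bal = inv_sqrt(C00) @ C0t @ inv_sqrt(Ctt)
    return -(P_bal ** 2).sum()

# single shared lobe: configurations -> soft state assignments (SoftMax)
lobe = nn.Sequential(nn.Linear(30, 64), nn.ELU(),
                     nn.Linear(64, 6), nn.Softmax(dim=1))
opt = torch.optim.Adam(lobe.parameters(), lr=1e-3)

x0, xt = torch.randn(1000, 30), torch.randn(1000, 30)  # toy lagged pairs
for _ in range(100):
    opt.zero_grad()
    loss = vamp2_loss(lobe(x0), lobe(xt))
    loss.backward()
    opt.step()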

Figure 2: Overview of network structures for learning Markovian dynamical models. a) VAMPnets (MardtEtAl_VAMPnets, ). b) Time-autoencoder with propagator (LuschKutzBrunton_DeepKoopman, ; OttoRowley_LinearlyRecurrentAutoencoder, ). c) Time-autoencoder (WehmeyerNoe_TAE, ). d) Variational time-encoder (Hernandez_PRE18_VTE, ). e) Deep Generative Markov State Models (WuEtAl_NIPS18_DeepGenMSM, ). f) The rewiring trick to compute the propagator for a deep generative MSM.
Figure 3: Figure adapted from (MardtEtAl_VAMPnets, ): Approximation of the slow transition in a bistable potential by a VAMPnet with one input node ($x$) and five output nodes. (a) Potential energy function $U(x)$. (b) Eigenvector of the slowest process calculated by direct numerical approximation (black) and approximated by a VAMPnet with five output nodes (red). Activation of the five Softmax output nodes defines the state membership probabilities (blue). (c) Relaxation timescales computed from the Koopman model using the VAMPnet transformation. (d) Chapman-Kolmogorov test comparing long-time predictions of the Koopman model estimated at lag time $\tau$ and estimates at longer lag times. Panels (c) and (d) report 95% confidence interval error bars over 100 training runs.

Figure 4: Figure adapted from (MardtEtAl_VAMPnets, ): Kinetic model of alanine dipeptide obtained by a VAMPnet with 30 input nodes (the Cartesian coordinates of the heavy atoms) and six output nodes. (a) Structure of alanine dipeptide. The main coordinates describing the slow transitions are the backbone torsion angles $\phi$ and $\psi$; however, the neural network inputs are only the Cartesian coordinates of the heavy atoms. (b) Assignment of all simulated molecular coordinates, plotted as a function of $\phi$ and $\psi$, to the six Softmax output states. Color corresponds to activation of the respective output neuron, indicating the membership probability to the associated metastable state. (c) Relaxation timescales computed from the Koopman model using the neural network transformation. (d) Representation of the transition probability matrix of the Koopman model; transitions with a probability lower than 0.5% have been omitted. (e) Chapman-Kolmogorov test comparing long-time predictions of the Koopman model estimated at lag time $\tau$ and estimates at longer lag times. Panels (c) and (e) report 95% confidence interval error bars over 100 training runs, excluding failed runs.

VI LP3 light: Learn Representation and Decoder

As discussed in Sec. V.1, end-to-end learning combining LP1 and LP2 is limited in its choice of losses applied to the propagator resulting from LP2: variational losses can be used, leading to the methods described in Sec. V, while the likelihood and regression losses are prone to collapse to a trivial representation that does not resolve the long-time dynamical processes.

One approach to “rescue” these losses is to add other loss functions that prevent this collapse to a trivial, uninformative representation. An obvious choice is to add a decoder that is trained with some form of reconstruction loss: the representation should still contain enough information that the input ($\mathbf{x}_t$ or $\mathbf{x}_{t+\tau}$) can be approximately reconstructed. We discuss several approaches based on this principle. Note that if finding the spectral embedding and learning the propagator is the only objective, VAMPnets solve this problem directly, and employing a reconstruction loss unnecessarily adds the difficult inverse problem of reconstructing a high-dimensional variable from a low-dimensional one. However, approximate reconstruction of inputs may be desired in some applications, and is the basis for LP3.

VI.1 Time-Autoencoder

The time-autoencoder (TAE) (WehmeyerNoe_TAE, ) shortcuts LP2 and constructs a direct learning problem between $\mathbf{x}_t$ and $\mathbf{x}_{t+\tau}$ (Fig. 2c):

$$\mathbf{x}_t \;\xrightarrow{\;E\;}\; \mathbf{z}_t \;\xrightarrow{\;D\;}\; \hat{\mathbf{x}}_{t+\tau} \qquad (46)$$

The time-autoencoder is trained by reconstruction loss:

$$L = \sum_t \left\| \mathbf{x}_{t+\tau} - D\left(E\left(\mathbf{x}_t\right)\right) \right\| \qquad (47)$$

where $\|\cdot\|$ is a suitable norm, e.g., the squared 2-norm.

The TAE has an interesting interpretation: If $E$ and $D$ are linear transformations, i.e. encoder and decoder matrices $\mathbf{E}$, $\mathbf{D}$, the minimum of (47) is found by VAMP/TCCA, and for data that is in equilibrium and obeys detailed balance by VAC/TICA (WehmeyerNoe_TAE, ). The reverse interpretation is not true: the solution found by minimizing (47) does not lead to the TICA/TCCA modes, as there is no constraint in the time-autoencoder that orthogonalizes the individual components – they only span the same space. Within this interpretation, the time-autoencoder can be thought of as a nonlinear version of TCCA/TICA in the sense of being able to find a slow but nonlinear spectral representation.
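A minimal time-autoencoder sketch in PyTorch (layer sizes and the toy data are illustrative assumptions, not the architecture of (WehmeyerNoe_TAE, )):

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(30, 64), nn.ELU(), nn.Linear(64, 2))
decoder = nn.Sequential(nn.Linear(2, 64), nn.ELU(), nn.Linear(64, 30))
opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder.parameters()), lr=1e-3)
mse = nn.MSELoss()

x0, xt = torch.randn(1000, 30), torch.randn(1000, 30)  # toy lagged pairs
for _ in range(100):
    opt.zero_grad()
    # reconstruction loss (Eq. 47): encode x_t, decode towards x_{t+tau}
    loss = mse(decoder(encoder(x0)), xt)
    loss.backward()
    opt.step()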

Time-autoencoders have several limitations compared to VAMPnets: (1) Adding the decoder network makes the learning problem more difficult. (2) As indicated in scheme (46), it is not clear what the time step pertaining to the spectral representation is ($t$, $t+\tau$, or something in between), as the time stepping is done throughout the entire network. (3) Since the decoding problem from any given $\mathbf{z}$ to $\mathbf{x}$ is underdetermined but the decoder network is a deterministic function, the decoder can at best reconstruct an average of the configurations consistent with $\mathbf{z}$.