I. Introduction
With the ability to generate extensive and high-throughput molecular dynamics (MD) simulations Shirts and Pande (2000); Phillips et al. (2005); Harvey et al. (2009); Buch et al. (2010); Shaw et al. (2010); Eastman et al. (2013); Grand et al. (2013); Pronk et al. (2013); Doerr et al. (2016), the spontaneous sampling of rare events such as protein folding, conformational changes and protein-ligand association has become accessible Noé et al. (2009); Buch et al. (2011); Bowman et al. (2011); Sadiq et al. (2012); Silva et al. (2014); Shukla et al. (2014); Plattner and Noé (2015); Reubold et al. (2015). Markov state models (MSMs) Schütte et al. (1999); Swope et al. (2004); Noé et al. (2007); Chodera et al. (2007); Bowman et al. (2009); Prinz et al. (2011); Schütte et al. (2011); Bowman et al. (2014), master-equation models Chekmarev et al. (2004); Sriraman et al. (2005); Buchete and Hummer (2008) and closely related approaches Noé et al. (2013); Wu and Noé (2014); Rosta and Hummer (2015); Wu and Noé (2015); Wu et al. (2016) have emerged as powerful frameworks for the analysis of extensive MD simulation data. These methods do not require a very specific a priori definition of relevant reaction coordinates Sarich et al. (2010); Prinz et al. (2011). Furthermore, they allow a large variety of mechanistic information to be extracted Noé et al. (2009); Metzner et al. (2009); Berezhkovskii et al. (2009), and experimental observables to be computed and structurally interpreted Buchete and Hummer (2008); Noé et al. (2011); Bowman et al. (2011); Keller et al. (2012); Zhuang et al. (2011); Lindner et al. (2013). Finally, they provide a direct approximation of the dynamic modes describing the slow conformational changes, which are identical or closely related to the so-called reaction coordinates, depending on which notion of that term is employed Peters and Trout (2006); Peters (2006); Chodera and Pande (2011); Rohrdanz et al. (2011, 2013); McGibbon and Pande (2016).
An especially powerful feature of MSMs and similar approaches is that the transition probability $p_{ij}(\tau)$, i.e. the conditional probability that the trajectory is found in a set $A_j$ a time lag $\tau$ after it has been found in a set $A_i$, can be estimated without bias even if the trajectory is not initiated from the global, but only from a local equilibrium distribution Prinz et al. (2011). Consequently, given $c_{ij}$ transition events between states $i$ and $j$ at lag time $\tau$, the maximum likelihood estimator of the transition probability can be easily shown to be
$\hat{p}_{ij}(\tau) = \frac{c_{ij}}{\sum_k c_{ik}}$ (1)
i.e. the fraction of the number of transitions to state $j$ conditioned on starting in state $i$. This conditionality is a key reason why MSMs have become popular for analyzing short distributed simulations that are started from arbitrary configurations whose relationship to the equilibrium distribution is initially unknown.
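As a concrete illustration, the estimator (1) is just a row normalization of the transition count matrix. A minimal NumPy sketch (the count matrix below is made up for illustration):

```python
import numpy as np

# Hypothetical count matrix: C[i, j] = number of observed transitions
# from state i to state j at lag time tau.
C = np.array([[90, 10, 0],
              [5, 80, 15],
              [0, 20, 80]], dtype=float)

# Maximum likelihood estimator (1): p_ij = c_ij / sum_k c_ik,
# i.e. row normalization of the count matrix.
P = C / C.sum(axis=1, keepdims=True)
```

Each row of P is the conditional distribution over destination states, so the rows sum to one by construction.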
However, when estimating (1) from simulation data, one does not generally obtain a time-reversible estimate, i.e. the stationary probabilities $\pi_i$ of the transition matrix will usually not fulfill the detailed balance equations $\pi_i p_{ij}(\tau) = \pi_j p_{ji}(\tau)$, even if the underlying dynamics are microscopically time-reversible. Compared to a reversible transition matrix, a transition matrix with independent estimates of $p_{ij}(\tau)$ and $p_{ji}(\tau)$ has more free parameters, resulting in larger statistical uncertainties, and may possess complex-valued eigenvalues and eigenvectors, which limits the application of some analysis tools designed for equilibrium molecular processes Trendelkamp-Schroer et al. (2015). Since most molecular dynamics simulations are in thermal equilibrium and thus fulfill at least a generalized microscopic reversibility (Appendix B in Bittracher et al. (2015)), it is desirable to force the estimated transition matrix to fulfill detailed balance, which both reduces statistical uncertainty and enforces a real-valued spectrum Noé (2008); Trendelkamp-Schroer et al. (2015). In early studies, the pragmatic solution to this problem was often to symmetrize the count matrix, i.e. to simply set $c_{ij} \leftarrow c_{ij} + c_{ji}$, which is equivalent to evaluating the simulation trajectory forward and backward, and which leads to a transition matrix with detailed balance when inserted into (1). However, it has been known since at least 2008 that this estimator is strongly biased, and therefore reversible maximum likelihood and Bayesian estimators have been developed Noé (2008); Buchete and Hummer (2008); Bowman et al. (2009); Prinz et al. (2011); Trendelkamp-Schroer and Noé (2013); Trendelkamp-Schroer et al. (2015). These algorithms formulate the estimation problem as an optimization or sampling problem for the transition matrix, constrained to fulfill detailed balance. The idea of these algorithms becomes clear when writing the reversible maximum likelihood estimator in two subsequent steps, as demonstrated in Trendelkamp-Schroer et al. (2015):
Reweighting: Estimate the stationary distribution $\boldsymbol{\pi}$ given all transition counts under the detailed balance condition.

Estimation: Insert $\boldsymbol{\pi}$ and the transition counts $c_{ij}$ into an equation for the reversible transition matrix to obtain a maximum likelihood estimate of $\mathbf{P}(\tau)$.
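These two steps can be sketched with the well-known self-consistent fixed-point iteration for the detailed-balance-constrained maximum likelihood problem; this is a sketch under the assumption of a connected count matrix, with made-up counts, and not the exact implementation of the cited papers:

```python
import numpy as np

def reversible_msm(C, n_iter=1000):
    """Two-step reversible MSM estimation: (1) reweighting, i.e. obtain a
    stationary distribution consistent with detailed balance, and
    (2) estimation of the reversible transition matrix. Uses the classic
    fixed-point iteration x_ij <- (c_ij + c_ji) / (c_i/x_i + c_j/x_j)."""
    C = np.asarray(C, dtype=float)
    c_i = C.sum(axis=1)                  # outgoing counts per state
    X = C + C.T                          # symmetric initial guess
    for _ in range(n_iter):
        x_i = X.sum(axis=1)
        X = (C + C.T) / (c_i[:, None] / x_i[:, None] + c_i[None, :] / x_i[None, :])
    x_i = X.sum(axis=1)
    pi = x_i / x_i.sum()                 # step 1: stationary distribution
    P = X / x_i[:, None]                 # step 2: reversible transition matrix
    return pi, P

C = np.array([[70, 30, 0],
              [10, 80, 10],
              [0, 25, 75]], dtype=float)   # hypothetical transition counts
pi, P = reversible_msm(C)
```

The returned transition matrix fulfills detailed balance, $\pi_i p_{ij} = \pi_j p_{ji}$, at every iterate by construction.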
Recently, a powerful extension to the Markov modeling framework has been introduced: the variational approach (VA) to approximating the slow components of reversible Markov processes Noé and Nüske (2013). Due to its relevance for molecular dynamics, it has also been referred to as the VA for molecular kinetics Perez-Hernandez et al. (2013); Nüske et al. (2014) or the VA for conformation dynamics Perez-Hernandez et al. (2013); Vitalini et al. (2015). It has been known for many years that Markov state models are good approximations to molecular kinetics if their largest eigenvalues and eigenvectors approximate the eigenvalues and eigenfunctions of the Markov operator governing the full phase-space dynamics W. Huisinga and Schuette (2004); Schütte et al. (1999); Sarich et al. (2010); moreover, the first few eigenvalues and eigenvectors are sufficient to compute almost all stationary and kinetic quantities of interest Deuflhard and Weber (2003); Kube and Weber (2007); Noé et al. (2011); Keller et al. (2012); Cameron and Vanden-Eijnden (2014). The VA has generalized this idea beyond discrete states and formulated the approximation problem of molecular kinetics in terms of an approach that is similar to the variational approach in quantum mechanics Noé and Nüske (2013); Perez-Hernandez et al. (2013); Nüske et al. (2014). It is based on the following variational principle: if we are given a set of orthogonal functions of state space and evaluate the autocorrelations of the molecular dynamics in these functions at lag time $\tau$, these will give us lower bounds to the true eigenvalues $\lambda_i(\tau)$ of the Markov operator, equivalent to an underestimate of relaxation timescales and an overestimate of relaxation rates. Only if the functions used are the eigenfunctions themselves will their autocorrelations be maximal and identical to the true eigenvalues $\lambda_i(\tau)$. Note that this statement is true in the correct statistical limit; for finite data, the variational bound can be violated by problems in the estimation procedure. Sources of violation include systematic estimator bias, which is addressed in this work, and overfitting, which can be addressed by cross-validation McGibbon and Pande (2015). This principle allows us to formulate variational optimization algorithms to approximate the eigenvalues and eigenfunctions of the Markov operator. The linear VA proceeds as follows:

Fix an arbitrary basis set $\chi_1(\mathbf{x}), \dots, \chi_m(\mathbf{x})$ and evaluate the values of all basis functions for all sampled MD configurations $\mathbf{x}_t$.

Estimate two covariance matrices, the instantaneous covariance matrix $\mathbf{C}(0)$ and the time-lagged covariance matrix $\mathbf{C}(\tau)$, from the basis-set-transformed data.

Solve a generalized eigenvalue problem involving both $\mathbf{C}(0)$ and $\mathbf{C}(\tau)$, and obtain estimates for the eigenvalues $\lambda_i(\tau)$ and the optimal representation of the eigenfunctions as a linear combination of basis functions.
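The three steps above can be sketched in a few lines of Python; the trajectory, basis set (Gaussian bumps) and all parameters here are invented for illustration:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
traj = 0.05 * np.cumsum(rng.standard_normal(10000))  # toy 1D trajectory

# Step 1: fix a basis set and evaluate it on all sampled configurations.
centers = np.linspace(traj.min(), traj.max(), 5)
def chi(x):
    return np.exp(-(x[:, None] - centers[None, :]) ** 2)

# Step 2: estimate covariance and time-lagged covariance matrices.
tau = 10
X, Y = chi(traj[:-tau]), chi(traj[tau:])
C0 = X.T @ X / len(X)
Ct = X.T @ Y / len(X)

# Step 3: solve the generalized eigenvalue problem C(tau) b = lambda C(0) b.
lam, B = eig(Ct, C0)
order = np.argsort(-lam.real)
lam, B = lam[order].real, B[:, order].real  # eigenvalues, expansion coefficients
```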
Note that the functions $\chi_i$ can be arbitrary nonlinear functions of the original coordinates, which allows complex nonlinear dynamics to be encoded even within this linear optimization framework. The variational approach has spawned a variety of follow-up works. For example, it has been shown that the algorithm called blind source separation, time-lagged or time-structure based independent component analysis (TICA), established in signal processing and machine learning Molgedey and Schuster (1994); Ziehe and Müller (1998); Aapo Hyvärinen (2001), is a special case of the VA Perez-Hernandez et al. (2013). TICA is now widely used to reduce the dimensionality of MD data sets to a few slow collective coordinates, in which MSMs and other kinetic models can be built efficiently Perez-Hernandez et al. (2013); Schwantes and Pande (2013); Naritomi and Fuchigami (2011). The VA has been used to generate and improve guesses of collective reaction coordinates Boninsegna et al. (2015); McGibbon and Pande (2016). A VA-based metric has been defined which transforms the simulation data into a space in which Euclidean distance corresponds to kinetic distance Noé and Clementi (2015); Noé et al. (2016). The importance of meaningful basis sets has been discussed, and a basis for peptide dynamics has been proposed in Vitalini et al. (2015). Kernel versions of TICA have been proposed Harmeling et al. (2003); Schwantes and Pande (2015), and nonlinear deep versions have been proposed based on tensor approximations of products of simple basis functions Nüske et al. (2016). Finally, the variational principle ranks kinetic models by the magnitude of their largest eigenvalues or derived quantities Noé and Nüske (2013), which can be used to select hyperparameters such as the basis functions $\chi_i$ or the number of states in a Markov state model McGibbon and Pande (2015); Scherer et al. (2015).
Despite the popularity of the VA and TICA, their estimation from MD data is still in the stage that MSMs were in about a decade ago: a direct estimation of covariance matrices will generally provide a nonsymmetric matrix $\mathbf{C}(\tau)$ and hence complex eigenvalue/eigenfunction estimates that are not consistent with reversible molecular dynamics. In order to avoid this problem, the current state of the art is to enforce the symmetrization of covariance matrices directly Perez-Hernandez et al. (2013); Schwantes and Pande (2013, 2015). Lacking a better estimator, this approach is currently also used with short MD simulations from distributed computing, despite the fact that the resulting timescales and eigenfunctions may be biased and misleading. This problem is addressed in the present paper.
The algorithm of the linear VA Noé and Nüske (2013); Nüske et al. (2014) is identical to the more recently proposed extended dynamic mode decomposition (EDMD) Williams et al. (2015), which is based on dynamic mode decomposition (DMD) Schmid and Sesterhenn (2008); Rowley et al. (2009); Schmid (2010); Tu et al. (2014). However, while the VA has been derived for reversible and stationary dynamics, EDMD has been developed in the context of dynamical systems and fluid mechanics, where data are often nonreversible and nonstationary. Mathematically, these methods are based on the eigenvalue decomposition of the Koopman operator, which provides a theoretical description of nonstationary and nonequilibrium dynamics Mezić (2005); Klus et al. (2015). In the present paper, this theory is used to formulate robust equilibrium estimators for covariance matrices, even if the simulation data are generated in many short simulations that are not distributed according to equilibrium. Based on these estimates, a Koopman model is computed: a matrix model that approximates the dynamics of the Koopman operator on the basis functions used. Koopman models are proper generalizations of Markov state models: they do not rely on a state space clustering, but can still be used to propagate densities in time, and their eigenvalues and eigenvectors provide estimates of the equilibrium relaxation timescales and slow collective variables. We propose a reversible Koopman model estimator that proceeds analogously to reversible MSM estimation:

Reweighting: Estimate a reweighting vector $\mathbf{u}$, with one entry for each basis function, given the covariance matrices $\hat{\mathbf{C}}(0)$ and $\hat{\mathbf{C}}(\tau)$ that have been empirically estimated without symmetrization.

Estimation: Insert $\mathbf{u}$, $\hat{\mathbf{C}}(0)$ and $\hat{\mathbf{C}}(\tau)$ into equations for the symmetric equilibrium estimates of $\mathbf{C}(0)$ and $\mathbf{C}(\tau)$. Then compute a Koopman model, and from its eigenvalue decomposition the relaxation timescales and slow collective variables.
In addition to this result, the reweighting vector $\mathbf{u}$ allows us to approximate any equilibrium estimate, in terms of a linear combination of our basis functions, from off-equilibrium data. The estimator is asymptotically unbiased in the limits of many short trajectories and an accurate basis set.
The new methods are illustrated on toy examples with stochastic dynamics and a benchmark proteinligand binding problem. The methods described in this article are implemented in PyEMMA version 2.3 or later (www.pyemma.org) Scherer et al. (2015).
II. Variational approach of molecular kinetics
The VA is an algorithmic framework to approximate the slow components of molecular dynamics (also called conformation dynamics or molecular kinetics) from data. It consists of two main ingredients: (1) a variational principle that provides a computable score for a model of the slow components, and (2) an algorithm based on this variational principle that estimates the slow components from simulation data.
II.1 Variational principle of conformation dynamics
Simulations of molecular dynamics (MD) can be modeled as realizations of an ergodic and time-reversible Markov process $\{\mathbf{x}_t\}$ in a phase space $\Omega$. The state $\mathbf{x}_t \in \Omega$ contains all variables that determine the conformational progression after time $t$ (e.g., positions and velocities of all atoms). The time evolution of the probability distribution $\rho_t(\mathbf{x})$ of the molecular ensemble can be decomposed into a set of relaxation processes as

$\rho_t(\mathbf{x}) = \mu(\mathbf{x}) + \sum_{i=2}^{\infty} \mathrm{e}^{-t/t_i} \left\langle \psi_i, \frac{\rho_0}{\mu} \right\rangle \mu(\mathbf{x})\, \psi_i(\mathbf{x})$ (2)

where $\mu(\mathbf{x})$ is the stationary (Boltzmann) density of the system, $t_i$ are relaxation timescales sorted in decreasing order, $\psi_i$ are eigenfunctions of the backward operator or Koopman operator of $\{\mathbf{x}_t\}$ with eigenvalues $\lambda_i(\tau) = \mathrm{e}^{-\tau/t_i}$ (see Section III.1), and the inner product is defined as $\langle f, g \rangle = \int_\Omega f(\mathbf{x})\, g(\mathbf{x})\, \mu(\mathbf{x})\, \mathrm{d}\mathbf{x}$. The first spectral component is given by the constant eigenfunction $\psi_1 \equiv 1$ and the infinite timescale $t_1 = \infty$, corresponding to the stationary state of the system. According to this decomposition, the dominant eigenfunctions can be interpreted as slow collective variables, which characterize the behavior of a molecular system on large time scales.
The eigenvalues and eigenfunctions can also be formulated by the following variational principle Noé and Nüske (2013); Nüske et al. (2014): for any $k$, the first $k$ eigenfunctions are the solution of the following optimization problem

$\max_{f_1, \dots, f_k} \sum_{i=1}^{k} \mathbb{E}_{\mu}\left[ f_i(\mathbf{x}_t)\, f_i(\mathbf{x}_{t+\tau}) \right] \quad \text{s.t.} \quad \langle f_i, f_j \rangle = \delta_{ij}$ (3)

where $\mathbb{E}_{\mu}[\cdot]$ denotes the expected value with $\mathbf{x}_t$ sampled from the stationary density $\mu$, and the maximum value is the generalized Rayleigh quotient, or Rayleigh trace, $\sum_{i=1}^{k} \lambda_i(\tau)$. Therefore, for every other set of functions that aims at approximating the true eigenfunctions, the eigenvalues will be underestimated, and we can use this variational principle in order to search for the best approximation of eigenfunctions and eigenvalues.
II.2 Linear variational approach
In this paper, we focus on algorithms that approximate the eigenfunctions in the spectral decomposition (2) by a linear combination of real-valued basis functions, also called feature functions, $\chi_1(\mathbf{x}), \dots, \chi_m(\mathbf{x})$. Thus, we make the Ansatz:

$\hat{\psi}_i(\mathbf{x}) = \sum_{j=1}^{m} b_{ji}\, \chi_j(\mathbf{x})$ (4)

with expansion coefficients $b_{ji}$. Note that the functions $\chi_j$ are generally nonlinear in $\mathbf{x}$; however, we will call the resulting algorithm a linear VA because it is linear in the variables $b_{ji}$.
Linear VA algorithm: By solving (3) with the Ansatz (4), we obtain the linear VA to optimally approximate eigenvalues and eigenfunctions Noé and Nüske (2013); Nüske et al. (2014). We first estimate the equilibrium covariance and time-lagged covariance matrices of the basis functions:

$C_{ij}(0) = \mathbb{E}_{\mu}\left[ \chi_i(\mathbf{x}_t)\, \chi_j(\mathbf{x}_t) \right]$ (5)
$C_{ij}(\tau) = \mathbb{E}_{\mu}\left[ \chi_i(\mathbf{x}_t)\, \chi_j(\mathbf{x}_{t+\tau}) \right]$ (6)

then the solution of the generalized eigenvalue problem

$\mathbf{C}(\tau)\, \mathbf{b}_i = \hat{\lambda}_i(\tau)\, \mathbf{C}(0)\, \mathbf{b}_i$ (7)

provides the optimal approximation of the eigenvalues $\hat{\lambda}_i(\tau)$ and the expansion coefficients $\mathbf{b}_i = (b_{1i}, \dots, b_{mi})^{\top}$. Inserting these coefficients into (4) results in the approximated eigenfunctions Noé and Nüske (2013); Nüske et al. (2014). An important observation is that (7) is formally equivalent to the eigenvalue decomposition of $\mathbf{K} = \mathbf{C}(0)^{-1}\mathbf{C}(\tau)$ if $\mathbf{C}(0)$ has full rank. $\mathbf{K}$ is the Koopman model that is the central object of the present paper and will provide the basis for constructing equilibrium estimates from short simulations.
The linear VA algorithm provides a general framework for the finitedimensional approximation of spectral components of conformation dynamics, and two widely used analysis methods, timelagged independent component analysis (TICA) Molgedey and Schuster (1994); PerezHernandez et al. (2013); Schwantes and Pande (2013) and Markov state models (MSMs) Prinz et al. (2011), are both special cases of the linear VA, see also Fig. 1.
TICA: In TICA, the basis functions are the mean-free molecular coordinates, $\chi_i(\mathbf{x}) = x_i - \langle x_i \rangle$, where $\langle x_i \rangle$ are the means. In particular, the TICA basis set is linear in the input coordinates. The resulting estimates of eigenfunctions can then be viewed as a set of linearly independent components (ICs) with autocorrelations $\hat{\lambda}_i(\tau)$. The dominant ICs can be used to reduce the dimension of the molecular system. Notice that using mean-free coordinates is equivalent to removing the stationary spectral component $\psi_1 \equiv 1$; thus TICA will only contain the dynamical components, starting from $\psi_2$.
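A minimal TICA sketch on synthetic 2D data, using the commonly applied symmetrized covariance estimate discussed below (all data and parameters are invented for illustration):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
traj = 0.02 * np.cumsum(rng.standard_normal((5000, 2)), axis=0)  # toy data

tau = 5
Z = traj - traj.mean(axis=0)                # mean-free coordinates
X, Y = Z[:-tau], Z[tau:]
C0 = X.T @ X / len(X)
Ct = (X.T @ Y + Y.T @ X) / (2 * len(X))     # symmetrized time-lagged covariance

lam, B = eigh(Ct, C0)                       # real-valued generalized eigenproblem
lam, B = lam[::-1], B[:, ::-1]              # sort ICs by autocorrelation
ics = Z @ B                                 # independent components
```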
In recent MD papers, the term TICA has also been used for the application of Eqs. (5-7) to trajectories of features, such as distances, contact maps or angles, i.e. after the transformation $\mathbf{x}_t \rightarrow \chi(\mathbf{x}_t)$ has been applied Perez-Hernandez et al. (2013); Schwantes and Pande (2013). In this paper we will avoid using the term TICA when the VA is meant, because a main result here is that in order to obtain a good variational approximation of the spectral components in (2), it is necessary to employ specific estimation algorithms for (5-6) that require the stationary spectral component to be kept.
MSM: The MSM is a special case of the VA obtained by using indicator functions as a basis set:

$\chi_i(\mathbf{x}) = \mathbf{1}_{A_i}(\mathbf{x}) = \begin{cases} 1, & \mathbf{x} \in A_i \\ 0, & \mathbf{x} \notin A_i \end{cases}$ (8)

where $A_1, \dots, A_m$ form a partition of the phase space $\Omega$. With such basis functions, the correlation matrix $\mathbf{C}(0)$ is a diagonal matrix with $C_{ii}(0) = \pi_i$ being the equilibrium probability of $A_i$, and the $ij$-th element of the time-lagged correlation matrix, $C_{ij}(\tau)$, is equal to the equilibrium frequency of the transition from $A_i$ to $A_j$. Thus, a piecewise-constant approximation of the eigenfunctions

$\hat{\psi}_i(\mathbf{x}) = \sum_{j=1}^{m} b_{ji}\, \mathbf{1}_{A_j}(\mathbf{x})$ (9)

and the corresponding eigenvalues are given by the generalized eigenvalue problem (7). When the equilibrium probability of each $A_i$ is positive, this problem can be equivalently transformed into a simple eigenvalue problem

$\mathbf{P}(\tau)\, \mathbf{b}_i = \hat{\lambda}_i(\tau)\, \mathbf{b}_i$ (10)

Here, $\mathbf{P}(\tau) = \mathbf{C}(0)^{-1}\mathbf{C}(\tau)$ is the transition matrix of the MSM with $p_{ij}(\tau) = C_{ij}(\tau)/\pi_i$, and it is the Koopman model for the basis set (8).
The viewpoint that MSMs can be viewed as an approximation of molecular kinetics via a projection of eigenfunctions onto a basis of characteristic functions has been proposed earlier Sarich et al. (2010).

II.3 Estimation of covariance matrices
The remaining problem is how to obtain estimates of $\mathbf{C}(0)$ and $\mathbf{C}(\tau)$. For convenience of notation, we take all sampled coordinates $\mathbf{x}_1, \dots, \mathbf{x}_N$ of a trajectory, evaluate their basis function values $\chi_i(\mathbf{x}_t)$, and define the following two matrices of size $(N-\tau) \times m$:

$\mathbf{X} = \begin{bmatrix} \chi_1(\mathbf{x}_1) & \cdots & \chi_m(\mathbf{x}_1) \\ \vdots & & \vdots \\ \chi_1(\mathbf{x}_{N-\tau}) & \cdots & \chi_m(\mathbf{x}_{N-\tau}) \end{bmatrix}, \quad \mathbf{Y} = \begin{bmatrix} \chi_1(\mathbf{x}_{1+\tau}) & \cdots & \chi_m(\mathbf{x}_{1+\tau}) \\ \vdots & & \vdots \\ \chi_1(\mathbf{x}_N) & \cdots & \chi_m(\mathbf{x}_N) \end{bmatrix}$ (11)

where each row corresponds to one stored time step. Thus, $\mathbf{X}$ contains the first $N-\tau$ time steps and $\mathbf{Y}$ contains the last $N-\tau$ time steps. Assuming that $\{\mathbf{x}_t\}$ is ergodic, $\mathbf{C}(0)$ and $\mathbf{C}(\tau)$ can be directly estimated by time averages over the trajectory:

$\hat{\mathbf{C}}(0) = \frac{1}{N-\tau}\, \mathbf{X}^{\top}\mathbf{X}$ (12)
$\hat{\mathbf{C}}(\tau) = \frac{1}{N-\tau}\, \mathbf{X}^{\top}\mathbf{Y}$ (13)
Furthermore, multiple trajectories are trivially handled by adding up their contributions, e.g. $\hat{\mathbf{C}}(0) \propto \sum_k \mathbf{X}_k^{\top}\mathbf{X}_k$, etc. For all covariance estimates in this paper we can employ the shrinkage approach Ledoit and Wolf (2004); Schäfer et al. (2005) in order to reduce the sensitivity of estimated covariances to statistical noise James and Lee (2014) and improve the robustness of the eigenvalues and eigenvectors computed from (12-13).
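A sketch of the direct estimates (12-13) with several short trajectories combined by summing their contributions; all data are synthetic and the identity basis is used for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
tau, m = 3, 2
# twenty short synthetic trajectories of 200 steps each
trajs = [np.cumsum(rng.standard_normal((200, m)), axis=0) for _ in range(20)]

C0 = np.zeros((m, m))
Ct = np.zeros((m, m))
n = 0
for traj in trajs:
    X, Y = traj[:-tau], traj[tau:]   # chi(x_t) and chi(x_{t+tau}), Eq. (11)
    C0 += X.T @ X                    # add up each trajectory's contribution
    Ct += X.T @ Y
    n += len(X)
C0 /= n                              # direct estimate of C(0), Eq. (12)
Ct /= n                              # direct estimate of C(tau), Eq. (13)
```

Note that C0 is symmetric by construction, while the direct estimate Ct generally is not.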
Due to statistical noise or nonequilibrium starting points, the time-lagged covariance matrix $\hat{\mathbf{C}}(\tau)$ estimated by this method is generally not symmetric, even if the underlying dynamics are time-reversible. Thus, the eigenvalue problem (7) may yield complex eigenvalues and eigenvectors, which are undesirable in the analysis of statistically reversible MD simulations. The relaxation timescales can still be computed from complex eigenvalues as $t_i = -\tau / \ln|\hat{\lambda}_i(\tau)|$ by using the norm of the eigenvalues, but it is a priori unclear how to perform component analysis and dimension reduction as in TICA based on complex eigenfunctions.
In order to avoid the problem of complex estimates, a symmetric estimator is often used in applications, which approximates $\mathbf{C}(0)$ and $\mathbf{C}(\tau)$ by empirically averaging over all transition pairs $(\mathbf{x}_t, \mathbf{x}_{t+\tau})$ and their reverses $(\mathbf{x}_{t+\tau}, \mathbf{x}_t)$, which is equivalent to averaging the time-forward and the time-inverted trajectory:

$\hat{\mathbf{C}}_{\mathrm{sym}}(0) = \frac{1}{2(N-\tau)} \left( \mathbf{X}^{\top}\mathbf{X} + \mathbf{Y}^{\top}\mathbf{Y} \right)$ (14)
$\hat{\mathbf{C}}_{\mathrm{sym}}(\tau) = \frac{1}{2(N-\tau)} \left( \mathbf{X}^{\top}\mathbf{Y} + \mathbf{Y}^{\top}\mathbf{X} \right)$ (15)

so that the estimate of $\mathbf{C}(\tau)$ is always symmetric and the generalized eigenvalue problem (7) has real-valued solutions.
For equilibrium simulations, i.e. if the simulation starting points are sampled from the global equilibrium, or if the simulations are much longer than the slowest relaxation times, Eqs. (14) and (15) are unbiased estimates of $\mathbf{C}(0)$ and $\mathbf{C}(\tau)$, and can also be derived from maximum likelihood estimation by assuming a multivariate normal distribution of the basis-transformed data Schwantes and Pande (2015). The major difficulty of this approach arises from nonequilibrium data, i.e. simulations whose starting points are not drawn from the equilibrium distribution and that are not long enough to reach that equilibrium during the simulation. In this situation, (14) and (15) do not converge to the true covariance matrices in the limit of infinitely many trajectories, and may thus provide biased estimates of the eigenvalues and eigenfunctions or independent components.

The difference between the direct and the symmetric estimation methods of covariance matrices becomes clear when considering the MSM special case. Since the transition matrix is $\mathbf{P}(\tau) = \mathbf{C}(0)^{-1}\mathbf{C}(\tau)$, as shown in Section II.2, the transition matrices of MSMs given by the two estimators are
$\hat{p}_{ij}(\tau) = \frac{c_{ij}}{\sum_k c_{ik}}$ (direct estimation) (16)
$\hat{p}_{ij}(\tau) = \frac{c_{ij} + c_{ji}}{\sum_k \left( c_{ik} + c_{ki} \right)}$ (symmetric estimation) (17)
respectively. If the transition dynamics between discrete states are exactly Markovian, the direct estimator converges to the true transition matrix in the large-data limit even for nonequilibrium or nonreversible simulations, whereas the symmetric estimator does not. However, the direct estimator may give a nonreversible transition matrix with complex eigenvalues, which is why the symmetric estimator was frequently used before 2008, until it was replaced by reversible maximum likelihood and Bayesian estimators Noé (2008); Buchete and Hummer (2008); Bowman et al. (2009); Prinz et al. (2011); Trendelkamp-Schroer and Noé (2013); Trendelkamp-Schroer et al. (2015). How do we resolve this problem in the more general case of variational estimates with arbitrary basis functions $\chi_i$? Below, we will introduce a solution based on Koopman operator theory and dynamic mode decomposition (DMD).
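The difference between (16) and (17) is easy to demonstrate numerically with a count matrix collected off-equilibrium (all numbers are made up):

```python
import numpy as np

# Counts from hypothetical short trajectories, most of which were
# started in state 0, so the empirical distribution is off-equilibrium.
C = np.array([[20, 80],
              [5, 95]], dtype=float)

P_direct = C / C.sum(axis=1, keepdims=True)     # direct estimator, Eq. (16)
Csym = C + C.T                                   # symmetrized counts
P_sym = Csym / Csym.sum(axis=1, keepdims=True)   # symmetric estimator, Eq. (17)

# P_direct is insensitive to the starting distribution, while P_sym
# differs from it whenever the data are not sampled from equilibrium.
```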
III. Koopman models of equilibrium kinetics
A method equivalent to the linear VA algorithm described in Noé and Nüske (2013); Nüske et al. (2014) and summarized in Sec. II.2 has more recently been introduced in the fluid mechanics field under the name extended dynamic mode decomposition (EDMD) Williams et al. (2015). EDMD also projects the data onto basis functions, and approximates the same eigenvalues and eigenfunctions as the linear VA. EDMD was developed independently of the VA and is based on dynamic mode decomposition (DMD) Schmid and Sesterhenn (2008); Schmid (2010); Tu et al. (2014). EDMD and DMD approximate components of the Koopman operator, which is a generalization of the backward operator usually used in molecular kinetics Rowley et al. (2009); Klus et al. (2015).
The equivalence between the VA and EDMD is striking, because the EDMD algorithm has been derived in a setting where the dynamics are not reversible and may not even possess a unique stationary distribution. In practice, the EDMD algorithm effectively performs nonreversible empirical estimates of the covariances (12-13) and is used in nonequilibrium situations. EDMD is thus used in a regime for which the variational principle does not hold, and yet it makes a useful approximation to the eigenvalues and eigenfunctions of dynamical operators Williams et al. (2015). This has two important consequences:

The linear VA is also usable for systems or data that are not reversible and not in equilibrium.

We can use ideas from EDMD and Koopman operator theory to obtain equilibrium and reversible estimates from nonequilibrium, nonreversible data.
In this section we will develop the second point and construct estimators for equilibrium expectations from nonequilibrium data. This will allow us to estimate relaxation timescales, slow collective variables and equilibrium expectations using arbitrary basis sets and without requiring a cluster discretization as used in MSMs.
III.1 Koopman operator description of conformation dynamics
According to Koopman operator theory Mezić (2005), the dynamics of a Markov process $\{\mathbf{x}_t\}$ can be fully described by an integral operator $\mathcal{K}_\tau$, called the Koopman operator, which maps an observable quantity $f(\mathbf{x}_t)$ at time $t$ to its expectation at time $t+\tau$ as

$\mathcal{K}_\tau f(\mathbf{x}) = \mathbb{E}\left[ f(\mathbf{x}_{t+\tau}) \mid \mathbf{x}_t = \mathbf{x} \right]$ (18)

If the dynamics fulfill detailed balance, the spectral components discussed in Section II are in fact the eigenvalues and eigenfunctions of the Koopman operator:

$\mathcal{K}_\tau \psi_i = \lambda_i(\tau)\, \psi_i$ (19)
Notice that the operator description and decomposition of molecular kinetics can also be equivalently provided by the forward and backward operators, which propagate ensemble densities instead of observables Prinz et al. (2011). We exploit the Koopman operator in this paper because it is the only one of these operators that can in general be reliably approximated from nonequilibrium data. See Section III.2 and Appendix A for a more detailed analysis.
Eq. (19) suggests the following strategy for spectral estimation: first approximate the Koopman operator from data, and then extract the spectral components from the estimated operator.
III.2 Using the linear VA for nonequilibrium and nonreversible data: Extended dynamic mode decomposition
Like in the VA, we can also approximate the Koopman operator by its projection $\hat{\mathcal{K}}_\tau$ onto the subspace spanned by the basis functions $\chi_1, \dots, \chi_m$, which satisfies
(20)
for any function in that subspace. As the Koopman operator is linear, even if the dynamics are nonlinear, it can be approximated by a matrix $\mathbf{K} \in \mathbb{R}^{m \times m}$ as

$\mathcal{K}_\tau\, \boldsymbol{\chi}(\mathbf{x}) \approx \mathbf{K}^{\top} \boldsymbol{\chi}(\mathbf{x})$ (21)

with

$\boldsymbol{\chi}(\mathbf{x}) = \left( \chi_1(\mathbf{x}), \dots, \chi_m(\mathbf{x}) \right)^{\top}$ (22)

representing a finite-dimensional approximation of $\mathcal{K}_\tau$. After a few algebraic steps Williams et al. (2015), it can be shown that the eigenfunctions of $\hat{\mathcal{K}}_\tau$ also have the form $\hat{\psi}_i = \mathbf{b}_i^{\top} \boldsymbol{\chi}$, and the eigenvalues and eigenfunctions of $\hat{\mathcal{K}}_\tau$ can be calculated from the eigenvalue problem

$\mathbf{K}\, \mathbf{b}_i = \hat{\lambda}_i(\tau)\, \mathbf{b}_i$ (23)

where the definitions of $\hat{\lambda}_i(\tau)$ and $\mathbf{b}_i$ are the same as in (7). Considering that

$\boldsymbol{\chi}(\mathbf{x}_{t+\tau}) \approx \mathbf{K}^{\top} \boldsymbol{\chi}(\mathbf{x}_t)$ (24)

for each transition pair $(\mathbf{x}_t, \mathbf{x}_{t+\tau})$ in the simulations, the matrix $\mathbf{K}$ can be determined by minimizing the mean square error between $\mathbf{K}^{\top}\boldsymbol{\chi}(\mathbf{x}_t)$ and $\boldsymbol{\chi}(\mathbf{x}_{t+\tau})$:

$\mathbf{K} = \arg\min_{\tilde{\mathbf{K}}} \left\| \mathbf{X}\tilde{\mathbf{K}} - \mathbf{Y} \right\|_F^2 = \hat{\mathbf{C}}(0)^{-1}\, \hat{\mathbf{C}}(\tau)$ (25)

where the covariance matrices are given by their direct estimates (12-13), and $\|\cdot\|_F$ denotes the Frobenius norm of matrices.
Based on the above considerations, it makes sense to call the matrix $\mathbf{K}$ together with the basis set $\chi_1, \dots, \chi_m$ a Koopman model of the molecular kinetics. The Koopman model is a generalization of an MSM, as it can be constructed from any basis set, not only from characteristic basis sets (Eq. (8)). Nonetheless, it shares the main features of an MSM: it can be used to propagate densities according to (20), its eigenvalues can be used to compute relaxation timescales, and its eigenvectors can be used to compute slow collective variables. The following algorithm computes a nonreversible Koopman model from data. This algorithm is equivalent to the linear VA and EDMD. If the feature trajectories are mean-free, it is also equivalent to TICA in feature space:
Algorithm 1: Nonreversible Koopman estimation

Basis-transform the input coordinates according to (11).

Compute $\hat{\mathbf{C}}(0)$ and $\hat{\mathbf{C}}(\tau)$ according to (12-13).

Compute the Koopman model $\hat{\mathbf{K}} = \hat{\mathbf{C}}(0)^{-1}\, \hat{\mathbf{C}}(\tau)$.

Koopman decomposition: Solve the eigenvalue problem $\hat{\mathbf{K}}\, \mathbf{b}_i = \hat{\lambda}_i(\tau)\, \mathbf{b}_i$.

Output the Koopman model and its spectral components: eigenvalues $\hat{\lambda}_i(\tau)$ and eigenfunctions $\hat{\psi}_i = \mathbf{b}_i^{\top}\boldsymbol{\chi}$. Both may have imaginary components that are due either to statistical noise or to nonreversible dynamics.
Please note that this pseudocode is given for illustrative purposes and should not be implemented literally. In particular, it assumes that the basis functions are linearly independent so that $\hat{\mathbf{C}}(0)$ is invertible. In practice, linear independence can be achieved by a decorrelation of the basis functions; see Appendix F, and specifically Algorithm 1* there, for advice on how to implement the Koopman estimator in practice.
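A toy version of Algorithm 1, using a pseudoinverse as a minimal guard against a poorly conditioned covariance matrix (the decorrelation of Algorithm 1* is what should be used in practice; data and basis are synthetic):

```python
import numpy as np

def nonreversible_koopman(X, Y):
    """Sketch of Algorithm 1: direct covariances, Koopman matrix
    K = C(0)^-1 C(tau), and its eigendecomposition."""
    n = len(X)
    C0 = X.T @ X / n
    Ct = X.T @ Y / n
    K = np.linalg.pinv(C0) @ Ct          # Koopman model
    lam, B = np.linalg.eig(K)            # Koopman decomposition
    order = np.argsort(-np.abs(lam))     # sort by eigenvalue magnitude
    return K, lam[order], B[:, order]

# synthetic discrete trajectory with a one-hot (indicator) basis,
# so that K reduces to an MSM transition matrix
rng = np.random.default_rng(3)
states = rng.integers(0, 2, size=5000)
chi = np.eye(2)[states]
K, lam, B = nonreversible_koopman(chi[:-1], chi[1:])
```

For the indicator basis, K is a stochastic matrix and its leading eigenvalue is 1.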
The above derivation shows that Koopman estimation as in Algorithm 1 has a key advantage: suppose the points $\mathbf{x}_t$ are sampled from a distribution $\rho$ which is not equal to the equilibrium distribution $\mu$. Although the empirical estimates of the covariance matrices used in Algorithm 1 are biased with respect to the equilibrium expectations $\mathbf{C}(0)$ and $\mathbf{C}(\tau)$, they are unbiased and consistent estimates of the nonequilibrium covariance matrices $\mathbf{C}_\rho(0)$ and $\mathbf{C}_\rho(\tau)$. Furthermore, the matrix $\mathbf{K}$ given by (25) minimizes the error

$\mathbb{E}_\rho\left[ \left\| \mathbf{K}^{\top}\boldsymbol{\chi}(\mathbf{x}_t) - \mathcal{K}_\tau\, \boldsymbol{\chi}(\mathbf{x}_t) \right\|^2 \right]$ (26)

with respect to $\rho$, as the data size approaches infinity (see Appendices B and C). Therefore, $\mathbf{K}$ is still a finite-dimensional approximation of $\mathcal{K}_\tau$ with minimal mean square error with respect to $\rho$, which implies that Algorithm 1 is applicable to nonequilibrium data.
III.3 Koopman reweighting: Estimating the equilibrium distribution from nonequilibrium data
Not only is EDMD robust when using nonequilibrium data; we can also utilize the Koopman matrix $\hat{\mathbf{K}}$ to recover the equilibrium properties of the molecular system. According to the principle of importance sampling Glynn and Iglehart (1989), we can assign a weight

$w(\mathbf{x}_t) = \frac{\mu(\mathbf{x}_t)}{\rho(\mathbf{x}_t)}$ (27)

to each transition pair $(\mathbf{x}_t, \mathbf{x}_{t+\tau})$ in the simulation data, such that the equilibrium ensemble average of a function $a(\mathbf{x})$ can be consistently estimated by the weighted mean

$\hat{\mathbb{E}}_\mu[a] = \frac{\sum_t w(\mathbf{x}_t)\, a(\mathbf{x}_t)}{\sum_t w(\mathbf{x}_t)}$ (28)

Based on the finite-dimensional approximation (4) of the spectral components, we can represent the weight function as a linear combination of the basis functions $\chi_1, \dots, \chi_m$:

$w(\mathbf{x}) = \boldsymbol{\chi}(\mathbf{x})^{\top}\, \mathbf{u}$ (29)

where $\mathbf{u}$ satisfies

$\hat{\mathbf{C}}(\tau)^{\top}\, \mathbf{u} = \hat{\mathbf{C}}(0)\, \mathbf{u}$ (30)

i.e., $\mathbf{u}$ is, in the limit of large statistics, an eigenvector of $\hat{\mathbf{C}}(0)^{-1}\hat{\mathbf{C}}(\tau)^{\top}$ with eigenvalue 1. (See Appendix B for proofs of the above equations.)
In practice, the eigenvalue problem (30) cannot be solved for arbitrary choices of basis sets. If the basis set cannot represent the constant eigenfunction $\mathbf{1}$, (30) does not have an eigenvalue 1. In order to deal with general basis sets, we have two options. First, we can seek an approximate solution via the quadratic programming problem

$\min_{\mathbf{u}} \left\| \hat{\mathbf{C}}(\tau)^{\top}\mathbf{u} - \hat{\mathbf{C}}(0)\,\mathbf{u} \right\|^2 \quad \text{s.t.} \quad \mathbf{1}^{\top}\mathbf{X}\mathbf{u} = 1$ (31)

where the constraint ensures that $\sum_t w(\mathbf{x}_t) = 1$, and $\mathbf{1}$ denotes a column vector of ones.
We recommend a simpler way to solve this problem: add the constant function $\mathbf{1}$ to the basis set, and change $\mathbf{X}$ and $\mathbf{Y}$ to $[\mathbf{X}, \mathbf{1}]$ and $[\mathbf{Y}, \mathbf{1}]$ correspondingly, so that the eigenvalue problem (30) can be solved exactly. The resulting method to compute the equilibrium statistical weights $w(\mathbf{x}_t)$ of all samples can be summarized by the following algorithm:
Algorithm 2: Koopman reweighting

Basis-transform the input coordinates according to (11).

Compute $\hat{\mathbf{C}}(0)$, $\hat{\mathbf{C}}(\tau)$ and $\hat{\mathbf{K}}$ as in Algorithm 1.

Compute $\mathbf{u}$ as the solution of (30) with eigenvalue 1, and normalize it such that $\sum_t \boldsymbol{\chi}(\mathbf{x}_t)^{\top}\mathbf{u} = 1$.

Output the weights: $w(\mathbf{x}_t) = \boldsymbol{\chi}(\mathbf{x}_t)^{\top}\,\mathbf{u}$.
Again, this algorithm is simplified for illustrative purposes. In our implementation, we ensure numerical robustness by adding the constant function to the decorrelated basis set; see Appendix E and Algorithm 2* there.
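A toy version of Algorithm 2 on a two-state Markov chain started far from equilibrium. The indicator basis used here already contains the constant function, so no augmentation is needed; the weight vector is obtained from the eigenvalue-1 solution of (30). All data are synthetic:

```python
import numpy as np
from scipy.linalg import eig

def koopman_weights(X, Y):
    """Sketch of Algorithm 2: solve C(tau)^T u = C(0) u for the
    eigenvalue-1 eigenvector and return normalized frame weights."""
    n = len(X)
    C0 = X.T @ X / n
    Ct = X.T @ Y / n
    lam, U = eig(Ct.T, C0)                   # generalized eigenproblem
    u = U[:, np.argmin(np.abs(lam - 1))].real
    w = X @ u                                # w_t = chi(x_t)^T u
    w *= np.sign(w.sum())                    # fix the eigenvector sign
    return w / w.sum()                       # normalize: sum_t w_t = 1

# two-state chain with stationary distribution (2/3, 1/3),
# always started in state 0 (off-equilibrium starting condition)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
rng = np.random.default_rng(4)
s = [0]
for _ in range(20000):
    s.append(rng.choice(2, p=P[s[-1]]))
s = np.array(s)
chi = np.eye(2)[s]
w = koopman_weights(chi[:-1], chi[1:])
mass0 = w[s[:-1] == 0].sum()   # total equilibrium weight assigned to state 0
```

Here mass0 approximates the equilibrium probability 2/3 of state 0.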
After the weights have been estimated and normalized such that $\sum_t w(\mathbf{x}_t) = 1$, we can compute equilibrium estimates for given observables $a(\mathbf{x})$ and $b(\mathbf{x})$ from nonequilibrium data. For example, the ensemble average and the time-lagged cross-correlation can be approximated by

$\hat{\mathbb{E}}_\mu\left[ a(\mathbf{x}_t) \right] = \sum_t w(\mathbf{x}_t)\, a(\mathbf{x}_t)$ (32)
$\hat{\mathbb{E}}_\mu\left[ a(\mathbf{x}_t)\, b(\mathbf{x}_{t+\tau}) \right] = \sum_t w(\mathbf{x}_t)\, a(\mathbf{x}_t)\, b(\mathbf{x}_{t+\tau})$ (33)
III.4 Reversible Koopman models and eigendecompositions
We now have the tools necessary to compute equilibrium covariance matrices while avoiding the bias of forced symmetrization described in Sec. II.3, and can conduct a real-valued eigenvalue analysis for reversible dynamics using the VA or TICA. At the same time, our approach defines an equilibrium estimator of EDMD for time-reversible processes. We can obtain symmetrized equilibrium covariances from our off-equilibrium data by the following estimators:
$\hat{\mathbf{C}}^{\mathrm{eq}}(0) = \frac{1}{2} \left( \mathbf{X}^{\top}\mathbf{W}\mathbf{X} + \mathbf{Y}^{\top}\mathbf{W}\mathbf{Y} \right)$ (34)
$\hat{\mathbf{C}}^{\mathrm{eq}}(\tau) = \frac{1}{2} \left( \mathbf{X}^{\top}\mathbf{W}\mathbf{Y} + \mathbf{Y}^{\top}\mathbf{W}\mathbf{X} \right)$ (35)

where $\mathbf{W} = \mathrm{diag}\left( w(\mathbf{x}_1), \dots, w(\mathbf{x}_{N-\tau}) \right)$ is the diagonal matrix of weights. These estimators are based on time-reversibility and the reweighting approximation (28) for the equilibrium distribution (see Appendix D for proof). As a result, we obtain a time-reversible Koopman matrix:

$\bar{\mathbf{K}} = \hat{\mathbf{C}}^{\mathrm{eq}}(0)^{-1}\, \hat{\mathbf{C}}^{\mathrm{eq}}(\tau)$ (36)
By comparing (34,35) and (14,15), it is apparent that these estimates reduce to the symmetrized direct estimates if the weights of all data points are uniform. The weight function (29) used here can systematically reduce the bias of the symmetrized estimates for reversible dynamics. Under some weak assumptions, it can be shown that the spectral components calculated from the resulting matrix are real-valued and the largest eigenvalue is not larger than one, even in the presence of statistical noise and modeling error. Furthermore, the procedure is self-consistent: if the estimation procedure is repeated starting from the estimated weights, these weights remain fixed (see Appendix E for a more detailed analysis).
The estimation algorithm for variationally optimal Koopman models of the reversible equilibrium dynamics can be summarized as follows:
Algorithm 3: Reversible Koopman estimation

Basis-transform input coordinates according to (11).

Use Koopman reweighting (Algorithm 2) to compute the equilibrium weights .

Compute the Koopman model .

Reversible Koopman decomposition: solve eigenvalue problem .

Output the Koopman model and its spectral components: eigenvalues and eigenfunctions. These eigenvalues and eigenfunctions are real-valued.
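The steps above can be sketched in NumPy as well. This sketch assumes the symmetrized equilibrium covariances have the weighted forms C00_eq ∝ X^T W X + Y^T W Y and C0t_eq ∝ X^T W Y + Y^T W X with W = diag(w) (our reading of Eqs. (34)-(35)); the whitening construction used to solve the symmetric generalized eigenvalue problem is a standard numerical choice, not prescribed by the paper, and all names are illustrative.

```python
import numpy as np

def reversible_koopman_decomposition(X, Y, w, tol=1e-10):
    """Sketch of Algorithm 3 (reversible Koopman estimation).

    X, Y: (T, m) basis-transformed pairs chi(x_t), chi(x_{t+tau});
    w: equilibrium weights (e.g. from Koopman reweighting), summing to 1.
    Returns real eigenvalues (descending) and the corresponding
    eigenvectors of the symmetrized equilibrium Koopman model.
    """
    WX = X * w[:, None]
    WY = Y * w[:, None]
    # symmetrized equilibrium covariance matrices (both symmetric)
    C00 = 0.5 * (WX.T @ X + WY.T @ Y)
    C0t = 0.5 * (WX.T @ Y + WY.T @ X)

    # whiten with C00^{-1/2} so the generalized problem C0t r = lam C00 r
    # becomes an ordinary symmetric problem with a guaranteed real spectrum
    s, V = np.linalg.eigh(C00)
    keep = s > tol                        # discard near-singular directions
    Linv = V[:, keep] / np.sqrt(s[keep])
    M = Linv.T @ C0t @ Linv               # symmetric whitened Koopman matrix
    evals, U = np.linalg.eigh(M)
    order = np.argsort(evals)[::-1]       # largest eigenvalue first
    return evals[order], (Linv @ U)[:, order]
```

For a reversible toy Markov chain with an indicator basis, the top eigenvalue is exactly one and the second eigenvalue approximates the second eigenvalue of the transition matrix, even when the data are started out of equilibrium.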
As before, this algorithm is presented in pedagogical pseudocode. Taken literally, it will suffer from numerical instabilities if the covariance matrix is not positive-definite, which can be overcome by reducing correlations between basis functions as mentioned in Section III.2; see Appendix F and Algorithm 3* there.
IV Applications
In this section, we compare three different estimators for molecular kinetics on the same data sets:

Symmetrized estimation (previous standard practice), in which the empirical covariance matrices are symmetrized directly, as used in the symmetrized VA and TICA.

Non-reversible Koopman estimation (Algorithm 1), which provides a non-reversible Koopman model whose eigendecomposition is equivalent to EDMD and (non-symmetrized) TICA in feature space.

Reversible Koopman estimation (Algorithm 3), which provides a reversible Koopman model based on the Koopman reweighting weights.
In addition, we compare the equilibrium distribution estimated by Koopman reweighting (Algorithm 2) with the empirical distribution obtained by direct counting or histogramming of the data, in order to demonstrate the usefulness of the proposed reweighting method.
IV.1 One-dimensional diffusion process
As a first example, we consider a one-dimensional diffusion process in a double-well potential, restricted to the range shown in Fig. 2A. In order to assess the robustness of the different estimators, we start all simulations far from equilibrium, in the region marked by the shaded area in Fig. 2A. In order to apply the algorithms discussed here, we choose a basis set of Gaussian functions with random parameters. For more details on the simulation model and experimental setup, see Appendix G.1.
Fig. 2B shows estimates of the slowest relaxation timescale based on independent short simulation trajectories. The largest relaxation timescale is a constant independent of the lag time according to (2). For such non-equilibrium data, the symmetrized VA significantly underestimates the relaxation timescale and gives even worse results at longer lag times. The Koopman models (both reversible and non-reversible), on the other hand, converge quickly to the true timescale. The equilibrium distribution computed from Algorithm 2 is shown in Fig. 2C. In contrast to the empirical histogram density given by direct counting, the reweighting estimator effectively recovers the equilibrium properties of the process from non-equilibrium data.
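The relation between an estimated eigenvalue and the corresponding relaxation timescale used here is the standard implied-timescale formula, t(tau) = -tau / ln(lambda(tau)); the helper name below is illustrative.

```python
import numpy as np

def implied_timescale(eigenvalue, lag):
    """Implied relaxation timescale t = -lag / ln(lambda) for 0 < lambda < 1.

    For a Markovian process, lambda(lag) = exp(-lag / t), so the implied
    timescale is a constant independent of the lag time at which it is
    evaluated; deviations from a flat line diagnose estimation bias.
    """
    return -lag / np.log(eigenvalue)
```

This lag-independence is precisely the convergence criterion used in Fig. 2B: a good estimator produces a timescale curve that flattens out at short lag times.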
Fig. 2D compares the empirical probability of potential well I (i.e., obtained by directly counting the number of samples in well I) with the estimate from Koopman reweighting (Algorithm 2) for different simulation trajectory lengths, where the lag time and the accumulated simulation time are kept fixed. Due to the ergodicity of the process, the empirical probability converges to the true value as the trajectory length increases. The convergence rate, however, is very slow, as shown in Fig. 2D, and the empirical probability is close to the true value only for sufficiently long trajectories. When using the reweighting method proposed here, the estimated probability is robust with respect to changes in trajectory length and accurate even for very short trajectories.
IV.2 Two-dimensional diffusion process
Next, we study a two-dimensional diffusion process which has three potential wells as shown in Fig. 3A, where all simulations are initialized with , and the set of basis functions for spectral estimation consists of Gaussian functions with random parameters (see Appendix G.2 for details).
We generate short simulation trajectories and show the empirical free energy of the simulation data in Fig. 3B. Comparing Fig. 3B with Fig. 3A, it can be seen that most of the simulation data are concentrated in one region, and the empirical distribution of the simulations is very different from the equilibrium distribution. Therefore, eigenvalues/timescales and eigenfunctions estimated by the symmetrized VA have large errors, whereas the non-reversible and reversible Koopman models provide accurate eigenvalues/timescales and eigenfunctions (Fig. 3D,F). Moreover, the equilibrium density can be recovered with high accuracy using Koopman reweighting, although the data are far from equilibrium (Fig. 3C).
For such a two-dimensional process, it is also interesting to investigate the slow collective variables predicted by TICA, i.e., by directly using the x and y coordinates as basis functions. Fig. 3A displays the TICA components computed from the exact equilibrium distribution. Notice that the slowest mode is parallel to the x-axis, which is related to transitions between potential wells I and II, and the second IC is parallel to the y-axis, which is related to transitions between {I,II} and III. However, if we extract ICs from simulation data by using TICA with symmetrized covariance matrices, the result is significantly different, as shown in Fig. 3B, where the first IC characterizes transitions between I and III. The ICs given by the non-reversible and reversible Koopman models (Algorithms 1 and 3 here) can be seen in Fig. 3C. They still differ from those in Fig. 3A, because the equilibrium distribution is difficult to approximate with only linear basis functions, but they are much more accurate than the estimates obtained by the previously used symmetric estimator in Fig. 3B.
Fig. 3E shows the errors of the estimated equilibrium distribution obtained from simulations with different trajectory lengths, where the accumulated simulation time and the lag time are kept fixed, and the error is evaluated as the total variation distance between the estimated probability distribution of the three potential wells and the true reference. Fig. 3F shows the angles of the linear ICs approximated from the same simulation data. Both figures clearly demonstrate the superiority of the Koopman models suggested here.
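The error metric used here is simple to state in code; for two discrete distributions it is half the absolute difference of the probability vectors.

```python
import numpy as np

def total_variation_distance(p, q):
    """Total variation distance between two discrete distributions:
    TV(p, q) = (1/2) * sum_i |p_i - q_i|.

    Both inputs are normalized first, so unnormalized weight vectors
    (e.g. well populations) can be passed directly. The result is 0 for
    identical distributions and 1 for distributions with disjoint support.
    """
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return 0.5 * float(np.abs(p - q).sum())
```

Applied to the three-well populations, this yields a single scalar error per estimator and trajectory length, which is what is plotted against the reference in Fig. 3E.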
IV.3 Protein-Ligand Binding
We revisit the binding process of benzamidine to trypsin, which was studied previously in Refs. Buch et al. (2011); Scherer et al. (2015). The data set consists of trajectories of and four trajectories of simulation time, resulting in a total simulation time of . From the simulations, we extract a feature set of nearest-neighbor heavy-atom contacts between all trypsin residues and the ligand. In this feature space, we then perform TICA using the symmetrized estimate (previous standard), and estimate a non-reversible Koopman model (Algorithm 1) and a reversible Koopman model (Algorithm 3). In order to obtain uncertainties, we compute bootstrapping estimates in which outliers were rejected. In Figure 4 A-C, we show the three slowest implied timescales as estimated by the three approaches discussed above. We observe that both Koopman models provide a larger slowest implied timescale than symmetrized TICA. The slowest timescale estimated by the reversible estimator converges only at relatively long lag times. This is likely due to the fact that the trypsin-benzamidine binding kinetics involves internal conformational changes of trypsin Plattner and Noé (2015). In Fig. 4, we show the free energy landscape projected onto the two slowest components for all three estimates (the first TICA components of the direct estimate are coincidentally purely real here). The eigenvectors used for the dimensionality reduction were estimated at lag time . The projections are qualitatively similar, revealing three minima of the landscape, labeled 1, 2, and 3. In all three cases, these centers correspond to the same macrostates of the system, shown underneath in Figure 4 G-H. Center 1 corresponds to the ligand being either unbound or loosely attached to the protein. The other two states are different conformational arrangements of the bound state of the ligand.
V Conclusion
Using dynamic mode decomposition theory, we have shown that the variational approach of conformation dynamics and the time-lagged independent component analysis can be performed with small bias even if only empirical out-of-equilibrium estimates of the covariance matrices are available, i.e., they can be applied to ensembles of short MD simulations started from arbitrary starting points. A crucial point is that the forced symmetrization of the empirical covariances practiced in previous studies must be avoided.
In order to facilitate a bias-corrected symmetric estimate of the covariance matrices, we have proposed a Koopman reweighting technique in which the weights of sampled configurations are estimated in a first pass over the data, during which empirical covariance matrices are estimated. These weights can be applied in order to turn the empirical (out-of-equilibrium) estimates of the covariance matrices into estimates of the equilibrium covariance matrices. The latter can then be symmetrized without introducing a bias from the empirical distribution, resulting in real-valued eigenvalue and eigenfunction estimates.
With these algorithms, the variational approach, and thus also the TICA algorithm, inherits the same benefits that MSMs have enjoyed for nearly a decade: we can generate optimal and robust reversible and non-reversible estimates of the equilibrium kinetics from swarms of short trajectories not started from equilibrium. Although this work focuses on the estimation of eigenvalues and eigenfunctions of the Koopman operator, the proposed Algorithms 1 and 3 provide Koopman models, which are discrete approximations of the Koopman operator and can be used for other purposes, such as the propagation of densities. Koopman models are a generalization of Markov state models to arbitrary basis functions.
Besides the application to molecular kinetics highlighted in this paper, the Koopman reweighting principle described in Algorithm 2 can be used to compute variationally optimal estimates of any equilibrium property (expectation values, distributions) from out-of-equilibrium data using arbitrary sets of basis functions. While the viability of this approach critically depends on the suitability of the basis functions employed, it offers a very general way of computing equilibrium expectations that may lead to other applications and extensions in future studies.
Appendix A Dynamical operators
Besides the Koopman operator , the conformation dynamics of a molecular system can also be described by the forward operator and the backward operator (also called the transfer operator) Prinz et al. (2011), which describe the evolution of ensemble densities as
(37)  
and
(38)  
where denotes the probability density of and denotes the density weighted by the inverse of the stationary density. The relationship between the three operators can be summarized as follows:

is adjoint to in the sense of
(39) for any . If is reversible, and are self-adjoint with respect to , i.e., .

Defining the multiplication operator as , the Markov propagator can be expressed as
(40) Under the detailed balance condition, is self-adjoint with respect to .
We can also find finite-dimensional approximations of the forward and backward operators by minimizing the corresponding approximation errors for some weight function. However, it is still unknown how to select the weight functions so that the approximation errors can be easily computed from simulation data, as in the approximation of the Koopman operator. For example, for one natural choice of weight function, the approximation error of the forward operator is
(41)  
where the last term on the right hand side is a constant independent of , and the computation of the first two terms is infeasible for unknown . For , the weight function is generally set to be , and the corresponding approximation error is then
(42)  
which is difficult to estimate unless the empirical distribution is consistent with the equilibrium distribution or the system is reversible. (For reversible systems, the two operators coincide, and the finite-dimensional approximation of the Koopman operator is therefore also that of the backward operator.) In general, only the Koopman operator can be reliably estimated from non-equilibrium data.
Appendix B Properties of the empirical distribution
We first consider the case where the simulation data consist of independent trajectories of length and the initial state . In this case, can be given by
(43) 
where denotes the forward operator defined in Appendix A.
For an arbitrary function of and , we have
(44)  
and
(45)  
as , where “” denotes the convergence in probability. Therefore and are unbiased and consistent estimates of
(46)  
(47) 
The importance sampling approximation provided by (28) is also consistent because