VAMPnets: Deep learning of molecular kinetics

10/16/2017 · by Andreas Mardt et al., Freie Universität Berlin

Here we develop a deep learning framework for molecular kinetics from molecular dynamics (MD) simulation data. There is an increasing demand for computing the relevant structures, equilibria and long-timescale kinetics of complex biomolecular processes, such as protein-drug binding, from high-throughput MD simulations. State-of-the-art methods employ a handcrafted data processing pipeline, involving (i) transformation of simulated coordinates into a set of features characterizing the molecular structure, (ii) dimension reduction to collective variables, (iii) clustering of the dimension-reduced data, and (iv) estimation of a Markov state model (MSM) or related model of the interconversion rates between molecular structures. This approach demands substantial modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the recently developed variational approach for Markov processes (VAMP) to build a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, learning optimal feature transformations, nonlinear dimension reduction, cluster discretization and MSM estimation within a single end-to-end learning framework. Our results, ranging from toy models to protein folding, are competitive with or outperform state-of-the-art Markov modeling methods and readily provide easily interpretable few-state kinetic models.

Code repository: deeptime – deep learning meets molecular dynamics.

Introduction

The rapid advances in computing power and simulation technologies for molecular dynamics (MD) of biomolecules and fluids Lindorff-Larsen et al. (2011); Plattner et al. (in revision); Kohlhoff et al. (2014); Doerr et al. (2016), and ab initio MD of small molecules and materials Ufimtsev and Martinez (2008); Marx and Hutter (2000), allow the generation of extensive simulation data of complex molecular systems. Thus, it is of high interest to automatically extract statistically relevant information, including stationary, kinetic and mechanistic properties.
The Markov modeling approach Schütte et al. (1999); Prinz et al. (2011a); Swope et al. (2004); Noé et al. (2007); Chodera et al. (2007); Buchete and Hummer (2008) has been a driving force in the development of kinetic modeling techniques from large amounts of MD data, chiefly because it facilitates a divide-and-conquer approach that integrates short, distributed MD simulations into a model of the long-timescale behavior. State-of-the-art analysis approaches and software packages Scherer et al. (2015); Doerr et al. (2016); Harrigan et al. (2017) operate by a sequence, or pipeline, of processing steps that has been engineered by practitioners over the last decade. The first step of a typical pipeline is featurization, in which the MD coordinates are either aligned (removing translation and rotation of the molecule of interest) or transformed into internal coordinates such as residue distances, contact maps or torsion angles Humphrey et al. (1996); McGibbon et al. (2015); Scherer et al. (2015); Doerr et al. (2016). This is followed by dimension reduction to a much smaller number (typically 2-100) of slow collective variables (CVs), often based on the variational approach of conformation dynamics Noé and Nüske (2013); Nüske et al. (2014), time-lagged independent component analysis (TICA) Perez-Hernandez et al. (2013); Schwantes and Pande (2013), blind source separation Molgedey and Schuster (1994); Ziehe and Müller (1998); Harmeling et al. (2003) or dynamic mode decomposition Mezić (2005); Schmid and Sesterhenn (2008); Tu et al. (2014); Williams et al. (2015); Wu et al. (2017) – see Noé and Clementi (2017); Klus et al. (2017) for an overview. The resulting coordinates may be scaled in order to embed them in a metric space whose distances correspond to some form of dynamical distance Noé and Clementi (2015); Noé et al. (2016). This metric space is discretized by clustering the projected data using hard or fuzzy data-based clustering methods Bowman et al. (2014); Scherer et al. (2015); Husic and Pande (2017); Sheong et al. (2015); Chodera et al. (2007); Wu and Noé (2015); Weber et al. (2017), typically resulting in 100-1000 discrete states. A transition matrix or rate matrix, describing the transition probabilities or rates between the discrete states at some lag time τ, is then estimated Bowman et al. (2009); Prinz et al. (2011a); Buchete and Hummer (2008); Trendelkamp-Schroer et al. (2015) (alternatively, a Koopman model can be built after the dimension reduction Williams et al. (2015); Wu et al. (2017)). The final step towards an easily interpretable kinetic model is coarse-graining of the estimated Markov state model (MSM) down to a few states Kube and Weber (2007); Yao et al. (2013); Fackeldey and Weber (2017); Gerber and Horenko (2017); Hummer and Szabo (2015); Orioli and Faccioli (2016); Noé et al. (2013).
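Steps (iii) and (iv) of this pipeline can be sketched in a few lines. The following toy example (with hypothetical cluster labels standing in for the output of a real clustering step) estimates a row-stochastic MSM transition matrix from transition counts at a given lag time:

```python
import numpy as np

rng = np.random.default_rng(0)
dtraj = rng.integers(0, 3, size=10000)   # toy discrete trajectory (output of step iii)

def estimate_msm(dtraj, n_states, lag):
    """Row-stochastic MSM transition matrix from transition counts at a lag."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        counts[a, b] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

T_msm = estimate_msm(dtraj, n_states=3, lag=1)
```

Production MSM estimators additionally handle connectivity and reversibility constraints; this sketch shows only the basic count-and-normalize step.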

This sequence of analysis steps has been developed by combining physico-chemical intuition and technical experience gathered over the last 10 years. Although each of the steps in the above pipeline appears meaningful, there is no fundamental reason why this or any other given analysis pipeline should be optimal. More importantly, the success of kinetic modeling currently relies on substantial technical expertise of the modeler, as suboptimal decisions in each step may degrade the result. As an example, failure to select suitable features in step (i) will almost certainly lead to large modeling errors.

An important step towards selecting optimal models (parameters) and modeling procedures (hyper-parameters) has been the development of the variational approach for conformation dynamics (VAC) Noé and Nüske (2013); Nüske et al. (2014), which offers a way to define scores that measure the optimality of a given kinetic model compared to the (unknown) MD operator that governs the true kinetics underlying the data. The VAC has recently been generalized to the variational approach for Markov processes (VAMP), which allows one to optimize models of arbitrary Markov processes, including nonreversible and non-stationary dynamics Wu and Noé (2017). The VAC has been employed with cross-validation in order to make optimal hyper-parameter choices within the analysis pipeline described above, while avoiding overfitting McGibbon and Pande (2015); Husic and Pande (2017). However, a variational score is not only useful to optimize the steps of a given analysis pipeline; in fact, it allows us to replace the entire pipeline with a more general learning structure.

Here we develop a deep learning structure that is in principle able to replace the entire analysis pipeline above. Deep learning has been very successful in a broad range of data analysis and learning problems LeCun et al. (2015); Krizhevsky et al. (2012); Mnih et al. (2015). A feedforward deep neural network is a structure that can learn a complex, nonlinear function of its inputs. In order to train the network, a scoring or loss function is needed that is maximized or minimized, respectively. Here we develop VAMPnets, a neural network architecture that can be trained by maximizing a VAMP variational score. VAMPnets contain two network lobes that transform the molecular configurations found at a time delay τ along the simulation trajectories. Compared to previous attempts to include "depth" or "hierarchy" into the analysis method Perez-Hernandez and Noé (2016); Nüske et al. (2016), VAMPnets combine the tasks of featurization, dimension reduction, discretization and coarse-grained kinetic modeling into a single end-to-end learning framework. We demonstrate the performance of our networks on a variety of stochastic models and datasets, including a protein folding dataset. The results are competitive with, and sometimes surpass, the state-of-the-art handcrafted analysis pipeline. Given the rapid improvements in training efficiency and accuracy of deep neural networks seen across a broad range of disciplines, it is likely that follow-up works can lead to superior kinetic models.

Results

1. Variational principle for Markov processes

MD can be theoretically described as a Markov process {x_t} in the full state space Ω. For a given potential energy function, simulation setup (e.g. periodic boundaries) and time-step integrator used, the dynamics are fully characterized by a transition density p_τ(x, y), i.e. the probability density that an MD trajectory will be found at configuration y given that it was at configuration x a time lag τ before. Markovianity implies that x_{t+τ} can be sampled by knowing x_t alone, without knowledge of previous time steps. While the dynamics might be highly nonlinear in the variables x_t, Koopman theory Koopman (1931); Mezić (2005) tells us that there is a transformation of the original variables into some features or latent variables that, on average, evolve according to a linear transformation. In mathematical terms, there exist transformations to features or latent variables, χ0(x) and χ1(x), such that the dynamics in these variables are approximately governed by the matrix K:

E[χ1(x_{t+τ})] ≈ K^T E[χ0(x_t)]    (1)

This approximation becomes exact in the limit of an infinitely large set of features χ0 and χ1, but for a sufficiently large lag time τ the approximation can be excellent with low-dimensional feature transformations, as we will demonstrate below. The expectation values account for stochasticity in the dynamics, such as in MD, and thus can be omitted for deterministic dynamical systems Mezić (2005); Tu et al. (2014); Williams et al. (2015).

To illustrate the meaning of Eq. (1), consider the example of x_t being a discrete-state Markov chain. If we choose the feature transformations to be indicator functions (the i-th component of χ0(x) equals 1 when x = i and 0 otherwise, and correspondingly for χ1), their expectation values are equal to the probabilities of the chain to be in any given state, and K is equal to the matrix of transition probabilities. Previous papers on MD kinetics have usually employed a propagator or transfer operator formulation instead of Eq. (1) Schütte et al. (1999); Prinz et al. (2011a). However, the above formulation is more powerful, as it also applies to nonreversible and non-stationary dynamics, as found in MD of molecules subject to external forces such as voltage, flow, or radiation Knoch and Speck (2015); Wang and Schütte (2015).

A central result of VAMP theory is that the best finite-dimensional linear model, i.e. the best approximation K in Eq. (1), is found when the subspaces spanned by χ0 and χ1 are identical to those spanned by the top left and right singular functions, respectively, of the so-called Koopman operator Wu and Noé (2017). For an introduction to the Koopman operator, please refer to Koopman (1931); Mezić (2005); Klus et al. (2017).

How do we choose χ0, χ1 and K from data? First, suppose we are given some feature transformations χ0(x) and χ1(x), and define the following covariance matrices:

C00 = E_t[χ0(x_t) χ0(x_t)^T]    (2)
C01 = E_t[χ0(x_t) χ1(x_{t+τ})^T]    (3)
C11 = E_{t+τ}[χ1(x_{t+τ}) χ1(x_{t+τ})^T]    (4)

where E_t[·] and E_{t+τ}[·] denote averages that extend over time points and lagged time points within trajectories, respectively, and across trajectories. Then the optimal K that minimizes the least-squares error E[‖χ1(x_{t+τ}) − K^T χ0(x_t)‖²] is Williams et al. (2015); Wu and Noé (2017); Horenko et al. (2007):

K = C00^{-1} C01    (5)

Now the remaining problem is how to find suitable transformations χ0 and χ1. This problem cannot be solved by minimizing the least-squares error above, as is illustrated by the following example: Suppose we define χ0(x) = χ1(x) = 1, i.e. we just map the state space to the constant 1 – in this case the least-squares error is 0, but the model is completely uninformative as all dynamical information is lost.
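For fixed features, Eq. (5) is a linear least-squares problem. The following numpy sketch (with synthetic data standing in for feature-transformed MD configurations; the 0.8 coupling is an arbitrary toy choice) estimates K from the empirical covariances:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
chi0 = rng.normal(size=(T, 3))                      # chi0(x_t), one row per time point
chi1 = 0.8 * chi0 + 0.1 * rng.normal(size=(T, 3))   # chi1(x_{t+tau}), toy linear dynamics

C00 = chi0.T @ chi0 / T          # Eq. (2)
C01 = chi0.T @ chi1 / T          # Eq. (3)
K = np.linalg.solve(C00, C01)    # Eq. (5): K = C00^{-1} C01
```

With this toy coupling, the estimated K is close to 0.8 times the identity, as the least-squares solution recovers the underlying linear map.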

Instead, in order to seek χ0 and χ1 based on available simulation data, we employ the VAMP theorem introduced in Ref. Wu and Noé (2017), which can be equivalently formulated as the following subspace version:

VAMP variational principle: For any two sets of linearly independent functions χ0(x) and χ1(x), let us call

R2[χ0, χ1] = ‖C00^{-1/2} C01 C11^{-1/2}‖_F²

their VAMP-2 score, where C00, C01 and C11 are defined by Eqs. (2-4) and ‖A‖_F denotes the Frobenius norm of the matrix A. The maximum value of the VAMP-2 score is achieved when the top left and right Koopman singular functions belong to the span of χ0 and the span of χ1, respectively.

This variational theorem shows that the VAMP-2 score measures the consistency between the subspaces of basis functions and those of the dominant singular functions, and we can therefore optimize χ0 and χ1 by maximizing the VAMP-2 score. In the special case where the dynamics are reversible with respect to an equilibrium distribution, the theorem above specializes to the variational principle for reversible Markov processes Noé and Nüske (2013); Nüske et al. (2014).
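The VAMP-2 score of two feature time series can be evaluated directly from the whitened matrix C00^{-1/2} C01 C11^{-1/2}. A minimal sketch with synthetic data: two identical series score exactly the feature dimension (here 2), while two independent series score near 0:

```python
import numpy as np

def vamp2_score(chi0, chi1):
    """VAMP-2 score ||C00^{-1/2} C01 C11^{-1/2}||_F^2 of two feature time series."""
    T = len(chi0)
    C00, C01, C11 = chi0.T @ chi0 / T, chi0.T @ chi1 / T, chi1.T @ chi1 / T
    def inv_sqrt(C):
        s, U = np.linalg.eigh(C)          # C is symmetric positive definite here
        return U @ np.diag(s ** -0.5) @ U.T
    M = inv_sqrt(C00) @ C01 @ inv_sqrt(C11)
    return float(np.sum(M ** 2))

rng = np.random.default_rng(2)
X = rng.normal(size=(10000, 2))
score_identical = vamp2_score(X, X)                              # fully correlated
score_independent = vamp2_score(X, rng.normal(size=(10000, 2)))  # uncorrelated
```

This is a plain-data sketch; regularization of the covariance inversions, as needed for nearly singular feature sets, is omitted.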

2. Learning the feature transformation using VAMPnets

Here we employ neural networks to find an optimal set of basis functions, χ0(x) and χ1(x). Neural networks with at least one hidden layer are universal function approximators Cybenko (1989), and deep networks can express strongly nonlinear functions with relatively few neurons per layer Eigen et al. (2014). Our networks use VAMP as a guiding principle and are hence called VAMPnets. VAMPnets consist of two parallel lobes, each receiving the coordinates of time-lagged MD configurations x_t and x_{t+τ} as input (Fig. 1). The lobes are trained to learn the transformations χ0 and χ1, respectively. For a given set of transformations, we pass a batch of training data through the network and compute the training VAMP-2 score. VAMPnets bear similarities with auto-encoders Ranzato et al. (2006); Bengio et al. (2007) using a time-delay embedding and are closely related to deep Canonical Correlation Analysis (CCA) Galen et al. (2013). VAMPnets are identical to deep CCA with time-delay embedding when using the VAMP-1 score discussed in Wu and Noé (2017); however, the VAMP-2 score has easier-to-handle gradients and is more suitable for time series data, due to its direct relation to the Koopman approximation error Wu and Noé (2017).

Figure 1: Scheme of the neural network architecture used. For each time step of the simulation trajectory, the coordinates x_t and x_{t+τ} are inputs to two deep networks that conduct a nonlinear dimension reduction. In the present implementation, the output layer consists of a Softmax classifier. The outputs are then merged to compute the variational score that is maximized to optimize the networks. In all present applications the two network lobes are identical clones, but they can also be trained independently.

The first left and right singular functions of the Koopman operator are always equal to the constant function 1 Wu and Noé (2017). We can thus add the constant function to the basis sets and train the network by maximizing

R2 = ‖C̄00^{-1/2} C̄01 C̄11^{-1/2}‖_F² + 1    (6)

where C̄00, C̄01, C̄11 are mean-free covariances of the feature-transformed coordinates:

C̄00 = T^{-1} X̄ X̄^T    (7)
C̄01 = T^{-1} X̄ Ȳ^T    (8)
C̄11 = T^{-1} Ȳ Ȳ^T    (9)

Here we have defined the matrices X and Y, whose columns are χ0(x_t) and χ1(x_{t+τ}) for all available transition pairs (x_t, x_{t+τ}), with T the number of such pairs, and their mean-free versions X̄ = X − E[X], Ȳ = Y − E[Y]. The gradients of R2 are given by:

∂R2/∂X̄ = (2/T) C̄00^{-1} C̄01 C̄11^{-1} (Ȳ − C̄01^T C̄00^{-1} X̄)    (10)
∂R2/∂Ȳ = (2/T) C̄11^{-1} C̄01^T C̄00^{-1} (X̄ − C̄01 C̄11^{-1} Ȳ)    (11)

and are back-propagated to train the two network lobes. See Supplementary Note 1 for derivations of Eqs. (6,10,11).
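Since the score can equivalently be written as tr(C̄00^{-1} C̄01 C̄11^{-1} C̄01^T) + 1, the matrix-form gradient with respect to the mean-free data can be checked numerically. The sketch below (toy data, not the paper's implementation) verifies the gradient against a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 3, 400
Xb = rng.normal(size=(n, T))              # columns: features of x_t for each pair
Yb = 0.5 * Xb + rng.normal(size=(n, T))   # features of x_{t+tau}, correlated toy data
Xb -= Xb.mean(axis=1, keepdims=True)      # mean-free versions (X-bar, Y-bar)
Yb -= Yb.mean(axis=1, keepdims=True)

def covs(Xb, Yb):
    return Xb @ Xb.T / T, Xb @ Yb.T / T, Yb @ Yb.T / T   # mean-free covariances

def score(Xb, Yb):
    # R2 = tr(C00^{-1} C01 C11^{-1} C01^T) + 1
    C00, C01, C11 = covs(Xb, Yb)
    return np.trace(np.linalg.solve(C00, C01) @ np.linalg.solve(C11, C01.T)) + 1.0

def grad_X(Xb, Yb):
    # (2/T) C00^{-1} C01 C11^{-1} (Y - C01^T C00^{-1} X)
    C00, C01, C11 = covs(Xb, Yb)
    P = np.linalg.solve(C00, C01 @ np.linalg.inv(C11))
    return (2.0 / T) * (P @ Yb - P @ C01.T @ np.linalg.solve(C00, Xb))

# Central finite-difference check of one gradient entry
eps = 1e-5
E = np.zeros_like(Xb)
E[0, 0] = eps
numeric = (score(Xb + E, Yb) - score(Xb - E, Yb)) / (2.0 * eps)
analytic = grad_X(Xb, Yb)[0, 0]
```

In an actual training run this gradient is handled by the deep learning framework's automatic differentiation rather than coded by hand.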

For simplicity of interpretation we may just use a unique basis set χ = χ0 = χ1. Even when using two different basis sets would be meaningful, we can unify them by simply defining χ to contain both sets of functions. In this case, we clone the lobes of the network and train them using the total gradient, i.e. the sum of Eqs. (10) and (11).

After training, we assess the quality of the learned features and select hyper-parameters (e.g. network size), while avoiding overfitting, using the VAMP-2 validation score

R2_val = ‖(C̄00^val)^{-1/2} C̄01^val (C̄11^val)^{-1/2}‖_F² + 1    (12)

where C̄00^val, C̄01^val, C̄11^val are mean-free covariance matrices computed from a validation data set not used during training.

3. Dynamical model and validation

The direct estimate of the time-lagged covariance matrix C01 is generally nonsymmetric, hence the Koopman model or Markov state model given by Eq. (5) is typically not time-reversible Wu et al. (2017). In MD, it is often desirable to obtain a time-reversible kinetic model – see Trendelkamp-Schroer et al. (2015) for a detailed discussion. To enforce reversibility, C01 can be reweighted as described in Wu et al. (2017) and implemented in PyEMMA Scherer et al. (2015). The present results do not depend on enforcing reversibility, as classical analyses such as PCCA+ Röblitz and Weber (2013) are avoided by virtue of the VAMPnet structure itself.

Since K is a Markovian model, it is expected to fulfill the Chapman-Kolmogorov (CK) equation:

K(nτ) = K^n(τ)    (13)

for any value of n ≥ 1, where K(τ) and K(nτ) indicate the models estimated at lag times τ and nτ, respectively. However, since any Markovian model of MD can only be approximate Sarich et al. (2010); Prinz et al. (2011a), Eq. (13) can only be fulfilled approximately, and the relevant test is whether it holds within statistical uncertainty. We construct two tests based on Eq. (13): In order to select a suitable dynamical model, we proceed as for Markov state models by conducting an eigenvalue decomposition of every estimated Koopman matrix, with eigenvalue-eigenvector pairs (λ_i(τ), r_i), and computing the implied timescales Swope et al. (2004) as a function of lag time:

t_i(τ) = −τ / ln |λ_i(τ)|    (14)

We choose a value of τ at which the t_i(τ) are approximately constant in τ. After having chosen τ, we test whether Eq. (13) holds within statistical uncertainty Noé et al. (2009). For both the implied timescales and the CK-test we proceed as follows: train the neural network at a fixed lag time, thus obtaining the network transformation χ, and then compute Eq. (13) or Eq. (14) for different values of τ with the fixed transformation χ. Finally, the approximation of the i-th eigenfunction is given by

ψ_i(x) = Σ_j (r_i)_j χ_j(x)    (15)

If the dynamics are reversible, the singular value decomposition and the eigenvalue decomposition are identical, i.e. the singular values equal the eigenvalues and the singular functions coincide with the corresponding eigenfunctions.
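Both tests are easy to illustrate on an exactly Markovian toy model, for which Eq. (13) holds identically and the implied timescales of Eq. (14) are constant in τ (the 2×2 matrix below is a hypothetical example):

```python
import numpy as np

def implied_timescales(K, tau):
    """t_i = -tau / ln|lambda_i(tau)| (Eq. 14), skipping the stationary eigenvalue 1."""
    lam = np.sort(np.abs(np.linalg.eigvals(K)))[::-1]
    return -tau / np.log(lam[1:])

# Hypothetical exactly Markovian two-state model
K1 = np.array([[0.95, 0.05],
               [0.10, 0.90]])
K2 = np.linalg.matrix_power(K1, 2)       # CK equation (13): K(2*tau) = K(tau)^2

ts1 = implied_timescales(K1, tau=1.0)
ts2 = implied_timescales(K2, tau=2.0)    # identical: timescales constant in tau
```

For a model estimated from data, deviations between ts1 and ts2 beyond the statistical error bars would signal a violation of Markovianity at the chosen lag time.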

4. Network architecture and training

We use VAMPnets to learn molecular kinetics from simulation data of a range of model systems. While any neural network architecture can be employed inside the VAMPnet lobes, we chose the following setup for our applications: the two network lobes are identical clones, i.e. χ0 = χ1 = χ, and consist of fully connected networks. In most cases, the networks have fewer output than input nodes, i.e. the network conducts a dimension reduction. In order to divide the work equally between network layers, we reduce the number of nodes from each layer to the next by a constant factor. Thus, the network architecture is defined by two parameters: the depth and the number of output nodes. All hidden layers employ rectified linear units (ReLU) Hahnloser (1998); Nair and Hinton (2010).

Here, we build the output layer with Softmax output nodes, i.e. χ_i(x) ≥ 0 for all i and Σ_i χ_i(x) = 1. Therefore, the activation of an output node can be interpreted as a probability to be in state i. As a result, the network effectively performs featurization, dimension reduction and finally a fuzzy clustering to metastable states, and the matrix K computed from the network-transformed data is the transition matrix of a fuzzy MSM Wu and Noé (2015); Weber et al. (2017); Harrigan and Pande (2017). Consequently, Eq. (1) propagates probability distributions in time.
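Because Softmax outputs are non-negative and sum to one, the matrix K = C00^{-1} C01 estimated from such memberships has unit row sums, i.e. it behaves like a transition matrix. A small sketch with random memberships (the time pairing here is artificial, chosen only to produce valid membership pairs):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable Softmax
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(4)
chi = softmax(rng.normal(size=(1000, 4)))   # fuzzy memberships to 4 states
chi_lag = np.roll(chi, -1, axis=0)          # artificial time-lagged pairing

C00 = chi.T @ chi / len(chi)
C01 = chi.T @ chi_lag / len(chi)
K = np.linalg.solve(C00, C01)               # Eq. (5)

row_sums = K.sum(axis=1)                    # equal to 1: K acts like a transition matrix
```

Note that K estimated this way is row-stochastic in the sense of unit row sums, though small negative entries can occur for finite data.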

The networks were trained with pairs of MD configurations (x_t, x_{t+τ}) using the Adam stochastic gradient descent method Kingma and Ba (2014). For each result we repeated 100 training runs, each with a randomly chosen 90%/10% division of the data into training and validation sets. See the Methods section for details on network architecture, training and the choice of hyper-parameters.

5. Asymmetric double-well potential

Figure 2: Approximation of the slow transition in a bistable potential. (a) Potential energy function V(x). (b) Eigenvector of the slowest process, calculated by direct numerical approximation (black) and approximated by a VAMPnet with five output nodes (red). Activations of the five Softmax output nodes define the state membership probabilities (blue). (c) Relaxation timescales computed from the Koopman model using the VAMPnet transformation. (d) Chapman-Kolmogorov test comparing long-time predictions of the Koopman model estimated at lag time τ and estimates at longer lag times. Panels (c) and (d) report 95% confidence interval error bars over 100 training runs.

We first model the kinetics of a bistable one-dimensional process, simulated by Brownian dynamics (see Methods) in an asymmetric double-well potential (Fig. 2a). A long trajectory is generated. Three-layer VAMPnets are set up with 1-5-10-5 nodes in each lobe. The single input node of each lobe receives the current or time-lagged mean-free coordinate of the system, x_t − μ0 or x_{t+τ} − μ1, where μ0 and μ1 are the respective means. The network maps to five Softmax output nodes that we will refer to as states, as the network performs a fuzzy discretization by mapping the input configurations to the output activations. The network is trained using the VAMP-2 score with the four largest singular values.
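A minimal Euler-Maruyama integrator for overdamped (Brownian) dynamics illustrates how such a trajectory can be generated; the specific potential V(x) = (x² − 1)² + 0.3x and the parameter values below are hypothetical stand-ins for the Methods settings:

```python
import numpy as np

def grad_V(x):
    # Hypothetical asymmetric double well: V(x) = (x^2 - 1)^2 + 0.3*x
    return 4.0 * x * (x * x - 1.0) + 0.3

def brownian_dynamics(n_steps, dt=5e-3, kT=0.6, x0=-1.0, seed=5):
    """Euler-Maruyama integration of overdamped Langevin dynamics."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for k in range(n_steps - 1):
        noise = np.sqrt(2.0 * kT * dt) * rng.normal()
        x[k + 1] = x[k] - grad_V(x[k]) * dt + noise
    return x

traj = brownian_dynamics(50000)
```

With these toy parameters the barrier is a few kT, so the trajectory visits both wells many times, producing the rare-event statistics the VAMPnet is trained on.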

The network learns to place the output states so as to best resolve the transition region (Fig. 2b), which is known to be important for the accuracy of a Markov state model Sarich et al. (2010); Prinz et al. (2011a). This placement minimizes the Koopman approximation error, as seen by comparing the dominant Koopman eigenfunction (Eq. 15) with a direct numerical approximation of the true eigenfunction, obtained from a transition matrix computed for a uniform 200-state discretization of the x axis – see Prinz et al. (2011a) for details. The implied timescale and Chapman-Kolmogorov tests (Eqs. 14 and 13) confirm that the kinetic model learned by the VAMPnet successfully predicts the long-time kinetics (Fig. 2 c,d).

6. Protein folding model

While the first example was one-dimensional, we now test whether VAMPnets are able to learn reaction coordinates that are nonlinear functions of a multi-dimensional configuration space. For this, we simulate a Brownian dynamics trajectory (Eq. 17) using a simple protein folding model defined by a potential energy function that depends only on the norm r = |x| of the five-dimensional configuration vector x (Supplementary Fig. 1 a). While small values of r are energetically favorable, large values of r are entropically favorable, as the number of configurations available on a five-dimensional hypersphere grows dramatically with r. Thus, the dynamics are bistable along the reaction coordinate r. Four-layer network lobes with 5-32-16-8-2 nodes each were employed and trained to maximize the VAMP-2 score involving the largest nontrivial singular value.

The two output nodes successfully identify the folded and the unfolded states, and use intermediate memberships for the transition region (Supplementary Fig. 1 b). The network approximates the Koopman eigenfunction of the folding process excellently, as apparent from the comparison of the values of the network eigenfunction computed by Eq. (15) with the eigenvector computed from a high-resolution MSM built on the reaction coordinate (Supplementary Fig. 1 b). This demonstrates that the network can learn the nonlinear reaction coordinate mapping based only on maximizing the variational score (Eq. 6). Furthermore, the implied timescales and the CK-test indicate that the network model predicts the long-time kinetics almost perfectly (Supplementary Fig. 1 c,d).

7. Alanine dipeptide

As a next level, VAMPnets are used to learn the kinetics of alanine dipeptide from simulation data. It is known that the backbone torsion angles φ and ψ are the most important reaction coordinates separating the metastable states of alanine dipeptide. However, our networks only receive Cartesian coordinates as input, and are thus forced to learn both the nonlinear transformation to the torsion angle space and an optimal cluster discretization within this space in order to obtain an accurate kinetic model.

An MD trajectory generated in Ref. Nüske et al. (2017) (MD setup described there) serves as a dataset. The solute coordinates were stored every picosecond, and all stored configurations were aligned on the first frame using a minimal-RMSD fit to remove global translation and rotation. Each network lobe uses the three-dimensional coordinates of the 10 heavy atoms as input, x ∈ R³⁰, and the network is trained at a fixed time lag. Different numbers of output states and layer depths are considered, employing the layer sizing scheme described in the Methods section (see Fig. 3 for an example).
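The minimal-RMSD fit used here can be implemented with the Kabsch algorithm. The following sketch (toy coordinates, not the actual alanine dipeptide data) recovers a reference frame from a rotated and translated copy:

```python
import numpy as np

def kabsch_align(P, Q):
    """Rotate and translate frame P onto reference Q, minimizing the RMSD."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)   # remove translation
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(U @ Vt))                # avoid improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return Pc @ R + Q.mean(axis=0)

rng = np.random.default_rng(6)
Q = rng.normal(size=(10, 3))                          # toy reference frame
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
P = Q @ Rz.T + np.array([1.0, -2.0, 0.5])             # rotated + translated copy
aligned = kabsch_align(P, Q)                          # recovers Q
```

MD analysis libraries provide this fit out of the box; the sketch only shows the underlying linear algebra.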


Figure 3: Representative structure of one lobe of the VAMPnet used for alanine dipeptide. Here, the five-layer network with six output states used for the results shown in Fig. 4 is shown. Layers are fully connected, have 30-22-16-12-9-6 nodes, and use dropout in the first two hidden layers. All hidden neurons use ReLU activation functions, while the output layer uses a Softmax activation function in order to achieve a fuzzy discretization of state space.

A VAMPnet with six output states learns a discretization into six metastable sets corresponding to the free energy minima of the (φ, ψ) space (Fig. 4b). The implied timescales indicate that, given the coordinate transformation found by the network, the two slowest timescales are converged beyond a certain lag time (Fig. 4c). Thus we estimated a Koopman model at that lag time, whose Markov transition probability matrix is depicted in Fig. 4d. Note that direct transition probabilities between some non-adjacent state pairs are important for the correct kinetics at this lag time, even though the actual trajectories typically pass via the directly adjacent intermediate states. The model performs excellently in the CK-test (Fig. 4e).

Figure 4: VAMPnet kinetic model of alanine dipeptide. (a) Structure of alanine dipeptide. The main coordinates describing the slow transitions are the backbone torsion angles φ and ψ; however, the neural network inputs are only the Cartesian coordinates of heavy atoms. (b) Assignment of all simulated molecular coordinates, plotted as a function of φ and ψ, to the six Softmax output states. Color corresponds to the activation of the respective output neuron, indicating the membership probability to the associated metastable state. (c) Relaxation timescales computed from the Koopman model using the neural network transformation. (d) Representation of the transition probability matrix of the Koopman model; transitions with a probability lower than 0.5% have been omitted. (e) Chapman-Kolmogorov test comparing long-time predictions of the Koopman model estimated at lag time τ and estimates at longer lag times. Panels (c) and (e) report 95% confidence interval error bars over 100 training runs, excluding failed runs (see text).

8. Choice of lag time, network depth and number of output states

We studied the success probability of optimizing a VAMPnet with six output states as a function of the lag time τ by conducting 200 optimization runs. Success was defined as resolving the three slowest processes by finding the three slowest timescales to be higher than 0.2, 0.4 and 1 ns, respectively. Note that the results shown in Fig. 4 are reported for successful runs under this definition. There is a range of τ values from 4 to 32 picoseconds where the training succeeds with a significant probability (Supplementary Fig. 2 a). However, even in this range the success rate is still below 40%, mainly because many runs fail to find the rarely occurring third-slowest process, which corresponds to the transition within the positive-φ range (Fig. 4b, states 5 and 6).

The breakdown of optimization success for small and large lag times can be most easily explained by the eigenvalue decomposition of Markov propagators Prinz et al. (2011a). When the lag time exceeds the timescale of a process, the amplitude of this process becomes negligible, making it hard to fit given noisy data. At short lag times, many processes have large eigenvalues, which increases the search space of the neural network and appears to increase the probability of getting stuck in suboptimal maxima of the training score.

We have also studied the success probability, as defined above, as a function of network depth. Deeper networks can represent more complex functions. Also, since the networks defined here reduce the input dimension to the output dimension by a constant factor per layer, deeper networks perform a less radical dimension reduction per layer. On the other hand, deeper networks are more difficult to train. As seen in Supplementary Fig. 2 b, a high success rate is found for four to seven layers.

Next, we studied the dependency of the network-based discretization on the number of output nodes (Fig. 5a-c). With two output states, the network separates the state space at the slowest transition, between negative and positive values of the φ angle (Fig. 5a). The result with three output nodes keeps the same separation and additionally distinguishes between the α and β regions of the Ramachandran plot (Fig. 5b). For higher numbers of output states, finer discretizations and smaller interconversion timescales are found, until the network starts discretizing the transition regions, such as the two transition states between the α and β regions (Fig. 5c). The lag time was chosen depending on the number of output nodes of the network.

A network output with m Softmax neurons describes an (m−1)-dimensional feature space, as the Softmax normalization removes one degree of freedom. Thus, to resolve k relaxation timescales, at least k+1 output nodes or metastable states are required. However, the network quality can improve when given more degrees of freedom with which to approximate the dominant singular functions accurately. Indeed, the best scores using four singular values (three nontrivial singular values) are achieved when using at least six output states, which separate each of the six metastable states in the Ramachandran plane (Fig. 5d-e).

For comparison, we investigated how a standard MSM would perform as a function of the number of states (Fig. 5d). For a fair comparison, the MSMs also used Cartesian coordinates as input, but then employed a state-of-the-art procedure using a kinetic map transformation that preserves 95% of the cumulative kinetic variance Noé and Clementi (2015), followed by k-means clustering, where the number of clusters is varied. The MSM VAMP-2 scores obtained by this procedure are significantly worse than those of VAMPnets when fewer than 20 states are employed. Clearly, MSMs will succeed when sufficiently many states are used, but in order to obtain an interpretable model those states must again be coarse-grained into a fewer-state model, while VAMPnets directly produce an accurate model with few states.

Figure 5: Kinetic model of alanine dipeptide as a function of the number of output states. (a-c) Assignment of input coordinates, plotted as a function of φ and ψ, to two, three, and eight output states. Color corresponds to the activation of the respective output neuron, indicating the membership probability to this state (see Fig. 4b). (d) Comparison of VAMPnet and MSM performance as a function of the number of output states / MSM states. Mean VAMP-2 score and 95% confidence interval from 100 runs are shown. (e) Mean squared values of the four largest singular values that make up the VAMPnet scores plotted in panel (d).

9. VAMPnets learn to transform Cartesian to torsion coordinates

The results above indicate that the VAMPnet has implicitly learned the feature transformation from Cartesian coordinates to backbone torsions. In order to probe this ability more explicitly, we trained a network with 30-10-3-3-2-5 layers, i.e. including a bottleneck of two nodes before the output layer. We find that the activations of the two bottleneck nodes correlate excellently with the φ and ψ torsion angles, which were not presented to the network (Pearson correlation coefficients reported in Supplementary Fig. 3 a,b). To visualize the internal representation that the network learns, we color data samples depending on the free energy minima in the (φ, ψ) space they belong to (Supplementary Fig. 3 c), and then show where these samples end up in the space of the bottleneck node activations (Supplementary Fig. 3 d). It is apparent that the network learns a representation of the Ramachandran plot – the four free energy minima at negative φ values are represented as contiguous clusters with the correct connectivity, and are well separated from states with positive φ values. The network fails to separate the two substates in the positive-φ range well, which explains the frequent failure to find the corresponding transition process and the third-largest relaxation timescale.

NTL9 protein folding dynamics

In order to proceed to a higher-dimensional problem, we analyze the kinetics of an all-atom protein folding simulation of the NTL9 protein generated on the Anton supercomputer Lindorff-Larsen et al. (2011). A five-layer VAMPnet was trained at a fixed lag time using time steps sampled uniformly from the trajectory. Since NTL9 folds and unfolds, there is no unique reference structure to align Cartesian coordinates to; hence we use internal coordinates as network input. We computed the nearest-neighbor heavy-atom distance d_ij for all non-redundant pairs of residues i and j, and transformed them into contact maps which serve as the network's input nodes.
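A minimal sketch of this featurization step follows. The exponential contact transform exp(-d_ij) and the exclusion of sequence-adjacent residue pairs are assumptions filling in details elided above:

```python
import numpy as np

def contact_features(coords, exclude=2):
    """Residue-residue contact features from heavy-atom coordinates.
    coords: (n_residues, n_atoms_per_residue, 3). For each pair (i, j)
    with j - i > exclude, the nearest heavy-atom distance d_ij is
    mapped to a contact value exp(-d_ij). Both the exclusion rule and
    the exponential definition are assumptions for illustration."""
    n = coords.shape[0]
    feats = []
    for i in range(n):
        for j in range(i + 1 + exclude, n):
            # all pairwise atom distances between residues i and j
            d = np.linalg.norm(
                coords[i][:, None, :] - coords[j][None, :, :], axis=-1)
            feats.append(np.exp(-d.min()))   # nearest-atom contact value
    return np.array(feats)

rng = np.random.default_rng(1)
# toy system: 10 residues with 4 heavy atoms each
x = contact_features(rng.uniform(0.0, 3.0, size=(10, 4, 3)))
```

Each feature lies in (0, 1], with values near 1 indicating close residue contacts; the flattened vector is what feeds the VAMPnet input layer.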

Again, the network performs a hierarchical decomposition of the molecular configuration space when increasing the number of output nodes. Fig. 6a shows the decomposition of state space for two and five output nodes, and the corresponding mean contact maps and state probabilities. With two output nodes, the network finds the folded and unfolded states that are separated by the slowest transition process (Fig. 6a, middle row). With five output states, the folded state is decomposed into a stable and well-defined fully folded substate and a less stable, more flexible substate that is missing some of the tertiary contacts of the fully folded substate. The unfolded state decomposes into three substates: one largely unstructured, a second with residual structure, thus forming a folding intermediate, and a misfolded state with an entirely different fold including a non-native β-sheet.

The relaxation timescales found by a five-state VAMPnet model are on par with those found by a 40-state MSM using state-of-the-art estimation methods (Fig. 6b-c). However, the fact that only five states are required in the VAMPnet model makes it easier to interpret and analyze. Additionally, the CK-test indicates excellent agreement between long-time predictions and direct estimates.

Figure 6: VAMPnet results of NTL9 folding kinetics. (a) Hierarchical decomposition of the NTL9 protein state space by a network with two and five output nodes. Mean contact maps are shown for all MD samples grouped by the network, along with the fraction of samples in that group. 3D structures are shown for the five-state decomposition; residues involved in α-helices or β-sheets in the folded state are colored identically across the different states. (b) Relaxation timescales computed from the Koopman model approximated using the transformation applied by a neural network with five output nodes. (c) Relaxation timescales from a Markov state model computed from a TICA transformation of the contact maps, followed by k-means clustering. (d) Chapman-Kolmogorov test comparing long-time predictions of the Koopman model estimated at a short lag time and estimates at longer lag times. Panels (b), (c) and (d) report 95% confidence interval error bars over 100 training runs.

Discussion

We have introduced VAMPnets, a deep learning framework for molecular kinetics. Data-driven learning of molecular kinetics is usually done with shallow learning structures, such as TICA and MSMs. However, the usual processing pipeline of featurization, dimension reduction, MSM estimation and MSM coarse-graining is, in principle, a hand-crafted deep learning structure. Here we propose to replace the entire pipeline by a deep neural network that learns optimal feature transformations, dimension reduction and, if desired, maps the MD time steps to a fuzzy clustering. The key to optimizing the network is the VAMP variational approach, which defines scores by which learning structures can be optimized to learn models of both equilibrium and non-equilibrium MD.

Although MSM-based kinetic modeling has been refined over more than a decade, VAMPnets perform competitively or better in our examples. In particular, they perform extremely well in the Chapman-Kolmogorov test that validates the long-time prediction of the model. VAMPnets have a number of advantages over models based on MSM pipelines: (i) They may be more optimal overall, because featurization, dimension reduction and clustering are not explicitly separate processing steps. (ii) When using Softmax output nodes, the VAMPnet performs a fuzzy clustering of the MD structures fed into the network and constructs a fuzzy MSM, which is readily interpretable in terms of transition probabilities between metastable states. In contrast to other MSM coarse-graining techniques, it is thus not necessary to accept a reduction in model quality in order to obtain a few-state MSM; such a coarse-grained model is seamlessly learned within the same learning structure. (iii) VAMPnets require less expertise to train than an MSM-based processing pipeline, and the formulation of molecular kinetics as a neural network learning problem lets us exploit an arsenal of highly developed and optimized tools in learning software such as TensorFlow, Theano or Keras.
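Point (ii) above can be made concrete: given Softmax membership vectors at times t and t + τ, a few-state transition matrix can be read off directly. The sketch below uses the least-squares normalization T = (χ₀ᵀχ₀)⁻¹(χ₀ᵀχ_τ), which is one common convention and may differ in detail from the paper's estimator:

```python
import numpy as np

def fuzzy_transition_matrix(chi_0, chi_t):
    """Coarse-grained transition matrix between fuzzy states from
    membership matrices chi_0 (time t) and chi_t (time t + tau):
    T = (chi_0^T chi_0)^{-1} (chi_0^T chi_t). Sketch only; the
    normalization convention is an assumption."""
    c00 = chi_0.T @ chi_0       # membership overlap counts
    c01 = chi_0.T @ chi_t       # time-lagged co-membership counts
    return np.linalg.solve(c00, c01)

# with crisp one-hot memberships this reduces to the usual
# count-based MSM estimator
traj = np.array([0, 0, 1, 1, 0, 0, 1, 1, 0, 0])
chi = np.eye(2)[traj]
T = fuzzy_transition_matrix(chi[:-1], chi[1:])
```

For this toy trajectory the counts give T = [[0.6, 0.4], [0.5, 0.5]]; rows are probability vectors, so the fuzzy model is directly interpretable as inter-state transition probabilities.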

Despite their benefits, VAMPnets still lack many of the extensions developed for the MSM approach. These include multi-ensemble Markov models, which are superior to single conventional MSMs in terms of sampling rare events by combining data from multiple ensembles Wu et al. (2016, 2014); Chodera et al. (2011); Prinz et al. (2011b); Rosta and Hummer (2015); Mey et al. (2014); augmented Markov models, which combine simulation data with experimental observations Olsson et al. (in press); and statistical error estimators developed for MSMs Hinrichs and Pande (2007); Noé (2008); Chodera and Noé (2010). Since these methods explicitly use the MSM likelihood, it is currently unclear how they could be implemented in a deep learning structure such as a VAMPnet. Extending VAMPnets towards these capabilities is a challenge for future studies.

Finally, a remaining concern is that the optimization of VAMPnets can get stuck in suboptimal local maxima. In other applications of network-based learning, a working knowledge has been established of which network implementations and learning algorithms are most suitable for robust and reproducible learning. For example, it is conceivable that the VAMPnet lobes may benefit from convolutional filters LeCun et al. (1998) or different types of transfer functions. Suitably chosen convolutions, as in Schütt et al. (2017), may also lead to learned feature transformations that are transferable within a given class of molecules.

Methods

Neural network structure

Each network lobe in Fig. 1 has a number of input nodes given by the data dimension. According to the VAMP variational principle (Sec. .1), the output dimension must be at least equal to the number of Koopman singular functions that we want to approximate, i.e. equal to the k used in the VAMP-2 score function. In most applications, the number of input nodes exceeds the number of output nodes, i.e. the network conducts a dimension reduction. Here, we keep the dimension reduction factor from layer i with d_i nodes to layer i+1 with d_{i+1} nodes constant:

d_{i+1} / d_i = (d_D / d_0)^(1/D)    (16)

where D is the network depth, i.e. the number of layers excluding the input layer. Thus, the network structure is fixed by D and the number of output nodes. We tested values of D ranging from 2 to 11; for alanine dipeptide, Supplementary Fig. 2b reports the results in terms of the training success rate described in the results section. Networks have between 100 and 400,000 parameters, most of which lie between the first and second layer due to the rapid dimension reduction of the network. To avoid overfitting, we use dropout during training Srivastava et al. (2014), and select hyper-parameters using the VAMP-2 validation score.
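The constant-reduction sizing rule can be sketched as follows. The integer rounding is our choice, and the 100-dimensional input in the usage example is purely illustrative:

```python
def layer_widths(d_in, d_out, depth):
    """Layer widths with a constant per-layer reduction factor, so that
    d_{i+1}/d_i = (d_out/d_in)^(1/depth). Sketch of the sizing rule in
    the text; rounding to integers is an assumption."""
    r = (d_out / d_in) ** (1.0 / depth)
    # never shrink below the required output dimension
    return [max(int(round(d_in * r ** i)), d_out) for i in range(depth + 1)]

# example: 100 input features, 5 output states, depth 5
widths = layer_widths(100, 5, 5)
```

This yields a geometric taper from the input dimension down to the number of output nodes, concentrating most parameters between the first two layers, consistent with the parameter counts quoted above.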

Neural network hyperparameters

Hyper-parameters include the regularization factors for the weights of the fully connected and Softmax layers, the dropout probabilities for each layer, the batch size, and the learning rate for the Adam algorithm. Since a grid search in the joint parameter space would have been too computationally expensive, each hyper-parameter was optimized using the VAMP-2 validation score while keeping the other hyper-parameters constant. We started with the regularization factors due to their large effect on training performance, and observed optimal performance with a small regularization factor for the fully connected hidden layers and for the output layer; larger regularization factors frequently led to training failure. Subsequently, we tested a range of dropout probabilities and found dropout in the first two hidden layers and no dropout otherwise to perform well. The results did not strongly depend on the training batch size; however, more training iterations are necessary for large batches, while small batches exhibit stronger fluctuations in the training score. We found an intermediate batch size to be a good compromise. The optimal learning rate strongly depends on the network topology (e.g. the number of hidden layers and the number of output nodes). In order to adapt the learning rate, we started from an arbitrary rate of 0.05. If no improvement of the validation VAMP-2 score was observed over 10 training iterations, the learning rate was reduced by a constant factor. This scheme led to better convergence of the training and validation scores and better kinetic model validation compared to using a high learning rate throughout.
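The learning-rate adaptation described above amounts to a simple plateau rule. In this sketch the reduction factor of 0.5 is an assumption, since the text does not state the value used:

```python
def adapt_learning_rate(scores, lr0=0.05, patience=10, factor=0.5):
    """Reduce the learning rate whenever the validation VAMP-2 score
    has not improved for `patience` iterations, mirroring the scheme
    in the text. `factor` is an assumed value."""
    lr, best, wait = lr0, float("-inf"), 0
    history = []
    for s in scores:
        if s > best:
            best, wait = s, 0          # new best score: reset patience
        else:
            wait += 1
            if wait >= patience:
                lr *= factor           # plateau: shrink the step size
                wait = 0
        history.append(lr)
    return history

# 30 iterations with no improvement after the first: two reductions
hist = adapt_learning_rate([1.0] + [0.5] * 29)
```

Keras users could achieve the same effect with the built-in `ReduceLROnPlateau` callback; the explicit loop here just makes the rule transparent.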

The time lag between the input pairs of configurations was selected depending on the number of output nodes of the network: larger lag times are better at isolating the slowest processes, and are thus more suitable when the number of output nodes is small. The procedure for choosing the network structure and lag time is thus as follows: first, the number of output nodes and hidden layers are selected, which determines the network structure as described above. Then, a lag time is chosen at which the largest singular values (corresponding to the slowest processes) can be trained consistently.

VAMPnet training and validation

We pre-trained the network by minimizing the negative VAMP-1 score during the first third of the total number of epochs, and subsequently optimized the network with the VAMP-2 score (Sec. .2). In order to ensure robustness of the results, we performed 100 network optimization runs for each problem. In each run, the dataset was shuffled and randomly split into 90%/10% for training and validation, respectively. To exclude outliers, we then discarded the best 5% and the worst 5% of results. Hyper-parameter optimization was done using the validation score averaged over the remaining runs. Figures report training or validation means and 95% confidence intervals.
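The run-summary protocol above (many runs, 5% trimmed at each end, mean with 95% confidence interval) can be sketched as follows; the normal approximation used for the interval is an assumption, as the text does not specify the estimator:

```python
import numpy as np

def summarize_runs(scores, trim=0.05):
    """Discard the best and worst `trim` fraction of runs, then report
    the trimmed mean and a 95% confidence interval on it (normal
    approximation; an assumption for this sketch)."""
    s = np.sort(np.asarray(scores, dtype=float))
    k = int(len(s) * trim)
    if k > 0:
        s = s[k:len(s) - k]            # drop 5% outliers at each end
    mean = s.mean()
    half = 1.96 * s.std(ddof=1) / np.sqrt(len(s))   # 95% half-width
    return mean, (mean - half, mean + half)

# 100 synthetic validation scores around 3.0
rng = np.random.default_rng(2)
mean, (lo, hi) = summarize_runs(rng.normal(3.0, 0.1, size=100))
```

With 100 runs, 90 remain after trimming, and the resulting interval is what would be drawn as the error bars in the figures.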

Brownian dynamics simulations

The asymmetric double well and the protein folding toy model are simulated by over-damped Langevin dynamics in a potential energy function U(x), also known as Brownian dynamics, using a forward Euler integration scheme. The position x_t is propagated by time step Δt via:

x_{t+Δt} = x_t − Δt D ∇U(x_t) / (kT) + sqrt(2 Δt D) η_t,    (17)

where D is the diffusion constant and kT is the product of the Boltzmann constant and temperature. Here, dimensionless units are used with D = 1 and kT = 1. The elements of the random vector η_t are sampled from a normal distribution with zero mean and unit variance.
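The integration scheme above can be sketched directly. `grad_u` is a user-supplied gradient of the potential; the harmonic well in the usage example is ours for testing purposes, not one of the paper's toy potentials:

```python
import numpy as np

def brownian_dynamics(grad_u, x0, n_steps, dt=1e-3, D=1.0, kT=1.0, seed=0):
    """Forward Euler (Euler-Maruyama) integration of overdamped
    Langevin dynamics:
    x_{t+dt} = x_t - dt*D*grad_u(x_t)/kT + sqrt(2*dt*D)*eta,
    with eta ~ N(0, I), matching Eq. (17) in the text."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    traj = np.empty((n_steps + 1,) + x.shape)
    traj[0] = x
    noise_scale = np.sqrt(2.0 * dt * D)
    for t in range(n_steps):
        x = x - dt * D * grad_u(x) / kT \
            + noise_scale * rng.standard_normal(x.shape)
        traj[t + 1] = x
    return traj

# harmonic well U(x) = x^2/2, so grad U = x; the stationary
# distribution is then N(0, kT) = N(0, 1)
traj = brownian_dynamics(lambda x: x, np.array([2.0]), 100000)
```

After a burn-in, the sampled variance should approach kT = 1, which is a quick sanity check that the integrator is consistent with the stated dimensionless units.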

Hardware used and training times

VAMPnets were trained on a single NVIDIA GeForce GTX 1080 GPU, requiring between 20 seconds per run (for the double well problem) and 180 seconds per run (for NTL9).

Code availability

TICA, k-means and MSM analyses were conducted with PyEMMA version 2.4, freely available at pyemma.org. VAMPnets are implemented using the freely available packages Keras Chollet et al. (2015) with tensorflow-gpu Abadi et al. (2015) as a backend. The code can be obtained at https://github.com/markovmodel/deeptime.

Data availability

Data for NTL9 can be requested from the authors of Lindorff-Larsen et al. (2011). Data for all other examples is available at https://github.com/markovmodel/deeptime.

References

  • Lindorff-Larsen et al. (2011) K. Lindorff-Larsen, S. Piana, R. O. Dror, and D. E. Shaw. How fast-folding proteins fold. Science, 334:517–520, 2011.
  • Plattner et al. (in revision) Nuria Plattner, Stefan Doerr, Gianni De Fabritiis, and Frank Noé. Protein-protein association and binding mechanism resolved in atomic detail. Nat. Chem., in revision.
  • Kohlhoff et al. (2014) K. J. Kohlhoff, D. Shukla, M. Lawrenz, G. R. Bowman, D. E. Konerding, D. Belov, R. B. Altman, and V. S. Pande. Cloud-based simulations on google exacycle reveal ligand modulation of gpcr activation pathways. Nat. Chem., 6:15–21, 2014.
  • Doerr et al. (2016) S. Doerr, M. J. Harvey, F. Noé, and G. De Fabritiis. HTMD: High-Throughput Molecular Dynamics for Molecular Discovery. J. Chem. Theory Comput., 12:1845–1852, 2016.
  • Ufimtsev and Martinez (2008) I. S. Ufimtsev and T. J. Martinez. Graphical processing units for quantum chemistry. Comp. Sci. Eng., 10:26–34, 2008.
  • Marx and Hutter (2000) Dominik Marx and Jürg Hutter. Ab initio molecular dynamics: Theory and implementation. In J. Grotendorst (Ed.), editor, Modern Methods and Algorithms of Quantum Chemistry, volume 1 of NIC Series, pages 301–449. John von Neumann Institute for Computing, Jülich, 2000.
  • Schütte et al. (1999) C. Schütte, A. Fischer, W. Huisinga, and P. Deuflhard. A Direct Approach to Conformational Dynamics based on Hybrid Monte Carlo. J. Comput. Phys., 151:146–168, 1999.
  • Prinz et al. (2011a) J.-H. Prinz, H. Wu, M. Sarich, B. G. Keller, M. Senne, M. Held, J. D. Chodera, C. Schütte, and F. Noé. Markov models of molecular kinetics: Generation and validation. J. Chem. Phys., 134:174105, 2011a.
  • Swope et al. (2004) W. C. Swope, J. W. Pitera, and F. Suits. Describing protein folding kinetics by molecular dynamics simulations: 1. Theory. J. Phys. Chem. B, 108:6571–6581, 2004.
  • Noé et al. (2007) F. Noé, I. Horenko, C. Schütte, and J. C. Smith. Hierarchical Analysis of Conformational Dynamics in Biomolecules: Transition Networks of Metastable States. J. Chem. Phys., 126:155102, 2007.
  • Chodera et al. (2007) J. D. Chodera, K. A. Dill, N. Singhal, V. S. Pande, W. C. Swope, and J. W. Pitera. Automatic discovery of metastable states for the construction of Markov models of macromolecular conformational dynamics. J. Chem. Phys., 126:155101, 2007.
  • Buchete and Hummer (2008) N. V. Buchete and G. Hummer. Coarse Master Equations for Peptide Folding Dynamics. J. Phys. Chem. B, 112:6057–6069, 2008.
  • Scherer et al. (2015) M. K. Scherer, B. Trendelkamp-Schroer, F. Paul, G. Perez-Hernandez, M. Hoffmann, N. Plattner, J.-H. Prinz, and F. Noé. PyEMMA 2: A software package for estimation, validation and analysis of Markov models. J. Chem. Theory Comput., 11:5525–5542, 2015.
  • Harrigan et al. (2017) M. P. Harrigan, M. M. Sultan, C. X. Hernández, B. E. Husic, P. Eastman, C. R. Schwantes, K. A. Beauchamp, R. T. McGibbon, and V. S. Pande. Msmbuilder: Statistical models for biomolecular dynamics. Biophys J., 112:10–15, 2017.
  • Humphrey et al. (1996) W. Humphrey, A. Dalke, and K. Schulten. Vmd - visual molecular dynamics. J. Molec. Graphics, 14:33–38, 1996.
  • McGibbon et al. (2015) R. T. McGibbon, K. A. Beauchamp, M. P. Harrigan, C. Klein, J. M. Swails, C. X. Hernández, C. R. Schwantes, L. P. Wang, T. J. Lane, and V. S. Pande. Mdtraj: A modern open library for the analysis of molecular dynamics trajectories. Biophys J., 109:1528–1532, 2015.
  • Noé and Nüske (2013) F. Noé and F. Nüske. A variational approach to modeling slow processes in stochastic dynamical systems. Multiscale Model. Simul., 11:635–655, 2013.
  • Nüske et al. (2014) F. Nüske, B. G. Keller, G. Pérez-Hernández, A. S. J. S. Mey, and F. Noé. Variational approach to molecular kinetics. J. Chem. Theory Comput., 10:1739–1752, 2014.
  • Perez-Hernandez et al. (2013) G. Perez-Hernandez, F. Paul, T. Giorgino, G. D Fabritiis, and Frank Noé. Identification of slow molecular order parameters for markov model construction. J. Chem. Phys., 139:015102, 2013.
  • Schwantes and Pande (2013) C. R. Schwantes and V. S. Pande. Improvements in markov state model construction reveal many non-native interactions in the folding of ntl9. J. Chem. Theory Comput., 9:2000–2009, 2013.
  • Molgedey and Schuster (1994) L. Molgedey and H. G. Schuster. Separation of a mixture of independent signals using time delayed correlations. Phys. Rev. Lett., 72:3634–3637, 1994. doi: 10.1103/PhysRevLett.72.3634.
  • Ziehe and Müller (1998) A. Ziehe and K.-R. Müller. TDSEP — an efficient algorithm for blind separation using time structure. In ICANN 98, pages 675–680. Springer Science and Business Media, 1998. doi: 10.1007/978-1-4471-1599-1_103.
  • Harmeling et al. (2003) S. Harmeling, A. Ziehe, M. Kawanabe, and K.-R. Müller. Kernel-based nonlinear blind source separation. Neur. Comp., 15:1089–1124, 2003.
  • Mezić (2005) I. Mezić. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dynam., 41:309–325, 2005.
  • Schmid and Sesterhenn (2008) P. J. Schmid and J. Sesterhenn. Dynamic mode decomposition of numerical and experimental data. In 61st Annual Meeting of the APS Division of Fluid Dynamics. American Physical Society, 2008.
  • Tu et al. (2014) J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz. On dynamic mode decomposition: Theory and applications. J. Comput. Dyn., 1(2):391–421, dec 2014. doi: 10.3934/jcd.2014.1.391.
  • Williams et al. (2015) M. O. Williams, I. G. Kevrekidis, and C. W. Rowley. A data–driven approximation of the koopman operator: Extending dynamic mode decomposition. J. Nonlinear Sci., 25:1307–1346, 2015.
  • Wu et al. (2017) H. Wu, F. Nüske, F. Paul, S. Klus, P. Koltai, and F. Noé. Variational koopman models: slow collective variables and molecular kinetics from short off-equilibrium simulations. J. Chem. Phys., 146:154104, 2017.
  • Noé and Clementi (2017) F. Noé and C. Clementi. Collective variables for the study of long-time kinetics from molecular trajectories: theory and methods. Curr. Opin. Struc. Biol., 43:141–147, 2017.
  • Klus et al. (2017) S. Klus, F. Nüske, P. Koltai, H. Wu, I. Kevrekidis, C. Schütte, and F. Noé. Data-driven model reduction and transfer operator approximation. arXiv:1703.10112, 2017.
  • Noé and Clementi (2015) F. Noé and C. Clementi. Kinetic distance and kinetic maps from molecular dynamics simulation. J. Chem. Theory Comput., 11:5002–5011, 2015.
  • Noé et al. (2016) F. Noé, R. Banisch, and C. Clementi. Commute maps: separating slowly-mixing molecular configurations for kinetic modeling. J. Chem. Theory Comput., 12:5620–5630, 2016.
  • Bowman et al. (2014) G. R. Bowman, V. S. Pande, and F. Noé, editors. An Introduction to Markov State Models and Their Application to Long Timescale Molecular Simulation., volume 797 of Advances in Experimental Medicine and Biology. Springer Heidelberg, 2014.
  • Husic and Pande (2017) B. E. Husic and V. S. Pande. Ward clustering improves cross-validated markov state models of protein folding. J. Chem. Theo. Comp., 13:963–967, 2017.
  • Sheong et al. (2015) F. K. Sheong, D.-A. Silva, L. Meng, Y. Zhao, and X. Huang. Automatic State Partitioning for Multibody Systems (APM): An Efficient Algorithm for Constructing Markov State Models To Elucidate Conformational Dynamics of Multibody Systems. J. Chem. Theory Comput., 11:17–27, 2015.
  • Wu and Noé (2015) H. Wu and F. Noé. Gaussian markov transition models of molecular kinetics. J. Chem. Phys., 142:084104, 2015.
  • Weber et al. (2017) M. Weber, K. Fackeldey, and C. Schütte. Set-free markov state model building. J. Chem. Phys., 146:124133, 2017.
  • Bowman et al. (2009) G. R. Bowman, K. A. Beauchamp, G. Boxer, and V. S. Pande. Progress and challenges in the automated construction of Markov state models for full protein systems. J. Chem. Phys., 131:124101, 2009.
  • Trendelkamp-Schroer et al. (2015) B. Trendelkamp-Schroer, H. Wu, F. Paul, and F. Noé. Estimation and uncertainty of reversible markov models. J. Chem. Phys., 143:174101, 2015.
  • Kube and Weber (2007) S. Kube and M. Weber. A coarse graining method for the identification of transition rates between molecular conformations. J. Chem. Phys., 126:024103, 2007. doi: 10.1063/1.2404953. URL http://dx.doi.org/10.1063/1.2404953.
  • Yao et al. (2013) Y. Yao, R. Z. Cui, G. R. Bowman, D.-A. Silva, J. Sun, and X. Huang. Hierarchical nyström methods for constructing markov state models for conformational dynamics. J. Chem. Phys., 138:174106, 2013.
  • Fackeldey and Weber (2017) K. Fackeldey and M. Weber. Genpcca – markov state models for non-equilibrium steady states. WIAS Report, 29:70–80, 2017.
  • Gerber and Horenko (2017) S. Gerber and I. Horenko. Toward a direct and scalable identification of reduced models for categorical processes. Proc. Natl. Acad. Sci. USA, 114:4863–4868, 2017.
  • Hummer and Szabo (2015) G. Hummer and A. Szabo. Optimal dimensionality reduction of multistate kinetic and markov-state models. J. Phys. Chem. B, 119:9029–9037, 2015.
  • Orioli and Faccioli (2016) S. Orioli and P. Faccioli. Dimensional reduction of markov state models from renormalization group theory. J. Chem. Phys., 145:124120, 2016.
  • Noé et al. (2013) F. Noé, H. Wu, J.-H. Prinz, and N. Plattner. Projected and hidden markov models for calculating kinetics and metastable states of complex molecules. J. Chem. Phys., 139:184114, 2013.
  • Wu and Noé (2017) H. Wu and F. Noé. Variational approach for learning markov processes from time series data. arXiv, 2017.
  • McGibbon and Pande (2015) R. T. McGibbon and V. S. Pande. Variational cross-validation of slow dynamical modes in molecular kinetics. J. Chem. Phys., 142:124105, 2015.
  • LeCun et al. (2015) Y. LeCun, Y. Bengio, and G. E. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
  • Krizhevsky et al. (2012) A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. NIPS, pages 1097–1105, 2012.
  • Mnih et al. (2015) V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, A. K. Fidjeland, M. Riedmiller, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015.
  • Perez-Hernandez and Noé (2016) G. Perez-Hernandez and F. Noé. Hierarchical time-lagged independent component analysis: computing slow modes and reaction coordinates for large molecular systems. J. Chem. Theory Comput., 12:6118–6129, 2016.
  • Nüske et al. (2016) F. Nüske, R. Schneider, F. Vitalini, and F. Noé. Variational tensor approach for approximating the rare-event kinetics of macromolecular systems. J. Chem. Phys., 144:054105, 2016.
  • Koopman (1931) B.O. Koopman. Hamiltonian systems and transformations in hilbert space. Proc. Natl. Acad. Sci. USA, 17:315–318, 1931.
  • Knoch and Speck (2015) F. Knoch and T. Speck. Cycle representatives for the coarse-graining of systems driven into a non-equilibrium steady state. New J. Phys., 17:115004, 2015.
  • Wang and Schütte (2015) H. Wang and C. Schütte. Building markov state models for periodically driven non-equilibrium systems. J. Chem. Theory Comput., 11:1819–1831, 2015.
  • Horenko et al. (2007) I. Horenko, C. Hartmann, C. Schütte, and F. Noé. Data-based parameter estimation of generalized multidimensional Langevin processes. Phys. Rev. E, 76:016706, 2007.
  • Cybenko (1989) G. Cybenko. Approximation by superpositions of a sigmoidal function. Math. Control Signals, 2(4):303–314, 1989.
  • Eigen et al. (2014) D. Eigen, J. Rolfe, R. Fergus, and Y. LeCun. Understanding deep architectures using a recursive convolutional network. ICLR, 2014.
  • Ranzato et al. (2006) M. Ranzato, C. Poultney, S. Chopra, and Y. LeCun. Efficient learning of sparse representations with an energy-based model. In J. Platt et al., editor, Adv. Neural Inf. Process. Syst. (NIPS). MIT Press, 2006.
  • Bengio et al. (2007) Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In Adv. Neural Inf. Process. Syst. (NIPS), volume 19, page 153. MIT Press, 2007.
  • Galen et al. (2013) A. Galen, R. Arora, J. Bilmes, and K. Livescu. Deep canonical correlation analysis. ICML Proc., 28, 2013.
  • Röblitz and Weber (2013) S. Röblitz and M. Weber. Fuzzy spectral clustering by PCCA+: application to Markov state models and data classification. Adv. Data Anal. Classif., 7:147–179, 2013.
  • Sarich et al. (2010) M. Sarich, F. Noé, and C. Schütte. On the approximation quality of markov state models. Multiscale Model. Simul., 8:1154–1177, 2010.
  • Noé et al. (2009) F. Noé, C. Schütte, E. Vanden-Eijnden, L. Reich, and T. R. Weikl. Constructing the full ensemble of folding pathways from short off-equilibrium simulations. Proc. Natl. Acad. Sci. USA, 106:19011–19016, 2009.
  • Hahnloser (1998) R. L. T. Hahnloser. On the piecewise analysis of networks of linear threshold neurons. Neural Netw., 11(4):691–697, 1998.
  • Nair and Hinton (2010) V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML Proc., volume 27, pages 807–814, 2010.
  • Harrigan and Pande (2017) M. P. Harrigan and V. S. Pande. Landmark kernel tica for conformational dynamics. bioRxiv, 123752, 2017.
  • Kingma and Ba (2014) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv.org:1412.6980, 2014.
  • Nüske et al. (2017) F. Nüske, H. Wu, C. Wehmeyer, C. Clementi, and F. Noé. Markov state models from short non-equilibrium simulations - analysis and correction of estimation bias. arXiv:1701.01665, 2017.
  • Wu et al. (2016) H. Wu, F. Paul, C. Wehmeyer, and F. Noé. Multiensemble markov models of molecular thermodynamics and kinetics. Proc. Natl. Acad. Sci. USA, 113(23):E3221–E3230, 2016. doi: 10.1073/pnas.1525092113.
  • Wu et al. (2014) H. Wu, A. S. J. S. Mey, E. Rosta, and F. Noé. Statistically optimal analysis of state-discretized trajectory data from multiple thermodynamic states. J. Chem. Phys., 141:214106, 2014.
  • Chodera et al. (2011) J. D. Chodera, W. C. Swope, F. Noé, J.-H. Prinz, and V. S. Pande. Dynamical reweighting: Improved estimates of dynamical properties from simulations at multiple temperatures. J. Phys. Chem., 134:244107, 2011.
  • Prinz et al. (2011b) J.-H. Prinz, J. D. Chodera, V. S. Pande, W. C. Swope, J. C. Smith, and F. Noé. Optimal use of data in parallel tempering simulations for the construction of discrete-state markov models of biomolecular dynamics. J. Chem. Phys., 134:244108, 2011b.
  • Rosta and Hummer (2015) E. Rosta and G. Hummer. Free energies from dynamic weighted histogram analysis using unbiased markov state model. J. Chem. Theory Comput., 11:276–285, 2015.
  • Mey et al. (2014) A. S. J. S. Mey, H. Wu, and F. Noé. xTRAM: Estimating equilibrium expectations from time-correlated simulation data at multiple thermodynamic states. Phys. Rev. X, 4:041018, 2014.
  • Olsson et al. (in press) S. Olsson, H. Wu, F. Paul, C. Clementi, and F. Noé. Combining experimental and simulation data of molecular processes via augmented markov models. Proc. Natl. Acad. Sci. USA, in press.
  • Hinrichs and Pande (2007) N. S. Hinrichs and V. S. Pande. Calculation of the distribution of eigenvalues and eigenvectors in Markovian state models for molecular dynamics. J. Chem. Phys., 126:244101, 2007.
  • Noé (2008) F. Noé. Probability Distributions of Molecular Observables computed from Markov Models. J. Chem. Phys., 128:244103, 2008.
  • Chodera and Noé (2010) J. D. Chodera and F. Noé. Probability distributions of molecular observables computed from markov models. ii: Uncertainties in observables and their time-evolution. J. Chem. Phys., 133:105102, 2010.
  • LeCun et al. (1998) Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86:2278–2324, 1998.
  • Schütt et al. (2017) K. T. Schütt, P.-J. Kindermans, H. E. Sauceda, S. Chmiela, A. Tkatchenko, and K.-R. Müller. Moleculenet: A continuous-filter convolutional neural network for modeling quantum interactions. arXiv:1706.08566, 2017.
  • Srivastava et al. (2014) N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15:1929–1958, 2014.
  • Chollet et al. (2015) François Chollet et al. Keras. https://github.com/fchollet/keras, 2015.
  • Abadi et al. (2015) Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems. arXiv.org:1603.04467, 2015.