deeptime
Deep learning meets molecular dynamics.
Here we develop a deep learning framework for molecular kinetics from molecular dynamics (MD) simulation data. There is an increasing demand for computing the relevant structures, equilibria and long-timescale kinetics of complex biomolecular processes, such as protein-drug binding, from high-throughput MD simulations. State-of-the-art methods employ a handcrafted data processing pipeline, involving (i) transformation of simulated coordinates into a set of features characterizing the molecular structure, (ii) dimension reduction to collective variables, (iii) clustering of the dimension-reduced data, and (iv) estimation of a Markov state model (MSM) or related model of the interconversion rates between molecular structures. This approach demands a substantial amount of modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the recently developed variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states and learns optimal feature transformations, nonlinear dimension reduction, cluster discretization and MSM estimation within a single end-to-end learning framework. Our results, ranging from toy models to protein folding, are competitive with or outperform state-of-the-art Markov modeling methods and readily provide easily interpretable few-state kinetic models.
The rapid advances in computing power and simulation technologies
for molecular dynamics (MD) of biomolecules and fluids Lindorff-Larsen et al. (2011); Plattner et al. (in revision); Kohlhoff et al. (2014); Doerr et al. (2016),
and ab initio MD of small molecules and materials Ufimtsev and Martinez (2008); Marx and Hutter (2000),
allow the generation of extensive simulation data of complex molecular
systems. Thus, it is of high interest to automatically extract statistically
relevant information, including stationary, kinetic and mechanistic
properties.
The Markov modeling approach Schütte et al. (1999); Prinz et al. (2011a); Swope et al. (2004); Noé et al. (2007); Chodera et al. (2007); Buchete and Hummer (2008)
has been a driving force in the development of kinetic modeling techniques
from MD mass data, chiefly as it facilitates a divide-and-conquer
approach to integrate short, distributed MD simulations into a model
of the long-timescale behavior. State-of-the-art analysis approaches
and software packages Scherer et al. (2015); Doerr et al. (2016); Harrigan et al. (2017)
operate by a sequence, or pipeline, of multiple processing steps,
that has been engineered by practitioners over the last decade. The
first step of a typical processing pipeline is featurization, where
the MD coordinates are either aligned (removing translation and rotation
of the molecule of interest) or transformed into internal coordinates
such as residue distances, contact maps or torsion angles Humphrey et al. (1996); McGibbon et al. (2015); Scherer et al. (2015); Doerr et al. (2016).
This is followed by a dimension reduction, in which the dimension
is reduced to much fewer (typically 2-100) slow collective variables
(CVs), often based on the variational approach of conformation dynamics
Noé and Nüske (2013); Nüske et al. (2014)
, time-lagged independent component analysis (TICA)
Perez-Hernandez et al. (2013); Schwantes and Pande (2013), blind source separation Molgedey and Schuster (1994); Ziehe and Müller (1998); Harmeling et al. (2003) or dynamic mode decomposition Mezić (2005); Schmid and Sesterhenn (2008); Tu et al. (2014); Williams et al. (2015); Wu et al. (2017) – see Noé and Clementi (2017); Klus et al. (2017) for an overview. The resulting coordinates may be scaled, in order to embed them in a metric space whose distances correspond to some form of dynamical distance Noé and Clementi (2015); Noé et al. (2016). The resulting metric space is discretized by clustering the projected data using hard or fuzzy data-based clustering methods Bowman et al. (2014); Scherer et al. (2015); Husic and Pande (2017); Sheong et al. (2015); Chodera et al. (2007); Wu and Noé (2015); Weber et al. (2017), typically resulting in 100-1000 discrete states. A transition matrix or rate matrix describing the transition probabilities or rate between the discrete states at some lag time
is then estimated Bowman et al. (2009); Prinz et al. (2011a); Buchete and Hummer (2008); Trendelkamp-Schroer et al. (2015) (alternatively, a Koopman model can be built after the dimension reduction Williams et al. (2015); Wu et al. (2017)). The final step towards an easily interpretable kinetic model is coarse-graining of the estimated Markov state model (MSM) down to a few states Kube and Weber (2007); Yao et al. (2013); Fackeldey and Weber (2017); Gerber and Horenko (2017); Hummer and Szabo (2015); Orioli and Faccioli (2016); Noé et al. (2013).

This sequence of analysis steps has been developed by combining physico-chemical intuition and technical experience gathered in the last 10 years. Although each of the steps in the above pipeline appears meaningful, there is no fundamental reason why this or any other given analysis pipeline should be optimal. More dramatically, the success of kinetic modeling currently relies on substantial technical expertise of the modeler, as suboptimal decisions in each step may deteriorate the result. As an example, failure to select suitable features in step 1 will almost certainly lead to large modeling errors.
An important step towards selecting optimal models (parameters) and modeling procedures (hyper-parameters) has been the development of the variational approach for conformation dynamics (VAC) Noé and Nüske (2013); Nüske et al. (2014), which offers a way to define scores that measure the optimality of a given kinetic model compared to the (unknown) MD operator that governs the true kinetics underlying the data. The VAC has recently been generalized to the variational approach for Markov processes (VAMP), which makes it possible to optimize models of arbitrary Markov processes, including nonreversible and non-stationary dynamics Wu and Noé (2017). The VAC has been employed with cross-validation in order to make optimal hyper-parameter choices within the analysis pipeline described above while avoiding overfitting McGibbon and Pande (2015); Husic and Pande (2017). However, a variational score is not only useful to optimize the steps of a given analysis pipeline, but in fact allows us to replace the entire pipeline with a more general learning structure.
Here we develop a deep learning structure that is in principle able to replace the entire analysis pipeline above. Deep learning has been very successful in a broad range of data analysis and learning problems LeCun et al. (2015); Krizhevsky et al. (2012); Mnih et al. (2015). A feedforward deep neural network is a structure that can learn a complex, nonlinear function $\mathbf{y} = F(\mathbf{x})$. In order to train the network, a scoring or loss function is needed that is maximized or minimized, respectively. Here we develop
VAMPnets, a neural network architecture that can be trained by maximizing a VAMP variational score. VAMPnets contain two network lobes that transform the molecular configurations found at a time delay $\tau$ along the simulation trajectories. Compared to previous attempts to include “depth” or “hierarchy” into the analysis method Perez-Hernandez and Noé (2016); Nüske et al. (2016), VAMPnets combine the tasks of featurization, dimension reduction, discretization and coarse-grained kinetic modeling into a single end-to-end learning framework. We demonstrate the performance of our networks using a variety of stochastic models and datasets, including a protein folding dataset. The results are competitive with and sometimes surpass the state-of-the-art handcrafted analysis pipeline. Given the rapid improvements of training efficiency and accuracy of deep neural networks seen in a broad range of disciplines, it is likely that follow-up works can lead to superior kinetic models.

MD can be theoretically described as a Markov process in the full state space $\Omega$. For a given potential energy function, simulation setup (e.g. periodic boundaries) and time-step integrator used, the dynamics are fully characterized by a transition density $p_\tau(\mathbf{x}, \mathbf{y})$, i.e. the probability density that an MD trajectory will be found at configuration $\mathbf{y}$ at time $t+\tau$ given that it was at configuration $\mathbf{x}$ at time $t$. Markovianity implies that $\mathbf{x}_{t+\tau}$ can be sampled by knowing $\mathbf{x}_t$ alone, without knowledge of previous time steps. While the dynamics might be highly nonlinear in the variables $\mathbf{x}_t$, Koopman theory Koopman (1931); Mezić (2005)
tells us that there is a transformation of the original variables into some features or latent variables that, on average, evolve according to a linear transformation. In mathematical terms, there exist transformations to $m$ features or latent variables, $\boldsymbol{\chi}_0(\mathbf{x})$ and $\boldsymbol{\chi}_1(\mathbf{x})$, such that the dynamics in these variables are approximately governed by the matrix $\mathbf{K} \in \mathbb{R}^{m \times m}$:

$$\mathbb{E}\left[\boldsymbol{\chi}_1(\mathbf{x}_{t+\tau})\right] \approx \mathbf{K}^{\top}\, \mathbb{E}\left[\boldsymbol{\chi}_0(\mathbf{x}_t)\right] \qquad (1)$$
This approximation becomes exact in the limit of an infinitely large set of features $\boldsymbol{\chi}_0$, $\boldsymbol{\chi}_1$ ($m \to \infty$), but for a sufficiently large lag time $\tau$ the approximation can be excellent with low-dimensional feature transformations, as we will demonstrate below. The expectation values $\mathbb{E}[\cdot]$ account for stochasticity in the dynamics, such as in MD, and thus they can be omitted for deterministic dynamical systems Mezić (2005); Tu et al. (2014); Williams et al. (2015).
To illustrate the meaning of Eq. (1), consider the example of $\mathbf{x}_t$ being a discrete-state Markov chain. If we choose the feature transformations to be indicator functions ($\chi_{0,i}(\mathbf{x}_t) = 1$ when $\mathbf{x}_t = i$ and 0 otherwise, and correspondingly for $\boldsymbol{\chi}_1$ with $\mathbf{x}_{t+\tau}$), their expectation values are equal to the probabilities of the chain to be in any given state, $\mathbf{p}(t)$ and $\mathbf{p}(t+\tau)$, and $\mathbf{K}$ is equal to the matrix of transition probabilities, i.e. $\mathbf{p}^{\top}(t+\tau) = \mathbf{p}^{\top}(t)\,\mathbf{K}$. Previous papers on MD kinetics have usually employed a propagator or transfer operator formulation instead of (1) Schütte et al. (1999); Prinz et al. (2011a). However, the above formulation is more powerful, as it also applies to nonreversible and non-stationary dynamics, as found for MD of molecules subject to external force, such as voltage, flow, or radiation Knoch and Speck (2015); Wang and Schütte (2015).

A central result of the VAMP theory is that the best finite-dimensional linear model, i.e. the best approximation in Eq. (1), is found when the subspaces spanned by $\boldsymbol{\chi}_0$ and $\boldsymbol{\chi}_1$ are identical to those spanned by the top $m$ left and right singular functions, respectively, of the so-called Koopman operator Wu and Noé (2017). For an introduction to the Koopman operator, please refer to Koopman (1931); Mezić (2005); Klus et al. (2017).
How do we choose $\boldsymbol{\chi}_0$, $\boldsymbol{\chi}_1$ and $\mathbf{K}$ from data? First, suppose we are given some feature transformations $\boldsymbol{\chi}_0$ and $\boldsymbol{\chi}_1$, and define the following covariance matrices:
$$\mathbf{C}_{00} = \mathbb{E}_t\!\left[\boldsymbol{\chi}_0(\mathbf{x}_t)\,\boldsymbol{\chi}_0(\mathbf{x}_t)^{\top}\right] \qquad (2)$$
$$\mathbf{C}_{01} = \mathbb{E}_t\!\left[\boldsymbol{\chi}_0(\mathbf{x}_t)\,\boldsymbol{\chi}_1(\mathbf{x}_{t+\tau})^{\top}\right] \qquad (3)$$
$$\mathbf{C}_{11} = \mathbb{E}_{t+\tau}\!\left[\boldsymbol{\chi}_1(\mathbf{x}_{t+\tau})\,\boldsymbol{\chi}_1(\mathbf{x}_{t+\tau})^{\top}\right] \qquad (4)$$
where $\mathbb{E}_t[\cdot]$ and $\mathbb{E}_{t+\tau}[\cdot]$ denote averages that extend over time points and lagged time points within trajectories, respectively, and across trajectories. Then the optimal $\mathbf{K}$ that minimizes the least-squares error $\mathbb{E}\big[\|\boldsymbol{\chi}_1(\mathbf{x}_{t+\tau}) - \mathbf{K}^{\top}\boldsymbol{\chi}_0(\mathbf{x}_t)\|^2\big]$ is Williams et al. (2015); Wu and Noé (2017); Horenko et al. (2007):
$$\mathbf{K} = \mathbf{C}_{00}^{-1}\,\mathbf{C}_{01} \qquad (5)$$
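To make Eqs. (2)-(5) concrete, the following sketch estimates the covariance matrices and the Koopman matrix from a single feature trajectory using numpy; the function name and the simple time-average estimator are illustrative assumptions rather than the implementation used in this work.

```python
import numpy as np

def estimate_koopman_matrix(chi0_traj, chi1_traj, lag):
    """Estimate K from feature time series (Eqs. 2-5).

    chi0_traj, chi1_traj: arrays of shape (T, m) with chi_0(x_t) and
    chi_1(x_t) evaluated along one trajectory; lag is tau in steps.
    """
    X = chi0_traj[:-lag]              # chi_0(x_t)
    Y = chi1_traj[lag:]               # chi_1(x_{t+tau})
    n = X.shape[0]                    # number of transition pairs
    C00 = X.T @ X / n                 # Eq. (2)
    C01 = X.T @ Y / n                 # Eq. (3)
    C11 = Y.T @ Y / n                 # Eq. (4)
    K = np.linalg.inv(C00) @ C01      # Eq. (5)
    return K, (C00, C01, C11)
```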
Now the remaining problem is how to find suitable transformations $\boldsymbol{\chi}_0$, $\boldsymbol{\chi}_1$. This problem cannot be solved by minimizing the least-squares error above, as is illustrated by the following example: Suppose we define $\boldsymbol{\chi}_0(\mathbf{x}) = \boldsymbol{\chi}_1(\mathbf{x}) = 1$, i.e. we just map the state space to the constant 1 – in this case the least-squares error is 0 for $\mathbf{K} = [1]$, but the model is completely uninformative as all dynamical information is lost.
Instead, in order to seek $\boldsymbol{\chi}_0$ and $\boldsymbol{\chi}_1$ based on available simulation data, we employ the VAMP theorem introduced in Ref. Wu and Noé (2017), which can be equivalently formulated as the following subspace version:
VAMP variational principle: For any two sets of linearly independent functions $\chi_{0,1}(\mathbf{x}), \ldots, \chi_{0,m}(\mathbf{x})$ and $\chi_{1,1}(\mathbf{x}), \ldots, \chi_{1,m}(\mathbf{x})$, let us call

$$\hat{R}_2\left[\boldsymbol{\chi}_0, \boldsymbol{\chi}_1\right] = \left\| \mathbf{C}_{00}^{-1/2}\, \mathbf{C}_{01}\, \mathbf{C}_{11}^{-1/2} \right\|_F^2$$

their VAMP-2 score, where $\mathbf{C}_{00}$, $\mathbf{C}_{01}$ and $\mathbf{C}_{11}$ are defined by Eqs. (2-4) and $\|\cdot\|_F$ is the Frobenius norm of a matrix. The maximum value of the VAMP-2 score is achieved when the top $m$ left and right Koopman singular functions belong to $\mathrm{span}(\chi_{0,1}, \ldots, \chi_{0,m})$ and $\mathrm{span}(\chi_{1,1}, \ldots, \chi_{1,m})$, respectively.
This variational theorem shows that the VAMP-2 score measures the consistency between subspaces of basis functions and those of dominant singular functions, and we can therefore optimize $\boldsymbol{\chi}_0$ and $\boldsymbol{\chi}_1$ by maximizing the VAMP-2 score. In the special case where the dynamics are reversible with respect to the equilibrium distribution, the theorem above specializes to the variational principle for reversible Markov processes Noé and Nüske (2013); Nüske et al. (2014).
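As an illustration of the variational principle, the VAMP-2 score can be evaluated directly from the covariance matrices of Eqs. (2)-(4); in the sketch below the inverse matrix square roots are obtained by a symmetric eigendecomposition. This is only an illustration of the scoring formula, not the code used for the results.

```python
import numpy as np

def _inv_sqrt(C, eps=1e-10):
    """Inverse square root of a symmetric, positive semi-definite matrix."""
    s, U = np.linalg.eigh(C)
    keep = s > eps                           # discard near-singular directions
    return U[:, keep] @ np.diag(s[keep] ** -0.5) @ U[:, keep].T

def vamp2_score(C00, C01, C11):
    """VAMP-2 score ||C00^{-1/2} C01 C11^{-1/2}||_F^2."""
    M = _inv_sqrt(C00) @ C01 @ _inv_sqrt(C11)
    return np.sum(M ** 2)                    # squared Frobenius norm
```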
Here we employ neural networks to find an optimal set of basis functions, $\boldsymbol{\chi}_0$ and $\boldsymbol{\chi}_1$. Neural networks with at least one hidden layer are universal function approximators Cybenko (1989), and deep networks can express strongly nonlinear functions with fairly few neurons per layer Eigen et al. (2014). Our networks use VAMP as a guiding principle and are hence called VAMPnets. VAMPnets consist of two parallel lobes, each receiving the coordinates of time-lagged MD configurations $\mathbf{x}_t$ and $\mathbf{x}_{t+\tau}$ as input (Fig. 1). The lobes have $m$ output nodes and are trained to learn the transformations $\boldsymbol{\chi}_0$ and $\boldsymbol{\chi}_1$, respectively. For a given set of transformations, $\boldsymbol{\chi}_0$ and $\boldsymbol{\chi}_1$, we pass a batch of training data through the network and compute the training VAMP-2 score of our choice. VAMPnets bear similarities with auto-encoders Ranzato et al. (2006); Bengio et al. (2007) using a time-delay embedding and are closely related to deep Canonical Correlation Analysis (CCA) Galen et al. (2013). VAMPnets are identical to deep CCA with time-delay embedding when using the VAMP-1 score discussed in Wu and Noé (2017); however, the VAMP-2 score has easier-to-handle gradients and is more suitable for time series data, due to its direct relation to the Koopman approximation error Wu and Noé (2017).

The first left and right singular functions of the Koopman operator are always equal to the constant function $\mathbb{1}(\mathbf{x}) \equiv 1$ Wu and Noé (2017). We can thus add $\mathbb{1}$ to the basis functions and train the network by maximizing
$$\hat{R}_2 = \left\| \bar{\mathbf{C}}_{00}^{-1/2}\, \bar{\mathbf{C}}_{01}\, \bar{\mathbf{C}}_{11}^{-1/2} \right\|_F^2 + 1 \qquad (6)$$
where $\bar{\mathbf{C}}_{00}$, $\bar{\mathbf{C}}_{01}$ and $\bar{\mathbf{C}}_{11}$ are mean-free covariances of the feature-transformed coordinates:
$$\bar{\mathbf{C}}_{00} = \frac{1}{T-1}\, \bar{\mathbf{X}}\bar{\mathbf{X}}^{\top} \qquad (7)$$
$$\bar{\mathbf{C}}_{01} = \frac{1}{T-1}\, \bar{\mathbf{X}}\bar{\mathbf{Y}}^{\top} \qquad (8)$$
$$\bar{\mathbf{C}}_{11} = \frac{1}{T-1}\, \bar{\mathbf{Y}}\bar{\mathbf{Y}}^{\top} \qquad (9)$$
Here we have defined the matrices $\mathbf{X} = \left[\boldsymbol{\chi}_0(\mathbf{x}_{t_1}), \ldots, \boldsymbol{\chi}_0(\mathbf{x}_{t_T})\right]$ and $\mathbf{Y} = \left[\boldsymbol{\chi}_1(\mathbf{x}_{t_1+\tau}), \ldots, \boldsymbol{\chi}_1(\mathbf{x}_{t_T+\tau})\right]$, with $T$ representing the number of all available transition pairs, and their mean-free versions $\bar{\mathbf{X}}$, $\bar{\mathbf{Y}}$. The gradients of $\hat{R}_2$ are given by:
$$\frac{\partial \hat{R}_2}{\partial \bar{\mathbf{X}}} = \frac{2}{T-1}\left( \bar{\mathbf{C}}_{00}^{-1}\bar{\mathbf{C}}_{01}\bar{\mathbf{C}}_{11}^{-1}\,\bar{\mathbf{Y}} \;-\; \bar{\mathbf{C}}_{00}^{-1}\bar{\mathbf{C}}_{01}\bar{\mathbf{C}}_{11}^{-1}\bar{\mathbf{C}}_{01}^{\top}\bar{\mathbf{C}}_{00}^{-1}\,\bar{\mathbf{X}} \right) \qquad (10)$$
$$\frac{\partial \hat{R}_2}{\partial \bar{\mathbf{Y}}} = \frac{2}{T-1}\left( \bar{\mathbf{C}}_{11}^{-1}\bar{\mathbf{C}}_{01}^{\top}\bar{\mathbf{C}}_{00}^{-1}\,\bar{\mathbf{X}} \;-\; \bar{\mathbf{C}}_{11}^{-1}\bar{\mathbf{C}}_{01}^{\top}\bar{\mathbf{C}}_{00}^{-1}\bar{\mathbf{C}}_{01}\bar{\mathbf{C}}_{11}^{-1}\,\bar{\mathbf{Y}} \right) \qquad (11)$$
and are back-propagated to train the two network lobes. See Supplementary Note 1 for derivations of Eqs. (6,10,11).
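In practice, the mean-free covariances of Eqs. (7)-(9) and the training score of Eq. (6) are assembled from a batch of transformed configuration pairs; in a framework such as TensorFlow, gradients of the form of Eqs. (10) and (11) are then typically produced by automatic differentiation of this expression rather than coded by hand. The following numpy sketch (reusing the `_inv_sqrt` helper from the previous sketch) is an assumed illustration, not the authors' training code.

```python
import numpy as np

def vamp2_training_score(chi_x, chi_y):
    """Eq. (6) from batches chi_x = chi_0(x_t), chi_y = chi_1(x_{t+tau}), shape (T, m)."""
    T = chi_x.shape[0]
    Xbar = (chi_x - chi_x.mean(axis=0)).T          # mean-free, shape (m, T)
    Ybar = (chi_y - chi_y.mean(axis=0)).T
    C00 = Xbar @ Xbar.T / (T - 1)                  # Eq. (7)
    C01 = Xbar @ Ybar.T / (T - 1)                  # Eq. (8)
    C11 = Ybar @ Ybar.T / (T - 1)                  # Eq. (9)
    M = _inv_sqrt(C00) @ C01 @ _inv_sqrt(C11)      # _inv_sqrt as defined above
    return np.sum(M ** 2) + 1.0                    # Eq. (6)
```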
For simplicity of interpretation we may just use a unique basis set $\boldsymbol{\chi} = \boldsymbol{\chi}_0 = \boldsymbol{\chi}_1$. Even when using two different basis sets would be meaningful, we can unify them by simply defining $\boldsymbol{\chi} = (\boldsymbol{\chi}_0, \boldsymbol{\chi}_1)$. In this case, we clone the lobes of the network and train them using the total gradient $\partial \hat{R}_2 / \partial \bar{\mathbf{X}} + \partial \hat{R}_2 / \partial \bar{\mathbf{Y}}$.
After training, we assess the quality of the learned features and select hyper-parameters (e.g. network size) while avoiding overfitting using the VAMP-2 validation score
$$\hat{R}_2^{\mathrm{val}} = \left\| \left(\bar{\mathbf{C}}_{00}^{\mathrm{val}}\right)^{-1/2} \bar{\mathbf{C}}_{01}^{\mathrm{val}} \left(\bar{\mathbf{C}}_{11}^{\mathrm{val}}\right)^{-1/2} \right\|_F^2 + 1 \qquad (12)$$
where $\bar{\mathbf{C}}_{00}^{\mathrm{val}}$, $\bar{\mathbf{C}}_{01}^{\mathrm{val}}$ and $\bar{\mathbf{C}}_{11}^{\mathrm{val}}$ are mean-free covariance matrices computed from a validation data set not used during the training.
The direct estimate of the time-lagged covariance matrix $\bar{\mathbf{C}}_{01}$ is generally nonsymmetric. Hence the Koopman model or Markov state model given by Eq. (5) is typically not time-reversible Wu et al. (2017). In MD, it is often desirable to obtain a time-reversible kinetic model – see Trendelkamp-Schroer et al. (2015) for a detailed discussion. To enforce reversibility, the covariance estimation can be reweighted as described in Wu et al. (2017) and implemented in PyEMMA Scherer et al. (2015). The present results do not depend on enforcing reversibility, as classical analyses such as PCCA+ Röblitz and Weber (2013) are avoided based on the VAMPnet structure itself.
Since $\mathbf{K}(\tau)$ is a Markovian model, it is expected to fulfill the Chapman-Kolmogorov (CK) equation:
$$\mathbf{K}(n\tau) = \mathbf{K}^{n}(\tau) \qquad (13)$$
for any value of $n \ge 1$, where $\mathbf{K}(\tau)$ and $\mathbf{K}(n\tau)$ indicate the models estimated at a lag time of $\tau$ and $n\tau$, respectively. However, since any Markovian model of MD can be only approximate Sarich et al. (2010); Prinz et al. (2011a), Eq. (13) can only be fulfilled approximately, and the relevant test is whether it holds within statistical uncertainty. We construct two tests based on Eq. (13): In order to select a suitable dynamical model, we will proceed as for Markov state models by conducting an eigenvalue decomposition for every estimated Koopman matrix, $\mathbf{K}(\tau)\,\mathbf{r}_i = \lambda_i(\tau)\,\mathbf{r}_i$, and computing the implied timescales Swope et al. (2004) as a function of lag time:

$$t_i(\tau) = -\frac{\tau}{\ln\left|\lambda_i(\tau)\right|} \qquad (14)$$
We chose a value $\tau$ at which the timescales $t_i(\tau)$ are approximately constant in $\tau$. After having chosen $\tau$, we will test whether Eq. (13) holds within statistical uncertainty Noé et al. (2009). For both the implied timescales and the CK-test we proceed as follows: we train the neural network at a fixed lag time $\tau$, thus obtaining the network transformation $\boldsymbol{\chi}$, and then compute Eq. (13) or Eq. (14) for different lag times with this fixed transformation $\boldsymbol{\chi}$.
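With the network transformation held fixed, both tests amount to re-estimating the Koopman matrix at several lag times. A minimal sketch, reusing the `estimate_koopman_matrix` helper defined earlier (an assumed helper, not part of the published code), could look as follows:

```python
import numpy as np

def implied_timescales(chi_traj, lags):
    """Implied timescales t_i(tau) (Eq. 14) for a fixed transformation chi."""
    its = []
    for lag in lags:
        K, _ = estimate_koopman_matrix(chi_traj, chi_traj, lag)
        ev = np.sort(np.abs(np.linalg.eigvals(K)))[::-1]
        its.append(-lag / np.log(ev[1:]))          # skip the stationary eigenvalue ~1
    return np.array(its)

def ck_test(chi_traj, tau, n):
    """Compare the predicted K(tau)^n with the re-estimated K(n*tau) (Eq. 13)."""
    K_tau, _ = estimate_koopman_matrix(chi_traj, chi_traj, tau)
    K_ntau, _ = estimate_koopman_matrix(chi_traj, chi_traj, n * tau)
    return np.linalg.matrix_power(K_tau, n), K_ntau
```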
Finally, the approximation of the $i$th eigenfunction is given by

$$\hat{\psi}_i(\mathbf{x}) = \sum_j r_{ij}\, \chi_j(\mathbf{x}) \qquad (15)$$

where $r_{ij}$ denotes the $j$th component of the $i$th eigenvector $\mathbf{r}_i$ of $\mathbf{K}$.
If the dynamics are reversible, the singular value decomposition and the eigenvalue decomposition are identical, i.e. the singular functions and eigenfunctions coincide, as do the singular values and eigenvalues.

We use VAMPnets to learn molecular kinetics from simulation data of a range of model systems. While any neural network architecture can be employed inside the VAMPnet lobes, we chose the following setup for our applications: the two network lobes are identical clones, i.e. $\boldsymbol{\chi}_0 = \boldsymbol{\chi}_1 = \boldsymbol{\chi}$, and consist of fully connected networks. In most cases, the networks have fewer output than input nodes, i.e. the network conducts a dimension reduction. In order to divide the work equally between network layers, we reduce the number of nodes from each layer to the next by a constant factor. Thus, the network architecture is defined by two parameters: the depth $d$ and the number of output nodes $n_{\mathrm{out}}$. All hidden layers employ Rectified Linear Units (ReLU)
Hahnloser (1998); Nair and Hinton (2010).

Here, we build the output layer with Softmax output nodes, i.e. $\chi_i(\mathbf{x}) \ge 0$ for all $i$ and $\sum_i \chi_i(\mathbf{x}) = 1$. Therefore, the activation of an output node can be interpreted as a probability to be in state $i$. As a result, the network effectively performs featurization, dimension reduction and finally a fuzzy clustering to metastable states, and the matrix $\mathbf{K}$ computed from the network-transformed data is the transition matrix of a fuzzy MSM Wu and Noé (2015); Weber et al. (2017); Harrigan and Pande (2017). Consequently, Eq. (1) propagates probability distributions in time.
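As an illustration of this setup, a single lobe with fully connected ReLU layers and a Softmax output, shared between the two input branches, could be defined in Keras roughly as follows; the layer sizes are placeholders chosen according to the constant-reduction scheme of the Methods section, and the sketch omits the custom VAMP-2 loss that would be attached to the concatenated outputs.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_in, n_out = 30, 6    # e.g. 30 Cartesian inputs, 6 output states (placeholders)

# One lobe: fully connected ReLU layers with a Softmax output
# that acts as a fuzzy assignment to metastable states.
lobe = keras.Sequential([
    layers.Dense(22, activation="relu", input_shape=(n_in,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(11, activation="relu"),
    layers.Dense(8, activation="relu"),
    layers.Dense(n_out, activation="softmax"),
])

x_t = keras.Input(shape=(n_in,))         # configuration at time t
x_tau = keras.Input(shape=(n_in,))       # configuration at time t + tau
chi_t, chi_tau = lobe(x_t), lobe(x_tau)  # shared weights = identical clones

# The concatenated outputs are scored with a custom VAMP-2 loss (Eq. 6),
# which is maximized during training with the Adam optimizer.
model = keras.Model(inputs=[x_t, x_tau],
                    outputs=layers.concatenate([chi_t, chi_tau]))
```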
The networks were trained with pairs of MD configurations
using the Adam stochastic gradient descent method
Kingma and Ba (2014). For each result we repeated 100 training runs, each with a randomly chosen 90%/10% division of the data into training and validation data. See the Methods section for details on network architecture, training and choice of hyper-parameters.

We first model the kinetics of a bistable one-dimensional process, simulated by Brownian dynamics (see Methods) in an asymmetric double-well potential (Fig. 2a). A long trajectory is generated. Three-layer VAMPnets are set up with 1-5-10-5 nodes in each lobe. The single input node of each lobe is given the current and time-lagged mean-free coordinate of the system, i.e. $x_t - \bar{x}_0$ and $x_{t+\tau} - \bar{x}_1$, where $\bar{x}_0$ and $\bar{x}_1$ are the respective means, and a fixed lag time $\tau$ is used. The network maps to five Softmax output nodes that we will refer to as states, as the network performs a fuzzy discretization by mapping the input configurations to the output activations. The network is trained by using the VAMP-2 score with the four largest singular values.
The network learns to place the output states in a way that resolves the transition region best (Fig. 2b), which is known to be important for the accuracy of a Markov state model Sarich et al. (2010); Prinz et al. (2011a). This placement minimizes the Koopman approximation error, as seen by comparing the dominant Koopman eigenfunction (Eq. 15) with a direct numerical approximation of the true eigenfunction obtained from a transition matrix computed for a direct uniform 200-state discretization of the $x$ axis – see Prinz et al. (2011a) for details. The implied timescale and Chapman-Kolmogorov tests (Eqs. 14 and 13) confirm that the kinetic model learned by the VAMPnet successfully predicts the long-time kinetics (Fig. 2c,d).
While the first example was one-dimensional, we now test whether VAMPnets are able to learn reaction coordinates that are nonlinear functions of a multi-dimensional configuration space. For this, we simulate a long Brownian dynamics trajectory (Eq. 17) using a simple protein folding model defined by the potential energy function shown in Supplementary Fig. 1a.
The system has a five-dimensional configuration space, $\mathbf{x} \in \mathbb{R}^5$; however, the energy only depends on the norm of the vector, $r = |\mathbf{x}|$. While small values of $r$ are energetically favorable, large values of $r$ are entropically favorable, as the number of configurations available on a five-dimensional hypersphere grows dramatically with $r$. Thus, the dynamics are bistable along the reaction coordinate $r$. Four-layer network lobes with 5-32-16-8-2 nodes each were employed and trained to maximize the VAMP-2 score involving the largest nontrivial singular value.

The two output nodes successfully identify the folded and the unfolded states, and use intermediate memberships for the intersecting transition region (Supplementary Fig. 1b). The network excellently approximates the Koopman eigenfunction of the folding process, as apparent from the comparison of the values of the network eigenfunction computed by Eq. (15) with the eigenvector computed from a high-resolution MSM built on the $r$ coordinate (Supplementary Fig. 1b). This demonstrates that the network can learn the nonlinear reaction coordinate mapping based only on maximizing the variational score (Eq. 6). Furthermore, the implied timescales and the CK-test indicate that the network model predicts the long-time kinetics almost perfectly (Supplementary Fig. 1c,d).
As a next level, VAMPnets are used to learn the kinetics of alanine dipeptide from simulation data. It is known that the $\phi$ and $\psi$ backbone torsion angles are the most important reaction coordinates that separate the metastable states of alanine dipeptide; however, our networks only receive Cartesian coordinates as an input, and are thus forced to learn both the nonlinear transformation to the torsion angle space and an optimal cluster discretization within this space, in order to obtain an accurate kinetic model.
An MD trajectory generated in Ref. Nüske et al. (2017) (MD setup described there) serves as a dataset. The solute coordinates were stored every picosecond, and all configurations were aligned to the first frame using a minimal-RMSD fit to remove global translation and rotation. Each network lobe uses the three-dimensional coordinates of the 10 heavy atoms as input, resulting in 30 input nodes, and the network is trained at a fixed time lag $\tau$. Different numbers of output states and layer depths are considered, employing the layer sizing scheme described in the Methods section (see Fig. 3 for an example).
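Preparing such an input (heavy-atom Cartesian coordinates aligned to the first frame) could, for example, be done with MDTraj as sketched below; the file names are placeholders and this is not the preprocessing script used for the results.

```python
import mdtraj as md

traj = md.load("alanine_dipeptide.xtc", top="alanine_dipeptide.pdb")  # placeholder paths
heavy = traj.topology.select_atom_indices("heavy")   # solute heavy atoms
traj = traj.atom_slice(heavy)
traj.superpose(traj, frame=0)                        # minimal-RMSD fit to the first frame

X = traj.xyz.reshape(traj.n_frames, -1)              # (n_frames, 30) network input
```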
A VAMPnet with six output states learns a discretization into six metastable sets corresponding to the free energy minima of the $(\phi, \psi)$ space (Fig. 4b). The implied timescales indicate that, given the coordinate transformation found by the network, the two slowest timescales are converged beyond a certain lag time (Fig. 4c). Thus we estimated a Koopman model at that lag time, whose Markov transition probability matrix is depicted in Fig. 4d. Note that transition probabilities between some non-adjacent state pairs are important for the correct kinetics at this lag time, but the actual trajectories typically pass via the directly adjacent intermediate states. The model performs excellently in the CK-test (Fig. 4e).
We studied the success probability of optimizing a VAMPnet with six output states as a function of the lag time by conducting 200 optimization runs. Success was defined as resolving the three slowest processes by finding the three slowest timescales to be higher than 0.2, 0.4 and 1 ns, respectively. Note that the results shown in Fig. 4 are reported for runs that are successful in this definition. There is a range of lag times from 4 to 32 picoseconds where the training succeeds with a significant probability (Supplementary Fig. 2a). However, even in this range the success rate is still below 40%, which is mainly due to the fact that many runs fail to find the rarely occurring third-slowest process that corresponds to the transition within the positive-$\phi$ range (Fig. 4b, states 5 and 6).
The breakdown of optimization success for small and large lag times can be most easily explained by the eigenvalue decomposition of Markov propagators Prinz et al. (2011a). When the lag time exceeds the timescale of a process, the amplitude of this process becomes negligible, making it hard to fit given noisy data. At short lag times, many processes have large eigenvalues, which increases the search space of the neural network and appears to increase the probability of getting stuck in suboptimal maxima of the training score.
We have also studied the success probability, as defined above, as a function of network depth. Deeper networks can represent more complex functions. Also, since the networks defined here reduce the input dimension to the output dimension by a constant factor per layer, deeper networks perform a less radical dimension reduction per layer. On the other hand, deeper networks are more difficult to train. As seen in Supplementary Fig. 2b, a high success rate is found for four to seven layers.
Next, we studied the dependency of the network-based discretization on the number of output nodes (Fig. 5a-c). With two output states, the network separates the state space at the slowest transition, between negative and positive values of the $\phi$ angle (Fig. 5a). The result with three output nodes keeps the same separation and additionally distinguishes between the $\alpha$ and $\beta$ regions of the Ramachandran plot, i.e. small and large values of the $\psi$ angle (Fig. 5b). For higher numbers of output states, finer discretizations and smaller interconversion timescales are found, until the network starts discretizing the transition regions, such as the two transition states between the $\alpha$ and $\beta$ regions along the $\psi$ angle (Fig. 5c). We chose the lag time depending on the number of output nodes of the network (see Methods), using larger lag times for smaller numbers of output nodes.
A network output with $n$ Softmax neurons describes an $(n-1)$-dimensional feature space, as the Softmax normalization removes one degree of freedom. Thus, to resolve $k$ relaxation timescales, at least $k+1$ output nodes or metastable states are required. However, the network quality can improve when given more degrees of freedom in order to approximate the dominant singular functions accurately. Indeed, the best scores using four singular values (three nontrivial singular values) are achieved when using at least six output states that separate each of the six metastable states in the Ramachandran plane (Fig. 5d-e).

For comparison, we investigated how a standard MSM would perform as a function of the number of states (Fig. 5
d). For a fair comparison, the MSMs also used Cartesian coordinates as an input, but then employed a state-of-the-art procedure using a kinetic map transformation that preserves 95% of the cumulative kinetic variance Noé and Clementi (2015), followed by $k$-means clustering, where the parameter $k$ is varied. It is seen that the MSM VAMP-2 scores obtained by this procedure are significantly worse than those of VAMPnets when fewer than 20 states are employed. Clearly, MSMs will succeed when sufficiently many states are used, but in order to obtain an interpretable model those states must again be coarse-grained onto a fewer-state model, while VAMPnets directly produce an accurate model with few states.

The results above indicate that the VAMPnet has implicitly learned the feature transformation from Cartesian coordinates to backbone torsions. In order to probe this ability more explicitly, we trained a network with 30-10-3-3-2-5 layers, i.e. including a bottleneck of two nodes before the output layer. We find that the activations of the two bottleneck nodes correlate excellently with the $\phi$ and $\psi$ torsion angles that were not presented to the network (Supplementary Fig. 3a,b). To visualize the internal representation that the network learns, we color data samples depending on the free energy minima in the $(\phi, \psi)$ space they belong to (Supplementary Fig. 3c), and then show where these samples end up in the space of the bottleneck node activations (Supplementary Fig. 3d). It is apparent that the network learns a representation of the Ramachandran plot – the four free energy minima at negative $\phi$ values ($\alpha$ and $\beta$ areas) are represented as contiguous clusters with the correct connectivity, and are well separated from states with positive $\phi$ values ($\alpha_L$ area). The network fails to separate the two substates in the positive-$\phi$ range well, which explains the frequent failure to find the corresponding transition process and the third-largest relaxation timescale.
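The MSM baseline discussed above (a kinetic-map TICA projection retaining 95% of the kinetic variance, followed by $k$-means clustering and MSM estimation) corresponds roughly to the following PyEMMA sketch; the parameter values and input data are placeholders, and the scoring call is an assumption about how the VAMP-2 comparison could be set up.

```python
import numpy as np
import pyemma

lag, n_states = 40, 100                      # placeholder values
X = [np.random.randn(10000, 30)]             # stand-in for the Cartesian input features

tica = pyemma.coordinates.tica(X, lag=lag, var_cutoff=0.95, kinetic_map=True)
Y = tica.get_output()                        # kinetic-map coordinates

cluster = pyemma.coordinates.cluster_kmeans(Y, k=n_states, max_iter=100)
dtrajs = cluster.dtrajs                      # discrete trajectories

msm = pyemma.msm.estimate_markov_model(dtrajs, lag=lag)
score = msm.score(dtrajs, score_method='VAMP2', score_k=4)   # compare with the VAMPnet score
```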
In order to proceed to a higher-dimensional problem, we analyze the kinetics of an all-atom protein folding simulation of the NTL9 protein generated by the Anton supercomputer Lindorff-Larsen et al. (2011). A five-layer VAMPnet was trained at a fixed lag time using frames uniformly sampled from the trajectory. Since NTL9 folds and unfolds, there is no unique reference structure to align Cartesian coordinates to – hence we use internal coordinates as a network input. We computed the nearest-neighbor heavy-atom distance $d_{ij}$ for all non-redundant pairs of residues $i$ and $j$ and transformed these distances into contact maps, resulting in one input node per residue pair.
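The featurization described here can be approximated with MDTraj as in the following sketch; the file paths are placeholders, and the exponential contact definition is shown as one plausible choice of a decaying distance transform rather than the exact form used in the paper.

```python
import mdtraj as md
import numpy as np

traj = md.load("ntl9.xtc", top="ntl9.pdb")           # placeholder paths

# Closest heavy-atom distance for all non-neighboring residue pairs (in nm).
distances, residue_pairs = md.compute_contacts(traj, contacts="all",
                                               scheme="closest-heavy")

contact_maps = np.exp(-distances)                    # e.g. c_ij = exp(-d_ij), one input node per pair
print(contact_maps.shape)                            # (n_frames, n_residue_pairs)
```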
Again, the network performs a hierarchical decomposition of the molecular configuration space when the number of output nodes is increased. Fig. 6a shows the decomposition of state space for two and five output nodes, and the corresponding mean contact maps and state probabilities. With two output nodes, the network finds the folded and unfolded states that are separated by the slowest transition process (Fig. 6a, middle row). With five output states, the folded state is decomposed into a stable and well-defined fully folded substate and a less stable, more flexible substate that is missing some of the tertiary contacts compared to the fully folded substate. The unfolded state decomposes into three substates: one of them largely unstructured, a second one with residual structure, thus forming a folding intermediate, and a mis-folded state with an entirely different fold including a non-native $\beta$-sheet.
The relaxation timescales found by a five-state VAMPnet model are on par with those found by a 40-state MSM using state-of-the-art estimation methods (Fig. 6b-c). However, the fact that only five states are required in the VAMPnet model makes it easier to interpret and analyze. Additionally, the CK-test indicates excellent agreement between long-time predictions and direct estimates.
We have introduced a deep learning framework for molecular kinetics, called VAMPnet. Data-driven learning of molecular kinetics is usually done by shallow learning structures, such as TICA and MSMs. However, the processing pipeline, typically consisting of featurization, dimension reduction, MSM estimation and MSM coarse-graining is, in principle, a hand-crafted deep learning structure. Here we propose to replace the entire pipeline by a deep neural network that learns optimal feature transformations, dimension reduction and, if desired, maps the MD time-steps to a fuzzy clustering. The key to optimize the network is the VAMP variational approach which defines scores by which learning structures can be optimized to learn models of both equilibrium and non-equilibrium MD.
Although MSM-based kinetic modeling has been refined over more than a decade, VAMPnets perform competitively or even better in our examples. In particular, they perform extremely well in the Chapman-Kolmogorov test that validates the long-time prediction of the model. VAMPnets have a number of advantages over models based on MSM pipelines: (i) They may be overall more optimal, because featurization, dimension reduction and clustering are not explicitly separate processing steps. (ii) When using Softmax output nodes, the VAMPnet performs a fuzzy clustering of the MD structures fed into the network and constructs a fuzzy MSM, which is readily interpretable in terms of transition probabilities between metastable states. In contrast to other MSM coarse-graining techniques, it is thus not necessary to accept a reduction in model quality in order to obtain a few-state MSM; rather, such a coarse-grained model is seamlessly learned within the same learning structure. (iii) VAMPnets require less expertise to train than MSM-based processing pipelines, and the formulation of molecular kinetics as a neural network learning problem enables us to exploit an arsenal of highly developed and optimized tools in learning software packages such as TensorFlow, Theano or Keras.
Despite their benefits, VAMPnets still miss many of the capabilities that come with extensions developed for the MSM approach. These include multi-ensemble Markov models, which are superior to single, conventional MSMs in terms of sampling rare events by combining data from multiple ensembles Wu et al. (2016, 2014); Chodera et al. (2011); Prinz et al. (2011b); Rosta and Hummer (2015); Mey et al. (2014), Augmented Markov models that combine simulation data with experimental observation data Olsson et al. (in press), and statistical error estimators developed for MSMs Hinrichs and Pande (2007); Noé (2008); Chodera and Noé (2010). Since these methods explicitly use the MSM likelihood, it is currently unclear how they could be implemented in a deep learning structure such as a VAMPnet. Extending VAMPnets towards these special capabilities is a challenge for future studies.
Finally, a remaining concern is that the optimization of VAMPnets can get stuck in suboptimal local maxima. In other applications of network-based learning, a working knowledge has been established of which types of network implementation and learning algorithm are most suitable for robust and reproducible learning. For example, it is conceivable that the VAMPnet lobes may benefit from convolutional filters LeCun et al. (1998) or different types of transfer functions. Suitably chosen convolutions, as in Schütt et al. (2017), may also lead to learned feature transformations that are transferable within a given class of molecules.
Each network lobe in Fig. 1 has a number of input nodes given by the data dimension. According to the VAMP variational principle (see above), the output dimension must be at least equal to the number of Koopman singular functions that we want to approximate, i.e. equal to the number $k$ of singular values used in the score function $\hat{R}_2$. In most applications, the number of input nodes exceeds the number of output nodes, i.e. the network conducts a dimension reduction. Here, we keep the dimension reduction factor from layer $l$ with $n_l$ nodes to layer $l+1$ with $n_{l+1}$ nodes constant:
$$\frac{n_{l+1}}{n_l} = \left(\frac{n_{\mathrm{out}}}{n_{\mathrm{in}}}\right)^{1/d} \qquad (16)$$
where $d$ is the network depth, i.e. the number of layers excluding the input layer, and $n_{\mathrm{in}}$, $n_{\mathrm{out}}$ are the numbers of input and output nodes. Thus, the network structure is fixed by $d$ and $n_{\mathrm{out}}$. We tested different values of $d$ ranging from 2 to 11; for alanine dipeptide, Supplementary Fig. 2b reports the results in terms of the training success rate described in the Results section. Networks have a number of parameters that ranges between 100 and 400000, most of which are between the first and second layer due to the rapid dimension reduction of the network. To avoid overfitting, we use dropout during training Srivastava et al. (2014), and select hyper-parameters using the VAMP-2 validation score.
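A small sketch of the resulting layer sizing (Eq. 16) is shown below; the rounding scheme is an assumption.

```python
def layer_sizes(n_in, n_out, depth):
    """Node counts for `depth` layers with a constant reduction factor per layer (Eq. 16)."""
    factor = (n_out / n_in) ** (1.0 / depth)
    return [max(n_out, round(n_in * factor ** l)) for l in range(depth + 1)]

# Example: 30 input nodes, 6 output states, depth 5 -> [30, 22, 16, 11, 8, 6]
print(layer_sizes(30, 6, 5))
```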
Hyper-parameters include the regularization factors for the weights of the fully connected and the Softmax layers, the dropout probabilities for each layer, the batch size, and the learning rate for the Adam algorithm. Since a grid search in the joint parameter space would have been too computationally expensive, each hyper-parameter was optimized using the VAMP-2 validation score while keeping the other hyper-parameters constant. We started with the regularization factors due to their large effect on the training performance, and observed optimal performance for small regularization factors for the fully connected hidden layers and the output layer; larger regularization factors frequently led to training failure. Subsequently, we tested a range of dropout probabilities and found dropout in the first two hidden layers and no dropout otherwise to perform well. The results did not strongly depend on the training batch size; however, more training iterations are necessary for large batches, while small batches exhibit stronger fluctuations in the training score. We found an intermediate batch size to be a good compromise. The optimal learning rate strongly depends on the network topology (e.g. the number of hidden layers and the number of output nodes). In order to adapt the learning rate, we started from an arbitrary rate of 0.05. If no improvement of the validation VAMP-2 score was observed over 10 training iterations, the learning rate was reduced by a constant factor. This scheme led to better convergence of the training and validation scores and better kinetic model validation compared to using a high learning rate throughout.

The time lag $\tau$ between the input pairs of configurations was selected depending on the number of output nodes of the network: larger lag times are better at isolating the slowest processes, and thus are more suitable with a small number of output nodes. The procedure of choosing network structure and lag time is thus as follows: First, the number of output nodes and the number of hidden layers are selected, which determines the network structure as described above. Then, a lag time is chosen at which the largest singular values (corresponding to the slowest processes) can be trained consistently.
We pre-trained the network by minimizing the negative VAMP-1 score during the first third of the total number of epochs, and subsequently optimized the network with the VAMP-2 score described above. In order to ensure robustness of the results, we performed 100 network optimization runs for each problem. In each run, the dataset was shuffled and randomly split into 90%/10% for training and validation, respectively. To exclude outliers, we then discarded the best 5% and the worst 5% of results. Hyper-parameter optimization was done using the validation score averaged over the remaining runs. Figures report training or validation means and 95% confidence intervals.
The asymmetric double well and the protein folding toy model are simulated by over-damped Langevin dynamics in a potential energy function $U(\mathbf{x})$, also known as Brownian dynamics, using a forward Euler integration scheme. The position $\mathbf{x}_t$ is propagated by a time step $\Delta t$ via:
$$\mathbf{x}_{t+\Delta t} = \mathbf{x}_t - \Delta t\, \frac{D}{k_B T}\, \nabla U(\mathbf{x}_t) + \sqrt{2 D\, \Delta t}\; \mathbf{w}_t \qquad (17)$$
where $D$ is the diffusion constant and $k_B T$ is the Boltzmann constant times the temperature. Here, dimensionless units are used for $D$ and $k_B T$. The elements of the random vector $\mathbf{w}_t$ are sampled from a normal distribution with zero mean and unit variance.
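Eq. (17) is the standard Euler-Maruyama scheme for overdamped Langevin dynamics. A minimal one-dimensional sketch is given below; the double-well potential and all parameter values are placeholders, not the exact ones used for Fig. 2.

```python
import numpy as np

def simulate_brownian(grad_U, x0, n_steps, dt=0.01, D=1.0, kT=1.0, seed=0):
    """Integrate Eq. (17): x_{t+dt} = x_t - dt*D/kT * grad_U(x_t) + sqrt(2*D*dt) * w_t."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        x[t + 1] = (x[t] - dt * D / kT * grad_U(x[t])
                    + np.sqrt(2.0 * D * dt) * rng.standard_normal())
    return x

# Placeholder asymmetric double-well potential U(x) = x^4 - 3x^2 + x and its gradient.
grad_U = lambda x: 4.0 * x**3 - 6.0 * x + 1.0
traj = simulate_brownian(grad_U, x0=-1.0, n_steps=50000)
```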
VAMPnets were trained on a single NVIDIA GeForce GTX 1080 GPU, requiring between 20 seconds (for the double well problem) and 180 seconds for NTL9 for each run.
TICA, $k$-means and MSM analyses were conducted with PyEMMA version 2.4, freely available at pyemma.org. VAMPnets are implemented using the freely available packages Keras Chollet et al. (2015) with tensorflow-gpu Abadi et al. (2015) as a backend. The code can be obtained at https://github.com/markovmodel/deeptime.
Data for NTL9 can be requested from the authors of Lindorff-Larsen et al. (2011). Data for all other examples are available at https://github.com/markovmodel/deeptime.
Projected and hidden Markov models for calculating kinetics and metastable states of complex molecules. J. Chem. Phys., 139:184114, 2013.
Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015.
Variational tensor approach for approximating the rare-event kinetics of macromolecular systems. J. Chem. Phys., 144:054105, 2016.
Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst., 2(4):303–314, 1989.
Efficient learning of sparse representations with an energy-based model. In J. Platt et al., editor, Adv. Neural Inf. Process. Syst. (NIPS). MIT Press, 2006.
Fuzzy spectral clustering by PCCA+: application to Markov state models and data classification. Adv. Data Anal. Classif., 7:147–179, 2013.
Rectified linear units improve restricted Boltzmann machines. In ICML Proc., volume 27, pages 807–814, 2010.