Iterative trajectory reweighting for estimation of equilibrium and non-equilibrium observables

06/16/2020
by John D. Russo, et al.

We present two algorithms by which a set of short, unbiased trajectories can be iteratively reweighted to obtain various observables. The first algorithm estimates the stationary (steady state) distribution of a system by iteratively reweighting the trajectories based on the average probability in each state. The algorithm applies to equilibrium or non-equilibrium steady states, exploiting the `left' stationarity of the distribution under dynamics – i.e., in a discrete setting, when the column vector of probabilities is multiplied by the transition matrix expressed as a left stochastic matrix. The second procedure relies on the `right' stationarity of the committor (splitting probability) expressed as a row vector. The algorithms are unbiased, do not rely on computing transition matrices, and make no Markov assumption about discretized states. Here, we apply the procedures to a one-dimensional double-well potential, and to a 208 μs atomistic Trp-cage folding trajectory from D.E. Shaw Research.
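The `left' and `right' stationarity relations invoked above can be illustrated in the discrete setting with a toy left stochastic matrix. This is only an illustrative sketch with a hypothetical 3-state chain; the algorithms in the paper operate directly on trajectory weights and never construct such a matrix.

```python
import numpy as np

# Hypothetical 3-state chain. T is left (column) stochastic: T[j, i] = P(i -> j),
# so each column sums to 1.
T = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.8, 0.2],
    [0.0, 0.1, 0.8],
])
assert np.allclose(T.sum(axis=0), 1.0)

# 'Left' stationarity: the stationary distribution, as a column vector p,
# is unchanged when multiplied by the left stochastic matrix: T @ p = p.
# Power iteration from a uniform start converges to it.
p = np.full(3, 1.0 / 3.0)
for _ in range(1000):
    p = T @ p
p /= p.sum()
assert np.allclose(T @ p, p)

# 'Right' stationarity: the committor, as a row vector q, satisfies q @ T = q,
# i.e. q[i] = sum_j P(i -> j) q[j], with boundary values pinned
# (q = 0 in source state 0, q = 1 in target state 2).
q = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    q = q @ T
    q[0], q[2] = 0.0, 1.0
```

For this chain the iterations converge to the stationary distribution p = (0.4, 0.4, 0.2) and an interior committor value q[1] = 0.5, which can be verified directly from the fixed-point equations.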


