Red Light Green Light Method for Solving Large Markov Chains

08/06/2020
by Konstantin Avrachenkov, et al.

Discrete-time, discrete-state, finite Markov chains are versatile mathematical models for a wide range of real-life stochastic processes. One of the most common tasks in studies of Markov chains is the computation of the stationary distribution. Without loss of generality, and drawing our motivation from applications to large networks, we interpret this problem as one of computing the stationary distribution of a random walk on a graph. We propose a new controlled, easily distributed algorithm for this task, briefly summarized as follows: at the beginning, each node receives a fixed amount of cash (positive or negative), and at each iteration, some nodes receive a 'green light' to distribute their wealth or debt proportionally to the transition probabilities of the Markov chain; the stationary probability of a node is computed as the ratio of the cash distributed by this node to the total cash distributed by all nodes together. Our method includes as special cases a wide range of known, very different, and previously disconnected methods, including power iterations, Gauss-Southwell, and online distributed algorithms. We prove exponential convergence of our method, demonstrate its high efficiency, and derive scheduling strategies for the green light that achieve a convergence rate faster than that of state-of-the-art algorithms.
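
The Python sketch below is one possible reading of the cash-distribution rule described in the abstract. The greedy "most cash first" green-light schedule, the uniform initial endowment, and the fixed iteration budget are illustrative assumptions for the example, not the schedules analysed in the paper.

```python
import numpy as np

def rlgl_stationary(P, num_iters=10_000):
    """Estimate the stationary distribution of a row-stochastic matrix P
    by repeatedly letting one node distribute its cash along the chain's
    transition probabilities and recording how much cash each node has
    distributed in total."""
    n = P.shape[0]
    cash = np.full(n, 1.0 / n)       # fixed initial endowment per node (assumed uniform here)
    distributed = np.zeros(n)        # cumulative cash each node has pushed out

    for _ in range(num_iters):
        # Green-light schedule (assumption): pick the node holding the most
        # cash in absolute value, a Gauss-Southwell-style greedy rule.
        i = int(np.argmax(np.abs(cash)))

        amount = cash[i]
        distributed[i] += amount     # record the cash node i distributes
        cash[i] = 0.0
        cash += amount * P[i]        # spread it proportionally to the transition probabilities

    total = distributed.sum()
    # Stationary probability of a node = its share of all distributed cash.
    return distributed / total if total != 0.0 else distributed


if __name__ == "__main__":
    # Toy 3-state chain; compare the estimate with the left Perron eigenvector.
    P = np.array([[0.1, 0.6, 0.3],
                  [0.4, 0.2, 0.4],
                  [0.3, 0.3, 0.4]])
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    print("RLGL-style estimate:", rlgl_stationary(P))
    print("eigenvector check  :", pi)
```

With this greedy choice the update loosely resembles the Gauss-Southwell scheme named in the abstract as a special case; replacing the argmax with a round-robin or randomized selection of green-light nodes gives other scheduling regimes the paper discusses.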

