Learn Quasi-stationary Distributions of Finite State Markov Chain

11/19/2021
by Zhiqiang Cai, et al.

We propose a reinforcement learning (RL) approach to compute the quasi-stationary distribution of a finite-state Markov chain. Based on the fixed-point formulation of the quasi-stationary distribution, we minimize the KL divergence between the two Markovian path distributions induced by the candidate distribution and the true target distribution. To solve this challenging minimization problem by gradient descent, we apply reinforcement learning techniques by introducing the corresponding reward and value functions. We derive the corresponding policy gradient theorem and design an actor-critic algorithm to learn the optimal solution and the value function. Numerical examples of finite-state Markov chains are tested to demonstrate the new method.
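As a rough illustration of the fixed-point formulation mentioned in the abstract, the sketch below approximates the quasi-stationary distribution of a small absorbing Markov chain with a plain power-iteration update rather than the paper's actor-critic algorithm; the function name `qsd_fixed_point`, the toy matrix `Q`, and all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def qsd_fixed_point(Q, tol=1e-10, max_iter=10_000):
    """Power-iteration-style fixed-point solver for the quasi-stationary
    distribution (QSD) of a finite absorbing Markov chain.

    Q : (n, n) sub-stochastic matrix of transition probabilities restricted
        to the transient (non-absorbed) states, so each row sums to <= 1.
    Returns an approximation of the QSD nu satisfying nu = nu Q / (nu Q 1).
    """
    n = Q.shape[0]
    nu = np.full(n, 1.0 / n)           # start from the uniform distribution
    for _ in range(max_iter):
        nu_next = nu @ Q               # one step of the killed chain
        nu_next /= nu_next.sum()       # renormalize to offset absorption loss
        if np.abs(nu_next - nu).sum() < tol:
            return nu_next
        nu = nu_next
    return nu

# Toy 3-state example: rows sum to 0.9, the remaining 0.1 is absorption.
Q = np.array([[0.5, 0.3, 0.1],
              [0.2, 0.5, 0.2],
              [0.1, 0.3, 0.5]])
print(qsd_fixed_point(Q))
```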
