Markov Chain Concentration with an Application in Reinforcement Learning

01/07/2023
by Debangshu Banerjee, et al.

Given random variables X_1, …, X_N with joint distribution μ, we use the martingale method to show that any Lipschitz function f of these random variables is sub-Gaussian. The variance parameter, however, admits a simple expression under certain conditions, for example when the random variables form a Markov chain and the function is Lipschitz with respect to a weighted Hamming metric. We conclude with well-known techniques from the concentration of suprema of random processes, with applications in Reinforcement Learning.
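For orientation, a hedged sketch of the kind of bound the martingale (Doob) decomposition typically yields in this setting, assuming f is 1-Lipschitz with respect to a weighted Hamming metric with weights c_1, …, c_N (the exact variance parameter in the paper may differ):

\[
d(x,y) = \sum_{i=1}^{N} c_i \,\mathbf{1}\{x_i \neq y_i\},
\qquad
\Pr\bigl( \lvert f(X_1,\dots,X_N) - \mathbb{E} f \rvert \ge t \bigr)
\le 2 \exp\!\left( -\frac{t^2}{2\nu^2} \right),
\]

where \nu^2 is of order \sum_i c_i^2 up to a factor depending on the mixing (contraction) coefficients of the Markov chain; in the independent case this recovers McDiarmid's bounded-differences inequality.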


Related research

10/10/2018
On some Limit Theorem for Markov Chain
The goal of this paper is to describe conditions which guarantee a centr...

03/13/2023
Concentration without Independence via Information Measures
We propose a novel approach to concentration for non-independent random ...

07/29/2023
Shared Information for a Markov Chain on a Tree
Shared information is a measure of mutual dependence among multiple join...

02/17/2018
Optimal Single Sample Tests for Structured versus Unstructured Network Data
We study the problem of testing, using only a single sample, between mea...

04/02/2015
Structure Learning of Partitioned Markov Networks
We learn the structure of a Markov Network between two groups of random ...

07/21/2023
Topological reconstruction of compact supports of dependent stationary random variables
In this paper we extend results on reconstruction of probabilistic suppo...

07/01/2022
Prediction of random variables by excursion metric projections
We use the concept of excursions for the prediction of random variables ...
