
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs

by   Yujia Jin, et al.

We prove new upper and lower bounds on the sample complexity of finding an ϵ-optimal policy of an infinite-horizon average-reward Markov decision process (MDP) given access to a generative model. When the mixing time of the probability transition matrix of all policies is at most t_mix, we provide an algorithm that solves the problem using Õ(t_mix ϵ^{-3}) (oblivious) samples per state-action pair. Further, we provide a lower bound showing that a linear dependence on t_mix is necessary in the worst case for any algorithm which computes oblivious samples. We obtain our results by establishing connections between infinite-horizon average-reward MDPs and discounted MDPs of possible further utility.
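To see where the Õ(t_mix ϵ^{-3}) rate can come from, here is a minimal sketch of the parameter arithmetic behind a reduction of this kind. It is not the authors' algorithm: the function name is hypothetical, constants and log factors are dropped, and the two facts it relies on are standard but stated here as assumptions: (1) for a t_mix-mixing MDP, the discounted value satisfies |(1 - γ)v*_γ - ρ*| = O(t_mix (1 - γ)), and (2) model-based discounted solvers need roughly (1 - γ)^{-3} δ^{-2} samples per state-action pair for δ-accurate values.

```python
def average_reward_sample_budget(t_mix: float, eps: float) -> float:
    """Per state-action-pair sample budget implied by the reduction sketch.

    A back-of-the-envelope calculation, not the paper's algorithm;
    constants and log factors are dropped.
    """
    # Step 1 (assumed fact): |(1 - gamma) v*_gamma - rho*| = O(t_mix (1 - gamma)),
    # so choose the discount such that this bias is at most eps.
    one_minus_gamma = eps / t_mix
    # Step 2: to recover rho* ~ (1 - gamma) v_gamma to accuracy eps, the
    # discounted values only need accuracy delta = eps / (1 - gamma).
    delta = eps / one_minus_gamma
    # Step 3 (assumed fact): a sample-optimal discounted solver uses about
    # (1 - gamma)^{-3} delta^{-2} samples per state-action pair.
    return one_minus_gamma ** -3 * delta ** -2  # simplifies to t_mix / eps**3


if __name__ == "__main__":
    t_mix, eps = 10.0, 0.01
    print(average_reward_sample_budget(t_mix, eps))  # 1e7 == t_mix / eps**3
```

Multiplying out Step 3 gives (t_mix/ϵ)^3 · (ϵ/t_mix)^2 · ... more precisely (1 - γ)^{-3} δ^{-2} = (t_mix/ϵ)^3 · t_mix^{-2} = t_mix ϵ^{-3}, matching the abstract's upper bound up to logarithmic factors.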




Near Sample-Optimal Reduction-based Policy Learning for Average Reward MDP

This work considers the sample complexity of obtaining an ε-optimal poli...

Efficiently Solving MDPs with Stochastic Mirror Descent

We present a unified framework based on primal-dual stochastic mirror de...

Robust Average-Reward Markov Decision Processes

In robust Markov decision processes (MDPs), the uncertainty in the trans...

Near-Optimal Sample Complexity Bounds for Constrained MDPs

In contrast to the advances in characterizing the sample complexity for ...

Stochastic Primal-Dual Methods and Sample Complexity of Reinforcement Learning

We study the online estimation of the optimal policy of a Markov decisio...

Parallel Stochastic Mirror Descent for MDPs

We consider the problem of learning the optimal policy for infinite-hori...

Nearly Optimal Latent State Decoding in Block MDPs

We investigate the problems of model estimation and reward-free learning...