Optimal Sample Complexity of Reinforcement Learning for Uniformly Ergodic Discounted Markov Decision Processes

02/15/2023
by Shengbo Wang, et al.

We consider the optimal sample complexity theory of tabular reinforcement learning (RL) for controlling the infinite-horizon discounted reward in a Markov decision process (MDP). Optimal min-max complexity results have been developed for tabular RL in this setting, leading to a sample complexity dependence on γ and ϵ of the form Θ̃((1-γ)^-3 ϵ^-2), where γ is the discount factor and ϵ is the solution error tolerance. However, in many applications of interest, the optimal policy (or all policies) will induce mixing. We show that in these settings the optimal min-max complexity is Θ̃(t_minorize (1-γ)^-2 ϵ^-2), where t_minorize is a measure of mixing that is within an equivalent factor of the total variation mixing time. Our analysis is based on regeneration-type ideas that, we believe, are of independent interest, since they can be used to study related problems for general state space MDPs.
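As a rough illustration of what the two scalings mean, the minimal sketch below (not from the paper; the function names, the choice of γ, ϵ, and the t_minorize values are hypothetical) compares the worst-case bound (1-γ)^-3 ϵ^-2 with the mixing-based bound t_minorize (1-γ)^-2 ϵ^-2, ignoring constants and the logarithmic factors hidden by the Θ̃ notation. The mixing-based scaling is smaller precisely when t_minorize < 1/(1-γ), i.e. when the MDP mixes faster than the effective horizon.

```python
# Hypothetical numerical sketch: compare the two sample-complexity scalings
# from the abstract, up to constants and logarithmic factors.

def generic_bound(gamma: float, eps: float) -> float:
    """Worst-case scaling: (1 - gamma)^-3 * eps^-2."""
    return (1.0 - gamma) ** -3 * eps ** -2

def mixing_bound(gamma: float, eps: float, t_minorize: float) -> float:
    """Mixing-based scaling: t_minorize * (1 - gamma)^-2 * eps^-2."""
    return t_minorize * (1.0 - gamma) ** -2 * eps ** -2

if __name__ == "__main__":
    gamma, eps = 0.99, 0.1            # hypothetical discount factor and tolerance
    for t_minorize in (1, 10, 100):   # hypothetical minorization times
        g = generic_bound(gamma, eps)
        m = mixing_bound(gamma, eps, t_minorize)
        print(f"t_minorize={t_minorize:>4}: generic ~ {g:.2e}, mixing ~ {m:.2e}")
    # With gamma = 0.99 the effective horizon is 1/(1-gamma) = 100, so the
    # mixing-based scaling is smaller for t_minorize = 1 and 10, and the two
    # coincide (up to constants) at t_minorize = 100.
```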

