Regret Bounds for Discounted MDPs
Recently, it has been shown that carefully designed reinforcement learning (RL) algorithms can achieve near-optimal regret in the episodic and average-reward settings. In practice, however, RL algorithms are mostly applied in the infinite-horizon discounted-reward setting, so it is natural to ask what the lowest regret achievable in this setting is, and how close existing RL algorithms come to it. In this paper, we prove a regret lower bound of Ω(√(SAT)/(1 - γ) - 1/(1 - γ)^2) when T ≥ SA for any learning algorithm on infinite-horizon discounted Markov decision processes (MDPs), where S and A are the numbers of states and actions, T is the number of actions taken, and γ is the discount factor. We also show that a modified version of the double Q-learning algorithm achieves a regret upper bound of Õ(√(SAT)/(1 - γ)^2.5) when T ≥ SA. Compared to our bounds, the previous best lower and upper bounds both have worse dependencies on T and γ, while our dependencies on S, A, and T are optimal. The proof of our upper bound is inspired by recent advances in the analysis of Q-learning in the episodic setting, but the cyclic nature of infinite-horizon MDPs poses many new challenges.
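For reference, the sketch below shows standard tabular double Q-learning (van Hasselt, 2010), the base algorithm that the upper-bound result modifies; it is not the paper's modified variant. The environment interface, learning rate, and exploration parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def double_q_learning(env, S, A, gamma, epsilon=0.1, alpha=0.1, num_steps=100_000, seed=0):
    """Standard tabular double Q-learning (van Hasselt, 2010).

    Two estimates Q1 and Q2 are maintained; each update uses one table to
    select the greedy next action and the other to evaluate it, which
    reduces the overestimation bias of ordinary Q-learning.

    Assumed (hypothetical) env interface: env.reset() -> state index,
    env.step(a) -> (next_state, reward).
    """
    rng = np.random.default_rng(seed)
    Q1 = np.zeros((S, A))
    Q2 = np.zeros((S, A))
    s = env.reset()
    for _ in range(num_steps):
        # epsilon-greedy action with respect to the sum of the two estimates
        if rng.random() < epsilon:
            a = int(rng.integers(A))
        else:
            a = int(np.argmax(Q1[s] + Q2[s]))
        s_next, r = env.step(a)
        if rng.random() < 0.5:
            # update Q1: Q1 picks the greedy next action, Q2 evaluates it
            a_star = int(np.argmax(Q1[s_next]))
            target = r + gamma * Q2[s_next, a_star]
            Q1[s, a] += alpha * (target - Q1[s, a])
        else:
            # update Q2: Q2 picks the greedy next action, Q1 evaluates it
            a_star = int(np.argmax(Q2[s_next]))
            target = r + gamma * Q1[s_next, a_star]
            Q2[s, a] += alpha * (target - Q2[s, a])
        s = s_next
    return Q1, Q2
```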