
Regret Bounds for Discounted MDPs

by Shuang Liu, et al.
University of California, San Diego

Recently, it has been shown that carefully designed reinforcement learning (RL) algorithms can achieve near-optimal regret in the episodic and average-reward settings. In practice, however, RL algorithms are mostly applied in the infinite-horizon discounted-reward setting, so it is natural to ask what the lowest achievable regret is in this case, and how close the regrets of existing RL algorithms come to it. In this paper, we prove a regret lower bound of Ω(√(SAT)/(1 - γ) - 1/(1 - γ)^2) when T ≥ SA for any learning algorithm on infinite-horizon discounted Markov decision processes (MDPs), where S and A are the numbers of states and actions, T is the number of actions taken, and γ is the discount factor. We also show that a modified version of the double Q-learning algorithm attains a regret upper bound of Õ(√(SAT)/(1 - γ)^2.5) when T ≥ SA. Compared to our bounds, the previous best lower and upper bounds both have worse dependencies on T and γ, while our dependencies on S, A, and T are optimal. The proof of our upper bound is inspired by recent advances in the analysis of Q-learning in the episodic setting, but the cyclic nature of infinite-horizon MDPs poses many new challenges.
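The upper bound above is obtained with a modified double Q-learning algorithm. The paper's modification is not reproduced here, but the underlying idea can be illustrated with a minimal sketch of standard tabular double Q-learning (van Hasselt, 2010): two Q-tables are maintained, and each update evaluates one table's greedy action with the other table, which reduces the overestimation bias of ordinary Q-learning. The toy 3-state chain MDP, the step size, and the exploration rate below are all hypothetical choices for illustration, not the setup analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.9  # states, actions, discount factor

# Toy deterministic chain MDP (hypothetical example): action 1 moves right,
# earning reward 1 when leaving the last state; action 0 stays with reward 0.
def step(s, a):
    if a == 1:
        ns = (s + 1) % S
        r = 1.0 if s == S - 1 else 0.0
        return ns, r
    return s, 0.0

QA = np.zeros((S, A))
QB = np.zeros((S, A))
s, alpha, eps = 0, 0.1, 0.1

for t in range(20000):
    # epsilon-greedy behavior policy on the sum of the two tables
    if rng.random() < eps:
        a = int(rng.integers(A))
    else:
        a = int(np.argmax(QA[s] + QB[s]))
    ns, r = step(s, a)
    if rng.random() < 0.5:
        # select greedy action with QA, evaluate it with QB
        a_star = int(np.argmax(QA[ns]))
        QA[s, a] += alpha * (r + gamma * QB[ns, a_star] - QA[s, a])
    else:
        # symmetric update with the roles of QA and QB swapped
        a_star = int(np.argmax(QB[ns]))
        QB[s, a] += alpha * (r + gamma * QA[ns, a_star] - QB[s, a])
    s = ns

greedy = np.argmax(QA + QB, axis=1)
print(greedy)  # moving right (action 1) is optimal in every state
```

Since reward accrues only by cycling through the chain, the greedy policy recovered from the learned tables should select action 1 in every state. The cyclic structure of this toy MDP also hints at why the infinite-horizon analysis is harder than the episodic one: the state is never reset, so errors can propagate around the loop.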



