A Model-free Learning Algorithm for Infinite-horizon Average-reward MDPs with Near-optimal Regret

06/08/2020
by Mehdi Jafarnia-Jahromi, et al.

Recently, model-free reinforcement learning has attracted research attention due to its simplicity, its memory and computational efficiency, and its flexibility to combine with function approximation. In this paper, we propose Exploration Enhanced Q-learning (EE-QL), a model-free algorithm for infinite-horizon average-reward Markov Decision Processes (MDPs) that achieves a regret bound of O(√(T)) for the general class of weakly communicating MDPs, where T is the number of interactions. EE-QL assumes that an online concentrating approximation of the optimal average reward is available. This is the first model-free learning algorithm that achieves O(√(T)) regret without the ergodicity assumption, and it matches the lower bound in terms of T up to logarithmic factors. Experiments show that the proposed algorithm performs as well as the best known model-based algorithms.
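The abstract does not include pseudocode, but a minimal sketch may help illustrate the setting: a generic tabular Q-learning-style update for the infinite-horizon average-reward case, where, as the abstract describes for EE-QL, an estimate of the optimal average reward is assumed to be supplied externally. This is not the paper's EE-QL algorithm; the environment interface, constant step size, and greedy action selection below are illustrative assumptions only.

```python
import numpy as np

def average_reward_q_learning(env, rho_estimate, num_steps, alpha=0.1):
    """Illustrative average-reward Q-learning loop (not the paper's EE-QL).

    env          -- assumed object with num_states, num_actions, reset(), and
                    step(action) returning (next_state, reward)
    rho_estimate -- callable t -> estimate of the optimal average reward at step t
                    (EE-QL assumes such a concentrating estimate is available)
    num_steps    -- number of interactions T
    alpha        -- constant step size (a simplification; regret guarantees
                    require carefully chosen step sizes)
    """
    Q = np.zeros((env.num_states, env.num_actions))
    state = env.reset()
    total_reward = 0.0
    for t in range(num_steps):
        # Greedy action with small random tie-breaking (illustrative only).
        action = int(np.argmax(Q[state] + 1e-8 * np.random.rand(env.num_actions)))
        next_state, reward = env.step(action)
        total_reward += reward

        # Relative (average-reward) Bellman-style update: no discount factor;
        # the reward is centered by the current average-reward estimate.
        rho_hat = rho_estimate(t)
        td_error = reward - rho_hat + np.max(Q[next_state]) - Q[state, action]
        Q[state, action] += alpha * td_error

        state = next_state
    return Q, total_reward / num_steps
```

The centering term rho_hat is what distinguishes the average-reward update from discounted Q-learning; how the estimate of the optimal average reward is obtained and how exploration is enhanced are the subject of the full paper.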
