UCB Momentum Q-learning: Correcting the bias without forgetting

03/01/2021 ∙ by Pierre Ménard, et al.

We propose UCBMQ, Upper Confidence Bound Momentum Q-learning, a new algorithm for reinforcement learning in tabular, possibly stage-dependent, episodic Markov decision processes. UCBMQ is based on Q-learning, to which we add a momentum term, and relies on the principle of optimism in the face of uncertainty to handle exploration. The key technical ingredient of UCBMQ is the use of momentum to correct the bias that Q-learning suffers from while, at the same time, limiting its impact on the second-order term of the regret. For UCBMQ, we are able to guarantee a regret of at most O(√(H^3 SAT) + H^4 SA), where H is the length of an episode, S the number of states, A the number of actions, and T the number of episodes, ignoring terms in polylog(SAHT). Notably, UCBMQ is the first algorithm that simultaneously matches the lower bound of Ω(√(H^3 SAT)) for T large enough and has a second-order term (with respect to the horizon T) that scales only linearly with the number of states S.
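To make the abstract's ingredients concrete, here is a minimal sketch of optimistic Q-learning with a momentum-style correction on a tiny stage-dependent tabular MDP. This is not the paper's exact UCBMQ update: the step size, the exploration bonus, the momentum coefficient, and the random MDP are all illustrative assumptions; only the overall structure (optimistic initialization, UCB-style bonus, momentum term added to the temporal-difference update) follows the description above.

```python
import numpy as np

# Illustrative stage-dependent tabular MDP (H stages, S states, A actions).
# Rewards and transitions are random; these are placeholder assumptions.
H, S, A = 3, 2, 2
rng = np.random.default_rng(0)
R = rng.uniform(size=(H, S, A))                 # rewards in [0, 1]
P = rng.dirichlet(np.ones(S), size=(H, S, A))   # transition kernel P[h, s, a]

Q = np.zeros((H + 1, S, A))
Q[:H] = H                                       # optimistic initialization, Q_{H+1} = 0
momentum = np.zeros((H, S, A))                  # momentum term correcting the TD bias
counts = np.zeros((H, S, A))

def run_episode():
    s = 0
    for h in range(H):
        a = int(np.argmax(Q[h, s]))             # act greedily w.r.t. optimistic Q
        s_next = rng.choice(S, p=P[h, s, a])
        counts[h, s, a] += 1
        n = counts[h, s, a]
        lr = (H + 1) / (H + n)                  # Q-learning-style step size (illustrative)
        bonus = np.sqrt(H**3 / n)               # UCB exploration bonus (illustrative scaling)
        target = R[h, s, a] + np.max(Q[h + 1, s_next])
        delta = target - Q[h, s, a]             # temporal-difference error
        # Exponential moving average of TD errors plays the role of momentum here.
        momentum[h, s, a] = 0.9 * momentum[h, s, a] + 0.1 * delta
        Q[h, s, a] = min(H, Q[h, s, a] + lr * (delta + momentum[h, s, a]) + bonus / n)
        s = s_next

for _ in range(200):
    run_episode()
```

After a few hundred episodes, the optimistic value estimates `Q[0]` shrink from the initial value H toward the true optimal values, while the bonus term keeps under-visited state-action pairs attractive.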
