Provably Efficient Reinforcement Learning with Aggregated States

12/13/2019 · by Shi Dong, et al.

We establish that an optimistic variant of Q-learning applied to a finite-horizon episodic Markov decision process with an aggregated state representation incurs regret Õ(√(H^5 M K) + ϵ HK), where H is the horizon, M is the number of aggregate states, K is the number of episodes, and ϵ is the largest difference between any pair of optimal state-action values associated with a common aggregate state. Notably, this regret bound does not depend on the number of states or actions. To the best of our knowledge, this is the first such result pertaining to a reinforcement learning algorithm applied with nontrivial value function approximation without any restrictions on the Markov decision process.
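The algorithm described in the abstract, optimistic Q-learning run over aggregate rather than raw states, can be sketched as follows. This is an illustrative sketch, not the paper's exact construction: the toy environment, the aggregation map `phi`, the bonus constant `c`, and the UCB-Hoeffding-style learning rate and exploration bonus (in the style of Jin et al., 2018) are all assumptions made for the example.

```python
import numpy as np

def aggregated_q_learning(reset, step, phi, M, A, H, K, c=0.1, seed=0):
    """Optimistic Q-learning over M aggregate states (illustrative sketch).

    Assumed details: learning rate alpha_t = (H+1)/(H+t) and a sqrt(1/t)
    bonus in the UCB-Hoeffding style; the paper's constants may differ."""
    rng = np.random.default_rng(seed)
    Q = np.full((H, M, A), float(H))    # optimistic initialization at H
    N = np.zeros((H, M, A), dtype=int)  # visit counts per (step, aggregate, action)
    returns = []
    for _ in range(K):
        s, total = reset(rng), 0.0
        for h in range(H):
            m = phi(s)                         # map raw state to aggregate state
            a = int(np.argmax(Q[h, m]))        # act greedily w.r.t. optimistic Q
            s2, r = step(s, a, rng)
            total += r
            N[h, m, a] += 1
            t = N[h, m, a]
            alpha = (H + 1) / (H + t)          # decaying learning rate
            bonus = c * np.sqrt(H**3 * np.log(M * A * H * K) / t)
            v_next = 0.0 if h == H - 1 else min(float(H), Q[h + 1, phi(s2)].max())
            Q[h, m, a] = (1 - alpha) * Q[h, m, a] + alpha * (r + v_next + bonus)
            s = s2
        returns.append(total)
    return Q, returns

# Toy episodic MDP: 4 underlying states aggregated into M = 2 states by phi.
# Action 1 is optimal in every underlying state, so this aggregation loses
# nothing (epsilon = 0 in the abstract's notation).
def reset(rng):
    return int(rng.integers(0, 4))

def step(s, a, rng):
    return int(rng.integers(0, 4)), float(a == 1)

phi = lambda s: s % 2
Q, returns = aggregated_q_learning(reset, step, phi, M=2, A=2, H=3, K=500)
```

Note that the Q-table is indexed only by `(h, m, a)`, so its size, and the regret bound above, depends on the number of aggregate states M rather than the number of underlying states.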


