Provably Efficient Reinforcement Learning with Aggregated States

12/13/2019
by Shi Dong, et al.

We establish that an optimistic variant of Q-learning applied to a finite-horizon episodic Markov decision process with an aggregated state representation incurs regret Õ(√(H^5 M K) + ϵ HK), where H is the horizon, M is the number of aggregate states, K is the number of episodes, and ϵ is the largest difference between any pair of optimal state-action values associated with a common aggregate state. Notably, this regret bound does not depend on the number of states or actions. To the best of our knowledge, this is the first such result pertaining to a reinforcement learning algorithm applied with nontrivial value function approximation without any restrictions on the Markov decision process.
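The abstract does not spell out the algorithm, but the following is a minimal sketch of what an optimistic (UCB-bonus) Q-learning update over aggregate states typically looks like in a finite-horizon episodic MDP. The environment interface `env`, the aggregation map `phi`, the learning-rate schedule, and the Hoeffding-style bonus term are all illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def optimistic_q_learning(env, phi, M, H, K, c=1.0):
    """Sketch of optimistic Q-learning over aggregated states.

    phi : maps a raw state s to an aggregate-state index in {0, ..., M-1}.
    env : episodic environment with num_actions, reset() -> s,
          and step(a) -> (s_next, reward, done).
    The step size and bonus follow a standard Hoeffding-style recipe
    (e.g., Jin et al., 2018); they are assumptions, not the paper's choices.
    """
    A = env.num_actions
    Q = np.full((H, M, A), float(H))      # optimistic initialization at the value upper bound H
    N = np.zeros((H, M, A), dtype=int)    # visit counts per (step, aggregate state, action)

    for k in range(K):
        s = env.reset()
        for h in range(H):
            m = phi(s)
            a = int(np.argmax(Q[h, m]))   # act greedily w.r.t. the optimistic Q-values
            s_next, r, done = env.step(a)

            N[h, m, a] += 1
            t = N[h, m, a]
            alpha = (H + 1) / (H + t)                                   # learning rate (assumed schedule)
            bonus = c * np.sqrt(H**3 * np.log(M * A * K * H) / t)       # exploration bonus (assumed form)

            v_next = 0.0 if (done or h == H - 1) else min(H, Q[h + 1, phi(s_next)].max())
            target = r + v_next + bonus
            Q[h, m, a] = (1 - alpha) * Q[h, m, a] + alpha * min(H, target)

            if done:
                break
            s = s_next
    return Q
```

Because the Q-table is indexed only by the step, the aggregate state, and the action, its size is H x M x A, which is why the resulting regret guarantees can avoid any dependence on the number of underlying states.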
