Regret Bounds for Reinforcement Learning via Markov Chain Concentration

08/06/2018
by   Ronald Ortner, et al.

We give a simple optimistic algorithm for which it is easy to derive regret bounds of Õ(√(t_mix · S · A · T)) after T steps in uniformly ergodic MDPs with S states, A actions, and mixing time parameter t_mix. These bounds are the first regret bounds in the general, non-episodic setting with an optimal dependence on all given parameters. They could only be improved by using an alternative mixing time parameter.
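For context, here is the claimed bound written out in LaTeX, paired with the standard average-reward regret definition used in this literature. This is a sketch under assumed conventions: the abstract itself defines neither ρ* (the optimal average reward) nor r_t (the reward collected at step t).

    % Standard average-reward regret after T steps (assumed definition,
    % following the UCRL-style literature; not stated in the abstract):
    \[
      R_T \;=\; T\rho^{*} \;-\; \sum_{t=1}^{T} r_t
    \]
    % The bound claimed in the abstract, with S states, A actions,
    % and mixing time parameter t_mix:
    \[
      R_T \;=\; \tilde{O}\!\left(\sqrt{t_{\mathrm{mix}}\, S\, A\, T}\right)
    \]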
