Regret Bounds for Reinforcement Learning via Markov Chain Concentration
We give a simple optimistic algorithm for which it is easy to derive regret bounds of Õ(√(t_mix S A T)) after T steps in uniformly ergodic MDPs with S states, A actions, and mixing time parameter t_mix. These bounds are the first regret bounds in the general, non-episodic setting with an optimal dependence on all given parameters. They could only be improved by using an alternative mixing time parameter.
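To make the scaling of the stated bound concrete, the following is a minimal numeric sketch in Python. It only evaluates the leading-order expression √(t_mix S A T); the constant c, standing in for the logarithmic factors hidden by the Õ notation, is a hypothetical placeholder and not a value from the paper.

```python
import math

def regret_bound(t_mix: float, S: int, A: int, T: int, c: float = 1.0) -> float:
    """Leading-order regret bound Õ(sqrt(t_mix * S * A * T)).

    c is a hypothetical constant absorbing the logarithmic factors
    hidden in the Õ notation; it is not specified in the abstract.
    """
    return c * math.sqrt(t_mix * S * A * T)

# Example: a uniformly ergodic MDP with S = 50 states, A = 4 actions,
# mixing time t_mix = 10, run for T = 10^6 steps.
print(regret_bound(t_mix=10, S=50, A=4, T=10**6))  # ≈ 44721 * c
```

Note how the bound grows as √T, so the per-step regret vanishes as T → ∞, and how it scales with √(S A) rather than with the full number of state-action pairs squared.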