A Tractable Algorithm For Finite-Horizon Continuous Reinforcement Learning

06/26/2019
by   Phanideep Gampa, et al.

We consider the finite-horizon continuous reinforcement learning problem. Our contribution is three-fold. First, we give a tractable algorithm based on optimistic value iteration for the problem. Next, we give a lower bound on regret of order Ω(T^{2/3}) for any algorithm that discretizes the state space, improving the previous regret bound of Ω(T^{1/2}) of Ortner and Ryabko for the same problem. Next, under the assumption that the rewards and transitions are Hölder continuous, we show that the upper bound on the discretization error is const. · Ln^{-α}T. Finally, we give some simple experiments to validate our propositions.
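To make the "optimistic value iteration over a discretized state space" idea concrete, here is a minimal sketch in Python. It is not the paper's algorithm: the bonus form (a Hoeffding-style 1/√n term), the array shapes, and the function name `optimistic_value_iteration` are all illustrative assumptions. It only shows the generic pattern of finite-horizon backward induction with an exploration bonus added to empirical rewards.

```python
import numpy as np

def optimistic_value_iteration(P_hat, R_hat, counts, H, bonus_scale=1.0):
    """Finite-horizon value iteration with an optimism bonus (illustrative sketch).

    P_hat:  (S, A, S) empirical transition probabilities on the
            discretized state space
    R_hat:  (S, A) empirical mean rewards, assumed in [0, 1]
    counts: (S, A) visit counts driving the exploration bonus
    H:      episode horizon
    Returns optimistic Q-values of shape (H, S, A).
    """
    S, A = R_hat.shape
    Q = np.zeros((H + 1, S, A))
    V = np.zeros((H + 1, S))
    # Hoeffding-style bonus; the exact confidence width in the paper may differ.
    bonus = bonus_scale / np.sqrt(np.maximum(counts, 1))
    for h in range(H - 1, -1, -1):
        # Backward induction: one-step reward + bonus + expected next value.
        Q[h] = R_hat + bonus + P_hat @ V[h + 1]
        # A return over the remaining H - h steps cannot exceed H - h.
        Q[h] = np.minimum(Q[h], H - h)
        V[h] = Q[h].max(axis=1)
    return Q[:H]
```

Acting greedily with respect to `Q[h]` at step `h` of each episode, then re-estimating `P_hat`, `R_hat`, and `counts` from the observed transitions, gives the usual optimism-in-the-face-of-uncertainty loop that such regret analyses study.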
