Provably Efficient Reinforcement Learning with General Value Function Approximation

05/21/2020
by Ruosong Wang, et al.

Value function approximation has demonstrated phenomenal empirical success in reinforcement learning (RL). Nevertheless, despite recent progress in developing theory for RL with linear function approximation, the understanding of general function approximation schemes is largely missing. In this paper, we establish the first provably efficient RL algorithm with general value function approximation. In particular, we show that if the value functions admit an approximation with a function class F, our algorithm achieves a regret bound of O(poly(dH)·√T), where d is a complexity measure of F, H is the planning horizon, and T is the number of interactions with the environment. Our theory strictly generalizes recent progress on RL with linear function approximation and does not make explicit assumptions on the model of the environment. Moreover, our algorithm is model-free and provides a framework to justify algorithms used in practice.
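To make the setting concrete, below is a minimal, schematic sketch of optimism-based least-squares value iteration with a value-function class F, the broad framework the abstract describes. It is not the paper's algorithm: the toy MDP, the one-hot linear function class `phi`, the ridge regression in `fit_f`, and the coverage-based `bonus` rule are all hypothetical placeholders chosen only to illustrate the "regress Q-values over F, add an exploration bonus, act greedily" pattern.

```python
# Schematic sketch of optimistic least-squares value iteration with a
# function class F. Illustrative only -- NOT the paper's exact algorithm;
# the MDP, features, and bonus below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Toy MDP: 3 states, 2 actions, horizon H.
S, A, H = 3, 2, 5
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel P(s' | s, a)
R = rng.uniform(size=(S, A))                 # reward r(s, a) in [0, 1]

def phi(s, a):
    """Hypothetical features defining a toy linear class F (one-hot over (s, a))."""
    v = np.zeros(S * A)
    v[s * A + a] = 1.0
    return v

def fit_f(data, targets, lam=1.0):
    """Least-squares regression over F: fit f(s, a) ~ target on past data."""
    X = np.array([phi(s, a) for s, a in data]) if data else np.zeros((0, S * A))
    y = np.array(targets)
    return np.linalg.solve(X.T @ X + lam * np.eye(S * A), X.T @ y)

def bonus(data, s, a, beta=1.0, lam=1.0):
    """Exploration bonus: large when (s, a) is poorly covered by past data."""
    X = np.array([phi(s_, a_) for s_, a_ in data]) if data else np.zeros((0, S * A))
    Sigma = X.T @ X + lam * np.eye(S * A)
    x = phi(s, a)
    return beta * np.sqrt(x @ np.linalg.solve(Sigma, x))

replay = []  # (s, a, r, s') tuples from all past episodes
for episode in range(20):
    # Backward optimistic value iteration: regress Q_h over F, add a bonus.
    Q = np.zeros((H + 1, S, A))
    data = [(s, a) for (s, a, _, _) in replay]
    for h in reversed(range(H)):
        targets = [r + Q[h + 1, sn].max() for (_, _, r, sn) in replay]
        w = fit_f(data, targets)
        for s in range(S):
            for a in range(A):
                Q[h, s, a] = min(phi(s, a) @ w + bonus(data, s, a), H)
    # Act greedily w.r.t. the optimistic Q-values and record the trajectory.
    s = 0
    for h in range(H):
        a = int(Q[h, s].argmax())
        sn = rng.choice(S, p=P[s, a])
        replay.append((s, a, R[s, a], sn))
        s = sn
```

In this sketch the bonus plays the role of the exploration term whose cumulative size the paper controls through the complexity measure d of F; with a richer (non-linear) class, the regression and bonus would be defined abstractly over F rather than through fixed features.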
