Provably Efficient Reinforcement Learning with General Value Function Approximation
Value function approximation has demonstrated phenomenal empirical success in reinforcement learning (RL). Nevertheless, despite a handful of recent advances in developing theory for RL with linear function approximation, an understanding of general function approximation schemes remains largely missing. In this paper, we establish the first provably efficient RL algorithm with general value function approximation. In particular, we show that if the value functions admit an approximation with a function class F, our algorithm achieves a regret bound of O(poly(dH)√T), where d is a complexity measure of F, H is the planning horizon, and T is the number of interactions with the environment. Our theory strictly generalizes recent progress on RL with linear function approximation and does not make explicit assumptions on the model of the environment. Moreover, our algorithm is model-free and provides a framework to justify algorithms used in practice.
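To make the flavor of such an algorithm concrete, below is a minimal sketch of optimistic least-squares value iteration: regress each Q_h onto reward plus the next-step value, add an exploration bonus, then act greedily. It assumes a toy tabular MDP and a one-hot (i.e., linear) feature map standing in for the general class F, and it uses the elliptical bonus from the linear setting rather than the paper's confidence-region-based bonus for general F; all names and constants are illustrative, not taken from the paper.

```python
# Minimal sketch (illustrative only): optimistic LSVI with a simple
# function class standing in for F, on a randomly generated toy MDP.
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical toy episodic MDP, for illustration only ---
S, A, H = 5, 3, 4                              # states, actions, horizon
P = rng.dirichlet(np.ones(S), size=(S, A))     # transition kernel P[s, a]
R = rng.uniform(size=(S, A))                   # mean rewards R[s, a]

def phi(s, a):
    """One-hot features; a stand-in for a member of the class F."""
    x = np.zeros(S * A)
    x[s * A + a] = 1.0
    return x

def run_optimistic_lsvi(episodes=200, beta=0.5, lam=1.0):
    """Fitted value iteration with an optimism bonus (sketch)."""
    data = [[] for _ in range(H)]   # transitions (s, a, r, s_next) per step h
    total_reward = 0.0
    for _ in range(episodes):
        # Backward pass: regress Q_h onto reward + next-step value,
        # then add an exploration bonus (elliptical, linear case).
        Q = np.zeros((H + 1, S, A))
        for h in reversed(range(H)):
            Sigma = lam * np.eye(S * A)
            theta = np.zeros(S * A)
            if data[h]:
                X = np.array([phi(s, a) for s, a, _, _ in data[h]])
                y = np.array([r + Q[h + 1, sn].max() for _, _, r, sn in data[h]])
                Sigma += X.T @ X
                theta = np.linalg.solve(Sigma, X.T @ y)
            Sigma_inv = np.linalg.inv(Sigma)
            for s in range(S):
                for a in range(A):
                    f = phi(s, a)
                    bonus = beta * np.sqrt(f @ Sigma_inv @ f)
                    Q[h, s, a] = min(H - h, f @ theta + bonus)
        # Forward pass: act greedily w.r.t. the optimistic Q and log data.
        s = 0
        for h in range(H):
            a = int(np.argmax(Q[h, s]))
            sn = rng.choice(S, p=P[s, a])
            r = R[s, a]
            total_reward += r
            data[h].append((s, a, r, sn))
            s = sn
    return total_reward

print("cumulative reward:", run_optimistic_lsvi())
```

The loop is model-free in the sense the abstract describes: it never estimates P or R, only fits value functions from observed transitions; swapping the one-hot features and least squares for a richer regression oracle over F is where the paper's general analysis would apply.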