Adaptive Temporal Difference Learning with Linear Function Approximation

02/20/2020
by   Tao Sun, et al.

This paper revisits the celebrated temporal difference (TD) learning algorithm for policy evaluation in reinforcement learning. The performance of plain-vanilla TD is typically sensitive to the choice of stepsizes, and TD often suffers from slow convergence. Motivated by the tight connection between TD learning and stochastic gradient methods, we develop the first adaptive variant of the TD learning algorithm with linear function approximation, which we term AdaTD. In contrast to the original TD, AdaTD is robust, or less sensitive, to the choice of stepsizes. Analytically, we establish that to reach an ϵ accuracy, the number of iterations needed is Õ(ϵ^(-2) ln^4(1/ϵ) / ln^4(1/ρ)), where ρ characterizes the speed at which the underlying Markov chain converges to its stationary distribution. This implies that the iteration complexity of AdaTD is no worse than that of TD in the worst case. Going beyond TD, we further develop an adaptive variant of TD(λ), referred to as AdaTD(λ). We evaluate the empirical performance of AdaTD and AdaTD(λ) on several standard reinforcement learning tasks in OpenAI Gym with both linear and nonlinear function approximation; the results demonstrate the effectiveness of our new approaches over existing ones.
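
To make the adaptive-stepsize idea concrete, the sketch below shows an AdaGrad-style TD(0) update with linear function approximation. This is only a minimal illustration of combining TD's semi-gradient with a per-coordinate adaptive stepsize, not the authors' exact AdaTD update; the `env_step` and `phi` interfaces, the accumulator rule, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

def adaptive_td0(env_step, phi, num_features, init_state,
                 gamma=0.99, alpha=0.1, eps=1e-8, num_steps=10_000):
    """AdaGrad-style TD(0) for policy evaluation with linear features (sketch).

    env_step(state) -> (reward, next_state): samples one transition under the
        policy being evaluated (assumed interface).
    phi(state) -> np.ndarray of shape (num_features,): linear feature map.
    """
    theta = np.zeros(num_features)   # value-function weights, V(s) = phi(s) @ theta
    accum = np.zeros(num_features)   # running sum of squared semi-gradients (assumed rule)
    state = init_state

    for _ in range(num_steps):
        reward, next_state = env_step(state)
        # TD error: delta = r + gamma * V(s') - V(s)
        delta = reward + gamma * phi(next_state) @ theta - phi(state) @ theta
        g = delta * phi(state)       # semi-gradient direction for TD(0)
        accum += g * g               # AdaGrad-style accumulation
        theta += alpha * g / (np.sqrt(accum) + eps)  # per-coordinate stepsize
        state = next_state
    return theta
```

The point of the per-coordinate scaling is that the effective stepsize shrinks automatically where the semi-gradients are large, which is why such schemes tend to be less sensitive to the initial choice of alpha than plain TD.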
