An Empirical Evaluation of True Online TD(λ)

07/01/2015
by Harm van Seijen, et al.

The true online TD(λ) algorithm has recently been proposed (van Seijen and Sutton, 2014) as a universal replacement for the popular TD(λ) algorithm in temporal-difference learning and reinforcement learning. True online TD(λ) has better theoretical properties than conventional TD(λ), and the expectation is that it also results in faster learning. In this paper, we put this hypothesis to the test. Specifically, we compare the performance of true online TD(λ) with that of TD(λ) on challenging examples, random Markov reward processes, and a real-world myoelectric prosthetic arm. We use linear function approximation with tabular, binary, and non-binary features. We assess the algorithms along three dimensions: computational cost, learning speed, and ease of use. Our results confirm the strengths of true online TD(λ): 1) for sparse feature vectors, the computational overhead with respect to TD(λ) is minimal, and for non-sparse features the computation time is at most twice that of TD(λ); 2) across all domains and representations, the learning speed of true online TD(λ) is often better than, and never worse than, that of TD(λ); and 3) true online TD(λ) is easier to use, because it does not require choosing between trace types and is generally more stable with respect to the step-size. Overall, our results suggest that true online TD(λ) should be the first choice when looking for an efficient, general-purpose TD method.
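For context, the sketch below shows one way the true online TD(λ) update with linear function approximation is commonly written, following the update equations attributed to van Seijen and Sutton (2014): a "dutch" eligibility trace with an extra correction term, plus a weight update involving the previous state's value. The interface (an iterable of (phi, reward, phi_next, terminal) tuples) and the parameter defaults are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def true_online_td_lambda(steps, n_features, alpha=0.01, gamma=0.99, lam=0.9):
    """Minimal sketch of true online TD(lambda) with linear function approximation.

    `steps` is assumed to yield (phi, reward, phi_next, terminal) tuples,
    where phi and phi_next are feature vectors of length n_features.
    """
    theta = np.zeros(n_features)   # weight vector
    e = np.zeros(n_features)       # dutch-style eligibility trace
    v_old = 0.0                    # value of the previous state under the previous weights

    for phi, reward, phi_next, terminal in steps:
        v = theta @ phi
        v_next = 0.0 if terminal else theta @ phi_next
        delta = reward + gamma * v_next - v
        # Dutch trace: the subtracted term distinguishes it from the
        # accumulating trace used by conventional TD(lambda).
        e = gamma * lam * e + alpha * phi - alpha * gamma * lam * (e @ phi) * phi
        # Weight update with the additional (v - v_old) correction terms.
        theta += (delta + v - v_old) * e - alpha * (v - v_old) * phi
        v_old = v_next
        if terminal:
            e[:] = 0.0
            v_old = 0.0
    return theta
```

Note that the step-size alpha is folded into the trace update, which is why there is no need to choose between accumulating and replacing traces, one of the ease-of-use points raised in the abstract.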


