ℓ_1 Regularized Gradient Temporal-Difference Learning

10/05/2016
by Dominik Meyer, et al.

In this paper, we study Temporal Difference (TD) learning with linear value function approximation. It is well known that most TD learning algorithms become unstable when linear function approximation is combined with off-policy learning. The recent development of Gradient TD (GTD) algorithms has successfully addressed this problem. However, the success of GTD algorithms requires a set of well-chosen features, which are not always available. When the number of features is large, GTD algorithms may overfit and become computationally expensive. To cope with this difficulty, regularization techniques, in particular ℓ_1 regularization, have attracted significant attention in the development of TD learning algorithms. The present work combines GTD algorithms with ℓ_1 regularization. We propose a family of ℓ_1 regularized GTD algorithms that employ the well-known soft-thresholding operator. We investigate the convergence properties of the proposed algorithms and illustrate their performance in several numerical experiments.
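The soft-thresholding operator mentioned in the abstract is the proximal operator of the ℓ_1 norm, S_λ(x) = sign(x) · max(|x| − λ, 0). As a minimal sketch (not the paper's exact algorithm; the NumPy implementation and the `lam` parameter name are assumptions), it can be written and applied as a proximal step after a gradient update:

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding: sign(x) * max(|x| - lam, 0),
    the proximal operator of lam * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Hypothetical usage: shrink a weight vector after a gradient step,
# zeroing out coordinates whose magnitude falls below lam.
theta = np.array([3.0, -0.5, 1.0])
theta = soft_threshold(theta, 1.0)  # -> [2.0, 0.0, 0.0]
```

Applied after each GTD gradient step, this shrinkage is what drives small weights exactly to zero, producing the sparse solutions that ℓ_1 regularization is used for.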


Related research

09/10/2022
Gradient Descent Temporal Difference-difference Learning
Off-policy algorithms, in which a behavior policy differs from the targe...

04/21/2023
A Cubic-regularized Policy Newton Algorithm for Reinforcement Learning
We consider the problem of control in the setting of reinforcement learn...

07/01/2020
Gradient Temporal-Difference Learning with Regularized Corrections
It is still common to use Q-learning and temporal difference (TD) learni...

10/27/2020
Temporal Difference Learning as Gradient Splitting
Temporal difference learning with linear function approximation is a pop...

09/09/2021
Versions of Gradient Temporal Difference Learning
Sutton, Szepesvári and Maei introduced the first gradient temporal-diffe...

06/27/2012
A Dantzig Selector Approach to Temporal Difference Learning
LSTD is a popular algorithm for value function approximation. Whenever t...

06/02/2021
An Empirical Comparison of Off-policy Prediction Learning Algorithms on the Collision Task
Off-policy prediction – learning the value function for one policy from ...
