Gradient Temporal-Difference Learning with Regularized Corrections

07/01/2020
by Sina Ghiassian, et al.

It is still common to use Q-learning and temporal difference (TD) learning, even though they have divergence issues and sound Gradient TD alternatives exist, because divergence seems rare in practice and they typically perform well. However, recent work with large neural network learning systems reveals that instability is more common than previously thought. Practitioners therefore face a difficult dilemma: choose an easy-to-use and performant TD method, or a more complex algorithm that is more sound but harder to tune and all but unexplored with non-linear function approximation or control. In this paper, we introduce a new method called TD with Regularized Corrections (TDRC) that attempts to balance ease of use, soundness, and performance. It behaves as well as TD when TD performs well, but remains sound in cases where TD diverges. We empirically investigate TDRC across a range of problems, for both prediction and control and for both linear and non-linear function approximation, and show, potentially for the first time, that gradient TD methods could be a better alternative to TD and Q-learning.
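The abstract does not spell out the update equations, but for intuition, below is a minimal sketch of a TDRC-style update for linear off-policy prediction, assuming the usual TDC form with an added l2 penalty on the secondary (correction) weight vector. The class name, the shared step size, and the default value of beta are illustrative assumptions for this sketch, not the paper's reference implementation.

```python
import numpy as np

class TDRCSketch:
    """Sketch of a TDRC-style linear prediction update (assumed form:
    TDC with an l2 regularizer on the secondary weights h)."""

    def __init__(self, n_features, alpha=0.01, beta=1.0, gamma=0.99):
        self.w = np.zeros(n_features)   # value-function weights
        self.h = np.zeros(n_features)   # secondary (correction) weights
        self.alpha = alpha              # step size (shared by w and h in this sketch)
        self.beta = beta                # regularization strength (assumed default)
        self.gamma = gamma              # discount factor

    def update(self, x, reward, x_next, rho=1.0):
        """One transition's update.

        x, x_next : feature vectors for the current and next state
        rho       : importance-sampling ratio (1.0 when on-policy)
        """
        # TD error under the current value estimate.
        delta = reward + self.gamma * self.w @ x_next - self.w @ x
        # Primary update: TD step minus the gradient-correction term.
        self.w += self.alpha * rho * (delta * x - self.gamma * (self.h @ x) * x_next)
        # Secondary update: running estimate of the expected TD error,
        # with an l2 term pulling h toward zero (the "regularized correction").
        self.h += self.alpha * (rho * (delta - self.h @ x) * x - self.beta * self.h)
```

In this sketch, setting beta to zero recovers a plain TDC-style update, while larger beta shrinks the correction weights toward zero, pushing the behavior closer to ordinary TD.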


research
04/28/2021

A Generalized Projected Bellman Error for Off-policy Value Estimation in Reinforcement Learning

Many reinforcement learning algorithms rely on value estimation. However...
research
10/05/2016

ℓ_1 Regularized Gradient Temporal-Difference Learning

In this paper, we study the Temporal Difference (TD) learning with linea...
research
03/21/2019

Towards Characterizing Divergence in Deep Q-Learning

Deep Q-Learning (DQL), a family of temporal difference algorithms for co...
research
05/10/2021

Parameter-free Gradient Temporal Difference Learning

Reinforcement learning lies at the intersection of several challenges. M...
research
11/06/2018

Online Off-policy Prediction

This paper investigates the problem of online prediction learning, where...
research
07/25/2022

On the benefits of non-linear weight updates

Recent work has suggested that the generalisation performance of a DNN i...
research
03/01/2019

Should All Temporal Difference Learning Use Emphasis?

Emphatic Temporal Difference (ETD) learning has recently been proposed a...
