Properties of the Least Squares Temporal Difference learning algorithm

01/22/2013
by Kamil Ciosek et al.

This paper presents four different ways of looking at the well-known Least Squares Temporal Difference (LSTD) algorithm for computing the value function of a Markov Reward Process, each leading to different insights: the operator-theoretic view via the Galerkin method, the statistical view via instrumental variables, the linear dynamical-system view, and the view of LSTD as the limit of the TD iteration. We also give a geometric interpretation of the algorithm as an oblique projection. Furthermore, we extensively compare the optimization problem solved by LSTD with that of Bellman Residual Minimization (BRM). We then review several schemes for regularizing the LSTD solution, and finally treat the modification of LSTD for episodic Markov Reward Processes.
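As a concrete illustration of what LSTD computes, here is a minimal batch sketch in Python with NumPy. It assumes features have already been extracted from sampled transitions (s_t, r_t, s_{t+1}); the function name `lstd`, the ridge term, and the random data below are illustrative assumptions, not the paper's code. It solves the TD fixed-point system A w = b with A = Phi^T (Phi - gamma * Phi') and b = Phi^T r, so that V(s) is approximated by phi(s)^T w; the ridge term is one simple instance of the kind of regularization scheme the abstract mentions.

```python
import numpy as np

def lstd(phi, phi_next, rewards, gamma=0.95, reg=1e-6):
    """Batch LSTD for a Markov Reward Process (illustrative sketch).

    phi      : (n, k) feature matrix for visited states s_t
    phi_next : (n, k) feature matrix for successor states s_{t+1}
    rewards  : (n,)   observed rewards r_t
    gamma    : discount factor
    reg      : small ridge term (a hypothetical, simple choice of
               regularization; the paper reviews several schemes)

    Solves A w = b with A = Phi^T (Phi - gamma * Phi') and
    b = Phi^T r, the TD fixed point in the span of the features.
    """
    A = phi.T @ (phi - gamma * phi_next)
    b = phi.T @ rewards
    # Ridge regularization keeps A invertible when samples are scarce.
    k = phi.shape[1]
    return np.linalg.solve(A + reg * np.eye(k), b)

# Hypothetical usage on random data, for illustration only.
rng = np.random.default_rng(0)
n, k = 500, 8
phi = rng.normal(size=(n, k))
phi_next = rng.normal(size=(n, k))
rewards = rng.normal(size=n)
w = lstd(phi, phi_next, rewards)
print("value-function weights:", w)
```

Note that A is in general not symmetric, which is one way of seeing that LSTD performs an oblique rather than an orthogonal projection.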


Related research

12/12/2012 · Value Function Approximation in Zero-Sum Markov Games
This paper investigates value function approximation in the context of z...

11/19/2010 · Should one compute the Temporal Difference fix point or minimize the Bellman Residual? The unified oblique projection view
We investigate projection methods, for evaluating a linear approximation...

02/27/2023 · Residual QPAS subspace (ResQPASS) algorithm for bounded-variable least squares (BVLS) with superlinear Krylov convergence
This paper presents the Residual QPAS Subspace method (ResQPASS) method ...

09/24/2021 · Optimal policy evaluation using kernel-based temporal difference methods
We study methods based on reproducing kernel Hilbert spaces for estimati...

05/10/2019 · Second Order Value Iteration in Reinforcement Learning
Value iteration is a fixed point iteration technique utilized to obtain ...

11/14/2019 · Supplementary material for Uncorrected least-squares temporal difference with lambda-return
Here, we provide a supplementary material for Takayuki Osogami, "Uncorre...

08/24/2023 · Intentionally-underestimated Value Function at Terminal State for Temporal-difference Learning with Mis-designed Reward
Robot control using reinforcement learning has become popular, but its l...
