Efficient Reinforcement Learning Using Recursive Least-Squares Methods

06/03/2011
by H. He, et al.

The recursive least-squares (RLS) algorithm is one of the most well-known algorithms in adaptive filtering, system identification, and adaptive control. Its popularity is mainly due to its fast convergence speed, which is considered to be optimal in practice. In this paper, RLS methods are used to solve reinforcement learning problems, and two new reinforcement learning algorithms using linear value-function approximators are proposed and analyzed. The two algorithms are called RLS-TD(lambda) and Fast-AHC (Fast Adaptive Heuristic Critic). RLS-TD(lambda) can be viewed as the extension of RLS-TD(0) from lambda=0 to general lambda in the interval [0,1], so it is a multi-step temporal-difference (TD) learning algorithm using RLS methods. Convergence with probability one, and the limit of convergence, of RLS-TD(lambda) are proved for ergodic Markov chains. Compared to the existing LS-TD(lambda) algorithm, RLS-TD(lambda) has computational advantages and is better suited to online learning. The effectiveness of RLS-TD(lambda) is analyzed and verified by learning-prediction experiments on Markov chains with a wide range of parameter settings. The Fast-AHC algorithm is derived by applying the proposed RLS-TD(lambda) algorithm in the critic network of the adaptive heuristic critic method. Unlike the conventional AHC algorithm, Fast-AHC uses RLS methods to improve the learning-prediction efficiency of the critic. Learning-control experiments on the cart-pole balancing and acrobot swing-up problems are conducted to compare the data efficiency of Fast-AHC with that of conventional AHC. The experimental results show that the data efficiency of learning control can also be improved by using RLS methods in the learning-prediction process of the critic. The performance of Fast-AHC is also compared with that of the AHC method using LS-TD(lambda). Furthermore, the experiments demonstrate that different initial values of the variance matrix in RLS-TD(lambda) are required to obtain good performance, both in learning prediction and in learning control. The experimental results are analyzed in light of existing theoretical work on the transient phase of forgetting-factor RLS methods.
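To make the recursive update concrete, below is a minimal Python sketch of an RLS-TD(lambda)-style learner, based only on the abstract's description: it keeps a linear weight vector, an eligibility trace, and an inverse "variance" matrix P that is updated recursively via the Sherman-Morrison identity with a forgetting factor, rather than re-solved as in LS-TD(lambda). The class name, the default values of delta (the initial variance-matrix scale) and mu (the forgetting factor), and the exact trace-decay convention are illustrative assumptions, not the paper's specification.

    import numpy as np

    class RLSTDLambda:
        """Hypothetical sketch of an RLS-TD(lambda) value learner.

        theta: linear value-function weights
        z:     eligibility trace
        P:     inverse correlation ("variance") matrix, maintained
               recursively instead of being re-inverted each step
        """

        def __init__(self, n_features, lam=0.8, gamma=0.95,
                     delta=100.0, mu=1.0):
            self.lam, self.gamma, self.mu = lam, gamma, mu
            self.theta = np.zeros(n_features)
            self.z = np.zeros(n_features)
            self.P = delta * np.eye(n_features)   # P_0 = delta * I (assumed init)

        def update(self, phi, phi_next, reward, done=False):
            """One O(n^2) recursive update from a transition (s, r, s')."""
            g = 0.0 if done else self.gamma
            self.z = g * self.lam * self.z + phi  # decay and accumulate trace
            d = phi - g * phi_next                # temporal feature difference
            Pz = self.P @ self.z
            gain = Pz / (self.mu + d @ Pz)        # Sherman-Morrison gain vector
            self.theta += gain * (reward - d @ self.theta)
            self.P = (self.P - np.outer(gain, d @ self.P)) / self.mu

        def value(self, phi):
            return float(phi @ self.theta)

In this sketch, a large delta makes the early updates aggressive while a small delta makes them conservative, which is consistent with the abstract's observation that the initial value of the variance matrix materially affects performance in both learning prediction and learning control.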


