A Unified Approach for Multi-step Temporal-Difference Learning with Eligibility Traces in Reinforcement Learning

02/09/2018 · Long Yang et al. · Zhejiang University

Recently, a new multi-step temporal-difference learning algorithm, called Q(σ), unified n-step Tree-Backup (when σ=0) and n-step Sarsa (when σ=1) by introducing a sampling parameter σ. However, similar to other multi-step temporal-difference learning algorithms, Q(σ) requires considerable memory and computation time. The eligibility trace is an important mechanism that transforms off-line updates into efficient on-line ones, which consume less memory and computation time. In this paper, we further develop the original Q(σ), combine it with eligibility traces, and propose a new algorithm, called Q(σ,λ), in which λ is the trace-decay parameter. This idea unifies Sarsa(λ) (when σ=1) and Q^π(λ) (when σ=0). Furthermore, we give an upper error bound for the Q(σ,λ) policy evaluation algorithm, and we prove that the Q(σ,λ) control algorithm converges to the optimal value function exponentially fast. We also compare it empirically with conventional temporal-difference learning methods. Results show that, with an intermediate value of σ, Q(σ,λ) creates a mixture of the existing algorithms that can learn the optimal value significantly faster than either extreme (σ=0 or σ=1).


1 Introduction

In reinforcement learning, experiences are sequences of states, actions and rewards generated by the agent interacting with the environment. The agent's goal is to learn from these experiences and seek an optimal policy for the delayed-reward decision system. Two fundamental mechanisms have been studied. One is temporal-difference (TD) learning, which is a combination of Monte Carlo methods and dynamic programming [Sutton1988]. The other is the eligibility trace [Sutton1984, Watkins1989], a short-term memory process defined as a function of states. TD learning combined with eligibility traces provides a bridge between one-step learning and Monte Carlo methods through the trace-decay parameter λ [Sutton1988].

Recently, multi-step Q(σ) [Sutton and Barto2017] unified n-step Sarsa (σ=1, full-sampling) and n-step Tree-Backup (σ=0, pure-expectation). For an intermediate value of σ, Q(σ) creates a mixture of the full-sampling and pure-expectation approaches and can perform better than either extreme case σ=0 or σ=1 [De Asis et al.2018].

The results in [De Asis et al.2018] imply a fundamental trade-off problem in reinforcement learning: should one estimate the value function by a pure-expectation (σ=0) algorithm or a full-sampling (σ=1) algorithm? Although the pure-expectation approach has lower variance, it requires more complex and larger computation [Van Seijen et al.2009]. On the other hand, the full-sampling algorithm requires less computation time, but it may have worse asymptotic performance [De Asis et al.2018]. Multi-step Q(σ) [Sutton and Barto2017] was the first attempt to combine pure-expectation with full-sampling algorithms; however, multi-step temporal-difference learning is too expensive during training. In this paper, we combine the Q(σ) algorithm with eligibility traces and create a new algorithm, called Q(σ,λ). Our Q(σ,λ) unifies the Sarsa(λ) algorithm [Rummery and Niranjan1994] and the Q^π(λ) algorithm [Harutyunyan2016]: as σ varies between 0 and 1, Q(σ,λ) changes continuously from Q^π(λ) (σ=0 in Q(σ,λ)) to Sarsa(λ) (σ=1 in Q(σ,λ)). In this paper, we also focus on the trade-off between pure expectation and full sampling in the control task; our experiments show that an intermediate value of σ can achieve better performance than either extreme case.

Our contributions are summarized as follows:

  • We define a new operator, the mixed-sampling operator, through which we deduce the corresponding Q(σ,λ) policy evaluation algorithm and control algorithm.

  • For the new policy evaluation algorithm, we give an upper error bound.

  • We present a new algorithm, Q(σ,λ), which unifies Sarsa(λ) and Q^π(λ). For the control problem, we prove that both the off-line and the on-line algorithms converge to the optimal value function.

2 Framework and Notation

The standard episodic reinforcement learning framework [Sutton and Barto2017] is often formalized as a Markov decision process (MDP). Such a framework considers a 5-tuple (S, A, P, R, γ), where S indicates the set of all states, A indicates the set of all actions, P(s'|s,a) indicates the state-transition probability from state s to state s' under action a, R indicates the expected reward for a transition, and γ is the discount factor. In this paper, we denote by τ a trajectory of the state-action-reward sequence in one episode. A policy π is a probability distribution over actions at each state, and a stationary policy is a policy that does not change over time.

Consider state-action value functions Q that map S × A to ℝ. For a given policy π, the corresponding state-action value is

Q^π(s,a) = E_π[ Σ_{t≥0} γ^t R_{t+1} | S_0 = s, A_0 = a ].

The optimal state-action value is defined as:

Q*(s,a) = max_π Q^π(s,a).

Bellman operator T^π:

(T^π Q)(s,a) = R(s,a) + γ Σ_{s'} P(s'|s,a) Σ_{a'} π(a'|s') Q(s',a'),   (1)

Bellman optimality operator T:

(T Q)(s,a) = R(s,a) + γ Σ_{s'} P(s'|s,a) max_{a'} Q(s',a'),   (2)

where, in matrix notation, T^π Q = R + γ P^π Q, and the entry of P^π corresponding to the pair ((s,a),(s',a')) is P(s'|s,a) π(a'|s').

The value functions Q^π and Q* satisfy the Bellman equation and the optimal Bellman equation, respectively: T^π Q^π = Q^π and T Q* = Q*.

Both T^π and T are γ-contraction operators in the sup-norm, that is to say, for any Q₁, Q₂, ‖T^π Q₁ − T^π Q₂‖∞ ≤ γ‖Q₁ − Q₂‖∞ and ‖T Q₁ − T Q₂‖∞ ≤ γ‖Q₁ − Q₂‖∞. From the fact that the fixed point of a contraction operator is unique, value iteration converges: (T^π)^k Q → Q^π and T^k Q → Q* as k → ∞, for any initial Q [Bertsekas et al.2005].
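For completeness, the exponential rate behind this convergence follows from the contraction property in one line (a standard derivation, written in the notation used above): since Q^π is a fixed point of T^π,

$$\left\|(\mathcal{T}^\pi)^k Q - Q^\pi\right\|_\infty = \left\|(\mathcal{T}^\pi)^k Q - (\mathcal{T}^\pi)^k Q^\pi\right\|_\infty \le \gamma^k \left\|Q - Q^\pi\right\|_\infty \xrightarrow{\;k\to\infty\;} 0,$$

and the same argument applied to T and Q* gives T^k Q → Q*.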

Unfortunately, neither system (1) nor (2) can be solved directly, because the transition probability P and the reward R of the environment are usually unknown. This setting, in which a practical model of the environment is not available, is called model-free.

2.1 One-step TD Learning Algorithms

The TD learning algorithm [Sutton1984, Sutton1988] is one of the most significant algorithms in model-free reinforcement learning. The idea of bootstrapping is critical to TD learning: current estimates of the value function are used as targets during the learning process.

Given a target policy π that is to be learned and a behavior policy μ that generates the trajectory, if π = μ the learning is called on-policy learning; otherwise it is off-policy learning.

Sarsa: For a given sample transition (S_k, A_k, R_{k+1}, S_{k+1}, A_{k+1}), Sarsa [Rummery and Niranjan1994] is an on-policy learning algorithm that updates the value estimate as follows:

Q_{k+1}(S_k, A_k) = Q_k(S_k, A_k) + α_k δ_k,   (3)

δ_k = R_{k+1} + γ Q_k(S_{k+1}, A_{k+1}) − Q_k(S_k, A_k),   (4)

where δ_k is the k-th TD error and α_k is the stepsize.
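As a concrete illustration, a minimal tabular sketch of the update (3)-(4) might look as follows (our own example with hypothetical names such as sarsa_update; Q is assumed to be a NumPy array indexed by state and action):

```python
import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99, done=False):
    """One tabular Sarsa update: Q(s,a) <- Q(s,a) + alpha * delta,
    where delta is the TD error of equation (4)."""
    target = r if done else r + gamma * Q[s_next, a_next]
    delta = target - Q[s, a]       # TD error, equation (4)
    Q[s, a] += alpha * delta       # update, equation (3)
    return Q, delta
```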

Expected-Sarsa: Expected-Sarsa [Van Seijen et al.2009] uses the expectation over all next state-action pairs under the target policy π to estimate the value, replacing the TD error by:

δ_k^ES = R_{k+1} + γ Σ_a π(a|S_{k+1}) Q_k(S_{k+1}, a) − Q_k(S_k, A_k),   (5)

where δ_k^ES is the k-th expected TD error. Expected-Sarsa is an off-policy learning algorithm if π ≠ μ; for example, when π is greedy with respect to Q_k, Expected-Sarsa reduces to Q-Learning [Watkins1989]. If the trajectory is generated by π itself, Expected-Sarsa is an on-policy algorithm [Van Seijen et al.2009].

Both of the above algorithms are guaranteed to converge under some conditions [Singh et al.2000, Van Seijen et al.2009].

Q(σ): One-step Q(σ) [Sutton and Barto2017, De Asis et al.2018] is a weighted average of the Sarsa update and the Expected-Sarsa update through the sampling parameter σ:

δ_k^σ = σ δ_k + (1 − σ) δ_k^ES,   (6)

where σ ∈ [0,1] is the degree of sampling, with σ = 1 denoting full sampling and σ = 0 denoting pure expectation with no sampling; δ_k and δ_k^ES are the TD errors in (4) and (5).
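A similar minimal sketch of the one-step Q(σ) target (again an illustration of ours, assuming pi_probs holds the target policy's action probabilities at the next state):

```python
import numpy as np

def q_sigma_update(Q, s, a, r, s_next, a_next, pi_probs,
                   sigma=0.5, alpha=0.1, gamma=0.99, done=False):
    """One tabular Q(sigma) update: the target is a sigma-weighted average of
    the Sarsa (full-sampling) and Expected-Sarsa (pure-expectation) targets."""
    if done:
        target = r
    else:
        sarsa_target = r + gamma * Q[s_next, a_next]               # sigma = 1
        expected_target = r + gamma * np.dot(pi_probs, Q[s_next])  # sigma = 0
        target = sigma * sarsa_target + (1.0 - sigma) * expected_target
    delta = target - Q[s, a]       # mixed TD error of equation (6)
    Q[s, a] += alpha * delta
    return Q, delta
```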

2.2 λ-Return Algorithm

One-step TD learning algorithms can be generalized to multi-step bootstrapping methods. The λ-return algorithm [Watkins1989] is a particular way to mix many multi-step TD learning algorithms by weighting the n-step returns proportionally to λ^{n−1}.

The λ-operator¹ is a flexible way to express the λ-return algorithm. Consider a trajectory S_0, A_0, R_1, S_1, A_1, R_2, ... generated by π, and let G_k^{(n)} denote the n-step return from the initial state-action pair (S_k, A_k). The term G_k^λ = (1 − λ) Σ_{n≥1} λ^{n−1} G_k^{(n)}, called the λ-return, mixes all of the n-step returns, with λ ∈ [0,1].

¹ The notation is consistent with the textbook [Bertsekas et al.2012].

Based on the fact that Q^π is the fixed point of T^π, Q^π remains the fixed point of the λ-operator (T^π)^λ = (1 − λ) Σ_{n≥0} λ^n (T^π)^{n+1}. When λ = 0, the λ-operator is equal to the usual Bellman operator T^π. When λ = 1, the evaluation of Q^π becomes a Monte Carlo method. It is well known that λ trades off the bias of bootstrapping with an approximate Q against the variance of sampling multi-step return estimates [Kearns and Singh2000]. In practice, a high, intermediate value of λ is typically better [Singh and Dayan1998, Sutton1996].
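To make the forward view concrete, the following helper (our own sketch, not code from the paper) mixes a finite list of n-step returns into a λ-return, giving the residual weight λ^{N−1} to the last (Monte Carlo) return as is standard in episodic tasks:

```python
def lambda_return(n_step_returns, lam):
    """Mix n-step returns G^(1), ..., G^(N) into a lambda-return:
    G^lambda = (1 - lam) * sum_n lam^(n-1) G^(n), with the residual weight
    lam^(N-1) assigned to the final (Monte Carlo) return G^(N)."""
    assert len(n_step_returns) > 0
    g_lambda, weight_sum = 0.0, 0.0
    for n, g_n in enumerate(n_step_returns[:-1], start=1):
        w = (1.0 - lam) * lam ** (n - 1)
        g_lambda += w * g_n
        weight_sum += w
    g_lambda += (1.0 - weight_sum) * n_step_returns[-1]
    return g_lambda
```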

3 Mixed-sampling Operator

In this section, we present the mixed-sampling operator T_σ, which is one of our key contributions and a flexible tool for analyzing our new algorithm later. By introducing the sampling parameter σ, the mixed-sampling operator varies continuously from the pure-expectation method to the full-sampling method. We first analyze the contraction property of T_σ. Then we introduce the λ-return version of the mixed-sampling operator, denoting it T_σ^λ. Finally, we give an upper error bound for the corresponding policy evaluation algorithm.

3.1 Contraction of Mixed-sampling Operator

Definition 1.

The mixed-sampling operator T_σ is a map from ℝ^{S×A} to ℝ^{S×A}:

T_σ Q = σ T_S Q + (1 − σ) T_ES Q,   (7)

where T_S denotes the multi-step (full-sampling) Sarsa operator [Sutton and Barto2017] and T_ES denotes the pure-expectation operator built from the expected TD errors [Harutyunyan2016].

The parameter σ is the degree of sampling introduced by the Q(σ) algorithm [De Asis et al.2018]. At one extreme end (σ = 0, pure expectation), T_σ recovers the n-step returns in [Harutyunyan2016], which are built from the expected TD errors δ_k^ES. Multi-step Sarsa [Sutton and Barto2017] is the other extreme end (σ = 1, full sampling). Every intermediate value of σ creates a mixed method that varies continuously between pure expectation and full sampling, which is why we call T_σ the mixed-sampling operator.

λ-Return Version. We now define the λ-version of T_σ, denoted T_σ^λ:

T_σ^λ = (1 − λ) Σ_{n≥0} λ^n T_σ^{n+1},   (8)

where λ ∈ [0,1] is the parameter that takes the algorithm from TD(0) to the Monte Carlo version as usual. When σ = 0, T_σ^λ is restricted to the Q^π(λ) operator [Harutyunyan2016]; when σ = 1, it is restricted to the λ-operator. The next theorem provides a basic property of T_σ^λ.

Theorem 1.

The operator T_σ^λ is a γ-contraction: for any Q₁, Q₂ ∈ ℝ^{S×A}, ‖T_σ^λ Q₁ − T_σ^λ Q₂‖∞ ≤ γ‖Q₁ − Q₂‖∞.

Furthermore, for any initial Q₀, the sequence {Q_k} generated by the iteration Q_{k+1} = T_σ^λ Q_k

converges to the unique fixed point of T_σ^λ.

Proof.

Unfolding the operator T_σ^λ, we have

(9)

where the two components are the λ-operator of [Bertsekas et al.2012] and the Q^π(λ) operator of [Harutyunyan2016, Munos et al.2016]. Based on the fact that both of these operators are γ-contractions and T_σ^λ is a convex combination of them, T_σ^λ is a γ-contraction. ∎
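The convex-combination step can be spelled out explicitly (a standard argument; 𝒜 and ℬ below are our placeholders for the two component operators): for any Q₁, Q₂,

$$\big\|\sigma\mathcal{A}Q_1+(1-\sigma)\mathcal{B}Q_1-\sigma\mathcal{A}Q_2-(1-\sigma)\mathcal{B}Q_2\big\|_\infty \le \sigma\big\|\mathcal{A}Q_1-\mathcal{A}Q_2\big\|_\infty+(1-\sigma)\big\|\mathcal{B}Q_1-\mathcal{B}Q_2\big\|_\infty \le \gamma\big\|Q_1-Q_2\big\|_\infty.$$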

3.2 Upper Error Bound of Policy Evaluation

In this section we discuss the prediction ability of the policy evaluation iteration in Theorem 1. Our results show that when the behavior policy μ and the target policy π are sufficiently close, the accuracy of the policy evaluation iteration increases gradually as σ decreases from 1 to 0.

Lemma 1.

If a sequence satisfies , then for any , we have

Furthermore, for any , has the following estimation

Theorem 2 (Upper error bound of policy evaluation).

Consider the policy evaluation algorithm Q_{k+1} = T_σ^λ Q_k. If the behavior policy μ is ε-away from the target policy π, in the sense that the discrepancy between μ and π is bounded by ε, then for large k the policy evaluation sequence {Q_k} satisfies

where, for the given policy π, the constant is determined by the learning system.

Proof.

First, we provide an equation that will be used later:

(10)

Rewrite the policy evaluation iteration:

Note that Q^π is the fixed point of the Q^π(λ) operator [Harutyunyan2016]; we then merely need to consider the next estimator:

The first equation is derived by replacing the corresponding term in (10). Since μ is ε-away from π, the first inequality is determined by the following fact:

where the constant is determined by the reinforcement learning system. For the given policy π, it is a constant determined by the learning system, and we denote it as in the theorem statement. ∎

Remark 1.

The proof of Theorem 2 strictly depends on the assumption that ε is small but never exactly zero, where ε is a bound on the discrepancy between the behavior policy μ and the target policy π. That is to say, the prediction ability of the policy evaluation iteration depends on the gap between μ and π.

4 Control Algorithm

In this section, we present the Q(σ,λ) algorithm for control. We analyze the off-line version of Q(σ,λ), which converges to the optimal value function exponentially fast.

Consider the typical iteration in which {μ_k} is an arbitrary sequence of behavior policies and the next estimate is calculated by the following two steps.

Step 1: policy evaluation

Step 2: policy improvement

That is, the improved policy is the greedy policy with respect to the current value estimate. We call the approach defined by Step 1 and Step 2 above the Q(σ,λ) control algorithm.

In the following, we present the convergence rate of the Q(σ,λ) control algorithm.

Theorem 3 (Convergence of Control Algorithm).

Consider the sequence {Q_k} generated by the Q(σ,λ) control algorithm; then

In particular, the sequence {Q_k} converges to the optimal value function Q* exponentially fast:

Proof.

By the definition of the Q(σ,λ) control algorithm,

we have²:

² The second inequality is based on the following two results: Theorem 2 of [Munos et al.2016] and Proposition 6.3.10 of [Bertsekas et al.2012].

5 On-line Implementation of Q(σ,λ)

We have discussed the contraction property of the mixed-sampling operator, through which we introduced the Q(σ,λ) control algorithm. The iterations in both Theorem 2 and Theorem 3 are off-line versions. In this section, we give the on-line version of Q(σ,λ) and discuss its convergence.

5.1 On-line Learning

Off-line learning is too expensive because the learning process must be carried out at the end of an episode; on-line learning, in contrast, updates the value function with lower computational cost and better performance. There is a simple interpretation of the equivalence between off-line and on-line learning: by the end of the episode, the total updates of the forward view (off-line learning) equal the total updates of the backward view (on-line learning) [Sutton and Barto1998]. From the viewpoint of this equivalence³, on-line learning can be seen as an inexpensive implementation of the off-line algorithm. Another interpretation of on-line learning was provided by [Singh and Sutton1996]: TD learning with accumulating traces approximates the every-visit Monte Carlo method, and TD learning with replacing traces approximates the first-visit Monte Carlo method.

³ True on-line learning was first introduced by [Seijen and Sutton2014]; more details can be found in [Van Seijen et al.2016].

The iterations in Theorem 2 and Theorem 3 are expressed in terms of expectations. In practice, we can only access the trajectory; by statistical approaches, we can use the trajectory to estimate the value function. Algorithm 1 is the on-line form of Q(σ,λ).

Algorithm 1: On-line Q(σ,λ) algorithm
Require: Initialize Q(s,a) arbitrarily
Require: Initialize μ to be the behavior policy
Parameters: step-size α, trace-decay parameter λ, sampling parameter σ
Repeat (for each episode):
  Initialize the state-action pair (S_0, A_0)
  For t = 0, 1, 2, ...:
    Observe a sample R_{t+1}, S_{t+1} and select A_{t+1} according to μ
    Compute the mixed TD error δ_t
    For all (s, a):
      Update the eligibility trace e_t(s, a)
      Q(s, a) ← Q(s, a) + α δ_t e_t(s, a)
    End For
    If S_{t+1} is terminal:
      Break
  End For
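The following Python sketch gives one possible reading of Algorithm 1. It is our simplified illustration, not the paper's exact procedure: the ε-greedy behavior policy, the greedy target policy, and the accumulating-trace update e(s,a) ← e(s,a) + 1 with decay γλ are all assumptions, and env is a hypothetical environment exposing reset, step, n_states, and n_actions.

```python
import numpy as np

def online_q_sigma_lambda(env, num_episodes, alpha=0.1, gamma=0.99,
                          lam=0.8, sigma=0.5, epsilon=0.1, seed=0):
    """Simplified on-line Q(sigma, lambda) sketch with accumulating traces."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((env.n_states, env.n_actions))

    def behavior_probs(s):
        # epsilon-greedy behavior policy over the current Q estimate
        probs = np.full(env.n_actions, epsilon / env.n_actions)
        probs[np.argmax(Q[s])] += 1.0 - epsilon
        return probs

    for _ in range(num_episodes):
        e = np.zeros_like(Q)                      # eligibility traces
        s = env.reset()
        a = rng.choice(env.n_actions, p=behavior_probs(s))
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = rng.choice(env.n_actions, p=behavior_probs(s2))
            # target policy: greedy w.r.t. Q (used for the expectation term)
            pi = np.zeros(env.n_actions)
            pi[np.argmax(Q[s2])] = 1.0
            if done:
                target = r
            else:
                sarsa_target = r + gamma * Q[s2, a2]             # full sampling
                exp_target = r + gamma * np.dot(pi, Q[s2])       # pure expectation
                target = sigma * sarsa_target + (1 - sigma) * exp_target
            delta = target - Q[s, a]              # mixed TD error
            e[s, a] += 1.0                        # accumulating trace
            Q += alpha * delta * e                # backward-view update
            e *= gamma * lam                      # decay all traces
            s, a = s2, a2
    return Q
```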

5.2 On-line Learning Convergence Analysis

We make some common assumptions, similar to [Bertsekas and Tsitsiklis1996, Harutyunyan2016].

Assumption 1.

There is a minimum visit frequency: every state-action pair can be visited.

Assumption 2.

For every historical chain in an MDP, , where is a positive constant and is a positive integer.

For convenience of expression, we first introduce some notation. Let the on-line estimate be the vector obtained after t iterations within the k-th trajectory, where the superscript emphasizes on-line learning. We denote the k-th trajectory as sampled by the behavior policy. Then the on-line update rules can be expressed as follows:

where T_k is the length of the k-th trajectory.

Theorem 4.

Based on Assumption 1 and Assumption 2, if the step-sizes satisfy the required conditions and the policy is greedy with respect to the current estimate, then the on-line algorithm converges w.p.1, where w.p.1 is short for "with probability one".

Proof.

After some simple algebra:

where . We rewrite the off-line update:

where the first term is the λ-return at the time when the pair (s,a) was visited in the k-th trajectory, the superscript emphasizes the forward (off-line) update, and the count denotes the number of times the pair (s,a) was visited in the k-th trajectory.
We define the residual between the on-line estimate and the off-line estimate in the k-th trajectory:

We then consider the following random iterative process:

(11)

where

Step 1: Upper bound:

(12)

where .

where the difference term is the gap between the total on-line updates of the first t steps and the first t off-line updates in the k-th trajectory. By induction on t, we have:

where the factor is a constant and the remaining quantities are as above. Based on the condition on the step-sizes in Theorem 4, we then obtain (12).
Step 2: .
In fact:

From the properties of eligibility traces (for more details, refer to [Bertsekas et al.2012]) and Assumption 2, we have:

Then according to (11), for some :

Step 3: Considering the iteration (11) and Theorem 1 in [Jaakkola et al.1994], we obtain the desired convergence. ∎

Based on Theorem 3 in [Munos et al.2016] and our Theorem 4, if the target policy is greedy with respect to the current estimate, then the value function learned by Algorithm 1 converges to Q* with probability one.
Remark 2. The conclusion in [Jaakkola et al.1994] is similar to our Theorem 4, but their update rule is different from ours, and we further develop the result under Assumption 2.

6 Experiments

6.1 Experiment for Prediction Capability

In this section, we test the prediction ability of Q(σ,λ) in the 19-state random walk environment, a one-dimensional MDP that is widely used in reinforcement learning [Sutton and Barto2017, De Asis et al.2018]. At each state, the agent has two actions, left and right, and takes each action with equal probability.
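For reference, a minimal version of this environment (our own sketch; the ±1 terminal rewards and the start in the middle state follow the common textbook setup and are assumptions) with the same interface as the on-line sketch above could be:

```python
class RandomWalk19:
    """19 non-terminal states in a row; moving off either end terminates the
    episode with reward -1 (left) or +1 (right), and 0 reward otherwise."""
    n_states = 21      # states 0 and 20 are terminal, 1..19 are non-terminal
    n_actions = 2      # 0 = left, 1 = right

    def __init__(self):
        self.state = None

    def reset(self):
        self.state = 10            # start in the middle state
        return self.state

    def step(self, action):
        self.state += 1 if action == 1 else -1
        if self.state == 0:
            return self.state, -1.0, True
        if self.state == 20:
            return self.state, 1.0, True
        return self.state, 0.0, False
```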

We compare the root-mean-square (RMS) error as a function of episodes, with σ varying from 0 to 1 in steps of 0.2. Results in Figure 1 show that the performance of Q(σ,λ) increases gradually as σ decreases from 1 to 0, which verifies the upper error bound in Theorem 2.