TBQ(σ): Improving Efficiency of Trace Utilization for Off-Policy Reinforcement Learning

05/17/2019
by   Longxiang Shi, et al.

Off-policy reinforcement learning with eligibility traces is challenging because of the discrepancy between the target policy and the behavior policy. One common approach is to measure the difference between the two policies probabilistically, as in importance sampling and tree-backup. However, existing off-policy methods based on probabilistic policy measurement use traces inefficiently under a greedy target policy, which limits their effectiveness for control problems: traces are cut immediately whenever a non-greedy action is taken, forfeiting much of the benefit of eligibility traces and slowing down learning. Alternatively, non-probabilistic methods such as General Q(λ) and Naive Q(λ) never cut traces, but face convergence problems in practice. To address these issues, this paper introduces a new method named TBQ(σ), which effectively unifies the tree-backup algorithm and Naive Q(λ). By introducing a new parameter σ that controls the degree to which traces are utilized, TBQ(σ) integrates TB(λ) and Naive Q(λ) and allows a continuous shift between them. The contraction property of TBQ(σ) is analyzed theoretically for both the policy evaluation and control settings. We also derive the online version of TBQ(σ) and prove its convergence. We show empirically that, for ϵ∈(0,1] in ϵ-greedy policies, there exists some degree of trace utilization for λ∈[0,1] that improves the efficiency of trace utilization in off-policy reinforcement learning, both accelerating learning and improving performance.
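To make the idea concrete, below is a minimal tabular sketch of an online update in the spirit of TBQ(σ). It assumes the trace-decay factor interpolates linearly between TB(λ)'s γλπ(a_t|s_t) and Naive Q(λ)'s γλ, combined with a tree-backup (expected) bootstrap target; the exact interpolation and update ordering in the paper may differ. The function name tbq_sigma_step and its arguments are illustrative, not taken from the paper.

```python
import numpy as np

def tbq_sigma_step(Q, e, s, a, r, s_next, target_probs,
                   gamma=0.99, lam=0.9, alpha=0.1, sigma=0.5):
    """One tabular online update in the spirit of TBQ(sigma) (illustrative sketch).

    Q            : array (n_states, n_actions), action-value estimates
    e            : array (n_states, n_actions), eligibility traces
    target_probs : array (n_states, n_actions), target-policy probabilities
    sigma        : degree of trace utilization; 0 -> TB(lambda)-style decay,
                   1 -> Naive Q(lambda)-style decay (traces are never cut)
    """
    # Decay all existing traces. The decay factor interpolates (assumed linear here)
    # between gamma*lam*pi(a|s) (TB(lambda): cut by the target probability of the
    # action actually taken) and gamma*lam (Naive Q(lambda): never cut).
    decay = gamma * lam * (sigma + (1.0 - sigma) * target_probs[s, a])
    e *= decay
    e[s, a] += 1.0  # accumulate trace for the visited state-action pair

    # Tree-backup-style target: expectation over the target policy at the next state.
    td_error = r + gamma * np.dot(target_probs[s_next], Q[s_next]) - Q[s, a]

    # Apply the TD error to all state-action pairs in proportion to their traces.
    Q += alpha * td_error * e
    return Q, e
```

With σ=0 the decay reduces to a TB(λ)-style update that cuts traces in proportion to the target probability of the action taken; with σ=1 the full Naive Q(λ) trace is kept; intermediate values trade off the two, which is the "degree of utilizing traces" the abstract refers to. Terminal-state handling (resetting traces and zeroing the bootstrap term at episode end) is omitted for brevity.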

