Incrementally Learning Functions of the Return

07/05/2019
by Brendan Bennett, et al.

Temporal difference (TD) methods enable efficient, incremental estimation of value functions in reinforcement learning, and are of broader interest because they correspond to learning as observed in biological systems. Standard value functions represent the expected value of the discounted sum of rewards, i.e., the expected return. While this formulation suffices for many purposes, it would often be useful to represent other functions of the return as well. Unfortunately, most such functions cannot be estimated directly with TD methods. We propose a means of estimating functions of the return using its moments, which can be learned online using a modified TD algorithm. The moments of the return are then used as part of a Taylor expansion to approximate analytic functions of the return.
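The idea can be illustrated with a small tabular sketch. The snippet below is a minimal illustration rather than the paper's implementation: it learns the first and second moments of the return with TD(0)-style updates (the second-moment target uses the standard recursion G² = r² + 2γrG′ + γ²G′², taken in expectation, as a stand-in for the modified TD algorithm described in the abstract), then approximates an analytic function of the return with a second-order Taylor expansion around the mean. The function names, the toy chain, and the reward distribution are illustrative assumptions.

```python
import numpy as np

def td_moment_update(V, M, s, r, s_next, alpha=0.1, gamma=0.99):
    """TD(0)-style update of the first two moments of the return.

    V[s] estimates the first moment E[G | s]; M[s] estimates the second
    moment E[G^2 | s]. The second-moment target follows the identity
    G^2 = r^2 + 2*gamma*r*G' + gamma^2*G'^2, taken in expectation.
    """
    # First moment: ordinary TD(0) with target r + gamma * V(s').
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    # Second moment: target r^2 + 2*gamma*r*V(s') + gamma^2*M(s').
    M[s] += alpha * (r ** 2 + 2.0 * gamma * r * V[s_next]
                     + gamma ** 2 * M[s_next] - M[s])

def taylor_of_return(f, f_second, mu, second_moment):
    """Second-order Taylor approximation of E[f(G)] around mu = E[G]:
    E[f(G)] ~ f(mu) + 0.5 * f''(mu) * Var(G); the first-order term
    vanishes because E[G - mu] = 0."""
    var = second_moment - mu ** 2
    return f(mu) + 0.5 * f_second(mu) * var

# Toy usage on a 3-state chain with hypothetical transitions and rewards.
V = np.zeros(3)
M = np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(5000):
    s = rng.integers(3)
    s_next = (s + 1) % 3
    r = rng.normal(loc=1.0, scale=0.5)  # illustrative reward distribution
    td_moment_update(V, M, s, r, s_next, alpha=0.05, gamma=0.9)

# Approximate E[exp(G)] for state 0 from the learned moments.
print(taylor_of_return(np.exp, np.exp, V[0], M[0]))
```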

