Off-Policy Deep Reinforcement Learning by Bootstrapping the Covariate Shift

01/27/2019
by Carles Gelada et al.

In this paper we revisit the method of off-policy corrections for reinforcement learning (COP-TD) pioneered by Hallak et al. (2017). Under this method, online updates to the value function are reweighted to avoid the divergence issues typical of off-policy learning. While Hallak et al.'s solution is appealing, it cannot easily be transferred to nonlinear function approximation. First, it requires a projection step onto the probability simplex; second, even though the operator describing the expected behavior of the off-policy learning algorithm is convergent, it is not known to be a contraction mapping and hence may be unstable in practice. We address these two issues by introducing a discount factor into COP-TD. We analyze the behavior of discounted COP-TD and find it better behaved from a theoretical perspective. We also propose an alternative soft normalization penalty that can be minimized online and obviates the need for an explicit projection step. We complement our analysis with an empirical evaluation of the two techniques in an off-policy setting on the Atari game Pong, where we find discounted COP-TD to be better behaved in practice than the soft normalization penalty. Finally, we perform a more extensive evaluation of discounted COP-TD on 5 games from the Atari domain, where we find performance gains for our approach.
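To make the idea concrete, below is a minimal NumPy sketch of a discounted COP-TD-style ratio update in a tabular setting. It assumes the ratio estimate c(s') is moved toward a target that mixes the importance-sampled bootstrap gamma_c * (pi(a|s)/mu(a|s)) * c(s) with a constant (1 - gamma_c), and it replaces the explicit simplex projection with a soft corrective step pulling the mean of c toward 1. The function names, the soft_norm_coeff correction, and the toy problem are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of a discounted COP-TD-style ratio update (tabular, NumPy).
# Assumption: c(s') is updated toward gamma_c * (pi(a|s)/mu(a|s)) * c(s) + (1 - gamma_c),
# and a soft normalization step nudges the mean of c toward 1 instead of projecting
# onto the probability simplex.
import numpy as np

def discounted_cop_td_update(c, transitions, pi, mu, alpha=0.1, gamma_c=0.99,
                             soft_norm_coeff=0.0):
    """One sweep of discounted COP-TD-style updates over a batch of transitions.

    c           : array of ratio estimates, one per state (initialized to 1).
    transitions : list of (s, a, s_next) tuples sampled under the behavior policy mu.
    pi, mu      : arrays of shape (num_states, num_actions) with action probabilities.
    """
    for s, a, s_next in transitions:
        rho = pi[s, a] / mu[s, a]                        # importance ratio of the taken action
        target = gamma_c * rho * c[s] + (1.0 - gamma_c)  # discounted bootstrap target
        c[s_next] += alpha * (target - c[s_next])        # move the estimate toward the target

    if soft_norm_coeff > 0.0:
        # Hypothetical soft normalization: penalize deviation of the mean ratio from 1,
        # applied as a small corrective step rather than an explicit projection.
        visited = np.unique([s_next for _, _, s_next in transitions])
        c[visited] -= soft_norm_coeff * (c[visited].mean() - 1.0)
    return c

# Usage sketch on a toy 3-state, 2-action problem with random policies and transitions.
num_states, num_actions = 3, 2
rng = np.random.default_rng(0)
pi = rng.dirichlet(np.ones(num_actions), size=num_states)  # target policy
mu = rng.dirichlet(np.ones(num_actions), size=num_states)  # behavior policy
c = np.ones(num_states)                                    # ratio estimates start at 1
batch = [(rng.integers(num_states), rng.integers(num_actions), rng.integers(num_states))
         for _ in range(64)]
c = discounted_cop_td_update(c, batch, pi, mu, soft_norm_coeff=0.01)
```

In a deep RL agent, the learned ratios c(s) would then reweight the off-policy value-function updates for the sampled states, which is the role the covariate-shift correction plays in the abstract above.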


Related research

01/23/2023 · On The Convergence Of Policy Iteration-Based Reinforcement Learning With Monte Carlo Policy Evaluation
A common technique in reinforcement learning is to evaluate the value fu...

06/06/2020 · Stable and Efficient Policy Evaluation
Policy evaluation algorithms are essential to reinforcement learning due...

09/05/2019 · √(n)-Regret for Learning in Markov Decision Processes with Function Approximation and Low Bellman Rank
In this paper, we consider the problem of online learning of Markov deci...

12/15/2018 · On Improving Decentralized Hysteretic Deep Reinforcement Learning
Recent successes of value-based multi-agent deep reinforcement learning ...

12/05/2018 · Relative Entropy Regularized Policy Iteration
We present an off-policy actor-critic algorithm for Reinforcement Learni...

02/14/2019 · CrossNorm: Normalization for Off-Policy TD Reinforcement Learning
Off-policy Temporal Difference (TD) learning methods, when combined with...

01/31/2012 · Learning RoboCup-Keepaway with Kernels
We apply kernel-based methods to solve the difficult reinforcement learn...
