Importance Sampling Placement in Off-Policy Temporal-Difference Methods

03/18/2022
by Eric Graves, et al.

A central challenge in applying many off-policy reinforcement learning algorithms to real-world problems is the variance introduced by importance sampling. In off-policy learning, the agent learns about a policy different from the one being executed. To account for this difference, importance sampling ratios are typically used, but they can increase the variance of the algorithms and reduce the rate of learning. Several variations of importance sampling have been proposed to reduce variance, with per-decision importance sampling being the most popular. However, the update rules of most off-policy algorithms in the literature depart from per-decision importance sampling in a subtle way: they correct the entire TD error instead of just the TD target. In this work, we show how this slight change can be interpreted as applying a control variate to the TD target, reducing variance and improving performance. Experiments over a wide range of algorithms confirm that this subtle modification yields improved performance.
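To make the distinction concrete, the following is a minimal sketch of the two importance sampling placements for one-step off-policy TD(0) state-value prediction. The notation (behaviour policy b, target policy π, ratio ρ_t, step size α, discount γ) is standard and assumed here; it is not taken verbatim from the paper.

```latex
% Per-decision importance sampling: correct only the TD target.
V(S_t) \leftarrow V(S_t)
  + \alpha \left[ \rho_t \bigl( R_{t+1} + \gamma V(S_{t+1}) \bigr) - V(S_t) \right],
\qquad \rho_t = \frac{\pi(A_t \mid S_t)}{b(A_t \mid S_t)}.

% Common in the literature: correct the entire TD error.
V(S_t) \leftarrow V(S_t)
  + \alpha \, \rho_t \left[ R_{t+1} + \gamma V(S_{t+1}) - V(S_t) \right].

% The two updates differ only by a term with zero expectation under the
% behaviour policy, since E_b[\rho_t \mid S_t] = 1:
\rho_t \left[ R_{t+1} + \gamma V(S_{t+1}) - V(S_t) \right]
  = \left[ \rho_t \bigl( R_{t+1} + \gamma V(S_{t+1}) \bigr) - V(S_t) \right]
    - \left( \rho_t - 1 \right) V(S_t).
```

Because the extra term (ρ_t − 1)V(S_t) has zero conditional expectation but is correlated with the importance-sampled target, subtracting it can reduce variance without introducing bias, which is the control-variate interpretation referred to in the abstract.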

Related research

06/13/2012
AND/OR Importance Sampling
The paper introduces AND/OR importance sampling for probabilistic graphi...

10/16/2019
Conditional Importance Sampling for Off-Policy Learning
The principal contribution of this paper is a conceptual framework for o...

09/10/2021
An Empirical Comparison of Off-policy Prediction Learning Algorithms in the Four Rooms Environment
Many off-policy prediction learning algorithms have been proposed in the...

06/27/2023
Value-aware Importance Weighting for Off-policy Reinforcement Learning
Importance sampling is a central idea underlying off-policy prediction i...

02/05/2023
Sample Dropout: A Simple yet Effective Variance Reduction Technique in Deep Policy Optimization
Recent success in Deep Reinforcement Learning (DRL) methods has shown th...

10/09/2019
Policy Optimization Through Approximated Importance Sampling
Recent policy optimization approaches (Schulman et al., 2015a, 2017) hav...

12/13/2021
Lifelong Hyper-Policy Optimization with Multiple Importance Sampling Regularization
Learning in a lifelong setting, where the dynamics continually evolve, i...
