Policy Optimization Through Approximated Importance Sampling

10/09/2019
by Marcin B. Tomczak, et al.

Recent policy optimization approaches (Schulman et al., 2015a, 2017) have achieved substantial empirical successes by constructing new proxy optimization objectives. These proxy objectives allow stable, low-variance policy learning, but require small policy updates to ensure that the proxy remains an accurate approximation of the target policy value. In this paper we derive an alternative objective that obtains the value of the target policy by applying importance sampling. This objective can be estimated directly from samples, as it takes an expectation over trajectories generated by the current policy. However, the basic importance-sampled objective is not suitable for policy optimization, as it incurs unacceptable variance. We therefore introduce an approximation that allows us to directly trade off the bias of the approximation against the variance in policy updates. We show that our approximation unifies the proxy optimization approaches with the importance sampling objective and allows us to interpolate between them. We then provide a theoretical analysis of the method that directly quantifies the error term due to the approximation. Finally, we obtain a practical algorithm by optimizing the introduced objective with proximal policy optimization techniques (Schulman et al., 2017). We empirically demonstrate that the resulting algorithm yields superior performance on continuous control benchmarks.
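For reference, the trajectory-level importance sampling identity that the abstract alludes to can be written as follows; this is a standard off-policy evaluation identity, stated here in assumed notation rather than quoted from the paper:

J(\pi) \;=\; \mathbb{E}_{\tau \sim \pi_{\text{old}}}\!\left[\,\prod_{t=0}^{T-1} \frac{\pi(a_t \mid s_t)}{\pi_{\text{old}}(a_t \mid s_t)} \;\sum_{t=0}^{T-1} \gamma^{t} r_t \right]

The expectation is over trajectories generated by the current policy \pi_{\text{old}}, so the objective can be estimated from on-hand samples, while the product of per-step likelihood ratios is the source of the variance the abstract mentions. The proximal optimization machinery cited above (Schulman et al., 2017) is the clipped surrogate

L^{\text{CLIP}}(\theta) \;=\; \mathbb{E}_t\!\left[\min\!\big(r_t(\theta)\,\hat{A}_t,\; \mathrm{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_t\big)\right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\text{old}}(a_t \mid s_t)},

which the paper applies to its introduced objective; the exact form of that interpolating objective is not given in the abstract.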


Related research

03/18/2022 · Importance Sampling Placement in Off-Policy Temporal-Difference Methods
A central challenge to applying many off-policy reinforcement learning a...

09/17/2018 · Policy Optimization via Importance Sampling
Policy optimization is an effective reinforcement learning approach to s...

06/24/2016 · Is the Bellman residual a bad proxy?
This paper aims at theoretically and empirically comparing two standard ...

10/09/2019 · Compatible features for Monotonic Policy Improvement
Recent policy optimization approaches have achieved substantial empirica...

02/06/2020 · Minimax Confidence Interval for Off-Policy Evaluation and Policy Optimization
We study minimax methods for off-policy evaluation (OPE) using value-fun...

06/04/2018 · Efficiency of adaptive importance sampling
The sampling policy of stage t, formally expressed as a probability dens...

08/22/2020 · Optimizing tail risks using an importance sampling based extrapolation for heavy-tailed objectives
Motivated by the prominence of Conditional Value-at-Risk (CVaR) as a mea...
