Policy Optimization Through Approximated Importance Sampling

by Marcin B. Tomczak, et al.

Recent policy optimization approaches (Schulman et al., 2015a, 2017) have achieved substantial empirical successes by constructing new proxy optimization objectives. These proxy objectives allow stable and low-variance policy learning, but require small policy updates to ensure that the proxy objective remains an accurate approximation of the target policy value. In this paper we derive an alternative objective that obtains the value of the target policy by applying importance sampling. This objective can be directly estimated from samples, as it takes an expectation over trajectories generated by the current policy. However, the basic importance-sampled objective is not suitable for policy optimization, as it incurs unacceptable variance. We therefore introduce an approximation that allows us to directly trade off the bias of the approximation against the variance in policy updates. We show that our approximation unifies the proxy optimization approaches with the importance sampling objective and allows us to interpolate between them. We then provide a theoretical analysis of the method that directly quantifies the error term due to the approximation. Finally, we obtain a practical algorithm by optimizing the introduced objective with proximal policy optimization techniques (Schulman et al., 2017). We empirically demonstrate that the resulting algorithm yields superior performance on continuous control benchmarks.
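To make the contrast in the abstract concrete, the following is a minimal sketch (not the paper's algorithm) that compares a PPO-style per-step clipped surrogate with a trajectory-level importance-sampled estimate on synthetic data. The arrays `logp_old`, `logp_new`, and `advantages` are made-up placeholders, and the cap `c` on the cumulative ratio is a generic truncated-importance-sampling knob used here only to illustrate the bias/variance trade-off the abstract refers to; the paper's actual approximation may differ.

```python
# Sketch contrasting a low-variance proxy objective (PPO-style clipping)
# with a trajectory-level importance-sampled objective, using NumPy on
# synthetic log-probabilities. The capped cumulative ratio is one generic
# way to trade bias for variance, not necessarily the paper's exact scheme.
import numpy as np

rng = np.random.default_rng(0)
T = 50                                                # trajectory length
logp_old = rng.normal(-1.0, 0.3, size=T)              # log pi_old(a_t | s_t)
logp_new = logp_old + rng.normal(0.0, 0.1, size=T)    # log pi_new(a_t | s_t)
advantages = rng.normal(0.0, 1.0, size=T)             # advantage estimates A_t

# Per-step ratios r_t = pi_new(a_t|s_t) / pi_old(a_t|s_t).
ratios = np.exp(logp_new - logp_old)

# Proxy objective (PPO clipped surrogate): stable and low variance,
# but only a good approximation for small policy updates.
eps = 0.2
ppo_terms = np.minimum(ratios * advantages,
                       np.clip(ratios, 1 - eps, 1 + eps) * advantages)
ppo_objective = ppo_terms.mean()

# Trajectory-level importance sampling: each step is reweighted by the
# product of ratios up to that step; unbiased, but the product can explode,
# which is the "unacceptable variance" issue described in the abstract.
cum_ratios = np.cumprod(ratios)
is_objective = (cum_ratios * advantages).mean()

# Generic bias/variance knob: cap the cumulative ratio at a constant c.
# Larger c stays closer to the unbiased IS estimate; smaller c behaves
# more like the low-variance proxy objective.
c = 5.0
capped_objective = (np.minimum(cum_ratios, c) * advantages).mean()

print(f"PPO surrogate:       {ppo_objective:+.4f}")
print(f"Full IS objective:   {is_objective:+.4f}")
print(f"Capped IS objective: {capped_objective:+.4f}")
```

Running the sketch on longer trajectories makes the point plainly: the cumulative-product weights blow up or collapse, while the clipped and capped variants stay on a usable scale.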




