Variance-Reduced Off-Policy Memory-Efficient Policy Search

09/14/2020
by Daoming Lyu et al.

Off-policy policy optimization is a challenging problem in reinforcement learning (RL). Algorithms designed for this setting often suffer from high variance in their gradient estimators, which leads to poor sample efficiency and convergence issues. Several variance-reduced on-policy policy gradient algorithms have recently been proposed that borrow techniques from stochastic optimization to reduce the variance of the gradient estimate in the REINFORCE algorithm. However, these algorithms are not designed for the off-policy setting and are memory-inefficient, since they must periodically collect and store a large "reference" batch of samples. To achieve stable, variance-reduced off-policy policy optimization, we propose a family of algorithms that is memory-efficient, stochastically variance-reduced, and capable of learning from off-policy samples. Empirical studies validate the effectiveness of the proposed approaches.
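The "reference batch" idea the abstract criticizes comes from SVRG-style variance reduction: periodically compute a full gradient at a snapshot point, then correct each stochastic gradient with the difference between its value at the current and snapshot parameters. The following is a minimal sketch of that control-variate construction on a toy scalar least-squares objective; it is illustrative only and is not the paper's algorithm, which applies the idea to policy-gradient estimates.

```python
import numpy as np

def svrg_minimize(x, lr=0.1, epochs=20, seed=0):
    """SVRG sketch on the toy objective F(w) = (1/n) * sum_i 0.5*(w - x_i)^2.

    Illustrates the snapshot/reference-gradient construction that on-policy
    variance-reduced policy gradient methods apply to REINFORCE estimates.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    w = 0.0
    for _ in range(epochs):
        # Snapshot step: a full "reference" gradient over all n samples
        # (this is the batch such methods must store, hence the memory cost).
        w_snap = w
        mu = w_snap - np.mean(x)           # (1/n) * sum_i grad f_i(w_snap)
        for _ in range(n):
            i = rng.integers(n)
            g_i = w - x[i]                 # stochastic gradient at current w
            g_snap = w_snap - x[i]         # same sample, at the snapshot
            # Variance-reduced estimate: unbiased, with the snapshot gradient
            # acting as a control variate.
            w -= lr * (g_i - g_snap + mu)
    return w

x = np.array([1.0, 2.0, 3.0, 4.0])
w_star = svrg_minimize(x)   # converges toward np.mean(x) = 2.5
```

The corrected estimate `g_i - g_snap + mu` has the same expectation as `g_i` but much lower variance near the snapshot, which is what yields the improved convergence rates; the memory inefficiency the abstract targets is the stored reference batch needed to recompute `g_snap`.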


Related Research

02/17/2021  On the Convergence and Sample Efficiency of Variance-Reduced Policy Gradient Method
Policy gradient gives rise to a rich class of reinforcement learning (RL...

06/06/2020  Stable and Efficient Policy Evaluation
Policy evaluation algorithms are essential to reinforcement learning due...

05/30/2019  Don't Forget Your Teacher: A Corrective Reinforcement Learning Framework
Although reinforcement learning (RL) can provide reliable solutions in m...

05/20/2019  Stochastic Variance Reduction for Deep Q-learning
Recent advances in deep reinforcement learning have achieved human-level...

06/09/2019  SVRG for Policy Evaluation with Fewer Gradient Evaluations
Stochastic variance-reduced gradient (SVRG) is an optimization method or...

06/22/2021  Local policy search with Bayesian optimization
Reinforcement learning (RL) aims to find an optimal policy by interactio...

11/29/2022  Closing the gap between SVRG and TD-SVRG with Gradient Splitting
Temporal difference (TD) learning is a simple algorithm for policy evalu...
