Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation

06/24/2021
by Yunhao Tang, et al.

Model-agnostic meta-reinforcement learning requires estimating the Hessian matrix of value functions. This is challenging from an implementation perspective, as repeatedly differentiating policy gradient estimates may lead to biased Hessian estimates. In this work, we provide a unifying framework for estimating higher-order derivatives of value functions, based on off-policy evaluation. Our framework interprets a number of prior approaches as special cases and elucidates the bias and variance trade-off of Hessian estimates. It also opens the door to a new family of estimates, which can be easily implemented with auto-differentiation libraries and lead to performance gains in practice.
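As a rough sketch of the bias the abstract refers to, the snippet below contrasts a naive score-function surrogate (whose second derivative is a biased Hessian estimate) with a DiCE-style MagicBox surrogate on a one-step Gaussian-policy toy problem. It assumes JAX; all function names are illustrative, and this is not the estimator proposed in the paper.

import jax
import jax.numpy as jnp

# Toy one-step example (illustrative names, not the paper's estimator):
# Gaussian policy N(theta, 1), a fixed sampled action `a`, smooth reward.

def logp(theta, a):
    # Log-density of a Gaussian policy N(theta, 1) at action a.
    return -0.5 * (a - theta) ** 2 - 0.5 * jnp.log(2.0 * jnp.pi)

def reward(a):
    # Arbitrary smooth reward; treated as a constant for a fixed sample.
    return -(a - 1.0) ** 2

def naive_surrogate(theta, a):
    # Standard score-function surrogate: its first derivative is an
    # unbiased policy-gradient estimate, but its second derivative
    # drops the (grad log pi)^2 term and is a biased Hessian estimate.
    return reward(a) * logp(theta, a)

def magic_box(x):
    # DiCE-style "MagicBox": evaluates to 1 in the forward pass but
    # re-attaches the score function under differentiation of any order.
    return jnp.exp(x - jax.lax.stop_gradient(x))

def dice_surrogate(theta, a):
    # Repeated differentiation of this surrogate keeps the extra
    # score-function terms needed for unbiased higher-order estimates.
    return magic_box(logp(theta, a)) * reward(a)

theta0, a0 = 0.3, 0.7  # one sampled action, for illustration only
print(jax.hessian(naive_surrogate)(theta0, a0))  # misses the (grad log pi)^2 term
print(jax.hessian(dice_surrogate)(theta0, a0))   # includes it

On a fixed sample the two second derivatives differ by the squared score term; in expectation the DiCE-style estimate matches the true Hessian of the objective while the naive one does not, which is the kind of implementation-level bias the framework is designed to expose and control.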

Related research:

- Biased Gradient Estimate with Drastic Variance Reduction for Meta Reinforcement Learning (12/14/2021): Despite the empirical success of meta reinforcement learning (meta-RL), ...
- Marginalized Operators for Off-policy Reinforcement Learning (03/30/2022): In this work, we propose marginalized operators, a new class of off-poli...
- Expected Policy Gradients (06/15/2017): We propose expected policy gradients (EPG), which unify stochastic polic...
- Biased Estimates of Advantages over Path Ensembles (09/15/2019): The estimation of advantage is crucial for a number of reinforcement lea...
- Loaded DiCE: Trading off Bias and Variance in Any-Order Score Function Estimators for Reinforcement Learning (09/23/2019): Gradient-based methods for optimisation of objectives in stochastic sett...
- DiCE: The Infinitely Differentiable Monte-Carlo Estimator (02/14/2018): The score function estimator is widely used for estimating gradients of ...
- Taylor Expansion of Discount Factors (06/11/2021): In practical reinforcement learning (RL), the discount factor used for e...