ReVar: Strengthening Policy Evaluation via Reduced Variance Sampling

03/09/2022
by Subhojyoti Mukherjee, et al.

This paper studies the problem of data collection for policy evaluation in Markov decision processes (MDPs). In policy evaluation, we are given a target policy and asked to estimate the expected cumulative reward it will obtain in an environment formalized as an MDP. We develop theory for optimal data collection within the class of tree-structured MDPs by first deriving an oracle data collection strategy that uses knowledge of the variance of the reward distributions. We then introduce the Reduced Variance Sampling (ReVar) algorithm that approximates the oracle strategy when the reward variances are unknown a priori and bound its sub-optimality compared to the oracle strategy. Finally, we empirically validate that ReVar leads to policy evaluation with mean squared error comparable to the oracle strategy and significantly lower than simply running the target policy.
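To make the idea concrete, here is a minimal single-state (bandit) sketch of variance-aware data collection, not the paper's actual algorithm: it assumes an oracle that knows each action's reward standard deviation and allocates the sampling budget proportionally to π(a)·σ(a) (a Neyman-style allocation), then compares the resulting mean squared error against simply sampling from the target policy π. All means, standard deviations, and budget values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-action setup with Gaussian rewards.
means = np.array([1.0, 0.5, 2.0])    # true mean reward per action
stds = np.array([0.1, 3.0, 0.2])     # true reward std per action (oracle knowledge)
pi = np.array([0.4, 0.2, 0.4])       # target policy to evaluate
true_value = float(pi @ means)       # quantity we want to estimate
budget = 5000                        # total samples per trial

def mse(alloc, trials=300):
    """Estimate MSE of the stratified value estimate under a sampling allocation."""
    errs = []
    for _ in range(trials):
        # Draw how many samples each action receives, at least one each.
        n = np.maximum(rng.multinomial(budget, alloc), 1)
        # Per-action empirical mean rewards.
        sample_means = np.array(
            [rng.normal(means[a], stds[a], n[a]).mean() for a in range(len(pi))]
        )
        # Plug-in estimate of the target policy's value.
        errs.append((float(pi @ sample_means) - true_value) ** 2)
    return float(np.mean(errs))

on_policy = pi                        # sample actions as the target policy would
oracle = pi * stds / (pi * stds).sum()  # variance-aware oracle allocation

print(f"on-policy MSE: {mse(on_policy):.2e}")
print(f"oracle   MSE: {mse(oracle):.2e}")  # noticeably lower
```

The intuition matches the abstract: pulling high-variance actions more often than the target policy would (here, action 1 with σ = 3.0) shrinks the dominant term of the estimator's variance, at no cost in bias because each action's mean is estimated from its own samples.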


Related research

- SPEED: Experimental Design for Policy Evaluation in Linear Heteroscedastic Bandits (01/29/2023): In this paper, we study the problem of optimal data collection for polic...
- Robust On-Policy Data Collection for Data-Efficient Policy Evaluation (11/29/2021): This paper considers how to complement offline reinforcement learning (R...
- Data-Efficient Policy Evaluation Through Behavior Policy Search (06/12/2017): We consider the task of evaluating a policy for a Markov decision proces...
- Stochastic Variance-Reduced Policy Gradient (06/14/2018): In this paper, we propose a novel reinforcement-learning algorithm cons...
- Global Algorithms for Mean-Variance Optimization in Markov Decision Processes (02/27/2023): Dynamic optimization of mean and variance in Markov decision processes (...
- Off-Policy Risk Assessment in Markov Decision Processes (09/21/2022): Addressing such diverse ends as safety alignment with human preferences,...
- On Convergence of Average-Reward Off-Policy Control Algorithms in Weakly-Communicating MDPs (09/30/2022): We show two average-reward off-policy control algorithms, Differential Q...
