High-Confidence Off-Policy (or Counterfactual) Variance Estimation

by Yash Chandak, et al.

Many sequential decision-making systems leverage data collected under prior policies to propose a new policy. For critical applications, it is important that high-confidence guarantees on the new policy's behavior are provided before deployment, to ensure that the policy will behave as desired. Prior work has studied high-confidence off-policy estimation of the expected return; however, high-confidence off-policy estimation of the variance of returns can be equally critical for high-risk applications. In this paper, we tackle the previously open problem of estimating and bounding, with high confidence, the variance of returns from off-policy data.
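To make the quantity in question concrete, here is a minimal sketch of a plain importance-sampling (IS) estimator for the variance of returns under an evaluation policy, computed from trajectories gathered by a behavior policy. This is a generic illustration under standard IS assumptions, not the paper's estimator; the paper's contribution is high-confidence bounds on this quantity, which the sketch does not provide. The function name and inputs are hypothetical.

```python
import numpy as np

def ois_variance_of_returns(returns, rhos):
    """Hypothetical ordinary-IS sketch (not the paper's method).

    returns: per-trajectory returns G_i observed under the behavior policy.
    rhos: per-trajectory importance ratios prod_t pi_e(a_t|s_t) / pi_b(a_t|s_t).

    Estimates Var_{pi_e}[G] = E_{pi_e}[G^2] - (E_{pi_e}[G])^2, where each
    expectation under pi_e is estimated by reweighting behavior-policy data.
    """
    returns = np.asarray(returns, dtype=float)
    rhos = np.asarray(rhos, dtype=float)
    first_moment = np.mean(rhos * returns)        # IS estimate of E_{pi_e}[G]
    second_moment = np.mean(rhos * returns ** 2)  # IS estimate of E_{pi_e}[G^2]
    return second_moment - first_moment ** 2

# On-policy sanity check: if pi_e == pi_b, every ratio is 1 and the estimate
# reduces to the (1/n) sample variance of the observed returns.
g = [1.0, 2.0, 3.0, 4.0]
print(ois_variance_of_returns(g, [1.0] * 4))  # equals np.var(g) = 1.25
```

Note that this point estimate can have high (even unbounded) variance itself when the two policies differ substantially, which is precisely why high-confidence bounds, rather than point estimates alone, matter before deployment.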




Universal Off-Policy Evaluation

When faced with sequential decision-making problems, it is often useful ...

Off-Policy Risk Assessment in Contextual Bandits

To evaluate prospective contextual bandit policies when experimentation ...

Optimal Mixture Weights for Off-Policy Evaluation with Multiple Behavior Policies

Off-policy evaluation is a key component of reinforcement learning which...

Identification of Subgroups With Similar Benefits in Off-Policy Policy Evaluation

Off-policy policy evaluation methods for sequential decision making can ...

Safe and Efficient Off-Policy Reinforcement Learning

In this work, we take a fresh look at some old and new algorithms for of...

Breaking the Curse of Horizon: Infinite-Horizon Off-Policy Estimation

We consider the off-policy estimation problem of estimating the expected...

Offline Policy Comparison with Confidence: Benchmarks and Baselines

Decision makers often wish to use offline historical data to compare seq...