Theoretical and Experimental Comparison of Off-Policy Evaluation from Dependent Samples
We theoretically and experimentally compare estimators for off-policy evaluation (OPE) using dependent samples obtained via multi-armed bandit (MAB) algorithms. The goal of OPE is to evaluate a new policy using historical data. Because MAB algorithms sequentially update the policy based on past observations, the generated samples are not independent and identically distributed. To conduct OPE from dependent samples, we therefore need techniques for constructing estimators that retain asymptotic normality. In particular, we focus on a doubly robust (DR) estimator, which combines an inverse probability weighting (IPW) component with an estimator of the conditionally expected outcome. We first summarize existing and new theoretical results for such OPE estimators. We then compare their empirical performance on benchmark datasets against other estimators, such as an estimator with cross-fitting.
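To make the structure of the DR estimator concrete, the following is a minimal sketch, assuming a K-armed bandit with data stored as NumPy arrays; the function name dr_estimate and its arguments are illustrative and not taken from the paper. It shows how the IPW component and the outcome-model (direct-method) component are combined; to handle dependence from an adaptive behavior policy, the outcome-model estimates for round t would be fitted only on data observed before round t (or via cross-fitting).

```python
import numpy as np

def dr_estimate(actions, rewards, behavior_probs, eval_probs, q_hat):
    """Hypothetical sketch of a doubly robust (DR) off-policy value estimate.

    actions        : (T,) arms chosen by the bandit algorithm
    rewards        : (T,) observed rewards
    behavior_probs : (T,) probability the adaptive behavior policy assigned
                     to the chosen arm at each round
    eval_probs     : (T, K) evaluation-policy probabilities over all K arms
    q_hat          : (T, K) estimates of the expected reward of each arm;
                     for dependent samples, q_hat[t] should use only data
                     observed before round t (or be built via cross-fitting)
    """
    T = len(rewards)
    rows = np.arange(T)
    # IPW component: importance-weighted residual of the observed reward
    weights = eval_probs[rows, actions] / behavior_probs
    ipw_part = weights * (rewards - q_hat[rows, actions])
    # Direct-method component: plug-in value under the outcome model
    dm_part = np.sum(eval_probs * q_hat, axis=1)
    return np.mean(ipw_part + dm_part)
```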