Anytime-valid off-policy inference for contextual bandits

10/19/2022
by Ian Waudby-Smith, et al.

Contextual bandit algorithms are ubiquitous tools for active sequential experimentation in healthcare and the tech industry. They involve online learning algorithms that adaptively learn policies over time, mapping observed contexts X_t to actions A_t in an attempt to maximize stochastic rewards R_t. This adaptivity raises interesting but hard statistical inference questions, especially counterfactual ones: for example, it is often of interest to estimate the properties of a hypothetical policy that differs from the logging policy used to collect the data, a problem known as "off-policy evaluation" (OPE). Using modern martingale techniques, we present a comprehensive framework for OPE inference that relaxes many unnecessary assumptions made in past work, improving on prior methods both theoretically and empirically. Importantly, our methods can be employed while the original experiment is still running (that is, not necessarily post hoc), when the logging policy may itself be changing (due to learning), and even if the context distributions form a highly dependent time series (for example, one that drifts over time). More concretely, we derive confidence sequences for various functionals of interest in OPE. These include doubly robust confidence sequences for time-varying off-policy mean reward values, as well as confidence bands for the entire CDF of the off-policy reward distribution. All of our methods (a) are valid at arbitrary stopping times, (b) make only nonparametric assumptions, (c) do not require known bounds on the maximal importance weights, and (d) adapt to the empirical variance of our estimators. In summary, our methods enable anytime-valid off-policy inference using adaptively collected contextual bandit data.
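To make the setting concrete, the sketch below builds a simple time-uniform confidence sequence for a fixed off-policy mean reward via inverse-propensity-weighted (IPW) pseudo-rewards. Note this is only an illustration of the problem the paper addresses, not the paper's construction: unlike the methods described above, this toy version is not variance-adaptive and assumes a known bound W on the importance-weighted rewards (it uses a conservative Hoeffding bound with a union over time steps). All function names and the synthetic data-generating process are invented for the example.

```python
import math
import random

def ipw_union_cs(pseudo_rewards, W, alpha=0.05):
    """Running IPW mean with a crude time-uniform confidence sequence.

    pseudo_rewards: sequence of w_t * r_t values, each assumed in [0, W],
    where w_t = pi(A_t | X_t) / mu(A_t | X_t) is the importance weight.
    Returns one (lower, upper) interval per time step; the intervals hold
    uniformly over time at level alpha by a union bound (conservative).
    """
    intervals = []
    total = 0.0
    for t, x in enumerate(pseudo_rewards, start=1):
        total += x
        mean = total / t
        # Spend the error budget over time: alpha_t = 6*alpha / (pi^2 t^2),
        # so sum_t alpha_t = alpha.  Hoeffding radius at level alpha_t.
        alpha_t = 6 * alpha / (math.pi ** 2 * t ** 2)
        radius = W * math.sqrt(math.log(2 / alpha_t) / (2 * t))
        intervals.append((max(0.0, mean - radius), min(W, mean + radius)))
    return intervals

# Toy logged data: uniform logging policy over K=2 actions; the target
# policy always plays action 1.  Rewards lie in [0, 1], so each
# importance-weighted pseudo-reward w_t * r_t lies in [0, K].
random.seed(0)
K = 2
pseudo = []
for _ in range(1000):
    a = random.randrange(K)                          # logged action
    r = random.random() * (0.8 if a == 1 else 0.3)   # stochastic reward
    w = K if a == 1 else 0.0                         # pi(a|x) / mu(a|x)
    pseudo.append(w * r)

cs = ipw_union_cs(pseudo, W=K)
lower, upper = cs[-1]  # interval after all 1000 observations
```

Because the intervals are valid uniformly over time, the experimenter may peek at any step and stop whenever the interval is tight enough; this "arbitrary stopping time" validity is exactly the anytime-valid property the abstract refers to, which the paper achieves with sharper, variance-adaptive martingale boundaries.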


