Policy learning "without” overlap: Pessimism and generalized empirical Bernstein's inequality

12/19/2022
by Ying Jin et al.

This paper studies offline policy learning, which aims to use observations collected a priori (from either fixed or adaptively evolving behavior policies) to learn an optimal individualized decision rule that achieves the best overall outcomes for a given population. Existing policy learning methods rely on a uniform overlap assumption, i.e., the propensities of exploring all actions for all individual characteristics must be bounded away from zero in the offline dataset; put differently, the performance of existing methods depends on the worst-case propensity in the offline dataset. Since one has no control over the data collection process, this assumption can be unrealistic in many situations, especially when the behavior policies are allowed to evolve over time with diminishing propensities for certain actions. In this paper, we propose a new algorithm that optimizes lower confidence bounds (LCBs), instead of point estimates, of the policy values. The LCBs are constructed using knowledge of the behavior policies for collecting the offline data. Without assuming any uniform overlap condition, we establish a data-dependent upper bound on the suboptimality of our algorithm that depends only on (i) the overlap for the optimal policy and (ii) the complexity of the policy class we optimize over. As an implication, for adaptively collected data, we ensure efficient policy learning as long as the propensities for optimal actions are bounded away from zero over time, while those for suboptimal actions are allowed to diminish arbitrarily fast. In our theoretical analysis, we develop a new self-normalized concentration inequality for inverse-propensity-weighting estimators, generalizing the well-known empirical Bernstein inequality to unbounded and non-i.i.d. data.
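For context, the classical empirical Bernstein inequality that the abstract refers to can be stated as follows (one standard form, due to Maurer and Pontil, 2009): for i.i.d. random variables $X_1, \dots, X_n \in [0, b]$ with sample mean $\bar{X}_n$ and sample variance $V_n$, with probability at least $1 - \delta$,

$$\mathbb{E}[X_1] \;\ge\; \bar{X}_n \;-\; \sqrt{\frac{2 V_n \log(2/\delta)}{n}} \;-\; \frac{7 b \log(2/\delta)}{3(n-1)}.$$

Applied to inverse-propensity-weighted (IPW) scores, the right-hand side is precisely a lower confidence bound on a policy's value. The catch is that IPW scores are bounded only when propensities are bounded away from zero, i.e., under uniform overlap; the paper's self-normalized generalization is what removes this requirement.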
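Below is a minimal sketch of the resulting pessimistic selection rule, assuming a finite policy class, logged bandit data with known propensities, and a finite bound on the IPW scores; the classical empirical Bernstein deviation above stands in for the paper's generalized inequality, and all names (ipw_lcb, pessimistic_policy, score_bound) are illustrative, not the authors' implementation.

import numpy as np

def ipw_lcb(policy_actions, actions, rewards, propensities, score_bound, delta=0.05):
    """Lower confidence bound on a policy's value from logged bandit data.

    policy_actions: action the candidate policy assigns to each logged unit.
    score_bound:    assumed bound b on the IPW scores (finite only under
                    uniform overlap; the paper's inequality dispenses with it).
    """
    n = len(rewards)
    # IPW score per unit: 1{policy agrees with logged action} * reward / propensity.
    scores = (policy_actions == actions) * rewards / propensities
    mean, var = scores.mean(), scores.var(ddof=1)
    # Classical empirical Bernstein deviation (Maurer and Pontil, 2009).
    log_term = np.log(2.0 / delta)
    dev = np.sqrt(2.0 * var * log_term / n) + 7.0 * score_bound * log_term / (3.0 * (n - 1))
    return mean - dev

def pessimistic_policy(policy_class, actions, rewards, propensities, score_bound):
    """Return the candidate policy whose value LCB is largest."""
    lcbs = [ipw_lcb(pa, actions, rewards, propensities, score_bound)
            for pa in policy_class]
    return policy_class[int(np.argmax(lcbs))]

Optimizing the point estimates instead would reward policies whose IPW scores contain a few huge, noisy terms; subtracting the variance-sensitive deviation penalizes exactly those policies, which is the sense in which the rule is pessimistic.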

