Generalizing Off-Policy Learning under Sample Selection Bias

12/02/2021
by   Tobias Hatt, et al.

Learning personalized decision policies that generalize to the target population is of great relevance. Since training data is often not representative of the target population, standard policy learning methods may yield policies that do not generalize to the target population. To address this challenge, we propose a novel framework for learning policies that generalize to the target population. For this, we characterize the difference between the training data and the target population as a sample selection bias using a selection variable. Over an uncertainty set around this selection variable, we optimize the minimax value of a policy to achieve the best worst-case policy value on the target population. In order to solve the minimax problem, we derive an efficient algorithm based on a convex-concave procedure and prove convergence for parametrized spaces of policies such as logistic policies. We prove that, if the uncertainty set is well-specified, our policies generalize to the target population, as they cannot do worse than on the training data. Using simulated data and a clinical trial, we demonstrate that, compared to standard policy learning methods, our framework substantially improves the generalizability of policies.
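The minimax idea described above can be sketched in spirit: an adversary picks selection weights from an uncertainty set to make the estimated policy value as pessimistic as possible, and the policy parameters are then updated against that worst case. The toy Python sketch below assumes a box uncertainty set [1/Γ, Γ] on per-sample selection weights, a heuristic extreme-point adversary, and a simple coordinate search in place of the paper's convex-concave procedure; all data, names, and update rules here are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: features X, binary actions A, observed rewards Y,
# and known propensities P(A=1 | X) from a randomized study.
n, d = 500, 3
X = rng.normal(size=(n, d))
A = rng.binomial(1, 0.5, size=n)
Y = X[:, 0] * (2 * A - 1) + rng.normal(scale=0.1, size=n)
prop = np.full(n, 0.5)

def policy_prob(theta, X):
    """Logistic policy: probability of choosing action 1 given features."""
    return 1.0 / (1.0 + np.exp(-(X @ theta)))

def ips_scores(theta):
    """Per-sample inverse-propensity-weighted policy value contributions."""
    pi = policy_prob(theta, X)
    w = np.where(A == 1, pi / prop, (1 - pi) / (1 - prop))
    return w * Y

def worst_case_value(theta, gamma=2.0):
    """Heuristic adversary: pick selection weights s_i in [1/gamma, gamma]
    (a box uncertainty set standing in for the selection variable) so as to
    down-weight favorable samples and up-weight unfavorable ones."""
    scores = ips_scores(theta)
    s = np.where(scores > 0, 1.0 / gamma, gamma)
    return np.sum(s * scores) / np.sum(s)

# Maximize the worst-case value over logistic policy parameters via a
# simple coordinate search (stand-in for the convex-concave procedure).
theta = np.zeros(d)
for _ in range(200):
    for j in range(d):
        for step in (0.1, -0.1):
            cand = theta.copy()
            cand[j] += step
            if worst_case_value(cand) > worst_case_value(theta):
                theta = cand

print(round(worst_case_value(theta), 3))
```

Because the search only accepts strict improvements, the final worst-case value is never below that of the initial zero policy; on this toy data the learned policy aligns with the first feature, which determines the reward sign.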


Related research

04/23/2023  Policy Learning under Biased Sample Selection
Practitioners often use data from a randomized controlled trial to learn...

05/22/2018  Confounding-Robust Policy Improvement
We study the problem of learning personalized decision policies from obs...

06/20/2019  More Efficient Policy Learning via Optimal Retargeting
Policy learning can be used to extract individualized treatment regimes ...

11/11/2022  What does it mean to be "representative"?
Medical and population health science researchers frequently make ambigu...

11/22/2021  Case-based off-policy policy evaluation using prototype learning
Importance sampling (IS) is often used to perform off-policy policy eval...
