Importance Sampling with Unequal Support

by Philip S. Thomas, et al.

Importance sampling is often used in machine learning when training and testing data come from different distributions. In this paper we propose a new variant of importance sampling that can reduce the variance of importance sampling-based estimates by orders of magnitude when the supports of the training and testing distributions differ. After motivating and presenting our new importance sampling estimator, we provide a detailed theoretical analysis that characterizes both its bias and variance relative to the ordinary importance sampling estimator, across settings that include cases where ordinary importance sampling is biased while our new estimator is not, and vice versa. We conclude with an example of how our new importance sampling estimator can be used to improve estimates of how well a new treatment policy for diabetes will work for an individual, using only data from when the individual used a previous treatment policy.
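To make the setting concrete, the following is a minimal sketch of the *ordinary* importance sampling estimator that the paper takes as its baseline (this is not the paper's proposed unequal-support estimator). The function names and the uniform-distribution example are illustrative choices, not from the paper. Note how, when the target distribution's support is a strict subset of the sampling distribution's support, many samples receive weight zero, which is the variance problem the paper's estimator addresses.

```python
import random

def ordinary_importance_sampling(samples, target_pdf, behavior_pdf, f):
    """Estimate E_{x ~ p}[f(x)] using samples drawn from q.

    Ordinary IS reweights each sample by w(x) = p(x) / q(x):
        (1/n) * sum_i w(x_i) * f(x_i)
    """
    n = len(samples)
    return sum(target_pdf(x) / behavior_pdf(x) * f(x) for x in samples) / n

# Illustration: behavior q is uniform on [0, 1]; target p is uniform on
# [0, 0.5], so p's support is a strict subset of q's support.
def behavior_pdf(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def target_pdf(x):
    return 2.0 if 0.0 <= x <= 0.5 else 0.0

rng = random.Random(0)
samples = [rng.random() for _ in range(200_000)]

# True value of E_{x ~ p}[x] is 0.25.
estimate = ordinary_importance_sampling(samples, target_pdf, behavior_pdf, lambda x: x)

# Roughly half of the samples fall outside p's support and get weight 0,
# contributing nothing to the estimate while still inflating its variance.
zero_weight_fraction = sum(1 for x in samples if target_pdf(x) == 0.0) / len(samples)
```

The estimator above is unbiased under standard support assumptions, but the zero-weight samples are exactly the inefficiency that motivates the unequal-support variant studied in the paper.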



