Outcome Assumptions and Duality Theory for Balancing Weights

by David Bruns-Smith et al.

We study balancing weight estimators, which reweight outcomes from a source population to estimate missing outcomes in a target population. These estimators minimize the worst-case error by making an assumption about the outcome model. In this paper, we show that this outcome assumption has two immediate implications. First, we can replace the minimax optimization problem for balancing weights with a simple convex loss over the assumed outcome function class. Second, we can replace the commonly made overlap assumption with a more appropriate quantitative measure, the minimum worst-case bias. Finally, we show conditions under which the weights remain robust when our assumptions on the outcomes are wrong.
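To make the setup concrete, here is a minimal sketch of a balancing weight estimator in the spirit the abstract describes: when the outcome is assumed to lie in the span of a fixed set of basis functions, the minimax problem reduces to a convex loss that penalizes imbalance in those features. The sketch below is illustrative only, not the paper's actual method: it assumes a linear outcome class over raw covariates, uses a small ridge penalty for stability, and solves the resulting simplex-constrained quadratic program by projected gradient descent (with Duchi et al.'s sorting-based simplex projection). All function names and parameters are hypothetical.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (sorting-based algorithm of Duchi et al.)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    ind = np.arange(1, len(v) + 1)
    rho = ind[u - css / ind > 0][-1]
    return np.maximum(v - css[rho - 1] / rho, 0.0)

def balancing_weights(X_source, target_means, lam=1e-4, iters=20000):
    """Simplex-constrained weights matching source feature means to
    `target_means`: a convex surrogate for the minimax problem when the
    outcome is assumed linear in the features.  Solves
        min_w ||X_source.T @ w - target_means||^2 + lam * ||w||^2
        s.t.  w >= 0,  sum(w) = 1
    by projected gradient descent with a 1/L step size."""
    A = X_source.T                                  # (d, n) feature matrix
    n = X_source.shape[0]
    w = np.full(n, 1.0 / n)                         # start from uniform weights
    step = 1.0 / (2.0 * (np.linalg.norm(A, 2) ** 2 + lam))
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ w - target_means) + 2.0 * lam * w
        w = project_to_simplex(w - step * grad)
    return w

# Synthetic illustration: the target population has shifted covariate means,
# and the outcome really is linear in the covariates (the assumed class).
rng = np.random.default_rng(0)
X_s = rng.normal(size=(200, 2))                     # source covariates
target_means = np.array([1.0, -0.5])                # target feature means
y_s = 2.0 * X_s[:, 0] - X_s[:, 1]                   # source outcomes

w = balancing_weights(X_s, target_means)
true_target_mean = 2.0 * 1.0 - (-0.5)               # = 2.5 under the linear model
print(abs(y_s.mean() - true_target_mean))           # naive source mean vs. target
print(abs(w @ y_s - true_target_mean))              # balanced estimate, near zero
```

Because the simulated outcome lies exactly in the assumed linear class, driving the feature imbalance to zero also drives the estimator's bias to zero; when the outcome class is misspecified, the residual imbalance in the unmodeled directions is what the paper's minimum worst-case bias quantifies.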





Related research

- Model misspecification and bias for inverse probability weighting and doubly robust estimators ("In the causal inference literature a class of semi-parametric estimators...")
- Enhanced Balancing of Bias-Variance Tradeoff in Stochastic Estimation: A Minimax Perspective ("Biased stochastic estimators, such as finite-differences for noisy gradi...")
- Reducing bias in difference-in-differences models using entropy balancing ("This paper illustrates the use of entropy balancing in difference-in-dif...")
- Faster Algorithms and Constant Lower Bounds for the Worst-Case Expected Error ("The study of statistical estimation without distributional assumptions o...")
- Surprise in Elections ("Elections involving a very large voter population often lead to outcomes...")
- Posterior Average Effects ("Economists are often interested in computing averages with respect to a ...")