Outcome Assumptions and Duality Theory for Balancing Weights

03/17/2022
by David Bruns-Smith, et al.

We study balancing weight estimators, which reweight outcomes from a source population to estimate missing outcomes in a target population. These estimators minimize worst-case estimation error over an assumed class of outcome functions. In this paper, we show that this outcome assumption has two immediate implications. First, we can replace the minimax optimization problem for balancing weights with a simple convex loss over the assumed outcome function class. Second, we can replace the commonly made overlap assumption with a more appropriate quantitative measure, the minimum worst-case bias. Finally, we show conditions under which the weights remain robust when the assumed outcome model is misspecified.
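To make the first claim concrete, below is a minimal sketch (not taken from the paper) of balancing weights under one simple outcome assumption: a linear outcome class with a norm-bounded coefficient vector. Under that assumption, the worst-case bias of a weighted source average relative to the target mean is just the norm of the covariate imbalance, so the minimax problem collapses to a convex quadratic objective in the weights. The function name and the ridge-style penalty are illustrative choices, not the paper's estimator.

```python
import numpy as np

def linear_balancing_weights(X_source, X_target, reg=1.0):
    """Balancing weights for a linear outcome class (illustrative sketch).

    Assumes outcomes of the form f(x) = x @ beta with ||beta||_2 <= 1, so the
    worst-case bias of a weighted source average relative to the target mean
    is the Euclidean norm of X_source.T @ w - mean(X_target). Minimizing
    worst-case bias^2 + reg * ||w||_2^2 is then a convex quadratic problem
    with the closed-form solution below.
    """
    n = X_source.shape[0]
    mu_target = X_target.mean(axis=0)                    # target covariate means to match
    gram = X_source @ X_source.T + reg * np.eye(n)       # (n, n) regularized Gram matrix
    return np.linalg.solve(gram, X_source @ mu_target)   # weights w on source units

# Example usage on synthetic data (purely illustrative):
rng = np.random.default_rng(0)
X_s = rng.normal(size=(200, 5))
X_t = rng.normal(loc=0.3, size=(100, 5))
w = linear_balancing_weights(X_s, X_t, reg=1.0)
imbalance = np.linalg.norm(X_s.T @ w - X_t.mean(axis=0))  # worst-case bias for this class
```

In this simplified setting, the residual imbalance norm plays the role of the minimum worst-case bias mentioned in the abstract: if no choice of weights can drive it near zero, bias cannot be controlled for this outcome class.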

