Salvaging Falsified Instrumental Variable Models

12/30/2018
by Matthew A. Masten, et al.

What should researchers do when their baseline model is refuted? We provide four constructive answers. First, researchers can measure the extent of falsification. To do this, we consider continuous relaxations of the baseline assumptions of concern. We then define the falsification frontier: the boundary between the set of assumptions which falsify the model and those which do not. This frontier provides a quantitative measure of the extent of falsification. Second, researchers can present the identified set for the parameter of interest under the assumption that the true model lies somewhere on this frontier. We call this the falsification adaptive set. This set generalizes the standard baseline estimand to account for possible falsification. Third, researchers can present the identified set for a specific point on this frontier. Finally, as a sensitivity analysis, researchers can present identified sets for points beyond the frontier. To illustrate these four ways of salvaging falsified models, we study overidentifying restrictions in two instrumental variable models: a homogeneous effects linear model, and heterogeneous effects models with either binary or continuous outcomes. In the linear model, we consider the classical overidentifying restrictions implied when multiple instruments are observed. We generalize these conditions by considering continuous relaxations of the classical exclusion restrictions. By sufficiently weakening the assumptions, a falsified baseline model becomes non-falsified. We obtain analogous results in the heterogeneous effects models, where we derive identified sets for marginal distributions of potential outcomes, falsification frontiers, and falsification adaptive sets under continuous relaxations of the instrument exogeneity assumptions.
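The overidentification logic in the linear model can be illustrated numerically. The sketch below is not the paper's method; it is a minimal simulation, under assumed data-generating parameters, showing how two single-instrument IV (Wald) estimands disagree when one instrument violates the exclusion restriction, which is the kind of disagreement that falsifies the baseline model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two candidate instruments; z2 is assumed to mildly violate exclusion
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
u = rng.normal(size=n)                    # unobserved confounder
x = z1 + z2 + u + rng.normal(size=n)      # endogenous regressor
beta = 2.0                                # true homogeneous effect
y = beta * x + 0.3 * z2 + u               # 0.3*z2 is a direct effect: exclusion fails

def iv_estimate(z, x, y):
    """Single-instrument IV (Wald) estimand: cov(z, y) / cov(z, x)."""
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

estimates = [iv_estimate(z, x, y) for z in (z1, z2)]
# The valid instrument z1 recovers beta; z2's estimand is shifted by the
# direct effect, so the two estimands disagree and the baseline
# (both-instruments-valid) model is falsified.
print(estimates)
```

In this simulation the z1-based estimand is close to 2.0 while the z2-based estimand is shifted to roughly 2.3; the gap between them is a crude, illustrative analogue of "how far" the data are from satisfying both exclusion restrictions at once.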
