Regression adjustment in randomized experiments with a diverging number of covariates

06/20/2018
by Lihua Lei, et al.

Extending R. A. Fisher and D. A. Freedman's results on the analysis of covariance, Lin [2013] proposed an ordinary least squares adjusted estimator of the average treatment effect in completely randomized experiments. His results are appealing because he uses Neyman's randomization model without imposing any modeling assumptions, and the consistency and asymptotic normality of his estimator hold even if the linear model is misspecified. His results hold for a fixed dimension p of the covariates with the sample size n approaching infinity. However, it is common for practitioners to adjust for a large number of covariates to improve efficiency. Therefore, it is important to consider asymptotic regimes allowing for a diverging number of covariates. We show that Lin [2013]'s estimator is consistent when κ log p → 0 and asymptotically normal when κ p → 0 under mild moment conditions, where κ is the maximum leverage score of the covariate matrix. In the favorable case where the leverage scores are all close, his estimator is consistent when p = o(n / log n) and asymptotically normal when p = o(n^{1/2}). In addition, we propose a bias-corrected estimator that is consistent when κ log p → 0 and asymptotically normal, with the same variance as in the fixed-p regime, when κ^2 p log p → 0. In the favorable case, the latter condition reduces to p = o(n^{2/3} / (log n)^{1/3}), and our simulation confirms that n^{2/3} is the phase-transition threshold for Lin [2013]'s central limit theorem. Our analysis requires novel analytic tools for finite population inference and sampling without replacement, which complement and potentially enrich the theory in other areas such as survey sampling, matrix sketching, and transductive learning.
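The sketch below (not the authors' code) illustrates the two quantities the abstract refers to: Lin [2013]'s OLS-adjusted estimator, obtained by regressing the outcome on the treatment indicator, mean-centered covariates, and their interactions, and the maximum leverage score κ of the covariate matrix. The data, variable names, and dimensions are simulated assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 20

# Simulated covariates, a completely randomized treatment assignment,
# and outcomes with a true average treatment effect of 2.0 (illustrative only).
X = rng.normal(size=(n, p))
T = np.zeros(n)
T[rng.choice(n, size=n // 2, replace=False)] = 1.0
Y = X @ rng.normal(size=p) + 2.0 * T + rng.normal(size=n)

# Lin [2013]'s estimator: OLS of Y on an intercept, T, the mean-centered
# covariates, and treatment-by-covariate interactions; the coefficient on T
# is the adjusted estimate of the average treatment effect.
Xc = X - X.mean(axis=0)
D = np.column_stack([np.ones(n), T, Xc, T[:, None] * Xc])
coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
tau_hat = coef[1]

# Maximum leverage score kappa = max_i H_ii, where H = Xc (Xc'Xc)^{-1} Xc'
# is the hat matrix of the centered covariate matrix.
Q, _ = np.linalg.qr(Xc)          # reduced QR; leverage_i = ||Q_i||^2
kappa = np.max(np.sum(Q**2, axis=1))

print(f"adjusted ATE estimate: {tau_hat:.3f}")
print(f"max leverage score kappa: {kappa:.3f}  (p/n = {p / n:.3f})")

In the favorable case where the covariates are roughly balanced, κ is of order p/n, which is how the rate conditions on κ in the abstract translate into the conditions on p stated there.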
