High Dimensional M-Estimation with Missing Outcomes: A Semi-Parametric Framework

11/26/2019 ∙ by Abhishek Chakrabortty, et al.

We consider high dimensional M-estimation in settings where the response Y is possibly missing at random and the covariates X∈R^p can be high dimensional compared to the sample size n. The parameter of interest θ_0 ∈R^d is defined as the minimizer of the risk of a convex loss under a fully non-parametric model, and θ_0 itself is high dimensional, which is a key distinction from existing works. Standard high dimensional regression and series estimation with possibly misspecified models and missing Y are included as special cases, as are their counterparts in causal inference using 'potential outcomes'. Assuming θ_0 is s-sparse (s ≪ n), we propose an L_1-regularized debiased and doubly robust (DDR) estimator of θ_0 based on a high dimensional adaptation of the traditional doubly robust (DR) estimator's construction. Under mild tail assumptions and arbitrarily chosen (working) models for the propensity score (PS) and the outcome regression (OR) estimators, satisfying only some high-level conditions, we establish finite sample performance bounds for the DDR estimator, showing its (optimal) L_2 error rate to be √(s (log d)/ n) when both models are correct, and its consistency and DR properties when only one of them is correct. Further, when both models are correct, we propose a desparsified version of our DDR estimator that satisfies an asymptotic linear expansion and facilitates inference on low dimensional components of θ_0. Finally, we discuss various choices of high dimensional parametric/semi-parametric working models for the PS and OR estimators. All results are validated via detailed simulations.
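The DDR estimator above builds on the classical doubly robust (augmented inverse-probability-weighted) construction. As a minimal illustration of that underlying idea only, not of the paper's high dimensional estimator, the sketch below computes the DR estimate of a scalar mean E[Y] with Y missing at random, using the true propensity score as the PS working model and a linear OR model fit on complete cases (both choices are simplifying assumptions for this toy example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
X = rng.normal(size=n)
Y = 1.0 + 2.0 * X + rng.normal(size=n)      # true E[Y] = 1.0
pi = 1.0 / (1.0 + np.exp(-(0.5 + X)))       # true propensity P(Y observed | X)
T = rng.binomial(1, pi)                     # T = 1 if Y is observed

# Working OR model: linear regression of Y on X, fit on complete cases only.
design = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(design[T == 1], Y[T == 1], rcond=None)[0]
m_hat = design @ beta                       # outcome regression m̂(X) for all units

# Naive complete-case mean: biased because missingness depends on X.
mu_naive = Y[T == 1].mean()

# DR (AIPW) estimator: IPW term plus augmentation; consistent if either
# the PS working model or the OR working model is correct.
mu_dr = np.mean(T * Y / pi - (T - pi) / pi * m_hat)
```

Here `mu_dr` recovers E[Y] ≈ 1.0 despite the informative missingness that biases `mu_naive`; the paper's DDR estimator extends this double robustness to a sparse, high dimensional θ_0 via L_1 regularization and debiasing.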
