Orthogonal Statistical Learning

01/25/2019
by Dylan J. Foster, et al.

We provide excess risk guarantees for statistical learning in the presence of an unknown nuisance component. We analyze a two-stage sample-splitting meta-algorithm that takes as input two arbitrary estimation algorithms: one for the target model and one for the nuisance model. We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the first-stage error on the excess risk bound achieved by the meta-algorithm is of second order. Our general theorem is agnostic to the particular algorithms used for the target and nuisance and only makes an assumption on their individual performance. This enables the use of a plethora of existing results from the statistical learning and machine learning literature to give new guarantees for learning with a nuisance component. Moreover, by focusing on excess risk rather than parameter estimation, we can give guarantees under weaker assumptions than in previous works and accommodate the case where the target parameter belongs to a complex nonparametric class. When the nuisance and target parameters belong to arbitrary classes, we characterize conditions on the metric entropy under which oracle rates (rates of the same order as if the nuisance model were known) are achieved. We also analyze the rates achieved by specific estimation algorithms, such as variance-penalized empirical risk minimization, neural network estimation, and sparse high-dimensional linear model estimation. We highlight the applicability of our results via four applications of primary importance: 1) heterogeneous treatment effect estimation, 2) offline policy optimization, 3) domain adaptation, and 4) learning with missing data.
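For context, the Neyman orthogonality condition referenced above can be stated schematically as the vanishing of the cross derivative of the population risk at the true target and nuisance pair. The notation below (the population risk L_D, the true pair (θ₀, g₀), and the directional derivative D) is our own shorthand and may differ from the paper's exact formulation:

```latex
% Schematic statement of Neyman orthogonality (notation is ours):
% L_D(\theta, g) is the population risk, (\theta_0, g_0) the true target
% and nuisance, and D denotes a directional (Gateaux) derivative.
\[
  D_g \, D_\theta \, L_D(\theta_0, g_0)\big[\theta - \theta_0,\; g - g_0\big] = 0
  \quad \text{for all } \theta \in \Theta,\ g \in \mathcal{G}.
\]
```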
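The two-stage sample-splitting meta-algorithm itself is simple to sketch. Below is a minimal, hedged illustration of the scheme described in the abstract, not the paper's exact procedure: the callables fit_nuisance and fit_target are hypothetical placeholders for the two arbitrary estimation algorithms the meta-algorithm takes as input.

```python
import numpy as np

def orthogonal_meta_learner(data, fit_nuisance, fit_target, seed=None):
    """Two-stage sample-splitting meta-algorithm (schematic sketch).

    fit_nuisance(samples) -> g_hat: any estimator of the nuisance model.
    fit_target(samples, g_hat) -> theta_hat: any estimator that minimizes
    the empirical plug-in risk for the target model, treating g_hat as fixed.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    idx = rng.permutation(n)
    first_half, second_half = idx[: n // 2], idx[n // 2 :]

    # Stage 1: estimate the nuisance model on one half of the sample.
    g_hat = fit_nuisance([data[i] for i in first_half])

    # Stage 2: estimate the target model on the held-out half, plugging in
    # g_hat. Under Neyman orthogonality, the first-stage error affects the
    # excess risk of theta_hat only at second order.
    theta_hat = fit_target([data[i] for i in second_half], g_hat)
    return theta_hat
```

Sample splitting is what makes the analysis agnostic to the two input algorithms: because g_hat is fit on an independent half of the data, the second stage can treat it as a fixed plug-in and only the individual performance of each estimator needs to be assumed.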

Related research

06/13/2018
Plug-in Regularized Estimation of High-Dimensional Parameters in Nonlinear Semiparametric Models
We develop a theory for estimation of a high-dimensional sparse paramete...

04/30/2022
Orthogonal Statistical Learning with Self-Concordant Loss
Orthogonal statistical learning and double machine learning have emerged...

05/31/2021
A Simple and General Debiased Machine Learning Theorem with Finite Sample Guarantees
Debiased machine learning is a meta algorithm based on bias correction a...

12/30/2019
Localized Debiased Machine Learning: Efficient Estimation of Quantile Treatment Effects, Conditional Value at Risk, and Beyond
We consider the efficient estimation of a low-dimensional parameter in t...

04/29/2020
Optimal doubly robust estimation of heterogeneous causal effects
Heterogeneous effect estimation plays a crucial role in causal inference...

08/14/2020
Semiparametric Estimation and Inference on Structural Target Functions using Machine Learning and Influence Functions
We aim to construct a class of learning algorithms that are of practical...

06/16/2020
Theory of Machine Learning Debugging via M-estimation
We investigate problems in penalized M-estimation, inspired by applicati...
