Minimax Semiparametric Learning With Approximate Sparsity

12/27/2019
by   Jelena Bradic, et al.
Many objects of interest can be expressed as a linear, mean-square continuous functional of a least squares projection (regression). Often the regression is high dimensional, depending on many variables. This paper gives minimal conditions for root-n consistent and efficient estimation of such objects when the regression and the Riesz representer of the functional are approximately sparse and the sum of the absolute values of the coefficients is bounded. The approximately sparse functions we consider are those for which an approximation by some t regressors has root mean square error at most Ct^{-ξ} for C, ξ > 0. We show that a necessary condition for efficient estimation is that the sparse approximation rate ξ_1 for the regression and the rate ξ_2 for the Riesz representer satisfy max{ξ_1, ξ_2} > 1/2. This condition is stronger than the corresponding condition ξ_1 + ξ_2 > 1/2 for Hölder classes of functions. We also show that Lasso-based, cross-fit, debiased machine learning estimators are asymptotically efficient under these conditions. In addition, we show efficiency of an estimator without cross-fitting when the functional depends on the regressors and the regression sparse approximation rate satisfies ξ_1 > 1/2.
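The abstract's "Lasso-based, cross-fit, debiased machine learning estimator" can be illustrated on one concrete functional of this kind: the average treatment effect, whose Riesz representer is D/p(X) − (1−D)/(1−p(X)). The sketch below is not the paper's code; it is a minimal illustration assuming a synthetic, approximately sparse data-generating process (polynomially decaying coefficients), Lasso for the regression, ℓ1-penalized logistic regression for the propensity score, and 5-fold cross-fitting of the Neyman-orthogonal score. All tuning choices (n, p, penalty levels) are illustrative assumptions.

```python
# Hedged sketch of a cross-fit, debiased estimator for the average treatment
# effect (one example of a linear, mean-square continuous functional).
# Data-generating choices here are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 2000, 50
X = rng.normal(size=(n, p))

# Approximately sparse coefficients: beta_t ~ t^{-2}, so the best t-term
# approximation error decays polynomially, as in the abstract's definition.
beta = 1.0 / (1 + np.arange(p)) ** 2
propensity = 1.0 / (1.0 + np.exp(-(X @ beta)))
D = rng.binomial(1, propensity)
tau = 1.0  # true average treatment effect in this simulation
Y = tau * D + X @ beta + rng.normal(size=n)

scores = np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Outcome regressions gamma(d, x), fit by Lasso on the training fold only.
    mu = {}
    for d in (0, 1):
        idx = train[D[train] == d]
        mu[d] = Lasso(alpha=0.05).fit(X[idx], Y[idx])
    # Propensity score (the Riesz representer ingredient) by l1-penalized
    # logistic regression, also fit on the training fold.
    ps_model = LogisticRegression(penalty="l1", C=1.0, solver="liblinear")
    ps_model.fit(X[train], D[train])
    ps = np.clip(ps_model.predict_proba(X[test])[:, 1], 0.01, 0.99)

    m1 = mu[1].predict(X[test])
    m0 = mu[0].predict(X[test])
    # Debiased (Neyman-orthogonal) score: plug-in term plus the
    # Riesz-representer-weighted regression residual.
    alpha_hat = D[test] / ps - (1 - D[test]) / (1 - ps)
    fitted = np.where(D[test] == 1, m1, m0)
    scores[test] = m1 - m0 + alpha_hat * (Y[test] - fitted)

theta_hat = scores.mean()
se = scores.std(ddof=1) / np.sqrt(n)
print(f"debiased ATE estimate: {theta_hat:.3f} (se {se:.3f})")
```

Cross-fitting (estimating the nuisance functions on one fold and evaluating the score on the held-out fold) is what removes the own-observation bias; the orthogonal score makes the estimate first-order insensitive to the Lasso regularization bias, which is why root-n inference is possible under the approximate-sparsity rates the paper derives.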


