Asymptotic normality of robust M-estimators with convex penalty

07/08/2021
by Pierre C. Bellec, et al.

This paper develops asymptotic normality results for individual coordinates of robust M-estimators with convex penalty in high dimensions, where the dimension p is at most of the same order as the sample size n, i.e., p/n ≤ γ for some fixed constant γ > 0. The asymptotic normality requires a bias correction and holds for most coordinates of the M-estimator for a large class of loss functions, including the Huber loss and its smoothed versions, regularized with a strongly convex penalty. The asymptotic variance that characterizes the width of the resulting confidence intervals is estimated with data-driven quantities. This variance estimate adapts automatically to the low-dimensional (p/n → 0) and high-dimensional (p/n ≤ γ) regimes and does not involve the proximal operators seen in previous works on asymptotic normality of M-estimators. For the Huber loss, the estimated variance has a simple expression involving an effective degrees of freedom as well as an effective sample size. The case of the Huber loss with Elastic-Net penalty is studied in detail, and a simulation study confirms the theoretical findings. The asymptotic normality results follow from Stein formulae for high-dimensional random vectors on the sphere, developed in the paper, which are of independent interest.
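To make the setting concrete, the sketch below fits a Huber-loss M-estimator with a ridge (strongly convex) penalty by plain gradient descent. This is an illustration of the class of estimators the abstract describes, not the paper's inference procedure: the function names (`huber_grad`, `fit_huber_ridge`), the step-size choice, and the default tuning constants are assumptions made here for the example.

```python
import numpy as np

def huber_grad(r, delta):
    """Derivative of the Huber loss with parameter delta, evaluated at residuals r."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def fit_huber_ridge(X, y, lam=0.01, delta=1.345, n_iter=500):
    """Minimize (1/n) * sum_i huber(y_i - x_i'b) + lam * ||b||^2 by gradient descent.

    The Huber loss is gradient-Lipschitz and the ridge term makes the
    objective strongly convex, so gradient descent with step 1/L converges.
    """
    n, p = X.shape
    # Crude Lipschitz bound on the gradient of the objective.
    L = np.linalg.norm(X, 2) ** 2 / n + 2 * lam
    b = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ b
        grad = -X.T @ huber_grad(r, delta) / n + 2 * lam * b
        b -= grad / L
    return b
```

A natural data-driven quantity in this setting is the count of residuals that fall in the quadratic region of the loss, `np.sum(np.abs(y - X @ b) <= delta)`, which plays the role of an effective sample size; the exact variance formula, however, is the one derived in the paper.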

Related research

- 08/26/2020. Out-of-sample error estimate for robust M-estimators with convex penalty. "A generic out-of-sample error estimate is proposed for robust M-estimato..."
- 04/14/2022. Observable adjustments in single-index models for regularized M-estimators. "We consider observations (X,y) from single index models with unknown lin..."
- 08/12/2019. Sharp Guarantees for Solving Random Equations with One-Bit Information. "We study the performance of a wide class of convex optimization-based es..."
- 07/11/2021. Derivatives and residual distribution of regularized M-estimators with application to adaptive tuning. "This paper studies M-estimators with gradient-Lipschitz loss function re..."
- 09/26/2020. Constructing Confidence Intervals for the Signals in Sparse Phase Retrieval. "In this paper, we provide a general methodology to draw statistical infe..."
- 12/26/2019. Second order Poincaré inequalities and de-biasing arbitrary convex regularizers when p/n → γ. "A new Central Limit Theorem (CLT) is developed for random variables of t..."
- 07/19/2022. Inference for high-dimensional split-plot designs with different dimensions between groups. "In repeated Measure Designs with multiple groups, the primary purpose is..."
