Out-of-sample error estimate for robust M-estimators with convex penalty

08/26/2020
by Pierre C. Bellec, et al.

A generic out-of-sample error estimate is proposed for robust M-estimators regularized with a convex penalty in high-dimensional linear regression where (X, y) is observed and p, n are of the same order. If ψ is the derivative of the robust data-fitting loss ρ, the estimate depends on the observed data only through the quantities ψ̂ = ψ(y − Xβ̂), X^⊤ψ̂, and the derivatives (∂/∂y)ψ̂ and (∂/∂y)Xβ̂ for fixed X. The out-of-sample error estimate enjoys a relative error of order n^-1/2 in a linear model with Gaussian covariates and independent noise, either non-asymptotically when p/n ≤ γ or asymptotically in the high-dimensional regime p/n → γ' ∈ (0, ∞). General differentiable loss functions ρ are allowed provided that ψ = ρ' is 1-Lipschitz. The validity of the out-of-sample error estimate holds either under a strong convexity assumption, or for the ℓ_1-penalized Huber M-estimator when the number of corrupted observations and the sparsity of the true β are both bounded from above by s_* n for some small enough constant s_* ∈ (0, 1) independent of n, p. For the square loss and in the absence of corruption in the response, the results additionally yield n^-1/2-consistent estimates of the noise variance and of the generalization error. This generalizes, to an arbitrary convex penalty, estimates that were previously known for the Lasso.
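For intuition on the square-loss special case mentioned in the last sentence, the previously known Lasso estimate takes the form R̂ = (‖y − Xβ̂‖²/n) / (1 − d̂f/n)², where d̂f is the number of active coordinates of β̂. The sketch below simulates this case under a Gaussian-covariate linear model and compares the estimate to the true out-of-sample error ‖β̂ − β‖² + σ². It is a minimal illustration only: the sample sizes, sparsity, penalty level, and variable names are arbitrary choices for the demo, not taken from the paper.

    # Minimal simulation of the known Lasso (square-loss) out-of-sample
    # error estimate R_hat = (||y - X b||^2 / n) / (1 - df/n)^2, where
    # df = |support(b)|. All settings are illustrative.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, s, sigma = 1000, 500, 20, 1.0            # p/n = 0.5, s-sparse signal
    X = rng.standard_normal((n, p))                # Gaussian covariates
    beta = np.zeros(p)
    beta[:s] = 1.0                                 # true s-sparse coefficients
    y = X @ beta + sigma * rng.standard_normal(n)  # independent Gaussian noise

    b = Lasso(alpha=0.1, fit_intercept=False).fit(X, y).coef_

    resid = y - X @ b
    df_hat = np.count_nonzero(b)                   # degrees of freedom of the Lasso
    R_hat = (resid @ resid / n) / (1.0 - df_hat / n) ** 2

    # True out-of-sample error for a fresh observation (x0, y0) with
    # isotropic Gaussian x0: E[(y0 - x0 @ b)^2] = ||b - beta||^2 + sigma^2.
    R_true = np.sum((b - beta) ** 2) + sigma ** 2
    print(f"R_hat = {R_hat:.4f}   true out-of-sample error = {R_true:.4f}")

In the general robust case described above, the residual norm and d̂f of this special case are replaced by the observable quantities ψ̂, X^⊤ψ̂ and the derivatives (∂/∂y)ψ̂ and (∂/∂y)Xβ̂; the exact combination is given in the paper.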
