First order expansion of convex regularized estimators

10/12/2019
by Pierre C. Bellec, et al.

We consider first order expansions of convex penalized estimators in high-dimensional regression problems with random designs. Our setting includes linear regression and logistic regression as special cases. For a given penalty function h and the corresponding penalized estimator β̂, we construct a quantity η, the first order expansion of β̂, such that the distance between β̂ and η is an order of magnitude smaller than the estimation error β̂ - β^*. In this sense, the first order expansion η can be thought of as a generalization of influence functions from the mathematical statistics literature to regularized estimators in high dimensions. Such a first order expansion implies that the risk of β̂ is asymptotically the same as the risk of η, which leads to a precise characterization of the MSE of β̂; this characterization takes a particularly simple form for isotropic designs. The first order expansion also leads to inference results based on β̂. We provide sufficient conditions for the existence of such a first order expansion for three regularizers: the Lasso in its constrained form, the Lasso in its penalized form, and the Group-Lasso. The results apply to general loss functions under some conditions, and these conditions are satisfied for the squared loss in linear regression and for the logistic loss in the logistic model.
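The idea can be illustrated numerically for the penalized Lasso with an isotropic Gaussian design. The sketch below is an informal illustration, not the paper's exact construction of η: it fits the Lasso by proximal gradient descent and compares β̂ to a hypothetical one-step proximal surrogate η = soft-threshold(β^* + X^T(y - Xβ^*)/n, λ), checking that ‖β̂ - η‖ is small relative to the estimation error ‖β̂ - β^*‖. The choices of n, p, sparsity, and λ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 1000, 50, 5                      # sample size, dimension, sparsity
X = rng.standard_normal((n, p))            # isotropic Gaussian design
beta_star = np.zeros(p)
beta_star[:s] = 1.0                        # s-sparse target
y = X @ beta_star + rng.standard_normal(n) # linear model with unit noise

lam = 0.1                                  # illustrative penalty level

def soft(v, t):
    """Soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Lasso via proximal gradient (ISTA): min_b (1/2n)||y - Xb||^2 + lam ||b||_1
L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
b = np.zeros(p)
for _ in range(5000):
    grad = X.T @ (X @ b - y) / n
    b = soft(b - grad / L, lam / L)
beta_hat = b

# Hypothetical first order surrogate: one proximal step taken from beta_star
eta = soft(beta_star + X.T @ (y - X @ beta_star) / n, lam)

d_expansion = np.linalg.norm(beta_hat - eta)      # ||beta_hat - eta||
d_error = np.linalg.norm(beta_hat - beta_star)    # estimation error
print(d_expansion, d_error)
```

In this regime (p/n small, isotropic design), X^T X / n is close to the identity, so the one-step surrogate tracks β̂ up to a term of smaller order than the estimation error, which is the qualitative behavior the abstract describes.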


