Some Large Sample Results for the Method of Regularized Estimators

12/19/2017
by Michael Jansson et al.

We present a general framework for studying regularized estimators, that is, estimators for problems in which "plug-in" type estimators are either ill-defined or ill-behaved. We derive primitive conditions that imply consistency and an asymptotically linear representation for regularized estimators, allowing for convergence rates slower than √n as well as infinite-dimensional parameters. We also provide data-driven methods for choosing tuning parameters that, under some conditions, achieve these results. We illustrate the scope of our approach by studying a wide range of applications, revisiting known results and deriving new ones.
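The paper's setting can be illustrated with a textbook example that is not taken from the paper itself: ridge regression under near-collinearity, where the plug-in OLS estimator is ill-behaved, together with a cross-validated choice of the tuning parameter. The sketch below assumes this standard setup; all names and the CV scheme are illustrative, not the authors' method.

```python
import numpy as np

def ridge(X, y, lam):
    """Regularized (ridge) estimator: (X'X + lam*I)^{-1} X'y.
    When X'X is ill-conditioned, the plug-in OLS estimator is
    unstable; the penalty lam regularizes the inversion."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def cv_lambda(X, y, grid, k=5, seed=0):
    """Data-driven tuning: pick lam minimizing k-fold CV error
    (one simple instance of a data-driven tuning rule)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    errs = []
    for lam in grid:
        e = 0.0
        for f in folds:
            mask = np.ones(len(y), dtype=bool)
            mask[f] = False  # hold out fold f
            b = ridge(X[mask], y[mask], lam)
            e += np.mean((y[f] - X[f] @ b) ** 2)
        errs.append(e / k)
    return grid[int(np.argmin(errs))]

# Simulated ill-conditioned design: two nearly collinear columns.
rng = np.random.default_rng(1)
n = 200
z = rng.normal(size=n)
X = np.column_stack([z, z + 1e-3 * rng.normal(size=n)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

lam = cv_lambda(X, y, np.logspace(-4, 2, 20))
beta = ridge(X, y, lam)
```

Here the regularized estimator remains well-behaved even though X'X is nearly singular; the paper's framework covers such estimators in far greater generality, including infinite-dimensional parameters.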


