Penalized robust estimators in logistic regression with applications to sparse models

11/01/2019 · by Ana M. Bianco et al.

Sparse covariates are frequent in classification and regression problems, and in these settings the task of variable selection is usually of interest. As is well known, sparse statistical models correspond to situations where there are only a small number of non–zero parameters, and for that reason they are much easier to interpret than dense ones. In this paper, we focus on the logistic regression model and address robust and penalized estimation of the regression parameter. We introduce a family of penalized weighted M-type estimators for the logistic regression parameter that are stable against atypical data. We explore different penalization functions and introduce the so–called Sign penalization. This new penalty has the advantage of depending on only one penalty parameter, avoiding arbitrary tuning constants. We discuss the variable selection capability of the given proposals as well as their asymptotic behaviour. Through a numerical study, we compare the finite-sample performance of different penalized estimators, both robust and classical, under different scenarios. A robust cross–validation criterion is also presented. The analysis of two real data sets enables us to investigate the stability of the penalized estimators in the presence of outliers.
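The idea of a penalized M-type estimator described above can be sketched numerically: replace the logistic deviance with a bounded loss that caps the influence of atypical observations, and add a sparsity-inducing penalty. The sketch below is illustrative only and is not the authors' exact estimator; the bounded loss `rho(t) = c * (1 - exp(-t/c))` and the L1 penalty used here are hypothetical choices standing in for the weighted M-loss and the penalization functions studied in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_mtype_logistic(X, y, lam, c=3.0):
    """Illustrative penalized M-type logistic fit (sketch, not the paper's method).

    Minimizes sum_i rho(d_i(beta)) + lam * ||beta||_1, where d_i(beta) is the
    per-observation logistic deviance and rho is a bounded loss that
    downweights observations with large deviances.
    """
    n, p = X.shape

    def deviance(beta):
        eta = X @ beta
        # per-observation negative log-likelihood, computed stably
        return np.logaddexp(0.0, eta) - y * eta

    def objective(beta):
        d = deviance(beta)
        # bounded loss: behaves like d for small d, saturates at c for outliers
        rho = c * (1.0 - np.exp(-d / c))
        return rho.sum() + lam * np.abs(beta).sum()

    # derivative-free method since the L1 penalty is nonsmooth at zero
    res = minimize(objective, np.zeros(p), method="Powell")
    return res.x

# toy usage: two active coordinates, three noise coordinates
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
beta_hat = penalized_mtype_logistic(X, y, lam=1.0)
```

Because the loss is bounded, a handful of mislabeled or high-leverage points contribute at most `c` each to the objective, which is the stability-against-outliers property the abstract refers to; the penalty term then shrinks the noise coordinates toward zero.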


