MM for Penalized Estimation

12/23/2019
by Zhu Wang, et al.

Penalized estimation can perform variable selection and parameter estimation simultaneously. The general framework is to minimize a loss function subject to a penalty designed to induce sparse variable selection. Much of the previous work has focused on convex loss functions, including those arising in generalized linear models. When data are contaminated with noise, robust loss functions are typically introduced. Recent literature has witnessed a growing impact of nonconvex loss-based methods, which can produce robust estimation for data contaminated with outliers. This article investigates robust variable selection based on penalized nonconvex loss functions. We study properties of the local and global minimizers of the original penalized loss function and of the surrogate penalized loss function induced by the majorization-minimization (MM) algorithm for numerical computation. We establish convergence theory of the proposed MM algorithm for penalized convex and nonconvex loss functions. The performance of the proposed algorithms for regression and classification problems is evaluated on simulated and real data, including healthcare costs and cancer clinical status. Efficient implementations of the algorithms are available in the R package mpath on CRAN.

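The paper's implementation is the R package mpath; purely as a language-agnostic illustration of the MM idea summarized above, the sketch below (in Python, not from the paper) uses a Welsch-type loss 1 - exp(-r^2 / (2*sigma^2)). Because this loss is concave in the squared residual, a tangent-line majorizer turns each MM step into a weighted least-squares problem with the L1 penalty unchanged, i.e. a weighted lasso. The function name, the choice of loss, and the parameters lam and sigma are illustrative assumptions; the weighted-lasso step assumes scikit-learn >= 0.23, where Lasso.fit accepts sample_weight.

```python
import numpy as np
from sklearn.linear_model import Lasso

def mm_penalized_welsch(X, y, lam=0.1, sigma=1.0, max_iter=50, tol=1e-6):
    """Illustrative MM scheme for an L1-penalized Welsch (nonconvex) loss.

    Not the mpath implementation; a minimal sketch of the MM principle.
    """
    n, p = X.shape
    beta, intercept = np.zeros(p), 0.0
    for _ in range(max_iter):
        r = y - X @ beta - intercept           # residuals at the current iterate
        # Majorization weights from the tangent-line bound on the concave
        # function u -> 1 - exp(-u / (2*sigma^2)) at u = r^2; the constant
        # factor 1/(2*sigma^2) is absorbed into lam.
        w = np.exp(-r**2 / (2.0 * sigma**2))
        # Minimize the surrogate: weighted squared loss + unchanged L1 penalty.
        fit = Lasso(alpha=lam, fit_intercept=True).fit(X, y, sample_weight=w)
        converged = np.max(np.abs(fit.coef_ - beta)) < tol
        beta, intercept = fit.coef_, fit.intercept_
        if converged:
            break
    return beta, intercept
```

Because the surrogate touches the original penalized objective at the current iterate and lies above it elsewhere, each iteration does not increase the original objective; this monotone-descent property is the starting point for the convergence analysis the abstract refers to.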