Sparse Regression for Extreme Values
We study the problem of selecting features associated with extreme values in high-dimensional linear regression. Normally, in linear modeling problems, the presence of abnormal extreme values or outliers is considered an anomaly which should either be removed from the data or remedied using robust regression methods. In many situations, however, the extreme values in regression modeling are not outliers but rather the signals of interest; consider traces from spiking neurons, volatility in finance, or extreme events in climate science, for example. In this paper, we propose a new method for sparse high-dimensional linear regression for extreme values which is motivated by the Subbotin, or generalized normal, distribution. This leads us to utilize an ℓ_p norm loss where p is an even integer greater than two; we demonstrate that this loss increases the weight on extreme values. We prove consistency and variable selection consistency for the ℓ_p norm regression with a Lasso penalty, which we term the Extreme Lasso. Through simulation studies and real-world data examples, we show that this method outperforms other methods currently used in the literature for selecting features of interest associated with extreme values in high-dimensional regression.
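To make the objective concrete, the abstract describes minimizing an ℓ_p residual loss (p an even integer greater than two) plus an ℓ_1 penalty on the coefficients. The following is a minimal sketch of that idea, not the paper's implementation: it solves (1/n)·Σ(yᵢ − xᵢᵀβ)^p + λ‖β‖₁ with proximal gradient descent (ISTA). The function name `extreme_lasso`, the step size, and all tuning values are illustrative assumptions.

```python
import numpy as np

def extreme_lasso(X, y, p=4, lam=0.02, lr=2e-3, n_iter=20000):
    """Sketch of an l_p-loss Lasso: minimize
        (1/n) * sum((y - X @ beta)**p) + lam * ||beta||_1
    via proximal gradient descent (ISTA). p must be an even integer > 2,
    so residuals raised to the odd power p-1 keep their sign in the gradient.
    Hyperparameters here are illustrative, not from the paper."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iter):
        r = y - X @ beta
        # gradient of the l_p^p data-fit term; large residuals are
        # amplified by the power p-1, which is what puts extra weight
        # on extreme values relative to ordinary least squares (p=2)
        grad = -(p / n) * (X.T @ (r ** (p - 1)))
        beta = beta - lr * grad
        # soft-thresholding: proximal operator of the l1 penalty,
        # which produces exact zeros and hence feature selection
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)
    return beta
```

On synthetic data with a sparse true coefficient vector, this recovers the active features while zeroing out the rest; because the loss gradient grows like the cube of the residual (for p = 4), observations in the tails dominate the fit.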