Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization

05/04/2012
by Martin Slawski, et al.

Least squares fitting is in general not useful for high-dimensional linear models, in which the number of predictors is of the same or even larger order of magnitude than the number of samples. Theory developed in recent years has established a paradigm according to which sparsity-promoting regularization is regarded as a necessity in such settings. Deviating from this paradigm, we show that non-negativity constraints on the regression coefficients may be as effective as explicit regularization if the design matrix has additional properties, which are met in several applications of non-negative least squares (NNLS). We show that for these designs, the performance of NNLS with regard to prediction and estimation is comparable to that of the lasso. We argue further that in specific cases, NNLS may have a better ℓ_∞-rate in estimation and hence also advantages with respect to support recovery when combined with thresholding. From a practical point of view, NNLS does not depend on a regularization parameter and is hence easier to use.
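The approach sketched in the abstract can be illustrated with a small simulation: fit plain NNLS on a p > n problem whose design matrix has non-negative entries (one of the design classes the paper argues is favorable), then hard-threshold the estimate to recover the support. The problem sizes, noise level, and threshold below are illustrative choices, not values from the paper.

```python
# Minimal sketch: sparse recovery via unregularized NNLS plus thresholding,
# assuming a non-negative random design (entries |N(0,1)|) and a sparse
# non-negative signal. All constants here are illustrative assumptions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, p, s = 100, 300, 5                        # samples, predictors, sparsity
A = np.abs(rng.standard_normal((n, p)))      # non-negative design matrix
x_true = np.zeros(p)
x_true[:s] = rng.uniform(1.0, 2.0, size=s)   # non-negative signal on {0,...,4}
y = A @ x_true + 0.01 * rng.standard_normal(n)

# NNLS: minimize ||Ax - y||_2 subject to x >= 0.
# Note there is no regularization parameter to tune.
x_hat, rnorm = nnls(A, y)

# Hard-threshold small coefficients to read off the estimated support.
support = np.flatnonzero(x_hat > 0.5)
print(sorted(support))
```

Compared with the lasso, the only tuning decision left is the threshold used for support recovery; the fit itself is parameter-free.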

Related research

- 03/12/2013 · Recovering Non-negative and Combined Sparse Representations: "The non-negative solution to an underdetermined linear system can be uni..."
- 01/17/2019 · Sparse Non-Negative Recovery from Biased Subgaussian Measurements using NNLS: "We investigate non-negative least squares (NNLS) for the recovery of spa..."
- 03/21/2016 · A Discontinuous Neural Network for Non-Negative Sparse Approximation: "This paper investigates a discontinuous neural network which is used as ..."
- 06/07/2015 · No penalty no tears: Least squares in high-dimensional linear models: "Ordinary least squares (OLS) is the default method for fitting linear mo..."
- 12/17/2014 · Support recovery without incoherence: A case for nonconvex regularization: "We demonstrate that the primal-dual witness proof method may be used to ..."
- 04/26/2014 · Estimation of positive definite M-matrices and structure learning for attractive Gaussian Markov Random fields: "Consider a random vector with finite second moments. If its precision ma..."
- 04/08/2021 · A New Perspective on Debiasing Linear Regressions: "In this paper, we propose an abstract procedure for debiasing constraine..."
