The Impact of Regularization on High-dimensional Logistic Regression
Logistic regression is commonly used for modeling dichotomous outcomes. In the classical setting, where the number of observations is much larger than the number of parameters, the properties of the maximum likelihood estimator in logistic regression are well understood. Recently, Sur and Candès have studied logistic regression in the high-dimensional regime, where the number of observations and parameters are comparable, and showed, among other things, that the maximum likelihood estimator is biased. In the high-dimensional regime the underlying parameter vector is often structured (sparse, block-sparse, finite-alphabet, etc.), and so in this paper we study regularized logistic regression (RLR), where a convex regularizer that encourages the desired structure is added to the negative of the log-likelihood function. An advantage of RLR is that it allows parameter recovery even for instances where the (unconstrained) maximum likelihood estimate does not exist. We provide a precise analysis of the performance of RLR via the solution of a system of six nonlinear equations, through which any performance metric of interest (mean, mean-squared error, probability of support recovery, etc.) can be explicitly computed. Our results generalize those of Sur and Candès, and we provide a detailed study of the cases of ℓ_2^2-RLR and sparse (ℓ_1-regularized) logistic regression. In both cases, we obtain explicit expressions for various performance metrics and can find the values of the regularizer parameter that optimize the desired performance. The theory is validated by extensive numerical simulations across a range of parameter values and problem instances.
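As a concrete illustration of the sparse (ℓ_1-regularized) setting discussed in the abstract, the following is a minimal sketch of proximal gradient descent (ISTA) for ℓ_1-regularized logistic regression. The function names, step size, and synthetic data are illustrative assumptions, not the paper's method or its exact analysis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l1_logistic_regression(X, y, lam, lr=0.1, n_iter=2000):
    """Proximal gradient (ISTA) sketch for l1-regularized logistic regression.

    Minimizes (1/n) * sum_i log(1 + exp(-y_i * x_i^T beta)) + lam * ||beta||_1,
    with labels y in {-1, +1}.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        # Gradient of the (average) negative log-likelihood at beta
        margins = y * (X @ beta)
        grad = -(X.T @ (y * sigmoid(-margins))) / n
        # Gradient step, then soft-thresholding (the prox of the l1 penalty);
        # the threshold produces exact zeros, encouraging sparsity
        z = beta - lr * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)
    return beta

# Illustrative high-dimensional synthetic example with a sparse ground truth
np.random.seed(0)
n, p = 300, 40
beta_true = np.zeros(p)
beta_true[:3] = 2.0
X = np.random.randn(n, p)
y = np.sign(X @ beta_true + 0.1 * np.random.randn(n))
beta_hat = l1_logistic_regression(X, y, lam=0.1)
```

The soft-thresholding step sets small coefficients exactly to zero, which is why the ℓ_1 penalty can recover sparse parameter vectors even when, as the abstract notes, the unconstrained maximum likelihood estimate may not exist on separable data.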