To tune or not to tune, a case study of ridge logistic regression in small or sparse datasets

by Hana Šinkovec et al.

For finite samples with binary outcomes, penalized logistic regression such as ridge logistic regression (RR) can achieve smaller mean squared errors (MSE) of coefficients and predictions than maximum likelihood estimation. There is evidence, however, that RR is sensitive to small or sparse data situations, yielding poor performance in individual datasets. In this paper, we elaborate on this issue by performing a comprehensive simulation study, comparing the performance of RR with that of Firth's correction, which has been shown to perform well in low-dimensional settings. The performance of RR strongly depends on the choice of the complexity parameter, which is usually tuned by minimizing some measure of out-of-sample prediction error or an information criterion. Alternatively, it may be determined according to prior assumptions about the true effects. As shown in our simulation and illustrated by a data example, complexity parameter values optimized in small or sparse datasets are negatively correlated with the optimal values and suffer from substantial variability, which translates into large MSE of coefficients and large variability of calibration slopes. In contrast, if the degree of shrinkage is pre-specified, accurate coefficients and predictions can be obtained even in non-ideal settings such as those encountered with rare outcomes or sparse predictors.
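The contrast the abstract draws, between a complexity parameter tuned by out-of-sample prediction error and one fixed in advance, can be sketched with scikit-learn's ridge-penalized logistic regression. This is a minimal illustration on simulated data, not the paper's study design; the sample size, number of predictors, and the fixed penalty `C=1.0` are arbitrary choices for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

# Simulate a small binary-outcome dataset (hypothetical, not the paper's data).
rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0] + [0.0] * (p - 2))  # two true effects
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta)))

# Tuned ridge: complexity parameter chosen by minimizing the
# cross-validated deviance (negative log-loss) over a grid of C values.
tuned = LogisticRegressionCV(
    Cs=20, cv=5, penalty="l2", scoring="neg_log_loss", max_iter=1000
).fit(X, y)

# Pre-specified shrinkage: the penalty is fixed before seeing the data,
# e.g. from prior assumptions about plausible effect sizes.
fixed = LogisticRegression(C=1.0, penalty="l2", max_iter=1000).fit(X, y)

print("tuned C:", tuned.C_[0])
print("fixed-C coefficients:", fixed.coef_.round(2))
```

In small samples the cross-validated `C_` can vary widely from one dataset to the next, which is the instability the paper documents; the fixed-penalty fit sidesteps that variability at the cost of requiring a defensible prior choice.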







Related papers:

- Tuning in ridge logistic regression to solve separation: Separation in logistic regression is a common problem causing failure of...
- On the variability of regression shrinkage methods for clinical prediction models: simulation study on predictive performance: When developing risk prediction models, shrinkage methods are recommende...
- Firth's logistic regression with rare events: accurate effect estimates AND predictions?: Firth-type logistic regression has become a standard approach for the an...
- On resampling methods for model assessment in penalized and unpenalized logistic regression: Penalized logistic regression methods are frequently used to investigate...
- A modern maximum-likelihood theory for high-dimensional logistic regression: Every student in statistics or data science learns early on that when th...
- Sparse network asymptotics for logistic regression: Consider a bipartite network where N consumers choose to buy or not to b...
- Combining Predictions of Auto Insurance Claims: This paper aims at achieving better performance of prediction by combini...