Condition Number Analysis of Logistic Regression, and its Implications for Standard First-Order Solution Methods

10/20/2018
by Robert M. Freund et al.

Logistic regression is one of the most popular methods for binary classification. Model parameters are estimated by solving the maximum likelihood (ML) optimization problem, and the ML estimator is defined to be an optimal solution of this problem. It is well known that the ML estimator exists when the data is non-separable, but fails to exist when the data is separable. First-order methods are the algorithms of choice for solving large-scale instances of the logistic regression problem.

In this paper, we introduce a pair of condition numbers that measure the degree of non-separability or separability of a given dataset in the setting of binary classification, and we study how these condition numbers inform the properties and convergence guarantees of first-order methods. When the training data is non-separable, we show that the degree of non-separability naturally enters the analysis of two standard first-order methods: steepest descent (for any given norm) and stochastic gradient descent. Expanding on the work of Bach, we also show how the degree of non-separability enters into the analysis of the linear convergence of steepest descent (without requiring strong convexity), as well as the adaptive convergence of stochastic gradient descent.

When the training data is separable, first-order methods curiously enjoy good empirical success, a phenomenon that is not well understood in theory. In this case, we demonstrate how the degree of separability enters into the analysis of ℓ_2 steepest descent and stochastic gradient descent for delivering approximate-maximum-margin solutions with associated computational guarantees. This suggests that first-order methods can lead to statistically meaningful solutions in the separable case, even though the ML estimator does not exist.
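For concreteness, the ML problem referenced above is the minimization of the empirical logistic loss: given data (x_i, y_i) with labels y_i ∈ {−1, +1}, one solves

    min_β  L_n(β) = (1/n) ∑_{i=1}^n log(1 + exp(−y_i βᵀx_i)),

whose optimal solution exists exactly when the data is non-separable, as noted above.

The following is a minimal sketch of plain gradient descent (ℓ_2 steepest descent) on this objective, written in NumPy; the function names, step size, and toy data are illustrative assumptions and are not taken from the paper:

    import numpy as np

    def logistic_loss_grad(beta, X, y):
        # Margins m_i = y_i * x_i^T beta; loss = mean(log(1 + exp(-m_i))).
        m = y * (X @ beta)
        loss = np.mean(np.log1p(np.exp(-m)))
        # Gradient: -(1/n) * sum_i y_i * sigmoid(-m_i) * x_i.
        coef = -y / (1.0 + np.exp(m))
        return loss, X.T @ coef / len(y)

    def gradient_descent(X, y, step=1.0, iters=2000):
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            _, g = logistic_loss_grad(beta, X, y)
            beta -= step * g
        return beta

    # Toy separable data: the loss tends to 0 but its infimum is not attained,
    # so ||beta|| grows; the normalized iterate approaches a max-margin direction.
    X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    beta = gradient_descent(X, y)
    print(beta / np.linalg.norm(beta))

On non-separable data the same iteration converges to the ML estimator; on separable data the iterates diverge in norm, which is why the margin-based guarantees discussed above are the natural way to assess what the method delivers.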



Related research

06/26/2023 · Gradient Descent Converges Linearly for Logistic Regression on Separable Data
06/12/2018 · Convergence of SGD in Learning ReLU Models with Separable Data
03/20/2018 · Risk and parameter convergence of logistic regression
03/23/2020 · A termination criterion for stochastic gradient descent for binary classification
11/13/2019 · A Model of Double Descent for High-dimensional Binary Linear Classification
08/15/2021 · Implicit Regularization of Bregman Proximal Point Algorithm and Mirror Descent on Separable Data
06/26/2015 · Convolutional networks and learning invariant to homogeneous multiplicative scalings
