Convergence of Gradient Descent on Separable Data

03/05/2018
by Mor Shpigel Nacson, et al.

The implicit bias of gradient descent is not fully understood even in simple linear classification tasks (e.g., logistic regression). Soudry et al. (2018) studied this bias on separable data, where multiple solutions correctly classify the data. They found that, when gradient descent optimizes monotonically decreasing loss functions with exponential tails, the linear classifier specified by the gradient descent iterates converges in direction to the L_2 max-margin separator. However, the convergence rate to the maximum-margin solution with a fixed step size was found to be extremely slow: 1/log(t). Here we examine how the convergence is influenced by the choice of loss function and by variable step sizes. First, we calculate the convergence rate for loss functions with poly-exponential tails of the form exp(-u^ν). We prove that ν=1 yields the optimal convergence rate in the range ν>0.25. Based on further analysis, we conjecture that this remains the optimal rate for ν≤0.25, and even for sub-poly-exponential tails, until loss functions with polynomial tails no longer converge to the max margin. Second, we prove that the convergence rate can be improved to log(t)/√t for the exponential loss by using aggressive step sizes, which compensate for the rapidly vanishing gradients.
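The directional convergence described above can be observed numerically. The following is a minimal sketch, not code from the paper: it runs fixed-step gradient descent on the logistic loss over a hypothetical 2D separable dataset whose symmetry makes the L_2 max-margin direction (1,1)/√2, and checks that the normalized iterate approaches that direction.

```python
import numpy as np

# Hypothetical linearly separable 2D dataset (for illustration only).
# Labels are folded into the inputs: rows of Z are y_i * x_i, so a
# classifier w separates the data correctly iff Z @ w > 0 componentwise.
Z = np.array([[2.0, 1.0],
              [1.5, 2.0],
              [1.0, 2.0],
              [2.0, 1.5]])

def grad(w):
    """Gradient of the logistic loss L(w) = sum_i log(1 + exp(-z_i . w))."""
    s = 1.0 / (1.0 + np.exp(Z @ w))  # sigmoid(-z_i . w)
    return -Z.T @ s

# Plain gradient descent with a fixed step size.
w = np.zeros(2)
eta = 0.1
for _ in range(100_000):
    w -= eta * grad(w)

# The iterates diverge in norm, but their direction converges (at the
# slow 1/log(t) rate discussed above) to the L2 max-margin separator,
# which for this symmetric dataset is (1, 1)/sqrt(2).
direction = w / np.linalg.norm(w)
max_margin_dir = np.array([1.0, 1.0]) / np.sqrt(2)
print(direction @ max_margin_dir)  # cosine similarity, close to 1
```

Swapping in a loss with a heavier (e.g., polynomial) tail, or rescaling the step size to counter the vanishing gradients, changes this convergence behavior, which is what the analysis above quantifies.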


