Fast Margin Maximization via Dual Acceleration

07/01/2021
by Ziwei Ji, et al.

We present and analyze a momentum-based gradient method for training linear classifiers with an exponentially-tailed loss (e.g., the exponential or logistic loss), which maximizes the classification margin on separable data at a rate of 𝒪(1/t^2). This contrasts with a rate of 𝒪(1/log(t)) for standard gradient descent and 𝒪(1/t) for normalized gradient descent. The momentum-based method is derived via the convex dual of the maximum-margin problem, specifically by applying Nesterov acceleration to this dual, which yields a simple and intuitive method in the primal. This dual view can also be used to derive a stochastic variant, which performs adaptive non-uniform sampling via the dual variables.
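
For context, the maximum-margin problem and a standard form of its convex dual (the pairing the abstract refers to) can be written as below; this is the textbook hard-margin formulation, and the paper's derivation may work with a smoothed or regularized variant of it. Here Δ_n denotes the probability simplex, and the dual variables q live on it.

```latex
% Hard-margin problem (left) and a standard convex dual (right); \Delta_n is the
% probability simplex. The paper's derivation may use a smoothed variant of this pairing.
\max_{\|w\|_2 \le 1} \; \min_{1 \le i \le n} \, y_i \langle w, x_i \rangle
\;=\;
\min_{q \in \Delta_n} \Big\| \textstyle\sum_{i=1}^{n} q_i \, y_i x_i \Big\|_2
```

As a rough illustration of the quantities discussed in the abstract, the sketch below runs plain Nesterov-style momentum on the exponential loss in the primal and tracks the normalized margin on synthetic separable data. It is a toy under stated assumptions, not the paper's dual-accelerated algorithm or its stochastic variant: the data, the step size eta, and the momentum schedule are all chosen ad hoc for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable data (an assumption made for this sketch).
n, d = 200, 5
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)

def exp_loss_grad(w):
    """Gradient of the averaged exponential loss (1/n) * sum_i exp(-y_i <w, x_i>)."""
    margins = y * (X @ w)
    weights = np.exp(-margins)          # per-example weights, analogous to dual variables
    return -(X.T @ (weights * y)) / n

def normalized_margin(w):
    """min_i y_i <w, x_i> / ||w||, the quantity the stated rates refer to."""
    norm = np.linalg.norm(w)
    return float(np.min(y * (X @ w)) / norm) if norm > 0 else 0.0

# Plain Nesterov-style momentum in the primal (illustrative only).
w = np.zeros(d)
w_prev = np.zeros(d)
eta = 1.0                               # step size chosen ad hoc, not as in the paper
for t in range(1, 1001):
    beta = (t - 1) / (t + 2)            # standard Nesterov momentum schedule
    lookahead = w + beta * (w - w_prev)
    w_prev = w
    w = lookahead - eta * exp_loss_grad(lookahead)
    if t % 200 == 0:
        print(f"t={t:4d}  normalized margin = {normalized_margin(w):.4f}")
```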
