Speeding Up Budgeted Stochastic Gradient Descent SVM Training with Precomputed Golden Section Search

06/26/2018
by   Tobias Glasmachers, et al.

Limiting the model size of a kernel support vector machine to a pre-defined budget is a well-established technique that allows scaling SVM learning and prediction to large-scale data. Its core addition to simple stochastic gradient training is budget maintenance through merging of support vectors. This requires solving an inner optimization problem with an iterative method many times per gradient step. In this paper we replace the iterative procedure with a fast lookup. We manage to reduce the merging time by up to 65% and the total training time by 44%.
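The inner problem mentioned in the abstract is a one-dimensional search over the mixing coefficient of the two support vectors being merged. The sketch below is not code from the paper: the stand-in objective merge_objective, the helpers golden_section_search and lookup_h, and the grid resolution are illustrative assumptions. It shows the general idea of precomputation: run golden section search offline over a grid of the two scalars the optimum depends on, then answer each merge during training with a constant-time table lookup.

```python
import numpy as np

def golden_section_search(f, a=0.0, b=1.0, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden section search."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:          # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

def merge_objective(h, m, kappa):
    """Stand-in for the one-dimensional merge objective (assumption, not the
    paper's exact expression): the optimal mixing coefficient h of two merged
    support vectors depends only on the kernel value kappa between them and
    the coefficient ratio m. Negated because golden_section_search minimizes."""
    return -(m * kappa ** ((1.0 - h) ** 2) + (1.0 - m) * kappa ** (h ** 2))

# Precompute the optimal h once, offline, on a coarse (m, kappa) grid ...
M = np.linspace(0.0, 1.0, 101)
K = np.linspace(1e-6, 1.0, 101)
TABLE = np.array([[golden_section_search(lambda h: merge_objective(h, m, k))
                   for k in K] for m in M])

def lookup_h(m, kappa):
    """... and replace each per-merge golden section search by a table lookup."""
    i = int(np.clip(round(m * (len(M) - 1)), 0, len(M) - 1))
    j = int(np.clip(round((kappa - K[0]) / (K[-1] - K[0]) * (len(K) - 1)),
                    0, len(K) - 1))
    return TABLE[i, j]

if __name__ == "__main__":
    m, kappa = 0.3, 0.7
    exact = golden_section_search(lambda h: merge_objective(h, m, kappa))
    print(exact, lookup_h(m, kappa))  # the lookup approximates the exact optimum
```

Since every merge during training only needs h to modest precision, trading the iterative search for a bounded interpolation error in the table is what yields the reported speed-up.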


Related research

06/26/2018  Multi-Merge Budget Maintenance for Stochastic Gradient Descent SVM Training
Budgeted Stochastic Gradient Descent (BSGD) is a state-of-the-art techni...

06/26/2018  Dual SVM Training on a Budget
We present a dual subspace ascent algorithm for support vector machine t...

05/11/2020  A Relational Gradient Descent Algorithm For Support Vector Machine Training
We consider gradient descent like algorithms for Support Vector Machine ...

12/31/2016  Very Fast Kernel SVM under Budget Constraints
In this paper we propose a fast online Kernel SVM algorithm under tight ...

05/03/2019  Performance Optimization on Model Synchronization in Parallel Stochastic Gradient Descent Based SVM
Understanding the bottlenecks in implementing stochastic gradient descen...

08/08/2022  Fast Offline Policy Optimization for Large Scale Recommendation
Personalised interactive systems such as recommender systems require sel...

04/21/2018  Stability of the Stochastic Gradient Method for an Approximated Large Scale Kernel Machine
In this paper we measured the stability of stochastic gradient method (S...
