Fast model selection by limiting SVM training times

02/10/2016
by Aydin Demircioglu et al.

Kernelized Support Vector Machines (SVMs) are among the best performing supervised learning methods. For optimal predictive performance, however, time-consuming parameter tuning is crucial, which impedes their practical application. To tackle this problem, the classic model selection procedure based on grid search and cross-validation has been refined, e.g. by data subsampling and direct search heuristics. Here we focus on a different aspect: the stopping criterion for SVM training. We show that by limiting the training time given to the SVM solver during parameter tuning, we can reduce model selection times by an order of magnitude.
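The core idea, budgeting solver effort per candidate during tuning and only training the winning configuration to convergence, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses scikit-learn, and `max_iter` (a cap on solver iterations) stands in for the wall-clock training-time limit discussed in the abstract.

```python
# Sketch of budgeted SVM model selection (assumes scikit-learn).
# During grid search, every candidate SVM is capped at a small
# iteration budget; only the best configuration is retrained fully.
import warnings

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

param_grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}

# Budgeted search: at most 200 solver iterations per training run,
# so no single hyperparameter candidate can dominate tuning time.
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # capped runs emit ConvergenceWarning
    search = GridSearchCV(SVC(kernel="rbf", max_iter=200), param_grid, cv=3)
    search.fit(X, y)

# Final model: retrain the selected parameters without the budget
# (max_iter=-1 means no iteration limit in scikit-learn's SVC).
final_model = SVC(kernel="rbf", max_iter=-1, **search.best_params_).fit(X, y)
print("selected:", search.best_params_)
```

The design point is that early stopping only has to rank candidates correctly, not produce fully converged models, which is why the budget can be made aggressive during tuning.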

Related research

11/03/2021 · Heuristical choice of SVM parameters
Support Vector Machine (SVM) is one of the most popular classification m...

12/24/2021 · Optimal Model Averaging of Support Vector Machines in Diverging Model Spaces
Support vector machine (SVM) is a powerful classification method that ha...

03/11/2022 · Research on Parallel SVM Algorithm Based on Cascade SVM
Cascade SVM (CSVM) can group datasets and train subsets in parallel, whi...

05/17/2023 · Separability and Scatteredness (S&S) Ratio-Based Efficient SVM Regularization Parameter, Kernel, and Kernel Parameter Selection
Support Vector Machine (SVM) is a robust machine learning algorithm with...

04/26/2019 · A Novel Orthogonal Direction Mesh Adaptive Direct Search Approach for SVM Hyperparameter Tuning
In this paper, we propose the use of a black-box optimization method cal...

08/23/2018 · Multiclass Universum SVM
We introduce Universum learning for multiclass problems and propose a no...

03/19/2020 · Faster SVM Training via Conjugate SMO
We propose an improved version of the SMO algorithm for training classif...
