Low-dimensional Interpretable Kernels with Conic Discriminant Functions for Classification

07/17/2020
by   Gurhan Ceylan, et al.

Kernels are often developed and used as implicit mapping functions whose high-dimensional feature space representations yield impressive predictive power. In this study, we gradually construct a series of simple feature maps that lead to a collection of interpretable low-dimensional kernels. At each step, we retain the original features and ensure that the increase in the dimension of the input data is extremely small, so that the resulting discriminant functions remain interpretable and amenable to fast training. Despite our emphasis on interpretability, we obtain highly accurate results even without in-depth hyperparameter tuning. A comparison against several well-known kernels on benchmark datasets shows that the proposed kernels are competitive in prediction accuracy, while their training times are significantly lower than those of state-of-the-art kernel implementations.
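The abstract describes explicit feature maps that keep the original features and add only a few extra dimensions, so that a linear discriminant in the lifted space stays interpretable. As a minimal sketch of this idea (not the paper's actual construction), one can append a single extra feature, the ℓ1 distance to a reference point, which lifts a d-dimensional input to d+1 dimensions and makes a conic (ℓ1-ball) decision boundary expressible by a linear function; the center point, weights, and threshold below are illustrative assumptions:

```python
import numpy as np

def conic_features(X, center):
    """Lift d-dimensional inputs to d+1 dimensions by appending
    the l1 distance to a reference point (an illustrative map)."""
    D = X - center
    return np.hstack([D, np.abs(D).sum(axis=1, keepdims=True)])

# Toy data: two points near the origin, two far away.
X = np.array([[0.1, 0.0],
              [0.0, -0.2],
              [2.0, 0.0],
              [0.0, 2.5]])
center = np.zeros(2)
Phi = conic_features(X, center)  # shape (4, 3): original features + distance

# A conic discriminant g(x) = w.(x - c) + xi * ||x - c||_1 - gamma,
# which is linear in the lifted features. Parameters set by hand here;
# in practice they would be learned (e.g., by a linear SVM).
w, xi, gamma = np.zeros(2), 1.0, 1.0
scores = Phi @ np.concatenate([w, [xi]]) - gamma
labels = np.where(scores <= 0, 1, -1)  # inside the l1 ball -> class +1
print(labels)  # -> [ 1  1 -1 -1]
```

The key point the sketch illustrates: the lifted dimension grows by just one, yet the linear separator in the new space corresponds to a nonlinear (conic) boundary in the original space, and every coordinate of the learned weight vector retains a direct interpretation.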


