Scalable Kernel Learning via the Discriminant Information

09/23/2019
by   Mert Al, et al.

Kernel approximation methods have been popular techniques for scalable kernel-based learning. They create explicit, low-dimensional kernel feature maps to deal with the high computational and memory complexity of standard techniques. This work studies a supervised kernel learning methodology to optimize such mappings. We utilize the Discriminant Information criterion, a measure of class separability, which is extended to cover a wider range of kernels. By exploiting the connection of this criterion to the minimum Kernel Ridge Regression loss, we propose a novel training strategy that is especially suitable for stochastic gradient methods, allowing kernel optimization to scale to large datasets. Experimental results on 3 datasets show that our techniques can improve optimization and generalization performance over state-of-the-art kernel learning methods.


Related research

- Supervising Nyström Methods via Negative Margin Support Vector Selection (05/10/2018)
- Learning the kernel matrix via predictive low-rank approximations (01/17/2016)
- Efficient online learning with kernels for adversarial large scale problems (02/26/2019)
- Random Feature Maps for Dot Product Kernels (01/31/2012)
- Low-dimensional Interpretable Kernels with Conic Discriminant Functions for Classification (07/17/2020)
- Compact Nonlinear Maps and Circulant Extensions (03/12/2015)
- Quadrature-based features for kernel approximation (02/11/2018)
