Large Scale Kernel Learning using Block Coordinate Descent

02/17/2016
by Stephen Tu et al.

We demonstrate that distributed block coordinate descent can quickly solve kernel regression and classification problems with millions of data points. Armed with this capability, we conduct a thorough comparison between the full kernel, the Nyström method, and random features on three large classification tasks from various domains. Our results suggest that the Nyström method generally achieves better statistical accuracy than random features, but can require significantly more iterations of optimization. Lastly, we derive new rates for block coordinate descent which support our experimental findings when specialized to kernel methods.
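As a rough illustration of the optimization routine the abstract describes, the sketch below applies block coordinate descent to a kernel ridge regression problem: each step exactly minimizes the regularized quadratic objective over one randomly chosen block of dual coefficients while the rest stay fixed. This is a minimal single-machine sketch, not the authors' distributed implementation; the RBF kernel, regularization value, block size, and epoch count are illustrative assumptions rather than settings from the paper.

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Z.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Z**2, axis=1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * sq)

def block_coordinate_descent(K, y, lam=1e-3, block_size=256, epochs=20, seed=0):
    # Solves (K + lam*I) alpha = y, the optimality condition of
    #   f(alpha) = 0.5 * alpha^T (K + lam*I) alpha - y^T alpha,
    # by exact minimization over one random block of coordinates at a time.
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)
    Kalpha = np.zeros(n)                      # maintains K @ alpha
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n, block_size):
            B = perm[start:start + block_size]
            # Block residual: r_B = y_B - [(K + lam*I) alpha]_B
            r_B = y[B] - (Kalpha[B] + lam * alpha[B])
            # Exact block update: solve the |B| x |B| subsystem.
            K_BB = K[np.ix_(B, B)] + lam * np.eye(len(B))
            delta = np.linalg.solve(K_BB, r_B)
            alpha[B] += delta
            Kalpha += K[:, B] @ delta         # keep K @ alpha consistent
    return alpha

# Toy usage on synthetic data (illustrative only; tiny compared with the
# millions of points discussed in the paper).
X = np.random.randn(2000, 10)
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(2000)
K = rbf_kernel(X, X, gamma=0.5)
alpha = block_coordinate_descent(K, y)
predictions = K @ alpha

Because K + lam*I is positive definite, each exact block step can only decrease the objective, so the iterates converge to the kernel ridge regression solution; convergence rates of the kind mentioned in the abstract quantify how fast this happens as a function of the block size and the spectrum of K.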

Related research

03/04/2022 | Greedy double subspaces coordinate descent method via orthogonalization
The coordinate descent method is an effective iterative method for solvi...

09/30/2016 | A Primer on Coordinate Descent Algorithms
This monograph presents a class of algorithms called coordinate descent ...

12/22/2017 | Learning the Kernel for Classification and Regression
We investigate a series of learning kernel problems with polynomial comb...

09/14/2018 | Revisiting Random Binning Features: Fast Convergence and Strong Parallelizability
Kernel method has been developed as one of the standard approaches for n...

10/22/2020 | Model identification and local linear convergence of coordinate descent
For composite nonsmooth optimization problems, Forward-Backward algorith...

12/17/2012 | Feature Clustering for Accelerating Parallel Coordinate Descent
Large-scale L1-regularized loss minimization problems arise in high-dime...

03/04/2022 | Improved Pathwise Coordinate Descent for Power Penalties
Pathwise coordinate descent algorithms have been used to compute entire ...
