Kernel methods through the roof: handling billions of points efficiently

06/18/2020
by Giacomo Meanti et al.

Kernel methods provide an elegant and principled approach to nonparametric learning, but so far they could hardly be used on large-scale problems, since naïve implementations scale poorly with data size. Recent advances have shown the benefits of combining algorithmic ideas from optimization, numerical linear algebra, and random projections. Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware. To this end, we designed a preconditioned gradient solver for kernel methods that exploits both GPU acceleration and parallelization across multiple GPUs, implementing out-of-core variants of common linear algebra operations to guarantee optimal hardware utilization. Further, we optimize the numerical precision of different operations and maximize the efficiency of matrix-vector multiplications. As a result, we show experimentally dramatic speedups on datasets with billions of points, while still guaranteeing state-of-the-art performance. Additionally, we make our software available as an easy-to-use library.


