Structured Block Basis Factorization for Scalable Kernel Matrix Evaluation

05/03/2015
by Ruoxi Wang, et al.

Kernel matrices are popular in machine learning and scientific computing, but their quadratic cost in both construction and storage limits their scalability. It is well known that as one varies the kernel parameter, e.g., the width parameter in radial basis function kernels, the kernel matrix changes from a smooth low-rank matrix to a diagonally dominant and then fully diagonal one. Low-rank approximation methods have been widely studied, mostly for the first case, to reduce memory requirements and the cost of computing matrix-vector products. Here, we use ideas from scientific computing to extend these methods to situations where the matrix is not well approximated by a low-rank matrix. In particular, we construct an efficient block low-rank approximation method, which we call the Block Basis Factorization, and we show that it has O(n) complexity in both time and memory. Our method works for a wide range of kernel parameters, extending the domain of applicability of low-rank approximation methods, and our empirical results demonstrate both its stability (small standard deviation in error) and its superiority over current state-of-the-art kernel approximation algorithms.
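The abstract only names the structure, so the sketch below illustrates the kind of factorization being described: the points are partitioned into k clusters, each cluster gets its own low-rank column basis U_i, and every kernel block K_ij is compressed into a small inner matrix M_ij = U_i^T K_ij U_j, giving K ≈ U M U^T with U block-diagonal. This is a minimal illustration rather than the paper's algorithm: the clustering, the column sampling, and the names gaussian_kernel, bbf_factor, and bbf_matvec are assumptions made for the example, and for clarity the inner blocks are formed densely, which the paper's O(n) construction avoids.

```python
import numpy as np

def gaussian_kernel(X, Y, h):
    # Gaussian RBF kernel: K[i, j] = exp(-||x_i - y_j||^2 / (2 h^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * h * h))

def bbf_factor(X, h, k=4, r=10, seed=0):
    # Sketch of a block basis factorization K ~= U M U^T:
    # one rank-r basis per cluster, plus k*k small inner blocks.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Toy clustering: sort along the first coordinate and split evenly
    # (a stand-in for the geometric clustering a real implementation uses).
    blocks = np.array_split(np.argsort(X[:, 0]), k)
    # A shared random column sample reveals the range of each block row.
    cols = rng.choice(n, size=min(8 * r, n), replace=False)
    U = []
    for I in blocks:
        Q, _ = np.linalg.qr(gaussian_kernel(X[I], X[cols], h))
        U.append(Q[:, :r])  # orthonormal rank-r basis for cluster I
    # Inner blocks M_ij = U_i^T K(I_i, I_j) U_j; formed densely here for
    # clarity, whereas a linear-time construction would sample instead.
    M = [[U[i].T @ gaussian_kernel(X[blocks[i]], X[blocks[j]], h) @ U[j]
          for j in range(k)] for i in range(k)]
    return blocks, U, M

def bbf_matvec(blocks, U, M, v):
    # y_i = U_i * sum_j M_ij * (U_j^T v_j): O(n r + k^2 r^2) work,
    # versus O(n^2) for the dense kernel matrix-vector product.
    k = len(blocks)
    w = [U[j].T @ v[blocks[j]] for j in range(k)]
    y = np.zeros_like(v)
    for i in range(k):
        y[blocks[i]] = U[i] @ sum(M[i][j] @ w[j] for j in range(k))
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((2000, 3))
    v = rng.standard_normal(2000)
    blocks, U, M = bbf_factor(X, h=2.0, k=4, r=20)
    K = gaussian_kernel(X, X, 2.0)   # dense reference, O(n^2) memory
    y_exact = K @ v
    y_bbf = bbf_matvec(blocks, U, M, v)
    print("relative error:",
          np.linalg.norm(y_bbf - y_exact) / np.linalg.norm(y_exact))
```

Storing the factors takes roughly n·r + (k·r)^2 numbers instead of n^2, which, with r and k treated as constants, is where the linear memory footprint claimed in the abstract comes from.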

Related research

11/05/2017
Is Input Sparsity Time Possible for Kernel Low-Rank Approximation?
Low-rank approximation is a common tool used to accelerate kernel method...

06/23/2017
On the numerical rank of radial basis function kernels in high dimension
Low-rank approximations are popular methods to reduce the high computati...

08/02/2016
Hierarchically Compositional Kernels for Scalable Nonparametric Learning
We propose a novel class of kernels to alleviate the high computational ...

08/03/2022
Block Discrete Empirical Interpolation Methods
We present two block variants of the discrete empirical interpolation me...

04/22/2022
Paramixer: Parameterizing Mixing Links in Sparse Factors Works Better than Dot-Product Self-Attention
Self-Attention is a widely used building block in neural modeling to mix...

02/08/2022
Linear Time Kernel Matrix Approximation via Hyperspherical Harmonics
We propose a new technique for constructing low-rank approximations of m...
