Bolt: Accelerated Data Mining with Fast Vector Compression

06/30/2017
by Davis W. Blalock, et al.

Vectors of data are at the heart of machine learning and data mining. Recently, vector quantization methods have shown great promise in reducing both the time and space costs of operating on vectors. We introduce a vector quantization algorithm that can compress vectors over 12x faster than existing techniques while also accelerating approximate vector operations such as distance and dot product computations by up to 10x. Because it can encode over 2GB of vectors per second, it makes vector quantization cheap enough to employ in many more circumstances. For example, using our technique to compute approximate dot products in a nested loop can multiply matrices faster than a state-of-the-art BLAS implementation, even when our algorithm must first compress the matrices. In addition to showing the above speedups, we demonstrate that our approach can accelerate nearest neighbor search and maximum inner product search by over 100x compared to floating point operations and up to 10x compared to other vector quantization methods. Our approximate Euclidean distance and dot product computations are not only faster than those of related algorithms with slower encodings, but also faster than Hamming distance computations, which have direct hardware support on the tested platforms. We also assess the errors of our algorithm's approximate distances and dot products, and find that it is competitive with existing, slower vector quantization algorithms.
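As a rough illustration of the idea the abstract builds on, the sketch below shows a product-quantization-style encoder and lookup-table-based approximate dot products: vectors are split into subvectors, each subvector is replaced by the index of its nearest learned centroid, and a query's dot products against all encoded vectors reduce to table lookups and additions. The function names, parameters, and toy k-means here are illustrative assumptions, not Bolt's actual algorithm or API.

```python
# Illustrative sketch only: a product-quantization-style encoder and
# lookup-table-based approximate dot products, the general idea Bolt builds on.
# All names, parameters, and the toy k-means are assumptions for illustration,
# not Bolt's actual algorithm or API.
import numpy as np


def train_codebooks(X, n_subspaces=4, n_centroids=16, n_iters=10):
    """Split each vector into subvectors and learn per-subspace centroids
    with a few rounds of plain k-means (Lloyd iterations)."""
    d = X.shape[1]
    sub_len = d // n_subspaces
    codebooks = []
    for s in range(n_subspaces):
        sub = X[:, s * sub_len:(s + 1) * sub_len]
        idx = np.random.choice(len(sub), n_centroids, replace=False)
        centroids = sub[idx].astype(np.float32)
        for _ in range(n_iters):
            dists = ((sub[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            assign = dists.argmin(axis=1)
            for k in range(n_centroids):
                members = sub[assign == k]
                if len(members):
                    centroids[k] = members.mean(axis=0)
        codebooks.append(centroids)
    return codebooks


def encode(X, codebooks):
    """Replace each subvector with the index of its nearest centroid,
    giving one byte per subspace when n_centroids <= 256."""
    sub_len = X.shape[1] // len(codebooks)
    codes = np.empty((len(X), len(codebooks)), dtype=np.uint8)
    for s, centroids in enumerate(codebooks):
        sub = X[:, s * sub_len:(s + 1) * sub_len]
        dists = ((sub[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        codes[:, s] = dists.argmin(axis=1)
    return codes


def approx_dots(query, codes, codebooks):
    """Approximate dot products of one query against all encoded vectors:
    precompute an (n_subspaces x n_centroids) table of partial dot products,
    then sum table lookups instead of doing float multiplies per vector."""
    sub_len = len(query) // len(codebooks)
    luts = np.stack([
        centroids @ query[s * sub_len:(s + 1) * sub_len]
        for s, centroids in enumerate(codebooks)
    ])
    return sum(luts[s, codes[:, s]] for s in range(len(codebooks)))


# Toy usage: encode a matrix once, then approximate X @ q with lookups.
X = np.random.randn(1000, 64).astype(np.float32)
q = np.random.randn(64).astype(np.float32)
books = train_codebooks(X)
codes = encode(X, books)
approx = approx_dots(q, codes, books)  # roughly X @ q
```

Running one such lookup-table pass per column of a second matrix is, in spirit, the "approximate dot products in a nested loop" strategy the abstract describes for matrix multiplication, with the encoding cost amortized over all the dot products that reuse the same codes.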


Related research

08/05/2020  Fast top-K Cosine Similarity Search through XOR-Friendly Binary Quantization on GPUs
We explore the use of GPU for accelerating large scale nearest neighbor ...

01/04/2007  On the use of self-organizing maps to accelerate vector quantization
Self-organizing maps (SOM) are widely used for their topology preservati...

11/12/2019  Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner Product Search
Vector quantization (VQ) techniques are widely used in similarity search...

09/07/2016  Polysemous codes
This paper considers the problem of approximate nearest neighbor search ...

01/06/2010  Accelerating Competitive Learning Graph Quantization
Vector quantization (VQ) is a lossy data compression technique from signa...

02/08/2022  Orthogonal Matrices for MBAT Vector Symbolic Architectures, and a "Soft" VSA Representation for JSON
Vector Symbolic Architectures (VSAs) give a way to represent a complex o...

06/21/2021  Multiplying Matrices Without Multiplying
Multiplying matrices is among the most fundamental and compute-intensive...
