Stacked Quantizers for Compositional Vector Compression

11/08/2014
by Julieta Martinez, et al.

Recently, Babenko and Lempitsky introduced Additive Quantization (AQ), a generalization of Product Quantization (PQ) where a non-independent set of codebooks is used to compress vectors into small binary codes. Unfortunately, under this scheme encoding cannot be done independently in each codebook, and optimal encoding is an NP-hard problem. In this paper, we observe that PQ and AQ are both compositional quantizers that lie on the extremes of the codebook dependence-independence assumption, and explore an intermediate approach that exploits a hierarchical structure in the codebooks. This results in a method that achieves quantization error on par with or lower than AQ, while being several orders of magnitude faster. We perform a complexity analysis of PQ, AQ and our method, and evaluate our approach on standard benchmarks of SIFT and GIST descriptors, as well as on new datasets of features obtained from state-of-the-art convolutional neural networks.
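The intermediate approach described in the abstract exploits a hierarchical codebook structure so that encoding can proceed greedily, one codebook at a time, rather than solving a joint (NP-hard) assignment as in AQ. The sketch below illustrates that idea as sequential residual quantization; it is an assumption-laden illustration of the general technique, not the authors' exact encoding or codebook-learning algorithm, and codebook training is omitted entirely.

```python
import numpy as np

def stacked_encode(x, codebooks):
    # Greedy hierarchical encoding: quantize the current residual with
    # each codebook in turn, then subtract the chosen codeword.
    # Each codebook C is a (k, d) array of k codewords.
    codes = []
    residual = np.asarray(x, dtype=float).copy()
    for C in codebooks:
        dists = np.linalg.norm(C - residual, axis=1)  # distance to every codeword
        j = int(np.argmin(dists))                     # nearest codeword index
        codes.append(j)
        residual = residual - C[j]
    return codes

def stacked_decode(codes, codebooks):
    # Compositional reconstruction: the approximation is the SUM of the
    # selected codewords, one per codebook (as in AQ, unlike PQ's
    # concatenation of sub-vectors).
    return sum(C[j] for C, j in zip(codebooks, codes))
```

Because each step only searches one codebook of size k, encoding costs O(mk) distance computations for m codebooks, versus the combinatorial search that exact AQ encoding would require.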


