In Defense of Product Quantization

11/23/2017
by Benjamin Klein et al.

Despite their widespread adoption, Product Quantization techniques were recently shown to be inferior to other hashing techniques. In this work, we present an improved Deep Product Quantization (DPQ) technique that leads to more accurate retrieval and classification than the latest state-of-the-art methods, while having a computational complexity and memory footprint similar to those of the Product Quantization method. To our knowledge, this is the first work to introduce a representation that is inspired by Product Quantization and learned end-to-end, thus benefiting from the supervised signal. DPQ explicitly learns soft and hard representations, using a straight-through estimator, to enable efficient and accurate asymmetric search. A novel loss function, the Joint Central Loss, is introduced, which both improves retrieval performance and decreases the discrepancy between the soft and hard representations. Finally, by using a normalization technique, we improve results for cross-domain category retrieval.
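DPQ builds on classical Product Quantization, in which a vector is split into M subvectors, each quantized against its own small codebook, and queries are compared to compressed database items via asymmetric distance (uncompressed query against quantized item). A minimal NumPy sketch of this classical baseline is below; the function names and parameters are illustrative, not taken from the paper, and the codebooks here are trained with plain k-means rather than learned end-to-end as DPQ proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_pq_codebooks(data, M, K, iters=20):
    """Train one K-word codebook per subspace via simple k-means.
    data: (N, D) array with D divisible by M."""
    N, D = data.shape
    d = D // M
    codebooks = []
    for m in range(M):
        sub = data[:, m * d:(m + 1) * d]
        # Initialize centroids from random distinct samples.
        cent = sub[rng.choice(N, K, replace=False)]
        for _ in range(iters):
            dists = ((sub[:, None, :] - cent[None]) ** 2).sum(-1)
            assign = dists.argmin(1)
            for k in range(K):
                pts = sub[assign == k]
                if len(pts):  # skip empty clusters
                    cent[k] = pts.mean(0)
        codebooks.append(cent)
    return codebooks

def encode(x, codebooks):
    """Hard code for x: index of the nearest centroid in each subspace."""
    d = codebooks[0].shape[1]
    return [int(((cb - x[m * d:(m + 1) * d]) ** 2).sum(1).argmin())
            for m, cb in enumerate(codebooks)]

def asymmetric_dist(query, codes, codebooks):
    """Asymmetric squared distance: uncompressed query vs. quantized item,
    summing per-subspace query-to-chosen-centroid distances."""
    d = codebooks[0].shape[1]
    return sum(((cb[c] - query[m * d:(m + 1) * d]) ** 2).sum()
               for m, (cb, c) in enumerate(zip(codebooks, codes)))
```

In practice the per-subspace query-to-centroid distances are precomputed once into M lookup tables, so scoring each database item costs only M table lookups; DPQ keeps this search structure while learning the codebooks with supervision.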


Related research

04/03/2017
Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations
We present a new approach to learn compressible representations in deep ...

08/09/2017
SUBIC: A supervised, structured binary code for image search
For large-scale visual search, highly compressed yet meaningful represen...

07/01/2021
Orthonormal Product Quantization Network for Scalable Face Image Retrieval
Recently, deep hashing with Hamming distance metric has drawn increasing...

07/12/2018
Learning Product Codebooks using Vector Quantized Autoencoders for Image Retrieval
The Vector Quantized-Variational Autoencoder (VQ-VAE) provides an unsupe...

10/18/2021
Wideband and Entropy-Aware Deep Soft Bit Quantization
Deep learning has been recently applied to physical layer processing in ...

10/30/2016
Accurate Deep Representation Quantization with Gradient Snapping Layer for Similarity Search
Recent advance of large scale similarity search involves using deeply le...

12/12/2016
FastText.zip: Compressing text classification models
We consider the problem of producing compact architectures for text clas...
