
In Defense of Product Quantization

by Benjamin Klein, et al.

Despite their widespread adoption, Product Quantization techniques were recently shown to be inferior to other hashing techniques. In this work, we present an improved Deep Product Quantization (DPQ) technique that leads to more accurate retrieval and classification than the latest state-of-the-art methods, while having a computational complexity and memory footprint similar to those of the Product Quantization method. To our knowledge, this is the first work to introduce a representation that is inspired by Product Quantization and learned end-to-end, and thus benefits from the supervised signal. DPQ explicitly learns soft and hard representations to enable efficient and accurate asymmetric search, using a straight-through estimator. A novel loss function, the Joint Central Loss, is introduced, which both improves retrieval performance and decreases the discrepancy between the soft and hard representations. Finally, by using a normalization technique, we improve results for cross-domain category retrieval.
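To ground the terminology, the classic Product Quantization scheme that DPQ builds on can be sketched as follows: a vector is split into M subvectors, each encoded by the index of its nearest centroid in a per-subspace codebook, and asymmetric search compares an uncompressed query against these compact codes. This is a minimal illustrative sketch of plain PQ (not the paper's learned DPQ); the toy sizes and random codebooks are assumptions for demonstration, where real codebooks would come from k-means or, as in DPQ, end-to-end training.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 8, 2, 4                       # vector dim, subspaces, centroids per subspace
d = D // M                              # subvector dimension
codebooks = rng.normal(size=(M, K, d))  # toy codebooks (normally learned, e.g. by k-means)

def pq_encode(x):
    """Return M centroid indices (the compact code) for vector x."""
    codes = []
    for m in range(M):
        sub = x[m * d:(m + 1) * d]
        dists = np.sum((codebooks[m] - sub) ** 2, axis=1)
        codes.append(int(np.argmin(dists)))
    return codes

def asymmetric_distance(q, codes):
    """Squared distance between an uncompressed query q and an encoded database vector.

    'Asymmetric' because only the database side is quantized; the query
    stays in its original continuous form, which improves accuracy.
    """
    total = 0.0
    for m, c in enumerate(codes):
        sub = q[m * d:(m + 1) * d]
        total += np.sum((sub - codebooks[m][c]) ** 2)
    return total

x = rng.normal(size=D)
codes = pq_encode(x)
print(codes, asymmetric_distance(x, codes))
```

The memory saving is that each database vector is stored as M small integers instead of D floats; DPQ keeps this footprint while learning the codebooks with supervision.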
