Accurate Deep Representation Quantization with Gradient Snapping Layer for Similarity Search

10/30/2016
by   Shicong Liu, et al.

Recent advances in large-scale similarity search combine deeply learned representations, which improve search accuracy, with vector quantization methods, which increase search speed. However, learning deep representations that both strongly preserve similarities between data pairs and can be accurately quantized via vector quantization remains a challenging task. Existing methods simply combine a quantization loss with a similarity loss, which biases the back-propagated gradients and degrades search performance. To this end, we propose a novel gradient snapping layer (GSL) that directly regularizes the back-propagated gradient toward a neighboring codeword: the resulting gradients remain unbiased with respect to the similarity loss while also propelling the learned representations to be accurately quantizable. Joint learning of the deep representation and the vector quantizer is then performed by alternately optimizing the quantization codebook and the deep neural network. The proposed framework is compatible with various existing vector quantization approaches. Experimental results demonstrate that it is effective, flexible, and outperforms state-of-the-art large-scale similarity search methods.
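The abstract only summarizes the mechanism, but the idea can be illustrated with a short sketch. The following is a hypothetical PyTorch rendering, not the authors' code: the layer name `GradientSnapping`, the blending weight `lam`, and the `similarity_loss` placeholder are all assumptions. The layer acts as an identity in the forward pass; in the backward pass it adds a pull toward the nearest codeword on top of the similarity-loss gradient, so the representations drift toward accurately quantizable positions while the similarity gradient itself is passed through unchanged.

```python
# Hypothetical sketch of a gradient snapping layer (GSL); not the authors' code.
import torch


class GradientSnapping(torch.autograd.Function):
    """Identity in the forward pass; in the backward pass, adds a pull
    toward the nearest codeword to the similarity-loss gradient."""

    @staticmethod
    def forward(ctx, x, codebook, lam):
        # Hard-assign each representation to its nearest codeword.
        dists = torch.cdist(x, codebook)           # (batch, n_codewords)
        nearest = codebook[dists.argmin(dim=1)]    # (batch, dim)
        ctx.save_for_backward(x - nearest)
        ctx.lam = lam
        return x

    @staticmethod
    def backward(ctx, grad_output):
        (residual,) = ctx.saved_tensors
        # Snap: bias the update toward the neighboring codeword while
        # keeping the similarity-loss gradient intact. Under gradient
        # descent, adding lam * (x - nearest) moves x toward its codeword.
        return grad_output + ctx.lam * residual, None, None


def train_step(net, codebook, batch, similarity_loss, optimizer, lam=0.1):
    """One network-update step of the alternating scheme; the codebook is
    assumed to be re-fitted separately (e.g. by k-means) between epochs."""
    optimizer.zero_grad()
    reps = GradientSnapping.apply(net(batch), codebook, lam)
    loss = similarity_loss(reps)     # any pairwise similarity objective
    loss.backward()                  # gradients pass through the GSL
    optimizer.step()
    return loss.item()
```

In the alternating scheme described in the abstract, the codebook would periodically be re-learned on the current representations with any vector quantization method while the network is held fixed, then training of the network would resume with the updated codewords.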


Related research

05/18/2020  VecQ: Minimal Loss DNN Model Compression With Vectorized Weight Quantization
Quantization has been proven to be an effective method for reducing the ...

02/01/2021  Rescuing Deep Hashing from Dead Bits Problem
Deep hashing methods have shown great retrieval accuracy and efficiency ...

02/28/2019  End-to-End Efficient Representation Learning via Cascading Combinatorial Optimization
We develop hierarchically quantized efficient embedding representations ...

04/22/2018  MQGrad: Reinforcement Learning of Gradient Quantization in Parameter Server
One of the most significant bottlenecks in training large scale machine l...

11/08/2018  GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training
Data parallelism can boost the training speed of convolutional neural ne...

10/05/2022  Active Image Indexing
Image copy detection and retrieval from large databases leverage two com...

11/23/2017  In Defense of Product Quantization
Despite their widespread adoption, Product Quantization techniques were ...
