Distill-VQ: Learning Retrieval Oriented Vector Quantization By Distilling Knowledge from Dense Embeddings

04/01/2022
by Shitao Xiao, et al.

Vector quantization (VQ) based ANN indexes, such as Inverted File System (IVF) and Product Quantization (PQ), have been widely applied to embedding-based document retrieval thanks to their competitive time and memory efficiency. Conventionally, VQ is learned to minimize the reconstruction loss, i.e., the distortion between the original dense embeddings and the reconstructed embeddings after quantization. Unfortunately, this objective is inconsistent with the goal of selecting ground-truth documents for the input query, which may cause severe loss of retrieval quality. Recent works identify this defect and propose to minimize the retrieval loss instead through contrastive learning. However, these methods rely heavily on queries with labeled ground-truth documents, so their performance is limited by the scarcity of labeled data. In this paper, we propose Distill-VQ, which unifies the learning of IVF and PQ within a knowledge distillation framework. In Distill-VQ, the dense embeddings are leveraged as "teachers", which predict the query's relevance to the sampled documents. The VQ modules are treated as the "students", which are trained to reproduce the predicted relevance, so that the reconstructed embeddings fully preserve the retrieval results of the dense embeddings. By doing so, Distill-VQ is able to derive substantial training signals from massive unlabeled data, which significantly improves retrieval quality. We perform comprehensive explorations of how best to conduct the knowledge distillation, which may provide useful insights for learning VQ-based ANN indexes. We also show experimentally that labeled data is no longer a necessity for high-quality vector quantization, which indicates Distill-VQ's strong applicability in practice.
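The core idea in the abstract — dense embeddings as "teachers" scoring query-document relevance, and the quantized embeddings as "students" trained to reproduce those scores — can be illustrated with a minimal NumPy sketch. This is a simplified, hypothetical rendering, not the paper's actual implementation: it uses hard (non-differentiable) PQ codeword assignment and a KL-divergence distillation loss over softmax-normalized relevance scores; all function names, shapes, and the temperature parameter are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def product_quantize(doc_emb, codebooks):
    """Reconstruct embeddings via PQ: split each vector into sub-vectors,
    snap each sub-vector to its nearest codeword, and concatenate.
    doc_emb: (n_doc, dim); codebooks: list of (n_codewords, dim/n_sub)."""
    sub_vecs = np.split(doc_emb, len(codebooks), axis=-1)
    recon = []
    for sub, cb in zip(sub_vecs, codebooks):
        # hard assignment: nearest codeword per sub-vector
        dists = ((sub[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        recon.append(cb[dists.argmin(axis=1)])
    return np.concatenate(recon, axis=-1)

def distill_loss(query_emb, doc_emb, codebooks, temperature=1.0):
    """KL divergence between teacher relevance (dense embeddings) and
    student relevance (PQ-reconstructed embeddings). No labels needed:
    the teacher's scores over sampled documents are the training signal."""
    teacher_scores = query_emb @ doc_emb.T                 # (n_query, n_doc)
    recon = product_quantize(doc_emb, codebooks)           # "student" side
    student_scores = query_emb @ recon.T
    p_t = softmax(teacher_scores / temperature)
    p_s = softmax(student_scores / temperature)
    return np.mean((p_t * (np.log(p_t + 1e-9) - np.log(p_s + 1e-9))).sum(-1))
```

Note that `distill_loss` requires only queries and sampled documents — no relevance labels — which is what lets a distillation objective exploit massive unlabeled data; in actual training the hard assignment would be relaxed (e.g., via a softmax over codewords) so the codebooks can be updated by gradient descent.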

