Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner Product Search

11/12/2019 ∙ by Xinyan Dai, et al. ∙ The Chinese University of Hong Kong

Vector quantization (VQ) techniques are widely used in similarity search for data compression, fast metric computation, etc. Originally designed for Euclidean distance, existing VQ techniques (e.g., PQ, AQ) explicitly or implicitly minimize the quantization error. In this paper, we present a new angle to analyze the quantization error, which decomposes the quantization error into norm error and direction error. We show that quantization errors in norm have a much higher influence on inner products than quantization errors in direction, and that a small quantization error does not necessarily lead to good performance in maximum inner product search (MIPS). Based on this observation, we propose norm-explicit quantization (NEQ) — a general paradigm that improves existing VQ techniques for MIPS. NEQ quantizes the norms of items in a dataset explicitly to reduce errors in norm, which is crucial for MIPS. For the direction vectors, NEQ can simply reuse an existing VQ technique to quantize them without modification. We conducted extensive experiments on a variety of datasets and parameter configurations. The experimental results show that NEQ improves the performance of various VQ techniques for MIPS, including PQ, OPQ, RQ and AQ.

1 Introduction

Given a dataset $\mathcal{X} \subset \mathbb{R}^d$ that contains $n$ vectors (also called items) and a query $q \in \mathbb{R}^d$, maximum inner product search (MIPS) finds the item that has the largest inner product with the query,

$$x^\star = \operatorname*{arg\,max}_{x \in \mathcal{X}} \; q^\top x. \qquad (1)$$

The definition of MIPS can be easily extended to top-$k$ inner product search, which is used more commonly in practice. MIPS has many important applications such as recommendation based on user and item embeddings [18], multi-class classification with linear classifiers [7], and object matching in computer vision [9]. Recently, MIPS has also been used for Bayesian inference [22], memory network training [5] and reinforcement learning [15].

Vector quantization (VQ). VQ quantizes the items in the dataset with $M$ codebooks $C_1, \ldots, C_M$. Each codebook contains $K$ codewords and each codeword is a $d$-dimensional vector, i.e., $c_m^k \in \mathbb{R}^d$, for $1 \le m \le M$ and $1 \le k \le K$. Denote by $i_m(x)$ the index of the codeword in codebook $C_m$ that item $x$ maps to; then $x$ is approximated by $\bar{x} = \sum_{m=1}^{M} c_m^{i_m(x)}$. Therefore, the inner product between query $q$ and item $x$, i.e., $q^\top x$, is approximated by $q^\top \bar{x} = \sum_{m=1}^{M} q^\top c_m^{i_m(x)}$. There are a number of VQ algorithms with different quantization strategies and codebook learning procedures, such as product quantization (PQ) [13], optimized product quantization (OPQ) [10], residual quantization (RQ) [6] and additive quantization (AQ) [2]. We describe them in greater detail in Section 2.

VQ can be used for data compression, fast inner product computation and candidate generation in MIPS. For data compression, the codeword indexes $\{i_1(x), \ldots, i_M(x)\}$ are stored instead of the original $d$-dimensional vector $x$, which enables storing very large datasets (e.g., with 1 billion items) in the main memory of a single machine [14]. When the inner products between the query $q$ and all codewords are precomputed and stored in look-up tables, the approximate inner product of an item (i.e., $q^\top \bar{x}$) can be computed with a complexity of $O(M)$ instead of $O(d)$. With two codebooks, VQ can use the efficient multi-index algorithm [1] to generate candidates for MIPS. Note that VQ is orthogonal to existing MIPS algorithms, such as tree-based methods [17, 24], locality-sensitive hashing (LSH) based methods [23, 25], proximity graph based methods [21] and pruning based methods [19, 26]. These algorithms focus on generating good candidates for MIPS, while VQ focuses on data compression and computation acceleration. In fact, VQ can be used as a component of these algorithms for compression and fast computation, as in [8].
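
To make the lookup-table computation concrete, here is a minimal NumPy sketch (our illustration, not the authors' code) of the additive view used above, where an item's approximation is the sum of one codeword per codebook; all array shapes and names are assumptions of this sketch, and PQ fits the same scheme if each codeword is treated as zero outside its own subspace.

```python
import numpy as np

def build_lookup_tables(query, codebooks):
    """codebooks has shape (M, K, d); the result has shape (M, K) and holds
    the inner product between the query and every codeword."""
    return np.einsum('d,mkd->mk', query, codebooks)

def approx_inner_product(tables, codes):
    """codes has shape (M,) and holds the codeword index of one item in each
    codebook; the cost is M lookups and M-1 additions."""
    return sum(tables[m, c] for m, c in enumerate(codes))

# Toy usage with illustrative sizes.
rng = np.random.default_rng(0)
M, K, d = 8, 256, 128
codebooks = rng.normal(size=(M, K, d))
codes = rng.integers(0, K, size=M)              # encoding of one item
q = rng.normal(size=d)
tables = build_lookup_tables(q, codebooks)      # O(M*K*d), once per query
print(approx_inner_product(tables, codes))      # O(M) per item afterwards
```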

When using VQ for similarity search, the primary performance indicator is the quality of the similarity value calculated with the codebook-based approximation $\bar{x}$. Existing VQ techniques were primarily designed for Euclidean nearest neighbor search (Euclidean NNS) rather than MIPS. They minimize the quantization error $\|x - \bar{x}\|$ explicitly or implicitly because it provides an upper bound for the error of the codebook-based approximate Euclidean distance, i.e., $\big|\,\|q - x\| - \|q - \bar{x}\|\,\big| \le \|x - \bar{x}\|$. However, inner product differs from Euclidean distance in several important aspects. In particular, inner product satisfies neither the triangle inequality nor non-negativity. The inner product between an item and itself (i.e., $x^\top x$) is not guaranteed to be the largest, while the self-distance (i.e., $\|x - x\| = 0$) is guaranteed to be the smallest for Euclidean distance. These differences prompt us to ask the following two questions: Does minimizing the quantization error necessarily lead to good performance for MIPS? Do we need a different design principle for VQ techniques when they are used for MIPS (rather than Euclidean NNS)?

To answer these questions, we start by analyzing the quantization errors of VQ techniques from a new angle. Instead of treating the quantization error as a whole, we decompose it into two parts: norm error (the error in the norm $\|x\|$) and angular error (the error in the direction vector $x/\|x\|$). We find that norm error has a more significant influence on inner product than angular error. Based on this observation, we propose norm-explicit quantization (NEQ), which quantizes the norm and the unit-norm direction vector of an item separately. Quantizing the norm explicitly using dedicated codebooks reduces errors in norm, which is beneficial for MIPS. The direction vector can be quantized using existing VQ techniques without modification. NEQ is simple in that the complexity of both codebook learning and approximate inner product computation is not increased compared with the baseline VQ technique used for direction vector quantization. More importantly, NEQ is general and powerful in that it can significantly boost the performance of many existing VQ techniques for MIPS.

We evaluated NEQ on four popular benchmark datasets, whose cardinalities range from 17K to 100M and whose norm distributions differ significantly. The experimental results show that NEQ improves the performance of PQ [13], OPQ [10], RQ [6] and AQ [2] for MIPS consistently on all datasets and parameter configurations (e.g., the number of codebooks and the required top-$k$ items). NEQ also significantly outperforms the state-of-the-art LSH-based MIPS methods and provides better time-recall performance than the graph-based ip-NSW algorithm.

Contributions. Our contributions are three-fold. First, we challenge the common wisdom of minimizing the quantization error in existing VQ techniques and question whether it is a suitable design principle for MIPS. Second, we show that norm error has a more significant influence on inner product than angular error, which leads to a more suitable design principle for MIPS. Third, we propose NEQ, a general framework that can be seamlessly combined with existing VQ techniques and consistently improves their performance for MIPS, which benefits applications that involve MIPS.

2 Related Work

In this section, we introduce some popular VQ techniques to facilitate further discussion and discuss the relation between NEQ and some related work.

PQ and OPQ. PQ [13] first splits the original dataset into $M$ sub-datasets, each containing $d/M$ features from all items. K-means is used to learn a codebook on each sub-dataset independently, so each codeword is a $(d/M)$-dimensional vector. An item is approximated by the concatenation of its corresponding codewords from each of the $M$ codebooks, i.e., $\bar{x} = [c_1^{i_1(x)}; \ldots; c_M^{i_M(x)}]$. OPQ [10] uses an orthonormal matrix $R$ to rotate the items before applying PQ. OPQ achieves a lower quantization error when the features are correlated or some features have larger variance than others. However, codebook learning is more complex for OPQ, as it involves multiple rounds of alternating optimization of the codebooks and the rotation matrix $R$.

RQ and AQ. Different from PQ and OPQ, in RQ [6] every codebook covers all $d$ features and each codeword is a $d$-dimensional vector. The original data are used to train the first codebook with K-means, and the residues ($x - c_1^{i_1(x)}$) are used to train the second codebook. This process is recursive in that the $m$-th codebook is trained on the residues from the previous ($m-1$) codebooks. Similar to RQ, each codebook in AQ [2] also covers all features. AQ improves RQ by jointly optimizing all the codebooks: beam search is used for encoding (finding the optimal codeword indexes of an item) given the codebooks, and a least-squares formulation is used to optimize the codebooks given the encoding. A sketch of the recursive RQ training appears below.
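
As a rough illustration of the recursive training just described (a sketch under the same notation, not the original implementations), RQ codebooks can be learned by running K-means on the residuals left by the previously learned codebooks; the hyperparameters below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_rq_codebooks(X, M=8, K=256, seed=0):
    """Residual quantization: codebook m is learned with K-means on the residuals
    left after subtracting the codewords chosen from codebooks 1..m-1."""
    residual = X.astype(np.float64).copy()
    codebooks = []
    for m in range(M):
        km = KMeans(n_clusters=K, n_init=4, random_state=seed + m).fit(residual)
        codebooks.append(km.cluster_centers_)         # shape (K, d)
        residual -= km.cluster_centers_[km.labels_]   # pass the residuals onward
    return np.stack(codebooks)                        # shape (M, K, d)
```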

In addition to the VQ techniques introduced above, there are many other VQ techniques, such as CQ [30], TQ [3], LOPQ [16] and LSQ [20]. Although these VQ techniques differ in their quantization strategies (e.g., whether or not the features are partitioned) and codebook learning algorithms (e.g., K-means or alternating minimization), all of them explicitly or implicitly minimize the quantization error $\|x - \bar{x}\|$, which is believed to provide good performance for Euclidean NNS. In the next section, we show that this principle does not carry over to MIPS.

Existing work. Similar to some other VQ algorithms used for similarity search (e.g., PQ and RQ), the prototype of NEQ can be found in earlier research on signal compression. The shape-gain algorithm [11] separately quantizes the magnitude and direction of a signal to achieve efficiency at some loss in accuracy. Instead of hurting accuracy, NEQ shows that the separate quantization of norm and direction actually improves performance for MIPS. A recent work, multi-scale quantization [27], also explicitly quantizes the norm, and its motivation is to better reduce the quantization error when the dynamic range (i.e., the spread of the norm distribution) is large. In contrast, NEQ does not try to minimize the quantization error and is not limited to the case where the data have a large dynamic range. In fact, NEQ still provides significant performance improvements even when items in the dataset have almost identical norms.

3 Analysis of Quantization Error for MIPS

Figure 1: Illustration of Theorem 1

For Euclidean distance, the quantization error provides an upper bound on the error of the approximate Euclidean distance due to the triangle inequality, i.e., $\big|\,\|q - x\| - \|q - \bar{x}\|\,\big| \le \|x - \bar{x}\|$. Therefore, almost all VQ techniques try to minimize the quantization error when learning the codebooks. For approximate inner product, $\|q\|\,\|x - \bar{x}\|$ provides only a trivial error bound because $|q^\top x - q^\top \bar{x}| = |q^\top(x - \bar{x})| \le \|q\|\,\|x - \bar{x}\|$. As high-dimensional vectors tend to be orthogonal to each other [4], this bound is loose and $|q^\top(x - \bar{x})|$ can be significantly smaller than $\|q\|\,\|x - \bar{x}\|$. Thus, we need to understand the influence of quantization error on inner product from a new angle. The exact inner product and its codebook-based approximation can be expressed as

$$q^\top x = \|x\| \cdot q^\top \frac{x}{\|x\|}, \qquad q^\top \bar{x} = \|\bar{x}\| \cdot q^\top \frac{\bar{x}}{\|\bar{x}\|}, \qquad (2)$$

in which $x/\|x\|$ and $\bar{x}/\|\bar{x}\|$ are the unit-norm direction vectors of $x$ and $\bar{x}$, respectively. It can be observed from (2) that the accuracy of the approximate inner product depends on two factors, i.e., the quality of the norm approximation ($\|\bar{x}\|$ for $\|x\|$) and the quality of the direction vector approximation ($\bar{x}/\|\bar{x}\|$ for $x/\|x\|$). But how do the two factors affect the quality of the approximate inner product? Does one have a greater influence than the other? To facilitate further analysis, we formally define the inner product error, norm error, and angular error as follows.

Definition 1.

For an item $x$ and its codebook-based approximation $\bar{x}$, given a query $q$, the inner product error $\epsilon_i$, norm error $\epsilon_n$, and angular error $\epsilon_a$ are given as:

$$\epsilon_i = \frac{\big|\,q^\top x - q^\top \bar{x}\,\big|}{\big|\,q^\top x\,\big|}, \qquad \epsilon_n = \frac{\big|\,\|x\| - \|\bar{x}\|\,\big|}{\|x\|}, \qquad \epsilon_a = \Big\|\, \frac{x}{\|x\|} - \frac{\bar{x}}{\|\bar{x}\|} \,\Big\|.$$

We define the inner product error and norm error as ratios over the actual values to exclude the scaling effects of $\|x\|$ and $\|q\|$. For the angular error, $\epsilon_a = 0$ if $x$ and $\bar{x}$ are perfectly aligned in direction.

To analyze the influence of the norm error and the angular error individually, we need to exclude the influence of the other. Therefore, we used the approximation $\|\bar{x}\| \cdot x/\|x\|$, which is accurate in direction, to calculate the inner product error caused by the norm approximation. Similarly, we used $\|x\| \cdot \bar{x}/\|\bar{x}\|$, which is accurate in norm, to calculate the inner product error caused by the direction approximation. A norm error of $\epsilon$ causes an inner product error of exactly $\epsilon$ when there is no angular error. Theorem 1 formally establishes that there are cases in which an angular error of $\epsilon$ results in an inner product error smaller than $\epsilon$.
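
For clarity, the norm-error claim can be spelled out using the definitions above (a short derivation we add here; it is not reproduced from the paper):

$$\epsilon_i = \frac{\big|\,q^\top x - q^\top\!\big(\|\bar{x}\|\, x/\|x\|\big)\,\big|}{\big|\,q^\top x\,\big|} = \Big|\,1 - \frac{\|\bar{x}\|}{\|x\|}\,\Big| = \frac{\big|\,\|x\| - \|\bar{x}\|\,\big|}{\|x\|} = \epsilon_n .$$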

Theorem 1.

For an item $x$, its approximation $\bar{x}$ which is accurate in norm but inaccurate in direction, and a query $q$, denote the angle between $q$ and $\bar{x}$ as $a$ and assume $a \in (0, \pi/2)$, the angle between $x$ and $\bar{x}$ as $b$ and assume $b \in (0, \pi/2)$, and the angle between the two planes defined by ($q$, $\bar{x}$) and ($x$, $\bar{x}$) as $c$. The inner product error is not larger than the angular error if the angle $c$ falls in a feasible region determined by $a$ and $b$ (the exact condition is given in the supplementary material).

We provide an illustration of the vectors in Theorem 1 in Figure 1, and the proof can be found in the supplementary material. We also plot the width of the feasible region of $c$, i.e., the difference between the maximum and minimum values of $c$ for Theorem 1 to hold, under different configurations of $a$ and $b$ in Figure 1. The results show that when both $a$ and $b$ are small and $b \le a$, the inner product error is smaller than the angular error for almost all $c$. The required conditions are not very restrictive, as we analyze below.

We consider an item $x$ having a large inner product with $q$, as it is easy to distinguish items having large inner products with the query from those having small inner products. To achieve good performance, a VQ method should be able to distinguish items having large but similar inner products with the query. Firstly, the conditions that $b \in (0, \pi/2)$ and $b$ is small are easy to satisfy, as $\bar{x}$ is the codebook-based approximation of $x$ and should have a small angle with it. Secondly, as $x$ has a large inner product with the query $q$, its approximation $\bar{x}$ should also have a small angle with $q$; therefore the condition that $a \in (0, \pi/2)$ and $a$ is small is likely to hold. Finally, as $\bar{x}$ is trained to approximate $x$ while $q$ is not, $b \le a$ is again easy to satisfy. Since $q$, $x$ and $\bar{x}$ have small angles with each other, $c$ is likely to fall in the feasible region.

Figure 2: Influence of norm error and angular error on inner product for PQ (left) and RQ (right)

Theorem 1 is also supported by the following experiment on the SIFT1M dataset.¹ We used 10,000 randomly selected queries, and the errors are calculated on their ground-truth top-20 MIPS results² in the dataset. We experimented with PQ and RQ using 8 codebooks, each containing 256 codewords. For each item-query pair ($x$, $q$), we plot two points in Figure 2. One (in red) shows the norm error and the inner product error caused by the inaccurate norm (using $\|\bar{x}\| \cdot x/\|x\|$). The other (in gray) shows the angular error and the inner product error caused by the inaccurate direction vector (using $\|x\| \cdot \bar{x}/\|\bar{x}\|$). The results show that all red points reside on the line with a slope of 1, which verifies that a norm error of $\epsilon$ causes an inner product error of $\epsilon$. In contrast, most of the gray points are below the red line, which means that an angular error usually results in a smaller inner product error. We fitted a line to the gray points, and the slopes for PQ and RQ are 0.510 and 0.426, respectively. We also plot the influence of norm error and angular error on Euclidean distance in the supplementary material, which shows that angular error has a larger influence than norm error on Euclidean distance.

¹ SIFT1M is sampled from the SIFT100M dataset used in the experiments in Section 5.
² Studies of MIPS [23, 25, 12] usually use a value of $k$ ranging from 1 to 50; 20 is in the middle of this range.
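
The per-pair decomposition behind Figure 2 can be reproduced with a few lines of NumPy; the sketch below follows Definition 1 and the two surrogate approximations described above, and is our illustration rather than the authors' evaluation code.

```python
import numpy as np

def decompose_errors(q, x, x_bar):
    """For a query q, an item x and its codebook-based approximation x_bar, return
    the norm error with its induced inner product error (red points in Figure 2)
    and the angular error with its induced inner product error (gray points)."""
    nx, nxb, ip = np.linalg.norm(x), np.linalg.norm(x_bar), q @ x
    norm_err = abs(nx - nxb) / nx
    # Approximation accurate in direction but not in norm.
    ip_err_from_norm = abs(ip - q @ (nxb * x / nx)) / abs(ip)    # equals norm_err
    # Approximation accurate in norm but not in direction.
    ip_err_from_dir = abs(ip - q @ (nx * x_bar / nxb)) / abs(ip)
    ang_err = np.linalg.norm(x / nx - x_bar / nxb)
    return norm_err, ip_err_from_norm, ang_err, ip_err_from_dir
```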

In conclusion, the results in this section show that norm error has a more significant influence on inner product than angular error. Therefore, to improve the performance of VQ techniques for MIPS, we should reduce the quantization errors in norm. To achieve this goal, we could modify the formulation of the codebook learning problem in existing VQ algorithms to take the norm error into account (e.g., by incorporating it into the cost function or constraints). However, this methodology lacks generality, as each VQ algorithm would need to be modified individually. In contrast, norm-explicit quantization (NEQ) uses the fact that the norm is a scalar summary of the vector and explicitly quantizes it to reduce its error. As a result, NEQ can be naturally combined with any VQ algorithm by using it to quantize the direction vector.

4 Norm-Explicit Quantization

Existing VQ techniques try to minimize the quantization error and do not allow explicit control of the norm error and the angular error. However, MIPS could benefit from methods that explicitly reduce the error in norm, because an accurate norm is important for MIPS. Therefore, the core idea of NEQ is to quantize the norm and the direction vector of the items separately. The norm is encoded explicitly using separate codebooks to achieve a small error, while the direction vector can be quantized using an existing VQ technique without modification. More specifically, the $M$ codebooks in NEQ are divided into two parts. The first $h$ codebooks are norm codebooks, in which each codeword is a scalar, i.e., $c_m^k \in \mathbb{R}$ for $1 \le m \le h$ and $1 \le k \le K$. The other $M - h$ codebooks are vector codebooks for the direction vector. In NEQ, the codebook-based approximation of $x$ can be expressed as

$$\bar{x} = \Big(\sum_{m=1}^{h} c_m^{i_m(x)}\Big) \sum_{m=h+1}^{M} c_m^{i_m(x)}, \qquad (3)$$

in which $i_1(x), \ldots, i_M(x)$ are the codeword indexes of $x$ in the $M$ codebooks. According to (3), the NEQ-based approximate inner product can be calculated using Algorithm 1. Lines 4-6 reconstruct the approximate norm of $x$ and Lines 7-9 compute the inner product between $q$ and the approximate direction vector of $x$. Note that the inner product computation in Line 8 can be replaced by a table lookup when the inner products between $q$ and the codewords are precomputed.

1:  Input: Query $q$, codeword indexes $i_1(x), \ldots, i_M(x)$ of item $x$
2:  Output: An approximation of $q^\top x$
3:  $\ell \leftarrow 0$; $s \leftarrow 0$;
4:  for $m$ from $1$ to $h$ do
5:     $\ell \leftarrow \ell + c_m^{i_m(x)}$;
6:  end for
7:  for $m$ from $h+1$ to $M$ do
8:     $s \leftarrow s + q^\top c_m^{i_m(x)}$;
9:  end for
10:  return $\ell \cdot s$;
Algorithm 1 NEQ: Approximate Inner Product Calculation
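
A minimal Python rendering of Algorithm 1 (a sketch under the notation above, not the authors' implementation); it assumes the scalar norm codewords are stored in an $(h, K)$ array and that the inner products between the query and the vector codewords have already been placed in $(M-h, K)$ lookup tables.

```python
def neq_approx_inner_product(norm_codebooks, vector_tables, norm_codes, vector_codes):
    """norm_codebooks: (h, K) scalar codewords; vector_tables: (M-h, K) precomputed
    values of q^T codeword; *_codes: the codeword indexes of one item."""
    # Lines 4-6: reconstruct the approximate (relative) norm of the item.
    norm = sum(norm_codebooks[m][c] for m, c in enumerate(norm_codes))
    # Lines 7-9: inner product between q and the approximate direction vector,
    # using table lookups in place of Line 8.
    ip = sum(vector_tables[m][c] for m, c in enumerate(vector_codes))
    # Line 10: one multiplication assembles the final result.
    return norm * ip
```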

The remaining problem is how to train the norm codebooks and the vector codebooks. A straightforward solution, which trains the norm codebooks on $\|x\|$ and the vector codebooks on $x/\|x\|$, does not work. This is because the codebook-based approximation of the direction vector is not guaranteed to have unit norm due to the intrinsic norm error of vector quantization. Therefore, even if we quantize $\|x\|$ accurately with the norm codebooks, $\bar{x}$ in (3) can still have a large norm error. NEQ solves this problem with the codebook learning process in Algorithm 2.

1:  Input: Dataset $\mathcal{X}$, number of codebooks $M$, number of norm codebooks $h$
2:  Output: $h$ norm codebooks, $M-h$ vector codebooks
3:  Extract the direction vector $v = x / \|x\|$ of every item $x$;
4:  Train $M-h$ vector codebooks on the direction vectors using a VQ method;
5:  Encode $v$ with the vector codebooks and obtain the codebook-based approximation $\bar{v}$ of $v$;
6:  Get the relative norm of item $x$ as $\ell = \|x\| / \|\bar{v}\|$;
7:  Train $h$ norm codebooks to quantize $\ell$;
8:  Return the codebooks;
Algorithm 2 NEQ: Codebook Learning

Line 4 trains the vector codebooks using an existing VQ method, such as PQ or RQ. Instead of quantizing the actual norm $\|x\|$, NEQ quantizes the relative norm $\ell = \|x\| / \|\bar{v}\|$ in Line 7 of Algorithm 2. This design absorbs the norm error of VQ into the relative norm and ensures that the codebook-based approximation in (3) has the same norm as $x$ if $\ell$ is quantized accurately. As we will show in the experiments, this design also allows NEQ to work for datasets in which the items have almost identical norms. The norm codebooks are learned in a recursive manner similar to RQ: the relative norm $\ell$ is used to train the first norm codebook with K-means, the residuals (the difference between $\ell$ and its approximation by the codebooks learned so far) are used to train the next codebook, and this process is conducted iteratively. The normalization in Line 3 may look unnecessary, as we could quantize the original item $x$ directly with the vector codebooks and define the relative norm as $\|x\| / \|\bar{x}\|$. However, we observed that this alternative does not perform as well as Algorithm 2. One possible reason is that unit vectors may be easier to quantize for VQ techniques.
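
The codebook learning procedure of Algorithm 2 can be sketched as follows (illustrative NumPy/scikit-learn code, not the authors' implementation); `train_vq` and `encode_vq` stand for any off-the-shelf VQ trainer and encoder (e.g., the RQ sketch in Section 2) and are assumptions of this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_neq(X, train_vq, encode_vq, M=8, h=1, K=256):
    """Algorithm 2: learn h scalar norm codebooks and M-h vector codebooks."""
    norms = np.linalg.norm(X, axis=1)
    V = X / norms[:, None]                        # Line 3: unit-norm direction vectors
    vec_codebooks = train_vq(V, M - h, K)         # Line 4: any existing VQ method
    V_bar = encode_vq(V, vec_codebooks)           # Line 5: reconstructed directions
    rel_norms = norms / np.linalg.norm(V_bar, axis=1)   # Line 6: relative norms
    # Line 7: residual quantization of the scalar relative norms with K-means.
    norm_codebooks, residual = [], rel_norms.copy()
    for _ in range(h):
        km = KMeans(n_clusters=K, n_init=4).fit(residual.reshape(-1, 1))
        codewords = km.cluster_centers_.ravel()   # shape (K,), scalar codewords
        norm_codebooks.append(codewords)
        residual = residual - codewords[km.labels_]
    return np.stack(norm_codebooks), vec_codebooks
```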

As a demonstration of the effectiveness of NEQ in reducing the quantization error in norm, we report some statistics on the Yahoo!Music dataset. The original RQ still has a noticeable norm error with both 8 and 16 codebooks. Keeping the total number of codebooks the same and using only one codebook for the norm, the NEQ-based RQ reaches a much smaller norm error under both 8 and 16 codebooks. We will show in the experiments in Section 5 that the lower norm error of NEQ translates into better performance for MIPS.

Setting the number of norm codebooks. Generally, a good $h$ can be chosen by testing the recall-item performance of all possible configurations³ on a set of sample queries. When the number of codewords in each codebook is 256 (i.e., $K = 256$), we found empirically that using one codebook for the norm provides the best performance in most cases. This is because the norm error is already small with one norm codebook; using more codebooks for the norm provides limited additional reduction in norm error but increases the angular error, as the number of vector codebooks is reduced.

³ There should be at least 1 and at most $M-1$ norm codebooks.

Why not store the norm? As the relative norm is a scalar, one may wonder why we do not store its exact value to eliminate the norm error completely. The reason is that storing $\ell$ as a 4-byte floating point number costs too much space, and VQ algorithms are usually evaluated with a fixed per-item space budget (especially when used for data compression). With the usual setting $K = 256$, using $M$ codebooks results in a per-item index size of $M$ bytes. If $\ell$ is stored exactly, the direction vector can only use $M - 4$ codebooks. Empirically, we found that using 1 norm codebook already makes the norm error very small, which leaves $M - 1$ codebooks for the direction vector and achieves better overall performance.

Complexity analysis. For index building, NEQ learns $M - h$ vector codebooks while the original VQ method learns $M$ vector codebooks. Although NEQ needs to conduct normalization twice (Line 3 and Line 6 of Algorithm 2) and learn the norm codebooks, the complexity of these operations is low compared with learning the vector codebooks. For inner product computation with lookup tables, the original VQ method needs $M$ lookups and $M-1$ additions. NEQ needs $h$ lookups and $h-1$ additions to reconstruct the relative norm, $M-h$ lookups and $M-h-1$ additions to compute the inner product with the approximate direction vector, and one more multiplication to assemble the final result. Thus, approximate inner product computation in NEQ costs $M$ lookups and $M-1$ arithmetic operations in total, essentially the same as the original VQ method. Therefore, NEQ does not increase the complexity of codebook learning or approximate inner product computation.

We would like to emphasize that the strength of NEQ lies in its simplicity and generality. NEQ is simple in that it uses existing VQ methods to quantize the direction vector without modifying their formulations of the codebook learning problem, which makes NEQ easy to implement as off-the-shelf VQ libraries can be reused. NEQ is also general in that it can be combined with any VQ method, including PQ, OPQ, RQ and AQ. In the supplementary material, we show that NEQ with two codebooks can adopt the multi-index algorithm [1] for candidate generation in MIPS. We will also show in Section 5 that NEQ boosts the performance of many VQ methods for MIPS.

5 Experiments

Dataset         Netflix    Yahoo!Music    ImageNet     SIFT100M
# items         17,770     136,736        2,340,373    100,000,000
# dimensions    300        300            150          128

Table 1: Dataset statistics

Experiment setting. We used four popular datasets, Netflix, Yahoo!Music, ImageNet and SIFT100M, whose statistics are summarized in Table 1. Netflix and Yahoo!Music record user ratings for items. We obtained item and user embeddings from these two datasets using matrix factorization based on alternating least squares (ALS) [29]. The item embeddings were used as dataset items, while the user embeddings were used as queries. ImageNet and SIFT100M contain descriptors of images. The four datasets vary significantly in norm distribution (see details in the supplementary material), and we deliberately chose them to test NEQ's robustness to different norm distributions. ImageNet has a long tail in its norm distribution, while items in SIFT100M have almost the same norm. For Netflix and Yahoo!Music, most items have a norm close to the maximum.

Following the standard protocol for evaluating VQ techniques [2, 3, 30], we used the recall-item curve as the main performance metric; it measures the ability of a VQ method to preserve the similarity ranking of the items. To obtain the recall-item curve, all items in a dataset are first sorted according to their codebook-based approximate inner products. For a query, denote the set of the top $N$ ranked items as $\mathcal{R}$ and the set of ground-truth top-$k$ MIPS results as $\mathcal{G}$; the recall is $|\mathcal{R} \cap \mathcal{G}| / |\mathcal{G}|$. At each value of $N$, we report the average recall over 10,000 randomly selected queries. We do not report the running time, as the VQ methods have almost identical running time⁴ given the same number of codebooks $M$.

⁴ AQ and RQ have more expensive inner product table computation than PQ and OPQ. However, this difference has a negligible impact on the running time when the dataset is large.
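
For completeness, the recall-item metric described above can be computed as in the following sketch (our illustration, not the paper's evaluation code); `approx_ip` and `exact_ip` are assumed to hold the approximate and exact inner products between one query and all items.

```python
import numpy as np

def recall_at_n(approx_ip, exact_ip, n, k=20):
    """Fraction of the ground-truth top-k MIPS results that appear among the n
    items ranked highest by the codebook-based approximate inner products."""
    retrieved = set(np.argsort(-approx_ip)[:n])
    ground_truth = set(np.argsort(-exact_ip)[:k])
    return len(retrieved & ground_truth) / k
```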

For a VQ method X (e.g., RQ), its NEQ version is denoted as NE-X (e.g., NE-RQ). The NEQ variants use the same total number of codebooks (norm codebooks plus direction codebooks) as the original VQ methods. Each codebook has $K = 256$ codewords, and only one codebook is used for the norm in NEQ. The default value of $k$ (the number of target top inner product items) is 20. For Netflix, the codebooks were trained using the entire dataset. For the other datasets, the codebooks were trained using a sample of size 100,000.

Figure 3: Item-recall performance of the VQ methods and their NEQ-based variants

Improvements over existing VQ methods. We report the performance of the original VQ methods (dotted lines) and their NEQ-based variants (solid lines) in Figure 3. The number of codebooks is 8. We do not report the performance of AQ and NE-AQ on SIFT100M because the encoding process of AQ did not finish within 72 hours. The results show that the NEQ-based variants consistently outperform their counterparts on all four datasets. The performance improvements of NEQ on PQ and OPQ are much more significant than on AQ and RQ. Moreover, there is a trend that the performance benefit increases with the dataset cardinality. These two phenomena can be explained by the fact that reducing the error in norm is more helpful when the quantization error is large. With 8 codebooks, the small Netflix dataset is already quantized accurately, while the SIFT100M dataset is not well quantized. With the same number of codebooks, PQ and OPQ generally have larger quantization errors than RQ and AQ, and thus the performance gain of NEQ is more significant for them.

Figure 4: Different numbers of codebooks
Figure 5: Different values of $k$

Next, we test the robustness of NEQ to the parameter configurations, i.e., the number of codebooks and the value of $k$. We report the performance of RQ and NE-RQ on the SIFT100M dataset in Figure 4 and Figure 5 (the results for the other VQ methods and datasets can be found in the supplementary material). Figure 4 shows that NE-RQ outperforms RQ across different numbers of codebooks. Figure 5 shows that NE-RQ consistently outperforms RQ for different values of $k$ with 8 codebooks, and the performance gap is similar across the values of $k$. The results in the supplementary material show that the robustness of NEQ to the parameter configurations also holds for PQ, OPQ and AQ.

Figure 6: Comparison with LSH & graph methods
Figure 7: Quantization error of NE-RQ and RQ

Comparison with other methods. Norm-Range LSH [28] and Simple-LSH [23] use binary hashing and are the state-of-the-art LSH-based algorithms for MIPS. QUIP [12] is a vector quantization method specialized for MIPS, which learns the codebooks by explicitly minimizing the squared inner product error $(q^\top x - q^\top \bar{x})^2$. QUIP has several variants, and we used QUIP-cov(x) for a fair comparison, as the other variants use knowledge about the queries while NEQ does not. According to the QUIP paper, the performance gap between the other variants and QUIP-cov(x) is small on the ImageNet dataset. For Norm-Range LSH, we partitioned the dataset into 64 sub-datasets as recommended in [28]. We report the performance results on the ImageNet dataset in Figure 6 (left). Simple-LSH and Norm-Range LSH used 64-bit binary codes. NE-PQ and QUIP use two codebooks, each containing 256 codewords, so the per-item index size of NE-PQ (and QUIP) is 16 bits, only a quarter of that of the LSH-based methods. The results show that the vector quantization based methods (NE-PQ and QUIP) outperform the LSH-based algorithms with a smaller per-item index size. Moreover, NE-PQ significantly outperforms QUIP even though QUIP uses a more complex codebook learning strategy.

We also compared the recall-time performance of NE-RQ with the proximity graph based ip-NSW algorithm [21] on the ImageNet dataset in Figure 6 (right). ip-NSW has been shown to achieve state-of-the-art recall-time performance among existing MIPS algorithms [21]. In this experiment, NE-RQ with two codebooks was used for candidate generation (by combining it with the multi-index algorithm [1]), and the candidates were verified by computing their exact inner products. The results show that NE-RQ achieves higher recall than ip-NSW given the same query processing time. As the implementation may affect the running time, we also plot recall vs. the number of inner product computations in the supplementary material, which shows that NE-RQ requires fewer inner product computations at the same recall. However, we found that ip-NSW provides better recall-time performance than NEQ on the SIFT1M dataset. Although the main design goal of NEQ is good recall-item performance rather than recall-time performance, this experiment shows that using NEQ to generate candidates is beneficial for some datasets.

Insights. A natural question arises after observing the good performance of NEQ: does NEQ only reduce the error in norm, or does it also reduce the overall quantization error as a by-product of its design? To answer this question, we compared the quantization error ($\|x - \bar{x}\|$ normalized by the maximum norm in the dataset) and the norm error of RQ and NE-RQ in Figure 7. The number of codebooks is 8 and the reported errors are averaged over all items in each dataset. The results show that NE-RQ indeed reduces the norm error significantly, but its quantization error is slightly larger than that of RQ on all four datasets. This can be explained by the fact that NE-RQ uses 1 codebook to encode the norm and thus has fewer vector codebooks than RQ. This result shows that a smaller quantization error does not necessarily result in better performance for MIPS. Originally designed for Euclidean distance, existing VQ methods minimize the quantization error. With NEQ, we have shown that minimizing the quantization error is not a suitable design principle for inner product due to its unique properties.

6 Conclusions

In this paper, we questioned whether minimizing the quantization error is a suitable design principle of VQ techniques for MIPS. We found that the quantization error in norm has a great influence on inner product and can be significantly reduced by explicitly encoding the norm using separate codebooks. Based on this observation, we proposed NEQ, a general paradigm that specializes existing VQ techniques for MIPS. NEQ is simple, as it does not modify the codebook learning process of existing VQ methods; it is also general, as it can be easily combined with existing VQ methods. Experimental results show that NEQ provides good performance consistently across various datasets and parameter configurations. Our work shows that inner product requires different design principles from Euclidean distance for VQ techniques, and we hope it inspires more research in this direction.

References

  • [1] A. Babenko and V. S. Lempitsky (2012) The inverted multi-index. In CVPR, pp. 3069–3076.
  • [2] A. Babenko and V. S. Lempitsky (2014) Additive quantization for extreme vector compression. In CVPR, pp. 931–938.
  • [3] A. Babenko and V. S. Lempitsky (2015) Tree quantization for large-scale similarity search and classification. In CVPR, pp. 4240–4248.
  • [4] T. T. Cai, J. Fan, and T. Jiang (2013) Distributions of angles in random packing on spheres. Journal of Machine Learning Research 14 (1), pp. 1837–1864.
  • [5] S. Chandar, S. Ahn, H. Larochelle, P. Vincent, G. Tesauro, and Y. Bengio (2016) Hierarchical memory networks. arXiv preprint arXiv:1605.07427.
  • [6] Y. Chen, T. Guan, and C. Wang (2010) Approximate nearest neighbor search by residual vector quantization. Sensors 10 (12), pp. 11259–11273.
  • [7] T. L. Dean, M. A. Ruzon, M. Segal, J. Shlens, S. Vijayanarasimhan, and J. Yagnik (2013) Fast, accurate detection of 100,000 object classes on a single machine. In CVPR, pp. 1814–1821.
  • [8] M. Douze, A. Sablayrolles, and H. Jégou (2018) Link and code: fast indexing with graphs and compact regression codes. In CVPR, pp. 3646–3654.
  • [9] P. F. Felzenszwalb, R. B. Girshick, D. A. McAllester, and D. Ramanan (2010) Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 32, pp. 1627–1645.
  • [10] T. Z. Ge, K. He, Q. Ke, and J. Sun (2013) Optimized product quantization for approximate nearest neighbor search. In ICCV, pp. 2946–2953.
  • [11] A. Gersho and R. M. Gray (2012) Vector quantization and signal compression. Vol. 159, Springer Science & Business Media.
  • [12] R. Guo, S. Kumar, K. Choromanski, and D. Simcha (2016) Quantization based fast inner product search. In AISTATS, pp. 482–490.
  • [13] H. Jégou, M. Douze, and C. Schmid (2011) Product quantization for nearest neighbor search. IEEE Trans. Pattern Anal. Mach. Intell. 33 (1), pp. 117–128.
  • [14] J. Johnson, M. Douze, and H. Jégou (2017) Billion-scale similarity search with GPUs. CoRR.
  • [15] K. Jun, A. Bhargava, R. Nowak, and R. Willett (2017) Scalable generalized linear bandits: online computation and hashing. In NIPS, pp. 99–109.
  • [16] Y. Kalantidis and Y. S. Avrithis (2014) Locally optimized product quantization for approximate nearest neighbor search. In CVPR, pp. 2329–2336.
  • [17] N. Koenigstein, P. Ram, and Y. Shavitt (2012) Efficient retrieval of recommendations in a matrix factorization framework. In CIKM, pp. 535–544.
  • [18] Y. Koren, R. M. Bell, and C. Volinsky (2009) Matrix factorization techniques for recommender systems. IEEE Computer 42, pp. 30–37.
  • [19] H. Li, T. N. Chan, M. L. Yiu, and N. Mamoulis (2017) FEXIPRO: fast and exact inner product retrieval in recommender systems. In SIGMOD, pp. 835–850.
  • [20] J. Martinez, J. Clement, H. H. Hoos, and J. J. Little (2016) Revisiting additive quantization. In ECCV, pp. 137–153.
  • [21] S. Morozov and A. Babenko (2018) Non-metric similarity graphs for maximum inner product search. In NeurIPS, pp. 4726–4735.
  • [22] S. Mussmann and S. Ermon (2016) Learning and inference via maximum inner product search. In ICML, pp. 2587–2596.
  • [23] B. Neyshabur and N. Srebro (2015) On symmetric and asymmetric LSHs for inner product search. In ICML, pp. 1926–1934.
  • [24] P. Ram and A. G. Gray (2012) Maximum inner-product search using cone trees. In KDD, pp. 931–939.
  • [25] A. Shrivastava and P. Li (2014) Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In NIPS, pp. 2321–2329.
  • [26] C. Teflioudi, R. Gemulla, and O. Mykytiuk (2015) LEMP: fast retrieval of large entries in a matrix product. In SIGMOD, pp. 107–122.
  • [27] X. Wu, R. Guo, A. T. Suresh, S. Kumar, D. N. Holtmann-Rice, D. Simcha, and F. Yu (2017) Multiscale quantization for fast similarity search. In NIPS, pp. 5745–5755.
  • [28] X. Yan, J. Li, X. Dai, H. Chen, and J. Cheng (2018) Norm-ranging LSH for maximum inner product search. In NeurIPS, pp. 2956–2965.
  • [29] H. Yun, H. F. Yu, C. J. Hsieh, S. V. N. Vishwanathan, and I. S. Dhillon (2013) NOMAD: non-locking, stochastic multi-machine algorithm for asynchronous and decentralized matrix completion. CoRR.
  • [30] T. Zhang, C. Du, and J. Wang (2014) Composite quantization for approximate nearest neighbor search. In ICML, pp. 838–846.