1 Introduction
Given a dataset $\mathcal{S} \subset \mathbb{R}^d$ that contains $n$ vectors (also called items) and a query $q \in \mathbb{R}^d$, maximum inner product search (MIPS) finds an item $p$ that has the largest inner product with the query,

$$p = \operatorname*{argmax}_{x \in \mathcal{S}} \; x^\top q. \qquad (1)$$
The definition of MIPS can be easily extended to top-$k$ inner product search, which is used more commonly in practice. MIPS has many important applications, such as recommendation based on user and item embeddings [18], multiclass classification with linear classifiers [7], and object matching in computer vision [9]. Recently, MIPS has also been used for Bayesian inference [22] and memory network training [5, 15].

Vector quantization (VQ). VQ quantizes the items in the dataset with $M$ codebooks $C^1, C^2, \dots, C^M$. Each codebook contains $K$ codewords and each codeword is a $d$-dimensional vector, i.e., $C^m = \{c^m_1, \dots, c^m_K\}$ with $c^m_k \in \mathbb{R}^d$, for $1 \le m \le M$ and $1 \le k \le K$. Denote $i_m(x)$ as the index of the codeword in codebook $C^m$ that item $x$ maps to; then $x$ is approximated by $\tilde{x} = \sum_{m=1}^{M} c^m_{i_m(x)}$. Therefore, the inner product between query $q$ and item $x$, i.e., $x^\top q$, is approximated by $\tilde{x}^\top q = \sum_{m=1}^{M} (c^m_{i_m(x)})^\top q$. There are a number of VQ algorithms with different quantization strategies and codebook learning procedures, such as product quantization (PQ) [13], optimized product quantization (OPQ) [10], residual quantization (RQ) [6] and additive quantization (AQ) [2]. We describe them in greater detail in Section 2.
VQ can be used for data compression, fast inner product computation, and candidate generation in MIPS. For data compression, the codeword indexes $\{i_1(x), \dots, i_M(x)\}$ are stored instead of the original $d$-dimensional vector $x$, which enables storing very large datasets (e.g., with 1 billion items) in the main memory of a single machine [14]. When the inner products between query $q$ and all codewords are precomputed and stored in lookup tables, the approximate inner product of an item (i.e., $\tilde{x}^\top q$) can be computed with a complexity of $O(M)$ instead of $O(d)$. With two codebooks, VQ can use the efficient multi-index algorithm [1] to generate candidates for MIPS. Note that VQ is orthogonal to existing MIPS algorithms, such as tree-based methods [17, 24], locality sensitive hashing (LSH) based methods [23, 25], proximity graph based methods [21], and pruning based methods [19, 26]. These algorithms focus on generating good candidates for MIPS, while VQ focuses on data compression and computation acceleration. In fact, VQ can be used as a component of these algorithms for compression and fast computation, as in [8].
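To make the lookup-table computation concrete, the following is a minimal numpy sketch with toy sizes; the randomly generated codebooks and codes stand in for ones learned by a real VQ method, and all names are ours. It shows how precomputing the query-codeword inner products once reduces the per-item cost to $M$ lookups and additions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, K = 16, 4, 8          # toy sizes: dimension, codebooks, codewords per codebook

# Hypothetical codebooks and per-item codes; a real system would learn these (e.g., with PQ/RQ).
codebooks = rng.normal(size=(M, K, d))   # M codebooks, K codewords each, all d-dimensional
codes = rng.integers(0, K, size=M)       # codeword index of one item in each codebook
q = rng.normal(size=d)

# Reconstruction: the item is approximated by the sum of one codeword per codebook.
x_approx = codebooks[np.arange(M), codes].sum(axis=0)

# Precompute lookup tables: inner products between the query and every codeword, done once per query.
tables = codebooks @ q                   # shape (M, K)

# Approximate inner product with M lookups + additions instead of d multiply-adds.
ip_lookup = tables[np.arange(M), codes].sum()

assert np.isclose(ip_lookup, x_approx @ q)
```

The same tables are reused for every item in the dataset, which is what makes the $O(M)$ per-item cost possible.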
When using VQ for similarity search, the primary performance indicator is the quality of the similarity value calculated with the codebook-based approximation $\tilde{x}$. Existing VQ techniques were primarily designed for Euclidean nearest neighbor search (Euclidean NNS) rather than MIPS. They minimize the quantization error $\|x - \tilde{x}\|$ explicitly or implicitly because it provides an upper bound for the error of the codebook-based approximate Euclidean distance, i.e., $\big|\,\|x - q\| - \|\tilde{x} - q\|\,\big| \le \|x - \tilde{x}\|$. However, inner product differs from Euclidean distance in several important aspects. In particular, inner product satisfies neither the triangle inequality nor non-negativity. The inner product between an item and itself (i.e., $x^\top x$) is not guaranteed to be the largest, while the self-distance is guaranteed to be the smallest for Euclidean distance. These differences prompt us to ask two questions: Does minimizing the quantization error necessarily lead to good performance for MIPS? Do we need a different design principle for VQ techniques when they are used for MIPS (rather than Euclidean NNS)?
To answer these questions, we start by analyzing the quantization errors of VQ techniques from a new angle. Instead of treating the quantization error as a whole, we decompose it into two parts: norm error and angular error. We found that norm error has a more significant influence on inner product than angular error. Based on this observation, we propose norm-explicit quantization (NEQ), which quantizes the norm and the unit-norm direction vector separately. Quantizing the norm explicitly with dedicated codebooks reduces the error in norm, which is beneficial for MIPS, while the direction vector can be quantized with existing VQ techniques without modification. NEQ is simple in that the complexity of both codebook learning and approximate inner product computation is not increased compared with the baseline VQ technique used for direction vector quantization. More importantly, NEQ is general and powerful in that it can significantly boost the performance of many existing VQ techniques for MIPS.
We evaluated NEQ on four popular benchmark datasets, whose cardinalities range from 17K to 100M and whose norm distributions differ significantly. The experimental results show that NEQ improves the performance of PQ [13], OPQ [10], RQ [6] and AQ [2] for MIPS consistently across all datasets and parameter configurations (e.g., the number of codebooks and the required number of top items). NEQ also significantly outperforms the state-of-the-art LSH-based MIPS methods and provides better time-recall performance than the graph-based ipNSW algorithm.
Contributions. Our contributions are threefold. First, we challenge the common wisdom of minimizing the quantization error in existing VQ techniques and question whether it is a suitable design principle for MIPS. Second, we show that norm error has a more significant influence on inner product than angular error, which leads to a more suitable design principle for MIPS. Third, we propose NEQ, a general framework that can be seamlessly combined with existing VQ techniques and consistently improves their performance for MIPS, which benefits applications that involve MIPS.
2 Related Work
In this section, we introduce some popular VQ techniques to facilitate further discussion and discuss the relation between NEQ and some related work.
PQ and OPQ. PQ [13] first partitions the original dataset into $M$ sub-datasets, each containing $d/M$ features from all items. K-means is used to learn a codebook on each sub-dataset independently, and each codeword is a $d/M$-dimensional vector. An item is approximated by the concatenation of its corresponding codewords from each of the $M$ codebooks, i.e., $\tilde{x} = [c^1_{i_1(x)}, c^2_{i_2(x)}, \dots, c^M_{i_M(x)}]$. OPQ [10] uses an orthonormal matrix $R$ to rotate the items before applying PQ. OPQ achieves lower quantization error when the features are correlated or when some features have larger variance than others. However, codebook learning is more complex for OPQ, as it involves multiple rounds of alternating optimization of the codebooks and the rotation matrix $R$.
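As an illustration, the PQ scheme above can be sketched in a few lines of numpy. This is a toy implementation (with hypothetical function names `pq_train`, `pq_encode`, `pq_decode` of our choosing); a real system would use a tuned library implementation:

```python
import numpy as np

def pq_train(X, M, K, iters=10, seed=0):
    """Toy PQ: split the d features into M blocks and run plain K-means
    on each block independently to get M codebooks of K codewords."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    sub = d // M
    codebooks = []
    for m in range(M):
        S = X[:, m * sub:(m + 1) * sub]                 # sub-dataset with d/M features
        C = S[rng.choice(n, K, replace=False)].copy()   # init centers from data points
        for _ in range(iters):                          # Lloyd iterations
            assign = ((S[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
            for k in range(K):
                if (assign == k).any():
                    C[k] = S[assign == k].mean(0)
        codebooks.append(C)
    return codebooks

def pq_encode(x, codebooks):
    """Map each block of x to the index of its nearest codeword."""
    sub = len(x) // len(codebooks)
    return [((C - x[m * sub:(m + 1) * sub]) ** 2).sum(1).argmin()
            for m, C in enumerate(codebooks)]

def pq_decode(codes, codebooks):
    """The approximation is the concatenation of the selected codewords."""
    return np.concatenate([C[k] for k, C in zip(codes, codebooks)])
```

For example, training on a random dataset and round-tripping one item yields an approximation with a quantization error well below the item's norm.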
RQ and AQ. Different from PQ and OPQ, in RQ [6] every codebook covers all $d$ features and each codeword is a $d$-dimensional vector. The original data are used to train the first codebook with K-means, and the residues ($x - c^1_{i_1(x)}$) are used to train the second codebook. This process is recursive in that the $m$-th codebook is trained on the residues from the previous $(m-1)$ codebooks. Similar to RQ, each codebook in AQ [2] also covers all $d$ features. AQ improves over RQ by jointly optimizing all the codebooks: beam search is used for encoding (finding the optimal codeword indexes of an item in the codebooks) given the codebooks, and a least-squares formulation is used to optimize the codebooks given the encoding.
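The recursive residue training of RQ can likewise be sketched with plain K-means (`rq_train` is a hypothetical name; this is a sketch, not the paper's implementation):

```python
import numpy as np

def rq_train(X, M, K, iters=10, seed=0):
    """Toy RQ: each codebook covers all d features; codebook m is trained
    on the residues left by codebooks 1..m-1."""
    rng = np.random.default_rng(seed)
    R = X.copy()                       # residues; initially the original data
    codebooks = []
    for _ in range(M):
        C = R[rng.choice(len(R), K, replace=False)].copy()
        for _ in range(iters):         # plain K-means on the current residues
            assign = ((R[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
            for k in range(K):
                if (assign == k).any():
                    C[k] = R[assign == k].mean(0)
        assign = ((R[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        R = R - C[assign]              # pass what is left to the next codebook
        codebooks.append(C)
    return codebooks
```

Each stage only has to explain what the previous stages left over, which is why RQ typically reaches a lower quantization error than PQ for the same number of codebooks.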
In addition to the VQ techniques introduced above, there are many other VQ techniques, such as CQ [30], TQ [3], LOPQ [16] and LSQ [20]. Although these VQ techniques differ in their quantization strategies (e.g., whether the features are partitioned) and codebook learning algorithms (e.g., K-means or alternating minimization), all of them explicitly or implicitly minimize the quantization error $\|x - \tilde{x}\|$, which is believed to provide good performance for Euclidean NNS. In the next section, we show that this principle does not apply to MIPS.
Existing work. Similar to some other VQ algorithms used for similarity search (e.g., PQ and RQ), the prototype of NEQ can be found in earlier research on signal compression. The shape-gain algorithm [11] separately quantizes the magnitude and direction of a signal to achieve efficiency at some loss in accuracy. Instead of hurting accuracy, NEQ shows that the separate quantization of norm and direction actually improves performance for MIPS. A recent work, multi-scale quantization [27], also explicitly quantizes the norm; its motivation is to better reduce the quantization error when the dynamic range (i.e., the spread of the norm distribution) is large. In contrast, NEQ does not try to minimize the quantization error and is not limited to data with a large dynamic range. In fact, NEQ still provides significant performance improvements even if the items in the dataset have almost identical norms.
3 Analysis of Quantization Error for MIPS
For Euclidean distance, the quantization error provides an upper bound on the error of the approximate Euclidean distance due to the triangle inequality, i.e., $\big|\,\|x - q\| - \|\tilde{x} - q\|\,\big| \le \|x - \tilde{x}\|$. Therefore, almost all VQ techniques try to minimize the quantization error when learning the codebooks. For approximate inner product, $\|x - \tilde{x}\|$ provides only a trivial error bound, because $|x^\top q - \tilde{x}^\top q| \le \|x - \tilde{x}\| \|q\|$ by the Cauchy-Schwarz inequality. As high-dimensional vectors tend to be orthogonal to each other [4], this bound is loose and $|x^\top q - \tilde{x}^\top q|$ can be significantly smaller than $\|x - \tilde{x}\| \|q\|$. Thus, we need to understand the influence of the quantization error on inner product from a new angle. The exact inner product and its codebook-based approximation can be expressed as

$$x^\top q = \|x\| \, \bar{x}^\top q, \qquad \tilde{x}^\top q = \|\tilde{x}\| \, \bar{\tilde{x}}^\top q, \qquad (2)$$
in which $\bar{x} = x/\|x\|$ and $\bar{\tilde{x}} = \tilde{x}/\|\tilde{x}\|$ are the unit-norm direction vectors of $x$ and $\tilde{x}$, respectively. It can be observed from (2) that the accuracy of the approximate inner product depends on two factors: the quality of the norm approximation ($\|\tilde{x}\|$ for $\|x\|$) and the quality of the direction vector approximation ($\bar{\tilde{x}}$ for $\bar{x}$). But how do the two factors affect the quality of the approximate inner product? Does one have a greater influence than the other? To facilitate further analysis, we formally define the inner product error, norm error, and angular error as follows.
Definition 1.
For an item $x$ and its codebook-based approximation $\tilde{x}$, given a query $q$, the inner product error $\epsilon_{ip}$, norm error $\epsilon_n$, and angular error $\epsilon_a$ are given as:

$$\epsilon_{ip} = \frac{|x^\top q - \tilde{x}^\top q|}{|x^\top q|}, \qquad \epsilon_n = \frac{\big|\,\|x\| - \|\tilde{x}\|\,\big|}{\|x\|}, \qquad \epsilon_a = \big\|\bar{x} - \bar{\tilde{x}}\big\|.$$
We define the inner product error and norm error as ratios over the actual values to exclude the scaling effects of $\|x\|$ and $\|q\|$. For the angular error, $\epsilon_a = 0$ if $x$ and $\tilde{x}$ are perfectly aligned in direction.
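As a quick numerical check on these definitions (using the ratio-based forms as reconstructed above; the helper name `errors` is ours), the snippet below computes the three errors for a toy pair and verifies that a pure rescaling of $x$ produces a norm error but no angular error, and that this norm error passes through to the inner product one-to-one:

```python
import numpy as np

def errors(x, x_tilde, q):
    """Inner product, norm, and angular errors from Definition 1 (ratio-based forms)."""
    eps_ip = abs(x @ q - x_tilde @ q) / abs(x @ q)
    eps_n = abs(np.linalg.norm(x) - np.linalg.norm(x_tilde)) / np.linalg.norm(x)
    eps_a = np.linalg.norm(x / np.linalg.norm(x) - x_tilde / np.linalg.norm(x_tilde))
    return eps_ip, eps_n, eps_a

x = np.array([3.0, 4.0])
q = np.array([1.0, 1.0])
x_scaled = 0.9 * x                  # norm error only: the direction is exact

eps_ip, eps_n, eps_a = errors(x, x_scaled, q)
assert np.isclose(eps_a, 0.0)       # perfectly aligned directions
assert np.isclose(eps_ip, eps_n)    # pure norm error passes through 1:1
```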
To analyze the influence of norm error and angular error individually, we need to exclude the influence of the other. Therefore, we use the approximation $\tilde{x}_n = \frac{\|\tilde{x}\|}{\|x\|} x$, which is accurate in direction, to calculate the inner product error caused by the norm approximation. Similarly, we use $\tilde{x}_a = \|x\| \, \bar{\tilde{x}}$, which is accurate in norm, to calculate the inner product error caused by the direction approximation. A norm error of $\epsilon_n$ will cause an inner product error $\epsilon_{ip} = \epsilon_n$ when there is no angular error, as $|x^\top q - \tilde{x}_n^\top q| / |x^\top q| = \big|1 - \|\tilde{x}\|/\|x\|\big| = \epsilon_n$. Theorem 1 formally establishes that there are cases in which an angular error $\epsilon_a$ results in an inner product error $\epsilon_{ip} \le \epsilon_a$.
Theorem 1.
For an item $x$, its approximation $\tilde{x}_a$, which is accurate in norm but inaccurate in direction, and a query $q$, denote the angle between $q$ and $x$ as $\theta$ and assume $\theta \in (0, \pi/2)$, the angle between $x$ and $\tilde{x}_a$ as $\alpha$ and assume $\alpha \in (0, \pi/2)$, and the angle between the two planes defined by $(q, x)$ and $(x, \tilde{x}_a)$ as $\beta$. The inner product error $\epsilon_{ip}$ is not larger than the angular error $\epsilon_a$ if the angle $\beta$ falls in a feasible region determined by $\theta$ and $\alpha$.
We provide an illustration of the vectors in Theorem 1 in Figure 1, and the proof can be found in the supplementary material. We also plot the width of the feasible region of $\beta$ in the range $[0, 2\pi)$, i.e., the difference between the maximum and minimum values of $\beta$ for Theorem 1 to hold, under different configurations of $\theta$ and $\alpha$ in Figure 1. The results show that when both $\theta$ and $\alpha$ are small and $\theta > \alpha$, the inner product error is smaller than the angular error for almost all $\beta$. The required conditions are not very restrictive, as we analyze below.
We consider an item $x$ having a large inner product with $q$, as it is easy to distinguish items having large inner products with the query from those having small inner products; to achieve good performance, a VQ method should be able to distinguish items having large but similar inner products with the query. Firstly, the conditions that $\alpha \in (0, \pi/2)$ and $\alpha$ is small are easy to satisfy, as $\tilde{x}_a$ is the codebook-based approximation of $x$ and should have a small angle with it. Secondly, as $x$ has a large inner product with the query $q$, it should have a small angle with $q$, and therefore the condition that $\theta \in (0, \pi/2)$ and $\theta$ is small is likely to hold. Finally, as $\tilde{x}_a$ is trained to approximate $x$ while $q$ is not, $\theta > \alpha$ is again easy to satisfy. As $q$, $x$ and $\tilde{x}_a$ have small angles with each other, $\beta$ is likely to fall in the feasible region.
Theorem 1 is also supported by the following experiment on the SIFT1M dataset (SIFT1M is sampled from the SIFT100M dataset used in the experiments in Section 5). We used 10,000 randomly selected queries, and the errors were calculated on their ground-truth top-20 MIPS results in the dataset (research on MIPS [23, 25, 12] usually uses values of $k$ ranging from 1 to 50; 20 is the middle of this range). We experimented with PQ and RQ using 8 codebooks, each containing 256 codewords. For each item-query pair $(x, q)$, we plot two points in Figure 2. One (in red) shows the norm error and the inner product error caused by the inaccurate norm (using $\tilde{x}_n$). The other (in gray) shows the angular error and the inner product error caused by the inaccurate direction vector (using $\tilde{x}_a$). The results show that all red points reside on a line with a slope of 1, which verifies that a norm error of $\epsilon_n$ causes an inner product error $\epsilon_{ip} = \epsilon_n$. In contrast, most of the gray points are below the red line, which means that an angular error $\epsilon_a$ usually results in an inner product error $\epsilon_{ip} < \epsilon_a$. We fitted a line to the gray points; the slopes for PQ and RQ are 0.510 and 0.426, respectively. We also plot the influence of norm error and angular error on Euclidean distance in the supplementary material, which shows that angular error has a larger influence than norm error on Euclidean distance.
In conclusion, the results in this section show that norm error has a more significant influence on inner product than angular error does. Therefore, to improve the performance of VQ techniques for MIPS, we should reduce the quantization error in norm. To achieve this goal, we could modify the formulations of the codebook learning problem in existing VQ algorithms to account for norm error (e.g., by incorporating norm error into the cost function or constraints). However, this methodology lacks generality, as each VQ algorithm would need to be modified individually. In contrast, norm-explicit quantization (NEQ) uses the fact that the norm is a scalar summary of the vector and quantizes it explicitly to reduce its error. As a result, NEQ can be naturally combined with any VQ algorithm by using it to quantize the direction vector.
4 Norm-Explicit Quantization
Existing VQ techniques try to minimize the overall quantization error and do not allow explicit control of the norm error and angular error. However, MIPS could benefit from methods that explicitly reduce the error in norm, because an accurate norm is important for MIPS. Therefore, the core idea of NEQ is to quantize the norm and the direction vector of the items separately. The norm is encoded explicitly using separate codebooks to achieve a small error, while the direction vector can be quantized using an existing VQ technique without modification. More specifically, the $M$ codebooks in NEQ are divided into two parts. The first $h$ codebooks are norm codebooks, in which each codeword $u^m_k$ is a scalar, for $1 \le m \le h$ and $1 \le k \le K$. The other $M - h$ codebooks are vector codebooks for the direction vector. In NEQ, the codebook-based approximation of $x$ can be expressed as
$$\tilde{x} = \Big(\sum_{m=1}^{h} u^m_{i_m(x)}\Big) \cdot \sum_{m=h+1}^{M} c^m_{i_m(x)}, \qquad (3)$$
in which $i_1(x), \dots, i_M(x)$ are the codeword indexes of $x$ in the $M$ codebooks. According to (3), the NEQ-based approximate inner product can be calculated using Algorithm 1. Lines 4-6 reconstruct the approximate norm of $x$ and Lines 7-9 compute the inner product between $q$ and the approximate direction vector of $x$. Note that the inner product computation in Line 8 can be replaced by a table lookup when the inner products between $q$ and the codewords are precomputed.
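A sketch of the computation in Algorithm 1 is given below; the function and variable names are ours, and `vector_tables[m][k]` is assumed to hold the precomputed inner product between the query and the $k$-th codeword of the $m$-th vector codebook:

```python
def neq_inner_product(codes, norm_codebooks, vector_tables):
    """Sketch of Algorithm 1. `norm_codebooks[m][k]` is a scalar codeword;
    `vector_tables[m][k]` is a precomputed query-codeword inner product."""
    h = len(norm_codebooks)
    norm = 0.0
    for m in range(h):                     # Lines 4-6: rebuild the (relative) norm
        norm += norm_codebooks[m][codes[m]]
    ip = 0.0
    for m in range(h, len(codes)):         # Lines 7-9: table lookups for the direction part
        ip += vector_tables[m - h][codes[m]]
    return norm * ip                       # one extra multiplication vs. plain VQ
```

Note that the total work is still $M$ lookups plus additions, matching the complexity analysis later in this section.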
The remaining problem is how to train the norm and vector codebooks. A straightforward solution, which trains the norm codebooks with $\|x\|$ and the vector codebooks with the unit-norm direction vector $\bar{x}$, does not work. This is because the codebook-based approximation of the direction vector is not guaranteed to have unit norm, due to the intrinsic norm errors of vector quantization. Therefore, even if we quantize $\|x\|$ accurately with the norm codebooks, $\tilde{x}$ in (3) can still have a large norm error. NEQ solves this problem with the codebook learning process in Algorithm 2.
Line 4 trains the vector codebooks using an existing VQ method, such as PQ or RQ. Denote the resulting approximation of the direction vector as $\tilde{v} = \sum_{m=h+1}^{M} c^m_{i_m(x)}$. Instead of quantizing the actual norm $\|x\|$, NEQ quantizes the relative norm $l(x) = \|x\| / \|\tilde{v}\|$ in Line 7 of Algorithm 2. This design absorbs the norm error of VQ into the relative norm and ensures that the codebook-based approximation in (3) has the same norm as $x$ if $l(x)$ is quantized accurately. As we will show in the experiments, this design also allows NEQ to work for datasets in which the items have almost identical norms. The norm codebooks are learned in a recursive manner similar to RQ: the relative norm $l(x)$ is used to train the first norm codebook with K-means, the residuals ($l(x) - u^1_{i_1(x)}$) are used to train the second, and this process is conducted iteratively. The normalization in Line 3 may look unnecessary, as we could quantize the original item directly with the vector codebooks and define the relative norm accordingly. However, we observed that this alternative does not perform as well as Algorithm 2. One possible reason is that unit vectors are easier to quantize for VQ techniques.
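The relative-norm idea can be demonstrated numerically. In this sketch (all names ours), the direction quantizer of Line 4 is faked by perturbing the true unit vectors, so `v_tilde` stands in for a VQ output that is deliberately not unit norm, and a 1-d K-means builds a single norm codebook as in Line 7:

```python
import numpy as np

def kmeans_1d(values, K, iters=20, seed=0):
    """Plain 1-d K-means for building a scalar norm codebook."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, K, replace=False).astype(float)
    for _ in range(iters):
        assign = np.abs(values[:, None] - centers[None]).argmin(1)
        for k in range(K):
            if (assign == k).any():
                centers[k] = values[assign == k].mean()
    return centers

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)) * rng.uniform(0.5, 2.0, size=(500, 1))
dirs = X / np.linalg.norm(X, axis=1, keepdims=True)

# Stand-in for Line 4: a direction quantizer's output, deliberately NOT unit norm.
v_tilde = dirs + 0.05 * rng.normal(size=dirs.shape)

# Line 7: quantize the *relative* norm, which absorbs the direction quantizer's norm error.
rel = np.linalg.norm(X, axis=1) / np.linalg.norm(v_tilde, axis=1)
centers = kmeans_1d(rel, K=16)
rel_hat = centers[np.abs(rel[:, None] - centers[None]).argmin(1)]

# Per (3), the reconstructed norm is rel_hat * ||v_tilde||; with exact rel it would equal ||x||.
norm_err = np.abs(rel_hat * np.linalg.norm(v_tilde, axis=1) - np.linalg.norm(X, axis=1))
assert norm_err.mean() < 0.1 * np.linalg.norm(X, axis=1).mean()
```

Because the scalar relative norm is cheap to quantize, even a single norm codebook drives the norm error far below what the direction quantizer alone would achieve.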
As a demonstration of the effectiveness of NEQ in reducing the quantization error in norm, we report some statistics on the Yahoo!Music dataset. Keeping the total number of codebooks the same and using only one codebook for norm, norm-explicit RQ reaches a substantially lower norm error than the original RQ under both 8 and 16 codebooks. We will show in the experiments in Section 5 that the lower norm error of NEQ translates into better performance for MIPS.
Setting the number of norm codebooks. Generally, a good $h$ can be chosen by testing the recall-item performance of all configurations (there should be at least 1 and at most $M - 1$ norm codebooks) on a set of sample queries. When the number of codewords in each codebook is 256 (i.e., $K = 256$), we found empirically that using one codebook for norm provides the best performance in most cases. This is because the norm error is already small with one norm codebook; using more codebooks for norm provides limited further reduction in norm error but increases the angular error, as the number of vector codebooks is reduced.
Why not store the norm? As the relative norm $l(x)$ is a scalar, one may wonder why we do not store its exact value to completely eliminate the norm error. The reason is that storing $l(x)$ as a 4-byte floating point number costs too much space, and VQ algorithms are usually evaluated under a fixed per-item space budget (especially when used for data compression). With the usual setting $K = 256$, using $M$ codebooks results in a per-item index size of $M$ bytes. If $l(x)$ is stored exactly, the direction vector can only use $M - 4$ codebooks. Empirically, we found that using 1 norm codebook already makes the norm error very small, which leaves $M - 1$ codebooks for the direction vector and achieves better overall performance.
Complexity analysis. For index building, NEQ learns $M - h$ vector codebooks, while the original VQ method learns $M$ vector codebooks. Although NEQ needs to conduct normalization twice (Line 3 and Line 6 of Algorithm 2) and learn the norm codebooks, the complexity of these operations is low compared with learning the vector codebooks. For inner product computation with lookup tables, the original VQ method needs $M$ lookups and $M - 1$ additions. NEQ needs $h$ lookups and $h - 1$ additions to reconstruct the relative norm, and $M - h$ lookups and $M - h - 1$ additions to compute the inner product with the direction vector; one more multiplication is then needed to assemble the final result. Thus, approximate inner product computation in NEQ costs $M$ lookups and $M - 2$ additions plus one multiplication, which is essentially the same as the original VQ method. Therefore, NEQ does not increase the complexity of codebook learning or approximate inner product computation.
We would like to emphasize that the strength of NEQ lies in its simplicity and generality. NEQ is simple in that it uses existing VQ methods to quantize the direction vector without modifying their formulations of the codebook learning problem. This makes NEQ easy to implement, as off-the-shelf VQ libraries can be reused. NEQ is also general in that it can be combined with any VQ method, including PQ, OPQ, RQ and AQ. In the supplementary material, we show that NEQ with two codebooks can adopt the multi-index algorithm [1] for candidate generation in MIPS. We will also show in Section 5 that NEQ boosts the performance of many VQ methods for MIPS.
5 Experiments
Table 1: Statistics of the datasets.

Dataset       | Netflix | Yahoo!Music | ImageNet  | SIFT100M
# items       | 17,770  | 136,736     | 2,340,373 | 100,000,000
# dimensions  | 300     | 300         | 150       | 128
Experiment setting. We used four popular datasets, Netflix, Yahoo!Music, ImageNet and SIFT100M, whose statistics are summarized in Table 1. Netflix and Yahoo!Music record user ratings for items. We obtained item and user embeddings from these two datasets using alternating least squares (ALS) [29] based matrix factorization; the item embeddings were used as dataset items, while the user embeddings were used as queries. ImageNet and SIFT100M contain descriptors of images. The four datasets vary significantly in norm distribution (see details in the supplementary material), and we deliberately chose them to test NEQ's robustness to different norm distributions: ImageNet has a long tail in its norm distribution, items in SIFT100M have almost the same norm, and for Netflix and Yahoo!Music most items have a norm close to the maximum.
Following the standard protocol for evaluating VQ techniques [2, 3, 30], we used the recall-item curve as the main performance metric; it measures the ability of a VQ method to preserve the similarity ranking of the items. To obtain the recall-item curve, all items in a dataset are first sorted according to their codebook-based approximate inner products. For a query, denote the set of items ranked in the top $N$ as $R_N$ and the set of ground-truth top-$k$ MIPS results as $G_k$; the recall is $|R_N \cap G_k| / |G_k|$. At each value of $N$, we report the average recall over 10,000 randomly selected queries. We do not report the running time, as the VQ methods have almost identical running time given the same number of codebooks $M$ (AQ and RQ have more expensive inner product table computation than PQ and OPQ, but this difference has negligible impact on the running time when the dataset is large).
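The recall computation above can be written down directly (a small helper with names of our choosing):

```python
import numpy as np

def recall_at_N(approx_scores, true_scores, N, k):
    """Recall-item metric: fraction of the ground-truth top-k MIPS results
    that appear among the top-N items ranked by approximate inner product."""
    top_N = set(np.argsort(-approx_scores)[:N])   # top N under the approximation
    top_k = set(np.argsort(-true_scores)[:k])     # ground-truth top k
    return len(top_N & top_k) / len(top_k)
```

For example, if the approximation swaps the items ranked second and third, recall at $N = 2$ for $k = 2$ drops to 0.5 while recall at $N = 3$ is still 1.0.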
For a VQ method X (e.g., RQ), its NEQ version is denoted as NE-X (e.g., NERQ). The NEQ variants use the same total number of codebooks (norm codebooks plus vector codebooks) as the original VQ methods. Each codebook has $K = 256$ codewords, and only one codebook is used for norm in NEQ. The default value of $k$ (the number of target top inner product items) is 20. For Netflix, the codebooks were trained using the entire dataset. For the other datasets, the codebooks were trained using a sample of size 100,000.
Improvements over existing VQ methods. We report the performance of the original VQ methods (in dotted lines) and their NEQ-based variants (in solid lines) in Figure 3. The number of codebooks $M$ is 8. We do not report the performance of AQ and NEAQ on SIFT100M, as the encoding process of AQ did not finish in 72 hours. The results show that the NEQ-based variants consistently outperform their counterparts on all four datasets. The performance improvements of NEQ over PQ and OPQ are much more significant than over AQ and RQ. Moreover, there is a trend that the performance benefit increases with the dataset cardinality. These two phenomena can be explained by the fact that reducing the error in norm is more helpful when the quantization error is large. With 8 codebooks, the small Netflix dataset is already quantized accurately, while the SIFT100M dataset is not well quantized. With the same number of codebooks, PQ and OPQ generally have larger quantization errors than RQ and AQ, and thus the performance gain of NEQ is more significant.
Next, we test the robustness of NEQ to the parameter configurations, i.e., the number of codebooks $M$ and the value of $k$. We report the performance of RQ and NERQ on the SIFT100M dataset in Figure 5 and Figure 5 (the results for the other VQ methods and datasets can be found in the supplementary material). Figure 5 shows that NERQ outperforms RQ across different numbers of codebooks. Figure 5 shows that NERQ consistently outperforms RQ for different values of $k$ with 8 codebooks, and the performance gap is similar across the values of $k$. The results in the supplementary material show that the robustness of NEQ to the parameter configurations also holds for PQ, OPQ and AQ.
Comparison with other methods. Norm-ranging LSH [28] and SimpleLSH [23] use binary hashing and are the state-of-the-art LSH-based algorithms for MIPS. QUIP [12] is a vector quantization method specialized for MIPS, which explicitly minimizes the squared inner product error $(x^\top q - \tilde{x}^\top q)^2$ to learn the codebooks. QUIP has several variants, and we used QUIP-cov(x) for a fair comparison, as the other variants use knowledge about the queries while NEQ does not. According to the QUIP paper, the performance gap between the other variants and QUIP-cov(x) is small on the ImageNet dataset. For norm-ranging LSH, we partitioned the dataset into 64 sub-datasets as recommended in [28]. We report the performance results on the ImageNet dataset in Figure 7 (left). SimpleLSH and norm-ranging LSH used 64-bit binary codes, while NEPQ and QUIP used two codebooks, each containing 256 codewords. This means that the per-item index size of NEPQ (and QUIP) is 16 bits, only a quarter of that of the LSH-based methods. The results show that the vector quantization based methods (NEPQ and QUIP) outperform the LSH-based algorithms with a smaller per-item index size. Moreover, NEPQ significantly outperforms QUIP even though QUIP uses a more complex codebook learning strategy.
We also compared the recall-time performance of NERQ with the proximity graph-based ipNSW algorithm [21] on the ImageNet dataset in Figure 7 (right). ipNSW was shown in [21] to achieve state-of-the-art recall-time performance among existing MIPS algorithms. In this experiment, NERQ with two codebooks was used for candidate generation (by combining it with the multi-index algorithm [1]), and the candidates were verified by calculating exact inner products. The results show that NERQ achieves higher recall than ipNSW given the same query processing time. As the implementation may affect the running time, we also plot recall vs. the number of inner product computations in the supplementary material, which shows that NERQ requires fewer inner product computations at the same recall. However, we found that ipNSW provides better recall-time performance than NEQ on the SIFT1M dataset. Although the main design goal of NEQ is good recall-item performance rather than recall-time performance, this experiment shows that using NEQ to generate candidates is beneficial on some datasets.
Insights. A natural question arises after observing the good performance of NEQ: does NEQ only reduce the error in norm, or does it also reduce the overall quantization error as a byproduct of its design? To answer this question, we compared the quantization error ($\|x - \tilde{x}\|$ normalized by the maximum norm in the dataset) and the norm error of RQ and NERQ in Figure 7. The number of codebooks is 8, and the reported errors are averaged over all items in the dataset. The results show that NERQ indeed reduces the norm error significantly, but its quantization error is slightly larger than that of RQ on all four datasets. This can be explained by the fact that NERQ uses 1 codebook to encode the norm and thus has fewer vector codebooks than RQ. This result shows that a smaller quantization error does not necessarily translate into better performance for MIPS. Originally designed for Euclidean distance, existing VQ methods minimize the quantization error; with NEQ, we have shown that minimizing the quantization error is not a suitable design principle for inner product, due to its unique properties.
6 Conclusions
In this paper, we questioned whether minimizing the quantization error is a suitable design principle of VQ techniques for MIPS. We found that the quantization error in norm has a great influence on inner product and can be significantly reduced by encoding the norm explicitly with separate codebooks. Based on this observation, we proposed NEQ, a general paradigm that specializes existing VQ techniques for MIPS. NEQ is simple, as it does not modify the codebook learning process of existing VQ methods; it is also general, as it can be easily combined with existing VQ methods. Experimental results show that NEQ consistently provides good performance on various datasets and parameter configurations. Our work shows that inner product requires different design principles from Euclidean distance for VQ techniques, and we hope to inspire more research in this direction.
References
 [1] (2012) The inverted multi-index. In CVPR, pp. 3069–3076. Cited by: §1, §4, §5.
 [2] (2014) Additive quantization for extreme vector compression. In CVPR, pp. 931–938. Cited by: §1, §1, §2, §5.
 [3] (2015) Tree quantization for large-scale similarity search and classification. In CVPR, pp. 4240–4248. Cited by: §2, §5.
 [4] (2013) Distributions of angles in random packing on spheres. Journal of Machine Learning Research 14 (1), pp. 1837–1864. Cited by: §3.
 [5] (2016) Hierarchical memory networks. arXiv preprint arXiv:1605.07427. Cited by: §1.
 [6] (2010) Approximate nearest neighbor search by residual vector quantization. Sensors 10 (12), pp. 11259–11273. Cited by: §1, §1, §2.
 [7] (2013) Fast, accurate detection of 100, 000 object classes on a single machine. In CVPR, pp. 1814–1821. Cited by: §1.
 [8] (2018) Link and code: fast indexing with graphs and compact regression codes. In CVPR, pp. 3646–3654. Cited by: §1.
 [9] (2010) Object detection with discriminatively trained partbased models. IEEE Trans. Pattern Anal. Mach. Intell. 32, pp. 1627–1645. Cited by: §1.
 [10] (2013) Optimized product quantization for approximate nearest neighbor search. In ICCV, pp. 2946–2953. Cited by: §1, §1, §2.
 [11] (2012) Vector quantization and signal compression. Vol. 159, Springer Science & Business Media. Cited by: §2.
 [12] (2016) Quantization based fast inner product search. In Artificial Intelligence and Statistics, pp. 482–490. Cited by: §5, footnote 2.
 [13] (2011) Product quantization for nearest neighbor search. IEEE Trans. Pattern Anal. Mach. Intell. 33 (1), pp. 117–128. Cited by: §1, §1, §2.
 [14] (2017) Billion-scale similarity search with GPUs. CoRR. Cited by: §1.
 [15] (2017) Scalable generalized linear bandits: online computation and hashing. In Advances in Neural Information Processing Systems, pp. 99–109. Cited by: §1.
 [16] (2014) Locally optimized product quantization for approximate nearest neighbor search. In CVPR, pp. 2329–2336. Cited by: §2.
 [17] (2012) Efficient retrieval of recommendations in a matrix factorization framework. In CIKM, pp. 535–544. Cited by: §1.
 [18] (2009) Matrix factorization techniques for recommender systems. IEEE Computer 42, pp. 30–37. Cited by: §1.
 [19] (2017) FEXIPRO: fast and exact inner product retrieval in recommender systems. In SIGMOD, pp. 835–850. Cited by: §1.
 [20] (2016) Revisiting additive quantization. In European Conference on Computer Vision, pp. 137–153. Cited by: §2.
 [21] (2018) Nonmetric similarity graphs for maximum inner product search. In NeurIPS, pp. 4726–4735. Cited by: §1, §5.
 [22] (2016) Learning and inference via maximum inner product search. In International Conference on Machine Learning, pp. 2587–2596. Cited by: §1.
 [23] (2015) On symmetric and asymmetric LSHs for inner product search. In ICML, pp. 1926–1934. Cited by: §1, §5, footnote 2.
 [24] (2012) Maximum inner-product search using cone trees. In KDD, pp. 931–939. Cited by: §1.
 [25] (2014) Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In NIPS, pp. 2321–2329. Cited by: §1, footnote 2.
 [26] (2015) LEMP: fast retrieval of large entries in a matrix product. In SIGMOD, pp. 107–122. Cited by: §1.
 [27] (2017) Multiscale quantization for fast similarity search. In Advances in Neural Information Processing Systems, pp. 5745–5755. Cited by: §2.
 [28] (2018) Norm-ranging LSH for maximum inner product search. In NeurIPS, pp. 2956–2965. Cited by: §5.
 [29] (2013) NOMAD: non-locking, stochastic multi-machine algorithm for asynchronous and decentralized matrix completion. CoRR. Cited by: §5.
 [30] (2014) Composite quantization for approximate nearest neighbor search. In ICML, pp. 838–846. Cited by: §2, §5.