
Sublinear Partition Estimation

08/07/2015
by   Pushpendre Rastogi, et al.

The output scores of a neural network classifier are converted to probabilities via normalizing over the scores of all competing categories. Computing this partition function, Z, is then linear in the number of categories, which is problematic as real-world problem sets continue to grow in categorical types, such as in visual object recognition or discriminative language modeling. We propose three approaches for sublinear estimation of the partition function, based on approximate nearest neighbor search and kernel feature maps and compare the performance of the proposed approaches empirically.



Code Repositories

cylsh: Cython/Python bindings of E2LSH by Andoni and Symmetric LSH for MIPS by Neyshabur.

1 Introduction

Neural networks (and log-linear models) have outperformed other machine learning frameworks on a number of difficult multi-class classification tasks such as object recognition in images and large-vocabulary speech recognition [19, 10, 25]. These classification tasks become “large scale” as the number of potential classes/categories increases. For instance, discriminative language models need to choose the most probable word from the entire vocabulary, which can exceed 100,000 words in the case of English. In the field of computer vision, the latest datasets for object recognition contain more than 10,000 object classes, and the number of categories increases every year [5].

In certain applications neural networks may be used as sub-systems of a larger model, in which case it may be necessary to convert the unnormalized score assigned to a class by the neural network into a probability. To perform this conversion we need to compute the so-called partition function of the neural network, which is simply the sum of the exponentiated scores assigned to all the classes. Let K be the number of output classes and let W = [w_1, …, w_K], where w_i represents the weights of the i-th class. Also, let s_i = w_i · x be the score of the i-th class for input features x; then the partition function is defined as:

Z(x) = sum_{i=1}^{K} exp(w_i · x)    (1)

The problem of assigning a probability to the most probable class can be stated as:

y* = argmax_{i ∈ {1, …, K}} w_i · x    (2)
p(y* | x) = exp(w_{y*} · x) / Z(x)    (3)

Recently, [21, 17, 3] have presented methods for solving (2) by building upon frameworks for fast randomized nearest neighbor search such as Locality Sensitive Hashing and randomized k-d trees. This paper presents methods for estimating the value of Z(x) required in (3). While brute-force parallelization is one effective strategy for reducing the time needed for this computation, our goal is to estimate the partition function in asymptotically smaller runtime.
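As a baseline, the linear-time computation in (1)–(3) can be sketched as follows; the array names and toy problem sizes are illustrative assumptions:

```python
import numpy as np

def partition_function(W, x):
    """Brute-force partition function Z(x) = sum_i exp(w_i . x).
    Linear in the number of classes K -- the cost this paper aims to beat."""
    scores = W @ x                      # s_i = w_i . x for every class i
    return np.exp(scores).sum()

def most_probable_class(W, x):
    """Eq. (2)-(3): argmax of the scores, normalized by Z."""
    scores = W @ x
    y_star = int(np.argmax(scores))
    p_star = np.exp(scores[y_star]) / np.exp(scores).sum()
    return y_star, p_star

rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 50))         # K = 1000 classes, d = 50 features
x = rng.normal(size=50)
Z = partition_function(W, x)
y_star, p_star = most_probable_class(W, x)
```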

2 Previous Work

A number of techniques have been used previously to speed up the computation of the partition function in artificial neural networks, most prominently in neural language modeling, where the large vocabulary needed for state-of-the-art performance makes the normalizing step especially expensive. The prior work can be categorized by technique as follows:

Importance Sampling: The partition function can be written as Z = K · E_{i∼U}[exp(s_i)], where U is the uniform distribution over the classes {1, …, K}. However, attempts to estimate Z by simply drawing samples from the uniform distribution and replacing the expectation by its sample estimate are marred by the high variance of the estimate.

[4] were the first to use importance sampling to reduce the variance of the sample-average estimator. Their aim was to speed up the training of neural language models, and they used an n-gram language model as the proposal distribution. Though they did not use their method to actually compute the partition function at inference time, it could easily be extended for that purpose. The problem with their method, however, is that it requires an external model to construct the proposal distribution. Such an external model requires extra engineering and domain-specific knowledge that may not be available.
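This variance trade-off can be sketched numerically; the softmax-style tilted proposal below is an illustrative stand-in for the n-gram proposal of [4], not their construction:

```python
import numpy as np

def importance_sample_Z(scores, proposal_probs, n_samples, rng):
    """Estimate Z = sum_i exp(s_i) = E_{i~q}[exp(s_i) / q_i] by sampling
    from a proposal q instead of summing over all K classes.  A proposal
    concentrated on high-score classes lowers the estimator's variance."""
    idx = rng.choice(len(scores), size=n_samples, p=proposal_probs)
    return np.mean(np.exp(scores[idx]) / proposal_probs[idx])

rng = np.random.default_rng(1)
scores = rng.normal(size=10_000)
Z_true = np.exp(scores).sum()

# Uniform proposal (higher variance) vs. a proposal tilted toward large scores.
uniform = np.full(len(scores), 1 / len(scores))
tilted = np.exp(0.5 * scores)
tilted /= tilted.sum()
Z_unif = importance_sample_Z(scores, uniform, 2000, rng)
Z_tilt = importance_sample_Z(scores, tilted, 2000, rng)
```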

Hierarchical Decomposition: [13] introduced a method for breaking the original K-way decision problem into a hierarchical one that requires only O(log K) computations. The method requires changing the model from a single decision problem into a chain of decision problems computed over a tree. Also, since there is no a priori single most preferable way of growing such a tree, an external model is again needed to create the hierarchy.

Self-Normalization: Some of the prior work side-steps the problem of computing the partition function and trains neural networks with the added constraint that the partition function should remain close to 1 for inputs seen at test time.

[14] used Noise Contrastive Estimation (see Section 3 for a brief overview and Section 4.2 for a detailed explanation of NCE) with a heuristic that clamps the value of Z to 1 during training. They demonstrated empirically that the partition function at test time then also remains close to 1, though they provided no theoretical analysis of how close to 1 it stays.

On the other hand, [6] added a penalty term to the training objective of the neural network and demonstrated empirically, on a large-scale machine translation task, that accuracy does not suffer greatly even when the partition function is assumed to be close to 1 for all inputs at test time. Recently, [1] showed that after training on n examples, with high probability the expected value of log Z lies in a small interval centered at 0; the size of the interval depends on the number of output classes and the number of features of the log-linear model, and on upper bounds on the infinity norms of the weight and feature vectors.

3 Background

Maximum Inner Product Search (MIPS): MIPS refers to the problem of finding the points that have the highest inner product with an input query vector. (Note that simply by querying for −q one can also find the vectors with the smallest inner product.) Let us define S_k(q) to be the set of k vectors that have the highest inner product with the query vector q, and let us assume for simplicity that MIPS algorithms allow us to retrieve S_k(q) for arbitrary q and k in sublinear time. The exact runtime depends on the dataset and the indexing algorithm chosen for retrieval; for example, one could use the popular library FLANN [16, 15], PCA-Trees [24], or LSH itself [7] to retrieve S_k(q). [9] presented a measure of the hardness of a dataset for nearest neighbor algorithms.

[21, 22] and [17] presented methods for MIPS based on Asymmetric Locality Sensitive Hashing (LSH), i.e., they use two separate hash functions for the query and the data. A different approach, presented by [3, 17], reduces the problem of maximum inner product search over a set of d-dimensional vectors to nearest neighbor search in Euclidean distance over a set of (d+1)-dimensional vectors. These algorithms enable us to draw high-probability samples from the unnormalized distribution over classes induced by a query, and we will use this property heavily in creating our estimators.

NCE: NCE was introduced by [8] as an objective function that can be computed and optimized more efficiently than the likelihood objective in cases where normalizing a distribution is expensive. The same paper proved that the NCE objective has a unique maximum, achieved at the same parameter values that maximize the true likelihood. Moreover, the normalization constant itself can be estimated as an outcome of the optimization. The NCE objective requires at least one sample from the true distribution, which may be unnormalized, and samples from a noise distribution, which must be normalized.

Kernel Feature Maps: The function k(x, y) = exp(x · y) depends only on the dot product of x and y and is therefore a dot-product kernel [20, 11]. Every kernel that satisfies certain conditions (it is sufficient for the kernel to be analytic with positive coefficients in its Taylor expansion around zero; these conditions are satisfied by exp) has an associated feature map such that the kernel can be decomposed as a countable sum of products of feature functions [23]:

k(x, y) = sum_{j=1}^{∞} λ_j φ_j(x) φ_j(y)

If the values of λ_j decrease fast enough then one can approximate the kernel up to some small tolerance by a finite summation of its feature maps:

k(x, y) ≈ sum_{j=1}^{D} λ_j φ_j(x) φ_j(y)
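As a sanity check of the truncation idea, the exponential dot-product kernel can be approximated by the first few terms of its Maclaurin series; each term (x·y)^n is itself an inner product of degree-n tensor features, so a truncated series is a finite feature decomposition. The truncation order and vector scales below are illustrative assumptions:

```python
import numpy as np
from math import factorial

def truncated_exp_kernel(x, y, order):
    """exp(x.y) = sum_n (x.y)^n / n!.  Each term (x.y)^n is an inner
    product of degree-n tensor features, so truncating the series gives a
    finite-dimensional feature-map approximation of the kernel."""
    t = float(x @ y)
    return sum(t**n / factorial(n) for n in range(order + 1))

rng = np.random.default_rng(2)
x = rng.normal(size=8) * 0.3            # small norms => fast series decay
y = rng.normal(size=8) * 0.3
approx = truncated_exp_kernel(x, y, order=10)
exact = float(np.exp(x @ y))
```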

Log-Normal Distribution of exp(s_i): If we assume that x ∼ N(μ, Σ), then each score s_i = w_i · x is also a normal random variable, and exp(s_i) is log-normally distributed. In this case Z is a sum of dependent log-normal random variables. There is no known analytical formula for the distribution of Z; however, it is known in general that the distribution of the sum is governed by its largest terms when the variance of the scores is high enough, due to a result by [2], who show that the tail of the sum is determined by the summands with the highest variance and mean. This suggests that one could reasonably estimate Z, when its value is high enough, by exponentiating and then summing only the top few s_i. Unfortunately, when the variance of the scores is not large, all terms become significant for calculating Z; consider the pathological case where all scores are equal, so that every one of the K classes contributes equally to Z. In practice, for most queries, the variance is not large enough to ignore the contributions of the tail of the score distribution.
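The two regimes can be simulated directly; the score variances below are arbitrary choices meant only to contrast a heavy-tailed case (where the top scores dominate Z) with a flat case (where the tail cannot be ignored):

```python
import numpy as np

rng = np.random.default_rng(3)
K = 10_000
# High-variance scores: a handful of classes dominate Z = sum_i exp(s_i).
s_high = rng.normal(0.0, 8.0, size=K)
# Low-variance scores: every class contributes; the tail dominates Z.
s_low = rng.normal(0.0, 0.1, size=K)

def top_k_share(scores, k):
    """Fraction of Z recovered by the k largest scores alone."""
    terms = np.exp(np.sort(scores)[::-1])
    return terms[:k].sum() / terms.sum()

share_high = top_k_share(s_high, 100)   # most of Z from 1% of the classes
share_low = top_k_share(s_low, 100)     # barely more than k/K of Z
```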

4 Methods

4.1 MIMPS: MIPS Based Importance Sampling

In Section 2 we discussed the importance-sampling approach to estimating the partition function and pointed out that prior work relies on an external model or proposal distribution that can produce samples from the high-probability region. By utilizing algorithms that solve the MIPS problem, however, we can avoid engineering a proposal distribution, since we can retrieve the set S_k(x) directly (see Section 3). A naive estimator, which we call Naive MIMPS or NMIMPS, that uses S_k(x) is the following:

Ẑ = sum_{i ∈ S_k(x)} exp(s_i)    (4)

Unfortunately NMIMPS requires k to be very high to be accurate, which is not realistic. Let T_l represent a set of l vectors sampled uniformly from among the vectors not in S_k(x); then a better way of estimating Z is:

Ẑ = sum_{i ∈ S_k(x)} exp(s_i) + ((K − k)/l) · sum_{j ∈ T_l} exp(s_j)    (5)

In effect we are assuming that the values at the tail end of the probability distribution lie in a small range, so that even a small sample has small variance. A better estimator could be created by modeling the tail of the distribution, perhaps as a power-law curve.
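A minimal sketch of the estimator in (5), with exact top-k retrieval standing in for an oracle MIPS routine; the sizes k and l and the score distribution are assumptions for illustration:

```python
import numpy as np

def mimps_estimate(scores, k, l, rng):
    """MIMPS sketch: sum the top-k exponentiated scores exactly (as an
    oracle MIPS routine would supply them), then estimate the tail from
    l uniform samples of the remaining K - k classes, as in eq. (5)."""
    K = len(scores)
    order = np.argsort(scores)[::-1]
    head, tail = order[:k], order[k:]
    head_sum = np.exp(scores[head]).sum()
    sample = rng.choice(tail, size=l, replace=False)
    tail_est = (K - k) * np.exp(scores[sample]).mean()
    return head_sum + tail_est

rng = np.random.default_rng(4)
scores = rng.normal(0.0, 2.0, size=100_000)
Z_true = np.exp(scores).sum()
Z_hat = mimps_estimate(scores, k=1000, l=1000, rng=rng)
```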

4.2 MINCE: MIPS Based NCE

NCE is a general parameter estimation technique that can be used anywhere in place of maximum likelihood estimation. Specifically, if we treat the value of the partition function as a parameter of the unnormalized distribution over classes induced by the query, then, ideally, we can estimate it by generating samples from the true distribution and from a noise distribution. Since NCE requires samples from the true distribution, which we can generate by querying for S_k(x), methods that perform MIPS can be used to estimate the value of Z as well. If our noise distribution is uniform over the vectors not present in S_k(x), then the NCE objective is as follows:

(6)

It is worth noting that the objective simplifies into the very convenient form shown in (7), for which even the third derivatives can be computed efficiently. Efficient computation of the third derivative, exploited through Halley's method, leads to considerable speedup during optimization compared to using only second derivatives with Newton's method.

(7)
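To make the idea concrete, here is a minimal one-parameter NCE sketch that recovers log Z by gradient ascent on the logistic NCE objective; the sampling scheme, learning rate, and iteration count are our assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def mince_logZ(scores, n_data, n_noise, rng, steps=500, lr=0.5):
    """One-parameter NCE: treat c = log Z as the only unknown.  Data is
    drawn from the true softmax over the scores, noise uniformly over the
    classes; the NCE objective is concave in c, so gradient ascent works."""
    K = len(scores)
    p_true = np.exp(scores - scores.max())
    p_true /= p_true.sum()
    data = rng.choice(K, size=n_data, p=p_true)
    noise = rng.integers(0, K, size=n_noise)
    nu = n_noise / n_data

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    c = 0.0
    for _ in range(steps):
        # G(i) = log p_model(i) - log(nu * p_noise(i)), p_model(i) = exp(s_i - c)
        g_data = scores[data] - c - np.log(nu / K)
        g_noise = scores[noise] - c - np.log(nu / K)
        grad = -(1.0 - sigmoid(g_data)).sum() + sigmoid(g_noise).sum()
        c += lr * grad / n_data
    return c

rng = np.random.default_rng(5)
scores = rng.normal(size=1000)
logZ_true = float(np.log(np.exp(scores).sum()))
logZ_hat = mince_logZ(scores, n_data=5000, n_noise=50_000, rng=rng)
```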

We also briefly note that one way of estimating the partition function could be to assume a parametric form for the output distribution and then use maximum likelihood estimation, which is the most efficient estimator possible when the form of the distribution is known. However, even though the individual class scores follow a log-normal distribution, it is not clear how one could use MLE to compute the partition function in our setting.

4.3 FMBE: Feature Map Based Estimation

In Section 3 we sketched how kernels can be linearized into a sum over products of feature maps. This decomposition can be used to speed up the computation of Z as follows:

Z(x) = sum_{i=1}^{K} k(w_i, x) ≈ sum_{j=1}^{D} λ_j φ_j(x) · c_j,  where c_j = sum_{i=1}^{K} φ_j(w_i)    (8)

Essentially, one can precompute the constants c_j during training, reducing the test-time computation to a D-term summation against the features φ_j(x). Overall this scheme saves time if D is much smaller than K. The difficulty is that the mapping φ is in general unknown. Although [23] gave explicit formulas for deriving the eigenvalues and eigenfunctions of dot-product kernels in terms of spherical harmonics, we are unfortunately not aware of any method for efficiently computing spherical harmonics in high dimensions. Instead we rely on a technique developed by [11] for creating a randomized kernel feature map that approximates a dot-product kernel as follows:

φ̂_j(x) = sqrt(a_N · p^(N+1)) · prod_{i=1}^{N} (ω_i · x)    (9)

k(x, y) ≈ (1/D) · sum_{j=1}^{D} φ̂_j(x) φ̂_j(y)    (10)

Here p is a hyper-parameter, usually taken to be 2; a_n is the n-th coefficient in the Taylor expansion of the kernel function; N is chosen by drawing a sample from a geometric distribution with P(N = n) = p^{-(n+1)}; and each ω_i is a binary random vector whose coordinates are chosen from {−1, +1} with equal probability. Refer to [11] for details. ([11] also present one more algorithm for creating random feature maps that we do not discuss here.)
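A sketch of this random Maclaurin map specialized to the exponential kernel (whose Maclaurin coefficients are a_n = 1/n!); the dimensions, scales, and feature count below are illustrative assumptions:

```python
import numpy as np
from math import factorial

def random_maclaurin_features(X, D, rng, p=2):
    """Kar-Karnick-style random Maclaurin features for k(x, y) = exp(x.y).
    Per output feature: draw an order N ~ Geometric (P(N=n) = p^-(n+1)),
    then N Rademacher vectors w_1..w_N, and emit
    sqrt(a_N * p^(N+1)) * prod_i (w_i . x) with a_N = 1/N!."""
    n_pts, d = X.shape
    Z = np.empty((n_pts, D))
    for j in range(D):
        N = rng.geometric(1 / p) - 1              # support 0, 1, 2, ...
        coef = np.sqrt(p ** (N + 1) / factorial(N))
        feat = np.full(n_pts, coef)
        for _ in range(N):
            w = rng.choice([-1.0, 1.0], size=d)   # Rademacher vector
            feat *= X @ w
        Z[:, j] = feat
    return Z / np.sqrt(D)                          # so Z(x) . Z(y) ~ exp(x.y)

rng = np.random.default_rng(6)
x = rng.normal(size=10) * 0.15
y = rng.normal(size=10) * 0.15
Z = random_maclaurin_features(np.stack([x, y]), D=5000, rng=rng)
approx = float(Z[0] @ Z[1])
exact = float(np.exp(x @ y))
```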

5 Experiments

We want to answer the following questions: (1) As a function of k, if we have access to a system that can retrieve S_k(q) (defined in Section 3 as the set of k vectors with the highest inner product with the query q), what accuracy can the proposed algorithms achieve? (2) For a given k, how does the accuracy change in the face of retrieval error (such as would result from an approximate nearest neighbor routine for MIPS)? (3) What is the accuracy of our proposed methods compared to existing methods for estimating the partition function?

5.1 Oracle Experiments

In Section 3, in the discussion of the log-normal distribution of the scores, we explained how the number of neighbors needed to estimate Z depends on the value of Z itself (see also Figure 1). Our first set of experiments relies on a real-world, publicly available collection of vectors: the neural word embeddings released by [12], consisting of 3 million 300-dimensional vectors trained on a 100-billion-token monolingual corpus of news text (code.google.com/p/word2vec). Each vector represents a single word or phrase. More pertinently, the dot product between the vectors v_w and v_c, associated with the vocabulary items w and c respectively, represents the unnormalized log probability of observing w given c:

log p(w | c) = v_w · v_c − log Z(c)    (11)

For our experiments we used the first 100,000 of the 3 million word vectors, and all experiments in this subsection are on this set. Note that we do not normalize the vectors in any way: this keeps us true to the real-world situation in which the vectors are the weights of a trained neural network and therefore cannot be modified.

Figure 1: CDF over vocabulary items sorted in descending order from left to right according to their individual contribution to the distribution. Every curve is associated with a distinct context word marked in the legend. The bracketed numbers in the legend indicate the frequency of occurrence of these words in an English Wikipedia corpus. We can see that high frequency, common words tend to induce flat distributions.

In Figure 1 we show CDFs over words given a context, sorted so that the words contributing the most probability appear to the left. We can see that relatively few nearest neighbors (in terms of largest dot product) are needed to recover 80% of the true value of the partition function for the rare words Chipotle and Kobe_Bryant, but close to 80K neighbors are needed for common words with high corpus frequency. This is explained by the fact that common terms such as “The” occur in a wide variety of contexts and therefore induce a somewhat flat probability distribution over words. These patterns indicate that the Naive MIMPS estimator would need an unreasonably large number of nearest neighbors to correctly estimate the partition function for common words, so we do not experiment with it further and focus on MIMPS, MINCE and FMBE.

We implemented MIMPS, MINCE and FMBE based on an oracle ability to recover S_k(q), to which we then add errors in a deterministic fashion. The resulting estimates of Z are tabulated by their mean absolute relative error (percentage absolute relative error = 100 · |Ẑ − Z| / Z).

Our query set consists of items taken from across the vectors chosen initially. Each query represents the context (features) that is best “classified” by one of the many categories. In the case of word embeddings and language models, this would be some preceding word context, extracted and used to measure the surprisal of the next word in a sequence (surprisal being a function of the probability the model assigns to an observation given context, and that probability assignment requiring the computation of Z).

We simulate this context by taking the representation of a given item from the vocabulary (a query vector) and randomly adding varied levels of noise with controlled relative norms. Every experimental setting was run three times with different seeds to keep the standard error low.

Table 1 presents the hyper-parameter tuning results for the different algorithms (UNIFORM, MIMPS and MINCE). We see a symmetric behavior in the table for MIMPS, which is surprising, and the uniform case (which we model as a special case of MIMPS with k = 0) performs badly. It is good to see that at k = 1000 and l = 1000 the error in Ẑ is quite low (0.8%), but more exciting that at k = 100 and l = 100 the error is only 7.1% with only 0.1% standard error. This means that by retrieving only 0.1% of the original vocabulary one can estimate the value of Z with low error. The MINCE and FMBE estimators do not fare well, although the decrease in error of MINCE as the number of noise samples is increased agrees with intuition. The FMBE algorithm had high error at both feature-map dimensionalities we tried, with low standard error in both cases. Clearly FMBE would require a far higher number of dimensions in the feature map created through random projections before giving reasonable results, and it might be better to experiment with newer methods for generating kernel feature maps that come with better theoretical guarantees, e.g. by [18]. We defer this investigation to future work.

                 l=1000             l=100            l=10
Uniform          101.8 ± 3.1        117.3 ± 10.4     97.3 ± 10.5
MIMPS (k=1000)     0.8 ± 0.0          2.7 ± 0.0       8.2 ± 0.1
MIMPS (k=100)      2.4 ± 0.0          7.1 ± 0.1      16.1 ± 0.2
MIMPS (k=10)       8.1 ± 0.1         17.1 ± 0.3      27.4 ± 0.7
MIMPS (k=1)       28.7 ± 0.6         39.3 ± 2.1      47.0 ± 2.7
MINCE (k=1000) 96285.4 ± 2124.1   12413.0 ± 363.9  2527.3 ± 72.9
MINCE (k=100)   3780.9 ± 125.8      667.4 ± 20.2    846.5 ± 5.1
MINCE (k=10)     230.9 ± 7.9        330.3 ± 2.1     827.1 ± 5.0
MINCE (k=1)      133.7 ± 0.8        317.3 ± 2.0     525.2 ± 3.5
Table 1: Mean absolute relative error (%), with associated standard error, for different algorithms at varying settings of the hyper-parameters k and l that govern the number of vectors retrieved.

Table 2 shows the results of adding noise to the query vectors. As mentioned before, an interesting experiment for us was to restrictively simulate the type of errors these estimators might encounter in a real setting, where the vector with the highest or second-highest inner product might not be made available to the estimators. We tabulate the performance of the estimators under such errors in Table 3. It is disconcerting to see the huge increase in error when the most important neighbor is absent from the retrieved set; clearly the importance of neighbors decreases as their rank increases. This indicates that in practice one should use retrieval mechanisms that have a high chance of retrieving the single best nearest neighbor, which is important evidence when deciding between the different indexing schemes that solve the MIPS problem.

          noise=0%      noise=10%     noise=20%     noise=30%
Uniform   101.8 ± 3.1   103.6 ± 3.1   104.1 ± 3.1   105.0 ± 3.1
MIMPS       0.8 ± 0.0     0.9 ± 0.0     0.9 ± 0.0     0.9 ± 0.0
MINCE     230.9 ± 7.9   229.9 ± 7.9   233.7 ± 8.0   231.5 ± 8.4
FMBE       83.8 ± 0.2    85.2 ± 0.2    85.8 ± 0.2    87.1 ± 0.2
Table 2: Results at varying levels of Gaussian noise added to the query vectors to make them deviate from the actual queries. The column header indicates the norm of the added noise relative to the norm of the original vector. k and l were both set to 1000 for MIMPS and to 10 and 1000 for MINCE.
       ret err=None   ret err=1     ret err=2     ret err=[1 2]
MIMPS    0.8 ± 0.0    39.3 ± 0.2     6.1 ± 0.0    45.0 ± 0.2
MINCE  133.7 ± 0.8   133.7 ± 0.8   133.7 ± 0.8   133.7 ± 0.8
Table 3: The performance of the estimators with simulated retrieval errors in the oracle system. “ret err=None” represents no retrieval error, “ret err=1” means the vector with the highest inner product was missing from the set retrieved by the oracle, and “ret err=[1 2]” means that both the first and second items were missing. The error increases as more items go missing. k and l were both set to 1000 for MIMPS and to 1 and 1000 for MINCE.

5.2 Language Modeling Experiments

We now move beyond controlled experiments to an end-to-end experiment: we train a log-bilinear language model [13] on text from sections 0–20 of the Penn Treebank Corpus. At test time we estimate the value of the partition function for the contexts in sections 21–22 of the Penn Treebank and compare the approximation to the true values. We train the log-bilinear language model using NCE, clamping the value of the partition function to one during training, which lets us evaluate the accuracy of our method against the most common usage of NCE for language modeling. For the following experiments we use the method MIMPS, implemented with the specific MIPS algorithm of [3], which is in turn implemented by modifying the K-Means Tree in FLANN [16].

Remember that our goal is to estimate the true value of the partition function on the test corpus. There are two main hyper-parameters in our approach: the number of “head” samples and the number of “tail” samples. We train the LBL language model with dimensionality 300 and context size 9, and tabulate the results as the number of head and tail samples is varied in Table 4. We can see that with around 100 head samples and 100 tail samples, the estimation accuracy becomes better than the heuristic of assuming that the value of Z is 1.

AbsE-MIPS AbsE-NCE %Better Speedup AbsE-MIPS AbsE-NCE %Better Speedup
1063.5 352 34 18.5 728.5 352 47.5 13.5
989.5 352 46.5 16 554 352 61.5 13
229 352 55.5 14.5 198.5 352 70.5 10
Table 4: AbsE column contains the total absolute difference between the estimated value of the partition function and the true value over the test set (Section 21–22 of the Penn Treebank Corpus) for the corresponding estimators. The test set contained close to contexts. %Better refers to the number of times the MIPS estimator gives a better estimate than the NCE heuristic as a percentage of the total number of contexts in the test set. The Speedup refers to the speedup achieved over brute force computation by the corresponding MIPS method.

6 Conclusions

We presented three new methods for estimating the partition function of a neural network or log-linear model, drawing on recent randomized algorithms for nearest neighbor search, new statistical estimators, and randomized kernel feature maps. We found that it is possible to estimate the true value of the partition function with a small number of samples, both under ideal conditions, where an oracle retrieves the true set of k vectors closest to a query vector, and on at least one real dataset using the MIPS algorithm of [3] implemented with the FLANN toolkit. We also noted that the MIMPS estimator seems to be the most reasonable way to do so. Initially we were hopeful that the MINCE estimator could also be used successfully, but we found that it did not work as well.

While the data used in our experiments was always a real-world dataset, we performed both controlled experiments, where settings such as retrieval error and the creation of query vectors were carefully controlled to tease apart the sources of error, and end-to-end tasks. Based on the controlled experiments we can see that the performance of the algorithms depends critically on the indexing mechanism employed, and it might be possible to extend some of the guarantees of those algorithms to our problem by using the results described in [9].

We also note that while a theoretical analysis of the performance of an estimator of the partition function would be extremely desirable, doing so for methods that rely on LSH would need a three-step analysis: (1) analyze how the actual data (text or images) affects the weights learnt in the outer layer of a neural network, a process that is not well understood; (2) determine how that distribution of weights affects the performance of nearest neighbor retrieval (perhaps the approach taken in [9] could be extended for this purpose); and (3) determine how the error in nearest neighbor retrieval affects the accuracy of the estimator, which could be done by assuming some parametric distribution over the scores assigned to the output classes. We defer solutions to one or more of these steps to future work.

References

  • [1] J. Andreas and D. Klein. When and why are log-linear models self-normalizing? In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 244–249, Denver, Colorado, May–June 2015. Association for Computational Linguistics.
  • [2] S. Asmussen and L. Rojas-Nandayapa. Asymptotics of sums of lognormal random variables with gaussian copula. Statistics & Probability Letters, 78(16):2709–2714, 2008.
  • [3] Y. Bachrach, Y. Finkelstein, R. Gilad-Bachrach, L. Katzir, N. Koenigstein, N. Nice, and U. Paquet. Speeding up the xbox recommender system using a euclidean transformation for inner-product spaces. In 8th ACM Conference on Recommender systems, 2014.
  • [4] Y. Bengio and J.-S. Sénécal. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, 19(4):713–722, 2008.
  • [5] J. Deng, A. C. Berg, K. Li, and L. Fei-Fei. What does classifying more than 10,000 image categories tell us? In ECCV, pages 71–84. Springer-Verlag, 2010.
  • [6] J. Devlin, R. Zbib, Z. Huang, T. Lamar, R. Schwartz, and J. Makhoul. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1370–1380. Association for Computational Linguistics, 2014.
  • [7] W. Dong, Z. Wang, W. Josephson, M. Charikar, and K. Li. Modeling lsh for performance tuning. In ACM CIKM, pages 669–678. ACM, 2008.
  • [8] M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, pages 297–304, 2010.
  • [9] J. He, S. Kumar, and S.-f. Chang. On the difficulty of nearest neighbor search. In ICML, pages 1127–1134, 2012.
  • [10] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012.
  • [11] P. Kar and H. Karnick. Random feature maps for dot product kernels. In AISTATS, pages 583–591, 2012.
  • [12] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119, 2013.
  • [13] A. Mnih and G. E. Hinton. A scalable hierarchical distributed language model. In NIPS, pages 1081–1088, 2008.
  • [14] A. Mnih and Y. W. Teh. A fast and simple algorithm for training neural probabilistic language models. In ICML, 2012.
  • [15] M. Muja and D. Lowe. Scalable nearest neighbour algorithms for high dimensional data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36, 2014.
  • [16] M. Muja and D. G. Lowe. Fast approximate nearest neighbors with automatic algorithm configuration. In VISAPP, 2009.
  • [17] B. Neyshabur and N. Srebro. On Symmetric and Asymmetric LSHs for Inner Product Search. ArXiv e-prints, Oct. 2014.
  • [18] N. Pham and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. In ACM SIGKDD, pages 239–247. ACM, 2013.
  • [19] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet: Large Scale Visual Recognition Challenge, 2014.
  • [20] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. The MIT Press, 1st edition, December 2001.
  • [21] A. Shrivastava and P. Li. Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips). In NIPS, pages 2321–2329. Curran Associates, Inc., 2014.
  • [22] A. Shrivastava and P. Li. Asymmetric Minwise Hashing. ArXiv e-prints, Nov. 2014.
  • [23] A. J. Smola, Z. L. Óvári, and R. C. Williamson. Regularization with dot-product kernels. In NIPS, pages 308–314, 2001.
  • [24] R. F. Sproull. Refinements to nearest-neighbor searching in k-dimensional trees. Algorithmica, 6(1-6):579–589, 1991.
  • [25] G. P. Zhang. Neural networks for classification: a survey. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 30(4):451–462, 2000.