1 Introduction
Deep neural networks have proven highly successful at various image, speech, language and video recognition tasks (Krizhevsky et al., 2012; Mikolov et al., 2013; Karpathy et al., 2014). These networks typically have several layers of units connected in feedforward fashion between the input and output spaces. Each layer performs a specific function such as convolution, pooling, normalization, or plain matrix products (in the case of fully connected layers), followed by some form of nonlinear activation such as sigmoid or rectified linear units.
Despite their attractive qualities, and the relative efficiency of their local architecture, these networks are still prohibitively expensive to train and apply for large-scale problems containing millions of classes or nodes. There are several such problems proposed in the literature. The Imagenet dataset, one of the largest datasets for image classification, contains around 22,000 classes. Wordnet, which is a superset of Imagenet, consists of even more synsets. Freebase, a community-curated database of well-known people, places, and things, contains millions of entities. Image models of text queries have ranged from the query sets of academic benchmarks (Weston et al., 2011b) to several million queries in commercial search engines such as Google, Bing, and Yahoo. Duplicate video content identification (Shang et al., 2010; Song et al., 2011; Zhao et al., 2007) and video recommendation are also large-scale problems with millions of classes.

We note that the key computation common to softmax/logistic regression layers is a matrix product between the activations from a layer, h, and the weights W of the connections to the next layer. As the number of classes increases, this computation becomes the main bottleneck of the entire network. Based on this observation, we exploit a fast locality-sensitive hashing technique (Yagnik et al., 2011) to approximate the actual dot products in the final output layer, which enables us to scale up training and inference to millions of output classes.

Our main idea is to approximate the dot product between the output layer's parameter vectors and the input activations using hashing. We first compute binary hash codes for the parameter vectors, w_i, of a layer's output nodes and store the indices of the nodes in locations corresponding to the hash codes within hash tables. During inference, given an input activation vector, h, we compute the hash codes of the vector and retrieve the set of output nodes that are closest to the input vector in the hash space. Following this, we compute the actual dot product between h and the parameter vectors of only the retrieved nodes, and set all other outputs to zero.
By avoiding the expensive dot product operation between the input activation vector and all output nodes, we show that our approach can easily scale up to millions of output classes during inference. Furthermore, using the same technique when training the models, we show that our approach can train large-scale models at a faster rate, both in terms of number of steps and total time, compared to both the standard softmax layer and the more computationally efficient hierarchical softmax layer of (Mikolov et al., 2013).

2 Related Work
Several methods have been proposed for performing classification in deep networks over large vocabularies. Traditional methods such as logistic regression and softmax (multinomial regression) are known to have poor scaling properties with the number of classes (Dean et al., 2013), as the number of dot products that must be computed grows linearly with the number of classes C.
One method for contending with this is hierarchical softmax, whereby a tree of depth O(log C) is constructed in which the leaves are the individual classes to be classified (Morin & Bengio, 2005; Mikolov et al., 2013). A benefit of this approach is that each step merely requires the O(log C) computations associated with the tree traversal to an individual leaf.

A second direction is to instead train a dense embedding space representation and perform classification by employing k-nearest-neighbors in this embedding space on unseen examples. Typical methods for training such embedding representations employ a hinge rank loss with a clever selection of negative examples, e.g. (Weston et al., 2011a).
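To make the tree-traversal cost of hierarchical softmax concrete, here is a minimal sketch over a complete binary tree; the function and variable names are our own, and practical implementations (e.g. the one in Mikolov et al., 2013) typically use learned or Huffman-coded trees rather than this fixed complete tree:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hsoftmax_leaf_prob(node_vecs, x, leaf, num_leaves):
    """Probability of class `leaf` in a complete binary tree over
    `num_leaves` classes (assumed a power of two here). Internal node i
    holds parameter vector node_vecs[i]; we branch left at node i with
    probability sigmoid(node_vecs[i] . x), so scoring one class costs
    O(log C) dot products instead of C."""
    node = (num_leaves - 1) + leaf          # heap index of the leaf
    prob = 1.0
    while node > 0:
        parent = (node - 1) // 2
        p_left = sigmoid(node_vecs[parent] @ x)
        # multiply in the probability of the branch taken on the way down
        prob *= p_left if node == 2 * parent + 1 else (1.0 - p_left)
        node = parent
    return prob
```

By construction, the leaf probabilities sum to one, since every internal node splits its probability mass between its two children.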
Locality-sensitive hashing (LSH) (Gionis et al., 1999) provides a third alternative by offering methods to perform approximate nearest neighbor search in sublinear time for various similarity metrics. An LSH scheme based on ordinal similarity is proposed in (Yagnik et al., 2011), which is used in (Dean et al., 2013) to speed up filter-based object detection. We expand on these techniques to enable learning large-scale deep network models.
3 Approach
The goal of this work is to enable approximate computation of the matrix product W h of the parameters of a layer and its input activations in a deep network, so that the number of output dimensions can be increased by several orders of magnitude. In the following sections, we demonstrate that a locality sensitive hashing based approximation can provide such a solution without too much degradation in overall accuracy. As a first step, we employ this technique to scale up the final classification layer, since the benefits of hashing are most easily seen when the output cardinality is quite large.
3.1 Softmax/Logistic Classification
Softmax and logistic regression functions are two popular choices for the final layer of a deep network for multiclass and binary classification problems respectively. Formally, the two functions are defined as
p(y = i | h) = exp(w_i^T h) / Σ_{j=1}^{C} exp(w_j^T h)    (1)

p(y = i | h) = 1 / (1 + exp(-w_i^T h))    (2)
where p(y = i | h) is the probability of the i-th class given the input activation vector h, and the w_i are distinct linear functions (parameter vectors), one per class. When the number of classes C is large, not all classes are relevant to a given input example. Therefore, in many situations we are only interested in the k classes with the highest probabilities. We could obtain the top k classes by equivalently determining the vectors w_i that have the largest dot products with the input vector, computing the probabilities for only these classes, and setting all others to zero.
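This exact top-k restriction (score all classes, keep only the k largest, zero the rest) can be sketched in NumPy as follows; the names `topk_softmax`, `W`, `x`, and `k` are our own illustration, and this is the exact version, not yet the hashing approximation:

```python
import numpy as np

def topk_softmax(W, x, k):
    """Exact top-k softmax: compute all C dot products, keep only the
    k classes with the largest scores, and set all others to zero.
    W: (C, d) parameter matrix, x: (d,) input activation."""
    scores = W @ x                                 # all C dot products
    top = np.argpartition(scores, -k)[-k:]         # indices of k largest scores
    exp = np.exp(scores[top] - scores[top].max())  # numerically stable exponentials
    probs = np.zeros(len(W))
    probs[top] = exp / exp.sum()                   # normalize over the top-k only
    return probs
```

The hashing approach described next replaces the full `W @ x` scan with a sublinear candidate-retrieval step.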
We note that this is equivalent to the problem of finding the approximate nearest neighbors of a vector based on cosine (dot product) similarity which has a rich literature beginning with the seminal work of (Gionis et al., 1999). It has been shown that approximate nearest neighbors can be obtained in time that is sublinear in the number of database vectors with certain guarantees which is the key motivation for our approach.
In our case the database vectors are the parameter vectors w_i of the output layer, and the query vector is the input activation h from the previous layer. In this work, we employ the winner-take-all (WTA) subfamily of hash functions introduced in (Yagnik et al., 2011), since it has been successfully applied to the similar task of scaling up filter-based object detection in (Dean et al., 2013).
3.2 Winner-Take-All (WTA) Hashing
Given a vector (such as w_i or h) in R^n, its WTA hash is defined by permuting its elements using m distinct permutations and, for each permutation, recording the index of the maximum value among the first K elements (Yagnik et al., 2011). Each index can be compactly represented using log2(K) bits, resulting in m log2(K) bits for the entire hash. The WTA hash has several desirable properties: since the only operation involved in computing the hash is comparison, it can be completely implemented using integer arithmetic, and the algorithm can be implemented efficiently without accruing branch prediction penalties.
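A minimal sketch of the WTA hash just described; the function name and variables are our own, and a production implementation would pack the indices into log2(K)-bit codes rather than keeping an integer array:

```python
import numpy as np

def wta_hash(x, perms, K):
    """Winner-take-all hash (Yagnik et al., 2011). For each of the m
    stored permutations, permute x, look at the first K elements, and
    record the index (in [0, K)) of the maximum value."""
    return np.array([int(np.argmax(x[p[:K]])) for p in perms], dtype=np.int64)
```

A key property is that the code depends only on the relative ordering of the elements, so any order-preserving transformation of the input (e.g. positive scaling) yields the same hash.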
Furthermore, each WTA hash function defines an ordinal embedding, and it has been shown in (Yagnik et al., 2011) that as the number of hash functions grows, the dot product between two WTA hash vectors tends to the rank correlation between the underlying vectors. Therefore, WTA is well suited as a basis for locality-sensitive hashing, as ordinal similarity can be used as a more robust proxy for dot product similarity.
Given the hash codes of a vector, there are several schemes that can be employed to perform approximate nearest neighbor search. In this work, we employ the scheme used in (Dean et al., 2013) due to its simplicity and limited overhead.
In this scheme, we first divide the compact hash code of each parameter vector, containing m elements of log2(K) bits each, into L bands, each containing m/L elements. We create a hash table for each band and store the index of the vector in the hash bin corresponding to its band value in each hash table. During retrieval, we similarly compute the hash code of the input activation, divide it into L bands, and retrieve the set of all IDs in the corresponding hash bins along with their counts. The counts provide a lower bound on the dot product between the two hash vectors, which is related to the ordinal similarity between the two vectors. Therefore, the top k IDs from this list approximate the nearest neighbors of the input vector based on dot product similarity. The actual dot product can now be computed between these vectors and the input vector to obtain their probabilities.
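The banding scheme above can be sketched as follows; the class name `WTAIndex` and helper `band_keys` are our own illustration, not the paper's implementation:

```python
from collections import defaultdict, Counter
import numpy as np

def band_keys(code, bands):
    """Split a hash code (length m) into `bands` contiguous groups and
    turn each group into a hashable key."""
    return [tuple(chunk) for chunk in np.array_split(code, bands)]

class WTAIndex:
    """One hash table per band over the output nodes' hash codes,
    following the banding scheme of (Dean et al., 2013)."""
    def __init__(self, bands):
        self.bands = bands
        self.tables = [defaultdict(list) for _ in range(bands)]

    def add(self, node_id, code):
        # store the node's ID in the bin for each of its band values
        for table, key in zip(self.tables, band_keys(code, self.bands)):
            table[key].append(node_id)

    def query(self, code, topk):
        # count how many bands of each stored code collide with the query
        counts = Counter()
        for table, key in zip(self.tables, band_keys(code, self.bands)):
            counts.update(table.get(key, []))
        # higher count ~ higher ordinal similarity; keep the top-k candidates
        return [nid for nid, _ in counts.most_common(topk)]
```

The retrieved candidate IDs are then re-scored with exact dot products, as described above.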
The complexity of the scheme proposed above depends on the dimensionality n of the vectors for computing the hash codes, the number of bands or hash tables used during retrieval, L, and the number of IDs for which the actual dot product is computed, k. Since all three quantities are independent of the number of classes in the output layer, our approach can accommodate any number of classes. As shown in Figure 1, the naive softmax has a complexity of O(Cn), whereas our WTA based approximation has a complexity of O(kn). The overall speedup we obtain is therefore on the order of C/k, assuming the cost of computing the hash function and lookups is much smaller than the cost of the dot products. Of course, since both L and k relate to the accuracy of the approximation, they provide a tradeoff between the time complexity and the accuracy of the network.
3.3 Inference
We can apply our proposed approximation during both model inference and training. For inference, given a learned model, we first compute the hash codes of the parameter vectors of the softmax/logistic regression layer and store the IDs of the corresponding classes in the hash tables as described in Section 3.2. This is a one-time operation that is performed before running inference on any input examples.
Given an input example, we pass it through all layers leading up to the classification layer as before and compute the hash codes of the input activations to the classification layer. We then query the hash tables to retrieve the top k classes and compute probabilities using Equation 1 for only these classes. Figure 1 shows a rough schematic of this procedure.
3.4 Training
We train the models using downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, proposed in (Dean et al., 2012). During backpropagation, we only propagate gradients based on the top k classes retrieved during the forward pass of the model and update the parameter vectors of only these retrieved classes using the error vector. Additionally, we add all the positive labels for an input example to the list of nonzero classes in order to always provide a positive gradient. In Section 4 we show empirical results of performing only these top-k updates. These sparse gradients are much more computationally efficient and additionally perform the function of hard negative mining, since only the classes closest to a particular example are updated.

While inference using WTA hashing is straightforward to implement, there are several challenges that need to be solved to make training efficient using such a scheme. Firstly, unlike during inference, the parameter vectors are constantly changing as new examples are seen. It would be infeasible to request updated parameters for all classes and update the hash tables after every step.
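The sparse gradient step described above can be sketched as follows, restricted to the retrieved classes plus the positive labels; the function and variable names are our own, and the real system applies such updates asynchronously inside downpour SGD:

```python
import numpy as np

def sparse_softmax_update(W, x, active, labels, lr):
    """One gradient step that touches only the `active` classes (the top-k
    retrieved by hashing) plus the positive `labels`, which are always added
    so that a positive gradient is present. All other rows of W are left
    untouched. W: (C, d) output weights, x: (d,) input activation."""
    active = sorted(set(active) | set(labels))
    scores = W[active] @ x
    exp = np.exp(scores - scores.max())
    probs = exp / exp.sum()                  # softmax over the active set only
    target = np.array([1.0 if c in labels else 0.0 for c in active])
    target /= target.sum()
    err = probs - target                     # d(loss)/d(scores)
    W[active] -= lr * np.outer(err, x)       # update only the active rows
```

Restricting the softmax normalization to the active set is itself an approximation; the retrieved negatives act as hard negatives, as noted above.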
However, we found that gradients based on a small set of examples do not perturb the parameter vectors significantly. Moreover, WTA hashing is sensitive only to changes in the ordering of the various dimensions and is robust to small changes in their absolute values. Based on these observations, we implemented a scheme in which the hash table locations of classes are updated in batches in a round-robin fashion, such that all classes are updated over the course of several hundred or thousand steps; this turned out to be quite effective.
Therefore, we only request updated parameters for the set of retrieved classes, the positive training classes, and the classes selected in the round-robin scheme. Figure 2 shows a schematic of these interactions with the parameter server and the hash tables.
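The round-robin refresh schedule can be sketched as a simple generator (our own illustration, with hypothetical names):

```python
def round_robin_batches(num_classes, batch_size):
    """Yield the next batch of class IDs whose hash-table entries should be
    refreshed, cycling through all classes over the course of many steps."""
    start = 0
    while True:
        batch = [(start + i) % num_classes for i in range(batch_size)]
        start = (start + batch_size) % num_classes
        yield batch
```

At each training step, the trainer would pull fresh parameters for one such batch (in addition to the retrieved and positive classes) and re-insert their hash codes.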
4 Experiments
We empirically evaluate our proposed technique on several large-scale datasets with the aim of investigating the tradeoff between accuracy and time complexity of the WTA based softmax classifier in comparison to the baseline approaches of (exhaustive) softmax and hierarchical softmax.
4.1 Imagenet 21K
The 2011 Imagenet 21K dataset consists of 21,900 classes and 14 million images. We split the set into equal partitions of training and testing sets as done in (Le et al., 2012). We selected the values of the K, m, and L parameters of the WTA approach for all experiments based on results on a small set of images, which agreed with the parameters mentioned in (Dean et al., 2013). We varied the value of k, the number of retrieved classes for which the actual dot product is computed, since it directly affects both the accuracy and the time complexity of the approach.
We used the architecture proposed in (Krizhevsky et al., 2012) (AlexNet) for all experiments, replacing only the classification layer with the proposed approach. All methods were optimized using downpour SGD with a starting learning rate of 0.001 with exponential decay, in conjunction with a momentum of 0.9. We used a cluster of machines with multicore CPUs to perform training and inference.
Figure 4 first reports the time taken during inference by the WTA Softmax and Softmax layers alone (ignoring the rest of the model) as both the batch size and k are varied for this problem. We note that WTA Softmax provides a significant speedup over Softmax for both small batch sizes and small values of k. For large batch sizes, Softmax is very efficient due to optimizations over dense matrices. For large values of k, the dot products with the retrieved vectors begin to dominate the time complexity.
Figure 4 also reports the accuracies obtained when using WTA during inference on a learned model as compared to the baseline accuracy of the softmax model. We find that even with a small number of retrieved classes our approach reaches a large fraction of the baseline accuracy, and it almost matches the baseline accuracy as more classes are retrieved. Note that the ceiling on this problem is the accuracy of the base network, since we are approximating an already trained network using WTA. This supports our claim that only a small percentage of classes are relevant to any input example, and that WTA hashing provides an efficient technique for obtaining the most relevant classes for a given input example. Based on these figures, we conclude that the proposed approach is advantageous when either C is very large or batch sizes are small.
Figure 6 reports the tradeoff between the speedup achieved over baseline softmax at a fixed batch size and the percentage of the baseline accuracy reached by the WTA model. We find that the WTA model achieves a speedup of 10x over the baseline model at a given accuracy level. Figure 6 also reports the speedup achieved at a fixed fraction of the baseline accuracy for various batch sizes. As noted previously, we find that the WTA model achieves higher speedups for smaller batch sizes.
4.2 Skip-gram dataset
One popular application of deep networks has been building and training models of language and semantics. Recent work from (Mikolov et al., 2013; Q.V. Le, 2014) has demonstrated that a shallow, simple architecture can be trained efficiently across large language corpora. The resulting embedding vector representation of language exhibits rich semantic and syntactic structure that can be exploited for many other purposes. For all of these models, a crucial aspect of training is to predict surrounding and nearby words in a sequence. The prediction task is typically quite large, i.e. the cardinality is the size of the vocabulary, O(1M–10M) words.
A key insight of recent work has been to exploit novel and efficient methods for performing discrete classification over large cardinalities. In particular, this work employs a hierarchical softmax to enable fast inference and evaluation.
As a test of the predictive performance of our hashing techniques, we compare the performance of WTA hashing on the language modeling task. We note that this is an extremely difficult task: the perplexity of language (or just the co-occurrence statistics of words in a sentence) is quite high. Thus, any method that attempts to predict nearby words will at best report low predictive performance.
In our experiments, we download and parse Wikipedia, consisting of several billion sentences. We tokenize this text corpus using the 1M most popular words. The task of the network is to perform a 1M-way prediction of nearby words based on neighboring words.
We performed our experiments with three loss functions: traditional softmax, hierarchical softmax, and WTA-based softmax. We measure the precision@K for the top K predictions from each softmax model.
We compare all networks with the three loss functions after 100 hours of training across similar CPU resources. We find that all networks have converged within this time frame, although the hierarchical softmax has processed 100 billion examples while the WTA softmax has processed 100 million examples.
As seen in Table 1, we find that WTA softmax achieves superior predictive performance to the hierarchical softmax, even though hierarchical softmax has processed O(100) times more examples. In particular, we find that WTA softmax achieves roughly twofold better predictive performance.
However, the WTA softmax produces underlying embedding vector representations that do not perform as well on analogy tasks, as highlighted by (Mikolov et al., 2013). For instance, the hierarchical softmax achieves 50% accuracy on analogy tasks whereas WTA softmax achieves 5% accuracy on similar tasks. This is partly due to the smaller number of examples processed by WTA in the same time frame, as hierarchical softmax is significantly faster than WTA because it performs just O(log C) dot products.
Table 1: Word-prediction precision of hierarchical softmax and WTA softmax.

                     H-Softmax   WTA-Softmax
precision@1            1.15%        1.93%
precision@3            2.36%        5.18%
precision@5            3.14%        7.48%
precision@10           4.52%        10.2%
precision@20           6.28%        13.4%
precision@50           9.63%        16.5%
precision@100          13.2%        18.5%
average precision      2.31%        4.53%
4.3 Video Identification
While the Imagenet 21K problem is one of the largest tested for the baseline softmax model, the benefits of hashing are best seen for problems of much larger cardinality. To illustrate this, we next consider a large-scale classification task for video identification. This task is modeled on Youtube's content ID classification problem, which has also been addressed in several recent works under various settings (Shang et al., 2010; Song et al., 2011; Zhao et al., 2007).
The task we propose is to predict the ID of a video based on its frames. We use the Sports 1M action recognition dataset introduced in (Karpathy et al., 2014) for this problem. The Sports 1M dataset consists of roughly 1.2 million Youtube sports videos annotated with 487 classes. We divide the first five minutes of each video into two parts, where the first portion of the video's frames is used for training and the remainder for evaluating the models. The prediction space of the problem spans 1.2 million classes, and each class has roughly 150 frames for training and evaluation.
We trained three models for this problem with the AlexNet architecture, where the top layer uses one of softmax, WTA softmax, or hierarchical softmax. We tried several learning rates and report the best results for each of the models. For WTA we used a value of 3000 for the k parameter based on the results in the previous section, and we used the same batch size for all models.
Figure 9 reports the accuracy on the evaluation set against the number of steps trained for each model, and Figure 9 also reports the accuracy against the actual time taken to complete these steps. We find that on both counts the WTA based model learns faster than both softmax and hierarchical softmax.
The step time of the WTA model is several times lower than that of the softmax model but higher than that of hierarchical softmax. This is because hierarchical softmax is much more efficient, as it computes only O(log C) dot products compared to k for WTA. However, even though hierarchical softmax processes a significantly larger number of examples, the WTA model is able to achieve much higher accuracies.
In order to better understand the significant difference between WTA and the baselines on this task as opposed to the Imagenet 21K problem, we computed the in-class variance of all the classes in the two datasets based on the 4096-dim feature from the penultimate layer of the AlexNet model. Figure 9 reports a histogram of the in-class variance of the examples belonging to a class in the two datasets. We find that in the Imagenet task the examples within a class are much more spread out than in the Sports 1M task, which is expected given that frames within a video share similar context and are more correlated. This could explain the relative efficiency of the top-k gradient updates used by the WTA model on the Sports 1M task.

5 Conclusions
We proposed a locality sensitive hashing approach for approximating the computation of W h in the classification layer of deep network models, which enables us to scale up the training and inference of these models to millions of classes. Empirical evaluations of the proposed model on various large-scale datasets show that the proposed approach provides significant speedups over baseline softmax models and can train such large-scale models at a faster rate than alternatives such as hierarchical softmax. Our approach is advantageous whenever the number of classes considered is large or where batching is not possible.
In the future we would like to extend this technique to intermediate layers as well, since the proposed method explicitly imposes sparsity constraints, which is desirable in hierarchical learning. Given the scaling properties of hashing, our approach could, for instance, be used to increase the number of filters used in the convolutional layers from hundreds to tens of thousands, with only a few hundred being active at any time.
References
 Dean et al. (2012) Dean, Jeffrey, Corrado, Greg S., Monga, Rajat, Chen, Kai, Devin, Matthieu, Le, Quoc V., Mao, Mark Z., Senior, Andrew, Tucker, Paul, Yang, Ke, and Ng, Andrew Y. Large scale distributed deep networks. In Advances in Neural Information Processing Systems. 2012.

Dean et al. (2013) Dean, Thomas, Ruzon, Mark A., Segal, Mark, Shlens, Jonathon, Vijayanarasimhan, Sudheendra, and Yagnik, Jay. Fast, accurate detection of 100,000 object classes on a single machine. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, CVPR '13, pp. 1814–1821. IEEE Computer Society, Washington, DC, USA, 2013. ISBN 9780769549897. doi: 10.1109/CVPR.2013.237. URL http://dx.doi.org/10.1109/CVPR.2013.237.

Gionis et al. (1999) Gionis, Aristides, Indyk, Piotr, and Motwani, Rajeev. Similarity search in high dimensions via hashing. In Atkinson, Malcolm P., Orlowska, Maria E., Valduriez, Patrick, Zdonik, Stanley B., and Brodie, Michael L. (eds.), Proceedings of the 25th International Conference on Very Large Data Bases, pp. 518–529. Morgan Kaufmann, 1999.

Karpathy et al. (2014) Karpathy, Andrej, Toderici, George, Shetty, Sanketh, Leung, Thomas, Sukthankar, Rahul, and Fei-Fei, Li. Large-scale video classification with convolutional neural networks. In Proc. CVPR, pp. 1725–1732. Columbus, Ohio, USA, 2014.

Krizhevsky et al. (2012) Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Proc. NIPS, pp. 1097–1105. Lake Tahoe, Nevada, USA, 2012.

Le et al. (2012) Le, Quoc V., Monga, Rajat, Devin, Matthieu, Chen, Kai, Corrado, Greg S., Dean, Jeff, and Ng, Andrew Y. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012.

Mikolov et al. (2013) Mikolov, Tomas, Sutskever, Ilya, Chen, Kai, Corrado, Greg S., and Dean, Jeff. Distributed representations of words and phrases and their compositionality. In Burges, C. J. C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 26, pp. 3111–3119. Curran Associates, Inc., 2013. URL http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf.
 Morin & Bengio (2005) Morin, Frederic and Bengio, Yoshua. Hierarchical probabilistic neural network language model. In AISTATS’05, pp. 246–252. 2005.
 Q.V. Le (2014) Le, Q.V. and Mikolov, T. Distributed representations of sentences and documents. In ICML, 2014.
 Shang et al. (2010) Shang, Lifeng, Yang, Linjun, Wang, Fei, Chan, Kwok-Ping, and Hua, Xian-Sheng. Real-time large scale near-duplicate web video retrieval. In Proceedings of the International Conference on Multimedia, MM '10, pp. 531–540, New York, NY, USA, 2010. ACM. ISBN 9781605589336. doi: 10.1145/1873951.1874021. URL http://doi.acm.org/10.1145/1873951.1874021.

 Song et al. (2011) Song, Jingkuan, Yang, Yi, Huang, Zi, Shen, Heng Tao, and Hong, Richang. Multiple feature hashing for real-time large scale near-duplicate video retrieval. In Proceedings of the 19th ACM International Conference on Multimedia, MM '11, pp. 423–432, New York, NY, USA, 2011. ACM. ISBN 9781450306164. doi: 10.1145/2072298.2072354. URL http://doi.acm.org/10.1145/2072298.2072354.

Weston et al. (2011a) Weston, Jason, Bengio, Samy, and Usunier, Nicolas. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Three, IJCAI'11, pp. 2764–2770. AAAI Press, 2011a. ISBN 9781577355151. doi: 10.5591/978-1-57735-516-8/IJCAI11-460. URL http://dx.doi.org/10.5591/978-1-57735-516-8/IJCAI11-460.

Weston et al. (2011b) Weston, Jason, Bengio, Samy, and Usunier, Nicolas. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Three, IJCAI'11, pp. 2764–2770. AAAI Press, 2011b. ISBN 9781577355151. doi: 10.5591/978-1-57735-516-8/IJCAI11-460. URL http://dx.doi.org/10.5591/978-1-57735-516-8/IJCAI11-460.
 Yagnik et al. (2011) Yagnik, Jay, Strelow, Dennis, Ross, David A., and Lin, Ruei-sung. The power of comparative reasoning. In IEEE International Conference on Computer Vision. IEEE, 2011.
 Zhao et al. (2007) Zhao, Wan-Lei, Ngo, Chong-Wah, Tan, Hung-Khoon, and Wu, Xiao. Near-duplicate keyframe identification with interest point matching and pattern learning. Trans. Multi., 9(5):1037–1048, August 2007. ISSN 15209210. doi: 10.1109/TMM.2007.898928. URL http://dx.doi.org/10.1109/TMM.2007.898928.