LightRNN: Memory and Computation-Efficient Recurrent Neural Networks

10/31/2016 · Xiang Li, et al. · Nanjing University, Microsoft

Recurrent neural networks (RNNs) have achieved state-of-the-art performance in many natural language processing tasks, such as language modeling and machine translation. However, when the vocabulary is large, the RNN model becomes very big (e.g., possibly beyond the memory capacity of a GPU device) and its training becomes very inefficient. In this work, we propose a novel technique to tackle this challenge. The key idea is to use 2-Component (2C) shared embedding for word representations. We allocate every word in the vocabulary into a table, each row of which is associated with a vector, and each column associated with another vector. Depending on its position in the table, a word is jointly represented by two components: a row vector and a column vector. Since the words in the same row share the row vector and the words in the same column share the column vector, we only need 2√|V| vectors to represent a vocabulary of |V| unique words, far fewer than the |V| vectors required by existing approaches. Based on the 2-Component shared embedding, we design a new RNN algorithm and evaluate it using the language modeling task on several benchmark datasets. The results show that our algorithm significantly reduces the model size and speeds up the training process, without sacrificing accuracy (it achieves similar, if not better, perplexity compared to state-of-the-art language models). Remarkably, on the One-Billion-Word Benchmark Dataset, our algorithm achieves comparable perplexity to previous language models, whilst reducing the model size by a factor of 40-100 and speeding up the training process by a factor of 2. We name our proposed algorithm LightRNN to reflect its very small model size and very high training speed.


1 Introduction

Recently, recurrent neural networks (RNNs) have been used in many natural language processing (NLP) tasks, such as language modeling mikolov2010recurrent, machine translation sutskever2014sequence, sentiment analysis tang2015document, and question answering weston2014memory. A popular RNN architecture is long short-term memory (LSTM) gers2000learning; hochreiter1997long; sundermeyer2012lstm, which can model long-term dependence and resolve the gradient-vanishing problem by using memory cells and gating functions. With these elements, LSTM RNNs have achieved state-of-the-art performance on several NLP tasks, even when learning almost from scratch.

While RNNs are becoming increasingly popular, they have a known limitation: when applied to textual corpora with large vocabularies, the size of the model becomes very big. For instance, when using RNNs for language modeling, a word is first mapped from a one-hot vector (whose dimension is equal to the size of the vocabulary) to an embedding vector by an input-embedding matrix. Then, to predict the probability of the next word, the top hidden layer is projected by an output-embedding matrix onto a probability distribution over all the words in the vocabulary. When the vocabulary contains tens of millions of unique words, which is very common in Web corpora, the two embedding matrices contain tens of billions of elements, making the RNN model too big to fit into the memory of GPU devices. Take the ClueWeb dataset pomikalek2012building as an example, whose vocabulary contains over 10M words. If the embedding vectors have 1024 dimensions and each dimension is represented by a 32-bit floating point, the size of the input-embedding matrix alone will be around 40GB. Further considering the output-embedding matrix and the weights between hidden layers, the RNN model will be larger than 80GB, which is far beyond the capacity of the best GPU devices on the market appleyard2016optimizing. Even if the memory constraint were not a problem, the computational complexity of training such a big model would still be too high to afford. In RNN language models, the most time-consuming operation is calculating a probability distribution over all the words in the vocabulary, which requires multiplying the output-embedding matrix with the hidden state at each position of a sequence. A simple calculation shows that it would take tens of years for the best single GPU today to finish training a language model on the ClueWeb dataset. Furthermore, beyond the challenges during the training phase, even if we could successfully train such a big model, it would be almost impossible to host it on mobile devices for efficient inference.
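
As a rough, illustrative check of the numbers above (assuming a vocabulary of about 10M words, 1024-dimensional embeddings, and 32-bit floats), the memory footprint of the embedding matrices alone can be estimated as follows:

```python
# Back-of-the-envelope estimate: a 10M-word vocabulary, 1024-dimensional
# embeddings, and 32-bit (4-byte) floats.
vocab_size = 10_000_000
embed_dim = 1024
bytes_per_float = 4

input_embedding_bytes = vocab_size * embed_dim * bytes_per_float   # one |V| x d matrix
both_embeddings_bytes = 2 * input_embedding_bytes                  # input + output matrices

print(f"input-embedding matrix : {input_embedding_bytes / 1e9:.0f} GB")   # ~41 GB
print(f"input + output matrices: {both_embeddings_bytes / 1e9:.0f} GB")   # ~82 GB
```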

To address the above challenges, in this work we propose to use 2-Component (2C) shared embedding for word representations in RNNs. We allocate all the words in the vocabulary into a table, each row of which is associated with a vector, and each column with another vector. Then we use two components to represent a word depending on its position in the table: the corresponding row vector and column vector. Since the words in the same row share the row vector and the words in the same column share the column vector, we only need 2√|V| vectors to represent a vocabulary of |V| unique words, and thus greatly reduce the model size as compared with the vanilla approach that needs |V| unique vectors. In the meantime, due to the reduced model size, the training of the RNN model can also be significantly sped up. We therefore call our proposed new algorithm LightRNN, to reflect its very small model size and very high training speed.

A central technical challenge of our approach is how to appropriately allocate the words into the table. To this end, we propose a bootstrap framework: (1) We first randomly initialize the word allocation and then train the LightRNN model. (2) We fix the trained embedding vectors (corresponding to the row and column vectors in the table) and refine the allocation to minimize the training loss, which is a minimum weight perfect matching problem in graph theory and can be solved effectively. (3) We repeat the second step until a certain stopping criterion is met.

We evaluate LightRNN using the language modeling task on several benchmark datasets. The experimental results show that LightRNN achieves comparable (if not better) accuracy to state-of-the-art language models in terms of perplexity, while reducing the model size by a factor of up to 100 and speeding up the training process by a factor of 2.

Please note that it is desirable to have a highly compact model (without an accuracy drop). First, it makes it possible to fit the RNN model into a GPU or even a mobile device. Second, if the training data is large and one needs to perform distributed data-parallel training, the communication cost of aggregating the models from local workers will be low. In this way, our approach makes previously expensive RNN algorithms very economical and scalable, and can therefore have a profound impact on deep learning for NLP tasks.

2 Related work

In the literature of deep learning, there have been several works that try to resolve the problem caused by the large vocabulary of the text corpus.

Some works focus on reducing the computational complexity of the softmax operation on the output-embedding matrix. In mnih2009scalable; morin2005hierarchical, a binary tree is used to represent a hierarchical clustering of the words in the vocabulary. Each leaf node of the tree is associated with a word, and every word has a unique path from the root to its leaf. In this way, when calculating the probability of the next word, one can replace the original |V|-way normalization with a sequence of binary normalizations. In goodman2001classes; mikolov2011extensions, the words in the vocabulary are organized into a tree with two layers: the root node has roughly √|V| intermediate nodes, each of which in turn has roughly √|V| leaf nodes. Each intermediate node represents a cluster of words, and each leaf node represents a word in the cluster. To calculate the probability of the next word, one first calculates the probability of the word's cluster and then the conditional probability of the word given its cluster. Besides, sampling-based approximations randomly or heuristically select a small subset of the output layer and estimate the gradient only from those samples; examples include importance sampling bengio2003quick and BlackOut ji2015blackout. Although these methods can speed up the training process by means of efficient softmax, they do not reduce the size of the model.

Some other works focus on reducing the model size. Techniques chen2015strategies; sak2014long like differentiated softmax and recurrent projection are employed to reduce the size of the output-embedding matrix. However, they only slightly compress the model, and the number of parameters remains of the same order as the vocabulary size. Character-level convolutional filters are used to shrink the size of the input-embedding matrix in kim2015character; however, that approach still suffers from the gigantic output-embedding matrix. Moreover, these methods do not address the computational challenge caused by the time-consuming softmax operations.

As can be seen from the above discussion, no existing work has simultaneously achieved a significant reduction of both model size and computational complexity. This is exactly the problem that we address in this paper.

3 LightRNN

In this section, we introduce our proposed LightRNN algorithm.

3.1 RNN Model with 2-Component Shared Embedding

A key technical innovation in the LightRNN algorithm is its 2-Component shared embedding for word representations. As shown in Figure 1, we allocate all the words in the vocabulary into a table. The i-th row of the table is associated with an embedding vector x^r_i and the j-th column of the table is associated with an embedding vector x^c_j. Then a word in the i-th row and the j-th column is represented by two components: x^r_i and x^c_j. By sharing the embedding vector among words in the same row (and also in the same column), for a vocabulary with |V| words we only need 2√|V| unique vectors for the input word embedding. It is the same case for the output word embedding.
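
The following Python sketch illustrates the 2-Component lookup under a toy setting; the table layout, variable names (word_to_cell, row_embed, col_embed), and the row-major initial allocation are illustrative assumptions, not the paper's implementation:

```python
import math
import numpy as np

# Minimal sketch of 2-Component shared embedding lookup (hypothetical names).
# Every word id is mapped to a cell (row, col) of a sqrt(|V|) x sqrt(|V|) table;
# its input representation is the pair (row vector, column vector).
vocab_size = 10_000                              # |V|
table_side = math.ceil(math.sqrt(vocab_size))    # number of rows = columns ≈ sqrt(|V|)
embed_dim = 256                                  # input embedding dimension

# Only 2 * sqrt(|V|) input embedding vectors instead of |V| of them.
row_embed = np.random.randn(table_side, embed_dim).astype(np.float32)  # row vectors
col_embed = np.random.randn(table_side, embed_dim).astype(np.float32)  # column vectors

def word_to_cell(word_id: int) -> tuple[int, int]:
    """Initial (arbitrary) allocation: fill the table row by row."""
    return word_id // table_side, word_id % table_side

def input_vectors(word_id: int) -> tuple[np.ndarray, np.ndarray]:
    """Return the (row vector, column vector) pair representing word_id."""
    r, c = word_to_cell(word_id)
    return row_embed[r], col_embed[c]

x_r, x_c = input_vectors(1234)   # both vectors are shared with other words
```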

Figure 1: An example of the word table

Figure 2: LightRNN (left) vs. Conventional RNN (right).

With the 2-Component shared embedding, we can construct the LightRNN model by doubling the basic units of a vanilla RNN model, as shown in Figure 2. Let n and m denote the dimension of a row/column input vector and that of a hidden state vector respectively. To compute the probability distribution of w_t, we need to use the column vector x^c_{t-1} of word w_{t-1}, the row vector x^r_t of word w_t, and the hidden state vector h^r_{t-1}. The column and row vectors come from the input-embedding matrices X^c and X^r respectively. Next, two hidden state vectors are produced by applying the following recursive operations:

h^c_{t-1} = f(W x^c_{t-1} + U h^r_{t-1} + b),
h^r_t = f(W x^r_t + U h^c_{t-1} + b).    (1)

In the above equations, W, U, and b are the parameters of affine transformations, and f is a nonlinear activation function (e.g., the sigmoid function).

The probability P(w_t) of the word at position t is determined by its row probability P_r(w_t) and column probability P_c(w_t):

P(w_t) = P_r(w_t) · P_c(w_t),    (2)

P_r(w_t) = exp(h^c_{t-1} · y^r_{r(w)}) / Σ_{i∈S_r} exp(h^c_{t-1} · y^r_i),
P_c(w_t) = exp(h^r_t · y^c_{c(w)}) / Σ_{j∈S_c} exp(h^r_t · y^c_j),    (3)

where r(w) is the row index of word w, c(w) is its column index, y^r_i is the i-th vector of the output-embedding matrix Y^r, y^c_j is the j-th vector of Y^c, and S_r and S_c denote the sets of rows and columns of the word table respectively. Note that we do not see the t-th word before predicting it. In Figure 2, given the input column vector x^c_{t-1} of the (t-1)-th word, we first infer the row probability P_r(w_t) of the t-th word, and then choose the index of the row with the largest probability in P_r(w_t) to look up the next input row vector x^r_t. Similarly, we can then infer the column probability P_c(w_t) of the t-th word.

We can see that by using Eqn. (3), we effectively reduce the computation of the probability of the next word from a |V|-way normalization (in standard RNN models) to two √|V|-way normalizations. To better understand the reduction of the model size, we compare the key components of a vanilla RNN model and our proposed LightRNN model on an example with embedding dimension 1024, hidden unit dimension 1024, and vocabulary size 10M, using a 32-bit floating point representation for each dimension. The total size of the two embedding matrices is about 80GB for the vanilla RNN model, whereas that of the four embedding matrices in LightRNN is only about 50MB. It is clear that LightRNN shrinks the model size by a significant factor, so that it can easily fit into the memory of a GPU device or a mobile device.
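
To make the factorized prediction of Eqns. (1)-(3) concrete, here is a minimal NumPy sketch of one prediction step with two √|V|-way softmaxes; the parameter names and the tanh nonlinearity are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Shapes follow the text: n = input dim, m = hidden dim, table_side ≈ sqrt(|V|).
n, m, table_side = 256, 512, 100
W = np.random.randn(m, n) * 0.01            # input-to-hidden
U = np.random.randn(m, m) * 0.01            # hidden-to-hidden
b = np.zeros(m)
Y_r = np.random.randn(table_side, m) * 0.01  # output row embeddings
Y_c = np.random.randn(table_side, m) * 0.01  # output column embeddings

def step(x, h_prev):
    """One recurrent update in the spirit of Eqn. (1), with a tanh nonlinearity."""
    return np.tanh(W @ x + U @ h_prev + b)

def predict_next(x_c_prev, h_r_prev, x_r_lookup):
    """Two sqrt(|V|)-way normalizations instead of one |V|-way softmax."""
    h_c_prev = step(x_c_prev, h_r_prev)
    p_row = softmax(Y_r @ h_c_prev)          # P_r(w_t), Eqn. (3)
    r = int(p_row.argmax())                  # most likely row (inference-time choice) ...
    h_r = step(x_r_lookup(r), h_c_prev)      # ... used to look up the next row vector
    p_col = softmax(Y_c @ h_r)               # P_c(w_t), Eqn. (3)
    return p_row, p_col                      # P(w_t) = p_row[r(w)] * p_col[c(w)]

# Toy usage with random inputs and a random row-vector lookup.
p_r, p_c = predict_next(np.random.randn(n), np.zeros(m), lambda r: np.random.randn(n))
```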

The hidden-state cell can be implemented by an LSTM sundermeyer2012lstm or a gated recurrent unit (GRU) chung2014empirical, and our idea works with any kind of recurrent unit. Please note that in LightRNN, the input and output use different embedding matrices, but they share the same word-allocation table.

3.2 Bootstrap for Word Allocation

The LightRNN algorithm described in the previous subsection assumes that a word allocation table is given. It remains a problem how to appropriately generate this table, i.e., how to allocate the words into appropriate rows and columns. In this subsection, we discuss this issue.

Specifically, we propose a bootstrap procedure to iteratively refine word allocation based on the learned word embedding in the LightRNN model:

  1. For cold start, randomly allocate the words into the table.

  2. Train the input/output embedding vectors in LightRNN based on the given allocation until convergence. Exit if a stopping criterion (e.g., training time, or perplexity for language modeling) is met, otherwise go to the next step.

  3. Fixing the embedding vectors learned in the previous step, refine the allocation in the table, to minimize the loss function over all the words. Go to Step (2).

As can be seen above, the refinement of the word allocation table according to the learned embedding vectors is a key step in the bootstrap procedure. We will provide more details about it, by taking language modeling as an example.

The target in language modeling is to minimize the negative log-likelihood of the next word in a sequence, which is equivalent to optimizing the cross entropy between the target probability distribution and the prediction given by the LightRNN model. Given a training corpus with T tokens, the overall negative log-likelihood can be expressed as follows:

NLL = Σ_{t=1}^{T} − log P(w_t).    (4)

NLL can be expanded with respect to words: NLL = Σ_{w∈V} NLL_w, where NLL_w is the negative log-likelihood for a specific word w.

For ease of deduction, we rewrite NLL_w as l(w, (i, j)), where (i, j) is the position of word w in the word allocation table. In addition, we use l_r(w, i) and l_c(w, j) to represent the row component and column component of l(w, (i, j)) (which we call the row loss and column loss of word w for ease of reference). The relationship between these quantities is

l(w, (i, j)) = Σ_{t∈S_w} − log P(w_t) = Σ_{t∈S_w} − log P_r(w_t) + Σ_{t∈S_w} − log P_c(w_t) = l_r(w, i) + l_c(w, j),    (5)

where S_w is the set of all the positions at which word w appears in the corpus.

Now we consider adjusting the allocation table to minimize the loss function NLL. For word w, suppose we plan to move it from its original cell (i, j) to another cell (i', j') in the table. We can calculate the row loss l_r(w, i') it would incur if it were moved to row i' while its column and the allocation of all the other words remain unchanged, and we can calculate the column loss l_c(w, j') in a similar way. We then define the total loss of this move as l(w, (i', j')), which equals l_r(w, i') + l_c(w, j') according to Eqn. (5). The total cost of calculating all the row and column losses is O(|V|·√|V|), assuming |S_r| = |S_c| = √|V|, since we only need the loss of each word allocated in every row and in every column separately. In fact, all l_r(w, i) and l_c(w, j) have already been calculated during the forward part of LightRNN training: to predict the next word, we need to compute the scores (i.e., h^c_{t-1}·y^r_i and h^r_t·y^c_j in Eqn. (3) for all i∈S_r and j∈S_c) of all the words in the vocabulary for normalization, and l_r(w, i) is simply the sum of −log P_r over all the appearances of word w in the training data (similarly for l_c(w, j)). After we calculate l(w, (i, j)) for all possible (i, j), we can write the reallocation problem as the following optimization problem:

min_a Σ_{w∈V} Σ_{(i,j)∈S_r×S_c} l(w, (i, j)) · a(w, (i, j))
s.t. Σ_{(i,j)∈S_r×S_c} a(w, (i, j)) = 1 for every w∈V;  Σ_{w∈V} a(w, (i, j)) = 1 for every (i, j)∈S_r×S_c;  a(w, (i, j)) ∈ {0, 1},    (6)

where a(w, (i, j)) = 1 means allocating word w to position (i, j) of the table, and S_r and S_c denote the row set and column set of the table respectively.

By defining a weighted bipartite graph G = (V_w, V_p, E), in which V_w contains a node for every word, V_p contains a node for every table position, and the weight of the edge in E connecting a word node w and a position node (i, j) is l(w, (i, j)), we can see that the above optimization problem is equivalent to a standard minimum weight perfect matching problem papadimitriou1982combinatorial on graph G. This problem has been well studied in the literature, and one of the best practical algorithms for it is the minimum cost maximum flow (MCMF) algorithm ahuja1988network, whose basic idea is shown in Figure 3. In Figure 3(a), we assign each edge connecting a word node and a position node a flow capacity of 1 and a cost of l(w, (i, j)). The remaining edges, starting from the source or ending at the destination, all have a flow capacity of 1 and a cost of 0. The thick solid lines in Figure 3(a) give an example of the optimal weighted matching solution, while Figure 3(b) illustrates how the allocation gets updated correspondingly. Since the computational complexity of MCMF is O(|V|^3), which is still costly for a large vocabulary, in our experiments we instead leverage a 1/2-approximation algorithm preis1999linear that runs in time linear in the number of edges, i.e., with computational complexity O(|V|^2). When the number of tokens in the dataset is far larger than the size of the vocabulary (which is the common case), this complexity can be ignored compared with the overall complexity of LightRNN training (which is around O(K·T·√|V|), where K is the number of epochs in the training process and T is the total number of tokens in the training data).
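
As a concrete (and deliberately simplified) illustration of the reallocation step, the Python sketch below builds the move losses l(w, (i, j)) from accumulated row and column losses and then reassigns words to cells with a greedy heuristic. This is not the paper's exact solver (which uses MCMF or the linear-time approximate matching cited above), and all names and the toy loss values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
num_words = 16                   # toy |V|
side = 4                         # sqrt(|V|) rows and sqrt(|V|) columns -> 16 cells

# l_r[w, i]: summed -log P_r over all occurrences of word w if it sat in row i;
# l_c[w, j]: the analogous column loss. Random stand-ins replace the values
# that would be accumulated during the forward pass of training.
l_r = rng.random((num_words, side))
l_c = rng.random((num_words, side))

# Move losses of Eqn. (5): l(w, (i, j)) = l_r(w, i) + l_c(w, j), flattened over cells.
loss = (l_r[:, :, None] + l_c[:, None, :]).reshape(num_words, side * side)

def greedy_reallocate(loss):
    """Greedily commit the cheapest unassigned (word, cell) pair until every word
    occupies a distinct cell -- a heuristic stand-in for minimum weight perfect matching."""
    n_words, n_cells = loss.shape
    cols = int(round(n_cells ** 0.5))
    order = np.dstack(np.unravel_index(np.argsort(loss, axis=None), loss.shape))[0]
    assignment, used_words, used_cells = {}, set(), set()
    for w, cell in order:
        if w not in used_words and cell not in used_cells:
            assignment[int(w)] = (int(cell) // cols, int(cell) % cols)
            used_words.add(w)
            used_cells.add(cell)
        if len(assignment) == n_words:
            break
    return assignment

new_allocation = greedy_reallocate(loss)    # word -> (row, col) for the next round
```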

Figure 3: The MCMF algorithm for minimum weight perfect matching

4 Experiments

To test LightRNN, we conducted a set of experiments on the language modeling task.

4.1 Settings

We use perplexity (PPL) as the measure to evaluate the performance of an algorithm for language modeling (the lower, the better), defined as PPL = exp(NLL/T), where T is the number of tokens in the test set. We used all the linguistic corpora from the 2013 ACL Workshop Morphological Language Datasets (ACLW) botha2014compositional and the One-Billion-Word Benchmark Dataset (BillionW) chelba2013one in our experiments. The detailed information of these public datasets is listed in Table 1.
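
For clarity, the PPL definition above amounts to exponentiating the average per-token negative log-likelihood (using the natural logarithm); a minimal example:

```python
import math

def perplexity(neg_log_likelihoods):
    """PPL = exp(NLL / T): exponentiated average per-token negative log-likelihood."""
    return math.exp(sum(neg_log_likelihoods) / len(neg_log_likelihoods))

print(perplexity([2.3, 1.9, 4.1, 3.0]))   # ~16.9 for these toy per-token values
```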

Dataset #Token Vocabulary Size
ACLW-Spanish M K
ACLW-French M K
ACLW-English M K
ACLW-Czech M K
ACLW-German M K
ACLW-Russian M K
BillionW M K
Table 1: Statistics of the datasets

For the ACLW datasets, we kept all the training/validation/test sets exactly the same as those in botha2014compositional; kim2015character by using their processed data (https://www.dropbox.com/s/m83wwnlz3dw5zhk/large.zip?dl=0). For the BillionW dataset, since the raw data (http://tiny.cc/1billionLM) are unprocessed, we processed them according to the standard procedure described in chelba2013one: we discarded all words with count below 3 and padded the sentence boundary markers <S> and </S>. Words outside the vocabulary were mapped to the <UNK> token. Meanwhile, the partition of the training/validation/test sets on BillionW was the same as the public setting in chelba2013one for fair comparison.

We trained LSTM-based LightRNN using stochastic gradient descent with truncated backpropagation through time graves2013generating; werbos1990backpropagation. The initial learning rate was 1.0 and was then halved whenever the perplexity on the validation set did not improve after a certain number of mini-batches. We clipped the gradients of the parameters such that their norms were bounded by 5.0. We further applied dropout with probability 0.5 zaremba2014recurrent. All the training processes were conducted on a single K20 GPU with 5GB memory.
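
A minimal sketch of this optimization schedule (gradient-norm clipping at 5.0 and learning-rate halving on stalled validation perplexity) is given below; the class and method names are illustrative, not those of an actual toolkit:

```python
import numpy as np

class SGDWithClipAndHalving:
    """Plain SGD with global gradient-norm clipping and LR halving on no improvement."""

    def __init__(self, params, lr=1.0, clip_norm=5.0):
        self.params, self.lr, self.clip_norm = params, lr, clip_norm
        self.best_valid_ppl = float("inf")

    def step(self, grads):
        # Rescale all gradients so their global norm does not exceed clip_norm.
        total_norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
        scale = min(1.0, self.clip_norm / (total_norm + 1e-12))
        for p, g in zip(self.params, grads):
            p -= self.lr * scale * g

    def on_validation(self, valid_ppl):
        # Halve the learning rate whenever validation perplexity fails to improve.
        if valid_ppl >= self.best_valid_ppl:
            self.lr /= 2.0
        else:
            self.best_valid_ppl = valid_ppl
```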

4.2 Results and Discussions

For the ACLW datasets, we mainly compared LightRNN with two state-of-the-art LSTM RNN algorithms in kim2015character: one utilizes hierarchical softmax for word prediction (denoted as HSM), and the other utilizes hierarchical softmax as well as character-level convolutional filters for the input embedding (denoted as C-HSM). We explored several choices of dimensions of the shared embedding for LightRNN: 200, 600, and 1000. Note that 200 is exactly the word embedding size of the HSM and C-HSM models used in kim2015character. Since our algorithm significantly reduces the model size, it allows us to use larger embedding dimensions while still keeping the model very small. Therefore, we also tried 600 and 1000 in LightRNN, and the results are shown in Table 2. We can see that with larger embedding sizes, LightRNN achieves better accuracy in terms of perplexity. With 1000-dimensional embedding, it achieves the best result while the total model size remains quite small. Thus, we set 1000 as the shared embedding size when comparing with the baselines on all the ACLW datasets in the following experiments.

Embedding size | PPL | #param (M)
200 | 340 |
600 | 208 |
1000 | 176 |
Table 2: Test PPL of LightRNN on the ACLW-French dataset w.r.t. embedding sizes

Table 5 shows the perplexity and model sizes on all the ACLW datasets. As can be seen, LightRNN significantly reduces the model size while at the same time outperforming the baselines in terms of perplexity. Furthermore, while the model sizes of the baseline methods increase linearly with the vocabulary size, the model size of LightRNN remains almost constant across the ACLW datasets.

For the BillionW dataset, we mainly compared with BlackOut for RNN ji2015blackout (B-RNN), which achieves the state-of-the-art result by interpolating with a KN (Kneser-Ney) 5-gram model. Since the best single model reported in that paper is a 1-layer RNN with 2048-dimensional word embedding, we also used this embedding size for LightRNN. In addition, we compared with the HSM result reported in chen2015strategies, which used 1024 dimensions for word embedding but still has 40x more parameters than our model. For further comparison, we also ensembled LightRNN with the KN 5-gram model, using the KenLM Language Model Toolkit (http://kheafield.com/code/kenlm/) to get the probability distribution from the KN model with the same vocabulary setting.

Figure 4: Perplexity curve on ACLW-French.

The results on BillionW are shown in Table 4. It is easy to see that LightRNN achieves the lowest perplexity whilst significantly reducing the model size. For example, it reduces the model size by a factor of 40 as compared to HSM and by a factor of 100 as compared to B-RNN. Furthermore, through ensemble with the KN 5-gram model, LightRNN achieves a perplexity of 43.

In our experiments, the overall training of LightRNN consisted of several rounds of word table refinement. In each round, training continued until the perplexity on the validation set converged. Figure 4 shows how the perplexity improves with the table refinements on one of the ACLW datasets. Based on our observations, 3-4 rounds of refinement usually give satisfactory results.

ACLW
Method | Runtime (hours) | Reallocation/Training
C-HSM kim2015character | 168 | –
LightRNN | 82 | 0.19%

BillionW
Method | Runtime (hours) | Reallocation/Training
HSM chen2015strategies | 168 | –
LightRNN | 70 | 2.36%

Table 3: Runtime comparisons in order to achieve the HSM baselines' perplexity (ACLW test and BillionW test)

Method | PPL | #param
KN chelba2013one | 68 | G
HSM chen2015strategies | 85 | G
B-RNN ji2015blackout | 68 | G
LightRNN | 66 |
KN + HSM chen2015strategies | 56 |
KN + B-RNN ji2015blackout | 47 |
KN + LightRNN | 43 |

Table 4: Results on the BillionW dataset

Method | Spanish/#P | French/#P | English/#P | Czech/#P | German/#P | Russian/#P
KN botha2014compositional | 219/– | 243/– | 291/– | 862/– | 463/– | 390/–
HSM kim2015character | 186/61M | 202/56M | 236/25M | 701/83M | 347/137M | 353/200M
C-HSM kim2015character | 169/48M | 190/44M | 216/20M | 578/64M | 305/104M | 313/152M
LightRNN | 157/ | 176/ | 191/ | 558/ | 281/ | 288/

Table 5: PPL results on the test sets of the ACLW datasets. Italic results are the previous state-of-the-art. #P denotes the number of parameters.

Table 3 shows the training time our algorithm needs to achieve the same perplexity as the baselines on the two datasets. As can be seen, LightRNN saves about half of the runtime needed to reach the same perplexity as C-HSM and HSM. The table also shows the time cost of word table refinement within the whole training process. Clearly, the word reallocation part accounts for only a very small fraction of the total training time.

Figure 5 shows a set of rows of the word allocation table on the BillionW dataset after several rounds of bootstrap. Surprisingly, our approach can automatically discover semantic and syntactic relationships among words in natural languages. For example, place names are allocated together in row 832; expressions about the concept of time are allocated together in row 889; and URLs are allocated together in row 887. This automatically discovered semantic/syntactic relationship may explain why LightRNN, with such a small number of parameters, sometimes outperforms baselines that treat all words as independent of each other (i.e., that embed each word as an independent vector).

Figure 5: Case study of word allocation table

5 Conclusion and future work

In this work, we have proposed a novel algorithm, LightRNN, for natural language processing tasks. Through the 2-Component shared embedding for word representations, LightRNN achieves high efficiency in terms of both model size and running time, especially for text corpora with large vocabularies.

There are many directions to explore in the future. First, we plan to apply LightRNN to even larger corpora, such as the ClueWeb dataset, for which conventional RNN models cannot fit into a modern GPU. Second, we will apply LightRNN to other NLP tasks such as machine translation and question answering. Third, we will explore k-Component shared embedding (k > 2) and study the role of k in the tradeoff between efficiency and effectiveness. Fourth, we are cleaning up our code and will release it soon through CNTK yu2014introduction.

Acknowledgments

The authors would like to thank the anonymous reviewers for their critical and constructive comments and suggestions. This work was partially supported by the National Science Fund of China under Grant Nos. 91420201, 61472187, 61502235, 61233011 and 61373063, the Key Project of Chinese Ministry of Education under Grant No. 313030, the 973 Program No. 2014CB349303, and the Program for Changjiang Scholars and Innovative Research Team in University. We would also like to thank Professor Xiaolin Hu from the Department of Computer Science and Technology, Tsinghua National Laboratory for Information Science and Technology (TNList), for his valuable advice.

References