Factorization tricks for LSTM networks

03/31/2017 ∙ Oleksii Kuchaiev, et al. ∙ NVIDIA

We present two simple ways of reducing the number of parameters and accelerating the training of large Long Short-Term Memory (LSTM) networks: the first is "matrix factorization by design" of the LSTM matrix into the product of two smaller matrices, and the second is partitioning of the LSTM matrix, its inputs, and its states into independent groups. Both approaches allow us to train large LSTM networks significantly faster to state-of-the-art perplexity. On the One Billion Word Benchmark we improve single-model perplexity down to 23.36.

1 Introduction

LSTM networks (Hochreiter & Schmidhuber, 1997) have been successfully used in language modeling (Jozefowicz et al., 2016; Shazeer et al., 2017), speech recognition (Xiong et al., 2016), machine translation (Wu et al., 2016), and many other tasks. However, these networks have millions of parameters, and require weeks of training on multi-GPU systems.

We introduce two modifications of the LSTM cell with projection, LSTMP (Sak et al., 2014), to reduce the number of parameters and speed up training. The first method, factorized LSTM (F-LSTM), approximates the big LSTM matrix with a product of two smaller matrices. The second method, group LSTM (G-LSTM), partitions the LSTM cell into independent groups. We test the F-LSTM and G-LSTM architectures on the task of language modeling using the One Billion Word Benchmark (Chelba et al., 2013). As a baseline, we use the BIGLSTM model without CNN inputs described by Jozefowicz et al. (2016). We train all networks for 1 week on a DGX Station system with 4 Tesla V100 GPUs, after which BIGLSTM's evaluation perplexity was 35.1. Our G-LSTM based model got 36 and our F-LSTM based model got 36.3 while using two to three times fewer RNN parameters.

1.1 Long Short-Term Memory overview

Learning long-range dependencies with Recurrent Neural Networks (RNN) is challenging due to the vanishing and exploding gradient problems (Bengio et al., 1994; Pascanu et al., 2013). To address this issue, the LSTM cell was introduced by Hochreiter & Schmidhuber (1997), with the following recurrent computation:

LSTM: (x_t, h_{t-1}, c_{t-1}) → (h_t, c_t)    (1)

where x_t is the input, h_t is the cell's state, and c_t is the cell's memory. We consider an LSTM cell with projection of size p, LSTMP, where Equation 1 is computed as follows (Sak et al., 2014; Zaremba et al., 2014). First, the cell gates (i, f, o, g) are computed:

[i, f, o, g] = [sigm, sigm, sigm, tanh](T(x_t, h_{t-1}))    (2)

where x_t ∈ R^p, h_{t-1} ∈ R^p, and T: R^{2p} → R^{4n} is an affine transform T(x_t, h_{t-1}) = W·[x_t, h_{t-1}] + b.

The next state h_t ∈ R^p and memory c_t ∈ R^n are computed using the following equations:

c_t = f ⊙ c_{t-1} + i ⊙ g
h_t = P(o ⊙ tanh(c_t))

where P: R^n → R^p is a linear projection. The major part of the LSTMP cell computation is the affine transform T, because it involves multiplication with the 4n × 2p matrix W. Thus we focus on reducing the number of parameters in W.
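To make the cell computation above concrete, here is a minimal NumPy sketch of a single LSTMP step; the dimensions p (projection size) and n (cell size) and the names W, b, and P follow the equations above, while the gate ordering, the toy sizes, and the random initialization are illustrative assumptions only, not the paper's implementation.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstmp_step(x_t, h_prev, c_prev, W, b, P):
        """One LSTMP step: gates via the affine transform T, then memory and projected state."""
        z = W @ np.concatenate([x_t, h_prev]) + b      # T: R^{2p} -> R^{4n}
        i, f, o, g = np.split(z, 4)                    # gate pre-activations, each of size n
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c_t = f * c_prev + i * g                       # new memory, size n
        h_t = P @ (o * np.tanh(c_t))                   # projected state, size p
        return h_t, c_t

    # toy sizes for illustration (the paper's models use p = 1024, n = 8192)
    p, n = 4, 8
    rng = np.random.default_rng(0)
    W, b = rng.standard_normal((4 * n, 2 * p)), np.zeros(4 * n)
    P = rng.standard_normal((p, n))
    h_t, c_t = lstmp_step(rng.standard_normal(p), np.zeros(p), np.zeros(n), W, b, P)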

1.2 Related Work

The partitioning of a layer into parallel groups was introduced by Krizhevsky et al. (2012) in AlexNet, where some convolutional layers were divided into two groups to split the model between two GPUs. Multi-group convnets have been widely used to reduce network weights and required compute, for example by Esser et al. (2016). This multi-group approach was taken to the extreme in the Xception architecture by Chollet (2016). The idea of factorizing a large convolutional layer into a stack of layers with smaller filters was used, for example, in VGG networks (Simonyan & Zisserman, 2014) and in the ResNet "bottleneck design" (He et al., 2016). Denil et al. (2013) showed that it is possible to train several different deep architectures by learning only a small number of weights and predicting the rest. In the case of LSTM networks, ConvLSTM (Shi et al., 2015) was introduced to better exploit possible spatiotemporal correlations, which is conceptually similar to grouping.

2 Models

2.1 Factorized LSTM cell

Factorized LSTM (F-LSTM) replaces the matrix W by the product of two smaller matrices W1 and W2 that essentially try to approximate W as W ≈ W2·W1, where W1 is of size r × 2p, W2 is of size 4n × r, and r < p ("factorization by design"). The key assumption here is that W can be well approximated by a matrix of rank r. Such an approximation contains fewer LSTMP parameters than the original model, r(2p + 4n) versus 8np, and, therefore, can be computed faster and synchronized faster in the case of distributed training.
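As a quick illustration of the saving, the following NumPy sketch (sizes taken from Section 3, everything else assumed for illustration) compares the parameter count of the full transform with its rank-r factorization and applies the factorized version to a concatenated input [x_t, h_{t-1}]:

    import numpy as np

    p, n, r = 1024, 8192, 512                 # projection size, cell size, factorization rank (r < p)

    full_W_params = 4 * n * 2 * p             # full W: 4n x 2p            -> 67,108,864
    factored_params = r * 2 * p + 4 * n * r   # W1: r x 2p, W2: 4n x r     -> 17,825,792 (~3.8x fewer)

    # applying the factorized transform: same output shape (4n,) as the full W
    rng = np.random.default_rng(0)
    W1, W2 = rng.standard_normal((r, 2 * p)), rng.standard_normal((4 * n, r))
    b = np.zeros(4 * n)
    xh = rng.standard_normal(2 * p)           # concatenated [x_t, h_{t-1}]
    z = W2 @ (W1 @ xh) + b                    # factorized T(x_t, h_{t-1})

Note that these counts cover only W itself; the totals in Table 1 additionally include the biases, the projection P, and both layers.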

Figure 1: Language model using: (a) 2 regular LSTM layers, (b) 2 F-LSTM layers, and (c) 2 G-LSTM layers with 2 groups in each layer. Equations inside cells show what kind of affine transforms are computed by those cells at each time step. Here T(x, h) = W[x, h] + b for the models without groups, and T^(1), T^(2) denote the corresponding per-group transforms for the model with two groups; the time index is dropped for clarity.

2.2 Group LSTM cell

This approach is inspired by the groups in AlexNet (Krizhevsky et al., 2012). We postulate that some parts of the input x_t and hidden state h_{t-1} can be thought of as independent feature groups. For example, if we use two groups, then both x_t and h_{t-1} are effectively split into two vectors concatenated together, x_t = (x_t^(1), x_t^(2)) and h_{t-1} = (h_{t-1}^(1), h_{t-1}^(2)), with h_{t-1}^(i) dependent only on x_t^(i), h_{t-2}^(i), and the cell's memory state. Therefore, for k groups Equation 2 changes to:

[i, f, o, g] = [sigm, sigm, sigm, tanh]([T^(1)(x_t^(1), h_{t-1}^(1)), ..., T^(k)(x_t^(k), h_{t-1}^(k))])    (3)

where T^(j) is group j's affine transform from R^{2p/k} to R^{4n/k}. The partitioned W will now have 8np/k parameters. This cell architecture is well suited for model parallelism since every group computation is independent. An alternative interpretation of G-LSTM layers is shown in Figure 1(c). While this might look similar to ensemble (Shazeer et al., 2017) or multi-tower (Ciregan et al., 2012) models, the key differences are: (1) the input to different groups is different and assumed independent, and (2) instead of computing an ensemble output, it is concatenated into independent pieces.
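The grouped gate computation of Equation 3 can be sketched as follows (NumPy, toy sizes; the helper name and the way the groups are sliced are our assumptions, chosen to match the dimensions stated above):

    import numpy as np

    def glstm_gates(x_t, h_prev, group_W, group_b):
        """Each group j applies its own affine transform T^(j): R^{2p/k} -> R^{4n/k}
        to its slice of (x_t, h_{t-1}); the k results are concatenated."""
        k = len(group_W)
        xs, hs = np.split(x_t, k), np.split(h_prev, k)
        outs = [W @ np.concatenate([xs[j], hs[j]]) + b
                for j, (W, b) in enumerate(zip(group_W, group_b))]
        # each chunk of size 4n/k holds that group's (i, f, o, g) pre-activations
        return np.concatenate(outs)

    p, n, k = 8, 16, 2                       # toy sizes for illustration
    group_W = [np.zeros((4 * n // k, 2 * p // k)) for _ in range(k)]
    group_b = [np.zeros(4 * n // k) for _ in range(k)]
    z = glstm_gates(np.zeros(p), np.zeros(p), group_W, group_b)
    assert z.shape == (4 * n,)
    assert sum(W.size for W in group_W) == 8 * n * p // k   # partitioned W: 8np/k parameters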

3 Experiments and Results

For testing we used the task of learning the joint probabilities over word sequences of arbitrary length n, P(w_1, ..., w_n) = ∏_{i=1}^{n} P(w_i | w_1, ..., w_{i-1}), such that "real" sentences have high probabilities compared to random sequences of words. Figure 1(a) shows the typical LSTM-based model, where first the words are embedded into a low-dimensional dense input for the RNN, then the "context" is learned using RNNs over a number of steps, and finally a softmax layer converts the RNN output into the probability distribution P(w_i | w_1, ..., w_{i-1}). We test the following models:

  • BIGLSTM - the model with projections but without CNN inputs from Jozefowicz et al. (2016),

  • BIG F-LSTM F512 - with an intermediate rank of 512 for the LSTM matrix W,

  • BIG G-LSTM G-4 - with 4 groups in both layers,

  • BIG G-LSTM G-16 - with 16 groups in both layers.

We train all models on a DGX Station with 4 GV100 GPUs for one week using the Adagrad optimizer, a projection size of 1024, a cell size of 8192, a mini-batch of 256 per GPU, sampled softmax with 8192 samples, and a 0.2 learning rate. Note that the use of projection is crucial as it helps to keep down the embedding and softmax layer sizes. Table 1 summarizes our experiments.

Judging from the training loss plots in Figure 2 in the Appendix, it is clearly visible that at the same step count, the model with more parameters wins. However, given the same amount of time, the factorized models train faster. While the difference between BIGLSTM and BIG G-LSTM-G2 is clearly visible, BIG G-LSTM-G2 contains almost two times fewer RNN parameters than BIGLSTM, trains faster and, as a result, achieves a similar evaluation perplexity within the same training time budget (1 week).

Our code is available at https://github.com/okuchaiev/f-lm

Model              Perplexity  Step     Num. of RNN parameters  Words/sec
BIGLSTM baseline   35.1        0.99M    151,060,480             33.8K
BIG F-LSTM F512    36.3        1.67M    52,494,336              56.5K
BIG G-LSTM G-2     36.0        1.37M    83,951,616              41.7K
BIG G-LSTM G-4     40.6        1.128M   50,397,184              56K
BIG G-LSTM G-8     39.4        850.4K   33,619,968              58.5K
Table 1: One Billion Words benchmark evaluation results after 1 week of training using one DGX Station with 4 Tesla V100 GPUs.
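As a sanity check on Table 1, the short script below recomputes the RNN parameter counts from the formulas above, assuming 2 LSTM layers, cell size n = 8192, projection size p = 1024, a bias of size 4n, and a projection matrix of size n × p per layer (the sizes come from Section 3; the per-layer breakdown is our reconstruction, not taken from the paper):

    n, p, layers = 8192, 1024, 2                  # cell size, projection size, LSTM layers
    overhead = n * p + 4 * n                      # projection P plus gate biases, per layer

    def biglstm():  return 4 * n * 2 * p + overhead          # full W: 4n x 2p
    def flstm(r):   return r * 2 * p + 4 * n * r + overhead  # W ~= W2 W1 of rank r
    def glstm(k):   return 8 * n * p // k + overhead         # partitioned W with k groups

    for name, per_layer in [("BIGLSTM", biglstm()), ("F-LSTM F512", flstm(512)),
                            ("G-LSTM G-2", glstm(2)), ("G-LSTM G-4", glstm(4)),
                            ("G-LSTM G-8", glstm(8))]:
        print(name, layers * per_layer)
    # prints 151,060,480 / 52,494,336 / 83,951,616 / 50,397,184 / 33,619,968,
    # matching the "Num. of RNN parameters" column of Table 1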

3.1 Future research

While one might go further and try to approximate the transform T using an arbitrary feed-forward neural network with 2p inputs and 4n outputs, during our initial experiments we did not see immediate benefits of doing so. Hence, it remains a topic of future research.

It might be possible to reduce the number of RNN parameters even further by stacking G-LSTM layers with increasing group counts on top of each other. In our second, smaller experiment, we replaced the second layer of the BIG G-LSTM-G4 network with a layer with 8 groups instead of 4, and call it BIG G-LSTM-G4-G8. We let both BIG G-LSTM-G4 and BIG G-LSTM-G4-G8 run for 1 week on 4 GPUs each, and they achieved very similar evaluation perplexities. Hence, the model with "hierarchical" groups did not lose much accuracy, ran faster, and got a better perplexity. Such "hierarchical" group layers look intriguing as they might provide a way of learning different levels of abstraction, but this remains a topic of future research.
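Using the same per-layer formula as above, a rough estimate of the parameter saving from the hierarchical variant can be sketched as follows (n and p as in Section 3; the exact layer composition of BIG G-LSTM-G4-G8 is our reading of the description above):

    n, p = 8192, 1024
    overhead = n * p + 4 * n                     # projection plus gate biases, per layer
    glstm_W = lambda k: 8 * n * p // k           # partitioned W with k groups

    g4_g4 = 2 * (glstm_W(4) + overhead)              # BIG G-LSTM-G4:    50,397,184 RNN parameters
    g4_g8 = glstm_W(4) + glstm_W(8) + 2 * overhead   # BIG G-LSTM-G4-G8: 42,008,576 RNN parameters
    print(g4_g4, g4_g8)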

Acknowledgements

We are grateful to Scott Gray and Ciprian Chelba for helping us identify and correct issues with earlier versions of this work.

References

Appendix: Training loss for 4 LSTM-like models

Figure 2: Y-axis (same for (A) and (B)): training loss, log scale. X-axis: for (A), step (mini-batch) count; for (B), hours of training (i.e., wall time). BIGLSTM baseline, BIG G-LSTM-G4, BIG G-LSTM-G16, and BIG F-LSTM-F512 were all trained for exactly one week. It is clearly visible that, at the same step count, the model with more parameters wins. On the other hand, the factorized models can do significantly more iterations in the given amount of time and therefore reach better results within the same time budget (the full extent of the X-axis for both (A) and (B) is one week).