1 Introduction
Recurrent neural networks (RNNs) are a specific type of neural network designed to model sequence data. Over the last decades, various RNN architectures have been proposed, such as Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Units (GRU) (Cho et al., 2014). They have enabled RNNs to achieve state-of-the-art performance in many applications, e.g., language models (Mikolov et al., 2010), machine translation (Sutskever et al., 2014; Wu et al., 2016), automatic speech recognition (Graves et al., 2013), image captioning (Vinyals et al., 2015), etc. However, these models often build on high dimensional input/output, e.g., the large vocabulary in language models, or very deep inner recurrent networks, leaving them with too many parameters to deploy on portable devices with limited resources. In addition, RNNs can only be executed sequentially, with each step depending on the previous hidden state, which causes large latency during inference. For server applications with large scale concurrent requests, e.g., online machine translation and speech recognition, large latency limits the number of requests each machine can process under stringent response time requirements, so much more costly computing resources are in demand for RNN based models. To alleviate the above problems, several techniques can be employed, e.g., low rank approximation (Sainath et al., 2013; Jaderberg et al., 2014; Lebedev et al., 2014; Tai et al., 2016), sparsity (Liu et al., 2015; Han et al., 2015, 2016; Wen et al., 2016), and quantization. All of them build on the redundancy of current networks and can be combined. In this work, we mainly focus on quantization based methods. More precisely, we aim to quantize all parameters into multiple binary codes.
The idea of quantizing both weights and activations was first proposed by Hubara et al. (2016a), who showed that even 1-bit binarization can achieve reasonably good performance on some visual classification tasks. Compared with the full precision counterpart, binary weights reduce the memory by a factor of 32. Moreover, the costly arithmetic operations between weights and activations can then be replaced by cheap XNOR and bit count operations (Hubara et al., 2016a), which potentially leads to much acceleration. Rastegari et al. (2016) further incorporate a real coefficient to compensate for the binarization error. They apply the method to the challenging ImageNet dataset and achieve better performance than the pure binarization of Hubara et al. (2016a). However, a large gap remains compared with the full precision networks. To bridge this gap, some recent works (Hubara et al., 2016b; Zhou et al., 2016, 2017) further employ quantization with more bits and achieve plausible performance. Meanwhile, quite a number of works, e.g., (Courbariaux et al., 2015; Li et al., 2016; Zhu et al., 2017; Guo et al., 2017), quantize the weights only. Although much memory saving can be achieved, the acceleration is very limited on modern computing devices (Rastegari et al., 2016). Among all existing quantization works, most focus on convolutional neural networks (CNNs) and pay less attention to RNNs. As mentioned earlier, the latter is also very demanding.
Recently, Hou et al. (2017) showed that binarized LSTM with preconditioned coefficients can achieve promising performance on some easy tasks such as predicting the next character. However, for RNNs with large input/output, e.g., the large vocabulary in language models, quantization remains very challenging. Both Hubara et al. (2016b) and Zhou et al. (2017) test the effectiveness of their multi-bit quantized RNNs on predicting the next word. Even with several bits, the quantized results still show a noticeable gap with the full precision ones. This motivates us to find a better method to quantize RNNs. The main contributions of this work are as follows:
We formulate multi-bit quantization as an optimization problem, in which the binary codes are learned rather than rule-based. For the first time, we observe that the codes can be optimally derived by a binary search tree once the coefficients are known in advance, see, e.g., Algorithm 1. The whole optimization is thus eased by removing the discrete unknowns, which are very difficult to handle.

We propose to use alternating minimization to tackle the quantization problem. By separating the binary codes and the real coefficients into two parts, we can solve each subproblem efficiently when the other part is fixed. With proper initialization, we only need two alternating cycles to obtain a high precision approximation, which is efficient enough to even quantize the activations online.

We systematically evaluate the effectiveness of our alternating quantization on language models. Two well-known RNN structures, i.e., LSTM and GRU, are tested with different quantization bits. Compared with the full-precision counterpart, 2-bit quantization achieves substantial memory saving and real inference acceleration on CPUs, with a reasonable loss in accuracy. With 3-bit quantization, we achieve almost no loss in accuracy, or even surpass the original model, again with large memory saving and real inference acceleration. Both results beat existing quantization works by a large margin. To illustrate that our alternating quantization is general and easy to extend, we also apply it to image classification tasks. In both RNNs and feedforward neural networks, the technique still achieves very plausible performance.
2 Existing Multi-bit Quantization Methods
Before introducing our proposed multi-bit quantization, we first summarize existing works as follows:

Uniform quantization (Rastegari et al., 2016; Hubara et al., 2016b) first scales the values into the range $[-1, 1]$. It then adopts the following $k$-bit quantization:
$$q_k(x) = 2\left(\frac{\operatorname{round}\!\left[(2^k-1)\left(\tfrac{x+1}{2}\right)\right]}{2^k-1}-\frac{1}{2}\right), \tag{1}$$
after which the values are scaled back to the original range. Such quantization is rule based and thus very easy to implement. The intrinsic benefit is that when computing the inner product of two quantized vectors, cheap bit shift and count operations can replace costly multiplications and additions. However, the method can be far from optimal when quantizing non-uniform data, which is believed to be the case for the trained weights and activations of deep neural networks (Zhou et al., 2017).
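For concreteness, a minimal NumPy sketch of such a rule-based uniform $k$-bit quantizer (for values already scaled into $[-1, 1]$) is given below; the function name and the clipping detail are our own additions, not taken from the original implementations.

```python
import numpy as np

def uniform_quantize(x, k):
    """Rule-based k-bit uniform quantization of values in [-1, 1], cf. Eq. (1)."""
    levels = 2 ** k - 1
    x01 = (np.clip(x, -1.0, 1.0) + 1.0) / 2.0   # map [-1, 1] -> [0, 1]
    q01 = np.round(levels * x01) / levels        # snap to the uniform grid
    return 2.0 * (q01 - 0.5)                     # map back to [-1, 1]

print(uniform_quantize(np.array([-0.9, -0.2, 0.1, 0.7]), k=2))
# -> values snapped to {-1, -1/3, 1/3, 1}
```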
Balanced quantization (Zhou et al., 2017) alleviates the drawbacks of uniform quantization by first equalizing the data. The method constructs $2^k$ intervals which contain roughly the same percentage of the data. It then linearly maps the center of each interval to the corresponding quantization code in (1). Although this sounds more reasonable than the uniform scheme, the affine transform on the centers can still be suboptimal. In addition, there is no guarantee that an evenly spaced partition is more suitable than a non-evenly spaced partition for a specific data distribution.
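The equalization idea can be sketched roughly as follows: place interval boundaries at equally spaced percentiles of the data and map each interval onto the uniform grid. This is only an illustrative approximation of the procedure in Zhou et al. (2017), with names of our own.

```python
import numpy as np

def balanced_quantize(x, k):
    """Illustrative percentile-based equalization followed by a uniform code mapping.
    Not the exact algorithm of Zhou et al. (2017); for intuition only."""
    n_bins = 2 ** k
    # Boundaries chosen so that each interval holds roughly the same fraction of the data.
    bounds = np.percentile(x, np.linspace(0.0, 100.0, n_bins + 1))
    idx = np.clip(np.searchsorted(bounds, x, side="right") - 1, 0, n_bins - 1)
    # Map each interval to the corresponding code on the uniform grid in [-1, 1].
    return np.linspace(-1.0, 1.0, n_bins)[idx]
```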

Greedy approximation (Guo et al., 2017) instead tries to learn the quantization by tackling the following problem:
$$\min_{\{\alpha_i, \mathbf{b}_i\}_{i=1}^{k}} \Big\|\mathbf{w} - \sum_{i=1}^{k}\alpha_i\mathbf{b}_i\Big\|^2, \quad \text{with } \mathbf{b}_i \in \{-1,+1\}^{n},\ \alpha_i \in \mathbb{R}, \tag{2}$$
where $\mathbf{w} \in \mathbb{R}^n$ denotes the full precision weight. For $k = 1$, the above problem has a closed-form solution (Rastegari et al., 2016). Greedy approximation extends this to $k$-bit ($k > 1$) quantization by sequentially minimizing the residue. That is,
$$\min_{\alpha_i, \mathbf{b}_i} \|\mathbf{r}_{i-1} - \alpha_i\mathbf{b}_i\|^2, \quad \text{with } \mathbf{r}_{i-1} = \mathbf{w} - \sum_{j=1}^{i-1}\alpha_j\mathbf{b}_j,\ \ i = 1, \ldots, k. \tag{3}$$
The optimal solution of each step is then given as
$$\mathbf{b}_i = \operatorname{sign}(\mathbf{r}_{i-1}), \qquad \alpha_i = \frac{1}{n}\|\mathbf{r}_{i-1}\|_1. \tag{4}$$
Greedy approximation is very efficient to implement on modern computing devices. Although it is not able to reach a high precision solution, the formulation of minimizing the quantization error is very promising.
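A minimal NumPy sketch of this greedy residue minimization, following Eqs. (3)-(4), could look as follows (the function name is ours):

```python
import numpy as np

def greedy_quantize(w, k):
    """Greedy k-bit quantization: at each step binarize the current residue (Eqs. (3)-(4))."""
    residue = np.asarray(w, dtype=np.float64).copy()
    alphas, codes = [], []
    for _ in range(k):
        b = np.where(residue >= 0, 1.0, -1.0)   # b_i = sign(r_{i-1}), with sign(0) -> +1
        alpha = np.abs(residue).mean()          # closed-form optimal coefficient for fixed b_i
        alphas.append(alpha)
        codes.append(b)
        residue -= alpha * b                    # move on to the next residue
    return np.array(alphas), np.stack(codes, axis=1)   # codes form an n x k matrix B

alphas, B = greedy_quantize(np.array([0.1, -0.8, 0.4, 0.55]), k=2)
w_hat = B @ alphas   # multi-bit reconstruction of w
```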

Refined greedy approximation (Guo et al., 2017) extends the above to further decrease the quantization error. In the $j$-th iteration, after minimizing problem (3), the method adds one extra step to refine all computed coefficients $\{\alpha_i\}_{i=1}^{j}$ with the least squares solution:
$$[\alpha_1, \ldots, \alpha_j] = \big((\mathbf{B}_j^{\top}\mathbf{B}_j)^{-1}\mathbf{B}_j^{\top}\mathbf{w}\big)^{\top}, \quad \text{with } \mathbf{B}_j = [\mathbf{b}_1, \ldots, \mathbf{b}_j]. \tag{5}$$
In experiments on quantizing the weights of CNNs, the refined approximation is verified to be better than the original greedy one. However, as we will show later, the refined method is still far from satisfactory in quantization accuracy.
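In NumPy, the refinement step of Eq. (5) is simply an ordinary least squares solve over the already computed binary codes; the helper below uses a name of our own, and the same one-line solve reappears in the alternating procedure of Section 3.

```python
import numpy as np

def refit_coefficients(w, B):
    """Least squares refit of the coefficients for fixed binary codes B (n x j), Eq. (5).
    Solves min_alpha || w - B @ alpha ||^2."""
    alpha, *_ = np.linalg.lstsq(B, w, rcond=None)
    return alpha
```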
Besides the general multi-bit quantization summarized above, Li et al. (2016) propose ternary quantization by extending 1-bit binarization with one more feasible state, $0$. It performs quantization by tackling $\min_{\alpha, \mathbf{t}} \|\mathbf{w} - \alpha\mathbf{t}\|^2$ with $\mathbf{t} \in \{-1, 0, +1\}^n$. However, no efficient algorithm is proposed in (Li et al., 2016). They instead empirically set the entries with absolute values below a threshold to $0$ and binarize the remaining entries as in (4). In fact, ternary quantization is a special case of the 2-bit quantization in (2), with the additional constraint $\alpha_1 = \alpha_2$. When the binary codes are fixed, the optimal coefficient $\alpha$ (or $\alpha_1 = \alpha_2$) can be derived by a least squares solution similar to (5).
In parallel to the binarized quantization discussed here, vector quantization has been applied to compress the weights of feedforward neural networks (Gong et al., 2014; Han et al., 2016). Different from ours, where all weights are directly constrained to binary codes scaled by a few coefficients, vector quantization learns a small codebook by applying k-means clustering to the weights or by conducting product quantization. The weights are then reconstructed by indexing the codebook. It has been shown that with such a technique, the number of parameters can be reduced by an order of magnitude with limited accuracy loss (Gong et al., 2014). It is possible that the multi-bit quantized binary weights can be further compressed by product quantization.
3 Our Alternating Multi-bit Quantization
Now we introduce our quantization method. We tackle the same minimization problem as (2). For simplicity, we first consider the problem with $k = 2$. Suppose that $\alpha_1$ and $\alpha_2$ are known in advance with $\alpha_1 \ge \alpha_2 \ge 0$; the quantization codes are then restricted to $v = \{-\alpha_1 - \alpha_2,\ -\alpha_1 + \alpha_2,\ \alpha_1 - \alpha_2,\ \alpha_1 + \alpha_2\}$. For any entry $w$ of $\mathbf{w}$ in problem (2), its quantization code is determined by the least distance to all codes. Consequently, we can partition the number axis into four intervals, each corresponding to one particular quantization code. The boundary between two adjacent intervals is the middle point of the two corresponding codes, i.e., $-\alpha_1$, $0$, and $\alpha_1$. Fig. 1 gives an illustration.
For the general $k$-bit quantization, suppose that $\{\alpha_i\}_{i=1}^{k}$ are known and we have all $2^k$ possible codes in ascending order, i.e., $v = \{v_1, v_2, \ldots, v_{2^k}\}$. Similarly, we can partition the number axis into $2^k$ intervals, whose boundaries are the centers of adjacent codes in $v$, i.e., $(v_j + v_{j+1})/2$. However, directly comparing each entry with all the boundaries needs $2^k - 1$ comparisons, which is very inefficient. Instead, we can make use of the ascending order in $v$. Hierarchically, we partition the codes of $v$ evenly into two ordered subsets, $v^{l} = \{v_1, \ldots, v_{2^{k-1}}\}$ and $v^{r} = \{v_{2^{k-1}+1}, \ldots, v_{2^k}\}$. If $w$ falls below the boundary between $v^{l}$ and $v^{r}$, its feasible codes are optimally restricted to $v^{l}$; otherwise they become $v^{r}$. By recursively partitioning the ordered feasible codes evenly, we can efficiently determine the optimal code for each entry with only $k$ comparisons. The whole procedure is in fact a binary search tree. We summarize it in Algorithm 1. Note that once the quantization code is found, it is straightforward to map it to the binary code $\{b_i\}_{i=1}^{k}$. Also, by maintaining a mask vector with the same size as $\mathbf{w}$ to indicate the partitions, we can operate the BST for all entries simultaneously. To give a better illustration, we show a binary tree example for $k = 2$ in Fig. 2. Note that for $k = 2$, we can even derive the optimal codes in closed form, i.e., $b_1 = \operatorname{sign}(w)$ and $b_2 = \operatorname{sign}(w - \alpha_1 b_1)$, assuming $\alpha_1 \ge \alpha_2 \ge 0$.
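For moderate $k$, the binary-search-tree assignment of Algorithm 1 can be rendered compactly in NumPy by enumerating the $2^k$ candidate codes in ascending order and binary-searching each entry against the midpoints of adjacent codes, which yields the same optimal assignment; this is an illustrative sketch with names of our own, not the authors' implementation.

```python
import itertools
import numpy as np

def optimal_codes(w, alphas):
    """Given fixed coefficients, assign each entry of w to the nearest code
    among all sums of +/- alpha_i (the same result as the BST of Algorithm 1)."""
    k = len(alphas)
    # Enumerate all 2^k candidate codes and the corresponding sign patterns.
    signs = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))   # (2^k, k)
    codes = signs @ np.asarray(alphas, dtype=np.float64)               # (2^k,)
    order = np.argsort(codes)
    signs, codes = signs[order], codes[order]
    # Interval boundaries are midpoints of adjacent codes; binary search each entry.
    bounds = (codes[:-1] + codes[1:]) / 2.0
    idx = np.searchsorted(bounds, w)        # O(k) comparisons per entry
    return signs[idx]                       # (n, k) matrix of binary codes

B = optimal_codes(np.array([0.9, -0.05, -0.6]), alphas=[0.5, 0.2])
# Each row of B is the sign pattern (b_1, b_2) whose combination is closest to that entry.
```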
Under the above observation, let us reconsider the refined greedy approximation (Guo et al., 2017) introduced in Section 2. After the computed coefficients are modified as in (5), the binary codes $\{\mathbf{b}_i\}$ are no longer optimal, yet the method keeps all of them fixed. To improve the refined greedy approximation, alternately minimizing over $\{\mathbf{b}_i\}$ and $\{\alpha_i\}$ becomes a natural choice. Once $\{\mathbf{b}_i\}$ is obtained as described above, we can optimize $\{\alpha_i\}$ as in (5). In real experiments, we find that with the greedy initialization (4), only two alternating cycles are enough to find a high precision quantization. For better illustration, we summarize our alternating minimization in Algorithm 2. Updating $\{\mathbf{b}_i\}$ requires only $k$ comparisons per entry, and updating $\{\alpha_i\}$ amounts to a small least squares solve whose matrices are built from binary operations, so for a couple of alternating cycles the total cost of quantizing $\mathbf{w}$ into $k$ bits remains modest, including the extra cost of the greedy initialization.
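Putting the pieces together, a self-contained sketch of the whole alternating procedure (greedy initialization, then a couple of cycles alternating the optimal code assignment with the least squares coefficient update) might look as follows; this is our illustrative rendering of Algorithm 2 rather than released code.

```python
import itertools
import numpy as np

def alternating_quantize(w, k, cycles=2):
    """Alternating multi-bit quantization of a vector w into k binary codes."""
    w = np.asarray(w, dtype=np.float64)
    n = w.size
    # --- Greedy initialization (Eqs. (3)-(4)). ---
    residue, alphas = w.copy(), np.zeros(k)
    B = np.ones((n, k))
    for i in range(k):
        B[:, i] = np.where(residue >= 0, 1.0, -1.0)
        alphas[i] = np.abs(residue).mean()
        residue -= alphas[i] * B[:, i]
    # --- Alternate: optimal codes for fixed alphas, then least squares alphas (Eq. (5)). ---
    signs = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))   # all 2^k sign patterns
    for _ in range(cycles):
        codes = signs @ alphas
        order = np.argsort(codes)
        bounds = (codes[order][:-1] + codes[order][1:]) / 2.0          # interval boundaries
        B = signs[order][np.searchsorted(bounds, w)]                   # nearest-code assignment
        alphas, *_ = np.linalg.lstsq(B, w, rcond=None)                 # refit the coefficients
    return alphas, B

# Example: relative reconstruction error of a 2-bit quantization.
w = np.random.randn(1024)
alphas, B = alternating_quantize(w, k=2)
rel_mse = np.mean((w - B @ alphas) ** 2) / np.mean(w ** 2)
```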
4 Applying Alternating Multi-bit Quantization to RNNs
Implementation. We first introduce the implementation details for quantizing an RNN. For simplicity, we consider a one-layer LSTM for language modeling. The goal is to predict the next word in a sequence of one-hot word tokens as follows:
$$\begin{aligned}
[\mathbf{i}_t;\ \mathbf{f}_t;\ \mathbf{o}_t;\ \mathbf{g}_t] &= [\sigma;\ \sigma;\ \sigma;\ \tanh]\big(\mathbf{W}\mathbf{x}_t + \mathbf{U}\mathbf{h}_{t-1} + \mathbf{b}\big), \\
\mathbf{c}_t &= \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \mathbf{g}_t, \qquad \mathbf{h}_t = \mathbf{o}_t \odot \tanh(\mathbf{c}_t), \\
\mathbf{p}_{t+1} &= \operatorname{softmax}(\mathbf{W}_s\mathbf{h}_t + \mathbf{b}_s),
\end{aligned} \tag{6}$$
where $\mathbf{x}_t$ is the one-hot token at step $t$, $\mathbf{p}_{t+1}$ is the predicted distribution of the next word, $\sigma(\cdot)$ and $\tanh(\cdot)$ represent the activation functions, and $\odot$ denotes element-wise multiplication. In the above formulation, the multiplications between the weight matrices ($\mathbf{W}$, $\mathbf{U}$, and $\mathbf{W}_s$) and the vectors $\mathbf{x}_t$
and $\mathbf{h}_{t-1}$ occupy most of the computation. This is also where we apply quantization. For the weight matrices, we do not quantize each full matrix as a whole but rather row by row. During the matrix-vector product, we can first execute the binary multiplication and then element-wise multiply the resulting vectors with the high precision scaling coefficients. Thus little extra computation is introduced while much more freedom is brought in to better approximate the weights. We give an illustration on the left part of Fig. 3. Due to the one-hot word tokens, $\mathbf{W}\mathbf{x}_t$ simply selects a single column of the quantized $\mathbf{W}$ and needs no further quantization. Different from the weight matrices, $\mathbf{h}_{t-1}$ depends on the input and therefore needs to be quantized online during inference. For consistent notation with existing work, e.g., (Hubara et al., 2016b; Zhou et al., 2017), we also refer to quantizing $\mathbf{h}_{t-1}$ as quantizing the activation. For a weight matrix of size $m \times n$ and an activation vector of size $n$, the standard matrix-vector product needs roughly $2mn$ full precision operations. For the quantized product between a $k_W$-bit weight matrix and a $k_h$-bit activation vector, we instead have roughly $2 k_W k_h m n$ binary operations (XNOR and bit count) plus a much smaller number of non-binary operations, coming from the online alternating approximation of the activation and the final products with the scaling coefficients. As the binary multiplication operates on 1-bit values whereas the full precision multiplication operates on 32-bit floating point values, the theoretical acceleration of the dominant term is roughly $32/(k_W k_h)$; accounting for the overhead of the online activation quantization and the coefficient products, the overall theoretical acceleration for the hidden state sizes used in our experiments remains severalfold for 2-bit and 3-bit quantization. In addition to the binary operations, the acceleration in real implementations is also largely affected by the size of the matrices, where the large memory reduction allows better utilization of the limited fast cache. We implement the binary multiplication kernel on CPUs. Compared with the much optimized Intel Math Kernel Library (MKL) full precision matrix-vector multiplication, we achieve a real severalfold acceleration with both 2-bit and 3-bit quantization. For more details, please refer to Appendix A.
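To make the bit-level arithmetic concrete, the following pure-Python sketch (Python 3.10+ for int.bit_count) computes the inner product of two {-1, +1} vectors stored as packed bit masks via XNOR and popcount, which is the core primitive behind the binary kernel; the packing layout and helper names are our own illustration, not the actual CPU kernel.

```python
def pack_signs(b):
    """Pack a {-1, +1} code vector into an integer bit mask (+1 -> bit 1, -1 -> bit 0)."""
    mask = 0
    for i, v in enumerate(b):
        if v > 0:
            mask |= 1 << i
    return mask

def binary_dot(mask_a, mask_b, n):
    """Inner product of two {-1, +1}^n vectors from their packed masks:
    agreements minus disagreements = 2 * popcount(XNOR) - n."""
    matches = (~(mask_a ^ mask_b) & ((1 << n) - 1)).bit_count()   # XNOR then popcount
    return 2 * matches - n

a = [1, -1, -1, 1]
b = [1, 1, -1, -1]
assert binary_dot(pack_signs(a), pack_signs(b), 4) == sum(x * y for x, y in zip(a, b))
```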
As indicated in the left part of Fig. 3, the binary multiplication can be conducted sequentially by associativity. Although the operation is suitable for parallel computing by conducting the multiplications synchronously, this needs extra effort for parallelization. We instead concatenate the binary codes as shown in the right part of Fig. 3. Under such a modification, we are able to make full use of the highly optimized inner parallel matrix multiplication, which opens the possibility of further acceleration. The final result is then obtained by adding all partitioned vectors together, which incurs little extra computation.
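The concatenation trick can be sketched as follows: stack the binary weight matrices along the row dimension, multiply once against the stacked binary activation codes, and then recombine the partitioned results with the coefficients. The sketch below emulates the binary arithmetic in floating point for clarity (a real kernel would use the XNOR/popcount primitive above); all names are ours, and the per-row coefficients reflect the row-by-row weight quantization described above.

```python
import numpy as np

def quantized_matvec(W_codes, W_alphas, h_codes, h_betas):
    """Approximate W @ h from multi-bit codes via one concatenated matrix product.
    W_codes: (k_W, m, n) entries in {-1,+1};  W_alphas: (m, k_W) per-row coefficients;
    h_codes: (k_h, n)    entries in {-1,+1};  h_betas:  (k_h,)   coefficients."""
    k_W, m, n = W_codes.shape
    k_h = h_codes.shape[0]
    # One big product over the stacked codes (parallel friendly) ...
    big = W_codes.reshape(k_W * m, n) @ h_codes.T          # (k_W * m, k_h)
    big = big.reshape(k_W, m, k_h)
    # ... then recombine the partitions with the scaling coefficients.
    out = np.zeros(m)
    for i in range(k_W):
        for j in range(k_h):
            out += W_alphas[:, i] * h_betas[j] * big[i, :, j]
    return out
```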
Training. As first proposed by Courbariaux et al. (2015), during the training of a quantized neural network, directly adding the moderately small gradients to the quantized weights results in no change to them. So they maintain full precision weights to accumulate the gradients and apply quantization in every mini-batch. In fact, the whole procedure can be mathematically formulated as a bilevel optimization (Colson et al., 2007) problem:
$$\min_{\mathbf{w}} \ f(\hat{\mathbf{w}}) \quad \text{s.t.} \quad \hat{\mathbf{w}} = \sum_{i=1}^{k}\alpha_i^{*}\mathbf{b}_i^{*}, \ \ \{\alpha_i^{*}, \mathbf{b}_i^{*}\} = \operatorname*{argmin}_{\{\alpha_i, \mathbf{b}_i\}} \Big\|\mathbf{w} - \sum_{i=1}^{k}\alpha_i\mathbf{b}_i\Big\|^2 . \tag{7}$$
Denote the quantized weight as $\hat{\mathbf{w}}$. In the forward propagation, we derive $\hat{\mathbf{w}}$ from the full precision $\mathbf{w}$ in the lower-level problem and apply it to the upper-level function $f$, i.e., the RNN in this paper. During the backward propagation, the derivative $\partial f / \partial \hat{\mathbf{w}}$ is propagated back to $\mathbf{w}$ through the lower-level problem. Due to the discreteness of the binary codes, it is very hard to model the implicit dependence of $\hat{\mathbf{w}}$ on $\mathbf{w}$, so we also adopt the "straight-through estimator" (Courbariaux et al., 2015), i.e., $\partial f / \partial \mathbf{w} = \partial f / \partial \hat{\mathbf{w}}$. To compute the derivative with respect to the quantized hidden state, the same trick is applied. During training, we find the same phenomenon as Hubara et al. (2016b): some entries of the hidden state can grow very large, becoming outliers that harm the quantization. Here we simply clip the hidden state into a fixed range before quantizing it.
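In a modern autograd framework, the straight-through estimator can be expressed as a custom function that quantizes in the forward pass and passes the gradient through unchanged in the backward pass. The PyTorch-style sketch below uses 1-bit row-wise binarization for brevity; the multi-bit version would simply replace the forward quantizer with the alternating procedure of Section 3. This is our illustration, not the authors' training code.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: row-wise 1-bit quantization of the weight. Backward: straight-through."""

    @staticmethod
    def forward(ctx, w):
        alpha = w.abs().mean(dim=1, keepdim=True)   # per-row scaling coefficient
        return alpha * torch.sign(w)                # quantized weight w_hat

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                             # pass d f / d w_hat straight to w

# Inside a training step: run the RNN on the quantized weight, but let the optimizer
# update the full precision weight w with the straight-through gradient.
# w_hat = BinarizeSTE.apply(w)
```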
5 Experiments on Language Models
In this section, we conduct quantization experiments on language models. The two most well-known recurrent neural networks, i.e., LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014), are evaluated. As they are trained to predict the next word, performance is measured by the perplexity per word (PPW) metric. For all experiments, we initialize from the pretrained full precision model and train with vanilla SGD. Every epoch we evaluate on the validation dataset and record the best value; when the validation error exceeds the best record, we decrease the learning rate by a constant factor. Training is terminated once the learning rate becomes sufficiently small or the maximum number of epochs is reached. The gradient norm is clipped, the network is unrolled for a fixed number of time steps, and we regularize with standard dropout (Zaremba et al., 2014). For simplicity of notation, we denote the methods using uniform, balanced, greedy, refined greedy, and our alternating quantization as Uniform, Balanced, Greedy, Refined, and Alternating, respectively.

Table 1: Relative MSE of the quantized weight matrices and the corresponding testing PPW for LSTM on PTB, comparing Uniform, Balanced, Greedy, Refined, and Alternating (ours) against the full precision (FP) model under varying weight bits (W-Bits).
Table 2: Relative MSE of the quantized weight matrices and the corresponding testing PPW for GRU on PTB, with the same methods and settings as Table 1.
Table 3: Testing PPW of LSTM and GRU on PTB with both weights and activations quantized (W-Bits / A-Bits), comparing Uniform, Balanced, Refined, and Alternating (ours) against the full precision (FP/FP) model.
Penn Tree Bank. We first conduct experiments on the Penn Tree Bank (PTB) corpus (Marcus et al., 1993), using the standard preprocessed splits with a 10K vocabulary (Mikolov, 2012). The PTB dataset contains roughly 929K training tokens, 73K validation tokens, and 82K test tokens. For fair comparison with existing works, we also use LSTM and GRU with one hidden layer of the same size as in those works. To get a first look at the approximation ability of the different quantization methods detailed in Section 2, we first conduct experiments by directly quantizing the trained full precision weights (with neither quantization of activations nor retraining). Results on LSTM and GRU are shown in Table 1 and Table 2, respectively. The left parts record the relative mean squared error of the quantized weight matrices with respect to the full precision ones. We can see that our proposed Alternating attains much lower error across all bit widths. We also measure the testing PPW for the quantized weights, as shown in the right parts of Tables 1 and 2. The results are consistent with the left parts: smaller errors result in lower testing PPW. Note that Uniform and Balanced quantization are rule-based and do not aim at minimizing the error, so they give much worse results under direct approximation. We also repeat the experiment on the other datasets; for both LSTM and GRU, the results are very similar to those here.
We then conduct experiments by quantizing both weights and activations, training with a fixed batch size. The final results are shown in Table 3. Besides comparing with the existing works, we also run Refined as a competitive baseline. We do not include Greedy as it is already shown to be much inferior to the refined one, see, e.g., Tables 1 and 2. As Table 3 shows, our full precision model attains lower PPW than the existing works. More importantly, when considering the gap between the quantized model and the full precision one, our alternating quantized neural network is still far better than existing works, i.e., Uniform (Hubara et al., 2016b) and Balanced (Zhou et al., 2017). Compared with Refined, our Alternating quantization achieves comparable performance using one bit fewer on weights or activations. In other words, under the same tolerance of accuracy drop, Alternating executes faster and uses less memory than Refined. We can also see that our weight/activation quantized LSTM can achieve even better performance than the full precision one. A possible explanation is the regularization introduced by quantization (Hubara et al., 2016b).
Table 4: Testing PPW of quantized LSTM and GRU on WikiText-2 (W-Bits / A-Bits), comparing Refined and Alternating (ours) against the full precision (FP/FP) model.
Table 5: Testing PPW of quantized LSTM and GRU on Text8 (W-Bits / A-Bits), comparing Refined and Alternating (ours) against the full precision (FP/FP) model.
WikiText-2 (Merity et al., 2017) is a dataset released recently as an alternative to PTB. It contains roughly 2M training tokens and has a vocabulary of 33K words, i.e., it is roughly two times larger in dataset size and three times larger in vocabulary than PTB. We train with one hidden layer and a fixed batch size. The results are shown in Table 4. Similar to PTB, our Alternating can use one bit fewer to attain comparable or even lower PPW than Refined.
Text8. To determine whether Alternating remains effective on a larger dataset, we perform experiments on the Text8 corpus (Mikolov et al., 2014). Here we follow the same setting as (Xie et al., 2017): the first 90M characters are used for training, the next 5M for validation, and the final 5M for testing. We also preprocess the data by mapping all words which appear 10 or fewer times to the unknown token. We train LSTM and GRU with one hidden layer each. The results are shown in Table 5. For LSTM on the left part, Alternating achieves excellent performance: with one bit fewer for weights and activations, it still exceeds Refined. Its result is even better than that reported in (Xie et al., 2017), where an LSTM with added noising schemes for regularization attains a higher testing PPW. For GRU on the right part, although Alternating is much better than Refined, the low-bit quantization still has a gap with the full precision model. We attribute this to the unified setting of hyperparameters across all experiments; with hyperparameters tuned specifically for this dataset, one may close that gap.
Note that our alternating quantization is a general technique; it is not only suitable for the language models here. For a comprehensive verification, we apply it to image classification tasks. In both RNNs and feedforward neural networks, our alternating quantization also achieves the lowest testing error among all compared methods. Due to space limitation, we defer the results to Appendix B.
6 Conclusions
In this work, we address the limitations of RNNs, i.e., large memory and high latency, by quantization. We formulate the quantization as minimizing the approximation error. Based on the key observation that the binary codes can be derived optimally once the coefficients are fixed (and vice versa), a simple yet effective alternating method is proposed. We apply it to quantize LSTM and GRU on language models. With 2-bit weights and activations, we achieve only a reasonable accuracy loss compared with the full precision counterpart, together with a large reduction in memory and real acceleration on CPUs. With 3-bit quantization, we can attain comparable or even better results than the full precision counterpart, still with a substantial reduction in memory and real acceleration. Both beat existing works by a large margin. We also apply our alternating quantization to image classification tasks. In both RNNs and feedforward neural networks, the method still achieves very plausible performance.
7 Acknowledgements
We would like to thank the reviewers for their suggestions on the manuscript. Zhouchen Lin is supported by National Basic Research Program of China (973 Program) (grant no. 2015CB352502), National Natural Science Foundation (NSF) of China (grant nos. 61625301 and 61731018), Qualcomm, and Microsoft Research Asia. Hongbin Zha is supported by Natural Science Foundation (NSF) of China (No. 61632003).
References
 Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014.
 Colson et al. (2007) Benoît Colson, Patrice Marcotte, and Gilles Savard. An overview of bilevel optimization. Annals of Operations Research, 153(1):235–256, 2007.

 Cooijmans et al. (2017) Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, and Aaron Courville. Recurrent batch normalization. In ICLR, 2017.
 Courbariaux et al. (2015) Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. In NIPS, pp. 3123–3131, 2015.
 Gong et al. (2014) Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv:1412.6115, 2014.
 Graves et al. (2013) Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, pp. 6645–6649. IEEE, 2013.
 Guo et al. (2017) Yiwen Guo, Anbang Yao, Hao Zhao, and Yurong Chen. Network sketching: Exploiting binary structure in deep CNNs. In CVPR, 2017.
 Han et al. (2015) Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In NIPS, pp. 1135–1143, 2015.
 Han et al. (2016) Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR, 2016.
 Hochreiter & Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
 Hou et al. (2017) Lu Hou, Quanming Yao, and James T Kwok. Loss-aware binarization of deep networks. In ICLR, 2017.
 Hubara et al. (2016a) Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In NIPS, pp. 4107–4115, 2016a.
 Hubara et al. (2016b) Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv:1609.07061, 2016b.
 Ioffe & Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448–456, 2015.
 Jaderberg et al. (2014) Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv:1405.3866, 2014.
 Kingma & Ba (2015) Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
 Lebedev et al. (2014) Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. arXiv:1412.6553, 2014.
 Li et al. (2016) Fengfu Li, Bo Zhang, and Bin Liu. Ternary weight networks. arXiv:1605.04711, 2016.
 Li et al. (2017) Zefan Li, Bingbing Ni, Wenjun Zhang, Xiaokang Yang, and Wen Gao. Performance guaranteed network acceleration via high-order residual quantization. In ICCV, pp. 2584–2592, 2017.
 Liu et al. (2015) Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In CVPR, pp. 806–814, 2015.
 Marcus et al. (1993) Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
 Merity et al. (2017) Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In ICLR, 2017.
 Mikolov (2012) Tomáš Mikolov. Statistical Language Models Based on Neural Networks. PhD thesis, Brno University of Technology, 2012.
 Mikolov et al. (2010) Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, pp. 1045–1048, 2010.
 Mikolov et al. (2014) Tomáš Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc’Aurelio Ranzato. Learning longer memory in recurrent neural networks. arXiv:1412.7753, 2014.
 Rastegari et al. (2016) Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, pp. 525–542. Springer, 2016.
 Sainath et al. (2013) Tara N Sainath, Brian Kingsbury, Vikas Sindhwani, Ebru Arisoy, and Bhuvana Ramabhadran. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. In ICASSP, pp. 6655–6659. IEEE, 2013.
 Simonyan & Zisserman (2015) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
 Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, pp. 3104–3112, 2014.
 Tai et al. (2016) Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, and Weinan E. Convolutional neural networks with low-rank regularization. In ICLR, 2016.
 Vinyals et al. (2015) Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In CVPR, pp. 3156–3164, 2015.
 Wen et al. (2016) Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In NIPS, pp. 2074–2082, 2016.
 Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144, 2016.
 Xie et al. (2017) Ziang Xie, Sida I Wang, Jiwei Li, Daniel Lévy, Aiming Nie, Dan Jurafsky, and Andrew Y Ng. Data noising as smoothing in neural network language models. In ICLR, 2017.
 Zaremba et al. (2014) Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv:1409.2329, 2014.
 Zhou et al. (2017) Shu-Chang Zhou, Yu-Zhi Wang, He Wen, Qin-Yao He, and Yu-Heng Zou. Balanced quantization: An effective and efficient approach to quantized neural networks. Journal of Computer Science and Technology, 32(4):667–682, 2017.
 Zhou et al. (2016) Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv:1606.06160, 2016.
 Zhu et al. (2017) Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. In ICLR, 2017.
Appendix A Binary Matrix-Vector Multiplication on CPUs
Table 6: Executing time of the binary matrix-vector multiplication kernel on CPUs for two weight sizes and different W-Bits / A-Bits, reporting the total time, the online quantization time, their ratio, and the acceleration over the full precision (FP/FP) MKL baseline.
In this section, we discuss the implementation of the binary multiplication kernel on CPUs. The binary multiplication is divided into two steps: an entry-wise XNOR operation (corresponding to the entry-wise product in the full precision multiplication) and a bit count operation for accumulation (corresponding to summing all multiplied entries in the full precision multiplication). We test it on an Intel Xeon E5-2682 v4 @ 2.50 GHz CPU. For the XNOR operation, we use SIMD (single instruction, multiple data) instructions, which process many bits simultaneously; for the bit count operation, we use the hardware population count instruction. (Both steps can be further accelerated by upcoming wider SIMD instructions that process even more bits per cycle.) We compare with the much optimized Intel Math Kernel Library (MKL) full precision matrix-vector multiplication and execute all codes in single-thread mode. We conduct experiments at two scales, which respectively correspond to the hidden state product and the softmax layer of the Text8 model during single-sample inference (see Eq. (6)). The results are shown in Table 6. We can see that our alternating quantization step only accounts for a small portion of the total executing time, especially for the larger scale matrix-vector multiplication. Compared with the full precision multiplication, the binary multiplication achieves a clear acceleration with 2-bit quantization and a smaller, but still significant, acceleration with 3-bit quantization. Note that this is only a simple test on a CPU; our alternating quantization method can also be extended to GPUs, ASICs, and FPGAs.
Appendix B Image Classification
Sequential MNIST. As a simple illustration that our alternating quantization is not limited to text, we conduct experiments on the sequential MNIST classification task (Cooijmans et al., 2017). The dataset consists of a training set of 60K and a test set of 10K 28×28 gray-scale images. Here we hold out the last portion of the training images for validation. At each time step, we sequentially feed one row of the image as the input, which results in a total of 28 time steps. We use an LSTM with one hidden layer and the same optimization hyperparameters as for the language models. Besides the weights and activations, the inputs are quantized as well. The testing error rates with quantized inputs, weights, and activations are shown in Table 7, where our alternating quantization method still achieves plausible performance on this task.
Table 7: Testing error rates on sequential MNIST for Full Precision, Refined (Guo et al., 2017), and Alternating (ours).
MLP on MNIST. The alternating quantization proposed in this work is a general technique: it is suitable not only for RNNs but also for feedforward neural networks. As an example, we first conduct a classification task on MNIST and compare with existing work (Li et al., 2017). The method proposed in (Li et al., 2017) is intrinsically a greedy multi-bit quantization method. For fair comparison, we follow the same setting. We use an MLP consisting of fully connected hidden layers and an L2-SVM output layer. No convolution, preprocessing, data augmentation, or pretraining is used. We also use ADAM (Kingma & Ba, 2015) with an exponentially decaying learning rate and Batch Normalization (Ioffe & Szegedy, 2015) with a batch size of 100. The testing error rates with quantized inputs, weights, and activations are shown in Table 8. Among all the compared multi-bit quantization methods, our alternating one achieves the lowest testing error.

Table 8: Testing error rates of the MLP on MNIST for Full Precision, Greedy (reported in (Li et al., 2017)), Refined (Guo et al., 2017), and Alternating (ours).
CNN on CIFAR-10. We then conduct experiments on CIFAR-10 and follow the same setting as (Hou et al., 2017). That is, we use 45000 images for training, another 5000 for validation, and the remaining 10000 for testing. The images are preprocessed with global contrast normalization and ZCA whitening. We also use a VGG-like architecture (Simonyan & Zisserman, 2015) consisting of three blocks of C3 convolution layers, each block followed by an MP2 max-pooling layer, and then fully connected (FC) layers with a 10-way SVM output, where C3 denotes a 3×3 convolution layer and MP2 a 2×2 max-pooling layer. Batch Normalization and ADAM are used, with the learning rate decayed by a constant factor every fixed number of epochs. The testing error rates with quantized weights and activations are shown in Table 9, where our alternating method again achieves the lowest testing error rate among all compared quantization methods.