Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition

12/14/2017 ∙ by Jinmian Ye, et al.

Recurrent Neural Networks (RNNs) are powerful sequence modeling tools. However, when dealing with high dimensional inputs, the training of RNNs becomes computationally expensive due to the large number of model parameters. This hinders RNNs from solving many important computer vision tasks, such as Action Recognition in Videos and Image Captioning. To overcome this problem, we propose a compact and flexible structure, namely Block-Term tensor decomposition, which greatly reduces the parameters of RNNs and improves their training efficiency. Compared with alternative low-rank approximations, such as tensor-train RNN (TT-RNN), our method, Block-Term RNN (BT-RNN), is not only more concise (when using the same rank), but also able to attain a better approximation to the original RNNs with far fewer parameters. On three challenging tasks, including Action Recognition in Videos, Image Captioning and Image Generation, BT-RNN outperforms TT-RNN and the standard RNN in terms of both prediction accuracy and convergence rate. Specifically, BT-LSTM utilizes 17,388 times fewer parameters than the standard LSTM to achieve an accuracy improvement of over 15.6% in the Action Recognition task on the UCF11 dataset.


1 Introduction

Best known for sequence-to-sequence learning, Recurrent Neural Networks (RNNs) belong to a class of neural architectures designed to capture the dynamic temporal behavior of data. The vanilla fully connected RNN uses a feedback loop to memorize previous information, but it is inept at handling long sequences because the gradient vanishes exponentially over time [13, 2]. Unlike vanilla RNNs, which pass information between layers with direct matrix-vector multiplications, the Long Short-Term Memory (LSTM) introduces a number of gates and passes information with element-wise operations [14]. This improvement drastically alleviates the gradient vanishing issue; therefore LSTM and its variants, e.g., the Gated Recurrent Unit (GRU) [5], are widely used in various Computer Vision (CV) tasks [3, 22, 37] to model the long-term correlations in sequences.

Figure 1: Architecture of BT-LSTM. The redundant dense connections between the input and the hidden state are replaced by a low-rank BT representation.

The current formulation of LSTM, however, suffers from an excess of parameters, making it notoriously difficult to train and susceptible to overfitting. It can be described by the following equations:

$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$ (1)
$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$ (2)
$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$ (3)
$g_t = \tanh(W_g x_t + U_g h_{t-1} + b_g)$ (4)
$c_t = f_t \odot c_{t-1} + i_t \odot g_t$ (5)
$h_t = o_t \odot \tanh(c_t)$ (6)

where $\odot$ denotes the element-wise product, $\sigma(\cdot)$ denotes the sigmoid function and $\tanh(\cdot)$ is the hyperbolic tangent function. The weight matrices $W_*$ and $U_*$ transform the input $x_t$ and the hidden state $h_{t-1}$, respectively, into the cell update $g_t$ and the three gates $i_t$, $f_t$ and $o_t$. Please note that given an image feature vector $x_t$ fetched from a Convolutional Neural Network (CNN), the dimension of $x_t$ rises to 4096 and 1536 for VGG16 [33] and Inception-v4 [35], respectively. With $J$ hidden units, the total number of parameters in calculating the four $W_* x_t$ terms is $4 \cdot I \cdot J$, which easily reaches millions for such feature dimensions. Therefore, the giant matrix-vector multiplication, i.e., $W \cdot x_t$, is the major source of inefficiency: the current parameter-intensive design not only makes the model difficult to train, but also leads to high computational complexity and memory usage.
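To make the scale of $W \cdot x_t$ concrete, the following minimal sketch (an illustration, not the authors' code) counts the input-to-hidden parameters of a standard LSTM; the feature dimensions are the published output sizes of VGG16 and Inception-v4, while the hidden size of 256 is an assumed example value.

```python
# Minimal sketch: input-to-hidden parameter count of a standard LSTM.
# The feature sizes below are the published output dimensions of VGG16 (fc layers)
# and Inception-v4; the hidden size of 256 is an illustrative assumption.

def lstm_input_to_hidden_params(input_dim: int, hidden_dim: int) -> int:
    # Four gates (i, f, o, g), each with a dense W of shape (hidden_dim, input_dim).
    return 4 * input_dim * hidden_dim

for name, feat_dim in [("VGG16 fc feature", 4096), ("Inception-v4 feature", 1536)]:
    n = lstm_input_to_hidden_params(feat_dim, hidden_dim=256)
    print(f"{name}: {n:,} parameters in the W*x_t terms")
```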

In addition, each $W \cdot x_t$ essentially represents a fully connected operation that transforms the input vector into the hidden state vector. However, extensive research on CNNs has shown that dense connections are significantly inefficient at extracting the spatially latent local structures and local correlations naturally exhibited in images [20, 10]. Recent leading CNN architectures, e.g., DenseNet [15], ResNet [11] and Inception-v4 [35], also try to circumvent a single huge, cumbersome dense layer [36]. But discussion of improving the dense connections in RNNs remains quite limited [26, 30]. It is therefore imperative to seek a more efficient design to replace $W \cdot x_t$.

In this work, we propose a sparsely connected tensor representation, namely the Block-Term decomposition (BTD) [7], to replace the redundant and densely connected operation $W \cdot x_t$ in LSTM¹ (¹We focus on LSTM in this paper, but the proposed approach also applies to other variants such as GRU.). The Block-Term decomposition is a low-rank approximation method that decomposes a high-order tensor into a sum of multiple Tucker decomposition models [39, 44, 45, 21]. In detail, we represent the four weight matrices (i.e., $W_i$, $W_f$, $W_o$, $W_g$) and the input data $x_t$ as higher-order tensors. During RNN training, the BTD layer automatically learns inter-parameter correlations to implicitly prune the redundant dense connections rendered by $W \cdot x_t$. By plugging the new BTD layer into current RNN formulations, we present a new BT-RNN model with similar representation power but several orders of magnitude fewer parameters. The refined LSTM model with the Block-Term representation is illustrated in Fig. 1.

The major merits of BT-RNN are as follows:

  • The low-rank BTD compresses the dense connections in the input-to-hidden transformation while retaining the current design philosophy of LSTM. By reducing the number of model parameters by several orders of magnitude, BT-LSTM converges faster than the traditional LSTM architecture, significantly speeding up training.

  • Owing to the core tensors, each dimension of the input data can share weights with all the other dimensions, so the BT representation maintains strong connections between different dimensions and is better able to capture sufficient local correlations. Empirical results show that, compared with the Tensor Train model [29], the BT model has better representation power with the same number of model parameters.

  • The design of multiple Tucker models significantly reduces the sensitivity to noisy input data and widens the network, leading to a more robust RNN model. In contrast to Tensor Train based approaches [47, 28], the BT model does not suffer from the difficulty of rank setting, relieving researchers from tedious hyper-parameter tuning.

To demonstrate the performance of the BT-LSTM model, we design three challenging computer vision tasks, Action Recognition in Videos, Image Captioning and Image Generation, to quantitatively and qualitatively evaluate the proposed BT-LSTM against the baseline LSTM and other low-rank variants such as the Tensor Train LSTM (TT-LSTM). Experimental results demonstrate the promising performance of the BT-LSTM model.

2 Related Work

The poor image modeling efficiency of the full connections in the perceptron architecture, i.e., $y = W x$ [41], has been widely recognized by the Computer Vision (CV) community. The most prominent example is the great success of Convolutional Neural Networks (CNNs) in general image recognition. Instead of using the dense connections of multi-layer perceptrons, CNNs rely on sparsely connected convolutional kernels to extract latent regional features in an image. Hence, going sparse on connections is key to the success of CNNs [8, 16, 27, 12, 34]. Despite extensive discussion of efficient CNN designs, discussion of improving the dense connections in RNNs remains quite limited [26, 30].

Compared with the aforementioned explicit structural changes, low-rank methods are an orthogonal approach that implicitly prunes dense connections. Low-rank tensor methods have been successfully applied to address the redundant dense connection problem in CNNs [28, 47, 1, 38, 18]. Since the key operation in one perceptron is $W \cdot x$, Sainath et al. [31] decompose $W$ with Singular Value Decomposition (SVD), reducing the parameters in $W$ by up to 30%, but also demonstrating up to 10% accuracy loss [46]. The accuracy loss mainly results from losing high-order spatial information, as the intermediate data after image convolutions are intrinsically 4D.

To capture these high-order spatial correlations, tensor methods were recently introduced into neural networks to approximate $W \cdot x$. For example, the Tensor Train (TT) method was employed to alleviate the large computation and reduce the number of parameters [28, 47, 38]. Yu et al. [48] also used a tensor train representation to forecast long-term information. Since this approach targets long historical states, it introduces additional parameters, making training more difficult. Other tensor decomposition methods have also been applied in Deep Neural Networks (DNNs) for various purposes [19, 49, 18].

Although TT decomposition has achieved great success in addressing the dense connection problem, some limitations prevent the TT method from reaching better performance: 1) The optimal setting of TT-ranks is small in the border cores and large in the middle cores, e.g., shaped like an olive [50]. However, in most applications the TT-ranks are set equal, which hinders TT's representation ability. 2) The TT-ranks carry a strong constraint that the ranks of the border tensors must be set to 1, seriously limiting representation ability and flexibility [47, 50].

Instead of struggling to find the optimal TT-rank setting, BTD has the following advantages: 1) Tucker decomposition introduces a core tensor to represent the correlations between different dimensions, achieving better weight sharing. 2) The ranks in the core tensor can be set equal, avoiding unbalanced weight sharing across dimensions and leading to a model that is robust to different permutations of the input data. 3) BTD uses a sum of multiple Tucker models to approximate a high-order tensor, breaking one large Tucker decomposition into several smaller models, widening the network and increasing representation ability. Meanwhile, the multiple Tucker models also make the RNN model more robust to noisy input data.

3 Tensorizing Recurrent Neural Networks

The core concept of this work is to approximate $W \cdot x$ with far fewer parameters while preserving the memorization mechanism of existing RNN formulations. The technique we use for the approximation is Block Term Decomposition (BTD), which represents $W$ as a series of light-weight small tensor products. During RNN training, the BTD layer automatically learns inter-parameter correlations to implicitly prune the redundant dense connections rendered by $W \cdot x$. By plugging the new BTD layer into current RNN formulations, we present a new BT-RNN model with several orders of magnitude fewer parameters while maintaining the representation power.

This section elaborates the details of the proposed methodology. It starts with the background of tensor representations and BTD, before delving into the transformation of a regular RNN model into the BT-RNN; then we present the back-propagation procedure for the BT-RNN; finally, we analyze the time and memory complexity of the BT-RNN compared with the regular one.

3.1 Preliminaries and Background

Tensor Representation

We use boldface Euler script letters, e.g., $\mathcal{A}$, to denote a tensor. A $d$-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_d}$ represents a $d$-dimensional multiway array; thereby a vector and a matrix are a 1-order tensor and a 2-order tensor, respectively. An element in a $d$-order tensor is denoted as $\mathcal{A}_{i_1, \ldots, i_d}$.

Tensor Product and Contraction

Two tensors can perform a product on an order if the corresponding dimensions match. Let $\bullet_k$ denote the tensor-tensor product on order $k$ [17]. Given two $d$-order tensors $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_d}$ and $\mathcal{B} \in \mathbb{R}^{J_1 \times \cdots \times J_d}$ with $I_k = J_k$, the tensor product on order $k$ is:

$(\mathcal{A} \bullet_k \mathcal{B})_{i_1 \cdots i_{k-1} i_{k+1} \cdots i_d\, j_1 \cdots j_{k-1} j_{k+1} \cdots j_d} = \sum_{p=1}^{I_k} \mathcal{A}_{i_1 \cdots i_{k-1}\, p\, i_{k+1} \cdots i_d}\, \mathcal{B}_{j_1 \cdots j_{k-1}\, p\, j_{k+1} \cdots j_d}$ (7)

To simplify notation, we use $i_{<k}$ to denote the indices $i_1, \ldots, i_{k-1}$, and $i_{>k}$ to denote $i_{k+1}, \ldots, i_d$. Each tensor product is computed along one dimension, which is consistent with the matrix product.

Contraction is an extension of the tensor product [6]; it conducts tensor products on multiple orders at the same time. For example, if the dimensions of orders $k$ and $k+1$ match, we can conduct a tensor product along the $k$-th and $(k+1)$-th orders:

$(\mathcal{A} \bullet_{k,k+1} \mathcal{B})_{i_{<k}\, i_{>k+1}\, j_{<k}\, j_{>k+1}} = \sum_{p=1}^{I_k} \sum_{q=1}^{I_{k+1}} \mathcal{A}_{i_{<k}\, p\, q\, i_{>k+1}}\, \mathcal{B}_{j_{<k}\, p\, q\, j_{>k+1}}$ (8)
Block Term Decomposition (BTD)

Block Term decomposition is a combination of CP decomposition [4] and Tucker decomposition [39]. Given a $d$-order tensor $\mathcal{W} \in \mathbb{R}^{I_1 \times \cdots \times I_d}$, BTD decomposes it into $N$ block terms; each term is a tensor product between a core tensor $\mathcal{G}_n \in \mathbb{R}^{R_1 \times \cdots \times R_d}$ and $d$ factor matrices $A_n^{(k)} \in \mathbb{R}^{I_k \times R_k}$ on the $k$-th dimension, where $k = 1, \ldots, d$ and $n = 1, \ldots, N$ [7]. The formulation of BTD is as follows:

$\mathcal{W} = \sum_{n=1}^{N} \mathcal{G}_n \bullet_1 A_n^{(1)} \bullet_2 A_n^{(2)} \cdots \bullet_d A_n^{(d)}$ (9)

We call $N$ the CP-rank, $R_k$ the Tucker-rank and $d$ the Core-order. Fig. 2 demonstrates how a 3-order tensor is decomposed into $N$ block terms.

Figure 2: Block Term decomposition of a 3-order tensor. A 3-order tensor can be approximated by $N$ Tucker decompositions. We call $N$ the CP-rank, $R_k$ the Tucker-rank and $d$ the Core-order.
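As a concrete illustration of Eq. (9), the following NumPy sketch reconstructs a 3-order tensor from $N$ Tucker block terms; all shapes, ranks and the function name `bt_reconstruct` are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Sketch of Eq. (9) for a 3-order tensor: W ~ sum_n G_n x1 A_n^(1) x2 A_n^(2) x3 A_n^(3).
# Shapes are illustrative assumptions (I1, I2, I3 = 4, 5, 6; Tucker-rank R = 2; CP-rank N = 3).

def bt_reconstruct(cores, factors):
    """cores: list of N arrays with shape (R, R, R);
    factors: list of N tuples (A1, A2, A3) with A_k of shape (I_k, R)."""
    terms = []
    for G, (A1, A2, A3) in zip(cores, factors):
        # Mode-k products of the core with each factor matrix (one Tucker block term).
        terms.append(np.einsum('abc,ia,jb,kc->ijk', G, A1, A2, A3))
    return sum(terms)

N, R, shape = 3, 2, (4, 5, 6)
cores = [np.random.randn(R, R, R) for _ in range(N)]
factors = [tuple(np.random.randn(I, R) for I in shape) for _ in range(N)]
W_approx = bt_reconstruct(cores, factors)
print(W_approx.shape)  # (4, 5, 6)
```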

3.2 BT-RNN model

This section presents the core steps of the BT-RNN model. 1) We transform $x$ and $W$ into tensor representations $\mathcal{X}$ and $\mathcal{W}$; 2) then we decompose $\mathcal{W}$ into several low-rank core tensors and their corresponding factor tensors using BTD; 3) subsequently, the original product $W \cdot x$ is approximated by the tensor contraction between the decomposed weight tensor $\mathcal{W}$ and the input tensor $\mathcal{X}$; 4) finally, we present the gradient calculations amid Back Propagation Through Time (BPTT) [43, 42] to demonstrate the learning procedure of the BT-RNN model.

(a) vector to tensor
(b) matrix to tensor
Figure 3: Tensorization in the case of 3-order tensors. (a) Tensorizing a vector of length $I_1 \cdot I_2 \cdot I_3$ into a tensor of shape $I_1 \times I_2 \times I_3$; (b) tensorizing a matrix into a 3-order tensor analogously.
Tensorizing $x$ and $W$

We tensorize the input vector $x$ into a high-order tensor $\mathcal{X}$ to capture the spatial information of the input data, and we tensorize the weight matrix $W$ into the decomposed weight tensor $\mathcal{W}$ with BTD.

Formally, given an input vector $x \in \mathbb{R}^{I}$, we define a tensorization operation that maps it to the tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_d}$, where $I = \prod_{k=1}^{d} I_k$. It can be either a stack operation or a reshape operation. We use the reshape operation for tensorization because it does not duplicate elements of the data; essentially, reshaping regroups the data. Fig. 3 outlines how we reshape a vector and a matrix into 3-order tensors.
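A minimal sketch of the reshape-based tensorization described above (shapes are illustrative):

```python
import numpy as np

# Tensorization by reshape: regroup a length-(I1*I2*I3) vector into an
# (I1, I2, I3) tensor without duplicating any element. Shapes are illustrative.
I1, I2, I3 = 4, 5, 6
x = np.arange(I1 * I2 * I3, dtype=np.float32)   # input vector of length 120
X = x.reshape(I1, I2, I3)                        # 3-order input tensor
assert X.size == x.size                          # same data, regrouped
x_back = X.reshape(-1)                           # the inverse tensorization
assert np.array_equal(x, x_back)
```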

Decomposing $W$ with BTD

Given a 2-dimensional weight matrix $W \in \mathbb{R}^{I \times J}$, we can tensorize it into a $2d$-dimensional tensor $\mathcal{W} \in \mathbb{R}^{I_1 \times J_1 \times \cdots \times I_d \times J_d}$, where $I = \prod_{k=1}^{d} I_k$ and $J = \prod_{k=1}^{d} J_k$. Following the BTD in Eq. (9), we decompose $\mathcal{W}$ into:

$\mathcal{W} = \sum_{n=1}^{N} \mathcal{G}_n \bullet_1 \mathcal{A}_n^{(1)} \bullet_2 \mathcal{A}_n^{(2)} \cdots \bullet_d \mathcal{A}_n^{(d)}$ (10)

where $\mathcal{G}_n \in \mathbb{R}^{R_1 \times \cdots \times R_d}$ denotes the core tensor, $\mathcal{A}_n^{(k)} \in \mathbb{R}^{I_k \times J_k \times R_k}$ denotes the factor tensor, $N$ is the CP-rank and $d$ is the Core-order. From the mathematical property of BT's ranks [17], the Tucker-rank cannot usefully exceed the corresponding tensor dimensions; setting it larger makes it difficult for the model to gain any bonus in performance. Moreover, to obtain a robust model, in practice we set every Tucker-rank to be equal, i.e., $R_1 = \cdots = R_d = R$, to avoid unbalanced weight sharing across dimensions and to alleviate the difficulty of hyper-parameter setting.

Computation between $\mathcal{W}$ and $\mathcal{X}$

After substituting the matrix-vector product with the BT representation and the tensorized input vector, the input-to-hidden matrix-vector product takes the following form:

$\mathcal{Y} = \sum_{n=1}^{N} \mathcal{X} \bullet \big(\mathcal{G}_n \bullet_1 \mathcal{A}_n^{(1)} \bullet_2 \mathcal{A}_n^{(2)} \cdots \bullet_d \mathcal{A}_n^{(d)}\big)$ (11)

where the tensor contraction is computed along all $I_k$ dimensions of $\mathcal{X}$ and $\mathcal{W}$, yielding an output whose element-wise form has the same size as the original one. Fig. 4 illustrates the substitution.

Figure 4: Diagram of the BT representation for the matrix-vector product $W \cdot x$. We substitute the weight matrix with the BT representation, then tensorize the input vector $x$ into a tensor $\mathcal{X}$ of shape $I_1 \times \cdots \times I_d$. After the tensor contraction between the BT representation and the input tensor, we obtain the result tensor of shape $J_1 \times \cdots \times J_d$. With the reverse tensorization operation, we recover the output vector $y$.
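The contraction in Eq. (11) can be written compactly with `einsum`. The sketch below assumes 3-order factor tensors of shape $(I_k, J_k, R)$ and equal Tucker-ranks; it is an illustration of the BT forward pass, not the authors' implementation.

```python
import numpy as np

# Sketch of the BT input-to-hidden contraction (Eq. (11)) for d = 3, via einsum.
# Factor tensors A_n^(k) have shape (I_k, J_k, R); cores G_n have shape (R, R, R).
# All shapes and the contraction order are illustrative assumptions.

def bt_forward(X, cores, factors):
    Y = 0.0
    for G, (A1, A2, A3) in zip(cores, factors):
        # Contract the input over i1,i2,i3 and the core over r1,r2,r3 in one einsum.
        Y = Y + np.einsum('xyz,abc,xia,yjb,zkc->ijk', X, G, A1, A2, A3)
    return Y

I, J, R, N = (4, 5, 6), (2, 3, 2), 2, 2
X = np.random.randn(*I)
cores = [np.random.randn(R, R, R) for _ in range(N)]
factors = [tuple(np.random.randn(I[k], J[k], R) for k in range(3)) for _ in range(N)]
Y = bt_forward(X, cores, factors)
print(Y.shape)  # (2, 3, 2) -- reshaped back into a vector of length J1*J2*J3
```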
Training BT-RNN

The gradients of an RNN are computed by Back Propagation Through Time (BPTT) [43]. We derive the gradients within the BPTT framework for the proposed BT-RNN model.

Following the regular LSTM back-propagation procedure, the gradient with respect to the output can be computed by the original BPTT algorithm, where $L$ denotes the training loss. Using the same tensorization operation as for $x$, we obtain the tensorized gradient $\partial L / \partial \mathcal{Y} \in \mathbb{R}^{J_1 \times \cdots \times J_d}$. For a more intuitive understanding, we rewrite Eq. (11) in element-wise form:

$\mathcal{Y}_{j_1, \ldots, j_d} = \sum_{n=1}^{N} \sum_{i_1, \ldots, i_d} \sum_{r_1, \ldots, r_d} \mathcal{X}_{i_1, \ldots, i_d}\, (\mathcal{G}_n)_{r_1, \ldots, r_d} \prod_{k=1}^{d} (\mathcal{A}_n^{(k)})_{i_k, j_k, r_k}$ (12)

Here, for brevity, we use $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{r}$ to denote the index tuples $(i_1, \ldots, i_d)$, $(j_1, \ldots, j_d)$ and $(r_1, \ldots, r_d)$, respectively. Since the right-hand side of Eq. (12) is a scalar, the element-wise gradients for the parameters of BT-RNN are as follows:

$\dfrac{\partial \mathcal{Y}_{\mathbf{j}}}{\partial (\mathcal{A}_n^{(k)})_{i_k, j_k, r_k}} = \sum_{i_1, \ldots, i_{k-1}, i_{k+1}, \ldots, i_d} \sum_{r_1, \ldots, r_{k-1}, r_{k+1}, \ldots, r_d} \mathcal{X}_{\mathbf{i}}\, (\mathcal{G}_n)_{\mathbf{r}} \prod_{m \neq k} (\mathcal{A}_n^{(m)})_{i_m, j_m, r_m}$ (13)

$\dfrac{\partial \mathcal{Y}_{\mathbf{j}}}{\partial (\mathcal{G}_n)_{\mathbf{r}}} = \sum_{\mathbf{i}} \mathcal{X}_{\mathbf{i}} \prod_{k=1}^{d} (\mathcal{A}_n^{(k)})_{i_k, j_k, r_k}$ (14)

3.3 Hyper-Parameters and Complexity Analysis

3.3.1 Hyper-Parameters Analysis

Total #Params

BTD decomposes $W$ into $N$ block terms and each block term is a Tucker representation [18, 17]; with all Tucker-ranks set to $R$, the total number of parameters is:

$N \Big( \sum_{k=1}^{d} I_k J_k R + R^d \Big)$ (15)

By comparison, the original weight matrix contains $I \times J = \prod_{k=1}^{d} I_k \times \prod_{k=1}^{d} J_k$ parameters, which is several orders of magnitude larger than the BTD representation.
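A small sketch of Eq. (15), comparing the BT parameter count against the dense $I \times J$ matrix; the tensor factorizations below ($4096 = 8^4$, $256 = 4^4$) are illustrative assumptions.

```python
import numpy as np

# Sketch of Eq. (15): total BT-representation parameters for equal Tucker-rank R,
# compared against the dense I*J weight matrix. The factorizations are illustrative.

def bt_params(in_shape, out_shape, R, N):
    factor_params = sum(i * j * R for i, j in zip(in_shape, out_shape))
    core_params = R ** len(in_shape)
    return N * (factor_params + core_params)

in_shape, out_shape = (8, 8, 8, 8), (4, 4, 4, 4)   # I = 4096, J = 256
dense = int(np.prod(in_shape)) * int(np.prod(out_shape))
for R in (1, 2, 4):
    p = bt_params(in_shape, out_shape, R, N=1)
    print(f"R={R}: {p} params, compression ratio ~ {dense / p:,.0f}x")
```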

Figure 5: The number of parameters w.r.t. Core-order $d$ and Tucker-rank $R$, with the input and hidden dimensions fixed; the vanilla RNN contains $I \times J$ parameters. Referring to Eq. (15), when $d$ is small the first term contributes most of the parameters, while when $d$ is large the second term does. Hence the number of parameters drops sharply at first, but rises gradually as $d$ grows (except for the case of $R = 1$).
#Params w.r.t Core-order ($d$)

The Core-order $d$ is the most significant factor affecting the total number of parameters, through the term $R^d$ in Eq. (15). It determines the total dimensions of the core tensors, the number of factor tensors, and the total dimensions of the input and output tensors. If we set $d = 1$, the model degenerates to the original matrix-vector product, with the largest number of parameters and the highest complexity. Fig. 5 shows how the total number of parameters varies with the Core-order $d$. If the Tucker-rank $R > 1$, the total number of parameters first decreases as $d$ increases until it reaches a minimum, then starts increasing afterwards. This mainly results from the non-linear characteristic of $R^d$ in Eq. (15).

Hence, a proper choice of $d$ is particularly important. Enlarging $d$ is the simplest way to reduce the number of parameters. But due to the second term in Eq. (15), enlarging $d$ also increases the number of parameters in the core tensors, resulting in high computational complexity and memory usage. As the Core-order increases, each dimension of the input tensor decreases logarithmically. However, this results in the loss of important spatial information in an extremely high-order BT model. In practice, a moderate Core-order is recommended.

#Params w.r.t Tucker-rank ($R$)

The Tucker-rank $R$ controls the complexity of the Tucker decomposition. This hyper-parameter is conceptually similar to the number of singular values kept in Singular Value Decomposition (SVD). Eq. (15) and Fig. 5 also suggest that the total number of parameters is sensitive to $R$. In particular, BTD degenerates to a CP decomposition if we set $R = 1$. Since the Tucker-rank is bounded by the corresponding tensor dimensions, the choice of $R$ is limited to a small range, relieving researchers from heavy hyper-parameter tuning.

#Params w.r.t CP-rank ($N$)

The CP-rank $N$ controls the number of block terms. If $N = 1$, BTD degenerates to a Tucker decomposition. As Table 1 shows, $N$ does not affect the memory usage of the forward and backward passes, so if we need a more memory-efficient model, we can enlarge $N$ while decreasing $d$ and $R$ at the same time.

3.3.2 Computational Complexity Analysis

Complexity in Forward Process

Computed left to right, Eq. (10) reaches its computation peak at the last tensor product, where the full tensor $\mathcal{W}$ is reconstructed before being contracted with $\mathcal{X}$. However, we can reorder the computations to further reduce the total model complexity. The reordering is:

$\mathcal{Y} = \sum_{n=1}^{N} \big(\mathcal{X} \bullet_1 \mathcal{A}_n^{(1)} \bullet_2 \mathcal{A}_n^{(2)} \cdots \bullet_d \mathcal{A}_n^{(d)}\big) \bullet \mathcal{G}_n$ (16)

The main difference is that in Eq. (11) the final tensor product is computed along all $I_k$ dimensions, while in Eq. (16) it is computed along all $R_k$ dimensions. Since BTD is a low-rank decomposition method, i.e., $R_k \ll I_k$, the new computation order significantly reduces the complexity of the last tensor product, and hence the total complexity of the model. If we decrease the Tucker-rank $R$, the computation complexity decreases logarithmically in Eq. (16) while only linearly in Eq. (11).
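The reordering of Eq. (16) can be sketched with chained `einsum` calls that contract the input with the factor tensors first, so that the full weight tensor $\mathcal{W}$ is never materialized. The shapes match the forward sketch given after Fig. 4 and are assumptions for illustration.

```python
import numpy as np

# Sketch of the reordering in Eq. (16) for d = 3: contract the input with the factor
# tensors first (reducing each I_k to R), and only then with the small core tensor.
# This illustrates the computation order, not the authors' implementation.

def bt_forward_reordered(X, cores, factors):
    Y = 0.0
    for G, (A1, A2, A3) in zip(cores, factors):
        T = np.einsum('xyz,xia->yzia', X, A1)       # contract over I1
        T = np.einsum('yzia,yjb->ziajb', T, A2)     # contract over I2
        T = np.einsum('ziajb,zkc->iajbkc', T, A3)   # contract over I3
        Y = Y + np.einsum('iajbkc,abc->ijk', T, G)  # finally contract the ranks with the core
    return Y
```

Given the same cores and factors as the earlier one-shot contraction, this function returns the same $\mathcal{Y}$ while keeping every intermediate tensor small.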

Complexity in Backward Process

To derive the computational complexity of the backward process, we present the gradients in tensor product form. The gradients of the factor tensors and core tensors are:

$\dfrac{\partial L}{\partial \mathcal{A}_n^{(k)}} = \dfrac{\partial L}{\partial \mathcal{Y}} \bullet \Big( \mathcal{X} \bullet_1 \mathcal{A}_n^{(1)} \cdots \bullet_{k-1} \mathcal{A}_n^{(k-1)} \bullet_{k+1} \mathcal{A}_n^{(k+1)} \cdots \bullet_d \mathcal{A}_n^{(d)} \bullet \mathcal{G}_n \Big)$ (17)

$\dfrac{\partial L}{\partial \mathcal{G}_n} = \dfrac{\partial L}{\partial \mathcal{Y}} \bullet \big( \mathcal{X} \bullet_1 \mathcal{A}_n^{(1)} \bullet_2 \mathcal{A}_n^{(2)} \cdots \bullet_d \mathcal{A}_n^{(d)} \big)$ (18)

Since Eq. (17) and Eq. (18) follow the same form as Eq. (11), the backward computational complexity is of the same order as that of the forward pass.

Method Time Memory
RNN forward
RNN backward
TT-RNN forward
TT-RNN backward
BT-RNN forward
BT-RNN backward
Table 1: Comparison of the time complexity and memory usage of the vanilla RNN, the Tensor-Train RNN (TT-RNN) [28, 47] and our BT representation RNN (BT-RNN). In this table, the weight matrix has shape $I \times J$, and the input and hidden tensors have shapes $I_1 \times \cdots \times I_d$ and $J_1 \times \cdots \times J_d$, respectively. Both TT-RNN and BT-RNN are set to the same rank $R$.
Complexity Comparisons

We analyze the time complexity and memory usage of the vanilla RNN, the Tensor Train RNN and BT-RNN; the statistics are shown in Table 1. Both TT-RNN and BT-RNN have lower computational complexity and memory usage than the vanilla RNN, since the extra hyper-parameters ($d$, $R$, $N$) are several orders of magnitude smaller than $I$ or $J$. With the suggested small Core-order $d$, the complexities of TT-RNN and BT-RNN are comparable.

4 Experiments

RNN is a versatile and powerful modeling tool widely used in various computer vision tasks. We design three challenging computer vision tasks, Action Recognition in Videos, Image Captioning and Image Generation, to quantitatively and qualitatively evaluate the proposed BT-LSTM against the baseline LSTM and other low-rank variants such as the Tensor Train LSTM (TT-LSTM). Finally, we design a control experiment to elucidate the effects of different hyper-parameters.

4.1 Implementations

Since the operations in Eqs. (1)-(4) follow the same computation pattern, we merge them by concatenating $W_i$, $W_f$, $W_o$ and $W_g$ into one giant $W$, and likewise for $U$. This observation leads to the following simplified LSTM formulation:

$\big[\, i_t,\, f_t,\, o_t,\, g_t \,\big] = \big[\, \sigma,\, \sigma,\, \sigma,\, \tanh \,\big] \big( W x_t + U h_{t-1} + b \big)$ (19)

$c_t = f_t \odot c_{t-1} + i_t \odot g_t, \qquad h_t = o_t \odot \tanh(c_t)$ (20)

We implemented BT-LSTM on top of this simplified LSTM formulation with Keras and TensorFlow. The baseline LSTM models use the default initialization settings in Keras and TensorFlow, and we use the Adam optimizer with the same learning rate across the different tasks.
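The merged-gate formulation in Eqs. (19)-(20) can be sketched in TensorFlow as below. This is a hypothetical illustration of where the BT contraction would plug in (replacing the dense `tf.matmul(x, W)` on the input-to-hidden path); it is not the authors' released code, and all sizes are example values.

```python
import tensorflow as tf

# Minimal sketch of the merged-gate LSTM step in Eqs. (19)-(20).
# In BT-LSTM, tf.matmul(x, W) on the input-to-hidden path would be replaced
# by the low-rank BT contraction; shapes below are illustrative assumptions.

def lstm_step(x, h_prev, c_prev, W, U, b):
    # One concatenated projection for the four gates i, f, o, g.
    z = tf.matmul(x, W) + tf.matmul(h_prev, U) + b            # (batch, 4 * hidden)
    i, f, o, g = tf.split(z, num_or_size_splits=4, axis=-1)
    i, f, o = tf.sigmoid(i), tf.sigmoid(f), tf.sigmoid(o)
    g = tf.tanh(g)
    c = f * c_prev + i * g                                     # Eq. (20), cell update
    h = o * tf.tanh(c)
    return h, c

# Usage with random tensors (illustrative sizes).
batch, in_dim, hid = 2, 16, 8
x = tf.random.normal([batch, in_dim])
h0 = tf.zeros([batch, hid])
c0 = tf.zeros([batch, hid])
W = tf.random.normal([in_dim, 4 * hid])
U = tf.random.normal([hid, 4 * hid])
b = tf.zeros([4 * hid])
h1, c1 = lstm_step(x, h0, c0, W, U, b)
print(h1.shape, c1.shape)  # (2, 8) (2, 8)
```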

4.2 Quantitative Evaluations of BT-LSTM on the Task of Action Recognition in Videos

(a) Training loss of baseline LSTM, TT-LSTM and BT-LSTM.
(b) Validation Accuracy of baseline LSTM, TT-LSTM and BT-LSTM.
Figure 6: Performance of different RNN models on the Action Recognition task trained on UCF11. CR stands for Compression Ratio, R is the Tucker-rank, and Top is the highest validation accuracy observed during training. Although BT-LSTM uses 17,388 times fewer parameters than the vanilla LSTM (58.9 million), it achieves a 15.6% accuracy improvement over LSTM, and an extra 7.2% improvement over TT-LSTM with a comparable number of parameters.
Method                      Accuracy
Orthogonal Approaches
  Original [25]             0.712
  Spatial-temporal [24]     0.761
  Visual Attention [32]     0.850
RNN Approaches
  LSTM                      0.697
  TT-LSTM [47]              0.796
  BT-LSTM                   0.853
Table 2: State-of-the-art results on the UCF11 dataset reported in the literature, in comparison with our best model.

We use the UCF11 YouTube Action dataset [25] for action recognition in videos. The dataset contains 1600 video clips falling into 11 action categories. Each category contains 25 video groups, each of which contains at least 4 clips. All video clips are converted to 29.97 fps MPG files (http://crcv.ucf.edu/data/UCF11_updated_mpg.rar). We scale the original frames down, then sample 6 random frames in ascending order from each video clip as the input data. For more details on the preprocessing, please refer to [47].

We use a single LSTM cell as the model architecture to evaluate BT-LSTM against LSTM and TT-LSTM in Fig. 6. Note that there are orthogonal approaches aimed at improving the model, such as visual attention [32] and spatial-temporal modeling [24]. Since our discussion is limited to a single LSTM cell, we can always replace the LSTM cells in those higher-level models with BT-LSTM to obtain better accuracy. We set the hyper-parameters of BT-LSTM and TT-LSTM as follows: the factor tensor count, the shape of the input tensor and the hidden shape are kept the same for both models; the rank of TT-LSTM is fixed, while BT-LSTM is evaluated with various Tucker-ranks.

Fig. 6 shows the training loss and validation accuracy of BT-LSTM against LSTM and TT-LSTM under different settings, and Table 2 lists the top accuracies of the different models. From these experiments, we make the following claims:

1) Orders-of-magnitude parameter reduction: The vanilla LSTM has 58.9 million parameters in $W$, while BT-LSTM delivers better accuracy with several orders of magnitude fewer parameters. The total number of parameters in BT-LSTM follows Eq. (15). At Tucker-ranks 1, 2 and 4, BT-LSTM uses 721, 1470 and 3387 parameters, yielding compression ratios of 81693x, 40069x and 17388x, respectively.

2) Faster convergence: BT-LSTM shows a significant convergence improvement over the vanilla LSTM in terms of both training loss and validation accuracy, as shown in Fig. 6(a) and Fig. 6(b). BT-LSTM reaches 60% validation accuracy at epoch 16, whereas LSTM takes 230 epochs, a 14x convergence speedup. It is widely acknowledged that a model with fewer parameters is easier to train, so the speedup mainly results from the drastic parameter reduction. With nearly the same number of parameters, the training loss of BT-LSTM-4 also decreases faster than that of TT-LSTM-4 (epochs 0-50), substantiating that the BT model captures spatial information better than the Tensor Train model.

3) Better model efficiency: Despite the parameter reduction of several orders of magnitude, BT-LSTM achieves 15.6% higher accuracy than LSTM. In addition, BT-LSTM achieves 7.2% higher accuracy than TT-LSTM with a comparable number of parameters. Across different Tucker-ranks, BT-LSTM converges to nearly identical losses, but increasing the Tucker-rank still improves accuracy. This is consistent with intuition, since higher-rank models capture additional relevant information.

(a) LSTM, #Params:1.8M
(b) BT-LSTM, #Params:1184
Figure 7: Image Generation: generating MNIST-style digits with the LSTM and BT-LSTM based models. The results are nearly identical, while BT-LSTM uses 1577 times fewer parameters.
(a) LSTM: A train traveling down tracks next to a forest.
TT-LSTM: A train traveling down train tracks next to a forest.
BT-LSTM: A train traveling through a lush green forest.
(b) LSTM: A group of people standing next to each other.
TT-LSTM: A group of men standing next to each other.
BT-LSTM: A group of people posing for a photo.
(c) LSTM: A man and a dog are standing in the snow.
TT-LSTM: A man and a dog are in the snow.
BT-LSTM: A man and a dog playing with a frisbee.
(d) LSTM: A large elephant standing next to a baby elephant.
TT-LSTM: An elephant walking down a dirt road near trees.
BT-LSTM: A large elephant walking down a road with cars.
Figure 8: Image captioning results on the MSCOCO dataset.

4.3 Qualitative Evaluations of BT-LSTM on Tasks of Image Generation and Image Captioning

We also conduct experiments on Image Generation and Image Captioning to further substantiate the efficiency of BT-LSTM.

Task 1: Image Generation

Image generation aims to learn a latent representation from images and then generate new images of the same style from the learned model. The model for this task is the Deep Recurrent Attentive Writer (DRAW) [9]. It uses an encoder RNN to encode images into latent representations; a decoder RNN then decodes the latent representations to construct an image. We substitute the LSTM in the encoder network with our BT-LSTM.

In this task, the encoder network must capture sufficient correlations and visual features from the raw images to generate high-quality feature vectors. As shown in Fig. 7, the LSTM and BT-LSTM models generate comparable images.

Task 2: Image Captioning

Image Captioning aims to describe the content of an image. We use the model in Neural Image Caption [40] and evaluate the performance of BT-LSTM by replacing its LSTM cells.

The training dataset is MSCOCO [23], a large-scale dataset for object detection, segmentation and captioning. Each image is scaled to 299x299 in RGB, and the channel means are subtracted, before being fed into a pretrained Inception-v3 model.

Fig. 8 shows the image captions generated by BT-LSTM, TT-LSTM and LSTM. All three models generate proper sentences describing the content of an image, with a modest improvement from BT-LSTM. Since the input to the BT model is a compact feature vector merging the image embedding from Inception-v3 and the language features from a word embedding network, our model shows a qualitative improvement in captioning. The results also suggest that BT-LSTM captures local correlations missed by the traditional LSTM.

(a) Ground truth $W$, P=4096
(b) P=4096
(c) d=2, R=1, N=1, P=129
(d) d=2, R=4, N=1, P=528
(e) d=2, R=1, N=2, P=258
(f) d=4, R=4, N=1, P=384
Figure 9: The trained $W$ for different BT-LSTM settings. The closer the result is to (a), the better the learned $W$.

4.4 Sensitivity Analysis on Hyper-Parameters

There are three key hyper-parameters in BT-LSTM: the Core-order $d$, the Tucker-rank $R$ and the CP-rank $N$. To scrutinize the impact of these hyper-parameters, we design a control experiment to illustrate their effects.

We sample $y$ from the distribution defined by $y = W x$, where $W$ is a fixed ground-truth weight matrix (shown in Fig. 9(a)). Each $x$ is generated from a Gaussian distribution, and a small amount of noise is added to $y$ to avoid overfitting; $y$ is obtained by plugging $x$ back into $W x$. Given $x$ and $y$, we randomly initialize the model weights and start training. Eventually, the learned weight matrix should be similar to $W$, since $x$ and $y$ are drawn from the distribution defined by $W$. Please note that the purpose of this experiment is to evaluate the impact of different parameter settings on the BT model, although there are many other methods, such as L1 regularization and Lasso, to recover the weight matrix.
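The setup can be sketched as follows; the matrix size, sample count and noise scale are illustrative assumptions, and a plain least-squares fit stands in for the BT model merely to check that the synthetic data indeed determine $W$ (the BT experiment instead learns the factor and core tensors by gradient descent).

```python
import numpy as np

# Sketch of the synthetic recovery setup: draw Gaussian inputs x, produce targets
# y = W_true x plus small noise, then recover W from (x, y) pairs. All sizes,
# the noise scale and the least-squares baseline are illustrative assumptions.

rng = np.random.default_rng(0)
I = J = 64
W_true = rng.standard_normal((J, I))
X = rng.standard_normal((10000, I))                         # Gaussian inputs
Y = X @ W_true.T + 0.01 * rng.standard_normal((10000, J))   # targets with small noise

# Plain least-squares recovery as a sanity check; the BT model would instead
# parameterize W by its factor and core tensors and learn those.
W_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true))  # small relative error
```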

Core-order ($d$): The number of parameters goes down as $d$ grows. The parameters are reduced by about 1.3 times from Fig. 9(d) to Fig. 9(f) as $d$ increases from 2 to 4. With fewer parameters, the reconstructed $W$ deteriorates quickly. We argue that a high Core-order loses important spatial information, as each tensor dimension becomes too small to capture enough latent correlations. This result is consistent with our earlier analysis.

Tucker-rank ($R$): The Tucker-rank affects the number of parameters exponentially. Comparing Fig. 9(c) and Fig. 9(d), when $R$ increases from 1 to 4, the BT model has more parameters to capture sufficient information from the input data, yielding a more robust model.

CP-rank ($N$): The CP-rank contributes to the number of parameters linearly. Comparing Fig. 9(c) and Fig. 9(e), the latter result has less noise, showing that a proper CP-rank setting leads to a more robust model, since multiple Tucker models are used to capture information from the input data.

5 Conclusion

We proposed the Block-Term RNN architecture to address the redundancy problem in RNNs. By using Block-Term tensor decomposition to prune the connections in the input-to-hidden weight matrix of RNNs, we provide a new RNN model with fewer parameters and stronger correlation modeling between feature dimensions, leading to easier model training and improved performance. Experimental results on a video action recognition dataset show that our BT-RNN architecture not only consumes several orders of magnitude fewer parameters but also improves performance over the standard LSTM and TT-LSTM. Future work includes: 1) exploring the sparsity in the factor tensors and core tensors of the BT model to further reduce the number of model parameters; 2) concatenating hidden states and input data over a period of time to extract temporal features via tensor methods; 3) quantizing the factor tensors and core tensors to reduce memory usage.

Acknowledgment

This paper was in part supported by a grant from the Natural Science Foundation of China (No.61572111), 1000-Talent Program Startup Funding (A1098531023601041,G05QNQR004) and a Fundamental Research Fund for the Central Universities of China (No. A03017023701). Zenglin Xu is the major corresponding author.

References