Feedforward Sequential Memory Networks: A New Structure to Learn Long-term Dependency

12/28/2015, by Shiliang Zhang et al. (USTC and York University)

In this paper, we propose a novel neural network structure, namely the feedforward sequential memory network (FSMN), to model long-term dependency in time series without using recurrent feedback. The proposed FSMN is a standard fully-connected feedforward neural network equipped with some learnable memory blocks in its hidden layers. The memory blocks use a tapped-delay line structure to encode long context information into a fixed-size representation as a short-term memory mechanism. We have evaluated the proposed FSMNs on several standard benchmark tasks, including speech recognition and language modelling. Experimental results have shown that FSMNs significantly outperform the conventional recurrent neural networks (RNN), including LSTMs, in modeling sequential signals like speech or language. Moreover, FSMNs can be learned much more reliably and faster than RNNs or LSTMs due to their inherently non-recurrent model structure.

Code repository: iNCML-DNNLM, a CUDA-C implementation of FOFE and FSMN.

1 Introduction

For a long time, artificial neural networks (ANN) have been widely regarded as an effective learning machine for learning feature representations from data to perform pattern classification and regression tasks. In recent years, as more powerful computing resources (e.g., GPUs) have become readily available and more and more real-world data are being generated, deep learning (LeCun et al., 2015; Schmidhuber, 2015) has revived as an active research area in machine learning. Deep learning aims to learn neural networks with a deep architecture consisting of many hidden layers between the input and output layers, and thousands of nodes in each layer. The deep network architecture can build hierarchical representations with highly non-linear transformations to extract complex structures, which is similar to the human information processing mechanism (e.g., vision and speech). Depending on how the networks are connected, there exist various types of deep neural networks, such as feedforward neural networks (FNN) and recurrent neural networks (RNN).

FNNs are organized in a layered structure, consisting of an input layer, multiple hidden layers and an output layer. The output of a hidden layer is a weighted sum of its inputs from the previous layer, followed by a non-linear transformation. Traditionally, the sigmoidal nonlinearity, i.e., $f(x) = 1/(1+e^{-x})$, has been widely used. Recently, the most popular non-linear function is the so-called rectified linear unit (ReLU), i.e., $f(x) = \max(0, x)$ (Jarrett et al., 2009; Nair & Hinton, 2010). In many real-world applications, it has been experimentally shown that ReLUs can learn deep networks more efficiently, making it possible to train a deep supervised network without any unsupervised pre-training. The two popular FNN architectures are fully-connected deep neural networks (DNN) and convolutional neural networks (CNN). The structure of a DNN is a conventional multi-layer perceptron with many hidden layers, where units from two adjacent layers are fully connected, but no connection exists among the units within the same layer. On the other hand, inspired by the classic notions of simple cells and complex cells in visual neuroscience (Hubel & Wiesel, 1962), CNNs are designed to hierarchically process data represented in the form of multiple location-sensitive arrays, such as images. The use of local connections, weight sharing and pooling makes CNNs insensitive to small shifts and distortions in the raw data. Therefore, CNNs are widely used in a variety of real applications, including document recognition (LeCun et al., 1998), image classification (Ciresan et al., 2011; Krizhevsky et al., 2012), face recognition (Lawrence et al., 1997; Taigman et al., 2014) and speech recognition (Abdel-Hamid et al., 2012; Sainath et al., 2013).

When neural networks are applied to sequential data such as language, speech and video, it is crucial to model the long-term dependency in time series. Recurrent neural networks (RNN) (Elman, 1990) are designed to capture long-term dependency within sequential data using a simple mechanism of recurrent feedback. Moreover, bidirectional RNNs (Schuster & Paliwal, 1997) have been proposed to incorporate context information from both directions (the past and the future) in a sequence. RNNs can learn to model sequential data over an extended period of time, store the memory in the network weights, and carry out rather complicated transformations on the sequential data. RNNs have been theoretically proved to be Turing-complete (Siegelmann & Sontag, 1995). As opposed to FNNs, which can only learn to map a fixed-size input to a fixed-size output, RNNs can in principle learn to map from one variable-length sequence to another. While RNNs are theoretically powerful, their learning relies on the so-called back-propagation through time (BPTT) (Werbos, 1990) due to the internal recurrent cycles. BPTT significantly increases the computational complexity of learning, and even worse, it may cause many problems in learning, such as gradient vanishing and exploding (Bengio et al., 1994).

Therefore, some new architectures have been proposed to alleviate these problems. For example, the long short-term memory (LSTM) model (Hochreiter & Schmidhuber, 1997; Gers et al., 2000) is an enhanced RNN architecture that implements the recurrent feedbacks using various learnable gates, which ensure that the gradients can flow back to the past more effectively. LSTMs have yielded promising results in many applications, such as sequence modeling (Graves, 2013), machine translation (Cho et al., 2014), speech recognition (Graves et al., 2013; Sak et al., 2014) and many others. More recently, a simplified model called the gated recurrent unit (GRU) (Cho et al., 2014) was proposed and reported to achieve performance similar to LSTMs (Chung et al., 2014). Finally, in the past year, there have been research efforts to use various forms of explicit memory units to construct neural computing models with longer-term memory (Graves et al., 2014; Weston et al., 2014). For example, the so-called neural Turing machines (NTM) (Graves et al., 2014) improve the memory of neural networks by coupling them with external memory resources, and can learn to sort a small set of numbers as well as perform other symbolic manipulation tasks. Similarly, memory networks (Weston et al., 2014) employ a memory component that supports some learnable read and write operations.

Compared with FNNs, an RNN is deep in time, so it is able to capture long-term dependency in sequences. Unfortunately, the high computational complexity of learning makes it difficult to scale RNN or LSTM based models to larger tasks. Because the learning of FNNs is much easier and faster, it would be preferable to use a feedforward structure to learn the long-term dependency in sequences. A straightforward attempt is the so-called unfolded RNN (Saon et al., 2014), where an RNN is unfolded in time for a fixed number of time steps. The unfolded RNN requires training time comparable to standard FNNs while achieving better performance than FNNs. However, the context information learned by unfolded RNNs is still very limited due to the limited number of unfolding steps in time. Moreover, it seems quite difficult to derive an unfolded version for more complex recurrent architectures, such as LSTM.

In this work, we propose a simple structure, namely the feedforward sequential memory network (FSMN), which can effectively model long-term dependency in sequential data without using any recurrent feedback. The proposed FSMN is inspired by the filter design knowledge in digital signal processing (Oppenheim et al., 1989) that any infinite impulse response (IIR) filter can be well approximated by a high-order finite impulse response (FIR) filter. Because the recurrent layer in RNNs can be conceptually viewed as a first-order IIR filter, it should be possible to approximate it well by a high-order FIR filter. Therefore, we extend the standard feedforward fully connected neural network by augmenting its hidden layers with some memory blocks, which adopt a tapped-delay line structure as in FIR filters. As a result, the overall FSMN remains a pure feedforward structure, so it can be learned in a much more efficient and stable way than RNNs. The learnable FIR-like memory blocks in FSMNs may be used to encode long context information into a fixed-size representation, which helps the model capture long-term dependency. We have evaluated FSMNs on several benchmark tasks in the areas of speech recognition and language modeling, at which RNNs or LSTMs currently excel. For language modeling tasks, the proposed FSMN based language models outperform not only the standard FNNs but also the popular RNNs and LSTMs by a significant margin. As for speech recognition, experiments on the standard Switchboard (SWB) task show that FSMNs can even outperform the state-of-the-art bidirectional LSTMs (Sak et al., 2014) in accuracy, while the training process is accelerated by more than 3 times. Furthermore, the proposed FSMNs introduce much smaller latency than bidirectional LSTMs, making them suitable for many real-time applications.

The rest of this paper is organized as follows. In section 2, we introduce the architecture of the proposed FSMN model and compare it with the conventional RNNs. In section 3, we present the learning algorithm for FSMNs and an efficient implementation on GPUs. Experimental results on speech recognition and language modelling are given and discussed in section 4. Finally, the paper is concluded with our findings and future work.

2 Feedforward Sequential Memory Networks

In this section, we will introduce the architecture of feedforward sequential memory networks (FSMN), see (Zhang et al., 2015b) for an earlier short description.

2.1 Model Description of FSMNs

Figure 1: Illustration of a feedforward sequential memory network (FSMN) and its tapped-delay memory block. (Each block stands for a delay or memory unit)

The FSMN is essentially a standard feedforward fully connected neural network with some memory blocks appended to its hidden layers. For instance, Figure 1 (a) shows an FSMN with one memory block added to its $\ell$-th hidden layer. The memory block, as shown in Figure 1 (b), is used to encode the current and $N$ previous activations of the hidden layer into a fixed-size representation (called an $N$-th order FSMN), which is fed into the next hidden layer along with the current hidden activation. Depending on the encoding method used, we propose two different variants: i) scalar FSMNs using scalar encoding coefficients (sFSMN for short); ii) vectorized FSMNs using vector encoding coefficients (vFSMN for short).

Given an input sequence denoted as $X = \{\mathbf{x}_1, \cdots, \mathbf{x}_T\}$, where each $\mathbf{x}_t$ represents the input data at time instance $t$, we further denote the corresponding outputs of the $\ell$-th hidden layer for the whole sequence as $H^\ell = \{\mathbf{h}_1^\ell, \cdots, \mathbf{h}_T^\ell\}$, with $\mathbf{h}_t^\ell \in \mathbb{R}^{D_\ell}$. For an $N$-th order scalar FSMN, at each time instant $t$, we use a set of scalar coefficients, $\{a_i^\ell\}$, to encode $\mathbf{h}_t^\ell$ and its previous $N$ terms at the $\ell$-th hidden layer into a fixed-sized representation, $\tilde{\mathbf{h}}_t^\ell$, as the output from the memory block at time $t$:

$\tilde{\mathbf{h}}_t^\ell = \sum_{i=0}^{N} a_i^\ell \cdot \mathbf{h}_{t-i}^\ell$   (1)

where $\{a_i^\ell \,|\, 0 \le i \le N\}$ denote the time-invariant encoding coefficients. It is possible to use other nonlinear encoding functions for the memory blocks. In this work, we only consider linear functions for simplicity.

As for the vectorized FSMN (vFSMN), we instead use a group of vectors to encode the history as follows:

$\tilde{\mathbf{h}}_t^\ell = \sum_{i=0}^{N} \mathbf{a}_i^\ell \odot \mathbf{h}_{t-i}^\ell$   (2)

where $\odot$ denotes element-wise multiplication of two equally-sized vectors and the coefficient vectors are denoted as $\{\mathbf{a}_i^\ell \,|\, 0 \le i \le N\}$.

Obviously, all hidden nodes share the same group of encoding coefficients in a scalar FSMN while a vectorized FSMN adopts different encoding coefficients for different hidden nodes, which may significantly improve the model capacity. However, a scalar FSMN has the advantage that it only introduces very few new parameters to the model and thus it can be expanded to a very high order almost without any extra cost.
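To make the two encoding schemes concrete, here is a minimal NumPy sketch of the unidirectional memory blocks in eqs. (1) and (2) for a single sequence; the function names, array layout (one row per time frame) and toy sizes are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def scalar_memory(H, a):
    """Unidirectional sFSMN memory block, eq. (1).

    H: hidden activations of one layer, shape (T, D), row t is h_t.
    a: scalar coefficients a_0..a_N, shape (N+1,).
    Returns H_tilde with h_tilde_t = sum_i a_i * h_{t-i} (zero padding for t-i < 0).
    """
    T, _ = H.shape
    H_tilde = np.zeros_like(H)
    for i, a_i in enumerate(a):
        if i >= T:
            break
        H_tilde[i:] += a_i * H[:T - i]   # shifted copy of H scaled by a_i
    return H_tilde

def vector_memory(H, A):
    """Unidirectional vFSMN memory block, eq. (2).

    A: coefficient vectors a_0..a_N, shape (N+1, D), applied element-wise.
    """
    T, _ = H.shape
    H_tilde = np.zeros_like(H)
    for i in range(min(A.shape[0], T)):
        H_tilde[i:] += A[i] * H[:T - i]  # element-wise product per hidden dimension
    return H_tilde

# toy usage: a 20th-order memory block over 100 frames of 400-dimensional activations
H = np.random.randn(100, 400)
print(scalar_memory(H, np.random.randn(21)).shape)       # (100, 400)
print(vector_memory(H, np.random.randn(21, 400)).shape)  # (100, 400)
```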

We call the FSMNs defined in eq.(1) and eq.(2) unidirectional FSMNs since they only consider the past information in a sequence. These unidirectional FSMNs are suitable for applications where only the past information is available, such as language modeling. However, in many other applications, it is possible to integrate both the history information from the past and certain future information within a look-ahead window from the current location in the sequence. Therefore, we may extend the above unidirectional FSMNs to the following bidirectional versions:

$\tilde{\mathbf{h}}_t^\ell = \sum_{i=0}^{N_1} a_i^\ell \cdot \mathbf{h}_{t-i}^\ell + \sum_{j=1}^{N_2} c_j^\ell \cdot \mathbf{h}_{t+j}^\ell$   (3)
$\tilde{\mathbf{h}}_t^\ell = \sum_{i=0}^{N_1} \mathbf{a}_i^\ell \odot \mathbf{h}_{t-i}^\ell + \sum_{j=1}^{N_2} \mathbf{c}_j^\ell \odot \mathbf{h}_{t+j}^\ell$   (4)

where $N_1$ is called the lookback order, denoting the number of historical items looking back into the past, and $N_2$ the lookahead order, representing the size of the look-ahead window into the future.

Footnote 1: In eqs. (1) to (4), for notational simplicity, we simply assume zero-padded vectors are used whenever the subscript index is out of range.
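A minimal sketch of the bidirectional memory block in eq. (4) follows (the scalar version in eq. (3) is identical except for scalar coefficients), again assuming a (T, D) activation matrix and zero padding at both sequence ends as in footnote 1:

```python
import numpy as np

def bidirectional_vector_memory(H, A, C):
    """Bidirectional vFSMN memory block, eq. (4).

    H: hidden activations, shape (T, D).
    A: lookback coefficient vectors a_0..a_{N1}, shape (N1+1, D).
    C: lookahead coefficient vectors c_1..c_{N2}, shape (N2, D).
    """
    T, _ = H.shape
    H_tilde = np.zeros_like(H)
    for i in range(min(A.shape[0], T)):             # current and past frames
        H_tilde[i:] += A[i] * H[:T - i]
    for j in range(1, min(C.shape[0], T - 1) + 1):  # future frames in the window
        H_tilde[:T - j] += C[j - 1] * H[j:]
    return H_tilde
```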

The output from the memory block, $\tilde{\mathbf{h}}_t^\ell$, may be regarded as a fixed-size representation of the long surrounding context at time instance $t$. As shown in Figure 1 (a), $\tilde{\mathbf{h}}_t^\ell$ can be fed into the next hidden layer in the same way as $\mathbf{h}_t^\ell$. As a result, we can calculate the activations of the units in the next hidden layer as follows:

$\mathbf{h}_t^{\ell+1} = f\big(W^\ell \mathbf{h}_t^\ell + \tilde{W}^\ell \tilde{\mathbf{h}}_t^\ell + \mathbf{b}^\ell\big)$   (5)

where $W^\ell$ and $\mathbf{b}^\ell$ represent the standard weight matrix and bias vector for layer $\ell$, and $\tilde{W}^\ell$ denotes the weight matrix between the memory block and the next layer.
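Combined with the memory output, the next layer in eq. (5) is an ordinary fully connected layer with one extra input; a short sketch (ReLU chosen here as the nonlinearity, shapes assumed as in the earlier sketches):

```python
import numpy as np

def fsmn_layer_forward(H, H_tilde, W, W_tilde, b):
    """Next-layer activations as in eq. (5): f(W h_t + W_tilde h_tilde_t + b).

    H, H_tilde: (T, D) activations and memory outputs of layer l.
    W, W_tilde: (D, D_next) weight matrices; b: (D_next,) bias vector.
    """
    return np.maximum(0.0, H @ W + H_tilde @ W_tilde + b)   # ReLU nonlinearity
```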

2.2 Analysis of FSMNs

Here we analyse the properties of FSMNs and RNNs from the viewpoint of filtering in digital signal processing. Firstly, let us take the simple recurrent neural network (RNN) (Elman, 1990) as an example. As shown in Figure 2 (a), it adopts a time-delayed feedback in the hidden layer to recursively encode the history into a fixed-size representation and thus capture the long-term dependency in a sequence. The directed cycle allows RNNs to exhibit some dynamic temporal behaviours. The activations of the recurrent layer in RNNs can be computed as follows:

$\mathbf{h}_t^\ell = f\big(W^\ell \mathbf{h}_t^{\ell-1} + \tilde{W}^\ell \mathbf{h}_{t-1}^\ell + \mathbf{b}^\ell\big)$   (6)

Secondly, as for the FSMN, we take the unidirectional scalar FSMN in eq.(1) as an example. An FSMN uses a group of learnable coefficients to encode the past context within a lookback window into a fixed-size representation. The resulting representation is computed as a weighted sum of the hidden activations of all previous time instances, shown as a tapped-delay structure in Figure 1 (b).

Figure 2: Illustration of recurrent neural networks and IIR-filter-like recurrent layer.

From the viewpoint of signal processing, each memory block in FSMNs may be viewed as an $N$-th order finite impulse response (FIR) filter. Similarly, each recurrent layer in RNNs may be roughly regarded as a first-order infinite impulse response (IIR) filter, as in Figure 2 (b). It is well known that IIR filters are more compact than FIR filters, but IIR filters may be difficult to implement. In some cases, IIR filters may become unstable, while FIR filters are always stable. The learning of IIR-like RNNs is difficult since it requires the so-called back-propagation through time (BPTT), which significantly increases the computational complexity and may also cause the notorious problems of gradient vanishing and exploding (Bengio et al., 1994). In contrast, the proposed FIR-like FSMN is an overall feedforward structure that can be efficiently learned using standard back-propagation (BP) with stochastic gradient descent (SGD). As a result, the learning of FSMNs may be more stable and easier than that of RNNs. More importantly, it is well known that any IIR filter can be approximated by a high-order FIR filter up to sufficient precision (Oppenheim et al., 1989). In spite of the nonlinearity of RNNs in eq.(6), we believe FSMNs provide a good alternative to capture the long-term dependency in sequential signals. With properly chosen orders, FSMNs may work as well as RNNs, or perhaps even better.
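The FIR-versus-IIR argument can be checked numerically: a first-order linear IIR filter y_t = x_t + alpha*y_{t-1} has the impulse response (1, alpha, alpha^2, ...), so truncating it at order N yields an FIR filter whose error decays geometrically with N. The following sketch is our own illustration of this point, not code from the paper:

```python
import numpy as np

def iir_first_order(x, alpha):
    """y_t = x_t + alpha * y_{t-1}: a first-order IIR (recurrent) filter."""
    y = np.zeros_like(x)
    prev = 0.0
    for t, x_t in enumerate(x):
        prev = x_t + alpha * prev
        y[t] = prev
    return y

def fir_approximation(x, alpha, N):
    """Approximate the same filter by an N-th order FIR (tapped-delay) filter
    whose taps are the truncated impulse response 1, alpha, ..., alpha^N."""
    y = np.zeros_like(x)
    for i in range(N + 1):
        y[i:] += (alpha ** i) * x[:len(x) - i]
    return y

x = np.random.randn(200)
for N in (10, 20, 40):
    err = np.max(np.abs(iir_first_order(x, 0.9) - fir_approximation(x, 0.9, N)))
    print(N, err)   # the error shrinks geometrically as the FIR order N grows
```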

2.3 Attention-based FSMN

In the sFSMN and vFSMN we use context-independent coefficients to encode the long surrounding context into a fixed-size representation. In this work, we also try using context-dependent coefficients, which we call the attention-based FSMN. We use the following attention function (Bahdanau et al., 2014) to calculate the context-dependent coefficients:

(7)

where the attention function takes the current hidden activation $\mathbf{h}_t^\ell$ and the activations within the context window as input, and its parameters are learned jointly with the rest of the network; $N_1$ and $N_2$ denote the lookback and lookahead orders, respectively. As a result, $\mathbf{a}_t$ is a group of context-dependent coefficients with respect to $\mathbf{h}_t^\ell$, which are used to encode the long surrounding context at time instance $t$ as follows:

(8)

As with the sFSMN and vFSMN, $\tilde{\mathbf{h}}_t^\ell$ is fed into the next hidden layer.
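The sketch below illustrates one plausible realization of such context-dependent coefficients, using an additive (Bahdanau-style) score followed by a softmax over the context window; the exact scoring form, the softmax normalization and all parameter names here are our assumptions for illustration, not necessarily the authors' eq. (7).

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_memory(H, U, W, v, b, N1, N2):
    """Context-dependent memory encoding in the spirit of eqs. (7) and (8).

    For each t, a score is computed for every frame in the window [t-N1, t+N2]
    from the current and context hidden vectors, normalized with a softmax,
    and used as weights for summing the context activations.
    H: (T, D); U, W: (D_a, D); v: (D_a,); b: (D_a,).
    """
    T, _ = H.shape
    H_tilde = np.zeros_like(H)
    for t in range(T):
        window = [s for s in range(t - N1, t + N2 + 1) if 0 <= s < T]
        scores = np.array([v @ np.tanh(U @ H[s] + W @ H[t] + b) for s in window])
        coeffs = softmax(scores)          # context-dependent weights a_t
        H_tilde[t] = coeffs @ H[window]   # weighted sum over the context window
    return H_tilde
```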

3 Efficient Learning of FSMNs

Here we consider how to learn FSMNs using mini-batch based stochastic gradient descent (SGD). In the following, we present an efficient implementation and show that the entire learning algorithm can be formulated as matrix multiplications suitable for GPUs.

For the scalar FSMNs in eq. (1) and eq. (3), each output from the memory block is a sum of the hidden activations weighted by a group of learnable coefficients. We first demonstrate that the forward pass of FSMNs can be conducted as sequence-by-sequence matrix multiplications. Take the unidirectional $N$-th order scalar FSMN in eq. (1) as an example, with all coefficients in the memory block denoted as $\{a_0, a_1, \cdots, a_N\}$ (dropping the layer index for clarity). Given an input sequence consisting of $T$ instances, we may construct a $T \times T$ upper band matrix $M$ as follows:

$M = \begin{bmatrix} a_0 & a_1 & \cdots & a_N & & \\ & a_0 & a_1 & \cdots & a_N & \\ & & \ddots & \ddots & & \ddots \\ & & & a_0 & \cdots & a_N \\ & & & & \ddots & \vdots \\ & & & & & a_0 \end{bmatrix}$   (9)

As for the bidirectional scalar FSMNs in eq. (3), we can construct the following band matrix $M$, whose upper band holds the lookback coefficients $\{a_i\}$ and whose lower band holds the lookahead coefficients $\{c_j\}$:

$M = \begin{bmatrix} a_0 & a_1 & \cdots & a_{N_1} & & \\ c_1 & a_0 & a_1 & \cdots & a_{N_1} & \\ \vdots & c_1 & \ddots & \ddots & & \ddots \\ c_{N_2} & \vdots & \ddots & \ddots & & \\ & c_{N_2} & & & \ddots & a_1 \\ & & \ddots & & c_1 & a_0 \end{bmatrix}$   (10)

Obviously, the sequential memory operations in eq.(1) and eq.(3) for the whole sequence can be computed as one matrix multiplication as follows:

$\tilde{H}^\ell = H^\ell M$   (11)

where the matrix $H^\ell = [\mathbf{h}_1^\ell, \mathbf{h}_2^\ell, \cdots, \mathbf{h}_T^\ell]$ is composed of all hidden activations of the whole sequence stored as columns (Footnote 2: Obviously, $H^\ell$ can also be computed altogether in parallel for the whole sequence.), and $\tilde{H}^\ell$ is the corresponding output from the memory block for the entire sequence. Furthermore, we can easily extend the above formula to a mini-batch consisting of $K$ sequences, i.e., $\{H_1^\ell, H_2^\ell, \cdots, H_K^\ell\}$. In this case, we can compute the memory outputs for all sequences in the mini-batch as follows:

$[\tilde{H}_1^\ell, \tilde{H}_2^\ell, \cdots, \tilde{H}_K^\ell] = [H_1^\ell, H_2^\ell, \cdots, H_K^\ell] \begin{bmatrix} M_1 & & \\ & \ddots & \\ & & M_K \end{bmatrix}$   (12)

where each $M_k$ is constructed in the same way as eq.(9) or (10) based on the length of the corresponding sequence.
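A rough NumPy sketch of this matrix formulation (eqs. (9) to (12)); here the hidden activations are stored as columns, H of shape (D, T), so that the memory output is H @ M as in eq. (11). The block-diagonal construction for a mini-batch assumes SciPy is available; all sizes are toy values of our choosing.

```python
import numpy as np
from scipy.linalg import block_diag   # assumes SciPy is installed

def build_band_matrix(T, a, c=None):
    """T x T band matrix of eq. (9) (unidirectional) or eq. (10) (bidirectional):
    column t holds a_i at row t-i and, if given, c_j at row t+j."""
    M = np.zeros((T, T))
    for t in range(T):
        for i, a_i in enumerate(a):
            if t - i >= 0:
                M[t - i, t] = a_i
        for j, c_j in enumerate(c if c is not None else [], start=1):
            if t + j < T:
                M[t + j, t] = c_j
    return M

# eq. (11): memory outputs for one whole sequence in a single matrix product
D, T, N = 400, 100, 20
H = np.random.randn(D, T)
M = build_band_matrix(T, a=np.random.randn(N + 1))
H_tilde = H @ M

# eq. (12): a mini-batch of sequences uses a block-diagonal matrix of per-sequence M_k
H2 = np.random.randn(D, 80)
M2 = build_band_matrix(80, a=np.random.randn(N + 1))
H_batch = np.concatenate([H, H2], axis=1)
H_tilde_batch = H_batch @ block_diag(M, M2)
```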

During the backward pass, in addition to the regular weights in the network, we also need to calculate the gradients with respect to $M$ in order to update the filter coefficients. Since FSMNs remain a pure feedforward network structure, we can calculate these gradients using the standard back-propagation (BP) algorithm. Denoting the error signal with respect to $\tilde{H}^\ell$ as $\mathbf{e}_{\tilde{H}^\ell}$, which is back-propagated from the upper layers, the gradient with respect to $M$ can be easily derived as:

$\frac{\partial \mathcal{L}}{\partial M} = \big(H^\ell\big)^{\top} \mathbf{e}_{\tilde{H}^\ell}$   (13)

Since $M$ is constrained to the band structure of eq. (9) with tied entries, the gradient for each coefficient $a_i$ is obtained by summing the entries of $\partial \mathcal{L} / \partial M$ along the corresponding diagonal.

Furthermore, the error signal w.r.t. $H^\ell$ is computed as:

$\mathbf{e}_{H^\ell} = \mathbf{e}_{\tilde{H}^\ell} M^{\top}$   (14)

This error signal is further back-propagated downstream to the lower layers. As shown above, all computations in a scalar FSMN can be formulated as matrix multiplications, which can be efficiently conducted on GPUs. As a result, scalar FSMNs have low computational complexity in training, comparable with standard DNNs.
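Under the same column layout (H of shape (D, T), H_tilde = H M), eqs. (13) and (14) follow from ordinary matrix calculus; a minimal sketch, with the reduction to per-coefficient gradients added as a comment:

```python
import numpy as np

def memory_backward(H, M, e_H_tilde):
    """Backward pass through the scalar memory block H_tilde = H @ M.

    e_H_tilde: error signal w.r.t. H_tilde, shape (D, T).
    Returns the gradient w.r.t. M (eq. (13)) and the error signal
    propagated back to the hidden activations (eq. (14)).
    """
    grad_M = H.T @ e_H_tilde    # eq. (13), a T x T matrix of per-entry gradients
    e_H = e_H_tilde @ M.T       # eq. (14)
    return grad_M, e_H

def coefficient_gradients(grad_M, N):
    """Because every entry on the i-th upper diagonal of M is the same a_i,
    the gradient of a_i is the sum of grad_M along that diagonal."""
    return np.array([np.trace(grad_M, offset=i) for i in range(N + 1)])
```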

Similarly, for unidirectional and bidirectional vectorized FSMNs, we can calculate the outputs from the memory block as a weighted sum of the hidden layer's activations using eqs. (2) and (4), respectively. For the unidirectional vectorized FSMNs, the gradients with respect to the encoding coefficients and the error signals with respect to the hidden activations take the following forms:

$\frac{\partial \mathcal{L}}{\partial \mathbf{a}_i^\ell} = \sum_{t} \mathbf{e}_{\tilde{\mathbf{h}}_t^\ell} \odot \mathbf{h}_{t-i}^\ell$   (15)
$\mathbf{e}_{\mathbf{h}_t^\ell} = \sum_{i=0}^{N} \mathbf{a}_i^\ell \odot \mathbf{e}_{\tilde{\mathbf{h}}_{t+i}^\ell}$   (16)

And for the bidirectional vFSMNs, the corresponding gradient and error signals are computed as follows:

$\frac{\partial \mathcal{L}}{\partial \mathbf{a}_i^\ell} = \sum_{t} \mathbf{e}_{\tilde{\mathbf{h}}_t^\ell} \odot \mathbf{h}_{t-i}^\ell, \qquad \frac{\partial \mathcal{L}}{\partial \mathbf{c}_j^\ell} = \sum_{t} \mathbf{e}_{\tilde{\mathbf{h}}_t^\ell} \odot \mathbf{h}_{t+j}^\ell$   (17)
$\mathbf{e}_{\mathbf{h}_t^\ell} = \sum_{i=0}^{N_1} \mathbf{a}_i^\ell \odot \mathbf{e}_{\tilde{\mathbf{h}}_{t+i}^\ell} + \sum_{j=1}^{N_2} \mathbf{c}_j^\ell \odot \mathbf{e}_{\tilde{\mathbf{h}}_{t-j}^\ell}$   (18)

where $\mathbf{e}_{\tilde{\mathbf{h}}_t^\ell}$ is the error signal with respect to $\tilde{\mathbf{h}}_t^\ell$. Note that these can also be computed efficiently on GPUs using CUDA kernel functions with element-wise multiplications and additions.
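For the bidirectional vectorized case, a direct element-wise implementation of eqs. (17) and (18) looks as follows (row layout, H of shape (T, D), matching the earlier sketches; a minimal illustration rather than the authors' CUDA kernels):

```python
import numpy as np

def bidirectional_vector_memory_backward(H, A, C, e_H_tilde):
    """Gradients for the bidirectional vFSMN memory block of eq. (4).

    H, e_H_tilde: (T, D); A: (N1+1, D) lookback vectors; C: (N2, D) lookahead vectors.
    Returns grad_A and grad_C (eq. (17)) and the error signal e_H (eq. (18)).
    """
    T, _ = H.shape
    grad_A, grad_C, e_H = np.zeros_like(A), np.zeros_like(C), np.zeros_like(H)
    for i in range(min(A.shape[0], T)):
        grad_A[i] = (e_H_tilde[i:] * H[:T - i]).sum(axis=0)      # dL/da_i
        e_H[:T - i] += A[i] * e_H_tilde[i:]                      # a_i term of eq. (18)
    for j in range(1, min(C.shape[0], T - 1) + 1):
        grad_C[j - 1] = (e_H_tilde[:T - j] * H[j:]).sum(axis=0)  # dL/dc_j
        e_H[j:] += C[j - 1] * e_H_tilde[:T - j]                  # c_j term of eq. (18)
    return grad_A, grad_C, e_H
```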

4 Experiments

In this section, we evaluate the effectiveness and efficiency of the proposed FSMNs on several standard benchmark tasks in speech recognition and language modelling and compare with the popular LSTMs in terms of modeling performance and learning efficiency.

4.1 Speech Recognition

For the speech recognition task, we use the popular Switchboard (SWB) dataset. The training data set consists of 309-hour Switchboard-I training data and 20-hour Call Home English data. We divide the whole training data into two sets: training set and cross validation set. The training set contains 99.5% of training data, and the cross validation set contains the other 0.5%. Evaluation is performed in terms of word error rate (WER) on the Switchboard part of the standard NIST 2000 Hub5 evaluation set (containing 1831 utterances), denoted as Hub5e00.

4.1.1 Baseline Systems

For the baseline GMM-HMM system, we train a standard tied-state cross-word tri-phone system using 39-dimensional PLP features (static, first and second derivatives) as input. The baseline is estimated with maximum likelihood estimation (MLE) and then discriminatively trained based on the minimum phone error (MPE) criterion. Before model training, all PLP features are pre-processed with cepstral mean and variance normalization (CMVN) per conversation side. The final hidden Markov model (HMM) consists of 8,991 tied states and 40 Gaussian components per state. In decoding, we use a trigram language model (LM) that is trained on 3 million words from the training transcripts and another 11 million words from the Fisher English Part 1 transcripts. The WER of the baseline MLE and MPE trained GMM-HMM systems is 28.7% and 24.7%, respectively.

As for the DNN-HMM baseline system, we follow the same training procedure as described in (Dahl et al., 2012; Zhang et al., 2015c) to train conventional context-dependent DNN-HMMs using the tied-state alignment obtained from the above MLE-trained GMM-HMM baseline system. We have trained standard feedforward fully connected neural networks (DNN) using either sigmoid or ReLU activation functions. Each DNN contains 6 hidden layers with 2,048 units per layer. The input to the DNN is the 123-dimensional log filter-bank (FBK) features concatenated from all consecutive frames within a long context window of (5+1+5). The sigmoid DNN is first pre-trained using RBM-based layer-wise pre-training, while the ReLU DNN is randomly initialized. In fine-tuning, we use the mini-batch SGD algorithm to optimize the frame-level cross-entropy (CE) criterion. The performance of the baseline DNN-HMM systems is listed in Table 2 (denoted as DNN-1 and DNN-2).

Recently, hybrid systems combining long short-term memory (LSTM) recurrent neural networks with hidden Markov models (LSTM-HMM) have been applied to acoustic modeling (Abdel-Hamid et al., 2012; Sainath et al., 2013; Sak et al., 2014) and have achieved state-of-the-art performance for large scale speech recognition. A projected LSTM-RNN architecture was also introduced in (Sainath et al., 2013; Sak et al., 2014), where each LSTM layer is followed by a low-rank linear recurrent projection layer that helps to reduce the model parameters as well as accelerate the training speed. In this experiment, we rebuild the deep LSTM-HMM baseline systems by following the same configurations introduced in (Sak et al., 2014). The baseline LSTM-HMM contains three LSTM layers with 2048 memory cells per layer, and each LSTM layer is followed by a low-rank linear recurrent projection layer of 512 units. Each input to the LSTM is the 123-dimensional FBK features calculated from a 25ms speech segment. Since the information from future frames is helpful for making a better decision for the current frame, we delay the output state label by 5 frames (equivalent to using a look-ahead window of 5 frames). The model is trained with the truncated BPTT algorithm (Werbos, 1990) with a time step of 16 and a mini-batch size of 64 sequences.

Moreover, we have also trained a deep bidirectional LSTM-HMMs baseline system. Bidirectional LSTM (BLSTM) can operate on each input sequence from both directions, one LSTM for the forward direction and the other for the backward direction. As a result, it can take both the past and future information into account to make a decision for each time instance. In our work, we have trained a deep BLSTM consisting of three hidden layers and 2048 memory cells per layer (1024 for forward layer and 1024 for backward layer). Similar to the unidirectional LSTM, each BLSTM layer is also followed by a low-rank linear recurrent projection layer of 512 units. The model is trained using the standard BPTT with a mini-batch of 16 sequences.

The performance of the LSTM and BLSTM models is listed in the fourth and fifth rows of Table 2, respectively (denoted as LSTM and BLSTM). Using the BLSTM, we achieve a low word error rate of 13.5% on the test set. This is a very strong baseline on this task.

Footnote 3: The previously reported best results (in WER) on the Switchboard task under the same training condition include: 15.6% in (Su et al., 2013) using a large DNN model with 7 hidden layers plus data re-alignment; 13.5% in (Saon et al., 2014) using a deep unfolded RNN with front-end feature adaptation; and 14.8% in (Chen et al., 2015) using a bidirectional LSTM.

4.1.2 FSMN Results

In speech recognition, it is better to take bidirectional information into account when making a decision for the current frame. Therefore, we use the bidirectional FSMNs in eq. (3) and eq. (4) for this task. Firstly, we have trained a scalar FSMN with 6 hidden layers and 2048 units per layer. The hidden units adopt the rectified linear (ReLU) activation function. The input to the FSMNs is the 123-dimensional FBK features concatenated from three consecutive frames within a context window of (1+1+1). Different from DNNs, which need to use a long sliding window of acoustic frames as input, FSMNs do not need to concatenate many consecutive frames due to their inherent memory mechanism. In our work, we have found that it is enough to concatenate just three consecutive frames as input. The learning schedule of the FSMNs is the same as that of the baseline DNNs.

Lookback order (N1)  Lookahead order (N2)  WER(%)
20  10  13.7
20  20  13.6
40  40  13.4
50  50  13.2
100  100  13.3
Table 1: Performance comparison (in WER) of vectorized FSMNs (vFSMN) with various lookback orders (N1) and lookahead orders (N2) in the Switchboard task.

In the first experiment, we investigate the influence of various lookback and lookahead orders of bidirectional FSMNs on the final speech recognition performance. We have trained several vectorized FSMNs with various lookback and lookahead order configurations. Experimental results are shown in Table 1, from which we can see that the vFSMN achieves a WER of 13.2% when the lookback and lookahead orders are both set to 50. To the best of our knowledge, this is the best performance reported on this task for speaker-independent training (no speaker-specific adaptation or normalization) using the frame-level cross-entropy error criterion. In real-time speech recognition applications, we also need to consider latency. In such cases, bidirectional LSTMs are not suitable since the backward pass cannot start until the full sequence is received, which normally causes an unacceptable time delay. In contrast, the latency of bidirectional FSMNs can be easily adjusted by reducing the lookahead order. For instance, we can still achieve a very competitive performance (13.7% in WER) when setting the lookahead order to 10. In this case, the latency introduced by the look-ahead window is normally tolerable in real-time speech recognition tasks. Therefore, FSMNs are better suited for low-latency speech recognition than bidirectional LSTMs.

4.1.3 Model Comparison

In Table 2, we summarize the experimental results of various systems on the SWB task. The results show that models utilizing the long-term dependency of speech signals, such as LSTMs and FSMNs, perform much better than the others. Among them, the bidirectional LSTM significantly outperforms the unidirectional LSTM since it can take the future context into account. More importantly, the proposed vectorized FSMN slightly outperforms the BLSTM while being simpler in model structure and faster in learning speed. For one epoch of learning, the BLSTM takes about 22.6 hours while the vFSMN only needs about 7.1 hours, an over 3-times speedup in training.

model time (hr) WER(%)
DNN-1 5.0 15.6
DNN-2 4.8 14.6
LSTM 9.4 14.2
BLSTM 22.6 13.5
sFSMN 6.7 14.2
vFSMN 7.1 13.2
Table 2: Comparison (training time per epoch in hours, recognition performance in WER) of various acoustic models in the Switchboard task. DNN-1 and DNN-2 denote the standard 6-layer fully connected neural networks using sigmoid and ReLU activation functions, respectively. sFSMN and vFSMN denote the scalar FSMN and vectorized FSMN, respectively.

Moreover, the experimental results in the last two lines of Table 2 also show that the vectorized FSMN performs much better than the scalar FSMN. We have investigated these results by visualizing the learned coefficient vectors in the vectorized FSMN. In Figure 3, we show the learned filter vectors in the first memory layer of the vFSMN model. We can see that different dimensions in the memory block have learned quite different filters for speech signals. As a result, vectorized FSMNs perform much better than scalar FSMNs for speech recognition.

Figure 3: An illustration of the first 100 dimensions of the learned lookback filters (left) and lookahead filters (right) in a vectorized FSMN. Both the lookback and lookahead filters are set to be 40th order.

4.1.4 Attention-based FSMN

In this section, we compare the performance of the attention-based FSMN with the DNN and vFSMN. We use the 39-dimensional PLP features as inputs. All models contain 6 hidden layers with 2048 units per layer and use ReLU as the activation function. The lookback and lookahead orders are 40 for both the attention-based FSMN and the vFSMN. Experimental results are shown in Table 3. The attention-based FSMN achieves a significant improvement in FACC over the DNN baseline (65.16% vs. 48.64%). However, its improvement in word error rate (WER) over the DNN baseline is small (15.3% vs. 15.6%). Moreover, this experiment shows that the attention-based FSMN performs significantly worse than the regular vFSMN, which does not use the attention mechanism.

model FACC(%) WER(%)
RL-DNN 48.64 15.6
vFSMN 67.42 13.8
Attention-FSMN 65.16 15.3
Table 3: Performance (Frame classification accuracy in FACC, recognition performance in WER) of the attention-based FSMN model in the Switchboard task.

4.2 Language Modeling

A statistical language model (LM) is a probability distribution over sequences of words. Recently, neural networks have been successfully applied to language modeling (Bengio et al., 2003; Mikolov et al., 2010), yielding state-of-the-art performance. The basic idea of neural network language models is to use a projection layer to map discrete words into a continuous space and estimate word conditional probabilities in this space, which is smoother and generalizes better to unseen contexts. In language modeling tasks, it is quite important to take advantage of the long-term dependency of a language. Therefore, it is widely reported that RNN based LMs can outperform FNN based LMs in language modeling tasks. The so-called FOFE based method (Zhang et al., 2015d) provides another way to model long-term dependency in language.

Since the goal of language modeling is to predict the next word in a text sequence given all previous words, different from speech recognition, we can only use the unidirectional FSMNs in eq.(1) and eq.(2) to evaluate their capacity for learning the long-term dependency of language. We have evaluated the FSMN based language models (FSMN-LMs) on two tasks: i) the Penn Treebank (PTB) corpus of about 1M words, with the vocabulary size limited to 10k; the preprocessing method and the way the data are split into training/validation/test sets are the same as in (Mikolov et al., 2011). ii) The English wiki9 dataset, which is composed of the first $10^9$ bytes of English Wikipedia data, as in (Mahoney, 2011); we split it into three parts: training (153M), validation (8.9M) and test (8.9M) sets. The vocabulary size is limited to 80k for wiki9 and all out-of-vocabulary words are replaced by UNK. Details of the two datasets can be found in Table 4.

Corpus train valid test
PTB 930k 74k 82k
wiki9 153M 8.9M 8.9M
Table 4: The sizes of the PTB and English wiki9 corpora are given in number of words.

4.2.1 Training Details

For the FSMNs, all hidden units adopt the rectified linear activation function. In all experiments, the networks are randomly initialized, without using any pre-training method. We use SGD with a mini-batch size of 200 and 500 for PTB and English wiki9 tasks respectively. The initial learning rate is set to 0.4, which is kept fixed as long as the perplexity on the validation set decreases by at least 1. After that, we continue six more epochs of training, where the learning rate is halved after each epoch. Because PTB is a very small task, we also use momentum (0.9) and weight decay (0.00004) to avoid overfitting. For the wiki9 task, we do not use the momentum or weight decay.
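A compact sketch of this learning-rate schedule; the two callbacks for running one training epoch and measuring validation perplexity are placeholders of our own:

```python
def train_with_schedule(train_one_epoch, validation_ppl, lr=0.4, min_gain=1.0, extra_epochs=6):
    """Keep the learning rate fixed while validation perplexity improves by at
    least `min_gain`; then run `extra_epochs` more epochs, halving the rate after each."""
    best = float("inf")
    while True:
        train_one_epoch(lr)
        ppl = validation_ppl()
        if best - ppl < min_gain:
            break
        best = ppl
    for _ in range(extra_epochs):
        train_one_epoch(lr)
        lr *= 0.5
    return validation_ppl()
```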

4.2.2 Performance Comparison

Model Test PPL
KN 5-gram (Mikolov et al., 2011) 141
3-gram FNN-LM (Zhang et al., 2015d) 131
RNN-LM (Mikolov et al., 2011) 123
LSTM-LM (Graves, 2013) 117
MemN2N-LM (Sukhbaatar et al., 2015) 111
FOFE-LM (Zhang et al., 2015d) 108
Deep RNN (Pascanu et al., 2013) 107.5
Sum-Prod Net (Cheng et al., 2014) 100
LSTM-LM (1-layer) 114
LSTM-LM (2-layer) 105
sFSMN-LM 102
vFSMN-LM 101
Table 5: Perplexities on the PTB database for various LMs.

For the PTB task, we have trained both scalar and vector based FSMNs with an input context window of two, where the previous two words are fed to the model at each time instance to predict the next word. Both models contain a linear projection layer (of 200 units), two hidden layers (of 400 units per layer) and a memory block in the first hidden layer. We use a 20th-order FIR filter in the first hidden layer for both the scalar and vectorized FSMNs on the PTB task. These models can be trained in 10 minutes on a single GTX780 GPU. For comparison, we have also built two LSTM based LMs with Theano (Bergstra et al., 2011), one using one recurrent layer and the other using two recurrent layers. In Table 5, we summarize the perplexities on the PTB test set for various language models.

Footnote 4: None of the models in Table 5 use dropout regularization, which is somehow equivalent to data augmentation. In (Zaremba et al., 2014; Kim et al., 2015), the proposed LSTM-LMs (word level or character level) achieve much lower perplexity, but they both use dropout regularization and take days to train.

For the wiki9 task, we have trained several baseline systems: traditional n-gram LMs, an RNN-LM, a standard FNN-LM and the FOFE-LM introduced in (Zhang et al., 2015d). Firstly, we have trained two n-gram LMs (3-gram and 5-gram) using modified Kneser-Ney smoothing without count cutoffs. As for the RNN-LM, we have trained a simple RNN with one hidden layer of 600 units using the toolkit in (Mikolov et al., 2010). We have further used the spliced sentence method in (Chen et al., 2014) to speed up the training of the RNN-LM on GPUs. The architectures of the FNN-LM and FOFE-LM are the same: a linear projection layer of 200 units and three hidden layers with 600 units per layer, where the hidden units adopt the ReLU activation. The only difference is that the FNN-LM uses one-hot vectors as input while the FOFE-LM uses the so-called FOFE codes as input. In both models, the input window size is set to two words. The performance of the baseline systems is listed in Table 6. For the FSMN based language models, we use the same architecture as the baseline FNN-LM. Both the scalar and vector based FSMNs adopt a 30th-order FIR filter in the memory block for the wiki9 task. In these experiments, we have also evaluated several FSMN-LMs with memory blocks added to different hidden layers. Experimental results on the wiki9 test set are listed in Table 6 for various LMs.

Model Architecture PPL
KN 3-gram - 156
KN 5-gram - 132
FNN-LM [2*200]-3*600-80k 155
RNN-LM [1*600]-80k 112
FOFE-LM [2*200]-3*600-80k 104
sFSMN-LM [2*200]-600(M)-600-600-80k 95
[2*200]-600-600(M)-600-80k 96
[2*200]-600(M)-600(M)-600-80k 92
vFSMN-LM [2*200]-600(M)-600-600-80k 95
[2*200]-600(M)-600(M)-600-80k 90
Table 6: Perplexities on the English wiki9 test set for various language models ((M) denotes a hidden layer with a memory block).
Figure 4: The learning curves of various models on the English wiki9 task.

From the experimental results in Table 5 and Table 6, we can see that the proposed FSMN based LMs significantly outperform not only the traditional FNN-LM but also the RNN-LM and FOFE-LM. For example, on the English wiki9 task, the proposed FSMN-LM achieves a perplexity of 90 while the well-trained RNN-LM and FOFE-LM obtain 112 and 104, respectively. This is the state-of-the-art performance for this task. Moreover, the learning curves in Figure 4 show that the FSMN based models converge much faster than RNNs. It only takes about 5 epochs of learning for FSMNs while RNN-LMs normally need more than 15 epochs. Therefore, training an FSMN-LM is much faster than training an RNN-LM. Overall, the experimental results indicate that FSMNs can effectively encode long context into a compressed fixed-size representation and are able to exploit the long-term dependency in text sequences.

Figure 5: Illustration of the learned filters in FSMNs on the PTB task: left) the coefficients of the learned filters in vectorized FSMN; right) the average coefficients of filters in vFSMN and the learned filters in the scalar based FSMN.

Another interesting finding is that the scalar and vector based FSMN-LMs achieve similar performance on both the PTB and wiki9 tasks. This is very different from the experimental results on the speech recognition task (see Table 2), where the vectorized FSMN significantly outperforms the scalar FSMN. We have therefore investigated the learned coefficients of the FIR filters in the two FSMN-LMs. We choose the well-trained scalar and vector based FSMN models with a memory block in the first hidden layer. In Figure 5, we plot the learned filter coefficients of both the vector and scalar based FSMNs. The motivation for using a vectorized FSMN is to learn different filters for different data dimensions. However, from the left plot in Figure 5, we can see that the learned filters of all dimensions are very similar in the vectorized FSMN. Moreover, the filter coefficients of the vectorized FSMN averaged across all dimensions match very well with the filter learned by the scalar FSMN, as shown in the right plot of Figure 5. This explains why the scalar and vector based FSMN-LMs achieve similar performance in language modeling tasks. Finally, we can see that the learned filter coefficients reflect the property of natural language that nearby contexts generally play a more important role in prediction than far-away ones.

5 Conclusions and Future Work

In summary, we have proposed a novel neural network architecture, namely the feedforward sequential memory network (FSMN), for modeling long-term dependency in sequential data. The memory blocks in FSMNs use a tapped-delay line structure to encode long context information into a fixed-size representation in a purely feedforward way, without using expensive recurrent feedback. We have evaluated the performance of FSMNs on several speech recognition and language modeling tasks. In all examined tasks, the proposed FSMN based models significantly outperform the popular RNN or LSTM based models. More importantly, the learning of FSMNs is much easier and faster than that of RNNs or LSTMs. As a strong alternative, we expect the proposed FSMN models may replace RNNs and LSTMs in a variety of tasks. As future work, we will try to use more complex encoding coefficients, such as matrices. We will also try to apply FSMNs to other machine learning tasks under the currently popular sequence-to-sequence framework, such as question answering and machine translation. Moreover, the unsupervised learning methods in (Zhang & Jiang, 2015; Zhang et al., 2015a) may be applied to FSMNs to conduct unsupervised learning for sequential data.

References