End-To-End Speech Recognition Using A High Rank LSTM-CTC Based Model

03/12/2019 · Yangyang Shi et al. · Mobvoi, Inc.

Long Short Term Memory Connectionist Temporal Classification (LSTM-CTC) based end-to-end models are widely used in speech recognition due to their simplicity in training and efficiency in decoding. In conventional LSTM-CTC based models, a bottleneck projection matrix maps the hidden feature vectors obtained from the LSTM to the softmax output layer. In this paper, we propose a high rank projection layer to replace this projection matrix. The output of the high rank projection layer is a weighted combination of vectors that are projected from the hidden feature vectors via different projection matrices and a non-linear activation function. The high rank projection layer improves the expressiveness of LSTM-CTC models. Experimental results on the Wall Street Journal (WSJ) corpus and the LibriSpeech data set show that the proposed method achieves roughly 4%-6% relative word error rate reduction over the baseline CTC system, and that it outperforms other published CTC based end-to-end (E2E) models under the condition that no external data or data augmentation is applied. Code has been made available at https://github.com/mobvoi/lstm_ctc.


1 Introduction

Conventional deep neural network-HMM hybrid speech recognition systems [11, 22] usually require two steps in the training stage. First, a prior acoustic model such as a Gaussian mixture model (GMM) is used to generate HMM state alignments for the speech training data. Then, based on the acoustic features and the one-hot training targets generated from the state alignments, neural networks are trained to predict the frame-level state posterior probabilities. This separate two-step training process makes acoustic model optimization less efficient.

Recently, various end-to-end (E2E) models [6, 13, 14, 8, 7, 4, 2, 3, 20, 15, 18, 10] have been proposed to bypass the label alignment stage and directly learn a transducer from a sequence of acoustic features to a sequence of probabilities over output tokens. These E2E systems can be categorized into CTC based models [7, 8, 1, 15], sequence-to-sequence attention based models [5, 3, 4, 9], and combinations of CTC with sequence-to-sequence attention based models [24, 13, 12, 6].

Among the aforementioned E2E models, CTC based models are widely investigated in the speech community [19, 15, 6, 14] due to their simplicity in training and efficiency in decoding. In CTC based models, a special blank label is introduced to identify the less informative frames. In addition, CTC based systems allow repetition of labels. In this way, CTC based models automatically infer the alignment between speech frames and labels (usually with a delay in time), which removes the state alignment step from training. Using highly efficient greedy decoding, with no involvement of a lexicon or language model, the CTC based model [19] gives competitive results. In greedy decoding, the predictions are the concatenation of the tokens that correspond to the spikes in the posterior distribution.

The CTC loss is often used together with LSTMs [19, 15, 8]. The CTC loss function imposes a conditional independence constraint on the output tokens given the whole input feature sequence, so the model relies on the hidden feature vector of the current frame to make predictions. Armed with the memory mechanism, the current frame's hidden feature vector from the LSTM is able to capture information from previous frames. In other words, the current frame's label is not predicted based exclusively on the current frame's features.

In LSTM-CTC based models, to get the posterior probabilities of the output labels, a projection matrix maps the hidden feature vector to the final output layer. The hidden feature vector is the output of the last layer of the multi-layer LSTMs or bidirectional LSTMs. The output layer has the same dimension as the set of training labels. Phonemes or characters are usually used as labels, so the output dimension is smaller than that of the LSTM output. The projection matrix therefore becomes a bottleneck that limits the expressive capability of LSTM-CTC based models. To address a similar issue in language modeling, a mixture-of-softmaxes method [23] has been used to improve performance. In this paper, we propose a high rank projection layer to replace the single projection matrix and thereby improve the expressiveness of LSTM-CTC based E2E models.

In the high rank projection layer, a hidden feature vector is first mapped to multiple vectors by a set of projection matrices together with a non-linear activation function. A weighted combination of these vectors is used as the output of the high rank projection layer. The non-linear activation function breaks the potential linear correlation among the output vectors obtained by mapping one feature vector through several projection matrices, so the proposed projection layer has a higher rank than a single projection matrix.

One simple approach to decoding with CTC based models is to concatenate the non-blank labels corresponding to the posterior spikes and remove consecutively repeated output labels. However, such a simple greedy decoding method ignores the lexicon and language model information that could be leveraged to constrain the search paths during decoding. In EESEN [15], a WFST based method is applied to integrate the CTC frame labels, the lexicon and the language model into one search graph. In this work, we follow EESEN's approach to decoding with CTC based models.

In CTC training, the actual label sequence is obtained by inserting blank labels at the beginning, at the end, and between every two consecutive labels in the original label sequence. The blank label therefore has a very high prior probability, which is one reason why, for a trained CTC model, the majority of frames take blank as their label and the non-blank labels appear only in very narrow regions with peaky distributions. To address this issue, similar to EESEN [15], we use the label distribution of the augmented label sequences used in CTC training as a prior to normalize the posterior probability distribution.
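
A minimal sketch of this prior normalization is shown below, assuming log-domain posteriors and priors estimated by counting labels in the blank-augmented training targets; the function name and the optional scale factor are illustrative assumptions, not the EESEN or released implementation.

```python
# Minimal sketch of prior-normalizing CTC posteriors before WFST decoding.
# `priors` is assumed to be estimated from the frequency of each label
# (including blank) in the blank-augmented training label sequences.
import numpy as np

def normalize_posteriors(log_posteriors: np.ndarray,
                         priors: np.ndarray,
                         scale: float = 1.0) -> np.ndarray:
    """Divide frame posteriors by label priors (in the log domain).

    log_posteriors: (T, K) log P(label | frame) from the softmax output.
    priors:         (K,)   prior probability of each label.
    scale:          optional prior scaling factor.
    """
    return log_posteriors - scale * np.log(priors)
```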

We evaluate the proposed high rank LSTM-CTC based end-to-end speech recognition on the Wall Street Journal (WSJ) [17] and LibriSpeech [16] corpora. In both experiments, no external data or data augmentation is applied. On both data sets, the proposed models outperform the baseline model. For easy comparison and reproduction of results, the source code for this study is released as an open source project (https://github.com/mobvoi/lstm_ctc).

The rest of the paper is organized as follows. In Section 2, we briefly review LSTM-CTC based models for E2E speech recognition and then describe the proposed high rank LSTM-CTC model. In Section 3, we present experiments on the WSJ and LibriSpeech benchmark data sets. Finally, we give our conclusions.

2 A High Rank LSTM-CTC Based Model

2.1 LSTM-CTC

Let $X = (x_1, \dots, x_T)$ denote the input sequence of acoustic feature vectors with sequence length $T$, where $x_t \in \mathbb{R}^d$. Given $X$, the E2E ASR system produces a sequence of posterior probability vectors $Y = (y_1, \dots, y_T)$ over the output labels, where $y_t$ is the posterior probability vector at position $t$. The dimension of each posterior probability vector is $K$, the number of target labels. The target labels are usually phonemes or characters; in this paper, we use only phonemes as output labels.

One typical problem for E2E speech recognition is that the length $U$ of the output label sequence $L = (l_1, \dots, l_U)$ is often shorter than the length $T$ of the input speech frames. To deal with this issue in training, CTC introduces a special blank label $\phi$ that is inserted between consecutive labels, and it allows labels to repeat. The label sequence $L$ is thus expanded to a set of paths $\mathcal{B}^{-1}(L)$, each path $\pi = (\pi_1, \dots, \pi_T)$ having the same length as the input sequence, where $\mathcal{B}$ maps a path back to a label sequence by removing repeated labels and blanks. To get the posterior probability of a label sequence $L$, CTC computes and sums the posterior probabilities of all possible paths in $\mathcal{B}^{-1}(L)$. Under the constraint that, given the input sequence, the posterior probability of each label in an output path is conditionally independent of the others, the CTC loss is formulated as follows:

\mathcal{L}_{\mathrm{CTC}} = -\ln P(L \mid X) = -\ln \sum_{\pi \in \mathcal{B}^{-1}(L)} \prod_{t=1}^{T} P(\pi_t \mid X).   (1)
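
For illustration only, the objective in Eq. (1) can be computed with an off-the-shelf implementation. The sketch below uses PyTorch's torch.nn.CTCLoss rather than the TensorFlow code released with this paper; all shapes and values are assumptions chosen for the example.

```python
# Minimal sketch of the CTC loss in Eq. (1) using PyTorch's built-in CTCLoss.
# Not the authors' TensorFlow implementation; shapes are illustrative.
import torch
import torch.nn as nn

T, N, K = 100, 4, 72                                   # frames, batch size, labels (blank at index 0)
log_probs = torch.randn(T, N, K).log_softmax(dim=-1)   # frame-level log-posteriors, shape (T, N, K)
targets = torch.randint(1, K, (N, 20))                 # label sequences without blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 20, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)                              # sums over all blank-augmented paths internally
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(float(loss))
```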

More specifically, in LSTM-CTC models, the sequence of hidden feature vectors $H = (h_1, \dots, h_T)$ is obtained by feeding the input acoustic features $X$ through multiple layers of LSTMs or bidirectional LSTMs. A projection matrix $W \in \mathbb{R}^{K \times d_h}$, shared across the whole sequence, maps each hidden feature vector $h_t$ to a logit vector $z_t$ with $K$ nodes corresponding to the labels, including the blank label $\phi$. The projection can be formulated as follows:

z_t = W h_t.   (2)

A softmax activation function is then applied to each logit vector $z_t$ to get the posterior probability vector $y_t$. Normally the number of output labels is relatively small: for example, there are 71 stressed phones in the WSJ data set and 43 unstressed phones in the LibriSpeech data set. This projection matrix therefore becomes the bottleneck for the expressiveness of LSTM-CTC models. To address this issue, we propose a high rank projection layer to replace the single projection matrix.

2.2 A High Rank Projection Layer

Figure 1: A high rank projection layer. A set of logit vectors $z_t^{(1)}, \dots, z_t^{(M)}$ is obtained by transforming the hidden feature vector $h_t$ through a set of projection matrices $W_i$, $i = 1, \dots, M$, followed by a Tanh activation function. These $M$ vectors are then interpolated using latent weights $\alpha_t^{(i)}$. The output vector $z_t$ is obtained by scaling the weighted interpolation with the temperature factor $\gamma$.

As illustrated in Fig. 1, in the high rank projection layer a set of projection matrices is used to map the input hidden feature vector $h_t$ (of dimension $d_h$) at frame $t$ to a set of logit vectors $z_t^{(i)}$ (each of dimension $K$):

z_t^{(i)} = \tanh(W_i h_t), \quad i = 1, \dots, M,   (3)

where $M$ is the predefined number of projection matrices in this layer and $W = [W_1; \dots; W_M]$ is the concatenation of the set of projection matrices, each $W_i$ being of dimension $K \times d_h$. The logit vector $z_t$ at speech frame $t$ is represented as an interpolation of this set of logit vectors as follows:

z_t = \gamma \sum_{i=1}^{M} \alpha_t^{(i)} z_t^{(i)},   (4)

where $\gamma$ is a predefined scale factor that controls the smoothness of the posterior probabilities and $\alpha_t^{(i)}$ is the combination weight computed at time stamp $t$ for the $i$-th logit vector. It is the softmax of the $M$-dimensional vector obtained by mapping the hidden feature vector $h_t$ through a weight matrix $W_\alpha$:

e_t = W_\alpha h_t,   (5)

\alpha_t^{(i)} = \frac{\exp(e_t^{(i)})}{\sum_{j=1}^{M} \exp(e_t^{(j)})}.   (6)

The projection matrices $W_1, \dots, W_M$ and the weight matrix $W_\alpha$ are all trained jointly with the rest of the network parameters.
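
A minimal numpy sketch of Eqs. (3)-(6) is given below, using the notation introduced above; the function name, array layouts, and the default temperature value are illustrative assumptions, not the released TensorFlow implementation.

```python
# Minimal sketch of the high rank projection layer, Eqs. (3)-(6).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def high_rank_projection(h_t, W, W_alpha, gamma=2.0):
    """h_t:     (d_h,)      hidden feature vector from the last (B)LSTM layer
    W:       (M, K, d_h) stack of projection matrices W_1, ..., W_M
    W_alpha: (M, d_h)    matrix producing the M mixture-weight logits
    gamma:   temperature factor used to sharpen the output distribution
    Returns the (K,) posterior probability vector y_t."""
    z_i = np.tanh(W @ h_t)           # Eq. (3): per-matrix logit vectors, shape (M, K)
    alpha = softmax(W_alpha @ h_t)   # Eqs. (5)-(6): mixture weights, shape (M,)
    z_t = gamma * (alpha @ z_i)      # Eq. (4): weighted combination, shape (K,)
    return softmax(z_t)              # softmax over the K labels
```

With $M$ set equal to the number of output labels, as in Section 3.2, W in this sketch would be a stack of $K$ matrices of size $K \times d_h$.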

2.2.1 Non-Linear Activation Function and Temperature Factor

To get a high rank projection, the non-linear activation is needed to break the potential linear correlation among the projection matrices in the projection layer. Without the non-linear activation, the logit vector at speech frame $t$ can be formulated as follows:

z_t = \gamma \sum_{i=1}^{M} \alpha_t^{(i)} W_i h_t = \gamma \Big( \sum_{i=1}^{M} \alpha_t^{(i)} W_i \Big) h_t,   (7)

which is essentially the same as Equation (2). The temperature factor $\gamma$ controls the smoothness of the label output distribution. The weighted interpolation usually smooths the output probability distribution; to make it more discriminative, in this study we use a temperature factor $\gamma > 1$ to sharpen the output distribution.
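
The collapse described by Eq. (7) can be checked numerically: with the tanh removed, the weighted mixture of the $M$ projections applied to $h_t$ equals a single effective projection matrix applied to $h_t$. The sketch below illustrates this for one frame; dimensions and random values are chosen arbitrarily for the demonstration.

```python
# Numerical check of Eq. (7): without tanh, the mixture of M projections
# collapses to one effective K x d_h matrix applied to h_t (the form of Eq. (2)).
import numpy as np

rng = np.random.default_rng(0)
M, K, d_h = 8, 72, 640
W = rng.standard_normal((M, K, d_h))       # projection matrices W_1..W_M
h_t = rng.standard_normal(d_h)             # hidden feature vector
alpha = rng.random(M); alpha /= alpha.sum()  # mixture weights for this frame

mixture = (alpha[:, None] * (W @ h_t)).sum(axis=0)   # sum_i alpha_i (W_i h_t)
single = np.einsum('i,ikd->kd', alpha, W) @ h_t      # (sum_i alpha_i W_i) h_t
print(np.allclose(mixture, single))                  # True: the linear mixture collapses
```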

3 Experiments

3.1 Data Sets

We carry out experiments on the Wall Street Journal (WSJ) corpus [17] and the LibriSpeech corpus [16] to verify the performance of the proposed method. The WSJ corpus is a combination of the LDC93S6B and LDC94S13B data sets obtained from the LDC. After data preparation, we get 81 hours of transcribed speech audio, of which 95% is selected as training data and the rest is used as validation data. The development data (dev93) consists of 503 utterances, and the evaluation data (eval92) contains 333 utterances. LibriSpeech is an open source speech corpus (http://www.openslr.org/12/) with almost 1000 hours of read speech based on public domain audio books. Similar to the WSJ data preparation, from the 960 hours of training data we select 95% for model training and the remaining 5% for validation. In LibriSpeech, the development and evaluation data are split into "clean" and "other" subsets.

For WSJ decoding, we use the trigram language model provided with the corpus. In the LibriSpeech experiments, to be consistent with previous studies [25], the provided standard unpruned four-gram language model (http://www.openslr.org/resources/11/4-gram.arpa.gz) is used in decoding.

In our experiments, phonemes are used as CTC labels. For the WSJ experiments, the CMU dictionary (http://www.speech.cs.cmu.edu/cgi-bin/cmudict) is used as the lexicon for building the WFST graph; including the blank label, we extract 72 labels in total from the CMU dictionary. For the LibriSpeech experiments, we use the unstressed-phoneme based lexicon (http://www.openslr.org/resources/11/librispeech-lexicon.txt), from which 44 labels are extracted as CTC labels. Due to the lack of forced alignment, CTC training cannot handle multiple pronunciations of the same word, so for every word only the first pronunciation is used to form the lexicon. We did not use other existing models to find the best pronunciation per occurrence.
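
As a sketch of the "first pronunciation only" rule, the snippet below reads a CMU-style dictionary and keeps one pronunciation per word. The file-format assumptions (";;;" comment lines, "(2)" markers for alternate pronunciations) match the public CMU dictionary, but this is not the released data preparation script.

```python
# Keep only the first pronunciation per word from a CMU-style dictionary file.
def first_pronunciations(dict_path: str) -> dict:
    lexicon = {}
    with open(dict_path, encoding='latin-1') as f:
        for line in f:
            if not line.strip() or line.startswith(';;;'):   # skip blanks and comments
                continue
            word, *phones = line.split()
            word = word.split('(')[0]                        # "WORD(2)" -> "WORD"
            lexicon.setdefault(word, phones)                 # first pronunciation wins
    return lexicon
```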

3.2 Model Structure and Hyper-parameter Setup

For both experiments, a 120-dimensional feature vector consisting of 40-dimensional filter bank features together with their first and second order derivatives is computed at each speech frame. The features are normalized via mean subtraction and variance normalization per speaker. The splice of the feature vectors from the left, current and right frames (a 360-dimensional feature vector in total) is used as the input to the bidirectional LSTM. To speed up training, frame skipping is used: two out of every three frames are skipped during training. Four layers of bidirectional LSTMs are used to compute the hidden feature vectors, with 320 hidden neurons and peephole connections in each LSTM layer. The forget gate bias is set to 5. The batch size is set to 64 for the LibriSpeech experiments and 32 for the WSJ experiments. The Adam adaptive learning rate method is used. The initial learning rate is set to 0.001 for the WSJ experiments and 0.0004 for the LibriSpeech experiments, and it is decayed by a factor of 0.7 (WSJ) or 0.5 (LibriSpeech) when the model does not improve on the validation data. For the proposed high rank LSTM-CTC based models, we set $M$ equal to the output label size to achieve the highest rank of the projection layer.
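
For reference, the setup described above can be collected into a plain configuration dictionary; the key names are illustrative and do not correspond to the released code's configuration format.

```python
# Summary of the training setup described in Section 3.2 (illustrative keys).
wsj_config = {
    'features': '40-dim fbank + first/second derivatives (120-dim), spliced +/-1 frame (360-dim)',
    'frame_skipping': '2 out of every 3 frames skipped during training',
    'blstm_layers': 4,
    'hidden_units_per_layer': 320,   # with peephole connections
    'forget_gate_bias': 5.0,
    'batch_size': 32,                # 64 for LibriSpeech
    'optimizer': 'adam',
    'initial_learning_rate': 1e-3,   # 4e-4 for LibriSpeech
    'lr_decay_factor': 0.7,          # 0.5 for LibriSpeech, applied when validation stalls
    'num_projection_matrices': 'M equal to the number of output labels',
}
```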

Because some GPU operations in TensorFlow are non-deterministic, models trained multiple times with the same setting differ slightly. For a fair comparison, we report the average word error rate (WER) of five different models trained with the same setting.

3.3 Results

Table 1 gives the WER comparison of different models on the WSJ corpus. Compared with our baseline model (our-LSTM-CTC), the proposed model (our-HR-LSTM-CTC) obtains about 6% and 4% relative WER reduction on dev93 and eval92, respectively. As shown in Eq. (7), our-MOM-LSTM-CTC is similar to the baseline model except that it has more weight parameters. The results in Table 1 confirm that, with the non-linear activation function and the temperature factor removed, the simple mixture of different projection matrices does not improve over the baseline model.

method lm dev93 eval92
ESPNET[21] c-lstm 12.4 8.9
EESEN[15] 3gram 10.9 7.3
CTC-PL[25]* 3gram 9.2 5.5
DS2[1]* 4gram 5.0 3.6
our-LSTM-CTC 3gram 11.0 7.5
our-MOM-LSTM-CTC 3gram 11.1 7.5
our-HR-LSTM-CTC 3gram 10.3 7.2
Table 1: WER (%) comparison for different models on the WSJ dev93 and eval92 data sets. DS2 (*) used 11,940 hours of audio with additional data augmentation; CTC-PL (*) also used data augmentation. Our-LSTM-CTC is our baseline model. Our-MOM-LSTM-CTC is the mixture-of-matrices model that removes the non-linear activation function and the temperature factor from the high rank projection layer. Our-HR-LSTM-CTC is the proposed high rank LSTM-CTC model. The WER of the "our-" models is the average WER of 5 models trained with the same parameter setup.

Table 2 shows the WER comparison of different models on the LibriSpeech corpus. The proposed model (our-HR-LSTM-CTC) shows consistent behavior on both WSJ and LibriSpeech.

Table 1 and Table 2 also include results from other models that use the CTC loss. Due to the lack of open-sourced data, scripts and code, it is difficult to test our models under exactly the same settings as the published results, so we simply quote those results here to present the status of CTC based systems on these two data sets. Note that some of the comparisons are not fair, as the models are not trained on exactly the same data. CTC-PL is trained with the CTC loss together with policy learning to optimize WER; its training data is augmented through random perturbations of tempo, pitch, volume and temporal alignment, along with added random noise. DS2 uses all publicly available English corpora together with data augmentation as training data. E2E-att combines sequence-to-sequence attention modeling with the CTC loss and uses an additional 800M words for language model training; when an LSTM based LM is used in decoding, E2E-att obtains the state-of-the-art result on LibriSpeech. ESPNET in Table 1 uses a combination of the CTC loss and a sequence-to-sequence loss, but it does not use any effective method to leverage the language model and lexicon information in decoding.

method lm dev-clean dev-other test-clean test-other
CTC-PL[25]* 4gram 5.1 14.3 5.4 14.7
DS2[1]* 4gram - - 5.3 13.3
E2E-att[24]* 4gram 5.0 14.3 4.8 15.3
E2E-att[24]* LSTM 3.5 11.5 3.8 12.8
our-LSTM-CTC 4gram 5.0 13.4 5.4 13.9
our-MOM-LSTM-CTC 4gram 5.0 13.3 5.5 14.0
our-HR-LSTM-CTC 4gram 4.8 12.9 5.1 13.3
Table 2: WER (%) comparison for different models on the LibriSpeech dev and test data sets. DS2 is the same system as in Table 1. CTC-PL applies the same algorithm as the CTC-PL in Table 1, but on LibriSpeech data with data augmentation. E2E-att used external data for language model training. Our-LSTM-CTC is our baseline CTC model trained on LibriSpeech. Our-MOM-LSTM-CTC is the model that removes the non-linear activation function and the temperature factor. Our-HR-LSTM-CTC is the proposed high rank LSTM-CTC model. The WER of the "our-" models is the average WER of 5 models trained with the same parameter setup.

4 Conclusions

In this paper, a high rank projection layer is proposed to replace the bottleneck projection matrix in conventional LSTM-CTC based models for E2E speech recognition. The output of the high rank projection layer is a weighted combination of multiple vectors obtained by feeding the hidden feature vector through a set of projection matrices and a non-linear activation function. On two benchmark corpora, WSJ and LibriSpeech, the proposed high rank LSTM-CTC model outperformed the baseline CTC model. On the WSJ corpus, compared with the baseline model, the proposed model obtained roughly 6% relative WER reduction on dev93 and 4% on eval92. On the LibriSpeech corpus, the proposed model improved over the baseline by roughly 4%-6% relative WER reduction on the test-clean, test-other, dev-clean and dev-other subsets.

References

  • [1] D. Amodei, R. Anubhai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, J. Chen, M. Chrzanowski, et al. (2015) Deep Speech 2: end-to-end speech recognition in English and Mandarin. arXiv:1512.02595.
  • [2] E. Battenberg, J. Chen, R. Child, A. Coates, Y. Li, H. Liu, S. Satheesh, A. Sriram, and Z. Zhu (2018) Exploring neural transducers for end-to-end speech recognition. In Proceedings of ASRU.
  • [3] W. Chan, N. Jaitly, Q. Le, and O. Vinyals (2016) Listen, attend and spell: a neural network for large vocabulary conversational speech recognition. In Proceedings of ICASSP.
  • [4] C. Chiu, T. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. Weiss, K. Rao, K. Gonina, N. Jaitly, B. Li, J. Chorowski, and M. Bacchiani (2017) State-of-the-art speech recognition with sequence-to-sequence models. arXiv:1712.01769.
  • [5] J. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio (2015) Attention-based models for speech recognition. In Proceedings of NIPS, pp. 577-585.
  • [6] A. Das, J. Li, R. Zhao, and Y. Gong (2018) Advancing connectionist temporal classification with attention modeling. arXiv:1803.05563.
  • [7] A. Graves, S. Fernandez, F. Gomez, and J. Schmidhuber (2006) Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of ICML.
  • [8] A. Graves and N. Jaitly (2014) Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of ICML, JMLR Workshop and Conference Proceedings.
  • [9] A. Graves (2012) Sequence transduction with recurrent neural networks. arXiv:1211.3711.
  • [10] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, and A. Ng (2014) DeepSpeech: scaling up end-to-end speech recognition. arXiv:1412.5567.
  • [11] G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Processing Magazine.
  • [12] T. Hori, S. Watanabe, and J. Hershey (2017) Joint CTC/attention decoding for end-to-end speech recognition. In Proceedings of ACL.
  • [13] S. Kim, T. Hori, and S. Watanabe (2017) Joint CTC-attention based end-to-end speech recognition using multi-task learning. In Proceedings of ICASSP.
  • [14] S. Kim, M. Seltzer, J. Li, and R. Zhao (2017) Improved training for online end-to-end speech recognition systems. arXiv:1711.02212.
  • [15] Y. Miao, M. Gowayyed, and F. Metze (2016) EESEN: end-to-end speech recognition using deep RNN models and WFST-based decoding. In Proceedings of ASRU.
  • [16] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur (2015) Librispeech: an ASR corpus based on public domain audio books. In Proceedings of ICASSP.
  • [17] D. Paul and J. Baker (1992) The design for the Wall Street Journal-based CSR corpus. In Proceedings of the Workshop on Speech and Natural Language (HLT '91).
  • [18] T. Sainath, C. Chiu, R. Prabhavalkar, A. Kannan, Y. Wu, P. Nguyen, and Z. Chen (2017) Improving the performance of online neural transducer models. arXiv:1712.01807.
  • [19] H. Sak, A. Senior, K. Rao, and F. Beaufays (2015) Fast and accurate recurrent neural network acoustic models for speech recognition. arXiv:1507.06947.
  • [20] H. Sak, M. Shannon, K. Rao, and F. Beaufays (2017) Recurrent neural aligner: an encoder-decoder neural network model for sequence to sequence mapping. In Proceedings of Interspeech.
  • [21] S. Watanabe, T. Hori, S. Karita, T. Hayashi, J. Nishitoba, Y. Unno, N. Soplin, J. Heymann, M. Wiesner, N. Chen, A. Renduchintala, and T. Ochiai (2018) ESPnet: end-to-end speech processing toolkit. In Proceedings of Interspeech.
  • [22] W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, and G. Zweig (2017) The Microsoft 2016 conversational speech recognition system. In Proceedings of ICASSP.
  • [23] Z. Yang, Z. Dai, R. Salakhutdinov, and W. W. Cohen (2017) Breaking the softmax bottleneck: a high-rank RNN language model. arXiv:1711.03953.
  • [24] A. Zeyer, K. Irie, R. Schlüter, and H. Ney (2018) Improved training of end-to-end attention models for speech recognition. arXiv:1805.03294.
  • [25] Y. Zhou, C. Xiong, and R. Socher (2017) Improving end-to-end speech recognition with policy learning. In Proceedings of ICASSP. arXiv:1712.07101.