Towards Online End-to-end Transformer Automatic Speech Recognition

10/25/2019, by Emiru Tsunoo et al.

The Transformer self-attention network has recently shown promising performance as an alternative to recurrent neural networks in end-to-end (E2E) automatic speech recognition (ASR) systems. However, Transformer has a drawback in that the entire input sequence is required to compute self-attention. We have proposed a block processing method for the Transformer encoder by introducing a context-aware inheritance mechanism. An additional context embedding vector handed over from the previously processed block helps to encode not only local acoustic information but also global linguistic, channel, and speaker attributes. In this paper, we extend this method towards an entirely online E2E ASR system by introducing an online decoding process inspired by monotonic chunkwise attention (MoChA) into the Transformer decoder. Our novel MoChA training and inference algorithms exploit the unique properties of Transformer, whose attentions are not always monotonic or peaky, and whose decoder layers have multiple heads and residual connections. Evaluations on the Wall Street Journal (WSJ) and AISHELL-1 show that our proposed online Transformer decoder outperforms conventional chunkwise approaches.


1 Introduction

End-to-end (E2E) automatic speech recognition (ASR) has been attracting attention as a method of directly integrating acoustic models (AMs) and language models (LMs) because of the simple training and efficient decoding procedures. In recent years, various models have been studied, including connectionist temporal classification (CTC) [1, 2, 3, 4], attention-based encoder–decoder models [5, 6, 7, 8, 9], their hybrid models [10, 11], and the RNN-transducer [12, 13, 14]. Transformer [15] has been successfully introduced into E2E ASR by replacing RNNs [16, 17, 18, 19, 20], and it outperforms bidirectional RNN models in most tasks [21]. Transformer has multihead self-attention network (SAN) layers, which can leverage a combination of information from completely different positions of the input.

However, similarly to bidirectional RNN models [22], Transformer has a drawback in that the entire utterance is required to compute self-attention, making it difficult to utilize in online recognition systems. Also, the memory and computational requirements of Transformer grow quadratically with the input sequence length, which makes it difficult to apply to longer speech utterances. A simple solution to these problems is block processing as in [17, 19, 23]. However, it loses global context information and its performance is degraded in general.

We have proposed a block processing method for the encoder–decoder Transformer model by introducing a context-aware inheritance mechanism, where an additional context embedding vector handed over from the previously processed block helps to encode not only local acoustic information but also global linguistic, channel, and speaker attributes [24]. Although it outperforms naive blockwise encoders, the block processing method can only be applied to the encoder because it is difficult to apply to the decoder without knowing the optimal chunk step, which depends on the token unit granularity and the language.

For the attention decoder, various online processes have been proposed. In [5, 25, 26], the chunk window is shifted from an input position determined by the median or maximum of the attention distribution. Monotonic chunkwise attention (MoChA) uses a trainable monotonic energy function to shift the chunk window [27]. MoChA has also been extended to make its training stable [28] and to adapt the chunk size to the circumstances [29]. A unique approach was proposed in [30], which uses a trigger mechanism to signal the timing of the attention computation. However, to the best of our knowledge, such monotonic chunkwise approaches have not yet been applied to Transformer.

In this paper, we extend our previous context block approach towards an entirely online E2E ASR system by introducing an online decoding process inspired by MoChA into the Transformer decoder. Our contributions are as follows: 1) triggers for shifting chunks are estimated from the source–target attention (STA), which uses queries and keys; 2) all the past information is utilized in accordance with the characteristics of the Transformer attentions, which are not always monotonic or locally peaky; and 3) a novel MoChA training algorithm is proposed, which extends the trigger function training to deal with the multiple attention heads and residual connections of the decoder layers. Evaluations on the Wall Street Journal (WSJ) and AISHELL-1 show that our proposed online Transformer decoder outperforms conventional chunkwise approaches.

2 Transformer ASR

The baseline Transformer ASR follows that in [21], which is based on the encoder–decoder architecture. An encoder transforms the input speech feature sequence into a shorter intermediate representation, where the length reduction is due to downsampling. Given the encoded representation and the previously emitted character outputs, a decoder estimates the next character.

The encoder consists of two strided convolutional layers for downsampling, a linear projection layer, and positional encoding, followed by a stack of encoder layers and layer normalization. Each encoder layer has a multihead SAN followed by a position-wise feedforward network, both of which have residual connections. Layer normalization is also applied before each module. In the SAN, attention weights are formed from queries ($Q$) and keys ($K$) and applied to values ($V$) as

$\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\left(QK^{\top}/\sqrt{d_k}\right)V$ (1)

where $d_k$ is the dimension of the keys and queries, typically $d_k = d_{\mathrm{model}}/h$ for $h$ heads. We utilize multihead attention, denoted as the $\mathrm{MHA}(\cdot)$ function, as follows:

$\mathrm{MHA}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)W^{O}$ (2)
$\mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V})$ (3)

In (2) and (3), each layer is computed with trainable projection matrices $W_i^{Q}$, $W_i^{K}$, $W_i^{V}$, and $W^{O}$. For all the SANs in the encoder, $Q$, $K$, and $V$ are the same matrices, which are the inputs of the SAN. The position-wise feedforward network is a stack of linear layers.
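As a minimal runnable sketch (NumPy; matrix names are illustrative, not the paper's notation), the scaled dot-product and multihead attention in (1)–(3) can be written as:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention as in Eq. (1)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (n_q, n_k) energies
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)         # row-wise softmax
    return w @ V                               # weighted sum of values

def multihead_attention(Q, K, V, heads, W_o):
    """Multihead attention as in Eqs. (2)-(3): `heads` is a list of
    (W_q, W_k, W_v) projection triples, W_o the output projection."""
    outs = [attention(Q @ W_q, K @ W_k, V @ W_v)
            for (W_q, W_k, W_v) in heads]
    return np.concatenate(outs, axis=-1) @ W_o
```

In encoder SANs, `Q`, `K`, and `V` would all be the layer input, as described above.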

The decoder predicts the probability of the following character from the previously output characters and the encoder output. The character history sequence is converted to character embeddings. Then, a stack of decoder layers is applied, followed by a linear projection and the Softmax function. Each decoder layer consists of a SAN and an STA, followed by a position-wise feedforward network. The first SAN in each decoder layer applies attention weights over the input character sequence, where $Q$, $K$, and $V$ are all set to the input sequence of the SAN. The following STA then attends to the entire encoder output sequence by setting $K$ and $V$ to be the encoder output.

The SAN can leverage a combination of information from completely different positions of the input. This is due to the multiple heads and residual connections of the layers that complement each other, i.e., some attend monotonically and locally while others attend globally. Transformer requires the entire speech utterance for both the encoder and decoder; thus, they are processed only after the end of the utterance, which causes a huge delay. To realize an online ASR system, both the encoder and decoder must be processed online.

Figure 1: Context inheritance mechanism of the encoder.

3 Contextual Block Processing of Encoder

A simple way to process the encoder online is blockwise computation, as in [17, 19, 23]. However, the global channel, speaker, and linguistic context are also important for local phoneme classification. We have proposed a context inheritance mechanism for block processing by introducing an additional context embedding vector [24]. As shown by the tilted arrows in Fig. 1, the context embedding vector is computed in each layer of each block and handed over to the upper layer of the following block. Thus, the SAN in each layer is applied to the block input sequence together with the context embedding vector.

The context embedding vector is introduced into the original formulation in Sec. 2. The query, key, and value sequences of each layer are augmented with a context embedding vector, where the context embedding vector of the previous block of the previous layer is used. The output of each encoder layer of a block is computed simultaneously with the context embedding vector as

(4)
(5)

where the projection matrices and biases are trainable parameters. The output of the SAN not only encodes input acoustic features but also delivers the context information to the succeeding layer, as shown by the tilted red arrows in Fig. 1.
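The block-and-context data flow can be sketched as follows. This is a simplified illustration under assumptions of our own (a generic `layer_fn` stands in for the SAN and feedforward network, and the layer-0 context is seeded with the block mean), not the exact model of [24]:

```python
import numpy as np

def encode_blockwise(x, block_len, n_layers, layer_fn, ctx_init):
    """Simplified sketch of contextual block processing: each layer
    attends over its block augmented with a context embedding inherited
    from the previous block, and emits an updated context embedding."""
    outputs = []
    # ctx[l]: context embedding consumed by layer l of the current block
    ctx = [ctx_init.copy() for _ in range(n_layers)]
    for start in range(0, len(x), block_len):
        h = x[start:start + block_len]
        new_ctx = [h.mean(axis=0)]            # seed for the next block's layer 0
        for l in range(n_layers):
            h_aug = np.vstack([h, ctx[l][None, :]])  # append context slot
            h_aug = layer_fn(h_aug)           # stand-in for SAN + FFN
            h, c = h_aug[:-1], h_aug[-1]      # split frames / updated context
            new_ctx.append(c)
        outputs.append(h)
        # layer l of the next block inherits the context from layer l-1,
        # matching the tilted arrows in Fig. 1
        ctx = new_ctx[:n_layers]
    return np.vstack(outputs)
```

Each block thus sees only its own frames plus one inherited vector, which is what keeps the computation online.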

4 Online Process for Decoder

4.1 Online Transformer Decoder based on MoChA

The decoder of Transformer ASR is incremental at test time, especially for the first SAN of each decoder layer. However, the second STA requires the entire sequence of encoded features. Blockwise attention mechanisms cannot simply be applied with a fixed step size, because the step size depends on the output token granularity (grapheme, character, (sub-)word, and so forth) and the language. In addition, not all the STAs are monotonic, because the other heads and layers complement each other. Typically, in the lower layers of the Transformer decoder, some heads attend to wider areas, and some constantly attend to a certain area, as shown in Fig. 2. Therefore, the chunk shifting and the chunk size should be adaptive.

For RNN models, the median or maximum of the attention distribution is used as a cue for shifting a fixed-length chunk, where the parameters of the original batch models are reused [5, 25, 26]. MoChA further introduces the probability distribution of chunking to train the monotonic chunking mechanism. In this paper, we propose a novel online decoding method inspired by MoChA.

Figure 2: Examples of attentions in a Transformer decoder layer. (a) is a head having wider attentions, and (b) is a head constantly attending to a certain area of the encoder output.

MoChA [27] splits the input sequence into small chunks over which soft attention is computed. It learns a monotonic alignment between the encoder features and the output sequence using fixed-length chunking. “Soft” attention is efficiently utilized with backpropagation to train the chunking parameters. At test time, online “hard” chunking is used to realize online ASR, which achieves almost the same performance as the soft attention model.
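The hard chunkwise inference of the original MoChA (single head, fixed chunk size; not the authors' modified version below) can be sketched as follows, with illustrative names:

```python
import numpy as np

def mocha_hard_infer(p_select, energies, values, w):
    """Hard chunkwise inference in the spirit of the original MoChA.
    p_select[i, j]: selection probability for output step i at frame j;
    the chunk endpoint moves monotonically forward from step to step."""
    T = p_select.shape[1]
    t = 0                                     # current chunk endpoint
    contexts = []
    for i in range(p_select.shape[0]):
        # scan forward from the previous endpoint until the trigger fires
        j = t
        while j < T - 1 and p_select[i, j] < 0.5:
            j += 1                            # no trigger: advance a frame
        t = j
        lo = max(0, t - w + 1)                # chunk of the last w frames
        e = energies[i, lo:t + 1]
        a = np.exp(e - e.max()); a /= a.sum() # softmax over the chunk
        contexts.append(a @ values[lo:t + 1])
    return np.vstack(contexts)
```

Since the endpoint only moves forward, decoding never revisits frames, which is what makes the process online.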

0:  encoder features , length , chunk size
1:  Initialize: , ,
2:  while  do
3:     for  to  do
4:        for  to  do
5:           
6:           if  then
7:              
8:              break
9:           end if
10:        end for
11:        if  then
12:           
13:        end if
14:           // or
15:        for  to  do
16:           
17:        end for
18:        
19:     end for
20:     ,
21:  end while
Algorithm 1 MoChA Inference for a Transformer Decoder Layer

Since Transformer has unique properties, the conventional MoChA cannot be directly applied. One property is that the STA is computed using queries and keys, while MoChA is formulated on the basis of an attention energy using a hidden state vector of the RNN and the encoder output. Another property is that not all the STAs are monotonic, because the other heads and layers complement each other, as the examples in Fig. 2 show. We modify the training algorithm of MoChA to deal with these characteristics.

4.2 Inference Algorithm

The inference process for a decoder layer is shown in Algorithm 1. The differences from the original MoChA are highlighted in red. In our case, MoChA decoding is introduced into the second STA of each decoder layer; the query in Algorithm 1 is the output of the first SAN in the decoder layer. Line 20 concatenates the heads and computes the output of the STA network in each decoder layer, as in (2). MoChA can be applied independently to each head; thus, we added line 3. In line 18, the attention weight is applied to the selected values to compute the output as in (3), and the selection chunk shifts monotonically.

The indicator in line 5 is regarded as a trigger function at each head for moving the computing chunk, and it is estimated from an energy function. For the energy functions in lines 5 and 16, the original MoChA utilizes tanh because it is used as a nonlinear function in RNNs. However, in Transformer, attentions are computed using queries and keys as in (1). Therefore, we modify them for each head as

(6)
(7)

where the scale and offset of the energy are trainable scalar parameters, and the queries and keys are projected with trainable matrices as in (3).

Note the exception in lines 11–13, where the trigger never ignites within the frame sequence; the original MoChA zeroes out the attention in this case. However, we compute the attention using the value from the previous step (line 12), because the exception often occurs in Transformer. Also, for online processing, all the past frames of encoded features are available without any additional latency, whereas the original MoChA computes attention only within the fixed-length chunk. Taking into account the property that Transformer attentions tend to be distributed widely and are not always monotonic, we also consider utilizing the past frames: we optionally modify line 14 to attend over all the past frames, and we test both cases in Sec. 5.
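The optional line-14 modification amounts to widening the attention window from the last chunk to all past frames. A small sketch under the same illustrative setup as before (names are assumptions, not the paper's notation):

```python
import numpy as np

def chunk_context(energies, values, t, w, use_all_past=False):
    """STA context at endpoint frame t: either over the last w frames
    (original MoChA) or over all past frames (line-14 modification)."""
    lo = 0 if use_all_past else max(0, t - w + 1)
    e = energies[lo:t + 1]
    a = np.exp(e - e.max()); a /= a.sum()   # softmax over the window
    return a @ values[lo:t + 1]
```

Both variants stay online: only frames up to the current endpoint are ever touched.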

4.3 Training Algorithm

MoChA strongly relies on the monotonicity of the attentions, and it also forces attentions to be monotonic, while Transformer has a flexible attention mechanism that may integrate information from various positions without monotonicity. Furthermore, the Transformer decoder has both multiple heads and residual connections. Therefore, typically, not all the attentions become monotonic, as in Fig. 2.

0:  encoder features , length , chunk size , Gauss. noise
1:  Initialize: , , ,
2:  while  do
3:     for  to  do
4:        for  to  do
5:           
6:           
7:                                
8:        end for
9:        for  to  do
10:           
11:           
12:        end for
13:        
14:     end for
15:     ,
16:  end while
Algorithm 2 MoChA Training for a Transformer Decoder Layer

The original MoChA training computes a cumulative probability of computing the local chunk attention at each encoder frame, defined as

(8)

When the selection probability is zero for all frames at some output step, which occurs frequently in Transformer because the other heads and layers complement each other for this frame, the cumulative probability rapidly decays afterwards. An example is shown in Fig. 3. The top left shows the trigger probability in Algorithm 1, which has monotonicity. The top right is the original cumulative probability in (8), in which the value decreases immediately after around frame 50 of the target and does not recover.

Therefore, we introduce the probability of the trigger not igniting into the computation of the cumulative probability. The new training algorithm for Transformer is shown in Algorithm 2, which encourages MoChA to exploit the flexibility of the SAN in Transformer (colored lines are new relative to the original MoChA). An example of our modified cumulative probability is shown in the bottom left of Fig. 3, which maintains the monotonicity. The bottom right shows the resulting expected attention.
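For reference, the expected-alignment recurrence of standard monotonic attention, on which the cumulative probability in (8) builds, can be sketched as follows. This is the original recurrence for a single head with illustrative names, not the authors' modified training rule:

```python
import numpy as np

def expected_attention(p):
    """Expected monotonic alignment alpha[i, j] given selection
    probabilities p[i, j] (output step i, encoder frame j), computed
    with the recurrence q_ij = (1 - p_{i,j-1}) q_{i,j-1} + alpha_{i-1,j},
    alpha_ij = p_ij * q_ij."""
    I, T = p.shape
    alpha = np.zeros((I, T))
    alpha_prev = np.zeros(T)
    alpha_prev[0] = 1.0                 # alignment starts at frame 0
    for i in range(I):
        q = 0.0
        for j in range(T):
            if j > 0:
                q *= (1.0 - p[i, j - 1])  # did not trigger at frame j-1
            q += alpha_prev[j]            # or arrived here from step i-1
            alpha[i, j] = p[i, j] * q
        alpha_prev = alpha[i]
    return alpha
```

When `p` is near zero over a whole row, the rows that follow receive almost no mass, which is exactly the decay problem described above.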

Figure 3: Example of expected attention in the Transformer decoder. Top left: trigger probability in Algorithm 2; top right: original cumulative probability in (8); bottom left: our modified cumulative probability in Algorithm 2; bottom right: resulting expected attention. The head index is omitted for simplicity.

5 Experiments

5.1 Experimental Setup

We carried out experiments using the WSJ English and AISHELL-1 Mandarin data [31]. The input acoustic features were 80-dimensional filter banks and pitch, extracted with a hop size of 10 ms and a window size of 25 ms, which were normalized with the global mean and variance. For the WSJ English setup, the number of output classes was 52, including symbols. We used 4,231 character classes for the AISHELL-1 Mandarin setup.

For training, we utilized multitask learning with a CTC loss, as in [11, 21], with a weight of 0.1. A linear layer was added onto the encoder to project the encoder output to the character probabilities for the CTC. The Transformer models were trained over 100 epochs for WSJ and 50 epochs for AISHELL-1, with the Adam optimizer and Noam learning rate decay as in [15]. The learning rate was set to 5.0 and the minibatch size to 20. SpecAugment [park19] was applied only to WSJ.

The parameters of the last 10 epochs were averaged and used for inference. The encoder and decoder layers each used position-wise feedforward networks with 2048 units, and both had a dropout rate of 0.1. The attention dimension and the number of heads were fixed for all the multihead attentions. We trained three types of Transformer, namely, the baseline Transformer [21], the Transformer with the contextual block processing encoder (CBP Enc. + Batch Dec.) [24], and the proposed entirely online model with the online decoder (CBP Enc. + Proposed Dec.). The training was carried out using ESPnet [32] with the PyTorch backend. Median-based chunk shifting [5] with a window of 16 frames was also applied to the batch decoder, with and without past frames, for a fair comparison (CBP Enc. + Median Dec.).

For the CBP Enc. models, we used fixed block and hop sizes. For the initialization of the context embedding, we utilized the average of the input features to simplify the implementation. The decoder was trained with the proposed MoChA architecture with a fixed chunk size. The STAs were computed either within each chunk or using all the past frames of encoded features, as described in Sec. 4.2.

The decoding was performed jointly with the CTC, whose probabilities were added, with weights of 0.3 for WSJ and 0.7 for AISHELL-1, to those of Transformer. We performed decoding using a beam search with a beam size of 10. An external word-level LM, which was a single-layer LSTM with 1000 units, was used for rescoring via shallow fusion [33] for WSJ. A character-level LM with the same structure was fused for AISHELL-1.
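The per-hypothesis score combination during beam search can be sketched as follows. This is one common formulation of joint CTC/attention decoding with shallow fusion, as in [11, 33]; the weights shown in the call are only examples:

```python
def combined_score(logp_att, logp_ctc, logp_lm, ctc_weight, lm_weight):
    """Interpolate attention and CTC log-probabilities and add an
    external LM log-probability by shallow fusion."""
    return ((1.0 - ctc_weight) * logp_att
            + ctc_weight * logp_ctc
            + lm_weight * logp_lm)

# e.g. combined_score(logp_att, logp_ctc, logp_lm,
#                     ctc_weight=0.3, lm_weight=0.5)
```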

For comparison, unidirectional and bidirectional LSTM models were also trained as in [11]. The models consisted of an encoder with a VGG layer, followed by LSTM layers and a decoder. The numbers of encoder layers were six and three, with 320 and 1024 units for WSJ and AISHELL-1, respectively. The decoders were an LSTM layer with 300 units for WSJ and two LSTM layers with 1024 units for AISHELL-1.

                                 WSJ (WER)   AISHELL-1 (CER)
Batch processing
  biLSTM [11]                       6.7           9.2
  uniLSTM                           8.4          11.8
  Transformer [21]                  4.9           6.7
  CBP Enc. + Batch Dec. [24]        6.0           7.6
Online processing
  CBP Enc. + Median Dec. [5]        9.9          25.0
    with past frames                7.9          24.2
  CBP Enc. + Proposed Dec.          8.8          18.7
    with past frames                6.6           9.7
Table 1: Word error rates (WERs) for WSJ and character error rates (CERs) for AISHELL-1 in the evaluation tasks.

5.2 Results

Experimental results are summarized in Table 1. The chunk hopping using the median of attention worked well in the English task but poorly in the Chinese task. This was because Chinese requires a wider area of the encoded features to emit each character. On the other hand, our proposed decoder prevented the degradation of performance. In particular, using all the past frames of encoded features, our proposed decoder achieved the highest accuracy among the online processing methods. This indicated that the new decoding algorithm was able to exploit the wider attentions of Transformer.

6 Conclusion

We extended our previous Transformer, which adopted a contextual block processing encoder, towards an entirely online E2E ASR system by introducing an online decoding process inspired by MoChA into the Transformer decoder. The MoChA training and inference algorithms were extended to cope with the unique properties of Transformer, whose attentions are not always monotonic or peaky, and whose decoder layers have multiple heads and residual connections. Evaluations on WSJ and AISHELL-1 showed that our proposed online Transformer decoder outperformed conventional chunkwise approaches. Thus, we realized entirely online processing of Transformer ASR with reasonable performance.

References

  • [1] Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber, “Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,” in Proc. of the 23rd International Conference on Machine Learning, 2006, pp. 369–376.
  • [2] Alex Graves and Navdeep Jaitly, “Towards end-to-end speech recognition with recurrent neural networks,” in International Conference on Machine Learning, 2014, pp. 1764–1772.
  • [3] Yajie Miao, Mohammad Gowayyed, and Florian Metze, “EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding,” in Proc. of IEEE Automatic Speech Recognition and Understanding (ASRU) Workshop, 2015, pp. 167–174.
  • [4] Dario Amodei et al., “Deep Speech 2: End-to-end speech recognition in English and Mandarin,” in Proc. of the 33rd International Conference on Machine Learning, 2016, vol. 48, pp. 173–182.
  • [5] Jan K. Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio, “Attention-based models for speech recognition,” in Advances in Neural Information Processing Systems, 2015, pp. 577–585.
  • [6] William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals, “Listen, attend and spell: A neural network for large vocabulary conversational speech recognition,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016, pp. 4960–4964.
  • [7] Liang Lu, Xingxing Zhang, and Steve Renals, “On training the recurrent neural network encoder-decoder for large vocabulary end-to-end speech recognition,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016, pp. 5060–5064.
  • [8] Albert Zeyer, Kazuki Irie, Ralf Schlüter, and Hermann Ney, “Improved training of end-to-end attention models for speech recognition,” in Proc. of Interspeech 2018, 2018, pp. 7–11.
  • [9] Chung-Cheng Chiu, Tara N. Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J. Weiss, Kanishka Rao, Ekaterina Gonina, et al., “State-of-the-art speech recognition with sequence-to-sequence models,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 4774–4778.
  • [10] Suyoun Kim, Takaaki Hori, and Shinji Watanabe, “Joint CTC-attention based end-to-end speech recognition using multi-task learning,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 4835–4839.
  • [11] Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R. Hershey, and Tomoki Hayashi, “Hybrid CTC/attention architecture for end-to-end speech recognition,” IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 8, pp. 1240–1253, 2017.
  • [12] Alex Graves, “Sequence transduction with recurrent neural networks,” in ICML Representation Learning Workshop, 2012.
  • [13] Alex Graves, Abdel-Rahman Mohamed, and Geoffrey Hinton, “Speech recognition with deep recurrent neural networks,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013, pp. 6645–6649.
  • [14] Kanishka Rao, Haşim Sak, and Rohit Prabhavalkar, “Exploring architectures, data and units for streaming end-to-end speech recognition with RNN-transducer,” in Proc. of IEEE Automatic Speech Recognition and Understanding (ASRU) Workshop, 2017, pp. 193–199.
  • [15] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, 2017, pp. 5998–6008.
  • [16] Linhao Dong, Shuang Xu, and Bo Xu, “Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 5884–5888.
  • [17] Matthias Sperber, Jan Niehues, Graham Neubig, Sebastian Stüker, and Alex Waibel, “Self-attentional acoustic models,” in Proc. of Interspeech, 2018, pp. 3723–3727.
  • [18] Julian Salazar, Katrin Kirchhoff, and Zhiheng Huang, “Self-attention networks for connectionist temporal classification in speech recognition,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 7115–7119.
  • [19] Linhao Dong, Feng Wang, and Bo Xu, “Self-attention aligner: A latency-control end-to-end model for ASR using self-attention network and chunk-hopping,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 5656–5660.
  • [20] Yuanyuan Zhao, Jie Li, Xiaorui Wang, and Yan Li, “The Speechtransformer for large-scale Mandarin Chinese speech recognition,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 7095–7099.
  • [21] Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi Yamamoto, Xiaofei Wang, et al., “A comparative study on transformer vs RNN in speech applications,” arXiv preprint arXiv:1909.06317, 2019.
  • [22] Mike Schuster and Kuldip K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Transactions on Signal Processing, vol. 45, no. 11, pp. 2673–2681, 1997.
  • [23] Navdeep Jaitly, David Sussillo, Quoc V. Le, Oriol Vinyals, Ilya Sutskever, and Samy Bengio, “A neural transducer,” arXiv preprint arXiv:1511.04868, 2015.
  • [24] Emiru Tsunoo, Yosuke Kashiwagi, Toshiyuki Kumakura, and Shinji Watanabe, “Transformer ASR with contextual block processing,” arXiv preprint arXiv:1910.07204, 2019.
  • [25] William Chan and Ian Lane, “On online attention-based speech recognition and joint Mandarin character-Pinyin training,” in Proc. of Interspeech, 2016, pp. 3404–3408.
  • [26] André Merboldt, Albert Zeyer, Ralf Schlüter, and Hermann Ney, “An analysis of local monotonic attention variants,” Proc. of Interspeech 2019, pp. 1398–1402, 2019.
  • [27] Chung-Cheng Chiu and Colin Raffel, “Monotonic chunkwise attention,” arXiv preprint arXiv:1712.05382, 2017.
  • [28] Haoran Miao, Gaofeng Cheng, Pengyuan Zhang, Ta Li, and Yonghong Yan, “Online hybrid CTC/attention architecture for end-to-end speech recognition,” Proc. of Interspeech 2019, pp. 2623–2627, 2019.
  • [29] Ruchao Fan, Pan Zhou, Wei Chen, Jia Jia, and Gang Liu, “An online attention-based model for speech recognition,” Proc. of Interspeech 2019, pp. 4390–4394, 2019.
  • [30] Niko Moritz, Takaaki Hori, and Jonathan Le Roux, “Triggered attention for end-to-end speech recognition,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 5666–5670.
  • [31] Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, and Hao Zheng, “AIShell-1: An open-source Mandarin speech corpus and a speech recognition baseline,” in Oriental COCOSDA, 2017, pp. 1–5.
  • [32] Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, et al., “ESPnet: End-to-end speech processing toolkit,” in Proc. of Interspeech, 2019, pp. 2207–2211.
  • [33] Anjuli Kannan, Yonghui Wu, Patrick Nguyen, Tara N. Sainath, Zhifeng Chen, and Rohit Prabhavalkar, “An analysis of incorporating an external language model into a sequence-to-sequence model,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 5824–5828.