Sequence-to-sequence models with attention [1, 2, 3, 4] have attracted significant interest on ASR tasks in recent years. Sequence-to-sequence attention-based models integrate the separate acoustic, pronunciation and language models of a conventional ASR system into a single neural network, and do not make the conditional independence assumptions of standard hidden Markov model (HMM) based systems.
Sequence-to-sequence attention-based models are commonly composed of an encoder, which consists of multiple recurrent neural network (RNN) layers that model the acoustics, and a decoder, which consists of one or more RNN layers that predict the output sub-word sequence. An attention layer acts as the interface between the encoder and the decoder: it selects frames in the encoder representation that the decoder should attend to in order to predict the next sub-word unit. However, RNNs maintain a hidden state that summarizes the entire past, which prevents parallel computation within a sequence. To reduce sequential computation, the Transformer model architecture has been proposed. This architecture eschews recurrence and instead relies entirely on an attention mechanism to draw global dependencies between input and output, which allows significantly more parallelization and achieved a new single-model state-of-the-art BLEU score on NMT tasks. Given the outstanding performance of the Transformer, this paper adopts it as the basic architecture of our sequence-to-sequence attention-based models on Mandarin Chinese ASR tasks.
Recently, various modeling units for sequence-to-sequence attention-based models have been studied on English ASR tasks, such as graphemes, CI-phonemes, context-dependent phonemes and word piece models [1, 5, 8]. However, few related works have explored sequence-to-sequence attention-based models on Mandarin Chinese ASR tasks. As is well known, Mandarin Chinese is a syllable-based language and syllables are its logical unit of pronunciation. The number of syllables is fixed (around 1400 tonal pinyins are used in this work) and each written character corresponds to a syllable. In addition, syllables are a longer linguistic unit, which reduces the difficulty of syllable choices in the decoder of sequence-to-sequence attention-based models. Moreover, syllables have the advantage of avoiding the out-of-vocabulary (OOV) problem.
Due to these advantages of syllables, we adopt syllables as the modeling unit in this paper and investigate a comparison between a CI-phoneme based model and a syllable based model with the Transformer on Mandarin Chinese ASR tasks. Since we compare CI-phonemes and syllables, the CI-phoneme or syllable sequences produced by the Transformer have to be converted into word sequences for the performance comparison in terms of character error rate (CER). The conversion from CI-phoneme or syllable sequences to word sequences can itself be regarded as a sequence-to-sequence task, which is also modeled by the Transformer in this paper. We then propose a greedy cascading decoder with the Transformer to approximately maximize the posterior probability. Experiments on HKUST datasets reveal that the Transformer performs very well on Mandarin Chinese ASR tasks. Moreover, we experimentally confirm that the syllable based model with the Transformer outperforms its CI-phoneme based counterpart, achieving a CER competitive with the state-of-the-art result of the joint CTC-attention based encoder-decoder network.
2 Related work
Sequence-to-sequence attention-based models have shown very encouraging results on English ASR tasks [1, 8, 10]. However, it is quite difficult to apply them to Mandarin Chinese ASR tasks. Chan et al. proposed a joint Character-Pinyin sequence-to-sequence attention-based model for Mandarin Chinese ASR tasks, in which the Pinyin information was only used during training to improve the performance of the character model. Instead of using a joint Character-Pinyin model, later work directly used Chinese characters as the network output by mapping the one-hot character representation to an embedding vector via a neural network layer.
In this paper, we are concerned with syllables as the modeling unit. Acoustic models using syllables as the modeling unit have been investigated for a long time [13, 14, 15]. Ganapathiraju et al. first showed that syllable based acoustic models can outperform context dependent phone based acoustic models with GMMs. Wu et al. experimented with a syllable based context dependent Chinese acoustic model and discovered that context dependent syllable based acoustic models can show promising performance. Qu et al. explored the CTC-SMBR-LSTM using syllables as outputs and verified that a syllable based CTC model can perform better than a CI-phoneme based CTC model on Mandarin Chinese ASR tasks. Inspired by their work, we extend it from CTC based models to sequence-to-sequence attention-based models.
Using syllables as the modeling unit, it is natural to consider the conversion from Chinese syllable sequences to Chinese word sequences as a task of labelling unsegmented sequence data. Liu et al. proposed an RNN based supervised sequence labelling method with the CTC algorithm to achieve a direct conversion from syllable sequences to word sequences.
3 System overview
3.1 Transformer model
The Transformer model architecture is the same as that of sequence-to-sequence attention-based models except that it relies entirely on self-attention and position-wise, fully connected layers for both the encoder and decoder. The encoder maps an input sequence of symbol representations x = (x_1, ..., x_n) to a sequence of continuous representations z = (z_1, ..., z_n). Given z, the decoder then generates an output sequence y = (y_1, ..., y_m) of symbols one element at a time.
3.1.1 Multi-head attention
An attention function maps a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. Scaled dot-product attention is adopted as the basic attention function in the Transformer, as described by equation (1):

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( \frac{Q K^{\top}}{\sqrt{d_k}} \right) V \quad (1)

where the queries Q and keys K have the same dimension d_k, and the values V have dimension d_v.
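As an illustration, the scaled dot-product attention of equation (1) can be sketched in a few lines of NumPy (the dimensions below are toy values, not the paper's settings):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q: (m, d_k), K: (n, d_k), V: (n, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (m, n) compatibility scores
    weights = softmax(scores, axis=-1)   # each query's weights sum to 1
    return weights @ V                   # (m, d_v) weighted sum of the values

# Toy example: 5 queries attending over 7 key-value pairs.
rng = np.random.default_rng(0)
out = scaled_dot_product_attention(rng.normal(size=(5, 8)),
                                   rng.normal(size=(7, 8)),
                                   rng.normal(size=(7, 16)))
print(out.shape)  # (5, 16)
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with the key dimension, which would otherwise push the softmax into regions with vanishing gradients.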
Instead of performing a single attention function, the Transformer employs multi-head attention (MHA), which projects the queries, keys and values h times with different, learned linear projections to d_k, d_k and d_v dimensions, respectively. On each of these projected versions of queries, keys and values, the basic attention function is performed in parallel, yielding d_v-dimensional output values. These are concatenated and projected again, resulting in the final values:

\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h) W^{O}
\mathrm{head}_i = \mathrm{Attention}(Q W_i^{Q}, K W_i^{K}, V W_i^{V})

where the projections are parameter matrices W_i^{Q} \in \mathbb{R}^{d_{\mathrm{model}} \times d_k}, W_i^{K} \in \mathbb{R}^{d_{\mathrm{model}} \times d_k}, W_i^{V} \in \mathbb{R}^{d_{\mathrm{model}} \times d_v} and W^{O} \in \mathbb{R}^{h d_v \times d_{\mathrm{model}}}, h is the number of heads, and d_{\mathrm{model}} is the model dimension.
MHA behaves like ensembles of relatively small attentions to allow the model to jointly attend to information from different representation subspaces at different positions, which is beneficial to learn complicated alignments between the encoder and decoder.
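A corresponding NumPy sketch of MHA follows; the random matrices stand in for learned projections, and the toy dimensions (d_model = 16, h = 4) are illustrative only:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention over single matrices.
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def multi_head_attention(Q, K, V, params):
    # params holds per-head projections W_q[i], W_k[i], W_v[i] and the output W_o.
    heads = [attention(Q @ Wq, K @ Wk, V @ Wv)
             for Wq, Wk, Wv in zip(params["W_q"], params["W_k"], params["W_v"])]
    return np.concatenate(heads, axis=-1) @ params["W_o"]  # (m, d_model)

# Toy configuration: d_model = 16, h = 4 heads, d_k = d_v = d_model // h.
rng = np.random.default_rng(1)
d_model, h = 16, 4
d_k = d_model // h
params = {
    "W_q": [rng.normal(size=(d_model, d_k)) for _ in range(h)],
    "W_k": [rng.normal(size=(d_model, d_k)) for _ in range(h)],
    "W_v": [rng.normal(size=(d_model, d_k)) for _ in range(h)],
    "W_o": rng.normal(size=(h * d_k, d_model)),
}
x = rng.normal(size=(6, d_model))          # 6 positions of d_model features
y = multi_head_attention(x, x, x, params)  # self-attention: Q = K = V = x
print(y.shape)  # (6, 16)
```

Setting d_k = d_v = d_model / h keeps the total computation comparable to a single full-dimensional attention while letting each head specialize.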
3.1.2 Transformer model architecture
The architecture of the ASR Transformer is shown in Figure 1; it stacks MHA and position-wise, fully connected layers for both the encoder and decoder. The encoder is composed of a stack of identical layers, each of which has two sub-layers: the first is an MHA sub-layer, and the second is a position-wise fully connected feed-forward network. Residual connections are employed around each of the two sub-layers, followed by layer normalization. The decoder is similar to the encoder, except that it inserts a third sub-layer that performs MHA over the output of the encoder stack. To prevent leftward information flow and preserve the auto-regressive property, the self-attention sub-layers in the decoder mask out all values corresponding to illegal connections. In addition, positional encodings are added to the inputs at the bottoms of the encoder and decoder stacks, injecting information about the relative or absolute position of each element so that the model can make use of sequence order.
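For illustration, the sinusoidal positional encoding of the original Transformer and the decoder's causal mask can be sketched as follows (we assume the sinusoidal variant and an even d_model; the paper does not state which encoding it uses):

```python
import numpy as np

def sinusoidal_positional_encoding(length, d_model):
    # PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(...)
    pos = np.arange(length)[:, None]
    i = np.arange(0, d_model, 2)[None, :]
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((length, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def causal_mask(length):
    # Entry (i, j) is 0 where position i may attend to position j (j <= i)
    # and -inf otherwise; it is added to the attention scores before the
    # softmax, zeroing the weights of all "illegal" rightward connections.
    return np.where(np.tril(np.ones((length, length))) == 1, 0.0, -np.inf)

pe = sinusoidal_positional_encoding(100, 512)
mask = causal_mask(4)
print(pe.shape)  # (100, 512)
print(mask[1])   # row 1 allows attending to positions 0 and 1 only
```

The positional encoding is simply added to the (scaled) input embeddings at the bottom of each stack, so no extra parameters are needed to represent order.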
3.2 Greedy cascading decoder with the Transformer
Since both CI-phonemes and syllables are investigated in this paper, the CI-phoneme or syllable sequences have to be converted into word sequences using a lexicon during beam-search decoding.
The speech recognition problem can be defined as finding the word sequence W that maximizes the posterior probability given the observation X, which can be transformed as follows:

W^{*} = \arg\max_{W} P(W|X) \approx \arg\max_{W} \max_{S} P(W|S) \, P(S|X) \quad (6)

Here, P(S|X) is the probability from observation X to sub-word unit sequence S, and P(W|S) is the probability from sub-word unit sequence S to word sequence W.
According to equation (6), both the mapping from observation to sub-word unit sequence and the mapping from sub-word unit sequence to word sequence can be regarded as sequence-to-sequence transformations, which can be modeled by sequence-to-sequence attention-based models; specifically, the Transformer is used in this paper.
Then, the greedy cascading decoder with the Transformer is proposed to directly estimate equation (6). First, the best sub-word unit sequence is calculated by beam-search decoding with the Transformer from observation to sub-word unit sequence. Then, the best word sequence is chosen by beam-search decoding with the Transformer from sub-word unit sequence to word sequence. By cascading these two sequence-to-sequence attention-based models, we assume that equation (6) can be approximated.
In this work, fixed beam sizes are employed in both decoding stages.
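As a sketch, the two-stage cascade can be written as a plain composition of two decode functions; `acoustic_decode`, `converter_decode` and the toy lookup tables below are hypothetical stand-ins for beam-search decoding with the two trained Transformers:

```python
def cascade_decode(features, acoustic_decode, converter_decode):
    """Greedy cascading decoder: keep only the single best hypothesis
    of stage 1 and feed it to stage 2.

    acoustic_decode: observation -> best sub-word unit sequence (approx. argmax_S P(S|X))
    converter_decode: sub-word unit sequence -> best word sequence (approx. argmax_W P(W|S))
    """
    subwords = acoustic_decode(features)   # stage 1: Transformer with beam search
    words = converter_decode(subwords)     # stage 2: Transformer with beam search
    return words

# Toy stand-ins: lookup tables playing the role of the two models.
toy_acoustic = {("f1", "f2"): ["ni3", "hao3"]}.get
toy_converter = {("ni3", "hao3"): "你好"}.get

def acoustic_decode(x):
    return toy_acoustic(tuple(x))

def converter_decode(s):
    return toy_converter(tuple(s))

result = cascade_decode(["f1", "f2"], acoustic_decode, converter_decode)
print(result)  # 你好
```

Keeping only the single best stage-1 hypothesis is what makes the cascade "greedy": it replaces the sum over all sub-word sequences in P(W|X) with a max, so the product of the two stage scores only approximates the true posterior.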
4 Experiments
The HKUST corpus (LDC2005S15, LDC2005T32), a corpus of Mandarin Chinese conversational telephone speech, was collected and transcribed by the Hong Kong University of Science and Technology (HKUST). It contains about 150 hours of speech, with 873 calls in the training set and 24 calls in the test set. All experiments are conducted using 80-dimensional log-Mel filterbank features, computed with a 25ms window and shifted every 10ms. The features are normalized via mean subtraction and variance normalization on a per-speaker basis. Similar to [19, 20], each frame is stacked with 3 frames to the left, and the stacked frames are downsampled to a 30ms frame rate.
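A minimal NumPy sketch of this stacking-and-downsampling step follows; the zero-padding at the start and the exact ordering of the stacked frames are our assumptions, since the text does not specify them:

```python
import numpy as np

def stack_and_downsample(feats, left=3, rate=3):
    # feats: (T, 80) log-Mel frames at a 10 ms shift.
    # Each frame is stacked with `left` frames to its left (zero-padded at
    # the start), then every `rate`-th stacked frame is kept, turning the
    # 10 ms frame rate into a 30 ms frame rate.
    T, d = feats.shape
    padded = np.vstack([np.zeros((left, d)), feats])
    # Column blocks are ordered [t-3, t-2, t-1, t] for frame t.
    stacked = np.hstack([padded[i:i + T] for i in range(left + 1)])  # (T, (left+1)*d)
    return stacked[::rate]

feats = np.random.default_rng(2).normal(size=(100, 80))  # 1 s of features
out = stack_and_downsample(feats)
print(out.shape)  # (34, 320)
```

Stacking and downsampling shortens the encoder input sequence threefold, which matters for the Transformer since self-attention cost grows quadratically with sequence length.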
We perform our experiments on the base model and the big model (D512-H8 and D1024-H16, respectively) of the Transformer. The basic architecture of these two models is the same, but with different parameter settings. Table 1 lists the experimental parameters of the two models. The Adam algorithm with gradient clipping and warmup is used for optimization. During training, label smoothing is employed.
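Label smoothing mixes the one-hot targets with a uniform distribution before computing the cross-entropy, discouraging over-confident predictions. A minimal NumPy sketch (the smoothing value and the mean reduction here are illustrative, not the paper's settings):

```python
import numpy as np

def label_smoothing_loss(log_probs, target, eps=0.1):
    # log_probs: (T, V) per-step log-probabilities; target: (T,) class indices.
    # The smoothed target puts eps/V mass on every class plus (1 - eps) extra
    # mass on the true class, then the usual cross-entropy is computed.
    T, V = log_probs.shape
    smoothed = np.full((T, V), eps / V)
    smoothed[np.arange(T), target] += 1.0 - eps
    return -(smoothed * log_probs).sum(axis=-1).mean()

# With eps=0 this reduces to the ordinary cross-entropy -log p(target).
log_probs = np.log(np.array([[0.7, 0.2, 0.1]]))
print(label_smoothing_loss(log_probs, np.array([0]), eps=0.0))  # -log 0.7
```

Because some probability mass is deliberately assigned to wrong classes, the optimal model output is no longer a hard one-hot distribution, which tends to regularize training.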
First, for the Transformer from observation to sub-word unit sequence, CI-phonemes without silence (phonemes with tones) are employed in the CI-phoneme based experiments, and syllables (pinyins with tones) in the syllable based experiments. Extra tokens (such as an unknown token) are appended to the outputs.
Standard tied-state cross-word triphone GMM-HMMs are first trained with maximum likelihood estimation to generate CI-phoneme alignments on the training and test sets, in order to handle multiple pronunciations of the same word in Mandarin Chinese. We then generate syllable alignments from these CI-phoneme alignments according to the lexicon. Finally, we train the Transformer with these alignments.
In order to verify the effectiveness of the greedy cascading decoder proposed in this paper, the CI-phoneme and syllable alignments on the test data are converted into word sequences using the trained models. The resulting CERs of the CI-phoneme based model and the syllable based model are the lower bounds of our experiments. If the sub-word unit sequences calculated by the Transformer from observation to sub-word unit sequence can approximate these corresponding alignments, our experimental results can approach the lower bounds using the greedy cascading decoder.
The self-attention alignments in the encoder layers and the vanilla attention alignments in the encoder-decoder attention layers are visualized with TensorFlow. As can be seen, both the self-attention matrices and the vanilla attention matrices appear very localized, which helps us understand how changing the attention window influences the CER.
4.3 Results of CI-phoneme and syllable based model
Our results are summarized in Table 2. As can be seen in the table, the CI-phoneme and syllable based models with the Transformer achieve competitive results on HKUST datasets in terms of CER. This reveals that the Transformer is well suited to ASR tasks thanks to its powerful sequence modeling capability, even though it relies entirely on self-attention without using RNNs or convolutions. Furthermore, we note that the syllable based model outperforms the corresponding CI-phoneme based model in CER. The results suggest that the syllable is a better modeling unit than the CI-phoneme for sequence-to-sequence attention-based models on Mandarin Chinese ASR tasks, validating the conclusion previously drawn on CTC based models. Finally, the big model consistently performs better than the base model, whether CI-phoneme based or syllable based. Therefore, our further experiments are conducted on the big model.
We further generate more training data by linearly scaling the audio lengths by small factors (speed perturbation). It can be observed that the syllable based model with speed perturbation improves and achieves the best CER, whereas the CI-phoneme based model with speed perturbation becomes very slightly worse than without it. One interpretation of this phenomenon is that syllables have a longer duration and more invariance than CI-phonemes, so a small speed perturbation does not affect the pronunciation of syllables much and instead provides more useful and varied training data. However, a small speed perturbation might have more impact on the pronunciation of CI-phonemes due to their short duration.
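Speed perturbation can be sketched as naive linear-interpolation resampling of the waveform; the factors 0.9 and 1.1 below are common choices in the literature, used here only for illustration (the paper's exact factors are not recoverable here, and production systems resample with proper filtering):

```python
import numpy as np

def speed_perturb(signal, factor):
    # Resample the waveform to `factor` times its original length via linear
    # interpolation: factor < 1 speeds the audio up, factor > 1 slows it down.
    n = int(round(len(signal) * factor))
    old_t = np.arange(len(signal))
    new_t = np.linspace(0, len(signal) - 1, n)
    return np.interp(new_t, old_t, signal)

x = np.sin(np.linspace(0, 20 * np.pi, 16000))  # 1 s of a test tone at 16 kHz
slow = speed_perturb(x, 1.1)
fast = speed_perturb(x, 0.9)
print(len(slow), len(fast))  # 17600 14400
```

Both perturbed copies are added to the training set alongside the originals, tripling the amount of acoustic training data at negligible cost.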
4.4 Comparison with previous works
In Table 3, we compare our experimental results to other model architectures from the literature on HKUST datasets. First, the result of the CI-phoneme based model with the Transformer is comparable to the best result of the deep multidimensional residual learning hybrid system with 9 LSTM layers, and the syllable based model with the Transformer provides a relative improvement in CER over it. Moreover, the CER of the syllable based model with the Transformer is comparable to that of the joint CTC-attention based encoder-decoder network when no external language model is used, but slightly worse than the CER of the joint CTC-attention based encoder-decoder network with a separate RNN-LM, which is the state-of-the-art result on HKUST datasets to the best of our knowledge.
4.5 Comparison of different frame rates
Finally, Table 4 compares different frame rates for the CI-phoneme and syllable based models with the Transformer. It indicates that the performance of both models decreases as the frame rate increases. The degradation is relatively slow at first, but the performance deteriorates rapidly beyond a certain frame rate. Thus, a moderate frame rate performs relatively well for both the CI-phoneme and syllable based models with the Transformer.
5 Conclusion
In this paper, we applied the Transformer, a new sequence transduction model based entirely on self-attention without using RNNs or convolutions, to Mandarin Chinese ASR tasks and verified its effectiveness on HKUST datasets. Furthermore, we compared syllables and CI-phonemes as the modeling unit in sequence-to-sequence attention-based models with the Transformer in Mandarin Chinese. Our experimental results demonstrated that the syllable based model with the Transformer performs better than its CI-phoneme based counterpart on HKUST datasets. What is more, a greedy cascading decoder with the Transformer was proposed so that the posterior probability can be approximately maximized. Experimental results on the CI-phoneme and syllable based models verified the effectiveness of the greedy cascading decoder.
Acknowledgements
The authors would like to thank Chunqi Wang for insightful discussions on training and tuning the Transformer.
References
-  C.-C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, K. Gonina et al., “State-of-the-art speech recognition with sequence-to-sequence models,” arXiv preprint arXiv:1712.01769, 2017.
-  J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, “Attention-based models for speech recognition,” in Advances in neural information processing systems, 2015, pp. 577–585.
-  D. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel, and Y. Bengio, “End-to-end attention-based large vocabulary speech recognition,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 4945–4949.
-  W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals, “Listen, attend and spell,” arXiv preprint arXiv:1508.01211, 2015.
-  R. Prabhavalkar, T. N. Sainath, B. Li, K. Rao, and N. Jaitly, “An analysis of attention in sequence-to-sequence models,” in Proc. of Interspeech, 2017.
-  H. A. Bourlard and N. Morgan, Connectionist speech recognition: a hybrid approach. Springer Science & Business Media, 2012, vol. 247.
-  A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, 2017, pp. 6000–6010.
-  R. Prabhavalkar, K. Rao, T. N. Sainath, B. Li, L. Johnson, and N. Jaitly, “A comparison of sequence-to-sequence models for speech recognition,” in Proc. Interspeech, 2017, pp. 939–943.
-  T. Hori, S. Watanabe, Y. Zhang, and W. Chan, “Advances in joint ctc-attention based end-to-end speech recognition with a deep cnn encoder and rnn-lm,” arXiv preprint arXiv:1706.02737, 2017.
-  Y. Zhang, W. Chan, and N. Jaitly, “Very deep convolutional networks for end-to-end speech recognition,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 4845–4849.
-  W. Chan and I. Lane, “On online attention-based speech recognition and joint mandarin character-pinyin training.” in INTERSPEECH, 2016, pp. 3404–3408.
-  C. Shan, J. Zhang, Y. Wang, and L. Xie, “Attention-based end-to-end speech recognition on voice search.”
-  Z. Qu, P. Haghani, E. Weinstein, and P. Moreno, “Syllable-based acoustic modeling with ctc-smbr-lstm,” 2017.
-  A. Ganapathiraju, J. Hamaker, J. Picone, M. Ordowski, and G. R. Doddington, “Syllable-based large vocabulary continuous speech recognition,” IEEE Transactions on speech and audio processing, vol. 9, no. 4, pp. 358–366, 2001.
-  H. Wu and X. Wu, “Context dependent syllable acoustic model for continuous chinese speech recognition,” in Eighth Annual Conference of the International Speech Communication Association, 2007.
-  Y. Liu, J. Hua, X. Li, T. Fu, and X. Wu, “Chinese syllable-to-character conversion with recurrent neural network based supervised sequence labelling,” in Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2015 Asia-Pacific. IEEE, 2015, pp. 350–353.
-  N. Kanda, X. Lu, and H. Kawai, “Maximum a posteriori based decoding for ctc acoustic models.” in Interspeech, 2016, pp. 1868–1872.
-  Y. Liu, P. Fung, Y. Yang, C. Cieri, S. Huang, and D. Graff, “Hkust/mts: A very large scale mandarin telephone speech corpus,” in Chinese Spoken Language Processing. Springer, 2006, pp. 724–735.
-  H. Sak, A. Senior, K. Rao, and F. Beaufays, “Fast and accurate recurrent neural network acoustic models for speech recognition,” arXiv preprint arXiv:1507.06947, 2015.
-  A. Kannan, Y. Wu, P. Nguyen, T. N. Sainath, Z. Chen, and R. Prabhavalkar, “An analysis of incorporating an external language model into a sequence-to-sequence model,” arXiv preprint arXiv:1712.01996, 2017.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826.
-  M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin et al., “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv:1603.04467, 2016.
-  Y. Zhao, S. Xu, and B. Xu, “Multidimensional residual learning based on recurrent neural networks for acoustic modeling,” Interspeech 2016, pp. 3419–3423, 2016.