Conformer-based End-to-end Speech Recognition With Rotary Position Embedding

07/13/2021 · Shengqiang Li, et al.

Transformer-based end-to-end speech recognition models have received considerable attention in recent years due to their high training speed and ability to model a long-range global context. Position embedding in the transformer architecture is indispensable because it provides supervision for dependency modeling between elements at different positions in the input sequence. To make use of the time order of the input sequence, many works inject information about the relative or absolute position of each element into the input sequence. In this work, we investigate various position embedding methods in the convolution-augmented transformer (conformer) and adopt a novel implementation named rotary position embedding (RoPE). RoPE encodes absolute positional information into the input sequence by a rotation matrix, and then naturally incorporates explicit relative position information into a self-attention module. To evaluate the effectiveness of the RoPE method, we conducted experiments on the AISHELL-1 and LibriSpeech corpora. Results show that the conformer enhanced with RoPE achieves superior performance in the speech recognition task. Specifically, our model achieves relative word error rate reductions of 8.70% and 7.27% on the test-clean and test-other sets of the LibriSpeech corpus, respectively.


1 Introduction

The sequential order of an input sequence plays a vital role in many sequence learning tasks, particularly in speech recognition. Recurrent neural network (RNN)-based models can learn the sequential order by recursively computing their hidden states along the time dimension. Convolutional neural network (CNN)-based models can implicitly learn the position information of an input sequence through the padding operator [10]. In recent years, transformer-based models have shown great superiority in various sequence learning tasks, such as machine translation [24], language modeling [18] and speech recognition [5]. Transformer-based models utilize a self-attention mechanism to model the dependencies among different elements in the input sequence, which provides more efficient parallel computing than RNNs and can model longer context-dependencies among elements than CNNs.

Transformer-based models dispense with recurrence and instead rely solely on a self-attention mechanism to draw global dependencies among elements in the input sequence. However, the self-attention mechanism cannot inherently model the sequential order [27]. Various works therefore inject information about the relative or absolute positions of the elements of the input sequence into transformer-based models.

One line of work focuses on absolute position embedding, where the position embedding is usually added to the input embeddings. The original work [24] injected absolute position information into the input embeddings via a trigonometric position embedding. Specifically, the absolute position of each element in the input sequence is encoded into a vector whose dimension equals that of the input embeddings. Another work [7] added absolute position information via a learned position embedding instead of a pre-defined function; the learned position embedding achieves performance competitive with the trigonometric position embedding. However, it cannot be extrapolated to sequences longer than the maximum sequence length of the training utterances.

The other line of work focuses on relative position embedding, which typically injects relative position information into the attention calculation. Originally proposed by [20], relative position embedding replaces absolute positions by taking into account the distance between sequence elements, and it demonstrates significant improvements on two machine translation tasks. The method has also been generalized to language modeling [4], where it helps the language model capture very long dependencies between paragraphs. Some works have applied relative position embedding to acoustic modeling for speech recognition [8, 16], where it helps the self-attention module deal with varying input lengths better than absolute position embedding methods.

In addition to these approaches, [25] proposed to model position information in a complex space, [13] proposed to model the dependency of position embeddings from the perspective of neural ordinary differential equations [2], and [21] proposed to encode relative position by multiplying the context representations with a rotation matrix.

In this paper, we investigate various position embedding methods in the convolution-augmented transformer (conformer) for speech recognition. Motivated by [21], we adopt a novel implementation named rotary position embedding (RoPE). RoPE formulates the relative position naturally through the inner product of the input vectors of the self-attention module in the conformer, after the absolute position information has been encoded through a rotation matrix. Experiments were conducted on the AISHELL-1 and LibriSpeech corpora. Results show that the conformer enhanced with RoPE performs better than the original conformer. It achieves a character error rate of 4.69% on the test set of the AISHELL-1 dataset, and word error rates of 2.1% and 5.1% on the ‘test-clean’ and ‘test-other’ sets of the LibriSpeech dataset respectively.

The remainder of this paper is organized as follows. Section 2 reviews existing position embedding methods. Section 3 describes the RoPE method and the architecture of our model. Section 4 presents experiments. Conclusions are given in Section 5.

2 Related Work

The core module of transformer-based models is the self-attention module. Let $X = (x_1, \ldots, x_N) \in \mathbb{R}^{N \times d}$ denote the input sequence, where $N$ is the sequence length and $d$ is the dimension. The self-attention module first incorporates position information into the input sequence and transforms it into query, key and value vectors respectively:

$$q_m = f_q(x_m, m), \quad k_n = f_k(x_n, n), \quad v_n = f_v(x_n, n), \qquad (1)$$

where $q_m$, $k_n$ and $v_n$ incorporate the $m$-th and $n$-th position information via the functions $f_q$, $f_k$ and $f_v$ respectively. The attention weights are calculated using the query and key vectors, and the output is the weighted sum of the value vectors:

$$a_{m,n} = \frac{\exp\left(q_m^{\top} k_n / \sqrt{d}\right)}{\sum_{j=1}^{N} \exp\left(q_m^{\top} k_j / \sqrt{d}\right)}, \qquad o_m = \sum_{n=1}^{N} a_{m,n} v_n. \qquad (2)$$
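As a concrete illustration of (1) and (2), the following NumPy sketch computes position-aware projections and the attention-weighted output. The shapes, the random inputs and the identity position function used here are illustrative placeholders, not any particular model's implementation.

```python
import numpy as np

def self_attention(x, W_q, W_k, W_v, f=lambda x_i, i: x_i):
    """Eq. (1)-(2): position-aware projections followed by scaled dot-product attention.

    x: (N, d) input sequence; f injects position i into element x_i (identity here).
    """
    N, d = x.shape
    pos = np.stack([f(x[i], i) for i in range(N)])   # incorporate position information
    q, k, v = pos @ W_q, pos @ W_k, pos @ W_v        # queries, keys, values

    scores = q @ k.T / np.sqrt(d)                    # (N, N) attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys, Eq. (2)
    return weights @ v                               # weighted sum of the values

# toy usage with random data
rng = np.random.default_rng(0)
N, d = 5, 8
x = rng.normal(size=(N, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(x, W_q, W_k, W_v).shape)   # (5, 8)
```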

2.1 Absolute position embedding

Assume that $x_i$ is the $i$-th element in the input sequence. The implementation of absolute position embedding can be formulated as:

$$f_t(x_i, i) = W_t (x_i + p_i), \quad t \in \{q, k, v\}, \qquad (3)$$

where $W_t$ is the weight matrix of the linear projection layer for the query, key and value vectors respectively, and $p_i \in \mathbb{R}^{d}$ is a vector depending on the position of $x_i$, with $d$ the hidden size of the attention module. In [11, 3], $p_i$ is a set of trainable vectors. Ref. [24] proposed to generate $p_i$ using the sinusoidal function:

$$p_{i,2t} = \sin\left(i / 10000^{2t/d}\right), \qquad p_{i,2t+1} = \cos\left(i / 10000^{2t/d}\right), \qquad (4)$$

where $i$ is the position and $2t$, $2t+1$ index the dimensions.
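A short NumPy sketch of the sinusoidal embedding in (4) follows; only the base of 10000 and the sin/cos interleaving come from (4), while the array shapes and the function name are illustrative.

```python
import numpy as np

def sinusoidal_position_embedding(N, d, base=10000.0):
    """Eq. (4): p[i, 2t] = sin(i / base^(2t/d)), p[i, 2t+1] = cos(i / base^(2t/d))."""
    pos = np.arange(N)[:, None]          # positions i, shape (N, 1)
    two_t = np.arange(0, d, 2)[None, :]  # even dimension indices 2t, shape (1, d/2)
    angle = pos / np.power(base, two_t / d)
    p = np.zeros((N, d))
    p[:, 0::2] = np.sin(angle)           # even dimensions
    p[:, 1::2] = np.cos(angle)           # odd dimensions
    return p

# Eq. (3): the absolute position embedding is added to the input before projection,
# i.e. q_i = W_q @ (x_i + p_i), and similarly for the key and value vectors.
```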

2.2 Relative position embedding

In [4], the relative distance between elements in the input sequence is taken into account. Specifically, keeping the form of (3), the term $q_m^{\top} k_n$ can be decomposed as:

$$q_m^{\top} k_n = x_m^{\top} W_q^{\top} W_k x_n + x_m^{\top} W_q^{\top} W_k p_n + p_m^{\top} W_q^{\top} W_k x_n + p_m^{\top} W_q^{\top} W_k p_n. \qquad (5)$$

In [4], (5) was modified to:

$$q_m^{\top} k_n = x_m^{\top} W_q^{\top} W_k x_n + x_m^{\top} W_q^{\top} \widetilde{W}_k \tilde{p}_{m-n} + u^{\top} W_k x_n + v^{\top} \widetilde{W}_k \tilde{p}_{m-n}, \qquad (6)$$

where $u$ and $v$ are trainable parameters and $\tilde{p}_{m-n}$ is the relative position embedding. Comparing (6) and (5), we can see that there are three main changes (a sketch of the resulting score computation follows this list):

  • Firstly, the absolute position embedding $p_n$ used for computing the key representation is replaced with its relative counterpart $\tilde{p}_{m-n}$.

  • Secondly, the query terms $p_m^{\top} W_q^{\top}$ are replaced by the two trainable parameters $u$ and $v$.

  • Finally, the weight matrix $W_k$ of the linear projection layer of the key vector is separated into two matrices, $W_k$ and $\widetilde{W}_k$, producing the content-based key vector and the location-based key vector respectively.
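Below is a minimal NumPy sketch of the four terms in (6), labeled with their usual interpretation. The matrices and vectors stand in for trained parameters, so this only illustrates the structure of the score, not the actual implementation of [4].

```python
import numpy as np

def relative_attention_score(x_m, x_n, p_rel, W_q, W_k, W_k_rel, u, v):
    """Eq. (6): content-content, content-position, and the two global bias terms.

    x_m, x_n : content vectors at positions m and n
    p_rel    : relative position embedding for the offset m - n
    W_k      : content-based key projection, W_k_rel : location-based key projection
    u, v     : trainable vectors replacing the absolute-position query terms
    """
    a = x_m @ W_q.T @ W_k @ x_n          # content-based addressing
    b = x_m @ W_q.T @ W_k_rel @ p_rel    # content-dependent positional bias
    c = u @ W_k @ x_n                    # global content bias
    d = v @ W_k_rel @ p_rel              # global positional bias
    return a + b + c + d

# toy usage with random placeholders
rng = np.random.default_rng(0)
dim = 8
x_m, x_n, p_rel, u, v = (rng.normal(size=dim) for _ in range(5))
W_q, W_k, W_k_rel = (rng.normal(size=(dim, dim)) for _ in range(3))
print(relative_attention_score(x_m, x_n, p_rel, W_q, W_k, W_k_rel, u, v))
```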

3 Method

In this section, we describe the rotary position embedding (RoPE) and illustrate how we apply it to the self-attention module in transformer-based models.

3.1 Formulation

The dot-product attention does not preserve absolute positional information, so if we encode the position information via absolute position embeddings, a significant amount of that information is lost. On the other hand, the dot-product attention does preserve relative position. Therefore, if we can encode the positional information into the input sequence in a way that leverages only relative positional information, it will be preserved by the attention function.

To incorporate relative position information, we require the inner product to encode position information in the relative form only:

$$\langle f_q(x_m, m),\, f_k(x_n, n) \rangle = g(x_m, x_n, m-n), \qquad (7)$$

where the inner product of $q_m$ and $k_n$ is expressed by a function $g$ that takes only $x_m$, $x_n$ and their relative position $m-n$ as input variables. Finding such an encoding mechanism is equivalent to solving for the functions $f_q$ and $f_k$ that conform to (7).

3.2 Rotary position embedding

We start with a simple case with dimension $d = 2$, for which RoPE provides a solution to (7):

$$f_q(x_m, m) = (W_q x_m)\, e^{i m \theta}, \quad f_k(x_n, n) = (W_k x_n)\, e^{i n \theta}, \quad g(x_m, x_n, m-n) = \mathrm{Re}\!\left[(W_q x_m)(W_k x_n)^{*}\, e^{i(m-n)\theta}\right], \qquad (8)$$

where $\mathrm{Re}[\cdot]$ denotes the real part of a complex number, $(W_k x_n)^{*}$ represents the complex conjugate of $(W_k x_n)$, and $\theta$ is a non-zero constant.

Considering the merit of the linearity of the inner product, we can generalize the solution to any dimension $d$ when $d$ is even: we divide the $d$-dimensional space into $d/2$ two-dimensional sub-spaces and combine them:

$$f_{\{q,k\}}(x_m, m) = R^{d}_{\Theta,m} W_{\{q,k\}}\, x_m, \qquad (9)$$

where $R^{d}_{\Theta,m}$ is the block-diagonal rotation matrix

$$R^{d}_{\Theta,m} = \begin{pmatrix} \cos m\theta_1 & -\sin m\theta_1 & \cdots & 0 & 0 \\ \sin m\theta_1 & \cos m\theta_1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \cos m\theta_{d/2} & -\sin m\theta_{d/2} \\ 0 & 0 & \cdots & \sin m\theta_{d/2} & \cos m\theta_{d/2} \end{pmatrix} \qquad (10)$$

with the pre-defined parameters

$$\Theta = \left\{ \theta_t = 10000^{-2(t-1)/d},\; t \in \{1, 2, \ldots, d/2\} \right\}. \qquad (11)$$

Applying (9) to the query and key vectors, the inner product in the self-attention of (2) becomes

$$q_m^{\top} k_n = x_m^{\top} W_q^{\top} R^{d}_{\Theta,n-m} W_k\, x_n, \quad \text{with } R^{d}_{\Theta,n-m} = \left(R^{d}_{\Theta,m}\right)^{\top} R^{d}_{\Theta,n}, \qquad (12)$$

so the attention score depends on the positions only through the relative offset $n - m$.

The illustration of rotary position embedding is shown in Figure 1.

Figure 1: Illustration of rotary position embedding (RoPE): the input sequence without position embedding is transformed into a sequence encoded with position information by the rotation matrix.
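The following NumPy sketch applies the block-diagonal rotation of (9)–(11) to query and key vectors and numerically checks the relative-position property of (7) and (12). It is a didactic re-implementation of the stated formulas, not the authors' code; the dimension and the random vectors are arbitrary.

```python
import numpy as np

def rotary_embed(x, m, base=10000.0):
    """Apply the rotation matrix R_{Theta,m} of Eq. (9)-(11) to a d-dim vector (d even)."""
    d = x.shape[-1]
    theta = base ** (-np.arange(0, d, 2) / d)      # theta_t = 10000^{-2(t-1)/d}
    cos, sin = np.cos(m * theta), np.sin(m * theta)
    x1, x2 = x[0::2], x[1::2]                      # split into d/2 two-dimensional sub-spaces
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin                # 2-D rotation in each sub-space
    out[1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
d = 8
q, k = rng.normal(size=d), rng.normal(size=d)

# Eq. (12): the inner product depends only on the relative offset n - m.
s1 = rotary_embed(q, m=3) @ rotary_embed(k, m=7)     # offset 4
s2 = rotary_embed(q, m=10) @ rotary_embed(k, m=14)   # offset 4 again, shifted positions
print(np.allclose(s1, s2))   # True
```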

3.3 Enhanced conformer with RoPE

In this work, we adopt the conformer [8] as the speech recognition model, which is a state-of-the-art transformer-based model. The architecture of the conformer is given in Figure 2. The audio encoder of the conformer first processes the input with a convolution subsampling module and then with a stack of conformer encoder blocks. Each conformer encoder block contains two feed-forward (FFN) modules sandwiching the multi-head self-attention (MHSA) module and the convolution (Conv) module, as shown in Figure 3. Since the decoder of the conformer is identical to that of the transformer [24], we do not describe it further.

Figure 2: The architecture of conformer.
Figure 3: The architecture of conformer encoder blocks.

In contrast to the additive position embedding used in other works [24], we adopt a multiplicative position embedding in the encoder. Moreover, we do not add the position embedding at the beginning of the encoder; instead, we apply it to the query and key vectors at each self-attention layer. The position embedding in the decoder is the absolute position embedding, identical to the one in the transformer [24].
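To show where the multiplicative embedding enters the encoder, the sketch below rotates the query and key vectors inside a single-head self-attention layer, reusing the rotary_embed function from the sketch in Section 3.2. Head splitting, masking and dropout are omitted, and the shapes are illustrative; this is a simplification, not the authors' conformer implementation.

```python
import numpy as np
# rotary_embed is the function defined in the Section 3.2 sketch above.

def rope_self_attention(x, W_q, W_k, W_v):
    """One self-attention layer with rotary position embedding applied to q and k."""
    N, d = x.shape
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    q = np.stack([rotary_embed(q[m], m) for m in range(N)])   # rotate query at position m
    k = np.stack([rotary_embed(k[n], n) for n in range(N)])   # rotate key at position n
    scores = q @ k.T / np.sqrt(d)                             # depends only on relative offsets
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                        # values are left unrotated
```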

4 Experiments

4.1 Datasets

Our experiments were conducted on a Mandarin speech corpus, AISHELL-1 [1], and an English speech corpus, LibriSpeech [14]. The former has 170 hours of labeled speech, while the latter consists of 960 hours of labeled speech and an additional 800M-word text-only corpus for building the language model.

4.2 Setup

We used 80-channel log-mel filterbank coefficient (Fbank) features computed on a 25 ms window with a 10 ms shift. The features for each speaker were rescaled to have zero mean and unit variance. The token vocabulary of AISHELL-1 contains 4231 characters. We used a 5000-token vocabulary based on the byte pair encoding algorithm [19] for LibriSpeech. Moreover, the vocabularies of AISHELL-1 and LibriSpeech include a padding symbol, an unknown symbol, and an end-of-sentence symbol.

Our model contains 12 encoder blocks and 6 decoder blocks. There are 4 heads in both the self-attention and the encoder-decoder attention. The 2D-CNN frontend utilizes two convolution layers with 256 channels, rectified linear unit activations, and a stride of 2. The hidden dimension of the attention layer is 256. The hidden dimension and output dimension of the feed-forward layer are 256 and 2048 respectively. We used the Adam optimizer and a transformer learning rate schedule [24] with 30000 warm-up steps and a peak learning rate of 0.0005. We used SpecAugment [15] for data augmentation. We set the CTC weight to 0.3 for joint training with the attention model. In the test stage, we set the CTC weight to 0.6 for joint decoding. We used a transformer-based language model to refine the results.
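The exact schedule formula is not spelled out in the text. Assuming the standard Noam-style transformer schedule from [24], scaled so that the learning rate peaks at 0.0005 after the 30000 warm-up steps, a minimal sketch is:

```python
import numpy as np

def transformer_lr(step, peak_lr=5e-4, warmup=30000):
    """Noam-style schedule: linear warm-up to peak_lr, then inverse-square-root decay."""
    step = max(step, 1)
    scale = peak_lr * np.sqrt(warmup)                  # makes lr(warmup) == peak_lr
    return scale * min(step ** -0.5, step * warmup ** -1.5)

print(transformer_lr(30000))    # 0.0005 at the end of warm-up
print(transformer_lr(120000))   # 0.00025, i.e. halved after 4x the warm-up steps
```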

To evaluate the effectiveness of our model, we compare it with 9 representative speech recognition models: TDNN-Chain (Kaldi) [17], LAS [15], SA-Transducer [22], Speech-Transformer [23], LDSA [26], GSA-Transformer [12], Conformer [9], Dynamic convolution (DC) [6] and Self-attention dynamic convolution 2D (SA-DC2D) [6]. Four of the comparison methods are state-of-the-art transformer-based models. Speech-Transformer uses the transformer architecture for both acoustic modeling and language modeling. LDSA uses a local dense synthesizer attention module in the transformer encoder as an alternative to the self-attention module. GSA-Transformer replaces the self-attention module with a Gaussian-based attention module. Conformer combines the transformer architecture with a convolution module.

4.3 Main results

Table 1 lists the comparison results on the LibriSpeech dataset. From the table, we can see that the proposed conformer enhanced with RoPE achieves the best performance among these methods. Our model achieves WERs of 2.1% and 5.1% on the ‘test-clean’ and ‘test-other’ sets respectively, corresponding to relative WER reductions of 8.70% and 7.27% over the conformer.

Table 2 lists the comparison results on the AISHELL-1 dataset. From the table, we see that the proposed model achieves a CER of 4.34% on the development set and 4.69% on the test set, corresponding to relative CER reductions of 3.98% and 3.89% over the conformer on the development set and test set respectively. Moreover, the proposed model significantly outperforms the other comparison methods.


Model               Dev Clean   Dev Other   Test Clean   Test Other
LAS [15]            -           -           2.5          5.8
DC [6]              3.5         10.5        3.6          10.8
SA-DC2D [6]         3.5         9.6         3.9          9.6
Conformer [9]       2.1         5.5         2.3          5.5
Conformer (RoPE)    1.9         5.0         2.1          5.1

Table 1: Comparison results (WER %) on LibriSpeech.

Model                      Dev set   Test set
TDNN-Chain (kaldi) [17]    -         7.45
SA-Transducer [22]         8.30      9.30
Speech-Transformer [23]    6.57      7.37
LDSA [26]                  5.79      6.49
GSA-Transformer [12]       5.41      5.94
Conformer [9]              4.52      4.88
Conformer (RoPE)           4.34      4.69

Table 2: Comparison results (CER %) on AISHELL-1.

4.4 Comparison of different position embedding methods

We also compare the rotary position embedding with other position embedding methods in the conformer architecture, i.e., absolute position embedding and relative position embedding. Table 3 lists the results on the LibriSpeech dataset and Table 4 lists the results on AISHELL-1. From Table 3 and Table 4, we can see that relative position embedding performs better than absolute position embedding, and rotary position embedding achieves the best performance among these position embedding methods on both the LibriSpeech and AISHELL-1 datasets.

Model               Dev Clean   Dev Other   Test Clean   Test Other
Conformer (APE)     2.1         5.5         2.3          5.5
Conformer (RPE)     2.0         5.2         2.2          5.5
Conformer (RoPE)    1.9         5.0         2.1          5.1

Table 3: Comparison between position embedding methods (WER %) on the LibriSpeech dataset. APE denotes absolute position embedding and RPE denotes relative position embedding.
Model               Dev set   Test set
Conformer (APE)     4.52      4.88
Conformer (RPE)     4.49      4.82
Conformer (RoPE)    4.34      4.69

Table 4: Comparison between position embedding methods (CER %) on the AISHELL-1 dataset.

5 Conclusions

Transformer-based models have gained great popularity in speech recognition, and position embedding of the input sequence plays a significant role in these models. In this paper, we propose to apply rotary position embedding to the conformer. Rotary position embedding incorporates explicit relative position information into the self-attention module to enhance the performance of the conformer architecture. Our experimental results on the AISHELL-1 and LibriSpeech corpora demonstrate that the conformer enhanced with rotary position embedding outperforms the vanilla conformer and several representative models.

References

  • [1] H. Bu, J. Du, X. Na, B. Wu, and H. Zheng (2017) AISHELL-1: an open-source Mandarin speech corpus and a speech recognition baseline. In 2017 20th Conference of the Oriental Chapter of the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA), pp. 1–5.
  • [2] R. T. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud (2018) Neural ordinary differential equations. arXiv preprint arXiv:1806.07366.
  • [3] K. Clark, M. Luong, Q. V. Le, and C. D. Manning (2020) ELECTRA: pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.
  • [4] Z. Dai, Z. Yang, Y. Yang, J. Carbonell, Q. V. Le, and R. Salakhutdinov (2019) Transformer-XL: attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860.
  • [5] L. Dong, S. Xu, and B. Xu (2018) Speech-Transformer: a no-recurrence sequence-to-sequence model for speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5884–5888.
  • [6] Y. Fujita, A. S. Subramanian, M. Omachi, and S. Watanabe (2020) Attention-based ASR with lightweight and dynamic convolutions. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7034–7038.
  • [7] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin (2017) Convolutional sequence to sequence learning. In International Conference on Machine Learning, pp. 1243–1252.
  • [8] A. Gulati, J. Qin, C. Chiu, N. Parmar, Y. Zhang, J. Yu, W. Han, S. Wang, Z. Zhang, Y. Wu, et al. (2020) Conformer: convolution-augmented transformer for speech recognition. arXiv preprint arXiv:2005.08100.
  • [9] P. Guo, F. Boyer, X. Chang, T. Hayashi, Y. Higuchi, H. Inaguma, N. Kamo, C. Li, D. Garcia-Romero, J. Shi, et al. (2020) Recent developments on ESPnet toolkit boosted by conformer. arXiv preprint arXiv:2010.13956.
  • [10] M. A. Islam, S. Jia, and N. D. Bruce (2020) How much position information do convolutional neural networks encode? arXiv preprint arXiv:2001.08248.
  • [11] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut (2019) ALBERT: a lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
  • [12] C. Liang, M. Xu, and X. Zhang (2021) Transformer-based end-to-end speech recognition with residual Gaussian-based self-attention. arXiv preprint arXiv:2103.15722.
  • [13] X. Liu, H. Yu, I. Dhillon, and C. Hsieh (2020) Learning to encode position for transformer with continuous dynamical model. In International Conference on Machine Learning, pp. 6327–6335.
  • [14] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur (2015) LibriSpeech: an ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206–5210.
  • [15] D. S. Park, W. Chan, Y. Zhang, C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le (2019) SpecAugment: a simple data augmentation method for automatic speech recognition. Proc. Interspeech 2019, pp. 2613–2617.
  • [16] N. Pham, T. Ha, T. Nguyen, T. Nguyen, E. Salesky, S. Stueker, J. Niehues, and A. Waibel (2020) Relative positional encoding for speech recognition and direct translation. arXiv preprint arXiv:2005.09940.
  • [17] D. Povey, V. Peddinti, D. Galvez, P. Ghahremani, V. Manohar, X. Na, Y. Wang, and S. Khudanpur (2016) Purely sequence-trained neural networks for ASR based on lattice-free MMI. In Interspeech, pp. 2751–2755.
  • [18] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019) Language models are unsupervised multitask learners. OpenAI blog 1 (8), pp. 9.
  • [19] R. Sennrich, B. Haddow, and A. Birch (2015) Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
  • [20] P. Shaw, J. Uszkoreit, and A. Vaswani (2018) Self-attention with relative position representations. arXiv preprint arXiv:1803.02155.
  • [21] J. Su, Y. Lu, S. Pan, B. Wen, and Y. Liu (2021) RoFormer: enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864.
  • [22] Z. Tian, J. Yi, J. Tao, Y. Bai, and Z. Wen (2019) Self-attention transducers for end-to-end speech recognition. arXiv preprint arXiv:1909.13037.
  • [23] Z. Tian, J. Yi, J. Tao, Y. Bai, S. Zhang, and Z. Wen (2020) Spike-triggered non-autoregressive transformer for end-to-end speech recognition. arXiv preprint arXiv:2005.07903.
  • [24] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
  • [25] B. Wang, D. Zhao, C. Lioma, Q. Li, P. Zhang, and J. G. Simonsen (2019) Encoding word order in complex embeddings. arXiv preprint arXiv:1912.12333.
  • [26] M. Xu, S. Li, and X. Zhang (2020) Transformer-based end-to-end speech recognition with local dense synthesizer attention. arXiv preprint arXiv:2010.12155.
  • [27] C. Yun, S. Bhojanapalli, A. S. Rawat, S. J. Reddi, and S. Kumar (2019) Are transformers universal approximators of sequence-to-sequence functions? arXiv preprint arXiv:1912.10077.