Phrase-Level Class based Language Model for Mandarin Smart Speaker Query Recognition

09/02/2019
by   Yiheng Huang, et al.
Tencent

The success of speech assistants requires precise recognition of a number of entities in particular contexts. A common solution is to train a class-based n-gram language model and then expand the classes into specific words or phrases. However, when a class has a huge list, e.g., more than 20 million songs, a full expansion will cause memory explosion. Worse still, the list items in the class need to be updated frequently, which requires a dynamic model updating technique. In this work, we propose to train pruned language models for the word classes to replace the slots in the root n-gram. We further propose a novel technique, named Difference Language Model (DLM), to correct the bias introduced by the pruned language models. Once the decoding graph is built, we only need to recalculate the DLM when the entities in the word classes are updated. Results show that the proposed method consistently and significantly outperforms the conventional approaches on all datasets, especially for large lists, which the conventional approaches cannot handle.

1 Introduction

Speech assistants have recently gained popularity in people's daily lives. Representative products in the current market include the Amazon Echo, Apple's Siri, Google Assistant, etc. In these scenarios, the user queries usually contain explicit patterns such as "Play $SONG NAME$" or "I want to watch $VIDEO NAME$". Recognizing these names as entities is challenging, and these semantic patterns are usually not well modelled in the general language model. On the other hand, new songs and videos arrive every day, and it is expensive and impractical to update the general language model daily to capture these new entities. Worse still, incorrectly recognized entities can have a strong negative impact on the user experience. We refer to this problem as the "hot" word problem. One natural solution is the approach in [2], in which a class-based language model was first trained and then converted to a weighted finite state transducer (WFST) [9] as the root grammar. The entity names in the root grammar were then replaced by the WFST generated from contact names. However, if the number of entities is large, directly replacing them into the root grammar leads to memory problems, and the result cannot be compiled into a decoder graph. For example, if there are N n-gram items containing the $SONG-SLOT$ in the root grammar and the $SONG$ language model has M entries, replacing these n-grams results in a language model with roughly N × M entries. In our application, both N and M are large (the song list alone exceeds 20 million items), so the resulting language model is prohibitively large. This problem is referred to as the "size exploding" problem.
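To make the multiplicative growth concrete, here is a back-of-the-envelope sketch in Python; the counts below are hypothetical placeholders for illustration only, not the actual figures from our system:

```python
# Hypothetical back-of-the-envelope estimate of the "size exploding" problem.
# Both counts are illustrative placeholders, not the paper's actual figures.
slot_ngrams_in_root = 100_000    # root-grammar n-grams containing $SONG-SLOT$
song_lm_ngrams = 20_000_000      # n-gram entries in the un-pruned $SONG$ sub-grammar

# A naive full expansion multiplies every slot-bearing n-gram by every
# sub-grammar entry, so the expanded model grows multiplicatively.
expanded_ngrams = slot_ngrams_in_root * song_lm_ngrams
print(f"expanded n-grams ~ {expanded_ngrams:.1e}")   # ~2.0e+12, far beyond memory limits
```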

For Mandarin speech assistants, an additional challenge is that the word segmentation quality has a strong impact on recognition performance, since language models trained at the word level are superior to those trained at the character level. Generally speaking, the segmentation quality relies on the vocabulary. Thus, the vocabulary of a single general language model may not work well for special domains such as song names, singer names and video names. This makes it necessary to use different vocabularies to model the queries related to these special domains. Combining multiple vocabularies is another major motivation of our work.

To this end, a phrase-level class-based language model is proposed in this work. The entities, for example song names, video names and singer names, are trained as n-gram models with different vocabularies suitable for their own tasks. In addition, these language models can be heavily pruned so that replacing them into the root grammar does not blow up the language model. To address the "size exploding" problem during decoding, we adopt the on-the-fly re-scoring of [8] via the proposed difference language model to obtain more accurate language model scores. The root grammar is a general language model trained from a large text corpus in which the entities in each class are replaced by their class names. The sub-grammars, such as the song-name grammar, can be trained very efficiently since they are orders of magnitude smaller than the root grammar. With the proposed method, once the decoder graph is built, we only need to recalculate the DLMs when the entities in the word classes are updated, while the decoder graph stays unchanged. Because the DLM calculations are very fast, our system can be updated with new items on a minute-level basis, thus addressing the "hot" word problem.

The proposed phrase-level class-based language model is evaluated on our own datasets, namely 'speaker_201812' and 'speaker_201901'. They come from the voice search traffic of our internal intelligent-speaker song retrieval task and consist of real user song queries containing both the wake-up words and the query utterances. Experimental results show that, after interpolating with the general language model, significant gains can be obtained on the song retrieval task compared to the general language model, with no loss of performance on general recognition tasks such as read speech. Our work bears some similarities with several recent works

[3, 4, 5, 7]. In [3, 4], a biasing language model trained with the latest text inputs or queries is used to bias the scores of the general language model so as to better handle recent trendy search queries. Furthermore, in [7], semantic information was added to the language model by re-scoring the lattices from the first-pass decoding; with a powerful semantic model, significant gains were obtained in their work. The main difference of our work is that different classes of entities have their own vocabularies, and the on-the-fly re-scoring is performed against the corresponding root grammar or sub-grammar individually for every word, whereas [3] only re-scores phrases in a pre-defined set. What is more, the decoder graph needs to be built only once; when new entities are updated, only the DLMs corresponding to the sub-grammars need to be rebuilt, which is a problem of much smaller scale.

2 Difference Language Model

In this section, we introduce the phrase-level class based language model and the difference language models.

2.1 Phrase-Level Class based Language Model

In the pioneering paper [1], the authors describe the class-based language model at the word level. Instead, we propose a phrase-level class-based language model.

2.1.1 Formulation

Assume there is a vocabulary $V$ and a set of classes $\{C_1, \ldots, C_K\}$, and each word in $V$ belongs to exactly one of these classes. Given a word sequence $w_1, w_2, \ldots, w_n$ generated from the vocabulary, there exists a partition that separates the word sequence into a sequence of contiguous phrases $p_1, p_2, \ldots, p_m$. We assume that the words appearing in one phrase are from the same class, and that the words in any two neighboring phrases are from distinct classes. Denote the class label of phrase $p_i$ as $c_i$, where $c_i \in \{C_1, \ldots, C_K\}$. The probability of the word sequence, $P(w_1 w_2 \cdots w_n)$, can be rewritten as $P(p_1 p_2 \cdots p_m)$, which can be decomposed as:

$$P(p_1 p_2 \cdots p_m) = \prod_{i=1}^{m} P(c_i \mid c_1 \cdots c_{i-1})\, P(p_i \mid c_i) \qquad (1)$$

Following the definition of word-class n-gram models in [1], we give our new definition.

Definition 1

A language model is a phrase-level class based language model if, for all $i$, $P(p_i \mid p_1 \cdots p_{i-1}) = P(c_i \mid c_1 \cdots c_{i-1})\, P(p_i \mid c_i)$, where $c_i$ is the class of phrase $p_i$.

In our model, each phrase $p_i$ can only belong to exactly one class $c_i$, so $P(p_i \mid c) = 0$ if $c \neq c_i$. The following theorem gives a sufficient condition for a phrase-level class based language model.

Theorem 1

Assume there are $K$ classes and that Eqs. (2) and (3) hold; then the language model is a phrase-level class based language model in the sense of Definition 1.

$$P(p_i \mid c_i, p_1 \cdots p_{i-1}) = P(p_i \mid c_i) \qquad (2)$$
$$P(c_i \mid p_1 \cdots p_{i-1}) = P(c_i \mid c_1 \cdots c_{i-1}) \qquad (3)$$
Proof 1

We have

$$P(p_i \mid p_1 \cdots p_{i-1}) = P(c_i \mid p_1 \cdots p_{i-1})\, P(p_i \mid c_i, p_1 \cdots p_{i-1}) \qquad (4)$$

since each phrase determines its class uniquely, and thus if Eqs. (2) and (3) hold, obviously $P(p_i \mid p_1 \cdots p_{i-1}) = P(c_i \mid c_1 \cdots c_{i-1})\, P(p_i \mid c_i)$, which completes the proof.

The condition in Eq. (2) is a mild assumption in our setting. For example, $c_i$ can be the song class SONG-SLOT, while the preceding phrases $p_1 \cdots p_{i-1}$ are commands such as 'listen to' or 'play'. The probabilities $P(p_i \mid c_i, p_1 \cdots p_{i-1})$ and $P(p_i \mid c_i)$ are then equal to each other, regardless of which command is used.
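As a minimal illustration of the decomposition in Eq. (1) under assumptions (2) and (3), the following sketch scores a toy query with a bigram over class labels and per-class phrase distributions; the probabilities and phrase inventory are invented for illustration only:

```python
import math

# Toy root grammar over class labels (a bigram over classes, probabilities invented).
class_bigram = {
    ("<s>", "class_play"): 0.6,
    ("class_play", "SONG-SLOT"): 0.5,
    ("SONG-SLOT", "</s>"): 0.9,
}

# Toy sub-grammar: P(phrase | class), here a unigram over song names.
phrase_given_class = {
    "SONG-SLOT": {"white bird": 0.001, "blue sky": 0.002},
    "class_play": {"play": 1.0},          # ordinary words form singleton classes
}

def phrase_class_lm_logprob(phrases, classes):
    """Score a phrase sequence with Eq. (1), using a bigram class history:
    P(p_1..p_m) = prod_i P(c_i | c_{i-1}) * P(p_i | c_i)."""
    logp, prev = 0.0, "<s>"
    for phrase, cls in zip(phrases, classes):
        logp += math.log(class_bigram[(prev, cls)])          # class transition term
        logp += math.log(phrase_given_class[cls][phrase])    # phrase emission term
        prev = cls
    logp += math.log(class_bigram[(prev, "</s>")])           # end of sentence
    return logp

print(phrase_class_lm_logprob(["play", "white bird"], ["class_play", "SONG-SLOT"]))
```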

2.2 The Difference Language Model

In the pioneering work [8], the authors introduce an on-the-fly re-scoring framework: a small language model is used to build the decoder graph, and a larger language model re-scores the LM scores simultaneously during decoding. Following their idea, we devise a Difference Language Model (DLM) for re-scoring.

Definition 2

A language model $D$ is a DLM of $G_1$ and $G_2$ if, for an arbitrary history $h$ and word $w$, Eq. (5) holds:

$$\log P_D(w \mid h) = \log P_{G_1}(w \mid h) - \log P_{G_2}(w \mid h) \qquad (5)$$
Theorem 2

Denote by $G_1$ and $G_2$ two back-off n-gram language models with the same vocabulary $V$, such that the set of n-gram entries (without probabilities and back-off coefficients) in $G_2$, denoted $E(G_2)$, is a subset of $E(G_1)$, i.e., $E(G_2) \subseteq E(G_1)$. Suppose a back-off n-gram language model $D$ satisfies $E(D) = E(G_1)$ and, in addition, for each n-gram $(h, w) \in E(D)$,

$$\log P_D(w \mid h) = \log P_{G_1}(w \mid h) - \log P_{G_2}(w \mid h) \qquad (6)$$
$$\log \alpha_D(h) = \log \alpha_{G_1}(h) - \log \alpha_{G_2}(h) \qquad (7)$$

where $\alpha(h)$ is the back-off parameter of history $h$, and $\log \alpha_{G_2}(h)$ is zero if $h$ is not contained in $G_2$. Then the language model $D$ is a DLM of $G_1$ and $G_2$.

Proof 2

We prove the theorem by induction on the n-gram length. For uni-grams, Eq. (5) holds trivially by Eq. (6), since all uni-grams over the shared vocabulary are entries of $E(D)$. Assume Eq. (5) holds for any n-gram with length no longer than $k$, and let $(h, w)$ be an n-gram of length $k+1$. Denote by $h'$ the back-off suffix of $h$. If $(h, w) \in E(D)$, Eq. (5) holds by definition. If $(h, w) \notin E(D)$, we have $(h, w) \notin E(G_2)$ since $E(G_2) \subseteq E(G_1) = E(D)$. According to the definition of the back-off language model, $\log P_D(w \mid h) = \log \alpha_D(h) + \log P_D(w \mid h')$, and likewise for $G_1$ and $G_2$. Since $\log P_D(w \mid h') = \log P_{G_1}(w \mid h') - \log P_{G_2}(w \mid h')$ by the induction assumption, and $\log \alpha_D(h) = \log \alpha_{G_1}(h) - \log \alpha_{G_2}(h)$ (where $\log \alpha_{G_2}(h) = 0$ when $h$ is not contained in $G_2$), it follows that $\log P_D(w \mid h) = \log P_{G_1}(w \mid h) - \log P_{G_2}(w \mid h)$. So Eq. (5) holds for n-grams of length $k+1$, and we complete the proof by induction.

From Theorem 2, we can directly compute the DLM.
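A minimal sketch of this construction is given below, assuming each back-off model is stored as two dictionaries keyed by n-gram tuples (log probabilities and back-off weights, as in an ARPA file); the data layout and function names are illustrative, not our production implementation:

```python
def backoff_logprob(ngrams, backoffs, history, word):
    """Standard back-off evaluation: log P(word | history) for an ARPA-style model."""
    entry = history + (word,)
    if entry in ngrams:
        return ngrams[entry]
    if not history:                       # uni-grams cover the full vocabulary
        raise KeyError(f"out-of-vocabulary word {word!r}")
    # back off: add the back-off weight of the history (0.0 if it is not stored)
    return backoffs.get(history, 0.0) + backoff_logprob(ngrams, backoffs, history[1:], word)

def build_dlm(full_ngrams, full_backoffs, pruned_ngrams, pruned_backoffs):
    """Compute the DLM of (full, pruned) following Eqs. (6) and (7): the DLM keeps
    the n-gram entries of the full model, with log P_D = log P_full - log P_pruned
    and log a_D = log a_full - log a_pruned."""
    dlm_ngrams, dlm_backoffs = {}, {}
    for entry, logp_full in full_ngrams.items():
        history, word = entry[:-1], entry[-1]
        logp_pruned = backoff_logprob(pruned_ngrams, pruned_backoffs, history, word)
        dlm_ngrams[entry] = logp_full - logp_pruned                          # Eq. (6)
    for history, bo_full in full_backoffs.items():
        dlm_backoffs[history] = bo_full - pruned_backoffs.get(history, 0.0)  # Eq. (7)
    return dlm_ngrams, dlm_backoffs
```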

3 Dynamic Decoding

In this section, we briefly introduce the DLMs, the decoder graph, and the on-the-fly re-scoring process.

3.1 The Language Model

The proposed phrase-level class based language model consists of a root grammar model and several sub-grammar models:

  • the root grammar is the language model characterized by $P(c_i \mid c_1 \cdots c_{i-1})$ in Eq. (1);

  • the sub-grammars are responsible for modeling the emission probabilities $P(p_i \mid c_i)$ of entities given the class $c_i$.

We refer to the classes as "slots", denoted SLOT-NAME. The root grammar is trained with SRILM [10] from various corpora containing training sentences such as 'play SONG-SLOT', where the concrete entities are substituted by their class names. The corpora come from manual transcripts of online search queries. Each ordinary word in these sentences is treated as an individual class, and each word in the root grammar vocabulary is concatenated with a prefix 'class_', e.g., the word 'play' is replaced by 'class_play'. The root grammar vocabulary contains 213893 ordinary Chinese words plus 3 extra words corresponding to the sub-classes (i.e., song, singer and video). Furthermore, the root grammar can be interpolated with the general language model trained from other sources such as news and conversations. Finally, the model is pruned and the corresponding DLM is built using the method in Section 2.
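The corpus preparation described above can be sketched as follows; the entity list and the greedy longest-match tagging below are simplified placeholders, and the tagged sentences would then be passed to SRILM for n-gram training:

```python
# Simplified sketch of the root-grammar corpus preparation described above.
# The entity list and the greedy longest-match tagging are illustrative only.
song_names = {"白色的鸟", "晴天"}          # placeholder entity list for the SONG class

def tag_and_prefix(words):
    """Replace entity phrases with their class name and prefix ordinary words
    with 'class_', so every token in the root grammar names a class."""
    out, i = [], 0
    while i < len(words):
        # greedy longest match of an entity phrase starting at position i
        for j in range(len(words), i, -1):
            if "".join(words[i:j]) in song_names:
                out.append("SONG-SLOT")
                i = j
                break
        else:
            out.append("class_" + words[i])   # ordinary word = singleton class
            i += 1
    return out

print(" ".join(tag_and_prefix(["播放", "晴天"])))   # -> class_播放 SONG-SLOT
```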

We build three sub-grammar models from their distinct databases. Once the sub-grammars are built, they can be pruned to arbitrarily small n-gram models (in the extreme case, down to uni-grams), and the DLMs are built accordingly. In our databases, more than 20 million songs and 1 million videos are available, and there is a list of more than 200 thousand singers. The vocabulary sizes corresponding to the classes SONG, VIDEO and SINGER are 50687, 43797 and 17717, respectively. The words in a sub-grammar are denoted by prepending the class name as a prefix, e.g., a word w in the class SONG-SLOT is denoted by SONG-SLOT_w.

Finally, the DLMs of both the root grammar and the sub-grammars can be obtained according to Theorem 2. These n-grams are then converted to a tree structure as described in [14], where the language model state is encoded as a 64-bit value. The state contains the depth in the n-gram tree and the n-gram history. Based on this information, it is straightforward to look up the n-gram scores for incoming words in the n-gram tables.
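One possible way to pack such a state into a 64-bit value is sketched below; the exact bit layout used in [14] and in our decoder may differ, so this is only to make the idea concrete:

```python
# Sketch of encoding an n-gram LM state into a single 64-bit value:
# a few bits for the tree depth (history length) plus an index of the
# history node in the n-gram tree. The bit layout here is illustrative.
DEPTH_BITS = 4                      # supports histories up to 15 words
INDEX_BITS = 64 - DEPTH_BITS

def pack_state(depth, node_index):
    assert depth < (1 << DEPTH_BITS) and node_index < (1 << INDEX_BITS)
    return (depth << INDEX_BITS) | node_index

def unpack_state(state):
    return state >> INDEX_BITS, state & ((1 << INDEX_BITS) - 1)

s = pack_state(2, 123456)           # e.g. a bigram history stored at node 123456
print(unpack_state(s))              # -> (2, 123456)
```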

3.2 The Decoder Graph

The language models are converted to WFST format using Kaldi [11] and the replacements of sub-grammars are conducted using OpenFst [12]. Some example WFSTs are shown in Figs. 1, 2 and 3.

Figure 1: WFST of root grammar
Figure 2: WFST of sub grammar
Figure 3: The replaced WFST

The root grammar WFST shown in Fig. 1 is converted from a bi-gram trained on a single sentence meaning 'Play SONG-SLOT' in Chinese. The sub-grammar WFST in Fig. 2 corresponds to a bi-gram trained on a single song name meaning 'white bird' in Chinese. The symbol '#0' is a disambiguation symbol in the root grammar WFST, and '#SONG-SLOT-wd0' is a disambiguation symbol in the sub-grammar WFST. Furthermore, if 'SONG-SLOT' is the input label of an arc, it is replaced by '#SONG-SLOT' so that it acts as a disambiguation symbol when the final HCLG.fst [9] is created.
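The replacement step can be illustrated with a toy, library-free sketch in which an FST is a list of (src, dst, ilabel, olabel) arcs; weights are omitted, and the real graphs are of course built with OpenFst on the full HCLG pipeline, so this is conceptual only:

```python
# Toy sketch of slot replacement.  An FST is represented as
# {"start": s, "final": {states}, "arcs": [(src, dst, ilabel, olabel), ...]}.

def replace_slot(root, sub, slot="SONG-SLOT"):
    """Splice a copy of `sub` in place of every root arc whose input label is
    `slot`, turning that label into the disambiguation symbol '#SONG-SLOT'."""
    next_state = 1 + max(max(a[0], a[1]) for a in root["arcs"])
    arcs = []
    for src, dst, ilab, olab in root["arcs"]:
        if ilab != slot:
            arcs.append((src, dst, ilab, olab))
            continue
        offset = next_state
        next_state += 1 + max(max(a[0], a[1]) for a in sub["arcs"])
        # enter the sub-grammar: the slot input label becomes a disambiguation symbol
        arcs.append((src, sub["start"] + offset, "#" + slot, olab))
        # copy the sub-grammar arcs with renumbered states
        arcs += [(a + offset, b + offset, i, o) for a, b, i, o in sub["arcs"]]
        # leave the sub-grammar: epsilon arcs back to the root arc's destination
        arcs += [(f + offset, dst, "<eps>", "<eps>") for f in sub["final"]]
    return {"start": root["start"], "final": root["final"], "arcs": arcs}

# Minimal usage: a root accepting 'play SONG-SLOT' and a one-song sub-grammar.
root = {"start": 0, "final": {2}, "arcs": [(0, 1, "class_play", "class_play"),
                                           (1, 2, "SONG-SLOT", "SONG-SLOT")]}
sub = {"start": 0, "final": {1},
       "arcs": [(0, 1, "SONG-SLOT_white_bird", "SONG-SLOT_white_bird")]}
print(replace_slot(root, sub)["arcs"])
```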

3.3 Dynamic Decoding

Our decoder structure is similar to the one applied in [13]. During decoding, an on-the-fly re-scoring strategy similar to [8] is performed. Since we are using the class-based DLMs, the key information kept during the decoding process includes:

  • states corresponding to the first pass WFST;

  • the language model ID being searched during the on-the-fly re-scoring;

  • states of the current language model;

  • a backup state that preserves the original state in the root DLM.

We provide more details on how the decoding process switches between the root grammar and the sub-grammars. When a token enters a sub-grammar (e.g., an output label 'SONG-SLOT' is observed), the decoder switches to the corresponding sub-grammar, initializes the DLM state, backs up the state in the root grammar, and then proceeds with decoding. On the other side, when the decoder leaves a sub-grammar (e.g., an output label '#SONG-SLOT' is observed), the DLM score corresponding to the end-of-sentence symbol is added, and the decoder switches back to the backed-up state of the root grammar and continues decoding. A quadruple is used to record all this information. During decoding, when a word-emitting arc is traversed, we look up the extension of the previous state in the corresponding DLM, and if the word is found, we combine the new DLM state with the current decoder state, the DLM ID and the backup state to form a new quadruple.
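The bookkeeping above can be sketched as follows; the field names and the logprob(history, word) interface of the DLM objects are assumptions made for illustration, not our decoder internals:

```python
from dataclasses import dataclass, replace
from typing import Optional, Tuple

@dataclass(frozen=True)
class RescoreState:
    """Quadruple carried with each decoder token for on-the-fly DLM re-scoring."""
    graph_state: int                          # state in the first-pass (replaced) WFST
    dlm_id: str                               # DLM currently searched: "root", "SONG", ...
    dlm_state: Tuple[str, ...]                # n-gram history inside the current DLM
    root_backup: Optional[Tuple[str, ...]]    # root-DLM state saved while inside a sub-grammar

def on_output_label(state, label, dlms):
    """Update the re-scoring state for one traversed output label.
    `dlms` maps a DLM ID to an object with a logprob(history, word) method
    (an assumed interface). Returns (new_state, extra_dlm_log_score)."""
    if label.startswith("#") and label.endswith("-SLOT"):   # e.g. '#SONG-SLOT': leave sub-grammar
        score = dlms[state.dlm_id].logprob(state.dlm_state, "</s>")
        return replace(state, dlm_id="root", dlm_state=state.root_backup,
                       root_backup=None), score
    if label.endswith("-SLOT"):                             # e.g. 'SONG-SLOT': enter sub-grammar
        return replace(state, dlm_id=label[:-len("-SLOT")],
                       dlm_state=("<s>",), root_backup=state.dlm_state), 0.0
    # ordinary word: add its DLM score and extend the history in the current DLM
    score = dlms[state.dlm_id].logprob(state.dlm_state, label)
    new_hist = (state.dlm_state + (label,))[-2:]            # keep a bounded (here bigram) history
    return replace(state, dlm_state=new_hist), score
```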

4 Experiments

In this section, we first conduct extensive experiments comparing with conventional approaches such as [2] at small model sizes. For large model sizes, we compare the performance of our method with a common language model trained on a very large amount of general corpora.

4.1 Experimental setup

The TDNN-LSTMP-LFMMI [15] acoustic model used in this study was trained on six thousand hours of general-purpose Mandarin data consisting mostly of read speech. The model was then fine-tuned on two thousand hours of internal data from Tencent TingTing. The neural network has 7 TDNN layers interleaved with 4 LSTMP layers. The output targets are 9782 bi-phone senones obtained from a Mandarin syllable lexicon. For evaluation, two test sets are used: 'voice_speaker_201812' contains 735 recent user queries to the Tencent TingTing intelligent speaker, while 'voice_speaker_201901' contains 1000 similar queries. To demonstrate that our method does not harm general-purpose recognition, another test set, 'AI_Lab_test_600', containing 600 regular read-speech utterances is used. The language model is built as described in Section 3.1. All n-gram language models are trained using SRILM [10].

4.2 Performance Consistency

To demonstrate that our method does not lose any precision, we would ideally use the original sub-grammars without pruning to build the baseline results. However, the original sub-grammars are too big to be used directly (e.g., the sub-grammar trained over more than 20 million songs), so they are first pruned moderately, following a pipeline similar to [2], to build the baseline system. To build our on-the-fly system, the slightly pruned sub-grammars are used as the formal grammars. These grammars are then further pruned before being inserted into the root grammar, and we use the method described in Section 2 to build the DLMs accordingly. Finally, the algorithm proposed in Section 3.3 is used to perform the on-the-fly re-scoring.

Baseline[2] rescore(sub) rescore(root+sub)
spk_201812
spk_201901
Table 1: Consistency results

Table 1 reports the results showing the performance consistency between our method and the baseline method. The last two columns of Table 1 give the on-the-fly re-scoring results: 'rescore(sub)' indicates the results where only the sub-grammars are pruned, while 'rescore(root+sub)' shows the results when all grammars are pruned. The results show that our method achieves the same accuracy as the baseline on the test set 'speaker_201812' and even better results on 'speaker_201901'. A reasonable explanation is that the on-the-fly approach has a much smaller decoder graph than the baseline, so the correct LM scores are easier to reach and unreliable decoding paths can be pruned more efficiently, leading to better results.

4.3 In-Domain and Out-Domain Testing

The experimental results in Table 2 show the performance after interpolating with the common language model. The first column records the result of decoding with only the root grammar, the second column the result of decoding with the common language model, and the last column the result after interpolation. As clearly observed, we achieve significant relative performance improvements on the in-domain sets, while the performance on the out-of-domain set decays only slightly.

root common root + common
speaker_201812
speaker_201901
ai_lab_test600
Table 2: On-line voice search results

To compare the decoding time cost, real time factors (RTFs) on these test sets are reported in Table 3. The on-the-fly method has a decoding speed similar to the common language model on the in-domain test sets. On the out-of-domain test set, the common language model decodes faster, mainly because its search paths are less confusable.

root common root + common
speaker_201812
speaker_201901
ai_lab_test600
Table 3: Real time factor

4.4 Size Reduction and Efficient Updating

As mentioned in the sections above, when the original sub-grammars are inserted directly into the root grammar without pruning, the resulting G.fst becomes extremely large. After pruning, however, the size of G.fst is reduced dramatically. This implies that the proposed method is able to add a very large number of items into the system by pruning the sub-grammars. The remaining task is to recalculate the DLMs of the new sub-grammars, leaving the decoder graph unchanged. All of these computations can be done in a few minutes, so we can easily catch up with newly arrived items.
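A hypothetical outline of this update loop is given below; the training, DLM-building and decoder interfaces are passed in as placeholders, since only the wiring matters here:

```python
def update_slot(decoder, slot_name, new_entity_list, graph_sub_lm,
                train_ngram_lm, build_dlm):
    """Hypothetical hot-update step for one slot (the function arguments stand in
    for the real training, DLM-building and decoder interfaces).
    Only the slot's LM and its DLM are rebuilt; the compiled decoder graph, and
    hence the pruned sub-grammar inside it (graph_sub_lm), stays unchanged.
    This assumes the slot vocabulary is fixed, so the pruned sub-grammar remains
    a valid subset of the new full LM as required by Theorem 2."""
    full_lm = train_ngram_lm(new_entity_list)   # small job: only this slot's entities
    dlm = build_dlm(full_lm, graph_sub_lm)      # Theorem 2 construction (Sec. 2.2)
    decoder.swap_dlm(slot_name, dlm)            # minutes, not a full graph rebuild
```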

5 Conclusions and Future Work

In this paper, we have proposed a phrase-level class based language model and an on-the-fly re-scoring method to address the 'hot' word problem. Using this framework, we can incorporate new entities precisely and efficiently. In addition, experimental results show significant performance improvements compared to the common language model. In future work, we are interested in removing the replacement procedure to make the pipeline simpler.

References

  • [1] P. F. Brown, V. J. D. Pietra, P. V. deSouza, J. C. Lai and R. L. Mercer, “Class-based n-gram models of natural language,” Computational Linguistics, vol. 18, no. 4, pp. 467–479, 1992.
  • [2] P. Aleksic, C. Allauzen, D. Elson, A. Kracun, D. M. Casado and P. J. Moreno, “Improved recognition of contact names in voice commands,” Proceedings of ICASSP, pp. 5172–5175, 2015.
  • [3] K. B. Hall, E. Cho, C. Allauzen, F. Beaufays, N. Coccaro, K. Nakajima, M. Riley, B. Roark, D. Rybach, and L. Zhang, “Composition-based on-the-fly rescoring for salient n-gram biasing,” Interspeech 2015 – 16th Annual Conference of the International Speech Communication Association, Dresden, Germany, Proceedings, 2015.
  • [4] P. Aleksic, M. Ghodsi, A. Michaely, C. Allauzen, K. Hall, B. Roark, D. Rybach, and P. Moreno, “Bringing Contextual Information to Google Speech Recognition,” Interspeech 2015 – 16th Annual Conference of the International Speech Communication Association, Dresden, Germany, Proceedings, 2015.
  • [5] L. Vasserman, B. Haynor, and P. Aleksic, “Contextual Language Model Adaptation Using Dynamic Classes,” IEEE Spoken Language Technology Workshop (SLT), pp. 441–446, 2016.
  • [6] A. Horndasch, C. Kaufhold, and E. Nöth, “How to add word classes to the Kaldi speech recognition toolkit,” International Conference on Text, Speech, and Dialogue, Cham: Springer, pp. 486–494, 2016.
  • [7] L. Velikovich, I. Williams, J. Scheiner, P. Aleksic, P. Moreno, and M. Riley, “Semantic Lattice Processing in Contextual Automatic Speech Recognition for Google Assistant,” Interspeech 2018 – 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, Proceedings, 2018, pp. 2222–2226.
  • [8] T. Hori, C. Hori, Y. Minami, and A. Nakamura, “Efficient WFST-based one-pass decoding with on-the-fly hypothesis rescoring in extremely large vocabulary continuous speech recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 4, pp. 1352–1365, 2007.
  • [9] M. Mohri, F. Pereira, and M. Riley, “Weighted finite-state transducers in speech recognition,” Computer Speech and Language, vol. 16, pp. 69–88, 2002.
  • [10] A. Stolcke, “SRILM - an extensible language modeling toolkit,” In Proceedings of the International Conference on Statistical Language Processing, Denver, Colorado, 2002.
  • [11] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, and K. Vesely, “The Kaldi Speech Recognition Toolkit,” In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, 2011.
  • [12] C. Allauzen, M. Riley, J. Schalkwyk, W. Skut, and M. Mohri, “OpenFst: A general and efficient weighted finite-state transducer library,” in CIAA 2007, vol. 4783 of LNCS, pp. 11–23, 2007, http://www.openfst.org.
  • [13] G. Saon, D. Povey, and G. Zweig, “Anatomy of an extremely fast LVCSR decoder,” Interspeech 2005 – 9th Annual Conference of the International Speech Communication Association, Lisbon, Portugal, Proceedings, 2005, pp. 549–552.
  • [14] H. Soltau and G. Saon, “Dynamic network decoding revisited,” in Proc. IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2009, pp. 276–281.
  • [15] D. Povey, V. Peddinti, D. Galvez, P. Ghahremani, V. Manohar, X. Na, Y. Wang, and S. Khudanpur, “Purely sequence-trained neural networks for ASR based on lattice-free MMI,” Interspeech 2016 – 17th Annual Conference of the International Speech Communication Association, San Francisco, USA, Proceedings, 2016, pp. 2751–2755.