Personalization of End-to-end Speech Recognition On Mobile Devices For Named Entities

by Khe Chai Sim, et al.

We study the effectiveness of several techniques to personalize end-to-end speech models and improve the recognition of proper names relevant to the user. These techniques differ in the amounts of user effort required to provide supervision, and are evaluated on how they impact speech recognition performance. We propose using keyword-dependent precision and recall metrics to measure vocabulary acquisition performance. We evaluate the algorithms on a dataset that we designed to contain names of persons that are difficult to recognize. Therefore, the baseline recall rate for proper names in this dataset is very low: 2.4%. We show that biasing combined with synthesized sentence data can improve the recall to 48.6% with no need for speech input from the user. With speech input, if the user corrects only the names, the name recall rate improves to 64.4%. If the user corrects all the recognition errors, we achieve the best recall of 73.5%. To eliminate the need to upload user data and store personalized models on a server, we focus on performing the entire personalization workflow on a mobile device.






1 Introduction

End-to-end speech recognition systems based on the Recurrent Neural Network Transducer (RNN-T) architecture [1] have been shown to achieve state-of-the-art performance for large-vocabulary continuous speech recognition [2]. An RNN-T model has also been successfully deployed to run efficiently on mobile devices [3]. The ability to perform efficient on-device speech recognition opens up new avenues for on-device personalization of these models without the need for complex server-side infrastructure. More importantly, personalization means that user data and models are stored on users' devices rather than sent to a centralized server, thus increasing data privacy and security.

To personalize speech models on device, we need a learning algorithm that is memory efficient and robust to overfitting. We also need users to provide labeled data, which can be time consuming. To minimize the user effort needed, we investigate ways of improving on-device learning and compare personalization techniques that require different levels of user engagement.

The remainder of this paper is organized as follows. Section 2 describes the on-device speech personalization framework. Section 3 presents several personalization methods that aim at learning new named entities. In Section 4, we describe the Wiki-Names dataset, which we collected specifically to evaluate the effectiveness of personalization techniques for learning new named entities. Section 5 describes the evaluation metrics we use throughout our experiments. Section 6 presents the experimental results.

2 On-device Speech Personalization

Figure 1: On-device Speech Personalization Workflow.

Fig. 1 shows how the personalization workflow runs on a mobile device. It consists of two main phases. First, the user interacts with the device by voice: the user's speech is captured and transcribed into text by an on-device speech recognition system, and the user may then optionally edit the recognition outputs to correct errors. The input speech and corrected transcripts are stored in a training cache on the device. During the second phase, data from the training cache is used to perform on-device personalization of the speech model. This phase is usually performed while the device is idle, so as to maximize the memory and compute resources available for personalization and to preserve the user experience. The two phases may take place in an interleaved fashion to achieve an online personalization process, where the system continuously adapts to the user's voice and usage patterns.
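The two-phase workflow can be sketched as a simple training cache. This is a minimal sketch; the class and method names below are illustrative assumptions, not an API from the paper:

```python
class TrainingCache:
    """Sketch of the two-phase on-device workflow in Fig. 1 (names are assumed)."""

    def __init__(self):
        self._examples = []

    def add(self, audio, hypothesis, corrected=None):
        # Phase 1: store the utterance with the user-corrected transcript when
        # the user edited the recognition output, otherwise the raw hypothesis.
        self._examples.append((audio, corrected if corrected is not None else hypothesis))

    def drain(self):
        # Phase 2 (device idle): hand all cached examples to the trainer.
        batch, self._examples = self._examples, []
        return batch
```

Keeping the two phases decoupled through a cache is what allows training to run opportunistically while the device is idle.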

Figure 2: Architecture of the on-device RNN-T model (8 encoder layers, 2 LM layers and a joint network with 1 hidden layer).

Fig. 2 depicts the RNN-T model architecture used for the experiments in this paper. The model consists of an encoder with 8 Long Short-Term Memory (LSTM) layers for the acoustic features, an encoder with 2 LSTM layers for the label sequences (denoted the language model (LM) component) and a joint network with a single hidden layer. The LM and joint network components are referred to as the decoder. Each LSTM layer has 2048 hidden units along with a projection to 640 units. 3 consecutive frames of 80-dimensional log-Mel features (extracted every 10 milliseconds) are stacked together to form the 240-dimensional inputs to the network. There is also frame stacking after the second encoder layer with a stride size of 2 to increase the frame shift to 60 milliseconds. The network outputs correspond to 75 graphemes and a blank symbol.
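The input frame stacking can be illustrated in a few lines of NumPy. The non-overlapping stride of 3 for the first stacking stage is an assumption inferred from the stated 240-dimensional inputs and the final 60 ms frame shift:

```python
import numpy as np

def stack_frames(feats, num_frames=3, stride=3):
    """Stack consecutive feature frames and subsample in time.

    feats: (T, 80) log-Mel frames at a 10 ms shift. With num_frames=3 and
    stride=3 (assumed non-overlapping stacking) the output is (T', 240) at a
    30 ms shift; the stride-2 reduction after the second encoder layer then
    brings the effective shift to 60 ms.
    """
    starts = range(0, feats.shape[0] - num_frames + 1, stride)
    return np.stack([feats[t:t + num_frames].reshape(-1) for t in starts])
```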


3 Vocabulary Acquisition

Vocabulary acquisition is an important aspect of personalizing speech models, providing the system with the flexibility to learn new words that do not appear in the training data. The model becomes attuned to the small set of words that matter to the individual user, such as the names of family and friends.

For an end-to-end sequence model whose output units are sub-words, the vocabulary of the system is not explicitly defined. If the sub-word units are sufficient to construct all possible words, there are technically no out-of-vocabulary (OOV) words. However, words that do not appear in the training data will typically be assigned a low probability by the model, leading to recognition errors for rare or unseen words. The output probabilities of new words can be increased by modifying the decoder to incorporate a biased language model [6]. This includes techniques such as biasing [7, 8, 9], shallow fusion [10], deep fusion [10] and cold fusion [11]. Alternatively, the end-to-end model can be fine-tuned to learn the new words. In the following, we describe the biasing and model fine-tuning approaches for handling new words.

3.1 Biasing

Biasing is a technique whereby new vocabulary is injected into the speech recognition system by boosting the language model probability of the new words during decoding. This approach does not require retraining the existing model and can be viewed as a form of language model interpolation. It requires only a list of biasing words or phrases. It has been shown in [9] that personalization can be achieved efficiently through new vocabulary injection and biasing the language model on the fly [7, 8].
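As a rough illustration of the idea (not the paper's actual decoder), boosting can be modeled as adding a fixed log-probability bonus whenever a hypothesis contains a biasing phrase; a real system applies the boost incrementally during beam search rather than rescoring whole strings:

```python
def bias_rescore(hyp_text, log_prob, bias_phrases, weight=2.0):
    """Toy stand-in for on-the-fly LM biasing.

    Adds a fixed log-probability bonus (`weight`, an illustrative value) for
    each biasing phrase found in the hypothesis text.
    """
    bonus = sum(weight for phrase in bias_phrases if phrase in hyp_text)
    return log_prob + bonus
```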

3.2 Model fine-tuning

A more direct way to learn new words is to fine-tune the model with training data that contains examples of the new words. This requires both speech and the corresponding transcription. High-quality speech transcription is important to train speech recognition models. For server-side training, this is typically outsourced to professional transcribers. However, on-device personalization has to rely solely on individual users to provide training supervision. This would require a substantial amount of user effort and may not be practical for on-device training.

There are several ways to reduce user effort. For example, users may provide a list of named entities important to them, or simply grant access to an existing list, such as contact names, place names and media names used by other apps. A text-to-speech engine can then be used to synthesize the corresponding speech. This requires the same amount of user effort as biasing (cf. Section 3.1). Alternatively, users may provide example sentences that contain the words of interest. This will provide more contextual information for the new words and may improve generalization to unseen contexts. Finally, users may also speak to the system and correct any recognition errors, as illustrated in the personalization workflow in Fig. 1. If the user corrects every recognition error, we can achieve supervised learning. Otherwise, if the user only corrects some of the errors, we can perform semi-supervised learning.

3.2.1 Overfitting

Although the speech RNN-T model has about 117M parameters, our results show that it is possible to fine-tune the entire model and achieve good improvements. However, fine-tuning the full model can easily lead to overfitting. Overfitting can be suppressed by using a lower learning rate and early stopping. This is consistent with the results reported in [12], where robust domain adaptation can be achieved by fine-tuning the entire model with millions of parameters on a small amount of adaptation data.
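Early stopping, one of the two safeguards mentioned above, can be implemented in a few lines; the patience value here is an illustrative choice, not one taken from the paper:

```python
class EarlyStopper:
    """Stop fine-tuning once the held-out loss stops improving."""

    def __init__(self, patience=2):
        self.best = float('inf')
        self.bad_epochs = 0
        self.patience = patience

    def update(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```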

There are existing speaker adaptation techniques for neural networks that offer a more compact model representation by having a large number of global parameters shared across different speakers and a smaller number of speaker-specific parameters [13, 14, 15, 16, 17, 18, 19, 20]. These methods are more suitable for a centralized adaptation solution where the adapted models for all speakers are stored on a server.

On the other hand, regularization [21, 22] and data augmentation [23, 24] techniques can also be used to prevent overfitting when training on a small amount of data. They achieve robust model adaptation without modifying the model structure. These approaches are suitable for on-device personalization since the personalized models are stored separately on individual user devices. In the following section, we will describe the Elastic Weight Consolidation (EWC) regularization technique to mitigate the overfitting problem.

3.2.2 Elastic Weight Consolidation

To perform personalization, we first train the model on a multi-speaker corpus, then fine-tune it on a single user. During the second training step, the model “forgets” about the previous task; this problem is called “Catastrophic Forgetting” [25]. We can mitigate this problem using Elastic Weight Consolidation (EWC) [22], which aims at maximizing p(θ|D), i.e. finding the most probable parameter values, θ, given some dataset, D. By assuming that D contains data from two tasks, A and B, and that the model has already been trained on task A, the conditional probability is given by:

log p(θ|D) = log p(D_B|θ) + log p(θ|D_A) − log p(D_B)

Note that −log p(D_B|θ) is the standard loss function for task B, L_B(θ). p(θ|D_A) can be approximated as a Gaussian distribution with the mean given by the parameters fine-tuned on task A and the precision matrix given by the Fisher Information Matrix, F. Therefore, the EWC loss function has an additional regularization term:

L(θ) = L_B(θ) + Σ_i (λ/2) F_i (θ_i − θ*_{A,i})²

where θ denotes the set of model parameters, θ*_{A,i} is the value of the i-th parameter fine-tuned for task A, and λ is the penalty weight that can be adjusted to control the amount of regularization. The Fisher Information Matrix, F, is used to penalize moving in the directions critical (with high Fisher information) to the first task. We compute F by summing the squared gradients of the loss with a model trained on the first task, as described in [22].
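As a concrete sketch, the diagonal Fisher estimate and the EWC-regularized loss can be written in a few lines of NumPy; array shapes and function names here are illustrative, and a real implementation would apply the penalty inside the training graph:

```python
import numpy as np

def diagonal_fisher(per_example_grads):
    """F_i: average squared gradient of the task-A loss (diagonal Fisher)."""
    return np.mean(np.square(per_example_grads), axis=0)

def ewc_loss(loss_b, theta, theta_a, fisher, lam):
    """L(theta) = L_B(theta) + (lam / 2) * sum_i F_i * (theta_i - theta_a_i)^2."""
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_a) ** 2)
    return loss_b + penalty
```

Parameters with high Fisher information are pulled strongly back towards their task-A values, while parameters the first task barely used are free to move.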

4 Dataset

We collected a dataset called Wiki-Names to evaluate the performance of speech personalization. The sentences were extracted from English Wikipedia pages containing repeated occurrences of politician and artist names that are difficult to recognize. To identify the difficult words, we extracted a list of names and ran our baseline speech recognition system on synthesized speech for these names. The names that were incorrectly recognized are deemed difficult words. Four categories of names were chosen: American, Chinese, Indian and Italian. We recruited 100 participants to read the extracted sentences; each speaker participated in only one category and was comfortable reading the names in that category. Nevertheless, the prompts were still difficult to read due to the presence of many named entities. As a consequence, the resulting speech was often accented and disfluent (with hesitations and corrections).

Category    No. of      Amount of data (mins)
            Speakers    Train      Test
American        6        24.7       9.7
Chinese        24       126.0      49.7
Indian         40       161.3      66.1
Italian        30       144.6      57.9
Table 1: Wiki-Names data.

Table 1 shows the distribution of the data over the four categories. Each participant provided 50 utterances (4.6 minutes) of training data and 20 utterances (1.9 minutes) of test data. The prompts for each user covered five names, each with 10 training utterances and 4 test utterances, with each name potentially appearing multiple times per utterance. We manually transcribed the data to use as ground truth for both training and evaluation.

5 Evaluation

We evaluate performance based on word error rate (WER), a standard metric for speech recognition systems. In order to measure performance on a selected subset of words, we treat the problem as a retrieval task (similar to keyword spotting [26]). We propose using keyword-dependent precision and recall to measure vocabulary acquisition performance, computed as follows:

Precision = C / N_hyp        Recall = C / N_ref

where N_ref and N_hyp are the numbers of keywords in the reference and hypothesis, respectively, and C is the number of keywords that are correctly recognized. To compute C, we first perform an edit-distance alignment between the reference and hypothesis texts, then count the number of correct matches for the keywords only.

Precision and recall better explain the performance of a personalization method: a low recall indicates the model's inability to recognize the new entities (misses), while a low precision indicates over-generation of those named entities (false alarms).

REF Zhuge Dan was from Yangdu
HYP Zhuge was from young Zhuge
Table 2: An example of aligned reference and hypothesis texts. The relevant words are in bold face (N_ref = 3, N_hyp = 2 and C = 1; precision = 1/2 and recall = 1/3).

For example, Table 2 shows a pair of aligned reference and hypothesis texts. There are 3 relevant words in the reference (‘Zhuge’, ‘Dan’ and ‘Yangdu’) and 2 relevant words in the hypothesis (‘Zhuge’ and ‘Zhuge’). Only one keyword in the hypothesis has been aligned correctly with the reference. The recall is penalized by the system’s inability to recognize ‘Dan’ and ‘Yangdu’ correctly while the precision is penalized by falsely recognizing ‘Zhuge’ at the end.
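The keyword-dependent metrics can be computed directly from a word-level alignment. The sketch below uses Python's difflib as a stand-in for the edit-distance alignment (an assumption; the paper does not name its alignment tool), which yields the same counts on this example:

```python
import difflib

def keyword_precision_recall(ref_words, hyp_words, keywords):
    """Keyword-dependent precision/recall from an alignment of REF and HYP."""
    n_ref = sum(w in keywords for w in ref_words)   # keywords in the reference
    n_hyp = sum(w in keywords for w in hyp_words)   # keywords in the hypothesis
    sm = difflib.SequenceMatcher(None, ref_words, hyp_words, autojunk=False)
    # C: keywords sitting on correctly matched (aligned) positions
    c = sum(w in keywords
            for block in sm.get_matching_blocks()
            for w in ref_words[block.a:block.a + block.size])
    precision = c / n_hyp if n_hyp else 0.0
    recall = c / n_ref if n_ref else 0.0
    return precision, recall
```

On the Table 2 example this recovers precision = 1/2 and recall = 1/3.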

Generally, as we train our models to learn new words, we observe that recall improves gradually as the model begins to recognize those words. However, the personalized model also starts to over-generate those words in its output, which reduces precision. Precision and recall together therefore offer a way of measuring this trade-off.

6 Experimental Results

In this section, we present experimental results evaluating various techniques for personalizing the RNN-T speech recognition model. The architecture of this model is described in Section 2. The model was trained with 35 million anonymized hand-transcribed English utterances (∼27,500 hours) from Google's voice search traffic [3]. All models are trained and evaluated using TensorFlow [27]. The RNN-T loss and gradients are computed using the efficient implementation described in [28]. We fine-tune the models using the momentum optimizer [29] for 15 epochs with a fixed learning rate and a batch size of 5.

6.1 Fine-tune Selected Layers

Model      # of fine-tuned       WER
           parameters        TTS     Real
Baseline        –            67.2    67.2
Joint          901k          67.0    59.9
LM             19M           61.8    56.3
Decoder        20M           65.3    55.5
Encoder        96M           65.8    58.3
All            117M          64.8    50.4
Table 3: Comparison of word error rate performance on the Wiki-Names dataset (15 epochs).

First, we evaluate the WER performance of fine-tuning selected layers of the RNN-T model. Table 3 shows the WER performance of the personalized models fine-tuned using either synthesized (TTS) or actual (Real) speech data, along with the corresponding number of fine-tuned parameters. (Note that Real represents the best-case scenario where users correct all the recognition errors to achieve fully supervised training.) Wiki-Names is a difficult dataset due to the presence of many named entities as well as accented and disfluent speech. The baseline model has a rather high 67.2% WER, making many errors on the names as well as on short words (e.g. 'a', 'an' and 'the').

Fine-tuning the entire model with the actual speech data achieves the lowest WER of 50.4%. If we fine-tune a partial model, it is better to fine-tune the decoder than the encoder, despite the decoder having about 5 times fewer parameters. This is expected because it is much easier for the LM component of the model to learn a new named entity. The additional gains from fine-tuning the entire model may come from learning the speakers' accents. On the other hand, if we fine-tune the model with synthesized speech, we achieve the best WER of 61.8% by updating the LM component only. This is again expected because fine-tuning the rest of the model would overfit to the synthesized speech.

6.2 Elastic Weight Consolidation

Model     Epoch     Voice Search          Wiki-Names
                  no EWC    EWC        no EWC    EWC
Baseline    –       7.3      7.3        48.2     48.2
LM          5       7.3      7.3        47.6     46.9
           10       7.6      7.5        40.0     41.1
           15       8.3      7.8        35.9     36.5
All         5       8.6      8.3        42.3     43.1
           10      10.1      8.8        35.2     32.7
           15      11.4      9.7        32.0     28.7
Table 4: Comparison of word error rate performance without and with EWC for different adaptation models (average over 4 speakers, one from each category).

Next, we study the effectiveness of Elastic Weight Consolidation (EWC), as described in Section 3.2.2, in handling a small amount of training data for speech personalization. We evaluate the performance of four personalized speech models (one speaker from each category) on a separate voice search test set, to understand how much the models deviate from the baseline after personalization. Table 4 shows the average performance of two types of personalized model (fine-tuning either the LM component or the entire model) with different numbers of training epochs, without and with EWC. The baseline model achieves 7.3% WER on the voice search test set. As expected, WER on voice search increases as we personalize the models towards the Wiki-Names data with more training epochs. Without EWC, the WER on Voice Search increases from 7.3% to 8.3% when fine-tuning the LM component and to 11.4% when fine-tuning the entire model. In this case, fine-tuning the LM component only offers a better trade-off between the two test sets. For example, after 15 epochs, it achieves 35.9% WER on Wiki-Names and 8.3% on Voice Search. In comparison, fine-tuning the entire model for 10 epochs achieves a similar WER of 35.2% on Wiki-Names, but a higher WER of 10.1% on Voice Search. With elastic weight consolidation, we are able to reduce the degradation on voice search. Surprisingly, we also achieve a better WER of 28.7% on the Wiki-Names set when fine-tuning the entire model. The penalty term in EWC serves as an effective regularizer that mitigates overfitting to a small amount of personalization data, and even assists learning of the new data.

6.3 Effect of TTS Engine

TTS        American   Chinese   Indian   Italian
Baseline     50.7       75.9     72.1     55.9
en           45.2       70.4     68.1     49.6
zh           46.8       66.0     66.8     50.3
en-in        48.0       70.0     64.5     48.6
it           46.6       68.6     66.5     46.5
Table 5: Comparison of word error rate performance when using different TTS engines to generate labels.

Next, we study the effect of using different Text-To-Speech (TTS) engines for personalization. We choose four TTS engines that match closely the four name categories: English (en), Chinese (zh), Indian English (en-in) and Italian (it). From the results in Table 5, the choice of TTS engine matters less than the gains over the baseline. Nevertheless, the matched TTS engine yields the best result in each of the four name categories.

6.4 Semi-supervised Learning

Supervision          Names            Non-names        WER
                   P      R          P      R
Baseline          97.9    2.7       82.5   76.0        67.2
Unsupervised      96.7    3.5       83.8   62.2        69.1
+ biasing         67.2   10.6       84.9   48.7        74.1
Semi-supervised   68.8   68.2       91.1   61.3        63.8
Supervised        82.5   77.1       92.2   80.1        50.4
Table 6: Comparison of precision (P) and recall (R) for names and non-names using the actual speech and different types of supervision. Personalization is achieved by fine-tuning the entire model (All). 'Supervised' represents an ideal scenario.

So far, we’ve assumed that correct transcriptions are available for training. However, training data for on-device personalization exists only on user devices, and attaining correct transcriptions on-device requires time-consuming effort from users. In a realistic scenario, users may not correct all the errors in the recognition outputs. To better understand the impact of different amounts of user effort on personalization, we measure the breakdown of the precision and recall for the names of interest, as well as for the rest of the words in the sentence (non-names222Non-names may include other named entities that we do not track.). In Table 6, we compare using different types of supervision that require different amounts of user effort for training. The baseline model performs poorly for the names, with a very low recall of 2.7%. It rarely outputs the names, but it does so with a very high precision of 97.9%.

For unsupervised training, we use the recognition output of the baseline model (67.2% WER) as supervision for personalization. This requires no additional effort from the user to make corrections. However, there is no performance gain because the new names are not generated by the recognizer in the unsupervised transcripts. Alternatively, if we use the biased recognition output as supervision, the name recall rate improved to 10.6%. However, the precision for the names, the recall for the non-names as well as the overall WER performance degrade significantly because the model is learning from erroneous transcripts.

For semi-supervised training, we simulate the situation where users only correct the errors that matter (in this case, the new named entities of interest). We do so by aligning the baseline recognition outputs with the reference and replacing the words in the hypotheses that are aligned to names in the references. For example, given the aligned reference and hypothesis in Table 2, the name-corrected transcript will be 'Zhuge Dan was from Yangdu Zhuge'. This assumes that the user corrects the missing 'Dan' and the incorrectly recognized 'Yangdu'. With a relatively small effort from the user (correcting only the names), we see a substantial improvement in recalling those names (from 3.5% to 68.2%). However, the precision for the names and the recall for the non-names are still much worse than the baseline. The overall relative WER improvement is only about 5.1% (from 67.2% to 63.8%).
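This name-only correction can be simulated in a few lines. The sketch below uses difflib for the alignment (an assumption; the paper does not specify its alignment tool) and reproduces the name-corrected transcript from the Table 2 example:

```python
import difflib

def correct_names_only(ref_words, hyp_words, names):
    """Simulate a user who fixes only the name errors in the ASR hypothesis."""
    out = []
    sm = difflib.SequenceMatcher(None, ref_words, hyp_words, autojunk=False)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op in ('equal', 'insert'):
            out.extend(hyp_words[j1:j2])           # left untouched by the user
        else:  # 'delete' or 'replace': walk both segments in parallel
            ref_seg, hyp_seg = ref_words[i1:i2], hyp_words[j1:j2]
            for k in range(max(len(ref_seg), len(hyp_seg))):
                r = ref_seg[k] if k < len(ref_seg) else None
                h = hyp_seg[k] if k < len(hyp_seg) else None
                if r in names:
                    out.append(r)                   # user restores the name
                elif h is not None:
                    out.append(h)                   # non-name error left as-is
    return out
```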

6.5 Biasing and TTS Data

So far, we have considered fine-tuning the RNN-T model using full sentences. Next, we also compare with two other personalization techniques that require only the names: 1) biasing (as described in Section 3.1); and 2) fine-tuning the LM component using synthesized speech of the names only.

Method               Names            Non-names        WER
                   P      R          P      R
Baseline         100.0    2.4       85.7   59.6        70.0
+ biasing         87.5   30.1       87.5   63.7        63.5
TTS (names)       91.6   13.4       87.2   60.3        69.1
+ biasing         82.3   34.5       90.1   59.7        66.7
TTS (sentences)   90.1   22.5       88.2   64.2        65.2
+ biasing         76.9   48.6       91.4   64.5        62.2
Semi-supervised   75.5   52.5       92.8   51.4        68.3
+ biasing         63.0   64.4       93.9   51.8        66.9
Supervised        88.1   65.0       93.2   71.7        54.8
+ biasing         80.1   73.5       94.0   71.5        53.8
Table 7: Comparison of precision (P) and recall (R) for names/non-names and word error rate (WER). 'Supervised' represents an ideal scenario.

Table 7 compares the word error rate as well as the precision and recall performance of the personalization techniques. For this experiment, we use a slightly different decoding setup that includes end-pointing and supports biasing. The results in Table 7 are therefore not directly comparable with the earlier results in the paper, but are self-consistent.

Our results show that training with synthesized sentences achieves better results than training with synthesized names only. This is not surprising because the model sees the names in different linguistic contexts during training. Moreover, training with the actual speech data performs better than using synthesized data because the model learns the user-specific pronunciation of the names.

We also observe that biasing can be applied to fine-tuned models to obtain consistent further improvements. With biasing, the recall rate for the names improves as the user provides more supervision: 30.1% (biasing only) → 48.6% (with TTS sentences) → 64.4% (actual speech with name-corrected transcripts) → 73.5% (fully supervised). However, biasing increases the number of false alarms for the names, which results in lower name precision.

6.6 On-device Benchmark

Batch Size   Memory (GB)   Epoch Time (minutes)
 1           1.5           50
 5           1.6           22
10           1.5           14
20           1.8           10
Table 8: Benchmark results of training memory and speed on a Pixel 3 mobile phone with different batch sizes.

Finally, we ran benchmark experiments to measure memory usage and training speed for on-device training. Training was done on a Pixel 3 mobile phone using 50 utterances from one speaker over 1 epoch. The benchmark results for different batch sizes are shown in Table 8. The batch size does not affect memory usage much: training consumes between 1.5 and 1.8 GB with batch sizes 1, 5, 10 and 20. On the other hand, increasing the batch size significantly improves throughput. With a batch size of 1, one epoch of training takes 50 minutes. Increasing the batch size to 20 reduces the training time per epoch by 5 times, to 10 minutes. Therefore, training on device for 15 epochs with 5 utterances per mini-batch (the configuration used for the experiments in this paper) takes about 5.5 hours. Increasing the batch size to 20 would reduce the training time to 2.5 hours. This fits reasonably well within one typical training session while the phone is idle. Moreover, personalization can also be performed over multiple training sessions to progressively improve the model as more user data becomes available.

7 Conclusions

This paper investigates the effectiveness of several techniques for personalizing end-to-end speech recognition models on device by learning new named entities. We find that it is possible to fine-tune the entire model with 117 million parameters using a small learning rate and early stopping. Elastic weight consolidation can be used to mitigate overfitting, and achieves a 10.3% relative word error rate improvement on the Wiki-Names dataset. Since supervised on-device training would rely on the user to provide labeled data, we compare training with differing amounts of supervision (and corresponding user effort). We propose using keyword-dependent precision and recall to measure vocabulary acquisition performance. Our results show that biasing combined with synthesized data of sentences can improve the recall of new named entities from 2.4% to 48.6% without the users providing speech inputs and/or correcting the recognition outputs. The name recall rate can be further improved to 64.4% and 73.5% if the user corrects only the errors for the names (semi-supervised) and all the errors (supervised), respectively.