1 Introduction
Current state-of-the-art neural network-based ASR systems have advanced to nearly human performance in several evaluation settings [25, 28]; however, these systems perform poorly for domains (in this paper, we use the term "domain" to refer to a group of utterances that share some common characteristics) that are not included in the original training data [4, 27, 11, 12]. For example, if we train an ASR system on a U.S. English dataset, the performance of the system degrades significantly for other English accents (e.g., Australian, Indian, and Hispanic). To improve the performance of the system for an unseen domain, we can adapt the previously trained model to capture the statistics of the new domain. However, adaptation techniques suffer from the forgetting effect: previously learned information is lost as new information is learned. We need an ASR system that not only performs well for the new domain, but also retains its performance on previously seen domains. This is the goal of domain expansion methods.
Domain Expansion – In a domain expansion scenario, we are given a model trained on an initial domain and a dataset for an unseen domain; the goal is to modify the model such that it performs well for both domains. The main difficulty of domain expansion is preserving the functionality (input-output mapping) of the original model, i.e., mitigating the forgetting problem. Many approaches have been proposed to deal with the forgetting problem in neural networks. These approaches can be divided into three categories: architectural, rehearsal, and regularization strategies.
1.1 Architectural strategies
In this class of methods, the architecture of the neural network is modified to mitigate the forgetting problem. The progressive neural network (PNN) [24] is a popular architectural strategy; it freezes the previously trained network and uses its intermediate representations as inputs to a new, smaller network. PNNs have been applied in many different applications, including speech synthesis [10], speaker identification [6], and speech emotion recognition [6]. However, it has been shown that PNNs are not efficient for long sequences of domains, since the number of weights in a PNN increases linearly with the number of domains [24].
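To make the lateral-connection idea concrete, the following is a minimal two-column sketch of a progressive network (the function names and the exact wiring of the laterals are illustrative assumptions, not taken from [24]): the frozen column is evaluated unchanged, and its intermediate representations feed, through lateral weights, into the new column.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def pnn_forward(x, frozen_weights, new_weights, lateral_weights):
    """Forward pass of a minimal two-column progressive network:
    the frozen column is evaluated as-is, and its layer-wise
    representations are fed through lateral weights into the
    corresponding layers of the new column."""
    # Frozen column: collect the representation entering each layer.
    frozen_inputs = [x]
    for W in frozen_weights:
        frozen_inputs.append(relu(frozen_inputs[-1] @ W))
    # New column: each layer also receives a lateral connection
    # from the frozen column's representation at the same depth.
    h = x
    for W, U, hf in zip(new_weights, lateral_weights, frozen_inputs[:-1]):
        h = relu(h @ W + hf @ U)
    return h
```

Only `new_weights` and `lateral_weights` would be trained for a new domain, so each added domain adds a fresh column of parameters, which is the linear parameter growth noted above.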
1.2 Rehearsal strategies
These approaches store part of the previous training data and periodically replay it during future training. A full rehearsal strategy can alleviate the forgetting effect, but it is very slow and memory intensive. The authors of [7] proposed ExStream, a partitioning-based approach, to address the memory problem of the full rehearsal strategy. In another approach, [3] proposed training an encoder-decoder model that distills the information contained in the previous domains. Their method uses the trained encoder-decoder to simulate pseudo-patterns of the previous domains and exploits these pseudo-patterns during training on the new domain.
1.3 Regularization strategies
Regularization refers to a set of techniques that alleviate the forgetting effect by imposing additional constraints on parameter updates. A straightforward constraint is weight constraint adaptation (WCA), which penalizes the deviation of the model parameters from the original model parameters by adding an L2 distance between the original and adapted weights [19]. Another popular regularization approach is learning without forgetting (LWF) [20], which tries to learn a sequence of related tasks without losing performance on the older ones by imposing output stability. Jung et al. [13] explored the domain expansion problem for image classification tasks; they used an L2 distance between the final hidden representations of the original network and the adapted network. Kirkpatrick et al. [18] introduced elastic weight consolidation (EWC), which selectively slows down training for weights that are important for older domains.
In this study, we explore approaches to address the domain expansion problem for deep neural network (DNN)-based acoustic models. To the best of our knowledge, this is the first study that explores domain expansion for speech recognition. We investigate several existing and proposed regularization strategies to alleviate the forgetting effect in domain expansion. We employ WCA and EWC as baseline techniques for the domain expansion problem; we also propose two new domain expansion techniques: soft KL-divergence (SKLD) and hybrid SKLD-EWC. SKLD penalizes the KL-divergence (KLD) between the original model's output and the adapted model's output as a measure of the deviation of the model. We will demonstrate that SKLD and EWC are complementary, and that combining them leads to a better domain expansion technique, which we refer to as SKLD-EWC. We compare the efficacy of these methods on an accent adaptation task in which we adapt a DNN acoustic model trained on native English to three different English accents: Australian, Hispanic, and Indian. Our results show that the proposed hybrid technique, SKLD-EWC, results in the best overall performance, and that SKLD performs significantly better than EWC and WCA.
2 Domain Expansion Approaches
In this section, we explain the details of the four domain expansion techniques that we investigate in this study: weight constraint adaptation (WCA), elastic weight consolidation (EWC), soft KL-divergence (SKLD), and hybrid SKLD-EWC.
Problem Setup – In the domain expansion task, we are given an original model $M^o$, trained on an original domain $D^o$, and a dataset for an unseen domain $D^n$; the goal is to find a new model $M^n$ that performs well for both $D^o$ and $D^n$.
2.1 Weight Constraint Adaptation (WCA)
WCA was first proposed in [19] to regularize the adaptation process for discriminative classifiers. In another study [20], WCA was employed for continual learning over a sequence of disjoint tasks. This technique tries to find a solution that performs well for the new domain $D^n$ and is also close to the original model $M^o$.
According to [18], for a given neural network architecture, there are many configurations of model parameters that lead to comparable performance. Therefore, there are many configurations that can efficiently represent our new domain $D^n$. Among such configurations, an effective solution for domain expansion is the one that lies closest to the original model $M^o$. Different distance metrics can be used to measure the similarity between models. WCA uses the Euclidean distance between the learnable parameters of $M^o$ and $M^n$. This idea can be implemented by imposing an additional L2 constraint on the optimization loss function of the neural network:
(1) $\mathcal{L}_{WCA}(\theta) = \mathcal{L}(\theta) + \lambda \, \lVert \theta - \theta^{o} \rVert_2^2$
where $\theta$ and $\theta^{o}$ are the learnable parameters of $M^n$ and $M^o$, respectively; $\mathcal{L}$ is the main optimization loss (the cross-entropy loss function); $\mathcal{L}_{WCA}$ is the regularized loss with the WCA technique; $\lVert \cdot \rVert_2$ is the L2 norm; and $\lambda$ is a regularization parameter that determines how far the parameters may diverge from their initial values to learn the new domain.
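As a minimal sketch of the WCA objective above (the helper names are illustrative, not from the paper), the regularized loss is simply the cross-entropy plus a scaled squared L2 distance between the adapted and original weights:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy between predicted class posteriors and integer labels."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def wca_loss(probs, labels, theta, theta_orig, lam):
    """Regularized loss of Eq. (1): the main cross-entropy plus an L2
    penalty that keeps the adapted weights close to the original ones."""
    penalty = sum(np.sum((w - w0) ** 2) for w, w0 in zip(theta, theta_orig))
    return cross_entropy(probs, labels) + lam * penalty
```

With `lam = 0` the penalty vanishes and the objective reduces to plain fine-tuning on the new domain.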
2.2 Elastic Weight Consolidation (EWC)
The WCA technique considers all weights equally. Therefore, it is unable to find an efficient compromise between maintaining the model performance on the original domain $D^o$ and learning the new domain $D^n$. However, not all weights are equally important, and an approach that takes weight importance into account would perform better than naive WCA.
Intuitively, after training a DNN for a sufficient number of iterations, the model converges to a local minimum of the optimization landscape. At such a point, the sensitivity of the loss function w.r.t. the $i$-th learnable weight, $\theta_i$, can be quantified by the curvature of the loss function along the direction of changes in $\theta_i$. High curvature for a weight means that the loss function is sensitive to small changes in that weight. Therefore, to preserve the performance of the network on the previous domain, we must avoid modifying the parameters with high curvature. On the other hand, parameters with low curvature are proper choices to be tuned on new data without losing the model performance on the original data.
The curvature of the loss function is equivalent to the diagonal of the Fisher information matrix [21]. EWC offers a straightforward method to incorporate the importance of the learnable weights (the curvature of the loss function w.r.t. the weights) into the adaptation process. The method is similar to WCA; the only difference is that EWC employs a weighted L2 norm instead of the plain L2 norm used in WCA:
(2) $\mathcal{L}_{EWC}(\theta) = \mathcal{L}(\theta) + \lambda \sum_i F_i \, (\theta_i - \theta_i^{o})^2$
where $F_i$ is the $i$-th element of the diagonal of the Fisher information matrix (representing the importance of the $i$-th learnable weight); $\theta_i$ and $\theta_i^{o}$ are the $i$-th weights of the new and original models, respectively; and the summation is taken over all learnable weights of the network.
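A short sketch of the two EWC ingredients (helper names are illustrative assumptions): the diagonal Fisher, estimated as the variance of per-sample first-order gradients, and the weighted penalty of Eq. (2) built from it.

```python
import numpy as np

def fisher_diagonal(per_sample_grads):
    """Diagonal Fisher estimate: the variance of the per-sample
    first-order gradients of the loss w.r.t. each weight.
    `per_sample_grads` has shape (num_samples, num_weights)."""
    return np.var(np.asarray(per_sample_grads), axis=0)

def ewc_penalty(theta, theta_orig, fisher, lam):
    """Weighted L2 penalty of Eq. (2): weights with large F_i (high
    curvature, important for the original domain) are anchored
    strongly to their original values."""
    return lam * np.sum(fisher * (theta - theta_orig) ** 2)
```

Weights whose gradients never vary get $F_i = 0$ and are left free to move; the experiments section notes that in practice a small offset (a value of 1) must be added to the Fisher entries, since many of them are zero.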
$F_i$ can be easily calculated as the variance of the first-order derivatives of the loss function w.r.t. the learnable weights (i.e., $F_i = \mathrm{Var}\{\partial \mathcal{L} / \partial \theta_i\}$) [21].
2.3 Soft KL-Divergence (SKLD)
A major difficulty of the domain expansion task is preserving the functionality (input-output mapping) of the original model $M^o$. WCA and EWC achieve this by providing a link between the learnable weights of the new model $M^n$ and the original model $M^o$. According to the experiments performed in [13], linking the learnable parameters is not an efficient way of preserving the functionality of the network, since applying slight changes to some of the parameters may significantly modify the input-output mapping of the network. Another method for preserving the functionality of $M^o$ is to impose constraints on the outputs of the model [29]. By constraining the outputs of $M^n$ to be consistent with the outputs of $M^o$, we can ensure that the two models are similar to each other. SKLD leverages this idea in two steps: (1) it takes the original model $M^o$ and the data of the new domain $D^n$ and generates the outputs of $M^o$ for all samples of the dataset; (2) SKLD then trains the new model $M^n$ by initializing it from $M^o$ and using a regularized loss function that can be expressed as:
(3) $\mathcal{L}_{SKLD}(\theta) = \mathcal{L}(\theta) + \frac{\lambda}{N} \sum_{n=1}^{N} D_{KL}\big(y_n^{o} \,\|\, y_n\big)$
where $N$ is the total number of samples; $D_{KL}$ is the KL-divergence; $x_n$ is the $n$-th input feature vector; $y_n^{o}$ and $y_n$ are the outputs of the original and new models for the $n$-th sample $x_n$; and $\lambda$ is a regularization hyper-parameter that provides a compromise between learning the new domain (by optimizing $\mathcal{L}$) and preserving the input-output mapping of the original model (by optimizing the $D_{KL}$ term). Setting $\lambda = 0$ results in the conventional pre-training/fine-tuning adaptation. By increasing the value of $\lambda$, we can ensure a balanced trade-off between learning the new domain and mitigating the forgetting effect. For this study, we tune $\lambda$ to achieve the best performance for both domains.
Equation (3) uses the KL-divergence between $y_n^{o}$ and $y_n$ to deal with the forgetting problem. However, some parts of the KL-divergence do not depend on the learnable parameters of $M^n$. In [2], it is demonstrated that removing these parts simplifies the KL-divergence to the cross-entropy:
(4) $D_{KL}\big(y_n^{o} \,\|\, y_n\big) \;\simeq\; -\sum_{c=1}^{C} y_{n,c}^{o} \log y_{n,c}$
where $C$ is the total number of classes; and $y_{n,c}^{o}$ and $y_{n,c}$ are the probabilities of the $c$-th class generated by $M^o$ and $M^n$ for an input vector $x_n$.
In neural networks, we typically use a softmax with temperature $T = 1$ to produce the probability of each class. However, Hinton et al. [9] suggested that a higher temperature $T > 1$, which increases the probabilities of small logits, performs better in transferring the functionality of a large network to a smaller one. Therefore, we also consider tuning the temperature to examine its effect on preserving the functionality of the model for the original data. We use a softmax with adjustable temperature $T$ to produce the output distributions of both $M^o$ and $M^n$ in the constraint term of equation (4). Note that we use $T = 1$ for the tuning loss $\mathcal{L}$ as well as in the evaluation phase.
2.4 Hybrid SKLD-EWC
In the previous sections, we explained both the SKLD and EWC approaches. Each has its advantages and disadvantages. The advantage of EWC is that it computes the Fisher information matrix (which quantifies the importance of the weights) from the original data during the initial training; SKLD does not exploit such information about weight importance. On the other hand, EWC uses a fixed Fisher matrix estimated for the initial model; since the Fisher matrix changes during the adaptation procedure, the fixed-matrix assumption of EWC becomes unreliable. The advantage of SKLD is that it is more efficient at preserving the functionality of the original model, as the efficacy of SKLD does not change during the adaptation procedure. We propose to combine these two techniques into a new hybrid approach, SKLD-EWC. The proposed technique can be implemented by imposing both the SKLD and EWC constraints on the tuning loss:
(5) $\mathcal{L}_{SKLD\text{-}EWC}(\theta) = \mathcal{L}(\theta) + \frac{\lambda_1}{N} \sum_{n=1}^{N} D_{KL}\big(y_n^{o} \,\|\, y_n\big) + \lambda_2 \sum_i F_i \, (\theta_i - \theta_i^{o})^2$
This hybrid method requires two regularization parameters, $\lambda_1$ and $\lambda_2$, defined for regularizing the outputs and the weights, respectively. These two parameters provide a more flexible domain expansion technique, but at the expense of more difficult hyper-parameter tuning.
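The pieces of Eq. (5) can be sketched together as follows (a minimal illustration with assumed helper names, including a temperature-controlled softmax for the constraint term as discussed in Section 2.3):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-controlled softmax; T > 1 flattens the distribution,
    raising the probabilities of small logits."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL-divergence KL(p || q) between two discrete distributions."""
    return np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)

def skld_ewc_loss(ce, orig_probs, new_probs, theta, theta_orig, fisher,
                  lam1, lam2):
    """Eq. (5): main cross-entropy `ce`, plus the SKLD output constraint
    (mean KL between original and adapted outputs, weighted by lam1),
    plus the EWC weight constraint (Fisher-weighted L2, weighted by lam2)."""
    skld = np.mean(kl(orig_probs, new_probs))
    ewc = np.sum(fisher * (theta - theta_orig) ** 2)
    return ce + lam1 * skld + lam2 * ewc
```

When the adapted model's outputs and weights match the original's, both penalty terms vanish and only the main loss remains; setting `lam1 = 0` recovers EWC and `lam2 = 0` recovers SKLD.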
3 Experiments and Results
3.1 Experimental Setup
Dataset – We evaluate the efficacy of the domain expansion techniques for a DNN-HMM-based ASR system. To train the original model for native English, we use the 100-hour part of the LIBRISPEECH corpus (Libri) [22]. This part has higher recording quality, and the speakers' accents are closer to native English than in the rest of the corpus. For the domain expansion experiments, we use the UT-CRSS-4EnglishAccent corpus [5], which contains speech data from 3 non-US English accents, namely Hispanic (HIS), Indian (IND), and Australian (AUS). The data for each accent consists of 100 speakers, with session content that consists of read and spontaneous speech. In this corpus, each accent has more than 28 h of training data, 5 h of development data, and 5 h of evaluation data. We use the standard language model (LM) provided with Libri to decode the original data (i.e., Libri) [22]. However, for the other accents, since we also have spontaneous utterances, we train a 3-gram and a 4-gram LM by pooling the transcriptions of Fisher, Switchboard, and UT-CRSS-4EnglishAccent. The decoding procedure is the same for Libri and the other accents.
Table 1: WER (%) on the original (Org) and new (New) domains, their average (Avg), and the relative average-WER degradation w.r.t. multi-condition training (Rel-MC, %).

Accent | Method     | Org   | New   | Avg   | Rel-MC
AUS    | MC         | 8.35  | 10.7  | 9.52  | —
       | Original   | 8.10  | 23.04 | 15.57 | 63.5
       | Fine-Tuned | 20.3  | 9.64  | 14.97 | 57.2
       | WCA        | 10.23 | 12.2  | 11.22 | 17.74
       | EWC        | 9.27  | 12.71 | 11    | 15.38
       | SKLD       | 8.84  | 11.23 | 10.03 | 5.35
       | SKLD-EWC   | 8.49  | 11.48 | 9.98  | 4.8
IND    | MC         | 8.29  | 15.96 | 12.12 | —
       | Original   | 8.10  | 28.68 | 18.39 | 51.6
       | Fine-Tuned | 22.54 | 15.56 | 19.05 | 57.1
       | WCA        | 10.66 | 17.61 | 14.13 | 16.5
       | EWC        | 10.26 | 17.6  | 13.93 | 14.8
       | SKLD       | 9.29  | 16.45 | 12.87 | 6.14
       | SKLD-EWC   | 8.92  | 16.61 | 12.76 | 5.2
HIS    | MC         | 8.21  | 11.65 | 9.93  | —
       | Original   | 8.10  | 20.09 | 14.1  | 41.9
       | Fine-Tuned | 16.14 | 11.52 | 13.83 | 39
       | WCA        | 8.65  | 12.84 | 10.74 | 8.2
       | EWC        | 8.72  | 12.61 | 10.66 | 7.4
       | SKLD       | 9.26  | 12.0  | 10.63 | 7.0
       | SKLD-EWC   | 8.29  | 12.5  | 10.39 | 4.7
Model structure – We implement the domain expansion techniques for a DNN-HMM-based ASR system using Kaldi [23] and TensorFlow [1]. In all experiments, we extract 40-dimensional Mel-filterbank coefficients [23] for each 25 ms frame with a skip rate of 10 ms. Each frame is expanded by stacking 5 frames from each side; therefore, the input to the network is the Mel-filterbank coefficients of 11 successive frames. The acoustic model is a 5-layer fully connected network with 1024 neurons in each hidden layer and 3440 output units that produce a distribution over senones. We use the ReLU activation function in the intermediate layers and a softmax output layer that generates the senone probabilities. We initialize all weights using the "he-normal" initialization technique [8]. The loss function for training the baseline model and for adaptation (i.e., $\mathcal{L}$ in our equations) is the cross-entropy between the force-aligned senone labels and the model outputs.
Model training – In all experiments, we use the Adam optimizer to train or adapt the DNN models [17]. For training the original model on Libri, we use a learning rate of 0.001; however, we found that smaller learning rates perform better for adapting the original model. Our initial experiments showed that a learning rate of 0.0001 is an effective choice for the domain expansion experiments. To train the original model, we employ early stopping to deal with over-training. Early stopping is performed by monitoring the performance of the model on a held-out validation set [15, 16]. However, early stopping is not efficient for the domain expansion task [20]: the data of the original domain is not available, and performing early stopping only on the data of the new domain benefits only the new domain, and may significantly reduce the model performance on the original domain [20]. In continual learning, a common approach is to perform a fixed number of iterations to train the new model [14]. In this study, we found that fine-tuning the original model converges to an optimal solution in 20 epochs. To investigate the efficacy of each approach, we evaluate performance in three independent scenarios that consider the IND, AUS, and HIS accents as the new domains.
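The 11-frame input context described above can be sketched as a simple splicing routine (the edge-padding scheme, repeating the first/last frame, is an assumption not stated in the text):

```python
import numpy as np

def splice_frames(feats, left=5, right=5):
    """Expand each frame with `left`/`right` context frames, padding the
    utterance edges by repeating the first/last frame. With 40-dim
    Mel-filterbank features and 5+5 context this yields 440-dim inputs."""
    T, D = feats.shape
    padded = np.concatenate([np.repeat(feats[:1], left, axis=0),
                             feats,
                             np.repeat(feats[-1:], right, axis=0)], axis=0)
    # One flattened window of (left + 1 + right) frames per time step.
    return np.stack([padded[t:t + left + right + 1].reshape(-1)
                     for t in range(T)])
```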
3.2 Results
We conduct several experiments to evaluate the performance of the domain expansion methods explored in this study. As mentioned in previous sections, each domain expansion technique has a controlling hyper-parameter that provides a compromise between keeping the model performance on the original data and learning the new domains. In the first set of experiments, we tune these hyper-parameters to achieve the best overall performance for both the original and the new domains. The results for all approaches are summarized in Table 1. We also report results for multi-condition training [26], in which the model is trained on both the original and new domains by pooling their data. The performance of the multi-condition system can be considered an upper bound for the performance of domain expansion methods.
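Consistent with the numbers in Table 1, the Avg column is the mean of the Org and New WERs, and Rel-MC appears to be the relative degradation of this average with respect to the multi-condition average (this interpretation is ours; the function names are illustrative):

```python
def avg_wer(org, new):
    """Average of the original-domain and new-domain WERs."""
    return (org + new) / 2.0

def rel_mc(org, new, mc_org, mc_new):
    """Relative average-WER degradation w.r.t. multi-condition training (%)."""
    mc = avg_wer(mc_org, mc_new)
    return 100.0 * (avg_wer(org, new) - mc) / mc
```

For example, WCA on the AUS accent gives `rel_mc(10.23, 12.2, 8.35, 10.7)` of about 17.74, matching the corresponding Table 1 entry.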
In Figure 1, we study the effect of varying the above-mentioned hyper-parameters for three domain expansion techniques: WCA, EWC, and SKLD. The figure shows which method provides a better trade-off between retaining the original model and learning the new domain.
Forgetting effect. The original model performs well on the Libri clean test set, which matches the training conditions; however, for the unseen domains, the performance of the model degrades significantly. Fine-tuning (FT) this model on the unseen domains results in a significant WER improvement for the new domains, but the model performance on the original domain drops dramatically (Table 1). Figure 2 shows the rate at which the information of the original data is forgotten as we fine-tune the model to the AUS accent (FT-Original). We also show how the SKLD approach performs in the same setting. SKLD successfully preserves the model performance on the original data while learning the new data. The overall performance of the model on both the old and new domains demonstrates that SKLD performs significantly better than naive fine-tuning for domain expansion.
The performance of WCA and EWC. WCA, as a naive domain expansion method, performs significantly better than fine-tuning in finding a compromise between the performance on the original and new domains. For the EWC experiments, since the diagonal of the Fisher information matrix $F$ is zero for many of the original model's weights, simply applying EWC does not preserve the model performance. We found that adding an empirically determined value of 1 to the elements of the matrix addresses this problem. EWC outperforms WCA in all experiments (Table 1 and Figure 1), which demonstrates the efficacy of the Fisher information matrix in preserving the learned information of the original data. For example, for the IND accent in Table 1, both approaches achieve 17.6 WER on the new data, while EWC achieves a relative WER improvement of +3.8% over WCA.
The performance of SKLD and hybrid SKLD-EWC. SKLD significantly outperforms all other single domain-expansion approaches, yielding relative WER improvements of +8.8% and +7.6% over EWC for the AUS and IND accents, respectively (Table 1). For the HIS accent, SKLD is still slightly better than EWC. The hybrid SKLD-EWC, which benefits from both SKLD and EWC, results in the best overall performance. Comparing the domain expansion techniques with multi-condition training in Table 1 indicates that we achieve performance comparable to multi-condition training, which uses the original training data that we consider unavailable for the domain expansion approaches.
4 Conclusions
In this paper, we explore several continual-learning-based domain expansion techniques as an effective solution to the domain mismatch problem in ASR systems. We examine the efficacy of these approaches through experiments on adapting a model trained on native English to three different English accents: Australian, Hispanic, and Indian. We demonstrate that simply adapting the original model to the target domains results in significant performance degradation of the adapted model on the original data. In contrast, we demonstrate that SKLD and hybrid SKLD-EWC are effective in adapting the native English model to the new accents while retaining the performance of the adapted model on native English. The proposed SKLD-EWC outperformed other existing approaches such as fine-tuning, WCA, and EWC.
References

[1] (2016) TensorFlow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283.
[2] (2016) Distilling knowledge from ensembles of neural networks for speech recognition. In Interspeech, pp. 3439–3443.
[3] (2017) Neurogenesis deep learning: extending deep networks to accommodate new classes. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 526–533.
[4] (2018) Advancing multi-accented LSTM-CTC speech recognition using a domain specific student-teacher learning paradigm. In 2018 IEEE Spoken Language Technology Workshop (SLT), pp. 29–35.
[5] (2018) Leveraging native language information for improved accented speech recognition. In Proc. Interspeech.
[6] (2017) Progressive neural networks for transfer learning in emotion recognition. arXiv preprint arXiv:1706.03256.
[7] (2018) Memory efficient experience replay for streaming learning. arXiv preprint arXiv:1809.05922.
[8] (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034.
[9] (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
[10] (2018) Learning interpretable control dimensions for speech synthesis by using external data. In Proc. Interspeech 2018, pp. 32–36.
[11] (2017) Unsupervised domain adaptation for robust speech recognition via variational autoencoder-based data augmentation. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 16–23.
[12] (2019) Analyzing large receptive field convolutional networks for distant speech recognition. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
[13] (2018) Less-forgetful learning for domain expansion in deep neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence.
[14] (2018) Measuring catastrophic forgetting in neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence.
[15] (2018) The PRIORI emotion dataset: linking mood to emotion detected in-the-wild. arXiv preprint arXiv:1806.10658.
[16] (2019) Jointly aligning and predicting continuous emotion annotations. IEEE Transactions on Affective Computing.
[17] (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[18] (2017) Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114 (13), pp. 3521–3526.
[19] (2006) Regularized adaptation of discriminative classifiers. In 2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. 1.
[20] (2018) Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (12), pp. 2935–2947.
[21] (2018) Continuous learning in single-incremental-task scenarios. arXiv preprint arXiv:1806.08568.
[22] (2015) Librispeech: an ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206–5210.
[23] (2011) The Kaldi speech recognition toolkit. Technical report, IEEE Signal Processing Society.
[24] (2016) Progressive neural networks. arXiv preprint arXiv:1606.04671.
[25] (2017) English conversational telephone speech recognition by humans and machines. arXiv preprint arXiv:1703.02136.
[26] (2013) An investigation of deep neural networks for noise robust speech recognition. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7398–7402.
[27] (2017) An unsupervised deep domain adaptation approach for robust speech recognition. Neurocomputing 257, pp. 79–87.
[28] (2016) Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256.
[29] (2013) KL-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7893–7897.