Domain Expansion in DNN-based Acoustic Models for Robust Speech Recognition

10/01/2019 · Shahram Ghorbani, et al.

Training acoustic models with sequentially incoming data, while both leveraging new data and avoiding the forgetting effect, is an essential challenge on the path to human-level performance in speech recognition. An obvious approach to leveraging data from a new domain (e.g., new accented speech) is to first generate a comprehensive dataset of all domains, by combining all available data, and then use this dataset to retrain the acoustic models. However, as the amount of training data grows, storing and retraining on such a large-scale dataset becomes practically impossible. To deal with this problem, we study several domain expansion techniques that exploit only the data of the new domain to build a stronger model for all domains. These techniques aim to learn the new domain with a minimal forgetting effect (i.e., they maintain original model performance). They modify the adaptation procedure by imposing new constraints, including (1) weight constraint adaptation (WCA): keeping the model parameters close to the original model parameters; (2) elastic weight consolidation (EWC): slowing down training for parameters that are important for previously established domains; (3) soft KL-divergence (SKLD): restricting the KL-divergence between the original and the adapted model output distributions; and (4) hybrid SKLD-EWC: incorporating both SKLD and EWC constraints. We evaluate these techniques in an accent adaptation task in which we adapt a deep neural network (DNN) acoustic model trained on native English to three different English accents: Australian, Hispanic, and Indian. The experimental results show that SKLD significantly outperforms EWC, and EWC works better than WCA. The hybrid SKLD-EWC technique results in the best overall performance.

1 Introduction

Current state-of-the-art neural network-based ASR systems have advanced to nearly human performance in several evaluation settings [25, 28]; however, these systems perform poorly on domains (in this paper, we use the term "domain" to refer to a group of utterances that share some common characteristics) that are not included in the original training data [4, 27, 11, 12]. For example, if we train an ASR system on a U.S. English dataset, the performance of the system degrades significantly for other English accents (e.g., Australian, Indian, and Hispanic). To improve the performance of the system for an unseen domain, we can adapt the previously trained model to capture the statistics of the new domain. However, adaptation techniques suffer from the forgetting effect: previously learned information is lost as the new information is learned. We need an ASR system that not only performs well for the new domain, but also retains its performance on previously seen domains. This is the goal of domain expansion methods.

Domain Expansion – In a domain expansion scenario, we are given a model trained on an initial domain and a dataset for an unseen domain; the goal is to modify the model such that it performs well for both domains. The main difficulty of domain expansion is to preserve the functionality (input-output mapping) of the original model, i.e., to mitigate the forgetting problem. Many approaches have been proposed to deal with the forgetting problem in neural networks. These approaches can be divided into three categories: architectural, rehearsal, and regularization strategies.

1.1 Architectural strategies

In this class of methods, architectures of neural networks are modified to mitigate the forgetting problem. Progressive neural network (PNN) [24] is a popular architectural strategy; it freezes the previously trained network and uses its intermediate representations as inputs into a new smaller network. PNN has been applied in many different applications including speech synthesis [10], speaker identification [6] and speech emotion recognition [6]. However, it has been shown that PNN is not efficient for long sequences of domains, since the number of weights in PNN increases linearly with the number of domains [24].

1.2 Rehearsal strategies

These approaches store part of the previous training data and periodically replay it in future training. A full rehearsal strategy can alleviate the forgetting effect, but it is very slow and memory intensive. Hayes et al. proposed ExStream, a partitioning-based approach, to address the memory problem of the full rehearsal strategy [7]. In another approach, [3] proposed to train an encoder-decoder model that distills information contained in the previous domains. Their method uses the trained encoder-decoder to simulate pseudo patterns of the previous domains and exploits these pseudo patterns during training on the new domain.

1.3 Regularization strategies

Regularization refers to a set of techniques that alleviate the forgetting effect by imposing additional constraints on parameter updates. A straightforward constraint is weight constraint adaptation (WCA), which penalizes the deviation of the model parameters from the original model parameters; it adds an $\ell_2$ distance between the original and adapted weights [19]. Another popular regularization approach is learning without forgetting (LWF) [20], which tries to learn a sequence of related tasks without losing performance on the older ones by imposing output stability. Jung et al. [13] explored the domain expansion problem for image classification tasks; they used an $\ell_2$ distance between the final hidden representations of the original network and the adapted network. Kirkpatrick et al. [18] introduced elastic weight consolidation (EWC), which selectively slows down training for weights that are important for older domains.

In this study, we explore approaches to address the domain expansion problem for deep neural network (DNN)-based acoustic models. To the best of our knowledge, this is the first study that explores domain expansion for speech recognition. We investigate several existing and newly proposed regularization strategies to alleviate the forgetting effect in domain expansion. We employ WCA and EWC as baseline techniques for the domain expansion problem, and we propose two new domain expansion techniques: soft KL-divergence (SKLD) and hybrid SKLD-EWC. SKLD penalizes the KL-divergence (KLD) between the original model's output and the adapted model's output as a measure of model deviation. We demonstrate that SKLD and EWC are complementary, and that combining them leads to a better domain expansion technique, which we refer to as SKLD-EWC. We compare the efficacy of these methods on an accent adaptation task in which we adapt a DNN acoustic model trained on native English to three different English accents: Australian, Hispanic, and Indian. Our results show that the proposed hybrid technique, SKLD-EWC, results in the best overall performance, and that SKLD performs significantly better than EWC and WCA.

2 Domain Expansion Approaches

In this section, we explain details of four domain expansion techniques (i.e., weight constraint adaptation (WCA), elastic weight consolidation (EWC), soft KL-Divergence (SKLD), and hybrid SKLD-EWC) that we investigate in this study.

Problem Setup – In the domain expansion task, we are given an original model $M_o$, trained on an original domain $D_o$, and a dataset for an unseen domain $D_n$; the goal is to find a new model $M_n$ that performs well for both $D_o$ and $D_n$.

2.1 Weight Constraint Adaptation (WCA)

WCA was first proposed in [19] to regularize the adaptation process for discriminative classifiers. In another study [20], WCA was employed for continual learning over a sequence of disjoint tasks. This technique tries to find a solution that performs well for the new domain $D_n$ while remaining close to the original model $M_o$.

According to [18], for a given neural network architecture, there are many configurations of model parameters that lead to comparable performance. Therefore, there are many configurations that can efficiently represent the new domain $D_n$. Among such configurations, an effective solution for domain expansion is the one that stays closest to the original model $M_o$. Different distance metrics can be used to measure the similarity between models; WCA uses the Euclidean distance between the learnable parameters of $M_n$ and $M_o$. This idea can be implemented by imposing an additional $\ell_2$ constraint on the optimization loss function of the neural network:

$$\mathcal{L}_{WCA}(\theta_n) \;=\; \mathcal{L}(\theta_n) \;+\; \lambda \,\big\lVert \theta_n - \theta_o \big\rVert_2^2 \qquad (1)$$

where $\theta_n$ and $\theta_o$ are the learnable parameters of $M_n$ and $M_o$, respectively; $\mathcal{L}$ is the main optimization loss (the cross-entropy loss function); $\mathcal{L}_{WCA}$ is the regularized loss with the WCA technique; $\lVert\cdot\rVert_2$ is the $\ell_2$ norm; and $\lambda$ is a regularization parameter that determines how far the parameters may diverge from their initial values to learn the new domain.
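As an illustration, the WCA objective of Eq. (1) can be written in a few lines of TensorFlow. This is a minimal sketch, not the paper's code: the names (`model`, `original_weights`, `wca_lambda`) are illustrative, and a frozen snapshot of the original model's weights is assumed to be kept in memory.

```python
import tensorflow as tf

def wca_loss(model, original_weights, features, senone_labels, wca_lambda=0.1):
    """Cross-entropy loss plus the WCA penalty of Eq. (1).

    original_weights: list of tf.Tensor snapshots of the original model's
    trainable variables, taken before adaptation starts.
    """
    logits = model(features, training=True)
    ce = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=senone_labels, logits=logits))

    # Squared L2 distance between current and original parameters.
    penalty = tf.add_n([
        tf.reduce_sum(tf.square(w - w0))
        for w, w0 in zip(model.trainable_variables, original_weights)])

    return ce + wca_lambda * penalty
```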

2.2 Elastic Weight Consolidation (EWC)

The WCA technique considers all weights equally. Therefore, it is unable to find an efficient compromise between maintaining model performance for the original domain $D_o$ and learning the new domain $D_n$. However, not all weights are equally important, and an approach that takes weight importance into account should perform better than the naive WCA.

Intuitively, after training a DNN for a sufficient number of iterations, the model converges to a local minimum of the optimization landscape. At such a point, the sensitivity of the loss function w.r.t. the $i$-th learnable weight, $\theta_i$, can be measured by the curvature of the loss function along the direction specified by changes in $\theta_i$. High curvature for a weight means that the loss function is sensitive to small changes of that weight. Therefore, to preserve the performance of the network on the previous domain, we must avoid modifying parameters with high curvature. On the other hand, parameters with low curvature are good candidates to be tuned with new data without losing model performance on the original data.

The curvature of the loss function is equivalent to the diagonal of the Fisher information matrix $F$ [21]. EWC offers a straightforward method to incorporate the importance of the learnable weights (the curvature of the loss function w.r.t. the weights) into the adaptation process. The method is similar to WCA; the only difference is that EWC employs a weighted $\ell_2$ norm instead of the plain $\ell_2$ norm used in WCA:

$$\mathcal{L}_{EWC}(\theta_n) \;=\; \mathcal{L}(\theta_n) \;+\; \lambda \sum_{i} F_i \,\big(\theta_{n,i} - \theta_{o,i}\big)^2 \qquad (2)$$

where $F_i$ is the $i$-th element of the diagonal of the Fisher information matrix (representing the importance of the $i$-th learnable weight); $\theta_{n,i}$ and $\theta_{o,i}$ are the $i$-th weights of the new and original models, respectively; and the summation is taken over all learnable weights of the network.

$F_i$ can be easily calculated as the variance of the first-order derivatives of the loss function w.r.t. the learnable weights (i.e., $F_i = \operatorname{Var}\{\partial \mathcal{L} / \partial \theta_i\}$) [21].
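The sketch below shows one way the diagonal Fisher terms and the EWC penalty of Eq. (2) might be estimated in TensorFlow. It is an illustrative approximation (per-batch gradients stand in for per-sample gradients), not the authors' implementation; the `offset` argument reflects the constant of 1 that Section 3.2 reports adding to the Fisher elements.

```python
import tensorflow as tf

def estimate_fisher_diagonal(model, dataset, loss_fn, offset=1.0):
    """Approximate the diagonal Fisher as the variance of gradients of the
    loss w.r.t. each weight, estimated over batches of original-domain data.

    loss_fn(logits, labels) is assumed to return a scalar loss.
    """
    grads_sum = [tf.zeros_like(v) for v in model.trainable_variables]
    grads_sq_sum = [tf.zeros_like(v) for v in model.trainable_variables]
    n_batches = 0
    for features, labels in dataset:
        with tf.GradientTape() as tape:
            loss = loss_fn(model(features, training=False), labels)
        grads = tape.gradient(loss, model.trainable_variables)
        grads_sum = [s + g for s, g in zip(grads_sum, grads)]
        grads_sq_sum = [s + tf.square(g) for s, g in zip(grads_sq_sum, grads)]
        n_batches += 1
    # Var[g] = E[g^2] - E[g]^2; the offset keeps zero-curvature weights regularized.
    return [sq / n_batches - tf.square(s / n_batches) + offset
            for s, sq in zip(grads_sum, grads_sq_sum)]

def ewc_penalty(model, original_weights, fisher):
    """Weighted squared distance of Eq. (2)."""
    return tf.add_n([
        tf.reduce_sum(f * tf.square(w - w0))
        for f, w, w0 in zip(fisher, model.trainable_variables, original_weights)])
```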

2.3 Soft KL-Divergence (SKLD)

A major difficulty of the domain expansion task is to preserve the functionality (input-output mapping) of the original model $M_o$. WCA and EWC achieve this by providing a link between the learnable weights of the new model $M_n$ and the original model $M_o$. According to the experiments performed in [13], linking the learnable parameters is not an efficient way of preserving the functionality of the network, since applying slight changes to some of the parameters may significantly modify the input-output mapping of the network. Another method for preserving the functionality of $M_o$ is to impose new constraints on the outputs of the model [29]. By constraining the outputs of $M_n$ to be consistent with the outputs of $M_o$, we can ensure that the two models are similar to each other. SKLD leverages this idea in two steps: (1) it takes the original model $M_o$ and the data of the new domain $D_n$, and generates the outputs of $M_o$ for all samples of the dataset; (2) SKLD then trains the new model $M_n$ by initializing it from $M_o$ and using a regularized loss function that can be expressed as:

$$\mathcal{L}_{SKLD}(\theta_n) \;=\; \mathcal{L}(\theta_n) \;+\; \frac{\lambda}{N} \sum_{i=1}^{N} D_{KL}\!\big(y_o^{(i)} \,\big\|\, y_n^{(i)}\big) \qquad (3)$$

where $N$ is the total number of samples; $D_{KL}$ is the KL distance; $x_i$ is the $i$-th input feature vector; $y_o^{(i)}$ and $y_n^{(i)}$ are the outputs of the original and the new models obtained for the $i$-th sample $x_i$; and $\lambda$ is a regularization hyper-parameter that provides a compromise between learning the new domain (by optimizing $\mathcal{L}$) and preserving the input-output mapping of the original model (by optimizing the KL term). Setting $\lambda = 0$ results in the conventional pre-training/fine-tuning adaptation. By increasing the value of $\lambda$, we can ensure a balanced trade-off between learning the new domain and mitigating the forgetting effect. For this study, we tune $\lambda$ to achieve the best performance for both domains.

Equation (3) uses the KL divergence between $y_o$ and $y_n$ to deal with the forgetting problem. However, some terms of the KL divergence do not depend on the learnable parameters of $M_n$. In [2], it is demonstrated that removing these terms simplifies the KL divergence to the cross-entropy between the two output distributions:

$$-\sum_{c=1}^{C} y_{o,c}^{(i)} \,\log y_{n,c}^{(i)} \qquad (4)$$

where $C$ is the total number of classes, and $y_{o,c}^{(i)}$ and $y_{n,c}^{(i)}$ are the probabilities of the $c$-th class generated by $M_o$ and $M_n$ for an input vector $x_i$.

In neural networks, we typically use a softmax with temperature $T = 1$ to produce the probability of each class. However, Hinton et al. [9] suggested that using $T > 1$, which increases the probabilities assigned to small logits, performs better when transferring the functionality of a large network to a smaller one. Therefore, we also consider tuning the temperature to examine its effect on preserving the functionality of the model for the original data. We use a softmax with adjustable temperature $T$ to produce the output distributions of both $M_o$ and $M_n$ in the constraint term of equation (4). Note that we use $T = 1$ for the main loss $\mathcal{L}$ as well as in the evaluation phase.
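A sketch of the SKLD objective with a temperature-softened constraint term, following Eqs. (3)-(4) and the temperature discussion above. Names and default hyper-parameter values (`skld_lambda`, `temperature`) are illustrative assumptions; the main cross-entropy term uses $T = 1$, and the original model's logits for each batch are assumed to be available.

```python
import tensorflow as tf

def skld_loss(model, features, senone_labels, original_logits,
              skld_lambda=0.5, temperature=2.0):
    """Cross-entropy on the new domain plus a soft constraint that ties the
    adapted model's outputs to the original model's outputs (Eqs. (3)-(4))."""
    logits = model(features, training=True)

    # Main task loss with temperature T = 1 (also used at evaluation time).
    ce = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=senone_labels, logits=logits))

    # Constraint term: cross-entropy between the temperature-softened original
    # and adapted output distributions (equivalent to the KLD up to a term
    # that does not depend on the adapted model's parameters).
    soft_targets = tf.nn.softmax(original_logits / temperature)
    soft_log_probs = tf.nn.log_softmax(logits / temperature)
    distill = -tf.reduce_mean(
        tf.reduce_sum(soft_targets * soft_log_probs, axis=-1))

    return ce + skld_lambda * distill
```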

2.4 Hybrid SKLD-EWC

In the previous sections, we explained both the SKLD and EWC approaches; each has its advantages and disadvantages. The advantage of EWC is that it computes the Fisher information matrix (which quantifies the importance of the weights) from the original data during the initial training, whereas SKLD does not exploit any information about weight importance. On the other hand, EWC uses a fixed Fisher matrix estimated for the initial model; the Fisher matrix changes during the adaptation procedure, so this fixed-Fisher assumption becomes unreliable. The advantage of SKLD is that it is more effective at preserving the functionality of the original model, since its efficacy does not degrade during the adaptation procedure. We propose to combine these two techniques into a new hybrid approach, SKLD-EWC, implemented by imposing both the SKLD and EWC constraints on the tuning loss:

$$\mathcal{L}_{SKLD\text{-}EWC}(\theta_n) \;=\; \mathcal{L}(\theta_n) \;+\; \frac{\lambda_{SKLD}}{N}\sum_{i=1}^{N} D_{KL}\!\big(y_o^{(i)} \,\big\|\, y_n^{(i)}\big) \;+\; \lambda_{EWC}\sum_{j} F_j\,\big(\theta_{n,j}-\theta_{o,j}\big)^2 \qquad (5)$$

This hybrid method requires two regularization parameters, $\lambda_{SKLD}$ and $\lambda_{EWC}$, defined for regularizing the outputs and the weights, respectively. These two parameters provide a more flexible domain expansion technique, but at the expense of more difficult hyper-parameter tuning.
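Under the same illustrative setup, the hybrid objective of Eq. (5) simply sums the two penalties, reusing the `skld_loss` and `ewc_penalty` helpers sketched in the previous subsections (again, names and default values are assumptions, not from the paper).

```python
def skld_ewc_loss(model, features, senone_labels, original_logits,
                  original_weights, fisher,
                  lambda_skld=0.5, lambda_ewc=0.01, temperature=2.0):
    """Hybrid objective of Eq. (5): main loss + SKLD output constraint
    + EWC weight constraint, built from the helpers sketched above."""
    return (skld_loss(model, features, senone_labels, original_logits,
                      skld_lambda=lambda_skld, temperature=temperature)
            + lambda_ewc * ewc_penalty(model, original_weights, fisher))
```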

3 Experiments and Results

3.1 Experimental Setup

Dataset – We evaluate the efficacy of the domain expansion techniques for a DNN-HMM-based ASR system. To train the original model for native English, we use the 100h part of the LIBRISPEECH corpus (Libri) [22]. This part has higher recording quality, and the speakers' accents are closer to native English than in the rest of the corpus. For the domain expansion experiments, we use the UT-CRSS-4EnglishAccent corpus [5], which contains speech data from three non-US English accents: Hispanic (HIS), Indian (IND), and Australian (AUS). The data for each accent comes from 100 speakers, with session content consisting of read and spontaneous speech. For each accent, the corpus provides more than 28h of training data, 5h of development data, and 5h of evaluation data. We use the standard language model (LM) provided with Libri to decode the original data (i.e., Libri) [22]. For the other accents, since we also have spontaneous utterances, we train a 3-gram and a 4-gram LM by pooling the transcriptions of Fisher, Switchboard, and UT-CRSS-4EnglishAccent. The decoding procedure is the same for Libri and the other accents.

Accent   Method       Org     New     Avg     Rel-MC
AUS      MC           8.35    10.70    9.52    –
         Original     8.10    23.04   15.57   63.5
         Fine-Tuned  20.30     9.64   14.97   57.2
         WCA         10.23    12.20   11.22   17.74
         EWC          9.27    12.71   11.00   15.38
         SKLD         8.84    11.23   10.03    5.35
         SKLD-EWC     8.49    11.48    9.98    4.8
IND      MC           8.29    15.96   12.12    –
         Original     8.10    28.68   18.39   51.6
         Fine-Tuned  22.54    15.56   19.05   57.1
         WCA         10.66    17.61   14.13   16.5
         EWC         10.26    17.60   13.93   14.8
         SKLD         9.29    16.45   12.87    6.14
         SKLD-EWC     8.92    16.61   12.76    5.2
HIS      MC           8.21    11.65    9.93    –
         Original     8.10    20.09   14.10   41.9
         Fine-Tuned  16.14    11.52   13.83   39
         WCA          8.65    12.84   10.74    8.2
         EWC          8.72    12.61   10.66    7.4
         SKLD         9.26    12.00   10.63    7.0
         SKLD-EWC     8.29    12.50   10.39    4.7
Table 1: WERs of different domain expansion methods on the original (Org) and the new domains (New). For each approach, among the different settings of regularization parameters, the one that results in the best overall performance for both original and new domains (i.e., the lowest average of WERs) is reported. The relative WER increase of the methods compared to multi-condition (MC) training is also reported (Rel-MC)
Figure 1: Visualizing WER of different techniques on original and new datasets. These curves are generated by changing the hyper-parameters that control the trade-off between the performance of the original and the new domains.
Figure 2: Visualizing the forgetting effect of fine-tuning the original model to the AUS accent. The performance of SKLD in the same setting is also reported, which demonstrates the efficacy of SKLD to preserve the learned knowledge while adapting to the new domain. FT-Original: performance of the fine-tuned model on Libri; FT-New: performance of the fine-tuned model on AUS; FT-Avg: average of FT-Original and FT-new; SKLD-Original: performance of SKLD on Libri; SKLD-New: performance of SKLD on AUS; SKLD-Avg: average of SKLD-Original and SKLD-New

Model structure – We implement the domain expansion techniques for a DNN-HMM based ASR system using Kaldi [23] and TensorFlow [1]. In all experiments, we extract 40-dimensional Mel-filterbank coefficients [23] for each 25ms frame with a skip rate of 10ms. Each frame is expanded by stacking 5 frames from each side; therefore, the input to the network is the Mel-filterbank coefficients of 11 successive frames. The acoustic model is a 5-layer fully connected network with 1024 neurons in each hidden layer and 3440 output units that produce a distribution over senones. We use the "ReLU" activation function in the intermediate layers and a "softmax" output layer that generates the senone probabilities. We initialize all weights using the "he-normal" initialization technique [8]. The loss function for training the baseline model and also for adaptation (i.e., $\mathcal{L}$ in our equations) is the cross-entropy between the force-aligned senone labels and the model outputs.
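For concreteness, the acoustic model described above could be sketched in Keras as follows, assuming five 1024-unit ReLU hidden layers and an input of 11 stacked frames of 40 Mel-filterbank coefficients. The paper places a softmax at the output; in this sketch the model returns logits and the softmax is folded into the loss, a common implementation choice rather than the authors' exact setup.

```python
import tensorflow as tf

NUM_SENONES = 3440      # output units (senone posteriors)
CONTEXT_FRAMES = 11     # 5 left frames + current frame + 5 right frames
FEAT_DIM = 40           # Mel-filterbank coefficients per frame

def build_acoustic_model(num_hidden_layers=5, hidden_units=1024):
    """Fully connected DNN acoustic model: stacked filterbank frames -> senone logits."""
    inputs = tf.keras.Input(shape=(CONTEXT_FRAMES * FEAT_DIM,))
    x = inputs
    for _ in range(num_hidden_layers):
        x = tf.keras.layers.Dense(hidden_units, activation="relu",
                                  kernel_initializer="he_normal")(x)
    logits = tf.keras.layers.Dense(NUM_SENONES)(x)  # softmax applied in the loss
    return tf.keras.Model(inputs, logits)
```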

Model training – In all experiments, we use the Adam optimizer to train or adapt the DNN models [17]. For training the original model on Libri, we use a learning rate (LR) of 0.001; however, we found that smaller learning rates perform better for adapting the original model. Our initial experiments showed that a learning rate of 0.0001 is an effective choice for the model expansion experiments. To train the original model, we employ early stopping to deal with over-training; early stopping is performed by monitoring the performance of the model on a held-out validation set [15, 16]. However, early stopping is not effective for the domain expansion task [20]: since the data of the original domain is not available, performing early stopping only on the data of the new domain benefits only the new domain and may significantly reduce model performance on the original domain [20]. In continual learning, a common alternative is to perform a fixed number of iterations when training the new model [14]. In this study, we found that fine-tuning the original model converges to an optimal solution in 20 epochs. To investigate the efficacy of each approach, we evaluate performance in three independent scenarios that consider the IND, AUS, and HIS accents as the new domains.
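A minimal adaptation loop under the settings above (Adam, a learning rate of 0.0001, and a fixed 20 epochs without early stopping), using the SKLD loss from Section 2.3 as an example. The dataset pipeline and the frozen copy of the original model are assumed, and the soft targets are computed on the fly here rather than precomputed as in step (1) of Section 2.3.

```python
import tensorflow as tf

def adapt_to_new_domain(model, original_model, new_domain_dataset,
                        epochs=20, learning_rate=1e-4, skld_lambda=0.5):
    """Fine-tune the original model on the new domain with the SKLD constraint,
    running a fixed number of epochs instead of early stopping."""
    optimizer = tf.keras.optimizers.Adam(learning_rate)
    original_model.trainable = False  # frozen copy used only for soft targets

    for _ in range(epochs):
        for features, senone_labels in new_domain_dataset:
            original_logits = original_model(features, training=False)
            with tf.GradientTape() as tape:
                loss = skld_loss(model, features, senone_labels,
                                 original_logits, skld_lambda=skld_lambda)
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return model
```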

3.2 Results

We conduct several experiments to evaluate the performance of the domain expansion methods explored in this study. As mentioned in previous sections, each domain expansion technique has a controlling hyper-parameter that provides a compromise between retaining model performance for the original data and learning the new domain. In the first set of experiments, we tune these hyper-parameters to achieve the best overall performance for both the original and the new domains. The results for all approaches are summarized in Table 1. We also report the results of multi-condition training [26], in which the model is trained with data pooled from both the original and new domains. The performance of the multi-condition system can be considered an upper bound for the performance of domain expansion methods.

In Figure 1, we study the effect of changing the above-mentioned hyper-parameters for three model expansion techniques: WCA, EWC, and SKLD. This figure shows which method is better at providing a trade-off between retaining the original model and learning the new domain.

Forgetting effect. The original model performs well on the Libri clean test set, which matches the training conditions; however, its performance degrades significantly on the unseen domains. Fine-tuning (FT) this model to the unseen domains results in a significant WER improvement for the new domains, but the model performance for the original domain drops dramatically (Table 1). Figure 2 shows the rate at which the information of the original data is forgotten as we fine-tune the model to the AUS accent (FT-Original). We also show how the SKLD approach performs in the same setting: SKLD successfully preserves the model performance for the original data while learning the new data. The overall performance on both old and new domains demonstrates that SKLD performs significantly better than naive fine-tuning for domain expansion.

The performance of WCA and EWC. WCA, as a naive domain expansion method, performs significantly better than fine-tuning in finding a compromise between the performance of the original and new domains. For the EWC experiments, since the diagonal of the Fisher information matrix F is zero for many of the original model's weights, simply applying EWC does not preserve the model performance. We found that adding an empirically determined value of 1 to the elements of the matrix addresses the problem. EWC outperforms WCA in all the experiments (Table 1 and Figure 1), which demonstrates the efficacy of the Fisher information matrix in preserving the learned information of the original data. For example, for the IND accent in Table 1, both approaches achieve roughly 17.6% WER for the new data, while EWC achieves a relative WER improvement of +3.8% over WCA on the original domain.

The performance of SKLD and Hybrid SKLD-EWC. SKLD significantly outperforms all other single domain-expansion approaches, yielding relative WER improvements of +8.8% and +7.6% over EWC for the AUS and IND accents, respectively (Table 1). For the HIS accent, SKLD is still slightly better than EWC. The hybrid SKLD-EWC, which benefits from both SKLD and EWC, results in the best overall performance. Comparing the domain expansion techniques with multi-condition training in Table 1 indicates that they approach the performance of multi-condition training, even though multi-condition training uses the original training data, which we consider unavailable for the domain expansion approaches.

4 Conclusions

In this paper, we explore several continual learning-based domain expansion techniques as an effective solution to the domain mismatch problem in ASR systems. We examine the efficacy of these approaches through experiments on adapting a model trained on native English to three different English accents: Australian, Hispanic, and Indian. We demonstrate that simply adapting the original model to the target domains results in a significant performance degradation of the adapted model on the original data. In contrast, SKLD and hybrid SKLD-EWC are effective in adapting the native English model to the new accents while retaining the performance of the adapted model for native English. The proposed SKLD-EWC outperforms the other approaches considered, including fine-tuning, WCA, and EWC.

References

  • [1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. (2016) TensorFlow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283.
  • [2] Y. Chebotar and A. Waters (2016) Distilling knowledge from ensembles of neural networks for speech recognition. In Interspeech, pp. 3439–3443.
  • [3] T. J. Draelos, N. E. Miner, C. C. Lamb, J. A. Cox, C. M. Vineyard, K. D. Carlson, W. M. Severa, C. D. James, and J. B. Aimone (2017) Neurogenesis deep learning: extending deep networks to accommodate new classes. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 526–533.
  • [4] S. Ghorbani, A. E. Bulut, and J. H. Hansen (2018) Advancing multi-accented LSTM-CTC speech recognition using a domain specific student-teacher learning paradigm. In 2018 IEEE Spoken Language Technology Workshop (SLT), pp. 29–35.
  • [5] S. Ghorbani and J. H. Hansen (2018) Leveraging native language information for improved accented speech recognition. In Proc. Interspeech.
  • [6] J. Gideon, S. Khorram, Z. Aldeneh, D. Dimitriadis, and E. M. Provost (2017) Progressive neural networks for transfer learning in emotion recognition. arXiv preprint arXiv:1706.03256.
  • [7] T. L. Hayes, N. D. Cahill, and C. Kanan (2018) Memory efficient experience replay for streaming learning. arXiv preprint arXiv:1809.05922.
  • [8] K. He, X. Zhang, S. Ren, and J. Sun (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034.
  • [9] G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
  • [10] Z. Hodari, O. Watts, S. Ronanki, and S. King (2018) Learning interpretable control dimensions for speech synthesis by using external data. In Proc. Interspeech 2018, pp. 32–36.
  • [11] W. Hsu, Y. Zhang, and J. Glass (2017) Unsupervised domain adaptation for robust speech recognition via variational autoencoder-based data augmentation. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 16–23.
  • [12] S. Jafarlou, S. Khorram, V. Kothapally, and J. H. L. Hansen (2019) Analyzing large receptive field convolutional networks for distant speech recognition. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
  • [13] H. Jung, J. Ju, M. Jung, and J. Kim (2018) Less-forgetful learning for domain expansion in deep neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence.
  • [14] R. Kemker, M. McClure, A. Abitino, T. L. Hayes, and C. Kanan (2018) Measuring catastrophic forgetting in neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence.
  • [15] S. Khorram, M. Jaiswal, J. Gideon, M. McInnis, and E. M. Provost (2018) The PRIORI emotion dataset: linking mood to emotion detected in-the-wild. arXiv preprint arXiv:1806.10658.
  • [16] S. Khorram, M. McInnis, and E. M. Provost (2019) Jointly aligning and predicting continuous emotion annotations. IEEE Transactions on Affective Computing.
  • [17] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [18] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. (2017) Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114 (13), pp. 3521–3526.
  • [19] X. Li and J. Bilmes (2006) Regularized adaptation of discriminative classifiers. In 2006 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, Vol. 1, pp. I–I.
  • [20] Z. Li and D. Hoiem (2018) Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (12), pp. 2935–2947.
  • [21] D. Maltoni and V. Lomonaco (2018) Continuous learning in single-incremental-task scenarios. arXiv preprint arXiv:1806.08568.
  • [22] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur (2015) Librispeech: an ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206–5210.
  • [23] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, et al. (2011) The Kaldi speech recognition toolkit. Technical report, IEEE Signal Processing Society.
  • [24] A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell (2016) Progressive neural networks. arXiv preprint arXiv:1606.04671.
  • [25] G. Saon, G. Kurata, T. Sercu, K. Audhkhasi, S. Thomas, D. Dimitriadis, X. Cui, B. Ramabhadran, M. Picheny, L. Lim, et al. (2017) English conversational telephone speech recognition by humans and machines. arXiv preprint arXiv:1703.02136.
  • [26] M. L. Seltzer, D. Yu, and Y. Wang (2013) An investigation of deep neural networks for noise robust speech recognition. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7398–7402.
  • [27] S. Sun, B. Zhang, L. Xie, and Y. Zhang (2017) An unsupervised deep domain adaptation approach for robust speech recognition. Neurocomputing 257, pp. 79–87.
  • [28] W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, and G. Zweig (2016) Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256.
  • [29] D. Yu, K. Yao, H. Su, G. Li, and F. Seide (2013) KL-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7893–7897.