Unsupervised Domain Adaptation Schemes for Building ASR in Low-resource Languages

09/12/2021 ∙ by Anoop C S, et al.

Building an automatic speech recognition (ASR) system from scratch requires a large amount of annotated speech data, which is difficult to collect in many languages. However, there are cases where the low-resource language shares a common acoustic space with a high-resource language having enough annotated data to build an ASR. In such cases, we show that the domain-independent acoustic models learned from the high-resource language through unsupervised domain adaptation (UDA) schemes can enhance the performance of the ASR in the low-resource language. We use the specific example of Hindi in the source domain and Sanskrit in the target domain. We explore two architectures: i) domain adversarial training using a gradient reversal layer (GRL) and ii) domain separation networks (DSN). The GRL and DSN architectures give absolute improvements of 6.71% and 7.32% in word error rate over a baseline deep neural network model when trained on just 5.5 hours of data in the target domain. We also show that choosing a proper language (Telugu) in the source domain can bring further improvement. The results suggest that UDA schemes can be helpful in the development of ASR systems for low-resource languages, mitigating the hassle of collecting large amounts of annotated speech data.


1 Introduction

Advancements in deep learning have brought performance improvements in acoustic and language modeling, yielding robust automatic speech recognition (ASR) systems in many languages. However, such systems require a large amount of speech data and the associated transcriptions. It is tough to collect large volumes of paired speech and transcriptions for most low-resource languages. It is estimated that only about 1% of the world's languages have the minimum amount of data needed to train an ASR [2]. However, in many cases, especially for languages in south Asia, we can find a "close enough" language with the same set (or a superset) of phonemes as the low-resource language and enough resources for building an ASR. In this work, we show that a better performing ASR can be built for the low-resource language using unsupervised domain adaptation (UDA) of acoustic models from the corresponding high-resource language. This method has the benefit of modeling on real data, in comparison to data augmentation techniques like vocal tract length perturbation [12, 6], speed and tempo perturbation [13], noise addition [9], data synthesis [18], and spectral augmentation [16], where the modeling makes use of synthetic data as well.

Unsupervised domain adaptation (UDA) has been successfully applied to various tasks to alleviate the shift between the train and test distributions. [8] shows good adaptation performance in the classification task on digit image datasets having considerable domain shifts. They learn features that are discriminative for the image classification task and invariant to the domain. They introduce a gradient reversal layer (GRL) for achieving this objective. [23] reports performance improvements in speech recognition for data shifted in the domain by gender and accent. [1] employs GRL layers to reduce the mismatch between train and test domains in the task of emotion recognition from speech data. [20] uses the GRL approach to improve the word error rate (WER) in speech recognition with the source domain data as clean speech and the target domain data as contaminated speech. They also show the robustness of the approach to the domain shifts caused by the differences in datasets.

The basic UDA scheme with a GRL tries to learn domain invariant features but ignores the individual characteristics of each domain. [5] introduces domain separation networks (DSN) and shows improvements in a range of UDA scenarios in the image classification task. They learn two representations: one specific to each domain and the other common to both domains. [14] uses DSN for adaptation from clean speech to noisy speech.

In this work, we explore the feasibility of the UDA schemes in the ASR task on low-resource languages that share the same acoustic space with a high-resource language. Specifically, we place the following assumptions in the selection of high and low resource language pairs.

  1. The acoustic space spanned by the low-resource language is a subspace of that of the high-resource language. This requires the phoneme set of the low-resource language to be a subset of the high-resource language.

  2. There exists a high-resource language with a reasonable amount of paired audio data and transcriptions to build an ASR.

  3. The low-resource language has enough text data available to train the language models.

  4. The speech data available in the low-resource language is quite limited, and the transcriptions are not available.

Though the above assumptions seem quite constrictive, we can easily find a few low-resource languages in the Indian subcontinent which share a common acoustic space with a reasonably high-resource language. In this work, we use Hindi as the high-resource language and Sanskrit as the low-resource language, both belonging to the Indo-Aryan language family. Hindi is written in the Devanagari script, and many of its words are derived from Sanskrit. However, there exist substantial differences in the vocabulary and pronunciation between the two languages. One of the important differences is that the schwa, implicit in each consonant of the script, is not pronounced at the end of words and in some other contexts in Hindi. The script does not tell us when the schwa should be deleted, a phenomenon known as schwa deletion. For example, the word for "salty" is pronounced as nam'kīn in Hindi and not namakīna. Another difference is the pitch accents that are common in Sanskrit. Despite the above differences, these languages share a common phoneme set. So we can intuitively argue that there exists a domain shift between the distributions of acoustic features in Hindi and Sanskrit. This makes us believe that the speech recognition problem in Sanskrit, an extremely low-resource language, may be posed as a UDA problem from Hindi, a language with a reasonably good collection of annotated audio.
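As a toy illustration of this pronunciation split (hypothetical code, not the actual G2P scheme used in the paper; real schwa deletion is context-dependent and also applies word-medially), a word-final inherent schwa is dropped for Hindi but retained for Sanskrit:

```python
def apply_final_schwa_deletion(phonemes, language):
    """Toy rule: drop a word-final inherent schwa ('a') for Hindi;
    retain it for Sanskrit. A gross simplification for illustration."""
    if language == "hindi" and phonemes and phonemes[-1] == "a":
        return list(phonemes[:-1])
    return list(phonemes)

# Devanagari spelling of "salty" implies a final 'a' in every consonant
word = ["n", "a", "m", "a", "k", "i:", "n", "a"]
hindi_pron = apply_final_schwa_deletion(word, "hindi")       # ends in 'n'
sanskrit_pron = apply_final_schwa_deletion(word, "sanskrit") # retains final 'a'
```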

2 Unsupervised domain adaptation for acoustic modeling

We pose the problem of acoustic modeling in Sanskrit as an unsupervised domain adaptation task from Hindi. We build a deep neural network (DNN) - hidden Markov model (HMM) ASR system [11] for Sanskrit with the domain-independent acoustic models learned from Hindi through UDA approaches. We make use of the UDA schemes introduced in [8] and [5].

Figure 1: Block diagram of the basic scheme for unsupervised domain adaptation with a gradient reversal layer (GRL). θ_f, θ_y, and θ_d represent the parameters of the feature extractor, senone classifier, and domain classifier, respectively.

Figure 2: Block diagram of the domain separation network to model private and shared components.

2.1 Adversarial training using GRL

In GRL-based adversarial training, we try to learn a feature representation that is invariant to the domain but good enough for discriminating the senone labels. The neural network architecture for learning the acoustic model consists of three parts: a feature extractor, a senone classifier, and a domain classifier. A block diagram of the UDA architecture employing GRL is shown in Fig. 1. The feature extractor G_f maps the input acoustic features x to an internal representation f. The senone classifier G_y maps the output of the feature extractor to the senone labels y, whereas the domain classifier G_d maps it to the domain labels d.

During the training phase, we train the network to minimize the senone classification loss by optimizing the parameters θ_f and θ_y of the feature extractor and senone classifier. This makes the network look for features that are capable of discriminating the senone labels. To make the features domain-invariant, the parameters θ_f of the feature extractor are optimized to maximize the domain classification loss, while, at the same time, the parameters θ_d of the domain classifier are optimized to minimize the domain classification loss. This is achieved by introducing a GRL between the feature extractor G_f and the domain classifier G_d. During the forward pass, the GRL acts as an identity transform. During the backward pass, the GRL takes the gradient from the subsequent layer, multiplies it by −λ, where λ is a hyperparameter that controls the trade-off between senone discrimination and domain invariance, and passes it to the preceding layer.

During the inference time, the domain classifier and GRL are ignored. The acoustic feature vectors are passed through the feature extractor and the senone classifier, and senone labels are predicted.

2.2 Domain separation networks

A block diagram of the DSN architecture is shown in Fig. 2. DSNs model both the private and shared components of the domain representation. Private encoders E_p^t and E_p^s extract components h_p^t and h_p^s, which are specific to the target and source domains, respectively. The shared encoder E_c is common to both domains and extracts the shared components h_c^t and h_c^s. The shared decoder D tries to reconstruct the input using the private and shared components. The senone classifier G_y maps h_c^s to the senone label y. The domain classifier G_d maps the shared components h_c^s and h_c^t to their respective domain labels d.

The network is trained to minimize the following loss function with respect to the parameters of the encoders E_p^s, E_p^t, E_c, the decoder D, and the senone classifier G_y:

L = L_senone + α L_recon + β L_diff + γ L_sim    (1)

where α, β, and γ are hyperparameters. L_senone represents the senone classification loss and is applied only to the source domain. It is computed as the negative log-likelihood of the ground-truth senone labels. L_sim represents the domain adversarial similarity loss and is computed as the negative log-likelihood of the domain labels. It ensures that the shared components h_c^s and h_c^t are as similar as possible irrespective of their domain, so that the domain classifier cannot reliably predict the domain of a sample from its shared representation. The parameters of the domain classifier G_d are trained to minimize the domain classification loss, while the parameters of the shared encoder E_c are trained to maximize it. This is also accomplished with a GRL. L_diff encourages the shared component and the private component to encode different aspects of the input. This is achieved by imposing a soft subspace orthogonality constraint between the private and shared components:

L_diff = ||(H_c^s)^T H_p^s||_F^2 + ||(H_c^t)^T H_p^t||_F^2    (2)

where H_c^s, H_c^t, H_p^s, and H_p^t are matrices with the shared components h_c^s, h_c^t and the private components h_p^s, h_p^t as rows, and ||·||_F denotes the Frobenius norm. L_recon is the reconstruction loss, computed as the mean squared error (MSE) between the input x and its reconstruction x̂:

L_recon = (1/N_s) Σ_{i=1}^{N_s} ||x_i^s − x̂_i^s||_2^2 + (1/N_t) Σ_{i=1}^{N_t} ||x_i^t − x̂_i^t||_2^2    (3)

where N_s and N_t represent the number of speech frames from the source and target domains. We also validate the performance of the system with the scale-invariant mean squared error (SIMSE), which is computed as:

L_simse(x, x̂) = (1/k) ||x − x̂||_2^2 − (1/k^2) ((x − x̂) · 1_k)^2    (4)

where k is the dimension of the input vector x, 1_k is a k-dimensional vector of ones, and ||·||_2 is the L2 norm.
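The difference loss of Eq. (2) and the SIMSE of Eq. (4) can be sketched in NumPy as follows (a minimal illustration; the matrix shapes and batching conventions here are our assumptions):

```python
import numpy as np

def difference_loss(H_c, H_p):
    """Squared Frobenius norm of H_c^T H_p, as in Eq. (2). H_c and H_p are
    (batch, dim) matrices of shared and private components; the loss is
    zero when every shared dimension is uncorrelated, over the batch,
    with every private dimension."""
    M = H_c.T @ H_p
    return np.sum(M * M)

def simse(x, x_hat):
    """Scale-invariant MSE, as in Eq. (4): the MSE minus the squared mean
    error, so a constant offset between x and x_hat is not penalized."""
    d = np.asarray(x) - np.asarray(x_hat)
    k = d.size
    return (d @ d) / k - (d.sum() ** 2) / (k * k)
```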

3 Experimental setup

3.1 Datasets used for the study

We primarily use a Hindi dataset [10] in the source domain. We also use a Telugu dataset [22] in the ablation studies. Both datasets are the same as those used in the multilingual and code-switching ASR challenge at Interspeech-2021; their details are available in [4]. Both the speech data and the corresponding transcriptions are available. Hindi and Telugu audio files have sampling frequencies of 8 and 16 kHz, respectively, with 16-bit encoding. Telugu audio is downsampled to 8 kHz in our experiments. We randomly select 15,000 utterances (approximately 15 hours) from their train sets for training the domain-independent acoustic models. The senone labels required for training the acoustic model are obtained from the alignments generated by an HMM-GMM system [17] trained using Kaldi [7]. We also use a random selection of 1000 utterances from the test set to validate the domain independence of the learned features. We refer to this set as dev.

The Sanskrit dataset used in the target domain has 3395 utterances with a 16 kHz sampling frequency and 16-bit encoding. The data is randomly divided into two sets, train and test, with approximately 5.5 hours (2837 utterances) in the train set and 1 hour (558 utterances) in the test set. The train set is used for domain adaptation training, and the test set is used for inference. The data is downsampled to 8 kHz before use in all our experiments. The text corpus for building the Sanskrit language models makes use of the Sanskrit wiki data dump [25] and data from several Sanskrit websites. The extracted text is cleaned to remove unwanted characters and pre-processed to restrict the graphemes to Devanagari Unicode symbols.

3.2 Details of feature extraction

We use 40-dimensional filterbank features together with their delta and acceleration coefficients. Cepstral mean and variance normalisation is performed at the utterance level. The features are spliced with a left and right context of 5 frames each. Thus the acoustic feature vector at the input of the DNN has a dimension of 1320 (= 40 × 3 × 11). Feature extraction is performed using Kaldi [7].
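The splicing step and the resulting dimension can be sketched as follows (edge padding by repeating the first/last frame is our assumption; Kaldi's splicing may handle utterance boundaries differently):

```python
import numpy as np

def splice_frames(feats, context=5):
    """Concatenate each frame with `context` left and right neighbours.
    feats: (T, D) matrix of frame-level features."""
    T, D = feats.shape
    padded = np.vstack([np.repeat(feats[:1], context, axis=0),
                        feats,
                        np.repeat(feats[-1:], context, axis=0)])
    # column block i holds the frame at offset i - context from the centre
    return np.hstack([padded[i:i + T] for i in range(2 * context + 1)])

# 40 filterbanks + deltas + delta-deltas = 120 dims per frame
frames = np.zeros((100, 120))
spliced = splice_frames(frames)  # shape (100, 1320) = 120 * 11
```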

3.3 Training of the GRL model

The feature extractor G_f has six hidden layers with 1024 nodes in each layer. The input to G_f is a 1320-dimensional acoustic feature vector, and the output is a 1024-dimensional feature vector f. The feature vector f is forwarded to both the senone classifier G_y and the domain classifier G_d. The senone classifier has two hidden layers, each with 1024 nodes, and an output layer with 3080 nodes (equal to the number of senones in the Hindi training data). The domain classifier has a hidden layer with 256 nodes and an output layer with two nodes corresponding to the source and target domains. All the hidden layers are followed by batch normalization and ReLU activation. The logarithm of the softmax is computed at the output of both the domain and senone classifiers. All the parameters (θ_f, θ_y, and θ_d) are updated during training with Hindi utterances. Only θ_f and θ_d are updated during training with unlabelled Sanskrit utterances.

Negative log-likelihood loss is used for training. The models are trained using stochastic gradient descent with momentum [21]. We use a batch size of 32 and an initial learning rate of 0.01. The learning rate is scaled by a factor of 0.95 after every 20000 steps. Training is performed for 20 epochs. The same number of frames from the source and target domains are used for training at every epoch. The domain adaptation factor λ is gradually increased from 0 to 1 using the approach in [8].
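The schedule in [8] sets λ_p = 2 / (1 + exp(−γ·p)) − 1, where p ∈ [0, 1] is the training progress. A sketch (γ = 10 is the value used in [8]; mapping steps to progress this way is our assumption):

```python
import math

def grl_lambda(step, total_steps, gamma=10.0):
    """Domain adaptation factor schedule from Ganin & Lempitsky [8]:
    starts at 0 and saturates towards 1 as training progresses."""
    p = min(step / total_steps, 1.0)  # training progress in [0, 1]
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0
```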

3.4 Training of the DSN model

Acoustic frames from the source and target domains, both of dimension 1320, are input to the DSN. The private encoders for the source and target domains have four hidden layers with 512 nodes in each layer. The shared encoder has six hidden layers with 1024 nodes each. The senone and domain classifiers have the same architecture as in the GRL model. All the hidden layers are followed by batch normalization and ReLU activation. The shared decoder has three hidden layers and an output layer with 1320 nodes.

The hyperparameters α, β, and γ are chosen as 0.25, 0.075, and 0.1, respectively. In order to promote the learning of the senone classifier in the initial phase of training, the domain adversarial similarity loss is activated only after 10000 steps. The rest of the training process is the same as for the GRL model.
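Putting the pieces together, the weighted loss and the delayed activation of the adversarial term can be sketched as (the assignment of α, β, and γ to the reconstruction, difference, and similarity terms follows the DSN paper [5]; this gating is one plausible reading of "activated only after 10000 steps"):

```python
def dsn_total_loss(l_senone, l_recon, l_diff, l_sim, step,
                   alpha=0.25, beta=0.075, gamma=0.1, warmup_steps=10000):
    """Weighted DSN objective: the adversarial similarity term is gated
    off for the first `warmup_steps` so the senone classifier can
    stabilize before domain-invariance pressure is applied."""
    sim_term = gamma * l_sim if step >= warmup_steps else 0.0
    return l_senone + alpha * l_recon + beta * l_diff + sim_term
```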

3.5 Decoding

During the inference stage, only the output of the senone classifier is considered. The pre-softmax output of the senone classifier is normalized using the log probability of the senone priors. To find the most probable word sequence, we use decoding based on weighted finite-state transducers (WFSTs) [15].

The vocabulary of Sanskrit is distinct from that of Hindi, so the FSTs for grammar (G) and lexicon (L) are built using the text corpus collected in Sanskrit (target domain). The pronunciation dictionary for building the L-FST uses the grapheme-to-phoneme (G2P) mapping scheme of Sanskrit, which differs from the Hindi G2P scheme in aspects like schwa deletion and the pronunciation of visargas [3]. The HMM (H) and context-dependency (C) FSTs are created using the HMMs learned from the source domain data. These four FSTs are composed to form a single HCLG graph, which maps the senones directly to the words in the target domain.

4 Results

We decode the Sanskrit test set of 558 utterances with the adversarially trained GRL and DSN architectures. These models use both the labeled Hindi data and the unlabelled Sanskrit data for training. In order to benchmark the performance of these UDA models, we decode the utterances using a simple DNN model trained only with the Hindi speech data. In this model, the domain classifier is not part of the network architecture. We also compare our results with a DNN model trained in multi-task (MT) learning setup, training the whole network to minimize both the senone and domain classification objectives. This network has the same architecture as the network in Fig. 1, except that it does not have the GRL. This model also makes use of both the Hindi and Sanskrit data.

Model Source Target I II
DNN Hindi - 24.58% 16.14%
MT Hindi Sanskrit 21.43% 13.10%
GRL Hindi Sanskrit 17.87% 10.22%
DSN Hindi Sanskrit 17.26% 9.89%
Table 1: WER on the Sanskrit-test set. Column I gives the WER when the text corpus for creating the L and G FSTs includes the wiki data dump and web-crawled data (approximately 436,840 words). Column II gives the WER when the text corpus is restricted to the transcriptions of the Sanskrit speech corpus (approximately 12,250 words).

The performance measure used to evaluate these models is word error rate (WER). The results are shown in Table 1 for two cases: i) when the language models are trained on the whole text corpus made out of wiki data dump and web-crawling (column I) and ii) when the language models are trained only on the transcriptions of the Sanskrit speech corpus (column II).

The GRL approach gives an absolute improvement of 6.71% in WER over the baseline DNN model when the language model is trained on the large Sanskrit text corpus (column I of Table 1). DSN provides an absolute improvement of 7.32%. The two UDA models perform comparably, and both beat the multi-task learning model by more than 3.5%. When the language model training is restricted to the transcriptions of the Sanskrit speech corpus, the performance of all the models improves, as expected. However, the UDA approaches still retain an edge of about 3% over the MT model. We also tried to fine-tune the UDA models using senone labels for the Sanskrit train set computed from the DNN models trained on Hindi, but this did not improve performance.

4.1 Ablation studies

In all the experiments below, we compute WER using the bi-gram language models trained on the larger text corpus associated with column I of Table 1.

4.1.1 Effect of the amount of unlabelled training data from the target domain

Next, we address the question of the amount of unlabelled data required for proper adaptation. We split the training set in the target domain into six sets with 0.5, 1.5, 2.5, 3.5, 4.5, and 5.5 hours of data and train the UDA models using them. The entire source domain data is used for training. The performance of these models in terms of WER is shown in Fig. 3. The performance of both the models improves as the amount of unlabelled training data increases but nearly saturates after about 2.5 hours of data in the target domain (Sanskrit).

Figure 3: Effect of the amount of unlabelled training data from the target domain on the WER.

4.1.2 Domain independence of features

We also visualize the features at the output of the feature extractor (the shared encoder for DSN) for our models. We collect an equal number of frames with the same senone label (based on the HMM-GMM alignment) from the Hindi-dev and Sanskrit-test sets and plot the vectors at the output of the feature extractor (or shared encoder) using t-SNE [24]. The results are shown in Fig. 4. Compared to Figs. 4(a) and 4(b), the features in Figs. 4(c) and 4(d) are more intermingled, indicating the domain independence of the features. The domain-discriminative power is highest for the features from the MT model, as seen from Fig. 4(b).

Figure 4: 2-D t-SNE plots of features at the output of feature extractor/shared encoder in (a) baseline DNN (trained only with Hindi data), (b) MT, (c) GRL, and (d) DSN models for frames with senone label 3009. H and S denote the frames obtained from the Hindi-dev set (source domain) and Sanskrit-test set (target domain), respectively.

To verify the extent of domain independence achieved by the models, we compute the frame-level domain accuracy on the Hindi-dev and Sanskrit-test sets. The results are listed in Table 2. The features from the UDA models are much more domain-independent than those from the MT model, with accuracies much closer to chance level (50%).

Domain Accuracy
Model Sanskrit-test Hindi-dev
MT 91.86% 76.31%
GRL 63.77% 44.94%
DSN 63.21% 52.18%
Table 2: Frame-level domain accuracy of the UDA models computed on the Sanskrit-test and Hindi-dev sets.

4.1.3 Effect of the loss functions in DSN

Next, we experiment with the constituents of the loss function in DSN. We train four models: (i) with all the loss terms in DSN, (ii) without the difference loss (β = 0), (iii) without the similarity loss (γ = 0), and (iv) with the reconstruction loss computed as SIMSE. The results are listed in Table 3. The performance degrades slightly in the absence of the difference loss, which tries to enhance the orthogonality between the private and shared components. There is considerable degradation in performance in the absence of the similarity loss. The results are still better than the baseline DNN and MT models (refer to column I in Table 1), indicating the usefulness of the private and shared component decomposition in DSN. The model using SIMSE as the reconstruction loss performs worse than the one using MSE.

Loss functions WER
All terms included 17.26%
Without difference loss (β = 0) 18.10%
Without similarity loss (γ = 0) 20.37%
Reconstruction loss = SIMSE 18.00%
Table 3: Effect of the different constituents of the loss function on the performance of the DSN.

4.1.4 Source domain language selection

Though both Hindi and Sanskrit are written in Devanagari, they differ in pronunciation, as pointed out in section 1. However, [19] suggests that Telugu and Malayalam are the closest languages to Sanskrit in terms of pronunciation, vocabulary, and grammar. Moreover, the schwa is retained in Dravidian languages like Telugu and Malayalam, just as in Sanskrit. Since suitable datasets are available in Telugu, we repeat the experiments with Telugu in the source domain, hoping for better acoustic and pronunciation models in Sanskrit. The results are shown in Table 4. All the models improve, as the phone HMM models learned from Telugu better match Sanskrit. Here too, the UDA models perform better than the DNN and MT models. The UDA approaches improve by around 3.5-4.5% compared to adaptation from Hindi.

Model Source Target WER
DNN Telugu - 17.65%
MT Telugu Sanskrit 14.71%
GRL Telugu Sanskrit 13.09%
DSN Telugu Sanskrit 13.72%
Table 4: WER on the Sanskrit-test set when trained with Telugu as the source domain language.

5 Conclusions

In this work, we propose UDA as an option to tackle the scarcity of data in low-resource languages which share a common acoustic space with a high-resource language. We experiment with Hindi as the source domain language and Sanskrit as the target domain language. GRL and DSN models improve the WER by 6.71% and 7.32%, respectively, compared to a baseline DNN model trained only on Hindi. The models perform better than the multi-task learning framework. Proper selection of source domain language (Telugu in our case) further improves the results. The results indicate that UDA can provide a faster way of building ASR systems in low-resource languages, reducing the hassle of collecting large amounts of annotated training data if a suitable high-resource language is available.

References

  • [1] M. Abdel Wahab and C. Busso (2018) Domain adversarial for acoustic emotion recognition. IEEE/ACM Transactions on Audio Speech and Language Processing 26 (12), pp. 2423–2435. External Links: Document Cited by: §1.
  • [2] G. Adda et al. (2016) Breaking the unwritten language barrier: the BULB project. Procedia Computer Science 81, pp. 8–14. External Links: Document Cited by: §1.
  • [3] C. S. Anoop and A. G. Ramakrishnan (2019) Automatic speech recognition for Sanskrit. 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT) 1, pp. 1146–1151. External Links: Document Cited by: §3.5.
  • [4] D. Anuj et al. (2021) Multilingual and code-switching ASR challenges for low resource Indian languages. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Cited by: §3.1.
  • [5] K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan (2016) Domain separation networks. Advances in Neural Information Processing Systems, pp. 343–351. Cited by: §1, §2.
  • [6] X. Cui, V. Goel, and B. Kingsbury (2015) Data augmentation for deep neural network acoustic modeling. IEEE Transactions on Audio, Speech and Language Processing 23 (9), pp. 1469–1477. External Links: Document Cited by: §1.
  • [7] D. Povey et al. (2011) The Kaldi speech recognition toolkit. IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. Cited by: §3.1, §3.2.
  • [8] Y. Ganin and V. Lempitsky (2015) Unsupervised domain adaptation by backpropagation. 32nd International Conference on Machine Learning, ICML 2015, 2, pp. 1180–1189. Cited by: §1, §2, §3.3.
  • [9] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, and A. Y. Ng (2014) Deep speech: scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567. External Links: 1412.5567 Cited by: §1.
  • [10] Hindi dataset. Note: https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html Cited by: §3.1.
  • [11] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Processing Magazine 29 (6), pp. 82–97. External Links: Document Cited by: §2.
  • [12] N. Jaitly and G. E. Hinton (2013) Vocal tract length perturbation (VTLP) improves speech recognition. International Conference on Machine Learning (ICML) Workshop on deep learning for audio, speech, and language processing. Cited by: §1.
  • [13] T. Ko, V. Peddinti, D. Povey, and S. Khudanpur (2015) Audio augmentation for speech recognition. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Cited by: §1.
  • [14] Z. Meng, Z. Chen, V. Mazalov, J. Li, and Y. Gong (2018) Unsupervised adaptation with domain separation networks for robust speech recognition. 2017 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2017 - Proceedings 2018-January, pp. 214–221. External Links: Document Cited by: §1.
  • [15] M. Mohri, F. Pereira, and M. Riley (2002) Weighted finite-state transducers in speech recognition. Computer, Speech & Language 16 (1), pp. 69–88. External Links: ISSN 0885-2308, Document Cited by: §3.5.
  • [16] D. S. Park, W. Chan, Y. Zhang, C. C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le (2019-09) SpecAugment: a simple data augmentation method for automatic speech recognition. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. External Links: Document Cited by: §1.
  • [17] L. R. Rabiner (1989) A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77 (2), pp. 257–286. External Links: Document Cited by: §3.1.
  • [18] A. Ragni, K. Knill, S. Rath, and M.J.F. Gales (2014-01) Data augmentation for low resource languages. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pp. 810–814. Cited by: §1.
  • [19] P. Sreekumar (2015) Sanskrit superstratum and the development of Telugu and Malayalam: a comparative note. International School of Dravidian Linguistics, VI Subramoniam commemoration: Studies on Dravidian, vol 1, pp. 169–180, ISBN 81-85692-60-2. Cited by: §4.1.4.
  • [20] S. Sun, B. Zhang, L. Xie, and Y. Zhang (2017) An unsupervised deep domain adaptation approach for robust speech recognition. Neurocomputing 257, pp. 79–87. External Links: Document Cited by: §1.
  • [21] I. Sutskever, J. Martens, G. Dahl, and G. Hinton (2013) On the importance of initialization and momentum in deep learning. 30th International Conference on Machine Learning, ICML 2013 (PART 3), pp. 2176–2184. Cited by: §3.3.
  • [22] Telugu dataset. Note: Data provided by Microsoft and SpeechOcean.com. https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html Cited by: §3.1.
  • [23] A. Tripathi, A. Mohan, S. Anand, and M. Singh (2018) Adversarial learning of raw speech features for domain invariant speech recognition. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2018-April, pp. 5959–5963. External Links: Document Cited by: §1.
  • [24] L. Van Der Maaten and G. Hinton (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9, pp. 2579–2625. Cited by: §4.1.2.
  • [25] Wiki Sanskrit data dump. Note: https://dumps.wikimedia.org/sawiki/ Cited by: §3.1.