Investigating Target Set Reduction for End-to-End Speech Recognition of Hindi-English Code-Switching Data

07/15/2019 · Kunal Dhawan et al. · ERNET India

End-to-end (E2E) systems are fast replacing conventional systems in the domain of automatic speech recognition. As the target labels are learned directly from speech data, E2E systems need a larger corpus for effective training. In the context of the code-switching task, E2E systems face two challenges: (i) the expansion of the target set due to the multiple languages involved, and (ii) the lack of a sufficiently large domain-specific corpus. To address those challenges, we propose an approach for reducing the number of target labels so that E2E systems can be trained reliably on limited data. The efficacy of the proposed approach is demonstrated on two prominent architectures, namely CTC-based and attention-based E2E networks. The experimental validations are performed on a recently created Hindi-English code-switching corpus. For contrast purposes, the results for the full target set based E2E system and a hybrid DNN-HMM system are also reported.


1 Introduction

Code-switching is a common phenomenon in which people switch between languages for ease of expression Gumperz_1982_Discourse . It has been observed that people use words of a foreign language while conversing in their native tongue so as to communicate effectively with others eastman1992 ; Myers_1992_Comparing . The recent spread of urbanization and globalization has aided the growth of bilingual/multilingual communities and hence made this phenomenon more prominent. The growth of such communities has made automatic recognition of code-switching speech an important area of interest Lyu_2006_Speech ; Bhuvanagirir_2012_Mixed ; ahmed2012automatic . In India, Hindi is the native language of around of its billion population cen_1991 . A large portion of the remaining population, especially those residing in metropolitan cities, understands Hindi well. Owing to its prominence in administration, law, and the corporate world, the English language is also used by around million people in India. Thus, Indians naturally tend to use some English words within their Hindi discourse, which is referred to as Hindi-English code-switching malhotra1980hindi ; bali2014 . Despite the increasing prevalence of code-switching, research activity in this area remains somewhat limited owing to the lack of resources, especially for building robust code-switching ASR systems. We recently created a corpus, named the HingCoS corpus, to address the data scarcity in the Hindi-English code-switching domain. An initial version of the work describing the data collection for the HingCoS corpus is available at hingcos_2018 . The corpus primarily contains intra-sentential code-switching sentences; a few examples along with their English translations are shown in Table 1.

Table 1: Sample code-switching sentences in HingCoS corpus and their corresponding English translations.

End-to-end (E2E) systems are fast becoming the norm for the automatic speech recognition (ASR) task. Unlike conventional systems, E2E systems directly model the output labels given the acoustic features. This is usually achieved by employing one of two techniques: (i) connectionist temporal classification (CTC) graves2006connectionist ; graves2012sequence , and (ii) sequence-to-sequence modelling with an attention mechanism graves2014towards ; chorowski2014end ; bahdanau2014neural ; prabhavalkar2017comparison . CTC allows us to train E2E models without the alignment between input features and output labels that conventional systems require. It is used as a cost function along with a deep bidirectional long short-term memory (DBLSTM) architecture to build ASR systems. Attention-based systems consist of three modules: (i) a pyramidal BLSTM network which acts as the acoustic model encoder, (ii) an attention layer which helps choose the input frames to look at while making the current label decision, and (iii) an LSTM network which acts as the decoder.

Table 2: The top two rows show the default Hindi and English character sets, respectively. The proposed reduced target labels covering both Hindi and English sets are shown in the bottom row.

Conventionally, E2E systems have been trained with characters as output labels, which simplifies the process of data preparation. In google_grapheme_phoneme , it is shown that grapheme-based E2E ASR systems slightly outperform phoneme-based systems when a large amount of data (12,500 Hrs) is used for training. Presently, building grapheme-based systems for code-switching tasks seems infeasible for two reasons. Firstly, only a limited amount of code-switching data is available as yet. Secondly, the target set (output labels) in a code-switching task expands in proportion to the number of languages involved. To address those constraints, we explore a target set reduction scheme that exploits the acoustic similarity between the underlying languages of the code-switching task. This scheme is primarily intended to enhance the performance of code-switching E2E ASR systems. The proposed idea is validated on a Hindi-English code-switching task using both E2E networks and a hybrid deep neural network based hidden Markov model (DNN-HMM).

The remainder of this paper is organized as follows: Discussion of the proposed target set reduction scheme along with a review of CTC- and attention-based E2E ASR networks is done in Section 2. The experimental setup including system description is presented in Section 3. The results are presented and discussed in Section 4. Finally, the paper is concluded in Section 5.

2 E2E Paradigms for Code-Switch ASR

The conventional E2E ASR systems are trained directly from speech data (filterbank energies) with characters as the target labels. In the context of code-switching, a conventional E2E ASR system models the unified character set of the underlying languages. With unified character set modelling, such a system would face the following challenges:

  • More than twofold expansion of the target set.

  • Increased confusion among the target labels.

  • Requirement of more data for reliable modelling.

  • Weakening of attention mechanism, if employed.

Towards addressing the above challenges, we first propose a novel scheme for reducing the output target labels. It is followed by the descriptions of two popular E2E architectures employed to evaluate the efficacy of the proposed scheme.

2.1 Proposed Scheme for Reduction of Target Set

Despite the expansion of the target set in code-switching E2E ASR, the phone sets of the underlying languages may have significant acoustic similarity. This fact is well known and has been exploited in the creation of a common phone set across languages ramani2013common . Motivated by that, we propose a scheme for target set reduction in the code-switching E2E ASR task by creating common target labels based on acoustic similarity. In the following, the proposed scheme is explained in detail in the context of the Hindi-English code-switching ASR task, which is used in this work for experimentation. In principle, it can be extended to any other code-switching context as well.

The Hindi and English languages comprise and characters, respectively. For reference purposes, those are shown in the top two rows of Table 2. In ramani2013common , a composite phone set covering major Indian languages is proposed in the context of computer processing. Along similar lines, a phone set for English has been defined. Combining the labels for Hindi and English, a common phone set comprising elements is derived and is shown in the bottom row of Table 2. Using this common phone set, a dictionary holding the default pronunciations of all Hindi and English words in the HingCoS corpus is created. The targets for acoustic modelling are then derived by taking the pronunciation breakup of all Hindi and English words. A few example words along with their default character-level and the proposed common phone-level tokenizations are shown in Table 3. It can be observed that the considered Hindi/English words lead to unique targets when tokenized at the character level and unique targets when tokenized using the proposed scheme. For the Hindi-English code-switching task, the proposed approach results in a % reduction in the size of the target set. The importance of this reduction is enhanced by the fact that the availability of code-switching data is still limited.
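
As a toy illustration of the scheme described above, the sketch below contrasts character-level tokenization with pronunciation-based tokenization over a common phone set. The words and the lexicon entries are hypothetical placeholders, not the actual HingCoS pronunciations or the paper's phone inventory:

```python
# Hypothetical pronunciation lexicon (word -> common-phone sequence).
# These entries are illustrative only.
lexicon = {
    "phone": ["f", "ou", "n"],
    "kaam":  ["k", "aa", "m"],      # Hindi word, romanized here
    "cycle": ["s", "ai", "k", "a", "l"],
}

def char_targets(words):
    """Baseline: unified character-level targets with a word separator."""
    targets = []
    for w in words:
        targets.extend(list(w))
        targets.append("_")          # word separator label
    return targets[:-1]

def phone_targets(words, lexicon):
    """Proposed: common phone-level targets via the pronunciation lexicon."""
    targets = []
    for w in words:
        targets.extend(lexicon[w])
        targets.append("_")
    return targets[:-1]

words = ["phone", "kaam", "cycle"]
chars = char_targets(words)
phones = phone_targets(words, lexicon)

# The shared phone inventory is typically smaller than the union of the
# two scripts' character sets; here 11 unique labels instead of 12.
print(len(set(chars)), len(set(phones)))
```

Over the full bilingual vocabulary, where both scripts contribute their entire character sets, the shrinkage of the output layer is considerably larger than in this three-word toy.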

Table 3: Sample examples of the proposed common phone-level labelling and the existing character-level labelling schemes for E2E ASR system training. Note that, for the given words, tokenization at the character level yields unique targets, whereas the proposed scheme yields unique targets.

2.2 CTC-based architecture

CTC-based E2E ASR systems consist of a DBLSTM encoder that is trained to minimize the CTC cost function. Both components are described below.

2.2.1 DBLSTM network

Deep bidirectional long short-term memory (DBLSTM) is a prominent sequence modelling architecture. It combines the advantage of multiple levels of representation, which comes from the use of a deep network, with the long-range context enabled by recurrent neural networks (RNNs). Conventional RNNs process sequence data from left to right, thus making use of only the past context. In speech recognition tasks, the future context can also be useful. Bidirectional RNNs process the input data in both directions with separate hidden layers, which are fed forward to the same output layer. The following equations illustrate the calculation of the forward and backward activations:

$$\overrightarrow{h}_t = \mathcal{H}\big(W_{x\overrightarrow{h}} x_t + W_{\overrightarrow{h}\overrightarrow{h}} \overrightarrow{h}_{t-1} + b_{\overrightarrow{h}}\big)$$
$$\overleftarrow{h}_t = \mathcal{H}\big(W_{x\overleftarrow{h}} x_t + W_{\overleftarrow{h}\overleftarrow{h}} \overleftarrow{h}_{t+1} + b_{\overleftarrow{h}}\big)$$

where $\overrightarrow{h}_t$ and $\overleftarrow{h}_t$ represent the forward and backward activations, respectively. The other terms have their conventional meanings as defined in graves2013hybrid . The output layer is given by

$$y_t = W_{\overrightarrow{h}y} \overrightarrow{h}_t + W_{\overleftarrow{h}y} \overleftarrow{h}_t + b_y$$

The network is trained to minimize the CTC loss function as explained in the following section.
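
The forward and backward activations described above can be made concrete with a toy bidirectional layer. In the sketch below, a tanh cell stands in for the LSTM cell, biases are omitted, the two state sequences are simply concatenated before the output layer, and all dimensions and weights are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_h, T = 3, 5, 6   # toy input dim, hidden dim, sequence length

# Separate (random placeholder) weights for the two directions.
Wx_f, Wh_f = rng.standard_normal((d_h, d_in)), rng.standard_normal((d_h, d_h))
Wx_b, Wh_b = rng.standard_normal((d_h, d_in)), rng.standard_normal((d_h, d_h))
x = rng.standard_normal((T, d_in))

def run(Wx, Wh, inputs):
    """One directional pass: h_t = tanh(Wx x_t + Wh h_{t-1}), biases omitted."""
    h = np.zeros(d_h)
    states = []
    for x_t in inputs:
        h = np.tanh(Wx @ x_t + Wh @ h)
        states.append(h)
    return np.array(states)

h_fwd = run(Wx_f, Wh_f, x)               # left-to-right pass
h_bwd = run(Wx_b, Wh_b, x[::-1])[::-1]   # right-to-left pass, re-ordered
h_bi = np.concatenate([h_fwd, h_bwd], axis=1)  # both fed to the output layer
```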

2.2.2 CTC cost function

CTC allows training of RNNs without requiring a prior alignment between the input and output sequences. In CTC, the output softmax layer of the RNN has one unit for each of the targets, in addition to a blank symbol denoting a null emission. For a given training speech example, there are as many possible alignments as there are ways of separating the labels with blanks. At every time step, the network decides whether to emit a symbol or not. As a result, a distribution over all possible alignments between the input and target sequences is obtained.

Finally, CTC employs a dynamic programming based forward-backward algorithm to obtain the sum over all possible alignments, producing the probability of an output sequence given a speech input. Given a target transcription $y^*$ and input $x$, the network is trained to minimize the CTC cost function:

$$CTC(x) = -\log P(y^* \mid x)$$

Here, the total probability of an output transcription is the sum of the probabilities of the alignments that correspond to it. So,

$$P(y^* \mid x) = \sum_{a \in \mathcal{B}^{-1}(y^*)} P(a \mid x)$$

where $\mathcal{B}^{-1}(y^*)$ corresponds to all the CTC alignments which map to the required output sequence $y^*$, with $\mathcal{B}$ denoting the collapsing function.
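
To make the sum over alignments concrete, the toy sketch below enumerates every alignment of a short input, collapses each with the CTC mapping (merge repeats, then drop blanks), and accumulates the probability mass of the alignments that yield the target. This brute-force enumeration is purely illustrative and exponential in the number of frames; practical CTC implementations use the forward-backward recursion instead. All probabilities are made-up values:

```python
import itertools
import math

BLANK = "-"

def collapse(alignment):
    """The CTC mapping B: merge repeated labels, then drop blanks."""
    out = []
    prev = None
    for s in alignment:
        if s != prev and s != BLANK:
            out.append(s)
        prev = s
    return tuple(out)

def ctc_neg_log_prob(target, frame_probs, labels):
    """-log P(target | x) by brute force over all length-T alignments."""
    T = len(frame_probs)
    total = 0.0
    for alignment in itertools.product(labels, repeat=T):
        if collapse(alignment) == tuple(target):
            p = 1.0
            for t, s in enumerate(alignment):
                p *= frame_probs[t][s]   # frames assumed independent
            total += p
    return -math.log(total)

# 3 frames, labels {a, b, blank}; toy per-frame softmax outputs.
labels = ["a", "b", BLANK]
frame_probs = [
    {"a": 0.6, "b": 0.3, BLANK: 0.1},
    {"a": 0.5, "b": 0.2, BLANK: 0.3},
    {"a": 0.1, "b": 0.7, BLANK: 0.2},
]

# Five alignments map to (a, b): aab, abb, ab-, a-b, -ab.
loss = ctc_neg_log_prob(["a", "b"], frame_probs, labels)
```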

Figure 1: Block diagram of E2E Networks using: a) CTC mechanism, and b) attention mechanism.

2.3 Attention-based Architecture

This model employs an encoder RNN which plays a role similar to that of the acoustic model in conventional systems, an attention layer which helps choose the input frames to look at while making the current label decision, and a decoder RNN for end-to-end training of the ASR system. Unlike CTC, this architecture predicts output labels without making any independence assumption between the labels. The role of the attention layer is to select the portion of the input to be considered while making the current label decision at the decoder.

In particular, we have used the listen, attend and spell (LAS) model las for training the ASR system presented in this work. The LAS architecture is composed of three sub-modules: listener, attender and speller. The listener is the acoustic encoder that transforms the original input signal $x$ into a higher-level representation $h$:

$$h = \mathrm{Listen}(x)$$

The AttendAndSpell function takes $h$ as input and produces a probability distribution over character/phoneme sequences while utilizing the attention mechanism:

$$P(y \mid x) = \mathrm{AttendAndSpell}(h, y)$$

The listener/encoder is a bidirectional LSTM network having a pyramidal structure. This reduces the number of time steps over which the attention layer has to extract relevant information and hence improves the efficacy of the attention mechanism. The speller/decoder uses an attention-based LSTM transducer, and decoding is performed using a left-to-right beam search. The network is trained to optimize the following log probability:

$$\max_{\theta} \sum_{i} \log P(y_i \mid x, y^*_{<i}; \theta)$$

where $y^*_{<i}$ is the ground truth of the previous characters and $\theta$ represents the model parameters.
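
The two mechanisms described above, the pyramidal time reduction in the listener and the attention-based selection of a context vector for the current label decision, can be sketched with toy dimensions as follows. This illustrates the mechanism only, not the exact LAS network; the dimensions and the dot-product scoring function are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pyramidal_reduce(h):
    """Concatenate adjacent frame pairs: (T, d) -> (T//2, 2d).
    Each pyramidal layer halves the number of time steps this way."""
    T, d = h.shape
    return h[: T - T % 2].reshape(T // 2, 2 * d)

def attention_context(query, encoder_states):
    """Dot-product attention: softmax-weight encoder states by their
    similarity to the current decoder state, then average them."""
    scores = encoder_states @ query              # one score per time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over time
    return weights @ encoder_states              # context vector

h = rng.standard_normal((8, 4))    # 8 encoder frames, 4-dim outputs
h2 = pyramidal_reduce(h)           # 4 time steps, 8-dim outputs
query = rng.standard_normal(8)     # decoder state at the current step
context = attention_context(query, h2)

print(h2.shape, context.shape)
```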

3 Experimental Setup

3.1 Database

In this work, the experiments are performed using the Hindi-English code-switching speech corpus. This database is referred to as the HingCoS Corpus (www.iitg.ac.in/eee/emstlab/HingCoS_Database/HingCoS.html). An initial description of this database is available at hingcos_2018 . It comprises 101 speakers, each of whom was asked to speak unique code-switching sentences given to him/her. The length of those sentences varies from to words. All speech data is recorded at an 8 kHz sampling rate with 16-bits/sample resolution. The database contains Hindi-English code-switching utterances, which correspond to about hours of speech data. For ASR system modelling, the database is partitioned into train, development, and test sets containing , , and sentences, respectively. To study the effect of utterance length on decoding, three partitions of the test set are created on the basis of utterance length. Those partitions correspond to the utterance-length ranges -, -, and - words and are referred to as Test1, Test2, and Test3, respectively. The resulting Test1, Test2, and Test3 sets consist of , , and utterances, respectively. The unified character set modelling case comprises targets ( English characters, Hindi characters, and a word separator). In contrast, the proposed scheme reduces that to targets ( common phones and a word separator). In this work, we contrast the performances of the proposed reduced target set based E2E ASR systems with those of the unified character set based ones.

3.2 System Description

The E2E models developed in this work are trained using the Nabu toolkit nabu_2017 , which is based on TensorFlow. For contrast purposes, the DNN-HMM systems are trained and evaluated using the Kaldi toolkit povey2011kaldi . The parameter settings used for analyzing the speech data include a window length of ms, a window shift of ms, and a pre-emphasis factor of . The -dimensional features comprising log filter-bank energies are used for developing the E2E systems. Note that the E2E systems are optimized for the reduced target set, and the same parameters are used for the unified character set systems. The remaining details of the above-mentioned systems are presented next.
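
The front-end steps named above, pre-emphasis followed by windowing with a fixed shift, can be sketched as follows. Since the exact parameter values are not reproduced here, the sketch uses common defaults (25 ms window, 10 ms shift, pre-emphasis factor 0.97) purely as placeholders:

```python
import numpy as np

def preemphasize(signal, alpha=0.97):
    """Pre-emphasis filter: y[n] = x[n] - alpha * x[n-1]."""
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])

def frame(signal, sample_rate=8000, win_ms=25, shift_ms=10):
    """Slice a 1-D signal into overlapping analysis frames."""
    win = int(sample_rate * win_ms / 1000)    # samples per window
    hop = int(sample_rate * shift_ms / 1000)  # samples per shift
    n = 1 + max(0, (len(signal) - win) // hop)
    return np.stack([signal[i * hop : i * hop + win] for i in range(n)])

x = np.random.default_rng(1).standard_normal(8000)  # 1 s of audio at 8 kHz
frames = frame(preemphasize(x))
print(frames.shape)   # (number of frames, samples per frame)
```

Log filter-bank energies would then be computed per frame (FFT, mel filter bank, log), which is omitted here.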

Model               |    Test1    |    Test2    |    Test3    |   Average
                    |  PER   CER  |  PER   CER  |  PER   CER  |  PER   CER
Attention-based E2E | 21.01 33.69 | 21.06 34.80 | 23.70 39.38 | 21.92 35.96
CTC-based E2E       | 32.91 35.82 | 28.89 32.85 | 28.33 33.87 | 30.04 34.18
DNN-HMM             | 48.21 48.74 | 47.85 48.17 | 47.88 48.62 | 47.98 48.51
Table 4: Evaluation of attention- and CTC-based E2E systems developed using both reduced and unified target sets on Hindi-English code-switching data. The performances of the reduced and unified target set based systems are measured using the phone error rate (PER) and the character error rate (CER), respectively. The performances of the DNN-HMM system on those tasks are also given for reference purposes.
Table 5: Sample decoded outputs of E2E code-switching ASR systems developed using reduced and unified target sets. The errors are highlighted in bold. Note that the symbol ‘_’ marks the separation between words.

3.2.1 Attention-based E2E model

The architectural details of the LAS model are as follows. The encoder has pyramidal DBLSTM layers with units in each layer. The pyramidal step size is kept as , and the dropout rate during training is set to . The LSTM decoder consists of layers with units in each layer. The dropout rate for the LSTM decoder is also set to . The loss function used for training is the average cross-entropy loss, and Gaussian noise with is added to the data during training. We employ a beam-search decoder with the beam width set to . The model is trained for epochs with a batch size of and the learning rate decay set to .

3.2.2 CTC-based E2E model

This modelling paradigm involves a DBLSTM encoder which consists of 4 layers with 256 units in each layer and a dropout rate set to . The decoder utilizes the CTC loss function discussed in Section 2.2.2. Gaussian noise with is added to the speech data for robustness in modelling. In model training, the number of epochs is set to and the mini-batch size is set to .

3.2.3 DNN-HMM model

The DNN-HMM acoustic model contains hidden layers with nodes in each layer. The hidden nodes use tanh as the non-linearity. First, -dimensional MFCC features are spliced across frames to produce -dimensional feature vectors, which are then projected to dimensions by applying linear discriminant analysis. These -dimensional feature vectors are used for training the DNN-HMM acoustic model. The model is trained for epochs with a batch size of .
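
The splicing step described above can be sketched as follows; the ±4 frame context and 13-dimensional MFCCs are placeholder values, as the exact numbers are not reproduced here, and the subsequent LDA projection is omitted:

```python
import numpy as np

def splice(features, context=4):
    """Concatenate each frame with its +/-context neighbours.
    Edge frames are padded by repeating the first/last frame."""
    T, d = features.shape
    padded = np.pad(features, ((context, context), (0, 0)), mode="edge")
    # One (T, d) slice per offset in [-context, +context], stacked side by side.
    return np.hstack([padded[i : i + T] for i in range(2 * context + 1)])

mfcc = np.random.default_rng(2).standard_normal((100, 13))
spliced = splice(mfcc)   # 13-dim frames -> 13 * 9 = 117-dim spliced vectors
```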

4 Results and Discussion

For the unified target set case, the performances are measured in terms of the character error rate (CER), whereas for the reduced target set case, we use the phone error rate (PER). For proper evaluation, both attention- and CTC-based E2E ASR systems are developed using the reduced and unified target sets, and their performances are reported in Table 4. It can be observed that, with the proposed reduction in the target set, all explored E2E systems yield significantly improved recognition performance (i.e., lower target error rate) over the corresponding unified target set based systems. Interestingly, this trend holds across all three test sets defined earlier. Comparing the reduced target set systems, we note that the attention-based E2E ASR system outperforms the CTC-based one, whereas the CTC-based E2E system yields a slightly better CER in the unified target set modelling case.
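
For reference, both PER and CER are edit-distance-based measures: the Levenshtein distance between the hypothesis and reference token sequences (substitutions + deletions + insertions), normalized by the reference length. A minimal sketch, with a made-up example hypothesis:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences via dynamic programming."""
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[m][n]

def error_rate(ref, hyp):
    """Percentage error rate; tokens are characters for CER, phones for PER."""
    return 100.0 * edit_distance(ref, hyp) / len(ref)

# CER example on character tokens ("steshan" is a fabricated hypothesis).
cer = error_rate(list("station"), list("steshan"))
```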

It is worth emphasizing that a further reduction in the target set could achieve further improvement in PERs relative to CERs. However, any such reduction would be counterproductive if accurate word sequences could not be derived from the output hypotheses expressed in those reduced target labels. That criterion is well satisfied by the proposed phone-based reduction of the target set in the case of code-switching speech. On the other hand, for unified target set based E2E ASR systems, the decoded outputs may contain cross-language character insertions due to acoustic similarity. To illustrate that, we show a few example decoded sequences for both reduced and unified target set based E2E systems in Table 5. From that table, we can note that the decoded sequence of the attention-based E2E system exhibits better sequence modelling as well as word boundary marking than that of the CTC-based system. This trend is attributed to the ability of the attention-based E2E network to utilize all previously decoded labels along with the current input while making decisions.

5 Conclusions

In this work, we presented a novel target label reduction scheme for training E2E code-switching ASR systems. The systems employing the reduced targets are shown to outperform the unified target based systems. It has been demonstrated that the attention-based E2E system trained with the reduced target set achieves the best average target (phone) error rate. In the future, we aim to incorporate language information into the E2E code-switching ASR systems developed in this work under the paradigm of multi-task learning.

References

  • (1) John J Gumperz, Discourse Strategies, Cambridge University Press, 1982.
  • (2) Carol M Eastman, “Codeswitching as an urban language-contact phenomenon,” Journal of Multilingual & Multicultural Development, vol. 13, no. 1-2, pp. 1–17, 1992.
  • (3) Carol Myers Scotton, “Comparing codeswitching and borrowing,” Journal of Multilingual & Multicultural Development, vol. 13, no. 1-2, pp. 19–39, 1992.
  • (4) Dau Cheng Lyu, Ren Yuan Lyu, Yuang Chin Chiang, and Chun Nan Hsu, “Speech recognition on code-switching among the Chinese dialects,” in Proc. of International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2006, vol. 1.
  • (5) Kiran Bhuvanagirir and Sunil Kumar Kopparapu, “Mixed language speech recognition without explicit identification of language,” American Journal of Signal Processing, vol. 2, no. 5, pp. 92–97, 2012.
  • (6) Basem HA Ahmed and Tien-Ping Tan, “Automatic speech recognition of code switching speech using 1-best rescoring,” in Proc. of International Conference on Asian Language Processing (IALP), 2012, pp. 137–140.
  • (7) LIS-India, “1991 Census of India,” [Online] http://www.ciil-lisindia.net/, Accessed: 2019-03-29.
  • (8) Sunita Malhotra, “Hindi-English, Code Switching and Language Choice in Urban, Uppermiddle-class Indian Families,” University of Kansas. Linguistics Graduate Student Association, 1980.
  • (9) Kalika Bali, Jatin Sharma, Monojit Choudhury, and Yogarshi Vyas, “I am borrowing ya mixing? An Analysis of English-Hindi Code Mixing in Facebook,” in Proc. of the First Workshop on Computational Approaches to Code Switching, 2014, pp. 116–126.
  • (10) Ganji Sreeram, Kunal Dhawan, and Rohit Sinha, “Hindi-English code-switching speech corpus,” arXiv:1810.00662, 2018.
  • (11) Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber, “Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks,” in Proc. of the 23rd International Conference on Machine Learning, 2006, pp. 369–376.
  • (12) Alex Graves, “Sequence transduction with recurrent neural networks,” Proc. of International Conference on Machine Learning: Representation Learning Workshop, 2012.
  • (13) Alex Graves and Navdeep Jaitly, “Towards end-to-end speech recognition with recurrent neural networks,” in International Conference on Machine Learning, 2014, pp. 1764–1772.
  • (14) Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, “End-to-end continuous speech recognition using attention-based recurrent NN: First results,” Proc. of Deep Learning and Representation Learning Workshop, 2014.
  • (15) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, “Neural machine translation by jointly learning to align and translate,” Proc. of International Conference on Learning Representations, 2015.
  • (16) Rohit Prabhavalkar, Kanishka Rao, Tara N Sainath, Bo Li, Leif Johnson, and Navdeep Jaitly, “A comparison of sequence-to-sequence models for speech recognition.,” in Proc. of Interspeech, 2017, pp. 939–943.
  • (17) Tara N. Sainath, Rohit Prabhavalkar, Shankar Kumar, Seungji Lee, Anjuli Kannan, David Rybach, Vlad Schogol, Patrick Nguyen, Bo Li, Yonghui Wu, Zhifeng Chen, and Chung-Cheng Chiu, “No need for a lexicon? evaluating the value of the pronunciation lexica in end-to-end models,” CoRR, vol. abs/1712.01864, 2017.
  • (18) B Ramani, S Lilly Christina, G Anushiya Rachel, V Sherlin Solomi, Mahesh Kumar Nandwana, Anusha Prakash, S Aswin Shanmugam, Raghava Krishnan, S Kishore Prahalad, K Samudravijaya, P Vijayalakshmi, T Nagarajan, and Hema A Murthy, “A common attribute based unified HTS framework for speech synthesis in Indian languages,” in Proc. of 8th ISCA Workshop on Speech Synthesis, 2013.
  • (19) Alex Graves, Navdeep Jaitly, and Abdel-Rahman Mohamed, “Hybrid speech recognition with deep bidirectional LSTM,” in Proc. of Workshop on Automatic Speech Recognition and Understanding, 2013, pp. 273–278.
  • (20) W. Chan, N. Jaitly, Q. Le, and O. Vinyals, “Listen, attend and spell: A neural network for large vocabulary conversational speech recognition,” in Proc. of International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016, pp. 4960–4964.
  • (21) Vincent Renkens, “Nabu: An end-to-end speech recognition toolkit,” [Online] https://vrenkens.github.io/nabu/, Accessed: 2019-03-24.
  • (22) Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al., “The Kaldi speech recognition toolkit,” IEEE Signal Processing Society, 2011.