The goal of an Automatic Speech Recognition (ASR) system is to transcribe an audio file to text. Like other machine learning tasks, it is domain-specific. However, building a robust ASR system requires a large amount of labeled data. Because so few datasets exist, low-resource languages such as Thai lack good ASR systems. Several studies have addressed this problem with self-supervised pre-training. For example, Wav2Vec2.0 from Facebook pre-trains on a large amount of unlabeled audio with an unsupervised objective before the model is trained on ASR as a downstream task.
Recently, the Thai ASR community, led by AIResearch.in.th and PyThaiNLP, released a Thai Wav2Vec2.0 ASR model by fine-tuning the XLSR-Wav2Vec2 model on the Thai CommonVoice corpus V7 with the NewMM tokenizer. That work reported that the CommonVoice dataset has data leakage across its train/test splits, as the same speakers were identified in different splits. They therefore re-split the corpus to ensure no speaker overlaps between splits and trained the model on the new splits. The experimental results showed that the model achieved a word error rate (WER) of 13.643% with NewMM tokenization and 8.152% with deepcut tokenization.
In this technical report, we train a new Thai ASR model by fine-tuning the pre-trained XLSR-Wav2Vec2 model on a newer version of the CommonVoice corpus and changing the stopping criterion from character error rate (CER) to WER. In addition, we train a tri-gram language model to improve the performance of the ASR model at the decoding stage.
We used the CommonVoice corpus V8 to train our models; because the corpus has data leakage, we re-split it before training. We fine-tuned a pre-trained XLSR-Wav2Vec2 model, available on Hugging Face, on the re-split CommonVoice corpus V8. The language model was trained on the text of the same train split. Additionally, we fine-tuned models with different tokenizers to investigate the impact of open-source Thai tokenizers on the ASR model. This section describes the preprocessing applied to the corpus, the ASR model setup, and the language model creation details.
We use the CommonVoice corpus V8 as the main dataset for training and evaluation. CommonVoice is a crowdsourced speech corpus in which audio is collected and validated by crowd workers. Nevertheless, we noticed that the CommonVoice train/test split has data leakage: the same speakers appear in both the train and test sets. Thus, we re-split the corpus before training the model, following Charin Polpanumas. The source code and corpus are publicly available at https://github.com/wannaphong/thai_commonvoice_dataset. We split the CommonVoice V8 dataset as follows:
1. Remove all data that already appears in the CommonVoice corpus V7 from the CommonVoice corpus V8.
2. Split the remaining V8-only data into train, validation, and test sets.
3. Add the CommonVoice corpus V7 splits back to the corpus.
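The re-splitting steps above can be sketched as follows. This is a minimal illustration with toy metadata in place of the real CommonVoice TSVs (each clip is a `(path, client_id)` pair, where `client_id` identifies the speaker, following the CommonVoice metadata format; the one-speaker test set is purely illustrative):

```python
# Toy stand-ins for the V7 and V8 clip metadata.
v7_clips = [("a.mp3", "spk1"), ("b.mp3", "spk2")]
v8_clips = [("a.mp3", "spk1"), ("b.mp3", "spk2"),
            ("c.mp3", "spk3"), ("d.mp3", "spk3"), ("e.mp3", "spk4")]

# Step 1: remove everything already present in V7 from V8.
v7_paths = {path for path, _ in v7_clips}
new_clips = [c for c in v8_clips if c[0] not in v7_paths]

# Step 2: assign whole speakers (not individual clips) to splits,
# so no speaker can leak across train/test.
new_speakers = sorted({spk for _, spk in new_clips})
test_speakers = set(new_speakers[:1])
test = [c for c in new_clips if c[1] in test_speakers]
train_new = [c for c in new_clips if c[1] not in test_speakers]

# Step 3: add the V7 clips back to the splits they already belonged
# to (here, for brevity, all of V7 goes to train).
train = v7_clips + train_new

# No speaker appears in both train and test.
assert {s for _, s in train}.isdisjoint({s for _, s in test})
```

Because assignment happens at the speaker level rather than the clip level, the V7 splits stay unchanged while the new V8 data is added leakage-free.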
With these steps, we ensure that the data in the training and test sets of the old CommonVoice corpus V7 split remains unchanged, while the new V8 data is added without speaker leakage between the training and test sets. Moreover, we cleaned the corpus following AIResearch.in.th's work (i.e., removing non-alphanumeric characters). We also fixed missing words and manually replaced the repetition character (maiyamok, ๆ) with the repeated word.
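A minimal sketch of this cleaning step, under assumed rules (the maiyamok fixes in our pipeline were partly done by hand, so this automated version is only illustrative; Thai characters occupy the Unicode range U+0E01–U+0E5B):

```python
import re

def clean(text: str) -> str:
    # Replace maiyamok ("ๆ") with a repetition of the preceding word.
    text = re.sub(r"(\S+)\s*ๆ", r"\1 \1", text)
    # Drop anything that is not a Thai character, Latin letter,
    # digit, or whitespace (e.g. punctuation).
    text = re.sub(r"[^\u0E01-\u0E5Ba-zA-Z0-9\s]", "", text)
    # Normalize runs of whitespace.
    return " ".join(text.split())

print(clean("เด็ก ๆ!"))  # → "เด็ก เด็ก"
```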
The statistics of the Thai CommonVoice corpora V7 and V8 are shown in Table 1.
| Set   | CommonVoice V7                  | CommonVoice V8                  |
|-------|---------------------------------|---------------------------------|
| Train | 116 hours 18 minutes 41 seconds | 118 hours 45 minutes 35 seconds |
| Valid | 2 hours 39 minutes 48 seconds   | 5 hours 0 minutes 54 seconds    |
| Test  | 3 hours 7 minutes 36 seconds    | 5 hours 9 minutes 5 seconds     |
| Total | 122 hours 6 minutes 5 seconds   | 128 hours 55 minutes 35 seconds |
2.2 ASR Model
We fine-tune the wav2vec2-large-xlsr-53 model with the Connectionist Temporal Classification (CTC) objective. We use the same settings as AIResearch.in.th, but we train models with two word tokenizers: NewMM and DeepCut. For model selection, we pick the model with the best WER score on the validation set.
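For intuition on the CTC part: the model emits one label per audio frame, and at inference these frame-level labels are collapsed by removing repeats and then dropping the blank symbol. A minimal greedy-decoding sketch (not our actual training or decoding code, which uses the HuggingFace CTC implementation):

```python
BLANK = "_"  # CTC blank symbol (illustrative; usually a reserved id)

def ctc_greedy_decode(frame_labels):
    """Collapse per-frame CTC labels: merge repeats, drop blanks."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return "".join(out)

# The blank between the two l's keeps them from being merged.
print(ctc_greedy_decode(list("hh_e_l_ll_o")))  # → "hello"
```

The CTC loss used in training marginalizes over all frame alignments that collapse to the target transcript in this way.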
2.3 Language Model
We train a tri-gram language model on the CommonVoice V8 training-set text using KenLM. We trained two language models, one for each of two different tokenizers: NewMM and DeepCut. We then leverage the language models to boost the performance of Wav2Vec2.0, following Patrick von Platen.
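To illustrate what the tri-gram model estimates, here is a toy maximum-likelihood tri-gram counter over word-tokenized sentences. This is only a sketch of the counting idea: the real models are trained with KenLM, which additionally applies modified Kneser-Ney smoothing and back-off, on NewMM- or DeepCut-tokenized text:

```python
from collections import defaultdict

def train_trigram(sentences):
    """Count tri-grams and their bi-gram contexts over tokenized sentences."""
    counts = defaultdict(int)
    context = defaultdict(int)
    for words in sentences:
        padded = ["<s>", "<s>"] + words + ["</s>"]
        for i in range(len(padded) - 2):
            counts[tuple(padded[i:i + 3])] += 1
            context[tuple(padded[i:i + 2])] += 1
    return counts, context

def prob(counts, context, w3, w1, w2):
    # Unsmoothed estimate P(w3 | w1, w2).
    return counts[(w1, w2, w3)] / context[(w1, w2)]

counts, context = train_trigram([["สวัสดี", "ครับ"], ["สวัสดี", "ค่ะ"]])
print(prob(counts, context, "ครับ", "<s>", "สวัสดี"))  # → 0.5
```

During decoding, such word probabilities rescore the CTC beam-search hypotheses, which is what "boosting" Wav2Vec2.0 with an n-gram model refers to.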
3 Experimental Results
We evaluate the performance of our models using the WER and CER scores. In addition, we apply the same post-processing as in the training stage and retokenize the predictions as follows:
1. Remove all whitespace from the predicted text.
2. Retokenize the predicted text using the same word tokenizer as in the training stage.
3. Join the tokenized words with whitespace.
4. Evaluate the performance of our models using WER and CER.
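The post-processing steps above can be sketched as follows. The real pipeline tokenizes with NewMM or DeepCut (e.g. `pythainlp.tokenize.word_tokenize`) and scores with JiWER; here a hypothetical fixed-width tokenizer stands in so the steps are visible and the sketch stays self-contained:

```python
def dummy_tokenize(text):
    # Stand-in for a Thai word tokenizer such as
    # pythainlp.tokenize.word_tokenize(text, engine="newmm").
    return [text[i:i + 3] for i in range(0, len(text), 3)]

def postprocess(prediction):
    no_space = prediction.replace(" ", "")  # 1. strip all whitespace
    tokens = dummy_tokenize(no_space)       # 2. retokenize
    return " ".join(tokens)                 # 3. rejoin with spaces

print(postprocess("abc defg hi"))  # → "abc def ghi"
```

WER is then computed on these space-joined strings, so both reference and prediction use word boundaries from the same tokenizer.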
3.1 Thai CommonVoice V8
The ASR results are shown in Table 2. Compared to the baseline model (AIResearch.in.th and PyThaiNLP), our models improve the WER from 17.4% to 16.3% with the NewMM tokenizer and from 11.9% to 11.4% with DeepCut. Moreover, adding the tri-gram language model to Wav2Vec2.0 decreases the WER further, from 17.4% to 12.5%. We conclude that the boosting technique from Patrick von Platen significantly improves the ASR models' performance.
| Model                                  | WER by NewMM (%) | WER by DeepCut (%) | CER (%)  |
|----------------------------------------|------------------|--------------------|----------|
| AIResearch.in.th and PyThaiNLP         | 17.414503        | 11.923089          | 3.854153 |
| wav2vec2 with DeepCut                  | 16.354521        | 11.424476          | 3.684060 |
| wav2vec2 with NewMM                    | 16.698299        | 11.436941          | 3.737407 |
| wav2vec2 with DeepCut + language model | 12.630260        | 9.613886           | 3.292073 |
| wav2vec2 with NewMM + language model   | 12.583706        | 9.598305           | 3.276610 |
3.2 Thai CommonVoice V7
The results are shown in Table 3. The best model is Wav2Vec2.0 with NewMM + language model.
| Model                                  | WER by NewMM (%) | WER by DeepCut (%) | CER (%)  |
|----------------------------------------|------------------|--------------------|----------|
| AIResearch.in.th and PyThaiNLP         | 13.936698        | 9.347462           | 2.804787 |
| wav2vec2 with DeepCut                  | 12.776381        | 8.773006           | 2.628882 |
| wav2vec2 with NewMM                    | 12.750596        | 8.672616           | 2.623341 |
| wav2vec2 with DeepCut + language model | 9.940050         | 7.423313           | 2.344940 |
| wav2vec2 with NewMM + language model   | 9.559724         | 7.339654           | 2.277071 |
We trained a Thai automatic speech recognition model by fine-tuning a pre-trained XLSR-Wav2Vec2 model on a newer CommonVoice corpus while changing the early-stopping criterion from CER to WER. We also trained a tri-gram language model and used it to boost ASR performance. The best model is wav2vec2 with NewMM + language model. With the help of the language model, we achieve a better word error rate than previous work.
- M. Malik, M. Malik, K. Mehmood, and I. Makhdoom, "Automatic speech recognition: a survey," Multimedia Tools and Applications, vol. 80, pp. 1–47, Mar. 2021, doi: 10.1007/s11042-020-10073-7.
- A. Baevski, H. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations." arXiv, Oct. 22, 2020. doi: 10.48550/arXiv.2006.11477.
- C. Polpanumas, "AIResearch.in.th and PyThaiNLP Release High-Performance, Open-source Automatic Speech Recognition (ASR) Models for Thai," AIResearch.in.th, Sep. 06, 2021. https://medium.com/airesearch-in-th/airesearch-in-th-3c1019a99cd (accessed Jun. 28, 2022).
- W. Phatthiyaphaibun, K. Chaovavanich, C. Polpanumas, A. Suriyawongkul, L. Lowphansirikul, and P. Chormai, "PyThaiNLP: Thai Natural Language Processing in Python." Zenodo, Jun. 2016. doi: 10.5281/zenodo.3519354.
- R. Ardila et al., "Common Voice: A Massively-Multilingual Speech Corpus." arXiv, Mar. 05, 2020. doi: 10.48550/arXiv.1912.06670.
- P. von Platen, "Fine-Tune Wav2Vec2 for English ASR in Hugging Face with Transformers." https://huggingface.co/blog/fine-tune-wav2vec2-english (accessed Jun. 28, 2022).
- T. Wolf et al., "HuggingFace's Transformers: State-of-the-art Natural Language Processing." arXiv, Jul. 13, 2020. doi: 10.48550/arXiv.1910.03771.
- A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks," in Proceedings of the 23rd International Conference on Machine Learning, 2006, pp. 369–376.
- R. Kittinaradorn, Deepcut. 2022. Accessed: Jun. 28, 2022. [Online]. Available: https://github.com/rkcosmos/deepcut
- P. von Platen, "Boosting Wav2Vec2 with n-grams in Transformers." https://huggingface.co/blog/wav2vec2-with-ngram (accessed Jun. 28, 2022).
- K. Heafield, "KenLM: Faster and Smaller Language Model Queries," in Proceedings of the Sixth Workshop on Statistical Machine Translation, Edinburgh, Scotland, Jul. 2011, pp. 187–197. [Online]. Available: https://aclanthology.org/W11-2123
- JiWER: Similarity measures for automatic speech recognition evaluation. Jitsi, 2022. Accessed: Jun. 28, 2022. [Online]. Available: https://github.com/jitsi/jiwer