Towards Building ASR Systems for the Next Billion Users

11/06/2021
by Tahir Javed et al.

Recent methods in speech and language technology pretrain very large models which are fine-tuned for specific tasks. However, the benefits of such large models are often limited to a few resource-rich languages of the world. In this work, we make multiple contributions towards building ASR systems for low-resource languages from the Indian subcontinent. First, we curate 17,000 hours of raw speech data for 40 Indian languages from a wide variety of domains, including education, news, technology, and finance. Second, using this raw speech data, we pretrain several variants of wav2vec-style models for 40 Indian languages. Third, we analyze the pretrained models to find key features: codebook vectors of similar-sounding phonemes are shared across languages, representations across layers are discriminative of the language family, and attention heads often attend within small local windows. Fourth, we fine-tune this model for downstream ASR for 9 languages and obtain state-of-the-art results on 3 public datasets, including on very low-resource languages such as Sinhala and Nepali. Our work establishes that multilingual pretraining is an effective strategy for building ASR systems for the linguistically diverse speakers of the Indian subcontinent. Our code, data, and models are available publicly at https://indicnlp.ai4bharat.org/indicwav2vec/ and we hope they will help advance research in ASR for Indic languages.
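As a rough illustration of the fine-tuning-then-decoding workflow the abstract describes, the sketch below runs greedy CTC decoding with a pretrained wav2vec 2.0 ASR model using the Hugging Face transformers library. This is a minimal sketch, not the authors' pipeline: the checkpoint identifier and audio path are placeholders, and the released IndicWav2Vec models at the URL above would be substituted in practice.

# Minimal sketch: ASR inference with a fine-tuned wav2vec 2.0 model and
# greedy CTC decoding. The model identifier and audio path are placeholders,
# not the authors' released checkpoints.
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "path/to/finetuned-indic-wav2vec2"  # placeholder checkpoint
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load mono audio; wav2vec 2.0 models expect 16 kHz input.
speech, sample_rate = sf.read("example.wav")  # placeholder audio file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token per frame, then let the
# processor collapse repeats and strip blank tokens.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)

In practice a language-model-fused beam-search decoder is often used instead of greedy decoding, but the greedy variant keeps the example self-contained.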

Related research

04/01/2021  Multilingual and code-switching ASR challenges for low resource Indian languages
04/20/2023  Spaiche: Extending State-of-the-Art ASR Models to Swiss German Dialects
05/24/2023  Vistaar: Diverse Benchmarks and Training Sets for Indian Language ASR
06/03/2023  Adapting Pretrained ASR Models to Low-resource Clinical Speech using Epistemic Uncertainty-based Data Selection
05/06/2022  Aksharantar: Towards building open transliteration tools for the next billion users
08/26/2022  Effectiveness of Mining Audio and Text Pairs from Public Data for Improving ASR Systems for Low-Resource Languages
07/04/2022  Vietnamese Capitalization and Punctuation Recovery Models
