LSTM Acoustic Models Learn to Align and Pronounce with Graphemes

08/13/2020
by Arindrima Datta, et al.

Automated speech recognition coverage of the world's languages continues to expand. However, standard phoneme-based systems require handcrafted lexicons that are difficult and expensive to obtain. To address this problem, we propose a training methodology for a grapheme-based speech recognizer that can be trained in a purely data-driven fashion. Built with LSTM networks and trained with the cross-entropy loss, the grapheme-output acoustic models we study are also highly practical for real-world applications: they can be decoded with conventional ASR stack components such as language models and FST decoders, and they produce good-quality audio-to-grapheme alignments that are useful in many speech applications. We show that, when trained on large datasets, the grapheme models are competitive in WER with their phoneme-output counterparts, with the advantage that they do not require explicit linguistic knowledge as an input. We further compare the alignments generated by the phoneme and grapheme models to assess the quality of the pronunciations the grapheme models learn, using four Indian languages that differ linguistically in their spoken and written forms.
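
The abstract does not give implementation details, but the core idea can be sketched concisely: a stacked LSTM maps acoustic frames to per-frame grapheme posteriors and is trained with a frame-level cross-entropy loss against aligned grapheme targets. The snippet below is a minimal sketch, assuming PyTorch; the framework, layer sizes, feature dimension, and grapheme inventory size are illustrative placeholders, not the authors' actual configuration.

```python
# Minimal sketch (assumed PyTorch) of a grapheme-output LSTM acoustic model
# trained with frame-level cross-entropy, as described in the abstract.
# Dimensions and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class GraphemeLSTMAcousticModel(nn.Module):
    def __init__(self, feat_dim=80, hidden_dim=512, num_layers=5, num_graphemes=75):
        super().__init__()
        # Stacked LSTM over acoustic frames (e.g. log-mel features).
        self.lstm = nn.LSTM(feat_dim, hidden_dim, num_layers=num_layers, batch_first=True)
        # Per-frame projection to grapheme output units.
        self.output = nn.Linear(hidden_dim, num_graphemes)

    def forward(self, feats):
        # feats: (batch, time, feat_dim) -> logits: (batch, time, num_graphemes)
        hidden, _ = self.lstm(feats)
        return self.output(hidden)

# Frame-level cross-entropy training against per-frame grapheme targets
# (in practice these targets would come from a forced alignment).
model = GraphemeLSTMAcousticModel()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

feats = torch.randn(4, 200, 80)            # dummy batch: 4 utterances, 200 frames
targets = torch.randint(0, 75, (4, 200))   # dummy per-frame grapheme labels

optimizer.zero_grad()
logits = model(feats)
loss = criterion(logits.reshape(-1, 75), targets.reshape(-1))
loss.backward()
optimizer.step()
```

Because the output units are graphemes rather than phonemes, the per-frame argmax of these posteriors directly yields an audio-to-grapheme alignment, and the same posteriors can be combined with a language model in a conventional FST decoder, as the abstract notes.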

Related research

ASR2K: Speech Recognition for Around 2000 Languages without Audio (09/06/2022)
Acoustic data-driven lexicon learning based on a greedy pronunciation selection framework (06/12/2017)
Data and knowledge-driven approaches for multilingual training to improve the performance of speech recognition systems of Indian languages (01/24/2022)
Echo State Speech Recognition (02/18/2021)
Topic Classification on Spoken Documents Using Deep Acoustic and Linguistic Features (06/16/2021)
AfroDigits: A Community-Driven Spoken Digit Dataset for African Languages (03/22/2023)
A Hierarchical Model for Spoken Language Recognition (01/04/2022)
