DTW-SiameseNet: Dynamic Time Warped Siamese Network for Mispronunciation Detection and Correction

03/01/2023
by Raviteja Anantha, et al.

Personal Digital Assistants (PDAs) such as Siri, Alexa, and Google Assistant, to name a few, play an increasingly important role in accessing information and completing tasks spanning multiple domains, for diverse groups of users. A text-to-speech (TTS) module allows PDAs to interact in a natural, human-like manner and plays a vital role when the interaction involves people with visual impairments or other disabilities. To cater to the needs of a diverse set of users, an inclusive TTS system must recognize and correctly pronounce text in different languages and dialects. Despite great progress in speech synthesis, the pronunciation accuracy of named entities in a multi-lingual setting still leaves considerable room for improvement. Existing approaches to correcting named entity (NE) mispronunciations, such as retraining Grapheme-to-Phoneme (G2P) models or maintaining a TTS pronunciation dictionary, require expensive and time-consuming annotation of the ground-truth pronunciation. In this work, we present a highly precise, PDA-compatible pronunciation learning framework for the task of TTS mispronunciation detection and correction. In addition, we propose a novel mispronunciation detection model called DTW-SiameseNet, which employs metric learning with a Siamese architecture for Dynamic Time Warping (DTW), trained with a triplet loss. We demonstrate that a locale-agnostic, privacy-preserving solution to the problem of TTS mispronunciation detection is feasible. We evaluate our approach on a real-world dataset as well as a corpus of NE pronunciations from an anonymized audio dataset of person names recorded by participants from 10 different locales. Human evaluation shows that our proposed approach improves pronunciation accuracy on average by ~6% compared to phoneme-based and audio-based baselines.
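
To make the model description above more concrete (a shared Siamese encoder, DTW alignment cost as the distance, triplet loss for metric learning), here is a minimal PyTorch sketch. It is not the authors' implementation: the GRU encoder, feature dimensions, margin, length normalization, and the names FrameEncoder, dtw_distance, and triplet_dtw_loss are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): a shared (Siamese) encoder
# produces frame embeddings, the distance between two utterances is a DTW
# alignment cost over those embeddings, and the encoder is trained with a
# triplet margin loss. All architecture choices and hyperparameters below
# are assumptions made for illustration only.

import torch
import torch.nn as nn


class FrameEncoder(nn.Module):
    """Shared encoder mapping acoustic features (batch, frames, feat_dim)
    to a sequence of frame embeddings (batch, frames, emb_dim)."""

    def __init__(self, feat_dim: int = 40, hidden: int = 128, emb_dim: int = 64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(x)          # (batch, frames, 2 * hidden)
        return self.proj(h)         # (batch, frames, emb_dim)


def dtw_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Length-normalized DTW alignment cost between embedding sequences
    a: (Ta, d) and b: (Tb, d), using Euclidean frame-to-frame costs.
    torch.min provides a subgradient, so the cost can sit inside a loss."""
    cost = torch.cdist(a.unsqueeze(0), b.unsqueeze(0)).squeeze(0)  # (Ta, Tb)
    Ta, Tb = cost.shape
    inf = torch.tensor(float("inf"), device=cost.device)
    prev = [torch.zeros((), device=cost.device)] + [inf] * Tb      # DP row i-1
    for i in range(Ta):
        curr = [inf] * (Tb + 1)                                    # DP row i
        for j in range(Tb):
            best_prev = torch.min(torch.stack([prev[j], prev[j + 1], curr[j]]))
            curr[j + 1] = cost[i, j] + best_prev
        prev = curr
    return prev[Tb] / (Ta + Tb)


def triplet_dtw_loss(encoder: nn.Module, anchor: torch.Tensor,
                     positive: torch.Tensor, negative: torch.Tensor,
                     margin: float = 1.0) -> torch.Tensor:
    """Triplet loss on DTW distances: a correct pronunciation (positive) is
    pulled toward the anchor, a mispronunciation (negative) is pushed away."""
    ea, ep, en = encoder(anchor), encoder(positive), encoder(negative)
    d_pos = dtw_distance(ea[0], ep[0])
    d_neg = dtw_distance(ea[0], en[0])
    return torch.clamp(d_pos - d_neg + margin, min=0.0)


if __name__ == "__main__":
    enc = FrameEncoder()
    # Random stand-ins for 40-dim acoustic features of different lengths.
    anchor, positive, negative = (torch.randn(1, n, 40) for n in (120, 110, 130))
    loss = triplet_dtw_loss(enc, anchor, positive, negative)
    loss.backward()                 # gradients flow back into the shared encoder
    print(f"triplet DTW loss: {loss.item():.3f}")
```

In practice, a differentiable relaxation such as soft-DTW is often used so that gradients also flow through the alignment path; the hard-min recurrence above is kept only to keep the sketch short.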


