ASR Rescoring and Confidence Estimation with ELECTRA

10/05/2021
by Hayato Futami, et al.

In automatic speech recognition (ASR) rescoring, the hypothesis with the fewest errors should be selected from the n-best list using a language model (LM). However, LMs are usually trained to maximize the likelihood of correct word sequences, not to detect ASR errors. We propose an ASR rescoring method that directly detects errors with ELECTRA, originally a pre-training method for NLP tasks. ELECTRA is pre-trained to predict whether each word has been replaced by BERT or not, which simulates ASR error detection on large text corpora. To bring this pre-training closer to ASR error detection, we further propose an extended version of ELECTRA called phone-attentive ELECTRA (P-ELECTRA). In the pre-training of P-ELECTRA, each word is replaced by a phone-to-word conversion model, which leverages phone information to generate acoustically similar words. Since our rescoring method is optimized for detecting errors, it can also be used for word-level confidence estimation. Experimental evaluations on the LibriSpeech and TED-LIUM2 corpora show that our rescoring method with ELECTRA is competitive with conventional rescoring methods while offering faster inference. ELECTRA also performs better in confidence estimation than BERT, because it learns to detect inappropriate words not only in fine-tuning but also in pre-training.
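The rescoring idea above can be sketched with a toy example. An ELECTRA-style discriminator assigns each token a probability of being a "replaced" (i.e. erroneous) word; a hypothesis can then be scored by the log-probability that all of its tokens are original, and the per-token complements serve directly as word-level confidences. The function below is a minimal illustration under that assumption, with hand-picked probabilities standing in for real discriminator outputs; it is not the authors' implementation.

```python
import math

def rescore_with_rtd(hypotheses, replaced_probs):
    """Pick the n-best hypothesis whose tokens look least like ASR errors.

    hypotheses: list of token lists (the n-best list).
    replaced_probs: per-hypothesis list of P(token was replaced), i.e. the
        discriminator's replaced-token-detection output. The values used
        below are made up for illustration; a real system would obtain
        them from an ELECTRA-style model.
    Returns (index of best hypothesis, per-token confidences for it).
    """
    best_idx, best_score = 0, float("-inf")
    for i, probs in enumerate(replaced_probs):
        # Log-probability that every token is "original" (not an error).
        score = sum(math.log(1.0 - p) for p in probs)
        if score > best_score:
            best_idx, best_score = i, score
    # Word-level confidence = probability the token is original.
    confidences = [1.0 - p for p in replaced_probs[best_idx]]
    return best_idx, confidences

hyps = [["i", "scream", "for", "ice", "cream"],
        ["i", "scream", "for", "i", "scream"]]
probs = [[0.02, 0.30, 0.05, 0.10, 0.08],
         [0.02, 0.30, 0.05, 0.70, 0.60]]
best, conf = rescore_with_rtd(hyps, probs)
print(best)   # hypothesis 0: its tokens have lower replaced-probabilities
print(conf)
```

Note the contrast with likelihood-based LM rescoring: the score here is built entirely from error-detection probabilities, which is why the same quantities double as word-level confidence estimates.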


Related research

- RED-ACE: Robust Error Detection for ASR using Confidence Embeddings (03/14/2022)
- Neural Zero-Inflated Quality Estimation Model For Automatic Speech Recognition System (10/03/2019)
- Learning ASR-Robust Contextualized Embeddings for Spoken Language Understanding (09/24/2019)
- UFO2: A unified pre-training framework for online and offline speech recognition (10/26/2022)
- Ask2Mask: Guided Data Selection for Masked Speech Modeling (02/24/2022)
- Better Transcription of UK Supreme Court Hearings (11/29/2022)
- An Error-Oriented Approach to Word Embedding Pre-Training (07/21/2017)
