Discriminative training of RNNLMs with the average word error criterion

11/06/2018
by Rémi Francis, et al.

In automatic speech recognition (ASR), recurrent neural network language models (RNNLMs) are typically used to refine hypotheses in the form of lattices or n-best lists, which are generated by a beam search decoder with a weaker language model. The RNNLMs are usually trained generatively using the perplexity (PPL) criterion on large corpora of grammatically correct text. However, the hypotheses are noisy, and the RNNLM does not always make the choices that minimise the metric we actually optimise for, the word error rate (WER). To address this mismatch, we propose to use a task-specific loss to train an RNNLM to discriminate between multiple hypotheses in a lattice rescoring scenario. By fine-tuning the RNNLM on lattices with the average edit distance loss, we show that we obtain a 1.9% improvement in WER over a purely generatively trained model.
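To make the training objective concrete, the following is a minimal sketch of the idea on an n-best approximation rather than full lattices: the RNNLM score of each hypothesis is interpolated with the fixed first-pass score, the combined scores define a posterior over the competing hypotheses, and the loss is the expected number of word errors under that posterior. The function name, the interpolation weight and the toy numbers below are illustrative assumptions, not the authors' implementation.

    import torch

    def expected_wer_loss(lm_scores, acoustic_scores, word_errors, lm_weight=0.5):
        """Expected word-error loss over an n-best list.

        lm_scores:       RNNLM log-probabilities of each hypothesis, shape (N,);
                         these must carry gradients back into the RNNLM.
        acoustic_scores: fixed first-pass decoder scores, shape (N,).
        word_errors:     word-level edit distance of each hypothesis to the
                         reference transcript, shape (N,).
        """
        combined = acoustic_scores + lm_weight * lm_scores   # per-hypothesis total score
        posterior = torch.softmax(combined, dim=0)           # posterior over hypotheses
        return (posterior * word_errors).sum()               # expected number of word errors

    # Toy usage: three hypotheses with 0, 1 and 2 word errors against the reference.
    lm_scores = torch.tensor([-12.3, -11.8, -13.0], requires_grad=True)
    acoustic_scores = torch.tensor([-50.0, -49.5, -48.9])
    word_errors = torch.tensor([0.0, 1.0, 2.0])
    loss = expected_wer_loss(lm_scores, acoustic_scores, word_errors)
    loss.backward()   # gradients flow into the RNNLM that produced lm_scores

Minimising this loss pushes probability mass towards the hypotheses with fewer word errors, which is the discriminative behaviour the paper targets; only the RNNLM parameters are updated, while the first-pass scores stay fixed.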
