Promising Accurate Prefix Boosting for sequence-to-sequence ASR

11/07/2018
by Murali Karthick Baskar, et al.
Brno University of Technology
Johns Hopkins University
MERL

In this paper, we present promising accurate prefix boosting (PAPB), a discriminative training technique for attention-based sequence-to-sequence (seq2seq) ASR. PAPB is devised to unify the training and testing (beam-search decoding) schemes in an effective manner. The training procedure maximizes the score of each correct partial sequence (prefix) obtained during beam search relative to the competing hypotheses in the beam. The training objective also includes minimization of the token (character) error rate. PAPB shows its efficacy by achieving 10.8% and 3.8% WER with and without RNNLM, respectively, on the Wall Street Journal dataset.
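
The core idea admits a short sketch. The following is a minimal, hypothetical illustration of a PAPB-style prefix-boosting term, not the authors' implementation: it assumes beam search has already produced, at every decoding step, the log-scores of the hypotheses in the beam together with a flag marking which hypothesis (if any) is a correct prefix of the reference, and it omits the token (character) error rate term of the full objective. Names such as papb_loss, beam_scores, and correct_mask are illustrative assumptions.

# Hypothetical PAPB-style prefix-boosting loss (illustrative only).
import torch

def papb_loss(beam_scores: torch.Tensor, correct_mask: torch.Tensor) -> torch.Tensor:
    """
    beam_scores:  (T, B) log-scores of the B beam hypotheses at each of T steps.
    correct_mask: (T, B) 1.0 where a hypothesis is a correct prefix of the
                  reference transcription, 0.0 otherwise.
    Returns a scalar loss that is small when correct prefixes dominate the beam.
    """
    # Normalize the scores at each step into a distribution over the beam.
    log_probs = torch.log_softmax(beam_scores, dim=-1)         # (T, B)

    # Probability mass the beam assigns to the correct prefix(es) at each step.
    correct_mass = (log_probs.exp() * correct_mask).sum(-1)    # (T,)

    # Boosting the correct prefix = maximizing its share of the beam,
    # i.e. minimizing the negative log of that share (clamped for stability).
    return -(correct_mass.clamp_min(1e-8).log()).mean()

# Toy usage: 3 decoding steps, beam of 4 hypotheses.
scores = torch.randn(3, 4, requires_grad=True)
mask = torch.tensor([[1., 0., 0., 0.],
                     [0., 1., 0., 0.],
                     [0., 0., 0., 0.]])  # step 3: no correct prefix survived
loss = papb_loss(scores, mask)
loss.backward()
print(float(loss))

In this reading, each beam-search step is treated as a ranking problem: the share of beam probability mass on the correct prefix is pushed up, which is one way to interpret "maximizing the score of each partial correct sequence compared to other hypotheses" from the abstract.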


12/05/2017

Minimum Word Error Rate Training for Attention-based Sequence-to-Sequence Models

Sequence-to-sequence models, such as attention-based models in automatic...
11/11/2022

Exploring Sequence-to-Sequence Transformer-Transducer Models for Keyword Spotting

In this paper, we present a novel approach to adapt a sequence-to-sequen...
10/02/2018

Optimal Completion Distillation for Sequence Learning

We present Optimal Completion Distillation (OCD), a training procedure f...
06/09/2016

Sequence-to-Sequence Learning as Beam-Search Optimization

Sequence-to-Sequence (seq2seq) modeling has rapidly become an important ...
05/20/2018

Learning compositionally through attentive guidance

In this paper, we introduce Attentive Guidance (AG), a new mechanism to ...
08/03/2017

Sensor Transformation Attention Networks

Recent work on encoder-decoder models for sequence-to-sequence mapping h...