Back from the future: bidirectional CTC decoding using future information in speech recognition

10/07/2021
by   Namkyu Jung, et al.

In this paper, we propose a simple but effective method for decoding the output of a Connectionist Temporal Classification (CTC) model using a bidirectional neural language model. The bidirectional language model uses future as well as past information to predict the next output in the sequence. The proposed method, based on bidirectional beam search, takes advantage of the CTC greedy decoding output to represent the noisy future information. Experiments on the LibriSpeech dataset demonstrate the superiority of our proposed method over baselines using unidirectional decoding. In particular, the boost in accuracy is most apparent at the start of a sequence, which is the most error-prone part for existing systems based on unidirectional decoding.
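
The sketch below illustrates, under stated assumptions, how such a bidirectional beam search could combine the two directions: a forward language model scores the hypothesis prefix (the past), while a backward language model scores the candidate token together with the CTC greedy-decoded suffix (the noisy future). This is not the authors' released code; the placeholder language-model functions, the interpolation weight alpha, and the data layout are illustrative assumptions only.

    # Minimal sketch of bidirectional beam rescoring of CTC output.
    # All names and the scoring placeholders below are illustrative assumptions.

    def forward_lm_logp(prefix):
        """Placeholder forward LM: log P(prefix[-1] | prefix[:-1])."""
        return -1.0  # a trained model would return a learned log-probability

    def backward_lm_logp(suffix):
        """Placeholder backward LM: log P(suffix[0] | suffix[1:]), read right to left."""
        return -1.0

    def bidirectional_beam_search(ctc_topk, greedy_future, beam_size=4, alpha=0.5):
        """Rescore CTC candidates with past (forward LM) and noisy future
        (backward LM over the CTC greedy suffix) information.

        ctc_topk:      list over time steps; each step is a list of
                       (token, log_prob) candidates from the CTC output.
        greedy_future: token sequence from CTC greedy decoding, used as a
                       noisy stand-in for the not-yet-decoded future.
        """
        beams = [([], 0.0)]  # (hypothesis, accumulated score)
        for t, candidates in enumerate(ctc_topk):
            scored = []
            future = greedy_future[t + 1:]  # noisy future context at step t
            for hyp, score in beams:
                for token, ctc_logp in candidates:
                    new_hyp = hyp + [token]
                    lm_fwd = forward_lm_logp(new_hyp)            # past context
                    lm_bwd = backward_lm_logp([token] + future)  # future context
                    lm_score = alpha * lm_fwd + (1.0 - alpha) * lm_bwd
                    scored.append((new_hyp, score + ctc_logp + lm_score))
            scored.sort(key=lambda x: x[1], reverse=True)
            beams = scored[:beam_size]
        return beams[0][0]

Using the greedy output as the future context is a deliberate approximation: it is cheap to compute in a single pass and, even when noisy, gives the backward language model enough right-hand context to disambiguate tokens near the start of the sequence.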

Related research

10/31/2022
Blank Collapse: Compressing CTC emission for the faster decoding
Connectionist Temporal Classification (CTC) model is a very efficient me...

08/18/2020
Complementary Language Model and Parallel Bi-LRNN for False Trigger Mitigation
False triggers in voice assistants are unintended invocations of the ass...

10/23/2019
Efficient Dynamic WFST Decoding for Personalized Language Models
We propose a two-layer cache mechanism to speed up dynamic WFST decoding...

11/06/2022
Suffix Retrieval-Augmented Language Modeling
Causal language modeling (LM) uses word history to predict the next word...

08/12/2014
First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs
We present a method to perform first-pass large vocabulary continuous sp...

03/21/2022
Enhancing Speech Recognition Decoding via Layer Aggregation
Recently proposed speech recognition systems are designed to predict usi...

02/06/2020
Consistency of a Recurrent Language Model With Respect to Incomplete Decoding
Despite strong performance on a variety of tasks, neural sequence models...
