Joint Learning of Correlated Sequence Labelling Tasks Using Bidirectional Recurrent Neural Networks

03/14/2017
by   Vardaan Pahuja, et al.

The stream of words produced by Automatic Speech Recognition (ASR) systems is typically devoid of punctuation and formatting. Most natural language processing applications, however, expect segmented and well-formatted text as input, which ASR output does not provide. This paper proposes a novel technique for jointly modeling multiple correlated tasks, such as punctuation and capitalization, using bidirectional recurrent neural networks, which leads to improved performance on each task. The method can be extended to joint modeling of any other correlated sequence labeling tasks.
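The joint-modeling idea described above can be sketched as a shared bidirectional encoder feeding two task-specific output layers. The following is a minimal NumPy forward-pass illustration, not the authors' implementation: all dimensions, label sets, and weight initializations here are hypothetical, and training (shared-loss backpropagation) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(X, Wx, Wh):
    # Simple tanh RNN over time; returns the hidden state at every step.
    H = np.zeros((X.shape[0], Wh.shape[0]))
    h = np.zeros(Wh.shape[0])
    for t, x in enumerate(X):
        h = np.tanh(Wx @ x + Wh @ h)
        H[t] = h
    return H

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

T, d_in, d_h = 5, 8, 6   # hypothetical sequence length, input dim, hidden dim
n_punct, n_cap = 4, 2    # e.g. {none, comma, period, question} and {lower, upper}

X = rng.normal(size=(T, d_in))  # stand-in for word embeddings of an ASR transcript

# Shared bidirectional encoder: a forward and a backward RNN whose
# per-token hidden states are concatenated.
Wx_f, Wh_f = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
Wx_b, Wh_b = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
H = np.concatenate([rnn_pass(X, Wx_f, Wh_f),
                    rnn_pass(X[::-1], Wx_b, Wh_b)[::-1]], axis=1)

# Two task-specific softmax heads on top of the same encoder states,
# so the correlated tasks share one representation.
W_punct = rng.normal(size=(2 * d_h, n_punct))
W_cap = rng.normal(size=(2 * d_h, n_cap))
p_punct = softmax(H @ W_punct)  # per-token punctuation label distribution
p_cap = softmax(H @ W_cap)      # per-token capitalization label distribution

print(p_punct.shape, p_cap.shape)  # (5, 4) (5, 2)
```

In training, the two heads would share the encoder's gradients, which is how learning one task can improve the other; extending the sketch to more correlated tasks just means adding further output layers on `H`.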


