Twin Regularization for online speech recognition

04/15/2018
by Mirco Ravanelli, et al.

Online speech recognition is crucial for developing natural human-machine interfaces. This modality, however, is significantly more challenging than offline ASR, since real-time/low-latency constraints inevitably hinder the use of future information, which is known to be very helpful for making robust predictions. A popular way to mitigate this issue is to feed neural acoustic models with context windows that gather some future frames; this, however, introduces a latency that grows with the number of look-ahead frames employed. This paper explores a different approach, based on estimating the future rather than waiting for it. Our technique encourages the hidden representations of a unidirectional recurrent network to embed useful information about the future. Inspired by a recently proposed technique called Twin Networks, we add a regularization term that forces forward hidden states to be as close as possible to cotemporal backward ones, computed by a "twin" neural network running backwards in time. Experiments conducted on several datasets, recurrent architectures, input features, and acoustic conditions show the effectiveness of this approach. One important advantage is that our method introduces no additional computation at test time compared to standard unidirectional recurrent networks.
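The abstract describes the core idea in enough detail for a rough sketch. Below is a minimal, illustrative PyTorch implementation of the twin regularization term, assuming a GRU-based frame-level acoustic model; the class and variable names (TwinRegularizedRNN, the 0.1 penalty weight, the feature and target dimensions) are assumptions for illustration, not the authors' actual code.

```python
import torch
import torch.nn as nn

class TwinRegularizedRNN(nn.Module):
    """Unidirectional acoustic model with a backward 'twin' used only during training."""
    def __init__(self, feat_dim, hidden_dim, num_targets):
        super().__init__()
        self.fwd_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)  # online model
        self.bwd_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)  # twin, training only
        self.classifier = nn.Linear(hidden_dim, num_targets)

    def forward(self, x):
        # x: (batch, time, feat_dim) acoustic features
        h_fwd, _ = self.fwd_rnn(x)
        # Run the twin on the time-reversed input, then flip its states back
        # so that h_bwd[:, t] is cotemporal with h_fwd[:, t].
        h_bwd, _ = self.bwd_rnn(torch.flip(x, dims=[1]))
        h_bwd = torch.flip(h_bwd, dims=[1])
        logits = self.classifier(h_fwd)
        # Twin regularization: pull forward states towards cotemporal backward ones.
        twin_loss = ((h_fwd - h_bwd) ** 2).mean()
        return logits, twin_loss

# Hypothetical training step: total loss = frame-level cross-entropy + weighted twin penalty.
model = TwinRegularizedRNN(feat_dim=40, hidden_dim=256, num_targets=1000)
x = torch.randn(8, 120, 40)                 # 8 utterances, 120 frames, 40-dim features
targets = torch.randint(0, 1000, (8, 120))  # per-frame targets (e.g. context-dependent states)
logits, twin_loss = model(x)
ce_loss = nn.functional.cross_entropy(logits.reshape(-1, 1000), targets.reshape(-1))
loss = ce_loss + 0.1 * twin_loss            # 0.1 is an assumed regularization weight
loss.backward()
```

At inference only fwd_rnn and the classifier are used, so the cost matches a plain unidirectional network. How the penalty's gradient is routed (e.g. detaching the backward states, or giving the twin its own prediction loss as in the original Twin Networks formulation) is a design choice this sketch does not fix.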


research
12/14/2016

Recurrent Deep Stacking Networks for Speech Recognition

This paper presented our work on applying Recurrent Deep Stacking Networ...
research
07/06/2022

Improving Streaming End-to-End ASR on Transformer-based Causal Models with Encoder States Revision Strategies

There is often a trade-off between performance and latency in streaming ...
research
10/24/2019

A Bayesian Approach to Recurrence in Neural Networks

We begin by reiterating that common neural network activation functions ...
research
05/26/2018

Automatic context window composition for distant speech recognition

Distant speech recognition is being revolutionized by deep learning, tha...
research
11/26/2018

Improving Gated Recurrent Unit Based Acoustic Modeling with Batch Normalization and Enlarged Context

The use of future contextual information is typically shown to be helpfu...
research
04/12/2022

Unidirectional Video Denoising by Mimicking Backward Recurrent Modules with Look-ahead Forward Ones

While significant progress has been made in deep video denoising, it rem...
research
10/07/2020

Super-Human Performance in Online Low-latency Recognition of Conversational Speech

Achieving super-human performance in recognizing human speech has been a...
