Exploiting Nontrivial Connectivity for Automatic Speech Recognition

11/28/2017
by Marius Paraschiv, et al.

Nontrivial connectivity has enabled the training of very deep networks by addressing the problem of vanishing gradients and offering a more efficient way of reusing parameters. In this paper, we compare residual networks, densely connected networks, and highway networks on an image classification task. We then show that these methodologies can easily be applied to automatic speech recognition, providing significant improvements over existing models.
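The three connectivity patterns named above differ only in how a block combines its input with its transformation. A minimal NumPy sketch (toy fully-connected layers with random, untrained weights, purely for illustration) of the residual, highway, and dense combination rules:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    """A toy fully-connected layer with ReLU activation."""
    return np.maximum(0.0, w @ x)

d = 4
x = rng.normal(size=d)
w = rng.normal(size=(d, d))

# Residual connection: the layer learns a correction to the identity,
# so gradients can flow through the skip path unattenuated.
residual_out = x + layer(x, w)

# Highway connection: a learned sigmoid gate T mixes the transformed
# and untransformed signal (gate weights are random here).
w_t = rng.normal(size=(d, d))
gate = 1.0 / (1.0 + np.exp(-(w_t @ x)))
highway_out = gate * layer(x, w) + (1.0 - gate) * x

# Dense connectivity: the block's input and output are concatenated,
# so every later layer sees the features of every earlier layer.
dense_out = np.concatenate([x, layer(x, w)])
```

Note that the residual and highway outputs keep the input dimensionality, while dense connectivity grows the feature dimension with each block, which is why densely connected architectures interleave compression (transition) layers.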

