
Chain-based Discriminative Autoencoders for Speech Recognition

03/25/2022
by   Hung-Shin Lee, et al.

In our previous work, we proposed a discriminative autoencoder (DcAE) for speech recognition. DcAE combines two training schemes into one. First, since DcAE aims to learn encoder-decoder mappings, it minimizes the squared error between the reconstructed speech and the input speech. Second, in the code layer, frame-based phonetic embeddings are obtained by minimizing the categorical cross-entropy between ground-truth labels and predicted triphone-state scores. DcAE is implemented on top of the Kaldi toolkit, with various TDNN models serving as encoders. In this paper, we propose three new versions of DcAE. First, we adopt a new objective function that considers both the categorical cross-entropy and the mutual information between ground-truth and predicted triphone-state sequences; the resulting model is called a chain-based DcAE (c-DcAE). For robust speech recognition, we further extend c-DcAE to hierarchical and parallel structures, yielding hc-DcAE and pc-DcAE. In these two models, the objective function includes both the error between the reconstructed noisy speech and the input noisy speech and the error between the enhanced speech and the reference clean speech. Experimental results on the WSJ and Aurora-4 corpora show that our DcAE models outperform the baseline systems.
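The abstract describes two multi-task objectives: the basic DcAE loss (reconstruction error plus frame-level cross-entropy at the code layer) and the hc-/pc-DcAE loss for robust recognition, which adds an enhancement error against reference clean speech. The sketch below illustrates how such combined objectives are assembled for a single frame; the function names, the interpolation weight `alpha`, and the equal term weights are illustrative assumptions, not values from the paper (which also uses a chain/mutual-information term not reproduced here).

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length feature frames."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def dcae_loss(x, x_hat, state_probs, label, alpha=0.5):
    """Sketch of the basic DcAE objective: reconstruction error plus
    categorical cross-entropy on the triphone-state posteriors at the
    code layer. `alpha` is an assumed interpolation weight."""
    ce = -math.log(state_probs[label])  # cross-entropy with a one-hot target
    return alpha * mse(x, x_hat) + (1.0 - alpha) * ce

def robust_dcae_loss(noisy, noisy_hat, enhanced, clean,
                     state_probs, label, weights=(1.0, 1.0, 1.0)):
    """Sketch of the hc-/pc-DcAE objective: (i) reconstruction error on
    the noisy input, (ii) enhancement error against the reference clean
    speech, and (iii) the state-classification cross-entropy. The equal
    default weights are an assumption for illustration."""
    w_rec, w_enh, w_ce = weights
    ce = -math.log(state_probs[label])
    return (w_rec * mse(noisy, noisy_hat)
            + w_enh * mse(enhanced, clean)
            + w_ce * ce)

# Toy frame: input vs. reconstruction, and posteriors over 3 triphone states.
frame = [0.2, 0.4, 0.1]
recon = [0.25, 0.35, 0.1]
post = [0.7, 0.2, 0.1]
loss = dcae_loss(frame, recon, post, label=0)
```

In a real system these terms would be computed over mini-batches of frames by the Kaldi/TDNN pipeline; the point of the sketch is only the additive multi-task structure of the objectives.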
