
Improving noise robustness of automatic speech recognition via parallel data and teacher-student learning

by Ladislav Mošner, et al.
Brno University of Technology

For real-world speech recognition applications, noise robustness remains a challenge. In this work, we adopt the teacher-student (T/S) learning technique using a parallel clean and noisy corpus to improve automatic speech recognition (ASR) performance under multimedia noise. On top of that, we apply a logits selection method that preserves only the k highest values, both to prevent the teacher from emphasizing wrong knowledge and to reduce the bandwidth needed for transferring data. We incorporate up to 8,000 hours of untranscribed data for training and present results on sequence-trained models in addition to cross-entropy-trained ones. The best sequence-trained student model yields relative word error rate (WER) reductions of approximately 10.1% on clean, simulated noisy and real test sets, compared to a sequence-trained teacher.
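The top-k logits selection described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the masking-before-softmax strategy, and the value of k are assumptions for demonstration. The idea is that only the teacher's k largest logits contribute to the soft targets, so unlikely senones neither carry distillation weight nor need to be transmitted.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def topk_soft_targets(teacher_logits, k):
    """Keep only the k highest teacher logits; mask the rest to -inf
    before the softmax so the resulting posterior has at most k
    nonzero entries and still sums to 1."""
    idx = np.argsort(teacher_logits)[-k:]
    masked = np.full_like(teacher_logits, -np.inf)
    masked[idx] = teacher_logits[idx]
    return softmax(masked)

def ts_loss(student_logits, teacher_logits, k):
    """T/S criterion: cross-entropy between the student's posteriors
    (computed on the noisy input) and the teacher's top-k posteriors
    (computed on the parallel clean input)."""
    p_teacher = topk_soft_targets(teacher_logits, k)
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    return -np.sum(p_teacher * log_p_student)
```

In practice only the k (index, value) pairs per frame would be stored or sent to the student trainer, which is where the bandwidth saving comes from.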



