Knowledge Distillation and Data Selection for Semi-Supervised Learning in CTC Acoustic Models

08/10/2020
by   Prakhar Swarup, et al.

Semi-supervised learning (SSL) is an active area of research which aims to utilize unlabelled data in order to improve the accuracy of speech recognition systems. The current study proposes a methodology for integrating two key ideas: 1) SSL using the connectionist temporal classification (CTC) objective and teacher-student based learning; 2) designing effective data-selection mechanisms for leveraging unlabelled data to boost the performance of student models. Our aim is to establish the importance of good criteria for selecting samples from a large pool of unlabelled data, based on attributes like confidence measure, speaker variability and content variability. The question we try to answer is: is it possible to design a data-selection mechanism which reduces dependence on a large set of randomly selected unlabelled samples without compromising Word Error Rate (WER)? We perform empirical investigations of different data-selection methods to answer this question and quantify the effect of different sampling strategies. On a semi-supervised ASR setting with 40000 hours of carefully selected unlabelled data, our CTC-SSL approach gives a 17% relative WER improvement over a baseline CTC system trained with labelled data. It also achieves on-par performance with a CTC-SSL system trained on an order-of-magnitude larger unlabelled set chosen by random sampling.
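The data-selection idea in the abstract can be illustrated with a minimal sketch: rank teacher-decoded utterances by a per-utterance confidence score and keep the most confident ones until an hour budget is filled. This is a hedged illustration, not the paper's actual pipeline; the `Utterance` class, field names, and `select_by_confidence` function are all hypothetical, and the paper additionally considers speaker and content variability, which this sketch omits.

```python
# Illustrative confidence-based data selection for teacher-student SSL.
# All names here (Utterance, select_by_confidence) are invented for this
# sketch and do not come from the paper.
from dataclasses import dataclass

@dataclass
class Utterance:
    utt_id: str
    duration_s: float   # audio length in seconds
    confidence: float   # teacher's per-utterance confidence in [0, 1]

def select_by_confidence(pool, min_conf=0.9, budget_hours=40000.0):
    """Keep the most confident utterances until the hour budget is spent
    or the confidence threshold is no longer met."""
    selected, total_s = [], 0.0
    budget_s = budget_hours * 3600.0
    # Sort highest-confidence first, then fill the budget greedily.
    for utt in sorted(pool, key=lambda u: u.confidence, reverse=True):
        if utt.confidence < min_conf:
            break  # remaining utterances are all below threshold
        if total_s + utt.duration_s > budget_s:
            break  # hour budget exhausted
        selected.append(utt)
        total_s += utt.duration_s
    return selected
```

A real system would compute the confidence from the teacher's CTC posteriors (e.g. an average or minimum frame-level posterior along the best path) and combine this filter with speaker/content diversity criteria rather than confidence alone.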

