CoDERT: Distilling Encoder Representations with Co-learning for Transducer-based Speech Recognition

06/14/2021
by Rupak Vignesh Swaminathan, et al.

We propose a simple yet effective method to compress an RNN-Transducer (RNN-T) through the well-known knowledge distillation paradigm. We show that the transducer's encoder outputs naturally have a high entropy and contain rich information about acoustically similar word-piece confusions. This rich information is suppressed when combined with the lower-entropy decoder outputs to produce the joint network logits. Consequently, we introduce an auxiliary loss to distill the encoder logits from a teacher transducer's encoder, and explore training strategies in which this encoder distillation works effectively. We find that tandem training of teacher and student encoders with an in-place encoder distillation outperforms the use of a pre-trained and static teacher transducer. We also report an interesting phenomenon we refer to as implicit distillation, which occurs when the teacher and student encoders share the same decoder. Our experiments show 5.37-8.4% relative word error rate reductions (WERR) on in-house test sets, and 5.05-6.18% relative WERRs on LibriSpeech test sets.
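
As a rough illustration of the auxiliary encoder-distillation loss described in the abstract, the sketch below (PyTorch, not the authors' implementation) computes a temperature-scaled KL divergence between the per-frame encoder logits of a teacher and a student transducer and shows how it might be combined with the transducer losses during tandem training. The function name, tensor shapes, the `alpha` weight, and the temperature are illustrative assumptions.

```python
# Minimal sketch of an encoder-logit distillation loss, assuming
# (batch, time, vocab)-shaped encoder outputs projected to the word-piece
# vocabulary; not the paper's exact formulation.
import torch
import torch.nn.functional as F


def encoder_distillation_loss(student_enc_logits: torch.Tensor,
                              teacher_enc_logits: torch.Tensor,
                              temperature: float = 1.0) -> torch.Tensor:
    """Temperature-scaled KL divergence between teacher and student
    per-frame encoder distributions over the word-piece vocabulary."""
    t = temperature
    log_p_student = F.log_softmax(student_enc_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_enc_logits / t, dim=-1)
    # Teacher distributions serve as soft targets; the t^2 factor is the
    # conventional scaling for temperature-softened distillation.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)


# In tandem (co-learning) training, this auxiliary term would be weighted and
# added to the transducer losses, e.g. (hypothetical variable names):
#   loss = rnnt_loss_teacher + rnnt_loss_student \
#          + alpha * encoder_distillation_loss(student_enc_logits,
#                                              teacher_enc_logits.detach())
```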
