Explaining Sequence-Level Knowledge Distillation as Data-Augmentation for Neural Machine Translation

12/06/2019
by Mitchell A. Gordon, et al.

Sequence-level knowledge distillation (SLKD) is a model compression technique that leverages large, accurate teacher models to train smaller, under-parameterized student models. Why does pre-processing MT data with SLKD help us train smaller models? We test the common hypothesis that SLKD addresses a capacity deficiency in students by "simplifying" noisy data points and find it unlikely in our case. Models trained on concatenations of original and "simplified" datasets generalize just as well as baseline SLKD. We then propose an alternative hypothesis under the lens of data augmentation and regularization. We try various augmentation strategies and observe that dropout regularization can become unnecessary. Our methods achieve BLEU gains of 0.7-1.2 on TED Talks.
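
To make the "SLKD as data augmentation" framing concrete, here is a minimal Python sketch of the pre-processing step described above: a teacher model re-translates the source side of the bitext, and the distilled pairs are concatenated with the original pairs before student training. The function and variable names (e.g. `teacher_translate`, `augment_with_distillation`) are hypothetical illustrations, not the paper's actual code or toolkit.

```python
from typing import Callable, List, Tuple

BiText = List[Tuple[str, str]]  # (source, target) sentence pairs


def distill_corpus(sources: List[str],
                   teacher_translate: Callable[[str], str]) -> BiText:
    """Replace each reference translation with the teacher's decoded output
    (in the SLKD setting, the beam-search output of a large teacher model)."""
    return [(src, teacher_translate(src)) for src in sources]


def augment_with_distillation(original: BiText,
                              teacher_translate: Callable[[str], str]) -> BiText:
    """Treat SLKD as data augmentation: the student trains on the original
    bitext concatenated with its distilled counterpart, rather than on the
    distilled data alone."""
    distilled = distill_corpus([src for src, _ in original], teacher_translate)
    return original + distilled


if __name__ == "__main__":
    def toy_teacher(src: str) -> str:
        # Toy stand-in for a large teacher NMT model's decoder.
        return src.upper()

    bitext = [("guten morgen", "good morning"), ("danke", "thank you")]
    augmented = augment_with_distillation(bitext, toy_teacher)
    print(len(augmented))  # 4 pairs: 2 original + 2 distilled
```

Under this concatenation view, the distilled pairs act as extra training signal rather than a "simplified" replacement of the data, which is consistent with the paper's observation that dropout regularization can become unnecessary once distillation is used as augmentation.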

