Representation Consolidation for Training Expert Students

07/16/2021
by Zhizhong Li, et al.

Traditionally, distillation has been used to train a student model to emulate the input/output functionality of a teacher. A more useful goal than emulation, yet under-explored, is for the student to learn feature representations that transfer well to future tasks. However, we observe that standard distillation of task-specific teachers actually *reduces* the transferability of student representations to downstream tasks. We show that a multi-head, multi-task distillation method using an unlabeled proxy dataset and a generalist teacher is sufficient to consolidate representations from task-specific teacher(s) and improve downstream performance, outperforming the teacher(s) and the strong baseline of ImageNet pretrained features. Our method can also combine the representational knowledge of multiple teachers trained on one or multiple domains into a single model, whose representation is improved on all teachers' domain(s).
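
To make the setup concrete, below is a minimal PyTorch sketch of one way such a multi-head distillation student could be wired up: a shared backbone with one linear head per teacher, trained on unlabeled proxy images to match each teacher's softened outputs (the generalist teacher is treated as just another teacher in the list). The ResNet-18 backbone, linear heads, temperature, and KL-divergence loss are illustrative assumptions, not the paper's actual architecture or objective.

```python
# Hedged sketch (not the authors' code): a multi-head student that
# consolidates several task-specific teachers by distillation on an
# unlabeled proxy dataset.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class MultiHeadStudent(nn.Module):
    """Shared backbone with one lightweight head per teacher (illustrative)."""

    def __init__(self, teacher_dims, feat_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()  # expose the 512-d feature vector
        self.backbone = backbone
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, d) for d in teacher_dims]
        )

    def forward(self, x):
        feat = self.backbone(x)
        # One output per teacher, all computed from the shared representation.
        return [head(feat) for head in self.heads]


def distill_step(student, teachers, images, optimizer, T=4.0):
    """One consolidation step on a batch of unlabeled proxy images.

    `teachers` holds the frozen task-specific teachers; a generalist teacher
    (e.g. an ImageNet classifier) can simply be included as one more entry.
    """
    student_outputs = student(images)
    loss = 0.0
    for out, teacher in zip(student_outputs, teachers):
        with torch.no_grad():
            target = teacher(images)
        # Soft-label matching with a temperature, Hinton-style distillation.
        loss = loss + F.kl_div(
            F.log_softmax(out / T, dim=1),
            F.softmax(target / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After training, the heads can be discarded and the shared backbone kept as the consolidated feature extractor for downstream tasks.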


