Multi-View Feature Representation for Dialogue Generation with Bidirectional Distillation

02/22/2021
by   Shaoxiong Feng, et al.

Neural dialogue models often produce low-quality responses when deployed in practice, revealing difficulty in generalizing beyond the training data. Recently, knowledge distillation has been used to successfully regularize the student by transferring knowledge from the teacher. However, the teacher and the student are trained on the same dataset and tend to learn similar feature representations, whereas the most general knowledge should be found through their differences. The discovery of general knowledge is further hindered by unidirectional distillation: the student must obey the teacher and may discard knowledge that is truly general but refuted by the teacher. To this end, we propose a novel training framework in which learning general knowledge is framed as reaching consensus, i.e., finding common knowledge that benefits different yet all datasets through diversified learning partners. Concretely, the training task is divided into a group of subtasks, one per student. Each student is not only optimized on its allocated subtask but also imitates a multi-view feature representation aggregated from the other students (i.e., its student peers), which induces the students to capture knowledge common to the different subtasks and alleviates over-fitting on the allocated subtasks. To further enhance generalization, we extend unidirectional distillation to bidirectional distillation, which encourages each student and its peers to co-evolve by exchanging complementary knowledge with each other. Empirical results and analysis demonstrate that our training framework effectively improves model generalization without sacrificing training efficiency.
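To make the peer-imitation idea concrete, the following is a minimal sketch (not the authors' implementation) of a bidirectional peer-distillation loss: each student's prediction is pulled toward the distribution averaged over its peers, and because every student incurs such a loss simultaneously, knowledge flows in both directions. All function names and the choice of KL direction are illustrative assumptions; a real system would operate on feature representations and gradients rather than raw probability lists.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q) for two discrete distributions over the same support.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def peer_distillation_losses(all_logits):
    """For each student, the KL divergence from the multi-view
    distribution averaged over its peers to the student's own
    distribution. Computing this loss for every student at once is
    what makes the distillation bidirectional: each student is
    simultaneously a teacher (inside its peers' aggregates) and a
    student (imitating its own peer aggregate)."""
    probs = [softmax(logits) for logits in all_logits]
    n = len(probs)
    losses = []
    for i in range(n):
        peers = [probs[j] for j in range(n) if j != i]
        # Multi-view aggregate: element-wise mean over peer distributions.
        aggregate = [sum(col) / len(peers) for col in zip(*peers)]
        # Direction KL(aggregate || student) is a convention choice here,
        # mirroring the usual teacher-to-student distillation direction.
        losses.append(kl_divergence(aggregate, probs[i]))
    return losses
```

In a full training loop, each student's total objective would combine its supervised loss on the allocated subtask with its peer-distillation loss above; because the aggregate is built from diverse subtask-specific peers, minimizing it pushes students toward knowledge shared across subtasks.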

