Data Distillation for Controlling Specificity in Dialogue Generation

02/22/2017
by Jiwei Li et al.

People speak at different levels of specificity in different situations, depending on their knowledge, interlocutors, mood, and other factors. A conversational agent should have this ability as well, knowing when to be specific and when to be general. We propose an approach that gives a neural network-based conversational agent this ability. Our approach alternates between data distillation and model training: we remove the training examples that are closest to the responses most commonly produced by the model trained in the previous round, and then retrain the model on the remaining dataset. Dialogue generation models trained with different degrees of data distillation manifest different levels of specificity. We then train a reinforcement learning system to select among this pool of generation models, choosing the best level of specificity for a given input. Compared to the original generative model trained without distillation, the proposed system generates more interesting and higher-quality responses, in addition to appropriately adjusting specificity depending on the context. Our research constitutes a specific case of a broader approach: training multiple subsystems from a single dataset, distinguished by differences in a specific property one wishes to model. We show that from such a set of subsystems, reinforcement learning can be used to build a system that tailors its output to different input contexts at test time.
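To make the distillation loop concrete, below is a minimal Python sketch of one plausible instantiation. It is not the authors' implementation: the similarity measure (token-level Jaccard overlap), the hyperparameters (top_k, threshold, num_rounds), and the train_fn / generate_fn interface are all illustrative assumptions, since the abstract does not specify how "closest" is measured or how many rounds are run.

```python
# Illustrative sketch of iterated data distillation; the similarity
# measure, hyperparameters, and train_fn/generate_fn interface are
# assumptions, not the paper's exact setup.
from collections import Counter

def overlap(a, b):
    """Token-level Jaccard similarity between two responses."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def distill(dataset, generated, top_k=10, threshold=0.5):
    """Drop (context, response) pairs whose response is close to one of
    the model's top_k most frequently generated outputs."""
    common = [r for r, _ in Counter(generated).most_common(top_k)]
    return [(ctx, resp) for ctx, resp in dataset
            if max(overlap(resp, c) for c in common) < threshold]

def build_model_pool(dataset, train_fn, generate_fn, num_rounds=4):
    """Alternate training and distillation; each round's model is trained
    on progressively distilled data and so responds more specifically."""
    models, data = [], dataset
    for _ in range(num_rounds):
        if not data:                 # stop if distillation exhausts the data
            break
        model = train_fn(data)       # hypothetical seq2seq trainer
        models.append(model)
        generated = [generate_fn(model, ctx) for ctx, _ in data]
        data = distill(data, generated)
    return models                    # pool handed to the RL model selector
```

The resulting pool of models, ranging from most generic to most specific, is what the reinforcement learning selector chooses from for each input at test time.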
