Towards Efficiently Diversifying Dialogue Generation via Embedding Augmentation

by Yu Cao, et al.

Dialogue generation models face the challenge of producing generic and repetitive responses. Unlike previous augmentation methods, which mostly focus on token-level manipulation and, by relying on hard labels, ignore the essential variety within a single sample, in this paper we propose to promote the generation diversity of neural dialogue models via soft embedding augmentation together with soft labels. Specifically, we select key input tokens and fuse their embeddings with the embeddings of their semantic-neighbor tokens; the fused embeddings then replace the original ones as the model input. In addition, soft labels are used in the loss calculation, yielding multi-target supervision for a given input. Experimental results on two datasets show that our method generates more diverse responses than the raw models while maintaining similar n-gram accuracy, which ensures the quality of the generated responses.
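The fusion step described above can be illustrated with a minimal sketch. The abstract does not specify the exact fusion rule, so this example assumes a simple convex combination: the original embedding is kept with weight `alpha` (a hypothetical hyperparameter), and the remaining mass is spread over the k nearest neighbors by cosine similarity via a softmax. The matching soft label places the same weights on the original token and its neighbors.

```python
import numpy as np

def soft_embedding_augment(emb, token_id, k=3, alpha=0.5):
    """Fuse a token's embedding with its k nearest semantic neighbors.

    emb: (V, d) embedding matrix; token_id: index of the selected token.
    alpha: weight kept on the original embedding (assumed hyperparameter,
    not specified in the paper's abstract).
    Returns the fused embedding and a soft label distribution over the
    vocabulary, suitable for multi-target supervision.
    """
    v = emb[token_id]
    # Cosine similarity of the selected token to every vocabulary embedding.
    sims = emb @ v / (np.linalg.norm(emb, axis=1) * np.linalg.norm(v) + 1e-9)
    sims[token_id] = -np.inf               # exclude the token itself
    nbrs = np.argsort(sims)[-k:]           # indices of the k most similar tokens
    w = np.exp(sims[nbrs])
    w /= w.sum()                           # softmax weights over the neighbors
    # Convex combination of the original embedding and its neighbors.
    fused = alpha * v + (1 - alpha) * (w @ emb[nbrs])
    # Soft label: same mixture weights, expressed over the vocabulary.
    soft_label = np.zeros(len(emb))
    soft_label[token_id] = alpha
    soft_label[nbrs] = (1 - alpha) * w
    return fused, soft_label
```

In training, the fused vector would feed the encoder in place of the original embedding, while the soft label replaces the one-hot target in a cross-entropy (or KL-divergence) loss, so a single input supervises several plausible targets at once.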


Another Diversity-Promoting Objective Function for Neural Dialogue Generation

Although generation-based dialogue systems have been widely researched, ...

Learning from Perturbations: Diverse and Informative Dialogue Generation with Inverse Adversarial Training

In this paper, we propose Inverse Adversarial Training (IAT) algorithm f...

Counterfactual Data Augmentation via Perspective Transition for Open-Domain Dialogues

The construction of open-domain dialogue systems requires high-quality d...

A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss

Neural models trained for next utterance generation in dialogue task lea...

Stylized Dialogue Response Generation Using Stylized Unpaired Texts

Generating stylized responses is essential to build intelligent and enga...

Transformer-Based Conditioned Variational Autoencoder for Dialogue Generation

In human dialogue, a single query may elicit numerous appropriate respon...

AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses

Many sequence-to-sequence dialogue models tend to generate safe, uninfor...
