AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses

01/15/2020
by Tong Niu, et al.

Many sequence-to-sequence dialogue models tend to generate safe, uninformative responses. There have been various useful efforts to eliminate them. However, these approaches either improve decoding algorithms during inference, rely on hand-crafted features, or employ complex models. In our work, we build dialogue models that are dynamically aware of which utterances or tokens are dull, without any feature engineering. Specifically, we start with a simple yet effective automatic metric, AvgOut, which calculates the average output probability distribution over all time steps on the decoder side during training. This metric directly estimates which tokens are more likely to be generated, making it a faithful evaluation of model diversity (i.e., for diverse models, the token probabilities should be more evenly distributed rather than peaked at a few dull tokens). We then leverage this novel metric to propose three models that promote diversity without losing relevance. The first model, MinAvgOut, directly maximizes the diversity score through the output distributions of each batch; the second, Label Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled by the diversity score to control the diversity level; the third, RL, adopts Reinforcement Learning and treats the diversity score as a reward signal. We also experiment with a hybrid model that combines the loss terms of MinAvgOut and RL. All four models outperform their base LSTM-RNN model on both diversity and relevance by a large margin, and are comparable to or better than competitive baselines (also verified via human evaluation). Finally, our approaches are orthogonal to the base model, making them applicable as an add-on to other emerging, stronger dialogue models in the future.
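Since the abstract describes AvgOut as the average output probability distribution over all decoder time steps, here is a minimal PyTorch sketch of that computation. The function names (`avg_out`, `evenness`), the padding-mask scheme, and the normalized-entropy "evenness" proxy for diversity are illustrative assumptions for exposition, not the paper's exact formulation.

```python
# Illustrative sketch of AvgOut: the shapes, masking, and entropy-based
# diversity proxy are assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F

def avg_out(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Average output distribution over all decoder time steps in a batch.

    logits: (batch, time, vocab) raw decoder scores
    mask:   (batch, time) 1.0 for real tokens, 0.0 for padding
    Returns a (vocab,) distribution that sums to 1.
    """
    probs = F.softmax(logits, dim=-1)            # per-step distributions
    masked = probs * mask.unsqueeze(-1)          # zero out padded steps
    return masked.sum(dim=(0, 1)) / mask.sum()   # mean over valid steps

def evenness(dist: torch.Tensor) -> torch.Tensor:
    """Normalized entropy in [0, 1]: higher means probability mass is
    spread more evenly over the vocabulary, i.e., fewer 'dull' peaks."""
    entropy = -(dist * (dist + 1e-12).log()).sum()
    return entropy / torch.log(torch.tensor(float(dist.numel())))
```

A dull model concentrates its AvgOut mass on a handful of generic tokens (e.g., "i", "don't", "know"), yielding a low evenness score; a MinAvgOut-style objective would add a loss term that pushes a diversity score of this kind upward during training, while LFT and RL would instead use it as a conditioning label or a reward signal, respectively.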


