Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training

11/10/2019
by Margaret Li, et al.

Generative dialogue models currently suffer from a number of problems that standard maximum likelihood training does not address. They tend to produce generations that (i) rely too heavily on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last, more general issue, we show that applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability. We demonstrate the efficacy of our approach across several dialogue tasks.
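The core mechanism is easy to sketch. Whereas maximum likelihood raises the probability of observed tokens, the unlikelihood loss of Welleck et al. (2019) lowers the probability of a set of "negative candidate" tokens (e.g. tokens that would produce a repetition or a contradiction) via a `-log(1 - p(c))` penalty. The snippet below is a minimal illustrative sketch in NumPy, not the authors' implementation; the function name and the combined-objective weighting are assumptions for illustration.

```python
import numpy as np

def unlikelihood_loss(log_probs, negative_ids, eps=1e-9):
    """Token-level unlikelihood term (illustrative sketch).

    log_probs    -- log of the model's next-token distribution, shape (vocab,)
    negative_ids -- indices of negative candidate tokens to penalize
    Penalizes mass on negative candidates with -log(1 - p(c)),
    pushing p(c) toward zero. In training this term is added to the
    usual MLE loss with a weight alpha: L = L_MLE + alpha * L_UL.
    """
    probs = np.exp(log_probs)
    neg_p = probs[np.asarray(negative_ids)]
    return float(-np.sum(np.log(1.0 - neg_p + eps)))
```

Note that as the probability assigned to a negative candidate grows, the penalty grows without bound, so gradient descent actively suppresses those tokens rather than merely failing to reward them.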

