SMRT Chatbots: Improving Non-Task-Oriented Dialog with Simulated Multiple Reference Training

11/01/2020
by   Huda Khayrallah, et al.

Non-task-oriented dialog models suffer from low-quality, non-diverse responses. To overcome limited conversational data, we apply Simulated Multiple Reference Training (SMRT; Khayrallah et al., 2020), using a paraphraser to simulate multiple reference responses per training prompt. We find SMRT improves over a strong Transformer baseline on human evaluation, automatic quality metrics, and lexical diversity. We also find SMRT is comparable to pretraining on human-rated quality, and outperforms pretraining on automatic quality and lexical diversity, without requiring related-domain dialog data.
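The core idea of SMRT, as described above, is to expand each training pair into several simulated references produced by a paraphraser. A minimal sketch of that data-expansion step is below; the toy `paraphrase` function is a hypothetical stand-in for the learned paraphraser the paper uses, and `simulate_multiple_references` is an illustrative name, not an API from the work:

```python
import random

def paraphrase(response, n):
    """Hypothetical stand-in for a learned paraphraser: returns n
    simulated alternative phrasings of the reference response."""
    templates = ["{}", "well, {}", "{} indeed", "i think {}", "honestly, {}"]
    return [t.format(response) for t in random.sample(templates, n)]

def simulate_multiple_references(dialog_pairs, n_refs=3):
    """Expand each (prompt, response) pair into n_refs training pairs,
    one per simulated reference response."""
    expanded = []
    for prompt, response in dialog_pairs:
        for ref in paraphrase(response, n_refs):
            expanded.append((prompt, ref))
    return expanded

pairs = [("how are you?", "i am fine")]
augmented = simulate_multiple_references(pairs, n_refs=3)
# each prompt now appears paired with 3 distinct simulated references
```

In the actual method, paraphrases would come from a trained paraphrase model (sampled during training rather than precomputed), so each prompt is effectively trained against many plausible responses instead of a single gold reference.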


Related research:

11/24/2019

Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context

Conversations have an intrinsic one-to-many property, which means that m...
02/08/2021

A Hybrid Task-Oriented Dialog System with Domain and Task Adaptive Pretraining

This paper describes our submission for the End-to-end Multi-domain Task...
07/24/2019

Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References

The aim of this paper is to mitigate the shortcomings of automatic evalu...
06/02/2019

Pretraining Methods for Dialog Context Representation Learning

This paper examines various unsupervised pretraining objectives for lear...
05/14/2019

Improving Neural Conversational Models with Entropy-Based Data Filtering

Current neural-network based conversational models lack diversity and ge...
03/22/2022

Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection

A limitation of current neural dialog models is that they tend to suffer...
09/23/2020

Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining

There is an increasing focus on model-based dialog evaluation metrics su...