Understanding and Improving the Exemplar-based Generation for Open-domain Conversation

12/13/2021
by Seungju Han, et al.

Exemplar-based generative models for open-domain conversation produce responses conditioned on exemplars provided by a retriever, combining the strengths of generative and retrieval models. However, they often ignore the retrieved exemplars when generating responses, or produce responses that are over-fitted to the retrieved exemplars. In this paper, we argue that these drawbacks stem from the one-to-many nature of open-domain conversation. When a retrieved exemplar is relevant to the given context yet significantly different from the gold response, the exemplar-based generative model is trained to ignore the exemplar, since the exemplar does not help it generate the gold response. Conversely, when the retrieved exemplar is lexically similar to the gold response, the model is trained to rely heavily on the exemplar. We therefore propose a training method that selects exemplars which are semantically relevant to the gold response but lexically distant from it, mitigating both drawbacks. In the training phase, our method first uses the gold response, instead of the dialogue context, as the retrieval query, so that the selected exemplars are semantically relevant to the gold response. It then discards exemplars that lexically resemble the gold response, reducing the generative model's dependence on such exemplars. Since the remaining exemplars are retrieved based on the gold response, they may still be irrelevant to the given context; our method therefore uses the relevance scores between the context and the exemplars to penalize irrelevant ones. Extensive experiments demonstrate that the proposed training method alleviates the drawbacks of existing exemplar-based generative models and significantly improves performance in terms of appropriateness and informativeness.
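
To make the selection procedure concrete, the following is a minimal sketch of the training-time exemplar selection described in the abstract. The retriever interface (the `search` and `relevance` callables), the token-level Jaccard overlap as the lexical-similarity measure, and all thresholds are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the training-time exemplar selection described above.
# The retriever interface, the Jaccard-overlap filter, and the thresholds
# are illustrative assumptions, not the paper's actual implementation.
from typing import Callable, List, Tuple


def jaccard_overlap(a: str, b: str) -> float:
    """Token-level Jaccard similarity as a cheap lexical-overlap proxy."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    if not tokens_a and not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


def select_training_exemplars(
    context: str,
    gold_response: str,
    search: Callable[[str, int], List[str]],    # hypothetical: query -> top-k candidates
    relevance: Callable[[str, str], float],     # hypothetical: (context, exemplar) -> score
    k: int = 16,
    max_lexical_overlap: float = 0.5,
) -> List[Tuple[str, float]]:
    """Return (exemplar, context-relevance score) pairs for training.

    The relevance score is kept so the training loss can penalize
    exemplars that are irrelevant to the given context.
    """
    # Step 1: query the retriever with the gold response (not the context),
    # so the candidates are semantically relevant to the gold response.
    candidates = search(gold_response, k)

    # Step 2: drop candidates that lexically resemble the gold response,
    # so the generator cannot succeed by simply copying the exemplar.
    candidates = [
        c for c in candidates
        if jaccard_overlap(c, gold_response) < max_lexical_overlap
    ]

    # Step 3: score the survivors against the dialogue context; exemplars
    # retrieved via the gold response may be irrelevant to the context,
    # and this score lets the training loss down-weight them.
    return [(c, relevance(context, c)) for c in candidates]
```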

