Understanding and Improving the Exemplar-based Generation for Open-domain Conversation

12/13/2021
by Seungju Han, et al.

Exemplar-based generative models for open-domain conversation produce responses based on exemplars provided by a retriever, combining the strengths of generative and retrieval models. However, they often ignore the retrieved exemplars while generating responses, or produce responses over-fitted to the retrieved exemplars. In this paper, we argue that these drawbacks stem from the one-to-many problem of open-domain conversation. When the retrieved exemplar is relevant to the given context yet significantly different from the gold response, the exemplar-based generative model is trained to ignore the exemplar, since the exemplar is not helpful for generating the gold response. Conversely, when the retrieved exemplar is lexically similar to the gold response, the model is trained to rely heavily on the exemplar. To mitigate both disadvantages, we propose a training method that selects exemplars that are semantically relevant to the gold response but lexically distant from it. In the training phase, our method first uses the gold response, instead of the dialogue context, as the query to select exemplars that are semantically relevant to the gold response. It then eliminates the exemplars that lexically resemble the gold response, to alleviate the generative model's dependency on those exemplars. The remaining exemplars could still be irrelevant to the given context, since they are retrieved based on the gold response; thus, our method further utilizes the relevance scores between the given context and the exemplars to penalize irrelevant exemplars. Extensive experiments demonstrate that our training method alleviates the drawbacks of existing exemplar-based generative models and significantly improves performance in terms of appropriateness and informativeness.
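As a rough illustration of the exemplar-selection procedure described in the abstract, the Python sketch below encodes the three steps: retrieval with the gold response as the query, filtering of lexically similar exemplars, and context-relevance scoring. The names `retriever`, `relevance_scorer`, `lexical_overlap`, and the threshold values are placeholders introduced here for illustration; they are not components or hyperparameters from the paper.

```python
# Minimal sketch of the exemplar-selection step described in the abstract.
# All function and variable names here are illustrative, not from the paper.

from difflib import SequenceMatcher


def lexical_overlap(a: str, b: str) -> float:
    """Rough lexical similarity in [0, 1] (a stand-in for token-overlap metrics)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def select_training_exemplars(context: str,
                              gold_response: str,
                              candidate_pool: list[str],
                              retriever,          # assumed: retriever(query, pool) -> ranked list of exemplars
                              relevance_scorer,   # assumed: relevance_scorer(context, exemplar) -> float
                              overlap_threshold: float = 0.8,
                              top_k: int = 5):
    """Pick exemplars that are semantically close to the gold response but
    lexically different from it, then score them against the context."""
    # 1) Retrieve with the gold response (not the context) as the query,
    #    so the exemplars are semantically relevant to the gold response.
    candidates = retriever(gold_response, candidate_pool)

    # 2) Drop exemplars that lexically resemble the gold response,
    #    to keep the generator from simply copying them.
    filtered = [ex for ex in candidates
                if lexical_overlap(ex, gold_response) < overlap_threshold]

    # 3) Because retrieval used the gold response, some exemplars may not fit
    #    the context; keep a context-relevance score so training can penalize them.
    return [(ex, relevance_scorer(context, ex)) for ex in filtered[:top_k]]
```

In training, the retriever and relevance scorer would presumably be the model's own retrieval component, and the returned scores would be used to down-weight irrelevant exemplars in the loss; the exact weighting scheme is not specified in the abstract.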

Related research:

08/28/2021: Distilling the Knowledge of Large-scale Generative Models into Retrieval Models for Efficient Open-domain Conversation
Despite the remarkable performance of large-scale generative models in o...

10/04/2020: Generating Dialogue Responses from a Semantic Latent Space
Existing open-domain dialogue generation models are usually trained to m...

03/02/2023: Less is More: Mitigate Spurious Correlations for Open-Domain Dialogue Response Generation Models by Causal Discovery
In this paper, we conduct the first study on spurious correlations for o...

12/15/2017: Avoiding Echo-Responses in a Retrieval-Based Conversation System
Retrieval-based conversation systems generally tend to rank high respons...

05/10/2018: Improv Chat: Second Response Generation for Chatbot
Existing research on response generation for chatbot focuses on First Re...

02/16/2023: Search-Engine-augmented Dialogue Response Generation with Cheaply Supervised Query Production
Knowledge-aided dialogue response generation aims at augmenting chatbots...
