CORAL: Contextual Response Retrievability Loss Function for Training Dialog Generation Models

05/21/2022
by Bishal Santra, et al.

Natural Language Generation (NLG) represents a large collection of tasks in the field of NLP. While many of these tasks have been tackled well by the cross-entropy (CE) loss, the task of dialog generation poses a few unique challenges for this loss function. First, CE loss assumes that, for any given input, the only possible output is the one available as the ground truth in the training dataset. This is rarely true for any generation task, as there can be multiple semantically equivalent sentences, each with a different surface form. The problem is exacerbated further for dialog generation, as there can be multiple valid responses (for a given context) that not only have different surface forms but are also not semantically equivalent. Second, CE loss does not take the context into consideration while processing the response; hence, it treats all ground truths with equal importance irrespective of the context. But we may want our final agent to avoid certain classes of responses (e.g., bland, non-informative, or biased responses) and to assign relatively higher weight to more context-specific responses. To circumvent these shortcomings of the CE loss, in this paper we propose a novel loss function, CORAL, that directly optimizes recently proposed estimates of human preference for generated responses. Using CORAL, we can train dialog generation models without assuming that the ground truth is the only valid response. Moreover, the CORAL loss is computed from both the context and the response. Extensive comparisons on two benchmark datasets show that the proposed method outperforms strong state-of-the-art baselines of different model sizes.
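The abstract does not give the loss itself, but its description (a context-and-response-dependent preference estimate replacing CE's single-ground-truth assumption) suggests a reward-weighted likelihood objective. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' implementation: the names `retrievability_reward` and `reward_weighted_nll`, the cosine-similarity scorer, and all tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def retrievability_reward(ctx_emb, resp_emb):
    """Context-conditioned preference estimate for a response.

    Hypothetical stand-in for a trained retrieval (dual-encoder)
    scorer: cosine similarity between context and response embeddings.
    The reward depends on BOTH context and response, unlike CE, which
    scores the response against the ground truth alone.
    """
    return F.cosine_similarity(ctx_emb, resp_emb, dim=-1)


def reward_weighted_nll(seq_log_probs, reward):
    """REINFORCE-style surrogate: scale the log-likelihood of a sampled
    (not necessarily ground-truth) response by its context-dependent
    reward, so context-specific responses are reinforced while bland
    or off-context ones receive little credit."""
    return -(reward.detach() * seq_log_probs).mean()


# Toy usage with random stand-ins for encoder/decoder outputs.
batch, dim = 4, 128
ctx_emb = torch.randn(batch, dim)        # context encodings
resp_emb = torch.randn(batch, dim)       # sampled-response encodings
seq_log_probs = torch.rand(batch, requires_grad=True).log()  # log p(response | context)

reward = retrievability_reward(ctx_emb, resp_emb)
loss = reward_weighted_nll(seq_log_probs, reward)
loss.backward()  # gradients flow only through the model's log-probs
```

Detaching the reward treats the retrieval scorer as a fixed preference estimator, so only the generation model is updated; this matches the abstract's framing of the preference estimate as a training signal rather than a jointly trained component.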


