Dual Task Framework for Improving Persona-grounded Dialogue Dataset

02/11/2022
by   Minju Kim, et al.

This paper introduces a simple yet effective data-centric approach to improving persona-conditioned dialogue agents. Prior model-centric approaches unquestioningly depend on raw crowdsourced benchmark datasets such as Persona-Chat. In contrast, we aim to fix annotation artifacts in the benchmark dataset, which is orthogonally applicable to any dialogue model. Specifically, we augment relevant personas to improve the dialogue dataset/agent by leveraging the primal-dual structure of the two tasks: predicting dialogue responses and personas based on each other. Experiments on Persona-Chat show that our approach outperforms pre-trained LMs by 11.7 points in accuracy.
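As a rough illustration of the primal-dual idea, the sketch below scores each candidate persona in both directions and keeps only candidates that both a persona-to-response scorer and a response-to-persona scorer judge relevant to the dialogue. The function names, thresholds, and scorers here are illustrative stand-ins (simple lexical overlap), not the trained models or the exact augmentation procedure used in the paper.

import re
from typing import List


def _tokens(text: str) -> set:
    """Lowercase word tokens; a crude stand-in for real model inputs."""
    return set(re.findall(r"[a-z']+", text.lower()))


def _overlap(a: str, b: str) -> float:
    """Toy Jaccard overlap standing in for a learned model's relevance score."""
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / max(len(ta | tb), 1)


def score_response_given_persona(persona: str, response: str) -> float:
    """Primal-task stand-in: plausibility of the response under the persona."""
    return _overlap(persona, response)


def score_persona_given_response(response: str, persona: str) -> float:
    """Dual-task stand-in: how well the response supports the persona."""
    return _overlap(response, persona)


def augment_personas(dialogue_responses: List[str],
                     candidate_personas: List[str],
                     threshold: float = 0.15) -> List[str]:
    """Keep candidates that both directions agree are relevant to the dialogue."""
    kept = []
    for persona in candidate_personas:
        primal = max(score_response_given_persona(persona, r) for r in dialogue_responses)
        dual = max(score_persona_given_response(r, persona) for r in dialogue_responses)
        if min(primal, dual) >= threshold:
            kept.append(persona)
    return kept


if __name__ == "__main__":
    responses = ["I spend my weekends hiking with my dog in the mountains."]
    candidates = ["I love hiking.", "I am a professional chef."]
    print(augment_personas(responses, candidates))  # ['I love hiking.']

With trained models in place of the lexical stand-ins, each direction's score gates the other's candidates, which is the mutual-consistency intuition behind the primal-dual framing.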


