StyleDGPT: Stylized Response Generation with Pre-trained Language Models

10/06/2020
by Ze Yang, et al.

Generating responses in a desired style has great potential to extend the applications of open-domain dialogue systems, yet it is hindered by the lack of parallel data for training. In this work, we explore this challenging task with pre-trained language models, which have brought breakthroughs to various natural language tasks. To this end, we introduce a KL loss and a style classifier into the fine-tuning step in order to steer response generation toward the target style at both the word level and the sentence level. Comprehensive empirical studies on two public datasets indicate that our model significantly outperforms state-of-the-art methods in terms of both style consistency and contextual coherence.
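The abstract only summarizes the objective, but the overall idea can be illustrated with a small sketch. The following hypothetical PyTorch snippet (not the authors' released code) shows how a three-part fine-tuning loss of this kind might be assembled: a standard language-modeling loss on the response, a word-level KL term toward a language model trained on target-style text, and a sentence-level term from a style classifier. The model interfaces, the soft-embedding trick for differentiability, and the weights `alpha` and `beta` are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def stylized_finetune_loss(dialogue_model, style_lm, style_classifier,
                           input_ids, labels, alpha=0.1, beta=0.1):
    """Combine a response-generation loss with word- and sentence-level style signals.

    Assumptions: `dialogue_model` and `style_lm` are Hugging Face-style causal LMs
    (e.g. GPT-2) sharing a vocabulary; `style_classifier` is a hypothetical module
    that accepts soft token embeddings and returns style logits; `alpha` and `beta`
    are illustrative weighting hyperparameters.
    """
    # Standard maximum-likelihood loss on the (context, response) sequence.
    out = dialogue_model(input_ids=input_ids, labels=labels)
    lm_loss, logits = out.loss, out.logits                        # logits: (B, T, V)

    # Word level: KL divergence pulling the dialogue model's next-token
    # distribution toward a language model trained on the target-style corpus.
    with torch.no_grad():
        style_logits = style_lm(input_ids=input_ids).logits
    kl_loss = F.kl_div(F.log_softmax(logits, dim=-1),
                       F.softmax(style_logits, dim=-1),
                       reduction="batchmean")

    # Sentence level: a style classifier scores the generated response. Hard
    # decoding is non-differentiable, so this sketch feeds the classifier the
    # expected ("soft") token embeddings under the model's output distribution.
    probs = F.softmax(logits, dim=-1)                             # (B, T, V)
    embed_matrix = dialogue_model.get_input_embeddings().weight   # (V, H)
    soft_embeds = probs @ embed_matrix                            # (B, T, H)
    cls_logits = style_classifier(inputs_embeds=soft_embeds)      # (B, num_styles)
    target = torch.ones(cls_logits.size(0), dtype=torch.long,
                        device=cls_logits.device)                 # label 1 = target style
    style_loss = F.cross_entropy(cls_logits, target)

    return lm_loss + alpha * kl_loss + beta * style_loss
```

In a training loop, this loss would simply replace the usual language-modeling loss during fine-tuning, with `alpha` and `beta` tuned to balance style strength against contextual coherence.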
