StyleDGPT: Stylized Response Generation with Pre-trained Language Models

by Ze Yang, et al.

Generating responses in a desired style has great potential to extend the applications of open-domain dialogue systems, yet progress is hindered by the lack of parallel data for training. In this work, we explore this challenging task with pre-trained language models, which have brought breakthroughs to a variety of natural language tasks. To this end, we introduce a KL loss and a style classifier into the fine-tuning step in order to steer response generation towards the target style at both the word level and the sentence level. Comprehensive empirical studies on two public datasets indicate that our model significantly outperforms state-of-the-art methods in terms of both style consistency and contextual coherence.
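The abstract describes a fine-tuning objective that combines the usual generation loss with a word-level KL term (pulling the response model's token distribution towards a style language model) and a sentence-level style-classifier term. Below is a minimal NumPy sketch of such a combined objective; the function name, the weighting scheme (`alpha`, `beta`), and the exact form of each term are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    """Row-wise softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def stylized_loss(lm_logits, style_lm_logits, target_ids, style_prob,
                  alpha=0.1, beta=0.1, eps=1e-9):
    """Hypothetical combined fine-tuning objective.

    lm_logits:       (T, V) logits of the response generator.
    style_lm_logits: (T, V) logits of a language model trained on stylized text.
    target_ids:      length-T gold token ids of the reference response.
    style_prob:      classifier probability that the generated response
                     carries the target style (sentence level).
    """
    p = softmax(lm_logits)        # response model's token distributions
    q = softmax(style_lm_logits)  # style LM's token distributions

    # Standard cross-entropy on the gold response tokens.
    ce = -np.mean(np.log(p[np.arange(len(target_ids)), target_ids] + eps))

    # Word-level: KL divergence pulling p towards the style LM's distribution q.
    kl = np.mean(np.sum(q * (np.log(q + eps) - np.log(p + eps)), axis=-1))

    # Sentence-level: negative log-likelihood of the target style label.
    cls = -np.log(style_prob + eps)

    return ce + alpha * kl + beta * cls
```

With uniform logits the KL term vanishes and the loss reduces to cross-entropy plus the weighted classifier penalty, which makes the relative contribution of each term easy to inspect while tuning `alpha` and `beta`.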


