Polite Dialogue Generation Without Parallel Data

05/08/2018 · by Tong Niu, et al.

Stylistic dialogue response generation, with valuable applications in personality-based conversational agents, is a challenging task because the response needs to be fluent, contextually-relevant, as well as paralinguistically accurate. Moreover, parallel datasets for regular-to-stylistic pairs are usually unavailable. We present three weakly-supervised models that can generate diverse polite (or rude) dialogue responses without parallel data. Our late fusion model (Fusion) merges the decoder of an encoder-attention-decoder dialogue model with a language model trained on stand-alone polite utterances. Our label-fine-tuning (LFT) model prepends to each source sequence a politeness-score scaled label (predicted by our state-of-the-art politeness classifier) during training, and at test time is able to generate polite, neutral, and rude responses by simply scaling the label embedding by the corresponding score. Our reinforcement learning model (Polite-RL) encourages politeness generation by assigning rewards proportional to the politeness classifier score of the sampled response. We also present two retrieval-based polite dialogue model baselines. Human evaluation validates that while the Fusion and the retrieval-based models achieve politeness with poorer context-relevance, the LFT and Polite-RL models can produce significantly more polite responses without sacrificing dialogue quality.
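To make the label-fine-tuning idea concrete, here is a minimal illustrative sketch (not the paper's implementation): a learned politeness-label embedding is scaled by the classifier's politeness score and prepended to the source-token embeddings, so a single model can be steered toward polite, neutral, or rude output at test time by varying the score. The embedding size and vectors below are toy assumptions.

```python
import numpy as np

EMBED_DIM = 4  # toy embedding size (assumption, not from the paper)

rng = np.random.default_rng(0)
# Stand-in for a learned politeness-label embedding vector.
label_embedding = rng.standard_normal(EMBED_DIM)

def prepend_scaled_label(source_embeddings, politeness_score):
    """Scale the label embedding by the politeness score and prepend it
    to the source sequence, as in the LFT model's input construction."""
    scaled = politeness_score * label_embedding
    return np.vstack([scaled, source_embeddings])

# A 3-token source sequence; steer polite (score near 1.0) or rude (near 0.0).
source = rng.standard_normal((3, EMBED_DIM))
polite_input = prepend_scaled_label(source, 0.9)
rude_input = prepend_scaled_label(source, 0.1)
```

In the same spirit, the Polite-RL model would instead leave the input unchanged and add a reward term proportional to the classifier's politeness score of each sampled response during training.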

