Controlling Style in Generated Dialogue

09/22/2020
by Eric Michael Smith, et al.

Open-domain conversation models have become good at generating natural-sounding dialogue, using very large architectures with billions of trainable parameters. The vast training data required to train these architectures aggregates many different styles, tones, and qualities. Using that data to train a single model makes it difficult to use the model as a consistent conversational agent, e.g. one with a stable set of persona traits and a typical style of expression. Several architectures affording control mechanisms over generation have been proposed, each with different trade-offs. However, it remains unclear whether their use in dialogue is viable, and what the trade-offs look like with the most recent state-of-the-art conversational architectures. In this work, we adapt three previously proposed controllable generation architectures to open-domain dialogue generation, controlling the style of the generation to match one among about 200 possible styles. We compare their respective performance and trade-offs, and show how they can be used to provide insights into existing conversational datasets and to generate a varied set of styled conversation replies.
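To give a concrete sense of what "controlling the style" of a reply can look like in practice, the sketch below shows one common control mechanism: conditioning generation on a style control token prepended to the dialogue context. This is a minimal illustration, not the paper's exact method; the model name (facebook/blenderbot-400M-distill), the [STYLE:...] token format, and the example style label are all assumptions made for the sketch.

```python
# Minimal sketch of style-conditioned dialogue generation via a control token.
# The model, token format, and style label below are illustrative assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "facebook/blenderbot-400M-distill"  # assumed stand-in conversational model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def styled_reply(history: list[str], style: str) -> str:
    """Generate a reply whose style is steered by a control token."""
    context = " ".join(history)
    # Hypothetical control-token format; in a conditional-training setup the
    # model would be fine-tuned with such tokens so it learns the association.
    prompt = f"[STYLE:{style}] {context}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=60, num_beams=5)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(styled_reply(["I just got back from a hiking trip."], "Enthusiastic"))
```

Without that fine-tuning step, an off-the-shelf model will largely ignore the token; the snippet only shows where the style signal enters the input, and how swapping in a different label from a fixed inventory (about 200 styles in this work) would steer the reply.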

Related research

04/04/2019 · An End-to-End Conversational Style Matching Agent
08/24/2021 · Taming the Beast: Learning to Control Neural Conversational Models
06/10/2016 · Conditional Generation and Snapshot Learning in Neural Dialogue Systems
04/23/2021 · Prediction, Selection, and Generation: Exploration of Knowledge-Driven Conversation System
04/10/2022 · Reducing Model Jitter: Stable Re-training of Semantic Parsers in Production Environments
06/15/2016 · Natural Language Generation as Planning under Uncertainty Using Reinforcement Learning
03/31/2022 · A survey of neural models for the automatic analysis of conversation: Towards a better integration of the social sciences
