Controlling Personality Style in Dialogue with Zero-Shot Prompt-Based Learning

02/08/2023
by Angela Ramirez, et al.

Prompt-based or in-context learning has achieved high zero-shot performance on many natural language generation (NLG) tasks. Here we explore the performance of prompt-based learning for simultaneously controlling the personality and the semantic accuracy of an NLG system for task-oriented dialogue. We experiment with prompt-based learning on the PERSONAGE restaurant recommendation corpus to generate semantically and stylistically controlled text for five different Big-5 personality types: agreeable, disagreeable, conscientious, unconscientious, and extravert. We test two different classes of discrete prompts for generating utterances in a particular personality style: (1) prompts with demonstrations of generating directly from a meaning representation that includes a personality specification; and (2) prompts that first convert the meaning representation to a textual pseudo-reference, and then use the pseudo-reference in a textual style transfer (TST) prompt. In each case, we show that performance can be vastly improved by over-generating outputs and ranking them, and we test several ranking functions based on automatic metrics for semantic accuracy, personality match, and fluency. We also test whether NLG personality demonstrations from the restaurant domain can be used with meaning representations from the video game domain to generate personality-stylized utterances about video games. Our findings show that the TST prompts produce the highest semantic accuracy (78.46%) and personality accuracy (100%), and our results on transferring personality style to video game utterances are surprisingly good. To our knowledge, there is no previous work testing the application of prompt-based learning to simultaneously controlling both style and semantic accuracy in NLG.
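The over-generate-and-rank setup described in the abstract is easiest to see in a small sketch. The snippet below is illustrative only: the prompt formats, the generate_fn, semantic_score, personality_score, and fluency_score callables, and the equal weighting of the three metrics are assumptions made for exposition, not the paper's actual implementation.

```python
from typing import Callable, Dict, List

def mr_prompt(mr: Dict[str, str], personality: str, demos: List[str]) -> str:
    """Class-(1) prompt: demonstrations that map a meaning representation
    plus a personality tag directly to a stylized utterance."""
    attrs = ", ".join(f"{slot}[{value}]" for slot, value in mr.items())
    return "\n".join(demos + [f"personality[{personality}], {attrs} ->"])

def tst_prompt(pseudo_reference: str, personality: str, demos: List[str]) -> str:
    """Class-(2) prompt: rewrite a neutral pseudo-reference realization of the
    meaning representation into the target personality style (textual style transfer)."""
    return "\n".join(demos + [f"Rewrite in a {personality} style: {pseudo_reference} ->"])

def overgenerate_and_rank(
    prompt: str,
    generate_fn: Callable[[str, int], List[str]],   # sampling wrapper around a language model
    semantic_score: Callable[[str], float],          # e.g. slot-error-rate-based accuracy
    personality_score: Callable[[str], float],       # e.g. personality classifier probability
    fluency_score: Callable[[str], float],           # e.g. LM perplexity mapped to [0, 1]
    n_samples: int = 10,
) -> str:
    """Sample many candidate utterances, then return the one ranked best by a
    combined semantic-accuracy / personality-match / fluency score."""
    candidates = generate_fn(prompt, n_samples)
    return max(
        candidates,
        key=lambda u: semantic_score(u) + personality_score(u) + fluency_score(u),
    )
```

In the paper's setting, generate_fn would wrap sampling from a large language model, and the three scorers would correspond to the automatic metrics used in the ranking functions.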


