Curate and Generate: A Corpus and Method for Joint Control of Semantics and Style in Neural NLG

by   Shereen Oraby, et al.

Neural natural language generation (NNLG) from structured meaning representations has become increasingly popular in recent years. While we have seen progress in generating syntactically correct utterances that preserve semantics, various shortcomings of NNLG systems are clear: new tasks require new training data that is not available or straightforward to acquire, and model outputs are simple and may be dull and repetitive. This paper addresses these two critical challenges in NNLG by: (1) scalably (and at no cost) creating training datasets of parallel meaning representations and reference texts with rich style markup, using data from freely available and naturally descriptive user reviews, and (2) systematically exploring how the style markup enables joint control of semantic and stylistic aspects of neural model output. We present YelpNLG, a corpus of 300,000 rich, parallel meaning representations and highly stylistically varied reference texts spanning different restaurant attributes, and describe a novel methodology that can be scalably reused to generate NLG datasets for other domains. Our experiments show that the models can control important stylistic aspects, including lexical choice of adjectives, output length, and sentiment, allowing them to successfully hit multiple style targets without sacrificing semantics.
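To illustrate the kind of input such a corpus provides, here is a minimal sketch, not the authors' released code, of how (attribute, value, adjective) slots extracted from a review sentence might be linearized together with style features (e.g., sentiment and a length bin) into a single source string for a sequence-to-sequence generator. All function and token names here are hypothetical assumptions for illustration.

```python
# Hypothetical sketch: linearize a meaning representation plus style
# markup into one source string for a seq2seq NLG model.
# Slot/token formats below are illustrative, not the YelpNLG schema.

def linearize_mr(attributes, sentiment, length_bin):
    """Flatten (attribute, value, adjective) triples and style features
    into a single whitespace-separated source string."""
    slots = [f"[{attr}={val}|adj={adj}]" for attr, val, adj in attributes]
    style = f"[sentiment={sentiment}] [len={length_bin}]"
    return " ".join(slots + [style])

mr = linearize_mr(
    [("food", "pizza", "delicious"), ("service", "staff", "friendly")],
    sentiment="positive",
    length_bin="short",
)
print(mr)
# -> [food=pizza|adj=delicious] [service=staff|adj=friendly] [sentiment=positive] [len=short]
```

At generation time, varying only the style tokens (e.g., flipping the sentiment marker or the length bin) while keeping the semantic slots fixed is one simple way a trained model can be steered toward different style targets without changing the content to be expressed.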




