AGGGEN: Ordering and Aggregating while Generating

by Xinnuo Xu, et al.
Charles University in Prague
Heriot-Watt University

We present AGGGEN (pronounced 'again'), a data-to-text model that re-introduces two explicit sentence planning stages into neural data-to-text systems: input ordering and input aggregation. In contrast to previous work on sentence planning, our model remains end-to-end: AGGGEN performs sentence planning at the same time as generating text, by learning latent alignments (via semantic facts) between the input representation and the target text. Experiments on the WebNLG and E2E challenge data show that by using fact-based alignments our approach is more interpretable, expressive, robust to noise, and easier to control, while retaining the fluency advantages of end-to-end systems. Our code is available at


End-to-End Neural Sentence Ordering Using Pointer Network

Sentence ordering is one of the important tasks in NLP. Previous works mainl...

Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation

Data-to-text generation can be conceptually divided into two parts: orde...

Neural Sentence Ordering Based on Constraint Graphs

Sentence ordering aims at arranging a list of sentences in the correct o...

Getting the Most out of Simile Recognition

Simile recognition involves two subtasks: simile sentence classification...

Concept Algebra for Text-Controlled Vision Models

This paper concerns the control of text-guided generative models, where ...

Hierarchical Text Generation and Planning for Strategic Dialogue

End-to-end models for strategic dialogue are challenging to train, becau...

Noisy Channel for Automatic Text Simplification

In this paper we present a simple re-ranking method for Automatic Senten...
