NeuralREG: An end-to-end approach to referring expression generation

05/21/2018
by Thiago Castro Ferreira et al.

Traditionally, Referring Expression Generation (REG) models first decide on the form and then on the content of references to discourse entities in text, typically relying on features such as salience and grammatical function. In this paper, we present a new approach (NeuralREG), relying on deep neural networks, which makes decisions about form and content in one go without explicit feature extraction. Using a delexicalized version of the WebNLG corpus, we show that the neural model substantially improves over two strong baselines. Data and models are publicly available.
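As an illustration of the delexicalization the abstract mentions, the sketch below shows the general idea: entity mentions in a sentence are replaced with abstract tags, and a REG model later fills each tag with a contextually appropriate referring expression. The function names, tag format, and example sentence are illustrative assumptions, not the paper's actual preprocessing code.

```python
# Illustrative sketch of delexicalization for WebNLG-style data.
# Function names, tag format, and the example are hypothetical,
# not taken from the NeuralREG implementation.

def delexicalize(sentence: str, entities: dict) -> str:
    """Replace each surface mention with its entity tag (e.g. ENTITY-1)."""
    for tag, mention in entities.items():
        sentence = sentence.replace(mention, tag)
    return sentence

def relexicalize(template: str, referring_expressions: dict) -> str:
    """Fill each entity slot with a generated referring expression."""
    for tag, ref in referring_expressions.items():
        template = template.replace(tag, ref)
    return template

entities = {"ENTITY-1": "John E. Blaha", "ENTITY-2": "San Antonio"}
sentence = "John E. Blaha was born in San Antonio."

template = delexicalize(sentence, entities)
# template == "ENTITY-1 was born in ENTITY-2."

# A REG model such as NeuralREG would then predict, for each tag in
# context, a referring expression (proper name, pronoun, description...),
# deciding form and content jointly; here we hard-code one possible output:
refs = {"ENTITY-1": "he", "ENTITY-2": "San Antonio"}
output = relexicalize(template, refs)
# output == "he was born in San Antonio."
```

The neural model operates on such templates, so its only job is choosing references for the entity slots rather than generating the full sentence.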


Related research

- Data-to-Text Generation with Content Selection and Planning (09/03/2018): Recent advances in data-to-text generation have led to the use of large-...
- Neural data-to-text generation: A comparison between pipeline and end-to-end architectures (08/23/2019): Traditionally, most data-to-text applications have been designed using a...
- Comparing Computational Architectures for Automated Journalism (10/08/2022): The majority of NLG systems have been designed following either a templa...
- What can Neural Referential Form Selectors Learn? (08/15/2021): Despite achieving encouraging results, neural Referring Expression Gener...
- Neural Generation for Czech: Data and Baselines (10/11/2019): We present the first dataset targeted at end-to-end NLG in Czech in the ...
- Referring Expression Generation Using Entity Profiles (09/04/2019): Referring Expression Generation (REG) is the task of generating contextu...
- OntoGUM: Evaluating Contextualized SOTA Coreference Resolution on 12 More Genres (06/02/2021): SOTA coreference resolution produces increasingly impressive scores on t...
