A Graph-to-Sequence Model for AMR-to-Text Generation

by Linfeng Song, et al.

The problem of AMR-to-text generation is to recover a text representing the same meaning as an input AMR graph. The current state-of-the-art method uses a sequence-to-sequence model, leveraging an LSTM to encode a linearized AMR structure. Although able to model non-local semantic information, a sequence LSTM can lose information from the AMR graph structure, and thus faces challenges with large graphs, which yield long linearized sequences. We introduce a neural graph-to-sequence model, using a novel LSTM structure to directly encode graph-level semantics. On a standard benchmark, our model shows superior results to existing methods in the literature.
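The core idea of encoding graph-level semantics with an LSTM can be illustrated with a toy graph-state recurrence: each AMR node keeps an (h, c) pair, and at every "graph step" a node gathers its neighbors' hidden states before applying standard LSTM gating, so information propagates along graph edges rather than along a linearization. The sketch below is illustrative only and assumes a simplified parameterization (a single message sum, one weight matrix per gate); it is not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden size (arbitrary for this demo)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GraphStateLSTM:
    """Minimal graph-state LSTM sketch: one gate set over
    [node input; summed neighbor hidden states]."""

    def __init__(self, dim, rng):
        self.dim = dim
        # one weight matrix and bias per gate (input, forget, output, cell)
        self.W = {g: rng.normal(0, 0.1, (dim, 2 * dim)) for g in "ifoc"}
        self.b = {g: np.zeros(dim) for g in "ifoc"}

    def step(self, x, h, c, adj):
        # adj: {node: [neighbor nodes]}; x, h, c: {node: vector}
        h_new, c_new = {}, {}
        for v, nbrs in adj.items():
            # aggregate neighbor hidden states into one message vector
            m = sum((h[u] for u in nbrs), np.zeros(self.dim))
            z = np.concatenate([x[v], m])
            i = sigmoid(self.W["i"] @ z + self.b["i"])
            f = sigmoid(self.W["f"] @ z + self.b["f"])
            o = sigmoid(self.W["o"] @ z + self.b["o"])
            g = np.tanh(self.W["c"] @ z + self.b["c"])
            c_new[v] = f * c[v] + i * g
            h_new[v] = o * np.tanh(c_new[v])
        return h_new, c_new

# Toy AMR-like graph for "the boy wants to go" (node names are illustrative)
adj = {"want": ["boy", "go"], "boy": ["want"], "go": ["want", "boy"]}
x = {v: rng.normal(0, 0.1, D) for v in adj}
h = {v: np.zeros(D) for v in adj}
c = {v: np.zeros(D) for v in adj}

lstm = GraphStateLSTM(D, rng)
for _ in range(3):  # a few graph steps spread information across the graph
    h, c = lstm.step(x, h, c, adj)
```

After a few steps, each node's state reflects its multi-hop neighborhood, which is how a graph encoder avoids the information loss of forcing the graph into a single sequence.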



Deep Graph Convolutional Encoders for Structured Data to Text Generation

Most previous work on neural text generation from graph-structured data ...

AMR-to-Text Generation with Cache Transition Systems

Text generation from AMR involves emitting sentences that reflect the me...

Online Back-Parsing for AMR-to-Text Generation

AMR-to-text generation aims to recover a text containing the same meanin...

Clinical Text Generation through Leveraging Medical Concept and Relations

With a neural sequence generation model, this study aims to develop a me...

Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation

AMR-to-text generation is used to transduce Abstract Meaning Representat...

Sparse Graph to Sequence Learning for Vision Conditioned Long Textual Sequence Generation

Generating longer textual sequences when conditioned on the visual infor...

Structural Information Preserving for Graph-to-Text Generation

The task of graph-to-text generation aims at producing sentences that pr...
