Modeling Graph Structure via Relative Position for Better Text Generation from Knowledge Graphs

by Martin Schmitt, et al.

We present a novel encoder-decoder architecture for graph-to-text generation based on the Transformer, called the Graformer. With our novel graph self-attention, every node in the input graph is taken into account for the encoding of every other node, not only its direct neighbors, facilitating the detection of global patterns. To this end, the relation between any two nodes is characterized by the length of the shortest path between them, including the special case where no such path exists. The Graformer learns to weigh these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph. We evaluate the Graformer on two graph-to-text generation benchmarks, the AGENDA dataset and the WebNLG challenge dataset, where it achieves strong performance while using significantly fewer parameters than other approaches.
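The core idea described above, relating node pairs by shortest-path length and turning those distances into per-head attention biases, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the distance cap, and the "no path" bucket encoding are assumptions, and the bias table is randomly initialized where the model would learn it during training.

```python
import numpy as np

def shortest_path_lengths(adj, max_dist):
    """All-pairs shortest-path lengths via BFS from each node.
    Distances are capped at max_dist; unreachable pairs get the
    special bucket max_dist + 1 (the 'no such path' case)."""
    n = len(adj)
    dist = np.full((n, n), max_dist + 1, dtype=int)
    for src in range(n):
        dist[src, src] = 0
        frontier, d = [src], 0
        while frontier and d < max_dist:
            d += 1
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if dist[src, v] > d:
                        dist[src, v] = d
                        nxt.append(v)
            frontier = nxt
    return dist

def graph_attention_bias(dist, num_heads, rng):
    """One learned scalar per (head, distance bucket), gathered into an
    (num_heads, n, n) bias that would be added to attention logits,
    letting each head weigh node-node relations differently."""
    num_buckets = int(dist.max()) + 1
    table = rng.standard_normal((num_heads, num_buckets))
    return table[:, dist]  # shape: (num_heads, n, n)
```

For a path graph 0-1-2 with an isolated node 3, `shortest_path_lengths` assigns distance 2 between nodes 0 and 2 and the unreachable bucket between 0 and 3, so even disconnected node pairs receive a (learnable) attention bias rather than being masked out.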


