Modeling Graph Structure via Relative Position for Better Text Generation from Knowledge Graphs

06/16/2020
by   Martin Schmitt, et al.

We present a novel encoder-decoder architecture for graph-to-text generation based on the Transformer, called the Graformer. With our novel graph self-attention, every node in the input graph is taken into account for the encoding of every other node, not only its direct neighbors, facilitating the detection of global patterns. For this, the relation between any two nodes is characterized by the length of the shortest path between them, including the special case when no such path exists. The Graformer learns to weigh these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph. We evaluate the Graformer on two graph-to-text generation benchmarks, the AGENDA dataset and the WebNLG challenge dataset, where it achieves strong performance while using significantly fewer parameters than other approaches.
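The core idea of distance-based graph self-attention can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the maximum-distance cutoff, and the per-head scalar biases are all assumptions, and a special bucket is reserved for node pairs with no connecting path.

```python
from collections import deque

import numpy as np


def shortest_path_buckets(adj, max_dist=4):
    """All-pairs shortest-path lengths via BFS, clipped to max_dist.
    Unreachable pairs get the special bucket max_dist + 1."""
    n = len(adj)
    buckets = np.full((n, n), max_dist + 1, dtype=int)  # default: no path
    for src in range(n):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for v, d in dist.items():
            buckets[src, v] = min(d, max_dist)
    return buckets


# Hypothetical learned parameters: one scalar bias per attention head
# and per distance bucket (buckets 0..max_dist plus the "no path" bucket).
rng = np.random.default_rng(0)
num_heads, num_buckets = 4, 6
head_bias = rng.normal(size=(num_heads, num_buckets))

# Toy graph: a 0-1-2 chain plus an isolated node 3.
adj = {0: [1], 1: [0, 2], 2: [1], 3: []}
buckets = shortest_path_buckets(adj)

# Each head adds its own bias to the raw attention scores before softmax,
# so different heads can emphasize different distance ranges.
bias = head_bias[:, buckets]  # shape (num_heads, n, n)
```

Because each head indexes its own bias row, one head can learn to attend mostly to direct neighbors (distance 1) while another focuses on distant or even disconnected nodes, effectively giving each head a differently connected view of the same graph.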


Related research

01/29/2020
Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs
Recent graph-to-text models generate text from graph-based data using ei...

08/31/2019
Modeling Graph Structure in Transformer for Better AMR-to-Text Generation
Recent studies on AMR-to-text generation often formalize the task as a s...

08/27/2021
Latent Tree Decomposition Parsers for AMR-to-Text Generation
Graph encoders in AMR-to-text generation models often rely on neighborho...

04/04/2019
Text Generation from Knowledge Graphs with Graph Transformers
Generating texts which express complex ideas spanning multiple sentences...

09/15/2022
Graph-to-Text Generation with Dynamic Structure Pruning
Most graph-to-text works are built on the encoder-decoder framework with...

02/01/2017
AMR-to-text Generation with Synchronous Node Replacement Grammar
This paper addresses the task of AMR-to-text generation by leveraging sy...

05/10/2021
R2D2: Relational Text Decoding with Transformers
We propose a novel framework for modeling the interaction between graphi...
