GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation

04/13/2022
by Anthony Colas, et al.

Recent improvements in KG-to-text generation are due to auxiliary pre-training tasks designed to boost performance on the fine-tuning task. These tasks require extensive computational resources while yielding only marginal improvements. Here, we demonstrate that by fusing graph-aware elements into existing pre-trained language models, we can outperform state-of-the-art models and close the gap left by additional pre-training tasks. We do so by proposing a mask structure that captures neighborhood information and a novel type encoder that adds a bias to the graph-attention weights depending on the connection type. Experiments on two KG-to-text benchmark datasets show our models to be superior in quality while involving fewer parameters and no additional pre-training tasks. By formulating the problem as a framework, we can interchange the various proposed components and begin interpreting KG-to-text generative models based on the topological and type information found in a graph.
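The abstract names two graph-aware components: a neighborhood mask and a type-conditioned bias on the attention logits. The page does not include code, but the idea can be sketched concretely. Below is a minimal PyTorch sketch of one attention step that fuses both; the function name graph_aware_attention and all tensor shapes are illustrative assumptions, not GAP's actual implementation.

    import torch

    def graph_aware_attention(q, k, v, adjacency, type_ids, type_bias):
        """Single-head attention with a graph mask and a type-conditioned bias.

        A minimal sketch (not GAP's released code); all names and shapes
        are illustrative assumptions.

        q, k, v:    (batch, seq, dim) projected query/key/value states for
                    the linearized KG tokens
        adjacency:  (batch, seq, seq) bool; True where position j lies in the
                    graph neighborhood of position i (include self-loops so
                    no row is fully masked)
        type_ids:   (batch, seq, seq) long; a connection-type id for each
                    (i, j) pair, e.g. entity-entity, entity-relation, self
        type_bias:  (num_types,) learnable scalar bias per connection type
        """
        dim = q.size(-1)
        scores = q @ k.transpose(-2, -1) / dim ** 0.5     # (batch, seq, seq)
        # Type encoder: shift each attention logit by a learned bias chosen
        # according to the connection type between positions i and j.
        scores = scores + type_bias[type_ids]
        # Neighborhood mask: positions outside the graph neighborhood are
        # excluded, so each token aggregates only its neighbors' information.
        scores = scores.masked_fill(~adjacency, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v

    # Toy usage with hypothetical shapes.
    B, S, D, T = 2, 6, 64, 4
    q = k = v = torch.randn(B, S, D)
    adjacency = torch.eye(S, dtype=torch.bool).expand(B, S, S)  # self-loops only
    type_ids = torch.zeros(B, S, S, dtype=torch.long)
    type_bias = torch.nn.Parameter(torch.zeros(T))
    out = graph_aware_attention(q, k, v, adjacency, type_ids, type_bias)  # (2, 6, 64)

Because the bias enters the logits before the softmax, each connection type effectively reweights neighbors after normalization; that is one plausible reading of "adds a bias to the graph-attention weights", and the actual GAP design may differ.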

Related research

05/17/2021 · Stage-wise Fine-tuning for Graph-to-Text Generation
Graph-to-text generation has benefited from pre-trained language models ...

06/19/2021 · JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs
Existing pre-trained models for knowledge-graph-to-text (KG-to-text) gen...

08/12/2023 · Generating Faithful Text From a Knowledge Graph with Noisy Reference Text
Knowledge Graph (KG)-to-Text generation aims at generating fluent natura...

06/30/2020 · Technical Report: Auxiliary Tuning and its Application to Conditional Text Generation
We introduce a simple and efficient method, called Auxiliary Tuning, for...

05/24/2023 · Text-Augmented Open Knowledge Graph Completion via Pre-Trained Language Models
The mission of open knowledge graph (KG) completion is to draw new findi...

10/09/2020 · Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation
AMR-to-text generation is used to transduce Abstract Meaning Representat...

09/30/2021 · Self-conditioning pre-trained language models
We study the presence of expert units in pre-trained Transformer-based L...
