JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs

06/19/2021
by Pei Ke, et al.

Existing pre-trained models for knowledge-graph-to-text (KG-to-text) generation simply fine-tune text-to-text pre-trained models such as BART or T5 on KG-to-text datasets, largely ignoring the graph structure during encoding and lacking elaborate pre-training tasks that explicitly model graph-text alignments. To tackle these problems, we propose a graph-text joint representation learning model called JointGT. During encoding, we devise a structure-aware semantic aggregation module that is plugged into each Transformer layer to preserve the graph structure. Furthermore, we propose three new pre-training tasks to explicitly enhance graph-text alignment: graph-enhanced text reconstruction, text-enhanced graph reconstruction, and graph-text alignment in the embedding space via Optimal Transport. Experiments show that JointGT achieves new state-of-the-art performance on various KG-to-text datasets.
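The sketch below is a rough illustration of the two ideas named in the abstract, not the authors' implementation: a Transformer encoder layer whose self-attention is masked by the KG adjacency matrix (one simple reading of a structure-aware semantic aggregation module), and an entropic Optimal Transport (Sinkhorn) loss that aligns graph-node embeddings with text-token embeddings. All class and function names (StructureAwareLayer, sinkhorn_ot_loss) are hypothetical.

# Hypothetical sketch (not the authors' code): graph-masked self-attention as a
# stand-in for structure-aware aggregation, plus a Sinkhorn-style OT alignment loss.
import torch
import torch.nn as nn


class StructureAwareLayer(nn.Module):
    """Transformer layer whose self-attention only follows edges of the input graph."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (batch, nodes, nodes) boolean adjacency; True means an edge exists.
        # MultiheadAttention blocks positions where the mask is True, so invert
        # the adjacency and repeat it once per attention head.
        mask = (~adj).repeat_interleave(self.attn.num_heads, dim=0)
        h, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.norm1(x + h)
        return self.norm2(x + self.ffn(x))


def sinkhorn_ot_loss(graph_emb, text_emb, n_iters=20, eps=0.1):
    """Entropic OT distance between node embeddings and token embeddings."""
    cost = torch.cdist(graph_emb, text_emb, p=2)          # (n_nodes, n_tokens)
    cost = cost / (cost.max() + 1e-8)                     # scale to [0, 1] for stability
    K = torch.exp(-cost / eps)
    u = torch.full((cost.size(0),), 1.0 / cost.size(0))   # uniform node weights
    v = torch.full((cost.size(1),), 1.0 / cost.size(1))   # uniform token weights
    a, b = u.clone(), v.clone()
    for _ in range(n_iters):                              # Sinkhorn iterations
        a = u / (K @ b)
        b = v / (K.t() @ a)
    plan = torch.diag(a) @ K @ torch.diag(b)              # approximate transport plan
    return (plan * cost).sum()                            # alignment loss


if __name__ == "__main__":
    layer = StructureAwareLayer()
    nodes = torch.randn(1, 5, 256)                                       # 5 KG nodes
    adj = (torch.rand(1, 5, 5) > 0.5) | torch.eye(5, dtype=torch.bool)   # keep self-loops
    encoded = layer(nodes, adj)
    loss = sinkhorn_ot_loss(encoded[0], torch.randn(8, 256))             # 8 text tokens
    print(encoded.shape, float(loss))

In this reading, the adjacency mask restricts information flow to graph neighbors inside the attention layer, and the OT loss encourages node and token representations to be mutually transportable at low cost; the actual JointGT modules and pre-training objectives may differ in detail.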


Related research:

Graph Pre-training for AMR Parsing and Generation (03/15/2022)
Abstract meaning representation (AMR) highlights the core semantic infor...

GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation (04/13/2022)
Recent improvements in KG-to-text generation are due to additional auxil...

Structural Adapters in Pretrained Language Models for AMR-to-text Generation (03/16/2021)
Previous work on text generation from graph-structured data relies on pr...

Knowledge Graph Empowered Entity Description Generation (04/30/2020)
Existing works on KG-to-text generation take as input a few RDF triples ...

Representation Learning for Short Text Clustering (09/21/2021)
Effective representation learning is critical for short text clustering ...

Graphine: A Dataset for Graph-aware Terminology Definition Generation (09/09/2021)
Precisely defining the terminology is the first step in scientific commu...

Domain-oriented Language Pre-training with Adaptive Hybrid Masking and Optimal Transport Alignment (12/01/2021)
Motivated by the success of pre-trained language models such as BERT in ...