
JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs

by   Pei Ke, et al.

Existing pre-trained models for knowledge-graph-to-text (KG-to-text) generation simply fine-tune text-to-text pre-trained models such as BART or T5 on KG-to-text datasets. This approach largely ignores the graph structure during encoding and lacks elaborate pre-training tasks to explicitly model graph-text alignment. To tackle these problems, we propose a graph-text joint representation learning model called JointGT. During encoding, we devise a structure-aware semantic aggregation module that is plugged into each Transformer layer to preserve the graph structure. Furthermore, we propose three new pre-training tasks to explicitly enhance graph-text alignment: text reconstruction, graph reconstruction, and graph-text alignment in the embedding space via Optimal Transport. Experiments show that JointGT achieves new state-of-the-art performance on various KG-to-text datasets.
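To give a concrete sense of the Optimal Transport alignment objective mentioned above, here is a minimal sketch of entropy-regularized OT (Sinkhorn iterations) between graph-node embeddings and text-token embeddings. This is an illustration only, not JointGT's actual implementation; the embedding matrices, cost choice (negative cosine similarity), and hyperparameters are all hypothetical.

```python
import numpy as np

def sinkhorn(cost, n_iters=50, eps=0.1):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    Returns a transport plan whose rows correspond to graph nodes and
    whose columns correspond to text tokens, with uniform marginals."""
    n, m = cost.shape
    K = np.exp(-cost / eps)            # Gibbs kernel
    u = np.ones(n) / n                 # uniform marginal over graph nodes
    v = np.ones(m) / m                 # uniform marginal over text tokens
    a, b = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        a = u / (K @ b)                # scale rows to match u
        b = v / (K.T @ a)              # scale columns to match v
    return a[:, None] * K * b[None, :]

# Toy embeddings standing in for graph / text encoder outputs (hypothetical).
rng = np.random.default_rng(0)
graph_emb = rng.normal(size=(4, 8))   # 4 graph nodes, dim 8
text_emb = rng.normal(size=(6, 8))    # 6 text tokens, dim 8

# Cost = 1 - cosine similarity between every node/token pair.
gn = graph_emb / np.linalg.norm(graph_emb, axis=1, keepdims=True)
tn = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
cost = 1.0 - gn @ tn.T

plan = sinkhorn(cost)
ot_distance = (plan * cost).sum()     # scalar OT alignment loss
```

Minimizing such an OT distance pulls the two embedding spaces together; in JointGT this idea serves as one of the three pre-training objectives alongside text and graph reconstruction.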

Graph Pre-training for AMR Parsing and Generation

Abstract meaning representation (AMR) highlights the core semantic infor...

Self-supervised Graph Masking Pre-training for Graph-to-Text Generation

Large-scale pre-trained language models (PLMs) have advanced Graph-to-Te...

GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation

Recent improvements in KG-to-text generation are due to additional auxil...

Graphine: A Dataset for Graph-aware Terminology Definition Generation

Precisely defining the terminology is the first step in scientific commu...

Representation Learning for Short Text Clustering

Effective representation learning is critical for short text clustering ...

Knowledge Graph Empowered Entity Description Generation

Existing works on KG-to-text generation take as input a few RDF triples ...

Graph-to-Text Generation with Dynamic Structure Pruning

Most graph-to-text works are built on the encoder-decoder framework with...