Language Generation with Multi-Hop Reasoning on Commonsense Knowledge Graph

by Haozhe Ji et al.

Despite the success of generative pre-trained language models on a series of text generation tasks, they still struggle in cases where reasoning over underlying commonsense knowledge is required during generation. Existing approaches that integrate commonsense knowledge into generative pre-trained language models simply transfer relational knowledge by post-training on individual knowledge triples, ignoring the rich connections within the knowledge graph. We argue that exploiting both the structural and semantic information of the knowledge graph facilitates commonsense-aware text generation. In this paper, we propose Generation with Multi-Hop Reasoning Flow (GRF), which equips pre-trained models with dynamic multi-hop reasoning over multi-relational paths extracted from an external commonsense knowledge graph. We empirically show that our model outperforms existing baselines on three text generation tasks that require reasoning over commonsense knowledge. We also demonstrate the effectiveness of the dynamic multi-hop reasoning module through reasoning paths inferred by the model, which provide rationales for the generation.
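The abstract does not spell out how scores flow along multi-relational paths, so the following is only a toy sketch of multi-hop score propagation over a small commonsense graph. The triples, the per-relation weights, and the decay factor are all assumptions for illustration; they are not the paper's learned relevance scores or its actual reasoning-flow formulation.

```python
# Hypothetical sketch of multi-hop score propagation on a commonsense KG,
# in the spirit of a "reasoning flow": scores start at source concepts
# (e.g., concepts mentioned in the input) and propagate outward along
# relational edges, decaying with each hop.
from collections import defaultdict

# Toy multi-relational KG as (head, relation, tail) triples (assumed data).
TRIPLES = [
    ("accident", "causes", "injury"),
    ("injury", "requires", "treatment"),
    ("treatment", "at_location", "hospital"),
    ("accident", "related_to", "car"),
]

def build_graph(triples):
    graph = defaultdict(list)
    for head, rel, tail in triples:
        graph[head].append((rel, tail))
    return graph

def multi_hop_scores(graph, sources, rel_weight, num_hops=2, decay=0.5):
    """Propagate scores outward from source concepts for `num_hops` hops.

    `rel_weight` maps each relation to a weight, an assumed stand-in for
    a learned triple-relevance score. Each hop multiplies the incoming
    score by `decay` and the relation weight; a node keeps its best score.
    """
    scores = {c: 1.0 for c in sources}
    frontier = dict(scores)
    for _ in range(num_hops):
        next_frontier = {}
        for node, score in frontier.items():
            for rel, tail in graph.get(node, []):
                s = score * decay * rel_weight.get(rel, 0.1)
                if s > next_frontier.get(tail, 0.0):
                    next_frontier[tail] = s
        for node, s in next_frontier.items():
            scores[node] = max(scores.get(node, 0.0), s)
        frontier = next_frontier
    return scores

graph = build_graph(TRIPLES)
scores = multi_hop_scores(
    graph,
    sources=["accident"],
    rel_weight={"causes": 1.0, "requires": 0.8,
                "at_location": 0.6, "related_to": 0.3},
)
# With 2 hops, "hospital" (3 hops away) is never reached, illustrating
# how the hop budget bounds which concepts can inform generation.
```

In a full model, such concept scores would be combined with the language model's vocabulary distribution at each decoding step, biasing generation toward reachable, relevant concepts; the highest-scoring path then serves as a rationale for the output.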
