Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning

04/11/2022
by Swarnadeep Saha, et al.

Pre-trained sequence-to-sequence language models have led to widespread success in many natural language generation tasks. However, there has been relatively little work on analyzing their ability to generate structured outputs such as graphs. Unlike natural language, graphs have distinct structural and semantic properties in the context of a downstream NLP task; e.g., requiring a generated graph to be connected and acyclic is a structural constraint, while the semantics of a graph concern how meaningfully an edge represents the relation between two node concepts. In this work, we study pre-trained language models that generate explanation graphs in an end-to-end manner and analyze their ability to learn the structural constraints and semantics of such graphs. We first show that with limited supervision, pre-trained language models often generate graphs that either violate these constraints or are semantically incoherent. Since curating large amounts of human-annotated graphs is expensive and tedious, we propose simple yet effective graph perturbations via node and edge edit operations that yield structurally and semantically positive and negative graphs. Next, we leverage these graphs in different contrastive learning models with Max-Margin and InfoNCE losses. Our methods lead to significant improvements in both the structural and semantic accuracy of explanation graphs and also generalize to other similar graph generation tasks. Lastly, we show that human errors are the best negatives for contrastive learning, and that automatically generating more such human-like negative graphs can lead to further improvements. Our code and models are publicly available at https://github.com/swarnaHub/ExplagraphGen
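
The abstract mentions two key ingredients: graph perturbations (node and edge edits that produce structurally or semantically negative graphs) and contrastive training with Max-Margin and InfoNCE losses. The authors' actual implementation lives in the linked repository; the sketch below is only a rough illustration of these ideas, limited to structural negatives obtained by edge deletion. The function names, the edge-list graph representation, and the use of per-graph model scores (e.g., length-normalized log-likelihoods) are assumptions made for this example, not details taken from the paper.

```python
import random
from collections import defaultdict

import torch
import torch.nn.functional as F


def violates_structure(edges, num_nodes):
    """Return True if the graph is disconnected or contains a cycle,
    i.e., it breaks the structural constraints described in the abstract.
    Nodes are assumed to be integer ids 0..num_nodes-1; edges are (u, v) pairs."""
    # Connectivity check on the undirected version of the graph.
    undirected = defaultdict(set)
    for u, v in edges:
        undirected[u].add(v)
        undirected[v].add(u)
    seen, stack = set(), [0]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(undirected[node] - seen)
    if len(seen) < num_nodes:
        return True  # disconnected

    # Cycle check via depth-first search on the directed graph.
    children = defaultdict(list)
    for u, v in edges:
        children[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * num_nodes

    def has_cycle(node):
        color[node] = GRAY
        for nxt in children[node]:
            if color[nxt] == GRAY or (color[nxt] == WHITE and has_cycle(nxt)):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and has_cycle(n) for n in range(num_nodes))


def make_structural_negative(edges, num_nodes, max_tries=20):
    """Perturb a gold graph by deleting one random edge and keep the result
    only if it now violates a structural constraint (a negative example)."""
    for _ in range(max_tries):
        perturbed = list(edges)
        perturbed.pop(random.randrange(len(perturbed)))
        if violates_structure(perturbed, num_nodes):
            return perturbed
    return None  # no structurally negative perturbation found


def max_margin_loss(pos_scores, neg_scores, margin=1.0):
    """Max-margin contrastive loss over per-graph model scores.
    pos_scores, neg_scores: tensors of shape [batch]; the gold graph's score
    should exceed the negative graph's score by at least `margin`."""
    return F.relu(margin - pos_scores + neg_scores).mean()


def info_nce_loss(pos_scores, neg_scores, temperature=0.1):
    """InfoNCE-style loss treating the gold graph as the positive class among
    sampled negatives. pos_scores: [batch]; neg_scores: [batch, num_negatives]."""
    logits = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1) / temperature
    targets = torch.zeros(pos_scores.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)
```

In this sketch, an edge deletion that disconnects the graph (or any edit that introduces a cycle) yields a structural negative; the contrastive losses then push the model to score the gold graph above such negatives. Semantic negatives, node-level edits, and the human-error negatives discussed in the abstract are not shown here.
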
